Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k) |
---|---|---|
400 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unstructured Mesh Tallies with CAD Geometry in OpenMC
In the first notebook on this topic, we looked at how to set up a tally using an unstructured mesh in OpenMC.
In this notebook, we will explore using an unstructured mesh in conjunction with CAD-based geometry to perform detailed analysis on a complex geometry.
NOTE
Step2: We'll need to download our DAGMC geometry and unstructured mesh files. We'll be retrieving those using the function and URLs below.
Step3: The model we'll be looking at in this example is a steel piping manifold
Step4: This is a nice example of a model which would be extremely difficult to model using CSG. To get started, we'll need two files
Step5: Now let's download the geometry and mesh files.
(This may take some time.)
Step6: Next we'll create a 5 MeV neutron point source at the entrance of the single pipe on the low side of the model, along with the run settings.
Step7: And we'll indicate that we're using a CAD-based geometry.
Step8: We'll run a few particles through this geometry to make sure everything is working properly.
Step9: Now let's set up the unstructured mesh tally. We'll do this the same way we did in the previous notebook.
Step10: Again we should see that tally_1.200.vtk file which we can use to visualize our results in VisIt, ParaView, or another tool of your choice that supports VTK files.
Step11: For the purpose of this example, we haven't run enough particles to score in all of the tet elements, but we indeed see larger flux values near the source location at the bottom of the model.
Visualization with statepoint data
It was mentioned in the previous unstructured mesh example that the centroids and volumes of elements are written to the state point file. Here, we'll explore how to use that information to produce point cloud information for visualization of this data.
This is particularly important when combining an unstructured mesh tally with other filters, as a .vtk file will not automatically be written with the statepoint file in that scenario. To demonstrate this, let's set up a tally similar to the one above, but add an energy filter and re-run the model.
Step12: Notice the warning at the end of the output above indicating that the .vtk file we used before isn't written in this case.
Let's open up this statepoint file and get the information we need to create the point cloud data instead.
NOTE
Step13: We should now see our new flux file in the directory. It can be used to visualize the results in the same way as our other .vtk files. | Python Code:
import os
from IPython.display import Image
import openmc
import openmc.lib
assert(openmc.lib._dagmc_enabled())
Explanation: Unstructured Mesh Tallies with CAD Geometry in OpenMC
In the first notebook on this topic, we looked at how to set up a tally using an unstructured mesh in OpenMC.
In this notebook, we will explore using an unstructured mesh in conjunction with CAD-based geometry to perform detailed analysis on a complex geometry.
NOTE: This notebook will not run successfully if OpenMC has not been built with DAGMC support enabled.
End of explanation
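# Optional (a sketch): the assert in the cell above simply fails if DAGMC support is missing.
# The same openmc.lib._dagmc_enabled() call can be used to raise a clearer error message instead.
if not openmc.lib._dagmc_enabled():
    raise RuntimeError("This notebook requires an OpenMC build with DAGMC support enabled.")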
from IPython.display import display, clear_output
import urllib.request
manifold_geom_url = 'https://tinyurl.com/rp7grox' # 99 MB
manifold_mesh_url = 'https://tinyurl.com/wojemuh' # 5.4 MB
def download(url, filename):
Helper function for retrieving dagmc models
def progress_hook(count, block_size, total_size):
prog_percent = 100 * count * block_size / total_size
prog_percent = min(100., prog_percent)
clear_output(wait=True)
display('Downloading {}: {:.1f}%'.format(filename, prog_percent))
urllib.request.urlretrieve(url, filename, progress_hook)
Explanation: We'll need to download our DAGMC geometry and unstructured mesh files. We'll be retrieving those using the function and URLs below.
End of explanation
Image("./images/manifold-cad.png", width=800)
Explanation: The model we'll be looking at in this example is a steel piping manifold:
End of explanation
air = openmc.Material(name='air')
air.set_density('g/cc', 0.001205)
air.add_element('N', 0.784431)
air.add_element('O', 0.210748)
air.add_element('Ar',0.0046)
steel = openmc.Material(name='steel')
steel.set_density('g/cc', 8.0)
steel.add_element('Si', 0.010048)
steel.add_element('S', 0.00023)
steel.add_element('Fe', 0.669)
steel.add_element('Ni', 0.12)
steel.add_element('Mo', 0.025)
steel.add_nuclide('P31',0.00023)
steel.add_nuclide('Mn55',0.011014)
materials = openmc.Materials([air, steel])
materials.export_to_xml()
Explanation: This is a nice example of a model which would be extremely difficult to model using CSG. To get started, we'll need two files:
1. the DAGMC geometry file on which we'll track particles and
2. a tetrahedral mesh of the piping structure on which we'll score tallies
To start, let's create the materials we'll need for this problem. The pipes are steel and we'll model the surrounding area as air.
End of explanation
# get the manifold DAGMC geometry file
download(manifold_geom_url, 'dagmc.h5m')
# get the manifold tet mesh
download(manifold_mesh_url, 'manifold.h5m')
Explanation: Now let's download the geometry and mesh files.
(This may take some time.)
End of explanation
src_pnt = openmc.stats.Point(xyz=(0.0, 0.0, 0.0))
src_energy = openmc.stats.Discrete(x=[5.e+06], p=[1.0])
source = openmc.Source(space=src_pnt, energy=src_energy)
settings = openmc.Settings()
settings.source = source
settings.run_mode = "fixed source"
settings.batches = 10
settings.particles = 100
Explanation: Next we'll create a 5 MeV neutron point source at the entrance of the single pipe on the low side of the model, along with the run settings.
End of explanation
settings.dagmc = True
settings.export_to_xml()
Explanation: And we'll indicate that we're using a CAD-based geometry.
End of explanation
openmc.run()
Explanation: We'll run a few particles through this geometry to make sure everything is working properly.
End of explanation
unstructured_mesh = openmc.UnstructuredMesh("manifold.h5m")
mesh_filter = openmc.MeshFilter(unstructured_mesh)
tally = openmc.Tally()
tally.filters = [mesh_filter]
tally.scores = ['flux']
tally.estimator = 'tracklength'
tallies = openmc.Tallies([tally])
tallies.export_to_xml()
settings.batches = 200
settings.particles = 5000
settings.export_to_xml()
openmc.run(output=False)
Explanation: Now let's set up the unstructured mesh tally. We'll do this the same way we did in the previous notebook.
End of explanation
!ls *.vtk
Image("./images/manifold_flux.png", width="800")
Explanation: Again we should see that tally_1.200.vtk file which we can use to visualize our results in VisIt, ParaView, or another tool of your choice that supports VTK files.
End of explanation
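# A minimal sketch for inspecting the tally VTK file from Python instead of VisIt/ParaView.
# It assumes the 'vtk' package is installed and that tally_1.200.vtk is a legacy-format unstructured grid.
import vtk
reader = vtk.vtkUnstructuredGridReader()
reader.SetFileName("tally_1.200.vtk")
reader.ReadAllScalarsOn()
reader.Update()
grid = reader.GetOutput()
print("cells:", grid.GetNumberOfCells(), "cell-data arrays:", grid.GetCellData().GetNumberOfArrays())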
# energy filter with bins from 0 to 1 MeV and 1 MeV to 5 MeV
energy_filter = openmc.EnergyFilter((0.0, 1.e+06, 5.e+06))
tally.filters = [mesh_filter, energy_filter]
print(tally)
print(energy_filter)
tallies.export_to_xml()
!cat tallies.xml
openmc.run(output=False)
Explanation: For the purpose of this example, we haven't run enough particles to score in all of the tet elements, but we indeed see larger flux values near the source location at the bottom of the model.
Visualization with statepoint data
It was mentioned in the previous unstructured mesh example that the centroids and volumes of elements are written to the state point file. Here, we'll explore how to use that information to produce point cloud information for visualization of this data.
This is particularly important when combining an unstructured mesh tally with other filters, as a .vtk file will not automatically be written with the statepoint file in that scenario. To demonstrate this, let's set up a tally similar to the one above, but add an energy filter and re-run the model.
End of explanation
with openmc.StatePoint("statepoint.200.h5") as sp:
tally = sp.tallies[1]
umesh = sp.meshes[1]
centroids = umesh.centroids
mesh_vols = umesh.volumes
thermal_flux = tally.get_values(scores=['flux'],
filters=[openmc.EnergyFilter],
filter_bins=[((0.0, 1.e+06),)])
fast_flux = tally.get_values(scores=['flux'],
filters=[openmc.EnergyFilter],
filter_bins=[((1.e+06, 5.e+06),)])
data_dict = {'Flux 0 - 1 MeV' : thermal_flux,
'Flux 1 - 5 MeV' : fast_flux,
'Total Flux' : thermal_flux + fast_flux}
umesh.write_data_to_vtk("manifold", data_dict)
Explanation: Notice the warning at the end of the output above indicating that the .vtk file we used before isn't written in this case.
Let's open up this statepoint file and get the information we need to create the point cloud data instead.
NOTE: You will need the Python vtk module installed to run this part of the notebook.
End of explanation
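# Quick check (a sketch) that the Python 'vtk' module mentioned in the note above is available.
try:
    import vtk
    print("vtk version:", vtk.VTK_VERSION)
except ImportError:
    print("The 'vtk' package is missing; install it first (e.g. pip install vtk).")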
!ls *.vtk
Image("./images/manifold_pnt_cld.png", width=800)
Explanation: We should now see our new flux file in the directory. It can be used to visualize the results in the same way as our other .vtk files.
End of explanation |
401 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explicit 1D Benchmarks
This file demonstrates how to generate, plot, and output data for 1d benchmarks
Choose from
Step1: Generate the data with noise
Step2: Plot inline and save image
Step3: Output json and csv data | Python Code:
from pypge.benchmarks import explicit
import numpy as np
# visualization libraries
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# plot the visuals in ipython
%matplotlib inline
Explanation: Explicit 1D Benchmarks
This file demonstrates how to generate, plot, and output data for 1d benchmarks
Choose from:
Koza_01
Koza_02
Koza_03
Lipson_01
Lipson_02
Lipson_03
Nguyen_01
Nguyen_02
Nguyen_03
Nguyen_04
Nguyen_05
Nguyen_06
Nguyen_07
Nguyen_08
Imports
End of explanation
# choose your problem here
prob = explicit.Koza_01()
# you can also specify the following params as keyword arguments
#
# params = {
# 'name': "Koza_01",
# 'xs_str': ["x"],
# 'eqn_str': "x**4 + x**3 + x**2 + x",
# 'xs_params': [ (-4.0,4.0) ],
# 'npts': 200,
# 'noise': 0.1
# }
# or make your own with the following
#
# explicit.Explicit_1D(params):
Explanation: Generate the data with noise
End of explanation
print(prob['name'], prob['eqn'])
print(prob['xpts'].shape)
fig = plt.figure()
fig.set_size_inches(16, 12)
plt.plot(prob['xpts'][0], prob['ypure'], 'r.')
plt.legend(loc='center left', bbox_to_anchor=(0.67, 0.12))
plt.title(prob['name'] + " Clean", fontsize=36)
plt.savefig("img/benchmarks/" + prob['name'].lower() + "_clean.png", dpi=200)
# plt.show()
### You can only do one of 'savefig()' or 'show()'
fig = plt.figure()
fig.set_size_inches(16, 12)
plt.plot(prob['xpts'][0], prob['ypts'], 'b.')
plt.legend(loc='center left', bbox_to_anchor=(0.67, 0.12))
plt.title(prob['name'] + " Noisy", fontsize=36)
plt.savefig("img/benchmarks/" + prob['name'].lower() + "_noisy.png", dpi=200)
# plt.show()
Explanation: Plot inline and save image
End of explanation
data = np.array([prob['xpts'][0], prob['ypts']]).T
print(data.shape)
cols = [['x', 'out']]
out_data = cols + data.tolist()
import json
json_out = json.dumps( out_data, indent=4)
# print json_out
f_json = open("data/benchmarks/" + prob['name'].lower() + ".json", 'w')
f_json.write(json_out)
f_json.close()
f_csv = open("data/benchmarks/" + prob['name'].lower() + ".csv", 'w')
for row in out_data:
line = ", ".join([str(col) for col in row]) + "\n"
f_csv.write(line)
f_csv.close()
Explanation: Output json and csv data
End of explanation |
402 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A simple example of multilabel learning
Step1: Data loading
Load yeast dataset.
Step2: Features
Step3: Labels
Step4: Data analysis
Step5: Histogram of #positive labels.
Step8: Dataset creation
Step9: Evaluation
The sigmoid function.
Step11: Loss between a ground truth and a prediction.
Step12: Binary relevance baseline
Train a logistic regression model for each label.
Step13: Result analysis
Coefficient matrix (#Features, #Labels).
Step15: Binary relevance with exponential loss
Train a regression model with exponential loss for each label.
Step16: Check gradient.
Step18: Binary relevance with bipartite ranking
Train a bipartite ranking model for each label.
Step19: Check gradient.
Step22: Ranking loss
Multi-label learning with ranking loss.
Step23: Check gradient
Step27: p-classification loss
Multi-label learning with p-norm push loss.
Step28: Check gradient
Step29: Results for different hyper-parameter configurations
Step30: Top push loss
Methods to compute $v = (\sum_{n=1}^N A_{N \times K})^\top (\sum_{n=1}^N A_{N \times K})$
Step31: Speed test. | Python Code:
%matplotlib inline
import os, sys, time
import pickle as pkl
import numpy as np
import pandas as pd
import sklearn as sk
import cython
import itertools
from scipy.io import arff
from scipy.optimize import minimize
from scipy.optimize import check_grad
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split, cross_val_score
import matplotlib.pyplot as plt
import seaborn as sns
data_dir = 'data'
#yeast_ftrain = os.path.join(data_dir, 'yeast/yeast-train.arff')
#yeast_ftest = os.path.join(data_dir, 'yeast/yeast-test.arff')
#bibtex_ftrain = os.path.join(data_dir, 'bibtex/bibtex-train.arff')
#bibtex_ftest = os.path.join(data_dir, 'bibtex/bibtex-test.arff')
#fbookmarks = os.path.join(data_dir, 'bookmarks/bookmarks.arff')
mm_ftrain = os.path.join(data_dir, 'mediamill/mediamill-train.arff')
mm_ftest = os.path.join(data_dir, 'mediamill/mediamill-test.arff')
SEED = 123456789
Explanation: A simple example of multilabel learning
End of explanation
#data_train, meta_train = arff.loadarff(yeast_ftrain)
#data_train, meta_train = arff.loadarff(open(bibtex_ftrain))
#data_bookmarks = arff.loadarff(open(fbookmarks))
data_train, meta_train = arff.loadarff(mm_ftrain)
#data_test, meta_test = arff.loadarff(yeast_ftest)
#data_test, meta_test = arff.loadarff(bibtex_ftest)
data_test, meta_test = arff.loadarff(mm_ftest)
type(data_train)
print(data_train[0])
len(list(data_train[0]))
len(list(data_train[0])[120:])
len(list(data_train[0])[:120])
Explanation: Data loading
Load yeast dataset.
End of explanation
#nFeatures = np.array(list(data_train[0])[:-14], dtype=np.float).shape[0]
nFeatures = np.array(list(data_train[0])[:120], dtype=np.float).shape[0]
print('#features:', nFeatures)
#np.array(list(data_train[0])[:-14], dtype=np.float)
Explanation: Features
End of explanation
#nLabels = np.array(list(data_train[0])[-14:], dtype=np.int).shape[0]
nLabels = np.array(list(data_train[0])[120:], dtype=np.int).shape[0]
print('#labels:', nLabels)
#np.array(list(data_train[0])[-14:], dtype=np.int)
Explanation: Labels
End of explanation
print('#training examples:', len(data_train))
print('#test examples:', len(data_test))
Explanation: Data analysis
End of explanation
#nPositives = [np.sum(np.array(list(data_train[ix])[-14:], dtype=np.int)) for ix in range(len(data_train))]
nPositives = [np.sum(np.array(list(data_train[ix])[120:], dtype=np.int)) for ix in range(len(data_train))]
pd.Series(nPositives).hist(bins=10)
Explanation: Histogram of #positive labels.
End of explanation
def create_dataset(label_ix, data):
Create the labelled dataset for a given label index
Input:
- label_ix: label index, number in { 0, ..., # labels }
- data: original data with features + labels
Output:
- (Feature, Label) pair (X, y)
X comprises the features for each example
y comprises the labels of the corresponding example
assert(label_ix >= 0)
assert(label_ix < nLabels)
N = len(data)
d = nFeatures
#magic = -14
magic = 120
X = np.zeros((N, d), dtype = np.float)
y = np.zeros(N, dtype = np.int)
for i in range(N):
X[i, :] = list(data[i])[:magic]
y[i] = list(data[i])[magic:][label_ix]
return X, y
def create_dataset_v2(data):
Create the labelled dataset for a given label index
Input:
- data: original data with features + labels
Output:
- (Feature, Label) pair (X, y)
X comprises the features for each example
Y comprises the labels of the corresponding example
N = len(data)
D = nFeatures
L = nLabels
#magic = -14
magic = 120
X = np.zeros((N, D), dtype = np.float)
Y = np.zeros((N, L), dtype = np.int)
for i in range(N):
X[i, :] = list(data[i])[:magic]
Y[i, :] = list(data[i])[magic:]
return X, Y
Explanation: Dataset creation
End of explanation
def sigmoid(x):
return 1.0 / (1.0 + np.exp(-x))
Explanation: Evaluation
The sigmoid function.
End of explanation
def evalPred(truth, pred, lossType = 'Hamming'):
Compute loss given ground truth and prediction
Input:
- truth: binary array of true labels
- pred: real-valued array of predictions
- lossType: can be subset 0-1, Hamming, ranking, and Precision@K where K = # positive labels.
assert(len(truth) == len(pred))
L = len(truth)
nPos = np.sum(truth)
predBin = np.array((pred > 0), dtype=np.int)
if lossType == 'Subset01':
return 1 - int(np.all(truth == predBin))
elif lossType == 'Hamming':
return np.sum(truth != predBin) / L
elif lossType == 'Ranking':
loss = 0
for i in range(L-1):
for j in range(i+1, L):
if truth[i] > truth[j]:
if pred[i] < pred[j]:
loss += 1
if pred[i] == pred[j]:
loss += 0.5
#return loss / (nPos * (L-nPos))
return loss
elif lossType == 'Precision@K':
# sorted indices of the labels most likely to be +'ve
idx = np.argsort(pred)[::-1]
# true labels according to the sorted order
y = truth[idx]
# fraction of +'ves in the top K predictions
return np.mean(y[:nPos])if nPos > 0 else 0
elif lossType == 'Precision@3':
# sorted indices of the labels most likely to be +'ve
idx = np.argsort(pred)[::-1]
# true labels according to the sorted order
y = truth[idx]
# fraction of +'ves in the top K predictions
return np.mean(y[:3])
elif lossType == 'Precision@5':
# sorted indices of the labels most likely to be +'ve
idx = np.argsort(pred)[::-1]
# true labels according to the sorted order
y = truth[idx]
# fraction of +'ves in the top K predictions
return np.mean(y[:5])
else:
assert(False)
def avgPrecisionK(allTruths, allPreds):
losses = []
lossType = 'Precision@K'
for i in range(allPreds.shape[0]):
pred = allPreds[i, :]
truth = allTruths[i, :]
losses.append(evalPred(truth, pred, lossType))
return np.mean(losses)
def printEvaluation(allTruths, allPreds):
N = allTruths.shape[0]
print(N)
for lossType in ['Precision@K']:
# ['Subset01', 'Hamming', 'Ranking', 'Precision@K', 'Precision@3', 'Precision@5']:
losses = [ ]
for i in range(allPreds.shape[0]):
pred = allPreds[i, :]
truth = allTruths[i, :]
losses.append(evalPred(truth, pred, lossType))
#print(allPreds[i])
#print(pred)
#print(truth)
#break
#print('%24s: %1.4f' % ('Average %s Loss' % lossType, np.mean(losses)))
print('%s: %1.4f, %.3f' % ('Average %s' % lossType, np.mean(losses), np.std(losses) / np.sqrt(N)))
#plt.hist(aucs, bins = 10);
Explanation: Loss between a ground truth and a prediction.
End of explanation
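# A small usage sketch of evalPred on toy data, to make the loss conventions above concrete.
_truth = np.array([1, 0, 0, 1, 0])
_pred = np.array([2.3, -0.7, 0.1, 1.5, -1.2])
for _lossType in ['Subset01', 'Hamming', 'Ranking', 'Precision@K']:
    print(_lossType, evalPred(_truth, _pred, lossType=_lossType))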
classifiers = [ LogisticRegression(class_weight = 'balanced', C = 10**0) for i in range(nLabels) ]
#classifiers = [ LogisticRegression(class_weight = 'balanced', C = 10) for i in range(nLabels) ]
allPreds_train = [ ]
allPreds_test = [ ]
allTruths_train = [ ]
allTruths_test = [ ]
coefMat = [ ]
labelIndices = [ ]
for label_ix in range(nLabels):
print('Training for Label %d' % (label_ix+1))
X_train, y_train = create_dataset(label_ix, data = data_train)
X_test, y_test = create_dataset(label_ix, data = data_test)
allTruths_train.append(y_train)
allTruths_test.append(y_test)
assert( (not np.all(y_train == 0)) and (not np.all(y_train == 1)) )
classifiers[label_ix].fit(X_train, y_train)
allPreds_train.append(classifiers[label_ix].decision_function(X_train))
allPreds_test.append(classifiers[label_ix].decision_function(X_test))
coefMat.append(classifiers[label_ix].coef_.reshape(-1))
#labelIndices.append(label_ix)
#print(classifiers[label_ix].coef_)
#print(classifiers[label_ix].intercept_)
allTruths_train = np.array(allTruths_train).T
allTruths_test = np.array(allTruths_test).T
allPreds_train = np.array(allPreds_train).T
allPreds_test = np.array(allPreds_test).T
print(allPreds_test.shape)
print(allTruths_test.shape)
#allPreds[0]
print('Training set:')
printEvaluation(allTruths_train, allPreds_train)
print('Test set:')
printEvaluation(allTruths_test, allPreds_test)
Explanation: Binary relevance baseline
Train a logistic regression model for each label.
End of explanation
coefMat = np.array(coefMat).T
coefMat.shape
#sns.heatmap(coefMat[:, :30])
Explanation: Result analysis
Coefficient matrix (#Features, #Labels).
End of explanation
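# Optional visualization sketch: a heatmap of part of the coefficient matrix (features x labels),
# expanding on the seaborn call commented out above; adjust the column slice as needed.
plt.figure(figsize=(10, 6))
sns.heatmap(coefMat[:, :min(30, coefMat.shape[1])], cmap='coolwarm', center=0)
plt.xlabel('label index')
plt.ylabel('feature index')
plt.show()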
def obj_exp(w, X, y, C):
Objective with L2 regularisation and exponential loss
Input:
- w: current weight vector
- X: feature matrix, N x D
- y: label vector, N x 1
- C: regularisation constant
assert(len(y) == X.shape[0])
assert(len(w) == X.shape[1])
assert(C >= 0)
N, D = X.shape
J = 0.0 # cost
g = np.zeros_like(w) # gradient
for n in range(N):
x = X[n, :]
prod = np.dot(w, x)
# negative label
if y[n] == 0:
t1 = np.exp(prod)
J += t1
g = g + t1 * x
# positive label
else:
t2 = np.exp(-prod)
J += t2
g = g - t2 * x
J = 0.5 * C * np.dot(w, w) + J / N
g = C * w + g / N
return (J, g)
Explanation: Binary relevance with exponential loss
Train a regression model with exponential loss for each label.
End of explanation
#%%script false
X_train_, y_train_ = create_dataset(3, data = data_train)
w0 = np.random.rand(X_train_.shape[1])
C = 1
check_grad(lambda w: obj_exp(w, X_train_, y_train_, C)[0], \
lambda w: obj_exp(w, X_train_, y_train_, C)[1], w0)
params = [ ]
allPreds_train = [ ]
allPreds_test = [ ]
allTruths_train = [ ]
allTruths_test = [ ]
np.random.seed(SEED)
C = 1
for label_ix in range(nLabels):
#sys.stdout.write('\r%d / %d' % (label_ix + 1, nLabels))
#sys.stdout.flush()
print('\r%d / %d ' % (label_ix + 1, nLabels))
X_train, y_train = create_dataset(label_ix, data = data_train)
X_test, y_test = create_dataset(label_ix, data = data_test)
allTruths_train.append(y_train)
allTruths_test.append(y_test)
assert( (not np.all(y_train == 0)) and (not np.all(y_train == 1)) )
opt_method = 'BFGS' #'Newton-CG'
#opt_method = 'nelder-mead'
options = {'disp': True}
w0 = np.random.rand(X_train.shape[1]) # initial guess
opt = minimize(obj_exp, w0, args=(X_train, y_train, C), method=opt_method, jac=True, options=options)
if opt.success == True:
w = opt.x
params.append(w)
#allPreds.append(sigmoid(np.dot(X_test, w)))
allPreds_train.append(np.dot(X_train, w))
allPreds_test.append(np.dot(X_test, w))
else:
sys.stderr.write('Optimisation failed, label_ix=%d\n' % label_ix)
w = np.zeros(X_train.shape[1])
params.append(w)
#allPreds_test.append(np.dot(X_test, w))
allTruths_train = np.array(allTruths_train).T
allTruths_test = np.array(allTruths_test).T
allPreds_train = np.array(allPreds_train).T
allPreds_test = np.array(allPreds_test).T
print(allPreds_test.shape)
print(allTruths_test.shape)
#allPreds[0]
print('Training set:')
printEvaluation(allTruths_train, allPreds_train)
print('Test set:')
printEvaluation(allTruths_test, allPreds_test)
Explanation: Check gradient.
End of explanation
#%load_ext Cython
#%%cython -a
import numpy as np
#cimport numpy as np
#cpdef obj_biranking(w, X, y):
def obj_biranking(w, X, y, C):
Objective with L2 regularisation and bipartite ranking loss
Input:
- w: current weight vector
- X: feature matrix, N x D
- y: label vector, N x 1
- C: regularisation constant
assert(len(y) == X.shape[0])
assert(len(w) == X.shape[1])
assert(C >= 0)
#cdef int nPos, nNeg, i, j
#cdef double J, term, denom
nPos = np.sum(y) # num of positive examples
nNeg = len(y) - nPos # num of negative examples
ixPos = np.nonzero(y)[0].tolist() # indices positive examples
ixNeg = list(set(np.arange(len(y))) - set(ixPos)) # indices negative examples
J = 0.0 # cost
g = np.zeros_like(w) # gradient
scorePos = X[ixPos, :].dot(w)[:,np.newaxis] # nPos x 1
scoreNeg = X[ixNeg, :].dot(w)[:,np.newaxis] # nNeg x 1
scoreDif = scorePos - scoreNeg.T # nPos x nNeg
#J = np.mean(np.log(1 + np.exp(-scoreDif)))
J = 0.5 * C * np.dot(w, w) + np.mean(np.log1p(np.exp(-scoreDif)))
A = -1/(1 + np.exp(scoreDif))
T1 = X[ixPos, :].T.dot(A.sum(axis = 1))
T2 = X[ixNeg, :].T.dot(A.sum(axis = 0))
g = C * w + 1/(nPos * nNeg) * (T1 - T2)
return (J, g)
Explanation: Binary relevance with bipartite ranking
Train a bipartite ranking model for each label.
End of explanation
X_train_, y_train_ = create_dataset(6, data = data_train)
#%%script false
w0 = w = np.random.rand(X_train_.shape[1])
C = 1
check_grad(lambda w: obj_biranking(w, X_train_, y_train_, C)[0], \
lambda w: obj_biranking(w, X_train_, y_train_, C)[1], w0)
#1.1331503772158218e-06 * np.sqrt(nLabels)
params = [ ]
allPreds_train = [ ]
allPreds_test = [ ]
allTruths_train = [ ]
allTruths_test = [ ]
np.random.seed(SEED)
C = 1
for label_ix in range(nLabels):
#sys.stdout.write('\r%d / %d' % (label_ix + 1, nLabels))
#sys.stdout.flush()
print('\r%d / %d ' % (label_ix + 1, nLabels))
X_train, y_train = create_dataset(label_ix, data = data_train)
X_test, y_test = create_dataset(label_ix, data = data_test)
allTruths_train.append(y_train)
allTruths_test.append(y_test)
assert( (not np.all(y_train == 0)) and (not np.all(y_train == 1)) )
opt_method = 'BFGS' #'Newton-CG'
#opt_method = 'nelder-mead'
options = {'disp': True}
w0 = np.random.rand(X_train.shape[1]) # initial guess
opt = minimize(obj_biranking, w0, args=(X_train, y_train, C), method=opt_method, jac=True, options=options)
if opt.success == True:
w = opt.x
params.append(w)
#allPreds.append(sigmoid(np.dot(X_test, w)))
allPreds_train.append(np.dot(X_train, w))
allPreds_test.append(np.dot(X_test, w))
else:
sys.stderr.write('Optimisation failed, label_ix=%d\n' % label_ix)
w = np.zeros(X_train.shape[1])
params.append(w)
allPreds_test.append(np.dot(X_test, w))
allTruths_train = np.array(allTruths_train).T
allTruths_test = np.array(allTruths_test).T
allPreds_train = np.array(allPreds_train).T
allPreds_test = np.array(allPreds_test).T
print(allPreds_test.shape)
print(allTruths_test.shape)
#allPreds[0]
print('Training set:')
printEvaluation(allTruths_train, allPreds_train)
print('Test set:')
printEvaluation(allTruths_test, allPreds_test)
Explanation: Check gradient.
End of explanation
def obj_ranking_loop(w, X, Y, C):
Objective with L2 regularisation and ranking loss
Input:
- w: current weight vector, flattened L x D
- X: feature matrix, N x D
- Y: label matrix, N x L
- C: regularisation constant
N, D = X.shape
L = Y.shape[1]
assert(w.shape[0] == L * D)
W = w.reshape(L, D) # reshape weight matrix
J = 0.0 # cost
G = np.zeros_like(W) # gradient matrix
for n in range(N):
Jn = 0.0
Gn = np.zeros_like(W)
x = X[n, :]
y = Y[n, :]
nPos = np.sum(y) # num of positive examples
nNeg = L - nPos # num of negative examples
denom = nPos * nNeg
ixPos = np.nonzero(y)[0].tolist() # indices positive examples
ixNeg = list(set(np.arange(L)) - set(ixPos)) # indices negative examples
for i in ixPos:
for j in ixNeg:
wDiff = W[i, :] - W[j, :]
sDiff = np.dot(wDiff, x)
term = np.exp(sDiff)
Jn += np.log1p(1.0 / term)
Gn[i, :] = Gn[i, :] - x / (1 + term)
#for j in ixNeg:
# for i in ixPos:
# wDiff = W[i, :] - W[j, :]
# sDiff = np.dot(wDiff, x)
# term = np.exp(sDiff)
Gn[j, :] = Gn[j, :] + x / (1 + term)
J += Jn / denom
G = G + Gn / denom
J = 0.5 * C * np.dot(w, w) + J / N
G = C * W + G / N
return (J, G.ravel())
#np.tile([1,2,3], (3,1)) * np.array([0.1, 0.2, 0.3])[:, None]
#np.tile([1,2,3], (3,1)) / np.array([0.1, 0.2, 0.3])[:, None]
#np.tile([1,2,3], (3,1)) * np.array([0.1, 0.2, 0.3])[:,]
#np.tile([1,2,3], (3,1)) / np.array([0.1, 0.2, 0.3])[:,]
def obj_ranking(w, X, Y, C):
Objective with L2 regularisation and ranking loss
Input:
- w: current weight vector, flattened L x D
- X: feature matrix, N x D
- Y: label matrix, N x L
- C: regularisation constant
N, D = X.shape
L = Y.shape[1]
assert(w.shape[0] == L * D)
W = w.reshape(L, D) # reshape weight matrix
J = 0.0 # cost
G = np.zeros_like(W) # gradient matrix
for n in range(N):
Jn = 0.0
Gn = np.zeros_like(W)
x = X[n, :]
y = Y[n, :]
nPos = np.sum(y) # num of positive examples
nNeg = L - nPos # num of negative examples
denom = nPos * nNeg
ixPos = np.nonzero(y)[0].tolist() # indices positive examples
ixNeg = list(set(np.arange(L)) - set(ixPos)) # indices negative examples
ixmat = np.array(list(itertools.product(ixPos, ixNeg))) # shape: ixPos*ixNeg by 2
dW = W[ixmat[:, 0], :] - W[ixmat[:, 1], :]
sVec = np.dot(dW, x)
Jn = np.sum(np.log1p(np.exp(-sVec)))
coeffVec = np.divide(1, 1 + np.exp(sVec))
coeffPos = pd.DataFrame(coeffVec)
coeffPos['gid'] = ixmat[:, 0]
coeffPos = coeffPos.groupby('gid', sort=False).sum()
coeffNeg = pd.DataFrame(coeffVec)
coeffNeg['gid'] = ixmat[:, 1]
coeffNeg = coeffNeg.groupby('gid', sort=False).sum()
#print(coeffPos)
#print(coeffNeg)
coeffs = np.ones(L)
coeffs[ixPos] = -coeffPos.loc[ixPos].values.squeeze()
coeffs[ixNeg] = coeffNeg.loc[ixNeg].values.squeeze()
#print(coeffs)
Gn = np.tile(x, (L, 1)) * coeffs[:, None]
J += Jn / denom
G = G + Gn / denom
J = 0.5 * C * np.dot(w, w) + J / N
G = C * W + G / N
return (J, G.ravel())
X_train, Y_train = create_dataset_v2(data = data_train)
X_test, Y_test = create_dataset_v2(data = data_test)
Explanation: Ranking loss
Multi-label learning with ranking loss.
End of explanation
#%%script false
C = 1
w0 = np.random.rand(nFeatures * nLabels)
check_grad(lambda w: obj_ranking(w, X_train[:10], Y_train[:10], C)[0], \
lambda w: obj_ranking(w, X_train[:10], Y_train[:10], C)[1], w0)
allTruths_train = Y_train
allTruths_test = Y_test
allPreds_train = None
allPreds_test = None
np.random.seed(SEED)
opt_method = 'BFGS' #'Newton-CG'
#opt_method = 'nelder-mead'
options = {'disp': True}
C = 1
w0 = np.random.rand(nFeatures * nLabels) # initial guess
opt = minimize(obj_ranking, w0, args=(X_train, Y_train, C), method=opt_method, jac=True, options=options)
if opt.success == True:
w = opt.x
#allPreds = sigmoid(np.dot(X_test, w.reshape(nLabels, nFeatures).T))
allPreds_train = np.dot(X_train, w.reshape(nLabels, nFeatures).T)
allPreds_test = np.dot(X_test, w.reshape(nLabels, nFeatures).T)
else:
sys.stderr.write('Optimisation failed')
print(allPreds_test.shape)
print(allTruths_test.shape)
print('Training set:')
printEvaluation(allTruths_train, allPreds_train)
print('Test set:')
printEvaluation(allTruths_test, allPreds_test)
Explanation: Check gradient
End of explanation
def obj_pnorm_push_loop(w, X, Y, p, C):
Objective with L2 regularisation and p-norm push loss
Input:
- w: current weight vector, flattened L x D
- X: feature matrix, N x D
- Y: label matrix, N x L
- p: constant for p-norm push loss
- C: regularisation constant
N, D = X.shape
L = Y.shape[1]
assert(w.shape[0] == L * D)
assert(p >= 1)
assert(C >= 0)
W = w.reshape(L, D) # reshape weight matrix
J = 0.0 # cost
G = np.zeros_like(W) # gradient matrix
for n in range(N):
Gn = np.zeros_like(W)
x = X[n, :]
y = Y[n, :]
nPos = np.sum(y) # num of positive examples
nNeg = L - nPos # num of negative examples
for k in range(L):
wk = W[k, :]
term = np.dot(wk, x)
if y[k] == 1:
term2 = np.exp(-term) / nPos
J += term2
Gn[k, :] = -x * term2
else:
term2 = np.exp(p * term) / nNeg
J += term2 / p
Gn[k, :] = x * term2
G = G + Gn
J = 0.5 * C * np.dot(w, w) + J / N
G = C * W + G / N
return (J, G.ravel())
def obj_pnorm_push_loopn(w, X, Y, p, C):
Objective with L2 regularisation and p-norm push loss
Input:
- w: current weight vector, flattened L x D
- X: feature matrix, N x D
- Y: label matrix, N x L
- p: constant for p-norm push loss
- C: regularisation constant
N, D = X.shape
L = Y.shape[1]
assert(w.shape[0] == L * D)
assert(p >= 1)
assert(C >= 0)
W = w.reshape(L, D) # reshape weight matrix
J = 0.0 # cost
G = np.zeros_like(W) # gradient matrix
for n in range(N):
Gn = np.zeros_like(W)
x = X[n, :]
y = Y[n, :]
nPos = np.sum(y) # num of positive examples
nNeg = L - nPos # num of negative examples
ixPos = np.nonzero(y)[0].tolist() # indices positive examples
ixNeg = list(set(np.arange(L)) - set(ixPos)) # indices negative examples
scalingPos = np.exp( -np.dot(W[ixPos, :], x)) / nPos
scalingNeg = np.exp(p * np.dot(W[ixNeg, :], x)) / nNeg
Gn[ixPos, :] = np.tile(-x, (nPos,1)) * scalingPos[:, None] # scaling each row of a matrix
Gn[ixNeg, :] = np.tile( x, (nNeg,1)) * scalingNeg[:, None] # with a different scalar
J += np.sum(scalingPos) + np.sum(scalingNeg) / p
G = G + Gn
J = 0.5 * C * np.dot(w, w) + J / N
G = C * W + G / N
return (J, G.ravel())
def obj_pnorm_push(w, X, Y, p, C):
Objective with L2 regularisation and p-norm push loss
Input:
- w: current weight vector, flattened L x D
- X: feature matrix, N x D
- Y: label matrix, N x L
- p: constant for p-norm push loss
- C: regularisation constant
N, D = X.shape
L = Y.shape[1]
assert(w.shape[0] == L * D)
assert(p >= 1)
assert(C >= 0)
W = w.reshape(L, D) # reshape weight matrix
J = 0.0 # cost
G = np.zeros_like(W) # gradient matrix
for k in range(nLabels):
wk = W[k, :]
Yk = Y[:, k]
sPosVec = np.dot(X[Yk == 1, :], wk) # Nk+ by 1
sNegVec = np.dot(X[Yk == 0, :], wk) # NK- by 1
#nPosVec = np.sum(Y[Yk == 1, :], axis=1) # Nk+ by 1
#nNegVec = np.sum(Y[Yk == 0, :], axis=1) # NK- by 1
nPosVec = np.sum(Y[Yk == 1, :], axis=1) + 0.1 # Nk+ by 1 with smoothing
nNegVec = np.sum(Y[Yk == 0, :], axis=1) + 0.1 # NK- by 1 with smoothing
#nPosVec = np.ones_like(sPosVec) * N
#nNegVec = np.ones_like(sNegVec) * N
lossPos = np.divide(np.exp(-sPosVec), nPosVec) # NK+ by 1
lossNeg = np.divide(np.exp(p * sNegVec), nNegVec) # NK- by 1
J += np.sum(lossPos) + np.sum(lossNeg / p)
GradPos = -X[Yk == 1, :] * lossPos[:, None]
GradNeg = X[Yk == 0, :] * lossNeg[:, None]
G[k, :] = np.sum(GradPos, axis=0) + np.sum(GradNeg, axis=0)
J = 0.5 * C * np.dot(w, w) + J / N
G = C * W + G / N
return (J, G.ravel())
X_train, Y_train = create_dataset_v2(data = data_train)
X_test, Y_test = create_dataset_v2(data = data_test)
Explanation: p-classification loss
Multi-label learning with p-norm push loss.
End of explanation
%%script false
p = 1
C = 1
w0 = np.random.rand(nFeatures * nLabels)
check_grad(lambda w: obj_pnorm_push(w, X_train, Y_train, p, C)[0], \
lambda w: obj_pnorm_push(w, X_train, Y_train, p, C)[1], w0)
allTruths_train = Y_train
allTruths_test = Y_test
allPreds_train = None
allPreds_test = None
np.random.seed(SEED)
p = 1 # [1, 10]
C = 1 # [0, 1]
opt_method = 'BFGS' #'Newton-CG'
#opt_method = 'nelder-mead'
options = {'disp': True}
w0 = np.random.rand(nFeatures * nLabels) # initial guess
opt = minimize(obj_pnorm_push, w0, args=(X_train, Y_train, p, C), method=opt_method, jac=True, options=options)
if opt.success == True:
w = opt.x
allPreds_train = np.dot(X_train, w.reshape(nLabels, nFeatures).T)
allPreds_test = np.dot(X_test, w.reshape(nLabels, nFeatures).T)
else:
sys.stderr.write('Optimisation failed')
print(allPreds_test.shape)
print(allTruths_test.shape)
print('Training set:')
printEvaluation(allTruths_train, allPreds_train)
print('Test set:')
printEvaluation(allTruths_test, allPreds_test)
Explanation: Check gradient
End of explanation
#make_pipeline(preprocessing.StandardScaler(), svm.SVC(C=1))
#cross_val_score(clf, iris.data, iris.target, cv=cv)
#%%script false
precisions_train = dict()
precisions_test = dict()
allTruths_test = Y_test
allTruths_train = Y_train
p_set = [1, 3, 10, 30]
C_set = [0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000]
opt_method = 'BFGS' #'Newton-CG'
for p in p_set:
for C in C_set:
print('-------------------------------------')
print('p in loss: {}, C for regularisation: {}'.format(p, C))
allPreds = None
allPreds_train = None
w0 = np.random.rand(nFeatures * nLabels) # initial guess
opt = minimize(obj_pnorm_push, w0, args=(X_train, Y_train, p, C), method=opt_method, jac=True)
if opt.success == True:
w = opt.x
allPreds_test = np.dot(X_test, w.reshape(nLabels, nFeatures).T)
allPreds_train = np.dot(X_train, w.reshape(nLabels, nFeatures).T)
precisions_train[(p,C)] = avgPrecisionK(allTruths_train, allPreds_train)
precisions_test[(p,C)] = avgPrecisionK(allTruths_test, allPreds_test)
else:
sys.stderr.write('Optimisation failed')
precisions_train[(p,C)] = 0
precisions_test[(p,C)] = 0
print('%20s %.4f' % ('Average Precision@K on training set: ', precisions_train[(p,C)]))
print('%20s %.4f\n' % ('Average Precision@K on test set: ', precisions_test[(p,C)]))
#%%script false
fig = plt.figure(figsize=[10, 8])
ax = plt.subplot(1,1,1)
#colors = itertools.cycle(['r', 'g'])
styles = itertools.cycle(['-', '--', ':', '-.'])
for p in p_set:
ls = styles.__next__()
plt.plot(np.arange(len(C_set)), [precisions_train[(p,C)] for C in C_set], \
ls=ls, c='r', label='p=%d'%p + ', train')
plt.plot(np.arange(len(C_set)), [precisions_test[(p,C)] for C in C_set], \
ls=ls, c='g', label='p=%d'%p + ', test')
plt.plot(np.arange(len(C_set)), [0.5149 for C in C_set], ls='-', c='b', label='Logistic Regression, test')
plt.legend(loc='best')
plt.xticks(np.arange(len(C_set)), C_set, fontsize=10, rotation=0, horizontalalignment='center')
plt.xlabel('Regularisation Constant')
plt.ylabel('Average Precision@K')
plt.title('Performance on Yeast dataset, multi-label learning with p-norm push loss', fontsize=15)
fig.savefig('pnorm.svg')
Explanation: Results for different hyper-parameter configurations
End of explanation
A = np.random.rand(15).reshape(3,5)
A
x = np.sum(A, axis=0)
#x
C = np.dot(A, A.T)
one = np.ones(A.shape[0])
np.dot(x, x)
np.sum(np.sum(C))
np.dot(np.dot(one, C), one)
D = np.dot(A.T, A)
np.sum(np.sum(D))
Explanation: Top push loss
Methods to compute $v = (\sum_{n=1}^N A_{N \times K})^\top (\sum_{n=1}^N A_{N \times K})$:
Let $\mathbf{x} = (\sum_{n=1}^N A_{N \times K})$, then $v = \mathbf{x}^\top \mathbf{x}$
Let $C_{N \times N} = A_{N \times K} A_{N \times K}^\top$, then $v = \sum_{n=1}^N \sum_{m=1}^N C_{n,m}$,
or $v = \mathbf{1}_N^\top C_{N \times N} \mathbf{1}_N$.
NOTE: if $D_{K \times K} = A_{N \times K}^\top A_{N \times K}$, then $v \ne \sum_{k=1}^K \sum_{l=1}^K D_{k,l}$; this can be checked with a simple $A_{2 \times 2} = [A_{11}, A_{12}; A_{21}, A_{22}]$ matrix.
Example:
End of explanation
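# A quick numerical check of the identities above, reusing A, x, C and D from the cell before.
v1 = np.dot(x, x)
v2 = np.sum(np.sum(C))
v3 = np.dot(np.dot(one, C), one)
print(v1, v2, v3, np.sum(np.sum(D)))  # the first three agree; the D-based sum generally does not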
A = np.random.rand(20000000).reshape(10000,2000)
#A = np.random.rand(10000000).reshape(2000,10000)
%%timeit
x = np.sum(A, axis=0)
np.dot(x, x)
%%timeit
C = np.dot(A, A.T)
np.sum(np.sum(C))
%%timeit
C = np.dot(A, A.T)
one = np.ones(A.shape[0])
np.dot(np.dot(one, C), one)
Explanation: Speed test.
End of explanation |
403 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ripple Counter from Toggle Flip-Flops
In this example we create a ripple counter from toggle flip-flops.
We also show how to define new Magma Circuits
and introduce generators.
Step1: In the last example, we defined a function that created a
toggle flip-flop (TFF) from a DFF and an XOR gate.
Let's convert the TFF to a Circuit.
In Magma a Circuit is equivalent to a verilog module.
Circuits can be instanced and then wired to other circuits.
m.ClockIO() appends Magma's standard clock interface ports to the interface. When no parameters are specified, this just adds the port CLK with type In(Clock).
Step2: Let's inspect the interface to see the result of appending m.ClockIO().
Step3: Now we'll define a generator for our RippleCounter that accepts a single argument width. A generator in magma is a subclass of m.Generator that defines a static method generate which returns Magma Circuit.
Step4: Now we can generate a 4-bit RippleCounter by calling the generate function directly.
Step5: Let's test our circuit using fault. Magma's Python simulator does not support asynchronous logic, so we'll use verilator.
Step6: We can also look at the generated verilog | Python Code:
import magma as m
Explanation: Ripple Counter from Toggle Flip-Flops
In this example we create a ripple counter from toggle flip-flops.
We also show how to define new Magma Circuits
and introduce generators.
End of explanation
from mantle import DFF
class TFF(m.Circuit):
io = m.IO(O=m.Out(m.Bit)) + m.ClockIO()
ff = DFF()
m.wire( ff(~ff.O), io.O )
Explanation: In the last example, we defined a function that created a
toggle flip-flop (TFF) from a DFF and an XOR gate.
Let's convert the TFF to a Circuit.
In Magma a Circuit is equivalent to a verilog module.
Circuits can be instanced and then wired to other circuits.
m.ClockIO() appends Magma's standard clock interface ports to the interface. When no parameters are specified, this just adds the port CLK with type In(Clock).
End of explanation
print(TFF)
Explanation: Let's inspect the interface to see the result of appending m.ClockIO().
End of explanation
class RippleCounter(m.Generator):
@staticmethod
def generate(width: int):
class _RippleCounter(m.Circuit):
name = f'Ripple{width}'
io = m.IO(O=m.Out(m.Bits[width])) + m.ClockIO()
tffs = [TFF(name=f"tff{i}") for i in range(width)]
O = io.CLK
for i in range(width):
m.wire(m.clock(O), tffs[i].CLK)
O = tffs[i].O
m.wire(O, io.O[i])
return _RippleCounter
Explanation: Now we'll define a generator for our RippleCounter that accepts a single argument width. A generator in magma is a subclass of m.Generator that defines a static method generate which returns Magma Circuit.
End of explanation
Ripple4 = RippleCounter.generate(4)
print(repr(Ripple4))
Explanation: Now we can generate a 4-bit RippleCounter by calling the generate function directly.
End of explanation
import fault
tester = fault.Tester(Ripple4, Ripple4.CLK)
for i in range(1 << 4):
tester.step(2)
tester.print("O=%x\n", Ripple4.O)
tester.compile_and_run(target="verilator", disp_type="realtime")
Explanation: Let's test our circuit using fault. Magma's Python simulator does not support asynchronous logic, so we'll use verilator.
End of explanation
m.compile("build/ripple", Ripple4, inline=True)
%%bash
cat build/ripple.v
Explanation: We can also look at the generated verilog
End of explanation |
404 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tracking Parameters and Metrics for Vertex AI Custom Training Jobs
Learning objectives
In this notebook, you learn how to
Step1: Please ignore the incompatibility errors.
Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project ID here.
Step4: Set gcloud config to your project ID.
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step6: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Import libraries and define constants
Import required libraries.
Step10: Initialize Vertex AI and set an experiment
Define experiment name.
Step11: If EXPERIMENT_NAME is not set, set a default one below
Step12: Initialize the client for Vertex AI.
Step13: Tracking parameters and metrics in Vertex AI custom training jobs
This example uses the Abalone Dataset. For more information about this dataset please visit
Step14: Create a managed tabular dataset from a CSV
A Managed dataset can be used to create an AutoML model or a custom model.
Step15: Write the training script
Run the following cell to create the training script that is used in the sample custom training job.
Step16: Launch a custom training job and track its training parameters on Vertex AI ML Metadata
Step17: Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins.
Step18: Deploy Model and calculate prediction metrics
Deploy model to Google Cloud. This operation will take 10-20 mins.
Step19: Once the model is deployed, perform online prediction using the abalone_test dataset and calculate prediction metrics.
Prepare the prediction dataset.
Step20: Perform online prediction.
Step21: Calculate and track prediction evaluation metrics.
Step22: Extract all parameters and metrics created during this experiment.
Step23: View data in the Cloud Console
Parameters and metrics can also be viewed in the Cloud Console.
Step24: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
# Install additional packages
! pip3 install -U tensorflow $USER_FLAG
! python3 -m pip install {USER_FLAG} google-cloud-aiplatform --upgrade
! pip3 install scikit-learn {USER_FLAG}
Explanation: Tracking Parameters and Metrics for Vertex AI Custom Training Jobs
Learning objectives
In this notebook, you learn how to:
Track training parameters and prediction metrics for a custom training job.
Extract and perform analysis for all parameters and metrics within an experiment.
Overview
This notebook demonstrates how to track metrics and parameters for Vertex AI custom training jobs, and how to perform detailed analysis using this data.
Dataset
This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook
Install additional packages
Install additional package dependencies not installed in your notebook environment.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Please ignore the incompatibility errors.
Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = "qwiklabs-gcp-03-aaf99941e8b2" # Replace your project ID here
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "qwiklabs-gcp-03-aaf99941e8b2" # Replace your project ID here
Explanation: Otherwise, set your project ID here.
End of explanation
!gcloud config set project $PROJECT_ID
Explanation: Set gcloud config to your project ID.
End of explanation
# Import necessary library and define Timestamp
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
BUCKET_URI = "gs://qwiklabs-gcp-03-aaf99941e8b2" # Replace your bucket name here
REGION = "us-central1" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://qwiklabs-gcp-03-aaf99941e8b2": # Replace your bucket name here
BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
# Create your bucket
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
# Import required libraries
import pandas as pd
from google.cloud import aiplatform
from sklearn.metrics import mean_absolute_error, mean_squared_error
from tensorflow.python.keras.utils import data_utils
Explanation: Import libraries and define constants
Import required libraries.
End of explanation
EXPERIMENT_NAME = "new" # Give your experiment a name of you choice
Explanation: Initialize Vertex AI and set an experiment
Define experiment name.
End of explanation
if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None:
EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP
Explanation: If EXPERIMENT_NAME is not set, set a default one below:
End of explanation
aiplatform.init(
project=PROJECT_ID,
location=REGION,
staging_bucket=BUCKET_URI,
experiment=EXPERIMENT_NAME,
)
Explanation: Initialize the client for Vertex AI.
End of explanation
# Download and copy the CSV file into your bucket
!wget https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv
!gsutil cp abalone_train.csv {BUCKET_URI}/data/
gcs_csv_path = f"{BUCKET_URI}/data/abalone_train.csv"
Explanation: Tracking parameters and metrics in Vertex AI custom training jobs
This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone
End of explanation
# Create a managed tabular dataset
ds = # TODO 1: Your code goes here(display_name="abalone", gcs_source=[gcs_csv_path])
ds.resource_name
Explanation: Create a managed tabular dataset from a CSV
A Managed dataset can be used to create an AutoML model or a custom model.
End of explanation
%%writefile training_script.py
import pandas as pd
import argparse
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--num_units', dest='num_units',
default=64, type=int,
help='Number of unit for first layer.')
args = parser.parse_args()
# uncomment and bump up replica_count for distributed training
# strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# tf.distribute.experimental_set_strategy(strategy)
col_names = ["Length", "Diameter", "Height", "Whole weight", "Shucked weight", "Viscera weight", "Shell weight", "Age"]
target = "Age"
def aip_data_to_dataframe(wild_card_path):
return pd.concat([pd.read_csv(fp.numpy().decode(), names=col_names)
for fp in tf.data.Dataset.list_files([wild_card_path])])
def get_features_and_labels(df):
return df.drop(target, axis=1).values, df[target].values
def data_prep(wild_card_path):
return get_features_and_labels(aip_data_to_dataframe(wild_card_path))
model = tf.keras.Sequential([layers.Dense(args.num_units), layers.Dense(1)])
model.compile(loss='mse', optimizer='adam')
model.fit(*data_prep(os.environ["AIP_TRAINING_DATA_URI"]),
epochs=args.epochs ,
validation_data=data_prep(os.environ["AIP_VALIDATION_DATA_URI"]))
print(model.evaluate(*data_prep(os.environ["AIP_TEST_DATA_URI"])))
# save as Vertex AI Managed model
tf.saved_model.save(model, os.environ["AIP_MODEL_DIR"])
Explanation: Write the training script
Run the following cell to create the training script that is used in the sample custom training job.
End of explanation
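# Optional (a sketch, not part of the lab): the script above reads the AIP_* environment variables
# that Vertex AI normally injects. For a quick local smoke test you could point them at the CSV
# uploaded earlier and run the script with tiny settings; reading gs:// paths locally needs gcsfs.
import os
os.environ["AIP_TRAINING_DATA_URI"] = gcs_csv_path
os.environ["AIP_VALIDATION_DATA_URI"] = gcs_csv_path
os.environ["AIP_TEST_DATA_URI"] = gcs_csv_path
os.environ["AIP_MODEL_DIR"] = f"{BUCKET_URI}/local-smoke-test-model"
# !python3 training_script.py --epochs=1 --num_units=8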
# Define the training parameters
job = aiplatform.CustomTrainingJob(
display_name="train-abalone-dist-1-replica",
script_path="training_script.py",
container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
requirements=["gcsfs==0.7.1"],
model_serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",
)
Explanation: Launch a custom training job and track its training parameters on Vertex AI ML Metadata
End of explanation
aiplatform.start_run("custom-training-run-1") # Change this to your desired run name
parameters = {"epochs": 10, "num_units": 64}
aiplatform.log_params(parameters)
# Launch the training job
model = # TODO 2: Your code goes here(
ds,
replica_count=1,
model_display_name="abalone-model",
args=[f"--epochs={parameters['epochs']}", f"--num_units={parameters['num_units']}"],
)
Explanation: Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins.
End of explanation
# Deploy the model
endpoint = # TODO 3: Your code goes here(machine_type="n1-standard-4")
Explanation: Deploy Model and calculate prediction metrics
Deploy model to Google Cloud. This operation will take 10-20 mins.
End of explanation
def read_data(uri):
dataset_path = data_utils.get_file("abalone_test.data", uri)
col_names = [
"Length",
"Diameter",
"Height",
"Whole weight",
"Shucked weight",
"Viscera weight",
"Shell weight",
"Age",
]
dataset = pd.read_csv(
dataset_path,
names=col_names,
na_values="?",
comment="\t",
sep=",",
skipinitialspace=True,
)
return dataset
def get_features_and_labels(df):
target = "Age"
return df.drop(target, axis=1).values, df[target].values
test_dataset, test_labels = get_features_and_labels(
read_data(
"https://storage.googleapis.com/download.tensorflow.org/data/abalone_test.csv"
)
)
Explanation: Once the model is deployed, perform online prediction using the abalone_test dataset and calculate prediction metrics.
Prepare the prediction dataset.
End of explanation
# Perform online prediction using endpoint
prediction = # TODO 4: Your code goes here(test_dataset.tolist())
prediction
Explanation: Perform online prediction.
End of explanation
mse = mean_squared_error(test_labels, prediction.predictions)
mae = mean_absolute_error(test_labels, prediction.predictions)
aiplatform.log_metrics({"mse": mse, "mae": mae})
Explanation: Calculate and track prediction evaluation metrics.
End of explanation
# Extract all parameters and metrics of the experiment
# TODO 5: Your code goes here
Explanation: Extract all parameters and metrics created during this experiment.
End of explanation
print("Vertex AI Experiments:")
print(
f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}"
)
Explanation: View data in the Cloud Console
Parameters and metrics can also be viewed in the Cloud Console.
End of explanation
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# Delete dataset
ds.delete()
# Delete the training job
job.delete()
# Undeploy model from endpoint
endpoint.undeploy_all()
# Delete the endpoint
endpoint.delete()
# Delete the model
model.delete()
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil -m rm -r $BUCKET_URI
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Vertex AI Dataset
Training Job
Model
Endpoint
Cloud Storage Bucket
End of explanation |
405 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trends Places To BigQuery Via Values
Move using hard coded WOEID values.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter Trends Places To BigQuery Via Values Recipe Parameters
Provide Twitter Credentials.
Provide a comma delimited list of WOEIDs.
Specify BigQuery dataset and table to write API call results to.
Writes: WOEID, Name, Url, Promoted_Content, Query, Tweet_Volume
Step3: 4. Execute Trends Places To BigQuery Via Values
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: Trends Places To BigQuery Via Values
Move using hard coded WOEID values.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_write':'service', # Credentials used for writing data.
'secret':'',
'key':'',
'woeids':[],
'destination_dataset':'',
'destination_table':'',
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter Trends Places To BigQuery Via Values Recipe Parameters
Provide Twitter Credentials.
Provide a comma delimited list of WOEIDs.
Specify BigQuery dataset and table to write API call results to.
Writes: WOEID, Name, Url, Promoted_Content, Query, Tweet_Volume
Note Twitter API is rate limited to 15 requests per 15 minutes. So keep WOEID lists short.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
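For illustration only, a filled-in FIELDS dictionary might look like the sketch below; every value is an assumption (placeholder credentials and hypothetical dataset/table names), and WOEIDs 1 and 23424977 are commonly cited as Worldwide and the United States respectively.
# Hypothetical example values (do not use as-is):
# FIELDS = {
#     'auth_write': 'service',
#     'secret': 'YOUR_TWITTER_API_SECRET',
#     'key': 'YOUR_TWITTER_API_KEY',
#     'woeids': [1, 23424977],
#     'destination_dataset': 'twitter_trends',
#     'destination_table': 'trends_by_place',
# }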
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'twitter':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'secret':{'field':{'name':'secret','kind':'string','order':1,'default':''}},
'key':{'field':{'name':'key','kind':'string','order':2,'default':''}},
'trends':{
'places':{
'single_cell':True,
'values':{'field':{'name':'woeids','kind':'integer_list','order':3,'default':[]}}
}
},
'out':{
'bigquery':{
'dataset':{'field':{'name':'destination_dataset','kind':'string','order':6,'default':''}},
'table':{'field':{'name':'destination_table','kind':'string','order':7,'default':''}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute Trends Places To BigQuery Via Values
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
406 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Processes in Shogun
By Heiko Strathmann - <a href="mailto:[email protected]">[email protected]</a> - <a href="https://github.com/karlnapf">github.com/karlnapf</a> - <a href="http://herrstrathmann.de">herrstrathmann.de</a>
Step1: Some Formal Background (Skip if you just want code examples)
This notebook is about Bayesian regression models with Gaussian Process priors. A Gaussian Process (GP) over real valued functions on some domain $\mathcal{X}$, $f(\mathbf{x}):\mathcal{X} \rightarrow \mathbb{R}$, written as $\mathcal{GP}(m(\mathbf{x}), k(\mathbf{x},\mathbf{x}'))$, defines a distribution over real valued functions.
Step2: Apart from its appealing form, this curve has the nice property of giving rise to analytical solutions to the required integrals. Recall these are given by
$p(y^*|\mathbf{y}, \boldsymbol{\theta})=\int p(\mathbf{y}^*|\mathbf{f})p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta},$
and
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$.
Since all involved elements, the likelihood $p(\mathbf{y}|\mathbf{f})$, the GP prior $p(\mathbf{f}|\boldsymbol{\theta})$ are Gaussian, the same follows for the GP posterior $p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})$, and the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$. Therefore, we just need to sit down with pen and paper to derive the resulting forms of the Gaussian distributions of these objects (see references). Luckily, everything is already implemented in Shogun.
In order to get some intuition about Gaussian Processes in general, let us first have a look at these latent Gaussian variables, which define a probability distribution over real values functions $f(\mathbf{x})
Step3: First, we compute the kernel matrix $\mathbf{C}_\boldsymbol{\theta}$ using the <a href="http
Step4: This matrix, as any kernel or covariance matrix, is positive semi-definite and symmetric. It can be viewed as a similarity matrix. Here, elements on the diagonal (corresponding to $\mathbf{x}=\mathbf{x}'$) have largest similarity. For increasing kernel bandwidth $\tau$, more and more elements are similar. This matrix fully specifies a distribution over functions $f(\mathbf{x})
Step5: Note how the functions are exactly evaluated at the training covariates $\mathbf{x}_i$ which are randomly distributed on the x-axis. Even though these points do not visualise the full functions (we can only evaluate them at a finite number of points, but we connected the points with lines to make it more clear), this reveils that larger values of the kernel bandwidth $\tau$ lead to smoother latent Gaussian functions.
In the above plots all functions are equally possible. That is, the prior of the latent Gaussian variables $\mathbf{f}|\boldsymbol{\theta}$ does not favour any particular function setups. Computing the posterior given our training data, the distribution ober $\mathbf{f}|\mathbf{y},\boldsymbol{\theta}$ then corresponds to restricting the above distribution over functions to those that explain the training data (up to observation noise). We will now use the Shogun class <a href="http
Step6: Note how the above function samples are constrained to go through our training data labels (up to observation noise), as much as their smoothness allows them. In fact, these are already samples from the predictive distribution, which gives a probability for a label $\mathbf{y}^$ for any covariate $\mathbf{x}^$. These distributions are Gaussian (!), nice to look at and extremely useful to understand the GP's underlying model. Let's plot them. We finally use the Shogun class <a href="http
Step7: The question now is: which set of hyper-parameters $\boldsymbol{\theta}={\tau, \gamma, \sigma}$ to take, where $\gamma$ is the kernel scaling and $\sigma$ is the observation noise?
Step8: Now we can output the best parameters and plot the predictive distribution for those.
Step9: Now the predictive distribution is very close to the true data generating process.
Non-Linear, Binary Bayesian Classification
In binary classification, the observed data comes from a space of discrete, binary labels, i.e. $\mathbf{y}\in\mathcal{Y}^n={-1,+1}^n$, which are represented via the Shogun class <a href="http
Step10: Note how the logit function maps any input value to $[0,1]$ in a continuous way. The other plot above is for another classification likelihood implemented in Shogun, the Gaussian CDF function
$p(\mathbf{y}|\mathbf{f})=\prod_{i=1}^n p(y_i|f_i)=\prod_{i=1}^n \Phi(y_i f_i),$
where $\Phi:\mathbb{R}\rightarrow [0,1]$ is the cumulative distribution function (CDF) of the standard Gaussian distribution $\mathcal{N}(0,1)$.
Step11: We will now pass this data into Shogun representation, and use the standard Gaussian kernel (or squared exponential covariance function (<a href="http
Step12: This is already quite nice. The nice thing about Gaussian Processes now is that they are Bayesian, which means that have a full predictive distribution, i.e., we can plot the probability for a point belonging to a class. These can be obtained via the interface of <a href="http
Step13: If you are interested in the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$, for example for the sake of comparing different model parameters $\boldsymbol{\theta}$ (more in model-selection later), it is very easy to compute it via the interface of <a href="http
Step14: This plot clearly shows that there is one kernel width (aka hyper-parameter element $\theta$) for that the marginal likelihood is maximised. If one was interested in the single best parameter, the above concept can be used to learn the best hyper-parameters of the GP. In fact, this is possible in a very efficient way since we have a lot of information about the geometry of the marginal likelihood function, as for example its gradient
Step15: In the above plots, it is quite clear that the maximum of the marginal likelihood corresponds to the best single setting of the parameters. To give some more intuition
Step16: This now gives us a trained Gaussian Process with the best hyper-parameters. In the above setting, this is the CGaussianKernel bandwidth and its scale.
Step17: Note how nicely this predictive distribution matches the data generating distribution. Also note that the best kernel bandwidth is different to the one we saw in the above plot. This is caused by the different kernel scalling that was also learned automatically. The kernel scaling, roughly speaking, corresponds to the sharpness of the changes in the surface of the predictive likelihood. Since we have two hyper-parameters, we can plot the surface of the marginal likelihood as a function of both of them. This is sometimes interesting, for example when this surface has multiple maximum (corresponding to multiple "best" parameter settings), and thus might be useful for analysis. It is expensive however.
Step18: Our found maximum nicely matches the result of the "grid-search". The take home message for this is | Python Code:
%matplotlib inline
# import all shogun classes
from shogun import *
import random
import numpy as np
import matplotlib.pyplot as plt
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from math import exp
Explanation: Gaussian Processes in Shogun
By Heiko Strathmann - <a href="mailto:[email protected]">[email protected]</a> - <a href="https://github.com/karlnapf">github.com/karlnapf</a> - <a href="http://herrstrathmann.de">herrstrathmann.de</a>. Based on the GP framework of the <a href="http://www.google-melange.com/gsoc/project/google/gsoc2013/votjak/8001">Google summer of code 2013 project</a> of Roman Votyakov - <a href="mailto:[email protected]">[email protected]</a> - <a href="https://github.com/votjakovr">github.com/votjakovr</a>, and the <a href="http://www.google-melange.com/gsoc/project/google/gsoc2012/walke434/39001">Google summer of code 2012 project</a> of Jacob Walker - <a href="mailto:[email protected]">[email protected]</a> - <a href="https://github.com/puffin444">github.com/puffin444</a>
This notebook is about <a href="http://en.wikipedia.org/wiki/Bayesian_linear_regression">Bayesian regression</a> and <a href="http://en.wikipedia.org/wiki/Statistical_classification">classification</a> models with <a href="http://en.wikipedia.org/wiki/Gaussian_process">Gaussian Process (GP)</a> priors in Shogun. After providing a semi-formal introduction, we illustrate how to efficiently train them, use them for predictions, and automatically learn parameters.
End of explanation
# plot likelihood for three different noise lebels $\sigma$ (which is not yet squared)
sigmas=np.array([0.5,1,2])
# likelihood instance
lik=GaussianLikelihood()
# A set of labels to consider
lab=RegressionLabels(np.linspace(-4.0,4.0, 200))
# A single 1D Gaussian response function, repeated once for each label
# this avoids doing a loop in python which would be slow
F=np.zeros(lab.get_num_labels())
# plot likelihood for all observations noise levels
plt.figure(figsize=(12, 4))
for sigma in sigmas:
# set observation noise, this is squared internally
lik.set_sigma(sigma)
# compute log-likelihood for all labels
log_liks=lik.get_log_probability_f(lab, F)
# plot likelihood functions, exponentiate since they were computed in log-domain
    plt.plot(lab.get_labels(), np.exp(log_liks))
plt.ylabel("$p(y_i|f_i)$")
plt.xlabel("$y_i$")
plt.title("Regression Likelihoods for different observation noise levels")
_=plt.legend(["sigma=$%.1f$" % sigma for sigma in sigmas])
Explanation: Some Formal Background (Skip if you just want code examples)
This notebook is about Bayesian regression models with Gaussian Process priors. A Gaussian Process (GP) over real valued functions on some domain $\mathcal{X}$, $f(\mathbf{x}):\mathcal{X} \rightarrow \mathbb{R}$, written as
$\mathcal{GP}(m(\mathbf{x}), k(\mathbf{x},\mathbf{x}')),$
defines a distribution over real valued functions with mean value $m(\mathbf{x})=\mathbb{E}[f(\mathbf{x})]$ and inter-function covariance $k(\mathbf{x},\mathbf{x}')=\mathbb{E}[(f(\mathbf{x})-m(\mathbf{x}))(f(\mathbf{x}')-m(\mathbf{x}'))]$. This intuitively means that the function value at any point $\mathbf{x}$, i.e., $f(\mathbf{x})$, is a random variable with mean $m(\mathbf{x})$; if you take the average of infinitely many functions from the Gaussian Process, and evaluate them at $\mathbf{x}$, you will get this value. Similarly, the function values at two different points $\mathbf{x}, \mathbf{x}'$ have covariance $k(\mathbf{x}, \mathbf{x}')$. The formal definition is that a Gaussian Process is a collection of random variables (may be infinite) of which any finite subset have a joint Gaussian distribution.
One can model data with Gaussian Processes via defining a joint distribution over
$n$ data (labels in Shogun) $\mathbf{y}\in \mathcal{Y}^n$, from an $n$-dimensional continuous (regression) or discrete (classification) space. These data correspond to $n$ covariates $\mathbf{x}_i\in\mathcal{X}$ (features in Shogun) from the input space $\mathcal{X}$.
Hyper-parameters $\boldsymbol{\theta}$ which depend on the used model (details follow).
Latent Gaussian variables $\mathbf{f}\in\mathbb{R}^n$, coming from a GP, i.e., they have a joint Gaussian distribution. Every entry $f_i$ corresponds to the GP function $f(\mathbf{x_i})$ evaluated at covariate $\mathbf{x}_i$ for $1\leq i \leq n$.
The joint distribution takes the form
$p(\mathbf{f},\mathbf{y},\theta)=p(\boldsymbol{\theta})p(\mathbf{f}|\boldsymbol{\theta})p(\mathbf{y}|\mathbf{f}),$
where $\mathbf{f}|\boldsymbol{\theta}\sim\mathcal{N}(\mathbf{m}\theta, \mathbf{C}\theta)$ is the joint Gaussian distribution for the GP variables, with mean $\mathbf{m}\boldsymbol{\theta}$ and covariance $\mathbf{C}\theta$. The $(i,j)$-th entry of $\mathbf{C}_\boldsymbol{\theta}$ is given by the covariance or kernel between the $(i,j)$-th covariates $k(\mathbf{x}_i, \mathbf{x}_j)$. Examples for kernel and mean functions are given later in the notebook.
Mean and covariance are both depending on hyper-parameters coming from a prior distribution $\boldsymbol{\theta}\sim p(\boldsymbol{\theta})$. The data itself $\mathbf{y}\in \mathcal{Y}^n$ (no assumptions on $\mathcal{Y}$ for now) is modelled by a likelihood function $p(\mathbf{y}|\mathbf{f})$, which gives the probability of the data $\mathbf{y}$ given a state of the latent Gaussian variables $\mathbf{f}$, i.e. $p(\mathbf{y}|\mathbf{f}):\mathcal{Y}^n\rightarrow [0,1]$.
In order to do inference for a new, unseen covariate $\mathbf{x}^*\in\mathcal{X}$, i.e., predicting its label $y^*\in\mathcal{Y}$ or in particular computing the predictive distribution for that label, we have to integrate over the posterior over the latent Gaussian variables (assume fixed $\boldsymbol{\theta}$ for now, which means you can just ignore the symbol in the following if you want),
$p(y^*|\mathbf{y}, \boldsymbol{\theta})=\int p(\mathbf{y}^*|\mathbf{f})p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}.$
This posterior, $p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})$, can be obtained using standard <a href="http://en.wikipedia.org/wiki/Bayes'_theorem">Bayes-Rule</a> as
$p(\mathbf{f}|\mathbf{y},\boldsymbol{\theta})=\frac{p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})}{p(\mathbf{y}|\boldsymbol{\theta})},$
with the so called evidence or marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$ given as another integral over the prior over the latent Gaussian variables
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$.
In order to solve the above integrals, Shogun offers a variety of approximations. Don't worry, you will not have to deal with these nasty integrals on your own, but everything is hidden within Shogun. Though, if you like to play with these objects, you will be able to compute only parts.
Note that in the above description, we did not make any assumptions on the input space $\mathcal{X}$. As long as you define mean and covariance functions, and a likelihood, your data can have any form you like. Shogun in fact is able to deal with standard <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDenseFeatures.html">dense numerical data</a>, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSparseFeatures.html"> sparse data</a>, and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStringFeatures.html">strings of any type</a>, and many more out of the box. We will provide some examples below.
To gain some intuition how these latent Gaussian variables behave, and how to model data with them, see the regression part of this notebook.
Non-Linear Bayesian Regression
Bayesian regression with Gaussian Processes is among the most fundamental applications of latent Gaussian models. As usual, the observed data come from a continuous space, i.e. $\mathbf{y}\in\mathbb{R}^n$, which is represented in the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CRegressionLabels.html">CRegressionLabels</a>. We assume that these observations come from some distribution $p(\mathbf{y}|\mathbf{f})$ that is based on a fixed state of latent Gaussian response variables $\mathbf{f}\in\mathbb{R}^n$. In fact, we assume that the true model is the latent Gaussian response variable (which defines a distribution over functions), plus some Gaussian observation noise which is modelled by the likelihood as
$p(\mathbf{y}|\mathbf{f})=\mathcal{N}(\mathbf{f},\sigma^2\mathbf{I})$
This simple likelihood is implemented in the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianLikelihood.html">CGaussianLikelihood</a>. It is the well known bell curve. Below, we plot the likelihood as a function of $\mathbf{y}$, for $n=1$.
End of explanation
def generate_regression_toy_data(n=50, n_test=100, x_range=15, x_range_test=20, noise_var=0.4):
# training and test sine wave, test one has more points
X_train = np.random.rand(n)*x_range
X_test = np.linspace(0,x_range_test, 500)
# add noise to training observations
y_test = np.sin(X_test)
y_train = np.sin(X_train)+np.random.randn(n)*noise_var
return X_train, y_train, X_test, y_test
X_train, y_train, X_test, y_test = generate_regression_toy_data()
plt.figure(figsize=(16,4))
plt.plot(X_train, y_train, 'ro')
plt.plot(X_test, y_test)
plt.legend(["Noisy observations", "True model"])
plt.title("One-Dimensional Toy Regression Data")
plt.xlabel("$\mathbf{x}$")
_=plt.ylabel("$\mathbf{y}$")
Explanation: Apart from its appealing form, this curve has the nice property of giving rise to analytical solutions to the required integrals. Recall these are given by
$p(y^*|\mathbf{y}, \boldsymbol{\theta})=\int p(\mathbf{y}^*|\mathbf{f})p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta},$
and
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$.
Since all involved elements, the likelihood $p(\mathbf{y}|\mathbf{f})$, the GP prior $p(\mathbf{f}|\boldsymbol{\theta})$ are Gaussian, the same follows for the GP posterior $p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})$, and the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$. Therefore, we just need to sit down with pen and paper to derive the resulting forms of the Gaussian distributions of these objects (see references). Luckily, everything is already implemented in Shogun.
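For reference (a standard textbook result rather than anything Shogun-specific), with the zero mean function and Gaussian likelihood these integrals can be written in closed form. Denoting by $\mathbf{k}_*$ the vector of kernel evaluations between the training inputs and a test input $\mathbf{x}^*$, the predictive distribution is
$p(y^*|\mathbf{y},\boldsymbol{\theta})=\mathcal{N}\left(\mathbf{k}_*^\top(\mathbf{C}_\boldsymbol{\theta}+\sigma^2\mathbf{I})^{-1}\mathbf{y},\; k(\mathbf{x}^*,\mathbf{x}^*)+\sigma^2-\mathbf{k}_*^\top(\mathbf{C}_\boldsymbol{\theta}+\sigma^2\mathbf{I})^{-1}\mathbf{k}_*\right),$
and the log marginal likelihood is
$\log p(\mathbf{y}|\boldsymbol{\theta})=-\tfrac{1}{2}\mathbf{y}^\top(\mathbf{C}_\boldsymbol{\theta}+\sigma^2\mathbf{I})^{-1}\mathbf{y}-\tfrac{1}{2}\log|\mathbf{C}_\boldsymbol{\theta}+\sigma^2\mathbf{I}|-\tfrac{n}{2}\log 2\pi.$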
In order to get some intuition about Gaussian Processes in general, let us first have a look at these latent Gaussian variables, which define a probability distribution over real valued functions $f(\mathbf{x}):\mathcal{X} \rightarrow \mathbb{R}$, where in the regression case, $\mathcal{X}=\mathbb{R}$.
As mentioned above, the joint distribution of a finite number (say $n$) of variables $\mathbf{f}\in\mathbb{R}^n$ from a Gaussian Process $\mathcal{GP}(m(\mathbf{x}), k(\mathbf{x},\mathbf{x}'))$, takes the form
$\mathbf{f}|\boldsymbol{\theta}\sim\mathcal{N}(\mathbf{m}\theta, \mathbf{C}\theta),$
where $\mathbf{m}\theta$ is the mean function's mean and $\mathbf{C}\theta$ is the pairwise covariance or kernel matrix of the input covariates $\mathbf{x}_i$. This means, we can easily sample function realisations $\mathbf{f}^{(j)}$ from the Gaussian Process, and more important, visualise them.
To this end, let us consider the well-known and often used Gaussian Kernel or squared exponential covariance, which is implemented in the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a> in the parametric form (note that there are other forms in the literature)
$ k(\mathbf{x}, \mathbf{x}')=\exp\left( -\frac{||\mathbf{x}-\mathbf{x}'||_2^2}{\tau}\right),$
where $\tau$ is a hyper-parameter of the kernel. We will also use the constant <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CZeroMean.html">CZeroMean</a> mean function, which is suitable if the data's mean is zero (which can be achieved via removing it).
Let us consider some toy regression data in the form of a sine wave, which is observed at random points with some observations noise.
End of explanation
# bring data into shogun representation (features are 2d-arrays, organised as column vectors)
feats_train=RealFeatures(X_train.reshape(1,len(X_train)))
feats_test=RealFeatures(X_test.reshape(1,len(X_test)))
labels_train=RegressionLabels(y_train)
# compute covariances for different kernel parameters
taus=np.asarray([.1,4.,32.])
Cs=np.zeros(((len(X_train), len(X_train), len(taus))))
for i in range(len(taus)):
# compute unscalled kernel matrix (first parameter is maximum size in memory and not very important)
kernel=GaussianKernel(10, taus[i])
kernel.init(feats_train, feats_train)
Cs[:,:,i]=kernel.get_kernel_matrix()
# plot
plt.figure(figsize=(16,5))
for i in range(len(taus)):
plt.subplot(1,len(taus),i+1)
plt.imshow(Cs[:,:,i], interpolation="nearest")
plt.xlabel("Covariate index")
plt.ylabel("Covariate index")
_=plt.title("tau=%.1f" % taus[i])
Explanation: First, we compute the kernel matrix $\mathbf{C}_\boldsymbol{\theta}$ using the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a> with hyperparameter $\boldsymbol{\theta}={\tau}$ with a few different values. Note that in Gaussian Processes, kernels usually have a scaling parameter. We skip this one for now and cover it later.
End of explanation
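As a quick sanity check (a minimal numpy sketch added here, not part of the original notebook), the same unscaled squared-exponential covariance can be computed directly and compared against the Shogun kernel matrices above, assuming the parametrisation $k(\mathbf{x},\mathbf{x}')=\exp(-||\mathbf{x}-\mathbf{x}'||_2^2/\tau)$ quoted earlier:
# direct numpy evaluation of the squared-exponential kernel for 1D inputs
def sq_exp_kernel_matrix(x, tau):
    sq_dists = (x[:, None] - x[None, :]) ** 2
    return np.exp(-sq_dists / tau)
# if the parametrisation assumption holds, this should match Shogun's matrix:
# np.allclose(sq_exp_kernel_matrix(X_train, taus[0]), Cs[:, :, 0])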
plt.figure(figsize=(16,5))
plt.suptitle("Random Samples from GP prior")
for i in range(len(taus)):
plt.subplot(1,len(taus),i+1)
# sample a bunch of latent functions from the Gaussian Process
# note these vectors are stored row-wise
F=Statistics.sample_from_gaussian(np.zeros(len(X_train)), Cs[:,:,i], 3)
for j in range(len(F)):
# sort points to connect the dots with lines
sorted_idx=X_train.argsort()
plt.plot(X_train[sorted_idx], F[j,sorted_idx], '-', markersize=6)
plt.xlabel("$\mathbf{x}_i$")
plt.ylabel("$f(\mathbf{x}_i)$")
_=plt.title("tau=%.1f" % taus[i])
Explanation: This matrix, as any kernel or covariance matrix, is positive semi-definite and symmetric. It can be viewed as a similarity matrix. Here, elements on the diagonal (corresponding to $\mathbf{x}=\mathbf{x}'$) have largest similarity. For increasing kernel bandwidth $\tau$, more and more elements are similar. This matrix fully specifies a distribution over functions $f(\mathbf{x}):\mathcal{X}\rightarrow\mathbb{R}$ over a finite set of latent Gaussian variables $\mathbf{f}$, which we can sample from and plot. To this end, we use the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStatistics.html">CStatistics</a>, which offers a method to sample from multivariate Gaussians.
End of explanation
plt.figure(figsize=(16,5))
plt.suptitle("Random Samples from GP posterior")
for i in range(len(taus)):
plt.subplot(1,len(taus),i+1)
# create inference method instance with very small observation noise to make
inf=ExactInferenceMethod(GaussianKernel(10, taus[i]), feats_train, ZeroMean(), labels_train, GaussianLikelihood())
C_post=inf.get_posterior_covariance()
m_post=inf.get_posterior_mean()
# sample a bunch of latent functions from the Gaussian Process
# note these vectors are stored row-wise
F=Statistics.sample_from_gaussian(m_post, C_post, 5)
for j in range(len(F)):
# sort points to connect the dots with lines
sorted_idx=sorted(range(len(X_train)),key=lambda x:X_train[x])
plt.plot(X_train[sorted_idx], F[j,sorted_idx], '-', markersize=6)
plt.plot(X_train, y_train, 'r*')
plt.xlabel("$\mathbf{x}_i$")
plt.ylabel("$f(\mathbf{x}_i)$")
_=plt.title("tau=%.1f" % taus[i])
Explanation: Note how the functions are exactly evaluated at the training covariates $\mathbf{x}_i$ which are randomly distributed on the x-axis. Even though these points do not visualise the full functions (we can only evaluate them at a finite number of points, but we connected the points with lines to make it more clear), this reveals that larger values of the kernel bandwidth $\tau$ lead to smoother latent Gaussian functions.
In the above plots all functions are equally possible. That is, the prior of the latent Gaussian variables $\mathbf{f}|\boldsymbol{\theta}$ does not favour any particular function setups. Computing the posterior given our training data, the distribution over $\mathbf{f}|\mathbf{y},\boldsymbol{\theta}$ then corresponds to restricting the above distribution over functions to those that explain the training data (up to observation noise). We will now use the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CExactInferenceMethod.html">CExactInferenceMethod</a> to do exactly this. The class is the general basis of exact GP regression in Shogun. We have to define all parts of the Gaussian Process for the inference method.
End of explanation
# helper function that plots predictive distribution and data
def plot_predictive_regression(X_train, y_train, X_test, y_test, means, variances):
# evaluate predictive distribution in this range of y-values and preallocate predictive distribution
y_values=np.linspace(-3,3)
D=np.zeros((len(y_values), len(X_test)))
# evaluate normal distribution at every prediction point (column)
for j in range(np.shape(D)[1]):
# create gaussian distributio instance, expects mean vector and covariance matrix, reshape
gauss=GaussianDistribution(np.array(means[j]).reshape(1,), np.array(variances[j]).reshape(1,1))
# evaluate predictive distribution for test point, method expects matrix
D[:,j]=np.exp(gauss.log_pdf_multiple(y_values.reshape(1,len(y_values))))
plt.pcolor(X_test,y_values,D)
plt.colorbar()
plt.contour(X_test,y_values,D)
plt.plot(X_test,y_test, 'b', linewidth=3)
plt.plot(X_test,means, 'm--', linewidth=3)
plt.plot(X_train, y_train, 'ro')
plt.legend(["Truth", "Prediction", "Data"])
plt.figure(figsize=(18,10))
plt.suptitle("GP inference for different kernel widths")
for i in range(len(taus)):
plt.subplot(len(taus),1,i+1)
# create GP instance using inference method and train
# use Shogun objects from above
inf.set_kernel(GaussianKernel(10,taus[i]))
gp=GaussianProcessRegression(inf)
gp.train()
# predict labels for all test data (note that this produces the same as the below mean vector)
means = gp.apply(feats_test)
# extract means and variance of predictive distribution for all test points
means = gp.get_mean_vector(feats_test)
variances = gp.get_variance_vector(feats_test)
# note: y_predicted == means
# plot predictive distribution and training data
plot_predictive_regression(X_train, y_train, X_test, y_test, means, variances)
_=plt.title("tau=%.1f" % taus[i])
Explanation: Note how the above function samples are constrained to go through our training data labels (up to observation noise), as much as their smoothness allows them. In fact, these are already samples from the predictive distribution, which gives a probability for a label $\mathbf{y}^*$ for any covariate $\mathbf{x}^*$. These distributions are Gaussian (!), nice to look at and extremely useful to understand the GP's underlying model. Let's plot them. We finally use the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianProcessRegression.html">CGaussianProcessRegression</a> to represent the whole GP under an interface to perform inference with. In addition, we use the helper class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianDistribution.html">CGaussianDistribution</a> to evaluate the log-likelihood for every test point's $\mathbf{x}^*_j$ value $\mathbf{y}_j^*$.
End of explanation
# re-create inference method and GP instance to start from scratch, use other Shogun structures from above
inf = ExactInferenceMethod(GaussianKernel(10, taus[i]), feats_train, ZeroMean(), labels_train, GaussianLikelihood())
gp = GaussianProcessRegression(inf)
# evaluate our inference method for its derivatives
grad = GradientEvaluation(gp, feats_train, labels_train, GradientCriterion(), False)
grad.set_function(inf)
# handles all of the above structures in memory
grad_search = GradientModelSelection(grad)
# search for best parameters and store them
best_combination = grad_search.select_model()
# apply best parameters to GP, train
best_combination.apply_to_machine(gp)
# we have to "cast" objects to the specific kernel interface we used (soon to be easier)
best_width=GaussianKernel.obtain_from_generic(inf.get_kernel()).get_width()
best_scale=inf.get_scale()
best_sigma=GaussianLikelihood.obtain_from_generic(inf.get_model()).get_sigma()
print("Selected tau (kernel bandwidth):", best_width)
print("Selected gamma (kernel scaling):", best_scale)
print("Selected sigma (observation noise):", best_sigma)
Explanation: The question now is: Which set of hyper-parameters $\boldsymbol{\theta}={\tau, \gamma, \sigma}$ to take, where $\gamma$ is the kernel scaling (which we omitted so far), and $\sigma$ is the observation noise (which we left at its default value of one)? The question of model-selection will be handled in a bit more depth in the binary classification case. For now we just show code how to do it as a black box. See below for explanations.
End of explanation
# train gp
gp.train()
# extract means and variance of predictive distribution for all test points
means = gp.get_mean_vector(feats_test)
variances = gp.get_variance_vector(feats_test)
# plot predictive distribution
plt.figure(figsize=(18,5))
plot_predictive_regression(X_train, y_train, X_test, y_test, means, variances)
_=plt.title("Maximum Likelihood II based inference")
Explanation: Now we can output the best parameters and plot the predictive distribution for those.
End of explanation
# two classification likelihoods in Shogun
logit=LogitLikelihood()
probit=ProbitLikelihood()
# A couple of Gaussian response functions, 1-dimensional here
F=np.linspace(-5.0,5.0)
# Single observation label with +1
lab=BinaryLabels(np.array([1.0]))
# compute log-likelihood for all values in F
log_liks_logit=np.zeros(len(F))
log_liks_probit=np.zeros(len(F))
for i in range(len(F)):
# Shogun expects a 1D array for f, not a single number
f=np.array(F[i]).reshape(1,)
log_liks_logit[i]=logit.get_log_probability_f(lab, f)
log_liks_probit[i]=probit.get_log_probability_f(lab, f)
# in fact, loops are slow and Shogun offers a method to compute the likelihood for many f. Much faster!
log_liks_logit=logit.get_log_probability_fmatrix(lab, F.reshape(1,len(F)))
log_liks_probit=probit.get_log_probability_fmatrix(lab, F.reshape(1,len(F)))
# plot the sigmoid functions, note that Shogun computes it in log-domain, so we have to exponentiate
plt.figure(figsize=(12, 4))
plt.plot(F, np.exp(log_liks_logit))
plt.plot(F, np.exp(log_liks_probit))
plt.ylabel("$p(y_i|f_i)$")
plt.xlabel("$f_i$")
plt.title("Classification Likelihoods")
_=plt.legend(["Logit", "Probit"])
Explanation: Now the predictive distribution is very close to the true data generating process.
Non-Linear, Binary Bayesian Classification
In binary classification, the observed data comes from a space of discrete, binary labels, i.e. $\mathbf{y}\in\mathcal{Y}^n={-1,+1}^n$, which are represented via the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CBinaryLabels.html">CBinaryLabels</a>. To model these observations with a GP, we need a likelihood function $p(\mathbf{y}|\mathbf{f})$ that maps a set of such discrete observations to a probability, given a fixed response $\mathbf{f}$ of the Gaussian Process.
In regression, this was straightforward, as we could simply use the response variable $\mathbf{f}$ itself, plus some Gaussian noise, which gave rise to a probability distribution. However, now that the $\mathbf{y}$ are discrete, we cannot do the same thing. We rather need a function that squashes the Gaussian response variable itself to a probability, given some data. This is a common problem in Machine Learning and Statistics and is usually done with some sort of Sigmoid function of the form $\sigma:\mathbb{R}\rightarrow[0,1]$. One popular choice for such a function is the Logit likelihood, given by
$p(\mathbf{y}|\mathbf{f})=\prod_{i=1}^n p(y_i|f_i)=\prod_{i=1}^n \frac{1}{1+\exp(-y_i f_i)}.$
This likelihood is implemented in Shogun under <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLogitLikelihood.html">CLogitLikelihood</a> and using it is sometimes refered to as logistic regression. Using it with GPs results in non-linear Bayesian logistic regression. We can easily use the class to illustrate the sigmoid function for a 1D example and a fixed data point with label $+1$
End of explanation
def generate_classification_toy_data(n_train=100, mean_a=np.asarray([0, 0]), std_dev_a=1.0, mean_b=3, std_dev_b=0.5):
# positive examples are distributed normally
X1 = (np.random.randn(n_train, 2)*std_dev_a+mean_a).T
# negative examples have a "ring"-like form
r = np.random.randn(n_train)*std_dev_b+mean_b
angle = np.random.randn(n_train)*2*np.pi
X2 = np.array([r*np.cos(angle)+mean_a[0], r*np.sin(angle)+mean_a[1]])
# stack positive and negative examples in a single array
X_train = np.hstack((X1,X2))
# label positive examples with +1, negative with -1
y_train = np.zeros(n_train*2)
y_train[:n_train] = 1
y_train[n_train:] = -1
return X_train, y_train
def plot_binary_data(X_train, y_train):
plt.plot(X_train[0, np.argwhere(y_train == 1)], X_train[1, np.argwhere(y_train == 1)], 'ro')
plt.plot(X_train[0, np.argwhere(y_train == -1)], X_train[1, np.argwhere(y_train == -1)], 'bo')
X_train, y_train=generate_classification_toy_data()
plot_binary_data(X_train, y_train)
_=plt.title("2D Toy classification problem")
Explanation: Note how the logit function maps any input value to $[0,1]$ in a continuous way. The other plot above is for another classification likelihood implemented in Shogun, the Gaussian CDF function
$p(\mathbf{y}|\mathbf{f})=\prod_{i=1}^n p(y_i|f_i)=\prod_{i=1}^n \Phi(y_i f_i),$
where $\Phi:\mathbb{R}\rightarrow [0,1]$ is the <a href="http://en.wikipedia.org/wiki/Cumulative_distribution_function">cumulative distribution function</a> (CDF) of the standard Gaussian distribution $\mathcal{N}(0,1)$. It is implemented in the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CProbitLikelihood.html">CProbitLikelihood</a> and using it is referred to as probit regression. While the Gaussian CDF has some convenient properties for integrating over it (and thus allowing some different modelling decisions), it does not really matter what you use in Shogun in most cases. However, for the sake of completeness, it is also plotted above, being very similar to the logit likelihood.
TODO: Show a function squashed through the logit likelihood
Recall that in order to do inference, we need to solve two integrals (in addition to the Bayes rule, see above)
$p(y^*|\mathbf{y}, \boldsymbol{\theta})=\int p(\mathbf{y}^*|\mathbf{f})p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta},$
and
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$.
In classification, the second integral is not available in closed form since it is the convolution of a Gaussian, $p(\mathbf{f}|\boldsymbol{\theta})$, and a non-Gaussian, $p(\mathbf{y}|\mathbf{f})$, distribution. Therefore, we have to rely on approximations in order to compute and integrate over the posterior $p(\mathbf{f}|\mathbf{y},\boldsymbol{\theta})$. Shogun offers various standard methods from the literature to deal with this problem, including the Laplace approximation (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLaplacianInferenceMethod.html">CLaplacianInferenceMethod</a>), Expectation Propagation (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CEPInferenceMethod.html">CEPInferenceMethod</a>) for inference and evaluatiing the marginal likelihood. These two approximations give rise to a Gaussian posterior $p(\mathbf{f}|\mathbf{y},\boldsymbol{\theta})$, which can then be easily computed and integrated over (all this is done by Shogun for you).
While the Laplace approximation is quite fast, EP usually has a better accuracy, in particular if one is not just interested in binary decisions but also in certainty values for these predictions. Go for Laplace if interested in binary decisions, and for EP otherwise.
TODO, add references to inference methods.
We will now give an example on how to do GP inference for binary classification in Shogun on some toy data. For that, we will first define a function to generate a classical non-linear classification problem.
End of explanation
# for building combinations of arrays
from itertools import product
# convert training data into Shogun representation
train_features = RealFeatures(X_train)
train_labels = BinaryLabels(y_train)
# generate all pairs in 2d range of testing data (full space), discretisation resultion is n_test
n_test=50
x1 = np.linspace(X_train[0,:].min()-1, X_train[0,:].max()+1, n_test)
x2 = np.linspace(X_train[1,:].min()-1, X_train[1,:].max()+1, n_test)
X_test = np.asarray(list(product(x1, x2))).T
# convert testing features into Shogun representation
test_features = RealFeatures(X_test)
# create Gaussian kernel with width = 2.0
kernel = GaussianKernel(10, 2)
# create zero mean function
zero_mean = ZeroMean()
# you can easily switch between probit and logit likelihood models
# by uncommenting/commenting the following lines:
# create probit likelihood model
# lik = ProbitLikelihood()
# create logit likelihood model
lik = LogitLikelihood()
# you can easily switch between Laplace and EP approximation by
# uncommenting/commenting the following lines:
# specify Laplace approximation inference method
#inf = LaplacianInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# specify EP approximation inference method
inf = EPInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# EP might not converge, we here allow that without errors
inf.set_fail_on_non_convergence(False)
# create and train GP classifier, which uses Laplace approximation
gp = GaussianProcessClassification(inf)
gp.train()
test_labels=gp.apply(test_features)
# plot data and decision boundary
plot_binary_data(X_train, y_train)
plt.pcolor(x1, x2, test_labels.get_labels().reshape(n_test, n_test))
_=plt.title('Decision boundary')
Explanation: We will now pass this data into Shogun representation, and use the standard Gaussian kernel (or squared exponential covariance function (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a>)) and the Laplace approximation to obtain a decision boundary for the two classes. You can easily exchange different likelihood models and inference methods.
End of explanation
# obtain probabilities for
p_test = gp.get_probabilities(test_features)
# create figure
plt.title('Training data, predictive probability and decision boundary')
# plot training data
plot_binary_data(X_train, y_train)
# plot decision boundary
plt.contour(x1, x2, np.reshape(p_test, (n_test, n_test)), levels=[0.5], colors=('black'))
# plot probabilities
plt.pcolor(x1, x2, p_test.reshape(n_test, n_test))
_=plt.colorbar()
Explanation: This is already quite nice. The nice thing about Gaussian Processes now is that they are Bayesian, which means that we have a full predictive distribution, i.e., we can plot the probability for a point belonging to a class. These can be obtained via the interface of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianProcessClassification.html">CGaussianProcessClassification</a>
End of explanation
# generate some non-negative kernel widths
widths=2**np.linspace(-5,6,20)
# compute marginal likelihood under Laplace apprixmation for every width
# use Shogun objects from above
marginal_likelihoods=np.zeros(len(widths))
for i in range(len(widths)):
# note that GP training is automatically done/updated if a parameter is changed. No need to call train again
kernel.set_width(widths[i])
marginal_likelihoods[i]=-inf.get_negative_log_marginal_likelihood()
# plot marginal likelihoods as a function of kernel width
plt.plot(np.log2(widths), marginal_likelihoods)
plt.title("Log Marginal likelihood for different kernels")
plt.xlabel("Kernel Width in log-scale")
_=plt.ylabel("Log-Marginal Likelihood")
print("Width with largest marginal likelihood:", widths[marginal_likelihoods.argmax()])
Explanation: If you are interested in the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$, for example for the sake of comparing different model parameters $\boldsymbol{\theta}$ (more in model-selection later), it is very easy to compute it via the interface of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CInferenceMethod.html">CInferenceMethod</a>, i.e., every inference method in Shogun can do that. It is even possible to obtain the mean and covariance of the Gaussian approximation to the posterior $p(\mathbf{f}|\mathbf{y})$ using Shogun. In the following, we plot the marginal likelihood under the EP inference method (more accurate approximation) as a one dimensional function of the kernel width.
End of explanation
# again, use Shogun objects from above, but a few extremal widths
widths_subset=np.array([widths[0], widths[marginal_likelihoods.argmax()], widths[len(widths)-1]])
plt.figure(figsize=(18, 5))
for i in range(len(widths_subset)):
plt.subplot(1,len(widths_subset),i+1)
kernel.set_width(widths_subset[i])
# obtain and plot predictive distribution
p_test = gp.get_probabilities(test_features)
title_str="Width=%.2f, " % widths_subset[i]
    if i == 0:
title_str+="too complex, overfitting"
    elif i == 1:
title_str+="just right"
else:
title_str+="too smooth, underfitting"
plt.title(title_str)
plot_binary_data(X_train, y_train)
plt.contour(x1, x2, np.reshape(p_test, (n_test, n_test)), levels=[0.5], colors=('black'))
plt.pcolor(x1, x2, p_test.reshape(n_test, n_test))
_=plt.colorbar()
Explanation: This plot clearly shows that there is one kernel width (aka hyper-parameter element $\theta$) for that the marginal likelihood is maximised. If one was interested in the single best parameter, the above concept can be used to learn the best hyper-parameters of the GP. In fact, this is possible in a very efficient way since we have a lot of information about the geometry of the marginal likelihood function, as for example its gradient: It turns out that for example the above function is smooth and we can use the usual optimisation techniques to find extrema. This is called maximum likelihood II. Let's have a closer look.
Excurs: Model-Selection with Gaussian Processes
First, let us have a look at the predictive distributions of some of the above kernel widths
End of explanation
# re-create inference method and GP instance to start from scratch, use other Shogun structures from above
inf = EPInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# EP might not converge, we here allow that without errors
inf.set_fail_on_non_convergence(False)
gp = GaussianProcessClassification(inf)
# evaluate our inference method for its derivatives
grad = GradientEvaluation(gp, train_features, train_labels, GradientCriterion(), False)
grad.set_function(inf)
# handles all of the above structures in memory
grad_search = GradientModelSelection(grad)
# search for best parameters and store them
best_combination = grad_search.select_model()
# apply best parameters to GP
best_combination.apply_to_machine(gp)
# we have to "cast" objects to the specific kernel interface we used (soon to be easier)
best_width=GaussianKernel.obtain_from_generic(inf.get_kernel()).get_width()
best_scale=inf.get_scale()
print("Selected kernel bandwidth:", best_width)
print("Selected kernel scale:", best_scale)
Explanation: In the above plots, it is quite clear that the maximum of the marginal likelihood corresponds to the best single setting of the parameters. To give some more intuition: The interpretation of the marginal likelihood
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$
is the probability of the data given the model parameters $\boldsymbol{\theta}$. Note that this is averaged over all possible configurations of the latent Gaussian variables $\mathbf{f}|\boldsymbol{\theta}$ given a fixed configuration of parameters. However, since this is probability distribution, it has to integrate to $1$. This means that models that are too complex (and thus being able to explain too many different data configutations) and models that are too simple (and thus not able to explain the current data) give rise to a small marginal likelihood. Only when the model is just complex enough to explain the data well (but not more complex), the marginal likelihood is maximised. This is an implementation of a concept called <a href="http://en.wikipedia.org/wiki/Occam's_razor#Probability_theory_and_statistics">Occam's razor</a>, and is a nice motivation why you should be Bayesian if you can -- overfitting doesn't happen that quickly.
As mentioned before, Shogun is able to automagically learn all of the hyper-parameters $\boldsymbol{\theta}$ using gradient based optimisation on the marginal likelihood (whose derivatives are computed internally). To this is, we use the class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGradientModelSelection.html">CGradientModelSelection</a>. Note that we could also use <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGridSearchModelSelection.html">CGridSearchModelSelection</a> to do a standard grid-search, such as is done for Support Vector Machines. However, this is highly ineffective, in particular when the number of parameters grows. In addition, order to evaluate parameter states, we have to use the classes <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGradientEvaluation.html">CGradientEvaluation</a>, and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GradientCriterion.html">GradientCriterion</a>, which is also much cheaper than the usual <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCrossValidation.html">CCrossValidation</a>, since it just evaluates the gradient of the marginal likelihood rather than performing many training and testing runs. This is another very nice motivation for using Gaussian Processes: optimising parameters is much easier. In the following, we demonstrate how to select all parameters of the used model. In Shogun, parameter configurations (corresponding to $\boldsymbol{\theta}$ are stored in instances of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CParameterCombination.html">CParameterCombination</a>, which can be applied to machines.
This approach is known as maximum likelihood II (the 2 is for the second level, averaging over all possible $\mathbf{f}|\boldsymbol{\theta}$), or evidence maximisation.
End of explanation
# train gp
gp.train()
# visualise predictive distribution
p_test = gp.get_probabilities(test_features)
plot_binary_data(X_train, y_train)
plt.contour(x1, x2, np.reshape(p_test, (n_test, n_test)), levels=[0.5], colors=('black'))
plt.pcolor(x1, x2, p_test.reshape(n_test, n_test))
_=plt.colorbar()
Explanation: This now gives us a trained Gaussian Process with the best hyper-parameters. In the above setting, this is the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a> bandwidth, and its scale (which is stored in the GP itself since Shogun kernels do not support scaling). We can now again visualise the predictive distribution, and also output the best parameters.
End of explanation
# parameter space, increase resolution if you want finer plots, takes long though
resolution=5
widths=2**np.linspace(-4,10,resolution)
scales=2**np.linspace(-5,10,resolution)
# re-create inference method and GP instance to start from scratch, use other Shogun structures from above
inf = EPInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# EP might not converge, we here allow that without errors
inf.set_fail_on_non_convergence(False)
gp = GaussianProcessClassification(inf)
inf.set_tolerance(1e-3)
# compute marginal likelihood for every parameter combination
# use Shogun objects from above
marginal_likelihoods=np.zeros((len(widths), len(scales)))
for i in range(len(widths)):
for j in range(len(scales)):
kernel.set_width(widths[i])
inf.set_scale(scales[j])
marginal_likelihoods[i,j]=-inf.get_negative_log_marginal_likelihood()
# contour plot of marginal likelihood as a function of kernel width and scale
plt.contour(np.log2(widths), np.log2(scales), marginal_likelihoods)
plt.colorbar()
plt.xlabel("Kernel width (log-scale)")
plt.ylabel("Kernel scale (log-scale)")
_=plt.title("Log Marginal Likelihood")
# plot our found best parameters
_=plt.plot([np.log2(best_width)], [np.log2(best_scale)], 'r*', markersize=20)
Explanation: Note how nicely this predictive distribution matches the data generating distribution. Also note that the best kernel bandwidth is different to the one we saw in the above plot. This is caused by the different kernel scaling that was also learned automatically. The kernel scaling, roughly speaking, corresponds to the sharpness of the changes in the surface of the predictive likelihood. Since we have two hyper-parameters, we can plot the surface of the marginal likelihood as a function of both of them. This is sometimes interesting, for example when this surface has multiple maxima (corresponding to multiple "best" parameter settings), and thus might be useful for analysis. It is expensive however.
End of explanation
# for measuring runtime
import time
# simple regression data
X_train, y_train, X_test, y_test = generate_regression_toy_data(n=1000)
# bring data into shogun representation (features are 2d-arrays, organised as column vectors)
feats_train=RealFeatures(X_train.reshape(1,len(X_train)))
feats_test=RealFeatures(X_test.reshape(1,len(X_test)))
labels_train=RegressionLabels(y_train)
# inducing features (here: a random grid over the input space, try out others)
n_inducing=10
#X_inducing=linspace(X_train.min(), X_train.max(), n_inducing)
X_inducing=np.random.rand(n_inducing)*X_train.max()
feats_inducing=RealFeatures(X_inducing.reshape(1,len(X_inducing)))
# create FITC inference method and GP instance
inf = FITCInferenceMethod(GaussianKernel(10, best_width), feats_train, ZeroMean(), labels_train, \
GaussianLikelihood(best_sigma), feats_inducing)
gp = GaussianProcessRegression(inf)
start=time.time()
gp.train()
means = gp.get_mean_vector(feats_test)
variances = gp.get_variance_vector(feats_test)
print("FITC inference took %.2f seconds" % (time.time()-start))
# exact GP
start=time.time()
inf_exact = ExactInferenceMethod(GaussianKernel(10, best_width), feats_train, ZeroMean(), labels_train, \
GaussianLikelihood(best_sigma))
inf_exact.set_scale(best_scale)
gp_exact = GaussianProcessRegression(inf_exact)
gp_exact.train()
means_exact = gp_exact.get_mean_vector(feats_test)
variances_exact = gp_exact.get_variance_vector(feats_test)
print("Exact inference took %.2f seconds" % (time.time()-start))
# comparison plot FITC and exact inference, plot 95% confidence of both predictive distributions
plt.figure(figsize=(18,5))
plt.plot(X_test, y_test, color="black", linewidth=3)
plt.plot(X_test, means, 'r--', linewidth=3)
plt.plot(X_test, means_exact, 'b--', linewidth=3)
plt.plot(X_train, y_train, 'ro')
plt.plot(X_inducing, np.zeros(len(X_inducing)), 'g*', markersize=15)
# tube plot of 95% confidence
error=1.96*np.sqrt(variances)
plt.plot(X_test,means-error, color='red', alpha=0.3, linewidth=3)
plt.fill_between(X_test,means-error,means+error,color='red', alpha=0.3)
error_exact=1.96*np.sqrt(variances_exact)
plt.plot(X_test,means_exact-error_exact, color='blue', alpha=0.3, linewidth=3)
plt.fill_between(X_test,means_exact-error_exact,means_exact+error_exact,color='blue', alpha=0.3)
# plot upper confidence lines later due to legend
plt.plot(X_test,means+error, color='red', alpha=0.3, linewidth=3)
plt.plot(X_test,means_exact+error_exact, color='blue', alpha=0.3, linewidth=3)
plt.legend(["True", "FITC prediction", "Exact prediction", "Data", "Inducing points", "95% FITC", "95% Exact"])
_=plt.title("Comparison FITC and Exact Regression")
Explanation: Our found maximum nicely matches the result of the "grid-search". The take home message for this is: With Gaussian Processes, you neither need to do expensive brute force approaches to find best parameters (but you can use gradient descent), nor do you need to do expensive cross-validation to evaluate your model (but can use the Bayesian concept of maximum likelihood II).
Excursus: Large-Scale Regression
One "problem" with the classical method of Gaussian Process based inference is the computational complexity of $\mathcal{O}(n^3)$, where $n$ is the number of training examples. This is caused by matrix inversion, Cholesky factorization, etc. Up to a few thousand points, this is feasible. You will quickly run into memory and runtime problems for very large problems.
One way of approaching very large problems is called Fully Independent Training Conditional (FITC), which is a low-rank plus diagonal approximation to the exact covariance. The rough idea is to specify a set of $m\ll n$ inducing points and to base all computations on the covariance between training/test points and inducing points only, which intuitively corresponds to summarising the training points around each inducing point. This reduces the computational complexity to $\mathcal{O}(nm^2)$, where again $n$ is the number of training points and $m$ is the number of inducing points. This is quite a significant decrease, in particular if the number of inducing points is much smaller than the number of examples.
The optimal way to specify inducing points is to place them densely and uniformly in the input space. However, this might quickly become infeasible in high dimensions. In this case, a random subset of the training data might be a good idea.
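A minimal sketch of that alternative (reusing X_train, n_inducing and RealFeatures from the code cell above; the permutation-based choice is just one reasonable option):
python
# Pick the inducing inputs as a random subset of the training inputs.
subset_idx = np.random.permutation(len(X_train))[:n_inducing]
X_inducing_subset = X_train[subset_idx]
feats_inducing_subset = RealFeatures(X_inducing_subset.reshape(1, len(X_inducing_subset)))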
In Shogun, the class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CFITCInferenceMethod.html">CFITCInferenceMethod</a> handles inference for regression with the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianLikelihood.html">CGaussianLikelihood</a>. Below, we demonstrate its usage on a toy example and compare it to exact regression. Note that <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGradientModelSelection.html">CGradientModelSelection</a> still works as before. We compare the runtime for inference with both GPs.
First, note that changing the inference method only requires the change of a single line of code
End of explanation |
407 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Características de HoG
En este notebook creará un conjunto de imagenes con caras y no caras mediante las que obtendremos las características de HoG que nos servirán como conjunto de entrenamiento para nuestro clasificador.
Además, estas características serán serializadas para que podamos acceder a ellas las veces que deseemos evitando su procesamiento.
Histogram of Oriented Gradients (HoG)
HoG es una técnica para la extracción de características, desarrollada en el contexto del procesamiento de imagenes, que involucra los siguientes pasos
Step1: Alternativa
Vamos a proporcionar, de manera alternativa a la anterior, nuestro propio conjunto de imagenes.
Step2: 2. Crear un conjunto de entrenamiento de imagenes de no-caras que supongan falsos-positivos
Una vez obtenido nuestro conjunto de positivos, necesitamos obtener un conjunto de imagenes que no tengan caras. Para ello, la técnica que se utiliza en el notebook en el que me estoy basando es subdividir imágenes de mayor tamaño que no contengan caras. Y, así, obtener múltiples imágenes.
Step3: 3. Extraer las características de HoG del conjunto de entrenamiento
Este tercer paso resulta de especial interes, puesto que vamos a obtener las características de HoG sobre las que previamente hemos hablado.
Step4: 4. Serializamos el conjunto de entrenamiento
Simplemente almacenamos los objetos X_train e y_train para, como explicabamos al principio, evitar el recalculo de estas características cada vez que deseemos utilizarlas. | Python Code:
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people()
positive_patches = faces.images
positive_patches.shape
Explanation: HoG features
In this notebook we will build a set of face and non-face images from which we will obtain the HoG features that will serve as the training set for our classifier.
In addition, these features will be serialized so that we can access them whenever we want without having to recompute them.
Histogram of Oriented Gradients (HoG)
HoG is a feature-extraction technique, developed in the context of image processing, that involves the following steps:
Pre-normalize the images, although this can increase the dependence on features that vary with illumination.
Apply two brightness-sensitive filters to the image, one horizontal and one vertical, which gives us information about edges, contours and textures.
Subdivide the image into cells of a fixed size and compute the gradient histogram for each cell.
Normalize the previously computed histograms by comparing each one with its neighbours, thereby removing the effect of illumination on the image.
Build a one-dimensional feature vector from the information in every cell.
1. Create a training set of face images to serve as positives
Scikit provides a varied collection of face images that lets us obtain a training set of positives for our goal -- more than 13,000 faces, to be precise.
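As a quick illustration of what one of these descriptors looks like (a hedged sketch -- skimage's default HoG parameters are assumed, and they differ slightly between versions):
python
from skimage import feature
sample_face = positive_patches[0]
hog_vector = feature.hog(sample_face)  # 1-D descriptor for a single face
print(hog_vector.shape)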
End of explanation
# from skimage import io
# from skimage.color import rgb2gray
# positive_patches = list()
# path = "../imgaug/imgs/"
# for i in range(376):
# for j in range(63):
# image = io.imread(path+str(i)+str(j)+".jpg")
# positive_patches.append(rgb2gray(image))
Explanation: Alternative
As an alternative to the previous approach, we are going to provide our own set of images.
End of explanation
from skimage import feature, color, data, transform
imgs_to_use = ['camera', 'text', 'coins', 'moon',
'page', 'clock', 'immunohistochemistry',
'chelsea', 'coffee', 'hubble_deep_field']
images = [color.rgb2gray(getattr(data, name)())
for name in imgs_to_use]
import numpy as np
from sklearn.feature_extraction.image import PatchExtractor
def extract_patches(img, N, scale=1.0, patch_size=positive_patches[0].shape):
extracted_patch_size = tuple((scale * np.array(patch_size)).astype(int))
extractor = PatchExtractor(patch_size=extracted_patch_size,
max_patches=N, random_state=0)
patches = extractor.transform(img[np.newaxis])
if scale != 1:
patches = np.array([transform.resize(patch, patch_size)
for patch in patches])
return patches
negative_patches = np.vstack([extract_patches(im, 1000, scale)
for im in images for scale in [0.5, 1.0, 2.0]])
negative_patches.shape
# Alternativa
# negative_patches = np.vstack([extract_patches(im, 1000, scale, patch_size=(62,47))
# for im in images for scale in [0.5, 1.0, 2.0]])
# negative_patches.shape
Explanation: 2. Create a training set of non-face images to serve as negatives (potential false positives)
Once we have our set of positives, we need a set of images that do not contain faces. To do this, the technique used in the notebook this one is based on is to subdivide larger images that contain no faces, thereby obtaining many patches.
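A quick sanity check (hedged sketch, assuming matplotlib is available) to eyeball a few of the extracted non-face patches:
python
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 6, figsize=(12, 2))
for ax, patch in zip(axes, negative_patches[:6]):
    ax.imshow(patch, cmap='gray')
    ax.axis('off')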
End of explanation
from itertools import chain
positive_patches = np.array(positive_patches)
print(negative_patches.shape, positive_patches.shape)
X_train = np.array([feature.hog(im)
for im in chain(positive_patches,
negative_patches)])
y_train = np.zeros(X_train.shape[0])
y_train[:positive_patches.shape[0]] = 1
Explanation: 3. Extract the HoG features of the training set
This third step is of particular interest, since we are going to obtain the HoG features discussed above.
End of explanation
import pickle  # module used for serialization
path = '../../rsc/obj/'
X_train_path = path + 'X_train.sav'
y_train_path = path + 'y_train.sav'
pickle.dump(X_train, open(X_train_path, 'wb'))
pickle.dump(y_train, open(y_train_path, 'wb'))
Explanation: 4. Serialize the training set
We simply store the X_train and y_train objects so that, as explained at the beginning, we avoid recomputing these features every time we want to use them.
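Later on (a hedged sketch, reusing the paths defined above), the serialized training set can be reloaded without recomputing anything:
python
X_train_loaded = pickle.load(open(X_train_path, 'rb'))
y_train_loaded = pickle.load(open(y_train_path, 'rb'))
print(X_train_loaded.shape, y_train_loaded.shape)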
End of explanation |
408 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Background information on configurations
This tutorial gives a short introduction to MNE configurations.
Step1: MNE-python stores configurations to a folder called .mne in the user's
home directory, or to AppData directory on Windows. The path to the config
file can be found out by calling
Step2: These configurations include information like sample data paths and plotter
window sizes. Files inside this folder should never be modified manually.
Let's see what the configurations contain.
Step3: We see fields like "MNE_DATASETS_SAMPLE_PATH". As the name suggests, this is
the path the sample data is downloaded to. All the fields in the
configuration file can be modified by calling
Step4: The default value is now set to INFO. This level will now be used by default
every time we call a function in MNE. We can set the global logging level for
only this session by calling
Step5: Notice how the value in the config file was not changed. Logging level of
WARNING only applies for this session. Let's see what logging level of
WARNING prints for
Step6: Nothing. This means that no warnings were emitted during the computation. If
you look at the documentation of
Step7: As you see there is some info about what the function is doing. The logging
level can be set to 'DEBUG', 'INFO', 'WARNING', 'ERROR' or 'CRITICAL'. It can
also be set to an integer or a boolean value. The correspondance to string
values can be seen in the table below. verbose=None uses the default
value from the configuration file.
+----------+---------+---------+
| String | Integer | Boolean |
+==========+=========+=========+
| DEBUG | 10 | |
+----------+---------+---------+
| INFO | 20 | True |
+----------+---------+---------+
| WARNING | 30 | False |
+----------+---------+---------+
| ERROR | 40 | |
+----------+---------+---------+
| CRITICAL | 50 | |
+----------+---------+---------+ | Python Code:
import os.path as op
import mne
from mne.datasets.sample import data_path
fname = op.join(data_path(), 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(fname).crop(0, 10)
original_level = mne.get_config('MNE_LOGGING_LEVEL', 'INFO')
Explanation: Background information on configurations
This tutorial gives a short introduction to MNE configurations.
End of explanation
print(mne.get_config_path())
Explanation: MNE-python stores configurations to a folder called .mne in the user's
home directory, or to AppData directory on Windows. The path to the config
file can be found by calling :func:mne.get_config_path.
End of explanation
print(mne.get_config())
Explanation: These configurations include information like sample data paths and plotter
window sizes. Files inside this folder should never be modified manually.
Let's see what the configurations contain.
End of explanation
mne.set_config('MNE_LOGGING_LEVEL', 'INFO')
print(mne.get_config(key='MNE_LOGGING_LEVEL'))
Explanation: We see fields like "MNE_DATASETS_SAMPLE_PATH". As the name suggests, this is
the path the sample data is downloaded to. All the fields in the
configuration file can be modified by calling :func:mne.set_config.
Logging
Configurations also include the default logging level for the functions. This
field is called "MNE_LOGGING_LEVEL".
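For instance (a minimal sketch reusing the accessors shown above), an individual field can be read back by key; it should return None if the field has not been set yet:
python
print(mne.get_config(key='MNE_DATASETS_SAMPLE_PATH'))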
End of explanation
mne.set_log_level('WARNING')
print(mne.get_config(key='MNE_LOGGING_LEVEL'))
Explanation: The default value is now set to INFO. This level will now be used by default
every time we call a function in MNE. We can set the global logging level for
only this session by calling the :func:mne.set_log_level function.
End of explanation
cov = mne.compute_raw_covariance(raw)
Explanation: Notice how the value in the config file was not changed. Logging level of
WARNING only applies for this session. Let's see what logging level of
WARNING prints for :func:mne.compute_raw_covariance.
End of explanation
cov = mne.compute_raw_covariance(raw, verbose=True)
Explanation: Nothing. This means that no warnings were emitted during the computation. If
you look at the documentation of :func:mne.compute_raw_covariance, you
notice the verbose keyword. Setting this parameter does not touch the
configurations, but sets the logging level for just this one function call.
Let's see what happens with logging level of INFO.
End of explanation
mne.set_config('MNE_LOGGING_LEVEL', original_level)
Explanation: As you see there is some info about what the function is doing. The logging
level can be set to 'DEBUG', 'INFO', 'WARNING', 'ERROR' or 'CRITICAL'. It can
also be set to an integer or a boolean value. The correspondence to string
values can be seen in the table below. verbose=None uses the default
value from the configuration file.
+----------+---------+---------+
| String | Integer | Boolean |
+==========+=========+=========+
| DEBUG | 10 | |
+----------+---------+---------+
| INFO | 20 | True |
+----------+---------+---------+
| WARNING | 30 | False |
+----------+---------+---------+
| ERROR | 40 | |
+----------+---------+---------+
| CRITICAL | 50 | |
+----------+---------+---------+
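For example (a hedged sketch using the raw object from above), the three calls below should be equivalent ways of requesting the WARNING level for a single call:
python
cov_a = mne.compute_raw_covariance(raw, verbose='WARNING')
cov_b = mne.compute_raw_covariance(raw, verbose=30)
cov_c = mne.compute_raw_covariance(raw, verbose=False)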
End of explanation |
409 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: 函数式 API
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 简介
Keras 函数式 API 是一种比 tf.keras.Sequential API 更加灵活的模型创建方式。函数式 API 可以处理具有非线性拓扑的模型、具有共享层的模型,以及具有多个输入或输出的模型。
深度学习模型通常是层的有向无环图 (DAG)。因此,函数式 API 是构建层计算图的一种方式。
请考虑以下模型:
(input
Step3: 数据的形状设置为 784 维向量。由于仅指定了每个样本的形状,因此始终忽略批次大小。
例如,如果您有一个形状为 (32, 32, 3) 的图像输入,则可以使用:
Step4: 返回的 inputs 包含馈送给模型的输入数据的形状和 dtype。形状如下:
Step5: dtype 如下:
Step6: 可以通过在此 inputs 对象上调用层,在层计算图中创建新的节点:
Step7: “层调用”操作就像从“输入”向您创建的该层绘制一个箭头。您将输入“传递”到 dense 层,然后得到 x。
让我们为层计算图多添加几个层:
Step8: 此时,您可以通过在层计算图中指定模型的输入和输出来创建 Model:
Step9: 让我们看看模型摘要是什么样子:
Step10: 您还可以将模型绘制为计算图:
Step11: 并且,您还可以选择在绘制的计算图中显示每层的输入和输出形状:
Step12: 此图和代码几乎完全相同。在代码版本中,连接箭头由调用操作代替。
“层计算图”是深度学习模型的直观心理图像,而函数式 API 是创建密切反映此图像的模型的方法。
训练、评估和推断
对于使用函数式 API 构建的模型来说,其训练、评估和推断的工作方式与 Sequential 模型完全相同。
如下所示,加载 MNIST 图像数据,将其改造为向量,将模型与数据拟合(同时监视验证拆分的性能),然后在测试数据上评估模型:
Step13: 有关更多信息,请参阅训练和评估指南。
保存和序列化
对于使用函数式 API 构建的模型,其保存模型和序列化的工作方式与 Sequential 模型相同。保存函数式模型的标准方式是调用 model.save() 将整个模型保存为单个文件。您可以稍后从该文件重新创建相同的模型,即使构建该模型的代码已不再可用。
保存的文件包括:
模型架构
模型权重值(在训练过程中得知)
模型训练配置(如果有的话,如传递给 compile)
优化器及其状态(如果有的话,用来从上次中断的地方重新开始训练)
Step14: 有关详细信息,请阅读模型序列化和保存指南。
使用相同的层计算图定义多个模型
在函数式 API 中,模型是通过在层计算图中指定其输入和输出来创建的。这意味着可以使用单个层计算图来生成多个模型。
在下面的示例中,您将使用相同的层堆栈来实例化两个模型:能够将图像输入转换为 16 维向量的 encoder 模型,以及用于训练的端到端 autoencoder 模型。
Step15: 在上例中,解码架构与编码架构严格对称,因此输出形状与输入形状 (28, 28, 1) 相同。
Conv2D 层的反面是 Conv2DTranspose 层,MaxPooling2D 层的反面是 UpSampling2D 层。
所有模型均可像层一样调用
您可以通过在 Input 上或在另一个层的输出上调用任何模型来将其当作层来处理。通过调用模型,您不仅可以重用模型的架构,还可以重用它的权重。
为了查看实际运行情况,下面是对自动编码器示例的另一种处理方式,该示例创建了一个编码器模型、一个解码器模型,并在两个调用中将它们链接,以获得自动编码器模型:
Step16: 如您所见,模型可以嵌套:模型可以包含子模型(因为模型就像层一样)。模型嵌套的一个常见用例是装配。例如,以下展示了如何将一组模型装配成一个平均其预测的模型:
Step17: 处理复杂的计算图拓扑
具有多个输入和输出的模型
函数式 API 使处理多个输入和输出变得容易。而这无法使用 Sequential API 处理。
例如,如果您要构建一个系统,该系统按照优先级对自定义问题工单进行排序,然后将工单传送到正确的部门,则此模型将具有三个输入:
工单标题(文本输入),
工单的文本正文(文本输入),以及
用户添加的任何标签(分类输入)
此模型将具有两个输出:
介于 0 和 1 之间的优先级分数(标量 Sigmoid 输出),以及
应该处理工单的部门(部门范围内的 Softmax 输出)。
您可以使用函数式 API 通过几行代码构建此模型:
Step18: 现在绘制模型:
Step19: 编译此模型时,可以为每个输出分配不同的损失。甚至可以为每个损失分配不同的权重,以调整其对总训练损失的贡献。
Step20: 由于输出层具有不同的名称,您还可以像下面这样指定损失:
Step21: 通过传递输入和目标的 NumPy 数组列表来训练模型:
Step22: 当使用 Dataset 对象调用拟合时,它应该会生成一个列表元组(如 ([title_data, body_data, tags_data], [priority_targets, dept_targets]) 或一个字典元组(如 ({'title'
Step23: 绘制模型:
Step24: 现在训练模型:
Step25: 共享层
函数式 API 的另一个很好的用途是使用共享层的模型。共享层是在同一个模型中多次重用的层实例,它们会学习与层计算图中的多个路径相对应的特征。
共享层通常用于对来自相似空间(例如,两个具有相似词汇的不同文本)的输入进行编码。它们可以实现在这些不同的输入之间共享信息,以及在更少的数据上训练这种模型。如果在其中的一个输入中看到了一个给定单词,那么将有利于处理通过共享层的所有输入。
要在函数式 API 中共享层,请多次调用同一个层实例。例如,下面是一个在两个不同文本输入之间共享的 Embedding 层:
Step26: 提取和重用层计算图中的节点
由于要处理的层计算图是静态数据结构,可以对其进行访问和检查。而这就是将函数式模型绘制为图像的方式。
这也意味着您可以访问中间层的激活函数(计算图中的“节点”)并在其他地方重用它们,这对于特征提取之类的操作十分有用。
让我们来看一个例子。下面是一个 VGG19 模型,其权重已在 ImageNet 上进行了预训练:
Step27: 下面是通过查询计算图数据结构获得的模型的中间激活:
Step28: 使用以下特征来创建新的特征提取模型,该模型会返回中间层激活的值:
Step29: 这尤其适用于诸如神经样式转换之类的任务。
使用自定义层扩展 API
tf.keras 包含了各种内置层,例如:
卷积层:Conv1D、Conv2D、Conv3D、Conv2DTranspose
池化层:MaxPooling1D、MaxPooling2D、MaxPooling3D、AveragePooling1D
RNN 层:GRU、LSTM、ConvLSTM2D
BatchNormalization、Dropout、Embedding 等
但是,如果找不到所需内容,可以通过创建您自己的层来方便地扩展 API。所有层都会子类化 Layer 类并实现下列方法:
call 方法,用于指定由层完成的计算。
build 方法,用于创建层的权重(这只是一种样式约定,因为您也可以在 __init__ 中创建权重)。
要详细了解从头开始创建层的详细信息,请阅读自定义层和模型指南。
以下是 tf.keras.layers.Dense 的基本实现:
Step30: 为了在您的自定义层中支持序列化,请定义一个 get_config 方法,返回层实例的构造函数参数:
Step31: 您也可以选择实现 from_config(cls, config) 类方法,该方法用于在给定其配置字典的情况下重新创建层实例。from_config 的默认实现如下:
python
def from_config(cls, config)
Step32: 您可以在函数式 API 中使用任何子类化层或模型,前提是它实现了遵循以下模式之一的 call 方法:
call(self, inputs, **kwargs) - 其中 inputs 是张量或张量的嵌套结构(例如张量列表),**kwargs 是非张量参数(非输入)。
call(self, inputs, training=None, **kwargs) - 其中 training 是指示该层是否应在训练模式和推断模式下运行的布尔值。
call(self, inputs, mask=None, **kwargs) - 其中 mask 是一个布尔掩码张量(对 RNN 等十分有用)。
call(self, inputs, training=None, mask=None, **kwargs) - 当然,您可以同时具有掩码和训练特有的行为。
此外,如果您在自定义层或模型上实现了 get_config 方法,则您创建的函数式模型将仍可序列化和克隆。
下面是一个从头开始编写、用于函数式模型的自定义 RNN 的简单示例: | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Explanation: 函数式 API
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/guide/keras/functional" class=""><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" class="">在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/functional.ipynb" class=""><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" class="">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/functional.ipynb" class=""><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" class="">在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/keras/functional.ipynb" class=""><img src="https://tensorflow.google.cn/images/download_logo_32px.png" class="">下载笔记本</a></td>
</table>
设置
End of explanation
inputs = keras.Input(shape=(784,))
Explanation: 简介
Keras 函数式 API 是一种比 tf.keras.Sequential API 更加灵活的模型创建方式。函数式 API 可以处理具有非线性拓扑的模型、具有共享层的模型,以及具有多个输入或输出的模型。
深度学习模型通常是层的有向无环图 (DAG)。因此,函数式 API 是构建层计算图的一种方式。
请考虑以下模型:
(input: 784-dimensional vectors)
       ↧
[Dense (64 units, relu activation)]
       ↧
[Dense (64 units, relu activation)]
       ↧
[Dense (10 units, softmax activation)]
       ↧
(output: logits of a probability distribution over 10 classes)
这是一个具有三层的基本计算图。要使用函数式 API 构建此模型,请先创建一个输入节点:
End of explanation
# Just for demonstration purposes.
img_inputs = keras.Input(shape=(32, 32, 3))
Explanation: 数据的形状设置为 784 维向量。由于仅指定了每个样本的形状,因此始终忽略批次大小。
例如,如果您有一个形状为 (32, 32, 3) 的图像输入,则可以使用:
End of explanation
inputs.shape
Explanation: 返回的 inputs 包含馈送给模型的输入数据的形状和 dtype。形状如下:
End of explanation
inputs.dtype
Explanation: dtype 如下:
End of explanation
dense = layers.Dense(64, activation="relu")
x = dense(inputs)
Explanation: 可以通过在此 inputs 对象上调用层,在层计算图中创建新的节点:
End of explanation
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10)(x)
Explanation: “层调用”操作就像从“输入”向您创建的该层绘制一个箭头。您将输入“传递”到 dense 层,然后得到 x。
让我们为层计算图多添加几个层:
End of explanation
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")
Explanation: 此时,您可以通过在层计算图中指定模型的输入和输出来创建 Model:
End of explanation
model.summary()
Explanation: 让我们看看模型摘要是什么样子:
End of explanation
keras.utils.plot_model(model, "my_first_model.png")
Explanation: 您还可以将模型绘制为计算图:
End of explanation
keras.utils.plot_model(model, "my_first_model_with_shape_info.png", show_shapes=True)
Explanation: 并且,您还可以选择在绘制的计算图中显示每层的输入和输出形状:
End of explanation
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.RMSprop(),
metrics=["accuracy"],
)
history = model.fit(x_train, y_train, batch_size=64, epochs=2, validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])
Explanation: 此图和代码几乎完全相同。在代码版本中,连接箭头由调用操作代替。
“层计算图”是深度学习模型的直观心理图像,而函数式 API 是创建密切反映此图像的模型的方法。
训练、评估和推断
对于使用函数式 API 构建的模型来说,其训练、评估和推断的工作方式与 Sequential 模型完全相同。
如下所示,加载 MNIST 图像数据,将其改造为向量,将模型与数据拟合(同时监视验证拆分的性能),然后在测试数据上评估模型:
End of explanation
model.save("path_to_my_model")
del model
# Recreate the exact same model purely from the file:
model = keras.models.load_model("path_to_my_model")
Explanation: 有关更多信息,请参阅训练和评估指南。
保存和序列化
对于使用函数式 API 构建的模型,其保存模型和序列化的工作方式与 Sequential 模型相同。保存函数式模型的标准方式是调用 model.save() 将整个模型保存为单个文件。您可以稍后从该文件重新创建相同的模型,即使构建该模型的代码已不再可用。
保存的文件包括:
模型架构
模型权重值(在训练过程中得知)
模型训练配置(如果有的话,如传递给 compile)
优化器及其状态(如果有的话,用来从上次中断的地方重新开始训练)
End of explanation
encoder_input = keras.Input(shape=(28, 28, 1), name="img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
autoencoder = keras.Model(encoder_input, decoder_output, name="autoencoder")
autoencoder.summary()
Explanation: 有关详细信息,请阅读模型序列化和保存指南。
使用相同的层计算图定义多个模型
在函数式 API 中,模型是通过在层计算图中指定其输入和输出来创建的。这意味着可以使用单个层计算图来生成多个模型。
在下面的示例中,您将使用相同的层堆栈来实例化两个模型:能够将图像输入转换为 16 维向量的 encoder 模型,以及用于训练的端到端 autoencoder 模型。
End of explanation
encoder_input = keras.Input(shape=(28, 28, 1), name="original_img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()
decoder_input = keras.Input(shape=(16,), name="encoded_img")
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
decoder = keras.Model(decoder_input, decoder_output, name="decoder")
decoder.summary()
autoencoder_input = keras.Input(shape=(28, 28, 1), name="img")
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name="autoencoder")
autoencoder.summary()
Explanation: 在上例中,解码架构与编码架构严格对称,因此输出形状与输入形状 (28, 28, 1) 相同。
Conv2D 层的反面是 Conv2DTranspose 层,MaxPooling2D 层的反面是 UpSampling2D 层。
所有模型均可像层一样调用
您可以通过在 Input 上或在另一个层的输出上调用任何模型来将其当作层来处理。通过调用模型,您不仅可以重用模型的架构,还可以重用它的权重。
为了查看实际运行情况,下面是对自动编码器示例的另一种处理方式,该示例创建了一个编码器模型、一个解码器模型,并在两个调用中将它们链接,以获得自动编码器模型:
End of explanation
def get_model():
inputs = keras.Input(shape=(128,))
outputs = layers.Dense(1)(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
Explanation: 如您所见,模型可以嵌套:模型可以包含子模型(因为模型就像层一样)。模型嵌套的一个常见用例是装配。例如,以下展示了如何将一组模型装配成一个平均其预测的模型:
End of explanation
num_tags = 12 # Number of unique issue tags
num_words = 10000 # Size of vocabulary obtained when preprocessing text data
num_departments = 4 # Number of departments for predictions
title_input = keras.Input(
shape=(None,), name="title"
) # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name="body") # Variable-length sequence of ints
tags_input = keras.Input(
shape=(num_tags,), name="tags"
) # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the text into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, name="priority")(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, name="department")(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(
inputs=[title_input, body_input, tags_input],
outputs=[priority_pred, department_pred],
)
Explanation: 处理复杂的计算图拓扑
具有多个输入和输出的模型
函数式 API 使处理多个输入和输出变得容易。而这无法使用 Sequential API 处理。
例如,如果您要构建一个系统,该系统按照优先级对自定义问题工单进行排序,然后将工单传送到正确的部门,则此模型将具有三个输入:
工单标题(文本输入),
工单的文本正文(文本输入),以及
用户添加的任何标签(分类输入)
此模型将具有两个输出:
介于 0 和 1 之间的优先级分数(标量 Sigmoid 输出),以及
应该处理工单的部门(部门范围内的 Softmax 输出)。
您可以使用函数式 API 通过几行代码构建此模型:
End of explanation
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
Explanation: 现在绘制模型:
End of explanation
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[
keras.losses.BinaryCrossentropy(from_logits=True),
keras.losses.CategoricalCrossentropy(from_logits=True),
],
loss_weights=[1.0, 0.2],
)
Explanation: 编译此模型时,可以为每个输出分配不同的损失。甚至可以为每个损失分配不同的权重,以调整其对总训练损失的贡献。
End of explanation
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"priority": keras.losses.BinaryCrossentropy(from_logits=True),
"department": keras.losses.CategoricalCrossentropy(from_logits=True),
},
loss_weights=[1.0, 0.2],
)
Explanation: 由于输出层具有不同的名称,您还可以像下面这样指定损失:
End of explanation
# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype("float32")
# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))
model.fit(
{"title": title_data, "body": body_data, "tags": tags_data},
{"priority": priority_targets, "department": dept_targets},
epochs=2,
batch_size=32,
)
Explanation: 通过传递输入和目标的 NumPy 数组列表来训练模型:
End of explanation
inputs = keras.Input(shape=(32, 32, 3), name="img")
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.Conv2D(64, 3, activation="relu")(x)
block_1_output = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_1_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_2_output = layers.add([x, block_1_output])
x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_2_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_3_output = layers.add([x, block_2_output])
x = layers.Conv2D(64, 3, activation="relu")(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10)(x)
model = keras.Model(inputs, outputs, name="toy_resnet")
model.summary()
Explanation: 当使用 Dataset 对象调用拟合时,它应该会生成一个列表元组(如 ([title_data, body_data, tags_data], [priority_targets, dept_targets]) 或一个字典元组(如 ({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets}))。
有关详细说明,请参阅训练和评估指南。
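作为补充示意(这是一个假设性的草图,沿用上文定义的 title_data、body_data、tags_data 以及两个目标数组),可以把这样的字典元组包装成 tf.data.Dataset 再传给 fit:
python
# 以字典形式组织多输入与多输出目标
train_ds = tf.data.Dataset.from_tensor_slices(
    (
        {"title": title_data, "body": body_data, "tags": tags_data},
        {"priority": priority_targets, "department": dept_targets},
    )
).batch(32)
model.fit(train_ds, epochs=1)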
小 ResNet 模型
除了具有多个输入和输出的模型外,函数式 API 还使处理非线性连接拓扑(这些模型的层没有按顺序连接)变得容易。这是 Sequential API 无法处理的。
关于这一点的一个常见用例是残差连接。让我们来为 CIFAR10 构建一个小 ResNet 模型以进行演示:
End of explanation
keras.utils.plot_model(model, "mini_resnet.png", show_shapes=True)
Explanation: 绘制模型:
End of explanation
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=["acc"],
)
# We restrict the data to the first 1000 samples so as to limit execution time
# on Colab. Try to train on the entire dataset until convergence!
model.fit(x_train[:1000], y_train[:1000], batch_size=64, epochs=1, validation_split=0.2)
Explanation: 现在训练模型:
End of explanation
# Embedding for 1000 unique words mapped to 128-dimensional vectors
shared_embedding = layers.Embedding(1000, 128)
# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype="int32")
# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype="int32")
# Reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)
Explanation: 共享层
函数式 API 的另一个很好的用途是使用共享层的模型。共享层是在同一个模型中多次重用的层实例,它们会学习与层计算图中的多个路径相对应的特征。
共享层通常用于对来自相似空间(例如,两个具有相似词汇的不同文本)的输入进行编码。它们可以实现在这些不同的输入之间共享信息,以及在更少的数据上训练这种模型。如果在其中的一个输入中看到了一个给定单词,那么将有利于处理通过共享层的所有输入。
要在函数式 API 中共享层,请多次调用同一个层实例。例如,下面是一个在两个不同文本输入之间共享的 Embedding 层:
End of explanation
vgg19 = tf.keras.applications.VGG19()
Explanation: 提取和重用层计算图中的节点
由于要处理的层计算图是静态数据结构,可以对其进行访问和检查。而这就是将函数式模型绘制为图像的方式。
这也意味着您可以访问中间层的激活函数(计算图中的“节点”)并在其他地方重用它们,这对于特征提取之类的操作十分有用。
让我们来看一个例子。下面是一个 VGG19 模型,其权重已在 ImageNet 上进行了预训练:
End of explanation
features_list = [layer.output for layer in vgg19.layers]
Explanation: 下面是通过查询计算图数据结构获得的模型的中间激活:
End of explanation
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype("float32")
extracted_features = feat_extraction_model(img)
Explanation: 使用以下特征来创建新的特征提取模型,该模型会返回中间层激活的值:
End of explanation
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
Explanation: 这尤其适用于诸如神经样式转换之类的任务。
使用自定义层扩展 API
tf.keras 包含了各种内置层,例如:
卷积层:Conv1D、Conv2D、Conv3D、Conv2DTranspose
池化层:MaxPooling1D、MaxPooling2D、MaxPooling3D、AveragePooling1D
RNN 层:GRU、LSTM、ConvLSTM2D
BatchNormalization、Dropout、Embedding 等
但是,如果找不到所需内容,可以通过创建您自己的层来方便地扩展 API。所有层都会子类化 Layer 类并实现下列方法:
call 方法,用于指定由层完成的计算。
build 方法,用于创建层的权重(这只是一种样式约定,因为您也可以在 __init__ 中创建权重)。
要详细了解从头开始创建层的详细信息,请阅读自定义层和模型指南。
以下是 tf.keras.layers.Dense 的基本实现:
End of explanation
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(config, custom_objects={"CustomDense": CustomDense})
Explanation: 为了在您的自定义层中支持序列化,请定义一个 get_config 方法,返回层实例的构造函数参数:
End of explanation
units = 32
timesteps = 10
input_dim = 5
# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation="tanh")
self.projection_2 = layers.Dense(units=units, activation="tanh")
# Our previously-defined Functional model
self.classifier = model
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
print(features.shape)
return self.classifier(features)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, timesteps, input_dim)))
Explanation: 您也可以选择实现 from_config(cls, config) 类方法,该方法用于在给定其配置字典的情况下重新创建层实例。from_config 的默认实现如下:
python
def from_config(cls, config): return cls(**config)
何时使用函数式 API
什么时候应该使用 Keras 函数式 API 来创建新的模型,或者什么时候应该直接对 Model 类进行子类化呢?通常来说,函数式 API 更高级、更易用且更安全,并且具有许多子类化模型所不支持的功能。
但是,当构建不容易表示为有向无环的层计算图的模型时,模型子类化会提供更大的灵活性。例如,您无法使用函数式 API 来实现 Tree-RNN,而必须直接子类化 Model 类。
要深入了解函数式 API 和模型子类化之间的区别,请阅读 TensorFlow 2.0 符号式 API 和命令式 API 介绍。
函数式 API 的优势:
下列属性对于序贯模型(也是数据结构)同样适用,但对于子类化模型(是 Python 字节码而非数据结构)则不适用。
更加简洁
没有 super(MyClass, self).__init__(...),没有 def call(self, ...): 等内容。
对比:
python
inputs = keras.Input(shape=(32,))
x = layers.Dense(64, activation='relu')(inputs)
outputs = layers.Dense(10)(x)
mlp = keras.Model(inputs, outputs)
下面是子类化版本:
python
class MLP(keras.Model):

    def __init__(self, **kwargs):
        super(MLP, self).__init__(**kwargs)
        self.dense_1 = layers.Dense(64, activation='relu')
        self.dense_2 = layers.Dense(10)

    def call(self, inputs):
        x = self.dense_1(inputs)
        return self.dense_2(x)

# Instantiate the model.
mlp = MLP()

# Necessary to create the model's state.
# The model doesn't have a state until it's called at least once.
_ = mlp(tf.zeros((1, 32)))
定义连接计算图时进行模型验证
在函数式 API 中,输入规范(形状和 dtype)是预先创建的(使用 Input)。每次调用层时,该层都会检查传递给它的规范是否符合其假设,如不符合,它将引发有用的错误消息。
这样可以保证能够使用函数式 API 构建的任何模型都可以运行。所有调试(除与收敛有关的调试外)均在模型构造的过程中静态发生,而不是在执行时发生。这类似于编译器中的类型检查。
函数式模型可绘制且可检查
您可以将模型绘制为计算图,并且可以轻松访问该计算图中的中间节点。例如,要提取和重用中间层的激活(如前面的示例所示),请运行以下代码:
python
features_list = [layer.output for layer in vgg19.layers]
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
函数式模型可以序列化或克隆
因为函数式模型是数据结构而非一段代码,所以它可以安全地序列化,并且可以保存为单个文件,从而使您可以重新创建完全相同的模型,而无需访问任何原始代码。请参阅序列化和保存指南。
要序列化子类化模型,实现器必须在模型级别指定 get_config() 和 from_config() 方法。
函数式 API 的劣势:
不支持动态架构
函数式 API 将模型视为层的 DAG。对于大多数深度学习架构来说确实如此,但并非所有(例如,递归网络或 Tree RNN 就不遵循此假设,无法在函数式 API 中实现)。
混搭 API 样式
在函数式 API 或模型子类化之间进行选择并非是让您作出二选一的决定而将您限制在某一类模型中。tf.keras API 中的所有模型都可以彼此交互,无论它们是 Sequential 模型、函数式模型,还是从头开始编写的子类化模型。
您始终可以将函数式模型或 Sequential 模型用作子类化模型或层的一部分:
End of explanation
units = 32
timesteps = 10
input_dim = 5
batch_size = 16
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation="tanh")
self.projection_2 = layers.Dense(units=units, activation="tanh")
self.classifier = layers.Dense(1)
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
return self.classifier(features)
# Note that you specify a static batch size for the inputs with the `batch_shape`
# arg, because the inner computation of `CustomRNN` requires a static batch size
# (when you create the `state` zeros tensor).
inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))
x = layers.Conv1D(32, 3)(inputs)
outputs = CustomRNN()(x)
model = keras.Model(inputs, outputs)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, 10, 5)))
Explanation: 您可以在函数式 API 中使用任何子类化层或模型,前提是它实现了遵循以下模式之一的 call 方法:
call(self, inputs, **kwargs) - 其中 inputs 是张量或张量的嵌套结构(例如张量列表),**kwargs 是非张量参数(非输入)。
call(self, inputs, training=None, **kwargs) - 其中 training 是指示该层是否应在训练模式和推断模式下运行的布尔值。
call(self, inputs, mask=None, **kwargs) - 其中 mask 是一个布尔掩码张量(对 RNN 等十分有用)。
call(self, inputs, training=None, mask=None, **kwargs) - 当然,您可以同时具有掩码和训练特有的行为。
此外,如果您在自定义层或模型上实现了 get_config 方法,则您创建的函数式模型将仍可序列化和克隆。
下面是一个从头开始编写、用于函数式模型的自定义 RNN 的简单示例:
End of explanation |
410 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
automaton.is_synchronized
Whether the automaton is synchronized
Step1: The following automaton is not synchronized, because a transition with less letters on the second tape $a| \varepsilon$ is followed by a transition with as many letters on each tape $b|y$.
Step2: This automaton is synchronized, because the transition with less letters on the first tape occurs "at the end"
Step3: Spontaneous transitions are not taken in account when checking for synchronization.
Step4: Note that in a synchronized automaton, the corresponding delay_automaton has delays of 0 or strictly increasing (apart from spontaneous transitions). | Python Code:
import vcsn
ctx = vcsn.context("lat<law_char, law_char>, b")
Explanation: automaton.is_synchronized
Whether the automaton is synchronized:
- every transition has the same number of letters on every tape, except for a few leading to final states
- in each accepting path, disregarding spontaneous transitions, if a $\varepsilon$ is seen on one tape, no more letters will appear on this tape.
Precondition:
automaton is a transducer
See also:
- automaton.synchronize
- automaton.delay_automaton
Examples
End of explanation
a = ctx.expression(r"a|x+(a|\e)(b|y)").standard()
a
a.is_synchronized()
Explanation: The following automaton is not synchronized, because a transition with less letters on the second tape $a| \varepsilon$ is followed by a transition with as many letters on each tape $b|y$.
End of explanation
a = ctx.expression(r"a|x+(b|y)(e|xyz)").standard()
a
a.is_synchronized()
Explanation: This automaton is synchronized, because the transition with less letters on the first tape occurs "at the end" : it is not followed by transitions with more letters on this tape.
End of explanation
a = ctx.expression(r"a|x+(b|y)(cde|z)").thompson()
a
a.is_synchronized()
Explanation: Spontaneous transitions are not taken in account when checking for synchronization.
End of explanation
a.delay_automaton()
Explanation: Note that in a synchronized automaton, the corresponding delay_automaton has delays of 0 or strictly increasing (apart from spontaneous transitions).
End of explanation |
411 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
机器学习纳米学位
监督学习
项目2
Step1: 练习:数据探索
首先我们对数据集进行一个粗略的探索,我们将看看每一个类别里会有多少被调查者?并且告诉我们这些里面多大比例是年收入大于50,000美元的。在下面的代码单元中,你将需要计算以下量:
总的记录数量,'n_records'
年收入大于50,000美元的人数,'n_greater_50k'.
年收入最多为50,000美元的人数 'n_at_most_50k'.
年收入大于50,000美元的人所占的比例, 'greater_percent'.
提示: 您可能需要查看上面的生成的表,以了解'income'条目的格式是什么样的。
Step2: 准备数据
在数据能够被作为输入提供给机器学习算法之前,它经常需要被清洗,格式化,和重新组织 - 这通常被叫做预处理。幸运的是,对于这个数据集,没有我们必须处理的无效或丢失的条目,然而,由于某一些特征存在的特性我们必须进行一定的调整。这个预处理都可以极大地帮助我们提升几乎所有的学习算法的结果和预测能力。
获得特征和标签
income 列是我们需要的标签,记录一个人的年收入是否高于50K。 因此我们应该把他从数据中剥离出来,单独存放。
Step3: 转换倾斜的连续特征
一个数据集有时可能包含至少一个靠近某个数字的特征,但有时也会有一些相对来说存在极大值或者极小值的不平凡分布的的特征。算法对这种分布的数据会十分敏感,并且如果这种数据没有能够很好地规一化处理会使得算法表现不佳。在人口普查数据集的两个特征符合这个描述:'capital-gain'和'capital-loss'。
运行下面的代码单元以创建一个关于这两个特征的条形图。请注意当前的值的范围和它们是如何分布的。
Step4: 对于高度倾斜分布的特征如'capital-gain'和'capital-loss',常见的做法是对数据施加一个<a href="https
Step5: 规一化数字特征
除了对于高度倾斜的特征施加转换,对数值特征施加一些形式的缩放通常会是一个好的习惯。在数据上面施加一个缩放并不会改变数据分布的形式(比如上面说的'capital-gain' or 'capital-loss');但是,规一化保证了每一个特征在使用监督学习器的时候能够被平等的对待。注意一旦使用了缩放,观察数据的原始形式不再具有它本来的意义了,就像下面的例子展示的。
运行下面的代码单元来规一化每一个数字特征。我们将使用sklearn.preprocessing.MinMaxScaler来完成这个任务。
Step6: 练习:数据预处理
从上面的数据探索中的表中,我们可以看到有几个属性的每一条记录都是非数字的。通常情况下,学习算法期望输入是数字的,这要求非数字的特征(称为类别变量)被转换。转换类别变量的一种流行的方法是使用独热编码方案。独热编码为每一个非数字特征的每一个可能的类别创建一个_“虚拟”_变量。例如,假设someFeature有三个可能的取值A,B或者C,。我们将把这个特征编码成someFeature_A, someFeature_B和someFeature_C.
| 特征X | | 特征X_A | 特征X_B | 特征X_C |
|
Step7: 混洗和切分数据
现在所有的 类别变量 已被转换成数值特征,而且所有的数值特征已被规一化。和我们一般情况下做的一样,我们现在将数据(包括特征和它们的标签)切分成训练和测试集。其中80%的数据将用于训练和20%的数据用于测试。然后再进一步把训练数据分为训练集和验证集,用来选择和优化模型。
运行下面的代码单元来完成切分。
Step8: 评价模型性能
在这一部分中,我们将尝试四种不同的算法,并确定哪一个能够最好地建模数据。四种算法包含一个天真的预测器 和三个你选择的监督学习器。
评价方法和朴素的预测器
CharityML通过他们的研究人员知道被调查者的年收入大于\$50,000最有可能向他们捐款。因为这个原因CharityML对于准确预测谁能够获得\$50,000以上收入尤其有兴趣。这样看起来使用准确率作为评价模型的标准是合适的。另外,把没有收入大于\$50,000的人识别成年收入大于\$50,000对于CharityML来说是有害的,因为他想要找到的是有意愿捐款的用户。这样,我们期望的模型具有准确预测那些能够年收入大于\$50,000的能力比模型去查全这些被调查者更重要。我们能够使用F-beta score作为评价指标,这样能够同时考虑查准率和查全率:
$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
尤其是,当 $\beta = 0.5$ 的时候更多的强调查准率,这叫做F$_{0.5}$ score (或者为了简单叫做F-score)。
问题 1 - 天真的预测器的性能
通过查看收入超过和不超过 \$50,000 的人数,我们能发现多数被调查者年收入没有超过 \$50,000。如果我们简单地预测说“这个人的收入没有超过 \$50,000”,我们就可以得到一个 准确率超过 50% 的预测。这样我们甚至不用看数据就能做到一个准确率超过 50%。这样一个预测被称作是天真的。通常对数据使用一个天真的预测器是十分重要的,这样能够帮助建立一个模型表现是否好的基准。 使用下面的代码单元计算天真的预测器的相关性能。将你的计算结果赋值给'accuracy', ‘precision’, ‘recall’ 和 'fscore',这些值会在后面被使用,请注意这里不能使用scikit-learn,你需要根据公式自己实现相关计算。
如果我们选择一个无论什么情况都预测被调查者年收入大于 \$50,000 的模型,那么这个模型在验证集上的准确率,查准率,查全率和 F-score是多少?
Step9: 监督学习模型
问题 2 - 模型应用
你能够在 scikit-learn 中选择以下监督学习模型
- 高斯朴素贝叶斯 (GaussianNB)
- 决策树 (DecisionTree)
- 集成方法 (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K近邻 (K Nearest Neighbors)
- 随机梯度下降分类器 (SGDC)
- 支撑向量机 (SVM)
- Logistic回归(LogisticRegression)
从上面的监督学习模型中选择三个适合我们这个问题的模型,并回答相应问题。
模型1
模型名称
回答:决策树
描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)
回答:慢性胃炎中医辨证分型中的应用。(http
Step10: 练习:初始模型的评估
在下面的代码单元中,您将需要实现以下功能:
- 导入你在前面讨论的三个监督学习模型。
- 初始化三个模型并存储在'clf_A','clf_B'和'clf_C'中。
- 使用模型的默认参数值,在接下来的部分中你将需要对某一个模型的参数进行调整。
- 设置random_state (如果有这个参数)。
- 计算1%, 10%, 100%的训练数据分别对应多少个数据点,并将这些值存储在'samples_1', 'samples_10', 'samples_100'中
注意:取决于你选择的算法,下面实现的代码可能需要一些时间来运行!
Step11: 提高效果
在这最后一节中,您将从三个有监督的学习模型中选择 最好的 模型来使用学生数据。你将在整个训练集(X_train和y_train)上使用网格搜索优化至少调节一个参数以获得一个比没有调节之前更好的 F-score。
问题 3 - 选择最佳的模型
基于你前面做的评价,用一到两段话向 CharityML 解释这三个模型中哪一个对于判断被调查者的年收入大于 \$50,000 是最合适的。
提示:你的答案应该包括评价指标,预测/训练时间,以及该算法是否适合这里的数据。
回答:
出乎意料,神经网络的各项指标竟然是最好,训练时间短,在测试集上的准确率和FScrore都是三个算法中最高的。
算法适用性这边理解比较浅,请助教解答下,应该从那几个方面选择算法,最好提供一些资料可以查阅。
问题 4 - 用通俗的话解释模型
用一到两段话,向 CharityML 用外行也听得懂的话来解释最终模型是如何工作的。你需要解释所选模型的主要特点。例如,这个模型是怎样被训练的,它又是如何做出预测的。避免使用高级的数学或技术术语,不要使用公式或特定的算法名词。
回答: 我们使用了多层神经网络去预测捐款者,神经网络主要由一堆神经元构成,每个神经元都会负责一个很小的逻辑判断,接收几个输入参数,然后通过激活函数决定神经元最后的输出,而这个输出又可能作为输入传到下一个不同的神经元中。经过多层神经元的转换,会形成一套体系,这个体系可以接受我们的输入,最后的输出结果就是预测结果。
多层神经网络中的反向传播算法,类似于一个自适应的反馈系统;
就像一个公司要做一些决策,一级领导指示二级领导,二级领导布置任务给底层员工,这是一般的正向决策过程,反向传播就是,当底层员工发现一些问题后,报告给二级领导,二级领导又报告给一级领导,然后一、二级领导都会根据反馈调整自己的决策,以便下次取得更好的结果。
反向传播这块确实还没理解深入,算法也看不懂,还请老师给些资料看看,我自己搜到的都是5000字以内的那种,很粗略,希望有点比较系统的知识。
练习:模型调优
调节选择的模型的参数。使用网格搜索(GridSearchCV)来至少调整模型的重要参数(至少调整一个),这个参数至少需尝试3个不同的值。你要使用整个训练集来完成这个过程。在接下来的代码单元中,你需要实现以下功能:
导入sklearn.model_selection.GridSearchCV 和 sklearn.metrics.make_scorer.
初始化你选择的分类器,并将其存储在clf中。
设置random_state (如果有这个参数)。
创建一个对于这个模型你希望调整参数的字典。
例如
Step12: 问题 5 - 最终模型评估
你的最优模型在测试数据上的准确率和 F-score 是多少?这些分数比没有优化的模型好还是差?你优化的结果相比于你在问题 1中得到的天真预测器怎么样?
注意:请在下面的表格中填写你的结果,然后在答案框中提供讨论。
结果
Step13: 问题 7 - 提取特征重要性
观察上面创建的展示五个用于预测被调查者年收入是否大于\$50,000最相关的特征的可视化图像。
这五个特征的权重加起来是否超过了0.5?<br>
这五个特征和你在问题 6中讨论的特征比较怎么样?<br>
如果说你的答案和这里的相近,那么这个可视化怎样佐证了你的想法?<br>
如果你的选择不相近,那么为什么你觉得这些特征更加相关?
回答:超过了
有些相似,但是整体不准确
我选取的特征,一个是一些基本属性,而且很有可能影响其收入或同理心,但是数据表现的却十分冷酷,是否捐款和赚钱花钱有最大的关系。
特征选择
如果我们只是用可用特征的一个子集的话模型表现会怎么样?通过使用更少的特征来训练,在评价指标的角度来看我们的期望是训练和预测的时间会更少。从上面的可视化来看,我们可以看到前五个最重要的特征贡献了数据中所有特征中超过一半的重要性。这提示我们可以尝试去减小特征空间,简化模型需要学习的信息。下面代码单元将使用你前面发现的优化模型,并只使用五个最重要的特征在相同的训练集上训练模型。
Step14: 问题 8 - 特征选择的影响
最终模型在只是用五个特征的数据上和使用所有的特征数据上的 F-score 和准确率相比怎么样?
如果训练时间是一个要考虑的因素,你会考虑使用部分特征的数据作为你的训练集吗?
回答:均有下降
如果在数据比较大、硬件资源比较匮乏的时候,我会考虑使用,因为选取主要特征的方法会极大提高训练速度
但是再中小型数据或者说硬件资源足够时,我会尽量保证其准确性,一个良好准确的模型的训练时间损耗是值得的
问题 9 - 在测试集上测试你的模型
终于到了测试的时候,记住,测试集只能用一次。
使用你最有信心的模型,在测试集上测试,计算出准确率和 F-score。
简述你选择这个模型的原因,并分析测试结果 | Python Code:
# 检查你的Python版本
from sys import version_info
if version_info.major != 2 and version_info.minor != 7:
raise Exception('请使用Python 2.7来完成此项目')
# 为这个项目导入需要的库
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # 允许为DataFrame使用display()
# 导入附加的可视化代码visuals.py
import visuals as vs
# 为notebook提供更加漂亮的可视化
%matplotlib inline
# 导入人口普查数据
data = pd.read_csv("census.csv")
# 成功 - 显示第一条记录
display(data.head(n=1))
Explanation: 机器学习纳米学位
监督学习
项目2: 为CharityML寻找捐献者
欢迎来到机器学习工程师纳米学位的第二个项目!在此文件中,有些示例代码已经提供给你,但你还需要实现更多的功能让项目成功运行。除非有明确要求,你无须修改任何已给出的代码。以'练习'开始的标题表示接下来的代码部分中有你必须要实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以'TODO'标出。请仔细阅读所有的提示!
除了实现代码外,你还必须回答一些与项目和你的实现有关的问题。每一个需要你回答的问题都会以'问题 X'为标题。请仔细阅读每个问题,并且在问题后的'回答'文字框中写出完整的答案。我们将根据你对问题的回答和撰写代码所实现的功能来对你提交的项目进行评分。
提示:Code 和 Markdown 区域可通过Shift + Enter快捷键运行。此外,Markdown可以通过双击进入编辑模式。
开始
在这个项目中,你将使用1994年美国人口普查收集的数据,选用几个监督学习算法以准确地建模被调查者的收入。然后,你将根据初步结果从中选择出最佳的候选算法,并进一步优化该算法以最好地建模这些数据。你的目标是建立一个能够准确地预测被调查者年收入是否超过50000美元的模型。这种类型的任务会出现在那些依赖于捐款而存在的非营利性组织。了解人群的收入情况可以帮助一个非营利性的机构更好地了解他们要多大的捐赠,或是否他们应该接触这些人。虽然我们很难直接从公开的资源中推断出一个人的一般收入阶层,但是我们可以(也正是我们将要做的)从其他的一些公开的可获得的资源中获得一些特征从而推断出该值。
这个项目的数据集来自UCI机器学习知识库。这个数据集是由Ron Kohavi和Barry Becker在发表文章_"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_之后捐赠的,你可以在Ron Kohavi提供的在线版本中找到这个文章。我们在这里探索的数据集相比于原有的数据集有一些小小的改变,比如说移除了特征'fnlwgt' 以及一些遗失的或者是格式不正确的记录。
探索数据
运行下面的代码单元以载入需要的Python库并导入人口普查数据。注意数据集的最后一列'income'将是我们需要预测的列(表示被调查者的年收入会大于或者是最多50,000美元),人口普查数据中的每一列都将是关于被调查者的特征。
End of explanation
# TODO:总的记录数
n_records = len(data)
# TODO:被调查者的收入大于$50,000的人数
n_greater_50k = len(data[data['income']=='>50K'])
# TODO:被调查者的收入最多为$50,000的人数
n_at_most_50k = len(data[data['income']=='<=50K'])
# TODO:被调查者收入大于$50,000所占的比例
greater_percent = n_greater_50k / float(n_records) * 100
# 打印结果
print "Total number of records: {}".format(n_records)
print "Individuals making more than $50,000: {}".format(n_greater_50k)
print "Individuals making at most $50,000: {}".format(n_at_most_50k)
print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent)
Explanation: 练习:数据探索
首先我们对数据集进行一个粗略的探索,我们将看看每一个类别里会有多少被调查者?并且告诉我们这些里面多大比例是年收入大于50,000美元的。在下面的代码单元中,你将需要计算以下量:
总的记录数量,'n_records'
年收入大于50,000美元的人数,'n_greater_50k'.
年收入最多为50,000美元的人数 'n_at_most_50k'.
年收入大于50,000美元的人所占的比例, 'greater_percent'.
提示: 您可能需要查看上面的生成的表,以了解'income'条目的格式是什么样的。
End of explanation
# 将数据切分成特征和对应的标签
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
Explanation: 准备数据
在数据能够被作为输入提供给机器学习算法之前,它经常需要被清洗,格式化,和重新组织 - 这通常被叫做预处理。幸运的是,对于这个数据集,没有我们必须处理的无效或丢失的条目,然而,由于某一些特征存在的特性我们必须进行一定的调整。这个预处理都可以极大地帮助我们提升几乎所有的学习算法的结果和预测能力。
获得特征和标签
income 列是我们需要的标签,记录一个人的年收入是否高于50K。 因此我们应该把他从数据中剥离出来,单独存放。
End of explanation
# 可视化 'capital-gain'和'capital-loss' 两个特征
vs.distribution(features_raw)
Explanation: 转换倾斜的连续特征
一个数据集有时可能包含至少一个靠近某个数字的特征,但有时也会有一些相对来说存在极大值或者极小值的不平凡分布的的特征。算法对这种分布的数据会十分敏感,并且如果这种数据没有能够很好地规一化处理会使得算法表现不佳。在人口普查数据集的两个特征符合这个描述:'capital-gain'和'capital-loss'。
运行下面的代码单元以创建一个关于这两个特征的条形图。请注意当前的值的范围和它们是如何分布的。
End of explanation
# 对于倾斜的数据使用Log转换
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))
# 可视化对数转换后 'capital-gain'和'capital-loss' 两个特征
vs.distribution(features_raw, transformed = True)
Explanation: 对于高度倾斜分布的特征如'capital-gain'和'capital-loss',常见的做法是对数据施加一个<a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">对数转换</a>,将数据转换成对数,这样非常大和非常小的值不会对学习算法产生负面的影响。并且使用对数变换显著降低了由于异常值所造成的数据范围异常。但是在应用这个变换时必须小心:因为0的对数是没有定义的,所以我们必须先将数据处理成一个比0稍微大一点的数以成功完成对数转换。
运行下面的代码单元来执行数据的转换和可视化结果。再次,注意值的范围和它们是如何分布的。
End of explanation
from sklearn.preprocessing import MinMaxScaler
# 初始化一个 scaler,并将它施加到特征上
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(data[numerical])
# 显示一个经过缩放的样例记录
display(features_raw.head(n = 1))
Explanation: 规一化数字特征
除了对于高度倾斜的特征施加转换,对数值特征施加一些形式的缩放通常会是一个好的习惯。在数据上面施加一个缩放并不会改变数据分布的形式(比如上面说的'capital-gain' or 'capital-loss');但是,规一化保证了每一个特征在使用监督学习器的时候能够被平等的对待。注意一旦使用了缩放,观察数据的原始形式不再具有它本来的意义了,就像下面的例子展示的。
运行下面的代码单元来规一化每一个数字特征。我们将使用sklearn.preprocessing.MinMaxScaler来完成这个任务。
End of explanation
# TODO:使用pandas.get_dummies()对'features_raw'数据进行独热编码
features = pd.get_dummies(features_raw)
# TODO:将'income_raw'编码成数字值
income = income_raw.map(lambda x: 0 if x == '<=50K' else 1)
# print income.head(n=9)
# 打印经过独热编码之后的特征数量
encoded = list(features.columns)
print "{} total features after one-hot encoding.".format(len(encoded))
# 移除下面一行的注释以观察编码的特征名字
print encoded
Explanation: 练习:数据预处理
从上面的数据探索中的表中,我们可以看到有几个属性的每一条记录都是非数字的。通常情况下,学习算法期望输入是数字的,这要求非数字的特征(称为类别变量)被转换。转换类别变量的一种流行的方法是使用独热编码方案。独热编码为每一个非数字特征的每一个可能的类别创建一个_“虚拟”_变量。例如,假设someFeature有三个可能的取值A,B或者C,。我们将把这个特征编码成someFeature_A, someFeature_B和someFeature_C.
| 特征X | | 特征X_A | 特征X_B | 特征X_C |
| :-: | | :-: | :-: | :-: |
| B | | 0 | 1 | 0 |
| C | ----> 独热编码 ----> | 0 | 0 | 1 |
| A | | 1 | 0 | 0 |
此外,对于非数字的特征,我们需要将非数字的标签'income'转换成数值以保证学习算法能够正常工作。因为这个标签只有两种可能的类别("<=50K"和">50K"),我们不必要使用独热编码,可以直接将他们编码分别成两个类0和1,在下面的代码单元中你将实现以下功能:
- 使用pandas.get_dummies()对'features_raw'数据来施加一个独热编码。
- 将目标标签'income_raw'转换成数字项。
- 将"<=50K"转换成0;将">50K"转换成1。
End of explanation
# 导入 train_test_split
from sklearn.model_selection import train_test_split
# 将'features'和'income'数据切分成训练集和测试集
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0,
stratify = income)
# 将'X_train'和'y_train'进一步切分为训练集和验证集
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=0,
stratify = y_train)
# 显示切分的结果
print "Training set has {} samples.".format(X_train.shape[0])
print "Validation set has {} samples.".format(X_val.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
Explanation: 混洗和切分数据
现在所有的 类别变量 已被转换成数值特征,而且所有的数值特征已被规一化。和我们一般情况下做的一样,我们现在将数据(包括特征和它们的标签)切分成训练和测试集。其中80%的数据将用于训练和20%的数据用于测试。然后再进一步把训练数据分为训练集和验证集,用来选择和优化模型。
运行下面的代码单元来完成切分。
End of explanation
#不能使用scikit-learn,你需要根据公式自己实现相关计算。
# 不知道这里是不是将y_val传过来就可以了,请指导下
income_pred = y_val.apply(lambda x : 1)
TP = sum(map(lambda x,y:1 if x==1 and y==1 else 0,y_val,income_pred))
FN = sum(map(lambda x,y:1 if x==1 and y==0 else 0,y_val,income_pred))
FP = sum(map(lambda x,y:1 if x==0 and y==1 else 0,y_val,income_pred))
TN = sum(map(lambda x,y:1 if x==0 and y==0 else 0,y_val,income_pred))
print TP
print FN
print FP
print TN
#TODO: 计算准确率
accuracy = float(TP + TN)/len(y_val)
# TODO: 计算查准率 Precision
precision = TP/float(TP + FP)
# TODO: 计算查全率 Recall
recall = TP/float(TP + FN)
# TODO: 使用上面的公式,设置beta=0.5,计算F-score
fscore = (1 + 0.5*0.5)*(precision * recall)/(0.5*0.5*precision + recall)
# 打印结果
print "Naive Predictor on validation data: \n \
Accuracy score: {:.4f} \n \
Precision: {:.4f} \n \
Recall: {:.4f} \n \
F-score: {:.4f}".format(accuracy, precision, recall, fscore)
Explanation: 评价模型性能
在这一部分中,我们将尝试四种不同的算法,并确定哪一个能够最好地建模数据。四种算法包含一个天真的预测器 和三个你选择的监督学习器。
评价方法和朴素的预测器
CharityML通过他们的研究人员知道被调查者的年收入大于\$50,000最有可能向他们捐款。因为这个原因CharityML对于准确预测谁能够获得\$50,000以上收入尤其有兴趣。这样看起来使用准确率作为评价模型的标准是合适的。另外,把没有收入大于\$50,000的人识别成年收入大于\$50,000对于CharityML来说是有害的,因为他想要找到的是有意愿捐款的用户。这样,我们期望的模型具有准确预测那些能够年收入大于\$50,000的能力比模型去查全这些被调查者更重要。我们能够使用F-beta score作为评价指标,这样能够同时考虑查准率和查全率:
$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
尤其是,当 $\beta = 0.5$ 的时候更多的强调查准率,这叫做F$_{0.5}$ score (或者为了简单叫做F-score)。
问题 1 - 天真的预测器的性能
通过查看收入超过和不超过 \$50,000 的人数,我们能发现多数被调查者年收入没有超过 \$50,000。如果我们简单地预测说“这个人的收入没有超过 \$50,000”,我们就可以得到一个 准确率超过 50% 的预测。这样我们甚至不用看数据就能做到一个准确率超过 50%。这样一个预测被称作是天真的。通常对数据使用一个天真的预测器是十分重要的,这样能够帮助建立一个模型表现是否好的基准。 使用下面的代码单元计算天真的预测器的相关性能。将你的计算结果赋值给'accuracy', ‘precision’, ‘recall’ 和 'fscore',这些值会在后面被使用,请注意这里不能使用scikit-learn,你需要根据公式自己实现相关计算。
如果我们选择一个无论什么情况都预测被调查者年收入大于 \$50,000 的模型,那么这个模型在验证集上的准确率,查准率,查全率和 F-score是多少?
End of explanation
# TODO:从sklearn中导入两个评价指标 - fbeta_score和accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_val, y_val):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_val: features validation set
- y_val: income validation set
'''
results = {}
# TODO:使用sample_size大小的训练数据来拟合学习器
# TODO: Fit the learner to the training data using slicing with 'sample_size'
start = time() # 获得程序开始时间
learner.fit(X_train[:sample_size],y_train[:sample_size])
end = time() # 获得程序结束时间
# TODO:计算训练时间
results['train_time'] = end - start
# TODO: 得到在验证集上的预测值
# 然后得到对前300个训练数据的预测结果
start = time() # 获得程序开始时间
predictions_val = learner.predict(X_val)
predictions_train = learner.predict(X_train[:300])
end = time() # 获得程序结束时间
# TODO:计算预测用时
results['pred_time'] = end - start
# TODO:计算在最前面的300个训练数据的准确率
results['acc_train'] = accuracy_score(y_train[:300],predictions_train)
# TODO:计算在验证上的准确率
results['acc_val'] = accuracy_score(y_val,predictions_val)
# TODO:计算在最前面300个训练数据上的F-score
results['f_train'] = fbeta_score(y_train[:300],predictions_train,beta=0.5)
# TODO:计算验证集上的F-score
results['f_val'] = fbeta_score(y_val,predictions_val,beta=0.5)
# 成功
print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size)
# 返回结果
return results
Explanation: 监督学习模型
问题 2 - 模型应用
你能够在 scikit-learn 中选择以下监督学习模型
- 高斯朴素贝叶斯 (GaussianNB)
- 决策树 (DecisionTree)
- 集成方法 (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K近邻 (K Nearest Neighbors)
- 随机梯度下降分类器 (SGDC)
- 支撑向量机 (SVM)
- Logistic回归(LogisticRegression)
从上面的监督学习模型中选择三个适合我们这个问题的模型,并回答相应问题。
模型1
模型名称
回答:决策树
描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)
回答:慢性胃炎中医辨证分型中的应用。(http://www.airitilibrary.com/Publication/alDetailedMesh?docid=0258879x-200409-25-9-1009-1012-a)
雷电潜势预报中的应用。(http://www.airitilibrary.com/Publication/alDetailedMesh?docid=16742184-200812-28-4-55-58-a)
这个模型的优势是什么?他什么情况下表现最好?
回答:优势:
1. 容易解释、算法简单,可以可视化
2. 几乎不需要数据预处理
3. 可以同时处理数值变量和输入变量
适用于:数据拥有比较清晰的特征(较容易区分),每个可区分的特征都能分出部分数据,最终结果是布尔类型。
这个模型的缺点是什么?什么条件下它表现很差?
回答:缺点:
1. 容易被攻击,只需要伪造很少的特征即可瞒过分类器。
2. 数据中非常小的变异也会造成一颗完全不同的树
3.
当样本的数据特征不能或很难将整个样本分类的话
根据我们当前数据集的特点,为什么这个模型适合这个问题。
回答:决策树作为一个简单的模型,理论上任何数据拿到后都可以使用此模型进行一次尝试。当前数据集可以使用特征来进行分类,最终输出一个二元标签(收入是否大于50K)。
模型2
模型名称
回答:SVM
描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)
回答:
测试用例生成(http://www.arocmag.com/getarticle/?aid=cff7c760dfdd88ca)
基因数据表达分类(http://d.wanfangdata.com.cn/periodical/jsjyyyhx200305004)
这个模型的优势是什么?他什么情况下表现最好?
回答:
1. 的分类效果非常好。
2. 可以有效地处理高维空间数据。
3. 可以有效地处理变量个数大于样本个数的数据。
4. 只利用一部分子集来训练模型,所以 SVM 模型不需要太大的内存。
当数据比较完善,没有太多噪声,变量较多时表现较好。
这个模型的缺点是什么?什么条件下它表现很差?
回答:
1. 无法很好地处理大规模数据集,因为此时它需要较长的训练时间。
2. 无法处理包含太多噪声的数据集。
根据我们当前数据集的特点,为什么这个模型适合这个问题。
回答:
当前模型的feature非常多,SVM适合处理这种feature比较多的DataSet。
输出Label为二元,符合SVM的分类输出特性
模型3
模型名称
回答:
神经网络
描述一个该模型在真实世界的一个应用场景。(你需要为此做点研究,并给出你的引用出处)
回答:
神经网络应用于电力变压器故障诊断(http://aeps.alljournals.ac.cn/aeps/ch/reader/create_pdf.aspx?file_no=5586&flag=&journal_id=aeps&year_id=1996)
这个模型的优势是什么?他什么情况下表现最好?
回答:
分类的准确度高,并行分布处理能力强,分布存储及学习能力强,对噪声神经有较强的鲁棒性和容错能力,能充分逼近复杂的非线性关系,具备联想记忆的功能等。
数据量比较大,参数之间存在联系的时候,表现最好
这个模型的缺点是什么?什么条件下它表现很差?
回答:
神经网络需要大量的参数,如网络拓扑结构、权值和阈值的初始值;不能观察之间的学习过程,输出结果难以解释,会影响到结果的可信度和可接受程度;学习时间过长,甚至可能达不到学习的目的。
准确率依赖于庞大的训练集,原本受限于计算机的速度。因此在数据集比较小,计算机速度过低时表现较差。
根据我们当前数据集的特点,为什么这个模型适合这个问题。
回答:
当前数据是没有那么大,而且训练会在我的个人电脑上进行,所以不太适合。但是可以将此算法作为其他两个的对比。
练习 - 创建一个训练和预测的流水线
为了正确评估你选择的每一个模型的性能,创建一个能够帮助你快速有效地使用不同大小的训练集并在验证集上做预测的训练和验证的流水线是十分重要的。
你在这里实现的功能将会在接下来的部分中被用到。在下面的代码单元中,你将实现以下功能:
从sklearn.metrics中导入fbeta_score和accuracy_score。
用训练集拟合学习器,并记录训练时间。
对训练集的前300个数据点和验证集进行预测并记录预测时间。
计算预测训练集的前300个数据点的准确率和F-score。
计算预测验证集的准确率和F-score。
End of explanation
# TODO:从sklearn中导入三个监督学习模型
from sklearn import tree
from sklearn import svm
from sklearn.neural_network import MLPClassifier
# TODO:初始化三个模型
clf_A = tree.DecisionTreeClassifier(random_state=1)
clf_B = svm.SVC(random_state=1)
clf_C = MLPClassifier(solver='lbfgs', alpha=1e-5,hidden_layer_sizes=(5, 2), random_state=1)
# TODO:计算1%, 10%, 100%的训练数据分别对应多少点
samples_1 = len(X_train)/100
samples_10 = len(X_train)/10
samples_100 = len(X_train)
# 收集学习器的结果
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = train_predict(clf, samples, X_train, y_train, X_val, y_val)
# 对选择的三个模型得到的评价结果进行可视化
vs.evaluate(results, accuracy, fscore)
Explanation: Exercise: Initial Model Evaluation
In the code cell below, you will need to implement the following:
- Import the three supervised learning models you discussed above.
- Initialize the three models and store them in 'clf_A', 'clf_B' and 'clf_C'.
- Use the models' default parameter values; you will tune the parameters of one of the models in a later section.
- Set random_state (if the model has this parameter).
- Calculate how many data points correspond to 1%, 10% and 100% of the training data, and store these values in 'samples_1', 'samples_10' and 'samples_100'.
Note: depending on the algorithms you chose, the code below may take some time to run!
End of explanation
# TODO:导入'GridSearchCV', 'make_scorer'和其他一些需要的库
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.neural_network import MLPClassifier
# TODO:初始化分类器
clf = MLPClassifier(alpha=1e-5,hidden_layer_sizes=(5, 2), random_state=1)
# TODO:创建你希望调节的参数列表
parameters = {'solver':['lbfgs', 'sgd', 'adam'],'learning_rate_init':[0.1,0.01,0.001]}
# TODO:创建一个fbeta_score打分对象
scorer = make_scorer(fbeta_score, beta=0.5)
# TODO:在分类器上使用网格搜索,使用'scorer'作为评价函数
grid_obj = GridSearchCV(clf, parameters,scoring=scorer)
# TODO:用训练数据拟合网格搜索对象并找到最佳参数
grid_obj.fit(X_train, y_train)
# 得到estimator
best_clf = grid_obj.best_estimator_
# 使用没有调优的模型做预测
predictions = (clf.fit(X_train, y_train)).predict(X_val)
best_predictions = best_clf.predict(X_val)
# 汇报调参前和调参后的分数
print "Unoptimized model\n------"
print "Accuracy score on validation data: {:.4f}".format(accuracy_score(y_val, predictions))
print "F-score on validation data: {:.4f}".format(fbeta_score(y_val, predictions, beta = 0.5))
print "\nOptimized Model\n------"
print "Final accuracy score on the validation data: {:.4f}".format(accuracy_score(y_val, best_predictions))
print "Final F-score on the validation data: {:.4f}".format(fbeta_score(y_val, best_predictions, beta = 0.5))
Explanation: Improving Results
In this final section, you will choose the best of the three supervised learning models to use on the data. You will then perform a grid search on the whole training set (X_train and y_train), tuning at least one parameter, to obtain a model with a better F-score than the untuned one.
Question 3 - Choosing the Best Model
Based on the evaluation you performed above, explain to CharityML in one or two paragraphs which of the three models is the most appropriate for identifying individuals who earn more than \$50,000.
Hint: your answer should discuss the evaluation metrics, the prediction/training time, and whether the algorithm is suitable for this data.
Answer:
Somewhat unexpectedly, the neural network came out best on every metric: it trained quickly and had the highest accuracy and F-score on the validation data of the three algorithms.
My understanding of algorithm suitability is still fairly shallow; I would appreciate the teaching assistant explaining which aspects to consider when choosing an algorithm, ideally with some references to read.
Question 4 - Describing the Model in Layman's Terms
In one or two paragraphs, explain to CharityML, in terms a non-expert can understand, how the final model works. Explain the main qualities of the chosen model, for example how it is trained and how it makes predictions. Avoid advanced mathematical or technical jargon, formulas, or algorithm-specific terminology.
Answer: We used a multi-layer neural network to predict donors. A neural network is built out of many neurons, each responsible for a very small decision: it receives a few inputs and an activation function determines its output, which in turn may be fed into other neurons. After several layers of such transformations the network forms a system that takes our inputs and produces a prediction as its final output.
The backpropagation algorithm in a multi-layer network resembles an adaptive feedback system;
it is like a company making decisions: senior managers instruct middle managers, who assign tasks to the staff — that is the ordinary forward decision process. Backpropagation is what happens when the staff discover problems and report them back up the chain, so that the managers at each level adjust their decisions to obtain a better result next time.
I have not yet understood backpropagation in depth and cannot follow the algorithm itself; I would appreciate some more systematic reading material, as what I found online was too brief.
Exercise: Model Tuning
Tune the parameters of the chosen model. Use grid search (GridSearchCV) to tune at least one important parameter of the model, trying at least 3 different values for it, using the whole training set. In the code cell below, you will need to implement the following:
Import sklearn.model_selection.GridSearchCV and sklearn.metrics.make_scorer.
Initialize the classifier you have chosen and store it in clf.
Set random_state (if the model has this parameter).
Create a dictionary of the parameters you wish to tune for this model.
Example: parameters = {'parameter' : [list of values]}.
Note: do not tune the max_features parameter if your learner has one!
Use make_scorer to create an fbeta_score scoring object (with $\beta = 0.5$).
Perform a grid search on the classifier clf using 'scorer' as the scoring function, and store it in grid_obj.
Fit the grid search object to the training data (X_train, y_train), and store the result in grid_fit.
Note: depending on the parameter list you chose, the code below may take some time to run!
End of explanation
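After the grid search in the cell above has been fitted, it can be useful to check which parameter combination was selected. The short sketch below is only illustrative and assumes the grid_obj variable from the previous cell; best_params_ and best_score_ are standard GridSearchCV attributes.
# Inspect the outcome of the grid search (assumes the cell above has been run)
print "Best parameters found: ", grid_obj.best_params_
print "Best cross-validated F-score: {:.4f}".format(grid_obj.best_score_)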
# TODO:导入一个有'feature_importances_'的监督学习模型
from sklearn.ensemble import AdaBoostClassifier
# TODO:在训练集上训练一个监督学习模型
model = AdaBoostClassifier(random_state=0,n_estimators=500).fit(X_train, y_train)
# TODO: 提取特征重要性
importances = model.feature_importances_
# 绘图
vs.feature_plot(importances, X_train, y_train)
Explanation: Question 5 - Final Model Evaluation
What are the accuracy and F-score of your optimized model on the test data? Are these scores better or worse than those of the unoptimized model? How do the optimized results compare with the naive predictor from Question 1?
Note: fill in your results in the table below, then discuss them in the answer box.
Results:
| Metric | Naive Predictor | Unoptimized Model | Optimized Model |
| :------------: | :-----------------: | :---------------: | :-------------: |
| Accuracy | 0.2 | 0.8512 | 0.8512 |
| F-score | 0.2917 | 0.7028 | 0.7028 |
Answer:
Compared with the low scores of the naive predictor, the unoptimized multi-layer network already performs well; the scores did not change after tuning, which suggests that the few parameters I tuned have little influence on this multi-layer network.
Feature Importance
An important task when applying supervised learning to a dataset such as our census data is determining which features provide the strongest predictive power. Focusing on the relationship between a small number of influential features and the label simplifies our understanding of the phenomenon, which is often useful. In this project that means we want to identify a small set of features that most strongly predict whether an individual earns more than \$50,000.
Choose a scikit-learn classifier that has a 'feature_importances_' attribute (for example AdaBoost or Random Forest); this attribute ranks the importance of the features. In the next code cell, fit this classifier to the training data and use the attribute to determine the five most important features of the census data.
Question 6 - Observing Feature Relevance
When exploring the data, we saw that each record in this census dataset has thirteen available features.
Of these thirteen features, which five do you believe are the most important for prediction, and why? How would you rank them?
Answer:
- Feature 1: age - younger people may not yet be financially independent or have sufficient assets, so their income may be below 50K.
- Feature 2: education-num - people with a higher level of education are likely to have a higher income.
- Feature 3: native-country - nationality is likely to influence income, and residents of the country may be more inclined to donate.
- Feature 4: workclass - people working for the government or non-profit organisations may earn more than 50K.
- Feature 5: income - people with a higher income are more likely to donate.
Exercise - Extracting Feature Importance
Choose a scikit-learn supervised learning classifier that has a feature_importances_ attribute, which ranks the importance of the features according to the chosen algorithm when making predictions.
In the code cell below, you will need to implement the following:
- Import a supervised learning model from sklearn if it is different from the three you used earlier.
- Train the supervised model on the entire training set.
- Extract the feature importances using the model's 'feature_importances_' attribute.
End of explanation
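To relate the importances plotted in the next cell back to column names, a small illustrative sketch follows (it assumes X_train is a pandas DataFrame and importances comes from a fitted model, as in the surrounding cells):
# List the five most important features by name (assumes importances and X_train exist)
top5 = np.argsort(importances)[::-1][:5]
for idx in top5:
    print X_train.columns.values[idx], importances[idx]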
# 导入克隆模型的功能
from sklearn.base import clone
# 减小特征空间
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_val_reduced = X_val[X_val.columns.values[(np.argsort(importances)[::-1])[:5]]]
# 在前面的网格搜索的基础上训练一个“最好的”模型
clf_on_reduced = (clone(best_clf)).fit(X_train_reduced, y_train)
# 做一个新的预测
reduced_predictions = clf_on_reduced.predict(X_val_reduced)
# 对于每一个版本的数据汇报最终模型的分数
print "Final Model trained on full data\n------"
print "Accuracy on validation data: {:.4f}".format(accuracy_score(y_val, best_predictions))
print "F-score on validation data: {:.4f}".format(fbeta_score(y_val, best_predictions, beta = 0.5))
print "\nFinal Model trained on reduced data\n------"
print "Accuracy on validation data: {:.4f}".format(accuracy_score(y_val, reduced_predictions))
print "F-score on validation data: {:.4f}".format(fbeta_score(y_val, reduced_predictions, beta = 0.5))
Explanation: Question 7 - Extracting Feature Importance
Observe the visualization created above, which shows the five features that are most relevant for predicting whether an individual earns more than \$50,000.
Do the weights of these five features add up to more than 0.5?<br>
How do these five features compare with the ones you discussed in Question 6?<br>
If your answers are close, how does this visualization confirm your reasoning?<br>
If they are not, why do you think these features are more relevant?
Answer: Yes, they add up to more than 0.5.
My guesses are partly similar, but overall not accurate.
I picked features that are basic attributes likely to influence income or empathy, but the data is rather cold-blooded: whether someone donates is most strongly related to how much they earn and spend.
Feature Selection
How does the model perform if we only use a subset of the available features? Training on fewer features should reduce training and prediction time. The visualization above shows that the five most important features contribute more than half of the total importance of all features, which suggests we can try to reduce the feature space and simplify what the model has to learn. The code cell below uses the optimized model you found earlier and trains it on the same training set using only the five most important features.
End of explanation
#TODO test your model on testing data and report accuracy and F score
final_predictions = best_clf.predict(X_test)
print "最终准确率: {:.4f}".format(accuracy_score(y_test, final_predictions))
print "最终F-Score: {:.4f}".format(fbeta_score(y_test, final_predictions, beta = 0.5))
Explanation: Question 8 - Effects of Feature Selection
How do the final model's F-score and accuracy on the reduced data (using only five features) compare to those on the full data?
If training time were a factor, would you consider using the reduced-feature data as your training set?
Answer: Both scores dropped.
If the data were large and hardware resources scarce, I would consider it, because keeping only the most important features greatly speeds up training.
For small or medium-sized data, or when hardware is sufficient, I would rather preserve accuracy: the extra training time of a good, accurate model is worth it.
Question 9 - Testing Your Model on the Test Set
It is finally time to test. Remember, the test set may only be used once.
Using the model you are most confident in, evaluate it on the test set and compute the accuracy and F-score.
Briefly explain why you chose this model, and analyse the test results.
End of explanation |
412 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 4
Step8: PMI
Point-wise mutual information is a measure of association used in information theory and statistics. In the words of Jurafsky and Martin
Step24: Unigram and Skipgram
To train unigram and skipgram language models, you can use the following code.
Step25: Text corpora
We have the following text corpora availlable for you
Step26: The TedX talks
Step27: The collection of Wikipedia entries
Step29: 1. Collocations with PPMI (40 points)
$PPMI$ can be used to find collocations; words that co-occur significantly more often than can be atrributed to chance. $PPMI$ is a natural measure for this task
Step30: PTB
(a) Use the function you wrote to find collocations in the Penn Treebank dataset. Recall
Step31: TedX
(b) Use the function you wrote to find collocations in the TedX dataset. (5 points)
Step32: Wikipedia
(c) Find collocations in the Wikipedia dataset. (5 points)
Step33: [Optional] Use the following template to query the model for PMI values for word-pairs. In particular, it is interesting to see what PMI our model assigns to the collocations in the PMI wikipedia entry
Step34: (d) Now that you have (hopefully) been succesful at finding some collocations, take a moment to compare the lists of word-pairs from above, for which $PPMI(w_i,w_j)$ is high, with the list of word-pairs for which $p(w_i,w_j)$ is high. (5 points)
That means
Step36: YOUR ANSWER HERE
2. Word-embeddings via PMI-SVD (60 points)
Inspired by this blog-post we consider a classic method to obtain word embeddings that combines $PMI$ with linear algebra.
Go ahead and read the post, it's an easy and quick read.
PPMI matrix
(a) The first step is to make a PPMI matrix $P$. This matrix will have entries
$$(P)_{ij} = PPMI(w_i, w_j)$$
Where $w_i$ and $w_j$ are the $i$-th and $j$-th words in our w2i dictionary. Finish the function make_ppmi_matrix. (20 points)
[Note] If you really want to scale up (more about that below), you can consider writing a second version of the function that uses a sparse matrix datastructe for the $PMI$ matrix. Most of the entries in it will be zero. Scipy sparse has a number of options. (You will not be able to plot this matrix with imshow though, like we will do a in a bit).
Step37: (b) Use the function you wrote above to construct the $PPMI$ matrix for a corpus of your choice. (10 points)
[Note] You are adviced to start with the relatively small PTB dataset, and a small list of words, for example only the top 1000 most frequent words. If you get this working, you can scale up to the TedX and Wikipedia datasets, and to a list of the top 5,000 or 10,000 words, or even more. (The upper limit depends on your patience, the memory availlable on your laptop, and whether you are using a sparse datastructure). But you should already expect to get good performance in the 5,000-10,000 words range!
Step38: (c) Plot the $PPMI$ matrix with plt.imshow. You can use the following template code. What do you notice about the matrix? (10 points)
Step39: YOUR ANSWER HERE
Singular Value Decomposition
Now that we have the $PPMI$ co-occurence matrix, we are ready to compute its singular value decomposition (SVD).
Definition
SVD is a very elegant linear algebra technique. It is defined as follows. Let $A \in \mathbb{R}^{m\times n}$, then the singular value decomposition of A is given by
$$A = U\Sigma V^{\top},$$
where
Step40: Reducing the dimension
Now we can create the rank $k$ matrix $\tilde{U}$ by selection only the first $k$ columns of $U$. The value $k$ will be the embedding dimension of our word vectors. Now we have our word embeddings! They are the rows of $\tilde{U}$.
Step41: When we look at the word embeddings we can see that, unlike the full $PMI$ matrix, the vectors in $\tilde{U}$ are dense
Step43: Visualization with t-SNE
A simple and rewarding way to evaluate to the word embeddings is by visualizing them.
When visualizing the vectors, we ideally show them in their $k$-dimensional vector space, but that is impossible to do. To plot them we need to reduce their dimension much beyond $k$.
A very popular method for this is t-SNE. t-SNE is a very effective technique for dimensionality reduction that is particularly well suited for the visualization of high-dimensional datasets. What it does is find a lower dimensional surface on which to project the high-dimensional data in such a way that the local structure of the original data is preserved as much as possible in the projected data. In simple words
Step44: (d) Use the function emb_scatter to plot the word-vector in $\tilde{U}$. You are adviced to plot only the first 500-1000 word-vectors to make the resulting plot not too cluttered. (10 points)
Now go find some interesting clusters! | Python Code:
from collections import defaultdict, Counter
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Lab 4: Pointwise Mutual Information
This lab is about applications of Pointwise Mutual Information (PMI) in natural language processing.
Tasks
Find collocations with PMI.
Create dense vector representations for words (also known as word embeddings) by taking the singular value decomposition of a PMI co-occurence matrix.
Rules
The lab exercises should be made in groups of two people.
The deadline is Sunday 10 Dec 23:59.
The assignment should submitted to Blackboard as .ipynb. Only one submission per group.
The filename should be lab4_lastname1_lastname2.ipynb, so for example lab4_Levy_Goldberg.ipynb.
The notebook is graded on a scale of 0-100. The number of points for each question is indicated in parantheses.
Notes on implementation:
You should write your code and answers in this iPython Notebook (see http://ipython.org/notebook.html for reference material). If you have problems, please contact your teaching assistant.
Use only one cell for code and one cell for markdown answers!
Put all code in the cell with the # YOUR CODE HERE comment.
For theoretical question, put your solution in the YOUR ANSWER HERE cell.
Test your code and make sure we can run your notebook
End of explanation
class TextData:
Stores text data with additional attributes.
:param fname: a path to a txt file
def __init__(self, fname):
self._fname = fname
self._data = []
self._w2i = defaultdict(lambda: len(self._w2i))
self._i2w = dict()
self._counter = Counter()
self._ntokens = 0 # number of tokens in dataset
self._read()
def _read(self):
with open(self._fname, "r") as fh:
for line in fh:
tokens = line.strip().lower().split()
self._data.append(tokens)
self._counter.update(tokens)
# Store number of tokens in the text
self._ntokens = sum(self._counter.values())
# Store the words in w2i in order of frequency from high to low
for word, _ in self._counter.most_common(self._ntokens):
self._i2w[self._w2i[word]] = word
def __len__(self):
Number of tokens in the dataset
return self._ntokens
@property
def data(self):
The data as list of lists
return self._data
@property
def counter(self):
The word-counts as counter
return self._counter
@property
def vocab_size(self):
Number of words in the dataset
return len(self._w2i)
@property
def w2i(self):
Word to index dictionary
Words are sorted in order of frequency from high to low
return self._w2i
@property
def i2w(self):
Inverse dictionary of w2i: index to words
return self._i2w
Explanation: PMI
Point-wise mutual information is a measure of association used in information theory and statistics. In the words of Jurafsky and Martin:
The point-wise mutual information is a measure of how often two events $x$ and $y$ occur, compared with what we would expect if they were independent.
It is formally defined as follows. Let $x$ and $y$ be outcomes of discrete random variables $X$ and $Y$. Then the point-wise mutual information between $x$ and $y$ is
$$PMI(x,y) = \log\frac{p(x,y)}{p(x)p(y)}.$$
The values for $PMI$ range between minus and plus infinity. Large positive values indicate that $x$ and $y$ are strongly associated. A $PMI$ of zero means $x$ and $y$ are completely independent. And because of the logarithm, the values can also be negative, which indicates that $x$ and $y$ are co-occurring less often than we would expect by chance. Negative values tend to be unreliable though, often resulting from the lack of coocurence statistics for $x$ and $y$. A more robust measure therefore is a variant of $PMI$ called the $Positive$ $PMI$:
$$PPMI(x,y) = \max(0, PMI(x, y)).$$
In NLP applications, the $x$ and $y$ in the definition above are generally words, and the distributions based on their occurence in a text-corpus. Therefore from now on, let's refer to $x$ and $y$ as $w_i$ and $w_j$.
You can read more about $PMI$ and $PPMI$ in the following sources:
Jurafsky and Martin 15.2 is dedicated to $PMI$
The Wikipedia entry on PMI (it even has a language-related application of $PMI$)
Co-occurrence statistics
To compute the $PMI$ of two words $w_i$ and $w_j$ we need the probabilities $p(w_i)$, $p(w_j)$ and $p(w_i,w_j)$. What are they?
Unigram
The probabilities $p(w_i)$ and $p(w_j)$ are the probabilities of the words occurring by themselves in the corpus: they are the unigram probabilities of $w_i$ and $w_j$. We have seen before how to get these.
Skip-gram
The probability $p(w_i, w_j)$ is the probability of $w_i$ and $w_j$ co-occurring in the corpus. Co-occurrence is often modelled with a skip-gram model. A skip-gram model collects the co-occurrence statistics of a word with the words that surround it within some fixed context window, to the left and to the right of the word.
Consider the following fragment of a sentence in which a context window of 3 words around the word fox are shown:
... the quick brown fox jumped over the ...
The skipgram counts that we then extract from this fragment are
$$C(quick, fox) = C(brown, fox)= C(jumped, fox) = C(over, fox) = 1,$$
and
$$C(the, fox) = 2.$$
After all these counts have been collected, we normalize them, giving probabilities $p(quick, fox), p(brown, fox)$ etc. such that
$$\sum_{{w,v}} p(w,v) = 1.$$
[Note] The word-pairs ${w,v}$ are unordered. So $p(quick, fox) = p(fox, quick)$.
We have provided you below with all the code necessary to obtain these distributions. Read it through, and pay some attention to the methods the classes provide. These will be useful in some of the questions.
Read in data
As always, we will obtain the distributions from a large text corpus. To read in such a text we provide you with the following class:
End of explanation
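Before moving on to the corpus classes, it can help to see the definition in action on made-up numbers. The small sketch below is purely illustrative (the probabilities are invented, not taken from any corpus):
import math

def pmi(p_xy, p_x, p_y):
    # PMI(x, y) = log [ p(x,y) / (p(x) p(y)) ]
    return math.log(p_xy / (p_x * p_y))

def ppmi(p_xy, p_x, p_y):
    # Positive PMI clips negative associations to zero
    return max(0.0, pmi(p_xy, p_x, p_y))

# invented probabilities, just to see the behaviour of the measure
print(ppmi(0.01, 0.02, 0.03))    # co-occurring more often than chance -> positive
print(ppmi(0.0001, 0.02, 0.03))  # co-occurring less often than chance -> clipped to 0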
class Unigram:
A unigram language model
def __init__(self, data):
self._data = data
self._unigram_counts = defaultdict(int)
self._unigram_probs = defaultdict(float)
self._unigram_counter = Counter()
self._unigram_distribution = Counter()
self._train()
def _train(self):
Trains the model trained on data
# Get the word counts
for sent in self._data:
for word in sent:
self._unigram_counts[word] += 1
# Normalize the word counts
s = sum(self._unigram_counts.values())
for word_pair, count in self._unigram_counts.items():
self._unigram_probs[word_pair] = count / s
# There are some advantages to additionally use a Counter
self._unigram_counter = Counter(self._unigram_counts)
self._unigram_distribution = Counter(self._unigram_probs)
def prob(self, w):
Returns the unigram probability p(w)
return self._unigram_probs[w]
def count(self, w):
Returns the unigram count c(w)
return self._unigram_counts[w]
@property
def data(self):
Returns the data
return self._data
@property
def counter(self):
Returns the unigram counts as Counter
return self._unigram_counter
@property
def distribution(self):
Returns the unigram distribution as Counter
return self._unigram_distribution
class Skipgram:
A skip-gram language model
Note: p(w,v) = p(v,w)
def __init__(self, data, context_window=5):
self._data = data
self._context_window = context_window
self._skipgram_counts = defaultdict(int)
self._skipgram_probs = defaultdict(float)
self._skipgram_counter = Counter()
self._skipgram_distribution = Counter()
self._train()
def _train(self):
Trains the model
# Get the co-occurrence counts
for sent in self._data:
for i in range(self._context_window, len(sent) - self._context_window):
context = [sent[i - j] for j in range(1, self._context_window + 1)] + \
[sent[i + j] for j in range(1, self._context_window + 1)]
w = sent[i]
for v in context:
word_pair = tuple(sorted([w, v])) # causes p(w,v) = p(v,w)
self._skipgram_counts[word_pair] += 1
# Turn the co-occurrence counts into probabilities
s = sum(self._skipgram_counts.values())
for word_pair, count in self._skipgram_counts.items():
self._skipgram_probs[word_pair] = count / s
# There are some advantages to additionally use a Counter
self._skipgram_counter = Counter(self._skipgram_counts)
self._skipgram_distribution = Counter(self._skipgram_probs)
def prob(self, w, v):
Returns the skip-gram probability
Note: p(w,v) = p(v,w)
word_pair = tuple(sorted([w,v]))
return self._skipgram_probs[word_pair]
def count(self, w, v):
Returns the skip-gram counts count(w,v) = count(v,w)
word_pair = tuple(sorted([w,v]))
return self._skipgram_counts[word_pair]
@property
def data(self):
Returns the data
return self._data
@property
def context_window(self):
Returns the context window size
return self._context_window
@property
def counter(self):
Returns the skipgram counte as Counter,
so that we can use the method most_common.
return self._skipgram_counter
@property
def distribution(self):
Returns the skipgram probs as Counter,
so that we can use the method most_common.
return self._skipgram_distribution
Explanation: Unigram and Skipgram
To train unigram and skipgram language models, you can use the following code.
End of explanation
%time ptb = TextData("ptb.txt")
print("Number of tokens: ", len(ptb))
print("Vocabulary size: ", ptb.vocab_size)
%time ptb_skipgram = Skipgram(ptb.data)
%time ptb_unigram = Unigram(ptb.data)
Explanation: Text corpora
We have the following text corpora availlable for you:
* A fragment of the Penn Treebank with around 900.000 tokens
* A collection of TedX talks with around 5 million tokens
* A collection of Wikipedia entries, contained around 83 million tokens
The Penn Treebank:
End of explanation
%time ted = TextData("ted.txt")
print("Number of tokens: ", len(ted))
print("Vocabulary size: ", ted.vocab_size)
%time ted_skipgram = Skipgram(ted.data)
%time ted_unigram = Unigram(ted.data)
Explanation: The TedX talks:
End of explanation
%time wiki = TextData("wiki.txt")
print("Number of tokens: ", len(wiki))
print("Vocabulary size: ", wiki.vocab_size)
%time wiki_skipgram = Skipgram(wiki.data)
%time wiki_unigram = Unigram(wiki.data)
Explanation: The collection of Wikipedia entries: (This will take some time. Expect around 15 mins for the skipgram model.)
End of explanation
def PPMI(unigram, skipgram, x, y):
Returns the positive PMI of x and y
PPMI(x, y) = max(0, PMI(x, y))
where PMI(x,y) is computed using the
unigram and skipgram language model
PMI(x,y) = log [p(x,y) / p(x)p(y)]
:param unigram: an instance of Unigram
:param skipgram: an instance of Skipgram
:returns ppmi: a float between 0 and +inf
### YOUR CODE HERE
return ppmi
Explanation: 1. Collocations with PPMI (40 points)
$PPMI$ can be used to find collocations; words that co-occur significantly more often than can be attributed to chance. $PPMI$ is a natural measure for this task: the collocations are precisely the word-pairs $w_i$ and $w_j$ for which $PPMI(w_i,w_j)$ is high!
Have a look at the bottom of the PMI wikipedia entry: there it lists a number of such collocations found with $PMI$.
You will try to find collocations in the text-data that we've provided you with. First, you will need a function that computes the $PPMI$ given a $p(x)$, $p(y)$ and $p(x,y)$.
(a) Complete the function PPMI below. (20 points)
End of explanation
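For reference, one possible way the stub above could be completed is sketched here. This is only an illustration using the prob methods of the Unigram and Skipgram classes defined earlier; it returns 0 when a pair (or word) was never observed, to avoid taking log(0):
import math

def PPMI_example(unigram, skipgram, x, y):
    # p(x,y) from the skip-gram model, p(x) and p(y) from the unigram model
    p_xy = skipgram.prob(x, y)
    p_x = unigram.prob(x)
    p_y = unigram.prob(y)
    if p_xy == 0.0 or p_x == 0.0 or p_y == 0.0:
        return 0.0
    return max(0.0, math.log(p_xy / (p_x * p_y)))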
### YOUR CODE HERE
Explanation: PTB
(a) Use the function you wrote to find collocations in the Penn Treebank dataset. Recall: these are word-pairs in the corpus that have a high $PPMI$. Print a list of at most 30 of word-pairs (and they don't all have to make total sense!). (5 points)
[Hint] You might find that highest $PPMI$ word-pairs are not really what you are looking for. They make no sense as collocations, for example due to noise arrising from insufficient statistics. In that case you should search a little bit further down the line, with words that have a little lower $PPMI$. Note for example that the collocations in the PMI wikipedia entry have PMI's between 8 and 11.
End of explanation
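One way to tackle this exercise is sketched below: iterate over the most frequent word pairs of the skip-gram counter (to avoid noisy low-count pairs) and rank them by PPMI. The count threshold of 10 is an arbitrary illustrative choice, not part of the assignment:
def find_collocations(unigram, skipgram, top_pairs=5000, min_count=10, n=30):
    scored = []
    for (w, v), c in skipgram.counter.most_common(top_pairs):
        if c < min_count or w == v:
            continue
        scored.append((PPMI(unigram, skipgram, w, v), w, v))
    scored.sort(reverse=True)
    return scored[:n]

for score, w, v in find_collocations(ptb_unigram, ptb_skipgram):
    print("{:<15}{:<15}{:.2f}".format(w, v, score))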
### YOUR CODE HERE
Explanation: TedX
(b) Use the function you wrote to find collocations in the TedX dataset. (5 points)
End of explanation
### YOUR CODE HERE
Explanation: Wikipedia
(c) Find collocations in the Wikipedia dataset. (5 points)
End of explanation
# extend this list as you like
wiki_collocations = [("puerto", "rico"), ("los", "angeles"), ("hong", "kong"), ("carbon", "dioxide"),
("star", "trek"), ("star", "wars"), ("nobel", "prize"), ("prize", "laureate"),
("donald", "knuth"), ("discrete", "mathematics"), ("the", "and"), ("of", "it")]
print("{}{}{}".format("word 1".ljust(15), "word 2".ljust(15), "PMI"))
print("-----------------------------------------------")
for (w, v) in wiki_collocations:
print("{}{}{}".format(w.ljust(15), v.ljust(15), PPMI(wiki_unigram, wiki_skipgram, w, v)))
Explanation: [Optional] Use the following template to query the model for PMI values for word-pairs. In particular, it is interesting to see what PMI our model assigns to the collocations in the PMI wikipedia entry:
End of explanation
### YOUR CODE HERE
Explanation: (d) Now that you have (hopefully) been successful at finding some collocations, take a moment to compare the lists of word-pairs from above, for which $PPMI(w_i,w_j)$ is high, with the list of word-pairs for which $p(w_i,w_j)$ is high. (5 points)
That means:
* Print out a list of word-pairs for which $p(w_i,w_j)$ is high (the top 30 will suffice, for the corpus of your choice).
* What difference do you see? Is the list of top $p(w_i,w_j)$ word-pairs as useful as the list of top $PPMI(w_i,w_j)$ word-pairs?
End of explanation
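For the first part of this comparison, the most probable pairs can be read directly off the skip-gram model; a short sketch using the distribution property defined above (shown here for the PTB corpus, but any corpus works):
for (w, v), p in ptb_skipgram.distribution.most_common(30):
    print("{:<15}{:<15}{:.6f}".format(w, v, p))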
def make_ppmi_matrix(words, unigram, skipgram):
Constructs a PPMI matrix of with the words
in words.
:param words: the list of words (not indices!) for which the PPMI
values will be stored in the matrix
:param unigram: an instance of Unigram
:param skipgram: an instance of Skipgram
:returns P: a numpy array such that P[i,j] = PPMI[words[i], words[j]]
### YOUR CODE HERE
return P
Explanation: YOUR ANSWER HERE
2. Word-embeddings via PMI-SVD (60 points)
Inspired by this blog-post we consider a classic method to obtain word embeddings that combines $PMI$ with linear algebra.
Go ahead and read the post, it's an easy and quick read.
PPMI matrix
(a) The first step is to make a PPMI matrix $P$. This matrix will have entries
$$(P)_{ij} = PPMI(w_i, w_j)$$
where $w_i$ and $w_j$ are the $i$-th and $j$-th words in our w2i dictionary. Finish the function make_ppmi_matrix. (20 points)
[Note] If you really want to scale up (more about that below), you can consider writing a second version of the function that uses a sparse matrix data structure for the $PMI$ matrix. Most of the entries in it will be zero. Scipy sparse has a number of options. (You will not be able to plot this matrix with imshow though, like we will do in a bit.)
End of explanation
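One possible dense, NumPy-based way to fill in the function above is sketched here for reference; it simply calls the PPMI function for every pair of words in the list, and is therefore quadratic in the number of words (the symmetry PPMI(w,v) = PPMI(v,w) could be exploited to halve the work):
def make_ppmi_matrix_example(words, unigram, skipgram):
    n = len(words)
    P = np.zeros((n, n))
    for i, w in enumerate(words):
        for j, v in enumerate(words):
            P[i, j] = PPMI(unigram, skipgram, w, v)
    return P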
### YOUR CODE HERE
Explanation: (b) Use the function you wrote above to construct the $PPMI$ matrix for a corpus of your choice. (10 points)
[Note] You are advised to start with the relatively small PTB dataset, and a small list of words, for example only the top 1000 most frequent words. If you get this working, you can scale up to the TedX and Wikipedia datasets, and to a list of the top 5,000 or 10,000 words, or even more. (The upper limit depends on your patience, the memory available on your laptop, and whether you are using a sparse data structure.) But you should already expect to get good performance in the 5,000-10,000 words range!
End of explanation
# ppmi_matrix = ...
# The whole PPMI matrix
ax, fig = plt.subplots(figsize=(10,10))
plt.imshow(ppmi_matrix)
plt.colorbar()
plt.show()
# Only the top-left corner (which hold the most frequent words)
ax, fig = plt.subplots(figsize=(10,10))
plt.imshow(ppmi_matrix[0:300, 0:300])
plt.colorbar()
plt.show()
Explanation: (c) Plot the $PPMI$ matrix with plt.imshow. You can use the following template code. What do you notice about the matrix? (10 points)
End of explanation
from scipy.linalg import svd
%time U, s, Vt = svd(wiki_ppmi_matrix)
Explanation: YOUR ANSWER HERE
Singular Value Decomposition
Now that we have the $PPMI$ co-occurrence matrix, we are ready to compute its singular value decomposition (SVD).
Definition
SVD is a very elegant linear algebra technique. It is defined as follows. Let $A \in \mathbb{R}^{m\times n}$, then the singular value decomposition of A is given by
$$A = U\Sigma V^{\top},$$
where:
$U$ is an $m \times m$ unitary matrix.
A real-valued matrix $U$ is called unitary when $U^{\top}U = UU^{\top} = I$. In other words: $U$ forms an orthonormal basis for $\mathbb{R}^{m}$.
$\Sigma$ is a diagonal $m \times n$ matrix with non-negative real numbers.
The diagonal values are the so called singular values $\sigma_1, \sigma_2,\dots,\sigma_n$ (supposing $n \leq m$). The convention is that in the matrix $\Sigma$ these are ordered from large to small: $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_n \geq 0$.
$V$ is an $n \times n$ unitary matrix (and $V^{\top}$ its transpose)
Low-rank approximation
When we select only the first $k$ columns of $U$ and $V$ and the first $k$ singular values in $\Sigma$ we get
* $\tilde{U} \in \mathbb{R}^{m \times k}$
* $\tilde{\Sigma} \in \mathbb{R}^{k \times k}$
* $\tilde{V} \in \mathbb{R}^{n \times k}.$
The reduced matrices can be used to make a rank $k$ matrix $\tilde{A} \in \mathbb{R}^{m\times n}$. The matrix $\tilde{A}$ is an approximation of the matrix $A$:
$$A \approx \tilde{U}\tilde{\Sigma} \tilde{V}^{\top} = \tilde{A}.$$
Moreover, this approximation is the 'best' approximation of $A$ in the sense that it minimizes the distance to $A$ in a type of matrix norm called the Frobenius norm:
$$\tilde{A} = \arg \min_{A'} || A - A'||_{F}$$
You can learn more about SVD from these sources:
Jurafsky and Martin 16.1 (3rd edition)
The Wikipedia page (with a very good visual illustration)
Or this long but good blog-post series (if you have some time!)
For the purposes of this tutorial, however, all we need to know to get our embeddings is how to use SVD in python. And that is straightforward:
End of explanation
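As a quick sanity check of the rank-k approximation described above, here is a small self-contained example on a random matrix (independent of the corpus matrices used in this lab):
A = np.random.rand(6, 4)
U_full, s_full, Vt_full = svd(A)
k_demo = 2
A_k = U_full[:, :k_demo] @ np.diag(s_full[:k_demo]) @ Vt_full[:k_demo, :]
print(np.linalg.norm(A - A_k))  # Frobenius distance of the rank-k approximation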
k = 300
U_tilde = U[:,:k]
Explanation: Reducing the dimension
Now we can create the rank $k$ matrix $\tilde{U}$ by selecting only the first $k$ columns of $U$. The value $k$ will be the embedding dimension of our word vectors. Now we have our word embeddings! They are the rows of $\tilde{U}$.
End of explanation
ax, fig = plt.subplots(figsize=(15,15))
plt.imshow(U_tilde[:800])
plt.colorbar()
plt.show()
Explanation: When we look at the word embeddings we can see that, unlike the full $PMI$ matrix, the vectors in $\tilde{U}$ are dense: almost all values non-zero.
End of explanation
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.palettes import d3
from bokeh.io import output_notebook
output_notebook()
def emb_scatter(data, names, perplexity=30.0, N=20):
Uses t-SNE with given perplexity to reduce the dimension of the
vectors in data to 2, plots these in a bokeh 2d scatter plot,
and colors them with N colors using K-means clustering of the
originial vectors. The colored dots are tagged with labels from
the list names.
:param data: numpy array of shape [num_vectors, embedding_dim]
:param names: a list of words of length num_vectors in the same order as data
:param perplexity: the perplexity for t-SNE
:param N: the number of clusters to find by K-means
## Try to find some clusters ##
print("Finding clusters")
kmeans = KMeans(n_clusters=N)
kmeans.fit(data)
klabels = kmeans.labels_
## Get a tsne fit ##
print("Fitting tsne")
tsne = TSNE(n_components=2, perplexity=perplexity)
emb_tsne = tsne.fit_transform(data)
## Plot the tsne of the embeddings with bokeh ##
# source: https://github.com/oxford-cs-deepnlp-2017/practical-1
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="T-SNE for most common words")
# Set colormap as a list
colormap = d3['Category20'][N]
colors = [colormap[i] for i in klabels]
source = ColumnDataSource(data=dict(x1=emb_tsne[:,0],
x2=emb_tsne[:,1],
names=names,
colors=colors))
p.scatter(x="x1", y="x2", size=8, source=source, color='colors')
labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(labels)
show(p)
Explanation: Visualization with t-SNE
A simple and rewarding way to evaluate the word embeddings is by visualizing them.
When visualizing the vectors, we ideally show them in their $k$-dimensional vector space, but that is impossible to do. To plot them we need to reduce their dimension much beyond $k$.
A very popular method for this is t-SNE. t-SNE is a very effective technique for dimensionality reduction that is particularly well suited for the visualization of high-dimensional datasets. What it does is find a lower dimensional surface on which to project the high-dimensional data in such a way that the local structure of the original data is preserved as much as possible in the projected data. In simple words: data-points that are close to each other in the original space end up close to each other in the projected space. (Note: this does not hold for large distances! Points that are far apart in the original space do not necessarily end up far apart in the projected space.)
So, t-SNE is like Principal Component Analysis (PCA), another popular dimensionality reduction technique. But, due to the non-linear nature of the surface it finds (the 'manifold') t-SNE has more flexibility.
If you want to know more about t-SNE you can read the following sources:
* How to Use t-SNE Effectively on Distill discusses, among others, the effects of the perplexity parameter in t-SNE
* This Scikit-learn example
The code below uses t-SNE to make a two-dimensional plot of our word-embeddings. We also throw in some K-means clustering in the original high-dimensional space so that we can color our dimension-reduced word-embeddings. This gives us some additional interpretative ability.
End of explanation
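A sketch of how the plotting function above might be called, assuming the wiki corpus was used to build the PPMI matrix so that wiki.i2w maps row indices of U_tilde back to words (adjust the corpus object and the number of words to your own setup):
num_words = 500
names = [wiki.i2w[i] for i in range(num_words)]
emb_scatter(U_tilde[:num_words], names, perplexity=30.0, N=20)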
### YOUR CODE HERE
Explanation: (d) Use the function emb_scatter to plot the word-vectors in $\tilde{U}$. You are advised to plot only the first 500-1000 word-vectors to make the resulting plot not too cluttered. (10 points)
Now go find some interesting clusters!
End of explanation |
413 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FiPy 1D two-phase flow in porous mediaq, 11 October, 2019
Different approaches
Step1: Visualize the relative permeability and fractional flow curves
Step2: Equations
$$\varphi \frac{\partial S_w}{\partial t}+u \frac{\partial f_w}{\partial x}=0$$ or
$$\varphi \frac{\partial S_w}{\partial t}+\nabla.\left( u \frac{\partial f_w}{\partial S_w} S_w\right)+ \nabla. \left( u f_w-u\frac{\partial f_w}{\partial S_w} S_{w0} \right)=0$$
Step3: Analytical solution | Python Code:
from fipy import *
# relperm parameters
swc = 0.1
sor = 0.1
krw0 = 0.3
kro0 = 1.0
nw = 2.0
no = 2.0
# domain and boundaries
k = 1e-12 # m^2
phi = 0.4
u = 1.e-5
p0 = 100e5 # Pa
Lx = 100.
Ly = 10.
nx = 100
ny = 10
dx = Lx/nx
dy = Ly/ny
# fluid properties
muo = 0.002
muw = 0.001
# define the fractional flow functions
def krw(sw):
res = krw0*((sw-swc)/(1-swc-sor))**nw
return res
def dkrw(sw):
res = krw0*nw/(1-swc-sor)*((sw-swc)/(1-swc-sor))**(nw-1)
return res
def kro(sw):
res = kro0*((1-sw-sor)/(1-swc-sor))**no
return res
def dkro(sw):
res = -kro0*no/(1-swc-sor)*((1-sw-sor)/(1-swc-sor))**(no-1)
return res
def fw(sw):
res = krw(sw)/muw/(krw(sw)/muw+kro(sw)/muo)
return res
def dfw(sw):
res = (dkrw(sw)/muw*kro(sw)/muo-krw(sw)/muw*dkro(sw)/muo)/(krw(sw)/muw+kro(sw)/muo)**2
return res
import matplotlib.pyplot as plt
import numpy as np
sw_plot = np.linspace(swc, 1-sor, 50)
Explanation: FiPy 1D two-phase flow in porous media, 11 October, 2019
Different approaches:
* Coupled
* Sequential
* ...
End of explanation
krw_plot = [krw(sw) for sw in sw_plot]
kro_plot = [kro(sw) for sw in sw_plot]
fw_plot = [fw(sw) for sw in sw_plot]
plt.figure(1)
plt.plot(sw_plot, krw_plot, sw_plot, kro_plot)
plt.show()
plt.figure(2)
plt.plot(sw_plot, fw_plot)
plt.show()
# create the grid
mesh = Grid1D(dx = Lx/nx, nx = nx)
x = mesh.cellCenters
# create the cell variables and boundary conditions
sw = CellVariable(mesh=mesh, name="saturation", hasOld=True, value = swc)
p = CellVariable(mesh=mesh, name="pressure", hasOld=True, value = p0)
# sw.setValue(1,where = x<=dx)
sw.constrain(1.0,mesh.facesLeft)
#sw.constrain(0., mesh.facesRight)
sw.faceGrad.constrain([0], mesh.facesRight)
p.constrain(p0, mesh.facesRight)
p.constrain(1.05*p0, mesh.facesLeft)
Explanation: Visualize the relative permeability and fractional flow curves
End of explanation
eq_p = DiffusionTerm(var=p, coeff=-k*(krw(sw.faceValue)/muw+kro(sw.faceValue)/muo))+ \
UpwindConvectionTerm(var=sw, coeff=k*(dkrw(sw.faceValue)/muw+dkro(sw.faceValue)/muo)*p.faceGrad)- \
(k*(dkrw(sw.faceValue)/muw+dkro(sw.faceValue)/muo)*sw.faceValue*p.faceGrad).divergence == 0
eq_sw = TransientTerm(coeff=phi, var=sw) + \
DiffusionTerm(var=p, coeff=-k*krw(sw.faceValue)/muw)+ \
UpwindConvectionTerm(var=sw, coeff=-k*dkrw(sw.faceValue)/muw*p.faceGrad)- \
(-k*dkrw(sw.faceValue)/muw*p.faceGrad*sw.faceValue).divergence == 0
eq = eq_p & eq_sw
steps = 200
dt0 = 500.
dt = dt0
t_end = steps*dt0
t = 0.0
viewer = Viewer(vars = sw, datamax=1.1, datamin=-0.1)
while t<t_end:
swres = 1.0e6
loop_count = 0
while True:
swres_new = eq.sweep(dt = dt)
loop_count+=1
if loop_count==1:
sw_res = swres_new
if swres_new>sw_res or loop_count>5:
dt = dt/3
continue
swres=swres_new
print(swres)
if swres_new<1e-5:
sw.updateOld()
p.updateOld()
t+=dt
dt = dt0
break
# Note: try to use the appleyard method; the overflow is a result of wrong rel-perm values
viewer.plot()
Explanation: Equations
$$\varphi \frac{\partial S_w}{\partial t}+u \frac{\partial f_w}{\partial x}=0$$ or
$$\varphi \frac{\partial S_w}{\partial t}+\nabla.\left( u \frac{\partial f_w}{\partial S_w} S_w\right)+ \nabla. \left( u f_w-u\frac{\partial f_w}{\partial S_w} S_{w0} \right)=0$$
End of explanation
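The convection coefficient in eq_sw uses the analytic derivative dfw. As a quick, optional sanity check (not part of the original notebook), it can be compared against a finite-difference approximation of fw using the functions defined earlier:
sw_test = np.linspace(swc + 0.05, 1 - sor - 0.05, 20)
eps = 1e-6
fd = (np.array([fw(s + eps) for s in sw_test]) - np.array([fw(s - eps) for s in sw_test])) / (2 * eps)
an = np.array([dfw(s) for s in sw_test])
print(np.max(np.abs(fd - an)))  # should be small if dfw is consistent with fw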
import fractional_flow as ff
xt_shock, sw_shock, xt_prf, sw_prf, t, p_inj, R_oil = ff.frac_flow_wf(muw=muw, muo=muo, ut=u, phi=1.0, \
k=1e-12, swc=swc, sor=sor, kro0=kro0, no=no, krw0=krw0, \
nw=nw, sw0=swc, sw_inj=1.0, L=Lx, pv_inj=5.0)
plt.figure()
plt.plot(xt_prf, sw_prf)
plt.plot(x.value.squeeze()/(steps*dt), sw.value)
plt.show()
?eq.sweep
Explanation: Analytical solution
End of explanation |
414 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interacting with models
November 2014, by Max Zwiessele
with edits by James Hensman
The GPy model class has a set of features which are designed to make it simple to explore the parameter space of the model. By default, the scipy optimisers are used to fit GPy models (via model.optimize()), for which we provide mechanisms for ‘free’ optimisation
Step1: Examining the model using print
To see the current state of the model parameters, and the model’s (marginal) likelihood just print the model
print m
The first thing displayed on the screen is the log-likelihood value of the model with its current parameters. Below the log-likelihood, a table with all the model’s parameters is shown. For each parameter, the table contains the name of the parameter, the current value, and in case there are defined
Step2: In this case the kernel parameters (bf.variance, bf.lengthscale) as well as the likelihood noise parameter (Gaussian_noise.variance), are constrained to be positive, while the inducing inputs have no constraints associated. Also there are no ties or prior defined.
You can also print all subparts of the model, by printing the subcomponents individually; this will print the details of this particular parameter handle
Step3: When you want to get a closer look into multivalue parameters, print them directly
Step4: Interacting with Parameters
Step5: This will already have updated the model’s inner state
Step6: There is access to setting parameters by regular expression, as well. Here are a few examples of how to set parameters by regular expression. Note that each time the values are set, computations are done internally to compute the log likeliood of the model.
Step7: A handy trick for seeing all of the parameters of the model at once is to regular-expression match every variable
Step8: Setting and fetching parameters parameter_array
Another way to interact with the model’s parameters is through the parameter_array. The Parameter array holds all the parameters of the model in one place and is editable. It can be accessed through indexing the model for example you can set all the parameters through this mechanism
Step9: Parameters themselves (leafs of the hierarchy) can be indexed and used the same way as numpy arrays. First let us set a slice of the inducing_inputs
Step10: Or you use the parameters as normal numpy arrays for calculations
Step11: Getting the model parameter’s gradients
The gradients of a model can shed light on understanding the (possibly hard) optimization process. The gradients of each parameter handle can be accessed through their gradient field.
Step12: If we optimize the model, the gradients (should be close to) zero
Step13: Adjusting the model’s constraints
When we initially call the example, it was optimized and hence the log-likelihood gradients were close to zero. However, since we have been changing the parameters, the gradients are far from zero now. Next we are going to show how to optimize the model setting different restrictions on the parameters.
Once a constraint has been set on a parameter, it is possible to remove it with the command unconstrain(), which can be called on any parameter handle of the model. The methods constrain() and unconstrain() return the indices which were actually unconstrained, relative to the parameter handle the method was called on. This is particularly handy for reporting which parameters where reconstrained, when reconstraining a parameter, which was already constrained
Step14: If you want to unconstrain only a specific constraint, you can call the respective method, such as unconstrain_fixed() (or unfix()) to only unfix fixed parameters
Step15: Tying Parameters
Not yet implemented for GPy version 0.8.0
Optimizing the model
Once we have finished defining the constraints, we can now optimize the model with the function optimize.
Step16: By deafult, GPy uses the lbfgsb optimizer.
Some optional parameters may be discussed here.
optimizer
Step17: We can even change the backend for plotting and plot the model using a different backend. | Python Code:
m = GPy.examples.regression.sparse_GP_regression_1D(plot=False, optimize=False)
Explanation: Interacting with models
November 2014, by Max Zwiessele
with edits by James Hensman
The GPy model class has a set of features which are designed to make it simple to explore the parameter space of the model. By default, the scipy optimisers are used to fit GPy models (via model.optimize()), for which we provide mechanisms for ‘free’ optimisation: GPy can ensure that naturally positive parameters (such as variances) remain positive. But these mechanisms are much more powerful than simple reparameterisation, as we shall see.
Throughout this tutorial we'll use a sparse GP regression model as an example. This example can be found in GPy.examples.regression. All of the examples included in GPy return an instance of a model class, and therefore they can be called in the following way:
End of explanation
m
Explanation: Examining the model using print
To see the current state of the model parameters, and the model’s (marginal) likelihood just print the model
print m
The first thing displayed on the screen is the log-likelihood value of the model with its current parameters. Below the log-likelihood, a table with all the model's parameters is shown. For each parameter, the table contains its name, its current value, and, where defined, the constraints, ties and prior distributions associated with it.
End of explanation
m.rbf
Explanation: In this case the kernel parameters (rbf.variance, rbf.lengthscale) as well as the likelihood noise parameter (Gaussian_noise.variance) are constrained to be positive, while the inducing inputs have no constraints associated with them. Also, there are no ties or priors defined.
You can also print all subparts of the model by printing the subcomponents individually; this will print the details of this particular parameter handle:
End of explanation
m.inducing_inputs
m.inducing_inputs[0] = 1
Explanation: When you want to get a closer look into multivalue parameters, print them directly:
End of explanation
m.rbf.lengthscale = 0.2
print m
Explanation: Interacting with Parameters:
The preferred way of interacting with parameters is to act on the parameter handle itself. Interacting with parameter handles is simple. The names, printed by print m, are accessible interactively and programmatically. For example, try to set the kernel's lengthscale to 0.2 and print the result:
End of explanation
print m['.*var']
#print "variances as a np.array:", m['.*var'].values()
#print "np.array of rbf matches: ", m['.*rbf'].values()
Explanation: This will already have updated the model's inner state: note how the log-likelihood has changed. You can immediately plot the model or see the changes in the posterior (m.posterior) of the model.
Regular expressions
The model’s parameters can also be accessed through regular expressions, by ‘indexing’ the model with a regular expression, matching the parameter name. Through indexing by regular expression, you can only retrieve leafs of the hierarchy, and you can retrieve the values matched by calling values() on the returned object
End of explanation
m['.*var'] = 2.
print m
m['.*var'] = [2., 3.]
print m
Explanation: You can also set parameters by regular expression. Here are a few examples of how to set parameters by regular expression. Note that each time the values are set, computations are done internally to compute the log likelihood of the model.
End of explanation
print m['']
Explanation: A handy trick for seeing all of the parameters of the model at once is to regular-expression match every variable:
End of explanation
new_params = np.r_[[-4,-2,0,2,4], [.1,2], [.7]]
print new_params
m[:] = new_params
print m
Explanation: Setting and fetching parameters parameter_array
Another way to interact with the model's parameters is through the parameter_array. The parameter array holds all the parameters of the model in one place and is editable. It can be accessed by indexing the model; for example, you can set all the parameters through this mechanism:
End of explanation
m.inducing_inputs[2:, 0] = [1,3,5]
print m.inducing_inputs
Explanation: Parameters themselves (leafs of the hierarchy) can be indexed and used the same way as numpy arrays. First let us set a slice of the inducing_inputs:
End of explanation
precision = 1./m.Gaussian_noise.variance
print precision
Explanation: Or you use the parameters as normal numpy arrays for calculations:
End of explanation
print "all gradients of the model:\n", m.gradient
print "\n gradients of the rbf kernel:\n", m.rbf.gradient
Explanation: Getting the model parameter’s gradients
The gradients of a model can shed light on understanding the (possibly hard) optimization process. The gradients of each parameter handle can be accessed through their gradient field:
End of explanation
m.optimize()
print m.gradient
Explanation: If we optimize the model, the gradients should be (close to) zero:
End of explanation
m.rbf.variance.unconstrain()
print m
m.unconstrain()
print m
Explanation: Adjusting the model’s constraints
When we initially call the example, it was optimized and hence the log-likelihood gradients were close to zero. However, since we have been changing the parameters, the gradients are far from zero now. Next we are going to show how to optimize the model setting different restrictions on the parameters.
Once a constraint has been set on a parameter, it is possible to remove it with the command unconstrain(), which can be called on any parameter handle of the model. The methods constrain() and unconstrain() return the indices which were actually unconstrained, relative to the parameter handle the method was called on. This is particularly handy for reporting which parameters were reconstrained when reconstraining a parameter that was already constrained:
End of explanation
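Beyond removing constraints, new ones can be placed on individual handles in the same way. A brief illustrative example follows (constrain_bounded, fix and unfix are standard handle methods in GPy, but the exact parameter names depend on the model you loaded):
m.rbf.lengthscale.constrain_bounded(0.1, 10.)  # keep the lengthscale in a sensible range
m.Gaussian_noise.variance.fix()                # temporarily exclude the noise from optimization
print m
m.Gaussian_noise.variance.unfix()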
m.inducing_inputs[0].fix()
m.rbf.constrain_positive()
print m
m.unfix()
print m
Explanation: If you want to unconstrain only a specific constraint, you can call the respective method, such as unconstrain_fixed() (or unfix()) to only unfix fixed parameters:
End of explanation
m.Gaussian_noise.constrain_positive()
m.rbf.constrain_positive()
m.optimize()
Explanation: Tying Parameters
Not yet implemented for GPy version 0.8.0
Optimizing the model
Once we have finished defining the constraints, we can now optimize the model with the function optimize:
End of explanation
fig = m.plot()
Explanation: By default, GPy uses the lbfgsb optimizer.
Some optional parameters may be discussed here.
optimizer: which optimizer to use, currently there are lbfgsb, fmin_tnc, scg, simplex or any unique identifier uniquely identifying an optimizer.
Thus, you can say m.optimize('bfgs') for using the lbfgsb optimizer
messages: if the optimizer is verbose. Each optimizer has its own way of printing, so do not be confused by differing messages of different optimizers
max_iters: Maximum number of iterations to take. Some optimizers see iterations as function calls, others as iterations of the algorithm. Please be advised to look into scipy.optimize for more instructions, if the number of iterations matter, so you can give the right parameters to optimize()
gtol: only for some optimizers. Will determine the convergence criterion, as the tolerance of gradient to finish the optimization.
Plotting
Many of GPy's models have built-in plot functionality. We distinguish between plotting the posterior of the function (m.plot_f) and plotting the posterior over predicted data values (m.plot). This becomes especially important for non-Gaussian likelihoods. Here we'll plot the sparse GP model we've been working with. For more information on the meaning of the plot, please refer to the accompanying basic_gp_regression and sparse_gp notebooks.
End of explanation
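A brief example of passing the optional arguments just described to optimize (any of them can be omitted; shown here only as an illustration):
m.optimize(optimizer='scg', messages=True, max_iters=200)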
GPy.plotting.change_plotting_library('plotly')
fig = m.plot(plot_density=True)
GPy.plotting.show(fig, filename='gpy_sparse_gp_example')
Explanation: We can even change the backend for plotting and plot the model using a different backend.
End of explanation |
415 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Estimator で線形モデルを構築する
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Titanic データセットを読み込む
Titanic データセットを使用して、性別、年齢、船室クラスなどの特性に基づき、(やや悪趣味ではありますが)乗船者の生存を予測することを目標とします。
Step3: データを確認する
データセットには、次の特徴が含まれます。
Step4: トレーニングセットと評価セットには、それぞれ 627 個と 264 個の例があります。
Step5: 乗船者の大半は 20 代から 30 代です。
Step6: 男性の乗船者数は女性の乗船者数の約 2 倍です。
Step7: 乗船者の大半は「3 等」の船室クラスを利用していました。
Step8: 女性は男性よりも生存する確率がはるかに高く、これは明らかにモデルの予測特徴です。
Step9: モデルの特徴量エンジニアリング
Estimator は、モデルがどのように各行の入力特徴量を解釈すべきかを説明する特徴量カラムというシステムを使用しています。Estimator は数値入力のベクトルを期待するのに対し、特徴量カラムはモデルがどのように各特徴量を変換すべきかが記述されています。
効率的なモデルを学習するには、適切な特徴カラムの選択と作成が鍵となります。特徴量カラムは、元の特徴量 dict の生の入力の 1 つ(基本特徴量カラム)または 1 つ以上の基本カラムに定義された変換を使って作成された新規カラム(派生特徴量カラム)のいずれかです。
線形 Estimator は、数値特徴とカテゴリ特徴の両方を利用します。特徴量カラムは TensorFlow Estimator と機能し、その目的はモデリングに使用される特徴量を定義することにあります。さらに、One-Hotエンコーディング、正規化、およびバケット化などのいくつかの特徴量エンジニアリング機能を提供します。
基本特徴量カラム
Step10: input_function は、入力パイプラインをストリーミングの手法でフィードする tf.data.Dataset にデータを変換する方法を指定します。tf.data.Dataset は、dataframe や csv 形式ファイルなど、複数のソースを取ることができます。
Step11: 次のようにデータセットを検査できます。
Step12: また、tf.keras.layers.DenseFeatures レイヤーを使用して、特定の特徴量カラムの結果を検査することもできます。
Step13: DenseFeatures は密なテンソルのみを受け入れ、それを最初にインジケータカラムに変換する必要のあるカテゴリカラムを検査します。
Step14: すべての基本特徴量をモデルに追加したら、モデルをトレーニングすることにしましょう。モデルのトレーニングは、 tf.estimator API を使ってコマンド 1 つで行います。
Step15: 派生特徴量カラム
精度が 75% に達しました。それぞれの基本特徴量カラムを個別に使用しても、データを説明するには不足している場合があります。たとえば、年齢とラベルの間の相関関係は、性別が変われば異なることがあります。そのため、gender="Male" と gender="Female" で単一モデルの重みのみを把握しただけでは、すべての年齢と性別の組み合わせをキャプチャすることはできません(gender="Male" と age="30" と gender="Male" と age="40" を区別するなど)。
さまざまな特徴量の組み合わせの間の違いを把握するには、相互特徴量カラムをモデルに追加できます(また、相互カラムの前に年齢カラムをバケット化できます)。
Step16: 組み合わせた特徴量をモデルに追加したら、モデルをもう一度トレーニングしましょう。
Step17: これで、77.6% の精度に達しました。基本特徴量のみでトレーニングした場合よりわずかに改善されています。ほかの特徴量と変換を使用して、さらに改善されるか確認してみましょう!
このトレーニングモデルを使用して、評価セットからある乗船者に予測を立てることができます。TensorFlow モデルは、バッチ、コレクション、または例に対してまとめて予測を立てられるように最適化されています。以前は、eval_input_fn は評価セット全体を使って定義されていました。
Step18: 最後に、結果の受信者操作特性(ROC)を見てみましょう。真陽性率と偽陽性率間のトレードオフに関し、より明確な予想を得ることができます。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install sklearn
import os
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import clear_output
from six.moves import urllib
Explanation: Build a linear model with Estimators
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/estimator/linear"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/estimator/linear.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/estimator/linear.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/estimator/linear.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Warning: Estimators are not recommended for new code. Estimators run v1.Session-style code, which is harder to write correctly and can behave unexpectedly when combined with TF 2 code. Estimators fall under the compatibility guarantees, but will receive no fixes other than for security vulnerabilities. See the migration guide for details.
Overview
In this end-to-end walkthrough, we train a logistic regression model using the tf.estimator API. This model is often used as a baseline for other, more complex algorithms.
Note: a Keras logistic regression example is available and is recommended over this tutorial.
Setup
End of explanation
import tensorflow.compat.v2.feature_column as fc
import tensorflow as tf
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
Explanation: Load the Titanic dataset
We will use the Titanic dataset with the (rather morbid) goal of predicting passenger survival from characteristics such as gender, age, and passenger class.
End of explanation
dftrain.head()
dftrain.describe()
Explanation: Explore the data
The dataset contains the following features.
End of explanation
dftrain.shape[0], dfeval.shape[0]
Explanation: There are 627 and 264 examples in the training and evaluation sets, respectively.
End of explanation
dftrain.age.hist(bins=20)
Explanation: The majority of passengers are in their 20s and 30s.
End of explanation
dftrain.sex.value_counts().plot(kind='barh')
Explanation: There are about twice as many male passengers as female passengers.
End of explanation
dftrain['class'].value_counts().plot(kind='barh')
Explanation: The majority of passengers were in the "third" class.
End of explanation
pd.concat([dftrain, y_train], axis=1).groupby('sex').survived.mean().plot(kind='barh').set_xlabel('% survive')
Explanation: Females have a much higher chance of surviving than males; this is clearly a predictive feature for the model.
End of explanation
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
vocabulary = dftrain[feature_name].unique()
feature_columns.append(tf.feature_column.categorical_column_with_vocabulary_list(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(tf.feature_column.numeric_column(feature_name, dtype=tf.float32))
Explanation: Feature engineering for the model
Estimators use a system called feature columns to describe how the model should interpret each of the raw input features. An Estimator expects a vector of numeric inputs, and feature columns describe how the model should convert each feature.
Selecting and crafting the right set of feature columns is key to learning an effective model. A feature column can be either one of the raw inputs in the original features dict (a base feature column), or a new column created from transformations defined on one or more base columns (a derived feature column).
The linear estimator uses both numeric and categorical features. Feature columns work with all TensorFlow estimators, and their purpose is to define the features used for modeling. Additionally, they provide some feature-engineering capabilities such as one-hot encoding, normalization, and bucketization.
Base feature columns
End of explanation
def make_input_fn(data_df, label_df, num_epochs=10, shuffle=True, batch_size=32):
def input_function():
ds = tf.data.Dataset.from_tensor_slices((dict(data_df), label_df))
if shuffle:
ds = ds.shuffle(1000)
ds = ds.batch(batch_size).repeat(num_epochs)
return ds
return input_function
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, num_epochs=1, shuffle=False)
Explanation: The input_function specifies how data is converted to a tf.data.Dataset that feeds the input pipeline in a streaming fashion. tf.data.Dataset can take in multiple sources such as a dataframe or a csv-formatted file.
End of explanation
ds = make_input_fn(dftrain, y_train, batch_size=10)()
for feature_batch, label_batch in ds.take(1):
print('Some feature keys:', list(feature_batch.keys()))
print()
print('A batch of class:', feature_batch['class'].numpy())
print()
print('A batch of Labels:', label_batch.numpy())
Explanation: You can inspect the dataset as follows.
End of explanation
age_column = feature_columns[7]
tf.keras.layers.DenseFeatures([age_column])(feature_batch).numpy()
Explanation: You can also inspect the result of a specific feature column using the tf.keras.layers.DenseFeatures layer.
End of explanation
gender_column = feature_columns[0]
tf.keras.layers.DenseFeatures([tf.feature_column.indicator_column(gender_column)])(feature_batch).numpy()
Explanation: DenseFeatures only accepts dense tensors; to inspect a categorical column you need to transform it to an indicator column first.
End of explanation
linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns)
linear_est.train(train_input_fn)
result = linear_est.evaluate(eval_input_fn)
clear_output()
print(result)
Explanation: After adding all the base features to the model, let's train the model. Training a model is just a single command using the tf.estimator API.
End of explanation
age_x_gender = tf.feature_column.crossed_column(['age', 'sex'], hash_bucket_size=100)
Explanation: Derived feature columns
You reached an accuracy of 75%. Using each base feature column separately may not be enough to explain the data. For example, the correlation between age and the label may differ for different genders. Therefore, if you only learn a single model weight for gender="Male" and gender="Female", you won't capture every age-gender combination (e.g., distinguishing between gender="Male" and age="30" versus gender="Male" and age="40").
To learn the differences between different feature combinations, you can add crossed feature columns to the model (you can also bucketize the age column before the crossed column), as sketched below.
End of explanation
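# Optional sketch: the explanation above also mentions bucketizing the age
# column before crossing it with another feature. One way to express that with
# the same tf.feature_column API is shown here; the bucket boundaries are
# purely illustrative choices, not part of the original tutorial.
age_buckets = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column('age'),
    boundaries=[18, 25, 30, 35, 40, 50, 65])
age_buckets_x_gender = tf.feature_column.crossed_column(
    [age_buckets, 'sex'], hash_bucket_size=100)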
derived_feature_columns = [age_x_gender]
linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns+derived_feature_columns)
linear_est.train(train_input_fn)
result = linear_est.evaluate(eval_input_fn)
clear_output()
print(result)
Explanation: After adding the combined feature to the model, let's train the model again.
End of explanation
pred_dicts = list(linear_est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities')
Explanation: This now reaches an accuracy of 77.6%, which is slightly better than training with only the base features. You can try using more features and transformations to see if you can do better!
You can use this trained model to make predictions on a passenger from the evaluation set. TensorFlow models are optimized to make predictions on a batch, or collection, of examples at once. Earlier, eval_input_fn was defined using the entire evaluation set.
End of explanation
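# Optional sketch: a prediction for a single passenger can be made by wrapping
# one row of dfeval in an input function, mirroring make_input_fn above;
# single_example_input_fn is an illustrative helper, not part of the original.
def single_example_input_fn():
    return tf.data.Dataset.from_tensor_slices(dict(dfeval.iloc[[0]])).batch(1)
single_pred = list(linear_est.predict(single_example_input_fn))[0]
print('Survival probability for the first eval passenger:',
      single_pred['probabilities'][1])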
from sklearn.metrics import roc_curve
from matplotlib import pyplot as plt
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,)
Explanation: Finally, look at the receiver operating characteristic (ROC) of the results, which gives a clearer idea of the tradeoff between the true positive rate and the false positive rate.
End of explanation |
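# Optional sketch: the area under the ROC curve summarizes the same
# true-positive / false-positive tradeoff in a single number.
from sklearn.metrics import roc_auc_score
print('AUC:', roc_auc_score(y_eval, probs))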
416 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
help(tf.layers.dense)
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
img_dim = 784
inputs_ = tf.placeholder(tf.float32, shape=[None, img_dim], name='inputs')
targets_ = tf.placeholder(tf.float32, shape=[None, img_dim], name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, img_dim)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits)
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer().minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
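# Optional sketch: the 32-dimensional codes already fetched above in
# `compressed` can be displayed directly to get a feel for the learned
# representation (reshaping each code to 4x8 is purely for display).
fig_codes, axes_codes = plt.subplots(nrows=1, ncols=10, figsize=(20, 2))
for code, ax in zip(compressed, axes_codes):
    ax.imshow(code.reshape((4, 8)), cmap='Greys_r')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
fig_codes.tight_layout(pad=0.1)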
417 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handling categorical data
In this notebook, I'll demonstrate different ways of mapping or encoding categorical data.
Step1: 1. Mapping ordinal features
Create a mapping dictionary first and then map the categorical string values into integers.
Step2: Create a function that converts strings into numbers
Step3: 2. Convert nominal categorical feature into dummy variables
Often, machine learning algorithms require that categorical variables be converted into dummy variables (also called OneHot encoding). For example, a single feature Fruit would be converted into three features, Apples, Oranges, and Bananas, one for each category in the categorical feature.
There are two common ways to preprocess categorical features
Step4: 3. Encoding class labels
Create a mapping dictionary by enumerating unique categories. Note that class labels are not ordinal; they are nominal.
Step5: Use LabelEncoder in scikit-learn to convert class labels into integers
Step6: 4. Convert categorical variable with Patsy | Python Code:
# create a pandas dataframe with categorical variables to work with
import pandas as pd
df = pd.DataFrame([['green', 'M', 10.1, 'class1'],
['red', 'L', 13.5, 'class2'],
['blue', 'XL', 15.3, 'class1']])
df.columns = ['color', 'size', 'price', 'classlabel']
df
Explanation: Handling categorical data
In this notebook, I'll demonstrate different ways of mapping or encoding categorical data.
End of explanation
size_mapping = {'XL': 3,
'L': 2,
'M': 1}
df['size'] = df['size'].map(size_mapping)
df
# transform integers back to string values using a reverse-mapping dictionary
inv_size_mapping = {v: k for k, v in size_mapping.items()}
df['size'].map(inv_size_mapping)
Explanation: 1. Mapping ordinal features
Create a mapping dictionary first and then map the categorical string values into integers.
End of explanation
def size_to_numeric(x):
if x=='XL':
return 3
if x=='L':
return 2
if x=='M':
return 1
df['size_num'] = df['size'].apply(size_to_numeric)
df
Explanation: Create a function that converts strings into numbers
End of explanation
# using pandas 'get_dummies'
pd.get_dummies(df[['price','color', 'size']])
# using pandas 'get_dummies'
pd.get_dummies(df['color'])
pd.get_dummies(df['color']).join(df[['size', 'price']])
# using scikit-learn LabelEncoder and OneHotEncoder
from sklearn.preprocessing import LabelEncoder
color_le = LabelEncoder()
df['color'] = color_le.fit_transform(df['color'])
df
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder()
color = ohe.fit_transform(df['color'].values.reshape(-1, 1)).toarray()
df_color = pd.DataFrame(color, columns = ['blue', 'green', 'red'])
df_color
df[['size', 'price']].join(df_color)
Explanation: 2. Convert nominal categorical feature into dummy variables
Often, machine learning algorithms require that categorical variables be converted into dummy variables (also called OneHot encoding). For example, a single feature Fruit would be converted into three features, Apples, Oranges, and Bananas, one for each category in the categorical feature.
There are two common ways to preprocess categorical features: using pandas or scikit-learn.
End of explanation
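# Optional sketch: pd.get_dummies can also drop the first level of each
# categorical feature to avoid perfectly collinear dummy columns (compare
# patsy's baseline handling further below). Shown on a small throwaway frame,
# since df['color'] has already been label encoded above.
df_demo = pd.DataFrame({'color': ['green', 'red', 'blue']})
pd.get_dummies(df_demo, drop_first=True)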
import numpy as np
class_mapping = {label: idx for idx, label in enumerate(np.unique(df['classlabel']))}
df['classlabel'] = df['classlabel'].map(class_mapping)
df
Explanation: 3. Encoding class labels
Create a mapping dictionary by enumerating unique categories. Note that class labels are not ordinal; they are nominal.
End of explanation
from sklearn.preprocessing import LabelEncoder
class_le = LabelEncoder()
df['classlabel'] = class_le.fit_transform(df['classlabel'].values)
df
class_le.inverse_transform(df.classlabel)
Explanation: Use LabelEncoder in scikit-learn to convert class labels into integers
End of explanation
import patsy
df = pd.DataFrame([['green', 'M', 10.1, 'class1'],
['red', 'L', 13.5, 'class2'],
['blue', 'XL', 15.3, 'class1']])
df.columns = ['color', 'size', 'price', 'classlabel']
# Convert df['color'] into a categorical variable, setting one category as the baseline
patsy.dmatrix('color', df, return_type='dataframe')
# Convert df['color'] into a categorical variable without setting one category as baseline
patsy.dmatrix('color-1', df, return_type='dataframe')
Explanation: 4. Convert categorical variable with Patsy
End of explanation |
418 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: In order to use wget inside a Python program, you have to install it with pip
Step2: Then, you have to import the wget package
Step3: Now, let's develop a Python function so that we can pass the download parameters as arguments. The code below is not complete and probably won't work under general circumstances, but its intention is simply to demonstrate the very first steps. The arguments of the function will be the following
Step4: Now, check out the content of your folder in Colab.
Get the date from computer time
In case your intention is to download the data on a daily basis, it is worth using your computer's date and time.
Step5: Yesterday as an object can be defined using the timedelta, as follows
Step6: Finally, you can download the data for the yesterday object.
Step7: Unzip files in python
RINEX files are stored as zipped ones. Of course, you can unzip them in Python. For that, two modules are necessary | Python Code:
!wget ftp://igs.bkg.bund.de/EUREF/BRDC/2022/033/BRDC00WRD_R_20220330000_01D_MN.rnx.gz
Explanation: <a href="https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/data_processing/lessons/download_gnss_data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Download navigational RINEX file from IGS data center
When processing raw GNSS data, it is more common to need RINEX navigation files for a specific day than for a period of time. IGS Data Centers store combined navigational files with free access. Combined means that the data is assembled from recordings of a wide range of permanent stations all over the globe or in a particular region. It is possible to choose data from particular satellite constellations; however, mixed data is often the best option.
This tutorial presents how to manage the download of RINEX navigation files from an IGS regional data center using simple Python scripts.
First off, let's download the mixed and combined navigation file for the day of year 33 in 2022 from IGS BKG data center. The data is stored at the ftp server: ftp://igs.bkg.bund.de/, in the remote directory /BRDC/2022/033/, and the file name is BRDC00WRD_R_20220330000_01D_MN.rnx.gz.
Having all the information, downloading is simply possible with wget. After typing "!wget", please add the full path of the file: ftp://igs.bkg.bund.de/EUREF/BRDC/2022/033/BRDC00WRD_R_20220330000_01D_MN.rnx.gz
End of explanation
pip install wget
Explanation: In order to use wget inside a Python program, you have to install it with pip:
End of explanation
import wget
Explanation: Then, you have to import the wget package:
End of explanation
from datetime import date
from urllib.error import URLError
def download_rinex_navigation(ftp, rpath, my_date):
doy = my_date.strftime('%j')
year = my_date.strftime('%Y')
url = ftp + rpath + year + '/' + doy + '/' + 'BRDC00WRD_R_' + year + doy + '0000_01D_MN.rnx.gz'
try:
name = wget.download(url)
except URLError:
return None
return name
my_date = date(2022, 2, 2)
print(my_date)
download_rinex_navigation('ftp://igs.bkg.bund.de/', 'EUREF/BRDC/', my_date)
Explanation: Now, let's develop a Python function so that we can pass the download parameters as arguments. The code below is not complete and probably won't work under general circumstances, but its intention is simply to demonstrate the very first steps. The arguments of the function will be the following:
ftp server name;
remote path;
date of data as datetime object.
The full path and the file name might be different on other data servers.
In the example below, the navigation data is downloaded for a specific day (2 February 2022). At first, the date function from the datetime module is used to convert year, month and day to a datetime object.
Then, the strftime function is used to convert the datetime object to a string. A wide range of formats is available (the script below, for example, gets the year and day of year).
End of explanation
from datetime import date, timedelta
Explanation: Now, check out the content of your folder in Colab.
Get the date from computer time
In case your intention is to download the data on a daily basis, it is worth using your computer's date and time.
End of explanation
yesterday = date.today() - timedelta(days=1)
print(yesterday)
Explanation: Yesterday as an object can be defined using the timedelta, as follows:
End of explanation
download_rinex_navigation('ftp://igs.bkg.bund.de/', 'EUREF/BRDC/', yesterday)
Explanation: Finally, you can download the data for the yesterday object.
End of explanation
import gzip
import shutil
zipped = download_rinex_navigation('ftp://igs.bkg.bund.de/', 'EUREF/BRDC/', yesterday)
unzipped = zipped[:-3]
with gzip.open(zipped, 'rb') as f_in:
with open(unzipped, 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
Explanation: Unzip files in python
RINEX files are stored as zipped ones. Of course, you can unzip them in Python. For that, two modules are necessary: gzip and shutil:
End of explanation |
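# Optional sketch: a quick sanity check that the unzipped RINEX file is
# readable -- print the first few header lines of the navigation file.
with open(unzipped) as f_nav:
    for _ in range(5):
        print(f_nav.readline().rstrip())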
419 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Doc2Vec Tutorial on the Lee Dataset
Step1: What is it?
Doc2Vec is an NLP tool for representing documents as a vector and is a generalization of the Word2Vec method. This tutorial will serve as an introduction to Doc2Vec and present ways to train and assess a Doc2Vec model.
Resources
Word2Vec Paper
Doc2Vec Paper
Dr. Michael D. Lee's Website
Lee Corpus
IMDB Doc2Vec Tutorial
Getting Started
To get going, we'll need to have a set of documents to train our doc2vec model. In theory, a document could be anything from a short 140 character tweet, a single paragraph (i.e., journal article abstract), a news article, or a book. In NLP parlance a collection or set of documents is often referred to as a <b>corpus</b>.
For this tutorial, we'll be training our model using the Lee Background Corpus included in gensim. This corpus contains 314 documents selected from the Australian Broadcasting
Corporation’s news mail service, which provides text e-mails of headline stories and covers a number of broad topics.
And we'll test our model by eye using the much shorter Lee Corpus which contains 50 documents.
Step2: Define a Function to Read and Preprocess Text
Below, we define a function to open the train/test file (with latin encoding), read the file line-by-line, pre-process each line using a simple gensim pre-processing tool (i.e., tokenize text into individual words, remove punctuation, set to lowercase, etc), and return a list of words. Note that, for a given file (aka corpus), each continuous line constitutes a single document and the length of each line (i.e., document) can vary. Also, to train the model, we'll need to associate a tag/number with each document of the training corpus. In our case, the tag is simply the zero-based line number.
Step3: Let's take a look at the training corpus
Step4: And the testing corpus looks like this
Step5: Notice that the testing corpus is just a list of lists and does not contain any tags.
Training the Model
Instantiate a Doc2Vec Object
Now, we'll instantiate a Doc2Vec model with a vector size of 50 and iterate over the training corpus 55 times. We set the minimum word count to 2 in order to discard words with very few occurrences. Model accuracy can be improved by increasing the number of iterations, but this generally increases the training time. Small datasets with short documents, like this one, can benefit from more training passes.
Step6: Build a Vocabulary
Step7: Essentially, the vocabulary is a dictionary (accessible via model.wv.vocab) of all of the unique words extracted from the training corpus along with the count (e.g., model.wv.vocab['penalty'].count for counts for the word penalty).
Time to Train
If the BLAS library is being used, this should take no more than 3 seconds.
If the BLAS library is not being used, this should take no more than 2 minutes, so use BLAS if you value your time.
Step8: Inferring a Vector
One important thing to note is that you can now infer a vector for any piece of text without having to re-train the model by passing a list of words to the model.infer_vector function. This vector can then be compared with other vectors via cosine similarity.
Step9: Assessing Model
To assess our new model, we'll first infer new vectors for each document of the training corpus, compare the inferred vectors with the training corpus, and then return the rank of the document based on self-similarity. Basically, we're pretending as if the training corpus is some new unseen data and then seeing how they compare with the trained model. The expectation is that we've likely overfit our model (i.e., all of the ranks will be less than 2) and so we should be able to find similar documents very easily. Additionally, we'll keep track of the second ranks for a comparison of less similar documents.
Step10: Let's count how each document ranks with respect to the training corpus
Step11: Basically, greater than 95% of the inferred documents are found to be most similar to themselves, and about 5% of the time a document is mistakenly most similar to another document. Checking an inferred vector against a training vector is a sort of 'sanity check' as to whether the model is behaving in a usefully consistent manner, though not a real 'accuracy' value.
This is great and not entirely surprising. We can take a look at an example
Step12: Notice above that the most similar document has a similarity score of ~80% (or higher). However, the similarity score for the second-ranked documents should be significantly lower (assuming the documents are in fact different), and the reasoning becomes obvious when we examine the text itself
Step13: Testing the Model
Using the same approach above, we'll infer the vector for a randomly chosen test document, and compare the document to our model by eye. | Python Code:
import gensim
import os
import collections
import smart_open
import random
Explanation: Doc2Vec Tutorial on the Lee Dataset
End of explanation
# Set file names for train and test data
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data'])
lee_train_file = test_data_dir + os.sep + 'lee_background.cor'
lee_test_file = test_data_dir + os.sep + 'lee.cor'
Explanation: What is it?
Doc2Vec is an NLP tool for representing documents as a vector and is a generalization of the Word2Vec method. This tutorial will serve as an introduction to Doc2Vec and present ways to train and assess a Doc2Vec model.
Resources
Word2Vec Paper
Doc2Vec Paper
Dr. Michael D. Lee's Website
Lee Corpus
IMDB Doc2Vec Tutorial
Getting Started
To get going, we'll need to have a set of documents to train our doc2vec model. In theory, a document could be anything from a short 140 character tweet, a single paragraph (i.e., journal article abstract), a news article, or a book. In NLP parlance a collection or set of documents is often referred to as a <b>corpus</b>.
For this tutorial, we'll be training our model using the Lee Background Corpus included in gensim. This corpus contains 314 documents selected from the Australian Broadcasting
Corporation’s news mail service, which provides text e-mails of headline stories and covers a number of broad topics.
And we'll test our model by eye using the much shorter Lee Corpus which contains 50 documents.
End of explanation
def read_corpus(fname, tokens_only=False):
with smart_open.smart_open(fname, encoding="iso-8859-1") as f:
for i, line in enumerate(f):
if tokens_only:
yield gensim.utils.simple_preprocess(line)
else:
# For training data, add tags
yield gensim.models.doc2vec.TaggedDocument(gensim.utils.simple_preprocess(line), [i])
train_corpus = list(read_corpus(lee_train_file))
test_corpus = list(read_corpus(lee_test_file, tokens_only=True))
Explanation: Define a Function to Read and Preprocess Text
Below, we define a function to open the train/test file (with latin encoding), read the file line-by-line, pre-process each line using a simple gensim pre-processing tool (i.e., tokenize text into individual words, remove punctuation, set to lowercase, etc), and return a list of words. Note that, for a given file (aka corpus), each continuous line constitutes a single document and the length of each line (i.e., document) can vary. Also, to train the model, we'll need to associate a tag/number with each document of the training corpus. In our case, the tag is simply the zero-based line number.
End of explanation
train_corpus[:2]
Explanation: Let's take a look at the training corpus
End of explanation
print(test_corpus[:2])
Explanation: And the testing corpus looks like this:
End of explanation
model = gensim.models.doc2vec.Doc2Vec(vector_size=50, min_count=2, epochs=55)
Explanation: Notice that the testing corpus is just a list of lists and does not contain any tags.
Training the Model
Instantiate a Doc2Vec Object
Now, we'll instantiate a Doc2Vec model with a vector size of 50 and iterate over the training corpus 55 times. We set the minimum word count to 2 in order to discard words with very few occurrences. Model accuracy can be improved by increasing the number of iterations, but this generally increases the training time. Small datasets with short documents, like this one, can benefit from more training passes.
End of explanation
model.build_vocab(train_corpus)
Explanation: Build a Vocabulary
End of explanation
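# Optional sketch: once the vocabulary is built, per-word statistics can be
# inspected, e.g. the count for the word 'penalty' mentioned in the text below.
print(model.wv.vocab['penalty'].count)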
%time model.train(train_corpus, total_examples=model.corpus_count, epochs=model.epochs)
Explanation: Essentially, the vocabulary is a dictionary (accessible via model.wv.vocab) of all of the unique words extracted from the training corpus along with the count (e.g., model.wv.vocab['penalty'].count for counts for the word penalty).
Time to Train
If the BLAS library is being used, this should take no more than 3 seconds.
If the BLAS library is not being used, this should take no more than 2 minutes, so use BLAS if you value your time.
End of explanation
model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])
Explanation: Inferring a Vector
One important thing to note is that you can now infer a vector for any piece of text without having to re-train the model by passing a list of words to the model.infer_vector function. This vector can then be compared with other vectors via cosine similarity.
End of explanation
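# Optional sketch: two inferred vectors can be compared via cosine similarity,
# as mentioned above; plain NumPy is enough for the comparison. The second
# word list here is an arbitrary illustrative example.
import numpy as np
vec_a = model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])
vec_b = model.infer_vector(['forest', 'fires', 'spread', 'quickly'])
print(np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b)))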
ranks = []
second_ranks = []
for doc_id in range(len(train_corpus)):
inferred_vector = model.infer_vector(train_corpus[doc_id].words)
sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
rank = [docid for docid, sim in sims].index(doc_id)
ranks.append(rank)
second_ranks.append(sims[1])
Explanation: Assessing Model
To assess our new model, we'll first infer new vectors for each document of the training corpus, compare the inferred vectors with the training corpus, and then return the rank of the document based on self-similarity. Basically, we're pretending as if the training corpus is some new unseen data and then seeing how they compare with the trained model. The expectation is that we've likely overfit our model (i.e., all of the ranks will be less than 2) and so we should be able to find similar documents very easily. Additionally, we'll keep track of the second ranks for a comparison of less similar documents.
End of explanation
collections.Counter(ranks) # Results vary due to random seeding and very small corpus
Explanation: Let's count how each document ranks with respect to the training corpus
End of explanation
print('Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
Explanation: Basically, greater than 95% of the inferred documents are found to be most similar to themselves, and about 5% of the time a document is mistakenly most similar to another document. Checking an inferred vector against a training vector is a sort of 'sanity check' as to whether the model is behaving in a usefully consistent manner, though not a real 'accuracy' value.
This is great and not entirely surprising. We can take a look at an example:
End of explanation
# Pick a random document from the test corpus and infer a vector from the model
doc_id = random.randint(0, len(train_corpus) - 1)
# Compare and print the most/median/least similar documents from the train corpus
print('Train Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
sim_id = second_ranks[doc_id]
print('Similar Document {}: «{}»\n'.format(sim_id, ' '.join(train_corpus[sim_id[0]].words)))
Explanation: Notice above that the most similar document has a similarity score of ~80% (or higher). However, the similarity score for the second-ranked documents should be significantly lower (assuming the documents are in fact different), and the reasoning becomes obvious when we examine the text itself
End of explanation
# Pick a random document from the test corpus and infer a vector from the model
doc_id = random.randint(0, len(test_corpus) - 1)
inferred_vector = model.infer_vector(test_corpus[doc_id])
sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
# Compare and print the most/median/least similar documents from the train corpus
print('Test Document ({}): «{}»\n'.format(doc_id, ' '.join(test_corpus[doc_id])))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
Explanation: Testing the Model
Using the same approach above, we'll infer the vector for a randomly chosen test document, and compare the document to our model by eye.
End of explanation |
420 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DTOcean Tidal Hydrodynamics Example
Note, this example assumes the Hydrodynamics Module has been installed
Step1: Create the core, menus and pipeline tree
The core object carries all the system information and is operated on by the other classes
Step2: Create a new project
Step3: Set the device type
Step4: Initiate the pipeline
This step will be important when the database is incorporated into the system as it will affect the operation of the pipeline.
Step5: Discover available modules
Step6: Activate a module
Note that the order of activation is important and that we can't deactivate yet!
Step7: Check the status of the module inputs
Step8: Initiate the dataflow
This indicates that the filtering and module / theme selections are complete
Step9: Move the system to the post-filter state and ready the system
Step10: Load test data
Prepare the test data for loading. The files required can be found in the test_data directory of the source code and should be copied to the directory that the notebook is running. When the python file is run a pickle file is generated containing a dictionary of inputs.
Step11: Get a variable from the tree
Step12: Discover which interfaces can be used to enter the variable
Each piece of data must be provided by one or many interfaces, be that raw input or from special file types.
Step13: Check that the variable has been entered correctly
Step14: Auto plot a variable
Step15: Look for other available plots
Step16: Plot a specific plot
Step17: Check if the module can be executed
Step18: Execute the current module
The "current" module refers to the next module to be executed in the chain (pipeline) of modules. This command will only execute that module and another will be used for executing all of the modules at once.
Note, any data supplied by the module will be automatically copied into the active data state.
Step19: Examine the results
Currently, there is no robustness built into the core, so the assumption is that the module executed successfully. This will have to be improved towards deployment of the final software.
Let's check the number of devices and annual output of the farm, using just information in the data object.
Step20: Plotting some graphs
By having data objects with set formats, it should be possible to create automated plot generation. However, some plots may be too complex and some special cases may need to be defined.
Step21: Plotting the Layout
This may require such a special case. It is not clear if a new data type is required or just special plots associated with variable IDs. | Python Code:
%matplotlib inline
from IPython.display import display, HTML
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (14.0, 8.0)
import numpy as np
from dtocean_core import start_logging
from dtocean_core.core import Core
from dtocean_core.menu import DataMenu, ModuleMenu, ProjectMenu
from dtocean_core.pipeline import Tree
def html_list(x):
message = "<ul>"
for name in x:
message += "<li>{}</li>".format(name)
message += "</ul>"
return message
def html_dict(x):
message = "<ul>"
for name, status in x.iteritems():
message += "<li>{}: <b>{}</b></li>".format(name, status)
message += "</ul>"
return message
# Bring up the logger
start_logging()
Explanation: DTOcean Tidal Hydrodynamics Example
Note, this example assumes the Hydrodynamics Module has been installed
End of explanation
new_core = Core()
data_menu = DataMenu()
project_menu = ProjectMenu()
module_menu = ModuleMenu()
pipe_tree = Tree()
Explanation: Create the core, menus and pipeline tree
The core object carries all the system information and is operated on by the other classes
End of explanation
project_title = "DTOcean"
new_project = project_menu.new_project(new_core, project_title)
Explanation: Create a new project
End of explanation
options_branch = pipe_tree.get_branch(new_core, new_project, "System Type Selection")
variable_id = "device.system_type"
my_var = options_branch.get_input_variable(new_core, new_project, variable_id)
my_var.set_raw_interface(new_core, "Tidal Fixed")
my_var.read(new_core, new_project)
Explanation: Set the device type
End of explanation
project_menu.initiate_pipeline(new_core, new_project)
Explanation: Initiate the pipeline
This step will be important when the database is incorporated into the system as it will affect the operation of the pipeline.
End of explanation
names = module_menu.get_available(new_core, new_project)
message = html_list(names)
HTML(message)
Explanation: Discover available modules
End of explanation
module_name = 'Hydrodynamics'
module_menu.activate(new_core, new_project, module_name)
hydro_branch = pipe_tree.get_branch(new_core, new_project, 'Hydrodynamics')
Explanation: Activate a module
Note that the order of activation is important and that we can't deactivate yet!
End of explanation
input_status = hydro_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
Explanation: Check the status of the module inputs
End of explanation
project_menu.initiate_dataflow(new_core, new_project)
Explanation: Initiate the dataflow
This indicates that the filtering and module / theme selections are complete
End of explanation
new_core.inspect_level(new_project, "modules initial")
new_core.reset_level(new_project, preserve_level=True)
Explanation: Move the system to the post-filter state and ready the system
End of explanation
%run test_data/inputs_wp2_tidal.py
hydro_branch.read_test_data(new_core,
new_project,
"test_data/inputs_wp2_tidal.pkl")
Explanation: Load test data
Prepare the test data for loading. The files required can be found in the test_data directory of the source code and should be copied to the directory that the notebook is running. When the python file is run a pickle file is generated containing a dictionary of inputs.
End of explanation
variable_id = 'project.rated_power'
my_var = hydro_branch.get_input_variable(new_core, new_project, variable_id)
Explanation: Get a variable from the tree
End of explanation
list_raw = my_var.get_raw_interfaces(new_core)
message = html_list(list_raw)
HTML(message)
Explanation: Discover which interfaces can be used to enter the variable
Each piece of data must be provided by one or many interfaces, be that raw input or from special file types.
End of explanation
variable_value = new_core.get_data_value(new_project, variable_id)
display(variable_value)
Explanation: Check that the variable has been entered correctly
End of explanation
new_var = hydro_branch.get_input_variable(new_core,
new_project,
'device.turbine_performance')
new_var.plot(new_core, new_project)
Explanation: Auto plot a variable
End of explanation
plots = new_var.get_available_plots(new_core, new_project)
msg = html_list(plots)
HTML(msg)
Explanation: Look for other available plots
End of explanation
new_var.plot(new_core, new_project, 'Tidal Power Performance')
Explanation: Plot a specific plot
End of explanation
can_execute = module_menu.is_executable(new_core, new_project, module_name)
display(can_execute)
input_status = hydro_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
Explanation: Check if the module can be executed
End of explanation
module_menu.execute_current(new_core, new_project)
Explanation: Execute the current module
The "current" module refers to the next module to be executed in the chain (pipeline) of modules. This command will only execute that module and another will be used for executing all of the modules at once.
Note, any data supplied by the module will be automatically copied into the active data state.
End of explanation
n_devices = new_core.get_data_value(new_project, "project.number_of_devices")
meta = new_core.get_metadata("project.number_of_devices")
name = meta.title
message_one = "<p><b>{}:</b> {}</p>".format(name, n_devices)
farm_annual_energy = new_core.get_data_value(new_project, "project.annual_energy")
meta = new_core.get_metadata("project.annual_energy")
name = meta.title
value = farm_annual_energy
units = meta.units[0]
message_two = "<p><b>{}:</b> <i>{}</i> ({})</p>".format(name, value, units)
HTML(message_one + message_two)
Explanation: Examine the results
Currently, there is no robustness built into the core, so the assumption is that the module executed successfully. This will have to be improved towards deployment of the final software.
Let's check the number of devices and annual output of the farm, using just information in the data object.
End of explanation
mean_power_per_dev_value = new_core.get_data_value(new_project,
"project.mean_power_per_device")
meta = new_core.get_metadata("project.mean_power_per_device")
chart_values = np.array(mean_power_per_dev_value.values())
plt.bar(range(len(mean_power_per_dev_value)),
chart_values,
align='center')
plt.xticks(range(len(mean_power_per_dev_value)),
mean_power_per_dev_value.keys())
plt.title(meta.title)
plt.ylabel(meta.units[0])
plt.tight_layout()
# plt.savefig('annual_power_per_device.png')
plt.show()
Explanation: Plotting some graphs
By having data objects with set formats, it should be possible to create automated plot generation. However, some plots may be too complex and some special cases may need to be defined.
End of explanation
layout_value = new_core.get_data_value(new_project, "project.layout")
layout_meta = new_core.get_metadata("project.layout")
x = []
y = []
for coords in layout_value.itervalues():
x.append(coords.x)
y.append(coords.y)
fig = plt.figure()
ax1 = fig.add_subplot(1,1,1, axisbg='lightskyblue')
ax1.plot(x,y,'k+', mew=2, markersize=10)
plt.title(layout_meta.title)
plt.axis('equal')
plt.show()
pmf_values = new_core.get_data_value(new_project, "project.mean_power_pmf_per_device")
display(pmf_values)
plt.plot(pmf_values["device001"][:,0], pmf_values["device001"][:,1], '+', mew=5, markersize=20)
plt.tight_layout()
plt.show()
tidal_occurance = new_core.get_data_value(new_project, "farm.tidal_occurrence")
print(tidal_occurance.p.values)
plt.quiver(tidal_occurance["U"].values[:,:,1],
tidal_occurance["V"].values[:,:,1])
plt.show()
hist_values = new_core.get_data_value(new_project, "project.mean_power_hist_per_device")
plot_bins = hist_values["device001"]['bins']
plot_values = hist_values["device001"]['values']
_ = plt.bar(plot_bins[:-1], plot_values, width=plot_bins[1:] - plot_bins[:-1])
plt.show()
Explanation: Plotting the Layout
This may require such a special case. It is not clear if a new data type is required or just special plots associated with variable IDs.
End of explanation |
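# Optional sketch: the layout fetched above can be re-plotted with device
# labels using plain matplotlib on the layout_value dictionary already in
# memory; this is only an illustration of one possible special-case plot.
fig2, ax2 = plt.subplots()
for dev_name, coords in layout_value.iteritems():
    ax2.plot(coords.x, coords.y, 'k+', mew=2, markersize=10)
    ax2.annotate(dev_name, (coords.x, coords.y),
                 xytext=(5, 5), textcoords='offset points')
ax2.set_title(layout_meta.title)
ax2.set_aspect('equal')
plt.show()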
421 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sebastian Raschka, 2015
https
Step1: Overview
Building, compiling, and running expressions with Theano
What is Theano?
First steps with Theano
Configuring Theano
Working with array structures
Wrapping things up – a linear regression example
Choosing activation functions for feedforward neural networks
Logistic function recap
Estimating probabilities in multi-class classification via the softmax function
Broadening the output spectrum by using a hyperbolic tangent
Training neural networks efficiently using Keras
Summary
<br>
<br>
Step2: Building, compiling, and running expressions with Theano
Depending on your system setup, it is typically sufficient to install Theano via
pip install Theano
For more help with the installation, please see
Step3: <br>
<br>
What is Theano?
...
First steps with Theano
Introducing the TensorType variables. For a complete list, see http
Step4: <br>
<br>
Configuring Theano
Configuring Theano. For more options, see
- http
Step5: To change the float type globally, execute
export THEANO_FLAGS=floatX=float32
in your bash shell. Or execute Python script as
THEANO_FLAGS=floatX=float32 python your_script.py
Running Theano on GPU(s). For prerequisites, please see
Step6: You can run a Python script on CPU via
Step7: Updating shared arrays.
More info about memory management in Theano can be found here
Step8: We can use the givens variable to insert values into the graph before compiling it. Using this approach we can reduce the number of transfers from RAM (via CPUs) to GPUs to speed up learning with shared variables. If we use inputs, a dataset is transferred from the CPU to the GPU multiple times, for example, if we iterate over a dataset multiple times (epochs) during gradient descent. Via givens, we can keep the dataset on the GPU if it fits (e.g., a mini-batch).
Step9: <br>
<br>
Wrapping things up
Step10: Implementing the training function.
Step11: Plotting the sum of squared errors cost vs epochs.
Step12: Making predictions.
Step13: <br>
<br>
Choosing activation functions for feedforward neural networks
...
Logistic function recap
The logistic function, often just called the "sigmoid function," is in fact a special case of a sigmoid function.
Net input $z$
Step14: Now, imagine a MLP perceptron with 3 hidden units + 1 bias unit in the hidden unit. The output layer consists of 3 output units.
Step15: <br>
<br>
Estimating probabilities in multi-class classification via the softmax function
The softmax function is a generalization of the logistic function and allows us to compute meaningful class-probabilities in multi-class settings (multinomial logistic regression).
$$P(y=j|z) =\phi_{softmax}(z) = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}}$$
The input to the function is the result of K distinct linear functions (the net inputs $z_j$), and the predicted probability for the j'th class given a sample vector x is the softmax of these net inputs, as given above.
Step16: <br>
<br>
Broadening the output spectrum using a hyperbolic tangent
Another special case of a sigmoid function, it can be interpreted as a rescaled version of the logistic function.
$$\phi_{tanh}(z) = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$$
Output range
Step18: <br>
<br>
Training neural networks efficiently using Keras
Loading MNIST
1) Download the 4 MNIST datasets from http
Step19: Multi-layer Perceptron in Keras
Once you have Theano installed, Keras can be installed via
pip install Keras
In order to run the following code via GPU, you can execute the Python script that was placed in this directory via
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_keras_mlp.py
Step20: One-hot encoding of the class variable | Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,matplotlib,theano,keras
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
Explanation: Sebastian Raschka, 2015
https://github.com/rasbt/python-machine-learning-book
Python Machine Learning - Code Examples
Chapter 13 - Parallelizing Neural Network Training with Theano
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
from IPython.display import Image
Explanation: Overview
Building, compiling, and running expressions with Theano
What is Theano?
First steps with Theano
Configuring Theano
Working with array structures
Wrapping things up – a linear regression example
Choosing activation functions for feedforward neural networks
Logistic function recap
Estimating probabilities in multi-class classification via the softmax function
Broadening the output spectrum by using a hyperbolic tangent
Training neural networks efficiently using Keras
Summary
<br>
<br>
End of explanation
Image(filename='./images/13_01.png', width=500)
Explanation: Building, compiling, and running expressions with Theano
Depending on your system setup, it is typically sufficient to install Theano via
pip install Theano
For more help with the installation, please see: http://deeplearning.net/software/theano/install.html
End of explanation
import theano
from theano import tensor as T
# initialize
x1 = T.scalar()
w1 = T.scalar()
w0 = T.scalar()
z1 = w1 * x1 + w0
# compile
net_input = theano.function(inputs=[w1, x1, w0], outputs=z1)
# execute
net_input(2.0, 1.0, 0.5)
Explanation: <br>
<br>
What is Theano?
...
First steps with Theano
Introducing the TensorType variables. For a complete list, see http://deeplearning.net/software/theano/library/tensor/basic.html#all-fully-typed-constructors
End of explanation
print(theano.config.floatX)
theano.config.floatX = 'float32'
Explanation: <br>
<br>
Configuring Theano
Configuring Theano. For more options, see
- http://deeplearning.net/software/theano/library/config.html
- http://deeplearning.net/software/theano/library/floatX.html
End of explanation
print(theano.config.device)
Explanation: To change the float type globally, execute
export THEANO_FLAGS=floatX=float32
in your bash shell. Or execute Python script as
THEANO_FLAGS=floatX=float32 python your_script.py
Running Theano on GPU(s). For prerequisites, please see: http://deeplearning.net/software/theano/tutorial/using_gpu.html
Note that float32 is recommended for GPUs; float64 on GPUs is currently still relatively slow.
End of explanation
import numpy as np
# initialize
# if you are running Theano on 64 bit mode,
# you need to use dmatrix instead of fmatrix
x = T.fmatrix(name='x')
x_sum = T.sum(x, axis=0)
# compile
calc_sum = theano.function(inputs=[x], outputs=x_sum)
# execute (Python list)
ary = [[1, 2, 3], [1, 2, 3]]
print('Column sum:', calc_sum(ary))
# execute (NumPy array)
ary = np.array([[1, 2, 3], [1, 2, 3]], dtype=theano.config.floatX)
print('Column sum:', calc_sum(ary))
Explanation: You can run a Python script on CPU via:
THEANO_FLAGS=device=cpu,floatX=float64 python your_script.py
or GPU via
THEANO_FLAGS=device=gpu,floatX=float32 python your_script.py
It may also be convenient to create a .theanorc file in your home directory to make those configurations permanent. For example, to always use float32, execute
echo -e "\n[global]\nfloatX=float32\n" >> ~/.theanorc
Or, create a .theanorc file manually with the following contents
[global]
floatX = float32
device = gpu
<br>
<br>
Working with array structures
End of explanation
# initialize
x = T.fmatrix(name='x')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]],
dtype=theano.config.floatX))
z = x.dot(w.T)
update = [[w, w + 1.0]]
# compile
net_input = theano.function(inputs=[x],
updates=update,
outputs=z)
# execute
data = np.array([[1, 2, 3]], dtype=theano.config.floatX)
for i in range(5):
print('z%d:' % i, net_input(data))
Explanation: Updating shared arrays.
More info about memory management in Theano can be found here: http://deeplearning.net/software/theano/tutorial/aliasing.html
End of explanation
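# Optional sketch: the current contents of the shared variable can be read
# back on the host side with get_value(); after the five updates above it
# should hold [[5., 5., 5.]].
print(w.get_value())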
# initialize
data = np.array([[1, 2, 3]],
dtype=theano.config.floatX)
x = T.fmatrix(name='x')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]],
dtype=theano.config.floatX))
z = x.dot(w.T)
update = [[w, w + 1.0]]
# compile
net_input = theano.function(inputs=[],
updates=update,
givens={x: data},
outputs=z)
# execute
for i in range(5):
print('z:', net_input())
Explanation: We can use the givens variable to insert values into the graph before compiling it. Using this approach we can reduce the number of transfers from RAM (via CPUs) to GPUs to speed up learning with shared variables. If we use inputs, a dataset is transferred from the CPU to the GPU multiple times, for example, if we iterate over a dataset multiple times (epochs) during gradient descent. Via givens, we can keep the dataset on the GPU if it fits (e.g., a mini-batch).
End of explanation
import numpy as np
X_train = np.asarray([[0.0], [1.0], [2.0], [3.0], [4.0],
[5.0], [6.0], [7.0], [8.0], [9.0]],
dtype=theano.config.floatX)
y_train = np.asarray([1.0, 1.3, 3.1, 2.0, 5.0,
6.3, 6.6, 7.4, 8.0, 9.0],
dtype=theano.config.floatX)
Explanation: <br>
<br>
Wrapping things up: A linear regression example
Creating some training data.
End of explanation
import theano
from theano import tensor as T
import numpy as np
def train_linreg(X_train, y_train, eta, epochs):
costs = []
# Initialize arrays
eta0 = T.fscalar('eta0')
y = T.fvector(name='y')
X = T.fmatrix(name='X')
w = theano.shared(np.zeros(
shape=(X_train.shape[1] + 1),
dtype=theano.config.floatX),
name='w')
# calculate cost
net_input = T.dot(X, w[1:]) + w[0]
errors = y - net_input
cost = T.sum(T.pow(errors, 2))
# perform gradient update
gradient = T.grad(cost, wrt=w)
update = [(w, w - eta0 * gradient)]
# compile model
train = theano.function(inputs=[eta0],
outputs=cost,
updates=update,
givens={X: X_train,
y: y_train,})
for _ in range(epochs):
costs.append(train(eta))
return costs, w
Explanation: Implementing the training function.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
costs, w = train_linreg(X_train, y_train, eta=0.001, epochs=10)
plt.plot(range(1, len(costs)+1), costs)
plt.tight_layout()
plt.xlabel('Epoch')
plt.ylabel('Cost')
plt.tight_layout()
# plt.savefig('./figures/cost_convergence.png', dpi=300)
plt.show()
Explanation: Plotting the sum of squared errors cost vs epochs.
End of explanation
def predict_linreg(X, w):
Xt = T.matrix(name='X')
net_input = T.dot(Xt, w[1:]) + w[0]
predict = theano.function(inputs=[Xt], givens={w: w}, outputs=net_input)
return predict(X)
plt.scatter(X_train, y_train, marker='s', s=50)
plt.plot(range(X_train.shape[0]),
predict_linreg(X_train, w),
color='gray',
marker='o',
markersize=4,
linewidth=3)
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
# plt.savefig('./figures/linreg.png', dpi=300)
plt.show()
Explanation: Making predictions.
End of explanation
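# Optional sketch: the fitted weights also predict unseen inputs, e.g. x = 10.0,
# which should land close to the extrapolated regression line.
print(predict_linreg(np.array([[10.0]], dtype=theano.config.floatX), w))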
# note that first element (X[0] = 1) to denote bias unit
X = np.array([[1, 1.4, 1.5]])
w = np.array([0.0, 0.2, 0.4])
def net_input(X, w):
z = X.dot(w)
return z
def logistic(z):
return 1.0 / (1.0 + np.exp(-z))
def logistic_activation(X, w):
z = net_input(X, w)
return logistic(z)
print('P(y=1|x) = %.3f' % logistic_activation(X, w)[0])
Explanation: <br>
<br>
Choosing activation functions for feedforward neural networks
...
Logistic function recap
The logistic function, often just called the "sigmoid function," is in fact a special case of a sigmoid function.
Net input $z$:
$$z = w_1x_{1} + \dots + w_mx_{m} = \sum_{j=1}^{m} x_{j}w_{j} \ = \mathbf{w}^T\mathbf{x}$$
Logistic activation function:
$$\phi_{logistic}(z) = \frac{1}{1 + e^{-z}}$$
Output range: (0, 1)
End of explanation
# W : array, shape = [n_output_units, n_hidden_units+1]
# Weight matrix for hidden layer -> output layer.
# note that first column (A[:][0] = 1) are the bias units
W = np.array([[1.1, 1.2, 1.3, 0.5],
[0.1, 0.2, 0.4, 0.1],
[0.2, 0.5, 2.1, 1.9]])
# A : array, shape = [n_hidden+1, n_samples]
# Activation of hidden layer.
# note that first element (A[0][0] = 1) is for the bias units
A = np.array([[1.0],
[0.1],
[0.3],
[0.7]])
# Z : array, shape = [n_output_units, n_samples]
# Net input of output layer.
Z = W.dot(A)
y_probas = logistic(Z)
print('Probabilities:\n', y_probas)
y_class = np.argmax(Z, axis=0)
print('predicted class label: %d' % y_class[0])
Explanation: Now, imagine a MLP perceptron with 3 hidden units + 1 bias unit in the hidden unit. The output layer consists of 3 output units.
End of explanation
def softmax(z):
return np.exp(z) / np.sum(np.exp(z))
def softmax_activation(X, w):
z = net_input(X, w)
return softmax(z)
y_probas = softmax(Z)
print('Probabilities:\n', y_probas)
y_probas.sum()
y_class = np.argmax(Z, axis=0)
y_class
Explanation: <br>
<br>
Estimating probabilities in multi-class classification via the softmax function
The softmax function is a generalization of the logistic function and allows us to compute meaningful class-probabilities in multi-class settings (multinomial logistic regression).
$$P(y=j|z) =\phi_{softmax}(z) = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}}$$
The input to the function is the result of K distinct linear functions (the net inputs $z_j$), and the predicted probability for the j'th class given a sample vector x is the softmax of these net inputs, as given above.
Output range: (0, 1)
End of explanation
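A short worked check (approximate values, my own arithmetic): the net input computed in the previous cell is $Z \approx (1.96,\; 0.31,\; 2.21)^T$, so
$$\phi_{softmax}(Z) \approx \frac{(e^{1.96},\; e^{0.31},\; e^{2.21})}{e^{1.96} + e^{0.31} + e^{2.21}} \approx (0.40,\; 0.08,\; 0.52),$$
which sums to one and selects index 2 as the argmax, consistent with y_class above.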
def tanh(z):
e_p = np.exp(z)
e_m = np.exp(-z)
return (e_p - e_m) / (e_p + e_m)
import matplotlib.pyplot as plt
%matplotlib inline
z = np.arange(-5, 5, 0.005)
log_act = logistic(z)
tanh_act = tanh(z)
# alternatives:
# from scipy.special import expit
# log_act = expit(z)
# tanh_act = np.tanh(z)
plt.ylim([-1.5, 1.5])
plt.xlabel('net input $z$')
plt.ylabel('activation $\phi(z)$')
plt.axhline(1, color='black', linestyle='--')
plt.axhline(0.5, color='black', linestyle='--')
plt.axhline(0, color='black', linestyle='--')
plt.axhline(-1, color='black', linestyle='--')
plt.plot(z, tanh_act,
linewidth=2,
color='black',
label='tanh')
plt.plot(z, log_act,
linewidth=2,
color='lightgreen',
label='logistic')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/activation.png', dpi=300)
plt.show()
Image(filename='./images/13_05.png', width=700)
Explanation: <br>
<br>
Broadening the output spectrum using a hyperbolic tangent
Another special case of a sigmoid function, it can be interpreted as a rescaled version of the logistic function.
$$\phi_{tanh}(z) = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$$
Output range: (-1, 1)
End of explanation
import os
import struct
import numpy as np
def load_mnist(path, kind='train'):
Load MNIST data from `path`
labels_path = os.path.join(path,
'%s-labels-idx1-ubyte'
% kind)
images_path = os.path.join(path,
'%s-images-idx3-ubyte'
% kind)
with open(labels_path, 'rb') as lbpath:
magic, n = struct.unpack('>II',
lbpath.read(8))
labels = np.fromfile(lbpath,
dtype=np.uint8)
with open(images_path, 'rb') as imgpath:
magic, num, rows, cols = struct.unpack(">IIII",
imgpath.read(16))
images = np.fromfile(imgpath,
dtype=np.uint8).reshape(len(labels), 784)
return images, labels
X_train, y_train = load_mnist('mnist', kind='train')
print('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1]))
X_test, y_test = load_mnist('mnist', kind='t10k')
print('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1]))
Explanation: <br>
<br>
Training neural networks efficiently using Keras
Loading MNIST
1) Download the 4 MNIST datasets from http://yann.lecun.com/exdb/mnist/
train-images-idx3-ubyte.gz: training set images (9912422 bytes)
train-labels-idx1-ubyte.gz: training set labels (28881 bytes)
t10k-images-idx3-ubyte.gz: test set images (1648877 bytes)
t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes)
2) Unzip those files
3) Copy the unzipped files to a directory ./mnist
End of explanation
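The manual steps above can also be scripted; the following is a hedged sketch (not part of the original notebook) that downloads and unpacks the four files into ./mnist, assuming the URL given in the text is still reachable.
import gzip
import shutil
from six.moves.urllib.request import urlretrieve

mnist_files = ['train-images-idx3-ubyte.gz', 'train-labels-idx1-ubyte.gz',
               't10k-images-idx3-ubyte.gz', 't10k-labels-idx1-ubyte.gz']
if not os.path.isdir('mnist'):
    os.makedirs('mnist')
for name in mnist_files:
    gz_path = os.path.join('mnist', name)
    # download the compressed file, then unpack it next to itself
    urlretrieve('http://yann.lecun.com/exdb/mnist/' + name, gz_path)
    with gzip.open(gz_path, 'rb') as f_in, open(gz_path[:-3], 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)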
import theano
theano.config.floatX = 'float32'
X_train = X_train.astype(theano.config.floatX)
X_test = X_test.astype(theano.config.floatX)
Explanation: Multi-layer Perceptron in Keras
Once you have Theano installed, Keras can be installed via
pip install Keras
In order to run the following code via GPU, you can execute the Python script that was placed in this directory via
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_keras_mlp.py
End of explanation
from keras.utils import np_utils
print('First 3 labels: ', y_train[:3])
y_train_ohe = np_utils.to_categorical(y_train)
print('\nFirst 3 labels (one-hot):\n', y_train_ohe[:3])
from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import SGD
np.random.seed(1)
model = Sequential()
model.add(Dense(input_dim=X_train.shape[1],
output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(input_dim=50,
output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(input_dim=50,
output_dim=y_train_ohe.shape[1],
init='uniform',
activation='softmax'))
sgd = SGD(lr=0.001, decay=1e-7, momentum=.9)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, y_train_ohe,
nb_epoch=50,
batch_size=300,
verbose=1,
validation_split=0.1,
show_accuracy=True)
y_train_pred = model.predict_classes(X_train, verbose=0)
print('First 3 predictions: ', y_train_pred[:3])
train_acc = np.sum(y_train == y_train_pred, axis=0) / X_train.shape[0]
print('Training accuracy: %.2f%%' % (train_acc * 100))
y_test_pred = model.predict_classes(X_test, verbose=0)
test_acc = np.sum(y_test == y_test_pred, axis=0) / X_test.shape[0]
print('Test accuracy: %.2f%%' % (test_acc * 100))
Explanation: One-hot encoding of the class variable:
End of explanation |
422 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data
Step3: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the test set 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step4: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
Step5: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint
Step7: Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
Step8: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint
Step9: Problem 3
Another check
Step10: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
Step11: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step12: Problem 4
Convince yourself that the data is still good after shuffling!
Step13: Finally, let's save the data for later reuse
Step14: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions
Step15: Comparison of parallel and synchronous overlap calculation
The Manhattan metric is used to measure the distance between images (reference)
Step16: Synchronously
Step17: Asynchronously
Step18: Estimating overlap
Step19: Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import time
from datetime import timedelta
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from scipy.spatial import distance
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
from ipyparallel import Client, require
# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline
%run label_util.py
def draw_images(label, a_arr, b_arr, bins_size=20):
x = np.array(range(bins_size))
f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(8, 1))
h_a = np.histogram(a_arr, bins=bins_size)
h_b = np.histogram(b_arr, bins=bins_size)
ax1.imshow(a_arr)
ax1.set_title('Label: ' + label)
ax2.bar(x, h_a[0])
ax3.imshow(b_arr)
ax4.bar(x, h_b[0])
plt.show()
def overlapping_comparison(counters, dataset_a, dataset_b, number_shown = 3):
letters = list(counters.keys())
i = 0
shown = 0
while shown < number_shown and i < len(letters):
character = letters[i]
similar_keys = list(counters[character].keys())
key = similar_keys[0]
if key == 'counter' and len(similar_keys) > 1:
key = similar_keys[1]
idx_a = int(key)
idx_b = counters[character][key][0]
label = '{0} (Distance: {1}) [{2}] - [{3}]'.format(character, manhattan_distance(dataset_a[idx_a], dataset_b[idx_b]), idx_a, idx_b)
draw_images(label, dataset_a[idx_a], dataset_b[idx_b])
i += 1
shown += 1
def display_overlap(counters):
key_lst = sorted(counters.keys())
total = 0
for key in key_lst:
total += counters[key]['counter']
print('Label {0}: {1}'.format(key, counters[key]['counter']))
print('Total:', total)
def wrap_tuples(labels, dataset):
result = []
for idx, item in enumerate(zip(labels, dataset)):
result.append((idx, item[0], item[1]))
return result
def is_equal_comparison(a_arr, b_arr):
return (a_arr==b_arr).all()
def euclidean_distance(a_arr, b_arr):
'''Euclidean distance without the sqrt'''
return np.sum(np.power(a_arr - b_arr, 2))
@require('numpy as np')
def manhattan_distance(a_arr, b_arr):
return np.sum(np.absolute(a_arr - b_arr))
def count_duplication(counters, lbl, idxA, idxB):
str_lbl = get_char_by_lbl(lbl)
if str_lbl not in counters:
counters[str_lbl] = {}
counters[str_lbl]['counter'] = 0
counters[str_lbl]['counter'] += 1
if str(idxA) not in counters[str_lbl]:
counters[str_lbl][str(idxA)] = []
counters[str_lbl][str(idxA)].append(idxB)
def count_equal_data(label_lst_A, data_lst_A, label_lst_B, data_lst_B, distance_threshold=0, min_distance_threshold = 0):
start_time = time.clock()
counters = {}
for idxA, lblA in enumerate(label_lst_A):
for idxB, lblB in enumerate(label_lst_B):
if lblA == lblB:
itemA = data_lst_A[idxA]
itemB = data_lst_B[idxB]
if distance_threshold == 0 and is_equal_comparison(itemA, itemB):
count_duplication(counters, lblA, idxA, idxB)
if distance_threshold > 0 and distance_threshold >= manhattan_distance(itemA, itemB) > min_distance_threshold:
count_duplication(counters, lblA, idxA, idxB)
end_time = time.clock()
return (counters, timedelta(seconds=end_time - start_time))
def count_equal_tuples(tuple_lst_A, tuple_lst_B, distance_threshold=0, min_distance_threshold = 0):
idx_idx = 0
lbl_idx = 1
data_idx = 2
counters = {}
for item_A in tuple_lst_A:
for item_B in tuple_lst_B:
if item_A[lbl_idx] == item_B[lbl_idx]:
if distance_threshold == 0 and is_equal_comparison(item_A[data_idx], item_B[data_idx]):
count_duplication(counters, item_A[lbl_idx], item_A[idx_idx], item_B[idx_idx])
if distance_threshold > 0 and distance_threshold >= manhattan_distance(item_A[data_idx], item_B[data_idx]) > min_distance_threshold:
count_duplication(counters, item_A[lbl_idx], item_A[idx_idx], item_B[idx_idx])
return counters
@require(get_char_by_lbl)
def count_duplication(counters, lbl, idxA, idxB):
str_lbl = get_char_by_lbl(lbl)
if str_lbl not in counters:
counters[str_lbl] = {}
counters[str_lbl]['counter'] = 0
counters[str_lbl]['counter'] += 1
if str(idxA) not in counters[str_lbl]:
counters[str_lbl][str(idxA)] = []
counters[str_lbl][str(idxA)].append(idxB)
@require(is_equal_comparison, count_duplication, manhattan_distance)
def item_acync_handler():
idx_idx = 0
lbl_idx = 1
data_idx = 2
for item_A in tuple_lst_A:
for item_B in tuple_lst_B:
if item_A[lbl_idx] == item_B[lbl_idx]:
if distance_threshold == 0 and is_equal_comparison(item_A[data_idx], item_B[data_idx]):
count_duplication(counters, item_A[lbl_idx], item_A[idx_idx], item_B[idx_idx])
if distance_threshold > 0 and distance_threshold >= manhattan_distance(item_A[data_idx], item_B[data_idx]) > min_distance_threshold:
count_duplication(counters, item_A[lbl_idx], item_A[idx_idx], item_B[idx_idx])
def reduce_counters(counters_lst):
result = {}
for counters in counters_lst:
for letter_key, item in counters.items():
if letter_key not in result:
result[letter_key] = {'counter': 0}
for key, value in item.items():
if key == 'counter':
result[letter_key][key] += value
elif key not in result[letter_key]:
result[letter_key][key] = value
else:
for idx in value:
result[letter_key][key].append(idx)
return result
def count_equal_tuples_parallel(tuple_lst_A, tuple_lst_B, distance_threshold=0, min_distance_threshold = 0):
rc = Client()
dview = rc[:]
dview.push(dict(tuple_lst_B = tuple_lst_B, map_dict=map_dict,
distance_threshold=distance_threshold,
min_distance_threshold=min_distance_threshold))
dview['counters'] = {}
dview.scatter('tuple_lst_A', tuple_lst_A)
dview.block=True
dview.apply(item_acync_handler)
result = reduce_counters(dview['counters'])
return result
Explanation: Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
def download_progress_hook(count, blockSize, totalSize):
A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 1% change in download progress.
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
Download a file if not present, and make sure it's the right size.
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the test set 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
from IPython.display import Image, display
num_first_items = 1
def display_first_items(folder_path):
print('Letter:', folder_path[-1:])
lst = os.listdir(folder_path)[:num_first_items]
for file_name in lst:
full_file_name = os.path.join(folder_path, file_name)
display(Image(filename=full_file_name))
for folder in train_folders:
display_first_items(folder)
for folder in test_folders:
display_first_items(folder)
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
End of explanation
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
Load the data for a single letter label.
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
if image_data.mean() == 0.5:
print('No data in image:', image_file)
continue
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1600)
Explanation: Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
End of explanation
def show_pickle(file_path):
print(file_path)
with open(file_path, 'rb') as f:
dataset = pickle.load(f)
plt.figure(figsize=(1,1))
plt.imshow(dataset[1])
plt.show()
for pickle_file in train_datasets:
show_pickle(pickle_file)
for pickle_file in test_datasets:
show_pickle(pickle_file)
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
End of explanation
def show_pickle_stats(file_path):
with open(file_path, 'rb') as f:
dataset = pickle.load(f)
print(file_path, len(dataset))
for pickle_file in train_datasets:
show_pickle_stats(pickle_file)
for pickle_file in test_datasets:
show_pickle_stats(pickle_file)
Explanation: Problem 3
Another check: we expect the data to be balanced across classes. Verify that.
End of explanation
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
print('train labels:', count_labels(train_labels))
print('valid labels:', count_labels(valid_labels))
print('test labels:', count_labels(test_labels))
Explanation: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
End of explanation
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
print('train labels:', count_labels(train_labels))
print('valid labels:', count_labels(valid_labels))
print('test labels:', count_labels(test_labels))
def show_data(dataset, labels, size=3):
print('=============================================')
for lbl, img_arr in zip(labels[:size], dataset[:size]):
print(map_dict[str(lbl)])
plt.figure(figsize=(1,1))
plt.imshow(img_arr)
plt.show()
show_data(train_dataset, train_labels)
show_data(test_dataset, test_labels)
show_data(valid_dataset, valid_labels)
Explanation: Problem 4
Convince yourself that the data is still good after shuffling!
End of explanation
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
bins_size = 28 * 4
def calc_histogram(dataset, bins = bins_size):
start_time = time.clock()
hist_list = []
for item in dataset:
hist = np.histogram(item, bins=bins)
hist_list.append(hist[0])
end_time = time.clock()
return (hist_list, timedelta(seconds=end_time - start_time))
train_histogram, calc_duration = calc_histogram(train_dataset, bins_size)
print('Histograms for train dataset calculates in', calc_duration)
valid_histogram, calc_duration = calc_histogram(valid_dataset, bins_size)
print('Histograms for validation dataset calculates in', calc_duration)
test_histogram, calc_duration = calc_histogram(test_dataset, bins_size)
print('Histograms for test dataset calculates in', calc_duration)
# pickle_hist_file = 'notMNIST.hist.pickle'
# try:
# f = open(pickle_hist_file, 'wb')
# save = {
# 'train_histogram': train_histogram,
# 'valid_histogram': valid_histogram,
# 'test_histogram': test_histogram,
# }
# pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
# f.close()
# except Exception as e:
# print('Unable to save data to', pickle_hist_file, ':', e)
# raise
# statinfo = os.stat(pickle_hist_file)
# print('Compressed histograms pickle size:', statinfo.st_size)
Explanation: Finally, let's save the data for later reuse:
End of explanation
pickle_file = 'notMNIST.pickle'
pickle_hist_file = 'notMNIST.hist.pickle'
try:
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
except Exception as e:
print('Unable to load full dataset to', pickle_file, ':', e)
raise
# try:
# with open(pickle_hist_file, 'rb') as f:
# save = pickle.load(f)
# train_histogram = save['train_histogram']
# valid_histogram = save['valid_histogram']
# test_histogram = save['test_histogram']
# print('Training histogram:', len(train_histogram))
# print('Validation histogram:', len(valid_histogram))
# print('Testing histogram:', len(test_histogram))
# except Exception as e:
# print('Unable to load full dataset to', pickle_file, ':', e)
# raise
start_time = time.clock()
train_tuple_lst = wrap_tuples(train_labels, train_dataset)
valid_tuple_lst = wrap_tuples(valid_labels, valid_dataset)
test_tuple_lst = wrap_tuples(test_labels, test_dataset)
end_time = time.clock()
print('Labels and data sets to tuples time:', timedelta(seconds=end_time - start_time))
Explanation: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.
End of explanation
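One possible way to tackle the second optional question — a hedged sketch, not the original author's solution: hash every training image and drop test samples whose exact bytes also appear in the training set (near-duplicates would still need a distance threshold, as explored in the following cells). The *_sanitized names are illustrative.
import hashlib

# hashes of all training images for exact-duplicate lookup
train_hashes = set(hashlib.sha1(img.tobytes()).hexdigest() for img in train_dataset)
keep_idx = [i for i, img in enumerate(test_dataset)
            if hashlib.sha1(img.tobytes()).hexdigest() not in train_hashes]
test_dataset_sanitized = test_dataset[keep_idx]
test_labels_sanitized = test_labels[keep_idx]
print('Sanitized test size:', len(test_dataset_sanitized))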
distance_overlapping = 10
Explanation: Comparison of parallel and synchronous overlap calculation
The Manhattan metric is used to measure the distance between images (reference)
End of explanation
start_time = time.clock()
overlap_valid_test = count_equal_tuples(valid_tuple_lst, test_tuple_lst)
end_time = time.clock()
duration = timedelta(seconds=end_time - start_time)
print('Counting overlapping between validation and test datasets during', duration)
display_overlap(overlap_valid_test)
start_time = time.clock()
overlap_valid_test_near = count_equal_tuples(valid_tuple_lst, test_tuple_lst, distance_overlapping)
end_time = time.clock()
duration = timedelta(seconds=end_time - start_time)
print('Counting overlapping between validation and test datasets (with overlapping distance) during', duration)
display_overlap(overlap_valid_test_near)
Explanation: Synchronously
End of explanation
start_time = time.clock()
overlap_valid_test = count_equal_tuples_parallel(valid_tuple_lst, test_tuple_lst)
end_time = time.clock()
duration = timedelta(seconds=end_time - start_time)
print('Counting overlapping between validation and test datasets during', duration)
display_overlap(overlap_valid_test)
overlapping_comparison(overlap_valid_test, valid_dataset, test_dataset)
start_time = time.clock()
overlap_valid_test_near = count_equal_tuples_parallel(valid_tuple_lst, test_tuple_lst, distance_overlapping)
end_time = time.clock()
duration = timedelta(seconds=end_time - start_time)
print('Counting overlapping between validation and test datasets (with overlapping distance) during', duration)
display_overlap(overlap_valid_test_near)
overlapping_comparison(overlap_valid_test_near, valid_dataset, test_dataset)
start_time = time.clock()
overlap_valid_test_far = count_equal_tuples_parallel(valid_tuple_lst, test_tuple_lst, 110, 100)
end_time = time.clock()
duration = timedelta(seconds=end_time - start_time)
print('Counting overlapping between validation and test datasets (with overlapping interval) during', duration)
display_overlap(overlap_valid_test_far)
overlapping_comparison(overlap_valid_test_far, valid_dataset, test_dataset)
Explanation: Asynchronously
End of explanation
start_time = time.clock()
overlap_train_valid = count_equal_tuples_parallel(train_tuple_lst, valid_tuple_lst)
end_time = time.clock()
duration = timedelta(seconds=end_time - start_time)
print('Counting overlapping between train and validation datasets during', duration)
display_overlap(overlap_train_valid)
overlapping_comparison(overlap_train_valid, train_dataset, valid_dataset)
start_time = time.clock()
overlap_train_valid_near = count_equal_tuples_parallel(train_tuple_lst, valid_tuple_lst, distance_overlapping)
end_time = time.clock()
duration = timedelta(seconds=end_time - start_time)
print('Counting overlapping between train and validation datasets (with overlapping distance) during', duration)
display_overlap(overlap_train_valid_near)
overlapping_comparison(overlap_train_valid_near, train_dataset, valid_dataset)
start_time = time.clock()
overlap_train_test = count_equal_tuples_parallel(train_tuple_lst, test_tuple_lst)
end_time = time.clock()
duration = timedelta(seconds=end_time - start_time)
print('Counting overlapping between train and test datasets during', duration)
display_overlap(overlap_train_test)
overlapping_comparison(overlap_train_test, train_dataset, test_dataset)
start_time = time.clock()
overlap_train_test_near = count_equal_tuples_parallel(train_tuple_lst, test_tuple_lst, distance_overlapping)
end_time = time.clock()
duration = timedelta(seconds=end_time - start_time)
print('Counting overlapping between train and test datasets (with overlapping distance) during', duration)
display_overlap(overlap_train_test_near)
overlapping_comparison(overlap_train_test_near, train_dataset, test_dataset)
%timeit is_equal_comparison(item_a, item_b)
%timeit manhattan_distance(valid_histogram[8], valid_histogram[9])
%timeit distance.cityblock(item_a.flatten(), item_b.flatten())
Explanation: Estimating overlap
End of explanation
from sklearn.linear_model import LogisticRegression
%run label_util.py
# print('train labels:', count_labels(train_labels))
# print('valid labels:', count_labels(valid_labels))
# print('test labels:', count_labels(test_labels))
# show_data(train_dataset, train_labels)
# show_data(test_dataset, test_labels)
# show_data(valid_dataset, valid_labels)
from collections import Counter
cnt = Counter(valid_labels)
keys = cnt.keys()
one_class_size = 50 // len(keys)
for key in keys:
class_indexes = np.where(valid_labels == key)[0][:one_class_size]
print(type(valid_labels[class_indexes]), valid_labels[class_indexes])
valid_labels.shape
logreg = LogisticRegression()
Explanation: Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
End of explanation |
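A possible sketch for Problem 6 (not the original author's code): flatten the images, fit LogisticRegression on growing subsets, and score on the test set. The helper name and subset sizes simply follow the problem text.
def train_and_score(n_samples):
    # flatten 28x28 images into feature vectors for the linear model
    X_sub = train_dataset[:n_samples].reshape(n_samples, -1)
    y_sub = train_labels[:n_samples]
    clf = LogisticRegression()
    clf.fit(X_sub, y_sub)
    X_te = test_dataset.reshape(len(test_dataset), -1)
    return clf.score(X_te, test_labels)

for n in [50, 100, 1000, 5000]:
    print(n, 'training samples -> test accuracy: %.3f' % train_and_score(n))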
423 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing Bike-Share data with Pandas
(they're just so cute and helpful)
Grace & Tanner
1. Compute the average temperature by season ('season_desc'), fixing season error in original data.
Step1: df[df['A'] > 2]['B'] = new_val # new_val not set in df
The warning offers a suggestion to rewrite as follows
Step2: 3. Investigate how the number of rentals varies with temperature. Is this trend constant across seasons? Across months?
Step3: As temperatures increase, so too do the number of rentals/riders. Note, though, that there are very few days where the temperature rises above 0.8 Shellman degrees. It makes sense, then, for summer to have the highest number of riders (which it does) because of the season's preference for higher temperatures. It also follows that winter, being fucking cold, has significantly fewer riders. We can also see that, when grouping by month, the summery months of June through September have the highest number of rentals.
notes from next-day review
Step4: 4. There are various types of users in the usage data sets. What sorts of things can you say about how they use the bikes differently?
Step5: When we compare registered riders to casual riders by season, our results are somewhat surprising. It appears that there are more casual riders in winter and fall than registered riders, which is the opposite of what we expected. However, when we compare the total number of casual riders to registered riders based on temperature, there are always more registered riders than casual riders regardless of how hot or cold it is. Comparing by windspeed also gives us somewhat expected answers, as there are significantly more registered riders at higher windspeeds (.4 -.6) than casual riders. | Python Code:
from pandas import Series, DataFrame
import pandas as pd
import numpy as np
weather = pd.read_table('data/daily_weather.tsv')
weather
type(weather)
weather.groupby('season_desc')['temp'].mean()
weather.loc[weather['season_code'] == 1, 'season_desc'] = 'winter'
weather
weather.loc[weather['season_code'] == 2, 'season_desc'] = 'spring'
weather.loc[weather['season_code'] == 3, 'season_desc'] = 'summer'
weather.loc[weather['season_code'] == 4, 'season_desc'] = 'fall'
weather
weather.loc[weather['season_desc'] == 'winter', 'season_code'] = 4
weather.loc[weather['season_desc'] == 'spring', 'season_code'] = 1
weather.loc[weather['season_desc'] == 'summer', 'season_code'] = 2
weather.loc[weather['season_desc'] == 'fall', 'season_code'] = 3
weather.groupby('season_desc')['temp'].mean()
weather
Explanation: Processing Bike-Share data with Pandas
(they're just so cute and helpful)
Grace & Tanner
1. Compute the average temperature by season ('season_desc'), fixing season error in original data.
End of explanation
weather['date'] = pd.to_datetime(weather['date'])
weather.date
weather['date'].dt.month
weather.groupby(weather['date'].dt.month)['total_riders'].sum()
Explanation: df[df['A'] > 2]['B'] = new_val # new_val not set in df
The warning offers a suggestion to rewrite as follows:
df.loc[df['A'] > 2, 'B'] = new_val
However, this doesn't fit your usage, which is equivalent to:
df = df[df['A'] > 2]
df['B'] = new_val
Warning we got on trying to rewrite certain dataframe rows, which led us to use .loc.
2. Various of the columns represent dates or datetimes, but out of the box pd.read_table won't treat them correctly. This makes it hard to (for example) compute the number of rentals by month. Fix the dates and compute the number of rentals by month.
End of explanation
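A minimal toy illustration (not part of the original analysis) of the pattern described above: chained indexing may assign into a copy, while .loc updates the frame in place.
toy = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [10, 20, 30, 40]})
# chained indexing -- may raise SettingWithCopyWarning and leave 'toy' unchanged
# toy[toy['A'] > 2]['B'] = 0
# .loc form -- updates 'toy' reliably
toy.loc[toy['A'] > 2, 'B'] = 0
print(toy)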
weather
weather.groupby(weather['date'].dt.month)['total_riders'].sum()
weather.groupby(weather['season_desc'])['total_riders'].sum()
weather['total_riders'].groupby(pd.cut(weather['temp'], np.arange(0, 1.0+0.2, 0.2))).sum()
Explanation: 3. Investigate how the number of rentals varies with temperature. Is this trend constant across seasons? Across months?
End of explanation
weather[['season_desc', 'temp', 'total_riders']].groupby('season_desc').corr()
Explanation: As temperatures increase, so too do the number of rentals/riders. Note, though, that there are very few days where the temperature rises above 0.8 Shellman degrees. It makes sense, then, for summer to have the highest number of riders (which it does) because of the season's preference for higher temperatures. It also follows that winter, being fucking cold, has significantly fewer riders. We can also see that, when grouping by month, the summery months of June through September have the highest number of rentals.
notes from next-day review:
End of explanation
weather
weather.groupby(weather['season_desc'])['no_casual_riders', 'no_reg_riders'].sum()
weather.groupby(pd.cut(weather['windspeed'], np.arange(0, 1.0+0.2, 0.2)))['no_casual_riders', 'no_reg_riders'].sum()
weather.groupby(pd.cut(weather['temp'], np.arange(0, 1.0+0.2, 0.2)))['no_casual_riders', 'no_reg_riders'].sum()
Explanation: 4. There are various types of users in the usage data sets. What sorts of things can you say about how they use the bikes differently?
End of explanation
weather.groupby(weather['weather_desc'])['no_casual_riders', 'no_reg_riders'].sum()
Explanation: When we compare registered riders to casual riders by season, our results are somewhat surprising. It appears that there are more casual riders in winter and fall than registered riders, which is the opposite of what we expected. However, when we compare the total number of casual riders to registered riders based on temperature, there are always more registered riders than casual riders regardless of how hot or cold it is. Comparing by windspeed also gives us somewhat expected answers, as there are significantly more registered riders at higher windspeeds (.4 -.6) than casual riders.
End of explanation |
424 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TensorFlow Addons Callbacks: TQDM Progress Bar
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Import and normalize the data
Step3: Build a simple MNIST CNN model
Step4: Default TQDMCallback usage
Step5: When you run the cell above, the following is the expected output | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -U tensorflow-addons
!pip install -q "tqdm>=4.36.1"
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
import tqdm
# quietly deep-reload tqdm
import sys
from IPython.lib import deepreload
stdout = sys.stdout
sys.stdout = open('junk','w')
deepreload.reload(tqdm)
sys.stdout = stdout
tqdm.__version__
Explanation: TensorFlow Addons Callbacks: TQDM Progress Bar
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/addons/tutorials/tqdm_progress_bar"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/addons/tutorials/tqdm_progress_bar.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/addons/tutorials/tqdm_progress_bar.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/addons/tutorials/tqdm_progress_bar.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
This notebook will demonstrate how to use the TQDMCallback in TensorFlow Addons.
Setup
End of explanation
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# normalize data
x_train, x_test = x_train / 255.0, x_test / 255.0
Explanation: Import and normalize the data
End of explanation
# build the model using the Sequential API
model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam',
loss = 'sparse_categorical_crossentropy',
metrics=['accuracy'])
Explanation: Build a simple MNIST CNN model
End of explanation
# initialize tqdm callback with default parameters
tqdm_callback = tfa.callbacks.TQDMProgressBar()
# train the model with tqdm_callback
# make sure to set verbose = 0 to disable
# the default progress bar.
model.fit(x_train, y_train,
batch_size=64,
epochs=10,
verbose=0,
callbacks=[tqdm_callback],
validation_data=(x_test, y_test))
Explanation: Default TQDMCallback usage
End of explanation
# TQDMProgressBar() also works with evaluate()
model.evaluate(x_test, y_test, batch_size=64, callbacks=[tqdm_callback], verbose=0)
Explanation: When you run the cell above, the following is the expected output
End of explanation |
425 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Module for computations (if needed)
Step1: Lab 05
Step2: Analytical solutions
Step3: Simulations' data | Python Code:
import FEM_utilities as FEM
Explanation: Module for computations (if needed)
End of explanation
def write_inp_file(L, z0, nn, eltype, elname, isRiks, matname, A, E, ν, increment, t_data, F):
n_coords = np.zeros((nn,2))
for ni in range(1,nn):
n_coords[ni,:] = ni/(nn-1)*np.array([L, z0])
if isRiks:
basename = 't_ne'+str(nn-1).zfill(2)+'_aR_s'+str(int(t_data[3]*100)).zfill(3)
else:
basename = 't_ne'+str(nn-1).zfill(2)+'_aN_s'+str(int(t_data[3]*100)).zfill(3)
filename = basename+'.inp'
outfile = open(filename, "wt")
outfile.write("** Lab 05 input file test\n")
# NODES section
outfile.write("**\n")
outfile.write("** Nodes\n")
outfile.write("**\n")
outfile.write("*NODE\n")
for i in range(nn):
nodestring = "{0:4d},{1:8},{2:8}\n".format(i+1,n_coords[i,0],n_coords[i,1])
outfile.write(nodestring)
# ELEMENTS section
outfile.write("**\n")
outfile.write("** Elements\n")
outfile.write("**\n")
outfile.write("*ELEMENT, TYPE={0}, ELSET={1}\n".format(eltype,elname))
for i in range(1,nn):
outfile.write("{0:4d},{1:4d},{2:4d}\n".format(i,i,i+1))
# SOLID section
outfile.write("**\n")
outfile.write("** Solid section\n")
outfile.write("**\n")
outfile.write("*SOLID SECTION, MATERIAL={0}, ELSET={1}\n".format(matname,elname))
outfile.write(str(A)+",\n")
# MATERIAL section
outfile.write("**\n")
outfile.write("** Materials\n")
outfile.write("**\n")
outfile.write("*MATERIAL, name = {0}\n".format(matname))
outfile.write("*ELASTIC\n")
outfile.write("{0},{1:6}\n".format(E,ν))
# BOUNDARY CONDITIONS
outfile.write("**\n")
outfile.write("** Boundary conditions\n")
outfile.write("**\n")
outfile.write("*BOUNDARY\n")
outfile.write("1,\t1,\t3\n")
outfile.write("{0},\t1\n".format(nn))
outfile.write("{0},\t3\n".format(nn))
# calculation steps
outfile.write("**\n")
outfile.write("** Step\n")
outfile.write("**\n")
outfile.write("*STEP, NLGEOM, INC={0}\n".format(increment))
if isRiks:
outfile.write("*STATIC, RIKS\n")
else:
outfile.write("*STATIC\n")
outfile.write("{0:8},{1:8},{2:8},{3:8}\n".format(t_data[0], t_data[1], t_data[2], t_data[3]))
# LOADS
outfile.write("**\n")
outfile.write("** Loads\n")
outfile.write("**\n")
outfile.write("*Cload\n")
outfile.write("{0}, {1:2d}, {2}\n".format(nn, 2, -F ))
outfile.write("*OUTPUT,FIELD\n")
outfile.write("*ELEMENT OUTPUT\n")
outfile.write("S,COORD,\n")
outfile.write("*EL PRINT\n")
outfile.write(" S,COORD,\n")
outfile.write("*OUTPUT,FIELD\n")
outfile.write("*NODE OUTPUT\n")
outfile.write("U,COORD\n")
outfile.write("*NODE PRINT\n")
outfile.write("U,COORD\n")
outfile.write("*OUTPUT,HISTORY,FREQUENCY={0}\n".format(increment))
outfile.write("*END STEP\n")
outfile.close()
if not os.path.exists('../Lab05_abaqus/'+basename+'/'):
os.makedirs('../Lab05_abaqus/'+basename+'/')
shutil.move(filename,'../Lab05_abaqus/'+basename+'/'+filename)
L = 2500. # lenght [mm]
z0 = 25. # initial height [mm]
isRiks = True
eltype = 'T3D2H'
elname = 'EALL'
matname = 'material_1'
E = 72000. # modulus [MPa]
ν = 0.33 # Poisson's coefficient
EA = 5e7 # cross sectional properties [N]
A = EA/E # area of cross section [mm^2]
# step parameters
increment = 1000 # max number of calculation steps
# dt0, t_tot, dt_min, dt_max
t_data1 = [0.05, 1.0, 1e-4, 0.05]
t_data2 = [0.25, 1.0, 0.25, 0.25]
ts = [t_data1, t_data2]
# load
F = 15 # load [N]
num_nodes = [2, 3, 6]
for k in num_nodes:
# shall we perform Riks analysis
for isRiks in [True, False]:
# time steps
for td in ts:
write_inp_file(L, z0, k, eltype, elname, isRiks, matname, A, E, ν, increment, td, F)
current_dir = os.getcwd()
os.chdir("../Lab05_abaqus/")
# %%bash
#./lab05.py
os.chdir(current_dir)
Explanation: Lab 05: Riks method
Write input file
Set parameters for simulation:
isRiks : whether to perform Riks or Newton-Raphson analysis
basename : name used for input file and directory
eltype : type of truss element (see conventions here)
matname: name of material
E, $\nu$: elastic properties (Young modulus and Poisson ratio, as TYPE is by default isotropic)
EA : cross-sectional properties of truss [N]
elname: *Elset assigned name
End of explanation
w_z_max = 2.5
w_z = np.linspace(0,w_z_max,500)
y = w_z - 1.5*w_z**2 + 0.5 * w_z**3
Explanation: Analytical solutions:
$F-w$ analytical relation:
$F = -\frac{EA}{L^3} \cdot ( z^2 \cdot w + \frac{3}{2}z \cdot w^2 + \frac{1}{2}w^3 )$
End of explanation
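A short check (my own algebra) that the code above evaluates the same relation in non-dimensional form: substituting $w = -z\,\hat{w}$, with $\hat{w} = -w/z$ the downward deflection ratio, gives
$$\frac{F L^3}{E A\, z^3} = \hat{w} - \frac{3}{2}\hat{w}^2 + \frac{1}{2}\hat{w}^3,$$
which is exactly what y = w_z - 1.5*w_z**2 + 0.5*w_z**3 computes and what the later plots use for their axes.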
def readDataL05(filename, isRiks):
file=open(filename,'r')
row = file.readlines()
step = []
U2 = []
S11 = []
state = 0
for line in row:
strlist = line.split()
if 'INCREMENT' in strlist and 'SUMMARY' in strlist:
state = 1
elif 'U2' in strlist:
state = 2
elif 'S11' in strlist:
state = 3
elif 'TOTAL' in strlist and state == 1 and not isRiks:
#print(strlist)
step.append(float(strlist[-1]))
state = 0
elif 'FACTOR' in strlist and state == 1 and isRiks:
step.append(float(strlist[4]))
state = 0
elif 'MINIMUM' in strlist and state == 2:
U2.append(float(strlist[2]))
state = 0
elif 'MAXIMUM' in strlist and state == 3:
S11.append(float(strlist[2]))
state = 0
return np.array(step), np.array(U2), np.array(S11)
dirs = glob.glob("../Lab05_abaqus/t_*")
sim_data = {}
for di in dirs:
test = di.split('/')[-1]
if '_aR_' in test:
a, b, c = readDataL05(di+"/"+test+".dat", True)
else:
a, b, c = readDataL05(di+"/"+test+".dat", False)
xy = np.zeros((len(a),2))
xy[:,0] = -b/z0
xy[:,1] = F*a*L**3/(EA*z0**3)
sim_data[test] = xy
#sim_data
w_z_max = 2.5
plt.figure(figsize=(16,10), dpi=300)
plt.plot(w_z,y, '-', lw=3, label='analytical')
for i, test in enumerate(sim_data):
n_el = int(test[4:6])
if '_aR_' in test:
atype = 'Riks'
if n_el == 1:
sym = 'h'
c = ((i+1)/len(sim_data),0.0,0.0)
elif n_el == 2:
sym = '*'
c = ((i+1)/len(sim_data),(i+1)/len(sim_data),0.0)
else:
sym = '^'
c = (0.0,(i+1)/len(sim_data),0.0)
if float(test[-3:])/100 == 0.25:
msize = 12
else:
msize = 6
lb = r"$n_{el}=$"+str(n_el)+" $\Delta l_{max}$="+str(float(test[-3:])/100)+" "+atype
plt.plot(sim_data[test][:,0],sim_data[test][:,1],sym, ms=msize, color=c, label=lb )
plt.xlim([0,w_z_max])
plt.ylim([-0.2,.4])
plt.xticks(np.arange(0.0,w_z_max+.1,0.25))
plt.yticks(np.arange(-0.2,0.4+.1,.1))
plt.title('Force displacement relation - Riks',fontsize=16)
plt.xlabel(r'$-\frac{w}{z}$', fontsize=15)
plt.ylabel(r'$\frac{FL^3}{EAz^3}$', fontsize=15)
plt.legend(loc='upper center', shadow=True, fontsize=14)
plt.grid()
plt.savefig('Lab05_Riks.jpg')
w_z_max =0.5
plt.figure(figsize=(16,10), dpi=300)
plt.plot(w_z,y, '-', lw=3, label='analytical')
for i, test in enumerate(sim_data):
n_el = int(test[4:6])
if '_aN_' in test:
atype = 'NR'
if n_el == 1:
sym = 'h'
c = ((i+1)/len(sim_data),0.0,0.0)
elif n_el == 2:
sym = '*'
c = ((i+1)/len(sim_data),(i+1)/len(sim_data),0.0)
else:
sym = '^'
c = (0.0,(i+1)/len(sim_data),(i+1)/len(sim_data))
if float(test[-3:])/100 == 0.25:
msize = 12
else:
msize = 6
lb = r"$n_{el}=$"+str(n_el)+" $\Delta l_{max}$="+str(float(test[-3:])/100)+" "+atype
plt.plot(sim_data[test][:,0],sim_data[test][:,1],sym, ms=msize, color=c, label=lb )
plt.xlim([0,w_z_max])
plt.ylim([-0.2,.4])
plt.xticks(np.arange(0.0,w_z_max+.1,0.25))
plt.yticks(np.arange(-0.2,0.4+.1,.1))
plt.title('Force displacement relation - Newton Raphson',fontsize=16)
plt.xlabel(r'$-\frac{w}{z}$',fontsize=15)
plt.ylabel(r'$\frac{FL^3}{EAz^3}$',fontsize=15)
plt.legend(loc='upper center', shadow=True,fontsize=14)
plt.grid()
plt.savefig('Lab05_NR.jpg')
Explanation: Simulations' data
End of explanation |
426 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression in Scikit-Learn
Step1: Dataset
We load the four datasets as numpy arrays. The real data corresponds to measurements verified with some instrument. In practice this is not done frequently, which is why statistics are needed to make predictions.
Note that a column of ones is appended to the X's, since we assume $x_0 \equiv 1$. This is done so that linear regression can be performed later.
Step2: Plots
Exercises
Step3: Model from the least-squares normal equation
$$W = (X^T X)^{-1} X^T Y$$
Step4: Now we plot the line against the training data
Step5: Exercises
Step6: Regularizers
Step7: Expected variance and expected bias
Step8: Variance and bias | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from semana2_datos import *
Explanation: Regression in Scikit-Learn
End of explanation
X_1 = np.array([[1,x] for x, y in data_1])
Y_1 = np.array([y for x, y in data_1])
X_2 = np.array([[1,x] for x, y in data_2])
Y_2 = np.array([y for x, y in data_2])
X_3 = np.array([[1,x] for x, y in data_3])
Y_3 = np.array([y for x, y in data_3])
X_4 = np.array([[1,x] for x, y in data_4])
Y_4 = np.array([y for x, y in data_4])
X_real = np.array([[1,x] for x, y in data_real])
Y_real = np.array([y for x, y in data_real])
Explanation: Dataset
We load the four datasets as numpy arrays. The real data corresponds to measurements verified with some instrument. In practice this is not done frequently, which is why statistics are needed to make predictions.
Note that a column of ones is appended to the X's, since we assume $x_0 \equiv 1$. This is done so that linear regression can be performed later.
End of explanation
# data_1
x_1 = np.array([x for _, x in X_1])
plt.scatter(x_1, Y_1)
Explanation: Plots
Exercises:
- Plot data_2.
- Plot data_3 and data_4 on the same plane with different colors (a sketch of one possible solution follows below).
End of explanation
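One possible sketch for the exercises above; the variable names x_2, x_3 and x_4 are illustrative, mirroring x_1.
x_2 = np.array([x for _, x in X_2])
plt.scatter(x_2, Y_2)
plt.show()

x_3 = np.array([x for _, x in X_3])
x_4 = np.array([x for _, x in X_4])
plt.scatter(x_3, Y_3, color='b', label='data_3')
plt.scatter(x_4, Y_4, color='r', label='data_4')
plt.legend()
plt.show()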
# data_1
W_1 = np.matmul(np.matmul(np.linalg.inv(np.matmul(X_1.T,
X_1)), X_1.T), Y_1)
f1 = lambda X: W_1[1]*X + W_1[0]
print('The learned weights are\nw0: {}\nw1: {}'.format(W_1[0], W_1[1]))
Explanation: Model from the least-squares normal equation
$$W = (X^T X)^{-1} X^T Y$$
End of explanation
# data_1
plt.scatter(x_1, Y_1)
lower = min(x_1)
upper = max(x_1)
sample_x1 = np.linspace(lower, upper, num=len(x_1))
plt.plot(sample_x1, f1(sample_x1), color='r')
Explanation: Now we plot the line against the training data
End of explanation
# Polynomial regression
x_1 = np.expand_dims(x_1, axis=1)
poly = PolynomialFeatures(3)
features = poly.fit_transform(x_1)
features.shape
lin_reg = LinearRegression()
lin_reg.fit(features, Y_1)
sample_x1 = np.expand_dims(sample_x1, axis=1)
sample_x1_poly = poly.fit_transform(sample_x1)
y1_poly = lin_reg.predict(sample_x1_poly)
plt.scatter(x_1, Y_1)
plt.plot(sample_x1, y1_poly, color='g')
Explanation: Exercises:
Fit a linear regressor for data_2, data_3 and data_4 using the corresponding Scikit-Learn module.
Plot the lines against their respective datasets. data_3 and data_4 should appear on the same plane, distinguished by color (a sketch follows below).
Polynomial regression and regularizations
End of explanation
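A possible sketch for the linear-regression exercise above, using sklearn's LinearRegression on data_2; data_3 and data_4 can be handled the same way and drawn on one shared plot with different colors. Names such as lr_2 and sample_x2 are illustrative.
x_2 = np.array([x for _, x in X_2]).reshape(-1, 1)
lr_2 = LinearRegression()
lr_2.fit(x_2, Y_2)
sample_x2 = np.linspace(x_2.min(), x_2.max(), num=len(x_2)).reshape(-1, 1)
plt.scatter(x_2, Y_2)
plt.plot(sample_x2, lr_2.predict(sample_x2), color='r')
plt.show()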
# Ridge for x_1
ridge = Ridge(alpha=2.5)
ridge.fit(features, Y_1)
y1_ridge = ridge.predict(sample_x1_poly)
plt.scatter(x_1, Y_1)
plt.plot(sample_x1, y1_ridge, color='r')
Explanation: Regularizers
End of explanation
# Expected prediction
def E(f_x):
return float(sum(f_x)) / float(len(f_x))
x_real = np.array([x for _, x in X_real])
exp_pred_1 = E(f1(x_real))
avg_y_real = E(Y_real)
avg_y_real - exp_pred_1
Explanation: Expected variance and expected bias
End of explanation
# data 1
f1 = W_1[1]*sample_x1 + W_1[0]
f1_to_2 = f1 ** 2
E11 = E(f1_to_2)
E_f1 = E(f1)
E12 = E_f1 ** 2
E11 - E12
Explanation: Variance and bias
End of explanation |
427 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-2', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CCCMA
Source ID: SANDBOX-2
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
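# Hypothetical illustration only (placeholder name/e-mail, not real document authors), e.g.:
# DOC.set_author("Jane Doe", "jane.doe@example.org")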
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
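# Hypothetical illustration only, following the pattern given in the property description, e.g.:
# DOC.set_value("CICE 4.2")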
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
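# Hypothetical illustration only -- values must be taken from the valid choices above,
# e.g. one call per selected variable (see the notebook help page for the exact multi-value syntax):
# DOC.set_value("Sea ice temperature")
# DOC.set_value("Sea ice concentration")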
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
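# Illustration only: a constant freezing point near -1.8 (deg C) is a common choice
# for standard seawater, e.g. DOC.set_value(-1.8)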
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution, and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
428 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welch's periodograms and spectrograms
Step1: Computing the power spectrum
The power spectrum gives the power of the signal at each frequency. The spectral amplitude distribution is the series of amplitudes for each frequency component of the oscillation. To plot the power spectrum we first need to calculate the Fourier coefficients to obtain the amplitudes or powers. To do this, the data can be split into a number of components or segments. To obtain the power spectrum of a signal, we simply square the amplitude of each component.
When computing Fourier transforms or any spectra, it is generally advisable to define the spectral resolution or frequency resolution.
It is the spacing between successive frequencies, and it is given by the ratio between the sampling rate and the total number of samples of the signal. Thus, at a fixed sampling rate:
<BR>
$$
\text{Spectral resolution} = \frac{\text{Sampling Rate}}{\text{Nsamples}}
$$
<BR>
The Welch method computes an estimate of the power spectrum by averaging over overlapping segments of the data.
The length of the segment determines the appropriate spectral resolution.
<BR>
Other aspects to take into account are the tapering window to be applied and the degree of overlap between segments.
Step2: A synthetic sine wave
A 120-second sine signal sampled at 100 Hz
Step3: The two normalizations necessary to obtain the amplitudes from the Fourier coefficients are
Step4: For the Welch's periodogram, the spectral resolution is given by the length of the segment. If we take a segment that corresponds to the sampling rate, then the frequency resolution is 1 Hz. A larger segment (e.g., 5 times the size of the sampling rate, as used here) gives a finer frequency resolution (0.2 Hz).
Welch's periodogram will return the power (amplitude squared) versus frequency. We can also have the power density. Power density is the power per Hz, so if we take a segment which is five times the sampling rate, the power is multiplied by five to return power/Hz. In this case, when we pass scaling=density to Welch's, we obtain the power times 5, because our segment has a length of five times the sampling rate.
Step5: Example of EEG with delta band
Step6: The relative band (absolute power divided by the total power of the spectrum)
can be calculated, irrespective of if power of power densities are given in
the Welch's spectrum. | Python Code:
%pylab inline
from matplotlib import style
style.use('fivethirtyeight')
from numpy import pi as PI
from scipy import signal
from scipy.integrate import simps
Explanation: Welch's periodograms and spectrograms
End of explanation
%ls data
Explanation: Computing the power spectrum
The power spectrum gives the power of the signal at each frequency. The spectral amplitude distribution is the series of amplitudes for each frequency component of the oscillation. To plot the power spectrum we first need to calculate the Fourier coefficients to obtain the amplitudes or powers. To do this, the data can be split into a number of components or segments. To obtain the power spectrum of a signal, we simply square the amplitude of each component.
When computing Fourier transforms or any spectra, it is generally advisable to define the spectral resolution or frequency resolution.
It is the spacing between successive frequencies, and it is given by the ratio between the sampling rate and the total number of samples of the signal. Thus, at a fixed sampling rate:
<BR>
$$
\text{Spectral resolution} = \frac{\text{Sampling Rate}}{\text{Nsamples}}
$$
<BR>
The Welch method computes an estimate of the power spectrum by averaging over overlapping segments of the data.
The length of the segment determines the appropriate spectral resolution.
<BR>
Other aspects to take into account are the tapering window to be applied and the degree of overlap between segments.
End of explanation
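A quick numeric illustration of the formula above, using the same numbers as the synthetic signal generated below (purely illustrative):
# Worked example: 120 s sampled at 100 Hz -> 12000 samples
print(100/12000)   # full-signal (FFT) resolution, ~0.0083 Hz
print(1/5.0)       # a 5-second Welch segment gives 0.2 Hz resolution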
srate = 100
Nyquist = srate/2
print('Nyquist frequency is %d Hz'%Nyquist)
time = np.linspace(0,120,num=120*srate)
mysine = lambda f: np.sin(2*PI*f*time)
sine = 5*mysine(10)
# Plot only the first 5 seconds
fig, ax = plt.subplots(figsize=(16,4))
plt.plot(time, sine, lw = 1.5, color='C1')
plt.xlabel('Time (sec)'), plt.ylabel('Voltage ($\mu$Volts)');
plt.xlim(0,5);
Explanation: A synthetic sine wave
A 120-second sine wave (10 Hz, 5 uV amplitude) sampled at 100 Hz
End of explanation
# Fourier transform
FourierCoeff = np.fft.fft(sine)/sine.size
DC = [np.abs(FourierCoeff[0])]
amp = np.concatenate((DC, 2*np.abs(FourierCoeff[1:])))
# compute frequencies vector until half the sampling rate
Nsamples = int( np.floor(sine.size/2) )  # np.floor avoids relying on the (unimported) math module
hz = np.linspace(0, Nyquist, num = Nsamples + 1 )
dhz = hz[1]
print('Spectral resolution = %2.4f Hz'%dhz)
Explanation: The two normalizations necessary to obtain the amplitudes from the Fourier coefficients are:
1) Divide the Fourier coefficients by the size of the signal
2) multiply by two to fold in the amplitude of the negative frequencies
In addition, we can set the DC-component to zero.
End of explanation
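In equation form, for a real signal of $N$ samples these two normalizations give the single-sided amplitudes
$$
A_0 = \frac{|X_0|}{N}, \qquad A_k = \frac{2\,|X_k|}{N} \quad \text{for } 0 < k < N/2,
$$
which is exactly what the cell above computes.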
# Perform Welch's periodogram
segment = int( srate*5 )
myhann = signal.get_window('hann', segment)
# obtain simply Power (amplitude^2) without tapering
myparams = dict(fs = srate, nperseg = segment, window = np.ones(segment),
noverlap = 0, scaling = 'spectrum', return_onesided=True)
freq, ps = signal.welch(x = sine, **myparams)# units uV**2
ps = 2*ps # correct for negative frequencies
# obtain Power density (amplitude^2/Hz) without tapering
myparams = dict(fs = srate, nperseg = segment, window = np.ones(segment),
noverlap = 0, scaling = 'density', return_onesided=True)
freq, psd = signal.welch(x = sine, **myparams)# units uV**2/Hz
psd = 2*psd # correct for negative frequencies
dfreq = freq[1]
print('Spectral resolution = %2.4f Hz'%dfreq)
# Plot the power spectrum
fig, ax = plt.subplots(1, 2, figsize=(16, 4))
ax[0].set_title("Amplitude spectrum (Fourier transform)")
ax[0].stem(hz,amp[:len(hz)], use_line_collection = True)
ax[0].set_ylabel('Amplitude ($\mu V$)')
ax[1].set_title("Power spectrum (Welch's periodogram)")
ax[1].plot(freq, ps, color='k', lw=2)
ax[1].set_ylabel('Power spectrum ($\mu V^2$)')
for myax in ax:
myax.set_xlabel('Frequency (Hz)')
myax.set_xlim(0,20)
myticks = list(range(6)) + list(range(10, 25,5))
myax.set_xticks(myticks)
# The sine wave has an amplitude of 5 uV, so its power is 25 uV^2 and its power
# density is 25*5 = 125 uV^2/Hz, because the segment length is 5x the sampling rate (5 s).
print('Signal amplitude = %2.4f uVolts'%amp[int(10/dhz)])
print('Signal power = %2.4f uVolts^2'%ps[int(10/dfreq)])
print('Signal power density = %2.4f uVolts^2/Hz'%psd[int(10/dfreq)])
np.sqrt(124.9824/5)  # recovers the 5 uV amplitude from the power density (125 uV^2/Hz over a 5 s segment)
# Plot spectrogram
# now we will analyze window lengths of 500 ms
segment = int(0.5*srate) # 50 points (0.5 sec, 500 ms)
# we have less frequency resolution here because the segments are shorter
Nsamples = int( np.floor(segment/2) )
hz = np.linspace(0, Nyquist, Nsamples + 1)
dfreq = hz[1]
print('Spectral resolution = %2.4f Hz'%dfreq)
nsteps = int(np.floor( (sine.size/segment)) )
print(sine.size, nsteps)
dt = 1/srate
sine.size*dt
dt_segment = 1/0.50 # 0.5 s segments, i.e. 2 segments per second
nsteps/dt_segment
# compute manually the Fourier transform
myamp = list()
for i in range(nsteps):
# signal duration 500 ms
pstart = i*segment
data = sine[ pstart : pstart + segment]
#data = sine[i*step:i*step+segment]
FourierCoeff = np.fft.fft(data)/segment
DC = [np.abs(FourierCoeff[0])] # DC component
fft = np.concatenate((DC, 2*np.abs(FourierCoeff[1:])))
amp = fft[:int(45/dfreq)] # only until 45 Hz
myamp.append( amp )
hz = hz[:int(45/dfreq)]
fig, ax = plt.subplots(2,1, figsize = (16,8), constrained_layout=True)
# Plot the recording
ax[0].plot(time, sine, lw = 1, color='C0')
ax[0].set_ylabel('Amplitude ($\mu V$)')
ax[0].set_title('Sine wave with manual Fourier')
# spectrum is a ContourSet object
dt_segment= 1/0.5 # one segment is 0.5 seconds
X = np.arange(nsteps)/dt_segment
Y = hz
Z = np.array(myamp).T # shape freq,time
levels = 10
spectrum = ax[1].contourf(X,Y,Z,levels, cmap='jet')#,'linecolor','none')
# adjust the colormap
cbar = plt.colorbar(spectrum)#, boundaries=np.linspace(0,1,5))
cbar.ax.set_ylabel('Amplitude ($\mu$V)', rotation=90)
#cbar.set_ticks(np.arange(0,5,10))
#A working example (for any value range) with five ticks along the bar is:
#m0=int(np.floor(np.min(myamp))) # colorbar min value
#m4=int(np.ceil(np.max(myamp))) # colorbar max value
#m1=int(1*(m4-m0)/4.0 + m0) # colorbar mid value 1
#m2=int(2*(m4-m0)/4.0 + m0) # colorbar mid value 2
#m3=int(3*(m4-m0)/4.0 + m0) # colorbar mid value 3
#cbar.set_ticks([m0,m1,m2,m3,m4])
#cbar.set_ticklabels([m0,m1,m2,m3,m4])
#cbar.set_ticks(np.arange(0, 1.1, 0.5))
ax[1].axhline(y = 8, linestyle='--', linewidth = 1.5, color='white')
ax[1].axhline(y = 12, linestyle='--', linewidth = 1.5, color='white')
ax[1].set_ylim([0,40])
ax[1].set_yticks(arange(0,45,5))
ax[1].set_ylabel('Frequency (Hz)')
for myax in ax:
myax.set_xticks(np.arange(0, 121, 5))
myax.set_xlim(0, 10)
myax.set_xlabel('Time (sec.)')
# With scipy
myparams = dict(nperseg = segment, noverlap = 0, return_onesided=True, scaling='density')
freq, nseg, Sxx = signal.spectrogram(x = sine, fs=srate, **myparams)
Sxx = 2*Sxx
print('Frequency resolution = %2.4f Hz'%freq[1])
Sxx.shape
fig, ax = plt.subplots(2,1, figsize = (16,8), constrained_layout=True)
ax[0].plot(time, sine, lw = 1, color='C0')
ax[0].set_ylabel('Amplitude ($\mu V$)')
ax[0].set_title('Sine wave with spectrogram from scipy')
# spectrum is a ContourSet object
dt = (sine.size/srate)/nsteps # seconds per segment (the 120 s signal split into nsteps segments)
X = np.arange(nsteps)*dt
Y = freq
Z = Sxx
levels = 10
spectrum = ax[1].contourf(X,Y,Z,levels, cmap='jet')#,'linecolor','none')
# get the colormap
cbar = plt.colorbar(spectrum)#, boundaries=np.linspace(0,1,5))
cbar.ax.set_ylabel('Power ($\mu^2$V)', rotation=90)
#cbar.set_ticks(np.arange(0,50,10))
#A working example (for any value range) with five ticks along the bar is:
#m0=int(np.floor(np.min(Sxx))) # colorbar min value
#m4=int(np.ceil(np.max(Sxx))) # colorbar max value
#m1=int(1*(m4-m0)/4.0 + m0) # colorbar mid value 1
#m2=int(2*(m4-m0)/4.0 + m0) # colorbar mid value 2
#m3=int(3*(m4-m0)/4.0 + m0) # colorbar mid value 3
#cbar.set_ticks([m0,m1,m2,m3,m4])
#cbar.set_ticklabels([m0,m1,m2,m3,m4])
#cbar.set_ticks(np.arange(0, 1.1, 0.5))
ax[1].axhline(y = 8, linestyle='--', linewidth = 1.5, color='white')
ax[1].axhline(y = 12, linestyle='--', linewidth = 1.5, color='white')
ax[1].set_ylim([0,40])
ax[1].set_yticks(arange(0,45,5))
ax[1].set_ylabel('Frequency (Hz)')
for myax in ax:
myax.set_xticks(np.arange(0, 121, 5))
myax.set_xlim(0, 10)
myax.set_xlabel('Time (sec.)')
# Plot first spectrum
plt.figure( figsize=(12,4))
mypower = Sxx[:,0]
plt.plot(hz,myamp[0], 'o-', color='C0', label='Amplitude')
plt.plot(freq, mypower, '^-', color='red', label='Power')
plt.plot(freq, mypower/0.5, 's-', color='green', label = 'Power density')
plt.xlabel("Frequency (Hz)")
plt.legend(frameon = False, loc=2);
Explanation: For the Welch's periodogram, the spectral resolution is given by the length of the segment. If we take a segment that corresponds to the sampling rate, then the frequency resolution is 1 Hz. A larger segment (e.g., 5 times the size of the sampling rate, as used above) gives a finer frequency resolution (0.2 Hz).
Welch's periodogram will return the power (amplitude squared) versus frequency. We can also have the power density. Power density is the power per Hz, so if we take a segment which is five times the sampling rate, the power is multiplied by five to return power/Hz. In this case, when we pass scaling=density to Welch's, we obtain the power times 5, because our segment has a length of five times the sampling rate.
End of explanation
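A quick check of that relationship on the arrays computed above (the boxcar window makes the conversion exact):
# power = density x frequency resolution (here dfreq = 1/5 = 0.2 Hz)
print(np.allclose(ps, psd*dfreq))   # expected: True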
eeg = np.loadtxt('data/EEG.txt')
sr = 100 # samples per second
dt = 1/sr
time = np.arange(eeg.size)*dt
Nyquist = sr/2
print('Nyquist frequency is %d Hz'%Nyquist)
fig, ax = plt.subplots(figsize=(16,4))
plt.plot(time, eeg, lw = 1.5)
plt.xlabel('Time (sec)'), plt.ylabel('Voltage ($\mu$Volts)');
# Fourier transform
FourierCoeff = np.fft.fft(eeg)/eeg.size
DC = [np.abs(FourierCoeff[0])]
amp = np.concatenate((DC, 2*np.abs(FourierCoeff[1:])))
# compute frequencies vector until half the sampling rate
Nsamples = int( np.floor(eeg.size/2) )
hz = np.linspace(0, sr/2., num = Nsamples + 1 )
print('Spectral resolution = %2.4f Hz'%hz[1])
# Perform Welch's periodogram with hann window and 50% overlap
segment = int( 5*sr )
myhann = signal.get_window('hann', segment)
# obtain simply Power (uV^2) with Hann window and 50% overlap
myparams = dict(fs = sr, nperseg = segment, window = myhann,
noverlap = segment/2, scaling = 'spectrum', return_onesided=True)
freq, ps = signal.welch(x = eeg, **myparams)# units uV**2
ps = 2*ps # correct for negative frequencies
# obtain Power density (uV^2/Hz) with Hann window and 50% overlap
# to get back to simply power, divide by the segment length in seconds (five in our case)
myparams2 = dict(fs = sr, nperseg = segment, window = myhann,
noverlap = segment/2, scaling = 'density', return_onesided=True)
freq, psd = signal.welch(x = eeg, **myparams2)# units uV**2
psd = 2*psd
dfreq = freq[1]
print('Spectral resolution = %2.4f Hz'%dfreq)
# Plot the power spectrum
fig, ax = plt.subplots(1, 2, figsize=(16, 4))
ax[0].set_title("Amplitude spectrum (Fourier transform)")
ax[0].stem(hz,amp[:len(hz)], use_line_collection = True)
ax[0].plot(freq, np.sqrt(ps), color='brown', lw = 2)
ax[0].set_ylabel('Amplitude ($\mu V$)')
ax[1].set_title("Power density (Welch's periodogram)")
ax[1].plot(freq, psd, color='k', lw=2)
ax[1].set_ylabel('Power spectrum density \n($\mu V^2/Hz$)')
for myax in ax:
myax.set_xlabel('Frequency (Hz)')
myax.set_xlim(0,20)
myticks = list(range(6)) + list(range(10, 25,5))
myax.set_xticks(myticks)
# compute the signal at 1 Hz
print('Signal amplitude @1Hz = %2.4f uVolts'%amp[int(1/dhz)])
print('Signal power @1Hz = %2.4f uVolts^2'%ps[int(1/dfreq)])
print('Signal power density @1Hz = %2.4f uVolts^2/Hz'%psd[int(1/dfreq)])
np.sqrt(475.8246/5)
Explanation: Example of EEG with delta band
End of explanation
# compute delta-band only for power densities!!!
idx_delta = np.logical_and(freq >= 0.4, freq <=4)
delta_power = simps(psd[idx_delta], dx = dfreq)
total_power = simps(psd, dx = dfreq)
print('Absolute delta power: %.3f uV^2' % delta_power)
print('Relative delta power: %.3f ' % (delta_power/total_power))
# With scipy
segment = int(sr)
myparams = dict(nperseg = segment, noverlap = 0, return_onesided=True, scaling='spectrum')
freq, nseg, Sxx = signal.spectrogram(x = eeg, fs=sr, **myparams)  # use the EEG sampling rate
Sxx = 2*Sxx
print('Frequency resolution = %2.4f Hz'%freq[1])
len(freq), len(nseg), len(eeg)
fig, ax = plt.subplots(2,1, figsize = (16,8), constrained_layout=True)
ax[0].plot(time, eeg, lw = 1, color='C0')
ax[0].set_ylabel('Amplitude ($\mu V$)')
ax[0].set_title('EEG with spectrogram from scipy')
myamp = np.sqrt(Sxx)
# spectrum is a ContourSet object
dt_seg = 1 # 1-second segments (segment = sr samples); X below gives the segment centers in seconds
Z = myamp
Y = freq
X = 0.5+np.arange(len(nseg))
levels = 50
mycontour = ax[1].contourf(X,Y,Z,levels, cmap='jet')#,'linecolor','none')
# get the colormap
cbar = plt.colorbar(mycontour)#, boundaries=np.linspace(0,1,5))
cbar.ax.set_ylabel('Amplitude ($\mu$V)', rotation=90)
#cbar.set_ticks(np.arange(0,50,10))
#A working example (for any value range) with five ticks along the bar is:
m0=int(np.floor(np.min(myamp))) # colorbar min value
m4=int(np.ceil(np.max(myamp))) # colorbar max value
m1=int(1*(m4-m0)/4.0 + m0) # colorbar mid value 1
m2=int(2*(m4-m0)/4.0 + m0) # colorbar mid value 2
m3=int(3*(m4-m0)/4.0 + m0) # colorbar mid value 3
cbar.set_ticks([m0,m1,m2,m3,m4])
cbar.set_ticklabels([m0,m1,m2,m3,m4])
#cbar.set_ticks(np.arange(0, 1.1, 0.5))
ax[1].axhline(y = 0.5, linestyle='--', linewidth = 1.5, color='white')
ax[1].axhline(y = 5, linestyle='--', linewidth = 1.5, color='white')
ax[1].set_yticks(arange(0,45,5))
ax[1].set_ylim([0,30])
ax[1].set_ylabel('Frequency (Hz)')
for myax in ax:
myax.set_xticks(np.arange(0, 30, 5))
myax.set_xlim(0, 30)
myax.set_xlabel('Time (sec.)')
Explanation: The relative band (absolute power divided by the total power of the spectrum)
can be calculated irrespective of whether powers or power densities are given in
the Welch's spectrum.
End of explanation |
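As a quick sanity check on the arrays computed above, the ratio is the same whether it is formed from powers or from power densities (the window normalisation cancels out):
rel_from_psd = simps(psd[idx_delta], dx=dfreq) / simps(psd, dx=dfreq)
rel_from_ps = simps(ps[idx_delta], dx=dfreq) / simps(ps, dx=dfreq)
print(rel_from_psd, rel_from_ps)  # expected to match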
429 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
1 - Apply the K-means [1] and AgglomerativeClustering [2] algorithms to any dataset you like (recommendation
Step1: Question 1
Step2: Data Visualization
Step3: Applying K-Means
Step4: Evaluation Metrics
Step5: Question 2
Implementing the Elbow Method
Step6: Question 3 | Python Code:
import numpy as np
import pandas as pd
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
Explanation: Exercises
1 - Apply the K-means [1] and AgglomerativeClustering [2] algorithms to any dataset you like (recommendation: iris). Compare the results using cluster evaluation metrics (completeness and homogeneity, for example) [3].
[1] http://scikit-learn.org/stable/modules/clustering.html#k-means
[2] http://scikit-learn.org/0.17/modules/clustering.html#hierarchical-clustering
[3] http://scikit-learn.org/stable/modules/clustering.html#clustering-evaluation
2 - Which value of K (number of clusters) did you choose in the previous question? Implement the Elbow Method (do not use a library!) and find the most suitable K. Once you have found it, apply K-means again with the appropriate K.
Help: attributes of k-means
3 - After question 2, you applied the algorithm with an appropriate K. Recompute the metrics based on the cluster results obtained in the previous question and check whether the result improved.
Libraries
End of explanation
# Load the Wine Dataset (https://archive.ics.uci.edu/ml/datasets/Wine)
data = pd.read_csv("wine.data")
X = data.iloc[:,1:].values
y = data.iloc[:,0].values
# Pre-processing the data (for PCA)
X = (X - X.mean(axis=0)) / X.std(axis=0)
Explanation: Question 1:
Loading the Dataset (Wine)
End of explanation
# Plot a 3-dimensional visualization of the data
# Note that the data (in 3 dimensions) are heavily overlapping
pcaData = PCA(n_components=3).fit_transform(X)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(pcaData[:,0], pcaData[:,1], pcaData[:,2], c=y, cmap=plt.cm.Dark2)
plt.show()
Explanation: Data Visualization
End of explanation
# Create the KMeans object
kmeans = KMeans(n_clusters=2, random_state=0)
# Run the clustering
kmeans.fit(X)
clts = kmeans.predict(X)
# Plot a 3-dimensional visualization of the data, now with the clusters assigned by K-Means
# Compare this visualization with the plot in the cell above
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(pcaData[:,0], pcaData[:,1], pcaData[:,2], c=clts, cmap=plt.cm.Dark2)
plt.show()
Explanation: Applying K-Means
End of explanation
# We use three cluster evaluation metrics, based on the already labeled data:
# -> Homogeneity: score for the goal of having, in each cluster, only members of a single class
# -> Completeness: score for the goal of having all members of a given class assigned to the same cluster
# -> V-Measure: a score that combines Homogeneity and Completeness, equivalent to a metric known as NMI (Normalized Mutual Information).
homoScore = metrics.homogeneity_score(y, clts)
complScore = metrics.completeness_score(y, clts)
vMeasureScore = metrics.v_measure_score(y, clts)
print("### Avaliação ({0} Clusters) ###".format(kmeans.n_clusters))
print("Homogeneity: \t{0:.3}".format(homoScore))
print("Completeness: \t{0:.3}".format(complScore))
print("V-Measure: \t{0:.3}".format(vMeasureScore))
Explanation: Evaluation Metrics
End of explanation
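# --- Added sketch (not part of the original solution): the exercise also asks for
# AgglomerativeClustering, which is never applied in this notebook. A minimal,
# illustrative comparison against K-Means using the same metrics could look like this;
# the names agglo and aggloLabels are new, illustrative identifiers.
from sklearn.cluster import AgglomerativeClustering

agglo = AgglomerativeClustering(n_clusters=2)   # same K as the K-Means run above
aggloLabels = agglo.fit_predict(X)
print("Agglomerative Homogeneity: \t{0:.3}".format(metrics.homogeneity_score(y, aggloLabels)))
print("Agglomerative Completeness: \t{0:.3}".format(metrics.completeness_score(y, aggloLabels)))
print("Agglomerative V-Measure: \t{0:.3}".format(metrics.v_measure_score(y, aggloLabels)))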
# Elbow Method based on inertia (the sum of squared intra-cluster distances of each point)
numK = np.arange(1,10); inertias = []
for i in numK:
print(".", end="")
kmeans.n_clusters = i
kmeans.fit(X)
inertias.append(kmeans.inertia_)
# Plots
plt.figure()
plt.title("Elbow Method")
plt.xlabel("Num of Clusters"); plt.ylabel("Inertia")
plt.plot(numK, inertias, 'bo-')
plt.show()
Explanation: Question 2
Implementing the Elbow Method
End of explanation
# Run the clustering again, now with the selected number of clusters
kmeans.n_clusters = 3
kmeans.fit(X)
clts = kmeans.predict(X)
# Display the evaluation metrics
homoScore = metrics.homogeneity_score(y, clts)
complScore = metrics.completeness_score(y, clts)
vMeasureScore = metrics.v_measure_score(y, clts)
print("### Avaliação ({0} Clusters) ###".format(kmeans.n_clusters))
print("Homogeneity: \t{0:.3}".format(homoScore))
print("Completeness: \t{0:.3}".format(complScore))
print("V-Measure: \t{0:.3}".format(vMeasureScore))
# Plot a 3-dimensional visualization of the data, now with the clusters assigned by K-Means
# Compare this visualization with the previous plots
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(pcaData[:,0], pcaData[:,1], pcaData[:,2], c=clts, cmap=plt.cm.Dark2)
plt.show()
Explanation: Question 3
End of explanation |
430 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OpenStreetMap is an open project, which means it's free and everyone can use and edit it as they like. OpenStreetMap is a direct competitor of Google Maps. How can OpenStreetMap compete with the giant, you ask? It depends completely on crowdsourcing. There are lots of people around the world willingly updating the map, most of them fixing the map of their own country.
OpenStreetMap is powerful, but it relies heavily on human input, and its strength is also its downfall: wherever there is human input, there will be human error, so it is very error prone. I chose the whole area of Jakarta, the capital of Indonesia. This dataset is huge, over 250,000 examples. It's my hometown, and I'd like to help the community.
<!-- TEASER_END -->
Problems Encountered in the Map
When I opened the OpenStreetMap dataset, I noticed the following issues
Step1: Inconsistent phone number format
We also have inconsistent phone numbers
Step2: Overview of the data
You can see the file size of the dataset.
Step3: Show the top 5 contributing users
We can also find the top 5 contributing users. These users are ranked by the number of points they created in the map, sorted in descending order
Step4: Show the restaurant's name, the food they serve, and contact number | Python Code:
pipeline = [{'$match': {'address.street':{'$exists':1}}},
{'$project': {'_id': '$address.street'}},
{'$limit' : 5}]
result = db.jktosm.aggregate(pipeline)['result']
pprint.pprint(result)
Explanation: OpenStreetMap is an open project, which means it's free and everyone can use and edit it as they like. OpenStreetMap is a direct competitor of Google Maps. How can OpenStreetMap compete with the giant, you ask? It depends completely on crowdsourcing. There are lots of people around the world willingly updating the map, most of them fixing the map of their own country.
OpenStreetMap is powerful, but it relies heavily on human input, and its strength is also its downfall: wherever there is human input, there will be human error, so it is very error prone. I chose the whole area of Jakarta, the capital of Indonesia. This dataset is huge, over 250,000 examples. It's my hometown, and I'd like to help the community.
<!-- TEASER_END -->
Problems Encountered in the Map
When I opened the OpenStreetMap dataset, I noticed the following issues:
Street type abbreviations
Inconsistent phone number format
Street Type Abbreviations
Take the name of the street, for example. People like to abbreviate the type of the street.
Street becomes St. or st. In Indonesia, 'Jalan' (street in English) is also abbreviated as Jln, jln, or jl.
It may seem like a small thing, but a data scientist or web developer expects street names to follow a generic format.
'Jalan Sudirman' -> Jalan <name> -> name = Sudirman
'Jln Sudirman' -> Jalan <name> -> ERROR!
There are also some users that input the street name in two different ways, as a street address and as a full address. I merged all of these into the street address, which results in the following,
End of explanation
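# --- Added sketch (illustration only, not the actual cleaning code used to build the
# MongoDB collection): one possible way to normalise the abbreviated street types
# described above before importing the data. STREET_MAPPING and normalise_street are
# hypothetical names introduced for this example.
STREET_MAPPING = {'jln': 'Jalan', 'jln.': 'Jalan', 'jl': 'Jalan', 'jl.': 'Jalan', 'jalan': 'Jalan'}

def normalise_street(name):
    first, _, rest = name.partition(' ')
    expanded = STREET_MAPPING.get(first.lower())
    return '{} {}'.format(expanded, rest) if expanded else name

print(normalise_street('Jln Sudirman'))   # -> Jalan Sudirman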
pipeline = [{'$match': {'phone':{'$exists':1}}},
{'$project': {'_id': '$phone'}},
{'$limit' : 5}]
result = db.jktosm.aggregate(pipeline)['result']
pprint.pprint(result)
Explanation: Inconsistent phone number format
We also have inconsistent phone numbers:
{u'_id': u'021-720-0981209'}
{u'_id': u'(021) 7180317'}
{u'_id': u'081807217074'}
{u'_id': u'+62 857 4231 9136'}
This makes it difficult for any developer to parse them into a common format. Jakarta is in Indonesia, which has the country code +62. We see here that some users prefer to separate digits with dashes or spaces. Some users even wrap the country code or the city code (Jakarta: 21) in parentheses. We also see numbers prefixed with 0, which works when dialling within Indonesia, but not internationally.
So we have to convert these numbers into a common format. The numbers benefit from incorporating spaces: that way, if a developer uses the data, the phone number can be split into country code, city code, and the rest of the number. Since mobile numbers don't have a city code, we can leave them alone. We also can't prefix every number with the country code, since operator phone numbers, like McDonald's, don't need one. After solving all of these issues, the results are,
End of explanation
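# --- Added sketch (illustration only): a simplified normalisation of the phone formats
# shown above into '+62 <rest of number>'. It ignores the operator-number exception
# discussed in the text, and normalise_phone is a hypothetical helper, not the code
# actually used to clean the MongoDB collection.
def normalise_phone(raw):
    digits = ''.join(ch for ch in raw if ch.isdigit())
    if digits.startswith('62'):        # already carries the country code
        digits = digits[2:]
    elif digits.startswith('0'):       # national prefix -> drop it
        digits = digits[1:]
    return '+62 ' + digits

for example in ['021-720-0981209', '(021) 7180317', '081807217074', '+62 857 4231 9136']:
    print(normalise_phone(example))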
!ls -lh dataset/jakarta*
Explanation: Overview of the data
You can see the file size of the dataset.
End of explanation
pipeline = [
{'$match': {'created.user':{'$exists':1}}},
{'$group': {'_id':'$created.user',
'count':{'$sum':1}}},
{'$sort': {'count':-1}},
{'$limit' : 5}
]
result = db.jktosm.aggregate(pipeline)['result']
pprint.pprint(result)
Explanation: Show the top 5 contributing users
We can also find the top 5 contributing users. These users are ranked by the number of points they created in the map, sorted in descending order
End of explanation
pipeline = [{'$match': {'amenity':'restaurant',
'name':{'$exists':1},
'cuisine':{'$exists':1},
'phone':{'$exists':1}}},
{'$project':{'_id':'$name',
'cuisine':'$cuisine',
'contact':'$phone'}}]
result = db.jktosm.aggregate(pipeline)['result']
pprint.pprint(result)
Explanation: Show the restaurant's name, the food they serve, and contact number
End of explanation |
431 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iris Dataset
From Wikipedia
Step1: read_html
Wikipedia has the same dataset as a html table at https
Step2: Plotting
Let's use pandas to plot the sepal_length vs the petal_length.
Step3: It would be nice to encode by color and plot all combinations of values, but this isn't easy with matplotlib. Instead, let's use seaborn (conda install seaborn).
Step4: Exercise
Visit the https
Step5: Classification
Let's say that we are an amateur botanist and we'd like to determine the species of Iris in our front yard, but all we have available to us to make that classification is this dataset and a ruler.
Approach
This is a classic machine learning / classification problem where we want to use a collection of "labeled" data to help us sort through new data that we receive. In this case, the new data is a set of four measurements for a flower in our yard.
Because we have labeled data, this is a "supervised learning" problem. If we did not know which species each point in the dataset belonged to, we could still use machine learning for "unsupervised learning".
Let's reimport the data using scikit learn.
Step6: Try Different Classifiers
Step7: Which Classifier is Best?
First, let's predict the species from the measurements. Because the classifier is clearly not perfect, we expect some mis-classifications.
Step8: Inaccuracy Score
Because we only have two classes, we can find the accuracy by taking the mean of the magnitude of the difference. This value is the percentage of the time we are inaccurate. A lower score is better.
Step9: Exercise
In the above code we excluded species==0 and we only classified based on the sepal dimensions. Complete the following
Step10: Clustering
Instead of using the labels, we could ignore the labels and do blind clustering on the dataset. Let's try that with sklearn.
Step11: Visualize Clusters
Now let's visualize how we did. We'd hope that the cluster colors would be as well-separated as the original data labels.
Step12: Accuracy
The plot looks good, but it isn't clear how good the labels are until we compare them with the true labels.
Step13: Exercise
Visit http | Python Code:
import pandas as pd
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
df = pd.read_csv(url,names=['sepal_length',
'sepal_width',
'petal_length',
'petal_width',
'species'])
df.head()
Explanation: Iris Dataset
From Wikipedia:
The Iris flower data set or Fisher's Iris data set is a multivariate data set introduced by the British statistician and biologist Ronald Fisher in his 1936 paper The use of multiple measurements in taxonomic problems as an example of linear discriminant analysis. It is sometimes called Anderson's Iris data set because Edgar Anderson collected the data to quantify the morphologic variation of Iris flowers of three related species. Two of the three species were collected in the Gaspé Peninsula "all from the same pasture, and picked on the same day and measured at the same time by the same person with the same apparatus".
Pandas
Pandas is a library modeled after the R dataframe API that enables quick exploration and processing of heterogeneous data.
One of the many great things about pandas is that it has many functions for grabbing data--including functions for grabbing data from the internet. In the cell below, we grabbed data from https://archive.ics.uci.edu/ml/datasets/Iris, which has the data as a csv (without headers).
End of explanation
df_w = pd.read_html('https://en.wikipedia.org/wiki/Iris_flower_data_set',header=0)[0]
df_w.head()
Explanation: read_html
Wikipedia has the same dataset as a html table at https://en.wikipedia.org/wiki/Iris_flower_data_set. Let's use http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html to grab the data directly from Wikipedia.
You might have to run the following command first:
conda install html5lib BeautifulSoup4 lxml
End of explanation
import pylab as plt
%matplotlib inline
plt.scatter(df.sepal_length, df.petal_length)
Explanation: Plotting
Let's use pandas to plot the sepal_length vs the petal_length.
End of explanation
import seaborn as sns
sns.pairplot(df,vars=['sepal_length',
'sepal_width',
'petal_length',
'petal_width'],hue='species')
sns.swarmplot(x="species", y="petal_length", data=df)
from pandas.plotting import radviz
radviz(df, "species",)
Explanation: It would be nice to encode by color and plot all combinations of values, but this isn't easy with matplotlib. Instead, let's use seaborn (conda install seaborn).
End of explanation
## Plot 1 Here
## Plot 2 Here
Explanation: Exercise
Visit the https://seaborn.pydata.org/ and make two new plots with this Iris dataset using seaborn functions we haven't used above.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, svm
iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
# keep only two features and keep only two species
X = X[y != 0, :2]
y = y[y != 0]
X,y, X.shape
Explanation: Classification
Let's say that we are an amateur botanist and we'd like to determine the species of Iris in our front yard, but all we have available to us to make that classification is this dataset and a ruler.
Approach
This is a classic machine learning / classification problem where we want to use a collection of "labeled" data to help us sort through new data that we receive. In this case, the new data is a set of four measurements for a flower in our yard.
Because we have labeled data, this is a "supervised learning" problem. If we did not know which species each point in the dataset belonged to, we could still use machine learning for "unsupervised learning".
Let's reimport the data using scikit learn.
End of explanation
# fit the model
for fig_num, kernel in enumerate(('linear', 'rbf', 'poly')):
clf = svm.SVC(kernel=kernel, gamma=10)
clf.fit(X, y)
plt.figure(fig_num)
plt.clf()
plt.scatter(X[:, 0], X[:, 1], c=y, zorder=10)
plt.axis('tight')
x_min = X[:, 0].min()
x_max = X[:, 0].max()
y_min = X[:, 1].min()
y_max = X[:, 1].max()
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.Paired)
plt.contour(XX, YY, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
levels=[-.5, 0, .5])
plt.title(kernel)
plt.show()
Explanation: Try Different Classifiers
End of explanation
y_pred = clf.predict(X)
print(y,y_pred)
Explanation: Which Classifier is Best?
First, let's predict the species from the measurements. Because the classifier is clearly not perfect, we expect some mis-classifications.
End of explanation
for kernel in ('linear', 'rbf', 'poly'):
clf = svm.SVC(kernel=kernel, gamma=10)
clf.fit(X, y)
y_pred = clf.predict(X)
print(kernel,np.mean(np.abs(y-y_pred))*100,'%')
Explanation: Inaccuracy Score
Because we only have two classes, we can find the accuracy by taking the mean of the magnitude of the difference. This value is the percentage of the time we are inaccurate. A lower score is better.
End of explanation
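# --- Added sketch (not part of the original notebook): the scores above are computed on
# the same data the classifiers were fit on. A held-out split gives a fairer comparison;
# this assumes a scikit-learn version that provides sklearn.model_selection.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
for kernel in ('linear', 'rbf', 'poly'):
    clf = svm.SVC(kernel=kernel, gamma=10)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    print(kernel, np.mean(np.abs(y_test - y_pred))*100, '%')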
## species==1
## petals
Explanation: Exercise
In the above code we excluded species==0 and we only classified based on the sepal dimensions. Complete the following:
Copy the code cells from above and exclude species==1
Copy the code cells from above and use the petal dimensions for classification
For each case, use the inaccuracy score to see how good the classification works.
End of explanation
from sklearn.cluster import KMeans, DBSCAN
iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
estimators = {'k_means_iris_3': KMeans(n_clusters=3),
'k_means_iris_8': KMeans(n_clusters=8),
'dbscan_iris_1': DBSCAN(eps=1)}
for name, est in estimators.items():
est.fit(X)
labels = est.labels_
df[name] = labels
Explanation: Clustering
Instead of using the labels, we could ignore the labels and do blind clustering on the dataset. Let's try that with sklearn.
End of explanation
sns.pairplot(df,vars=['sepal_length',
'sepal_width',
'petal_length',
'petal_width'],hue='dbscan_iris_1')
Explanation: Visualize Clusters
Now let's visualize how we did. We'd hope that the cluster colors would be as well-separated as the original data labels.
End of explanation
from sklearn.metrics import homogeneity_score
for name, est in estimators.items():
print('completeness', name, homogeneity_score(df[name],df['species']))
print('homogeneity', name, homogeneity_score(df['species'],df[name]))
Explanation: Accuracy
The plot looks good, but it isn't clear how good the labels are until we compare them with the true labels.
End of explanation
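# --- Added note (illustration only): the loop above reuses homogeneity_score with swapped
# arguments to obtain completeness. scikit-learn also ships the direct functions, so an
# equivalent, more explicit version could look like this:
from sklearn.metrics import completeness_score, v_measure_score
for name in estimators:
    print(name,
          'completeness', completeness_score(df['species'], df[name]),
          'v-measure', v_measure_score(df['species'], df[name]))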
## Algo One
## Algo Two
Explanation: Exercise
Visit http://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_comparison.html and add two more clustering algorithms of your choice to the comparisons above.
End of explanation |
432 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DataPack
Structure
matchzoo.DataPack is a MatchZoo native data structure that most MatchZoo data handling processes build upon. A matchzoo.DataPack consists of three parts
Step1: The main reason for using a matchzoo.DataPack instead of pandas.DataFrame is efficiency
Step2: Notice that frame is not a method, but a property that returns a matchzoo.DataPack.FrameView object.
Step3: This view reflects changes in the data pack, and can be called to create a pandas.DataFrame at any time.
Step4: Slicing a DataPack
You may use [] to slice a matchzoo.DataPack similar to slicing a list. This also returns a shallow copy of the sliced data like slicing a list.
Step5: A sliced data pack's relation will directly reflect the slicing.
Step6: In addition, left and right will be processed so only relevant information are kept.
Step7: It is also possible to slice a frame view object.
Step8: And this is equivalent to slicing the data pack first, then the frame, since both of them are based on the relation column.
Step9: Slicing is extremely useful for partitioning data for training vs testing.
Step10: Transforming Texts
Use apply_on_text to transform texts in a matchzoo.DataPack. Check the documentation for more information.
Step11: Since adding a column indicating text length is a quite common usage, you may simply do
Step12: To one-hot encode the labels
Step13: Building Your own DataPack
Use matchzoo.pack to build your own data pack. Check documentation for more information.
Step14: Unpack
Format data in a way so that MatchZoo models can directly fit it. For more details, consult matchzoo/tutorials/models.ipynb.
Step15: Data Sets
MatchZoo incorporates various datasets that can be loaded as MatchZoo native data structures.
Step16: The toy dataset doesn't need to be downloaded and can be directly used. It's the best choice to get things rolling.
Step17: Other larger datasets will be automatically downloaded the first time you use it. Run the following lines to trigger downloading.
Step18: Preprocessing
Preprocessors
matchzoo.preprocessors are responsible for transforming data into the correct forms that matchzoo.models expect. BasicPreprocessor is used for models with common forms, and some other models have customized preprocessors made just for them.
Step19: When in doubt, use the default preprocessor a model class provides.
Step20: A preprocessor should be used in two steps. First, fit, then, transform. fit collects information into context, which includes everything the preprocessor needs to transform together with other useful information for later use. fit will only change the preprocessor's inner state but not the input data. In contrast, transform returns a modified copy of the input data without changing the preprocessor's inner state.
Step21: Processor Units
Preprocessors utilize mz.processor_units to transform data. Processor units correspond to specific transformations and you may use them independently to preprocess a data pack.
Step22: Or use chain_transform to apply multiple processor units at one time
Step23: Notice that some processor units are stateful so we have to fit them before using their transform.
Step24: Such a StatefulProcessorUnit will save information in its state when fit, similar to the context of a preprocessor. In our case here, the vocabulary unit will save a term-to-index mapping and an index-to-term mapping, called term_index and index_term respectively. Then we can proceed to transform a data pack.
Step25: Since this usage is quite common, we wrapped a function to do the same thing. For other stateful units, consult their documentations and try mz.build_unit_from_data_pack.
Step26: DataGenerator
Some MatchZoo models (e.g. DRMM, MatchPyramid) require batch-wise information for training, so using fit_generator instead of fit is necessary. In addition, sometimes your memory just can't hold all the transformed data, so delaying part of the preprocessing is necessary.
MatchZoo provides DataGenerator as an alternative. Instead of fit, you may do a fit_generator that takes a data generator that unpack data on the fly.
Step27: The data preprocessing of DSSM eats a lot of memory, but we can workaround that using the callback hook of DataGenerator.
Step28: In addition, losses like RankHingeLoss and RankCrossEntropyLoss have to be used with DataGenerator with mode='pair', since batch-wise information are needed and computed on the fly. | Python Code:
data_pack = mz.datasets.toy.load_data()
data_pack.left.head()
data_pack.right.head()
data_pack.relation.head()
Explanation: DataPack
Structure
matchzoo.DataPack is a MatchZoo native data structure that most MatchZoo data handling processes build upon. A matchzoo.DataPack consists of three parts: left, right and relation, each one of is a pandas.DataFrame.
End of explanation
data_pack.frame().head()
Explanation: The main reason for using a matchzoo.DataPack instead of pandas.DataFrame is efficiency: we save space from storing duplicate texts and save time from processing duplicate texts.
DataPack.FrameView
However, since a big table is easier to understand and manage, we provide the frame that merges three parts into a single pandas.DataFrame when called.
End of explanation
type(data_pack.frame)
Explanation: Notice that frame is not a method, but a property that returns a matchzoo.DataPack.FrameView object.
End of explanation
frame = data_pack.frame
data_pack.relation['label'] = data_pack.relation['label'] + 1
frame().head()
Explanation: This view reflects changes in the data pack, and can be called to create a pandas.DataFrame at any time.
End of explanation
data_slice = data_pack[5:10]
Explanation: Slicing a DataPack
You may use [] to slice a matchzoo.DataPack similar to slicing a list. This also returns a shallow copy of the sliced data like slicing a list.
End of explanation
data_slice.relation
Explanation: A sliced data pack's relation will directly reflect the slicing.
End of explanation
data_slice.left
data_slice.right
Explanation: In addition, left and right will be processed so only relevant information are kept.
End of explanation
data_pack.frame[5:10]
Explanation: It is also possible to slice a frame view object.
End of explanation
data_slice.frame() == data_pack.frame[5:10]
Explanation: And this is equivalent to slicing the data pack first, then the frame, since both of them are based on the relation column.
End of explanation
num_train = int(len(data_pack) * 0.8)
data_pack.shuffle(inplace=True)
train_slice = data_pack[:num_train]
test_slice = data_pack[num_train:]
Explanation: Slicing is extremely useful for partitioning data for training vs testing.
End of explanation
data_slice.apply_on_text(len).frame()
data_slice.apply_on_text(len, rename=('left_length', 'right_length')).frame()
Explanation: Transforming Texts
Use apply_on_text to transform texts in a matchzoo.DataPack. Check the documentation for more information.
End of explanation
data_slice.append_text_length().frame()
Explanation: Since adding a column indicating text length is a quite common usage, you may simply do:
End of explanation
data_pack.relation['label'] = data_pack.relation['label'].astype(int)
data_pack.one_hot_encode_label(num_classes=3).frame().head()
Explanation: To one-hot encode the labels:
End of explanation
data = pd.DataFrame({
'text_left': list('ARSAARSA'),
'text_right': list('arstenus')
})
my_pack = mz.pack(data)
my_pack.frame()
Explanation: Building Your own DataPack
Use matchzoo.pack to build your own data pack. Check documentation for more information.
End of explanation
x, y = data_pack[:3].unpack()
x
y
Explanation: Unpack
Format data in a way so that MatchZoo models can directly fit it. For more details, consult matchzoo/tutorials/models.ipynb.
End of explanation
mz.datasets.list_available()
Explanation: Data Sets
MatchZoo incorporates various datasets that can be loaded as MatchZoo native data structures.
End of explanation
toy_train_rank = mz.datasets.toy.load_data()
toy_train_rank.frame().head()
toy_dev_classification, classes = mz.datasets.toy.load_data(
stage='train', task='classification', return_classes=True)
toy_dev_classification.frame().head()
classes
Explanation: The toy dataset doesn't need to be downloaded and can be directly used. It's the best choice to get things rolling.
End of explanation
wiki_dev_entailment_rank = mz.datasets.wiki_qa.load_data(stage='dev')
wiki_dev_entailment_rank.frame().head()
snli_test_classification, classes = mz.datasets.snli.load_data(
stage='test', task='classification', return_classes=True)
snli_test_classification.frame().head()
classes
Explanation: Other larger datasets will be automatically downloaded the first time you use it. Run the following lines to trigger downloading.
End of explanation
mz.preprocessors.list_available()
Explanation: Preprocessing
Preprocessors
matchzoo.preprocessors are responsible for transforming data into the correct forms that matchzoo.models expect. BasicPreprocessor is used for models with common forms, and some other models have customized preprocessors made just for them.
End of explanation
preprocessor = mz.models.Naive.get_default_preprocessor()
Explanation: When in doubt, use the default preprocessor a model class provides.
End of explanation
train_raw = mz.datasets.toy.load_data('train', 'ranking')
test_raw = mz.datasets.toy.load_data('test', 'ranking')
preprocessor.fit(train_raw)
preprocessor.context
train_preprocessed = preprocessor.transform(train_raw)
test_preprocessed = preprocessor.transform(test_raw)
model = mz.models.Naive()
model.guess_and_fill_missing_params()
model.build()
model.compile()
x_train, y_train = train_preprocessed.unpack()
model.fit(x_train, y_train)
x_test, y_test = test_preprocessed.unpack()
model.evaluate(x_test, y_test)
Explanation: A preprocessor should be used in two steps. First, fit, then, transform. fit collects information into context, which includes everything the preprocessor needs to transform together with other useful information for later use. fit will only change the preprocessor's inner state but not the input data. In contrast, transform returns a modified copy of the input data without changing the preprocessor's inner state.
End of explanation
data_pack = mz.datasets.toy.load_data()
data_pack.frame().head()
tokenizer = mz.preprocessors.units.Tokenize()
data_pack.apply_on_text(tokenizer.transform, inplace=True)
data_pack.frame[:5]
lower_caser = mz.preprocessors.units.Lowercase()
data_pack.apply_on_text(lower_caser.transform, inplace=True)
data_pack.frame[:5]
Explanation: Processor Units
Preprocessors utilize mz.processor_units to transform data. Processor units correspond to specific transformations and you may use them independently to preprocess a data pack.
End of explanation
data_pack = mz.datasets.toy.load_data()
chain = mz.chain_transform([mz.preprocessors.units.Tokenize(),
mz.preprocessors.units.Lowercase()])
data_pack.apply_on_text(chain, inplace=True)
data_pack.frame[:5]
Explanation: Or use chain_transform to apply multiple processor units at one time
End of explanation
mz.preprocessors.units.Vocabulary.__base__
vocab_unit = mz.preprocessors.units.Vocabulary()
texts = data_pack.frame()[['text_left', 'text_right']]
all_tokens = texts.sum().sum()
vocab_unit.fit(all_tokens)
Explanation: Notice that some processor units are stateful so we have to fit them before using their transform.
End of explanation
for vocab in 'how', 'are', 'glacier':
print(vocab, vocab_unit.state['term_index'][vocab])
data_pack.apply_on_text(vocab_unit.transform, inplace=True)
data_pack.frame()[:5]
Explanation: Such a StatefulProcessorUnit will save information in its state when fit, similar to the context of a preprocessor. In our case here, the vocabulary unit will save a term-to-index mapping and an index-to-term mapping, called term_index and index_term respectively. Then we can proceed to transform a data pack.
End of explanation
data_pack = mz.datasets.toy.load_data()
vocab_unit = mz.build_vocab_unit(data_pack)
data_pack.apply_on_text(vocab_unit.transform).frame[:5]
Explanation: Since this usage is quite common, we wrapped a function to do the same thing. For other stateful units, consult their documentation and try mz.build_unit_from_data_pack.
End of explanation
x_train, y_train = train_preprocessed.unpack()
model.fit(x_train, y_train)
data_gen = mz.DataGenerator(train_preprocessed)
model.fit_generator(data_gen)
Explanation: DataGenerator
Some MatchZoo models (e.g. DRMM, MatchPyramid) require batch-wise information for training, so using fit_generator instead of fit is necessary. In addition, sometimes your memory just can't hold all the transformed data, so delaying part of the preprocessing is necessary.
MatchZoo provides DataGenerator as an alternative. Instead of fit, you may do a fit_generator that takes a data generator that unpacks data on the fly.
End of explanation
preprocessor = mz.preprocessors.DSSMPreprocessor(with_word_hashing=False)
data = preprocessor.fit_transform(train_raw, verbose=0)
dssm = mz.models.DSSM()
dssm.params['task'] = mz.tasks.Ranking()
dssm.params.update(preprocessor.context)
dssm.build()
dssm.compile()
term_index = preprocessor.context['vocab_unit'].state['term_index']
hashing_unit = mz.preprocessors.units.WordHashing(term_index)
data_generator = mz.DataGenerator(
data,
batch_size=4,
callbacks=[
mz.data_generator.callbacks.LambdaCallback(
on_batch_data_pack=lambda dp: dp.apply_on_text(
hashing_unit.transform, inplace=True, verbose=0)
)
]
)
dssm.fit_generator(data_generator)
Explanation: The data preprocessing of DSSM eats a lot of memory, but we can work around that using the callback hook of DataGenerator.
End of explanation
num_neg = 4
task = mz.tasks.Ranking(loss=mz.losses.RankHingeLoss(num_neg=num_neg))
preprocessor = model.get_default_preprocessor()
train_processed = preprocessor.fit_transform(train_raw)
model = mz.models.Naive()
model.params['task'] = task
model.params.update(preprocessor.context)
model.build()
model.compile()
data_gen = mz.DataGenerator(
train_processed,
mode='pair',
num_neg=num_neg,
num_dup=2,
batch_size=32
)
model.fit_generator(data_gen)
Explanation: In addition, losses like RankHingeLoss and RankCrossEntropyLoss have to be used with DataGenerator with mode='pair', since batch-wise information is needed and computed on the fly.
End of explanation |
433 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Uncertainty quantification
R.A. Collenteur, University of Graz, WIP (May-2021)
In this notebook it is shown how to compute the uncertainty of the model simulation using the built-in uncertainty quantification options of Pastas.
Confidence interval of simulation
Prediction interval of simulation
Confidence interval of step response
Confidence interval of block response
Confidence interval of contribution
Custom confidence intervals
To compute the confidence intervals, parameter sets are drawn from a multivariate normal distribution based on the Jacobian matrix obtained during parameter optimization. This method to quantify uncertainties has some underlying assumptions on the model residuals (or noise) that should be checked. This notebook only deals with parameter uncertainties and not with model structure uncertainties.
Step1: Create a model
We first create a toy model to simulate the groundwater levels in southeastern Austria. We will use this model to illustrate how the different methods for uncertainty quantification can be used.
Step2: Diagnostic Checks
Before we perform the uncertainty quantification, we should check if the underlying statistical assumptions are met. We refer to the notebook on Diagnostic checking for more details on this.
Step3: Confidence intervals
After the model is calibrated, a fit attribute is added to the Pastas Model object (ml.fit). This object contains information about the optimizations (e.g., the jacobian matrix) and a number of methods that can be used to quantify uncertainties.
Step4: Prediction interval
Step5: Uncertainty of step response
Step6: Uncertainty of block response
Step7: Uncertainty of the contributions
Step8: Custom Confidence intervals
It is also possible to compute the confidence intervals manually, for example to estimate the uncertainty in the recharge or statistics (e.g., SGI, NSE). We can call ml.fit.get_parameter_sample to obtain random parameter samples from a multivariate normal distribution using the optimal parameters and the covariance matrix. Next, we use the parameter sets to obtain multiple simulations of 'something', here the recharge.
Step9: Uncertainty of the NSE
The code pattern shown above can be used for many types of uncertainty analyses. Another example is provided below, where we compute the uncertainty of the Nash-Sutcliffe efficiency.
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt
ps.set_log_level("ERROR")
ps.show_versions()
Explanation: Uncertainty quantification
R.A. Collenteur, University of Graz, WIP (May-2021)
In this notebook it is shown how to compute the uncertainty of the model simulation using the built-in uncertainty quantification options of Pastas.
Confidence interval of simulation
Prediction interval of simulation
Confidence interval of step response
Confidence interval of block response
Confidence interval of contribution
Custom confidence intervals
To compute the confidence intervals, parameter sets are drawn from a multivariate normal distribution based on the Jacobian matrix obtained during parameter optimization. This method to quantify uncertainties has some underlying assumptions on the model residuals (or noise) that should be checked. This notebook only deals with parameter uncertainties and not with model structure uncertainties.
End of explanation
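# --- Added sketch (conceptual only, not part of the original notebook): drawing parameter
# samples from a multivariate normal distribution, as described above, conceptually boils
# down to np.random.multivariate_normal(optimal_parameters, covariance_matrix, size=n).
# Pastas wraps this (plus bookkeeping) in ml.fit.get_parameter_sample, which is used later
# in this notebook. The values of mu and cov below are purely hypothetical.
import numpy as np
mu = np.array([1.0, 0.5])                       # hypothetical optimal parameters
cov = np.array([[0.04, 0.01], [0.01, 0.09]])    # hypothetical covariance matrix
samples = np.random.multivariate_normal(mu, cov, size=5)
print(samples)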
gwl = pd.read_csv("data_wagna/head_wagna.csv", index_col=0, parse_dates=True,
squeeze=True, skiprows=2).loc["2006":].iloc[0::10]
evap = pd.read_csv("data_wagna/evap_wagna.csv", index_col=0, parse_dates=True,
squeeze=True, skiprows=2)
prec = pd.read_csv("data_wagna/rain_wagna.csv", index_col=0, parse_dates=True,
squeeze=True, skiprows=2)
# Model settings
tmin = pd.Timestamp("2007-01-01") # Needs warmup
tmax = pd.Timestamp("2016-12-31")
ml = ps.Model(gwl)
sm = ps.RechargeModel(prec, evap, recharge=ps.rch.FlexModel(),
rfunc=ps.Exponential, name="rch")
ml.add_stressmodel(sm)
# Add the ARMA(1,1) noise model and solve the Pastas model
ml.add_noisemodel(ps.ArmaModel())
ml.solve(tmin=tmin, tmax=tmax, noise=True)
Explanation: Create a model
We first create a toy model to simulate the groundwater levels in southeastern Austria. We will use this model to illustrate how the different methods for uncertainty quantification can be used.
End of explanation
ml.plots.diagnostics();
Explanation: Diagnostic Checks
Before we perform the uncertainty quantification, we should check if the underlying statistical assumptions are met. We refer to the notebook on Diagnostic checking for more details on this.
End of explanation
ci = ml.fit.ci_simulation(alpha=0.05, n=1000)
ax = ml.plot(figsize=(10,3));
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="lightgray")
ax.legend(["Observations", "Simulation", "95% Confidence interval"], ncol=3, loc=2)
Explanation: Confidence intervals
After the model is calibrated, a fit attribute is added to the Pastas Model object (ml.fit). This object contains information about the optimizations (e.g., the jacobian matrix) and a number of methods that can be used to quantify uncertainties.
End of explanation
ci = ml.fit.prediction_interval(n=1000)
ax = ml.plot(figsize=(10,3));
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="lightgray")
ax.legend(["Observations", "Simulation", "95% Prediction interval"], ncol=3, loc=2)
Explanation: Prediction interval
End of explanation
ci = ml.fit.ci_step_response("rch")
ax = ml.plots.step_response(figsize=(6,2))
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="lightgray")
ax.legend(["Simulation", "95% Confidence interval"], ncol=3, loc=4)
Explanation: Uncertainty of step response
End of explanation
ci = ml.fit.ci_block_response("rch")
ax = ml.plots.block_response(figsize=(6,2))
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="lightgray")
ax.legend(["Simulation", "95% Confidence interval"], ncol=3, loc=1)
Explanation: Uncertainty of block response
End of explanation
ci = ml.fit.ci_contribution("rch")
r = ml.get_contribution("rch")
ax = r.plot(figsize=(10,3))
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="lightgray")
ax.legend(["Simulation", "95% Confidence interval"], ncol=3, loc=1)
plt.tight_layout()
Explanation: Uncertainty of the contributions
End of explanation
params = ml.fit.get_parameter_sample(n=1000, name="rch")
data = {}
# Here we run the model n times with different parameter samples
for i, param in enumerate(params):
data[i] = ml.stressmodels["rch"].get_stress(p=param)
df = pd.DataFrame.from_dict(data, orient="columns").loc[tmin:tmax].resample("A").sum()
ci = df.quantile([0.025, .975], axis=1).transpose()
r = ml.get_stress("rch").resample("A").sum()
ax = r.plot.bar(figsize=(10,2), width=0.5, yerr=[r-ci.iloc[:,0], ci.iloc[:,1]-r])
ax.set_xticklabels(labels=r.index.year, rotation=0, ha='center')
ax.set_ylabel("Recharge [mm a$^{-1}$]")
ax.legend(ncol=3);
Explanation: Custom Confidence intervals
It is also possible to compute the confidence intervals manually, for example to estimate the uncertainty in the recharge or statistics (e.g., SGI, NSE). We can call ml.fit.get_parameter_sample to obtain random parameter samples from a multivariate normal distribution using the optimal parameters and the covariance matrix. Next, we use the parameter sets to obtain multiple simulations of 'something', here the recharge.
End of explanation
params = ml.fit.get_parameter_sample(n=1000)
data = []
# Here we run the model n times with different parameter samples
for i, param in enumerate(params):
sim = ml.simulate(p=param)
data.append(ps.stats.nse(obs=ml.observations(), sim=sim))
fig, ax = plt.subplots(1,1, figsize=(4,3))
plt.hist(data, bins=50, density=True)
ax.axvline(ml.stats.nse(), linestyle="--", color="k")
ax.set_xlabel("NSE [-]")
ax.set_ylabel("frequency [-]")
from scipy.stats import norm
import numpy as np
mu, std = norm.fit(data)
# Plot the PDF.
xmin, xmax = ax.set_xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
ax.plot(x, p, 'k', linewidth=2)
Explanation: Uncertainty of the NSE
The code pattern shown above can be used for many types of uncertainty analyses. Another example is provided below, where we compute the uncertainty of the Nash-Sutcliffe efficiency.
End of explanation |
434 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {c:i for i, c in enumerate(vocab)}
int_to_vocab = {i:c for i, c in enumerate(vocab)}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
dict_punctuation = {
'.':'||Period||',
',':'||Comma||',
'"':'||Quotation_Mark||',
';':'||Semicolon||',
'!':'||Exclamation_Mark||',
'?':'||Question_Mark||',
'(':'||Left_Parenthesis||',
')':'||Right_Parenthesis||',
'--':'||Dash||',
'\n':'||Return||'
}
return dict_punctuation
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
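# --- Added illustration (not part of the graded project code): this is roughly how the
# token dictionary above is typically applied before splitting the script into words —
# each symbol is replaced by ' ||Token|| ' so that str.split() treats punctuation as a
# separate "word". The example line is made up for demonstration.
example_line = 'Moe_Szyslak: Hey, Homer!'
for symbol, token in token_lookup().items():
    example_line = example_line.replace(symbol, ' {} '.format(token))
print(example_line.lower().split())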
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name = 'input')
targets = tf.placeholder(tf.int32, [None, None], name = 'targets')
learning_rate = tf.placeholder(tf.float32, name = 'learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following the tuple (Input, Targets, LearingRate)
End of explanation
lstm_layers = 1
keep_prob = 1
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
cell_state = cell.zero_state(batch_size, tf.float32)
cell_state = tf.identity(cell_state, name = 'initial_state')
return cell, cell_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name = 'final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, embed)
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
batch_count = len(int_text)//(batch_size * seq_length)
counter = (batch_size * seq_length)
final = []
row = []
for i in range(batch_count):
x = int_text[i * counter : (i + 1) * counter]
x = np.reshape(x, (batch_size, seq_length))
y = int_text[(i * counter) + 1 : ((i + 1) * counter) + 1]
y = np.reshape(y, (batch_size, seq_length))
row = np.array([x,y])
final.append(row)
return np.array(final)
# test = get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3)
# print(test)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
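The example output above implies a strided layout: row r of every batch walks through one contiguous stretch of the text. A compact numpy sketch that reproduces exactly that layout is shown below; note it differs slightly from the chunk-wise reshaping used in the implementation above, so treat it as an alternative, not a drop-in replacement.
# Alternative sketch of get_batches built on numpy reshaping and rolling.
def get_batches_np(int_text, batch_size, seq_length):
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch
    n_used = n_batches * words_per_batch
    xdata = np.array(int_text[:n_used])
    # targets are the inputs shifted by one; the element right after the used
    # slice (or the first word, if there is none) closes the last sequence
    ydata = np.roll(xdata, -1)
    ydata[-1] = int_text[n_used] if len(int_text) > n_used else int_text[0]
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))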
# Number of Epochs
num_epochs = 10
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 100
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.1
# Show stats for every n number of batches
show_every_n_batches = 64
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
return loaded_graph.get_tensor_by_name('input:0'), loaded_graph.get_tensor_by_name('initial_state:0'), loaded_graph.get_tensor_by_name('final_state:0'), loaded_graph.get_tensor_by_name('probs:0')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return int_to_vocab.get(np.argmax(probabilities))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
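The argmax choice above always picks the single most likely word, which can make the generated script repetitive. A common variation (an alternative, not the graded solution) is to sample from the distribution instead:
# Sample the next word id according to its probability instead of taking the argmax.
def pick_word_sampled(probabilities, int_to_vocab):
    probabilities = np.asarray(probabilities, dtype=np.float64)
    probabilities = probabilities / probabilities.sum()  # guard against rounding drift
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]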
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
435 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Silicon Forest Math Series<br/>Oregon Curriculum Network
Generators and Coroutines
Generator functions relate to generator expressions, objects which delay execution until pressed into service as iterators, a type with a __next__ method.
The for loop counts on an iterator for its target and implicitly applies iter(obj) to whatever it gets. Iterators save on memory as they know how to share an underlying iterable.
Some iterators also have the ability to function as objects able to pause and resume operations, without forgetting their internal state.
Step1: Any object expecting to be the target of a for loop, if not already an iterator, needs to either
Step3: The generator function below shows what most Pythonistas consider the base case for the keyword yield
Step4: Now lets take a look at pretty much the same iterator written a different way
Step5: Feeding an open-ended iterator to a for loop runs the risk of a "forever loop". The itertools module is filled with tools designed to rein in the infinite loopers. Just use islice(obj, start, stop) to keep the for loop finite
Step8: In the code below, we see (yield) turned around and used not to return objects, but to receive them. The parentheses are by convention and suggest a "mouth" bringing something in from outside.
A generator function's .send() method resumes its execution inside its body with an intake of some passed-in object from the caller, the argument to send. In the above example, two callers are nested, the inner one writing a prime to a file, the outer one feeding it next primes for said catalog.
The coroutine decorator may seem a bit mysterious at first. A generator function does not run any of its code upon being instanced. No yield statement has yet been encountered, so use of .send(obj) would raise an exception were obj any object but None.
The decorator has already fed a .send(None) to the generator in question, equivalent to feeding it to next() a first time. The decorator applies a first action somewhat like cocking a pistol, putting the generator or coroutine in the firing position, positioned at a first yield. | Python Code:
powers = (lambda x: pow(x, n) for n in range(-4,5))
phi = (1 + pow(5,0.5)) * 0.5 # golden proportion
for n, f in enumerate(powers, start=-4): # iterates through lambda expressions
print("phi ** {:2} == {:10.8f}".format(n, f(phi)))
Explanation: Silicon Forest Math Series<br/>Oregon Curriculum Network
Generators and Coroutines
Generator functions relate to generator expressions, objects which delay execution until pressed into service as iterators, a type with a __next__ method.
The for loop counts on an iterator for its target and implicitly applies iter(obj) to whatever it gets. Iterators save on memory as they know how to share an underlying iterable.
Some iterators also have the ability to function as objects able to pause and resume operations, without forgetting their internal state.
End of explanation
class Any:
def __init__(self):
self.__dict__ = {0:'scissors', 1:'paper', 2:'rock'}
def __getitem__(self, n): # enough for iter() to go on
if n == len(self.__dict__):
raise StopIteration # tells for loop when to stop
return self.__dict__[n]
for thing in Any():
print(thing)
Explanation: Any object expecting to be the target of a for loop, if not already an iterator, needs to either:
dedicate an __iter__ method to showing what to return when iter() gets applied, or
have a __getitem__ ready to go, as iter( ) is smart enough to make up an object wherein consecutive integers, starting from 0, go to any __getitem__ method.
End of explanation
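The first option mentioned above, dedicating an __iter__ method, was not demonstrated; here is a minimal sketch of that route (the class name and word list are just illustrative):
# Simplest possible __iter__: hand back an iterator over internal data
# and let the for loop drive it.
class AnyIter:
    def __init__(self):
        self.stuff = ['scissors', 'paper', 'rock']
    def __iter__(self):
        return iter(self.stuff)
for thing in AnyIter():
    print(thing)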
import pprint
def primes():
generate successive prime numbers (trial by division)
candidate = 1
_primes_so_far = [2] # first prime, only even prime
yield _primes_so_far[0] # share it!
while True:
candidate += 2 # check odds only from now on
for prev in _primes_so_far:
if prev**2 > candidate:
yield candidate # new prime!
_primes_so_far.append(candidate)
break
if not divmod(candidate, prev)[1]: # no remainder!
break # done looping
p = primes() # generator function based iterator
pp = pprint.PrettyPrinter(width=40, compact=True)
pp.pprint([next(p) for _ in range(50)]) # next 50 primes please!
Explanation: The generator function below shows what most Pythonistas consider the base case for the keyword yield: using it much like return, to provide an object to the generator function's caller, in this case a next prime number, going in sequence.
The trial-by-division algorithm requires keeping a growing sequence of successive primes, and using them to test new candidates. After taking care of the only even prime, 2, it makes sense to jump forward through the odds, screening out all the composites.
End of explanation
class Primes:
def __init__(self):
self.candidate = 1
self._primes_so_far = [2] # first prime, only even prime
def __iter__(self):
return self
def __next__(self):
while True:
self.candidate += 2 # check odds only from now on
for prev in self._primes_so_far:
if prev**2 > self.candidate:
self._primes_so_far.append(self.candidate)
return self._primes_so_far[-2]
if not divmod(self.candidate, prev)[1]: # no remainder!
break
pp = pprint.PrettyPrinter(width=40, compact=True)
p = Primes() # class based iterator
pp.pprint([next(p) for _ in range(30)]) # next 30 primes please!
Explanation: Now lets take a look at pretty much the same iterator written a different way: with an explicit __iter__ and __next__.
End of explanation
from itertools import islice
p = Primes()
for n in islice(p, 0, 20):
print(n, end=", ")
Explanation: Feeding an open-ended iterator to a for loop runs the risk of a "forever loop". The itertools module is filled with tools designed to rein in the infinite loopers. Just use islice(obj, start, stop) to keep the for loop finite:
End of explanation
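itertools has other fences for infinite iterators as well; takewhile, for instance, stops on a value bound rather than a count (the cutoff of 100 is arbitrary):
# Stop at the first prime >= 100 instead of after a fixed number of primes.
from itertools import takewhile
for n in takewhile(lambda x: x < 100, Primes()):
    print(n, end=", ")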
# -*- coding: utf-8 -*-
Created on Thu Oct 13 13:48:52 2016
@author: Kirby Urner
David Beazley:
https://youtu.be/Z_OAlIhXziw?t=23m42s
Trial by division, but this time the primes coroutine acts
more as a filter, passing qualified candidates through to
print_me, which writes to a file.
import pprint
def coroutine(func):
Advances decorated generator function to the first yield
def start(*args, **kwargs):
cr = func(*args, **kwargs)
cr.send(None) # or next(cr) or cr.__next__()
return cr
return start
@coroutine
def print_me(file_name):
with open(file_name, 'w') as file_obj:
while True:
to_print = (yield)
file_obj.write(str(to_print)+"\n")
@coroutine
def primes(target):
_primes_so_far = [2]
target.send(2)
while True:
candidate = (yield)
for prev in _primes_so_far:
if not divmod(candidate, prev)[1]:
break
if prev**2 > candidate:
_primes_so_far.append(candidate)
target.send(candidate)
break
output = print_me("primes.txt")
p = primes(output)
for x in range(3, 200, 2): # test odds 3-199
p.send(x)
with open("primes.txt", 'r') as file_obj:
print(", ".join(file_obj.read().split("\n"))[:-2])
Explanation: In the code below, we see (yield) turned around and used not to return objects, but to receive them. The parentheses are by convention and suggest a "mouth" bringing something in from outside.
A generator function's .send() method resumes its execution inside its body with an intake of some passed-in object from the caller, the argument to send. In the above example, two callers are nested, the inner one writing a prime to a file, the outer one feeding it next primes for said catalog.
The coroutine decorator may seem a bit mysterious at first. A generator function does not run any of its code upon being instanced. No yield statement has yet been encountered, so use of .send(obj) would raise an exception were obj any object but None.
The decorator has already fed a .send(None) to the generator in question, equivalent to feeding it to next() a first time. The decorator applies a first action somewhat like cocking a pistol, putting the generator or coroutine in the firing position, positioned at a first yield.
End of explanation |
436 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plots for Fig. 1 and Fig. 4b for Meg Urry's 2016 NSF Proposal
Grant Tremblay, Yale University
Step1: For now, I'll use the matplotlib ggplot style from R. It's pretty.
Step2: Find the data tables and dump them into lists.
Step3: Populate the redshift-Luminosity tables.
Step4: Populate the R-K tables.
Step5: Steph asked that we just combine all stars into one category, for simplicity
Step6: These are now Astropy table objects.
Make a L-z plot
Step7: Make an r-k plot | Python Code:
import os
import glob
import math
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import ascii
from astropy.table import vstack
from astropy import units as u
from astropy import constants as const
Explanation: Plots for Fig. 1 and Fig. 4b for Meg Urry's 2016 NSF Proposal
Grant Tremblay, Yale University
End of explanation
# Plots should be pretty
plt.style.use('ggplot')
%matplotlib inline
Explanation: For now, I'll use the matplotlib ggplot style from R. It's pretty.
End of explanation
lum_v_z_files = glob.glob('data/lum_v_z/*.txt')
r_k_files = glob.glob('data/r-k/*.txt')
# by globbing on .txt files, the output will be sorted alphabetically by name
Explanation: Find the data tables and dump them into lists.
End of explanation
lum_v_z_files
column_names_lum_v_z = ["z", "Xlum"]
s82x_lum_v_z = ascii.read(lum_v_z_files[2], names=column_names_lum_v_z)
cosmos_lum_v_z = ascii.read(lum_v_z_files[1], names=column_names_lum_v_z)
cdfs_lum_v_z = ascii.read(lum_v_z_files[0], names=column_names_lum_v_z)
Explanation: Populate the redshift-Luminosity tables.
End of explanation
r_k_files
column_names_r_k = ["R-K", "X/O"]
extragalactic_sources_r_k = ascii.read(r_k_files[0], names=column_names_r_k)
stars_r_k = ascii.read(r_k_files[3], names=column_names_r_k)
sources_lacking_redshifts_r_k = ascii.read(r_k_files[2], names=column_names_r_k)
rw1_stars_r_k = ascii.read(r_k_files[1], names=column_names_r_k)
targets_r_k = ascii.read(r_k_files[4], names=column_names_r_k)
Explanation: Populate the R-K tables.
End of explanation
# Stack the two tables on top of each other
stars = vstack([rw1_stars_r_k, stars_r_k])
Explanation: Steph asked that we just combine all stars into one category, for simplicity:
End of explanation
plt.figure()
fig, ax = plt.subplots()
ax.set_xlabel('Redshift (z)')
ax.set_ylabel('Log X-ray Luminosity (0.5-2 keV)')
ax.plot(cdfs_lum_v_z["z"], cdfs_lum_v_z["Xlum"], marker='s', linestyle="None", alpha=1.0, label="CDFS", color=plt.rcParams['axes.color_cycle'][2])
ax.plot(cosmos_lum_v_z["z"], cosmos_lum_v_z["Xlum"], marker='^', linestyle="None", alpha=1.0, label="COSMOS Legacy", color=plt.rcParams['axes.color_cycle'][1])
ax.plot(s82x_lum_v_z["z"], s82x_lum_v_z["Xlum"], marker='o', linestyle="None", alpha=1.0, label="Stripe 82X", color=plt.rcParams['axes.color_cycle'][0])
ax.legend(loc=4, frameon=True, numpoints=1, prop={'size':10})
plt.savefig("Fig1b.pdf")
ax.set_aspect('equal')
plt.savefig("Fig1b_equal_aspect.pdf")
Explanation: These are now Astropy table objects.
Make a L-z plot
End of explanation
plt.figure()
fig, ax = plt.subplots()
ax.set_xlabel('R-K color (Vega)')
ax.set_ylabel('X-ray / Optical Ratio')
ax.plot(extragalactic_sources_r_k["R-K"], extragalactic_sources_r_k["X/O"], marker='o', color="Gray", markeredgewidth=0, alpha=0.5, linestyle="None", label="Extragalactic Sources")
ax.plot(targets_r_k["R-K"], targets_r_k["X/O"], marker='s', linestyle="None", label="NIR Spectroscopy Targets")
ax.plot(sources_lacking_redshifts_r_k["R-K"], sources_lacking_redshifts_r_k["X/O"], marker='^', linestyle="None", label="Optical Spectroscopy Targets")
ax.plot(stars["R-K"], stars["X/O"], marker='o', linestyle="None", label="Stars")
ax.plot([4, 7], [0, 0], color='k', linestyle='-', linewidth=1)
ax.plot([4, 4], [0, 4], color='k', linestyle='-', linewidth=1)
ax.legend(loc=0, frameon=True, numpoints=1, prop={'size':10})
plt.savefig("Fig4b.pdf")
Explanation: Make an r-k plot
End of explanation |
437 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data readout basics, e.g. for analysis or plotting.
This notebook is intended to show how to read out the created hdf5 files with Python. For handling the measured files, qkit provides a file info database (fid) for convenient access to the stored data; for more info on that, see the corresponding notebook.<br>
The readout (as well as the storage in the first place) is done with the store.Data module.
Step1: The qkit.start() starts the fid, which searches the given qkit.conf['datadir'] for hdf5 files. A python dict is created, mapping the 6-digit timestamp of the measurement (UUID) to its absolute path on the hard drive. For getting the absolute path of the measurement file, use the qkit.fid.get command with the UUID.
Step2: To open a measurement and get the information, an h5 object has to be created
Step3: The data and metadata can be accessed via the h5 object. All the needed information can be auto-completed via tabbing.
Step4: The UUID can also be reverted back to recover the time the file was created. | Python Code:
## start qkit and import the needed modules. we here assume an already configured qkit analysis environment
import qkit
qkit.start()
from qkit.storage.store import Data
Explanation: Data readout basics, e.g. for analysis or plotting.
This notebook is intended to show how to read out the created hdf5 files with Python. For handling the measured files, qkit provides a file info database (fid) for convenient access to the stored data; for more info on that, see the corresponding notebook.<br>
The readout (as well as the storage in the first place) is done with the store.Data module.
End of explanation
abs_path = qkit.fid.get('XXXXXX')
Explanation: The qkit.start() starts the fid, which searches the given qkit.conf['datadir'] for hdf5 files. A python dict is created, mapping the 6-digit timestamp of the measurement (UUID) to its absolute path on the hard drive. For getting the absolute path of the measurement file, use the qkit.fid.get command with the UUID.
End of explanation
h5 = Data(qkit.fid.get('XXXXXX'))
Explanation: To open a measurement and get the information, an h5 object has to be created
End of explanation
amp = h5.data.amplitude[:] # gets the measurement data as numpy array
pha = h5.data.phase[:]
## general rule: h5.[folder].[ds_name] gives the dataset, '[:]' gives the bare data cast to a numpy array
## already analyzed data is stored in the 'analysis' folder
# all stored metadata can be accessed by auto-complete as well
x_ds_url = h5.data.amplitude.x_ds_url
comment = h5.data.comment
Explanation: The data and metadata can be accessed via the h5 object. All the needed information can be auto-completed via tabbing.
End of explanation
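If you just want to peek at the raw HDF5 structure without going through the h5 object, plain h5py works too. The sketch below makes no assumptions about qkit's internal group layout; it simply walks the whole tree ('XXXXXX' is the placeholder UUID used throughout this notebook).
# Walk the HDF5 tree and print every group/dataset together with its shape.
import h5py
def print_h5_tree(path):
    with h5py.File(path, 'r') as f:
        f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))
print_h5_tree(qkit.fid.get('XXXXXX'))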
qkit.fid.get_date('XXXXXX')
Explanation: The UUID can also be reverted back to recover the time the file was created.
End of explanation |
438 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to use Siphon and Cartopy to visualize data served by a THREDDS Data Server (TDS) via ncWMS
Step1: Use Siphon to get the latest run of the NCEP 2.5 km HRRR run
HRRR => High Resolution Rapid Refresh
Temporal output
Step2: List the avaliable layers in the dataset
Step3: Get the "Temperature_height_above_ground" layer
What is the "height_above_ground"?
Cannot automagically get the range, so use 273 K to 315 K
Step4: List the available times and choose the first
Step5: Use the information from above and Cartopy to plot the layer
Note that ncWMS is not returning a data array, but rather an image, so this is relatively "fast" (a quick preview)
The tradeoff is that you cannot do computational analysis using ncWMS | Python Code:
import cartopy
import matplotlib as mpl
import matplotlib.pyplot as plt
from owslib.wms import WebMapService
from siphon.catalog import get_latest_access_url
Explanation: How to use Siphon and Cartopy to visualize data served by a THREDDS Data Server (TDS) via ncWMS
End of explanation
catalog = 'http://thredds-jumbo.unidata.ucar.edu/thredds/catalog/grib/NCEP/HRRR/CONUS_2p5km/catalog.xml'
serverurl = get_latest_access_url(catalog, 'WMS')
wms = WebMapService( serverurl, version='1.1.1')
Explanation: Use Siphon to get the latest run of the NCEP 2.5 km HRRR run
HRRR => High Resolution Rapid Refresh
Temporal output: 1 hour
Grid spacing: 2.5 km
Roughly 2 GB per run, limited subset of variables actually made available from NCEP to save on bandwidth
get the ncWMS access point
use owslib's WebMapService to connect to the ncWMS endpoint
End of explanation
#Listing all available layers...
layers = list(wms.contents)
for layer in layers:
print('Layer name: {}'.format(wms[layer].name))
Explanation: List the available layers in the dataset
End of explanation
temp = wms['Temperature_height_above_ground']
elevations = [elevation.strip() for elevation in temp.elevations]
print(elevations)
# only one elevation, so use it
elevation = elevations[0]
# have to guess the range
color_max = 315 # K
color_min = 273 # K
colorscalerange = '{},{}'.format(color_min,color_max)
Explanation: Get the "Temperature_height_above_ground" layer
What is the "height_above_ground"?
Cannot automagically get the range, so use 273 K to 315 K
End of explanation
times = [time.strip() for time in temp.timepositions]
print(times)
# get the first time - Forecast Hour 0
time = times[0]
Explanation: List the available times and choose the first
End of explanation
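Before handing everything to Cartopy, it can be handy to sanity-check a single GetMap request straight from OWSLib. This is only a sketch; the bbox, image size and output file name are arbitrary choices, not values required by the rest of the notebook.
# Request one PNG for the chosen layer/time and save it to disk.
img = wms.getmap(layers=[temp.name], srs='EPSG:4326',
                 bbox=temp.boundingBox[:4], size=(600, 400),
                 format='image/png', time=time, transparent=True)
with open('hrrr_preview.png', 'wb') as f:
    f.write(img.read())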
# pick a projection - going with Miller for this example
# note that with Cartopy you are NOT limited to the projections avaliable through ncWMS
plt_proj = cartopy.crs.Miller()
fig, ax = plt.subplots(figsize=(12,12), subplot_kw={'projection': plt_proj})
# Colorbar goodness
cax = fig.add_axes([0.95, 0.3, 0.02, 0.42])
norm = plt.Normalize(vmin=color_min, vmax=color_max)
cmap = plt.cm.gist_heat
cb = mpl.colorbar.ColorbarBase(cax, cmap=cmap, norm=norm, spacing='proportional', orientation='vertical')
cb.set_label('Temperature [K]')
# use bounding box info obtained from the ncWMS service to frame the image
extent = (temp.boundingBox[0], temp.boundingBox[2], temp.boundingBox[1], temp.boundingBox[3])
ax.set_extent(extent)
# ncWMS keywords (which includes the WMS keywords as well)
wms_kwargs = {'colorscalerange': colorscalerange,
'abovemaxcolor': 'transparent',
'belowmincolor': 'transparent',
'transparent': 'true',
'elevation': elevation,
'time': time}
# plot the layer using Cartopy's WMS interface
ax.add_wms(wms=serverurl, layers=[temp.name], wms_kwargs=wms_kwargs, cmap=cmap)
# add coastlines, country borders and state outlines
ax.add_feature(cartopy.feature.COASTLINE)
ax.add_feature(cartopy.feature.BORDERS)
ax.add_feature(cartopy.feature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none'), linestyle=':')
Explanation: Use the information from above and Cartopy to plot the layer
Note that ncWMS is not returning a data array, but rather an image, so this is relatively "fast" (a quick preview)
The tradeoff is that you cannot do computational analysis using ncWMS
End of explanation |
439 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text Analysis and Visualization with Python and the NLTK
This notebook was originally prepared for use during a workshop called "An Introduction to Visualizing Text with Python," which took place during Columbia's Art of Data Visualization week in April 2016. But you can run these commands yourself. To begin, you'll need this software
Step1: Work with our Own Text
Step2: Exploring Texts
Let's explore these texts a little. There are lots of things we can do with these texts.
To see a list, type text1. and press <Tab>. One thing we can do is look at statistically significant co-occurring two-word phrases, here known as collocations
Step3: But what if we get tired of doing that for each text, and want to do it with all of them?
Let's put the texts into a list.
Step4: Let's look at it to make sure it's all there.
Step5: Now that we have a list of all the texts, we can loop through each one, running the collocations() function on each
Step6: Concordances and Dispersion Plots
Now let's look up an individual word in a text, and have NLTK give us some context
Step7: Not bad. But what if we want to see visually where those words occur over the course of the text? We can use the function dispersion_plot
Step8: Let's try that on Moby Dick
Step9: By looking at dispersion plots of characters' names, we can almost tell which characters in Sense and Sensibility have romantic relationships
Step10: Measuring Text Vocabulary
We can use the len (length) function to count the total number of words in a text
Step11: And we can do this for all the texts by putting it in a lookup function, like this
Step12: If we import this table into Pandas, we can see this data a little easier
Step13: And by plotting it, we can get a better visual representation
Step14: But word counts themselves are not very interesting, so let's see if we can not only count the words,
but count the vocabulary of a text. To do that, we can use set(), which will count every word once.
Step15: We can count the words in the sentence easily
Step16: To count the words, but ignore repeated words, we can use the function set().
Step17: So if we count this set, we can determine the vocabulary of a text
Step18: Let's see if we can find the vocabulary of Moby Dick.
Step19: Pretty big, but then again, Moby Dick is kind of a long novel.
We can adjust for the words by adjusting for the total words
Step20: This would get tedious if we did this for every text, so let's write a function!
Step21: Let's go through each text, and get its vocabulary, and put it in a table.
Step22: Let's put that table into Pandas so we can see it better
Step23: Now let's plot that
Step24: OK, now let's make a famous wordcloud from a text. This just takes the most statistically significant words, and plots them where the size of each word corresponds to its frequency.
Step25: Plotting Words (Conditional Frequency Distributions)
Now let's take a look at the inaugural address corpus in detail.
Step26: We'll set up a conditional word frequency distribution for it,
pairing off a list of words with the list of inaugural addresses.
Step27: You can replace the words 'america' and 'citizen' here with whatever words you want, to further explore this corpus.
Now let's play around with the Brown corpus. It's a categorized text corpus. Let's see all the categories
Step28: Now let's create another conditional frequency distribution, this time based on these genres.
Step29: Finally, we can plot these words by genre | Python Code:
# Get the Natural Language Processing Toolkit
import nltk
nltk.download('book') # You only need to run this command once, to get the NLTK book data.
# Get the data science package Pandas
import pandas as pd
# Get the library matplotlib for making pretty charts
import matplotlib
import matplotlib.pyplot as plt
# Make plots appear here in this notebook
%matplotlib inline
# This just makes the plot size bigger, so that we can see it easier.
plt.rcParams['figure.figsize'] = (12,4)
# We'll use the OS module to download things.
import os
# Get all the example books from the NLTK textbook
from nltk.book import *
Explanation: Text Analysis and Visualization with Python and the NLTK
This notebook was originally prepared for use during a workshop called "An Introduction to Visualizing Text with Python," which took place during Columbia's Art of Data Visualization week in April 2016. But you can run these commands yourself. To begin, you'll need this software:
Python 3
These Python 3 packages (make sure you don't install their Python 2 versions):
Jupyter (formerly called iPython Notebook)
NLTK (the Natural Language Processing Toolkit)
Pandas (a data science library)
Wordcloud
There are lots of different ways to get this software. You can either install it on your computer, or run it in the cloud. Here are a few different ways of doing that. When you see text in a monospace typeface, those are commands to be entered in the terminal. On a Mac, open a terminal by typing "Terminal" into Spotlight. On Windows, press Win+R and type cmd to get a terminal.
Installation on Linux
Make sure your package list is up to date: sudo apt-get update
Get Python 3: sudo apt-get install python3 python3-pip python3-pandas
Get the Python packages: sudo pip3 install jupyter nltk wordcloud
Start a Jupyter notebook: jupyter notebook
Installation on Mac or Windows:
Get Anaconda with Python 3 from https://www.continuum.io/downloads
Anaconda comes with Pandas, NLTK, and Jupyter, so just install Wordcloud: conda install wordcloud
Start Jupyter Notebooks by clicking "launch" under "Jupyter Notebook."
Make a new Python 3 Jupyter Notebook by clicking the menu "New," then "Python 3."
Or Use DHBox
Go to http://dhbox.org and click "log in." Log into the workshop box with the credentials I gave earlier.
In the "dataviz-2017" profile menu in the top right, click "Apps."
Click the "Jupyter Notebook" tab.
Make a new Jupyter Notebook by clicking the menu "New," then "Python 3."
<!--
## Installation on DHBox
DHBox is a platform for running digital humanities (DH) software in the cloud.
1. Make a DHBox account on http://dhbox.org, by clicking "sign up." 2. In your user menu in the upper-right corner, select "Apps."
3. Click the tab "Command Line."
4. Enter the commands from the section "Installation on Linux" above.
5. Click the tab "Jupyter Notebooks," and enter your password again.
One you have all the software installed, you can run the commands below either by copying and pasting them from this notebook, or by running them directly in this notebook, by downloading the notebook and opening it with jupyter (i.e. `jupyter notebook dataviz-workshop.ipynb`).
-->
End of explanation
# Download Alice in Wonderland
os.system('wget http://www.gutenberg.org/files/11/11-0.txt')
# Tokenize it (break it into words), and make an NLTK Text object out of it.
aliceRaw = open('11-0.txt').read()
aliceWords = nltk.word_tokenize(aliceRaw)
alice = nltk.Text(aliceWords)
alice
Explanation: Work with our Own Text
End of explanation
text1.collocations()
text2.collocations()
Explanation: Exploring Texts
Let's explore these texts a little. There are lots of things we can do with these texts.
To see a list, type text1. and press <Tab>. One thing we can do is look at statistically significant co-occurring two-word phrases, here known as collocations:
End of explanation
alltexts = [text1, text2, text3, text4, text5, text6, text7, text8, text9, alice]
Explanation: But what if we get tired of doing that for each text, and want to do it with all of them?
Let's put the texts into a list.
End of explanation
alltexts
Explanation: Let's look at it to make sure it's all there.
End of explanation
for text in alltexts: # For each text in the list "alltexts,"
text.collocations() # Get the collocations
print('---') # Print a divider between the collocations
Explanation: Now that we have a list of all the texts, we can loop through each one, running the collocations() function on each:
End of explanation
text6.concordance('shrubbery')
Explanation: Concordances and Dispersion Plots
Now let's look up an individual word in a text, and have NLTK give us some context:
End of explanation
text6.dispersion_plot(['shrubbery', 'ni'])
Explanation: Not bad. But what if we want to see visually where those words occur over the course of the text? We can use the function dispersion_plot:
End of explanation
text1.dispersion_plot(['Ahab', 'Ishmael', 'whale'])
Explanation: Let's try that on Moby Dick:
End of explanation
text2.dispersion_plot(['Elinor', 'Marianne', 'Edward', 'Willoughby'])
Explanation: By looking at dispersion plots of characters' names, we can almost tell which characters in Sense and Sensibility have romantic relationships:
End of explanation
len(text1)
Explanation: Measuring Text Vocabulary
We can use the len (length) function to count the total number of words in a text:
End of explanation
lengths = {text.name: len(text) for text in alltexts}
lengths
Explanation: And we can do this for all the texts by putting it in a lookup function, like this:
End of explanation
pd.Series(lengths)
Explanation: If we import this table into Pandas, we can see this data a little easier
End of explanation
pd.Series(lengths).plot(kind='bar')
Explanation: And by plotting it, we can get a better visual representation:
End of explanation
porky_sentence = "the the the the the that's all folks"
porky_words = porky_sentence.split()
porky_words
Explanation: But word counts themselves are not very interesting, so let's see if we can not only count the words,
but count the vocabulary of a text. To do that, we can use set(), which will count every word once.
End of explanation
len(porky_words)
Explanation: We can count the words in the sentence easily:
End of explanation
set(porky_words)
Explanation: To count the words, but ignore repeated words, we can use the function set().
End of explanation
len(set(porky_words))
Explanation: So if we count this set, we can determine the vocabulary of a text:
End of explanation
len(set(text1))
Explanation: Let's see if we can find the vocabulary of Moby Dick.
End of explanation
len(text1) / len(set(text1))
Explanation: Pretty big, but then again, Moby Dick is kind of a long novel.
We can adjust for the length of the text by dividing the total number of words by the number of unique words:
End of explanation
def vocab(text): # Define a function called `vocab` that takes the input `text`
return len(text) / len(set(text)) # Divide the number of words by the number of unique words.
vocab(porky_words)
Explanation: This would get tedious if we did this for every text, so let's write a function!
End of explanation
vocabularies = {text.name: vocab(text) for text in alltexts}
Explanation: Let's go through each text, and get its vocabulary, and put it in a table.
End of explanation
pd.Series(vocabularies)
Explanation: Let's put that table into Pandas so we can see it better:
End of explanation
pd.Series(vocabularies).plot(kind='bar')
Explanation: Now let's plot that:
End of explanation
from wordcloud import WordCloud # Get the library
rawtext = ' '.join(text1.tokens) # Stitch it back together.
wc = WordCloud(width=1000, height=600, background_color='white').generate(rawtext)
# This just makes the plot size bigger, so that we can see it easier.
plt.rcParams['figure.figsize'] = (12,4)
plt.figure()
plt.axis('off') # Turn off axis ticks
plt.imshow(wc, interpolation="bilinear");# Plot it
Explanation: OK, now let's make a famous wordcloud from a text. This just takes the most statistically significant words, and plots them where the size of each word corresponds to its frequency.
End of explanation
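The cloud above is dominated by very common words. The wordcloud package ships a stopword list we can reuse; the extra words added here are just a guess at tokens worth hiding for Moby Dick.
# Same cloud, but with common stopwords filtered out.
from wordcloud import STOPWORDS
stopwords = set(STOPWORDS) | {'one', 'now', 'upon'}
wc2 = WordCloud(width=1000, height=600, background_color='white',
                stopwords=stopwords).generate(rawtext)
plt.figure()
plt.axis('off')
plt.imshow(wc2, interpolation="bilinear");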
from nltk.corpus import inaugural
Explanation: Plotting Words (Conditional Frequency Distributions)
Now let's take a look at the inaugural address corpus in detail.
End of explanation
plt.rcParams['figure.figsize'] = (14,5) # Adjust the plot size.
cfd = nltk.ConditionalFreqDist(
(target, fileid[:4])
for fileid in inaugural.fileids()
for w in inaugural.words(fileid)
for target in ['america', 'citizen']
if w.lower().startswith(target))
cfd.plot()
Explanation: We'll set up a conditional word frequency distribution for it,
pairing off a list of words with the list of inaugural addresses.
End of explanation
nltk.corpus.brown.categories()
Explanation: You can replace the words 'america' and 'citizen' here with whatever words you want, to further explore this corpus.
Now let's play around with the Brown corpus. It's a categorized text corpus. Let's see all the categories:
End of explanation
genres = ['adventure', 'romance', 'science_fiction']
words = ['can', 'could', 'may', 'might', 'must', 'will']
cfdist = nltk.ConditionalFreqDist(
(genre, word)
for genre in genres
for word in nltk.corpus.brown.words(categories=genre)
if word in words)
cfdist
Explanation: Now let's create another conditional frequency distribution, this time based on these genres.
End of explanation
pd.DataFrame(cfdist).T.plot(kind='bar')
Explanation: Finally, we can plot these words by genre:
End of explanation |
440 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tabular data
Step1: Starting from reading this dataset, to answering questions about this data in a few lines of code
Step2: How does the survival rate of the passengers differ between sexes?
Step3: Or how does it differ between the different classes?
Step4: Are young people more likely to survive?
Step5: All the needed functionality for the above examples will be explained throughout this tutorial.
Data structures
Pandas provides two fundamental data objects, for 1D (Series) and 2D data (DataFrame).
Series
A Series is a basic holder for one-dimensional labeled data. It can be created much as a NumPy array is created
Step6: Attributes of a Series
Step7: You can access the underlying numpy array representation with the .values attribute
Step8: We can access series values via the index, just like for NumPy arrays
Step9: Unlike the NumPy array, though, this index can be something other than integers
Step10: In this way, a Series object can be thought of as similar to an ordered dictionary mapping one typed value to another typed value.
In fact, it's possible to construct a series directly from a Python dictionary
Step11: We can index the populations like a dict as expected
Step12: but with the power of numpy arrays
Step13: DataFrames
Step14: Attributes of the DataFrame
A DataFrame has besides a index attribute, also a columns attribute
Step15: To check the data types of the different columns
Step16: An overview of that information can be given with the info() method
Step17: Also a DataFrame has a values attribute, but attention
Step18: If we don't like what the index looks like, we can reset it and set one of our columns
Step19: To access a Series representing a column in the data, use typical indexing syntax
Step20: Basic operations on Series/Dataframes
As you play around with DataFrames, you'll notice that many operations which work on NumPy arrays will also work on dataframes.
Step21: Elementwise-operations (like numpy)
Just like with numpy arrays, many operations are element-wise
Step22: Alignment! (unlike numpy)
Only, pay attention to alignment
Step23: Reductions (like numpy)
The average population number
Step24: The minimum area
Step25: For dataframes, often only the numeric columns are included in the result
Step26: <div class="alert alert-success">
<b>EXERCISE</b>
Step27: One useful method to use is the describe method, which computes summary statistics for each column
Step28: The plot method can be used to quickly visualize the data in different ways
Step29: However, for this dataset, it does not say that much
Step30: You can play with the kind keyword | Python Code:
df = pd.read_csv("data/titanic.csv")
df.head()
Explanation: Tabular data
End of explanation
df['Age'].hist()
Explanation: Starting from reading this dataset, to answering questions about this data in a few lines of code:
What is the age distribution of the passengers?
End of explanation
df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))
Explanation: How does the survival rate of the passengers differ between sexes?
End of explanation
df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar')
Explanation: Or how does it differ between the different classes?
End of explanation
df['Survived'].sum() / df['Survived'].count()
df25 = df[df['Age'] <= 25]
df25['Survived'].sum() / len(df25['Survived'])
Explanation: Are young people more likely to survive?
End of explanation
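For a finer-grained look at the same question we can bin the ages with pd.cut and compute the survival rate per bracket (the bin edges are an arbitrary choice):
# Survival rate per age bracket; passengers with missing ages are dropped by groupby.
age_bins = pd.cut(df['Age'], bins=[0, 12, 25, 40, 60, 80])
df.groupby(age_bins)['Survived'].mean()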
s = pd.Series([0.1, 0.2, 0.3, 0.4])
s
Explanation: All the needed functionality for the above examples will be explained throughout this tutorial.
Data structures
Pandas provides two fundamental data objects, for 1D (Series) and 2D data (DataFrame).
Series
A Series is a basic holder for one-dimensional labeled data. It can be created much as a NumPy array is created:
End of explanation
s.index
Explanation: Attributes of a Series: index and values
The series has a built-in concept of an index, which by default is the numbers 0 through N - 1
End of explanation
s.values
Explanation: You can access the underlying numpy array representation with the .values attribute:
End of explanation
s[0]
Explanation: We can access series values via the index, just like for NumPy arrays:
End of explanation
s2 = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd'])
s2
s2['c']
Explanation: Unlike the NumPy array, though, this index can be something other than integers:
End of explanation
pop_dict = {'Germany': 81.3,
'Belgium': 11.3,
'France': 64.3,
'United Kingdom': 64.9,
'Netherlands': 16.9}
population = pd.Series(pop_dict)
population
Explanation: In this way, a Series object can be thought of as similar to an ordered dictionary mapping one typed value to another typed value.
In fact, it's possible to construct a series directly from a Python dictionary:
End of explanation
population['France']
Explanation: We can index the populations like a dict as expected:
End of explanation
population * 1000
Explanation: but with the power of numpy arrays:
End of explanation
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
Explanation: DataFrames: Multi-dimensional Data
A DataFrame is a tablular data structure (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series object which share the same index.
<img src="img/dataframe.png" width=110%>
One of the most common ways of creating a dataframe is from a dictionary of arrays or lists.
Note that in the IPython notebook, the dataframe will display in a rich HTML view:
End of explanation
countries.index
countries.columns
Explanation: Attributes of the DataFrame
A DataFrame has besides a index attribute, also a columns attribute:
End of explanation
countries.dtypes
Explanation: To check the data types of the different columns:
End of explanation
countries.info()
Explanation: An overview of that information can be given with the info() method:
End of explanation
countries.values
Explanation: Also a DataFrame has a values attribute, but attention: when you have heterogeneous data, all values will be upcasted:
End of explanation
countries = countries.set_index('country')
countries
Explanation: If we don't like what the index looks like, we can reset it and set one of our columns:
End of explanation
countries['area']
Explanation: To access a Series representing a column in the data, use typical indexing syntax:
End of explanation
# redefining the example objects
population = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3,
'United Kingdom': 64.9, 'Netherlands': 16.9})
countries = pd.DataFrame({'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']})
Explanation: Basic operations on Series/Dataframes
As you play around with DataFrames, you'll notice that many operations which work on NumPy arrays will also work on dataframes.
End of explanation
population / 100
countries['population'] / countries['area']
Explanation: Elementwise-operations (like numpy)
Just like with numpy arrays, many operations are element-wise:
End of explanation
s1 = population[['Belgium', 'France']]
s2 = population[['France', 'Germany']]
s1
s2
s1 + s2
Explanation: Alignment! (unlike numpy)
Only, pay attention to alignment: operations between series will align on the index:
End of explanation
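If the NaNs produced by this alignment are unwanted, the method form of the operation lets you supply a fill value, so non-overlapping labels are treated as 0:
# Same addition, but missing labels count as 0 instead of producing NaN.
s1.add(s2, fill_value=0)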
population.mean()
Explanation: Reductions (like numpy)
The average population number:
End of explanation
countries['area'].min()
Explanation: The minimum area:
End of explanation
countries.median()
Explanation: For dataframes, often only the numeric columns are included in the result:
End of explanation
countries.sort_values('density', ascending=False)  # uses the 'density' column added in the density exercise
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Calculate the population numbers relative to Belgium
</div>
<div class="alert alert-success">
<b>EXERCISE</b>: Calculate the population density for each country and add this as a new column to the dataframe.
</div>
Some other useful methods
Sorting the rows of the DataFrame according to the values in a column:
End of explanation
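One possible way to tackle the two exercises above is sketched below (there are several valid solutions); it also supplies the 'density' column used by the sorting example.
# Population relative to Belgium, and a density column in people per km^2.
belgium_pop = countries.loc[countries['country'] == 'Belgium', 'population'].iloc[0]
rel_population = countries['population'] / belgium_pop
countries['density'] = countries['population'] * 1e6 / countries['area']
countries[['country', 'population', 'area', 'density']]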
countries.describe()
Explanation: One useful method to use is the describe method, which computes summary statistics for each column:
End of explanation
countries.plot()
Explanation: The plot method can be used to quickly visualize the data in different ways:
End of explanation
countries['population'].plot(kind='bar')
Explanation: However, for this dataset, it does not say that much:
End of explanation
# Type `pd.read` and press <TAB> to list the available pd.read_* reader functions.
# Type `countries.to` and press <TAB> to list the DataFrame exporter methods (to_csv, to_excel, ...).
Explanation: You can play with the kind keyword: 'line', 'bar', 'hist', 'density', 'area', 'pie', 'scatter', 'hexbin'
Importing and exporting data
A wide range of input/output formats are natively supported by pandas:
CSV, text
SQL database
Excel
HDF5
json
html
pickle
...
End of explanation |
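As a concrete example of the import/export round trip (the file name is arbitrary):
# Write the countries table to CSV and read it back.
countries.to_csv('countries.csv', index=False)
pd.read_csv('countries.csv').head()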
441 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VaR computation for single risk factor / scenario set
Step1: Scenarios are already available. We will load the csv file into Spark (no Big Data here to see, move along).
Step2: Now let's compute the VaR for each day.
Step3: Example with a portfolio
where the scenarios must be summed up.
Step4: Define one portfolio | Python Code:
Simulation = namedtuple('Simulation', ('date', 'neutral', 'scenarios'))
RFScenario = namedtuple('RFScenario', ('rf', 'date', 'neutral', 'scenarios'))
from pyspark.mllib.linalg import Vectors, DenseVector, SparseVector, _convert_to_vector
def parse(row):
DATE_FMT = "%Y-%m-%d"
row[0] = datetime.datetime.strptime(row[0], DATE_FMT)
for i in np.arange(1,len(row)):
row[i] = float(row[i])
return Simulation(row[0], row[1], DenseVector(row[2:]))
def parse2(row):
DATE_FMT = "%Y-%m-%d"
row[0] = row[0]
row[1] = datetime.datetime.strptime(row[1], DATE_FMT)
for i in np.arange(2,len(row)):
row[i] = float(row[i])
# return row
return RFScenario(row[0], row[1], row[2], DenseVector(row[3:6]))
# test
# s = "2015-05-08,32.42,32.864847227683306,32.50044000839989,31.962723820560473,31.920709606792094,32.528263796919845,31.86562405274838,32.136619526291824,32"
# datetime.datetime.strptime( s.split(',')[0], '%Y-%m-%d')
# float( s.split(',')[1])
Explanation: VaR computation for single risk factor / scenario set
End of explanation
csv_filename = "../data/scenarios.csv"
lines = sc.textFile(csv_filename)
parts = lines.map(lambda l: l.split(","))
rows = parts.map(parse)
df = sqlContext.createDataFrame(rows)
df.show()
df.describe()
Explanation: Scenarios are already available. We will load the csv file into Spark (no Big Data here to see, move along).
End of explanation
def var(scenarios, level=99, neutral_scenario=0):
pnls = scenarios - neutral_scenario
return - np.percentile(pnls, 100-level, interpolation='linear')
pnls = df.map( lambda r: {'date': r.date,
'neutral': r.neutral,
'var': float(var(r.scenarios.array, neutral_scenario=r.neutral))})
a = sqlContext.createDataFrame(pnls)
%matplotlib notebook
a.toPandas().plot();
Explanation: Now let's compute the VaR for each day.
End of explanation
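As a quick sanity check of the var() helper, synthetic normally distributed scenarios should give a 99% VaR close to 2.33 (the 1% quantile of a standard normal):
# Synthetic check: P&L ~ N(0, 1), so the 99% VaR should be about 2.33.
np.random.seed(0)
fake_scenarios = np.random.normal(loc=100.0, scale=1.0, size=100000)
var(fake_scenarios, level=99, neutral_scenario=100.0)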
csv_filename = "/Users/miguel/Jottacloud/devel/osqf2015/data/scenarios2.csv"
scenarios_rdd = sc.textFile(csv_filename).map(lambda l: l.split(",")).map(parse2)
scenarios_rdd.takeSample(False, 1,0)
dfs = sqlContext.createDataFrame(scenarios_rdd)
Explanation: Example with a portfolio
where the scenarios must be summed up.
End of explanation
pf_rdd = sc.parallelize([('RF1', 1.), ('RF2', 2.)])
dfpf = sqlContext.createDataFrame(pf_rdd, ['rf', 'qty'])
res = dfs.join(dfpf, dfpf.rf == dfs.rf).select(dfs.rf, dfpf.qty, dfs.date, dfs.neutral, dfs.scenarios)
pf_values = res.map(lambda r: Row(date=r.date,
neutral=r.neutral*r.qty,
scenarios=DenseVector(r.scenarios.array * r.qty)))
aaa = pf_values.map(lambda x: (x[0], (x[1],x[2]))).aggregateByKey(0, lambda v, d: d, lambda x,y: (x[0]+y[0], x[1]+y[1])).map(lambda r: Row(date=r[0], neutral=r[1][0], scenarios=r[1][1]))
df_res = sqlContext.createDataFrame(aaa)
pnls = df_res.map( lambda r: {'date': r.date,
'neutral': r.neutral,
'var': float(var(r.scenarios.array, neutral_scenario=r.neutral))}).toDF().toPandas()
%matplotlib notebook
pnls.plot()
df_res.groupBy('date').agg({'neutral': 'sum'}).collect()  # total neutral portfolio value per date
Explanation: Define one portfolio
End of explanation |
442 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step2: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook
Step4: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
Step5: Use interactive to build a user interface for exploring the draw_circle function
Step6: Use the display function to show the widgets created by interactive | Python Code:
# YOUR CODE HERE
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.html import widgets
from IPython.display import SVG, display
Explanation: Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
s =
<svg width="100" height="100">
<circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
SVG(s)
Explanation: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
End of explanation
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
Draw an SVG circle.
Parameters
----------
width : int
The width of the svg drawing area in px.
height : int
The height of the svg drawing area in px.
cx : int
The x position of the center of the circle in px.
cy : int
The y position of the center of the circle in px.
r : int
The radius of the circle in px.
fill : str
The fill color of the circle.
# YOUR CODE HERE
#I had the other Ed's help, take svg arguments in string and replace with given values.
o = '<svg width="%s" height="%s">\n<circle cx="%s" cy="%s" r="%s" fill="%s" />\n</svg>' % (width, height, cx, cy, r, fill)
display(SVG(o))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
Explanation: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
End of explanation
# YOUR CODE HERE
w = interactive(draw_circle, width=fixed(300), height=fixed(300), cy=[0, 300], cx=[0, 300], r=[0, 50], fill='red')
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
Explanation: Use interactive to build a user interface for exploring the draw_circle function:
width: a fixed value of 300px
height: a fixed value of 300px
cx/cy: a slider in the range [0,300]
r: a slider in the range [0,50]
fill: a text area in which you can type a color's name
Save the return value of interactive to a variable named w.
End of explanation
# YOUR CODE HERE
display(w)
assert True # leave this to grade the display of the widget
Explanation: Use the display function to show the widgets created by interactive:
End of explanation |
443 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Write code solving the problems below in a file named lab07.py and submit it.
Exercise 1
The US National Oceanic and Atmospheric Administration (NOAA) provides real-time weather for the whole world. For South Korea it provides weather information for cities that have an airport, and Pyeongtaek is among them. To get the current weather for Pyeongtaek as a text file, click the NOAA site below and download the file.
Alternatively, you can check the contents of the file behind the link above with the function below.
def NOAA_string()
Step1: Exercise 1, sample solution 2
We have to find a string that uniquely pins down the Celsius temperature.
" F " is such a string (there is a space on either side of the F).
Step2: Exercise 2
Write a function wc_sub(filename, s) that checks how many times a particular word occurs in the text stored in a text file. wc is short for Word Count.
Hint
Step3: Exercise 3
Define a function filtering(f, xs) that takes a function f and a list of numbers xs and returns only those values x for which f(x) is greater than 0.
Example
Step4: Reference
Step5: Reference
Step6: Exercise 6
Define a function fun_2_fun(f) that takes a function f and returns a function behaving as described below.
fun_2_fun(f)(2) = (f(2)) ** 2
fun_2_fun(f)(3) = (f(3)) ** 3
fun_2_fun(f)(4) = (f(4)) ** 4
...
Note
Step7: The key point of the problem
The key point of this problem is to use a function not merely as an argument but also as a return value. That is, you have to implement a function that returns a function when called with some argument. And since the return value is a function, it can in turn be called with a suitable argument.
For example, define a function g as follows.
Step8: Then we can confirm that g is a function.
Step9: What kind of function is it? Checking with help gives the following.
Step10: That is, it is a function taking one argument, and it was defined using f_exp. In fact, g is defined as shown below.
Because the function exp2 was plugged in for the argument f when fun_2_fun(f) was called to define g, we can see that g is the function defined as follows.
g(x) = fun_2_fun(exp2)(x)
= f_exp(x) # note that exp2 is used in the definition of f_exp
= exp2(x) ** x
= (x**2) ** x
= x ** (2*x)
Exercise 6, sample solution 2 | Python Code:
import urllib
def NOAA_string():
url = "http://weather.noaa.gov/pub/data" +\
"/observations/metar/decoded/RKSG.TXT"
noaa_data_string = urllib.urlopen(url).read()
return noaa_data_string
print(NOAA_string())
def NOAA_temperature(s):
L = s.split('\n')
Line7 = L[6].split()
    print(str(int(float(Line7[-2][1:]))) + " C")  # convert via float() first: the token looks like "(14.2"
NOAA_temperature(NOAA_string())
Explanation: Exercises
Write the code that solves the problems below in the file lab07.py and submit it.
Exercise 1
The US National Oceanic and Atmospheric Administration (NOAA) provides real-time weather for the whole world. For Korea it provides weather information for cities that have an airport, including Pyeongtaek. To get the current weather for Pyeongtaek as a text file, click the NOAA site below and download the file.
Alternatively, you can use the function below to check the contents of the file behind the link above.
def NOAA_string():
    url = "http://weather.noaa.gov/pub/data" +\
          "/observations/metar/decoded/RKSG.TXT"
    noaa_data_string = urllib.urlopen(url).read()
    return noaa_data_string
To use the code above you need to import the urllib module. Try running the function in a Python shell and check its return value.
Now write a function NOAA_temperature(s) that does the following.
It takes the return value of NOAA_string() as its argument and returns the integer value of the city's temperature in degrees Celsius.
The US reports temperature in Fahrenheit, while we use Celsius.
Note: the site above provides weather information in real time, so the temperature changes every time you follow the link. For example, when checked at 00:38 on October 16, 2015, the temperature in Pyeongtaek was 14.2 degrees Celsius, as shown below, so NOAA_temperature(NOAA_string()) should return 14. Keep in mind that checking at a different time may give a different value. You have to work out how to extract the number corresponding to the Celsius temperature.
Pyongtaek Ab, Korea, South (RKSG) 36-56N 127-00E 16M
Oct 15, 2015 - 10:58 AM EDT / 2015.10.15 1458 UTC
Wind: Calm:0
Visibility: 2 mile(s):0
Sky conditions: partly cloudy
Weather: mist
Temperature: 57.6 F (14.2 C)
Dew Point: 57.6 F (14.2 C)
Relative Humidity: 100%
Pressure (altimeter): 30.11 in. Hg (1019 hPa)
ob: RKSG 151458Z 00000KT 2SM R32/2600FT BR SCT010 14/14 A3011 RMK AO2A SLP199 T01420142
cycle: 15
Hint: among the string methods there is one that tells you the position (the index) of a given substring.
Exercise 1, sample answer 1
If you look at the contents of the file returned by NOAA_string(), the temperature information is on the 7th line.
The key is to pull 14.2 out of that 7th line. To do that, we need something that uniquely identifies 14.2.
Method 1: using the split method
Looking closely at the 7th line, the Celsius temperature appears after the third space (it is the second-to-last whitespace-separated token); we can use this.
End of explanation
def NOAA_temperature(s):
d = s.find(" F ")
print(s[d+4: d+6] + " C")
NOAA_temperature(NOAA_string())
Explanation: Exercise 1, sample answer 2
We need to find a string that uniquely marks the Celsius temperature.
" F " is such a string (there is a space on each side of the F).
End of explanation
def wc_sub(filename, s):
with open(filename, 'r') as f:
f_content = f.read()
return f_content.count(s)
print("The word 'Alice' occurs {} times.".format(wc_sub('Alice.txt', 'Alice')))
print("The word 'alice' occurs {} times.".format(wc_sub('Alice.txt', 'alice')))
Explanation: Exercise 2
Write a function wc_sub(filename, s) that counts how many times a given word appears in the text stored in a file. wc is short for Word Count.
Hint: use the count method.
Example 1: if the contents of data.txt are as follows
One Two
then wc_sub('data.txt', 'One') returns 1.
Example 2: if the contents of data.txt are as follows
One Two
Three Four Five
then wc_sub('data.txt', 'o') returns 2.
Use the wc_sub function to check how many times the words 'Alice' and 'alice' appear in the original text of Alice in Wonderland. The original text can be downloaded from the link below.
http://www.gutenberg.org/files/28885/28885-8.txt
On the page that the link opens, download the Plain Text UTF-8 file. It probably contains tens of thousands of words.
Note: you must add error handling so that -1 is returned when the file named filename cannot be opened.
Exercise 2, sample answer
End of explanation
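The sample answer above omits the error handling that the exercise asks for. One possible variant (a sketch, not the official sample answer) that returns -1 when the file cannot be opened:
def wc_sub_safe(filename, s):
    # Return -1 if the file cannot be opened, otherwise the number of occurrences of s.
    try:
        with open(filename, 'r') as f:
            return f.read().count(s)
    except IOError:
        return -1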
def filtering(f, xs):
L = []
for x in xs:
if f(x) > 0:
L.append(x)
return L
def f1(x):
return x * 3
filtering(f1, [1, -2, 2, -1, 3, 5])
Explanation: Exercise 3
Define a function filtering(f, xs) that takes a function f and a list of numbers xs and returns only those values x for which f(x) is greater than 0.
Example:
In [1]: def f1(x):
   ...:     return x * 3
In [2]: filtering(f1, [1, -2, 2, -1, 3, 5])
Out[2]: [1, 2, 3, 5]
In [3]: filtering(f1, [-1, -2, -3, -4, -5])
Out[3]: []
Exercise 3, sample answer
End of explanation
def sum_list(f, xs):
L = 0
for x in xs:
L = L + f(x)
return L
def f2(x):
return x ** 2
print(sum_list(f2, [1, -2, 2, -3]))
print(sum_list(f1, [-1, -2, -3, -4, -5]))
Explanation: Note: the Python built-in function filter does something similar. It is worth checking how it differs.
Exercise 4
Define a function sum_list(f, xs) that takes a function f and a list of numbers xs = [x1, ..., x_n] and returns the sum of the values f(x_i). If xs = [], it returns 0.
Example:
In [4]: def f2(x):
   ...:     return x ** 2
In [5]: sum_list(f2, [1, -2, 2, -3,])
Out[5]: 18
In [6]: sum_list(f1, [-1, -2, -3, -4, -5])
Out[6]: -45
Exercise 4, sample answer
End of explanation
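As the notes above and below point out, Python's built-in filter and sum can express the same ideas. A possible sketch for comparison only (not the sample answers):
def filtering_builtin(f, xs):
    # filter returns an iterator in Python 3, so wrap it in list().
    return list(filter(lambda x: f(x) > 0, xs))

def sum_list_builtin(f, xs):
    # sum of f(x) over the list; returns 0 for an empty list.
    return sum(f(x) for x in xs)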
def triangle_area(a, height=5):
return 1.0/2 * a * height
print(triangle_area(3))
print(triangle_area(3, 7))
Explanation: Note: the Python built-in function sum does something similar. It is worth checking how it differs.
Exercise 5
Write a function triangle_area(a, h) that returns the area of a triangle whose base is a and whose height is h. The height h must default to 5. Hint: use a keyword argument.
Example:
In [7]: triangle_area(3)
Out[7]: 7.5
In [8]: triangle_area(3, 7)
Out[8]: 10.5
Exercise 5, sample answer
End of explanation
def fun_2_fun(f):
def f_exp(n):
return (f(n)) ** n
return f_exp
print(f1(2))
print(fun_2_fun(f1)(2))
Explanation: Exercise 6
Define a function fun_2_fun(f) that takes a function f and returns a function that behaves as described below.
fun_2_fun(f)(2) = (f(2)) ** 2
fun_2_fun(f)(3) = (f(3)) ** 3
fun_2_fun(f)(4) = (f(4)) ** 4
...
Caution: it must take a function as input and return a function.
Hint: inside a function you can define a new function with the def keyword. That function becomes a local function.
Exercise 6, sample answer 1
End of explanation
def exp2(x):
return x ** 2
g = fun_2_fun(exp2)
Explanation: Key point of the problem
The key point of this problem is to use functions not only as arguments but also as return values. That is, you must implement a function that, when called with some argument, returns a function. Since the return value is a function, it can in turn be called with a suitable argument.
For example, let us define a function g as follows.
End of explanation
type(g)
Explanation: Then we can check that g is indeed a function.
End of explanation
help(g)
g
Explanation: What kind of function is it? Checking with help gives the following.
End of explanation
def fun_2_fun(f):
return lambda x: f(x) ** x
print(f1(2))
print(fun_2_fun(f1)(2))
Explanation: That is, it is a function that takes a single argument and is defined through f_exp. In fact, g is defined as shown below.
Because the function exp2 was passed in place of the argument f when fun_2_fun(f) was called to define g, we can see that g is the function defined by
g(x) = fun_2_fun(exp2)(x)
     = f_exp(x)      # note that exp2 is used when f_exp is defined
     = exp2(x) ** x
     = (x**2) ** x
     = x ** (2*x)
Exercise 6, sample answer 2
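A quick numerical check of this derivation (a sketch; it assumes g = fun_2_fun(exp2) as defined above): g(3) should equal 3 ** (2*3) = 729.
print(g(3), 3 ** 6)  # both should print 729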
End of explanation |
444 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recognizing a Digit
In this example, we try to recognise digits of class 9 given training
examples from classes 0-8.
Step1: The $8\times 8$ images of digits are loaded from scikit-learn. Any digit $< 9$
is defined as the inlier class. Thus, the digit 9 is an outlier or anomaly. Note
this is similar to the SVC example.
Anomaly Model
Step2: ROC Curve
Step3: Confusion Matrix | Python Code:
import os
from IPython.display import Image
import numpy as np
from pathlib import Path
from sklearn import metrics
cwd = os.getcwd()
os.chdir(Path(cwd).parents[1])
from lsanomaly import LSAnomaly
import lsanomaly.notebooks.digits as demo
digits = os.path.join(os.getcwd(), "lsanomaly", "notebooks", "digits.png")
Explanation: Recognizing a Digit
In this example, we try to recognise digits of class 9 given training
examples from classes 0-8.
End of explanation
X_train, X_test, y_train, y_test = demo.data_prep(test_size=0.5)
anomaly_model = LSAnomaly()
anomaly_model.fit(X_train, y_train)
predictions = anomaly_model.predict_proba(X_test)
fpr, tpr, thresholds = metrics.roc_curve(y_test == 9, predictions[:, -1])
Explanation: The $8\times 8$ images of digits are loaded from scikit-learn. Any digit $< 9$
is defined as the inlier class. Thus, the digit 9 is an outlier or anomaly. Note
this is similar to the SVC example.
Anomaly Model
End of explanation
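If a hard yes/no decision is needed at a particular operating point (rather than the full ROC sweep below), one option is to threshold the score column used above; this is only a sketch and the 0.5 cut-off is an arbitrary choice:
is_nine_pred = predictions[:, -1] > 0.5  # flag samples whose anomaly probability exceeds the threshold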
demo.plot_roc(fpr, tpr, metrics.auc(fpr, tpr))
Explanation: ROC Curve
End of explanation
y_pred = anomaly_model.predict(X_test)
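# The predictions that are not real numbers mark outliers, so the next line maps
# any non-numeric prediction to digit 9 before building the confusion matrix.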
y_pred = [w if np.isreal(w) else 9 for w in y_pred]
demo.plot_confusion_matrix(y_test,
y_pred,
title='Confusion matrix',
normalize=False)
Explanation: Confusion Matrix
End of explanation |
445 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2. Classify Manhattan with TensorFlow
In this codelab, we will use TensorFlow to train a neural network to predict whether a location is in Manhattan or not, by looking at its longitude and latitude.
<br/>
<br/>
<br/>
Labs and Solutions
In this codelab there are several Labs (cells) where you need to write your code to solve the problems. If you need some hints, you may take a look at the Solution page to see the answers.
Update TensorFlow to version 1.0 or above
To check the version of TensorFlow running in your Cloud Datalab, select the cell below and run the code by clicking "Run" on the menu near the top of the page.
Step1: This codelab requires TensorFlow 1.0 or above. If you see older versions such as 0.11.0rc0, please follow the instruction below to update your local Datalab.
Stop your Datalab by pressing Ctrl+C on the console
Run the following command on the console
```
docker pull gcr.io/cloud-datalab/datalab
Step2: Preprocess the training data on BigQuery
In this codelab, we do not care about the car accidents. We just wanted to use the data for getting pairs of "latitude", "longitude" and "Is it Manhattan or not" values. So, we want to do the following preprocessing on this raw data
Step3: Import the BigQuery SQL result as NumPy array
Then, we need to execute the SQL code defined above using BigQuery and import the data into Datalab. For this purpose, Datalab provides BigQuery APIs that allows you to execute the define SQL and import the results as a NumPy array named nyc_cols. Run the cell below and confirm it loaded 10,000 rows.
Step4: Let's take a look at what's inside the result. Run the cell below and check the variable is_mt has an array of 1s and 0s representing each geolocation is in Manhattan or not, and the variable latlng has an array of pairs of latitude and longitude.
Step5: Lab
Step6: Now, add the necessary new code in the following cells, and run them, to get the result described in the comments with NumPy. You should refer to the NumPy Quickstart to learn how to get the results required.
Step7: 2-2. Feature scaling and splitting data
Now we've got the training data. However, it's not ready for training a neural network model yet. If you use the raw data directly, you would fail on the training because the scales of each feature (latitude and longitude in this case) are quite different.
In machine learning, it is very common to preprocess the raw data with feature scaling to normalize the feature data to have the same scale. That makes it much easier for machine learning algorithms to compare those features and find relationships between them.
In this codelab, we will use StandardScaler in scikit-learn. Scikit-learn is another popular library for machine learning in Python that provides a wide variety of training algorithms, preprocessing and validation tools.
The StandardScaler scales the features so that their mean value will be 0 and standard deviation will be 1. This scaling is called Standardization. Let's run the cell below and see how it scales the latitudes and longitudes and stores them into a variable latlng_std.
Step8: Lab
Step9: Plot the training data with Matplotlib
Now, all the preprocessing on the training data have been done. Let's see what the data looks like by using Matplotlib, the popular visualization library for Python. In this case we will use scatter() method to plot dots with the pairs of latitude and longitude. Run the cell below and see the plot.
Step10: You can see that the geolocations in Manhattan are plotted as blue dots, and others are yellow dots. Also, latitudes and longitudes are scaled to have 0 as the center.
Split the data into "Training Data" and "Test Data"
Before start training the neural network model, we need to separate out a part of the training data as test data. The test data will be used for checking accuracy of classifications by the model after training. This is common practice in machine learning, so that the performance of your model can be accurately evaluated.
Run the cell below and split the data into 8,000 pairs of training data and 2,000 pairs of test data.
Step11: Lab
Step12: The code above does the following
Step13: In the first method plot_predicted_map() at line 3, we call the predict() method of DNNClassifier class to get an array of prediction results (10,000 rows) like [1 0 0 1 ... 0 0 1 0] where 1 means that the neural network believes the geolocation is in Manhattan, and 0 means it's not. By using this array as an indexer for selecting lat and lng pairs in each class, the method plots geolocations predicted as Manhattan in blue dots and others in yellow dots.
In the second method print_accuracy() at line 9, we call the evaluate() method of DNNClassifier class to calculate the accuracy of the prediction using the test data latlng_test and is_mt_test and print it.
After defining these two methods, we call the fit() method of DNNClassifier class at line 14 to train the model for just one step. A step in the fit() method moves the weights and bias in the neural network only a little in the direction that reduces the network error. However, it usually takes thousands of steps for neural networks to find the best weights and bias. So, what you are effectively seeing is that the neural network in the initial state (= before the training) achieves a very low accuracy and cannot classify the Manhattan locations properly.
Train the neural network
Finally, let's actually train the neural network! This time, we will train the network by calling fit() method for 500 steps with the training data latlng_train and is_mt_train. Every 100 steps, we will call plot_predicted_map() and print_accuracy() to show the current accuracy of the network. Run the cell below and wait for a while until the message "Finished" is printed. You will see the network continually tries to move the weights and bias in small steps to minimize the error and find the best position of the line for classifying geolocations in Manhattan. The final accuracy should be as high as 97%.
Step14: Lab
Step15: The hidden layers give the power
The only difference from the last DNNClassifier definition is the hidden_units parameter which defines 4 hidden layers with 20 neurons each. As the network has a total of 5 layers, we're now working with a deep neural network ("deep" means you have more than 2 layers).
Let's see how the deep neural network works. Run the cell below and wait for a couple of minutes until it finishes training. | Python Code:
import tensorflow as tf
tf.__version__
Explanation: 2. Classify Manhattan with TensorFlow
In this codelab, we will use TensorFlow to train a neural network to predict whether a location is in Manhattan or not, by looking at its longitude and latitude.
<br/>
<br/>
<br/>
Labs and Solutions
In this codelab there are several Labs (cells) where you need to write your code to solve the problems. If you need some hints, you may take a look at the Solution page to see the answers.
Update TensorFlow to version 1.0 or above
To check the version of TensorFlow running in your Cloud Datalab, select the cell below and run the code by clicking "Run" on the menu near the top of the page.
End of explanation
%%sql -d standard
SELECT
timestamp,
borough,
latitude,
longitude
FROM
`bigquery-public-data.new_york.nypd_mv_collisions`
ORDER BY
timestamp DESC
LIMIT
15
Explanation: This codelab requires TensorFlow 1.0 or above. If you see older versions such as 0.11.0rc0, please follow the instruction below to update your local Datalab.
Stop your Datalab by pressing Ctrl+C on the console
Run the following command on the console
```
docker pull gcr.io/cloud-datalab/datalab:local
```
Run the docker run command to restart local Datalab
2-1. Importing the training data from BigQuery
To prepare for the analysis, we will download a training data from BigQuery, a fully managed scalable data warehouse service on Google Cloud. BigQuery provides many kinds of public datasets which makes it a useful datasource for learning data analytics with TensorFlow.
One of the public datasets is NYPD Motor Vehicle Collisions Data which collects all the car accidents happened in NYC from 2012 to the present. In this codelab, we will use it for getting 10,000 pairs of "borough" column and "latitude/longitude" columns.
Let's take a look at the data by executing a BigQuery SQL query. In Cloud Datalab, you can execute BigQuery commands by using the "%%sql" command (see this doc to learn more about the BigQuery commands). Select the cell below and run the query by clicking "Run" on the menu.
End of explanation
%%sql --module nyc_collisions
SELECT
IF(borough = 'MANHATTAN', 1, 0) AS is_mt,
latitude,
longitude
FROM
`bigquery-public-data.new_york.nypd_mv_collisions`
WHERE
LENGTH(borough) > 0
AND latitude IS NOT NULL AND latitude != 0.0
AND longitude IS NOT NULL AND longitude != 0.0
AND borough != 'BRONX'
ORDER BY
RAND()
LIMIT
10000
Explanation: Preprocess the training data on BigQuery
In this codelab, we do not care about the car accidents. We just wanted to use the data for getting pairs of "latitude", "longitude" and "Is it Manhattan or not" values. So, we want to do the following preprocessing on this raw data:
Add a column "is_mt" that returns 1 or 0 to indicate if the borough is Manhattan or not
Remove rows without borough info
Remove rows without longitude/latitude info
Remove rows for Bronx (because it's too close to Manhattan and hard to classify with single layer neural network!)
Randomly shuffle all the rows (for making the training data even)
Select only the 10,000 rows
So, our SQL with the preprocessing will look like the following. Select the cell below and run it. Please note that this only defines the SQL module "nyc_collisions" that will be used later and does not output anything.
End of explanation
import datalab.bigquery as bq
nyc_cols = bq.Query(nyc_collisions).to_dataframe(dialect='standard').as_matrix()
print(nyc_cols)
print("\nLoaded " + str(len(nyc_cols)) + " rows.")
Explanation: Import the BigQuery SQL result as NumPy array
Then, we need to execute the SQL code defined above using BigQuery and import the data into Datalab. For this purpose, Datalab provides BigQuery APIs that allow you to execute the defined SQL and import the results as a NumPy array named nyc_cols. Run the cell below and confirm it loaded 10,000 rows.
End of explanation
import numpy as np
is_mt = nyc_cols[:,0].astype(np.int32) # read the 0th column (is_mt) as int32
latlng = nyc_cols[:,1:3].astype(np.float32) # read the 1st and 2nd column (latitude and longitude) as float32
print("Is Manhattan: " + str(is_mt))
print("\nLat/Lng: \n\n" + str(latlng))
Explanation: Let's take a look at what's inside the result. Run the cell below and check that the variable is_mt has an array of 1s and 0s representing whether each geolocation is in Manhattan or not, and that the variable latlng has an array of pairs of latitude and longitude.
End of explanation
# create an numpy array with numbers from 0 to 14
A = np.arange(15)
print(A)
Explanation: Lab: NumPy basics
(You can skip this lab if you know how to use NumPy)
You might notice that we just used NumPy for extracting the results. NumPy is the most popular Python library for numerical calculations. For machine learning with Python, many people use NumPy for wide variety of numerical operations, including the basic array operations such as reshaping, merging, splitting, filtering, slicing and indexing. Many of TensorFlow APIs are also influenced by NumPy and use similar concepts. If you want to learn machine learning and TensorFlow with Python, we recommend you also learn some of the basics of NumPy too.
In this lab, Let's try a few basic array operations with NumPy. Run the cell below and see what kind of numpy array will be created.
End of explanation
# reshape the array A into an array with shape in 3 rows and 5 columns,
# set it to variable A, and print it.
# *** ADD YOUR CODE HERE ***
print(A)
# expected result:
# [[ 0 1 2 3 4]
# [ 5 6 7 8 9]
# [10 11 12 13 14]]
# print() the shape, data type name, size (total number of elements) of the array A
# *** ADD YOUR CODE HERE ***
# expected result:
# (3, 5)
# int64
# 15
# multiply the array A by the number 2 and print the result
# *** ADD YOUR CODE HERE ***
# expected result:
# [[ 0 2 4 6 8]
# [10 12 14 16 18]
# [20 22 24 26 28]]
# create a new array that has the same shape as the array A filled with zeros, and print it
# *** ADD YOUR CODE HERE ***
# expected result:
# [[ 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0.]]
# create a new array that has the elements in the right-most column of the array A
# *** ADD YOUR CODE HERE ***
# expected result:
# [ 4 9 14]
# Collect elements in array B with an index "I % 2 == 0" and print it
B = np.arange(10)
I = np.arange(10)
# *** ADD YOUR CODE HERE ***
# expected result:
# [0 2 4 6 8]
Explanation: Now, add the necessary new code in the following cells, and run them, to get the result described in the comments with NumPy. You should refer to the NumPy Quickstart to learn how to get the results required.
End of explanation
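If you get stuck on the lab cells above, one possible set of answers is sketched below (just one way to obtain the expected results, not the official solutions):
A = A.reshape(3, 5)                   # reshape into 3 rows x 5 columns
print(A.shape)                        # (3, 5)
print(A.dtype.name)                   # e.g. int64 (platform dependent)
print(A.size)                         # 15
print(A * 2)                          # elementwise multiplication by 2
print(np.zeros(A.shape))              # zeros with the same shape as A
print(A[:, -1])                       # right-most column: [ 4  9 14]
print(B[I % 2 == 0])                  # boolean indexing: [0 2 4 6 8]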
from sklearn.preprocessing import StandardScaler
latlng_std = StandardScaler().fit_transform(latlng)
print(latlng_std)
Explanation: 2-2. Feature scaling and splitting data
Now we've got the training data. However, it's not ready for training a neural network model yet. If you use the raw data directly, you would fail on the training because the scales of each feature (latitude and longitude in this case) are quite different.
In machine learning, it is very common to preprocess the raw data with feature scaling to normalize the feature data to have the same scale. That makes it much easier for machine learning algorithms to compare those features and find relationships between them.
In this codelab, we will use StandardScaler in scikit-learn. Scikit-learn is another popular library for machine learning in Python that provides a wide variety of training algorithms, preprocessing and validation tools.
The StandardScaler scales the features so that their mean value will be 0 and standard deviation will be 1. This scaling is called Standardization. Let's run the cell below and see how it scales the latitudes and longitudes and stores them into a variable latlng_std.
End of explanation
# *** ADD YOUR CODE HERE ***
Explanation: Lab: check the standardized feature values
Print mean and standard deviation values on both latitude and longitude of variable latlng_std by using NumPy and confirm the mean is almost 0 and standard deviation is 1.
End of explanation
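One possible answer for this lab (a sketch): after standardization the column-wise mean should be approximately 0 and the standard deviation approximately 1.
print(latlng_std.mean(axis=0))  # close to [0. 0.]
print(latlng_std.std(axis=0))   # close to [1. 1.]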
import matplotlib.pyplot as plt
lat = latlng_std[:,0]
lng = latlng_std[:,1]
plt.scatter(lng[is_mt == 1], lat[is_mt == 1], c='b') # plot points in Manhattan in blue
plt.scatter(lng[is_mt == 0], lat[is_mt == 0], c='y') # plot points outside Manhattan in yellow
plt.show()
Explanation: Plot the training data with Matplotlib
Now, all the preprocessing on the training data have been done. Let's see what the data looks like by using Matplotlib, the popular visualization library for Python. In this case we will use scatter() method to plot dots with the pairs of latitude and longitude. Run the cell below and see the plot.
End of explanation
# 8,000 pairs for training
latlng_train = latlng_std[0:8000]
is_mt_train = is_mt[0:8000]
# 2,000 pairs for test
latlng_test = latlng_std[8000:10000]
is_mt_test = is_mt[8000:10000]
print("Split finished.")
Explanation: You can see that the geolocations in Manhattan are plotted as blue dots, and others are yellow dots. Also, latitudes and longitudes are scaled to have 0 as the center.
Split the data into "Training Data" and "Test Data"
Before start training the neural network model, we need to separate out a part of the training data as test data. The test data will be used for checking accuracy of classifications by the model after training. This is common practice in machine learning, so that the performance of your model can be accurately evaluated.
Run the cell below and split the data into 8,000 pairs of training data and 2,000 pairs of test data.
End of explanation
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR) # supress warning messages
# define two feature columns consisting of real values
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=2)]
# create a neural network
dnnc = tf.contrib.learn.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[],
n_classes=2)
dnnc
Explanation: Lab: Discuss the preprocessing
Discuss the following topics with your buddy:
What preprocessing steps have we done so far?
Why is each preprocessing step required?
What NumPy and Matplotlib functions were used to plot the map?
Why do we need to split the data into training and testing sets?
2-3. Train the Neural Network with TensorFlow High level API
<br/>
<br/>
Now, let's use TensorFlow.
TensorFlow is an open source library for machine learning. You can define your own neural network or deep learning model and run the training on your laptop, or use many CPUs and GPUs in the cloud for scalable and faster training and prediction.
TensorFlow provides two kind of APIs:
High level API: provides easy-to-use predefined machine learning models
Low level API: provides customizable dataflow computation framework for machine learning
If you will use common neural network and machine learning models (such as fully-connected neural networks, convolutional neural networks, logistic regressions and k-means), the high level API is recommended. If you want to design your own neural network model with sophisticated or novel algorithms, or if you want to learn the underlying technology used for implementing the high level API, the low level API is the best option.
In this codelab, we will use the high level API first, and then look at the low level API to learn more about the underlying technology.
Define a single layer neural network
Run the cell below to define a neural network.
End of explanation
# plot a predicted map of Manhattan
def plot_predicted_map():
is_mt_pred = dnnc.predict(latlng_std, as_iterable=False) # an array of prediction results
plt.scatter(lng[is_mt_pred == 1], lat[is_mt_pred == 1], c='b')
plt.scatter(lng[is_mt_pred == 0], lat[is_mt_pred == 0], c='y')
plt.show()
# print the accuracy of the neural network
def print_accuracy():
accuracy = dnnc.evaluate(x=latlng_test, y=is_mt_test)["accuracy"]
print('Accuracy: {:.2%}'.format(accuracy))
# train the model for just 1 step and print the accuracy
dnnc.fit(x=latlng_train, y=is_mt_train, steps=1)
plot_predicted_map()
print_accuracy()
Explanation: The code above does the following:
Line 2 sets the log level to ERROR to supress warning messages
Line 5 defined the "feature columns" (columns in the training data used for training the model) as two dimensional real values
Line 8 defines a neural network by using DNNClassifier class with the following parameters:
No hidden units (= fully connected single layer neural network)
Two classes for classification (Manhattan or not)
In a nutshell, this code defines a neural network like the following illustration, which is the same single neuron we tried with the Playground, where we put latitude and longitude as inputs to x1 and x2 respectively.
<br/>
<br/>
Just like we saw on the Playground, the neuron can classify each datapoint into two groups by drawing a single straight line. While training this neuron with the training data, the neuron tries to move the weight and bias values to find what's the best angle and position for the line to classify Manhattan correctly.
<br/>
<br/>
So here, we're training a neural network (consisting of a single neuron) to classify whether a geolocation is in Manhattan or not by drawing a single straight line on the map.
Check the accuracy of the neural network
Before starting to train the neural network, let's define two methods for checking the accuracy of the neural network. Run the cell below.
End of explanation
steps = 100
for i in range (1, 6):
dnnc.fit(x=latlng_train, y=is_mt_train, steps=steps)
plot_predicted_map()
print('Steps: ' + str(i * steps))
print_accuracy()
print('\nTraining Finished.')
Explanation: In the first method plot_predicted_map() at line 3, we call the predict() method of DNNClassifier class to get an array of prediction results (10,000 rows) like [1 0 0 1 ... 0 0 1 0] where 1 means that the neural network believes the geolocation is in Manhattan, and 0 means it's not. By using this array as an indexer for selecting lat and lng pairs in each class, the method plots geolocations predicted as Manhattan in blue dots and others in yellow dots.
In the second method print_accuracy() at line 9, we call the evaluate() method of DNNClassifier class to calculate the accuracy of the prediction using the test data latlng_test and is_mt_test and print it.
After defining these two methods, we call the fit() method of DNNClassifier class at line 14 to train the model for just one step. A step in the fit() method moves the weights and bias in the neural network only a little in the direction that reduces the network error. However, it usually takes thousands of steps for neural networks to find the best weights and bias. So, what you are effectively seeing is that the neural network in the initial state (= before the training) achieves a very low accuracy and cannot classify the Manhattan locations properly.
Train the neural network
Finally, let's actually train the neural network! This time, we will train the network by calling fit() method for 500 steps with the training data latlng_train and is_mt_train. Every 100 steps, we will call plot_predicted_map() and print_accuracy() to show the current accuracy of the network. Run the cell below and wait for a while until the message "Finished" is printed. You will see the network continually tries to move the weights and bias in small steps to minimize the error and find the best position of the line for classifying geolocations in Manhattan. The final accuracy should be as high as 97%.
End of explanation
dnnc = tf.contrib.learn.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[20, 20, 20, 20],
n_classes=2)
dnnc
Explanation: Lab: Try training the neural network a couple of times
Go back to the section "Define a single layer neural network" and run the following cells again to train the network from scratch
Repeat the training a couple of times and confirm that the network's max accuracy is around 97%
Discuss with your buddy the key reason that the single layer network can't achieve accuracy higher than 97%. The single layer neural network is also known as Perceptron. You may refer to the Wikipedia page for Perceptron to learn more about its characteristics and limitations.
2-4. Train a Deep Neural Network with TensorFlow
You just saw that the network can only draw a straight line on the map and classify whether a location is in Manhattan or not. This is so-called Linear Classification. That is the limitation of the single layer neural network and you can only achieve around 97% accuracy because the straight line (linear classification) can't split the geolocation points between Manhattan and Brooklyn with the necessary curved boundary.
We must go deeper. Let's define a deep neural network (DNN). Run the cell below to define a new DNNClassifier.
End of explanation
steps = 30
for i in range (1, 6):
dnnc.fit(x=latlng_train, y=is_mt_train, steps=steps)
plot_predicted_map()
    print('Steps: ' + str(i * steps))
print_accuracy()
print('\nTraining Finished.')
Explanation: The hidden layers give the power
The only difference from the last DNNClassifier definition is the hidden_units parameter which defines 4 hidden layers with 20 neurons each. As the network has a total of 5 layers, we're now working with a deep neural network ("deep" means you have more than 2 layers).
Let's see how the deep neural network works. Run the cell below and wait for a couple of minutes until it finishes training.
End of explanation |
446 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CS446/546 - Class Session 8 - Components
In this class session we are going to find the number of proteins that are in the giant component of the (undirected) protein-protein interaction network, using igraph.
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4 | Python Code:
from igraph import Graph
from igraph import summary
import pandas
import numpy
Explanation: CS446/546 - Class Session 8 - Components
In this class session we are going to find the number of proteins that are in the giant component of the (undirected) protein-protein interaction network, using igraph.
End of explanation
sif_data = pandas.read_csv("shared/pathway_commons.sif",
sep="\t", names=["species1","interaction_type","species2"])
Explanation: Step 1: load in the SIF file (refer to Class 6 exercise) into a data frame sif_data, using the pandas.read_csv function, and name the columns species1, interaction_type, and species2.
End of explanation
interaction_types_ppi = set(["interacts-with",
"in-complex-with"])
interac_ppi = sif_data[sif_data.interaction_type.isin(interaction_types_ppi)].copy()
Explanation: Step 2: restrict the interactions to protein-protein undirected ("in-complex-with", "interacts-with"), by using the isin function and then using [ to index rows into the data frame. Call the returned data frame interac_ppi.
End of explanation
boolean_vec = interac_ppi['species1'] > interac_ppi['species2']
interac_ppi.loc[boolean_vec, ['species1', 'species2']] = interac_ppi.loc[boolean_vec, ['species2', 'species1']].values
interac_ppi_unique = interac_ppi[["species1","species2"]].drop_duplicates()
ppi_igraph = Graph.TupleList(interac_ppi_unique.values.tolist(), directed=False)
summary(ppi_igraph)
Explanation: Step 3: restrict the data frame to only the unique interaction pairs of proteins (ignoring the interaction type), and call that data frame interac_ppi_unique. Make an igraph Graph object from interac_ppi_unique using Graph.TupleList, values, and tolist. Call summary on the Graph object. Refer to the notebooks for the in-class exercises in Class sessions 3 and 6.
End of explanation
# call the `clusters` method on the `ppi_igraph` object, and assign the
# resulting `VertexClustering` object to have object name `ppi_components`
# call the `sizes` method on the `ppi_components` object, and assign the
# resulting list object to have the name `ppi_component_sizes`.
# make a `numpy.array` initialized by `ppi_component_sizes`, and find its
# maximum value using the `max` method on the `numpy.array` class
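# One possible completion of the steps described above (a sketch, not the official answer):
ppi_components = ppi_igraph.clusters()
ppi_component_sizes = ppi_components.sizes()
print(numpy.array(ppi_component_sizes).max())  # size of the giant component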
Explanation: Step 4: Map the components of the network using the igraph.Graph.clusters method. That method returns an igraph.clustering.VertexClustering object. Call the sizes method on that VertexClustering object, to get a list of sizes of the components. What is the giant component size?
End of explanation |
447 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data generation
Step1: Utility Methods
The below methods are used to load the data, prepare the data, parse the classifier and classification parameters, and fit and run the classifier. They should probably be moved to tax_credit.framework_functions.
Step2: Preparing data set sweep
First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless if you wish to change the datasets or reference databases used in the sweep.
Step3: Preparing the method/parameter combinations and generating commands
Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods.
Step4: Preparing the pipelines
The below pipelines are used to specify the scikit-learn classifiers that are used for assignment. At the moment we only include Naïve Bayes but the collection will expand.
Step5: Test
Step6: Do the Sweep
Step7: A quick sanity check never hurt anyone...
Step8: Generate per-method biom tables
Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
Step9: Move result files to repository
Add results to the short-read-taxa-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless if substantial changes were made to filepaths in the preceding cells. | Python Code:
from os.path import join, exists, split, sep, expandvars
from os import makedirs, getpid
from glob import glob
from shutil import rmtree
import csv
import json
import tempfile
from itertools import product
from qiime2.plugins import feature_classifier
from qiime2 import Artifact
from joblib import Parallel, delayed
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from q2_feature_classifier.classifier import spec_from_pipeline
from q2_types.feature_data import DNAIterator
from pandas import DataFrame
from tax_credit.framework_functions import (
gen_param_sweep, generate_per_method_biom_tables, move_results_to_repository)
project_dir = expandvars('$HOME/Desktop/projects/short-read-tax-assignment/')
analysis_name = 'mock-community'
data_dir = join(project_dir, 'data', analysis_name)
reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/")
results_dir = expandvars("$HOME/Desktop/projects/mock-community/")
Explanation: Data generation: using Python to sweep over methods and parameters
In this notebook, we illustrate how to use Python to perform parameter sweeps for a taxonomic assigner and integrate the results into the TAX CREdiT framework.
Environment preparation
End of explanation
# *** one glaring flaw here is that generate_pipeline_sweep iterates
# *** through all method_parameters_combinations and reference_dbs
# *** and hence will generate training sets for each combo even if
# *** not all are called by commands in sweep. This is not an issue
# *** if sweep uses all classifiers but is inconvenient if attempting
# *** to test on a subset of sweep. Need to explicitly set all inputs!
def train_and_run_classifier(method_parameters_combinations, reference_dbs,
pipelines, sweep, verbose=False, n_jobs=4):
'''Train and run q2-feature-classifier across a parameter sweep.
method_parameters_combinations: dict of dicts of lists
Classifier methods to run and their parameters/values to sweep
Format: {method_name: {'parameter_name': [parameter_values]}}
reference_dbs: dict of tuples
Reference databases to use for classifier training.
Format: {database_name: (ref_seqs, ref_taxonomy)}
pipelines: dict
Classifier pipelines to use for training each method.
Format: {method_name: sklearn.pipeline.Pipeline}
sweep: list of tuples
output of gen_param_sweep(), format:
(parameter_output_dir, input_dir, reference_seqs, reference_tax, method, params)
n_jobs: number of jobs to run in parallel.
'''
# train classifier once for each pipeline param combo
for method, db, pipeline_param, subsweep in generate_pipeline_sweep(
method_parameters_combinations, reference_dbs, sweep):
ref_reads, ref_taxa = reference_dbs[db]
# train classifier
classifier = train_classifier(
ref_reads, ref_taxa, pipeline_param, pipelines[method], verbose=verbose)
# run classifier. Only run in parallel once classifier is trained,
# to minimize memory usage (don't want to train large refs in parallel)
Parallel(n_jobs=n_jobs)(delayed(run_classifier)(
classifier, output_dir, input_dir, split_params(params)[0], verbose=verbose)
for output_dir, input_dir, rs, rt, mt, params in subsweep)
def generate_pipeline_sweep(method_parameters_combinations, reference_dbs, sweep):
'''Generate pipeline parameters for each classifier training step'''
# iterate over parameters
for method, params in method_parameters_combinations.items():
# split out pipeline parameters
classifier_params, pipeline_params = split_params(params)
# iterate over reference dbs
for db, refs in reference_dbs.items():
# iterate over all pipeline parameter combinations
for param_product in product(*[params[id_] for id_ in pipeline_params]):
# yield parameter combinations to use for a each classifier
pipeline_param = dict(zip(pipeline_params, param_product))
subsweep = [p for p in sweep if split_params(p[5])[1]
== pipeline_param and p[2] == refs[0]]
yield method, db, pipeline_param, subsweep
def train_classifier(ref_reads, ref_taxa, params, pipeline, verbose=False):
ref_reads = Artifact.load(ref_reads)
ref_taxa = Artifact.load(ref_taxa)
pipeline.set_params(**params)
spec = json.dumps(spec_from_pipeline(pipeline))
if verbose:
print(spec)
classifier = feature_classifier.methods.fit_classifier(ref_reads, ref_taxa, spec)
    # Return the trained classifier so the sweep can pass it on to run_classifier.
    return classifier.classifier
def run_classifier(classifier, output_dir, input_dir, params, verbose=False):
# Classify the sequences
rep_seqs = Artifact.load(join(input_dir, 'rep_seqs.qza'))
if verbose:
print(output_dir)
classification = feature_classifier.methods.classify(rep_seqs, classifier, **params)
# Save the results
makedirs(output_dir, exist_ok=True)
output_file = join(output_dir, 'rep_set_tax_assignments.txt')
dataframe = classification.classification.view(DataFrame)
dataframe.to_csv(output_file, sep='\t', header=False)
def split_params(params):
classifier_params = feature_classifier.methods.\
classify.signature.parameters.keys()
pipeline_params = {k:v for k, v in params.items()
if k not in classifier_params}
classifier_params = {k:v for k, v in params.items()
if k in classifier_params}
return classifier_params, pipeline_params
Explanation: Utility Methods
The below methods are used to load the data, prepare the data, parse the classifier and classification parameters, and fit and run the classifier. They should probably be moved to tax_credit.framework_functions.
End of explanation
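To see how split_params divides a parameter dictionary into classifier arguments and pipeline arguments, here is a small illustration (a sketch; which keys count as classifier parameters depends on the installed q2-feature-classifier version):
example_params = {'confidence': 0.7, 'classify__alpha': 0.01}
print(split_params(example_params))  # expected: ({'confidence': 0.7}, {'classify__alpha': 0.01}) if 'confidence' is a classify parameter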
dataset_reference_combinations = [
('mock-1', 'gg_13_8_otus'), # formerly S16S-1
('mock-2', 'gg_13_8_otus'), # formerly S16S-2
('mock-3', 'gg_13_8_otus'), # formerly Broad-1
('mock-4', 'gg_13_8_otus'), # formerly Broad-2
('mock-5', 'gg_13_8_otus'), # formerly Broad-3
# ('mock-6', 'gg_13_8_otus'), # formerly Turnbaugh-1
('mock-7', 'gg_13_8_otus'), # formerly Turnbaugh-2
('mock-8', 'gg_13_8_otus'), # formerly Turnbaugh-3
('mock-9', 'unite_20.11.2016_clean_fullITS'), # formerly ITS1
('mock-10', 'unite_20.11.2016_clean_fullITS'), # formerly ITS2-SAG
('mock-12', 'gg_13_8_otus'), # Extreme
# ('mock-13', 'gg_13_8_otus_full16S'), # kozich-1
# ('mock-14', 'gg_13_8_otus_full16S'), # kozich-2
# ('mock-15', 'gg_13_8_otus_full16S'), # kozich-3
('mock-16', 'gg_13_8_otus'), # schirmer-1
]
reference_dbs = {'gg_13_8_otus' : (join(reference_database_dir, 'gg_13_8_otus/rep_set/99_otus_515f-806r.qza'),
join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza')),
# 'gg_13_8_otus_full16S' : (join(reference_database_dir, 'gg_13_8_otus/rep_set/99_otus.qza'),
# join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza')),
'unite_20.11.2016_clean_fullITS' : (join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean.qza'),
join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')),
# 'unite_20.11.2016' : (join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_BITSf-B58S3r_trim250.qza'),
# join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev.qza'))
}
Explanation: Preparing data set sweep
First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless you wish to change the datasets or reference databases used in the sweep.
End of explanation
method_parameters_combinations = {
'q2-multinomialNB': {'confidence': [0.0, 0.2, 0.4, 0.6, 0.8],
'classify__alpha': [0.001, 0.01, 0.1],
'feat_ext__ngram_range': [[8,8], [12,12], [20,20]]},
'q2-logisticregression': {'classify__solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag']},
'q2-randomforest': {'classify__max_features': ['sqrt', 'None'],
'classify__n_estimators': [5, 10, 100]}
}
Explanation: Preparing the method/parameter combinations and generating commands
Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods.
End of explanation
# pipeline params common to all classifiers are set here
hash_params = dict(
analyzer='char_wb', n_features=8192, non_negative=True, ngram_range=[8, 8])
# any params common to all classifiers can be set here
classify_params = dict()
def build_pipeline(classifier, hash_params, classify_params):
return Pipeline([
('feat_ext', HashingVectorizer(**hash_params)),
('classify', classifier(**classify_params))])
# Now fit the pipelines.
pipelines = {'q2-multinomialNB': build_pipeline(
MultinomialNB, hash_params, {'fit_prior': False}),
'q2-logisticregression': build_pipeline(
LogisticRegression, hash_params, classify_params),
'q2-randomforest': build_pipeline(
RandomForestClassifier, hash_params, classify_params)}
Explanation: Preparing the pipelines
The below pipelines are used to specify the scikit-learn classifiers that are used for assignment. At the moment we include multinomial naive Bayes, logistic regression, and random forest pipelines, and the collection may expand.
End of explanation
dataset_reference_combinations = [
('mock-3', 'gg_13_8_otus'), # formerly Broad-1
]
method_parameters_combinations = {
'q2-randomforest': {'classify__max_features': ['sqrt'],
'classify__n_estimators': [5]}
}
reference_dbs = {'gg_13_8_otus' : (join(reference_database_dir, 'gg_13_8_otus/rep_set/99_otus_515f-806r.qza'),
join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza'))}
Explanation: Test
End of explanation
sweep = gen_param_sweep(data_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations)
sweep = list(sweep)
Explanation: Do the Sweep
End of explanation
print(len(sweep))
sweep[0]
train_and_run_classifier(method_parameters_combinations, reference_dbs, pipelines, sweep, verbose=True, n_jobs=4)
Explanation: A quick sanity check never hurt anyone...
End of explanation
taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'rep_set_tax_assignments.txt')
generate_per_method_biom_tables(taxonomy_glob, data_dir)
Explanation: Generate per-method biom tables
Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
End of explanation
precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name)
method_dirs = glob(join(results_dir, '*', '*', '*', '*'))
move_results_to_repository(method_dirs, precomputed_results_dir)
Explanation: Move result files to repository
Add results to the short-read-taxa-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless substantial changes were made to filepaths in the preceding cells.
End of explanation |
448 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LeNet Lab
Source
Step1: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
Step2: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
Step3: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
Step4: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
Step5: TODO
Step6: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
Step7: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
Step8: Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
Step9: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
Step10: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section. | Python Code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
Explanation: LeNet Lab
Source: Yan LeCun
Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
End of explanation
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
End of explanation
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
Explanation: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
End of explanation
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
Explanation: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
End of explanation
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
Explanation: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
End of explanation
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Hyperparameters
mu = 0
sigma = 0.1
dropout = 0.75
# TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
    # Initialize the weights with the mu/sigma hyperparameters defined above.
    weights = {
        'wc1': tf.Variable(tf.truncated_normal([5, 5, 1, 6], mean=mu, stddev=sigma)),
        'wc2': tf.Variable(tf.truncated_normal([5, 5, 6, 16], mean=mu, stddev=sigma)),
        'wd1': tf.Variable(tf.truncated_normal([400, 120], mean=mu, stddev=sigma)),
        'wd2': tf.Variable(tf.truncated_normal([120, 84], mean=mu, stddev=sigma)),
        'wd3': tf.Variable(tf.truncated_normal([84, 10], mean=mu, stddev=sigma))}
    biases = {
        'bc1': tf.Variable(tf.zeros(6)),
        'bc2': tf.Variable(tf.zeros(16)),
        'bd1': tf.Variable(tf.zeros(120)),
        'bd2': tf.Variable(tf.zeros(84)),
        'bd3': tf.Variable(tf.zeros(10))}
conv1 = tf.nn.conv2d(x, weights['wc1'], strides=[1, 1, 1, 1], padding='VALID')
conv1 = tf.nn.bias_add(conv1, biases['bc1'])
# TODO: Activation.
conv1 = tf.nn.relu(conv1)
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
ksize = [1,2,2,1]
strides = [1,2,2,1]
padding = 'VALID'
conv1 = tf.nn.max_pool(conv1, ksize, strides, padding)
# TODO: Layer 2: Convolutional. Output = 10x10x16.
conv2 = tf.nn.conv2d(conv1, weights['wc2'], strides=[1, 1, 1, 1], padding='VALID')
conv2 = tf.nn.bias_add(conv2, biases['bc2'])
# TODO: Activation.
conv2 = tf.nn.relu(conv2)
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
ksize = [1,2,2,1]
strides = [1,2,2,1]
padding = 'VALID'
conv2 = tf.nn.max_pool(conv2, ksize, strides, padding)
# TODO: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1 = tf.add(tf.matmul(fc0, weights['wd1']), biases['bd1'])
# TODO: Activation.
fc1 = tf.nn.relu(fc1)
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2 = tf.add(tf.matmul(fc1, weights['wd2']), biases['bd2'])
# TODO: Activation.
fc2 = tf.nn.relu(fc2)
# TODO: Layer 5: Fully Connected. Input = 84. Output = 10.
logits = tf.add(tf.matmul(fc2, weights['wd3']), biases['bd3'])
return logits
Explanation: TODO: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the result of the final fully connected layer (the logits).
End of explanation
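As a quick way to confirm the layer dimensions listed above, you could run the model once on a dummy placeholder in a throwaway graph (a sketch; it assumes the LeNet function defined in this notebook):
with tf.Graph().as_default():
    probe = tf.placeholder(tf.float32, (None, 32, 32, 1))
    print(LeNet(probe).get_shape())  # expect (?, 10) for the logits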
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
Explanation: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
End of explanation
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
Explanation: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
End of explanation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
Explanation: Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, 'lenet')
print("Model saved")
Explanation: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
End of explanation |
449 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.ml - Texte et machine learning
Revue de méthodes de word embedding statistiques (~ NLP) ou comment transformer une information textuelle en vecteurs dans un espace vectoriel (features) ? Deux exercices sont ajoutés à la fin.
Step1: Données
Nous allons travailler sur des données twitter collectées avec le mot-clé macron
Step2: 5000 tweets n'est pas assez pour tirer des conclusions mais cela donne une idée. On supprime les valeurs manquantes.
Step3: Construire une pondération
Le texte est toujours délicat à traiter. Il n'est pas toujours évident de sortir d'une information binaire
Step4: Sans cette colonne qui mesure la popularité, il faut trouver un moyen d'extraire de l'information. On découpe alors en mots et on constuire un modèle de langage
Step5: n-grammes
N-Gram-Based Text Categorization
Step6: Exercice 1
Step7: On aboutit à une matrice sparse ou chaque expression est représentée à une vecteur ou chaque 1 représente l'appartenance d'un mot à l'ensemble.
Step8: td-idf
Ce genre de technique produit des matrices de très grande dimension qu'il faut réduire. On peut enlever les mots rares ou les mots très fréquents. td-idf est une technique qui vient des moteurs de recherche. Elle construit le même type de matrice (même dimension) mais associe à chaque couple (document - mot) un poids qui dépend de la fréquence d'un mot globalement et du nombre de documents contenant ce mot.
$$idf(t) = \log \frac{# D}{#{d \; | \; t \in d }}$$
Où
Step9: Exercice 3
Step10: Tagging
L'objectif est de tagger les mots comme déterminer si un mot est un verbe, un adjectif ...
grammar
Voir html.grammar.
CRF
Voir CRF
HMM
Voir HMM.
Clustering
Une fois qu'on a des coordonnées, on peut faire plein de choses.
LDA
Latent Dirichlet Allocation
LatentDirichletAllocation | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.ml - Text and machine learning
A review of statistical word-embedding methods (~ NLP), or how to turn textual information into vectors in a vector space (features)? Two exercises are added at the end.
End of explanation
from ensae_teaching_cs.data import twitter_zip
df = twitter_zip(as_df=True)
df.head(n=2).T
df.shape
Explanation: Data
We will work with Twitter data collected with the keyword macron: tweets_macron_sijetaispresident_201609.zip.
End of explanation
data = df[["retweet_count", "text"]].dropna()
data.shape
Explanation: 5000 tweets is not enough to draw conclusions, but it gives an idea. We drop the missing values.
End of explanation
data.sort_values("retweet_count", ascending=False).head()
Explanation: Building a weighting
Text is always tricky to process. It is not always easy to get beyond binary information: is a word present or not? Words have no numerical meaning, and a list of tweets does not mean much on its own except when sorted by another column: the retweet count, for example.
End of explanation
from nltk.tokenize import TweetTokenizer
tknzr = TweetTokenizer(preserve_case=False)
tokens = tknzr.tokenize(data.loc[0, "text"])
tokens
Explanation: Without this column measuring popularity, we need another way to extract information. We therefore split the text into words and build a language model: n-grams. If a tweet consists of the word sequence $(w_1, w_2, ..., w_k)$, we define its probability as:
$$P(tweet) = P(w_1, w_2) P(w_3 | w_2, w_1) P(w_4 | w_3, w_2) ... P(w_k | w_{k-1}, w_{k-2})$$
In this case $n=3$, because we assume that the probability of a word appearing depends only on the two previous words. Each n-gram is estimated as follows:
$$P(c | a, b) = \frac{\#(a, b, c)}{\#(a, b)}$$
That is, the number of times the sequence $(a,b,c)$ is observed divided by the number of times the sequence $(a,b)$ is observed.
Tokenization
Splitting into words looks simple (tweet.split()), but text always brings surprises: hyphens, capitalization, extra spaces. We use a dedicated tokenizer: TweetTokenizer, or a tokenizer that takes the language into account.
End of explanation
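To make the estimation formula above concrete, here is a small illustrative sketch (not part of the original notebook) that counts trigrams and bigrams with collections.Counter. The argument tokenized_tweets is a hypothetical list of token lists, such as the output of the tokenizer above applied to each tweet.
```python
from collections import Counter

def trigram_probability(tokenized_tweets, a, b, c):
    # Count occurrences of (a, b, c) and (a, b) over all tweets.
    tri_counts = Counter()
    bi_counts = Counter()
    for tokens in tokenized_tweets:
        tri_counts.update(zip(tokens, tokens[1:], tokens[2:]))
        bi_counts.update(zip(tokens, tokens[1:]))
    if bi_counts[(a, b)] == 0:
        return 0.0
    # P(c | a, b) = #(a, b, c) / #(a, b)
    return tri_counts[(a, b, c)] / bi_counts[(a, b)]
```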
from nltk.util import ngrams
generated_ngrams = ngrams(tokens, 4, pad_left=True, pad_right=True)
list(generated_ngrams)
Explanation: n-grams
N-Gram-Based Text Categorization: Categorizing Text With Python
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
counts = count_vect.fit_transform(data["text"])
counts.shape
Explanation: Exercise 1: compute n-grams on the tweets
Cleaning
All models are more stable without stop words, that is, words that appear in practically any document and carry no meaning (à, de, le, la, ...). Accents and punctuation are often removed as well. Less variability means more reliable statistics.
Exercise 2: clean the tweets
See stem.
Graph structure
This time we want to build coordinates for each tweet.
adjacency matrix
A common option is to split each expression into words and then build an expression x word matrix in which each cell indicates whether a word occurs in an expression.
End of explanation
type(counts)
counts[:5,:5].toarray()
data.loc[0,"text"]
counts[0,:].sum()
Explanation: We end up with a sparse matrix in which each expression is represented by a vector where each 1 indicates that a word belongs to the set.
End of explanation
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer()
res = tfidf.fit_transform(counts)
res.shape
res[0,:].sum()
Explanation: tf-idf
This kind of technique produces very high-dimensional matrices that need to be reduced. We can remove rare words or very frequent words. tf-idf is a technique that comes from search engines. It builds the same kind of matrix (same dimensions) but assigns each (document, word) pair a weight that depends on the overall frequency of the word and on the number of documents containing it.
$$idf(t) = \log \frac{\# D}{\#\{d \; | \; t \in d \}}$$
Where:
$\# D$ is the number of tweets
$\#\{d \; | \; t \in d \}$ is the number of tweets containing the word $t$
$f(t,d)$ is the number of occurrences of a word $t$ in a document $d$.
$$tf(t,d) = \frac{1}{2} + \frac{1}{2} \frac{f(t,d)}{\max_{t' \in d} f(t',d)}$$
We then define the quantity $tfidf(t,d)$:
$$tfidf(t,d) = tf(t,d) \cdot idf(t)$$
The term $idf(t)$ favors words present in few documents, while the term $tf(t,d)$ favors terms repeated many times in the same document. We apply this to the previous matrix.
End of explanation
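To make the formulas concrete, here is a small hand-rolled sketch on a toy corpus (illustrative only, and the documents are made up; scikit-learn's TfidfTransformer uses slightly different smoothing and normalization than these exact formulas).
```python
import math

# Hypothetical toy corpus of three tokenized "tweets".
docs = [["je", "vote", "macron"], ["macron", "president"], ["je", "vote"]]
vocab = sorted({w for d in docs for w in d})

def idf(t):
    # log(#D / #{d | t in d})
    return math.log(len(docs) / sum(1 for d in docs if t in d))

def tf(t, d):
    # 0.5 + 0.5 * f(t, d) / max_{t'} f(t', d)
    return 0.5 + 0.5 * d.count(t) / max(d.count(w) for w in d)

tfidf_matrix = [[tf(t, d) * idf(t) if t in d else 0.0 for t in vocab] for d in docs]
```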
sentences = [tknzr.tokenize(_) for _ in data["text"]]
sentences[0]
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
model = gensim.models.Word2Vec(sentences, min_count=1)
model.wv.similar_by_word("fin")
model.wv["fin"].shape
model.wv["fin"]
Explanation: Exercise 3: tf-idf without keywords
The matrix built this way is very high-dimensional. We need a way to reduce it, for example with TfidfVectorizer.
word2vec
word2vec From theory to practice
Efficient Estimation of Word Representations in Vector Space
word2vec
This algorithm starts from a representation of words as vectors in a space of dimension N = the number of distinct words. A word is represented as $(0,0, ..., 0, 1, 0, ..., 0)$. The trick is to reduce the number of dimensions by compressing with a PCA or a non-linear neural network.
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,
max_features=1000)
tfidf = tfidf_vectorizer.fit_transform(data["text"])
tfidf.shape
from sklearn.decomposition import NMF, LatentDirichletAllocation
lda = LatentDirichletAllocation(n_components=10, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0)
lda.fit(tfidf)
tf_feature_names = tfidf_vectorizer.get_feature_names()
tf_feature_names[100:103]
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
print_top_words(lda, tf_feature_names, 10)
tr = lda.transform(tfidf)
tr[:5]
tr.shape
import pyLDAvis
import pyLDAvis.sklearn
pyLDAvis.enable_notebook()
pyLDAvis.sklearn.prepare(lda, tfidf, tfidf_vectorizer)
Explanation: Tagging
The goal is to tag words, for example to determine whether a word is a verb, an adjective ...
grammar
See html.grammar.
CRF
See CRF
HMM
See HMM.
Clustering
Once we have coordinates, we can do a lot of things.
LDA
Latent Dirichlet Allocation
LatentDirichletAllocation
End of explanation |
450 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this
Step4: Affine layer
Step5: Affine layer
Step6: ReLU layer
Step7: ReLU layer
Step8: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass
Step9: Loss layers
Step10: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
Step11: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
Step12: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
Step15: Inline question
Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
Step17: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop
Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules
Step19: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
Step20: Test you model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set. | Python Code:
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in list(data.items()):
print(('%s: ' % k, v.shape))
Explanation: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
Receive inputs x and weights w
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
Receive derivative of loss with respect to outputs and cache,
and compute derivative with respect to inputs.
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.
End of explanation
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done you can test your implementation by running the following:
End of explanation
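For reference, one possible implementation of affine_forward that is consistent with the numerical check above; this is a sketch of what cs231n/layers.py could contain, not necessarily the official solution.
```python
def affine_forward(x, w, b):
    # Flatten each input in the minibatch to a row vector, then compute x.dot(w) + b.
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b
    cache = (x, w, b)
    return out, cache
```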
# Test the affine_backward function
np.random.seed(231)
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: Affine layer: backward
Now implement the affine_backward function and test your implementation using numeric gradient checking.
End of explanation
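A matching sketch of affine_backward, consistent with the gradient check above (again an illustrative implementation, not necessarily the official one).
```python
def affine_backward(dout, cache):
    # Backpropagate through out = x.reshape(N, -1).dot(w) + b.
    x, w, b = cache
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)   # same shape as the original input
    dw = x.reshape(N, -1).T.dot(dout)
    db = dout.sum(axis=0)
    return dx, dw, db
```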
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 5e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: ReLU layer: forward
Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:
End of explanation
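A minimal sketch of relu_forward consistent with the check above.
```python
def relu_forward(x):
    # Elementwise max(0, x); cache the input for the backward pass.
    out = np.maximum(0, x)
    cache = x
    return out, cache
```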
np.random.seed(231)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 3e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
Explanation: ReLU layer: backward
Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:
End of explanation
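And the corresponding relu_backward sketch: the upstream gradient is passed through only where the forward input was positive.
```python
def relu_backward(dout, cache):
    # Gradient flows only through the entries where the input was positive.
    x = cache
    dx = dout * (x > 0)
    return dx
```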
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:
End of explanation
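A sketch of how the two sandwich functions can be composed from the individual layers. The exact cache format here (a tuple of the two sub-caches) is an assumption; cs231n/layer_utils.py may organize it slightly differently.
```python
def affine_relu_forward(x, w, b):
    # Affine transform followed by a ReLU; keep both intermediate caches.
    a, fc_cache = affine_forward(x, w, b)
    out, relu_cache = relu_forward(a)
    return out, (fc_cache, relu_cache)

def affine_relu_backward(dout, cache):
    # Undo the ReLU first, then the affine transform.
    fc_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)
    return affine_backward(da, fc_cache)
```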
np.random.seed(231)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print('Testing svm_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.
You can make sure that the implementations are correct by running the following:
End of explanation
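For reference, one standard, numerically stable way to write softmax_loss; the version shipped in cs231n/layers.py may differ in details, so treat this as a sketch.
```python
def softmax_loss(x, y):
    # Shift logits for numerical stability, then average cross-entropy over the batch.
    shifted = x - np.max(x, axis=1, keepdims=True)
    Z = np.sum(np.exp(shifted), axis=1, keepdims=True)
    log_probs = shifted - np.log(Z)
    probs = np.exp(log_probs)
    N = x.shape[0]
    loss = -np.sum(log_probs[np.arange(N), y]) / N
    dx = probs.copy()
    dx[np.arange(N), y] -= 1
    dx /= N
    return loss, dx
```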
np.random.seed(231)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-3
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
Explanation: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
End of explanation
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
lr_decay=0.95,
num_epochs=10, batch_size=100,
print_every=100)
solver.train()
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
Explanation: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
End of explanation
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
Explanation: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
End of explanation
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 1e-2
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
End of explanation
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-3
weight_scale = 1e-1
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
End of explanation
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print('next_w error: ', rel_error(next_w, expected_next_w))
print('velocity error: ', rel_error(expected_velocity, config['velocity']))
Explanation: Inline question:
Did you notice anything about the comparative difficulty of training the three-layer net vs training the five-layer net?
Answer:
Tuning the learning rate is more effective for training the three-layer net, while tuning the weight initialization scale matters more for the five-layer net.
Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.
End of explanation
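A sketch of the sgd_momentum rule that is consistent with the expected values in the check above, assuming the default momentum of 0.9 used by the assignment; the actual cs231n/optim.py version may differ cosmetically.
```python
def sgd_momentum(w, dw, config=None):
    # v <- momentum * v - learning_rate * dw;  w <- w + v
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw
    next_w = w + v
    config['velocity'] = v
    return next_w, config
```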
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
End of explanation
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
Explanation: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
End of explanation
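Hedged sketches of the two update rules in their standard forms. In particular, whether t is incremented before or after computing the bias correction in Adam affects the numbers slightly, so the exact placement should be checked against the expected values above.
```python
def rmsprop(w, dw, config=None):
    # Moving average of squared gradients; divide the step by its square root.
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('decay_rate', 0.99)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('cache', np.zeros_like(w))
    config['cache'] = config['decay_rate'] * config['cache'] + (1 - config['decay_rate']) * dw**2
    next_w = w - config['learning_rate'] * dw / (np.sqrt(config['cache']) + config['epsilon'])
    return next_w, config

def adam(w, dw, config=None):
    # First moment (momentum) plus second moment (RMSProp) with bias correction.
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-3)
    config.setdefault('beta1', 0.9)
    config.setdefault('beta2', 0.999)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('m', np.zeros_like(w))
    config.setdefault('v', np.zeros_like(w))
    config.setdefault('t', 0)
    config['t'] += 1
    config['m'] = config['beta1'] * config['m'] + (1 - config['beta1']) * dw
    config['v'] = config['beta2'] * config['v'] + (1 - config['beta2']) * dw**2
    m_hat = config['m'] / (1 - config['beta1']**config['t'])
    v_hat = config['v'] / (1 - config['beta2']**config['t'])
    next_w = w - config['learning_rate'] * m_hat / (np.sqrt(v_hat) + config['epsilon'])
    return next_w, config
```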
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:
End of explanation
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch normalization and dropout useful. Store your best model in the #
# best_model variable. #
################################################################################
model = FullyConnectedNet([100, 100, 100, 100], weight_scale=1e-2)
solver = Solver(model, data,
num_epochs=10, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 1e-3
},
print_every=100,
verbose=True)
solver.train()
best_model = model
################################################################################
# END OF YOUR CODE #
################################################################################
Explanation: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
End of explanation
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())
print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
Explanation: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
End of explanation |
451 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
blank
Data Science meets <br/> Software Data
<b>Markus Harrer</b>, Software Development Analyst
@feststelltaste
<small>20 Jahre INNOQ Event, 13.09.2019</small>
<img src="../resources/innoq_logo.jpg" width=20% height="20%" align="right"/>
Data Science
Was ist Data Science?
"Statistik auf nem <b><span class="green">Mac</span></b>."
<br/>
<br/>
<div align="right"><small>Nach https
Step1: Was haben wir hier eigentlich?
Step2: <b>1</b> DataFrame (~ programmierbares Excel-Arbeitsblatt), <b>6</b> Series (= Spalten), <b>1128819</b> Rows (= Einträge)
Wir wandeln die Zeitstempel von Texte in Objekte um.
Step3: Wir sehen uns nur die jüngsten Änderungen an.
Step4: Wir wollen nur Java-Code verwenden.
Step5: III. Modellierung
Neue Sichten schaffen
Weitere Daten verschneiden
Wir aggregieren die Zeilen sowie die Anzahl der Änderungen pro Datei.
Step6: Wir holen Infos über die Code-Zeilen hinzu...
Step7: ...und verschneiden diese mit den vorhandenen Daten.
Step8: VI. Interpretation
Problem
Step9: V. Kommunikation
Ergebnisse managementtauglich darstellen
Nächste Schritte lostreten
Wir plotten die TOP 10 Liste als XY-Diagramm. | Python Code:
import pandas as pd
log = pd.read_csv("../dataset/git_log_intellij.csv.gz")
log.head()
Explanation: blank
Data Science meets <br/> Software Data
<b>Markus Harrer</b>, Software Development Analyst
@feststelltaste
<small>20 Jahre INNOQ Event, 13.09.2019</small>
<img src="../resources/innoq_logo.jpg" width=20% height="20%" align="right"/>
Data Science
Was ist Data Science?
"Statistik auf nem <b><span class="green">Mac</span></b>."
<br/>
<br/>
<div align="right"><small>Nach https://twitter.com/cdixon/status/428914681911070720</small></div>
<b>Data Science Venn Diagram (Drew Conway)</b>
<img src="../resources/venn_diagram.png" width=50% height="50%" >
Meine Definition
Was bedeutet "data"?
"Without data you‘re just another person with an opinion."
<br/>
<div align="right"><small>W. Edwards Deming</small></div>
<b>=> Belastbare Erkenntnisse mittels <span class="green">Fakten</span> liefern</b>
Was bedeutet "science"?
"The aim of science is to seek the simplest explanations of complex facts."
<br/>
<div align="right"><small>Albert Einstein</small></div>
<b>=> Neue Erkenntnisse <span class="green">verständlich</span> herausarbeiten</b>
Vorgehen
<small>Nach Roger Pengs "Stages of Data Analysis"</small><br/>
I. Fragestellung
II. Datenbeschaffung
III. Modellierung
IV. Interpretation
V. Kommunikation
<b>=> von der <strong>Frage</strong> über die <span class="green">Daten</span> zur <span class="blue" style="background-color: #FFFF00">Erkenntnis</span>!</b>
Was hat das mit Softwareentwicklung zu tun?
Software Data
Was ist Software Data?
Statisch
Laufzeit
Chronologisch
Community
<b>=> Krass viel!</b>
Was ist ein Data Scientist?
"Jemand, der mehr Ahnung von Statistik<br/>
hat als ein <b><span class="green">Softwareentwickler</span></b><br/>
und mehr Ahnung von <b><span class="green">Softwareentwicklung</span></b><br/>
als ein Statistiker."
<br/>
<br/>
<div align="right"><small>Nach zu https://twitter.com/cdixon/status/428914681911070720</small></div>
<b>Data Science & Software Data:</b> Perfect <b><span class="green">match</span></b>!
Beispiele für Analysen
Modularisierungsschnitte finden
Performance-Bottlenecks identifizieren
Verborgene Code-Abhängigkeiten aufdecken
Open-Source-Communities bewerten
...
<b>Individuelle Systeme => individuelle Probleme => individuelle Analysen => individuelle Erkenntnisse!</b>
Software Analytics
Definition Software Analytics
"Software Analytics is analytics on software data for managers and <b class="green">software engineers</b> with the aim of empowering software development individuals and teams to <i>gain and share insight from their data</i> to <b>make better decisions</b>."
<br/>
<div align="right"><small>Tim Menzies and Thomas Zimmermann</small></div>
<img src="../resources/kombar0.png" width=95% align="center"/>
<img src="../resources/kombar4.png" width=95% align="center"/>
Lösungsstrategie
Grundprinzip
(Intent + Code + Data + Results)<br />
* Logical Step<br />
+ Automation<br />
= Literate Statistical Programming
Implementierung: Computational notebooks
Computational Notebooks
<br/>
<div align="center"><img src="../resources/notebook_approach.jpg"></div>
Technik
Technologie (1/2)
Klassischer Data-Science-Werkzeugkasten
* Jupyter (mit RISE)
* Python 3
* pandas
* matplotlib
Technologie (2/2)
Jupyter funktioniert und integriert sich auch mit
* jQAssistant / Cypher / Neo4j
* JVM-Sprachen über beakerx / Tablesaw
* bash
* ...
Praktischer Teil
Erste Demo
Fallbeispiel
IntelliJ IDEA
IDE für die Java-Entwicklung
Vieles in Java geschrieben
Großes und lang entwickeltes Projekt
I. Fragestellung (1/3)
Offene Frage explizit niederschreiben
Analyseidee verständlich darstellen
I. Fragestellung (2/3)
<b>Frage</b>
* Welcher Code ist besonders komplex und wie oft wurde dieser in letzter Zeit geändert?
I. Fragestellung (3/3)
Umsetzungsidee
Werkzeuge: Jupyter, Python, pandas, matplotlib
Heuristiken:
"komplex": Lines of Code
"letzter Zeit": Letzte 3 Monate
"oft geändert": Anzahl Git Commits
Meta-Ziel: Grundmechaniken kennenlernen.
II. Datenbeschaffung
Daten in Data-Science-Welt laden
Rohdaten bereinigen und filtern
Wir laden Git Log Daten eines Git Repos.
End of explanation
log.info()
Explanation: What do we actually have here?
End of explanation
log['timestamp'] = pd.to_datetime(log['timestamp'])
log.head()
Explanation: <b>1</b> DataFrame (~ a programmable Excel worksheet), <b>6</b> Series (= columns), <b>1128819</b> rows (= entries)
We convert the timestamps from text into objects.
End of explanation
recent = log[log['timestamp'] > pd.Timestamp('today') - pd.Timedelta('90 days')]
recent.head()
Explanation: We look only at the most recent changes.
End of explanation
java = recent[recent['filename'].str.endswith(".java")].copy()
java.head()
Explanation: We only want to use Java code.
End of explanation
changes = java.groupby('filename')[['sha']].count()
changes.head()
Explanation: III. Modeling
Create new views
Join in additional data
We aggregate the rows and the number of changes per file.
End of explanation
loc = pd.read_csv("../dataset/cloc_intellij.csv.gz", index_col=1)
loc.head()
Explanation: We bring in information about the lines of code...
End of explanation
hotspots = changes.join(loc[['code']])
hotspots.head()
Explanation: ...and join it with the existing data.
End of explanation
top10 = hotspots.sort_values(by="sha", ascending=False).head(10)
top10
Explanation: IV. Interpretation
Problem: results are often extensive
Therefore: present the key messages clearly
We display the TOP 10 hotspots.
End of explanation
ax = top10.plot.scatter('sha', 'code');
for k, v in top10.iterrows():
ax.annotate(k.split("/")[-1], v)
Explanation: V. Communication
Present the results in a management-friendly way
Kick off the next steps
We plot the TOP 10 list as an XY chart.
End of explanation |
452 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training Logistic Regression via Stochastic Gradient Ascent
The goal of this notebook is to implement a logistic regression classifier using stochastic gradient ascent. You will
Step1: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: The SFrame products now contains one column for each of the 193 important_words.
Step4: Split data into training and validation sets
We will now split the data into a 90-10 split where 90% is in the training set and 10% is in the validation set. We use seed=1 so that everyone gets the same result.
Step5: Convert SFrame to NumPy array
Just like in the earlier assignments, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step6: Note that we convert both the training and validation sets into NumPy arrays.
Warning
Step7: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands
Step8: Derivative of log likelihood with respect to a single coefficient
Let us now work on making minor changes to how the derivative computation is performed for logistic regression.
Recall from the lectures and Module 3 assignment that for logistic regression, the derivative of log likelihood with respect to a single coefficient is as follows
Step9: Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.
To verify the correctness of the gradient computation, we provide a function for computing average log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
To track the performance of stochastic gradient ascent, we provide a function for computing average log likelihood.
$$\ell\ell_A(\mathbf{w}) = \color{red}{\frac{1}{N}} \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
Note that we made one tiny modification to the log likelihood function (called compute_log_likelihood) in our earlier assignments. We added a $\color{red}{1/N}$ term which averages the log likelihood accross all data points. The $\color{red}{1/N}$ term makes it easier for us to compare stochastic gradient ascent with batch gradient ascent. We will use this function to generate plots that are similar to those you saw in the lecture.
Step10: Quiz Question
Step11: Quiz Question
Step12: Quiz Question
Step13: Averaging the gradient across a batch
It is a common practice to normalize the gradient update rule by the batch size B
Step14: Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.
Checkpoint
The following cell tests your stochastic gradient ascent function using a toy dataset consisting of two data points. If the test does not pass, make sure you are normalizing the gradient update rule correctly.
Step15: Compare convergence behavior of stochastic gradient ascent
For the remainder of the assignment, we will compare stochastic gradient ascent against batch gradient ascent. For this, we need a reference implementation of batch gradient ascent. But do we need to implement this from scratch?
Quiz Question
Step16: Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Now run batch gradient ascent over the feature_matrix_train for 200 iterations using
Step17: Quiz Question. When you set batch_size = len(train_data), as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Make "passes" over the dataset
To make a fair comparison betweeen stochastic gradient ascent and batch gradient ascent, we measure the average log likelihood as a function of the number of passes (defined as follows)
Step18: Log likelihood plots for stochastic gradient ascent
With the terminology in mind, let us run stochastic gradient ascent for 10 passes. We will use
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros.
Step19: We provide you with a utility function to plot the average log likelihood as a function of the number of passes.
Step20: Smoothing the stochastic gradient ascent curve
The plotted line oscillates so much that it is hard to see whether the log likelihood is improving. In our plot, we apply a simple smoothing operation using the parameter smoothing_window. The smoothing is simply a moving average of log likelihood over the last smoothing_window "iterations" of stochastic gradient ascent.
Step21: Checkpoint
Step22: We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.
Step23: Quiz Question
Step24: Plotting the log likelihood as a function of passes for each step size
Now, we will plot the change in log likelihood using the make_plot for each of the following values of step_size
Step25: Now, let us remove the step size step_size = 1e2 and plot the rest of the curves. | Python Code:
from __future__ import division
import graphlab
Explanation: Training Logistic Regression via Stochastic Gradient Ascent
The goal of this notebook is to implement a logistic regression classifier using stochastic gradient ascent. You will:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Write a function to compute the derivative of log likelihood function with respect to a single coefficient.
Implement stochastic gradient ascent.
Compare convergence of stochastic gradient ascent with that of batch gradient ascent.
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
products = graphlab.SFrame('amazon_baby_subset.gl/')
Explanation: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
End of explanation
import json
with open('important_words.json', 'r') as f:
important_words = json.load(f)
important_words = [str(s) for s in important_words]
# Remove punctuation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
Explanation: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:
Remove punctuation using Python's built-in string manipulation functionality.
Compute word counts (only for the important_words)
Refer to Module 3 assignment for more details.
End of explanation
products
Explanation: The SFrame products now contains one column for each of the 193 important_words.
End of explanation
train_data, validation_data = products.random_split(.9, seed=1)
print 'Training set : %d data points' % len(train_data)
print 'Validation set: %d data points' % len(validation_data)
Explanation: Split data into training and validation sets
We will now split the data into a 90-10 split where 90% is in the training set and 10% is in the validation set. We use seed=1 so that everyone gets the same result.
End of explanation
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
Explanation: Convert SFrame to NumPy array
Just like in the earlier assignments, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
End of explanation
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
Explanation: Note that we convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
End of explanation
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
score = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1. / (1.+np.exp(-score))
return predictions
Explanation: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-10-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
Quiz question: In Module 3 assignment, there were 194 features (an intercept + one feature for each of the 193 important words). In this assignment, we will use stochastic gradient ascent to train the classifier using logistic regression. How does changing the solver to stochastic gradient ascent affect the number of features?
Building on logistic regression
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in Module 3 assignment to make probability predictions, since this part is not affected by using stochastic gradient ascent as a solver. Only the way in which the coefficients are learned is affected by using stochastic gradient ascent as a solver.
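For example (hypothetical numbers of ours, purely to illustrate the link function, not part of the assignment), a single data point with feature vector [1., 2., 3.] and coefficients [0.5, -0.25, 0.1] has score 0.3, so the predicted probability is 1/(1 + exp(-0.3)), roughly 0.574:
example_features = np.array([[1., 2., 3.]])
example_coefficients = np.array([0.5, -0.25, 0.1])
predict_probability(example_features, example_coefficients)   # about 0.574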
End of explanation
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
return derivative
Explanation: Derivative of log likelihood with respect to a single coefficient
Let us now work on making minor changes to how the derivative computation is performed for logistic regression.
Recall from the lectures and Module 3 assignment that for logistic regression, the derivative of log likelihood with respect to a single coefficient is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
In Module 3 assignment, we wrote a function to compute the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts the following two parameters:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
Complete the following code block:
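As a quick sanity check (toy numbers of our own, not part of the assignment), the completed function should behave exactly like a dot product between the errors and the feature column:
example_errors = np.array([0.5, -0.5])
example_feature = np.array([1., 2.])
feature_derivative(example_errors, example_feature)   # 0.5*1 + (-0.5)*2 = -0.5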
End of explanation
def compute_avg_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)/len(feature_matrix)
return lp
Explanation: Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.
To verify the correctness of the gradient computation, we provide a function for computing average log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
To track the performance of stochastic gradient ascent, we provide a function for computing average log likelihood.
$$\ell\ell_A(\mathbf{w}) = \color{red}{\frac{1}{N}} \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
Note that we made one tiny modification to the log likelihood function (called compute_log_likelihood) in our earlier assignments. We added a $\color{red}{1/N}$ term which averages the log likelihood across all data points. The $\color{red}{1/N}$ term makes it easier for us to compare stochastic gradient ascent with batch gradient ascent. We will use this function to generate plots that are similar to those you saw in the lecture.
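A convenient side effect of the averaging (an observation, not a required step): with all-zero coefficients every prediction is 0.5, so the average log likelihood should come out to about -ln(2) = -0.693 for any dataset, which makes a quick sanity check:
compute_avg_log_likelihood(feature_matrix_train, sentiment_train, np.zeros(194))   # roughly -0.693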
End of explanation
j = 1 # Feature number
i = 10 # Data point number
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+1,:], coefficients)
indicator = (sentiment_train[i:i+1]==+1)
errors = indicator - predictions
gradient_single_data_point = feature_derivative(errors, feature_matrix_train[i:i+1,j])
print "Gradient single data point: %s" % gradient_single_data_point
print " --> Should print 0.0"
Explanation: Quiz Question: Recall from the lecture and the earlier assignment, the log likelihood (without the averaging term) is given by
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
How are the functions $\ell\ell(\mathbf{w})$ and $\ell\ell_A(\mathbf{w})$ related?
Modifying the derivative for stochastic gradient ascent
Recall from the lecture that the gradient for a single data point $\color{red}{\mathbf{x}_i}$ can be computed using the following formula:
$$
\frac{\partial\ell_{\color{red}{i}}(\mathbf{w})}{\partial w_j} = h_j(\color{red}{\mathbf{x}_i})\left(\mathbf{1}[y_\color{red}{i} = +1] - P(y_\color{red}{i} = +1 | \color{red}{\mathbf{x}_i}, \mathbf{w})\right)
$$
Computing the gradient for a single data point
Do we really need to re-write all our code to modify $\partial\ell(\mathbf{w})/\partial w_j$ to $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$?
Thankfully, no! Using NumPy, we access $\mathbf{x}_i$ in the training data using feature_matrix_train[i:i+1,:]
and $y_i$ in the training data using sentiment_train[i:i+1]. We can compute $\partial\ell_{\color{red}{i}}(\mathbf{w})/\partial w_j$ by re-using all the code written in feature_derivative and predict_probability.
We compute $\partial\ell_{\color{red}{i}}(\mathbf{w})/\partial w_j$ using the following steps:
* First, compute $P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ using the predict_probability function with feature_matrix_train[i:i+1,:] as the first parameter.
* Next, compute $\mathbf{1}[y_i = +1]$ using sentiment_train[i:i+1].
* Finally, call the feature_derivative function with feature_matrix_train[i:i+1, j] as one of the parameters.
Let us follow these steps for j = 1 and i = 10:
End of explanation
j = 1 # Feature number
i = 10 # Data point start
B = 10 # Mini-batch size
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+B,:], coefficients)
indicator = (sentiment_train[i:i+B]==+1)
errors = indicator - predictions
gradient_mini_batch = feature_derivative(errors, feature_matrix_train[i:i+B,j])
print "Gradient mini-batch data points: %s" % gradient_mini_batch
print " --> Should print 1.0"
Explanation: Quiz Question: The code block above computed $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$ for j = 1 and i = 10. Is $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$ a scalar or a 194-dimensional vector?
Modifying the derivative for using a batch of data points
Stochastic gradient estimates the ascent direction using 1 data point, while gradient uses $N$ data points to decide how to update the parameters. In an optional video, we discussed the details of a simple change that allows us to use a mini-batch of $B \leq N$ data points to estimate the ascent direction. This simple approach is faster than regular gradient but less noisy than stochastic gradient that uses only 1 data point. Although we encourage you to watch the optional video on the topic to better understand why mini-batches help stochastic gradient, in this assignment, we will simply use this technique, since the approach is very simple and will improve your results.
Given a mini-batch (or a set of data points) $\mathbf{x}_{i}, \mathbf{x}_{i+1} \ldots \mathbf{x}_{i+B}$, the gradient function for this mini-batch of data points is given by:
$$
\color{red}{\sum_{s = i}^{i + B}} \frac{\partial\ell_{s}}{\partial w_j} = \color{red}{\sum_{s = i}^{i + B}} h_j(\mathbf{x}_s)\left(\mathbf{1}[y_s = +1] - P(y_s = +1 | \mathbf{x}_s, \mathbf{w})\right)
$$
Computing the gradient for a "mini-batch" of data points
Using NumPy, we access the points $\mathbf{x}_i, \mathbf{x}_{i+1} \ldots \mathbf{x}_{i+B}$ in the training data using feature_matrix_train[i:i+B,:]
and $y_i$ in the training data using sentiment_train[i:i+B].
We can compute $\color{red}{\sum_{s = i}^{i+B}} \partial\ell_{s}/\partial w_j$ easily as follows:
End of explanation
print len(sentiment_train)
Explanation: Quiz Question: The code block above computed
$\color{red}{\sum_{s = i}^{i+B}}\partial\ell_{s}(\mathbf{w})/{\partial w_j}$
for j = 1, i = 10, and B = 10. Is this a scalar or a 194-dimensional vector?
Quiz Question: For what value of B is the term
$\color{red}{\sum_{s = 1}^{B}}\partial\ell_{s}(\mathbf{w})/\partial w_j$
the same as the full gradient
$\partial\ell(\mathbf{w})/{\partial w_j}$?
End of explanation
from math import sqrt
def logistic_regression_SG(feature_matrix, sentiment, initial_coefficients, step_size, batch_size, max_iter):
log_likelihood_all = []
# make sure it's a numpy array
coefficients = np.array(initial_coefficients)
# set seed=1 to produce consistent results
np.random.seed(seed=1)
# Shuffle the data before starting
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0 # index of current batch
# Do a linear scan over data
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,:]
### YOUR CODE HERE
predictions = predict_probability(feature_matrix[i:i+batch_size,:], coefficients)
# Compute indicator value for (y_i = +1)
# Make sure to slice the i-th entry with [i:i+batch_size]
### YOUR CODE HERE
indicator = (sentiment[i:i+batch_size]==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]
# Compute the derivative for coefficients[j] and save it to derivative.
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,j]
### YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[i:i+batch_size,j])
# compute the product of the step size, the derivative, and the **normalization constant** (1./batch_size)
### YOUR CODE HERE
coefficients[j] += (1./batch_size)*(step_size * derivative)
# Checking whether log likelihood is increasing
# Print the log likelihood over the *current batch*
lp = compute_avg_log_likelihood(feature_matrix[i:i+batch_size,:], sentiment[i:i+batch_size],
coefficients)
log_likelihood_all.append(lp)
if itr <= 15 or (itr <= 1000 and itr % 100 == 0) or (itr <= 10000 and itr % 1000 == 0) \
or itr % 10000 == 0 or itr == max_iter-1:
data_size = len(feature_matrix)
print 'Iteration %*d: Average log likelihood (of data points in batch [%0*d:%0*d]) = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, \
int(np.ceil(np.log10(data_size))), i, \
int(np.ceil(np.log10(data_size))), i+batch_size, lp)
# if we made a complete pass over data, shuffle and restart
i += batch_size
if i+batch_size > len(feature_matrix):
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0
# We return the list of log likelihoods for plotting purposes.
return coefficients, log_likelihood_all
Explanation: Averaging the gradient across a batch
It is a common practice to normalize the gradient update rule by the batch size B:
$$
\frac{\partial\ell_{\color{red}{A}}(\mathbf{w})}{\partial w_j} \approx \color{red}{\frac{1}{B}} {\sum_{s = i}^{i + B}} h_j(\mathbf{x}_s)\left(\mathbf{1}[y_s = +1] - P(y_s = +1 | \mathbf{x}_s, \mathbf{w})\right)
$$
In other words, we update the coefficients using the average gradient over data points (instead of using a summation). By using the average gradient, we ensure that the magnitude of the gradient is approximately the same for all batch sizes. This way, we can more easily compare various batch sizes of stochastic gradient ascent (including a batch size of all the data points), and study the effect of batch size on the algorithm as well as the choice of step size.
Implementing stochastic gradient ascent
Now we are ready to implement our own logistic regression with stochastic gradient ascent. Complete the following function to fit a logistic regression model using gradient ascent:
End of explanation
sample_feature_matrix = np.array([[1.,2.,-1.], [1.,0.,1.]])
sample_sentiment = np.array([+1, -1])
coefficients, log_likelihood = logistic_regression_SG(sample_feature_matrix, sample_sentiment, np.zeros(3),
step_size=1., batch_size=2, max_iter=2)
print '-------------------------------------------------------------------------------------'
print 'Coefficients learned :', coefficients
print 'Average log likelihood per-iteration :', log_likelihood
if np.allclose(coefficients, np.array([-0.09755757, 0.68242552, -0.7799831]), atol=1e-3)\
and np.allclose(log_likelihood, np.array([-0.33774513108142956, -0.2345530939410341])):
# pass if elements match within 1e-3
print '-------------------------------------------------------------------------------------'
print 'Test passed!'
else:
print '-------------------------------------------------------------------------------------'
print 'Test failed'
Explanation: Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.
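A minimal sketch of that idea (our own illustration; it assumes you also record the coefficient vector at each iteration, which logistic_regression_SG above does not currently do):
def average_last_k(coefficient_history, k=30):
    # coefficient_history: a list of coefficient vectors, one saved per iteration
    return np.mean(np.array(coefficient_history[-k:]), axis=0)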
Checkpoint
The following cell tests your stochastic gradient ascent function using a toy dataset consisting of two data points. If the test does not pass, make sure you are normalizing the gradient update rule correctly.
End of explanation
coefficients, log_likelihood = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1, batch_size=1, max_iter=10)
Explanation: Compare convergence behavior of stochastic gradient ascent
For the remainder of the assignment, we will compare stochastic gradient ascent against batch gradient ascent. For this, we need a reference implementation of batch gradient ascent. But do we need to implement this from scratch?
Quiz Question: For what value of batch size B above does the stochastic gradient ascent function logistic_regression_SG act as a standard gradient ascent algorithm?
Running gradient ascent using the stochastic gradient ascent implementation
Instead of implementing batch gradient ascent separately, we save time by re-using the stochastic gradient ascent function we just wrote — to perform gradient ascent, it suffices to set batch_size to the number of data points in the training data. Yes, we did answer above the quiz question for you, but that is an important point to remember in the future :)
Small Caveat. The batch gradient ascent implementation here is slightly different than the one in the earlier assignments, as we now normalize the gradient update rule.
We now run stochastic gradient ascent over the feature_matrix_train for 10 iterations using:
* initial_coefficients = np.zeros(194)
* step_size = 5e-1
* batch_size = 1
* max_iter = 10
End of explanation
# YOUR CODE HERE
coefficients_batch, log_likelihood_batch = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-1,
batch_size = len(feature_matrix_train),
max_iter=200)
Explanation: Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Now run batch gradient ascent over the feature_matrix_train for 200 iterations using:
* initial_coefficients = np.zeros(194)
* step_size = 5e-1
* batch_size = len(feature_matrix_train)
* max_iter = 200
End of explanation
# A "pass" is one complete sweep over the dataset.
# With a batch size of 100, each batch gives one gradient update, so two passes
# over 50000 data points correspond to:
2*(50000/100)
Explanation: Quiz Question. When you set batch_size = len(train_data), as each iteration passes, how does the average log likelihood in the batch change?
* Increases
* Decreases
* Fluctuates
Make "passes" over the dataset
To make a fair comparison between stochastic gradient ascent and batch gradient ascent, we measure the average log likelihood as a function of the number of passes (defined as follows):
$$
[\text{# of passes}] = \frac{[\text{# of data points touched so far}]}{[\text{size of dataset}]}
$$
Quiz Question Suppose that we run stochastic gradient ascent with a batch size of 100. How many gradient updates are performed at the end of two passes over a dataset consisting of 50000 data points?
End of explanation
step_size = 1e-1
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations)
Explanation: Log likelihood plots for stochastic gradient ascent
With the terminology in mind, let us run stochastic gradient ascent for 10 passes. We will use
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
def make_plot(log_likelihood_all, len_data, batch_size, smoothing_window=1, label=''):
plt.rcParams.update({'figure.figsize': (9,5)})
log_likelihood_all_ma = np.convolve(np.array(log_likelihood_all), \
np.ones((smoothing_window,))/smoothing_window, mode='valid')
plt.plot(np.array(range(smoothing_window-1, len(log_likelihood_all)))*float(batch_size)/len_data,
log_likelihood_all_ma, linewidth=4.0, label=label)
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
plt.xlabel('# of passes over data')
plt.ylabel('Average log likelihood per data point')
plt.legend(loc='lower right', prop={'size':14})
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
label='stochastic gradient, step_size=1e-1')
Explanation: We provide you with a utility function to plot the average log likelihood as a function of the number of passes.
End of explanation
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic gradient, step_size=1e-1')
Explanation: Smoothing the stochastic gradient ascent curve
The plotted line oscillates so much that it is hard to see whether the log likelihood is improving. In our plot, we apply a simple smoothing operation using the parameter smoothing_window. The smoothing is simply a moving average of log likelihood over the last smoothing_window "iterations" of stochastic gradient ascent.
End of explanation
step_size = 1e-1
batch_size = 100
num_passes = 200
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
## YOUR CODE HERE
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=step_size, batch_size=batch_size, max_iter=num_iterations)
Explanation: Checkpoint: The above plot should look smoother than the previous plot. Play around with smoothing_window. As you increase it, you should see a smoother plot.
Stochastic gradient ascent vs batch gradient ascent
To compare convergence rates for stochastic gradient ascent with batch gradient ascent, we call make_plot() multiple times in the same cell.
We are comparing:
* stochastic gradient ascent: step_size = 0.1, batch_size=100
* batch gradient ascent: step_size = 0.5, batch_size=len(feature_matrix_train)
Write code to run stochastic gradient ascent for 200 passes using:
* step_size=1e-1
* batch_size=100
* initial_coefficients to all zeros.
End of explanation
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic, step_size=1e-1')
make_plot(log_likelihood_batch, len_data=len(feature_matrix_train), batch_size=len(feature_matrix_train),
smoothing_window=1, label='batch, step_size=5e-1')
Explanation: We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.
End of explanation
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd = {}
log_likelihood_sgd = {}
for step_size in np.logspace(-4, 2, num=7):
coefficients_sgd[step_size], log_likelihood_sgd[step_size] = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=step_size, batch_size=batch_size, max_iter=num_iterations)
Explanation: Quiz Question: In the figure above, how many passes does batch gradient ascent need to achieve a similar log likelihood as stochastic gradient ascent?
It's always better
10 passes
20 passes
150 passes or more
Explore the effects of step sizes on stochastic gradient ascent
In previous sections, we chose step sizes for you. In practice, it helps to know how to choose good step sizes yourself.
To start, we explore a wide range of step sizes that are equally spaced in the log space. Run stochastic gradient ascent with step_size set to 1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, and 1e2. Use the following set of parameters:
* initial_coefficients=np.zeros(194)
* batch_size=100
* max_iter initialized so as to run 10 passes over the data.
End of explanation
for step_size in np.logspace(-4, 2, num=7):
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
Explanation: Plotting the log likelihood as a function of passes for each step size
Now, we will plot the change in log likelihood using the make_plot function for each of the following values of step_size:
step_size = 1e-4
step_size = 1e-3
step_size = 1e-2
step_size = 1e-1
step_size = 1e0
step_size = 1e1
step_size = 1e2
For consistency, we again apply smoothing_window=30.
End of explanation
for step_size in np.logspace(-4, 2, num=7)[0:6]:
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
Explanation: Now, let us remove the step size step_size = 1e2 and plot the rest of the curves.
End of explanation |
453 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tabulated weak nuclear reaction rates
The reaction rate parameterizations in pynucastro/library/tabular were obtained from
Step1: Load a tabulated rate
Step2: A human readable string describing the rate, and the nuclei involved
Step3: Evaluate the electron capture rate [s$^{-1}$] at a given temperature (T [K]) and $Y_e$-weighted density ($\rho Y_e$ [g/cm$^3$])
Step4: Plot the rate as a function of temperature and density on a heat map.
Step5: Another example
Step6: Working with a group of rates | Python Code:
import pynucastro as pyrl
Explanation: Tabulated weak nuclear reaction rates
The reaction rate parameterizations in pynucastro/library/tabular were obtained from:
Toshio Suzuki, Hiroshi Toki and Ken'ichi Nomoto (2016):
ELECTRON-CAPTURE AND beta-DECAY RATES FOR sd-SHELL NUCLEI IN STELLAR ENVIRONMENTS RELEVANT TO HIGH-DENSITY O–NE–MG CORES. The Astrophysical Journal, 817, 163
Note: You must have the seaborn package on your PYTHONPATH.
End of explanation
al_mg = pyrl.Rate("al28--mg28-toki")
Explanation: Load a tabulated rate
End of explanation
print(al_mg)
Explanation: A human readable string describing the rate, and the nuclei involved
End of explanation
al_mg.eval(T=1.e8,rhoY=1.e9)
Explanation: Evaluate the electron capture rate [s$^{-1}$] at a given temperature (T [K]) and $Y_e$-weighted density ($\rho Y_e$ [g/cm$^3$])
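For instance (illustration only, assuming these temperatures lie inside the tabulated range), the same call can be repeated over a few temperatures at fixed $\rho Y_e$:
for T in [1.e8, 3.e8, 1.e9]:
    print(T, al_mg.eval(T=T, rhoY=1.e9))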
End of explanation
al_mg.plot()
Explanation: Plot the rate as a function of temperature and density on a heat map.
End of explanation
ne_f = pyrl.Rate("ne23--f23-toki")
print(ne_f)
ne_f.plot()
Explanation: Another example:
End of explanation
files = ["c13-pg-n14-nacr",
"n13--c13-wc12",
"c12-c12n-mg23-cf88",
"o14-ap-f17-Ha96c",
"mg23--na23-toki",
"na23--ne23-toki",
"n13-pg-o14-lg06",
"c12-c12p-na23-cf88"]
rc = pyrl.RateCollection(files)
rc.plot()
Explanation: Working with a group of rates
End of explanation |
454 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyses with NetworkX
Social networks have become a fixture of modern life thanks to social networking sites like Facebook and Twitter. Social networks themselves are not new, however. The study of such networks dates back to the early twentieth century, particularly in the fields of sociology and anthropology. It is their prevalence in mainstream applications that has moved these types of studies to the purview of data science.
The basis for the analyses in this notebook comes from Graph Theory: the mathematical study of the application and properties of graphs, originally motivated by the study of games of chance. Generally speaking, this involves the study of network encoding, and measuring properties of the graph. Graph theory can be traced back to Euler's work on the Konigsberg Bridges problem (1735). However, in recent decades, the rise of the social network has influenced the discipline, particularly with Computer Science graph data structures and databases.
A Graph, then, can be defined as
Step1: The basics of creating a NetworkX Graph
Step2: For testing and diagnostics it's useful to generate a random Graph. NetworkX comes with several graph models including
Step3: Accessing Nodes and Edges
Step4: Serialization of Graphs
Most Graphs won't be constructed in memory, but rather saved to disk. Serialize and deserialize Graphs as follows
Step5: NetworkX has a ton of Graph serialization methods, and most have methods in the following format for serialization format, format
Step6: Computing Key Players
In the previous graph, we began exploring ego networks and strong ties between individuals in our social network. We started to see that actors with strong ties to other actors created clusters that centered around themselves. This leads to the obvious question
Step7: Betweenness Centrality
A path is a sequence of nodes between a start node and an end node where no node appears twice on the path, and is measured by the number of edges included (also called hops). The most interesting path to compute for two given nodes is the shortest path, e.g. the minimum number of edges required to reach another node; this is also called the node distance. Note that paths can be of length 0, the distance from a node to itself.
Step8: Closeness Centrality
Another centrality measure, closeness, takes a statistical look at the outgoing paths for a particular node, v. That is, what is the average number of hops it takes to reach any other node in the network from v? This is simply computed as the reciprocal of the mean distance to all other nodes in the graph, which can be normalized to n-1 / size(G)-1 if all nodes in the graph are connected. The reciprocal ensures that nodes that are closer (e.g. fewer hops) score "better", e.g. closer to one as in other centrality scores.
Step9: Eigenvector Centrality
The eigenvector centrality of a node, v, is proportional to the sum of the centrality scores of its neighbors. E.g. the more important people you are connected to, the more important you are. This centrality measure is very interesting, because an actor with a small number of hugely influential contacts may outrank ones with many more mediocre contacts. For our social network, hopefully it will allow us to get underneath the celebrity structure of heroic teams and see who actually is holding the social graph together.
Step10: Clustering and Cohesion
In this next section, we're going to characterize our social network as a whole, rather than from the perspective of individual actors. This task is usually secondary to getting a feel for the most important nodes; but it is a chicken-and-egg problem: determining the techniques to analyze and split the whole graph can be informed by key player analyses, and vice versa.
The density of a network is the ratio of the number of edges in the network to the total number of possible edges in the network. The possible number of edges for a graph of n vertices is n(n-1)/2 for an undirected graph (remove the division for a directed graph). Perfectly connected networks (every node shares an edge with every other node) have a density of 1, and are often called cliques.
Step11: Graphs can also be analyzed in terms of distance (the shortest path between two nodes). The longest distance in a graph is called the diameter of the social graph, and represents the longest information flow along the graph. Typically less dense (sparse) social networks will have a larger diameter than more dense networks. Additionally, the average distance is an interesting metric as it can give you information about how close nodes are to each other.
Step12: Let's actually get into some clustering. The python-louvain library uses NetworkX to perform community detection with the louvain method. Here is a simple example of cluster partitioning on a small, built-in social network.
Step13: Visualizing Graphs
NetworkX wraps matplotlib or graphviz to draw simple graphs using the same charting library we saw in the previous chapter. This is effective for smaller size graphs, but with larger graphs memory can quickly be consumed. To draw a graph, simply use the networkx.draw function, and then use pyplot.show to display it.
Step14: There is, however, a rich drawing library underneath that lets you customize how the Graph looks and is laid out with many different layout algorithms. Let's take a look at an example using one of the built-in Social Graphs | Python Code:
%matplotlib inline
import os
import random
import community
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from tribe.utils import *
from tribe.stats import *
from operator import itemgetter
## Some Helper constants
FIXTURES = os.path.join(os.getcwd(), "fixtures")
GRAPHML = os.path.join(FIXTURES, "emails.graphml")
Explanation: Analyses with NetworkX
Social networks have become a fixture of modern life thanks to social networking sites like Facebook and Twitter. Social networks themselves are not new, however. The study of such networks dates back to the early twentieth century, particularly in the fields of sociology and anthropology. It is their prevalence in mainstream applications that has moved these types of studies to the purview of data science.
The basis for the analyses in this notebook comes from Graph Theory: the mathematical study of the application and properties of graphs, originally motivated by the study of games of chance. Generally speaking, this involves the study of network encoding, and measuring properties of the graph. Graph theory can be traced back to Euler's work on the Konigsberg Bridges problem (1735). However, in recent decades, the rise of the social network has influenced the discipline, particularly with Computer Science graph data structures and databases.
A Graph, then, can be defined as: G = (V, E), consisting of a finite set of nodes denoted by V or V(G) and a collection E or E(G) of unordered pairs {u, v} where u, v ∈ V. Less formally, this is a symbolic representation of a network and its relationships: a set of linked nodes.
Graphs can be either directed or undirected. Directed graphs simply have ordered relationships, undirected graphs can be seen as bidirectional directed graphs. A directed graph in a social network tends to have directional semantic relationships, e.g. "friends" - Abe might be friends with Jane, but Jane might not reciprocate. Undirected social networks have more general semantic relationships, e.g. "knows". Any directed graph can easily be converted to the more general undirected graph. In this case, the adjacency matrix becomes symmetric.
A few final terms will help us in our discussion. The cardinality of vertices is called the order of the Graph, whereas the cardinality of the edges is called the size. In the above graph, the order is 7 and the size is 10. Two nodes are adjacent if they share an edge; they are also called neighbors, and the neighborhood of a vertex is the set of all vertices that a vertex is connected to. The number of nodes in a vertex's neighborhood is that vertex's degree.
Required Python Libraries
The required external libraries for the tasks in this notebook are as follows:
networkx
matplotlib
python-louvain
NetworkX is a well maintained Python library for the creation, manipulation, and study of the structure of complex networks. Its tools allow for the quick creation of graphs, and the library also contains many common graph algorithms. In particular NetworkX complements Python's scientific computing suite of SciPy/NumPy, Matplotlib, and Graphviz and can handle graphs in memory of 10M's of nodes and 100M's of links. NetworkX should be part of every data scientist's toolkit.
NetworkX and Python are the perfect combination to do social network analysis. NetworkX is designed to handle data at scale, data that is relevant to modern-scale social networks. The core algorithms that are included are implemented on extremely fast legacy code. Graphs are hugely flexible (nodes can be any hashable type), and there is an extensive set of native IO formats. Finally, with Python, you'll be able to access or use a myriad of data sources from databases to the Internet.
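As a quick illustration of the terms above (a toy graph of our own, assuming the NetworkX 1.x API used throughout this notebook):
T = nx.Graph()
T.add_edges_from([(1, 2), (2, 3), (3, 1), (3, 4)])
print T.order()       # order: 4 nodes
print T.size()        # size: 4 edges
print T.degree(3)     # vertex 3 has degree 3
print T.neighbors(3)  # its neighborhood: [1, 2, 4]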
End of explanation
H = nx.Graph(name="Hello World Graph")
# Also nx.DiGraph, nx.MultiGraph, etc
# Add nodes manually, label can be anything hashable
H.add_node(1, name="Ben", email="[email protected]")
H.add_node(2, name="Tony", email="[email protected]")
# Can also add an iterable of nodes: H.add_nodes_from
H.add_edge(1,2, label="friends", weight=0.832)
# Can also add an iterable of edges: H.add_edges_from
print nx.info(H)
# Clearing a graph is easy
H.remove_node(1)
H.clear()
Explanation: The basics of creating a NetworkX Graph:
End of explanation
H = nx.erdos_renyi_graph(100, 0.20)
Explanation: For testing and diagnostics it's useful to generate a random Graph. NetworkX comes with several graph models including:
Complete Graph G=nx.complete_graph(100)
Star Graph G=nx.star_graph(100)
Erdős-Rényi graph, binomial graph G=nx.erdos_renyi_graph(100, 0.20)
Watts-Strogatz small-world graph G=nx.watts_strogatz_graph(100, 0.20)
Holme and Kim power law G=nx.powerlaw_cluster_graph(100, 0.20)
But there are so many more, see Graph generators for more information on all the types of graph generators NetworkX provides. These, however are the best ones for doing research on social networks.
End of explanation
print H.nodes()[1:10]
print H.edges()[1:5]
print H.neighbors(3)
# For fast, memory safe iteration, use the `_iter` methods
edges, nodes = 0,0
for e in H.edges_iter(): edges += 1
for n in H.nodes_iter(): nodes += 1
print "%i edges, %i nodes" % (edges, nodes)
# Accessing the properties of a graph
print H.graph['name']
H.graph['created'] = strfnow()
print H.graph
# Accessing the properties of nodes and edges
H.node[1]['color'] = 'red'
H.node[43]['color'] = 'blue'
print H.node[43]
print H.nodes(data=True)[:3]
# The weight property is special and should be numeric
H.edge[0][40]['weight'] = 0.432
H.edge[0][39]['weight'] = 0.123
print H.edge[40][0]
# Accessing the highest degree node
center, degree = sorted(H.degree().items(), key=itemgetter(1), reverse=True)[0]
# A special type of subgraph
ego = nx.ego_graph(H, center)
pos = nx.spring_layout(H)
nx.draw(H, pos, node_color='#0080C9', edge_color='#cccccc', node_size=50)
nx.draw_networkx_nodes(H, pos, nodelist=[center], node_size=100, node_color="r")
plt.show()
# Other subgraphs can be extracted with nx.subgraph
# Finding the shortest path
H = nx.star_graph(100)
print nx.shortest_path(H, random.choice(H.nodes()), random.choice(H.nodes()))
pos = nx.spring_layout(H)
nx.draw(H, pos)
plt.show()
# Preparing for Data Science Analysis
print nx.to_numpy_matrix(H)
# print nx.to_scipy_sparse_matrix(G)
Explanation: Accessing Nodes and Edges:
End of explanation
G = nx.read_graphml(GRAPHML) # opposite of nx.write_graphml
print nx.info(G)
Explanation: Serialization of Graphs
Most Graphs won't be constructed in memory, but rather saved to disk. Serialize and deserialize Graphs as follows:
End of explanation
# Generate a list of connected components
# See also nx.strongly_connected_components
for component in nx.connected_components(G):
print len(component)
len([c for c in nx.connected_components(G)])
# Get a list of the degree frequencies
dist = FreqDist(nx.degree(G).values())
dist.plot()
# Compute Power log sequence
degree_sequence=sorted(nx.degree(G).values(),reverse=True) # degree sequence
plt.loglog(degree_sequence,'b-',marker='.')
plt.title("Degree rank plot")
plt.ylabel("degree")
plt.xlabel("rank")
# Graph Properties
print "Order: %i" % G.number_of_nodes()
print "Size: %i" % G.number_of_edges()
print "Clustering: %0.5f" % nx.average_clustering(G)
print "Transitivity: %0.5f" % nx.transitivity(G)
hairball = nx.subgraph(G, [x for x in nx.connected_components(G)][0])
print "Average shortest path: %0.4f" % nx.average_shortest_path_length(hairball)
# Node Properties
node = '[email protected]' # Change to an email in your graph
print "Degree of node: %i" % nx.degree(G, node)
print "Local clustering: %0.4f" % nx.clustering(G, node)
Explanation: NetworkX has a ton of Graph serialization methods, and most have methods in the following format for serialization format, format:
Read Graph from disk: read_format
Write Graph to disk: write_format
Parse a Graph string: parse_format
Generate a random Graph in format: generate_format
The list of formats is pretty impressive:
Adjacency List
Multiline Adjacency List
Edge List
GEXF
GML
Pickle
GraphML
JSON
LEDA
YAML
SparseGraph6
Pajek
GIS Shapefile
The JSON and GraphmL are most noteworthy (for use in D3 and Gephi/Neo4j)
Initial Analysis of Email Network
We can do some initial analyses on our network using built in NetworkX methods.
End of explanation
def nbest_centrality(graph, metric, n=10, attribute="centrality", **kwargs):
centrality = metric(graph, **kwargs)
nx.set_node_attributes(graph, attribute, centrality)
degrees = sorted(centrality.items(), key=itemgetter(1), reverse=True)
for idx, item in enumerate(degrees[0:n]):
item = (idx+1,) + item
print "%i. %s: %0.4f" % item
return degrees
degrees = nbest_centrality(G, nx.degree_centrality, n=15)
Explanation: Computing Key Players
In the previous graph, we began exploring ego networks and strong ties between individuals in our social network. We started to see that actors with strong ties to other actors created clusters that centered around themselves. This leads to the obvious question: who are the key figures in the graph, and what kind of pull do they have? We'll look at a couple measures of "centrality" to try to discover this: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality.
Degree Centrality
The most common and perhaps simplest technique for finding the key actors of a graph is to measure the degree of each vertex. Degree is a signal that determines how connected a node is, which could be a metaphor for influence or popularity. At the very least, the most connected nodes are the ones that spread information the fastest, or have the greatest effect on their community. Measures of degree tend to suffer from dilution, and benefit from statistical techniques to normalize data sets.
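As a sanity check (an aside of ours), NetworkX's degree centrality is simply the degree normalized by the number of other nodes, so for any node the two quantities should agree:
v = degrees[0][0]   # the top-ranked node from the output above
print nx.degree_centrality(G)[v]
print nx.degree(G, v) / float(G.number_of_nodes() - 1)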
End of explanation
# centrality = nx.betweenness_centrality(G)
# normalized = nx.betweenness_centrality(G, normalized=True)
# weighted = nx.betweenness_centrality(G, weight="weight")
degrees = nbest_centrality(G, nx.betweenness_centrality, n=15)
Explanation: Betweenness Centrality
A path is a sequence of nodes between a start node and an end node where no node appears twice on the path, and is measured by the number of edges included (also called hops). The most interesting path to compute for two given nodes is the shortest path, e.g. the minimum number of edges required to reach another node; this is also called the node distance. Note that paths can be of length 0, the distance from a node to itself.
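For example (our own aside, using the connected hairball subgraph from earlier so that a path is guaranteed to exist), the distance between two actors is just the hop count of their shortest path:
u, v = hairball.nodes()[0], hairball.nodes()[-1]
print nx.shortest_path_length(hairball, u, v)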
End of explanation
# centrality = nx.closeness_centrality(graph)
# normalized = nx.closeness_centrality(graph, normalized=True)
# weighted = nx.closeness_centrality(graph, distance="weight")
degrees = nbest_centrality(G, nx.closeness_centrality, n=15)
Explanation: Closeness Centrality
Another centrality measure, closeness, takes a statistical look at the outgoing paths for a particular node, v. That is, what is the average number of hops it takes to reach any other node in the network from v? This is simply computed as the reciprocal of the mean distance to all other nodes in the graph, which can be normalized to n-1 / size(G)-1 if all nodes in the graph are connected. The reciprocal ensures that nodes that are closer (e.g. fewer hops) score "better", e.g. closer to one as in other centrality scores.
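To make that concrete (a sketch of ours, again using the connected hairball subgraph so that every node is reachable), the closeness of a single node can be reproduced by hand from its shortest-path lengths:
v = hairball.nodes()[0]
lengths = nx.single_source_shortest_path_length(hairball, v)
manual = (len(hairball) - 1) / float(sum(lengths.values()))
print manual
print nx.closeness_centrality(hairball, v)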
End of explanation
# centrality = nx.eigenvector_centrality(graph)
# centrality = nx.eigenvector_centrality_numpy(graph)
degrees = nbest_centrality(G, nx.eigenvector_centrality_numpy, n=15)
Explanation: Eigenvector Centrality
The eigenvector centrality of a node, v, is proportional to the sum of the centrality scores of its neighbors. E.g. the more important people you are connected to, the more important you are. This centrality measure is very interesting, because an actor with a small number of hugely influential contacts may outrank ones with many more mediocre contacts. For our social network, hopefully it will allow us to get underneath the celebrity structure of heroic teams and see who actually is holding the social graph together.
End of explanation
print nx.density(G)
Explanation: Clustering and Cohesion
In this next section, we're going to characterize our social network as a whole, rather than from the perspective of individual actors. This task is usually secondary to getting a feel for the most important nodes; but it is a chicken-and-egg problem: determining the techniques to analyze and split the whole graph can be informed by key player analyses, and vice versa.
The density of a network is the ratio of the number of edges in the network to the total number of possible edges in the network. The possible number of edges for a graph of n vertices is n(n-1)/2 for an undirected graph (remove the division for a directed graph). Perfectly connected networks (every node shares an edge with every other node) have a density of 1, and are often called cliques.
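The same number can be computed by hand from the formula above (a quick check of ours): for an undirected graph the density is 2E / (n(n-1)).
n, e = G.number_of_nodes(), G.number_of_edges()
print 2.0 * e / (n * (n - 1))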
End of explanation
for subgraph in nx.connected_component_subgraphs(G):
print nx.diameter(subgraph)
print nx.average_shortest_path_length(subgraph)
Explanation: Graphs can also be analyzed in terms of distance (the shortest path between two nodes). The longest distance in a graph is called the diameter of the social graph, and represents the longest information flow along the graph. Typically less dense (sparse) social networks will have a larger diameter than more dense networks. Additionally, the average distance is an interesting metric as it can give you information about how close nodes are to each other.
End of explanation
partition = community.best_partition(G)
print "%i partitions" % len(set(partition.values()))
nx.set_node_attributes(G, 'partition', partition)
pos = nx.spring_layout(G)
plt.figure(figsize=(12,12))
plt.axis('off')
nx.draw_networkx_nodes(G, pos, node_size=200, cmap=plt.cm.RdYlBu, node_color=partition.values())
nx.draw_networkx_edges(G,pos, alpha=0.5)
Explanation: Let's actually get into some clustering. The python-louvain library uses NetworkX to perform community detection with the louvain method. Here is a simple example of cluster partitioning on a small, built-in social network.
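A quick way to see how big the detected communities are (an aside of ours; partition maps each node to a community id):
from collections import Counter
print Counter(partition.values()).most_common(5)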
End of explanation
nx.draw(nx.erdos_renyi_graph(20, 0.20))
plt.show()
Explanation: Visualizing Graphs
NetworkX wraps matplotlib or graphviz to draw simple graphs using the same charting library we saw in the previous chapter. This is effective for smaller size graphs, but with larger graphs memory can quickly be consumed. To draw a graph, simply use the networkx.draw function, and then use pyplot.show to display it.
End of explanation
# Generate the Graph
G=nx.davis_southern_women_graph()
# Create a Spring Layout
pos=nx.spring_layout(G)
# Find the center Node
dmin=1
ncenter=0
for n in pos:
x,y=pos[n]
d=(x-0.5)**2+(y-0.5)**2
if d<dmin:
ncenter=n
dmin=d
# color by path length from node near center
p=nx.single_source_shortest_path_length(G,ncenter)
# Draw the graph
plt.figure(figsize=(8,8))
nx.draw_networkx_edges(G,pos,nodelist=[ncenter],alpha=0.4)
nx.draw_networkx_nodes(G,pos,nodelist=p.keys(),
node_size=90,
node_color=p.values(),
cmap=plt.cm.Reds_r)
Explanation: There is, however, a rich drawing library underneath that lets you customize how the Graph looks and is laid out with many different layout algorithms. Let's take a look at an example using one of the built-in Social Graphs: The Davis Women's Social Club.
End of explanation |
455 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DTMF
Step1: Repeating the signal ten times
Step2: Q
Step3: Check that peaks are at the correct frequencies
Step4: Processing a noisy signal
Step5: exercise | Python Code:
Fs = 32768
duration = 0.25
t = np.linspace(0, duration, int(duration * Fs))
f1, f2 = 697, 1336
y1 = np.sin(2 * np.pi * f1 * t);
y2 = np.sin(2 * np.pi * f2 * t);
y = (y1 + y2) / 2
plt.plot(t, y)
from IPython.display import Audio
Audio(y, rate=Fs)  # play back at the rate the signal was sampled at
Explanation: DTMF: Linear combination of two sinusoids
End of explanation
t = np.linspace(0, duration * 10, int(duration * 10 * Fs))
f1, f2 = 697, 1336
y1 = np.sin(2 * np.pi * f1 * t);
y2 = np.sin(2 * np.pi * f2 * t);
y = (y1 + y2) / 2
Audio(y, rate=Fs)
# Recreate the original signal for simplicity
t = np.linspace(0, duration, int(duration * Fs))
f1, f2 = 697, 1336
y1 = np.sin(2 * np.pi * f1 * t);
y2 = np.sin(2 * np.pi * f2 * t);
y = (y1 + y2) / 2
n = y.shape[0]
p = np.abs(fft.fft(y));
f = fft.fftfreq(n, d=1/Fs)
plt.plot(f,p);
Explanation: Repeating the signal ten times
End of explanation
# enter code here
Explanation: Q: Why did this happen?
Exercise: Re-plot the spectrum for only positive frequencies, and limit the X-axis to only 2 kHz
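One possible solution is sketched below (only one of several ways to do it; the exercise is still worth trying yourself):
pos = f > 0
plt.plot(f[pos], p[pos])
plt.xlim(0, 2000)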
End of explanation
max_power_index = np.argsort(p)[::-1]
max_frequency_index = f[max_power_index]
print(max_frequency_index[:5])
Explanation: Check that peaks are at the correct frequencies
End of explanation
time_step = 0.02
period = 5.
time_vec = np.arange(0, 20, time_step)
sig = np.sin(2 * np.pi / period * time_vec) + \
0.5 * np.random.randn(time_vec.size)
plt.plot(time_vec, sig)
sample_freq = fft.fftfreq(sig.size, d=time_step)
sig_fft = fft.fft(sig)
pidxs = np.where(sample_freq > 0)
freqs = sample_freq[pidxs]
power = np.abs(sig_fft)[pidxs]
plt.plot(freqs, power)
plt.xlim(0, 5)
# denoising
freq = freqs[power.argmax()]
sig_fft[np.abs(sample_freq) > freq] = 0
# Reconstruction
recons = fft.ifft(sig_fft)
plt.plot(time_vec, sig, time_vec, recons)
Explanation: Processing a noisy signal
End of explanation
x = plt.imread("moonlanding.png")
plt.imshow(x, cmap=plt.cm.gray)
plt.yticks([])
plt.xticks([])
plt.grid()
# enter code here
Explanation: exercise: 2D FFT - Denoise the following image
Hint: Look for 2D FFT functions in scipy.fftpack module
1. Visualize the frequency spectrum of the image
2. Threshold on a suitable value
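One possible approach is sketched below (a rough solution of ours; the keep fraction and the exact thresholding strategy are judgment calls):
from scipy import fftpack
F = fftpack.fft2(x)
plt.figure()
plt.imshow(np.log(1 + np.abs(F)), cmap=plt.cm.gray)   # 1. the frequency spectrum
keep = 0.1                                            # 2. keep only a low-frequency block
r, c = F.shape
F[int(r * keep):int(r * (1 - keep)), :] = 0
F[:, int(c * keep):int(c * (1 - keep))] = 0
plt.figure()
plt.imshow(fftpack.ifft2(F).real, cmap=plt.cm.gray)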
End of explanation |
456 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Curves
Curves are one of the fundamental objects in welly.
Well objects include collections of Curve objects. Multiple Well objects can be stored in a Project.
On this page, we take a closer look at the Curve object.
Some preliminaries...
Step1: Load a well from LAS
Use the from_las() method to load a well by passing a filename as a str.
This is really just a wrapper for lasio but instantiates a Header, Curves, etc.
Step2: The curves are stored in the data attribute, which is an ordinary dictionary
Step3: Let's look at one log
Step4: The object knows some things about itself
Step5: Curves have various methods on them, such as plot()...
Step6: Often we just want to look at or deal with a portion of the curve, or maybe resample it
Step7: Interpolation and slicing
We can read the curve at any depth (or depths) and get an interpolated reading
Step8: There are no samples at those depths; the well is sampled at a 0.1524 m interval
Step9: The actual depths of the samples are in the 'index'
Step10: You can slice a curve by this index; in other words, by depth
Step11: You can get a statistical description of a curve
Step12: Mathematics
Step13: Mathematical operations results in another Curve object, but the values are transformed
Step14: Beware, for the time being, units are not transformed by mathematical operations!
Plotting
Step15: There's also a pseudocolor 2D ribbon plot
Step16: You can optionally show the curve trace as well with curve=True
Step17: Despike
You can despike with a window length for the trend and a Z-score to clip at — the curve is compared to the median in the window using the standard deviation from the entire curve. Here's the difference
Step18: Blocking
We can block a curve. Let's look at a small segment
Step19: We can create a binary log (0's and 1's) with a simple cutoff
Step20: Or we can use two cutoffs and get a blocked log with three different values. By default the new values will be 0, 1, 2, but we can assign whatever we like
Step21: You can send a function in to determine replacement values from the original log. E.g., to replace the values with the block's mean value
Step22: Instantiating a new curve
Let's add a curve from a list of values (data) with depths (basis) | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import welly
welly.__version__
Explanation: Curves
Curves are one of the fundamental objects in welly.
Well objects include collections of Curve objects. Multiple Well objects can be stored in a Project.
On this page, we take a closer look at the Curve object.
Some preliminaries...
End of explanation
from welly import Well
p129 = Well.from_las('https://geocomp.s3.amazonaws.com/data/P-129.LAS')
p129.plot()
Explanation: Load a well from LAS
Use the from_las() method to load a well by passing a filename as a str.
This is really just a wrapper for lasio but instantiates a Header, Curves, etc.
End of explanation
p129.data
Explanation: The curves are stored in the data attribute, which is an ordinary dictionary:
End of explanation
gr = p129.data['GR']
gr
Explanation: Let's look at one log:
End of explanation
gr.mnemonic, gr.units, gr.start, gr.stop, gr.step
Explanation: The object knows some things about itself:
End of explanation
gr.plot()
Explanation: Curves have various methods on them, such as plot()...
End of explanation
gr.to_basis(start=1000, stop=1250, step=10.0).plot()
Explanation: Often we just want to look at or deal with a portion of the curve, or maybe resample it:
End of explanation
gr.read_at([1200, 1300, 1400])
Explanation: Interpolation and slicing
We can read the curve at any depth (or depths) and get an interpolated reading:
End of explanation
gr.step
Explanation: There are no samples at those depths; the well is sampled at a 0.1524 m interval:
End of explanation
gr.index
Explanation: The actual depths of the samples are in the 'index':
End of explanation
gr[1000:1010]
Explanation: You can slice a curve by this index; in other words, by depth:
End of explanation
gr.describe() # Equivalent to get_stats()
Explanation: You can get a statistical description of a curve:
End of explanation
gr.mean()
Explanation: Mathematics
End of explanation
1000 * p129.data['RHOB']
Explanation: Mathematical operations result in another Curve object, but the values are transformed:
End of explanation
gr.plot(c='r', lw=0.5)
Explanation: Beware, for the time being, units are not transformed by mathematical operations!
Plotting
End of explanation
gr.plot_2d()
Explanation: There's also a pseudocolor 2D ribbon plot:
End of explanation
gr.plot_2d(cmap='viridis_r', curve=True, lw=0.3, edgecolor='k')
plt.xlim(0,200)
Explanation: You can optionally show the curve trace as well with curve=True:
End of explanation
p129.data['DESP'] = gr.despike(z=1)
p129.data['DIFF'] = gr - p129.data['DESP']
p129.plot(tracks=['GR', 'DESP', 'DIFF'])
Explanation: Despike
You can despike with a window length for the trend and a Z-score to clip at — the curve is compared to the median in the window using the standard deviation from the entire curve. Here's the difference:
End of explanation
segment = gr.to_basis(start=600, stop=680)
Explanation: Blocking
We can block a curve. Let's look at a small segment:
End of explanation
fig, axs = plt.subplots(ncols=2)
# The original log on the left.
segment.plot(ax=axs[0], c='r')
axs[0].axvline(80, c='c', alpha=0.7)
# Make and plot a blocked version.
segment.block(cutoffs=80).plot(ax=axs[1])
axs[1].set_xlabel('')
Explanation: We can create a binary log (0's and 1's) with a simple cutoff:
End of explanation
fig, ax = plt.subplots()
segment.plot(ax=ax)
segment.block(cutoffs=(80, 120), values=(20, 100, 120)).plot(ax=ax)
Explanation: Or we can use two cutoffs and get a blocked log with three different values. By default the new values will be 0, 1, 2, but we can assign whatever we like:
End of explanation
fig, ax = plt.subplots()
segment.plot(ax=ax)
segment.block(cutoffs=80, function=np.mean).plot(ax=ax)
plt.axvline(80, color='c', lw=1)
Explanation: You can send a function in to determine replacement values from the original log. E.g., to replace the values with the block's mean value:
End of explanation
from welly import Curve
params = {'mnemonic': 'FOO', 'run':0, }
data = [20, 30, 40, 20, 10, 0, 10]
c = Curve(data, index=[2,3,4,5,6,7,8], **params)
c.plot()
Explanation: Instantiating a new curve
Let's add a curve from a list of values (data) with depths (basis):
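A second, synthetic example (made-up numbers and mnemonic, reusing only the constructor arguments shown above) builds a curve on a regular depth basis:
depths = np.arange(1000, 1010, 0.5)
values = np.random.uniform(50, 150, size=depths.size)
synth = Curve(values, index=depths, mnemonic='SYNTH')
synth.plot()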
End of explanation |
457 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Exploring the TF-Hub CORD-19 Swivel embeddings
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Analyzing the embeddings
Let's start by analyzing the embeddings: we calculate and plot a correlation matrix between different terms. If the embedding learned to successfully capture the meaning of different words, the embedding vectors of semantically similar words should be close together. Let's take a look at some COVID-19 related terms.
Step3: We can see that the embedding successfully captured the meaning of the different terms. Each word is similar to the other words of its cluster (i.e. "coronavirus" highly correlates with "SARS" and "MERS"), while they are different from terms of other clusters (i.e. the similarity between "SARS" and "Spain" is close to 0).
Now let's see how we can use these embeddings to solve a specific task.
SciCite
Step4: 인용 의도 분류자 훈련하기
Keras를 사용하여 SciCite 데이터세트에 대한 분류자를 훈련합니다. 분류 레이어를 상위에 둔 CORD-19 임베딩을 사용하는 모델을 빌드하겠습니다.
Step5: 모델 훈련 및 평가하기
SciCite 작업의 성능을 확인하기 위해 모델을 훈련하고 평가하겠습니다.
Step6: 모델 평가하기
그리고 모델이 어떤 성능을 보이는지 알아보겠습니다. 손실(오류를 나타내는 숫자, 값이 낮을수록 좋음) 및 정확성의 두 가지 값이 반환됩니다.
Step7: 특히 정확성이 빠르게 증가하는 동안 손실이 빠르게 감소하는 것을 볼 수 있습니다. 예측이 실제 레이블과 어떻게 관련되는지 확인하기 위해 몇 가지 예를 플롯해 보겠습니다. | Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import functools
import itertools
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
from tqdm import trange
Explanation: Exploring the TF-Hub CORD-19 Swivel embeddings
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/cord_19_embeddings_keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/hub/tutorials/cord_19_embeddings_keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/hub/tutorials/cord_19_embeddings_keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/hub/tutorials/cord_19_embeddings_keras.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
<td><a href="https://tfhub.dev/tensorflow/cord-19/swivel-128d/3"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">See TF Hub model</a></td>
</table>
The CORD-19 Swivel text embedding module from TF-Hub (https://tfhub.dev/tensorflow/cord-19/swivel-128d/3) was built to support researchers analyzing natural-language text related to COVID-19. These embeddings were trained on the titles, authors, abstracts, body texts, and reference titles of articles in the CORD-19 dataset.
In this colab we will:
Analyze semantically similar words in the embedding space
Train a classifier on the SciCite dataset using the CORD-19 embeddings
Setup
End of explanation
# Use the inner product between two embedding vectors as the similarity measure
def plot_correlation(labels, features):
corr = np.inner(features, features)
corr /= np.max(corr)
sns.heatmap(corr, xticklabels=labels, yticklabels=labels)
# Generate embeddings for some terms
queries = [
# Related viruses
'coronavirus', 'SARS', 'MERS',
# Regions
'Italy', 'Spain', 'Europe',
# Symptoms
'cough', 'fever', 'throat'
]
module = hub.load('https://tfhub.dev/tensorflow/cord-19/swivel-128d/3')
embeddings = module(queries)
plot_correlation(queries, embeddings)
Explanation: Analyzing the embeddings
We will start analyzing the embeddings by computing and plotting a correlation matrix between different terms. If the embeddings successfully learned to capture the meaning of different words, the embedding vectors of semantically similar words should be close to each other. Let's look at some terms related to COVID-19.
End of explanation
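A small aside on the similarity measure: the helper above scales a raw inner product by its maximum. If you prefer a similarity bounded in [-1, 1], you could L2-normalise each vector first, which gives plain cosine similarity. A hedged sketch, assuming embeddings is the tensor returned by module(queries) above:
# Convert to a NumPy array and L2-normalise each embedding vector.
emb = np.asarray(embeddings)
emb_unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
cosine_sim = np.inner(emb_unit, emb_unit)  # entries now lie in [-1, 1]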
builder = tfds.builder(name='scicite')
builder.download_and_prepare()
train_data, validation_data, test_data = builder.as_dataset(
split=('train', 'validation', 'test'),
as_supervised=True)
#@title Let's take a look at a few labeled examples from the training set
NUM_EXAMPLES = 10#@param {type:"integer"}
TEXT_FEATURE_NAME = builder.info.supervised_keys[0]
LABEL_NAME = builder.info.supervised_keys[1]
def label2str(numeric_label):
m = builder.info.features[LABEL_NAME].names
return m[numeric_label]
data = next(iter(train_data.batch(NUM_EXAMPLES)))
pd.DataFrame({
TEXT_FEATURE_NAME: [ex.numpy().decode('utf8') for ex in data[0]],
LABEL_NAME: [label2str(x) for x in data[1]]
})
Explanation: We can see that the embeddings successfully captured the meaning of the different terms. Each word is similar to the other words in its cluster (i.e. "coronavirus" correlates highly with "SARS" and "MERS"), while it differs from the terms of other clusters (i.e. the similarity between "SARS" and "Spain" is close to 0).
Now let's see how we can use these embeddings to solve a specific task.
SciCite: Citation Intent Classification
This section shows how to use the embeddings for a downstream task such as text classification. We'll use the SciCite dataset from TensorFlow Datasets to classify citation intents in academic papers: given a sentence containing a citation from an academic paper, classify whether the main intent of the citation is background information, use of methods, or comparison of results.
End of explanation
#@title Hyperparameters { run: "auto" }
EMBEDDING = 'https://tfhub.dev/tensorflow/cord-19/swivel-128d/3' #@param {type: "string"}
TRAINABLE_MODULE = False #@param {type: "boolean"}
hub_layer = hub.KerasLayer(EMBEDDING, input_shape=[],
dtype=tf.string, trainable=TRAINABLE_MODULE)
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(3))
model.summary()
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: Training a citation intent classifier
We will train a classifier on the SciCite dataset using Keras. Let's build a model that uses the CORD-19 embeddings with a classification layer on top.
End of explanation
EPOCHS = 35#@param {type: "integer"}
BATCH_SIZE = 32#@param {type: "integer"}
history = model.fit(train_data.shuffle(10000).batch(BATCH_SIZE),
epochs=EPOCHS,
validation_data=validation_data.batch(BATCH_SIZE),
verbose=1)
from matplotlib import pyplot as plt
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor('#F8F8F8')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
display_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
Explanation: Training and evaluating the model
Let's train and evaluate the model to see how it performs on the SciCite task.
End of explanation
results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
print('%s: %.3f' % (name, value))
Explanation: Evaluating the model
Let's see how the model performs. Two values are returned: loss (a number representing the error; lower values are better) and accuracy.
End of explanation
prediction_dataset = next(iter(test_data.batch(20)))
prediction_texts = [ex.numpy().decode('utf8') for ex in prediction_dataset[0]]
prediction_labels = [label2str(x) for x in prediction_dataset[1]]
predictions = [
label2str(x) for x in np.argmax(model.predict(prediction_texts), axis=-1)]
pd.DataFrame({
TEXT_FEATURE_NAME: prediction_texts,
LABEL_NAME: prediction_labels,
'prediction': predictions
})
Explanation: We can see that the loss decreases quickly while the accuracy increases rapidly. Let's plot some examples to check how the predictions relate to the true labels.
End of explanation |
458 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Description
In this challenge, you will be given a set of circles, defined by their centers and radii. Your goal is to find the bounding rectangle which will contain all of the circles completely.
Write a program that determines the vertices of the bounding rectangle with sides parallel to the axes.
Input Description
Each line will contain a comma separated center and radius for a circle.
Output Description
The format of the output will be comma separated coordinates, rounded to 3 decimal places.
Challenge Input
1,1,2
2,2,0.5
-1,-3,2
5,2,1
Challenge Output
(-3, -5), (-3, 3), (6, 3), (6, -5)
Bonus
For the bonus, we will rotate the axis for the bounding rectangle. The first line of input will now be a vector determining the direction of one edge of the bounding rectangle.
Bonus Input
1,1
1,1,2
2,2,0.5
-1,-3,2
5,2,1
Bonus Output
(-4.828, -2.0), (2.793, 5.621), (6.621, 1.793), (-1.0, -5.828)
Credit
This challenge was suggested by user /u/Preferencesoft, many thanks! If you have an idea for a challenge please share it on /r/dailyprogrammer_ideas and there's a good chance we'll use it.
Step1: Challenge
Step2: Bonus | Python Code:
from matplotlib import pyplot as plt
from matplotlib import patches as patches
from matplotlib import ticker as ticker
from math import atan, degrees, cos, sin
%matplotlib inline
Explanation: Description
In this challenge, you will be given a set of circles, defined by their centers and radii. Your goal is to find the bounding rectangle which will contain all of the circles completely.
Write a program that determines the vertices of the bounding rectangle with sides parallel to the axes.
Input Description
Each line will contain a comma separated center and radius for a circle.
Output Description
The format of the output will be comma separated coordinates, rounded to 3 decimal places.
Challenge Input
1,1,2
2,2,0.5
-1,-3,2
5,2,1
Challenge Output
(-3, -5), (-3, 3), (6, 3), (6, -5)
Bonus
For the bonus, we will rotate the axis for the bounding rectangle. The first line of input will now be a vector determining the direction of one edge of the bounding rectangle.
Bonus Input
1,1
1,1,2
2,2,0.5
-1,-3,2
5,2,1
Bonus Output
(-4.828, -2.0), (2.793, 5.621), (6.621, 1.793), (-1.0, -5.828)
Credit
This challenge was suggested by user /u/Preferencesoft, many thanks! If you have an idea for a challenge please share it on /r/dailyprogrammer_ideas and there's a good chance we'll use it.
End of explanation
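Before the code, it may help to restate the quantity being computed (this is only a restatement of the task above, not extra input): for circle centres $(x_i, y_i)$ with radii $r_i$, the axis-aligned bounding rectangle spans $x_{\min}=\min_i(x_i-r_i)$, $x_{\max}=\max_i(x_i+r_i)$, $y_{\min}=\min_i(y_i-r_i)$ and $y_{\max}=\max_i(y_i+r_i)$.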
circles = [(1,1,2),(2,2,0.5),(-1,-3,2),(5,2,1)]
fig, ax = plt.subplots(figsize=(10,10))
for x,y,radius in circles:
ax.add_artist(plt.Circle((x,y), radius, fill=False))
ax.xaxis.set_major_locator(ticker.MultipleLocator(base=1.0))
ax.yaxis.set_major_locator(ticker.MultipleLocator(base=1.0))
ax.grid(b=True, which='major', color='k', linestyle='--', alpha=0.3)
plt.xlim(-8,8)
plt.ylim(-8,8)
plt.show()
min_x = max_x = min_y = max_y = None
for x,y,radius in circles:
if min_x is None or x - radius < min_x:
min_x = x - radius
if min_y is None or y - radius < min_y:
min_y = y - radius
if max_x is None or x + radius > max_x:
max_x = x + radius
if max_y is None or y + radius > max_y:
max_y = y + radius
rect_coords = [(min_x,min_y), (min_x,max_y),(max_x,max_y),(max_x,min_y)]
fig, ax = plt.subplots(figsize=(10,10))
for x,y,radius in circles:
ax.add_artist(plt.Circle((x,y), radius, fill=False))
ax.xaxis.set_major_locator(ticker.MultipleLocator(base=1.0))
ax.yaxis.set_major_locator(ticker.MultipleLocator(base=1.0))
ax.grid(b=True, which='major', color='k', linestyle='--', alpha=0.3)
plt.xlim(-8,8)
plt.ylim(-8,8)
ax.add_patch(patches.Polygon(rect_coords, fill=False, color='r', linewidth=2))
plt.show()
print(rect_coords)
Explanation: Challenge
End of explanation
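For the bonus that follows, each circle centre is rotated into the frame aligned with the given edge vector, the axis-aligned bounds are taken there, and the corners are rotated back. As a reminder (this is just the textbook 2D rotation that rotate_coords below implements), rotating a point by an angle $\theta$ gives $x' = x\cos\theta - y\sin\theta$ and $y' = x\sin\theta + y\cos\theta$; rotating back uses $-\theta$.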
vector = (1,1)
theta = atan(vector[0]/vector[1])
def rotate_coords(x,y, theta):
return x*cos(theta) - y*sin(theta), x*sin(theta) + y*cos(theta)
min_x = max_x = min_y = max_y = None
for xo,yo,radius in circles:
x,y = rotate_coords(xo,yo,theta)
if min_x is None or x - radius < min_x:
min_x = x - radius
if min_y is None or y - radius < min_y:
min_y = y - radius
if max_x is None or x + radius > max_x:
max_x = x + radius
if max_y is None or y + radius > max_y:
max_y = y + radius
rect_coords = [rotate_coords(min_x,min_y,-theta),
rotate_coords(min_x,max_y,-theta),
rotate_coords(max_x,max_y,-theta),
rotate_coords(max_x,min_y,-theta)]
fig, ax = plt.subplots(figsize=(10,10))
for x,y,radius in circles:
ax.add_artist(plt.Circle((x,y), radius, fill=False))
ax.xaxis.set_major_locator(ticker.MultipleLocator(base=1.0))
ax.yaxis.set_major_locator(ticker.MultipleLocator(base=1.0))
ax.grid(b=True, which='major', color='k', linestyle='--', alpha=0.3)
plt.xlim(-8,8)
plt.ylim(-8,8)
ax.add_patch(patches.Polygon(rect_coords, fill=False, color='r', linewidth=2))
plt.show()
print(rect_coords)
Explanation: Bonus
End of explanation |
459 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nonparametric tests
Test | One-sample | Two-sample | Two-sample (paired samples)
------------- | ------------- | ------------- | -------------
Sign | $\times$ | | $\times$
Rank | $\times$ | $\times$ | $\times$
Permutation | $\times$ | $\times$ | $\times$
Real estate in Seattle
We have data on the sale prices of real estate in Seattle for 50 transactions in 2001 and 50 in 2002. Did prices change on average?
Step1: Loading the data
Step2: Two-sample tests for independent samples
Step3: Mann-Whitney rank test
$H_0\colon F_{X_1}(x) = F_{X_2}(x)$
$H_1\colon F_{X_1}(x) = F_{X_2}(x + \Delta), \Delta\neq 0$
Step4: Permutation test
$H_0\colon F_{X_1}(x) = F_{X_2}(x)$
$H_1\colon F_{X_1}(x) = F_{X_2}(x + \Delta), \Delta\neq 0$ | Python Code:
import numpy as np
import pandas as pd
import itertools
from scipy import stats
from statsmodels.stats.descriptivestats import sign_test
from statsmodels.stats.weightstats import zconfint
from statsmodels.stats.weightstats import *
%pylab inline
Explanation: Nonparametric tests
Test | One-sample | Two-sample | Two-sample (paired samples)
------------- | ------------- | ------------- | -------------
Sign | $\times$ | | $\times$
Rank | $\times$ | $\times$ | $\times$
Permutation | $\times$ | $\times$ | $\times$
Real estate in Seattle
We have data on the sale prices of real estate in Seattle for 50 transactions in 2001 and 50 in 2002. Did prices change on average?
End of explanation
seattle_data = pd.read_csv('seattle.txt', sep = '\t', header = 0)
seattle_data.shape
seattle_data.head()
price2001 = seattle_data[seattle_data['Year'] == 2001].Price
price2002 = seattle_data[seattle_data['Year'] == 2002].Price
pylab.figure(figsize=(12,4))
pylab.subplot(1,2,1)
pylab.grid()
pylab.hist(price2001, color = 'r')
pylab.xlabel('2001')
pylab.subplot(1,2,2)
pylab.grid()
pylab.hist(price2002, color = 'b')
pylab.xlabel('2002')
pylab.show()
Explanation: Loading the data
End of explanation
print '95%% confidence interval for the mean: [%f, %f]' % zconfint(price2001)
print '95%% confidence interval for the mean: [%f, %f]' % zconfint(price2002)
Explanation: Two-sample tests for independent samples
End of explanation
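As a complement to the two per-year intervals above, the same statsmodels helper can also give an interval for the difference in mean prices between the years, assuming the installed statsmodels version supports the two-sample form of zconfint; the following line is an illustrative addition, not part of the original notebook.
print '95%% confidence interval for the difference in mean prices: [%f, %f]' % zconfint(price2002, price2001)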
price2001.shape, price2002.shape
stats.mannwhitneyu(price2001, price2002)
Explanation: Mann-Whitney rank test
$H_0\colon F_{X_1}(x) = F_{X_2}(x)$
$H_1\colon F_{X_1}(x) = F_{X_2}(x + \Delta), \Delta\neq 0$
End of explanation
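A brief usage note, stated as an assumption about the SciPy version rather than something asserted by this notebook: SciPy 0.17 and later let you make the two-sided alternative above explicit, which is worth doing because the default sidedness of mannwhitneyu has varied between releases.
stats.mannwhitneyu(price2001, price2002, alternative='two-sided')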
def permutation_t_stat_ind(sample1, sample2):
return np.mean(sample1) - np.mean(sample2)
def get_random_combinations(n1, n2, max_combinations):
index = range(n1 + n2)
indices = set([tuple(index)])
for i in range(max_combinations - 1):
np.random.shuffle(index)
indices.add(tuple(index))
return [(index[:n1], index[n1:]) for index in indices]
def permutation_zero_dist_ind(sample1, sample2, max_combinations = None):
joined_sample = np.hstack((sample1, sample2))
n1 = len(sample1)
n = len(joined_sample)
if max_combinations:
indices = get_random_combinations(n1, len(sample2), max_combinations)
else:
indices = [(list(index), filter(lambda i: i not in index, range(n))) \
for index in itertools.combinations(range(n), n1)]
distr = [joined_sample[list(i[0])].mean() - joined_sample[list(i[1])].mean() \
for i in indices]
return distr
pylab.hist(permutation_zero_dist_ind(price2001, price2002, max_combinations = 1000))
pylab.show()
def permutation_test(sample, mean, max_permutations = None, alternative = 'two-sided'):
if alternative not in ('two-sided', 'less', 'greater'):
raise ValueError("alternative not recognized\n"
"should be 'two-sided', 'less' or 'greater'")
t_stat = permutation_t_stat_ind(sample, mean)
zero_distr = permutation_zero_dist_ind(sample, mean, max_permutations)
if alternative == 'two-sided':
return sum([1. if abs(x) >= abs(t_stat) else 0. for x in zero_distr]) / len(zero_distr)
if alternative == 'less':
return sum([1. if x <= t_stat else 0. for x in zero_distr]) / len(zero_distr)
if alternative == 'greater':
return sum([1. if x >= t_stat else 0. for x in zero_distr]) / len(zero_distr)
print "p-value: %f" % permutation_test(price2001, price2002, max_permutations = 10000)
print "p-value: %f" % permutation_test(price2001, price2002, max_permutations = 50000)
Explanation: Permutation test
$H_0\colon F_{X_1}(x) = F_{X_2}(x)$
$H_1\colon F_{X_1}(x) = F_{X_2}(x + \Delta), \Delta\neq 0$
End of explanation |
460 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gallery for DR6
The purpose of this notebook is to build the gallery for the sixth Legacy Survey data release, DR6. The theme of this gallery is...the NGC catalog!
For future reference
Step1: Preliminaries
Define the data release and the various output directories.
Step2: Read the Open NGC catalog created by Mattia Verga
Step3: Select the desired object types.
Here we choose ...
Step4: Require "big" objects, particularly the galaxies (to cut down the sample size).
Step5: Convert coordinates in decimal degrees.
Step10: Generate (find) the sample of objects in the DR6 footprint.
Step12: Generate the color mosaics for each object.
Step14: Add labels and a scale bar.
Step19: Finally, assemble the webpage of good and rejected gallery images.
To test the webpage before release, do
* rsync -auvP /global/cscratch1/sd/ioannis/dr6/gallery/png /global/project/projectdirs/cosmo/www/temp/ioannis/dr6/gallery/
* rsync -auvP /global/cscratch1/sd/ioannis/dr6/gallery/*.html /global/project/projectdirs/cosmo/www/temp/ioannis/dr6/gallery/ | Python Code:
import os, sys
import shutil, time, warnings
from contextlib import redirect_stdout
import numpy as np
import numpy.ma as ma
import matplotlib.pyplot as plt
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table, Column, vstack
from astropy.io import ascii
from PIL import Image, ImageDraw, ImageFont
#from astrometry.util.starutil_numpy import hmsstring2ra
from astrometry.util.util import Tan
from astrometry.util.fits import merge_tables
from legacypipe.survey import LegacySurveyData
from legacypipe.runbrick import run_brick
import multiprocessing
nproc = multiprocessing.cpu_count() // 2
%matplotlib inline
Explanation: Gallery for DR6
The purpose of this notebook is to build the gallery for the sixth Legacy Survey data release, DR6. The theme of this gallery is...the NGC catalog!
For future reference: The notebook must be run from https://jupyter-dev.nersc.gov with the following (approximate) activation script:
```bash
!/bin/bash
version=$1
connection_file=$2
desiconda_version=20170818-1.1.12-img
module use /global/common/${NERSC_HOST}/contrib/desi/desiconda/$desiconda_version/modulefiles
module load desiconda
export LEGACY_SURVEY_DIR=/global/cscratch1/sd/dstn/dr6plus
export LEGACYPIPE_DIR=$SCRATCH/repos/legacypipe
export PATH=$LEGACYPIPE_DIR/bin:${PATH}
export PATH=$SCRATCH//repos/build/bin:$PATH
export PYTHONPATH=$LEGACYPIPE_DIR/py:${PYTHONPATH}
export PYTHONPATH=$SCRATCH/repos/build/lib/python3.5/site-packages:$PYTHONPATH
module use $LEGACYPIPE_DIR/bin/modulefiles/cori
module load dust
exec python -m ipykernel -f $connection_file
```
Imports and paths
End of explanation
dr = 'dr6'
PIXSCALE = 0.262
gallerydir = os.path.join( os.getenv('SCRATCH'), dr, 'gallery' )
galleryfile = os.path.join(gallerydir, 'gallery-{}.fits'.format(dr))
jpgdir = os.path.join(gallerydir, 'jpg')
if not os.path.isdir(jpgdir):
os.mkdir(jpgdir)
pngdir = os.path.join(gallerydir, 'png')
if not os.path.isdir(pngdir):
os.mkdir(pngdir)
Explanation: Preliminaries
Define the data release and the various output directories.
End of explanation
names = ('name', 'type', 'ra_hms', 'dec_dms', 'const', 'majax', 'minax',
'pa', 'bmag', 'vmag', 'jmag', 'hmag', 'kmag', 'sbrightn', 'hubble',
'cstarumag', 'cstarbmag', 'cstarvmag', 'messier', 'ngc', 'ic',
'cstarnames', 'identifiers', 'commonnames', 'nednotes', 'ongcnotes')
NGC = ascii.read(os.path.join(gallerydir, 'NGC.csv'), delimiter=';', names=names)
NGC.write(os.path.join(gallerydir, 'NGC.fits'), overwrite=True)
NGC
Explanation: Read the Open NGC catalog created by Mattia Verga:
https://github.com/mattiaverga/OpenNGC
bash
wget https://raw.githubusercontent.com/mattiaverga/OpenNGC/master/NGC.csv
Name: Object name composed by catalog + number
NGC: New General Catalogue
IC: Index Catalogue
Type: Object type
*: Star
**: Double star
*Ass: Association of stars
OCl: Open Cluster
GCl: Globular Cluster
Cl+N: Star cluster + Nebula
G: Galaxy
GPair: Galaxy Pair
GTrpl: Galaxy Triplet
GGroup: Group of galaxies
PN: Planetary Nebula
HII: HII Ionized region
DrkN: Dark Nebula
EmN: Emission Nebula
Neb: Nebula
RfN: Reflection Nebula
SNR: Supernova remnant
Nova: Nova star
NonEx: Nonexistent object
RA: Right Ascension in J2000 Epoch (HH:MM:SS.SS)
Dec: Declination in J2000 Epoch (+/-DD:MM:SS.SS)
Const: Constellation where the object is located
MajAx: Major axis, expressed in arcmin
MinAx: Minor axis, expressed in arcmin
PosAng: Major axis position angle (North Eastwards)
B-Mag: Apparent total magnitude in B filter
V-Mag: Apparent total magnitude in V filter
J-Mag: Apparent total magnitude in J filter
H-Mag: Apparent total magnitude in H filter
K-Mag: Apparent total magnitude in K filter
SurfBr (only Galaxies): Mean surface brigthness within 25 mag isophot (B-band), expressed in mag/arcsec2
Hubble (only Galaxies): Morphological type (for galaxies)
Cstar U-Mag (only Planetary Nebulae): Apparent magnitude of central star in U filter
Cstar B-Mag (only Planetary Nebulae): Apparent magnitude of central star in B filter
Cstar V-Mag (only Planetary Nebulae): Apparent magnitude of central star in V filter
M: cross reference Messier number
NGC: other NGC identification, if the object is listed twice in the catalog
IC: cross reference IC number, if the object is also listed with that identification
Cstar Names (only Planetary Nebulae): central star identifications
Identifiers: cross reference with other catalogs
Common names: Common names of the object if any
NED Notes: notes about object exported from NED
OpenNGC Notes: notes about the object data from OpenNGC catalog
End of explanation
majax = ma.getdata(NGC['majax']) # arcmin
objtype = np.char.strip(ma.getdata(NGC['type']))
keeptype = ('G', 'PN', 'OCl', 'GCl', 'Cl+N') # Cl gives us GCl, OCl, and Cl+N
#keeptype = ('G', 'GPair', 'GTrpl', 'GGroup', 'PN', 'Cl') # Cl gives us GCl, OCl, and Cl+N
keep = np.zeros(len(NGC), dtype=bool)
for otype in keeptype:
print('Working on {}'.format(otype))
ww = [otype == tt for tt in objtype]
keep = np.logical_or(keep, ww)
Explanation: Select the desired object types.
Here we choose ...
End of explanation
galtoss = (objtype == 'G') * (majax < 3)
keep = np.logical_and(keep, (majax > 0.3) * (majax < 20))
keep = np.logical_and(keep, ~galtoss)
nobj = np.count_nonzero(keep)
print('Keeping {} / {} objects'.format(nobj, len(NGC)))
cat = NGC[keep]
cat
print(np.unique(ma.getdata(cat['type'])))
print(np.unique(ma.getdata(cat['hubble'])))
ww = (cat['type'] == 'G')
_ = plt.hist(cat['majax'][ww], bins=100)
Explanation: Require "big" objects, particularly the galaxies (to cut down the sample size).
End of explanation
coord = SkyCoord(ra=cat['ra_hms'], dec=cat['dec_dms'], unit=(u.hourangle, u.deg))
cat.add_column(Column(name='ra', unit='deg', length=nobj))
cat.add_column(Column(name='dec', unit='deg', length=nobj))
cat['ra'] = coord.ra.value
cat['dec'] = coord.dec.value
Explanation: Convert coordinates in decimal degrees.
End of explanation
survey = LegacySurveyData()
survey.output_dir = gallerydir
def get_name(cat, nice=False):
name = np.atleast_1d(ma.getdata(cat['name']))
mess = np.atleast_1d(ma.getdata(cat['messier']))
comm = np.atleast_1d(ma.getdata(cat['commonnames']))
outname = []
if nice:
hubble_type = np.empty_like(name)
for nn, mm, cc in zip(name, mess, comm):
oo = nn.strip().replace('NED01', '').upper()
if mm != 0:
oo = '{} = M{}'.format(oo, mm)
if cc != 0:
oo = '{} = {}'.format(oo, str(cc).replace(',', ' = '))
outname.append(oo)
else:
for nn in name:
outname.append(nn.strip().replace(' ', '_').lower())
if len(outname) == 1:
outname = outname[0]
return outname
def simple_wcs(obj, diam):
    """Build a simple WCS object for a single object."""
size = np.rint(diam / PIXSCALE).astype('int') # [pixels]
wcs = Tan(obj['ra'], obj['dec'], size/2+0.5, size/2+0.5,
-PIXSCALE/3600.0, 0.0, 0.0, PIXSCALE/3600.0,
float(size), float(size))
return wcs
def _build_sample_one(args):
    """Wrapper function for the multiprocessing."""
return build_sample_one(*args)
def build_sample_one(obj, factor=0.5, verbose=False):
    """Wrapper function to find overlapping grz CCDs for a given object."""
name = get_name(obj)
print('Working on {}...'.format(name))
diam = factor * ma.getdata(obj['majax']) * 60.0 # diameter [arcsec]
wcs = simple_wcs(obj, diam)
try:
ccds = survey.ccds_touching_wcs(wcs) # , ccdrad=2*diam/3600)
except:
return None
if ccds:
# Is there 3-band coverage?
if 'g' in ccds.filter and 'r' in ccds.filter and 'z' in ccds.filter:
if verbose:
print('For {} (type={}) found {} CCDs, RA = {:.5f}, Dec = {:.5f}, Diameter={:.4f} arcmin'.format(
obj['name'], obj['type'], len(ccds), obj['ra'], obj['dec'], obj['majax']))
return obj
return None
def build_sample(cat, factor=1.0):
    """Build the full sample with grz coverage in DR6."""
sampleargs = list()
for cc in cat:
sampleargs.append( (cc, factor, True) ) # the False refers to verbose=False
if nproc > 1:
p = multiprocessing.Pool(nproc)
result = p.map(_build_sample_one, sampleargs)
p.close()
else:
result = list()
for args in sampleargs:
result.append(_build_sample_one(args))
# Remove non-matching objects and write out the sample
outcat = vstack(list(filter(None, result)))
print('Found {}/{} objects in the DR6 footprint.'.format(len(outcat), len(cat)))
return outcat
samplelogfile = os.path.join(gallerydir, 'build-sample.log')
print('Building the sample.')
print('Logging to {}'.format(samplelogfile))
t0 = time.time()
with open(samplelogfile, 'w') as log:
with redirect_stdout(log):
sample = build_sample(cat)
print('Found {}/{} objects in the DR6 footprint.'.format(len(sample), len(cat)))
print('Total time = {:.3f} seconds.'.format(time.time() - t0))
print('Writing {}'.format(galleryfile))
sample.write(galleryfile, overwrite=True)
sample
print(np.unique(sample['type']).data)
def qa_sample():
fig, ax = plt.subplots()
ax.scatter(cat['ra'], cat['dec'], alpha=0.5, s=10, label='Trimmed NGC Catalog')
ax.scatter(sample['ra'], sample['dec'], s=20, label='Objects in DR6 Footprint')
ax.set_xlabel('RA')
ax.set_ylabel('Dec')
ax.legend(loc='lower right')
qa_sample()
Explanation: Generate (find) the sample of objects in the DR6 footprint.
End of explanation
def custom_brickname(obj, prefix='custom-'):
brickname = 'custom-{:06d}{}{:05d}'.format(
int(1000*obj['ra']), 'm' if obj['dec'] < 0 else 'p',
int(1000*np.abs(obj['dec'])))
return brickname
def get_factor(objtype):
    """Scale factors for the mosaics."""
ref = dict(
G = 2,
GCl = 2,
OCl = 2,
PN = 4,
)
return ref[objtype]
def make_coadds_one(obj, scale=PIXSCALE, clobber=False):
name = get_name(obj)
jpgfile = os.path.join(jpgdir, '{}.jpg'.format(name))
if os.path.isfile(jpgfile) and not clobber:
print('File {} exists...skipping.'.format(jpgfile))
else:
factor = get_factor(obj['type'])
diam = factor * ma.getdata(obj['majax']) * 60.0 # diameter [arcsec]
size = np.rint(diam / scale).astype('int') # [pixels]
print('Generating mosaic for {} (type={}) with width={} pixels.'.format(name, obj['type'], size))
with warnings.catch_warnings():
warnings.simplefilter("ignore")
run_brick(None, survey, radec=(obj['ra'], obj['dec']), pixscale=scale,
width=size, height=size, stages=['image_coadds'], splinesky=True,
early_coadds=True, pixPsf=True, hybridPsf=True, normalizePsf=True,
write_pickles=False, depth_cut=False, apodize=True, threads=nproc,
do_calibs=False, ceres=False)
sys.stdout.flush()
brickname = custom_brickname(obj, prefix='custom-')
_jpgfile = os.path.join(survey.output_dir, 'coadd', 'cus', brickname,
'legacysurvey-{}-image.jpg'.format(brickname))
shutil.copy(_jpgfile, jpgfile)
shutil.rmtree(os.path.join(survey.output_dir, 'coadd'))
def make_coadds(sample, clobber=False):
for obj in sample:
make_coadds_one(obj, clobber=clobber)
#make_coadds_one(sample[111], clobber=True)
coaddslogfile = os.path.join(gallerydir, 'make-coadds.log')
print('Generating the coadds.')
print('Logging to {}'.format(coaddslogfile))
t0 = time.time()
with open(coaddslogfile, 'w') as log:
with redirect_stdout(log):
make_coadds(sample, clobber=False)
print('Total time = {:.3f} minutes.'.format((time.time() - t0) / 60))
Explanation: Generate the color mosaics for each object.
End of explanation
barlen = np.round(60.0 / PIXSCALE).astype('int')
fonttype = os.path.join(gallerydir, 'Georgia.ttf')
def _add_labels_one(args):
    """Wrapper function for the multiprocessing."""
return add_labels_one(*args)
def add_labels_one(obj, verbose=False):
name = get_name(obj)
nicename = get_name(obj, nice=True)
jpgfile = os.path.join(jpgdir, '{}.jpg'.format(name))
pngfile = os.path.join(pngdir, '{}.png'.format(name))
thumbfile = os.path.join(pngdir, 'thumb-{}.png'.format(name))
im = Image.open(jpgfile)
sz = im.size
fntsize = np.round(sz[0]/28).astype('int')
width = np.round(sz[0]/175).astype('int')
font = ImageFont.truetype(fonttype, size=fntsize)
draw = ImageDraw.Draw(im)
# Label the object name--
draw.text((0+fntsize*2, 0+fntsize*2), nicename, font=font)
# Add a scale bar--
x0, x1, yy = sz[1]-fntsize*2-barlen, sz[1]-fntsize*2, sz[0]-fntsize*2
draw.line((x0, yy, x1, yy), fill='white', width=width)
im.save(pngfile)
# Generate a thumbnail
cmd = '/usr/bin/convert -thumbnail 300x300 {} {}'.format(pngfile, thumbfile)
os.system(cmd)
def add_labels(sample):
labelargs = list()
for obj in sample:
labelargs.append((obj, False))
if nproc > 1:
p = multiprocessing.Pool(nproc)
res = p.map(_add_labels_one, labelargs)
p.close()
else:
for args in labelargs:
res = _add_labels_one(args)
%time add_labels(sample)
Explanation: Add labels and a scale bar.
End of explanation
def get_type(hubble):
    """Convert Hubble type to numerical type, for sorting purposes."""
numtype = {
'E': 0,
'E-S0': 1,
'S0': 2,
'S0-a': 3,
'Sa': 4,
'SBa': 4,
'SABa': 4,
'Sab': 5,
'SBab': 5,
'Sb': 6,
'SABb': 6,
'SBb': 6,
'Sbc': 7,
'Sc': 8,
'SABc': 8,
'SBc': 8,
'Scd': 9,
'SBcd': 9,
'Sd': 10,
'Sm': 11,
'SBm': 11,
'I': 12,
'IAB': 12,
'IB': 12,
'0': -1
}
return np.array([numtype[hh] for hh in hubble])
reject = ['ngc3587', 'ngc6832', 'ngc5982', 'ngc2832', 'ngc2340', 'ngc5195',
'ngc5308', 'ngc4346', 'ngc4036', 'ngc2681', 'ngc3718', 'ngc5377',
'ngc2146', 'ngc3126', 'ngc2841', 'ngc2683', 'ngc4217', 'ngc4357',
'ngc5055', 'ngc4100', 'ngc5879', 'ngc5297', 'ngc4605', 'ngc6015',
'ngc4144', 'ngc3733', 'ngc3079', 'ngc3198', 'ngc3430', 'ngc3877',
'ngc4062', 'ngc4631', 'ngc4656_ned01', 'ngc4395']
toss = np.zeros(len(sample), dtype=bool)
name = get_name(sample)
for ii, nn in enumerate(name):
for rej in np.atleast_1d(reject):
toss[ii] = rej in nn.lower()
if toss[ii]:
break
print('Rejecting {} objects.'.format(np.sum(toss)))
pngkeep = sample[~toss]
if np.sum(toss) > 0:
pngrej = sample[toss]
else:
pngrej = []
htmlfile = os.path.join(gallerydir, 'index.html')
htmlfile_reject = os.path.join(gallerydir, 'index-reject.html')
baseurl = 'http://legacysurvey.org/viewer-dev'
def html_rows(pngkeep, nperrow=4):
nrow = np.ceil(len(pngkeep) / nperrow).astype('int')
pngsplit = list()
for ii in range(nrow):
i1 = nperrow*ii
i2 = nperrow*(ii+1)
if i2 > len(pngkeep):
i2 = len(pngkeep)
pngsplit.append(pngkeep[i1:i2])
#pngsplit = np.array_split(pngkeep, nrow)
print('Splitting the sample into {} rows with {} mosaics per row.'.format(nrow, nperrow))
html.write('<table class="ls-gallery">\n')
html.write('<tbody>\n')
for pngrow in pngsplit:
html.write('<tr>\n')
for obj in pngrow:
name = get_name(obj)
nicename = get_name(obj, nice=True)
pngfile = os.path.join('png', '{}.png'.format(name))
thumbfile = os.path.join('png', 'thumb-{}.png'.format(name))
img = 'src="{}" alt="{}"'.format(thumbfile, nicename)
#img = 'class="ls-gallery" src="{}" alt="{}"'.format(thumbfile, nicename)
html.write('<td><a href="{}"><img {}></a></td>\n'.format(pngfile, img))
html.write('</tr>\n')
html.write('<tr>\n')
for obj in pngrow:
nicename = get_name(obj, nice=True)
href = '{}/?layer=decals-{}&ra={:.8f}&dec={:.8f}&zoom=12'.format(baseurl, dr, obj['ra'], obj['dec'])
html.write('<td><a href="{}" target="_blank">{}</a></td>\n'.format(href, nicename))
html.write('</tr>\n')
html.write('</tbody>\n')
html.write('</table>\n')
objtype = ma.getdata(pngkeep['type'])
hubbletype = get_type(ma.getdata(pngkeep['hubble']))
with open(htmlfile, 'w') as html:
html.write('<html><head>\n')
html.write('<style type="text/css">\n')
html.write('table.ls-gallery {width: 90%;}\n')
#html.write('img.ls-gallery {display: block;}\n')
#html.write('td.ls-gallery {width: 100%; height: auto}\n')
#html.write('td.ls-gallery {width: 100%; word-wrap: break-word;}\n')
html.write('p.ls-gallery {width: 80%;}\n')
html.write('</style>\n')
html.write('</head><body>\n')
html.write('<h1>DR6 Image Gallery</h1>\n')
html.write("""<p class="ls-gallery">This gallery highlights the exquisite image quality and diversity
of objects observed by the Legacy Survey, including planetary nebulae, globular clusters, and
large, nearby galaxies. Each thumbnail links to a larger image while the object name below each
thumbnail links to the
<a href="http://legacysurvey.org/viewer">Sky Viewer</a>. For reference, the horizontal white bar in
the lower-right corner of each image represents one arcminute.</p>\n""")
html.write("""<p>We gratefully acknowledge the <a href="https://github.com/mattiaverga/OpenNGC" target="_blank">
OpenNGC</a> catalog created by Mattia Verga, which was used to generate this sample.</p>\n""")
html.write("""<p>For more eye candy, please visit the gallery of galaxy groups highlighted in the
<a href="http://portal.nersc.gov/project/cosmo/data/legacysurvey/dr5/gallery/">DR5 Gallery.</a></p>\n""")
# Split by object type
html.write('<h2>Planetary Nebulae, Open Clusters, and Globular Clusters</h2>\n')
these = np.logical_or( np.logical_or(objtype == 'PN', objtype == 'OCl'), objtype == 'GCl' )
srt = np.argsort(objtype[these])[::-1]
html_rows(pngkeep[these][srt])
html.write('<br />\n')
html.write('<h2>Spheroidal & Elliptical Galaxies</h2>\n')
these = (objtype == 'G') * (hubbletype <= 2)
srt = np.argsort(hubbletype[these])
html_rows(pngkeep[these][srt])
html.write('<h2>Early-Type Disk Galaxies</h2>\n')
these = (objtype == 'G') * (hubbletype >= 3) * (hubbletype <= 6)
srt = np.argsort(hubbletype[these])
html_rows(pngkeep[these][srt])
html.write('<h2>Late-Type Disk Galaxies</h2>\n')
these = (objtype == 'G') * (hubbletype >= 7) * (hubbletype <= 10)
srt = np.argsort(hubbletype[these])
html_rows(pngkeep[these][srt])
html.write('<h2>Irregular Galaxies</h2>\n')
these = (objtype == 'G') * (hubbletype >= 11)
srt = np.argsort(hubbletype[these])
html_rows(pngkeep[these][srt])
html.write('</body></html>\n')
if len(pngrej) > 0:
with open(htmlfile_reject, 'w') as html:
html.write('<html><head>\n')
html.write('<style type="text/css">\n')
html.write('img.ls-gallery {display: block;}\n')
html.write('td.ls-gallery {width: 20%; word-wrap: break-word;}\n')
html.write('</style>\n')
html.write('</head><body>\n')
html.write('<h1>DR6 Image Gallery - Rejected</h1>\n')
html_rows(pngrej)
html.write('</body></html>\n')
Explanation: Finally, assemble the webpage of good and rejected gallery images.
To test the webpage before release, do
* rsync -auvP /global/cscratch1/sd/ioannis/dr6/gallery/png /global/project/projectdirs/cosmo/www/temp/ioannis/dr6/gallery/
* rsync -auvP /global/cscratch1/sd/ioannis/dr6/gallery/*.html /global/project/projectdirs/cosmo/www/temp/ioannis/dr6/gallery/
End of explanation |
461 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
K-Nearest Neighbors (KNN)
by Chiyuan Zhang and Sören Sonnenburg
This notebook illustrates the <a href="http
Step1: Let us plot the first five examples of the train data (first row) and test data (second row).
Step2: Then we import shogun components and convert the data to shogun objects
Step3: Let's plot a few misclassified examples - I guess we all agree that these are notably harder to detect.
Step4: Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform Cross-validation, we just use a small trick here that saves us lots of computation time
Step5: We have the prediction for each of the 13 k's now and can quickly compute the accuracies
Step6: So k=3 seems to have been the optimal choice.
Accelerating KNN
Obviously applying KNN is very costly
Step7: So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN
Step8: Evaluate KNN with and without Cover Tree. This takes a few seconds
Step9: Generate plots with the data collected in the evaluation
Step10: Although simple and elegant, KNN is generally very resource costly. Because all the training samples are to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, the prediction will be slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if in addition the data samples are all very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of objects (not just numerical data) - as long as one can design a suitable distance function. In practice k-NN used with bagging can create improved and more robust results.
Comparison to Multiclass Support Vector Machines
In contrast to KNN - multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from one another. They compare examples utilizing similarity measures (so called Kernels) instead of distances like KNN does. When applied, they are in Big-O notation computationally as expensive as KNN but involve another (costly) training step. They do not scale very well to cases with a huge number of classes but usually lead to favorable results when applied to small number of classes cases. So for reference let us compare how a standard multiclass SVM performs wrt. KNN on the USPS data set from above.
Let us first train a multiclass svm using a Gaussian kernel (kind of the SVM equivalent to the euclidean distance).
Step11: Let's apply the SVM to the same test data set to compare results
Step12: Since the SVM performs way better on this task - let's apply it to all data we did not use in training. | Python Code:
import numpy as np
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import loadmat, savemat
from numpy import random
from os import path
mat = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))
Xall = mat['data']
Yall = np.array(mat['label'].squeeze(), dtype=np.double)
# map from 1..10 to 0..9, since shogun
# requires multiclass labels to be
# 0, 1, ..., K-1
Yall = Yall - 1
random.seed(0)
subset = random.permutation(len(Yall))
Xtrain = Xall[:, subset[:5000]]
Ytrain = Yall[subset[:5000]]
Xtest = Xall[:, subset[5000:6000]]
Ytest = Yall[subset[5000:6000]]
Nsplit = 2
all_ks = range(1, 21)
print(Xall.shape)
print(Xtrain.shape)
print(Xtest.shape)
Explanation: K-Nearest Neighbors (KNN)
by Chiyuan Zhang and Sören Sonnenburg
This notebook illustrates the <a href="http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm">K-Nearest Neighbors</a> (KNN) algorithm on the USPS digit recognition dataset in Shogun. Further, the effect of <a href="http://en.wikipedia.org/wiki/Cover_tree">Cover Trees</a> on speed is illustrated by comparing KNN with and without it. Finally, a comparison with <a href="http://en.wikipedia.org/wiki/Support_vector_machine#Multiclass_SVM">Multiclass Support Vector Machines</a> is shown.
The basics
The training of a KNN model basically does nothing but memorizing all the training points and the associated labels, which is very cheap in computation but costly in storage. The prediction is implemented by finding the K nearest neighbors of the query point, and voting. Here K is a hyper-parameter for the algorithm. Smaller values for K give the model low bias but high variance; while larger values for K give low variance but high bias.
In SHOGUN, you can use CKNN to perform KNN learning. To construct a KNN machine, you must choose the hyper-parameter K and a distance function. Usually, we simply use the standard CEuclideanDistance, but in general, any subclass of CDistance could be used. For demonstration, in this tutorial we select a random subset of 1000 samples from the USPS digit recognition dataset, and run 2-fold cross validation of KNN with varying K.
First we load and init data split:
End of explanation
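To make the 'memorize and vote' description above concrete, here is a minimal brute-force sketch in plain NumPy. It is only an illustration of the algorithm, not the Shogun KNN class used in the rest of this notebook, and the function name is made up; the data layout (one example per column) matches the matrices loaded above.
import numpy as np
from collections import Counter
def knn_predict_one(x, Xtrain_cols, Ytrain_labels, k=3):
    # Euclidean distance from the query to every stored training example (one per column).
    dists = np.sqrt(((Xtrain_cols - x[:, None]) ** 2).sum(axis=0))
    # Indices of the k closest training examples.
    nearest = np.argsort(dists)[:k]
    # Majority vote over their labels decides the prediction.
    return Counter(Ytrain_labels[nearest]).most_common(1)[0][0]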
%matplotlib inline
import pylab as P
def plot_example(dat, lab):
for i in range(5):
ax=P.subplot(1,5,i+1)
P.title(int(lab[i]))
ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest')
ax.set_xticks([])
ax.set_yticks([])
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xtrain, Ytrain)
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xtest, Ytest)
Explanation: Let us plot the first five examples of the train data (first row) and test data (second row).
End of explanation
import shogun as sg
from shogun import MulticlassLabels, features
from shogun import KNN
labels = MulticlassLabels(Ytrain)
feats = features(Xtrain)
k=3
dist = sg.distance('EuclideanDistance')
knn = KNN(k, dist, labels)
labels_test = MulticlassLabels(Ytest)
feats_test = features(Xtest)
knn.train(feats)
pred = knn.apply_multiclass(feats_test)
print("Predictions", pred.get_int_labels()[:5])
print("Ground Truth", Ytest[:5])
from shogun import MulticlassAccuracy
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(pred, labels_test)
print("Accuracy = %2.2f%%" % (100*accuracy))
Explanation: Then we import shogun components and convert the data to shogun objects:
End of explanation
idx=np.where(pred != Ytest)[0]
Xbad=Xtest[:,idx]
Ybad=Ytest[idx]
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xbad, Ybad)
Explanation: Let's plot a few misclassified examples - I guess we all agree that these are notably harder to detect.
End of explanation
knn.put('k', 13)
multiple_k=knn.classify_for_multiple_k()
print(multiple_k.shape)
Explanation: Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform Cross-validation, we just use a small trick here that saves us lots of computation time: When we have to determine the $K\geq k$ nearest neighbors we will know the nearest neighbors for all $k=1...K$ and can thus get the predictions for multiple k's in one step:
End of explanation
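The reason this trick works: once a query's neighbours are sorted by distance, the first k of them are its k nearest for every k up to K, so a single sort yields predictions for all k at once. A small NumPy sketch of that bookkeeping, as an illustration only; Shogun's classify_for_multiple_k does the equivalent internally and the function name here is invented.
import numpy as np
def votes_for_all_k(sorted_neighbour_labels, K, n_classes=10):
    # sorted_neighbour_labels: labels of one query's neighbours, nearest first (length >= K).
    counts = np.zeros(n_classes)
    preds = []
    for kk in range(K):
        counts[int(sorted_neighbour_labels[kk])] += 1  # add the (kk+1)-th nearest neighbour's vote
        preds.append(int(np.argmax(counts)))           # prediction when using kk+1 neighbours
    return preds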
for k in range(13):
print("Accuracy for k=%d is %2.2f%%" % (k+1, 100*np.mean(multiple_k[:,k]==Ytest)))
Explanation: We have the prediction for each of the 13 k's now and can quickly compute the accuracies:
End of explanation
from shogun import Time, KNN_COVER_TREE, KNN_BRUTE
start = Time.get_curtime()
knn.put('k', 3)
knn.put('knn_solver', KNN_BRUTE)
pred = knn.apply_multiclass(feats_test)
print("Standard KNN took %2.1fs" % (Time.get_curtime() - start))
start = Time.get_curtime()
knn.put('k', 3)
knn.put('knn_solver', KNN_COVER_TREE)
pred = knn.apply_multiclass(feats_test)
print("Covertree KNN took %2.1fs" % (Time.get_curtime() - start))
Explanation: So k=3 seems to have been the optimal choice.
Accelerating KNN
Obviously applying KNN is very costly: for each prediction you have to compare the object against all training objects. While the implementation in SHOGUN will use all available CPU cores to parallelize this computation it might still be slow when you have big data sets. In SHOGUN, you can use Cover Trees to speed up the nearest neighbor searching process in KNN. Just call set_use_covertree on the KNN machine to enable or disable this feature. We also show the prediction time comparison with and without Cover Tree in this tutorial. So let's just have a comparison utilizing the data above:
End of explanation
def evaluate(labels, feats, use_cover_tree=False):
from shogun import MulticlassAccuracy, CrossValidationSplitting
import time
split = CrossValidationSplitting(labels, Nsplit)
split.build_subsets()
accuracy = np.zeros((Nsplit, len(all_ks)))
acc_train = np.zeros(accuracy.shape)
time_test = np.zeros(accuracy.shape)
for i in range(Nsplit):
idx_train = split.generate_subset_inverse(i)
idx_test = split.generate_subset_indices(i)
for j, k in enumerate(all_ks):
#print "Round %d for k=%d..." % (i, k)
feats.add_subset(idx_train)
labels.add_subset(idx_train)
dist = sg.distance('EuclideanDistance')
dist.init(feats, feats)
knn = KNN(k, dist, labels)
knn.set_store_model_features(True)
if use_cover_tree:
knn.put('knn_solver', KNN_COVER_TREE)
else:
knn.put('knn_solver', KNN_BRUTE)
knn.train()
evaluator = MulticlassAccuracy()
pred = knn.apply_multiclass()
acc_train[i, j] = evaluator.evaluate(pred, labels)
feats.remove_subset()
labels.remove_subset()
feats.add_subset(idx_test)
labels.add_subset(idx_test)
t_start = time.clock()
pred = knn.apply_multiclass(feats)
time_test[i, j] = (time.clock() - t_start) / labels.get_num_labels()
accuracy[i, j] = evaluator.evaluate(pred, labels)
feats.remove_subset()
labels.remove_subset()
return {'eout': accuracy, 'ein': acc_train, 'time': time_test}
Explanation: So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN:
End of explanation
labels = MulticlassLabels(Ytest)
feats = features(Xtest)
print("Evaluating KNN...")
wo_ct = evaluate(labels, feats, use_cover_tree=False)
wi_ct = evaluate(labels, feats, use_cover_tree=True)
print("Done!")
Explanation: Evaluate KNN with and without Cover Tree. This takes a few seconds:
End of explanation
import matplotlib
fig = P.figure(figsize=(8,5))
P.plot(all_ks, wo_ct['eout'].mean(axis=0), 'r-*')
P.plot(all_ks, wo_ct['ein'].mean(axis=0), 'r--*')
P.legend(["Test Accuracy", "Training Accuracy"])
P.xlabel('K')
P.ylabel('Accuracy')
P.title('KNN Accuracy')
P.tight_layout()
fig = P.figure(figsize=(8,5))
P.plot(all_ks, wo_ct['time'].mean(axis=0), 'r-*')
P.plot(all_ks, wi_ct['time'].mean(axis=0), 'b-d')
P.xlabel("K")
P.ylabel("time")
P.title('KNN time')
P.legend(["Plain KNN", "CoverTree KNN"], loc='center right')
P.tight_layout()
Explanation: Generate plots with the data collected in the evaluation:
End of explanation
from shogun import GMNPSVM
width=80
C=1
gk=sg.kernel("GaussianKernel", log_width=np.log(width))
svm=GMNPSVM(C, gk, labels)
_=svm.train(feats)
Explanation: Although simple and elegant, KNN is generally very resource costly. Because all the training samples are to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, the prediction will be slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if in addition the data samples are all very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of objects (not just numerical data) - as long as one can design a suitable distance function. In practice k-NN used with bagging can create improved and more robust results.
Comparison to Multiclass Support Vector Machines
In contrast to KNN - multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from one another. They compare examples utilizing similarity measures (so called Kernels) instead of distances like KNN does. When applied, they are in Big-O notation computationally as expensive as KNN but involve another (costly) training step. They do not scale very well to cases with a huge number of classes but usually lead to favorable results when applied to small number of classes cases. So for reference let us compare how a standard multiclass SVM performs wrt. KNN on the USPS data set from above.
Let us first train a multiclass svm using a Gaussian kernel (kind of the SVM equivalent to the euclidean distance).
End of explanation
out=svm.apply(feats_test)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_test)
print("Accuracy = %2.2f%%" % (100*accuracy))
Explanation: Let's apply the SVM to the same test data set to compare results:
End of explanation
Xrem=Xall[:,subset[6000:]]
Yrem=Yall[subset[6000:]]
feats_rem=features(Xrem)
labels_rem=MulticlassLabels(Yrem)
out=svm.apply(feats_rem)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(out.get_labels() != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xbad, Ybad)
Explanation: Since the SVM performs way better on this task - let's apply it to all data we did not use in training.
End of explanation |
462 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MLPaint
Step1: The notebook cells below use pymldb's Connection class to make REST API calls. You can check out the Using pymldb Tutorial for more details.
Step2: ... and other Python libraries
Step3: Loading the data
A pickled version of the dataset is available on the deeplearning.net website.
The dataset has been unpickled and saved in a public Amazon's S3 cloud storage. Check out MLDB's Protocol Handlers for Files and URLS.
Step5: In the original MNIST datasets, the features and labels were in two seperate datasets. To make it easier, we joined the features with the labels. Column 785 is the labels column which was renamed accordingly 'label'.
Let's explore the data
See the Query API documentation for more details on SQL queries.
Step7: Each example is a row made up of 784 pixels or features. By reshaping the 1D data into a 2D representation, we can visualize the data a little better. At each refresh, we get randomly selected rows for each label using the sample function in a SQL From Expression.
Step9: Training a classifier
We will create a Procedure of type classifier.experiment to train and test our model. The configuration parameter defines a Random Forest algorithm. The model may take some time to train...
Step10: We are now going to construct the confusion matrix from results on the test set using the pivot aggregate function. You can learn more about confusion matrices here.
Step11: The model seems to be doing a pretty good job at classfication as seen with the confusion matrix above. In a small percentage of cases, the model seems to think that a '4' is a '9' and that a '3' is a '2'. The two sets of digits are similar in the concentration of pixels, so this makes some sense.
How does the model make its predictions?
The 'explain' function provides each pixel's weight on the final outcome.
Let's create a function of type classifier.explain to help us understand what's happening here.
Step14: We automatically get a REST API to test our model with individual digits
The procedure above created for us a Function of type classifier. We will be using two functions
Step15: In the representation of the explain matrix (figure 3), the green pixels help increase the score while the red pixels help decrease the score of the chosen digit. The explain matrix will tell us something about the pixels deemed most important. For example, if nothing was drawn in the top left corner of the picture during training, no information will be provided on the top left set of pixels in the explain matrix. During training, if a pixel is not part of the classification rules (i.e. not on any leaf), that pixel will not show up in the explain matrix.
Making a simple web app using MLDB plug-in functionality
We've built a very fun web app called MLPaint that uses everything we've shown here to do real-time recognition of digits. The app is shown in the Youtube video at the top of the notebook.
The app is built with an MLDB plugin; plugins allow us to extend functionality that we have seen so far. For more information, check out the documentation on plugins.
By running the cell below, the plugin is checked out from Github and loaded into MLDB | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo("WGdLCXDiDSo")
Explanation: MLPaint: Real-Time Handwritten Digits Recognizer
The automatic recognition of handwritten digits is now a well understood and studied Machine Vision and Machine Learning problem. We will be using MNIST (check out Wikipedia's page on MNIST) to train our models. From the description on Yann LeCun's MNIST database of handwriten digits:
The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting.
To learn more, you can also check out Kaggle's Digit Recognizer page.
In this demo we will use MLDB's classifier functions and REST API to create a plug-in to predict the value of handwritten digits. We will also use MLDB's explain functions to visually represent the predictive "value or importance" of each pixel in our final predictions.
Check out the video below for a demo of what we'll be creating:
End of explanation
from pymldb import Connection
mldb = Connection()
Explanation: The notebook cells below use pymldb's Connection class to make REST API calls. You can check out the Using pymldb Tutorial for more details.
End of explanation
import random
import math
import numpy as np
from pandas import DataFrame
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import display
from ipywidgets import widgets
Explanation: ... and other Python libraries:
End of explanation
data_url_mnist = 'file://mldb/mldb_test_data/digits_data.csv.gz'
print mldb.put('/v1/procedures/import_digits_mnist', {
"type":"import.text",
"params": {
"dataFileUrl": data_url_mnist,
"outputDataset": "digits_mnist",
"select": "{* EXCLUDING(\"785\")} AS *, \"785\" AS label",
"runOnCreation": True,
}
})
Explanation: Loading the data
A pickled version of the dataset is available on the deeplearning.net website.
The dataset has been unpickled and saved in a public Amazon's S3 cloud storage. Check out MLDB's Protocol Handlers for Files and URLS.
End of explanation
data_stats = mldb.query("""
    SELECT
        avg(horizontal_count({* EXCLUDING(label)})) as NoOfFeatures,
        count(label) AS TestExamples
    FROM digits_mnist
""")
print data_stats
Explanation: In the original MNIST datasets, the features and labels were in two separate datasets. To make it easier, we joined the features with the labels. Column 785 is the labels column which was renamed accordingly 'label'.
Let's explore the data
See the Query API documentation for more details on SQL queries.
End of explanation
labels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
x_array1 = []
x_array2 = []
x_data = []
sq = int(np.sqrt(data_stats['NoOfFeatures'][0]))
for label in labels:
    x_data = mldb.query("""
        SELECT * EXCLUDING(label)
        FROM sample(
            (select * FROM digits_mnist WHERE label = %d AND rowHash() %% 5 = 0),
            {rows: 1}
        )
    """ % label)
if label < 5:
x_array1.extend(x_data.as_matrix().reshape(sq, sq))
if label >= 5:
x_array2.extend(x_data.as_matrix().reshape(sq, sq))
f, (fig1, fig2) = plt.subplots(1, 2, sharey=True)
plt.gray()
fig1.matshow(x_array1)
fig2.matshow(x_array2)
plt.show()
Explanation: Each example is a row made up of 784 pixels or features. By reshaping the 1D data into a 2D representation, we can visualize the data a little better. At each refresh, we get randomly selected rows for each label using the sample function in a SQL From Expression.
End of explanation
conf_algo = {
"bbdt_d5": {
"type": "bagging",
"verbosity": 3,
"weak_learner": {
"type": "boosting",
"verbosity": 3,
"weak_learner": {
"type": "decision_tree",
"max_depth": 7,
"verbosity": 0,
"update_alg": "gentle",
"random_feature_propn": 0.3
},
"min_iter": 5,
"max_iter": 30
},
"num_bags": 5
}
}
conf_class = {
"type": "classifier.experiment",
"params": {
"experimentName": "mnist_model",
"mode": "categorical",
"inputData" :
SELECT
{* EXCLUDING(label*)} AS features,
label AS label
FROM digits_mnist
,
"datasetFolds": [
{
"trainingWhere": "rowHash() % 5 != 0", #80% of total data
"testingWhere": "rowHash() % 5 = 0" #20% of total data
}
],
"algorithm": "bbdt_d5",
"configuration": conf_algo,
"modelFileUrlPattern": "file://models/mnist_model.cls",
"keepArtifacts": True,
"outputAccuracyDataset": True,
"runOnCreation": True,
"evalTrain": True
}
}
results = mldb.put("/v1/procedures/mnist_model", conf_class)
accuracy = results.json()['status']['firstRun']['status']['aggregatedTest']['weightedStatistics']['accuracy']['mean']
print "\nModel classification accuracy on test set = %0.4f\n" % accuracy
Explanation: Training a classifier
We will create a Procedure of type classifier.experiment to train and test our model. The configuration parameter defines a bagged ensemble of boosted decision trees. The model may take some time to train...
End of explanation
confusionMatrix = results.json()['status']['firstRun']['status']['folds'][0]['resultsTest']['confusionMatrix']
confusionMatrix = pd.DataFrame(confusionMatrix).pivot_table(index="predicted", columns="actual")
df = np.log(confusionMatrix)
df = df.fillna(0)
fig = plt.figure(figsize=(8, 8))
plt.imshow(df, interpolation='nearest', cmap=plt.cm.jet)
plt.yticks(np.arange(0, 10, 1), fontsize=14)
plt.ylabel("Predicted", fontsize=16)
plt.xticks(np.arange(0, 10, 1), fontsize=14)
plt.xlabel("Actual", fontsize=16)
Explanation: We are now going to construct the confusion matrix from results on the test set using the pivot aggregate function. You can learn more about confusion matrices here.
End of explanation
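For intuition, the same kind of predicted-vs-actual table can be built in plain pandas with crosstab; this is a toy illustration with made-up labels, not MLDB's pivot output:
# Hypothetical labels, just to show the layout of a confusion matrix
toy_actual = pd.Series([3, 5, 3, 8, 5], name="actual")
toy_predicted = pd.Series([3, 5, 2, 8, 5], name="predicted")
toy_confusion = pd.crosstab(toy_predicted, toy_actual)
print(toy_confusion)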
print mldb.put('/v1/functions/mnist_explainer', {
"id": "mnist_explainer",
"type": "classifier.explain",
"params": { "modelFileUrl": "file://models/mnist_model.cls" }
})
Explanation: The model seems to be doing a pretty good job at classification, as seen in the confusion matrix above. In a small percentage of cases, the model seems to think that a '4' is a '9' and that a '3' is a '2'. The two sets of digits are similar in the concentration of pixels, so this makes some sense.
How does the model make its predictions?
The 'explain' function provides each pixel's weight on the final outcome.
Let's create a function of type classifier.explain to help us understand what's happening here.
End of explanation
def rgb_explain(x):
scale = 5
explain = [[0, 0, 0]] * data_stats['NoOfFeatures'][0] # [R,G,B] for color image
number_explain = len(x)
for j, col in enumerate(x.columns.values):
try:
index = int(col)
val = x.values[0][j] * scale
if (val >= 0.2):
explain[index] = [0, val, 0] # make it green
elif (val <= -0.2):
explain[index] = [- val, 0, 0] # make it red
except:
pass
return np.array(explain).reshape(sq, sq, 3)
@widgets.interact
def test_img_plot(digit = [0,9], other_example=[0,1000]):
data = mldb.query(
SELECT
* EXCLUDING(label),
mnist_model_scorer_0({ features: {* EXCLUDING(label*)} })[scores] AS score
FROM digits_mnist
WHERE label = %(digit)d AND rowHash() %% 5 = 0
LIMIT 1
OFFSET %(offset)d
% {"digit": digit, "offset": other_example})
data_array = data.as_matrix()
rand_test_img = data_array[0][:-10].reshape(sq, sq)
scores = data_array[0][-10:]
explain_data = mldb.query(
SELECT
mnist_explainer({
label: %(digit)d,
features: {* EXCLUDING(label)}
})[explanation] AS *
FROM digits_mnist
WHERE label = %(digit)d AND rowHash() %% 5 = 0
LIMIT 1
OFFSET %(offset)d
% {"digit": digit, "offset": other_example})
explain_img = rgb_explain(explain_data)
fig = plt.figure(figsize=(8, 8))
# plot digit image
ax1 = plt.subplot2grid((4, 4), (0, 0), colspan=2, rowspan = 2)
ax1.imshow(rand_test_img)
ax1.set_title("Fig1: You chose the digit below", fontsize=12, fontweight='bold')
# plot explain matrix
ax2 = plt.subplot2grid((4, 4), (0, 2), colspan=2, rowspan = 2)
ax2.imshow(explain_img)
ax2.set_title("Fig2: Explain matrix picture of digit %d" %digit, fontsize=12, fontweight='bold')
# plot scores
ax3 = plt.subplot2grid((4, 4), (2, 0), colspan=4, rowspan = 2)
greater_than_zero = scores >= 0
lesser_than_zero = scores < 0
ax3.barh(np.arange(len(scores))[greater_than_zero]-0.5, scores[greater_than_zero], color='#87CEFA', height=1)
ax3.barh(np.arange(len(scores))[lesser_than_zero]-0.5, scores[lesser_than_zero], color='#E6E6FA', height=1)
ax3.grid()
ax3.yaxis.set_ticks(np.arange(0, 10, 1))
ax3.yaxis.set_ticks_position('right')
ax3.set_title("Fig3: Scores for each number - the number with the highest score wins!", fontsize=12, fontweight='bold')
ax3.set_ylabel("Digits")
ax3.yaxis.set_label_position('right')
ax3.set_xlabel("Scores")
plt.tight_layout()
plt.show()
Explanation: We automatically get a REST API to test our model with individual digits
The procedure above created for us a Function of type classifier. We will be using two functions:
* The scorer function: Scores aren't probabilities, but they can be used to create binary classifiers by applying a cutoff threshold. MLDB's classifier.experiment procedure that we have seen previously outputs a score for each digit (even the wrong ones).
* The explain function: The explain function shows how each pixel and its value (black or white) of an image contributes to the model's prediction. We colored such pixels in green for positive contributions and red for negative contributions in Figure 2 below. In essence, pixels flagged in red in the explain figure should be changed to get a better score. For example, a white-colored pixel that was seen frequently for digit '5' in the train set will be flagged green if it is seen for digit '5' in the test set. If the same pixel is actually of a different color for digit '5' in the test set, then the pixel will be flagged red.
Note that the digits are from the test set - we used 80% of the data for training and 20% for testing. You can also get the same digit written differently by using the offset bar. We are using a SQL offset - we are calling it 'other_example' in the code below. The offset specifies the number of rows to skip before returning values from the query expression.
End of explanation
mldb.put("/v1/plugins/mlpaint", {
"type": "python",
"params": {
"address": "git://github.com/mldbai/mlpaint"
}
})
Explanation: In the representation of the explain matrix (figure 2), the green pixels help increase the score while the red pixels help decrease the score of the chosen digit. The explain matrix will tell us something about the pixels deemed most important. For example, if nothing was drawn in the top left corner of the picture during training, no information will be provided on the top left set of pixels in the explain matrix. During training, if a pixel is not part of the classification rules (i.e. not on any leaf), that pixel will not show up in the explain matrix.
Making a simple web app using MLDB plug-in functionality
We've built a very fun web app called MLPaint that uses everything we've shown here to do real-time recognition of digits. The app is shown in the Youtube video at the top of the notebook.
The app is built with an MLDB plugin; plugins allow us to extend functionality that we have seen so far. For more information, check out the documentation on plugins.
By running the cell below, the plugin is checked out from GitHub and loaded into MLDB:
End of explanation |
463 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Local Group Halo Properties
Step1: Inside the Likelihood object is a "triplet" object called T, which contains an array of sample local groups, each with kinematic parameters consistent with the observational data. Let's plot these kinematic parameters in a "triangle" figure, to show all their 1 and 2-D marginal distributions.
Step2: The above plot shows the Gaussians fitted by a Gaussian Mixture model. The shaded regions show two standard deviations. The sample data has been preprocessed to zero mean and unit standard deviation. Since we are using the Gaussian Mixture Model to model the underlying PDF of the data, more components are always better.
How to evaluate goodness of fit
Step3: The above plot shows that the points drawn from the model create a population that is very similar to the true data.
Step4: Reading Simulation Points
Step5: 1D posterior work | Python Code:
%matplotlib inline
import localgroup
import triangle
import sklearn
from sklearn import mixture
import numpy as np
import pickle
import matplotlib.patches as mpatches
Explanation: Local Group Halo Properties: Demo Inference
We approximate the local group distance, radial velocity and proper motion likelihood function by sampling from the posterior distributions for these variables reported in the literature and transforming to kinematic variables in the M31-centric coordinate system.
End of explanation
L = localgroup.Likelihood(isPair=True)
L.generate(Nsamples=200000)
L.set_PDF(mixture.GMM(n_components=10, covariance_type='full'))
L.approximate()
figure_obs = L.plot_samples(10, color='b', overlay=False)
Explanation: Inside the Likelihood object is a "triplet" object called T, which contains an array of sample local groups, each with kinematic parameters consistent with the observational data. Let's plot these kinematic parameters in a "triangle" figure, to show all their 1 and 2-D marginal distributions.
End of explanation
figure_model = L.model_gof(L.T.Nsamples, color="r", fig=None)
L.model_gof(L.T.Nsamples, color="r", fig=figure_obs)
red_patch = mpatches.Patch(color='red')
blue_patch = mpatches.Patch(color='blue')
figure_obs.legend(handles=[red_patch, blue_patch], labels=["Model Generated", "Observation Generated"])
figure_obs
Explanation: The above plot shows the Gaussians fitted by a Gaussian Mixture model. The shaded regions show two standard deviations. The sample data has been preprocessed to zero mean and unit standard deviation. Since we are using the Gaussian Mixture Model to model the underlying PDF of the data, more components are always better.
How to evaluate goodness of fit:
Due to the lack of a standard goodness-of-fit test for GMMs, the best we can do is graphically show that the model reproduces the data well. We proceed by drawing a set of points from the fitted model, where each point is a local group with (MW_D, MW_vr, MW_vt, M33_D, M33_vr, M33_vt). We then plot the 1D and 2D marginalizations of the drawn point set and show that they match the marginalizations of the true data.
End of explanation
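A minimal sketch of the sampling step on stand-in data (the notebook's real samples live in L.T; the GMM class and its sample call follow the older scikit-learn API already used above):
# Hypothetical 6-D data standing in for the (MW_D, MW_vr, MW_vt, M33_D, M33_vr, M33_vt) samples
toy = np.random.randn(1000, 6)
toy_gmm = mixture.GMM(n_components=10, covariance_type='full')
toy_gmm.fit(toy)
drawn = toy_gmm.sample(1000)   # points drawn from the fitted model, to compare marginals against `toy`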
#name = "likelihood_fit.png"
#figure_obs.savefig("/afs/slac.stanford.edu/u/ki/mwillia1/Thesis/LocalGroupHaloProps/doc/thesis/plots/final_plots/Pair_LGMM_GOF.pdf", dpi=1200)
Explanation: The above plot shows that the points drawn from the model create a population that is very similar to the true data.
End of explanation
path = '/lustre/ki/pfs/mwillia1/LG_project/Consuelo_Boxes/4001/quad_dat_M31_larger_bgc2.npy'
#path = '/afs/slac.stanford.edu/u/ki/mwillia1/Thesis/data_files/MW_M31_pairs.txt'
npoints = 5000
halo_props = ['MW_Mvir', 'M31_Mvir', 'M33_Mvir']
Tr = localgroup.Triplet(isPair=True)
Tr.read_sim_points(path, npoints, halo_props, h=0.7, a=1.0, npy=True)
Tr.transform_to_M31(sim=True)
#Tr.mass_filter('sim')
Tr.dist_filter((Tr.sim_samples[:,0] < 1))
Tr.preprocess(L.samples_means, L.samples_stds, mode='sim')
sim_plot = Tr.plot_kinematics('sim', L.samples_means, L.samples_stds, color='c', fig=None)
Tr.plot_kinematics('sim', L.samples_means, L.samples_stds, color='g', fig=figure_model)
red_patch = mpatches.Patch(color='red')
green_patch = mpatches.Patch(color='green')
figure_model.legend(handles=[red_patch, green_patch], labels=["Model Generated", "Consuelo Triplets"])
figure_model
#name = 'trips_000_054_like_prior.png'
#sim_plot.savefig('/afs/slac.stanford.edu/u/ki/mwillia1/Thesis/LocalGroupHaloProps/doc/thesis/plots/asurps/'+name)
Tr.compute_model_weights(L, "sim")
N95 = Tr.calculate_N95()
N95
MW_C200 = Tr.calculate_cvir_to_c200(Tr.MW.Cvir)
M31_C200 = Tr.calculate_cvir_to_c200(Tr.M31.Cvir)
#M33_C200 = Tr.calculate_cvir_to_c200(Tr.M33.Cvir)
MW_M200 = Tr.calculate_Mvir_to_M200(Tr.MW.Mvir, Tr.MW.Cvir, MW_C200)
M31_M200 = Tr.calculate_Mvir_to_M200(Tr.M31.Mvir, Tr.M31.Cvir, M31_C200)
#M33_M200 = Tr.calculate_Mvir_to_M200(Tr.M33.Mvir, Tr.M33.Cvir, M33_C200)
total_200 = M31_M200+MW_M200
total = Tr.MW.Mvir + Tr.M31.Mvir
all_mvir = np.log10(np.transpose(np.vstack((Tr.MW.Mvir, Tr.M31.Mvir, total))))
labs = ["$M_{MW}$", "$M_{M31}$", "$M_{LG}$"]
#total = MW_M200 + M31_M200
#all_mvir = np.log10(np.transpose(np.vstack((MW_M200, M31_M200, total))))
#labs = ["MW Mvir", "M31 Mvir", "MW+M31"]
all_mvir = np.transpose(np.vstack((Tr.MW.Cvir, Tr.M31.Cvir)))
labs = ["$Cvir_{MW}$", "$Cvir_{M31}$"]
all_mvir.shape
figure = triangle.corner(all_mvir, labels=labs, quantiles=[0.16,0.5,0.84], fig=None, weights=np.transpose(Tr.weights),\
plot_contours=True, show_titles=True, title_args={"fontsize": 16}, label_args={"fontsize": 16},\
plot_datapoints=False, bins=20, color='k')
h = np.load('/lustre/ki/pfs/mwillia1/LG_project/Consuelo_Boxes/4002/4002hlist.npy')
mw_h = h[np.abs(np.log10(h['mvir'])-11.7) < .1]
m31_h = h[np.abs(np.log10(h['mvir'])-12) < .1]
m33_h = h[np.abs(np.log10(h['mvir'])-10.8) < .1]
mwc = mw_h['rvir']/mw_h['rs']
m31c = m31_h['rvir']/m31_h['rs']
m33c = m33_h['rvir']/m33_h['rs']
mwc.shape, m31c.shape, m33c.shape
mwc = mwc[0:100000]
m31c = m31c[0:100000]
m33c = m33c[0:100000]
all_mvir = np.transpose(np.vstack((mwc, m31c)))
labs = ["$Cvir_{MW}$", "$Cvir_{M31}$", "$Cvir_{M33}$"]
figure_c = triangle.corner(all_mvir, labels=labs, quantiles=[0.,0.5,0.84], fig=figure, weights=None,\
plot_contours=True, show_titles=True, title_args={"fontsize": 16}, label_args={"fontsize": 16},\
plot_datapoints=False, bins=20, color='k')
figure2 = triangle.corner(all_mvir, labels=labs, quantiles=[0.16,0.5,0.84], fig=figure, weights=Tr.weights,\
plot_contours=True, show_titles=False, title_args={"fontsize": 12}, \
plot_datapoints=False, bins=20, color='r')
figure
red_patch = mpatches.Patch(color='red')
green_patch = mpatches.Patch(color='k')
figure.legend(handles=[red_patch, green_patch], labels=["Box 4001 bgc2", "Box 4001 hlist"])
figure
Explanation: Reading Simulation Points:
Below we read the preconfigured files containing the Consuelo (soon to be Dark Sky) Local Group analogs into a Triplet object. We plot the marginalizations of the simulation data, which allows us to compare with the LG prior.
End of explanation
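The manual binning done below can also be expressed with NumPy's weighted histogram; here is a sketch on stand-in arrays (not the real Tr data):
toy_mass = np.random.lognormal(mean=27.6, sigma=0.3, size=1000)   # stand-in for halo masses
toy_weights = np.random.rand(1000)                                # stand-in for Tr.weights
toy_bins = np.logspace(np.log10(toy_mass.min()), np.log10(toy_mass.max()), 20)
toy_hist, toy_edges = np.histogram(toy_mass, bins=toy_bins, weights=toy_weights)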
Tr.weights.shape
xdata = total
plot(xdata, Tr.weights)
xscale('log')
bins = 20
xbins = np.logspace(np.log10(xdata.min()), np.log10(xdata.max()), bins)
ybins = [np.sum(Tr.weights[(xdata[:]>xbins[i]) & (xdata[:]<xbins[i+1])]) for i in range(bins-1)]
print len(xbins)
print len(ybins)
print len(xbins[:bins-1])
plot(xbins[:bins-1], ybins)
xscale('log')
xtrip = np.copy(xbins[:bins-1])
ytrip = np.copy(ybins)
xtripmw = np.copy(xbins[:bins-1])
ytripmw = np.copy(ybins)
xpairmw = np.copy(xbins[:bins-1])
ypairmw = np.copy(ybins)
xpair = np.copy(xbins[:bins-1])
ypair = np.copy(ybins)
plot(xtripmw, ytripmw, color='r')
plot(xpairmw, ypairmw, color='b')
ylabel('P(MMW)')
xlabel('MMW')
red_patch = mpatches.Patch(color='red')
blue_patch = mpatches.Patch(color='blue')
legend(["Trips, M33 dynamics", "Pairs, M33 existence"])
xscale('log')
#savefig("/afs/slac.stanford.edu/u/ki/mwillia1/Thesis/LocalGroupHaloProps/doc/thesis/plots/pdf_mw_plot.png")
Explanation: 1D posterior work
End of explanation |
464 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GARD data loading with WDI
Example of how to use wikidata integrator to add synonyms from GARD. GARD data has already PARTIALLY been loaded into Wikidata via Mix N Match. It is important not to overwrite Mix N Match results
Import all necessary modules and data
Step1: login section
Step2: Pull all disease entities from GARD
Step4: Although we can easily pull the synonyms from this dataframe and upload them to Wikidata, we only have permission to upload data specifically generated by GARD. Hence we will need to visit each disease's page in GARD to check the source of the synonyms. While we're at it, we can also pull alternate identifiers (which will NOT be loaded to Wikidata), but can be used for mapping. Since the Mix N Match community has already done a lot of GARD ID mapping, we will only need these alternative identifiers for items which don't yet have GARD IDs mapped.
Step5: Import any data that was exported
Step6: Pull all WD entities with GARD IDs
Step7: Identify GARD diseases not yet in Wikidata
Currently, there is no bot to add GARD ID to Wikidata entities, so the GARD IDs in Wikidata were added via Mix N Match. Identify the GARD diseases not yet in Wikidata, and determine if they can be mapped using one of the other identifiers available via GARD (eg- Orphanet)
Step8: Pull disease lists based on identifiers so that multiple merges can be used to determine best fit
Step9: Add the appropriate Wikidata statements
After removing items which have issues with no alternative identifier by which the GARD entry can be mapped, gard entries that map to multiple Wikidata entities, and multiple gard entries that map to a single wikidata entity based entirely on the other identifiers for that entry provided by GARD, we're left with entries we can add and suggest. Entries which map to a single WDID based on MULTIPLE Identifier mappings can be scripted. Entities which map to a single WDID based on a single Identifier, would probably be best sent to Mix N Match to avoid complaints further down the line.
Step10: Identify synonyms in need of inclusion
Pull all the entities mapped via GARD and their corresponding English Aliases. Determine if a synonym is missing from the Alias list and if so, include it.
Pull all labels and aliases from WD entities with GARD IDs
Step11: Compare GARD synonyms with Wikidata aliases and labels
Step12: Write the GARD aliases to Wikidata
Since aliases don't allow for sourcing/referencing, include an edit comment that an alias from GARD was added. | Python Code:
from wikidataintegrator import wdi_core, wdi_login
from wikidataintegrator.ref_handlers import update_retrieved_if_new_multiple_refs
import pandas as pd
from pandas import read_csv
import requests
from tqdm.notebook import trange, tqdm
import ipywidgets
import widgetsnbextension
import time
Explanation: GARD data loading with WDI
Example of how to use wikidata integrator to add synonyms from GARD. GARD data has already PARTIALLY been loaded into Wikidata via Mix N Match. It is important not to overwrite Mix N Match results
Import all necessary modules and data
End of explanation
print("retrieving API credentials")
import wdi_user_config
api_dict = wdi_user_config.get_gard_credentials()
header_info = {api_dict['name']: api_dict['value']}
print("Logging in...")
import wdi_user_config ## Credentials stored in a wdi_user_config file
login_dict = wdi_user_config.get_credentials()
login = wdi_login.WDLogin(login_dict['WDUSER'], login_dict['WDPASS'])
Explanation: login section
End of explanation
gard_results = requests.get('https://api.rarediseases.info.nih.gov/api/diseases',
headers=header_info)
print(gard_results)
gard_df = pd.read_json(gard_results.text)
print(gard_df.head(n=2))
Explanation: Pull all disease entities from GARD
End of explanation
## The resulting json file has a key "mainPropery" which is where our desired data is stored
## Since it looks like a misspelling, we'll store that key in a variable so it's easy to update
## if the key is ever corrected upstream
key_of_interest = "mainPropery"
## Unit test: Request and parse a sample page
i=1
fail_list = []
sample_result = requests.get('https://api.rarediseases.info.nih.gov/api/diseases/'+str(gard_df.iloc[i]['diseaseId']),
headers=header_info)
json_result = sample_result.json()
data_of_interest = json_result.get(key_of_interest)
## Check if there are synonyms that don't have a source (ie- are by GARD)
sourced_syn = data_of_interest.get('synonyms-with-source')
identifier_results = data_of_interest.get('identifiers')
tmpdict = pd.DataFrame(sourced_syn).fillna("GARD")
tmpdict['diseaseId'] = gard_df.iloc[i]['diseaseId']
print(tmpdict)
## Check if there are identifiers that can be used for xrefs
identifier_dict = pd.DataFrame(identifier_results).fillna("None")
print(identifier_dict)
gard_id_list = gard_df['diseaseId'].unique().tolist()
#gard_id_list = [13018,5658,10095] ## Iteration test
fail_list = []
no_syns = []
no_idens = []
identifier_df = pd.DataFrame(columns=['diseaseId','identifierId','identifierType'])
synonyms_df = pd.DataFrame(columns=['diseaseId','name','source'])
for i in tqdm(range(len(gard_id_list))):
try:
sample_result = requests.get('https://api.rarediseases.info.nih.gov/api/diseases/'+str(gard_df.iloc[i]['diseaseId']),
headers=header_info)
json_result = sample_result.json()
data_of_interest = json_result.get(key_of_interest)
## Check if there are synonyms that don't have a source (ie- are by GARD)
sourced_syn = data_of_interest.get('synonyms-with-source')
tmpdict = pd.DataFrame(sourced_syn).fillna("GARD")
tmpdict['diseaseId'] = gard_df.iloc[i]['diseaseId']
if len(tmpdict) == 0:
no_syns.append(gard_df.iloc[i]['diseaseId'])
else:
synonyms_df = pd.concat((synonyms_df,tmpdict),ignore_index=True)
## Check if there are identifiers that can be used for xrefs
identifier_results = data_of_interest.get('identifiers')
identifier_dict = pd.DataFrame(identifier_results).fillna("None")
identifier_dict['diseaseId'] = gard_df.iloc[i]['diseaseId']
if len(identifier_dict) == 0:
no_idens.append(gard_df.iloc[i]['diseaseId'])
else:
identifier_df = pd.concat((identifier_df,identifier_dict),ignore_index=True)
except:
fail_list.append(gard_df.iloc[i]['diseaseId'])
print("Identifiers found: ", len(identifier_df))
print("Synonyms found: ", len(synonyms_df))
print("Requests failed: ",len(fail_list))
print("GARD IDs with no synonyms: ", len(no_syns))
print("GARD IDs with no xrefs: ", len(no_idens))
## Export results to avoid having to hit the API again
identifier_df.to_csv('data/identifier_df.tsv',sep='\t',header=True)
synonyms_df.to_csv('data/synonyms_df.tsv',sep='\t',header=True)
with open('data/no_syns.txt','w') as outwrite:
for eachentry in no_syns:
outwrite.write(str(eachentry)+'\n')
with open('data/no_idens.txt','w') as idenwrite:
for eachiden in no_idens:
idenwrite.write(str(eachiden)+'\n')
print(identifier_df)
Explanation: Although we can easily pull the synonyms from this dataframe and upload them to Wikidata, we only have permission to upload data specifically generated by GARD. Hence we will need to visit each disease's page in GARD to check the source of the synonyms. While we're at it, we can also pull alternate identifiers (which will NOT be loaded to Wikidata), but can be used for mapping. Since the Mix N Match community has already done a lot of GARD ID mapping, we will only need these alternative identifiers for items which don't yet have GARD IDs mapped.
End of explanation
identifier_df = read_csv('data/identifier_df.tsv',delimiter='\t',header=0,index_col=0)
synonyms_df = read_csv('data/synonyms_df.tsv',delimiter='\t',header=0,index_col=0, encoding='latin-1')
no_syns=[]
with open('data/no_syns.txt','r') as syn_read:
for line in syn_read:
no_syns.append(line.strip('\n'))
no_idens=[]
with open('data/no_idens.txt','r') as iden_read:
for line in no_idens:
no_idens.append(line.strip('\n'))
Explanation: Import any data that was exported
End of explanation
# Retrieve all QIDs with GARD IDs
sparqlQuery = "SELECT * WHERE {?item wdt:P4317 ?GARD}"
result = wdi_core.WDItemEngine.execute_sparql_query(sparqlQuery)
gard_in_wd_list = []
for i in tqdm(range(len(result["results"]["bindings"]))):
gard_id = result["results"]["bindings"][i]["GARD"]["value"]
wdid = result["results"]["bindings"][i]["item"]["value"].replace("http://www.wikidata.org/entity/", "")
    gard_in_wd_list.append({'WDID':wdid,'diseaseId':int(gard_id)})  # cast to int so it compares correctly with identifier_df's integer diseaseId
gard_in_wd = pd.DataFrame(gard_in_wd_list)
print(gard_in_wd.head(n=3))
Explanation: Pull all WD entities with GARD IDs
End of explanation
gard_in_wd_id_list = gard_in_wd['diseaseId'].unique().tolist()
gard_not_in_wd = identifier_df.loc[~identifier_df['diseaseId'].isin(gard_in_wd_id_list)]
print(len(gard_not_in_wd))
print(len(gard_not_in_wd['diseaseId'].unique().tolist()))
print(gard_not_in_wd.head(n=2))
property_list = gard_not_in_wd['identifierType'].unique().tolist()
print(property_list)
Explanation: Identify GARD diseases not yet in Wikidata
Currently, there is no bot to add GARD ID to Wikidata entities, so the GARD IDs in Wikidata were added via Mix N Match. Identify the GARD diseases not yet in Wikidata, and determine if they can be mapped using one of the other identifiers available via GARD (eg- Orphanet)
End of explanation
prop_id_dict = {'OMIM':'P492', 'ORPHANET':'P1550', 'UMLS':'P2892',
'SNOMED CT':'P5806', 'ICD 10':'P494', 'NCI Thesaurus':'P1748',
'ICD 10-CM':'P4229', 'MeSH':'P486'}
print(prop_id_dict['OMIM'])
sparql_start = 'SELECT * WHERE {?item wdt:'
sparql_end = '}'
identifier_megalist=[]
for eachidtype in property_list:
sparqlQuery = sparql_start + prop_id_dict[eachidtype] + ' ?identifierId'+sparql_end
result = wdi_core.WDItemEngine.execute_sparql_query(sparqlQuery)
for i in tqdm(range(len(result["results"]["bindings"]))):
id_id = result["results"]["bindings"][i]['identifierId']["value"]
wdid = result["results"]["bindings"][i]["item"]["value"].replace("http://www.wikidata.org/entity/", "")
identifier_megalist.append({'WDID':wdid,'identifierId':id_id, 'identifierType':eachidtype})
print(len(identifier_megalist))
time.sleep(2)
identifier_megadf = pd.DataFrame(identifier_megalist)
identifier_megadf.to_csv('data/identifier_megadf.tsv',sep='\t',header=True)
## For each Gard Disease Entry, check for multiple mappings to the same WDID
missing_gard_merge = gard_not_in_wd.merge(identifier_megadf,on=(['identifierId', 'identifierType']), how="inner")
still_missing = gard_not_in_wd.loc[~gard_not_in_wd['diseaseId'].isin(missing_gard_merge['diseaseId'].unique().tolist())]
print("Disease IDs for which identifiers couldn't be used to find a QID: ",len(still_missing))
## Determine the number of identifiers that support a merge
potential_gard = missing_gard_merge.groupby(['diseaseId','WDID']).size().reset_index(name='identifier_count')
mapping_check1 = potential_gard.groupby('diseaseId').size().reset_index(name='qid_count')  # how many QIDs each GARD disease maps to
one_to_many = mapping_check1.loc[mapping_check1['qid_count']>1]
#print(len(one_to_many))
mapping_check2 = potential_gard.groupby('WDID').size().reset_index(name='gardid_count')  # how many GARD diseases map to each QID
many_to_one = mapping_check2.loc[mapping_check2['gardid_count']>1]
#print(len(many_to_one))
gard_mapping_issue_ids = one_to_many['diseaseId'].unique().tolist() + many_to_one['WDID'].unique().tolist()
gard_to_add = potential_gard.loc[~potential_gard['diseaseId'].isin(gard_mapping_issue_ids) &
~potential_gard['WDID'].isin(gard_mapping_issue_ids) &
~potential_gard['diseaseId'].isin(still_missing['diseaseId'])]
gard_to_add_full = gard_to_add.merge(gard_df,on='diseaseId',how="left")
gard_to_auto_add = gard_to_add_full.loc[gard_to_add_full['identifier_count']>1]
gard_to_suggest = gard_to_add_full.loc[gard_to_add_full['identifier_count']==1]
print(gard_to_auto_add.head(n=2))
Explanation: Pull disease lists based on identifiers so that multiple merges can be used to determine best fit
End of explanation
# GARD rare disease ID P4317
from datetime import datetime
import copy
def create_reference(gard_url):
refStatedIn = wdi_core.WDItemID(value="Q47517289", prop_nr="P248", is_reference=True)
timeStringNow = datetime.now().strftime("+%Y-%m-%dT00:00:00Z")
refRetrieved = wdi_core.WDTime(timeStringNow, prop_nr="P813", is_reference=True)
refURL = wdi_core.WDUrl(value=gard_url, prop_nr="P854", is_reference=True)
return [refStatedIn, refRetrieved, refURL]
## Unit test -- write a statement
gard_qid = gard_to_auto_add.iloc[1]['WDID']
gard_url = gard_to_auto_add.iloc[1]['websiteUrl']
gard_id = str(gard_to_auto_add.iloc[1]['diseaseId'])
reference = create_reference(gard_url)
gard_prop = "P4317"
statement = [wdi_core.WDString(value=gard_id, prop_nr=gard_prop, references=[copy.deepcopy(reference)])]
item = wdi_core.WDItemEngine(wd_item_id=gard_qid, data=statement, append_value=gard_prop,
global_ref_mode='CUSTOM', ref_handler=update_retrieved_if_new_multiple_refs)
item.write(login)
edit_id = item.lastrevid
print(gard_id, gard_qid, gard_url)
## Test write with 10 items completed successfully
gard_map_revision_list = []
i=0
for i in tqdm(range(len(gard_to_auto_add))):
gard_qid = gard_to_auto_add.iloc[i]['WDID']
gard_url = gard_to_auto_add.iloc[i]['websiteUrl']
gard_id = str(gard_to_auto_add.iloc[i]['diseaseId'])
reference = create_reference(gard_url)
gard_prop = "P4317"
statement = [wdi_core.WDString(value=gard_id, prop_nr=gard_prop, references=[copy.deepcopy(reference)])]
item = wdi_core.WDItemEngine(wd_item_id=gard_qid, data=statement, append_value=gard_prop,
global_ref_mode='CUSTOM', ref_handler=update_retrieved_if_new_multiple_refs)
item.write(login,edit_summary='added GARD ID')
gard_map_revision_list.append(item.lastrevid)
i=i+1
## Export the revision list
with open('data/mapping_revisions.txt','w') as outwritelog:
for eachrevid in gard_map_revision_list:
outwritelog.write(str(eachrevid)+'\n')
Explanation: Add the appropriate Wikidata statements
After removing GARD entries with no alternative identifier by which they can be mapped, GARD entries that map to multiple Wikidata entities, and multiple GARD entries that map to a single Wikidata entity based on the other identifiers GARD provides, we're left with entries we can add or suggest. Entries that map to a single WDID based on MULTIPLE identifier mappings can be scripted. Entries that map to a single WDID based on a single identifier are probably best sent to Mix N Match to avoid complaints further down the line.
End of explanation
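As a generic illustration of the one-to-many / many-to-one check used above (a toy table, not the real GARD data):
toy_pairs = pd.DataFrame({'diseaseId': [1, 1, 2, 3], 'WDID': ['Q10', 'Q11', 'Q12', 'Q12']})
toy_one_to_many = toy_pairs.groupby('diseaseId').size().reset_index(name='qid_count')
toy_many_to_one = toy_pairs.groupby('WDID').size().reset_index(name='gardid_count')
ambiguous = (set(toy_one_to_many.loc[toy_one_to_many['qid_count'] > 1, 'diseaseId'])
             | set(toy_many_to_one.loc[toy_many_to_one['gardid_count'] > 1, 'WDID']))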
## pull aliases for all entries with GARD IDs
sparqlQuery = 'SELECT ?item ?itemLabel ?GARD ?alias WHERE {?item wdt:P4317 ?GARD. OPTIONAL {?item skos:altLabel ?alias FILTER (LANG (?alias) = "en").} SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }}'
result = wdi_core.WDItemEngine.execute_sparql_query(sparqlQuery)
## Format the results from the Wikidata query into Pandas DF for easier manipulation
gard_alias_in_wd_list = []
for i in tqdm(range(len(result["results"]["bindings"]))):
gard_id = result["results"]["bindings"][i]["GARD"]["value"]
wdid = result["results"]["bindings"][i]["item"]["value"].replace("http://www.wikidata.org/entity/", "")
label = result["results"]["bindings"][i]["itemLabel"]["value"]
try:
alias = result["results"]["bindings"][i]["alias"]["value"]
except:
alias = "No alias"
gard_alias_in_wd_list.append({'WDID':wdid,'diseaseId':int(gard_id),'label':label,'alias':alias})
## Note that Wikidata stores the GARD IDs at strings, while GARD stores as int. Convert to ensure matchability
gard_alias_in_wd = pd.DataFrame(gard_alias_in_wd_list)
print(gard_alias_in_wd.head(n=3))
Explanation: Identify synonyms in need of inclusion
Pull all the entities mapped via GARD and their corresponding English Aliases. Determine if a synonym is missing from the Alias list and if so, include it.
Pull all labels and aliases from WD entities with GARD IDs
End of explanation
## Pull the aliases that are sourced from GARD
gard_alias = synonyms_df.loc[synonyms_df['source']=='GARD']
## Filter the Wikidata GARD Alias table down to just the GARD IDs in GARD alias DF (ie- has allowable synonyms)
gard_wd_limited_df = gard_alias_in_wd.loc[gard_alias_in_wd['diseaseId'].isin(gard_alias['diseaseId'].unique().tolist())]
alias_check_df = gard_alias.merge(gard_wd_limited_df,on='diseaseId',how='inner').copy()
## Check if the GARD synonym matches anything in the corresponding Wikidata label or alias
alias_check_df['label_match?'] = alias_check_df['name'].str.lower()==alias_check_df['label'].str.lower()
alias_check_df['alias_match?'] = alias_check_df['name'].str.lower()==alias_check_df['alias'].str.lower()
## Identify the GARD synonyms that were found in Wikidata (label or aliases) for removal
synonyms_to_drop = alias_check_df['name'].loc[(alias_check_df['label_match?']==True) |
(alias_check_df['alias_match?']==True)].unique().tolist()
## Filter out GARD entries that were found in Wikidata
synonyms_to_inspect = alias_check_df.loc[~alias_check_df['name'].isin(synonyms_to_drop)]
## Identify the synonyms to add to wikidata as an alias
synonyms_to_add = synonyms_to_inspect.drop_duplicates(subset=['diseaseId','name','source','WDID','label'], keep='first')
print(synonyms_to_add.head(n=4))
print(len(synonyms_to_add))
Explanation: Compare GARD synonyms with Wikidata aliases and labels
End of explanation
disease_qid = synonyms_to_add.iloc[0]['WDID']
disease_alias = synonyms_to_add.iloc[0]['name']
print(disease_qid,disease_alias)
## Unit test -- write a statement
wikidata_item = wdi_core.WDItemEngine(wd_item_id=disease_qid)
wikidata_item.set_aliases([disease_alias],lang='en',append=True)
wikidata_item.write(login, edit_summary='added alias from GARD')
print(wikidata_item.get_aliases(lang='en'))
print(wikidata_item.lastrevid)
#wikidata_item.get_aliases(lang='en')
## Script to run the synonym updates
gard_alias_revision_list = []
i=0
for i in tqdm(range(len(synonyms_to_add))):
disease_qid = synonyms_to_add.iloc[i]['WDID']
disease_alias = synonyms_to_add.iloc[i]['name']
wikidata_item = wdi_core.WDItemEngine(wd_item_id=disease_qid)
wikidata_item.set_aliases([disease_alias],lang='en',append=True)
wikidata_item.write(login, edit_summary='added alias from GARD')
gard_alias_revision_list.append(wikidata_item.lastrevid)
i=i+1
## Export the revision list
with open('data/alias_revisions.txt','w') as aliaslog:
for eachrevid in gard_alias_revision_list:
aliaslog.write(str(eachrevid)+'\n')
Explanation: Write the GARD aliases to Wikidata
Since aliases don't allow for sourcing/referencing, include an edit comment that an alias from GARD was added.
End of explanation |
465 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Combine train and test data for one-hot encoding
Step1: Adjust for multicollinearity
Idea
Step2: Train the model
Step3: Prediction
Step4: Combine submission | Python Code:
train["data"] = "train"
test["data"] = "test"
combined_data = pd.concat([train, test])
encoded = pd.get_dummies(combined_data[["X0", "X1", "X2", "X3", "X4", "X5", "X6", "X8"]])
drop_cat = combined_data.drop(["X0", "X1", "X2", "X3", "X4", "X5", "X6", "X8"], axis=1)
combined_data_clean = drop_cat.join(encoded)
train_data = combined_data_clean[combined_data_clean.data == "train"].copy()
test_data = combined_data_clean[combined_data_clean.data == "test"].copy()
train_data.drop("data", axis=1, inplace=True)
test_data.drop(["data", "y"], axis=1, inplace=True)
train_data.columns
test_data.columns
y_train = train_data["y"].astype(np.float32)
x_train = train_data.drop("y", axis=1).astype(np.float32)
x_test = test_data.astype(np.float32)
x_train.shape
train_reshaped = np.array([i.reshape((-1, 1)) for i in x_train.values])
train_reshaped = train_reshaped.astype(np.float32)
train_reshaped.shape
Explanation: Combine train and test data for one-hot encoding
End of explanation
n_comp = 128
pca = PCA(n_components=n_comp, random_state=42)
pca2_results_train = pca.fit_transform(x_train)
pca2_results_test = pca.transform(x_test)
train_reshaped = np.array([i.reshape((-1, 1)) for i in pca2_results_train])
train_reshaped = train_reshaped.astype(np.float32)
train_reshaped.shape
Explanation: Adjust for multicollinearity
Idea: treat the features as sequences; feeding the raw 0/1 data directly can easily produce NaN losses, probably due to multicollinearity.
Therefore, use either VIF or PCA to adjust for it.
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(x_train.values, i) for i in range(x_train.shape[1])]
vif["features"] = x_train.columns
End of explanation
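The commented-out VIF snippet above needs an import from statsmodels; a self-contained sketch on a small random matrix (purely illustrative) looks like:
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
toy_X = pd.DataFrame(np.random.rand(100, 3), columns=['f0', 'f1', 'f2'])
toy_vif = pd.DataFrame()
toy_vif["VIF Factor"] = [variance_inflation_factor(toy_X.values, i) for i in range(toy_X.shape[1])]
toy_vif["features"] = toy_X.columns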
# Idea: Simple model
model = Sequential()
model.add(Bidirectional(SimpleRNN(128, return_sequences=True, activation="relu"), input_shape=(None, 1)))
model.add(Bidirectional(SimpleRNN(64, return_sequences=True, activation="relu")))
model.add(Bidirectional(SimpleRNN(32, return_sequences=False, activation="relu")))
model.add(Dropout(0.5))
model.add(Dense(1, activation="linear"))
model.compile(optimizer="rmsprop", loss="mse")
model.summary()
# Idea: Funnel -> reduce information after each layer / deep model
model = Sequential()
model.add(Bidirectional(SimpleRNN(64, return_sequences=True, activation="relu"), input_shape=(None, 1)))
model.add(Bidirectional(SimpleRNN(64, return_sequences=True, activation="relu")))
model.add(TimeDistributed(Dense(32, activation="relu")))
model.add(Bidirectional(SimpleRNN(32, return_sequences=True, activation="relu")))
model.add(Bidirectional(SimpleRNN(32, return_sequences=True, activation="relu")))
model.add(TimeDistributed(Dense(16, activation="relu")))
model.add(Bidirectional(SimpleRNN(16, return_sequences=False, activation="relu")))
model.add(Dropout(0.5))
model.add(Dense(1, activation="linear"))
model.compile(optimizer="rmsprop", loss="mse")
model.summary()
early_stop = EarlyStopping(monitor="loss", patience=10)
file_path = "weights.{epoch:02d}-{val_loss:.2f}.hdf5"
checkpoint = ModelCheckpoint(file_path)
model_run = model.fit(train_reshaped, y_train, epochs=100 ,validation_split=0.02, callbacks=[early_stop, checkpoint])
y_pred_train = model.predict(train_reshaped)
print("the R2 score is : {}".format(r2_score(y_train, y_pred_train)))
Explanation: Train the model
End of explanation
test_reshaped = np.array([i.reshape((-1, 1)) for i in pca2_results_test])
test_reshaped = test_reshaped.astype(np.float32)
test_reshaped.shape
y_pred_test = model.predict(test_reshaped)
output = pd.DataFrame({"ID": test.index, "y": y_pred_test.reshape(-1)})
output.head()
output.to_csv("submissions_{}.csv".format(datetime.datetime.today()), index=False)
Explanation: Prediction
End of explanation
sub_1 = pd.read_csv("submission_baseLine.csv")
sub_2 = pd.read_csv("submissions_2017-05-31 15:48:40.546392.csv")
sub_3 = output.copy()
mean_pred = (sub_1.y.values + sub_2.y.values + sub_3.y.values) / 3
output_mean = pd.DataFrame({"ID": test.index, "y": mean_pred})
output_mean.to_csv("submissions_mean_{}.csv".format(datetime.datetime.today()), index=False)
sub_1 = pd.read_csv("submission_baseLine.csv")
sub_2 = pd.read_csv("submissions_2017-05-31 15:48:40.546392.csv")
mean_pred = (sub_1.y.values + sub_2.y.values ) / 2
output_mean = pd.DataFrame({"ID": test.index, "y": mean_pred})
output_mean.to_csv("submissions_mean_2_{}.csv".format(datetime.datetime.today()), index=False)
Explanation: Combine submission
End of explanation |
466 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div class="alert alert-block alert-info" style="margin-top
Step1: <a id="ref0"></a>
<h2> Logistic Function </h2>
Step2: Create a tensor ranging from -10 to 10
Step3: When you use sequential, you can create a sigmoid object
Step4: Apply the element-wise function Sigmoid with the object
Step5: Plot the results
Step6: For custom modules, call the sigmoid from the torch (nn.functional for the old version), which applies the element-wise sigmoid from the function module and plots the results
Step7: <a id="ref1"></a>
<h2> Tanh </h2>
When you use sequential, you can create a tanh object
Step8: Call the object and plot it
Step9: For custom modules, call the Tanh object from the torch (nn.functional for the old version), which applies the element-wise sigmoid from the function module and plots the results
Step10: <a id="ref3"></a>
<h2> Relu </h2>
When you use sequential, you can create a Relu object
Step11: For custom modules, call the relu object from the nn.functional, which applies the element-wise sigmoid from the function module and plots the results
Step12: <a id="ref3"></a>
<h2> Compare Activation Functions </h2> | Python Code:
import torch.nn as nn
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
Explanation: <div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="http://cocl.us/pytorch_link_top"><img src = "http://cocl.us/Pytorch_top" width = 950, align = "center"></a>
<img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 200, align = "center">
<h1 align=center><font size = 5>Logistic Regression</font></h1>
In this lab, you will cover logistic regression by using Pytorch.
# Table of Contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
<li><a href="#ref0">Logistic Function</a></li>
<li><a href="#ref1">Tach</a></li>
<li><a href="#ref2">Rulu</a></li>
<li><a href="#ref3">Compare Activation Functions</a></li>
<li><a href="#ref4">Practice</a></li>
<br>
<p></p>
Estimated Time Needed: <strong>15 min</strong>
</div>
<hr>
Import all the necessary modules:
End of explanation
torch.manual_seed(2)
Explanation: <a id="ref0"></a>
<h2> Logistic Function </h2>
End of explanation
z=torch.arange(-10,10,0.1).view(-1, 1)
Explanation: Create a tensor ranging from -10 to 10:
End of explanation
sig=nn.Sigmoid()
Explanation: When you use sequential, you can create a sigmoid object:
End of explanation
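The Sequential usage alluded to is never shown explicitly; a minimal sketch with hypothetical layer sizes would be:
model = nn.Sequential(
    nn.Linear(2, 1),   # hypothetical in/out features
    nn.Sigmoid()       # the same element-wise sigmoid, used here as an activation layer
)
print(model(torch.randn(4, 2)))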
yhat=sig(z)
sig(torch.tensor(-1.0))
Explanation: Apply the element-wise function Sigmoid with the object:
End of explanation
plt.plot(z.numpy(),yhat.numpy())
plt.xlabel('z')
plt.ylabel('yhat')
Explanation: Plot the results:
End of explanation
yhat=torch.sigmoid(z)
plt.plot(z.numpy(),yhat.numpy())
Explanation: For custom modules, call the sigmoid from the torch (nn.functional for the old version), which applies the element-wise sigmoid from the function module and plots the results:
End of explanation
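For a custom module in the spirit of the lab's title, a minimal logistic-regression sketch (hypothetical input size) that applies the functional sigmoid in forward() could be:
class ToyLogisticRegression(nn.Module):
    def __init__(self, in_features):
        super(ToyLogisticRegression, self).__init__()
        self.linear = nn.Linear(in_features, 1)
    def forward(self, x):
        return torch.sigmoid(self.linear(x))   # element-wise sigmoid applied to the linear output

toy_model = ToyLogisticRegression(2)
print(toy_model(torch.randn(4, 2)))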
TANH=nn.Tanh()
Explanation: <a id="ref1"></a>
<h2> Tanh </h2>
When you use sequential, you can create a tanh object:
End of explanation
yhat=TANH(z)
plt.plot(z.numpy(),yhat.numpy())
Explanation: Call the object and plot it:
End of explanation
yhat=torch.tanh(z)
plt.plot(z.numpy(),yhat.numpy())
Explanation: For custom modules, call the Tanh object from the torch (nn.functional for the old version), which applies the element-wise sigmoid from the function module and plots the results:
End of explanation
RELU=nn.ReLU()
yhat=RELU(z)
plt.plot(z.numpy(),yhat.numpy())
Explanation: <a id="ref3"></a>
<h2> Relu </h2>
When you use sequential, you can create a Relu object:
End of explanation
yhat=F.relu(z)
plt.plot(z.numpy(),yhat.numpy())
Explanation: For custom modules, call the relu object from the nn.functional, which applies the element-wise sigmoid from the function module and plots the results:
End of explanation
x=torch.arange(-2,2,0.1).view(-1, 1)
plt.plot(x.numpy(),F.relu(x).numpy(),label='relu')
plt.plot(x.numpy(),torch.sigmoid(x).numpy(),label='sigmoid')
plt.plot(x.numpy(),torch.tanh(x).numpy(),label='tanh')
plt.legend()
Explanation: <a id="ref3"></a>
<h2> Compare Activation Functions </h2>
End of explanation |
467 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 12
Step1: Day 12.2
A function to transform terms of the form "[key]"
Step2: Track the regions to ignore
Step3: Regions to erase may come out nested. If one region to erase is included inside another, we will ignore the smaller one.
Step4: Gather all the functions into a main pruned_sum()
Step5: Test
Step6: Solution | Python Code:
with open('inputs/input12.txt') as f_input:
s = next(f_input).rstrip()
import re
def sum_numbers(s):
p = re.compile('[-]?[\d]+')
numbers = list(map(int, p.findall(s)))
return sum(numbers)
sum_numbers(s)
Explanation: Day 12: JSAbacusFramework.io
Day 12.1
End of explanation
def transform_reds(s):
q = re.compile('\"[\w]+\"\:\"red\"')
return q.sub('R', s)
Explanation: Day 12.2
A function to transform terms of the form "[key]":"red" into a single character 'R'.
End of explanation
def regions_to_erase(s):
regions = []
curr_depth = 0
last_sink = {}
red = None
for i, c in enumerate(s):
if c == '{':
curr_depth += 1
if red is None:
last_sink[curr_depth] = i
elif c == 'R':
if red is None:
red = curr_depth
elif c == '}':
if red is not None:
if curr_depth == red:
regions.append([last_sink[curr_depth], i])
red = None
curr_depth -= 1
return regions
Explanation: Track the regions to ignore: when an 'R' is found at depth d we keep this information; we ignore the span between the last $[d-1,d]$ transition (sink down) and the next $[d,d-1]$ transition (float up). Those regions will be erased.
End of explanation
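A quick usage sketch on a toy string (relying on the two functions defined above):
toy = transform_reds('[1,{"c":"red","b":2},3]')
print(toy)                    # '[1,{R,"b":2},3]'
print(regions_to_erase(toy))  # [[3, 11]] -- the span of the object containing the R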
def nest_regions(regions):
nested = []
for i, bounds in enumerate(regions):
include = True
for a in regions[i + 1:]:
if a[0] < bounds[0]:
include = include & False
if include:
nested.append(bounds)
return nested
Explanation: Regions to erase may come out nested. If one region to erase is included inside another, we will ignore the smaller one.
End of explanation
def pruned_sum(s):
t = transform_reds(s)
nested_regions = nest_regions(regions_to_erase(t))
last_bound = 0
pruned = ''
for i, bounds in enumerate(nested_regions):
pruned += t[last_bound: bounds[0]]
last_bound = bounds[1] + 1
pruned += t[last_bound:]
return sum_numbers(pruned)
Explanation: Gather all the functions into a main pruned_sum()
End of explanation
def test():
assert(pruned_sum('[1,2,3]') == 6)
assert(pruned_sum('[1,{"c":"red","b":2},3]') == 4)
assert(pruned_sum('{"d":"red","e":[1,2,3,4],"f":5}') == 0)
assert(pruned_sum('[1,{"c":"red","b":2},3]') == 4)
assert(pruned_sum('[1,"red",5]') == 6)
test()
Explanation: Test
End of explanation
pruned_sum(s)
Explanation: Solution
End of explanation |
468 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nurgle
"Buboes, phlegm, blood and guts! Boils, bogeys, rot and pus! Blisters, fevers, weeping sores! From your wounds the fester pours."
-- <cite>The Chant of Nurgle</cite>
The following analyses require jupyterthemes, numpy, pandas, and plotly. Install them with pip.
Step1: Inputs
Reading input files for the simulation.
Step2: Metabolism
This section shows profiles and analyses related to the metabolic pathway and chemical compounds.
Concentration of Compounds
Reading timecourse data first.
Step3: Plotting concentrations of compounds.
Step4: Plotting time series of compound concentrations. | Python Code:
from jupyterthemes.stylefx import set_nb_theme
set_nb_theme('grade3')
import os
PREFIX = os.environ.get('PWD', '.')
# PREFIX = "../build/outputs"
import numpy
import pandas
import plotly.graph_objs as go
import plotly.figure_factory as ff
from plotly.offline import init_notebook_mode, iplot
init_notebook_mode(connected=True)
Explanation: Nurgle
"Buboes, phlegm, blood and guts! Boils, bogeys, rot and pus! Blisters, fevers, weeping sores! From your wounds the fester pours."
-- <cite>The Chant of Nurgle</cite>
The following analyses require jupyterthemes, numpy, pandas, and plotly. Install them with pip.
End of explanation
compounds = pandas.read_csv(os.path.join(PREFIX, "compounds.csv"))
num_compounds = compounds.shape[0]
print('[{}] compounds were loaded.'.format(num_compounds))
metabolism = pandas.read_csv(os.path.join(PREFIX, "metabolism.csv"))
print('[{}] reactions were loaded.'.format(metabolism.shape[0]))
Explanation: Inputs
Reading input files for the simulation.
End of explanation
timecourse = pandas.read_csv(os.path.join(PREFIX, "timecourse.csv"))
timecourse = timecourse.rename(columns={timecourse.columns[0]: "Time"})
concentrations = pandas.DataFrame(timecourse.values[: , : num_compounds+2], timecourse.index, timecourse.columns[ : num_compounds+2])
indices = [0] + list(range(num_compounds+2, timecourse.shape[1]))
fluxes = pandas.DataFrame(timecourse.values[: , indices], timecourse.index, timecourse.columns[indices])
Explanation: Metabolism
This section shows profiles and analyses related to the metabolic pathway and chemical compounds.
Concentration of Compounds
Reading timecourse data first.
End of explanation
def plot1(df, filename, indices=None, nsteps=10, rescaled=False, xlabel="", ylabel=""):
if indices is None:
(m, _) = df.shape
indices = range(0, m, m // nsteps)
if rescaled:
func = lambda idx: df.iloc[idx, 1: ] / df.iloc[0, 1: ]
else:
func = lambda idx: df.iloc[idx, 1: ]
ymin, ymax = +numpy.inf, -numpy.inf
for idx in indices:
y = df.iloc[idx, 1: ] / df.iloc[0, 1: ]
ymin, ymax = min(ymin, min(func(idx))), max(ymax, max(func(idx)))
(ymin, ymax) = ymin - (ymax - ymin) / 15, ymax + (ymax - ymin) / 15
scatters = [
dict(
y=func(idx),
text=df.columns[1: ],
mode='markers',
marker=dict(
size='12', color=func(idx), colorscale='Viridis', showscale=True,
cmin=ymin, cmax=ymax,
line=dict(width=1)
),
visible=False
)
for idx in indices]
scatters[0]['visible'] = True
steps = []
for i, idx in enumerate(indices):
step = dict(
method='restyle',
label='{}'.format(df.iloc[idx, 0]),
args=['visible', [False] * len(scatters)],
)
step['args'][1][i] = True # Toggle i'th trace to "visible"
steps.append(step)
sliders = [
dict(
active=0,
currentvalue=dict(prefix="Time="),
pad=dict(t=50),
steps=steps
)
]
layout = dict(
hovermode= 'closest',
xaxis= dict(title=xlabel),
yaxis=dict(title=ylabel, range=(ymin, ymax)),
showlegend= False,
sliders=sliders,
height=600
)
fig = dict(data=scatters, layout=layout)
iplot(fig, filename=filename)
plot1(concentrations, "concentration_markers", nsteps=15, xlabel="Compound", ylabel="Concentration")
Explanation: Plotting concentrations of compounds.
End of explanation
def plot2(df, filename, ngroups=20, lenlabel=30, rescaled=False, xlabel="", ylabel=""):
indices = list(range(1, df.shape[1]))
tick = len(indices) // (ngroups - 1)
if rescaled:
func = lambda idx: df.iloc[: , idx] / df.iloc[0, idx]
else:
func = lambda idx: df.iloc[: , idx]
ymin, ymax = +numpy.inf, -numpy.inf
for idx in indices:
ymin, ymax = min(ymin, min(func(idx))), max(ymax, max(func(idx)))
(ymin, ymax) = ymin - (ymax - ymin) / 15, ymax + (ymax - ymin) / 15
scatters = [
dict(
x=df.iloc[: , 0],
y=func(idx),
mode='lines',
name=df.columns[idx][: lenlabel],
visible=(idx < tick)
)
for idx in indices]
steps = []
for i in range(ngroups):
step = dict(
method='restyle',
label=i + 1,
args=['visible', [(i * tick <= j < (i + 1) * tick) for j in range(len(scatters))]],
)
if any(step['args'][1]):
steps.append(step)
sliders = [
dict(
active=0,
pad=dict(t=50),
steps=steps,
currentvalue=dict(prefix='Group')
)
]
layout = dict(
hovermode= 'closest',
xaxis= dict(title=xlabel),
yaxis=dict(title=ylabel, range=(ymin, ymax)),
# showlegend= False,
sliders=sliders,
height=600
)
fig = dict(data=scatters, layout=layout)
iplot(fig, filename=filename)
plot2(concentrations, "concentration_lines", xlabel="Time", ylabel="Concentration")
plot1(fluxes, "flux_markers", nsteps=15, rescaled=True, xlabel="Reaction", ylabel="Relative Flux")
plot2(fluxes, "flux_lines", rescaled=True, xlabel="Time", ylabel="Relative Flux")
Explanation: Plotting time series of compound concentrations.
End of explanation |
469 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas Data Structures
Step1: Understanding language's data structures is the most important part for a good programming experience. Poor understanding of data structures leads to poor code in terms of efficiency and readability.
These notes are devoted to walk through all Pandas structures but also providing further readings for a deeper knowledge. A previous Numpy datastructures knowledge is required.
The covered Pandas data structures are
Step2: Otherwise, if index is passed, values with keys in index are pulled out, the rest are assigned to NaN. A NaN value means not assigned, and we will have to deal with these values in the future.
What is NaN??
(from Pandas docs)
Some might quibble over our usage of missing. By “missing” we simply mean null or “not present for whatever reason”. Many data sets simply arrive with missing data, either because it exists and was not collected or it never existed. For example, in a collection of financial time series, some of the time series might start on different dates. Thus, values prior to the start date would generally be marked as missing.
Step3: From list
Step4: From numpy array
Step5: From scalar
Step6: Series can have the attribute name. When dealing with DataFrames, Series names will be automatically assigned with its column name.
Step7: Series Accessing
Series can be accessed through position (numerical index), boolean ¿list? or key (axis of labels). Accessing by position is like working with numpy ndarrays while accessing through keys (axis of labels) is like working with dictionaries.
Position accessing
Step8: Boolean list accessing
Step9: Key accessing
Step10: In case of accessing an nonexistent key, a KeyError exception is thrown
Step11: To avoid errors, we can use Series get function, where a default value is returned in case of error.
Step12: Series Operations
Vectorized operations can be done over pandas Series and also Series are accepted as most of NumPy operations. The result of an operation between unaligned Series will have the union of the indexes involved. If a label is not found in one Series or the other, the result will be marked as missing NaN.
Step13: Operations are performed index-wise.
Step14: Series dtypes
Dtype can be forced at Series creation, if None, the dtype is inferred.
Step15: For numerical variables, most common dtypes will be int and float.
For categorical variables strings are common types. However, when working with labels, more advanced categorical data management can be used (http
Step16: Dates are categorical or numeric?
Days of the month, months, days of the week, etc... are considered categorical.
Specific dates, such as the day payments are received, birth dates, etc... are numeric.
Step17: Missing Data
You can insert missing values by simply assigning to containers. The actual missing value used will be chosen based on the dtype.
For example, numeric containers will always use NaN regardless of the missing value type chosen
Step18: Missing values propagate naturally through arithmetic operations between pandas objects.
Step19: The descriptive statistics and computational methods discussed in the data structure overview (and listed here and here) are all written to account for missing data. For example
Step20: Exercises
Step21: We can create series from a list of lists directly from data variable.
Step22: This is not a very useful Series object, as access to list items is not syntactically nice. However, let's try to put the all countries' airports in a Series.
Step23: We can see that there's an error between index and the number provided as index in the csv. Let's try to see what happened with the knowledge we have with Series.
Step24: DataFrame
A DataFrame is a 2-dimensional labeled data structure with columns of different types. It can be seen as a spreadsheet, where columns are Series or a Python dictionary where Series can be accessed through labels.
DataFrame Creation
We can create DataFrames from
Step25: The row and column labels can be accessed respectively by accessing the index and columns attributes
Step26: From dict of ndarrays / lists
The ndarrays must all be the same length. If an index is passed, it must clearly also be the same length as the arrays. If no index is passed, the result will be <code>range(n)</code>, where n is the array length.
Step27: From structured or record array
This case is handled identically to a dict of arrays.
Step28: From a list of dicts
Step29: From a dict of tuples
You can automatically create a multi-indexed frame by passing a tuples dictionary
Step30: Alternate constructors
Column selection, addition, deletion
(from Pandas Docs)
You can treat a DataFrame semantically like a dict of like-indexed Series objects. Getting, setting, and deleting columns works with the same syntax as the analogous dict operations
Step31: Columns can be deleted or popped like with a dict
Step32: When inserting a scalar value, it will naturally be propagated to fill the column
Step33: When inserting a Series that does not have the same index as the DataFrame, it will be conformed to the DataFrame’s index
Step34: You can insert raw ndarrays but their length must match the length of the DataFrame’s index.
By default, columns get inserted at the end. The insert function is available to insert at a particular location in the columns
Step35: Indexing / Selection
The basics of indexing are as follows
Step36: Both loc and iloc methods can be used to access cells or groups of cells. In this case the result is a DataFrame
Step37: Column selection, addition, deletion
We will go through these examples using a real dataset. Remember from the Series examples that we loaded a csv. We did it using the <code>csv</code> library; however, Pandas provides the necessary tools to read a csv file and output a DataFrame directly. Let's see an example using OpenFlights data. The data is structured as follows
Step38: Assigning New Columns in Method Chains
Inspired by dplyr’s <code> mutate </code> verb, DataFrame has an assign() method that allows you to easily create new columns that are potentially derived from existing columns. | Python Code:
import numpy as np
import pandas as pd
Explanation: Pandas Data Structures
End of explanation
d = {'a':5.,'b':5.,'c':5.}
i = ['x','y','z']
s1 = pd.Series(d)
print(s1)
s1.index
Explanation: Understanding a language's data structures is the most important part of a good programming experience. A poor understanding of data structures leads to poor code in terms of efficiency and readability.
These notes walk through all of the Pandas structures, while also providing further reading for deeper knowledge. Previous knowledge of NumPy data structures is required.
The covered Pandas data structures are:
* Series:
* DataFrames:
Series
Series is a one-dimensional structure that can hold any data type (boolean, integer, float or even Python objects).
Series Creation
We can create Series from:
* Python dictionaries
* Python List
* NumPy ndarrays
* a scalar value
As an optional parameter, we can provide an index. The passed index is a list of axis labels; indexes provide an effective way to access the data (slicing).
Each Series has a dtype that corresponds to the variable type of the Series.
From dictionary:
In this case if no index is provided, it is extracted from dictionary keys, while the data is extracted from values.
End of explanation
d = {'a':5,'b':5,'c':5}
i = ['x','y','a','b']
s1 = pd.Series(d, index = i)
print(s1)
print(s1.dtype)
s1.index
Explanation: Otherwise, if an index is passed, the values whose keys appear in the index are pulled out, and the rest are assigned NaN. A NaN (Not a Number) value marks missing data, and we will have to deal with these values in the future.
What is NaN?
(from Pandas docs)
Some might quibble over our usage of missing. By “missing” we simply mean null or “not present for whatever reason”. Many data sets simply arrive with missing data, either because it exists and was not collected or it never existed. For example, in a collection of financial time series, some of the time series might start on different dates. Thus, values prior to the start date would generally be marked as missing.
End of explanation
l = [5,5,5]
s1 = pd.Series(l)
print(s1)
i = ['x','y','z']
s1 = pd.Series(l, index = i)
print(s1)
#This would raise an error
#i = ['x','y','a','b']
#s1 = pd.Series(l, index = i)
#print s1
print(s1.dtype)
s1.index
Explanation: From list:
We can create Series from lists. In this case we can provide an index if desired; however, the index must have the same length as the constructor list.
End of explanation
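To see the length-mismatch error mentioned in the commented-out lines above without halting the notebook, one option is to wrap it in a try/except (a small sketch):
try:
    pd.Series([5, 5, 5], index=['x', 'y', 'a', 'b'])  # 3 values but 4 index labels
except ValueError as err:
    print("Error handled: %s" % err)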
s2 = pd.Series(np.array([3,20,5]),index=['a','b','c'])
print(s2)
print(s2.dtype)
s2.index
Explanation: From numpy array:
End of explanation
s3 = pd.Series(5,index=['a','b','c'])
print(s3)
print(s3.dtype)
s3.index
Explanation: From scalar:
End of explanation
s3 = pd.Series(5,index=['a','b','c'], name = 'Series3')
s3.name
Explanation: Series can have the attribute name. When dealing with DataFrames, Series names will be automatically assigned with its column name.
End of explanation
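A quick check of that claim, using a throwaway DataFrame (the column name here is made up for illustration):
df_tmp = pd.DataFrame({'price': [1.0, 2.0, 3.0]})
print(df_tmp['price'].name)  # the Series pulled from a DataFrame is named after its column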
s2
s2[1]
s2[:1]
Explanation: Series Accessing
Series can be accessed through position (numerical index), boolean list or key (axis of labels). Accessing by position is like working with numpy ndarrays, while accessing through keys (axis of labels) is like working with dictionaries.
Position accessing
End of explanation
s2[[True,True,False]]
s2[s2>4]
Explanation: Boolean list accessing
End of explanation
s2[['a','b']]
s2['a']
'a' in s2
Explanation: Key accessing
End of explanation
try:
s2['z']
except KeyError:
print("Error handled")
Explanation: In case of accessing a nonexistent key, a KeyError exception is thrown
End of explanation
s2.get('x',np.NaN)
Explanation: To avoid errors, we can use Series get function, where a default value is returned in case of error.
End of explanation
s2
print(s2 + 23)
# we can apply np functions
np.add(s2,23) == s2 + 23
s1
s2
Explanation: Series Operations
Vectorized operations can be performed over pandas Series, and Series are also accepted by most NumPy operations. The result of an operation between unaligned Series will have the union of the indexes involved. If a label is not found in one Series or the other, the result will be marked as missing (NaN).
End of explanation
s2.reset_index() + s1.reset_index()
(s2 + s1).dropna()
s2 ** 3
np.exp(s2)
Explanation: Operations are performed index-wise.
End of explanation
pd.Series([1,2,3,4,5],dtype=np.float32)
Explanation: Series dtypes
The dtype can be forced at Series creation; if None, the dtype is inferred.
End of explanation
pd.Series(["a","b","c"],dtype=str)
Explanation: For numerical variables, most common dtypes will be int and float.
For categorical variables, strings are common types. However, when working with labels, more advanced categorical data management can be used (http://pandas.pydata.org/pandas-docs/stable/categorical.html).
Why are strings dtype object? (from http://stackoverflow.com/questions/21018654/strings-in-a-dataframe-but-dtype-is-object)
The dtype object comes from NumPy; it describes the type of the elements in an ndarray. Every element in an ndarray must have the same size in bytes. For int64 and float64 that is 8 bytes, but for strings the length is not fixed. So instead of saving the bytes of the strings in the ndarray directly, Pandas uses an object ndarray, which saves pointers to the string objects; because of this, the dtype of this kind of ndarray is object.
End of explanation
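As a minimal sketch of the categorical dtype referenced above (the labels are arbitrary):
days = pd.Series(['Mon', 'Tue', 'Mon', 'Wed'], dtype='category')
print(days.dtype)           # category
print(days.cat.categories)  # the distinct labels
print(days.cat.codes)       # the integer codes stored under the hood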
pd.Series(['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'])
import datetime
base = datetime.datetime.today()
date_list = [base - datetime.timedelta(days=x) for x in range(0, 7)]
date_s = pd.Series(date_list)
print(date_s)
date_s[1] > date_s[2]
Explanation: Dates are categorical or numeric?
Days of the month, months, days of the week, etc... are considered categorical.
Specific dates, such as the day payments are received, birth dates, etc... are numeric.
End of explanation
import numpy as np
s = pd.Series(["a", "b", "c"])
s.loc[0] = None
s.loc[1] = np.nan
print(s)
s = pd.Series([1, 2, 3])
s.loc[0] = None
s.loc[1] = np.nan
print(s)
Explanation: Missing Data
You can insert missing values by simply assigning to containers. The actual missing value used will be chosen based on the dtype.
For example, numeric containers will always use NaN regardless of the missing value type chosen:
End of explanation
s1 = pd.Series([1,2,3])
s2 = pd.Series([1,np.nan,3])
s1 + s2
s1.index
s2.index
Explanation: Missing values propagate naturally through arithmetic operations between pandas objects.
End of explanation
s2 = pd.Series([1,np.nan,3])
print(s2.fillna(0))
print(s2.dropna())
print(s2.isnull())
s2[s2.isnull()]
Explanation: The descriptive statistics and computational methods discussed in the pandas data structure overview are all written to account for missing data. For example:
* When summing data, NA (missing) values are treated as zero.
* If the data are all NA, the result will be NA.
* Methods like cumsum and cumprod ignore NA values, but preserve them in the resulting arrays.
Cleaning/filling missing data:
pandas objects are equipped with various data manipulation methods for dealing with missing data.
* The fillna function can "fill in" NA values with non-null data.
* With dropna() you can simply exclude labels from a data set which refer to missing data.
* With isnull() you can return a boolean array to select the missing (not assigned) values.
End of explanation
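A minimal sketch of the summing behaviour described above:
s = pd.Series([1, np.nan, 3])
print(s.sum())     # NaN is treated as zero when summing, giving 4.0
print(s.cumsum())  # cumsum skips NaN in the running total but preserves it in the output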
import csv
import urllib2
url = 'https://raw.githubusercontent.com/jpatokal/openflights/master/data/airports.dat'
response = urllib2.urlopen(url)
data = list(csv.reader(response))
#data is a list of list
Explanation: Exercises:
Exercise 1: Load iqsize.csv using the csv library as a dictionary of Series.
Exercise 2: Check the Series dtypes. Are they correct? Why? What is the problem with keeping numerical variables as categorical?
Exercise 3: For the sex variable, select those that are males (including synonyms).
Exercise 4: Count how many women and how many men there are. How many missing elements are there?
Examples with Series
Before loading data from different sources, we can have some examples using series with self-generated data.
End of explanation
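A possible starting point for Exercise 1 (a sketch only: it assumes iqsize.csv sits in the working directory, and the column names depend on whatever header the file actually has):
import csv
with open('iqsize.csv') as f:
    rows = list(csv.DictReader(f))
# one Series per column, keyed by the column name taken from the header
iqsize = {col: pd.Series([row[col] for row in rows]) for col in rows[0]}
print({name: s.dtype for name, s in iqsize.items()})  # everything arrives as object (strings)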
pd.Series(data)
Explanation: We can create series from a list of lists directly from data variable.
End of explanation
countries = pd.Series(np.array([airport[3] for airport in data]))
# this is a more interesting Series object
countries
print(countries.index)
## Access by number
print("-> Access by number")
print(countries[0])
## we can evaluate a function over all elements to get a boolean series
print((countries == "Spain").head())
print((countries[countries == "Spain"]).head())
print((countries[countries != "Spain"]).head())
print((countries[countries != "Spain"]).tail())
# we can get the list of countries that have (at least one) airport
print(countries.unique())
# we can also access by key
countries[81]
Explanation: This is not a very useful Series object, as access to list items is not syntactically nice. However, let's try to put all the countries' airports in a Series.
End of explanation
csv_index = pd.Series(np.array([airport[0] for airport in data]))
print(csv_index[csv_index.astype(int) != csv_index.index + 1])
# we get the list of all missing values
shifted_index = csv_index[csv_index.index-1]
shifted_index = shifted_index.fillna(0)
shifted_index.index = csv_index.index
# we get the list of non consecutive values
non_consecutive = csv_index[csv_index.astype(int) != shifted_index.astype(int) + 1].astype(int)
print(non_consecutive.head(10))
# check if our assumption is true (1 at most consecutive values are missing)
print("the assumption is:", len(non_consecutive) + len(csv_index) == csv_index[len(csv_index)-1])
# let's see what happens
difference = (shifted_index.astype(int) + 1).add(-csv_index.astype(int))
print(difference[difference < -1])
Explanation: We can see that there's a mismatch between our Series index and the airport id provided in the csv. Let's try to see what happened using the knowledge we have of Series.
End of explanation
d = {'one': pd.Series([1,2,3],index=['a','b','c']),
'two': pd.Series([1,2,3,4],index=['a','b','c','z']),
'three':{'a':1}}
df = pd.DataFrame(d)
df
pd.DataFrame(d, index=['d', 'b', 'a'])
pd.DataFrame(d, index=['d', 'b', 'a'],
columns=['two', 'three','four'])
Explanation: DataFrame
A DataFrame is a 2-dimensional labeled data structure with columns of different types. It can be seen as a spreadsheet whose columns are Series, or as a Python dictionary whose Series can be accessed through labels.
DataFrame Creation
We can create DataFrames from:
* Dict of 1D ndarrays, lists, dicts, or Series
* 2-D numpy.ndarray
* Structured or record ndarray
* A Series
* Another DataFrame
From dict of Series or dict
The result index will be the union of the indexes of the various Series. If there are any nested dicts, these will be first converted to Series. If no columns are passed, the columns will be the sorted list of dict keys.
End of explanation
df.index
df.columns
Explanation: The row and column labels can be accessed respectively by accessing the index and columns attributes:
End of explanation
d = {'one' : [1., 2., 3.,4.], 'two' : [4., 3., 2., 1.]}
pd.DataFrame(d)
pd.DataFrame(d) + pd.DataFrame(d, index = ['a','b','c','d'])
Explanation: From dict of ndarrays / lists
The ndarrays must all be the same length. If an index is passed, it must clearly also be the same length as the arrays. If no index is passed, the result will be <code>range(n)</code>, where n is the array length.
End of explanation
data = np.zeros((2,), dtype=[('A', 'i4'),('B', 'f4'),('C', 'a10')])
print(data)
data[:] = [(1,2.,'Hello'), (2,3.,"World")]
print(data)
df = pd.DataFrame(data)
print(df)
df.dtypes
pd.DataFrame(data, index=['first', 'second'])
pd.DataFrame(data, columns=['C', 'A', 'B'])
Explanation: From structured or record array
This case is handled identically to a dict of arrays.
End of explanation
data2 = [{'a': 1, 'b': 2}, {'a': 5, 'b': 10, 'c': 20}]
pd.DataFrame(data2)
pd.DataFrame(data2, index=['first', 'second'])
pd.DataFrame(data2, columns=['a', 'b'])
Explanation: From a list of dicts
End of explanation
pd.DataFrame({('a', 'b'): {('A', 'B'): 1, ('A', 'C'): 2},
....: ('a', 'a'): {('A', 'C'): 3, ('A', 'B'): 4},
....: ('a', 'c'): {('A', 'B'): 5, ('A', 'C'): 6},
....: ('b', 'a'): {('A', 'C'): 7, ('A', 'B'): 8},
....: ('b', 'b'): {('A', 'D'): 9, ('A', 'B'): 10}})
Explanation: From a dict of tuples
You can automatically create a multi-indexed frame by passing a tuples dictionary
End of explanation
import pandas as pd
df = pd.DataFrame({"one":[1.6,2.2,3.4,3.5],"two":[1.5,2.1,3.9,np.nan],"three":[1.2,2.80,3.80,np.nan]})
print(df)
df['four'] = df['one'] * df['two']
df['flag'] = df['one'] > 2
df
type(df['one'])
Explanation: Alternate constructors
Column selection, addition, deletion
(from Pandas Docs)
You can treat a DataFrame semantically like a dict of like-indexed Series objects. Getting, setting, and deleting columns works with the same syntax as the analogous dict operations:
End of explanation
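The 'Alternate constructors' mentioned above are class methods such as DataFrame.from_dict and DataFrame.from_records; a minimal sketch:
pd.DataFrame.from_dict({'one': [1., 2., 3.], 'two': [4., 3., 2.]})
pd.DataFrame.from_records([(1, 'a'), (2, 'b')], columns=['num', 'letter'])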
del df['two']
three = df.pop('three')
df
Explanation: Columns can be deleted or popped like with a dict:
End of explanation
df['foo'] = 'bar'
df
Explanation: When inserting a scalar value, it will naturally be propagated to fill the column:
End of explanation
df['one_trunc'] = df['one'][:2]
df
df['one'][:2]
Explanation: When inserting a Series that does not have the same index as the DataFrame, it will be conformed to the DataFrame’s index:
End of explanation
df.insert(1, 'rand', np.random.randint(1,10,df["one"].size))
df
Explanation: You can insert raw ndarrays but their length must match the length of the DataFrame’s index.
By default, columns get inserted at the end. The insert function is available to insert at a particular location in the columns:
End of explanation
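To illustrate inserting a raw ndarray without disturbing the df used below, a sketch on a copy:
df_copy = df.copy()
df_copy['raw'] = np.arange(len(df_copy))  # a raw ndarray is accepted as long as its length matches the index
df_copy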
df
print("> Select column")
print(df["one"])
print("> Select row by index label")
print(df.loc[1])
print("> Select row by integer location")
print(df.iloc[1])
print("> Slice rows")
print(df[2:4])
print("> Select rows by boolean vector")
print(df[[True,True,True,False]])
Explanation: Indexing / Selection
The basics of indexing are as follows:
* df[col]: Select column (returns Series)
* df.loc[label]: Select row by label (returns Series)
* df.iloc[loc]: Select row by integer location (returns Series)
* df[5:10]: Slice rows (returns DataFrame)
* df[bool_vec]: Select rows by boolean vector (returns DataFrame)
Row selection, for example, returns a Series whose index is the columns of the DataFrame:
End of explanation
print("> Slice the whole DF")
print(df.loc[:,:])
print("> Slice one row")
print(df.loc[1,:])
print("> Slice one column")
print(df.loc[:,"one"])
print("> Slice two columns")
print(df.loc[:,["one","rand"]])
print("> Slice two columns and two rows")
print(df.loc[[1,2],["one","rand"]])
print("> Slice the whole DF")
print(df.iloc[:,:])
print("> Slice one row")
print(df.iloc[1,:])
print("> Slice one column")
print(df.iloc[:,1])
print("> Slice two columns")
print(df.iloc[:,[1,2]])
print("> Slice two columns and two rows")
print(df.iloc[[1,2],[1,2]])
Explanation: Both loc and iloc methods can be used to access cells or groups of cells. In this case the result is a DataFrame
End of explanation
import pandas as pd
import urllib2
url = 'https://raw.githubusercontent.com/jpatokal/openflights/master/data/airports.dat'
response = urllib2.urlopen(url)
head = ["Airport ID", "Name", "City", "Country", "IATA/FAA", "ICAO", "Latitude", "Longitude",
"Altitude", "Timezone", "DST", "Tz database time zone"]
data_frame = pd.read_csv(response,names=head)
data_frame.head()
data_frame["Name"].head()
(data_frame["Name"] + data_frame["ICAO"]).head()
data_frame["Altitude (m)"] = (data_frame["Altitude"] * 0.3048)
data_frame["Seaside"] = data_frame["Altitude (m)"] < 20
data_frame[data_frame["Seaside"]].head()
# Columns can be deleted or popped like with a dict:
del data_frame["Altitude"]
seaside = data_frame.pop('Seaside')
print(seaside.head())
data_frame.head()
# When inserting a scalar value, it will naturally be propagated to fill the column:
data_frame["Infrastructure"] = "Airport"
Explanation: Column selection, addition, deletion
We will go through these examples using a real dataset. Remember from the Series examples that we loaded a csv. We did it using the <code>csv</code> library; however, Pandas provides the necessary tools to read a csv file and output a DataFrame directly. Let's see an example using OpenFlights data. The data is structured as follows:
Airport ID Unique OpenFlights identifier for this airport.
Name Name of airport. May or may not contain the City name.
City Main city served by airport. May be spelled differently from Name.
Country Country or territory where airport is located.
IATA/FAA 3-letter FAA code, for airports located in Country "United States of America". 3-letter IATA code, for all other airports. Blank if not assigned.
ICAO 4-letter ICAO code. Blank if not assigned.
Latitude Decimal degrees, usually to six significant digits. Negative is South, positive is North.
Longitude Decimal degrees, usually to six significant digits. Negative is West, positive is East.
Altitude In feet.
Timezone Hours offset from UTC. Fractional hours are expressed as decimals, eg. India is 5.5.
DST Daylight savings time. One of E (Europe), A (US/Canada), S (South America), O (Australia), Z (New Zealand), N (None) or U (Unknown). See also: Help: Time
Tz database time zone Timezone in "tz" (Olson) format, eg. "America/Los_Angeles".
End of explanation
(data_frame.assign(hemisphere_north = lambda x: x['Latitude'].astype(float) > 0)).head(10)
Explanation: Assigning New Columns in Method Chains
Inspired by dplyr’s <code> mutate </code> verb, DataFrame has an assign() method that allows you to easily create new columns that are potentially derived from existing columns.
End of explanation |
470 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table style="border
Step1: This last statement will make some aggregation and mutation statements look a little different to the cheat sheet as spark functions will be prefixed by F.
The spark.read function will import data directly into our spark instance, without touching the environment in which Python is running.
Step2: When this is done, we are left with a Python object which is a logical pointer to a Spark DataFrame object.
Notice as well that Spark will give us some information on how it is breaking down the heavy lifting into discrete and parallelisable tasks.
2.2 Quality and completeness
Before we can get stuck into building predictive models, we need to first understand if any immediate attention is required to issues of data quality and completeness.
Like many data manipulation libraries, Spark makes its own decisions about how to read in data.
To check it has treated our data as we expected, we need to compare the schemata of our Spark DataFrames to that provided with the datasets
Step3: A small, reference dataset.
Next question
Step4: Spark has interpreted every field as a string. 🙄
There are two fields
Step5: Now we're ready to take a peek at the data.
Step6: And finally, check for any records with a missing value in critical fields.
Step7: 2.2.2 Customer-Offer History
Can you repeat the same operations here?
1. Count the records;
2. Inspect the type schema;
3. Convert miss-typed fields using withColumn() and cast();
4. Inspect the data; and
5. Check the dataset for missing values.
Step8: A larger dataset that relates offers and customers.
Step9: Same deal. Let's do some conversion to numeric and datetime types.
Step10: 2.2.3 Transactions
Step11: The largest of the three datasets.
Step12: 2.3 Exploration
Let's begin interpreting the contents of these sets, starting with the range of dates over which the offers were presented to customers.
Step13: What is the frequency of records across some of the categorical variables?
Step14: Probably too many of these to do anything useful without grouping.
Try for yourself with the market variable.
Step15: Slightly more usable, perhaps we'll come back to this.
Step16: First insight
Step17: There's a hypothesis emerging here that our larger markets may show the strongest response to offers. Let's plot it to check.
Step18: There is a weak relationship there, but we probably won't want to employ something as nuanced as this in our first iteration of analysis. Interesting to know though!
We understand a little bit of the offer history data. Let's just check for missing values.
Now we need to repeat the process across our other datasets. Let's start with the offers.
Step19: Looks as though a small number of the offers have a different quantity value.
Step20: Interesting. Is this still applicable when we join offers to our history dataset?
Step21: No. In which case, it's not going to be significant for our analysis and modelling.
Do any of the categorical fields have few enough levels to enter into a simple classification model?
Step22: These might work in a classification tree model which automatically groups, or as a binned aggregate measure of the response rate across each.
Let's move on to the transactions.
What is the range of dates of our transactions?
Step23: Right, so up to a year before the offers were presented.
What are the ranges of purchasequantity and purchaseamount?
Step24: OK, we have some returns data in here, too. We may need to account for this in a future iteration of modelling work.
2.4 Activity 2
Step25: The graph above is a little messy and difficult to interpret.
Let's use a very quick and dirty approach to extracting the month, plot that and see if this is any easier to interpret.
Step26: We are now able to discriminate between the periods in our analysis, but we've lost the interesting pattern we saw before.
Extension tasks
Try producing a summary of counts by week.
Hint
Step27: 2.4.2 Data understanding learning challenge 2
In the data understanding phase, we run lots of operations against our data, and we need these to be optimised in order to complete the task in a timely fashion. In our case, the key differentiator is whether the function requires creation of a Python object (e.g. using Pandas), or whether it can run on a Spark data frame. Let's try using Pandas' implementation of 'describe'
Step28: Pyspark.sql also has an implementation of describe(), which we saw earlier. Note
Step29: From the exercise, it looks like there is something strange in one of the columns. Are all of the values positive and with a reasonable range? Let's take a look at some of the negative values
Step30: <a id="prepare"></a>
3. Data preparation
The output of this phase is a dataset with which we can build and test models.
3.1 Summarising data for use in modelling
Given that the aim of the task is to make customer-offer level predictions about likelihood to become a repeat purchaser, data that we use from the offers or transactions datasets will need to be joined to the history dataset.
We have also observed that the transactions dataset contains a large volume of data, too much to enter into a model without aggregation to the customer, offer level. In aggregating this, our goal is to select an approach which generates features that
Step31: Calculate "history" interval dates based on offerdate.
Step32: We can employ a Spark "user defined function" to create corresponding aggregation flags to identify whether the transaction in scope of one of the history periods.
Step33: At this point we can calculate the quantity and spend per customer per offer.
As an extension, you could join onto the offers table and create equivalent measures for quantity and spend in the same brand, company and category as the offer presented to the customer.
Step34: 3.2 Reshaping data
What is the average spend in these intervals?
Spark will allow us to calculate this quite easily. In order to plot this nicely, we will need help from the Python data wrangling library of choice
Step35: 3.3 Divertissement
Step36: This gives us a very small level of interaction - but much more is possible!
You may have to move outside of the notebook environment to do this though (using output_file, for example).
Bokeh interactivity docs and examples
3.4 Activity 3
Step37: We can use these to create a series of flags at the customer level which may be useful in the classification task. The task here is to create a flag for every customer, showing whether they have shopped in each of
Step38: Inspect and generate some summary statistics for these fields.
Step39: Last piece of the puzzle
Step40: Qs.
Are these good variables to add into the model?
How else could we test their suitability?
If we built these flags for a hundred departments
Step41: Moving on to (2), let's calculate a new column chain_market_rank within historyCustCount showing the ranking of chains by customer count within each market.
Hint
Step42: Is this a good variable to use in our model? Let's plot the distribution of values as a histogram.
Again, Pandas has a very neat function
Step43: Do you think this would be informative?
What other analysis could you do to support your assertion?
For completeness, let's go ahead and join this back to our original trainHistory dataframe.
Step44: We don't need customer_count, let's go ahead and drop it.
Step45: 3.4.3 Data prep learning challenge 3
This challenge is more open ended
Step46: <a id="model"></a>
4. Modelling experiments
We're now ready to have a first pass at building a model.
4.1 Holdout partitioning
For those who have worked in targeted marketing, this approach will be quite familiar. The premise is to train a model based on one part of your dataset and evaluate its performance using the other.
Spark has a convenience function to do just that
Step47: Let's also cache these dataframes like we did with our original data sets.
Step48: Spark has an idiosyncratic way of taking input and we will have to employ the 'VectorAssembler' function to bundle up the features for us to input into the model.
Step49: We now have a fitted model!
What values have been chosen for the parameters?
Step50: Not bad, the competition benchmark for this dataset is 0.59.
Before we start extending this, we should perform some diagnostics.
First, let's look at where the misclassifications are occurring by creating a contingency table (a.k.a. confusion matrix).
Step51: What does this tell us? Is this a bad model?
And of course we should check if our model generalises well by scoring and evaluating the test set.
4.2 Modeling Exercises
4.2.1 Exercise 1
Step52: Build a confusion matrix to calculate rate of misclassification
Step53: 4.2.2 Exercise 2
Step54: The first thing we did with our logistic regression was to look at the parameter values. There is no equivalent for decision trees. Instead, there is a featureImportances object which will give us similar useful information about the model.
Extract it from the pipeline in the same way we did for the coefficients of our logistic regression.
Step55: Go ahead and measure the AUC metric as before using the train and test sets.
Step56: Now, there is a stochastic element to the building of these models, but in preparation for Cognihack, something felt strange about the AUC we were getting. Have a chat with your teams about why this may be the case, in particular in the context of parallel computing.
Extension
Step57: Tip
Step58: Create model artifact (abstraction layer).
Step59: Tip
Step60: Get saved model metadata from Watson Machine Learning.
Tip
Step61: Tip
Step62: You can now create an online scoring endpoint. Execute the following sample code that uses the modelVersionHref value to create the scoring endpoint to the Bluemix repository.
Step63: Let's see what happens when we send a PUT request to our new endpoint containing a new scoring record. The model should hopefully return some predictions. | Python Code:
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
Explanation: <table style="border: none" align="left">
<tr style="border: none">
<th style="border: none"><img src="http://i.imgur.com/o1X3CAd.jpg" alt="Icon" align="left"></th>
</tr>
<tr style="border: none">
<th style="border: none"><font face="verdana" size="6" color="black"><b>Cognihack Data Science Track</b></font></th>
</tr>
<tr style="border: none">
<th style="border: none"><font face="verdana" size="5" color="black"><b>Predicting repeat shopping likelihood with Python, Spark and Watson Machine Learning</b></font></th>
</tr>
<tr style="border: none">
<th style="border: none"><img src="https://kaggle2.blob.core.windows.net/competitions/kaggle/3897/media/shoppers_lores.png" alt="Icon" align="left"> </th>
</tr>
</table>
This notebook will take you through the process of creating a predictive model in Python using the data manipulation and machine learning libraries distributed with Spark.
- The data we are using here is an open source dataset prepared for a machine learning challenge hosted on the Kaggle website.
Once we've worked through the process of reading, understanding and preparing our data and have built a simple model together, we'll deploy it to the Watson Machine Learning service and make it available as a real-time scoring service.
You should spend the remaining time working as a group to speculate on how you might improve these predictions. The cognihack tutors will endeavour to assist with any experimentation to help you create and evaluate refinements to the baseline model.
Learning goals
The learning goals of this exercise are:
Loading CSV files into an Apache® Spark DataFrame.
Exploring the data using the features within:
a) Spark's data wrangling Python API: pyspark.sql;
b) the pandas data wrangling library; and
c) matplotlib for exploratory plots.
Engineering some basic predictive features, again using pyspark.sql and Spark user defined functions (UDFs).
Preparing the data for training and evaluation.
Creating an Apache® Spark machine learning pipeline.
Training and evaluating a model.
Persisting a pipeline and model in Watson Machine Learning repository.
Deploying the model for online scoring using the Watson Machine Learning API.
Scoring sample data using the Watson Machine Learning API.
Contents
This notebook contains the following parts:
Setup
Load and understand data
Prepare dataset
Create a basic model
Deploy and score
Taking it further
<a id="setup"></a>
1. Setup
Before we begin working through this notebook, you must perform the following setup tasks:
Sign up for the IBM Data Science Experience (using w3 credentials) and create a new project;
Make sure that you are using a Spark 2.0 kernel and Python 2.x; and
Create a Watson Machine Learning Service instance (a free plan is offered).
<a id="load"></a>
2. Load and explore data
2.1 Load the data
The first step in our analysis process is to bring our data into the Spark environment.
We will do this by reading directly from a Bluemix hosted dashDB instance.
pyspark.sql will help us load and manipulate our data from Python while it resides in Spark.
End of explanation
spark = SparkSession.builder.getOrCreate()
dash = {
'jdbcurl': 'jdbc:db2://dashdb-entry-yp-dal09-10.services.dal.bluemix.net:50000/BLUDB',
'user': 'dash14210',
'password': 'W@sI$Ea8H8fz',
}
offers = spark.read.jdbc(dash['jdbcurl'],
table='DASH14210.OFFERS',
properties={"user" : dash["user"],
"password" : dash["password"]})
trainHistory = spark.read.jdbc(dash['jdbcurl'],
table='DASH14210.HISTORY',
properties={"user" : dash["user"],
"password" : dash["password"]})
transactions = spark.read.jdbc(dash['jdbcurl'],
table='DASH14210.TRANSACTIONS',
properties={"user" : dash["user"],
"password" : dash["password"]})
offers.cache()
trainHistory.cache()
transactions.cache()
Explanation: This last statement will make some aggregation and mutation statements look a little different to the cheat sheet as spark functions will be prefixed by F.
The spark.read function will import data directly into our spark instance, without touching the environment in which Python is running.
End of explanation
offers.count()
Explanation: When this is done, we are left with a Python object which is a logical pointer to a Spark DataFrame object.
Notice as well that Spark will give us some information on how it is breaking down the heavy lifting into discrete and parallelisable tasks.
2.2 Quality and completeness
Before we can get stuck into building predictive models, we need to first understand if any immediate attention is required to issues of data quality and completeness.
Like many data manipulation libraries, Spark makes its own decisions about how to read in data.
To check it has treated our data as we expected, we need to compare the schemata of our Spark DataFrames to that provided with the datasets:
history
id - A unique id representing a customer
chain - An integer representing a store chain
offer - An id representing a certain offer
market - An id representing a geographical region
repeattrips - The number of times the customer made a repeat purchase
repeater - A boolean, equal to repeattrips > 0
offerdate - The date a customer received the offer
transactions
id - see above
chain - see above
dept - An aggregate grouping of the Category (e.g. water)
category - The product category (e.g. sparkling water)
company - An id of the company that sells the item
brand - An id of the brand to which the item belongs
date - The date of purchase
productsize - The amount of the product purchase (e.g. 16 oz of water)
productmeasure - The units of the product purchase (e.g. ounces)
purchasequantity - The number of units purchased
purchaseamount - The dollar amount of the purchase
offers
offer - see above
category - see above
quantity - The number of units one must purchase to get the discount
company - see above
offervalue - The dollar value of the offer
brand - see above
The transactions file can be joined to the history file by (id, chain). The history file can be joined to the offers file by (offer). The transactions file can be joined to the offers file by (category, brand, company). A negative value in purchasequantity and purchaseamount indicates a return.
While we're at it, let's also see how many observations each dataset has, take a peek at the data and look for any missing values.
2.2.1 Offer details
First up, the count of observations in the dataset.
End of explanation
offers.schema
Explanation: A small, reference dataset.
Next question: how is the data typed within the Spark DataFrame?
End of explanation
offers = offers.withColumn("offervalue", offers["offervalue"].cast("double"))
offers = offers.withColumn("quantity", offers["quantity"].cast("double"))
Explanation: Spark has interpreted every field as a string. 🙄
There are two fields: offervalue and quantity that definitely should not be strings - let's fix them up now.
End of explanation
offers.show()
Explanation: Now we're ready to take a peek at the data.
End of explanation
offers.where(offers.offer.isNull() |
offers.category.isNull() |
offers.quantity.isNull() |
offers.company.isNull() |
offers.offervalue.isNull() |
offers.brand.isNull()).count()
Explanation: And finally, check for any records with a missing value in critical fields.
End of explanation
trainHistory.count()
Explanation: 2.2.2 Customer-Offer History
Can you repeat the same operations here?
1. Count the records;
2. Inspect the type schema;
3. Convert miss-typed fields using withColumn() and cast();
4. Inspect the data; and
5. Check the dataset for missing values.
End of explanation
trainHistory.schema
Explanation: A larger dataset that relates offers and customers.
End of explanation
trainHistory = trainHistory.withColumn("repeattrips", trainHistory["repeattrips"].cast("double"))
trainHistory = trainHistory.withColumn("repeater", trainHistory["repeater"].cast("boolean"))
trainHistory = trainHistory.withColumn("repeater", trainHistory["repeater"].cast("double"))
trainHistory = trainHistory.withColumn("offerdate", trainHistory["offerdate"].cast("date"))
trainHistory.show()
trainHistory.where(trainHistory.chain.isNull() |
trainHistory.market.isNull() |
trainHistory.repeattrips.isNull() |
trainHistory.repeater.isNull() |
trainHistory.offerdate.isNull()).count()
Explanation: Same deal. Let's do some conversion to numeric and datetime types.
End of explanation
transactions.count()
Explanation: 2.2.3 Transactions
End of explanation
transactions.schema
transactions = transactions.withColumn("date", transactions["date"].cast("date"))
transactions = transactions.withColumn("productsize", transactions["productsize"].cast("double"))
transactions = transactions.withColumn("purchasequantity", transactions["purchasequantity"].cast("double"))
transactions = transactions.withColumn("purchaseamount", transactions["purchaseamount"].cast("double"))
transactions.show()
transactions.where(transactions.id.isNull() |
transactions.chain.isNull() |
transactions.dept.isNull() |
transactions.category.isNull() |
transactions.company.isNull() |
transactions.brand.isNull() |
transactions.date.isNull() |
transactions.productsize.isNull() |
transactions.productmeasure.isNull() |
transactions.purchasequantity.isNull() |
transactions.purchaseamount.isNull()).count()
Explanation: The largest of the three datasets.
End of explanation
trainHistory.agg(
F.min("offerdate").alias("offerdate_min")
, F.max("offerdate").alias("offerdate_max")).show()
Explanation: 2.3 Exploration
Let's begin interpreting the contents of these sets, starting with the range of dates over which the offers were presented to customers.
End of explanation
trainHistory.groupBy(trainHistory.chain).count().orderBy("count", ascending = False).show(n = 50)
Explanation: What is the frequency of records across some of the categorical variables?
End of explanation
trainHistory.groupBy(trainHistory.market).count().orderBy("count", ascending = False).show(n = 50)
Explanation: Probably too many of these to do anything useful without grouping.
Try for yourself with the market variable.
End of explanation
trainHistory.describe(["repeattrips", "repeater"]).show()
Explanation: Slightly more usable, perhaps we'll come back to this.
End of explanation
trainHistory.groupBy(trainHistory.market).agg(
F.count("id").alias("customer_count")
, F.avg("repeater").alias("response_rate")
).orderBy("response_rate", ascending = False).show()
Explanation: First insight: 27% of customers to whom an offer is made become repeat shoppers. Does this vary across market?
End of explanation
count_vs_rate = trainHistory.groupBy(trainHistory.market).agg(
F.count("id").alias("customer_count")
, F.avg("repeater").alias("response_rate")
).orderBy("response_rate", ascending = False).toPandas()
%matplotlib inline
import matplotlib.pyplot as plt
count_vs_rate.plot(kind='scatter', x='customer_count', y='response_rate')
count_vs_rate[count_vs_rate.customer_count < 40000].plot(kind='scatter', x='customer_count', y='response_rate')
Explanation: There's a hypothesis emerging here that our larger markets may show the strongest response to offers. Let's plot it to check.
End of explanation
offers.describe(["quantity", "offervalue"]).show()
Explanation: There is a weak relationship there, but we probably won't want to employ something as nuanced as this in our first iteration of analysis. Interesting to know though!
We understand a little bit of the offer history data. Let's just check for missing values.
Now we need to repeat the process across our other datasets. Let's start with the offers.
End of explanation
offers.groupBy("quantity").count().show()
Explanation: Looks as though a small number of the offers have a different quantity value.
End of explanation
offers[offers.quantity == 2].show()
trainHistory[trainHistory.offer=="1221658"].count()
Explanation: Interesting. Is this still applicable when we join offers to our history dataset?
End of explanation
offers.groupBy("company").count().orderBy("count", ascending = False).show()
offers.groupBy("brand").count().orderBy("count", ascending = False).show()
offers.groupBy("category").count().orderBy("count", ascending = False).show()
Explanation: No. In which case, it's not going to be significant for our analysis and modelling.
Do any of the categorical fields have few enough levels to enter into a simple classification model?
End of explanation
transactions.agg(
F.min("date").alias("date_min")
,F.max("date").alias("date_max")).show()
Explanation: These might work in a classification tree model which automatically groups, or as a binned aggregate measure of the response rate across each.
Let's move on to the transactions.
What is the range of dates of our transactions?
End of explanation
transactions.describe(["productsize"
,"productmeasure"
,"purchasequantity"
, "purchaseamount"]).show()
Explanation: Right, so up to a year before the offers were presented.
What are the ranges of purchasequantity and purchaseamount?
End of explanation
datatoplot = trainHistory.groupBy(?).?
datatoplot.orderBy(?).show()
datatoplot.orderBy(?).?.plot(kind=?, x=?, y=?)
Explanation: OK, we have some returns data in here, too. We may need to account for this in a future iteration of modelling work.
2.4 Activity 2: Data understanding learning challenges
2.4.1 Data understanding learning challenge 1
Is there any pattern to the number of records in our history dataset by offerdate? Try plotting a graph to investigate.
End of explanation
newdata = trainHistory.withColumn("offermonth", datatoplot["offerdate"]??)
datatoplot = newdata.groupBy(?).? #aggregate the data
datatoplot.? # plot the chart
Explanation: The graph above is a little messy and difficult to interpret.
Let's use a very quick and dirty approach to extracting the month, plot that and see if this is any easier to interpret.
End of explanation
newdata = trainHistory.withColumn("offerweek", ?)
datatoplot = newdata.groupBy(?).?
datatoplot.?.?.plot(?)
newdata = trainHistory.withColumn("dayofweek", ?)
# hints: https://spark.apache.org/docs/1.6.1/api/python/pyspark.sql.html#pyspark.sql.functions.date_format
# a format string of "E" will return a day of the week abbreviation
datatoplot = newdata.groupBy(?).?
datatoplot.?.?.plot(?)
Explanation: We are now able to discriminate between the periods in our analysis, but we've lost the interesting pattern we saw before.
Extension tasks
Try producing a summary of counts by week.
Hint: take a look at the resample() function within Pandas or the pyspark.sql.functions functions next_day() or weekofyear().
Is there a pattern relating to the day of the week offers are sent out? Is this an important indicator in response likelihood?
Hint: pyspark.sql.functions.date_format() may be helpful.
End of explanation
trainHistory.toPandas().?
offers.?.?
Explanation: 2.4.2 Data understanding learning challenge 2
In the data understanding phase, we run lots of operations against our data, and we need these to be optimised in order to complete the task in a timely fashion. In our case, the key differentiator is whether the function requires creation of a Python object (e.g. using Pandas), or whether it can run on a Spark data frame. Let's try using Pandas' implementation of 'describe'
End of explanation
transactions.agg(
F.min("chain").alias("chain_min")
,F.max("chain").alias("chain_max")
,F.min("?").alias("?")
,F.max("?").alias("?")
?
?
?
?
?
?
).show()
Explanation: Pyspark.sql also has an implementation of describe(), which we saw earlier. Note: be careful which order you run code, as you may need to declare which module to use a function from. Let's try to use a different set of Pyspark functionality to find the range of our continuous variables.
End of explanation
transactions[transactions.??].show()
Explanation: From the exercise, it looks like there is something strange in one of the columns. Are all of the values positive and with a reasonable range? Let's take a look at some of the negative values
End of explanation
offertxns = transactions.join(trainHistory.select(["id" , "chain", "offer", "offerdate", "repeater"]), ["id", "chain"], how = "inner")
offertxns.show(n=5)
Explanation: <a id="prepare"></a>
3. Data preparation
The output of this phase is a dataset with which we can build and test models.
3.1 Summarising data for use in modelling
Given that the aim of the task is to make customer-offer level predictions about likelihood to become a repeat purchaser, data that we use from the offers or transactions datasets will need to be joined to the history dataset.
We have also observed that the transactions dataset contains a large volume of data, too much to enter into a model without aggregation to the customer, offer level. In aggregating this, our goal is to select an approach which generates features that:
a) retain as much information about the behaviour of these customers as possible; and
b) will be usable in our model (some algorithms can only accept numerical inputs, for example).
As a starter set, we will simply measure how much each customer had spent in the 30, 60, 90 and 180 days prior to being made an offer.
To do so, we will first need to join the offer history and transactions tables.
End of explanation
offertxns = offertxns.withColumn("offerdate_30", F.date_sub(offertxns.offerdate, 30))
offertxns = offertxns.withColumn("offerdate_60", F.date_sub(offertxns.offerdate, 60))
offertxns = offertxns.withColumn("offerdate_90", F.date_sub(offertxns.offerdate, 90))
offertxns = offertxns.withColumn("offerdate_180", F.date_sub(offertxns.offerdate, 180))
offertxns.show(n=5)
Explanation: Calculate "history" interval dates based on offerdate.
End of explanation
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType
def inDateRange(date, date_lower, date_upper):
if date >= date_lower and date <= date_upper: return 1
else: return 0
udfInDateRange = udf(inDateRange, IntegerType())
offertxns = offertxns.withColumn("offerdate_30_tf", udfInDateRange(offertxns.date, offertxns.offerdate_30, offertxns.offerdate))
offertxns = offertxns.withColumn("offerdate_60_tf", udfInDateRange(offertxns.date, offertxns.offerdate_60, offertxns.offerdate))
offertxns = offertxns.withColumn("offerdate_90_tf", udfInDateRange(offertxns.date, offertxns.offerdate_90, offertxns.offerdate))
offertxns = offertxns.withColumn("offerdate_180_tf", udfInDateRange(offertxns.date, offertxns.offerdate_180, offertxns.offerdate))
offertxns.show(n=5)
Explanation: We can employ a Spark "user defined function" to create corresponding aggregation flags to identify whether the transaction in scope of one of the history periods.
End of explanation
offertxns = offertxns.withColumn("offerdate_30_qty", offertxns.purchasequantity * offertxns.offerdate_30_tf)
offertxns = offertxns.withColumn("offerdate_60_qty", offertxns.purchasequantity * offertxns.offerdate_60_tf)
offertxns = offertxns.withColumn("offerdate_90_qty", offertxns.purchasequantity * offertxns.offerdate_90_tf)
offertxns = offertxns.withColumn("offerdate_180_qty", offertxns.purchasequantity * offertxns.offerdate_180_tf)
offertxns = offertxns.withColumn("offerdate_30_amt", offertxns.purchaseamount * offertxns.offerdate_30_tf)
offertxns = offertxns.withColumn("offerdate_60_amt", offertxns.purchaseamount * offertxns.offerdate_60_tf)
offertxns = offertxns.withColumn("offerdate_90_amt", offertxns.purchaseamount * offertxns.offerdate_90_tf)
offertxns = offertxns.withColumn("offerdate_180_amt", offertxns.purchaseamount * offertxns.offerdate_180_tf)
offertxns.show(n=5)
offertxnsSum = offertxns.groupBy(["id", "chain", "offer", "offerdate", "repeater"]).agg(
F.sum("offerdate_30_qty").alias("qty_30")
, F.sum("offerdate_60_qty").alias("qty_60")
, F.sum("offerdate_90_qty").alias("qty_90")
, F.sum("offerdate_180_qty").alias("qty_180")
, F.sum("offerdate_30_amt").alias("amt_30")
, F.sum("offerdate_60_amt").alias("amt_60")
, F.sum("offerdate_90_amt").alias("amt_90")
, F.sum("offerdate_180_amt").alias("amt_180"))
offertxnsSum.show(n=5)
Explanation: At this point we can calculate the quantity and spend per customer per offer.
As an extension, you could join onto the offers table and create equivalent measures for quantity and spend in the same brand, company and category as the offer presented to the customer.
End of explanation
import pandas as pd
average_spend = offertxnsSum.groupBy("repeater").agg(
F.avg("amt_30").alias("30")
, F.avg("amt_60").alias("60")
, F.avg("amt_90").alias("90")
, F.avg("amt_180").alias("180")).toPandas()
average_spend_melt = pd.melt(average_spend, id_vars = "repeater", var_name = "interval_days", value_name = "spend_ave")
average_spend_melt["interval_days"] = pd.to_numeric(average_spend_melt["interval_days"])
average_spend_melt.head()
average_spend_melt.plot(kind='scatter', x='interval_days', y='spend_ave', c="repeater")
Explanation: 3.2 Reshaping data
What is the average spend in these intervals?
Spark will allow us to calculate this quite easily. In order to plot this nicely, we will need help from the Python data wrangling library of choice: Pandas. Luckily Spark also offers an easy way to translate between the two types of object using the .toPandas() function.
End of explanation
from bokeh.plotting import figure
from bokeh.io import output_notebook, show
from bokeh.charts import Scatter
from bokeh.palettes import brewer
output_notebook()
p = Scatter(average_spend_melt,
x="interval_days",
y="spend_ave",
color = "repeater",
title = "Comparison: purchase intervals and average spends",
xlabel = "Purchase analysis interval",
ylabel = "Average spend",
palette = brewer["Dark2"][3])
show(p)
Explanation: 3.3 Divertissement: interactive charting
(Maybe useful for the hack later...)
If we want to add some interactivity to our charts, one great option is the Bokeh library.
End of explanation
(transactions
.sample(False, 0.01, 42)
.groupBy(?)
.agg(?.alias("transaction_count"))
.orderBy(?)
.show(?))
Explanation: This gives us a very small level of interaction - but much more is possible!
You may have to move outside of the notebook environment to do this though (using output_file, for example).
Bokeh interactivity docs and examples
3.4 Activity 3: Data prep learning challenges
3.4.1 Data prep learning challenge 1
Let's apply some of what we've looked at together and engineer a new feature that may or may not be predictive and useful, but will at least give us some experience of working with the summarisation capability of pyspark.sql.
What we are looking to do is examine which departments customers have shopped into as a way of classifying their habits.
Let's start by identifying the five most popular departments in our transactions set. We can sample our data to achieve this quickly.
End of explanation
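For reference, this is roughly how the Bokeh chart p built earlier could be written out as a standalone interactive HTML page, as hinted above (a sketch; the filename is arbitrary):
from bokeh.io import output_file, save
output_file("average_spend.html")  # subsequent save()/show() calls target this file
save(p)                            # writes a self-contained, interactive HTML page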
customerDepts = (transactions
.withColumn(?, (?).?) # check out the cheat sheet for help here!
...
.groupBy(?)
.agg(?.alias("dept_99"),
...)) # how would you aggregate this to get a 'per customer' answer?
Explanation: We can use these to create a series of flags at the customer level which may be useful in the classification task. The task here is to create a flag for every customer, showing whether they have shopped in each of: department 99, department 35, 37, 56 and 26.
End of explanation
customerDepts.?
customerDepts.?.?
Explanation: Inspect and generate some summary statistics for these fields.
End of explanation
customerDepts.?.corr()
Explanation: Last piece of the puzzle: let's measure the level of correlation of these predictors.
(If everybody buys from the same departments then these will not be good predictor variables to use in a model.)
Pandas has a very slick way of doing this via. the .corr() function, but what do we need to do first to allow us to use our summary data?
End of explanation
historyCustCount = (trainHistory
.groupBy(?,?)
.agg(?))
historyCustCount.show()
historyCustCount.count()
Explanation: Qs.
Are these good variables to add into the model?
How else could we test their suitability?
If we built these flags for a hundred departments: how could we use this information but still only add a small number of additional predictors into our model?
3.4.2 Data prep learning challenge 2
How could we add a new column to our offer history data, expressing the rank of the chain (by number of customers) within the market?
Wrangling pattern:
Summarise customer counts by chain and market;
Apply the rank() function to customer counts by market; then
Join back to original dataframe.
Starting with (1) - can you use the agg() function to summarise the data appropriately?
- Name your column of counts: customer_count
- Name your new dataframe: historyCustCount
End of explanation
from ? import ?
w = ?.partitionBy(?).orderBy(?)
historyCustCount = historyCustCount.withColumn(?, ?.over(w))
historyCustCount.show()
Explanation: Moving on to (2), let's calculate a new column chain_market_rank within historyCustCount showing the ranking of chains by customer count within each market.
Hint: as this is a window function (calculates a quantity within a partition, in this case market), you'll need to specify the window specification using Window which is available in the pyspark.sql.window library.
End of explanation
historyCustCount.?.hist(?) # check out the pandas docs for help with the arguments to hist()
# https://pandas.pydata.org/pandas-docs/stable/visualization.html#visualization-hist
Explanation: Is this a good variable to use in our model? Let's plot the distribution of values as a histogram.
Again, Pandas has a very neat function: .hist() that allows us to plot histograms quickly.
End of explanation
trainHistory = ?.join(?) # pyspark cheat sheet will help you here
trainHistory.show()
Explanation: Do you think this would be informative?
What other analysis could you do to support your assertion?
For completeness, let's go ahead and join this back to our original trainHistory dataframe.
End of explanation
trainHistory = trainHistory.?
trainHistory.show()
Explanation: We don't need customer_count, let's go ahead and drop it.
End of explanation
firstCustTrans = (transactions
.groupBy(?)
.agg(?.alias("first_purch_date")))
firstCustTrans.show()
firstCustTrans = (trainHistory
.select("id", "offerdate")
.join(firstCustTrans, ?)
.withColumn("shop_history_interval", ?))
firstCustTrans.show()
firstCustTrans.sample(?).?.?
Explanation: 3.4.3 Data prep learning challenge 3
This challenge is more open ended: you will work from a specification to produce a new predictor variable (i.e. without direction from the script).
This is the specification you have been provided:
Calculating "Customer first transaction interval"
We are hoping to create a variable to encapsulate information about how long a customer began shopping with the company prior to being offered an incentive.
To do this, you will need to:
1. Find the first transaction for each customer in the transactions dataset;
2. Compare that to the offerdate in the offer history dataset and calculate the number of days between the two (the datediff() function will help); and
3. Plot (or otherwise analyse) the distribution and make a decision about whether you would include this in a model. You might consider sampling your data before plotting.
End of explanation
offertxnsSum = offertxnsSum.withColumnRenamed("repeater","label")
splits = offertxnsSum.randomSplit([0.7,0.3])
trainSet = splits[0]
testSet = splits[1]
trainSet.show(n=5)
trainSet.count()
Explanation: <a id="model"></a>
4. Modelling experiments
We're now ready to have a first pass at building a model.
4.1 Holdout partitioning
For those who have worked in targeted marketing, this approach will be quite familiar. The premise is to train a model based on one part of your dataset and evaluate its performance using the other.
Spark has a convenience function to do just that: randomSplit(). The argument supplied to this function is an array of weights specifying the proportions by which the dataset should be split.
End of explanation
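If reproducible partitions are needed, randomSplit() also accepts an optional seed (a small sketch):
splits_repro = offertxnsSum.randomSplit([0.7, 0.3], seed=42)  # the same split on every run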
trainSet.cache()
testSet.cache()
Explanation: Let's also cache these dataframes like we did with our original data sets.
End of explanation
from pyspark.ml.feature import VectorAssembler
assembler = VectorAssembler(
inputCols=["qty_30", "qty_60", "qty_90", "qty_180",
"amt_30", "amt_60", "amt_90", "amt_180"],
outputCol="features")
trainSetAssembled = assembler.transform(trainSet)
trainSetAssembled.show(n=5, truncate = False)
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
lr = LogisticRegression()
pipeline = Pipeline(stages=[assembler, lr])
model = pipeline.fit(trainSet)
Explanation: Spark has an idiosyncratic way of taking input: its ML algorithms expect all the features in a single vector column, so we will employ the VectorAssembler transformer to bundle up the feature columns before passing them into the model.
End of explanation
#[stage.coefficients for stage in model.stages if hasattr(stage, "coefficients")]
model.stages[1].coefficients
prediction = model.transform(trainSet)
prediction.select("label","prediction", "probability", "features").show(5)
from sklearn.metrics import roc_curve
prediction_collect = prediction.select("label", "probability").toPandas()
roc_inputs = [(float(i[1][0]), float(i[1][1][1])) for i in prediction_collect.iterrows()]
roc_inputs = pd.DataFrame(roc_inputs, columns = ["label","prob"])
fpr, tpr, _ = roc_curve(y_true = roc_inputs.label,
y_score = roc_inputs.prob)
roc_lr_train = pd.DataFrame(
{"FPR" : fpr
,"TPR" : tpr})
fig = plt.figure()
ax = plt.axes()
ax.set(title = "Receiver-Operator Characteristic",
xlabel = "False Positive Rate",
ylabel = "True Positive Rate")
x = [0,1]
y = [0,1]
ax.plot(x, y)
ax.plot(roc_lr_train.FPR, roc_lr_train.TPR)
plt.show()
from pyspark.ml.evaluation import BinaryClassificationEvaluator
evaluator = BinaryClassificationEvaluator(
labelCol="label", rawPredictionCol="rawPrediction", metricName="areaUnderROC")
accuracy = evaluator.evaluate(prediction)
print "Area under the ROC curve = %g " % accuracy
Explanation: We now have a fitted model!
What values have been chosen for the parameters?
End of explanation
prediction_collect = prediction.toPandas()
pd.crosstab(prediction_collect.label, prediction_collect.prediction)
Explanation: Not bad: the competition benchmark for this dataset is 0.59.
Before we start extending this, we should perform some diagnostics.
First, let's look at where the misclassifications are occurring by creating a contingency table (a.k.a. confusion matrix).
End of explanation
prediction = ?
prediction.select("label","prediction", "probability", "features").show(5)
Explanation: What does this tell us? Is this a bad model?
And of course we should check if our model generalises well by scoring and evaluating the test set.
4.2 Modeling Exercises
4.2.1 Exercise 1: Scoring on the test set
Go ahead and use the pipeline we've already developed to transform the test set data.
When you've done that, plot a ROC curve for the predictions and measure the AUC.
Discuss and draw some conclusions about whether the model is generalising well.
End of explanation
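One possible solution sketch, reusing the fitted pipeline (model) and the evaluator defined earlier:
prediction = model.transform(testSet)
prediction.select("label", "prediction", "probability", "features").show(5)
accuracy = evaluator.evaluate(prediction)
print "Area under the ROC curve (test set) = %g " % accuracy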
prediction_collect = prediction.?
pd.crosstab(?)
prediction_collect = prediction.? # different to the previous cell! Check out section 4.1 for help...
roc_inputs = ?
roc_lr_test = ?
fig = plt.figure()
ax = plt.axes()
ax.set(title = "Receiver-Operator Characteristic",
xlabel = "False Positive Rate",
ylabel = "True Positive Rate")
x = [0,1]
y = [0,1]
ax.plot(x, y)
ax.plot(roc_lr_train.FPR, roc_lr_train.TPR)
ax.plot(?) # this is where you would plot your test set performance
# specify a different line colour or style to differentiate from the training set performance.
plt.show()
accuracy = ?
print "Area under the ROC curve = %g " % accuracy
Explanation: Build a confusion matrix to calculate rate of misclassification
End of explanation
from pyspark.ml.classification import ?
from pyspark.ml.feature import StringIndexer
si = StringIndexer(inputCol="label", outputCol="indexed")
dt = DecisionTreeClassifier(labelCol=?)
pipeline_dt = Pipeline(stages=[?, ?, ?])
model_dt = ?
Explanation: 4.2.2 Exercise 2: Creating a Decision Tree
We've seen how to create a logistic regression model. However, this is a parametric model and requires making some distributional assumptions about our data. In some cases this is not appropriate and we need to use a non-parametric method. Let's go through the same approach using pyspark.ml.classification, and fit a decision tree.
Slight additional complexity here: regardless of the original data type of the target variable, this algorithm requires you to have processed it with StringIndexer (from pyspark.ml.feature) prior to sending it to the model. Hence we have an additional stage to this pipeline.
End of explanation
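A sketch of how the blanks above might be filled in, reusing the assembler and trainSet from earlier:
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import StringIndexer
si = StringIndexer(inputCol="label", outputCol="indexed")
dt = DecisionTreeClassifier(labelCol="indexed")
pipeline_dt = Pipeline(stages=[assembler, si, dt])
model_dt = pipeline_dt.fit(trainSet)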
model_dt.stages[?].?
Explanation: The first thing we did with our logistic regression was to look at the parameter values. There is no equivalent for decision trees. Instead, there is a featureImportances object which will give us similar useful information about the model.
Extract it from the pipeline in the same way we did for the coefficients of our logistic regression.
End of explanation
prediction_dt = ?
prediction_dt.select("label", "prediction", "probability").show(5)
evaluator_dt = BinaryClassificationEvaluator(?)
accuracy_dt = ?
print "Area under the ROC curve = %g " % accuracy_dt
prediction_collect = ?
pd.crosstab(?)
Explanation: Go ahead and measure the AUC metric as before, using the training and test sets.
End of explanation
from repository.mlrepositoryclient import MLRepositoryClient
from repository.mlrepositoryartifact import MLRepositoryArtifact
Explanation: Now, there is a stochastic element to the building of these models, but in preparation for Cognihack, something felt strange about the AUC we were getting. Have a chat with your teams about why this may be the case, in particular in the context of parallel computing.
Extension:
- Try recreating the decision tree model, carefully selecting your features
- What about any further data derivations you could include?
- Try tuning the parameters of the decision tree, such as depth and minimum split size
- Consider other classification algorithms, either from pyspark.ml or elsewhere
<a id="deploy"></a>
5. Deploy and score
With the advent of Watson Machine Learning, we can quite easily deploy our model to a cloud scoring service.
5.1 Persist Spark model within the ML repository
The first step here is to import the relevant client libraries.
End of explanation
# The code was removed by DSX for sharing.
ml_repository_client = MLRepositoryClient(wml_service_path)
ml_repository_client.authorize(wml_username, wml_password)
Explanation: Tip: service_path, user and password can be found on Service Credentials tab of service instance created in Bluemix.
For example, the following code:
wml_service_path = "https://ibm-watson-ml.mybluemix.net"
wml_username = "ebecda6c-a18b-4c6f-82e4-4c7fc26361f4"
wml_password = "4705d497-fcc0-4e1c-9f55-934b13b13fb2"
Will create the necessary credentials to connect to the Watson ML service. Just substitute in your own in the place of these example values.
End of explanation
model_artifact = MLRepositoryArtifact(model, training_data=trainSet, name="Repeat Buyer Prediction Model")
Explanation: Create model artifact (abstraction layer).
End of explanation
saved_model = ml_repository_client.models.save(model_artifact)
Explanation: Tip: The MLRepositoryArtifact method expects a trained model object, training data, and a model name. (It is this model name that is displayed by the Watson Machine Learning service).
We can now save our model to the repository.
End of explanation
saved_model.meta.available_props()
print "modelType: " + saved_model.meta.prop("modelType")
print "trainingDataSchema: " + str(saved_model.meta.prop("trainingDataSchema"))
print "creationTime: " + str(saved_model.meta.prop("creationTime"))
print "modelVersionHref: " + saved_model.meta.prop("modelVersionHref")
print "label: " + saved_model.meta.prop("label")
Explanation: Get saved model metadata from Watson Machine Learning.
Tip: Use meta.available_props() to get the list of available props.
End of explanation
import urllib3, requests, json
headers = urllib3.util.make_headers(basic_auth='{}:{}'.format(wml_username, wml_password))
url = '{}/v2/identity/token'.format(wml_service_path)
response = requests.get(url, headers=headers)
mltoken = json.loads(response.text).get('token')
print mltoken
Explanation: Tip: modelVersionHref is our model's unique identifier in the Watson Machine Learning repository.
5.2 Create an online scoring endpoint
In this section you will learn how to create an online scoring endpoint and score a new data record by using the Watson Machine Learning REST API.
For more information about REST APIs, see the Swagger Documentation.
To work with the Watson Machine Learning REST API you must generate an access token. To do that you can use the following sample code:
End of explanation
endpoint_online = wml_service_path + "/v2/online/deployments/"
header_online = {'Content-Type': 'application/json', 'Authorization': mltoken}
payload_online = {"artifactVersionHref": saved_model.meta.prop("modelVersionHref"), "name": "Repeat Shopper Prediction"}
print endpoint_online
print header_online
print payload_online
response_online = requests.post(endpoint_online, json=payload_online, headers=header_online)
print response_online
print response_online.text
scoring_href = json.loads(response_online.text).get('entity').get('scoringHref')
print scoring_href
Explanation: You can now create an online scoring endpoint. Execute the following sample code that uses the modelVersionHref value to create the scoring endpoint to the Bluemix repository.
End of explanation
payload_scoring = {
"record":[
"42", #id
"8620", #chain
"400", #offer
"2017-6-5", #offerdate
5, #qty_30
10, #qty_60
15, #qty_90
20, #qty_180
50, #amt_30
100, #amt_60
150, #amt_90
200, #amt_180
]}
response_scoring = requests.put(scoring_href, json=payload_scoring, headers=header_online)
print response_scoring.text
Explanation: Let's see what happens when we send a PUT request to our new endpoint containing a new scoring record. The model should hopefully return some predictions.
End of explanation |
471 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Morphological Image Processing in Python
Tanmoy Dasgupta
[email protected] | Assistant Professor | Department of Electrical Engineering | Techno India University, Kolkata
Step1: Binary Morphology
Morphological transformations are some simple operations based on the image shape. It is normally performed on binary images. It needs two inputs, one is our original image, second one is called structuring element or kernel which decides the nature of operation. Two basic morphological operators are Erosion and Dilation. Then its variant forms like Opening, Closing, Gradient etc also comes into play.
Images can be thought of as a mapping from the integer space $\mathbb{Z^2}$ to $\mathbb{R}$. For a binary image, the mapping reduces to $f: \mathbb{Z^2} \to \{0,1\}$.
Step2: Dilation
With $A, B \subseteq \mathbb{Z}^2$, the dilation of $A$ by $B$ (SE) is defined as $A\oplus B = \{z : (\hat{B})_z\cap A \ne \phi\}$.
Step3: Opening
The morphological opening of $A$ by $B$ is defined as $A \circ B = (A\ominus B)\oplus B = \cup\{(B)_z : (B)_z\subseteq A\}$, which is nothing but erosion followed by dilation.
Step4: Closing
The morphological closing of $A$ by $B$ is defined as $A \bullet B = (A\oplus B)\ominus B$, which is nothing but dilation followed by erosion. It is useful in removing noise. It is useful in closing small holes inside the foreground objects, or small black points on the object.
Step5: Morphological Gradient
It is the difference between dilation and erosion of an image.
Step6: Tophat
It is the difference between the original image and its opening.
Step7: Blackhat
It is the difference between the closing of the input image and input image.
Step8: SEs of different shapes
OpenCV provides built-in functions for creating SEs of custom shapes like circle, ellipse, cross, etc. They turn out to be useful for dufferent purposes.
Step9: Now check the morphological closing operation with circular SEs.
Step10: Another example showing morphological blackhat operation with circular SEs. | Python Code:
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import PIL
import cv2
import skimage as sk
%pylab inline
Explanation: Morphological Image Processing in Python
Tanmoy Dasgupta
[email protected] | Assistant Professor | Department of Electrical Engineering | Techno India University, Kolkata
End of explanation
img = cv2.imread('images/binary_circles.jpg',0)
kernel1 = np.ones((3,3), np.uint8)
erosion1 = cv2.erode(img, kernel1, iterations = 1)
kernel2 = np.ones((5,5), np.uint8)
erosion2 = cv2.erode(img, kernel2, iterations = 1)
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.imshow(img, cmap=plt.cm.gray)
plt.title('Original image')
plt.subplot(132)
plt.imshow(erosion1, cmap=plt.cm.gray)
plt.title(r'Erosion with a $3\times3$ kernel')
plt.subplot(133)
plt.imshow(erosion2, cmap=plt.cm.gray)
plt.title(r'Erosion with a $5\times5$ kernel')
Explanation: Binary Morphology
Morphological transformations are some simple operations based on the image shape. They are normally performed on binary images. They need two inputs: our original image, and a structuring element or kernel which decides the nature of the operation. Two basic morphological operators are Erosion and Dilation. Their variant forms like Opening, Closing and Gradient also come into play.
Images can be thought of as a mapping from the integer space $\mathbb{Z^2}$ to $\mathbb{R}$. For a binary image, the mapping reduces to $f: \mathbb{Z^2} \to \{0,1\}$. Every pixel at position $(x,y)\in\mathbb{Z^2}$ is either completely dark $(0)$ or completely bright $(1)$. We shall now introduce two very important set operations.
Reflection of a set $B\subseteq\mathbb{Z}^2$ is defined as $\hat{B} = \{w : w=-b, \forall b\in B\}$. E.g. if $B$ is a set of points of the form $(x,y)$, then $\hat{B}$ can be found by replacing those points by $(-x,-y)$.
Translation of a set $B\subseteq\mathbb{Z}^2$ by a point $z=(z_1,z_2)$ is defined as $(B)_z=\{c : c=b+z, \forall b\in B\}$. E.g. if $B$ is a set of points of the form $(x,y)$, then $(B)_z$ can be found by replacing those points by $(x+z_1, y+z_2)$.
Set reflection and translation are employed extensively in morphology to formulate operations based on so-called structuring elements (SEs) or kernels. SEs are basically small sets or subimages used to probe an image under study for properties of interest. They are usually taken in rectangular, circular, elliptical or cross shapes.
Erosion
With $A, B \subseteq \mathbb{Z}^2$, the erosion of $A$ by $B$ (SE) is defined as $A\ominus B = \{z : (B)_z \subseteq A\}$. In words, this equation indicates that the erosion of $A$ by $B$ is the set of all points $z$ such that $B$, translated by $z$, is contained in $A$. The basic idea of erosion is just like soil erosion: it erodes away the boundaries of the foreground object (always try to keep the foreground in white). So what does it do? The kernel (SE) slides through the image (as in 2D convolution). A pixel in the original image (either 1 or 0) will be considered 1 only if all the pixels under the kernel are 1; otherwise it is eroded (made zero).
So what happens is that all the pixels near the boundary are discarded, depending upon the size of the kernel. The thickness or size of the foreground object decreases, or simply the white region decreases in the image. It is useful for removing small white noise, detaching two connected objects, etc. Let us use a rectangular SE (kernel) to check this out.
End of explanation
img = cv2.imread('images/binary_retina.png',0)
kernel1 = np.ones((3,3), np.uint8)
dilation1 = cv2.dilate(img, kernel1, iterations = 1)
kernel2 = np.ones((5,5), np.uint8)
dilation2 = cv2.dilate(img, kernel2, iterations = 1)
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.imshow(img, cmap=plt.cm.gray)
plt.title('Original image')
plt.subplot(132)
plt.imshow(dilation1, cmap=plt.cm.gray)
plt.title(r'Dilation with a $3\times3$ kernel')
plt.subplot(133)
plt.imshow(dilation2, cmap=plt.cm.gray)
plt.title(r'Dilation with a $5\times5$ kernel')
Explanation: Dilation
With $A, B \subseteq \mathbb{Z}^2$, the dilation of $A$ by $B$ (SE) is defined as $A\oplus B = \{z : (\hat{B})_z\cap A \ne \phi\}$. In words, the dilation of $A$ by $B$ is the set consisting of all the structuring element origin locations where the reflected and translated $B$ overlaps at least one element of $A$.
It is just the opposite of erosion. Here, a pixel element is $1$ if at least one pixel under the kernel is $1$. So it increases the white region in the image, or the size of the foreground object increases. Normally, in cases like noise removal, erosion is followed by dilation: erosion removes white noise, but it also shrinks our object, so we dilate it. Since the noise is gone, it won't come back, but our object area increases. It is also useful in joining broken parts of an object.
End of explanation
img = cv2.imread('images/binary_sand.png',0)
kernel1 = np.ones((5,5), np.uint8)
opening1 = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel1)
kernel2 = np.ones((7,7), np.uint8)
opening2 = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel2)
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.imshow(img, cmap=plt.cm.gray)
plt.title('Original image')
plt.subplot(132)
plt.imshow(opening1, cmap=plt.cm.gray)
plt.title(r'Opening with a $5\times5$ kernel')
plt.subplot(133)
plt.imshow(opening2, cmap=plt.cm.gray)
plt.title(r'Opening with a $7\times7$ kernel')
Explanation: Opening
The morphological opening of $A$ by $B$ is defined as $A \circ B = (A\ominus B)\oplus B = \cup\{(B)_z : (B)_z\subseteq A\}$, which is nothing but erosion followed by dilation. It is useful in removing noise.
End of explanation
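To see that opening really is erosion followed by dilation, the cv2.morphologyEx result can be compared with the explicit two-step version (a quick check using the same image and kernel as above):
manual_opening = cv2.dilate(cv2.erode(img, kernel1, iterations=1), kernel1, iterations=1)
print((manual_opening == opening1).all())  # expected to be True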
img = cv2.imread('images/binary_circles.jpg',0)
kernel1 = np.ones((5,5), np.uint8)
closing1 = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel1)
kernel2 = np.ones((7,7), np.uint8)
closing2 = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel2)
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.imshow(img, cmap=plt.cm.gray)
plt.title('Original image')
plt.subplot(132)
plt.imshow(closing1, cmap=plt.cm.gray)
plt.title(r'Closing with a $5\times5$ kernel')
plt.subplot(133)
plt.imshow(closing2, cmap=plt.cm.gray)
plt.title(r'Closing with a $7\times7$ kernel')
Explanation: Closing
The morphological closing of $A$ by $B$ is defined as $A \bullet B = (A\oplus B)\ominus B$, which is nothing but dilation followed by erosion. It is useful in removing noise, in particular for closing small holes inside the foreground objects or small black points on the object.
End of explanation
img = cv2.imread('images/binary_circles.jpg',0)
kernel1 = np.ones((3,3), np.uint8)
grad1 = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel1)
kernel2 = np.ones((5,5), np.uint8)
grad2 = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel2)
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.imshow(img, cmap=plt.cm.gray)
plt.title('Original image')
plt.subplot(132)
plt.imshow(grad1, cmap=plt.cm.gray)
plt.title(r'Morphological gradient with a $3\times3$ kernel')
plt.subplot(133)
plt.imshow(grad2, cmap=plt.cm.gray)
plt.title(r'Morphological gradient with a $5\times5$ kernel')
Explanation: Morphological Gradient
It is the difference between dilation and erosion of an image.
End of explanation
img = cv2.imread('images/binary_angiogram.png',0)
kernel1 = np.ones((5,5), np.uint8)
top1 = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel1)
kernel2 = np.ones((9,9), np.uint8)
top2 = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel2)
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.imshow(img, cmap=plt.cm.gray)
plt.title('Original image')
plt.subplot(132)
plt.imshow(top1, cmap=plt.cm.gray)
plt.title(r'Morphological tophat with a $5\times5$ kernel')
plt.subplot(133)
plt.imshow(top2, cmap=plt.cm.gray)
plt.title(r'Morphological tophat with a $9\times9$ kernel')
Explanation: Tophat
It is the difference between the original image and its opening.
End of explanation
img = cv2.imread('images/binary_circles.jpg',0)
kernel1 = np.ones((5,5), np.uint8)
black1 = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel1)
kernel2 = np.ones((11,11), np.uint8)
black2 = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel2)
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.imshow(img, cmap=plt.cm.gray)
plt.title('Original image')
plt.subplot(132)
plt.imshow(black1, cmap=plt.cm.gray)
plt.title(r'Morphological blackhat with a $5\times5$ kernel')
plt.subplot(133)
plt.imshow(black2, cmap=plt.cm.gray)
plt.title(r'Morphological blackhat with a $11\times11$ kernel')
Explanation: Blackhat
It is the difference between the closing of the input image and the input image.
End of explanation
# Rectangular Kernel
rect = cv2.getStructuringElement(cv2.MORPH_RECT,(25,25))
# Elliptical Kernel
ellip = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(25,25))
# Cross-shaped Kernel
cross = cv2.getStructuringElement(cv2.MORPH_CROSS,(25,25))
plt.matshow(ellip, cmap=cm.gray)
plt.title(r'A $19\times 19$ elliptical / circular kernel')
Explanation: SEs of different shapes
OpenCV provides built-in functions for creating SEs of custom shapes like circles, ellipses, crosses, etc. They turn out to be useful for different purposes.
End of explanation
img = cv2.imread('images/binary_circles.jpg',0)
kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
closing1 = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel1)
kernel2 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15,15))
closing2 = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel2)
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.imshow(img, cmap=plt.cm.gray)
plt.title('Original image')
plt.subplot(132)
plt.imshow(closing1, cmap=plt.cm.gray)
plt.title(r'Closing with a $5\times5$ circular kernel')
plt.subplot(133)
plt.imshow(closing2, cmap=plt.cm.gray)
plt.title(r'Closing with a $15\times15$ circular kernel')
Explanation: Now check the morphological closing operation with circular SEs.
End of explanation
img = cv2.imread('images/binary_circles.jpg',0)
kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9,9))
black1 = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel1)
kernel2 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15,15))
black2 = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel2)
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.imshow(img, cmap=plt.cm.gray)
plt.title('Original image')
plt.subplot(132)
plt.imshow(black1, cmap=plt.cm.gray)
plt.title(r'Blackhat with a $9\times9$ circular kernel')
plt.subplot(133)
plt.imshow(black2, cmap=plt.cm.gray)
plt.title(r'Blackhat with a $15\times15$ circular kernel')
Explanation: Another example showing morphological blackhat operation with circular SEs.
End of explanation |
472 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Crime Analytics
Step1: Initial observations
The two data sets have different set of attributes and structure and cannot be directly compared
San Fransisco data set has the additional information on the resolution for the incident which could potentially be used to analyze how many of the reported incidents have been resolved. In other words the efficiency of the SFO PD in resolving the incidents. However there is no resolution date.
Both data sets have longitude and latitide information, which if plotted on a map may provide patterns in the clustering of the incident locations. This may help map out higher crime rate areas v/s lower crime rate areas
Offence categorization is different in both the data sets
There seem to be data issues in the seattle data. I can see date reported or date occurred end range dates in 2015 whereas the column (Occurred Date or Date Range Start seems more reliable. Looking further into the data it seems that the only the year has been mistyped as 2015 instead of 2014. This assumption may not be correct.
We can combine the data together into a single data set based on the common columns, however the category column will have to be normalized first so that we can map the existing categories into a new set of categories that will match across the two data sets.
Day of the week column is present in the SFO data set but is absent from the seattle data set. This might be an interesting Nominal column that can help in looking for patterns based on the day of the week.
Validations
Consistency check. We'll check the dates and identify rows where the dates are inconsistent. There is a year column in the seattle data which can potentially help in this effort.
Completeness check
Step2: Completeness check for Seattle
Step3: Consistency Check for SFO
Step4: Consistency Check for Seattle
Step5: There is only one year and data spans for the three months of Jun-Aug. Looks consistent although there is no way to confirm.
Incident Categorization based on this site
http
Step6: Seattle Incidents trend by date (By Category)
Step7: Seattle Incidents trend by weekday (By Category)
Step8: Seattle Incidents trend by hour (By Category)
Step9: Seattle Analysis
In general "Property" related crimes are higher than all others.
The next highest category is "Inchoate"
There does not seem to be any specific trend in the crimes over dates or over the weekdays
When we chart the crimes on the hours of the day, there seems to be a definite trend. Number of incidents peak at midnight and then drop as the day starts. After 5 am there is a sharp increase in incidents and it seems to peak around noon. After this peak there is a slower increase in the incident rates.
Most incidents span between 10am and midnight.
San Francisco Incidents trend by date (By Category)
Step10: San Francisco Incidents trend by weekday (By Category)
Step11: San Francisco Incidents trend by hour (By Category) | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
sanfran = pd.read_csv('sanfrancisco_incidents_summer_2014.csv')
pd.DataFrame(sanfran.columns)
seattle = pd.read_csv('seattle_incidents_summer_2014.csv'
,parse_dates=['Occurred Date or Date Range Start']
,infer_datetime_format=True)
pd.DataFrame(seattle.columns)
sfo_incident_category = pd.DataFrame(pd.unique(sanfran.Category.ravel()))
sfo_incident_category
seattle_incident_category = pd.DataFrame(pd.unique(seattle['Summarized Offense Description'].ravel()))
seattle_incident_category
sanfran.head(5)
seattle.head(5)
Explanation: Crime Analytics: Visualization of Incident Reports
In this assignment, you will analyze criminal incident data from Seattle or San Francisco to visualize patterns and, if desired, contrast and compare patterns across the two cities.
Two sets of data files are available
* sanfrancisco_incidents_summer_2014.csv
* seattle_incidents_summer_2014.csv
Assignment: Crime Analytics: Visualization of Incident Reports entry by Jerry Thomas
Mostly based on the excellent ProntoData Analysis by Jake Vanderplas
ProntoData Analysis
Exploring the Data
End of explanation
sanfran_missing_count = {}
for col_name in sanfran.columns:
sanfran_missing_count[col_name] = len(sanfran[sanfran[col_name].isnull()])
sanfran_missing_count
Explanation: Initial observations
The two data sets have different set of attributes and structure and cannot be directly compared
San Francisco data set has the additional information on the resolution for the incident, which could potentially be used to analyze how many of the reported incidents have been resolved; in other words, the efficiency of the SFO PD in resolving the incidents. However, there is no resolution date.
Both data sets have longitude and latitude information, which if plotted on a map may reveal patterns in the clustering of the incident locations. This may help map out higher crime rate areas v/s lower crime rate areas.
Offence categorization is different in both the data sets
There seem to be data issues in the Seattle data. I can see reported or occurred end-range dates in 2015, whereas the column (Occurred Date or Date Range Start) seems more reliable. Looking further into the data, it seems that only the year has been mistyped as 2015 instead of 2014. This assumption may not be correct.
We can combine the data together into a single data set based on the common columns, however the category column will have to be normalized first so that we can map the existing categories into a new set of categories that will match across the two data sets.
Day of the week column is present in the SFO data set but is absent from the seattle data set. This might be an interesting Nominal column that can help in looking for patterns based on the day of the week.
Validations
Consistency check. We'll check the dates and identify rows where the dates are inconsistent. There is a year column in the seattle data which can potentially help in this effort.
Completeness check: There may be missing information in some rows which would need to be either discarded or corrected if possible.
Completeness check for SFO
End of explanation
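The longitude/latitude observation above can be explored with a quick scatter plot. A sketch is shown below; it assumes the San Francisco dataset stores the coordinates in columns named X and Y, which is worth confirming with sanfran.columns:
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(sanfran['X'], sanfran['Y'], s=2, alpha=0.2)
ax.set_xlabel('longitude')
ax.set_ylabel('latitude')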
seattle_missing_count = {}
for col_name in seattle.columns:
seattle_missing_count[col_name] = len(seattle[seattle[col_name].isnull()])
seattle_missing_count
Explanation: Completeness check for Seattle
End of explanation
# the date and time of incident are in two separate columns
# combining them into a date_time column
sanfran['date_time'] = pd.to_datetime(sanfran['Date'] + ' ' + sanfran['Time'])
date_idx = pd.DatetimeIndex(sanfran['date_time'])
sanfran['incident_date'] = date_idx.date.astype('datetime64')
sanfran['incident_hour'] = date_idx.hour
sanfran['incident_year'] = date_idx.year
sanfran['incident_month'] = date_idx.month
sanfran['incident_weekday'] = date_idx.weekday
by_year = sanfran.pivot_table('IncidntNum', aggfunc='count',
index='incident_year',
columns='incident_month')
by_year
Explanation: Consistency Check for SFO
End of explanation
seattle['date_time'] = seattle['Occurred Date or Date Range Start']
date_idx = pd.DatetimeIndex(seattle['date_time'])
seattle['incident_date'] = date_idx.date.astype('datetime64')
seattle['incident_hour'] = date_idx.hour
seattle['incident_year'] = date_idx.year
seattle['incident_month'] = date_idx.month
seattle['incident_weekday'] = date_idx.weekday
by_year = seattle.pivot_table('General Offense Number', aggfunc='count',
index='incident_year',
columns='incident_month')
by_year
Explanation: Consistency Check for Seattle
End of explanation
map_categories = {
'[INC - CASE DC USE ONLY]': 'OTHER OFFENSES',
'ANIMAL COMPLAINT': 'OTHER OFFENSES',
'ARSON': 'PROPERTY',
'ASSAULT': 'PERSONAL',
'BIAS INCIDENT': 'INCHOATE',
'BIKE THEFT': 'PROPERTY',
'BRIBERY': 'PROPERTY',
'BURGLARY': 'PROPERTY',
'BURGLARY-SECURE PARKING-RES': 'PROPERTY',
'CAR PROWL': 'INCHOATE',
'COUNTERFEIT': 'PROPERTY',
'DISORDERLY CONDUCT': 'INCHOATE',
'DISPUTE': 'INCHOATE',
'DISTURBANCE': 'INCHOATE',
'DRIVING UNDER THE INFLUENCE': 'STATUTORY',
'DRUG/NARCOTIC': 'STATUTORY',
'DRUNKENNESS': 'STATUTORY',
'DUI': 'STATUTORY',
'ELUDING': 'INCHOATE',
'EMBEZZLE': 'PROPERTY',
'EMBEZZLEMENT': 'PROPERTY',
'ESCAPE': 'STATUTORY',
'EXTORTION': 'INCHOATE',
'FALSE REPORT': 'INCHOATE',
'FAMILY OFFENSES': 'INCHOATE',
'FIREWORK': 'PROPERTY',
'FORGERY': 'PROPERTY',
'FORGERY/COUNTERFEITING': 'PROPERTY',
'FRAUD': 'PROPERTY',
'GAMBLING': 'PROPERTY',
'HOMICIDE': 'PERSONAL',
'ILLEGAL DUMPING': 'STATUTORY',
'INJURY': 'PERSONAL',
'KIDNAPPING': 'PERSONAL',
'LARCENY/THEFT': 'PROPERTY',
'LIQUOR LAWS': 'STATUTORY',
'LIQUOR VIOLATION': 'STATUTORY',
'LOITERING': 'INCHOATE',
'LOST PROPERTY': 'PROPERTY',
'MAIL THEFT': 'PROPERTY',
'MISSING PERSON': 'PERSONAL',
'NARCOTICS': 'STATUTORY',
'NON-CRIMINAL': 'NON-CRIMINAL',
'OBSTRUCT': 'PROPERTY',
'OTHER OFFENSES': 'OTHER OFFENSES',
'OTHER PROPERTY': 'PROPERTY',
'PICKPOCKET': 'PROPERTY',
'PORNOGRAPHY': 'INCHOATE',
'PORNOGRAPHY/OBSCENE MAT': 'INCHOATE',
'PROPERTY DAMAGE': 'PROPERTY',
'PROSTITUTION': 'INCHOATE',
'PUBLIC NUISANCE': 'INCHOATE',
'PURSE SNATCH': 'PROPERTY',
'RECKLESS BURNING': 'PROPERTY',
'RECOVERED PROPERTY': 'PROPERTY',
'ROBBERY': 'PROPERTY',
'RUNAWAY': 'INCHOATE',
'SECONDARY CODES': 'STATUTORY',
'SHOPLIFTING': 'PROPERTY',
'STOLEN PROPERTY': 'PROPERTY',
'SUICIDE': 'PERSONAL',
'SUSPICIOUS OCC': 'PROPERTY',
'THEFT OF SERVICES': 'PROPERTY',
'THREATS': 'INCHOATE',
'TRAFFIC': 'STATUTORY',
'TRESPASS': 'INCHOATE',
'VANDALISM': 'PROPERTY',
'VEHICLE THEFT': 'PROPERTY',
'VIOLATION OF COURT ORDER': 'STATUTORY',
'WARRANT ARREST': 'STATUTORY',
'WARRANTS': 'STATUTORY',
'WEAPON': 'STATUTORY',
'WEAPON LAWS': 'STATUTORY'
}
# Map the incident codes to a smaller set
seattle['incident_category'] = seattle['Summarized Offense Description'].apply(lambda col: map_categories[col])
sanfran['incident_category'] = sanfran['Category'].apply(lambda col: map_categories[col])
Explanation: There is only one year and the data spans the three months of Jun-Aug. It looks consistent, although there is no way to confirm.
Incident Categorization based on this site
http://www.legalmatch.com/law-library/article/what-are-the-different-types-of-crimes.html
I took the 4 prominent categories and remapped them to the best of my knowledge.
Personal Crimes – “Offenses against the Person”: These are crimes that result in physical or mental harm to another person.
Property Crimes – “Offenses against Property”: These are crimes that do not necessarily involve harm to another person. Instead, they involve an interference with another person’s right to use or enjoy their property.
Inchoate Crimes – “Inchoate” translates into “incomplete”, meaning crimes that were begun, but not completed. This requires that a person take a substantial step to complete a crime, as opposed to just “intend” to commit a crime.
Statutory Crimes – A violation of a specific state or federal statute and can involve either property offenses or personal offense.
Other Offences – Anything that could not be mapped to the above 4 categories
See more at: http://www.legalmatch.com/law-library/article/what-are-the-different-types-of-crimes.html#sthash.xHKGRbs4.dpuf
End of explanation
by_date = seattle.pivot_table('General Offense Number', aggfunc='count',
index='incident_date',
columns='incident_category' )
ax = by_date.plot()
ax.figure.savefig('figs/seattle_incidents_by_date.png', bbox_inches='tight')
Explanation: Seattle Incidents trend by date (By Category)
End of explanation
by_weekday = seattle.pivot_table('General Offense Number', aggfunc='count',
index='incident_weekday',
columns='incident_category')
ax = by_weekday.plot()
ax.figure.savefig('figs/incidents_by_weekday.png', bbox_inches='tight')
Explanation: Seattle Incidents trend by weekday (By Category)
End of explanation
by_hour = seattle.pivot_table('General Offense Number', aggfunc='count',
index='incident_hour',
columns='incident_category')
ax = by_hour.plot()
ax.figure.savefig('figs/incidents_by_hour.png', bbox_inches='tight')
Explanation: Seattle Incidents trend by hour (By Category)
End of explanation
by_date = sanfran.pivot_table('IncidntNum', aggfunc='count',
index='incident_date',
columns='incident_category' )
ax = by_date.plot()
ax.figure.savefig('figs/sanfran_incidents_by_date.png', bbox_inches='tight')
Explanation: Seattle Analysis
In general "Property" related crimes are higher than all others.
The next highest category is "Inchoate"
There does not seem to be any specific trend in the crimes over dates or over the weekdays
When we chart the crimes on the hours of the day, there seems to be a definite trend. Number of incidents peak at midnight and then drop as the day starts. After 5 am there is a sharp increase in incidents and it seems to peak around noon. After this peak there is a slower increase in the incident rates.
Most incidents span between 10am and midnight.
San Francisco Incidents trend by date (By Category)
End of explanation
by_weekday = sanfran.pivot_table('IncidntNum', aggfunc='count',
index='incident_weekday',
columns='incident_category' )
ax = by_weekday.plot()
ax.figure.savefig('figs/sanfran_incidents_by_weekday.png', bbox_inches='tight')
Explanation: San Francisco Incidents trend by weekday (By Category)
End of explanation
by_hour = sanfran.pivot_table('IncidntNum', aggfunc='count',
index='incident_hour',
columns='incident_category' )
ax = by_hour.plot()
ax.figure.savefig('figs/sanfran_incidents_by_hour.png', bbox_inches='tight')
Explanation: San Francisco Incidents trend by hour (By Category)
End of explanation |
473 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exceptions
An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions.
You've already seen some exceptions
Step1: Question
Step3: Generating Exceptions
Why generate exceptions? (Don't I have enough unintentional errors?) | Python Code:
def divide1(numerator, denominator):
try:
result = numerator/denominator
print("result = %f" % result)
except:
print("You can't divide by 0!!")
divide1(1.0, 2)
divide1(1.0, 0)
divide1("x", 2)
Explanation: Exceptions
An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions.
You've already seen some exceptions:
- syntax errors
- divide by 0
Many programs want to know about exceptions when they occur. For example, if the input to a program is a file path. If the user inputs an invalid or non-existent path, the program generates an exception. It may be desired to provide a response to the user in this case.
It may also be that programs will generate exceptions. This is a way of indicating that there is an error in the inputs provided. In general, this is the preferred style for dealing with invalid inputs or states inside a python function rather than having an error return.
Catching Exceptions
Python provides a way to detect when an exception occurs. This is done by the use of a block of code surrounded by a "try" and "except" statement.
End of explanation
#1/0
def divide2(numerator, denominator):
try:
result = numerator/denominator
print("result = %f" % result)
except (ZeroDivisionError, TypeError):
print("Got an exception")
divide2(1, "x")
# Why doesn't this catch the exception?
# How do we fix it?
divide2("x", 2)
# Exceptions in file handling
def read_safely(path):
error = None
try:
with open(path, "r") as fd:
lines = fd.readlines()
            print('\n'.join(lines))
except FileNotFoundError as err:
print("File %s does not exist. Try again." % path)
read_safely("unknown.txt")
# Handle division by 0 by using a small number
SMALL_NUMBER = 1e-3
def divide2(numerator, denominator):
try:
result = numerator/denominator
except ZeroDivisionError:
result = numerator/SMALL_NUMBER
print("result = %f" % result)
divide2(1,0)
Explanation: Question: What do you do when you get an exception?
You can get information about exceptions.
End of explanation
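For example, binding the exception object with as gives access to its type and message (a small illustrative sketch, not from the original notebook):
try:
    1/0
except ZeroDivisionError as err:
    print("Caught %s: %s" % (type(err).__name__, err))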
import pandas as pd
def func(df):
    """
    :param pd.DataFrame df: should have a column named "hours"
    """
    if "hours" not in df.columns:
raise ValueError("DataFrame should have a column named 'hours'.")
df = pd.DataFrame({'hours': range(10) })
func(df)
df = pd.DataFrame({'years': range(10) })
# Generates an exception
#func(df)
Explanation: Generating Exceptions
Why generate exceptions? (Don't I have enough unintentional errors?)
End of explanation |
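To illustrate the same idea, a module can also define its own exception class so that callers can catch its errors specifically (a sketch; the class name is made up for illustration):
class MissingColumnError(ValueError):
    """Raised when a required DataFrame column is absent."""
    pass

def check_hours(df):
    if "hours" not in df.columns:
        raise MissingColumnError("DataFrame should have a column named 'hours'.")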
474 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Pattern Roto-Translation Documentation
This notebook explains and documents the logic for generating a multispot pattern
with arbitrary translation, rotation and different X and Y pitch.
The pattern function can be though as rigidly attached to a frame of reference passing through its center.
This frame of reference ("pattern frame of reference") rotates and translates with the pattern.
Step4: LCOS parameters
Step5: We defines the grid (XL, YL) of LCOS pixel centers in the LCOS frame of reference
with origin in the LCOS center and in pixel units
Step6: By convention, the LCOS grid is centered on the LCOS pixel (400, 300)
Step7: Pattern parameters
Step8: The most natural approach is working on the LCOS reference frame and moving
the pattern function with a
rigid transformation
to the desired position and rotation.
The LCOS grid (i.e. pixels) samples this function and the pattern is obtained.
This is conceptually straightforward but, practically computing the rectangular
region for each spot is not trivial when we have a non-zero rotation.
Equivalently, we can think as the LCOS pattern is the result of sampling
a pattern "function" defined in the pattern frame of reference
(centered on its origin and not rotated).
To obtain an LCOS pattern with different translations and rotations,
the LCOS grid is rigidly moved to sample the
(fixed) function at different positions and orientations.
This is mathematically equivalent to the first approach
but makes it trivial to compute
the spot regions simply by using the floor function.
To calculate the rectangular regions assigned to each spot we work
in a Cartesian frame of reference rigidly attached to the centered of the pattern.
In this frame of reference the spot regions can be computed using
a simple floor function if the coordinates are scaled in pitch units.
The function spotmap_func take a pair of coordinate (x, y)
and return the spot-number which contains the point. If (x, y) does not fall
in the region of any spot NaN is returned.
The trick is feeding spotmap_func with the coordinates of a
rigidly transformed LCOS grid.
The (XL, YL) grid is first translated
and then rotated with respect to its center
Step9: Apart from the computing the spot regions, the crucial point of computing the multispot
pattern is computing the distance
of each LCOS pixel from the spot center.
Working in the LCOS frame of reference, we can compute the
spot (centers) coordinates as
Step10: Z_spots (the 2D array of labeled spot regions) was computed using the inverse rigid transformation of the LCOS grid in the pattern frame of reference. The spot centers, instead, are computed with a direct rigid transformation in the LCOS frame of reference. The two transformations are equivalent, as can bee seen plotting the spot centers and the spot region (on the LCOS frame of reference)
Step11: The pattern is computed spot by spot. For each spot we create a mask
of pixels defining the spot region. Then, we compute the distance from
the spot center for each pixel of the region.
The correct grid to use is (XL, YL) in the LCOS frame of reference .
Step12: Finally we plot the pattern | Python Code:
def rotate(x, y, angle):
    """Rotate the point (x, y) (or array of points) with respect to the origin.

    Arguments:
        x, y (floats or arrays): input coordinates to be transformed.
        angle (float): rotation angle in degrees. When the Y axis points
            up and the X axis points right, a positive angle results in
            a counter-clockwise rotation.

    Returns:
        New coordinates of the rotated point.
    """
if angle == 0:
return x, y
shape = x.shape
assert shape == y.shape
x_ = x.ravel()
y_ = y.ravel()
theta = angle * np.pi / 180
rot_matrix = np.array([[np.cos(theta), -np.sin(theta)],
[np.sin(theta), np.cos(theta)]])
v = np.vstack([x_, y_])
xr, yr = rot_matrix @ v
return xr.reshape(shape), yr.reshape(shape)
def spotmap_func(xy, pitch_xy, nspots_xy):
    """Return the spot number containing the input point(s).

    Arguments:
        xy (tuple of floats or arrays): coordinates of the input points.
        pitch_xy (tuple of 2 floats): X and Y pattern pitch.
        nspots_xy (tuple of 2 ints): number of spots in X and Y direction.

    Returns:
        A number or an array (when passing an array as input coordinates)
        identifying the spot which contains the point. If no spot contains
        the point, the value is NaN.
    """
spotnum = [0, 0]
for i, (v, pitch, nspots) in enumerate(zip(xy, pitch_xy, nspots_xy)):
offset = 0 if (nspots % 2) == 0 else 0.5
spotnum[i] = np.floor(v / pitch + offset)
smin, smax = -(nspots // 2), (nspots // 2) + nspots % 2 - 1
spotnum[i][(spotnum[i] < smin) + (spotnum[i] > smax)] = np.nan
    Z = spotnum[0] + spotnum[1] * nspots_xy[0]  # nspots_xy[0] is the number of columns
Z -= np.nanmin(Z)
return Z
def spotgrid_centers(nrows, ncols, pitch_x=25, pitch_y=25,
center_x=0, center_y=0, rotation=0):
    """Return centers of a grid of spots with given offset and rotation.

    Arguments:
        nrows, ncols (ints): number of spots in the Y (nrows) and X (ncols)
            directions.
        pitch_x, pitch_y (floats): spot pitch in X and Y direction.
        center_x, center_y (floats): coordinates of the pattern center.
        rotation (float): rotation angle in degrees.

    Returns:
        A tuple (Xm, Ym) of two 2D arrays containing the grid of spot centers.
    """
xp = (np.arange(0, ncols, dtype=float) - (ncols-1)/2) * pitch_x
yp = (np.arange(0, nrows, dtype=float) - (nrows-1)/2) * pitch_y
Xp, Yp = np.meshgrid(xp, yp) # spot centers in pattern space
# Roto-translation to go to LCOS space
Xm, Ym = rotate(Xp, Yp, rotation)
Xm += center_x
Ym += center_y
return Xm, Ym
Explanation: Pattern Roto-Translation Documentation
This notebook explains and documents the logic for generating a multispot pattern
with arbitrary translation, rotation and different X and Y pitch.
The pattern function can be thought of as rigidly attached to a frame of reference passing through its center.
This frame of reference ("pattern frame of reference") rotates and translates with the pattern.
End of explanation
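As a quick sanity check of the rotate helper (not part of the original notebook), rotating the point (1, 0) by 90 degrees should give approximately (0, 1):
xr, yr = rotate(np.array([1.0]), np.array([0.0]), angle=90)
print(xr, yr)  # expected: [~0.] [~1.]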
LCOS_X_SIZE, LCOS_Y_SIZE, LCOS_PIX_SIZE = 800, 600, 20e-6
Explanation: LCOS parameters:
End of explanation
xl = np.arange(LCOS_X_SIZE) - LCOS_X_SIZE // 2
yl = np.arange(LCOS_Y_SIZE) - LCOS_Y_SIZE // 2
YL, XL = np.mgrid[:LCOS_Y_SIZE, :LCOS_X_SIZE]
XL -= LCOS_X_SIZE // 2
YL -= LCOS_Y_SIZE // 2
Explanation: We define the grid (XL, YL) of LCOS pixel centers in the LCOS frame of reference
with origin in the LCOS center and in pixel units:
End of explanation
LCOS_X_SIZE//2, LCOS_Y_SIZE//2
assert (XL[:, LCOS_X_SIZE//2] == 0).all()
assert (YL[LCOS_Y_SIZE//2] == 0).all()
Explanation: By convention, the LCOS grid is centered on the LCOS pixel (400, 300):
End of explanation
# Pattern shape
ncols = 3
nrows = 2
# Coordinates of the LCOS center in the pattern space
center_x = 100
center_y = 200
# Rotation of the LCOS grid versus the pattern space
rotation = 20
# Pitch for the multi-spot pattern
pitch_x = 80
pitch_y = 60
Explanation: Pattern parameters:
End of explanation
XLtr, YLtr = rotate((XL - center_x), (YL - center_y), angle=-rotation)
Z_spots = spotmap_func((XLtr, YLtr), (pitch_x, pitch_y), (ncols, nrows))
Explanation: The most natural approach is working on the LCOS reference frame and moving
the pattern function with a
rigid transformation
to the desired position and rotation.
The LCOS grid (i.e. pixels) samples this function and the pattern is obtained.
This is conceptually straightforward but, practically computing the rectangular
region for each spot is not trivial when we have a non-zero rotation.
Equivalently, we can think as the LCOS pattern is the result of sampling
a pattern "function" defined in the pattern frame of reference
(centered on its origin and not rotated).
To obtain an LCOS pattern with different translations and rotations,
the LCOS grid is rigidly moved to sample the
(fixed) function at different positions and orientations.
This is mathematically equivalent to the first approach
but makes it trivial to compute
the spot regions simply by using the floor function.
To calculate the rectangular regions assigned to each spot we work
in a Cartesian frame of reference rigidly attached to the centered of the pattern.
In this frame of reference the spot regions can be computed using
a simple floor function if the coordinates are scaled in pitch units.
The function spotmap_func takes a pair of coordinates (x, y)
and returns the spot number which contains the point. If (x, y) does not fall
in the region of any spot NaN is returned.
The trick is feeding spotmap_func with the coordinates of a
rigidly transformed LCOS grid.
The (XL, YL) grid is first translated
and then rotated with respect to its center:
End of explanation
xc, yc = spotgrid_centers(nrows, ncols, pitch_x=pitch_x, pitch_y=pitch_y,
center_x=center_x, center_y=center_y, rotation=rotation)
Explanation: Apart from computing the spot regions, the crucial point of computing the multispot
pattern is computing the distance
of each LCOS pixel from the spot center.
Working in the LCOS frame of reference, we can compute the
spot (centers) coordinates as:
End of explanation
Z, x, y = Z_spots, xc, yc
fig, ax = plt.subplots(figsize=(8, 6))
im = ax.imshow(Z, interpolation='none', cmap='viridis', aspect='equal',
extent=(-LCOS_X_SIZE/2, LCOS_X_SIZE/2, LCOS_Y_SIZE/2, -LCOS_Y_SIZE/2))
ax.scatter(x, y, color='w', edgecolors='k', lw=0.3)
Explanation: Z_spots (the 2D array of labeled spot regions) was computed using the inverse rigid transformation of the LCOS grid in the pattern frame of reference. The spot centers, instead, are computed with a direct rigid transformation in the LCOS frame of reference. The two transformations are equivalent, as can be seen by plotting the spot centers and the spot regions (in the LCOS frame of reference):
End of explanation
pattern = np.zeros_like(Z, dtype=float)
for ispot in range(nrows * ncols):
mask = Z_spots == ispot
x0, y0 = xc.ravel()[ispot], yc.ravel()[ispot]
radius = np.sqrt((XL[mask] - x0)**2 + (YL[mask] - y0)**2)
pattern[mask] = radius
Explanation: The pattern is computed spot by spot. For each spot we create a mask
of pixels defining the spot region. Then, we compute the distance from
the spot center for each pixel of the region.
The correct grid to use is (XL, YL), in the LCOS frame of reference.
End of explanation
Z, x, y = pattern, xc, yc
fig, ax = plt.subplots(figsize=(8, 6))
im = ax.imshow(Z, interpolation='none', cmap='viridis', aspect='equal',
norm=mpl.colors.LogNorm(),
extent=(-LCOS_X_SIZE/2, LCOS_X_SIZE/2, LCOS_Y_SIZE/2, -LCOS_Y_SIZE/2))
plt.colorbar(im)
ax.scatter(x, y, color='w', edgecolors='k', lw=0.3)
ax.axhline(0); ax.axvline(0);
Explanation: Finally we plot the pattern:
End of explanation |
475 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self Employment Data 2015
from OECD
Step1: Exercise 1
Create a barchart, which shows the selfemployment-rates for men-women by country.
Outcome should look similar to | Python Code:
countries = ['AUS', 'AUT', 'BEL', 'CAN', 'CZE', 'FIN', 'DEU', 'GRC', 'HUN', 'ISL', 'IRL', 'ITA', 'JPN',
'KOR', 'MEX', 'NLD', 'NZL', 'NOR', 'POL', 'PRT', 'SVK', 'ESP', 'SWE', 'CHE', 'TUR', 'GBR',
'USA', 'CHL', 'COL', 'EST', 'ISR', 'RUS', 'SVN', 'EU28', 'EA19', 'LVA']
male_selfemployment_rates = [12.13246, 15.39631, 18.74896, 9.18314, 20.97991, 18.87097,
13.46109, 39.34802, 13.3356, 16.83681, 25.35344, 29.27118,
12.06516, 27.53898, 31.6945, 19.81751, 17.68489, 9.13669,
24.15699, 22.95656, 19.00245, 21.16428, 13.93171, 8.73181,
30.73483, 19.11255, 7.48383, 25.92752, 52.27145, 12.05042,
15.8517, 8.10048, 19.02411, 19.59021, 19.1384, 14.75558]
female_selfemployment_rates = [8.18631, 10.38607, 11.07756, 8.0069, 12.78461,
9.42761, 7.75637, 29.56566, 8.00408, 7.6802, 8.2774, 18.33204,
9.7313, 23.56431, 32.81488, 13.36444, 11.50045, 4.57464,
17.63891, 13.92678, 10.32846, 12.82925, 6.22453, 9.28793,
38.32216, 10.21743, 5.2896, 25.24502, 49.98448, 6.624,
9.0243, 6.26909, 13.46641, 11.99529, 11.34129, 8.88987]
plt.bar?
Explanation: Self Employment Data 2015
from OECD
End of explanation
# TODO
Explanation: Exercise 1
Create a bar chart which shows the self-employment rates for men and women by country.
Outcome should look similar to:
End of explanation |
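One possible solution sketch for the exercise; the bar width and figure size are arbitrary choices:
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(len(countries))
width = 0.4
fig, ax = plt.subplots(figsize=(14, 5))
ax.bar(x - width/2, male_selfemployment_rates, width, label='men')
ax.bar(x + width/2, female_selfemployment_rates, width, label='women')
ax.set_xticks(x)
ax.set_xticklabels(countries, rotation=90)
ax.set_ylabel('self-employment rate (%)')
ax.legend()
plt.show()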
476 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
full_backend
debugging the full backend code
Step1: import the new haven report card module
Step2: now determine the root directory for the repo
Step3: read in the issue data from file (to speed things up)
Step4: now determine the neighborhoods for each issue using the get_neighborhoods routine
Step5: now add the neighborhoods to the DataFrame
Step6: Statistics to calculate | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
Explanation: full_backend
debugging the full backend code
End of explanation
import nhrc2
from nhrc2.backend import get_neighborhoods as get_ngbrhd
from nhrc2.backend import read_issues as ri
import pandas as pd
import numpy as np
Explanation: import the new haven report card module
End of explanation
nhrc2dir = '/'.join(str(nhrc2.__file__).split('/')[:-1])+'/'
scf_df_cat = ri.read_categories()
scf_df_cat
scf_df_cat[scf_df_cat['organization'] == 'City of New Haven']
Explanation: now determine the root directory for the repo:
End of explanation
readfile=False
writejson=False
scf_df = ri.get_issues(readfile=readfile, writejson=writejson)
Explanation: read in the issue data from file (to speed things up)
End of explanation
hoods = get_ngbrhd.get_neighborhoods()
Explanation: now determine the neighborhoods for each issue using the get_neighborhoods routine:
End of explanation
scf_df['neighborhood'] = hoods
Explanation: now add the neighborhoods to the DataFrame:
End of explanation
scf_df.columns
scf_df.loc[0:1, 'created_at']
pd.to_datetime(scf_df.loc[0, 'created_at'])
scf_df['time_to_acknowledge'] = (pd.to_datetime(scf_df['acknowledged_at']) - pd.to_datetime(scf_df['created_at']))/pd.Timedelta('1d')
scf_df['time_to_close'] = (pd.to_datetime(scf_df['closed_at']) - pd.to_datetime(scf_df['created_at']))/pd.Timedelta('1d')
scf_df.loc[0:1, 'time_to_acknowledge']
np.median(scf_df['time_to_acknowledge'].values)
np.median(scf_df['time_to_close'].values)
Explanation: Statistics to calculate:
There are a few things that it would be nice to calculate, namely:
- The time to acknowledgement
- The time to completion
- Acknowledgement improvement
- Completion improvement
Computing the Time to Acknowledgement
End of explanation |
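Since the neighborhood column was added above, the same statistics can also be broken down by neighborhood (a sketch of one way to do it):
scf_df.groupby('neighborhood')[['time_to_acknowledge', 'time_to_close']].median().sort_values('time_to_acknowledge')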
477 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulated Linear Regression
Abstract
In order to understand what TensorFlow can do, here is a little demo that makes up some phony data following a certain rulw, and then fits a line to it using Linear Regression. In the end, we expect that TensorFlow will be able to find out the parameters used to make up the phony data.
Linear Regression is a Machine Learning algorithm that models the relationship between a dependent variable and one or more independent variables.
Introduction
This tutorial is taken, with slight modification and different annotations, from TensorFlow's official documentation and Professor Jordi Torres' First Contact with TensorFlow.
This tutorial is intended for readers who are new to both machine learning and TensorFlow.
Data Preparation
Let's first start by creating 1000 phony x, y data points. In order to accomplish this, we will use NumPy. NumPy is an extension to the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large library of high-level mathematical functions to operate on these.
In particular, we will take advantage of the numpy.random.normal() function, which draws random samples from a normal (Gaussian) distribution (also called the bell curve because of its characteristic shape). The normal distributions occurs often in nature. For example, it describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution
The rule that our phony data points will follow is
Step1: Data Analysis
Linear Regression models can be represented with just two parameters
Step2: Then, let's use two other ops for describing the relationship between x_data , W and b, that is the linear function (first degree polynomial).
Step3: In order to find the best W and b, we need to minimize the mean squared error between the predicted y and the actual y_data. The way we accomplish this is using a Gradient Descent Optimizer.
Step4: Before starting, initialize the variables. We will 'run' this first.
Step5: Then, we launch the graph.
Step6: Now, fit the line. In order to do this, let's iterate 200 times (epochs) on the training data.
Step7: Finally, let's see if TensorFlow learned that the best fit is near W | Python Code:
import numpy as np
num_points = 1000
vectors_set = []
for i in range(num_points):
x1= np.random.normal(0.0, 0.55)
y1= x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)
vectors_set.append([x1, y1])
x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]
Explanation: Simulated Linear Regression
Abstract
In order to understand what TensorFlow can do, here is a little demo that makes up some phony data following a certain rule, and then fits a line to it using Linear Regression. In the end, we expect that TensorFlow will be able to find out the parameters used to make up the phony data.
Linear Regression is a Machine Learning algorithm that models the relationship between a dependent variable and one or more independent variables.
Introduction
This tutorial is taken, with slight modification and different annotations, from TensorFlow's official documentation and Professor Jordi Torres' First Contact with TensorFlow.
This tutorial is intended for readers who are new to both machine learning and TensorFlow.
Data Preparation
Let's first start by creating 1000 phony x, y data points. In order to accomplish this, we will use NumPy. NumPy is an extension to the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large library of high-level mathematical functions to operate on these.
In particular, we will take advantage of the numpy.random.normal() function, which draws random samples from a normal (Gaussian) distribution (also called the bell curve because of its characteristic shape). The normal distribution occurs often in nature. For example, it describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution.
The rule that our phony data points will follow is:
y = x * 0.1 + 0.3
To this, we will add an "error" following a normal distribution.
End of explanation
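As an optional sanity check (not part of the original demo, and assuming matplotlib is installed), the phony data can be plotted to confirm the linear trend plus noise before fitting:
import matplotlib.pyplot as plt
plt.scatter(x_data, y_data, s=5, alpha=0.5)  # visualize the 1000 phony points
plt.xlabel('x')
plt.ylabel('y')
plt.title('Phony data: y = 0.1*x + 0.3 + noise')
plt.show()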
import tensorflow as tf
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
Explanation: Data Analysis
Linear Regression models can be represented with just two parameters: W (the slope) and b (the y-intercept).
We want to generate a TensorFlow algorithm to find the best parameters W and b that, from the input data x_data, describe the underlying rule.
First, let's begin by defining two Variable ops: one for the slope and one for the y-intercept.
End of explanation
y = tf.add(tf.mul(x_data, W), b) # W * x_data + b
Explanation: Then, let's use two other ops for describing the relationship between x_data, W, and b, that is, the linear function (a first-degree polynomial).
End of explanation
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
Explanation: In order to find the best W and b, we need to minimize the mean squared error between the predicted y and the actual y_data. The way we accomplish this is using a Gradient Descent Optimizer.
End of explanation
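For intuition only, here is a minimal NumPy sketch of what a single gradient-descent step on the mean squared error does; the names W_np, b_np, and lr are illustrative and are not part of the TensorFlow graph above:
x_np, y_np = np.array(x_data), np.array(y_data)
W_np, b_np, lr = 0.0, 0.0, 0.5
y_hat = W_np * x_np + b_np
grad_W = np.mean(2.0 * (y_hat - y_np) * x_np)  # d(MSE)/dW
grad_b = np.mean(2.0 * (y_hat - y_np))         # d(MSE)/db
W_np, b_np = W_np - lr * grad_W, b_np - lr * grad_b
print(W_np, b_np)  # a single step already moves toward W=0.1, b=0.3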
init = tf.initialize_all_variables()
Explanation: Before starting, initialize the variables. We will 'run' this first.
End of explanation
sess = tf.Session()
sess.run(init)
Explanation: Then, we launch the graph.
End of explanation
for step in range(200):
sess.run(train)
Explanation: Now, fit the line. In order to do this, let's iterate 200 times (epochs) on the training data.
End of explanation
print(sess.run(W), sess.run(b))
Explanation: Finally, let's see if TensorFlow learned that the best fit is near W: [0.1], b: [0.3] (because, in our example, the input data were "phony" and contained some noise: the "error")
End of explanation |
478 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Restructure Data
We will sort landmark data according to stype and organize it in a two-tiered dictionary according to sample type (s) and channel (c).
Step1: Set up graph data
Step2: Graphs | Python Code:
oldlm.stype.unique()
Dlm = {}
for stype in tqdm.tqdm(oldlm.stype.unique()):
# These two lines may need to be modified based on stype structure
s = stype.split('-')[0]
c = stype.split('-')[-1]
# Add sample type dictionary if not already present
if s not in Dlm.keys():
Dlm[s] = {}
# Save sample specific landmark data to dictionary
Dlm[s][c] = oldlm[oldlm.stype==stype]
Explanation: Restructure Data
We will sort landmark data according to stype and organize it in a two-tiered dictionary according to sample type (s) and channel (c).
End of explanation
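A quick way to inspect the resulting structure (illustrative only; the 'wt' and 'AT' keys assumed here are the ones used in the graph setup below) is to list the keys at each tier and peek at one DataFrame:
print(Dlm.keys())          # sample types, e.g. 'wt' and 'you'
print(Dlm['wt'].keys())    # channels for one sample type, e.g. 'AT' and 'ZRF'
Dlm['wt']['AT'].head()     # landmark rows for a single (sample, channel) pair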
gdata = {}
gdata['you'] = {}
gdata['wt'] = {}
gdata['you']['ZRF'] = ds.graphData(Dlm['you']['ZRF'],colors[3])
gdata['wt']['ZRF'] = ds.graphData(Dlm['wt']['ZRF'],colors[1])
gdata['you']['AT'] = ds.graphData(Dlm['you']['AT'],colors[2])
gdata['wt']['AT'] = ds.graphData(Dlm['wt']['AT'],colors[0])
Explanation: Set up graph data
End of explanation
crop = 40 # microns
legend = False
save = True
a = 0.3
pthresh = 0.01
channel = 'ZRF'
fig,axr = plt.subplots(2,4,figsize=(12,5),sharey='row',sharex=True)
if crop is not None:
mask = np.where((xarr>-crop)&(xarr<crop) == True)[0]
xmin = mask.min()
xmax = mask.max()
xarrcr = xarr[xmin:xmax+1]
else:
xarrcr = xarr
for I,dtype in enumerate(['r','pts']):
go1 = gdata['wt'][channel]
go1.prepare_data(xarrcr,tarr,dtype)
go2 = gdata['you'][channel]
go2.prepare_data(xarrcr,tarr,dtype)
parr = stats.ttest_ind(go1.arr_masked,go2.arr_masked,axis=2,nan_policy='omit')[1]
parr[parr<pthresh] = 0
parr[parr>pthresh] = 1
# Plot wildtype data
go = go1
for i,p in enumerate(tpairs):
n = i
i = I
ti1 = np.where(tarr==p[0])[0][0]
ti2 = np.where(tarr==p[1])[0][0]
axr[i,n].fill_between(xarrcr,go.avg[:,ti1]+go.sem[:,ti1],go.avg[:,ti1]-go.sem[:,ti1],alpha=a,color=go.c,zorder=1)
axr[i,n].fill_between(xarrcr,-go.avg[:,ti2]+go.sem[:,ti2],-go.avg[:,ti2]-go.sem[:,ti2],alpha=a,color=go.c,zorder=1)
axr[i,n].plot(xarrcr,go.avg[:,ti1],c=go.c,zorder=2,label='{} {}'.format(go.arr.shape[-1],'wt'))
axr[i,n].plot(xarrcr,-go.avg[:,ti2],c=go.c,zorder=2)
# Plot mutant data
go = go2
for i,p in enumerate(tpairs):
n = i
i = I
ti1 = np.where(tarr==p[0])[0][0]
ti2 = np.where(tarr==p[1])[0][0]
axr[i,n].fill_between(xarrcr,go.avg[:,ti1]+go.sem[:,ti1],go.avg[:,ti1]-go.sem[:,ti1],alpha=a,color=go.c,zorder=1)
axr[i,n].fill_between(xarrcr,-go.avg[:,ti2]+go.sem[:,ti2],-go.avg[:,ti2]-go.sem[:,ti2],alpha=a,color=go.c,zorder=1)
axr[i,n].plot(xarrcr,go.avg[:,ti1],c=go.c,zorder=2,label='{} {}'.format(go.arr.shape[-1],'yot'))
axr[i,n].plot(xarrcr,-go.avg[:,ti2],c=go.c,zorder=2)
axr[i,n].scatter(xarrcr,go.avg[:,ti1],c=parr[:,ti1],cmap='Greys_r',zorder=3,vmin=0,vmax=1,edgecolor='k')
axr[i,n].scatter(xarrcr,-go.avg[:,ti2],c=parr[:,ti2],cmap='Greys_r',zorder=3,vmin=0,vmax=1,edgecolor='k')
axr[0,n].legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,ncol=2, mode="expand", borderaxespad=0.)
plt.tight_layout()
tstamp = datetime.datetime.now().strftime('%Y-%m-%d')
if save:
fig.savefig(tstamp+'_yot-wt-{}.pdf'.format(channel))
channel = 'AT'
fig,axr = plt.subplots(2,4,figsize=(12,5),sharey='row',sharex=True)
if crop is not None:
mask = np.where((xarr>-crop)&(xarr<crop) == True)[0]
xmin = mask.min()
xmax = mask.max()
xarrcr = xarr[xmin:xmax+1]
else:
xarrcr = xarr
for I,dtype in enumerate(['r','pts']):
go1 = gdata['wt'][channel]
go1.prepare_data(xarrcr,tarr,dtype)
go2 = gdata['you'][channel]
go2.prepare_data(xarrcr,tarr,dtype)
parr = stats.ttest_ind(go1.arr_masked,go2.arr_masked,axis=2,nan_policy='omit')[1]
parr[parr<pthresh] = 0
parr[parr>pthresh] = 1
# Plot wildtype data
go = go1
for i,p in enumerate(tpairs):
n = i
i = I
ti1 = np.where(tarr==p[0])[0][0]
ti2 = np.where(tarr==p[1])[0][0]
axr[i,n].fill_between(xarrcr,go.avg[:,ti1]+go.sem[:,ti1],go.avg[:,ti1]-go.sem[:,ti1],alpha=a,color=go.c,zorder=1)
axr[i,n].fill_between(xarrcr,-go.avg[:,ti2]+go.sem[:,ti2],-go.avg[:,ti2]-go.sem[:,ti2],alpha=a,color=go.c,zorder=1)
axr[i,n].plot(xarrcr,go.avg[:,ti1],c=go.c,zorder=2,label='{} {}'.format(go.arr.shape[-1],'wt'))
axr[i,n].plot(xarrcr,-go.avg[:,ti2],c=go.c,zorder=2)
# Plot mutant data
go = go2
for i,p in enumerate(tpairs):
n = i
i = I
ti1 = np.where(tarr==p[0])[0][0]
ti2 = np.where(tarr==p[1])[0][0]
axr[i,n].fill_between(xarrcr,go.avg[:,ti1]+go.sem[:,ti1],go.avg[:,ti1]-go.sem[:,ti1],alpha=a,color=go.c,zorder=1)
axr[i,n].fill_between(xarrcr,-go.avg[:,ti2]+go.sem[:,ti2],-go.avg[:,ti2]-go.sem[:,ti2],alpha=a,color=go.c,zorder=1)
axr[i,n].plot(xarrcr,go.avg[:,ti1],c=go.c,zorder=2,label='{} {}'.format(go.arr.shape[-1],'yot'))
axr[i,n].plot(xarrcr,-go.avg[:,ti2],c=go.c,zorder=2)
axr[i,n].scatter(xarrcr,go.avg[:,ti1],c=parr[:,ti1],cmap='Greys_r',zorder=3,vmin=0,vmax=1,edgecolor='k')
axr[i,n].scatter(xarrcr,-go.avg[:,ti2],c=parr[:,ti2],cmap='Greys_r',zorder=3,vmin=0,vmax=1,edgecolor='k')
axr[0,n].legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,ncol=2, mode="expand", borderaxespad=0.)
plt.tight_layout()
tstamp = datetime.datetime.now().strftime('%Y-%m-%d')
if save:
fig.savefig(tstamp+'_yot-wt-{}.pdf'.format(channel))
Explanation: Graphs
End of explanation |
479 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explanation
The HSC data is too large to store as one sqlite database file using github. So instead, it needs to be fetched by the user, separately from cloning the repository. This notebook is a work-in-progress to help automate that process, and make sure that the final schema is correct.
Sending the query
The HSC data release site provides a command line tool for querying the database; I've adapted it to run programmatically from within a python session. Check it out; it's the file hsc_release_query.py. There's a working example of a simple query in sql_tester.ipynb. This notebook rolls everything together
Step2: Get HSC Fluxes
Build the query
Gets both the fluxes and the magnitudes. The difference shouldn't matter, but now you have both, depending on what's more convenient. In general, using the flux flags with the magnitude values is what I usually do.
Step3: Make the query
The total number of objects is currently hardcoded! Make sure this hasn't changed!
The cleaner way to do this would be to make a simple query to the database, then count the number of records. But for now, hardcoding it is simpler.
Step4: Check if it worked
Step5: Combine databases
Step6: Match HSC objects to COSMOS objects
Every COSMOS galaxy will be in 1 pair. HSC galaxies can be in 0, 1 or more pairs.
Step7: Check matches
Step9: Get spec-z's matched to HSC objects
Build the query
Step10: Make the query
Step11: Check if it worked
Step13: Get FRANKEN-Z photo-z's, and then match to HSC
Build the query
There are no photo-z's with the "fake" COSMOS-field Wide images. That catalog was originally UltraDeep, degraded to being Wide-like. To most-closely match the photo-z catalogs, I'd then want to look in the UltraDeep dataset; but to most-correctly prepare for running on the true-Wide data, I'll pull my photo-z's from the Deep later. (Note
Step14: Make the query
Step15: Check if it worked
Step16: Cross reference FRANKENZ ids to general HSC ids
Step17: Copy index column to a new data frame, then only add desired columns | Python Code:
from __future__ import division, print_function
# give access to importing dwarfz
import os, sys
dwarfz_package_dir = os.getcwd().split("dwarfz")[0]
if dwarfz_package_dir not in sys.path:
sys.path.insert(0, dwarfz_package_dir)
import dwarfz
from dwarfz.hsc_credentials import credential
from dwarfz.hsc_release_query import query_wrapper
# back to regular import statements
import os, sys
import shutil
import glob
import pandas as pd
import numpy as np
import pathlib
Explanation: Explanation
The HSC data is too large to store as one sqlite database file using github. So instead, it needs to be fetched by the user, separately from cloning the repository. This notebook is a work-in-progress to help automate that process, and make sure that the final schema is correct.
Sending the query
The HSC data release site provides a command line tool for querying the database; I've adapted it to run programmatically from within a python session. Check it out; it's the file hsc_release_query.py. There's a working example of a simple query in sql_tester.ipynb. This notebook rolls everything together: querying the server, and combining the subsets into one table.
What gets saved?
This comes in two parts:
1) Get the main HSC table (position, fluxes, flags for each object)
2) Get a list of matched spec-z's
Code
Remember to set your credentials within hsc_credentials.py !
End of explanation
sql_base = """
SELECT
object_id,
ra, dec,
detect_is_patch_inner, detect_is_tract_inner, detect_is_primary,
gcmodel_flux, gcmodel_flux_err, gcmodel_flux_flags, gcmodel_mag,
rcmodel_flux, rcmodel_flux_err, rcmodel_flux_flags, rcmodel_mag,
icmodel_flux, icmodel_flux_err, icmodel_flux_flags, icmodel_mag,
zcmodel_flux, zcmodel_flux_err, zcmodel_flux_flags, zcmodel_mag,
ycmodel_flux, ycmodel_flux_err, ycmodel_flux_flags, ycmodel_mag
FROM
pdr1_cosmos_widedepth_median.forced
LIMIT
{}
OFFSET
{}
"""
Explanation: Get HSC Fluxes
Build the query
Gets both the fluxes and the magnitudes. The difference shouldn't matter, but now you have both, depending on what's more convenient. In general, using the flux flags with the magnitude values is what I usually do.
End of explanation
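As a quick check (illustrative only), the two placeholders can be filled in to see exactly what one block of the paged query will look like before it is sent:
# e.g. the first block of 250,000 rows
print(sql_base.format(250000, 0))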
n_objects = 1263503
block_size = 250000
n_blocks = (n_objects // block_size) + 1
temp_hsc_table_dir = pathlib.Path("partial_hsc_tables")
if not temp_hsc_table_dir.is_dir():
temp_hsc_table_dir.mkdir()
limit = block_size
preview_results = False
delete_job = True
out_format = "sqlite3"
for i in range(n_blocks):
offset = i*block_size
sql = sql_base.format(limit, offset)
output_filename = temp_hsc_table_dir / "tmp_{}.sqlite3".format(i)
print(" ---------------- QUERY {} -------------------- ".format(i+1))
print(sql)
with open(output_filename, mode="wb") as output_file:
query_wrapper(credential, sql, preview_results, delete_job,
out_format, output_file,
nomail=True)
Explanation: Make the query
The total number of objects is currently hardcoded! Make sure this hasn't changed!
The cleaner way to do this would be to make a simple query to the database, then count the number of records. But for now, hardcoding it is simpler.
End of explanation
database_filenames = sorted(temp_hsc_table_dir.glob("tmp_*.sqlite3"))
database_filenames
Explanation: Check if it worked
End of explanation
dfs = [pd.read_sql_table("table_1", "sqlite:///{}".format(database_filename),
index_col="object_id")
for database_filename in database_filenames]
assert(sum(df.shape[0] for df in dfs) == n_objects)
combined = pd.concat(dfs)
assert(combined.shape[0] == n_objects)
del dfs
combined.head()
for filename in database_filenames:
os.remove(filename)
if len(list(temp_hsc_table_dir.glob("*")))==0:
temp_hsc_table_dir.rmdir()
combined.keys()
hsc_database_filename = "HSC_COSMOS_median_forced.sqlite3"
hsc_database_filename_old = hsc_database_filename + ".old"
if os.path.exists(hsc_database_filename):
try:
shutil.move(hsc_database_filename, hsc_database_filename_old)
combined.to_sql("hsc", "sqlite:///{}".format(hsc_database_filename))
except:
# in case there's an error during writing, don't overwrite/delete the existing database
shutil.move(hsc_database_filename_old, hsc_database_filename)
raise
else:
# only delete if combining went successfully
os.remove(hsc_database_filename + ".old")
else:
combined.to_sql("hsc", "sqlite:///{}".format(hsc_database_filename))
Explanation: Combine databases
End of explanation
COSMOS_filename = pathlib.Path(dwarfz.data_dir_default) / "COSMOS_reference.sqlite"
COSMOS = dwarfz.datasets.COSMOS(COSMOS_filename)
COSMOS.df.head()
HSC_filename = pathlib.Path(dwarfz.data_dir_default) / "HSC_COSMOS_median_forced.sqlite3"
HSC = dwarfz.datasets.HSC(HSC_filename)
HSC.df.head()
matches = dwarfz.matching.Matches(COSMOS.df, HSC.df)
matches_filename = pathlib.Path(dwarfz.data_dir_default) / "matches.sqlite3"
if not matches_filename.exists():
matches.save_to_filename(matches_filename)
Explanation: Match HSC objects to COSMOS objects
Every COSMOS galaxy will be in 1 pair. HSC galaxies can be in 0, 1 or more pairs.
End of explanation
print("threshold (error) : {:>5.2f}".format(matches.threshold_error))
print("threshold (match) : {:>5.2f}".format(matches.threshold_match))
print("overall completeness : {:.2f} %".format(100*np.mean(matches.df.match[~matches.df.error])))
print("min separation: {:.4f} [arcsec]".format(min(matches.df.sep)))
print("max separation: {:.4f} [arcsec]".format(max(matches.df.sep)))
Explanation: Check matches
End of explanation
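An optional visual check (assuming matplotlib is available; this was not part of the original notebook) is to histogram the match separations, which makes the matching threshold easy to eyeball:
import matplotlib.pyplot as plt
plt.hist(matches.df.sep, bins=100)
plt.xlabel('separation [arcsec]')
plt.ylabel('N pairs')
plt.show()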
redshifts_sql = """
SELECT
object_id, specz_id,
d_pos,
specz_ra, specz_dec,
specz_redshift, specz_redshift_err, specz_flag_homogeneous
FROM
pdr1_cosmos_widedepth_median.specz
"""
Explanation: Get spec-z's matched to HSC objects
Build the query
End of explanation
preview_results = False
delete_job = True
out_format = "sqlite3"
output_filename = "specz.{}".format(out_format)
print(output_filename)
with open(output_filename, mode="wb") as output_file:
query_wrapper(credential, redshifts_sql, preview_results, delete_job,
out_format, output_file,
nomail=True,
)
Explanation: Make the query
End of explanation
!ls -lh specz.sqlite3
df = pd.read_sql_table("table_1",
"sqlite:///{}".format("specz.sqlite3"),
index_col="object_id")
df = df[df.specz_flag_homogeneous]
df.head()
Explanation: Check if it worked
End of explanation
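If desired (illustrative only, not part of the original workflow), the homogeneous spec-z's can be attached to the combined HSC table with an index join, since both frames are indexed by object_id:
hsc_with_specz = combined.join(df[['specz_redshift', 'specz_redshift_err']], how='left')
hsc_with_specz.specz_redshift.notnull().sum()  # number of HSC objects with a spec-z match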
photoz_sql = """
SELECT
pdr1_deep.forced.object_id,
pdr1_deep.forced.ra,
pdr1_deep.forced.dec,
pdr1_deep.photoz_frankenz.photoz_best,
pdr1_deep.photoz_frankenz.photoz_risk_best
FROM
pdr1_deep.forced
INNER JOIN pdr1_deep.photoz_frankenz
ON pdr1_deep.photoz_frankenz.object_id=pdr1_deep.forced.object_id
WHERE (ra BETWEEN 149.25 AND 151.25) AND (dec BETWEEN 1.4 AND 3);
"""
Explanation: Get FRANKEN-Z photo-z's, and then match to HSC
Build the query
There are no photo-z's with the "fake" COSMOS-field Wide images. That catalog was originally UltraDeep, degraded to being Wide-like. To most-closely match the photo-z catalogs, I'd then want to look in the UltraDeep dataset; but to most-correctly prepare for running on the true-Wide data, I'll pull my photo-z's from the Deep layer. (Note: no photo-z's have been publicly released for the Wide data within the COSMOS field, circa 8 June 2017)
End of explanation
preview_results = False
delete_job = True
out_format = "sqlite3"
output_filename = "photoz_tmp.{}".format(out_format)
print(output_filename)
with open(output_filename, mode="wb") as output_file:
query_wrapper(credential, photoz_sql, preview_results, delete_job,
out_format, output_file,
nomail=True,
)
Explanation: Make the query
End of explanation
!ls -lh photoz_tmp.sqlite3
df = pd.read_sql_table("table_1",
"sqlite:///{}".format("photoz_tmp.sqlite3"),
index_col="object_id")
df.head()
df.to_sql("FRANKENZ", "sqlite:///franken_z-DEEP-COSMOS.sqlite3",
if_exists="replace")
os.remove("photoz_tmp.sqlite3")
Explanation: Check if it worked
End of explanation
HSC_filename = pathlib.Path(dwarfz.data_dir_default) / "HSC_COSMOS_median_forced.sqlite3"
HSC = dwarfz.datasets.HSC(HSC_filename)
matches = dwarfz.matching.Matches(HSC.df, df)
matches.df["HSC_ids"] = matches.df.index
matches.df["FRANKENZ_ids"] = matches.df.catalog_2_ids
matches.df.head()
HSC.df.join(matches.df).join(df[["photoz_best",
"photoz_risk_best"]],
on="FRANKENZ_ids").head()
Explanation: Cross reference FRANKENZ ids to general HSC ids
End of explanation
HSC_photo_zs = HSC.df.copy()[[]] # only copy index column
HSC_photo_zs = HSC_photo_zs.join(matches.df[["FRANKENZ_ids"]])
HSC_photo_zs = HSC_photo_zs.join(df[["photoz_best", "photoz_risk_best"]],
on="FRANKENZ_ids")
HSC_photo_zs.head()
HSC_photo_zs.to_sql("photo_z",
"sqlite:///HSC_matched_to_FRANKENZ.sqlite",
if_exists="replace",
)
Explanation: Copy index column to a new data frame, then only add desired columns
End of explanation |
480 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TIS Analysis Framework Examples
This notebook provides an overview of the TIS analysis framework in OpenPathSampling. We start with the StandardTISAnalysis object, which will probably meet the needs of most users. Then we go into details of how to set up custom objects for analysis, and how to assemble them into a generic TISAnalysis object.
Step1: Simplified Combined Analysis
The StandardTISAnalysis object makes it very easy to perform the main TIS rate analysis. Furthermore, it caches all the intermediate results, so they can also be analyzed.
Step2: Note that there are many options for setting up the StandardTISAnalysis object. Most customizations to the analysis can be performed by changing the initialization parameters of that object; see its documentation for details.
Looking at the parts of the calculation
Once you run the rate calculation (or if you run tis_analysis.calculate(steps)), you have already cached a large number of subcalculations. All of those are available in the results dictionary, although the analysis object has a number of conveniences to access some of them.
Looking at the keys of the results dictionary, we can see what has been cached
Step3: In practice, however, we won't go directly to the results dictionary. We'd rather use the convenience methods that make it easier to get to the interesting results.
We'll start by looking at the flux
Step4: Next we look at the total crossing probability (i.e., the crossing probability, joined by WHAM) for each sampled transition. We could also look at this per physical transition, but of course $A\to B$ and $A\to C$ are identical in MSTIS -- only the initial state matters.
Step5: We may want to look in more detail at one of these, by checking the per-ensemble crossing probability (as well as the total crossing probability). Here we select based on the $A\to B$ transition; we would get the same results if we selected the transition using either trans = network.from_state[stateA] or trans = network.transitions[(stateA, stateC)].
Step6: Finally, we look at the last part of the rate calculation
Step7: Individual components of the analysis
The combined analysis is the easiest way to perform analysis, but if you need to customize things (or if you want to compare different calculation methods) you might want to create objects for components of the analysis individually. Note that unlike the StandardTISAnalysis object, these do not cache their intermediate results.
Flux from the minus move
Step8: To calculate the fluxes, we use the .calculate method of the MinusMoveFlux object
Step9: The minus move flux calculates some intermediate information along the way, which can be of use for further analysis. This is cached when using the StandardTISAnalysis, but can always be recalculated. The intermediate maps each (state, interface) pair to a dictionary. For details on the structure of that dictionary, see the documentation of TrajectoryTransitionAnalysis.analyze_flux.
Step10: Flux from existing dictionary
The DictFlux class (which is required for MISTIS, and often provides better statistics than the minus move flux in other cases) takes a pre-calculated flux dictionary for initialization, and always returns that dictionary. The dictionary is in the same format as the fluxes returned by the MinusMoveFlux.calculate method; here, we'll just use the results we calculated above
Step11: Note that DictFlux.calculate just echoes back the dictionary we gave it, so it doesn't actually care if we give it the steps argument or not
Step12: This object can be used to provide the flux part of the TIS calculation, in exactly the same way a MinusMoveFlux object does.
Total crossing probability function
To calculate the total crossing probability, we must first calculate the individual ensemble crossing probabilities. This is done by creating a histogram of the maximum values of the order parameter. The class to do that is FullHistogramMaxLambdas. Then we'll create the TotalCrossingProbability.
Step13: We can also change the function used to calculate the maximum value of the order parameter with the max_lambda_func parameter. This can be useful to calculate the crossing probabilities along some other order parameter.
To calculate the total crossing probability function, we also need a method for combining the ensemble crossing probability functions. We'll use the default WHAM here; see its documentation for details on how it can be customized.
Step14: Now we can put these together into the total crossing probability function
Step15: Conditional transition probability
The last part of the standard calculation is the conditional transition probability. We'll make a version of this that works for all ensembles
Step16: StandardTISAnalysis.conditional_transition_probability converts this into a pandas.DataFrame, which gives prettier printing. However, the same data is included in this dict-of-dict structure.
Assembling a TIS analysis from scratch
If you're using the "standard" TIS approach, then the StandardTISAnalysis object is the most efficient way to do it. However, if you want to use another analysis approach, it can be useful to see how the "standard" approach can be assembled.
This won't have all the shortcuts or saved intermediates that the specialized object does, but it will use the same algorithms to get the same results.
Step17: Some of the objects that we created in previous sections can be reused here. In particular, there is only one flux calculation and only one conditional transition probability per reaction network. However, the total crossing probability method is dependent on the transition (different order parameters might have different histogram parameters). So we need to associate each transition with a different TotalCrossingProbability object. In this example, we take the default behavior of WHAM (instead of specifying it explicitly, as above).
Step18: The general TISAnalysis object makes the most simple splitting
Step19: Finally we put this all together into a TISAnalysis object, and calculate the rate matrix. | Python Code:
%%time
storage = paths.AnalysisStorage(filename)
network = storage.networks[0]
scheme = storage.schemes[0]
stateA = storage.volumes['A']
stateB = storage.volumes['B']
stateC = storage.volumes['C']
all_states = [stateA, stateB, stateC] # all_states gives the ordering
Explanation: TIS Analysis Framework Examples
This notebook provides an overview of the TIS analysis framework in OpenPathSampling. We start with the StandardTISAnalysis object, which will probably meet the needs of most users. Then we go into details of how to set up custom objects for analysis, and how to assemble them into a generic TISAnalysis object.
End of explanation
from openpathsampling.analysis.tis import StandardTISAnalysis
# the scheme is only required if using the minus move for the flux
tis_analysis = StandardTISAnalysis(
network=network,
scheme=scheme,
max_lambda_calcs={t: {'bin_width': 0.05, 'bin_range': (0.0, 0.5)}
for t in network.sampling_transitions}
)
%%time
tis_analysis.rate_matrix(steps=storage.steps).to_pandas(order=all_states)
Explanation: Simplified Combined Analysis
The StandardTISAnalysis object makes it very easy to perform the main TIS rate analysis. Furthermore, it caches all the intermediate results, so they can also be analyzed.
End of explanation
tis_analysis.results.keys()
Explanation: Note that there are many options for setting up the StandardTISAnalysis object. Most customizations to the analysis can be performed by changing the initialization parameters of that object; see its documentation for details.
Looking at the parts of the calculation
Once you run the rate calculation (or if you run tis_analysis.calculate(steps)), you have already cached a large number of subcalculations. All of those are available in the results dictionary, although the analysis object has a number of conveniences to access some of them.
Looking at the keys of the results dictionary, we can see what has been cached:
End of explanation
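For example (illustrative only; the exact key names depend on the network and the OpenPathSampling version, which is why we list keys() first), any cached intermediate can be pulled straight out of the dictionary:
first_key = list(tis_analysis.results.keys())[0]  # pick one of the cached subcalculations
tis_analysis.results[first_key]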
tis_analysis.flux_matrix
Explanation: In practice, however, we won't go directly to the results dictionary. We'd rather use the convenience methods that make it easier to get to the interesting results.
We'll start by looking at the flux:
End of explanation
for transition in network.sampling_transitions:
label = transition.name
tcp = tis_analysis.total_crossing_probability[transition]
plt.plot(tcp.x, np.log(tcp), label=label)
plt.title("Total Crossing Probability")
plt.xlabel("$\lambda$")
plt.ylabel("$\ln(P(\lambda | X_0))$")
plt.legend();
Explanation: Next we look at the total crossing probability (i.e., the crossing probability, joined by WHAM) for each sampled transition. We could also look at this per physical transition, but of course $A\to B$ and $A\to C$ are identical in MSTIS -- only the initial state matters.
End of explanation
state_pair = (stateA, stateB)
trans = network.transitions[state_pair]
for ens in trans.ensembles:
crossing = tis_analysis.crossing_probability(ens)
label = ens.name
plt.plot(crossing.x, np.log(crossing), label=label)
tcp = tis_analysis.total_crossing_probability[state_pair]
plt.plot(tcp.x, np.log(tcp), '-k', label="total crossing probability")
plt.title("Crossing Probabilities, " + stateA.name + " -> " + stateB.name)
plt.xlabel("$\lambda$")
plt.ylabel("$\ln(P_A(\lambda | \lambda_i))$")
plt.legend();
Explanation: We may want to look in more detail at one of these, by checking the per-ensemble crossing probability (as well as the total crossing probability). Here we select based on the $A\to B$ transition; we would get the same results if we selected the transition using either trans = network.from_state[stateA] or trans = network.transitions[(stateA, stateC)].
End of explanation
tis_analysis.conditional_transition_probability
Explanation: Finally, we look at the last part of the rate calculation: the conditional transition probability. This is calculated for the outermost interface in each interface set.
End of explanation
from openpathsampling.analysis.tis import MinusMoveFlux
flux_calc = MinusMoveFlux(scheme)
Explanation: Individual components of the analysis
The combined analysis is the easiest way to perform analysis, but if you need to customize things (or if you want to compare different calculation methods) you might want to create objects for components of the analysis individually. Note that unlike the StandardTISAnalysis object, these do not cache their intermediate results.
Flux from the minus move
End of explanation
%%time
fluxes = flux_calc.calculate(storage.steps)
fluxes
Explanation: To calculate the fluxes, we use the .calculate method of the MinusMoveFlux object:
End of explanation
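A more readable way to look at these (a sketch, assuming the dictionary keys are (state, interface) volume pairs as described above) is to loop over the dictionary and print the state names:
for (state, interface), flux in fluxes.items():
    print(state.name, flux)  # flux out of each state through its innermost interface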
%%time
flux_dicts = flux_calc.intermediates(storage.steps)[0]
flux_dicts
Explanation: The minus move flux calculates some intermediate information along the way, which can be of use for further analysis. This is cached when using the StandardTISAnalysis, but can always be recalculated. The intermediate maps each (state, interface) pair to a dictionary. For details on the structure of that dictionary, see the documentation of TrajectoryTransitionAnalysis.analyze_flux.
End of explanation
from openpathsampling.analysis.tis import DictFlux
dict_flux = DictFlux(fluxes)
dict_flux.calculate(storage.steps)
Explanation: Flux from existing dictionary
The DictFlux class (which is required for MISTIS, and often provides better statistics than the minus move flux in other cases) takes a pre-calculated flux dictionary for initialization, and always returns that dictionary. The dictionary is in the same format as the fluxes returned by the MinusMoveFlux.calculate method; here, we'll just use the results we calculated above:
End of explanation
dict_flux.calculate(None)
Explanation: Note that DictFlux.calculate just echoes back the dictionary we gave it, so it doesn't actually care if we give it the steps argument or not:
End of explanation
transition = network.sampling_transitions[0]
print(transition)
from openpathsampling.analysis.tis import FullHistogramMaxLambdas, TotalCrossingProbability
from openpathsampling.numerics import WHAM
max_lambda_calc = FullHistogramMaxLambdas(
transition=transition,
hist_parameters={'bin_width': 0.05, 'bin_range': (0.0, 0.5)}
)
Explanation: This object can be used to provide the flux part of the TIS calculation, in exactly the same way a MinusMoveFlux object does.
Total crossing probability function
To calculate the total crossing probability, we must first calculate the individual ensemble crossing probabilities. This is done by creating a histogram of the maximum values of the order parameter. The class to do that is FullHistogramMaxLambdas. Then we'll create the TotalCrossingProbability.
End of explanation
combiner = WHAM(interfaces=transition.interfaces.lambdas)
Explanation: We can also change the function used to calculate the maximum value of the order parameter with the max_lambda_func parameter. This can be useful to calculate the crossing probabilities along some other order parameter.
To calculate the total crossing probability function, we also need a method for combining the ensemble crossing probability functions. We'll use the default WHAM here; see its documentation for details on how it can be customized.
End of explanation
total_crossing = TotalCrossingProbability(
max_lambda_calc=max_lambda_calc,
combiner=combiner
)
tcp = total_crossing.calculate(storage.steps)
plt.plot(tcp.x, np.log(tcp))
plt.title("Total Crossing Probability, exiting " + transition.stateA.name)
plt.xlabel("$\lambda$")
plt.ylabel("$\ln(P_A(\lambda | \lambda_i))$")
Explanation: Now we can put these together into the total crossing probability function:
End of explanation
from openpathsampling.analysis.tis import ConditionalTransitionProbability
outermost_ensembles = [trans.ensembles[-1] for trans in network.sampling_transitions]
cond_transition = ConditionalTransitionProbability(
ensembles=outermost_ensembles,
states=network.states
)
ctp = cond_transition.calculate(storage.steps)
ctp
Explanation: Conditional transition probability
The last part of the standard calculation is the conditional transition probability. We'll make a version of this that works for all ensembles:
End of explanation
from openpathsampling.analysis.tis import StandardTransitionProbability, TISAnalysis
Explanation: StandardTISAnalysis.conditional_transition_probability converts this into a pandas.DataFrame, which gives prettier printing. However, the same data is included in this dict-of-dict structure.
Assembling a TIS analysis from scratch
If you're using the "standard" TIS approach, then the StandardTISAnalysis object is the most efficient way to do it. However, if you want to use another analysis approach, it can be useful to see how the "standard" approach can be assembled.
This won't have all the shortcuts or saved intermediates that the specialized object does, but it will use the same algorithms to get the same results.
End of explanation
tcp_methods = {
trans: TotalCrossingProbability(
max_lambda_calc=FullHistogramMaxLambdas(
transition=trans,
hist_parameters={'bin_width': 0.05, 'bin_range': (0.0, 0.5)}
)
)
for trans in network.transitions.values()
}
Explanation: Some of the objects that we created in previous sections can be reused here. In particular, there is only one flux calculation and only one conditional transition probability per reaction network. However, the total crossing probability method is dependent on the transition (different order parameters might have different histogram parameters). So we need to associate each transition with a different TotalCrossingProbability object. In this example, we take the default behavior of WHAM (instead of specifying it explicitly, as above).
End of explanation
transition_probability_methods = {
trans: StandardTransitionProbability(
transition=trans,
tcp_method=tcp_methods[trans],
ctp_method=cond_transition
)
for trans in network.transitions.values()
}
Explanation: The general TISAnalysis object makes the most simple splitting: flux and transition probability. A single flux calculation is used for all transitions, but each transition has a different transition probability (since each transition can have a different total crossing probability). We make those objects here.
End of explanation
analysis = TISAnalysis(
network=network,
flux_method=dict_flux,
transition_probability_methods=transition_probability_methods
)
analysis.rate_matrix(storage.steps)
Explanation: Finally we put this all together into a TISAnalysis object, and calculate the rate matrix.
End of explanation |
481 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ANLP 2015 Text Classification Assignment
Emily Scharff and Juan Shishido
Write Up
Introduction
This notebook contains the code and documentation that we used to obtain our score of 0.58541 on the public leaderboard for the ANLP 2015 Classification Assignment. We describe our text processing, feature engineering, and model selection approaches. We both worked on feature engineering and model selection. Juan spent time at the beginning setting up tfidf and Emily created and tested many features. Juan also experimented with tweaking the model parameters. Both Juan and Emily contributed to setting up the workflow.
Text Processing
The data were loaded into pandas DataFrames. We began plotting the frequency of each category in the training set and noticed that the distribution was not uniform. Category 1, for example, was the most well represented with 769 questions. Category 6, on the other hand, had the least amount of questions—232. This would prove to be a good insight and we'll describe how we used this to our advantage.
In terms of processing the data, our approach was not to modify the original text. Rather, we created a new column, text_clean, that reflected our changes.
While examining the plain-text training data, we noticed sequences of HTML escaped characters, such as &#xd;&lt;br&gt;, which we removed with a regular expression. We also removed non-alphanumeric characters and replaced whitespace with single spaces.
Features and Models
In terms of features, we started simple, using a term-document matrix that only included word frequencies. We also decided to get familiar with a handful of algorithms. We used our word features to train logistic regression and multinomial naive Bayes models. Using Scikit-Learn's cross_validation function, we were surprised to find initial scores of around 50% accuracy.
From here, we deviated somewhat and tried document similarity. Using the training data, we combined questions, by category. Our thought was to create seven "documents," one for each category, that represented the words used for the corresponding questions. This resulted in a $7 \times w$ matrix, where $w$ represents the number of unique words across documents. This was created using Scikit-Learn's TfidfVectorizer. For the test data, the matrix was of dimension $w \times q$, where $q$ represents the number of questions. Note that $w$ is the same in each of our matrices. This is so that it's possible to perform matrix multiplication. Of course, the cosine_similarity function, the metric we decided to use, takes care of some of the implementation details. Our first submission was based on this approach. We then stemmed the words in our corpus, using the Porter Stemmer, and that increased our score slightly.
Before proceeding, we decided to use Scikit-Learn's train_test_split function to create a development set—20% of the training data—on which to test our models. To fit our models, we used the remaining 80% of the original training data.
In our next iteration, we went back to experimenting with logistic regression and naive Bayes, but also added a linear support vector classifier. Here, we also started to add features. Because we were fitting a model, we did not combine questions by category. Rather, our tfidf feature matrix had a row for each question.
We tried many features. We ended up with the following list
Step9: Functions
Step10: Data
Load
Step11: Clean
Step12: Features
Step13: Split the training data
Step14: tfidf
Step15: Combine
Step16: Training
Step17: Testing on dev
Step18: Test Data | Python Code:
%matplotlib inline
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import regexp_tokenize
from nltk.stem.porter import PorterStemmer
from sklearn import cross_validation
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score
plt.style.use('ggplot')
Explanation: ANLP 2015 Text Classification Assignment
Emily Scharff and Juan Shishido
Write Up
Introduction
This notebook contains the code and documentation that we used to obtain our score of 0.58541 on the public leaderboard for the ANLP 2015 Classification Assignment. We describe our text processing, feature engineering, and model selection approaches. We both worked on feature engineering and model selection. Juan spent time at the beginning setting up tfidf and Emily created and tested many features. Juan also experimented with tweaking the model parameters. Both Juan and Emily contributed to setting up the workflow.
Text Processing
The data were loaded into pandas DataFrames. We began plotting the frequency of each category in the training set and noticed that the distribution was not uniform. Category 1, for example, was the most well represented with 769 questions. Category 6, on the other hand, had the least amount of questions—232. This would prove to be a good insight and we'll describe how we used this to our advantage.
In terms of processing the data, our approach was not to modify the original text. Rather, we created a new column, text_clean, that reflected our changes.
While examining the plain-text training data, we noticed sequences of HTML escaped characters, such as &#xd;&lt;br&gt;, which we removed with a regular expression. We also removed non-alphanumeric characters and replaced whitespace with single spaces.
Features and Models
In terms of features, we started simple, using a term-document matrix that only included word frequencies. We also decided to get familiar with a handful of algorithms. We used our word features to train logistic regression and multinomial naive Bayes models. Using Scikit-Learn's cross_validation function, we were surprised to find initial scores of around 50% accuracy.
From here, we deviated somewhat and tried document similarity. Using the training data, we combined questions, by category. Our thought was to create seven "documents," one for each category, that represented the words used for the corresponding questions. This resulted in a $7 \times w$ matrix, where $w$ represents the number of unique words across documents. This was created using Scikit-Learn's TfidfVectorizer. For the test data, the matrix was of dimension $w \times q$, where $q$ represents the number of questions. Note that $w$ is the same in each of our matrices. This is so that it's possible to perform matrix multiplication. Of course, the cosine_similarity function, the metric we decided to use, takes care of some of the implementation details. Our first submission was based on this approach. We then stemmed the words in our corpus, using the Porter Stemmer, and that increased our score slightly.
Before proceeding, we decided to use Scikit-Learn's train_test_split function to create a development set—20% of the training data—on which to test our models. To fit our models, we used the remaining 80% of the original training data.
In our next iteration, we went back to experimenting with logistic regression and naive Bayes, but also added a linear support vector classifier. Here, we also started to add features. Because we were fitting a model, we did not combine questions by category. Rather, our tfidf feature matrix had a row for each question.
We tried many features. We ended up with the following list:
number of question marks
number of periods
number of apostrophes
number of "the"s
number of words
number of stop words
number of first person words
number of second person words
number of third person words
indicators for whether the first word was in ['what', 'how', 'why', 'is']
Other features we tried
Unigrams: This feature was used to check for the occurrence of certain unigrams, just as in John's Scikit-Learn notebook. We used it to check for the most frequent words in each category. Using the 500 most frequent words in each category performed the best. However, this performance was outstripped by a simple tfidf and, when combined, only lowered the score.
Numeric: The goal of this feature was to check if a certain question used numbers. The idea was that certain categories, such as math, would use numbers more frequently than others, such as entertainment. In practice it did not work out that well.
Similarity: Here we used WordNet's similarity to see how similar the words in the question were to the question's category. This performed quite poorly. We believe this was due to the fact that the similarity function is not that accurate.
POS: We added a feature to count the number of a particular part of speech. We tested it with nouns, verbs, and adjectives. Interestingly, the verbs performed the best. However, in combination with the other features we chose, it seemed to hurt the performance.
Median length: Without tfidf, including the length of the median word of a question greatly increased the categorization accuracy. However, after using tfidf, the median length only detracted from the score. Because tfidf performed better, we did not include it in the final set of features.
Names: This feature checked if a particular question contained a name. This worked better than counting the number of names. This is likely due to a lack of data. Overall, the number of questions with names in the training set is small so you can get better classification by only making the feature return
Other processing
We also stemmed the words prior to passing them through the TfidfVectorizer.
When we noticed some misspelled words, we tried using Peter Norvig's correct function, but it did not improve our accuracy scores.
One thing that was helpful was the plots we created when assessing the various models. We plotted the predicted labels against the ground truth. (An example of this is included below.) This helped us see, right away, that the linear SVC was performing best across all the permutations of features we tried. This is how we eventually decided to stick with that algorithm.
During one of the iterations, we noticed that the naive Bayes model was incorrectly predicting category 1 for a majority of the data. We remembered the distribution of categories mentioned earlier and decided to sample the other categories at higher frequencies. We took the original training data, and then drew a random sample of questions from categories 2 through 7. After some experimentation, we decided to sample an extra 1,200 observations. This strategy helped improve our score.
We also spent time examining and analyzing the confidence scores using the decision_function() method. The idea here was to see if we could identify patterns in how the classifier was incorrectly labeling the development set. Unfortunately, we were not able to use this information to improve our scores.
Finally, because of all the testing we had done, we had several results files, which included results we did not submit. With this data, we used a bagging approach—majority vote—to get a "final" classification on the 1,874 test examples. This, unfortunately, did not improve our score.
Our best result on the public leaderboard was from a single linear support vector classifier using tfidf and the features listed above.
Code
Imports
End of explanation
def sample(df, n=1000, include_cats=[2, 3, 4, 5, 6, 7], random_state=1868):
"""Take a random sample of size `n` for categories in `include_cats`."""
df = df.copy()
subset = df[df.Category.isin(include_cats)]
sample = subset.sample(n, random_state=random_state)
return sample
def clean_text(df, col):
"""A function for keeping only alpha-numeric characters and replacing all white space with a single space."""
df = df.copy()
porter_stemmer = PorterStemmer()
return df[col].apply(lambda x: re.sub(';br&', ';&', x))\
.apply(lambda x: re.sub('&.+?;', '', x))\
.apply(lambda x: re.sub('[^A-Za-z0-9]+', ' ', x.lower()))\
.apply(lambda x: re.sub('\s+', ' ', x).strip())\
.apply(lambda x: ' '.join([porter_stemmer.stem(w)
for w in x.split()]))
def count_pattern(df, col, pattern):
"""Count the occurrences of `pattern` in df[col]."""
df = df.copy()
return df[col].str.count(pattern)
def split_on_sentence(text):
"""Tokenize the text on sentences. Returns a list of strings (sentences)."""
sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
return sent_tokenizer.tokenize(text)
def split_on_word(text):
"""Use regular expression tokenizer. Keep apostrophes. Returns a list of lists, one list for each sentence: [[word, word], [word, word, ..., word], ...]."""
if type(text) is list:
return [regexp_tokenize(sentence, pattern="\w+(?:[-']\w+)*")
for sentence in text]
else:
return regexp_tokenize(text, pattern="\w+(?:[-']\w+)*")
def features(df):
"""Create the features in the specified DataFrame."""
stop_words = stopwords.words('english')
df = df.copy()
df['n_questionmarks'] = count_pattern(df, 'Text', '\?')
df['n_periods'] = count_pattern(df, 'Text', '\.')
df['n_apostrophes'] = count_pattern(df, 'Text', '\'')
df['n_the'] = count_pattern(df, 'Text', 'the ')
df['first_word'] = df.text_clean.apply(lambda x: split_on_word(x)[0])
question_words = ['what', 'how', 'why', 'is']
for w in question_words:
col_wc = 'n_' + w
col_fw = 'fw_' + w
df[col_fw] = (df.first_word == w) * 1
del df['first_word']
df['n_words'] = df.text_clean.apply(lambda x: len(split_on_word(x)))
df['n_stopwords'] = df.text_clean.apply(lambda x:
len([w for w in split_on_word(x)
if w not in stop_words]))
df['n_first_person'] = df.text_clean.apply(lambda x:
sum([w in person_first
for w in x.split()]))
df['n_second_person'] = df.text_clean.apply(lambda x:
sum([w in person_second
for w in x.split()]))
df['n_third_person'] = df.text_clean.apply(lambda x:
sum([w in person_third
for w in x.split()]))
return df
def flatten_words(list1d, get_unique=False):
qa = [s.split() for s in list1d]
if get_unique:
return sorted(list(set([w for sent in qa for w in sent])))
else:
return [w for sent in qa for w in sent]
def tfidf_matrices(tr, te, col='text_clean'):
"""Returns tfidf matrices for both the training and test DataFrames. The matrices will have the same number of columns, which represent unique words, but not the same number of rows, which represent samples."""
tr = tr.copy()
te = te.copy()
text = tr[col].values.tolist() + te[col].values.tolist()
vocab = flatten_words(text, get_unique=True)
tfidf = TfidfVectorizer(stop_words='english', vocabulary=vocab)
tr_matrix = tfidf.fit_transform(tr.text_clean)
te_matrix = tfidf.fit_transform(te.text_clean)
return tr_matrix, te_matrix
def concat_tfidf(df, matrix):
df = df.copy()
df = pd.concat([df, pd.DataFrame(matrix.todense())], axis=1)
return df
def jitter(values, sd=0.25):
"""Jitter points for use in a scatterplot."""
return [np.random.normal(v, sd) for v in values]
person_first = ['i', 'we', 'me', 'us', 'my', 'mine', 'our', 'ours']
person_second = ['you', 'your', 'yours']
person_third = ['he', 'she', 'it', 'him', 'her', 'his', 'hers', 'its']
Explanation: Functions
End of explanation
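As a quick illustration of the cleaning step (not part of the original pipeline; the example sentence is made up), clean_text lowercases the text, strips HTML escapes and punctuation, and stems each word:
toy = pd.DataFrame({'Text': ["What's the BEST way to learn Python?? &lt;br&gt;"]})
clean_text(toy, 'Text')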
training = pd.read_csv('../data/newtrain.csv')
test = pd.read_csv('../data/newtest.csv')
Explanation: Data
Load
End of explanation
training['text_clean'] = clean_text(training, 'Text')
test['text_clean'] = clean_text(test, 'Text')
Explanation: Clean
End of explanation
training = features(training)
test = features(test)
Explanation: Features
End of explanation
train, dev = cross_validation.train_test_split(training, test_size=0.2, random_state=1868)
train = train.append(sample(train, n=800))
train.reset_index(drop=True, inplace=True)
dev.reset_index(drop=True, inplace=True)
Explanation: Split the training data
End of explanation
train_matrix, dev_matrix = tfidf_matrices(train, dev)
Explanation: tfidf
End of explanation
train = concat_tfidf(train, train_matrix)
dev = concat_tfidf(dev, dev_matrix)
Explanation: Combine
End of explanation
svm = LinearSVC(dual=False, max_iter=5000)
features = train.columns[3:]
X = train[features].values
y = train['Category'].values
features_dev = dev[features].values
Explanation: Training
End of explanation
svm.fit(X, y)
dev_predicted = svm.predict(features_dev)
accuracy_score(dev.Category, dev_predicted)
plt.figure(figsize=(6, 5))
plt.scatter(jitter(dev.Category, 0.15),
jitter(dev_predicted, 0.15),
color='#348ABD', alpha=0.25)
plt.title('Support Vector Classifier\n')
plt.xlabel('Ground Truth')
plt.ylabel('Predicted')
Explanation: Testing on dev
End of explanation
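A numeric view of the same comparison (optional; it assumes scikit-learn's confusion_matrix, which was not used in the original notebook) shows which categories get confused most often:
from sklearn.metrics import confusion_matrix
confusion_matrix(dev.Category, dev_predicted)  # rows: ground truth, columns: predicted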
training = training.append(sample(training, n=1200))
training.reset_index(drop=True, inplace=True)
training_matrix, test_matrix = tfidf_matrices(training, test)
training = concat_tfidf(training, training_matrix)
test = concat_tfidf(test, test_matrix)
features = training.columns[3:]
X = training[features].values
y = training['Category'].values
features_test = test[features].values
svm.fit(X, y)
test_predicted = svm.predict(features_test)
test['Category'] = test_predicted
output = test[['Id', 'Category']]
Explanation: Test Data
End of explanation |
482 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Importance Reweighting
Setup
First, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.
Step2: Importance Sampling
Nested Sampling provides both a set of samples and their associated importance weights. These are exactly analogous to those provided by importance sampling, where we want to estimate some function $f(\mathbf{x})$ relative to a target distribution $p(\mathbf{x})$, using a proposal distribution $q(\mathbf{x})$ and $N$ Monte Carlo samples $\mathbf{x}_i$ drawn from $q(\mathbf{x})$, via
$$
\mathbb{E}[f(\mathbf{x})] = \int f(\mathbf{x}) p(\mathbf{x}) d\mathbf{x}
= \int f(\mathbf{x}) q(\mathbf{x}) \frac{p(\mathbf{x})}{q(\mathbf{x})} d\mathbf{x}
\approx \frac{1}{N} \sum_{i=1}^{N} f(\mathbf{x}_i) \frac{p(\mathbf{x}_i)}{q(\mathbf{x}_i)}
\: \text{where} \:
\mathbf{x}_i \sim q(\mathbf{x})
$$
Step4: We'll again define our prior (via prior_transform) to be uniform in each dimension from -10 to 10 and 0 everywhere else.
Step5: Let's first generate samples from this target distribution.
Step7: Now let's generate samples from the uncorrelated version with the same priors.
Step8: Comparing our results shows these distributions are somewhat different.
Step9: Let's use importance reweighting to adjust each of our samples to try and approximate the other distribution. | Python Code:
# system functions that are always useful to have
import time, sys, os
# basic numeric setup
import numpy as np
from numpy import linalg
# inline plotting
%matplotlib inline
# plotting
import matplotlib
from matplotlib import pyplot as plt
# seed the random number generator
rstate = np.random.default_rng(510)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'font.size': 30})
import dynesty
Explanation: Importance Reweighting
Setup
First, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.
End of explanation
ndim = 3 # number of dimensions
C = np.identity(ndim) # set covariance to identity matrix
C[C==0] = 0.95 # set off-diagonal terms (strongly correlated)
Cinv = linalg.inv(C) # precision matrix
lnorm = -0.5 * (np.log(2 * np.pi) * ndim + np.log(linalg.det(C))) # ln(normalization)
# 3-D correlated multivariate normal log-likelihood
def loglikelihood(x):
"""Multivariate normal log-likelihood."""
return -0.5 * np.dot(x, np.dot(Cinv, x)) + lnorm
Explanation: Importance Sampling
Nested Sampling provides both a set of samples and their associated importance weights. These are exactly analogous to those provided by importance sampling, where we want to estimate some function $f(\mathbf{x})$ relative to a target distribution $p(\mathbf{x})$, using a proposal distribution $q(\mathbf{x})$ and $N$ Monte Carlo samples $\mathbf{x}_i$ drawn from $q(\mathbf{x})$, via
$$
\mathbb{E}[f(\mathbf{x})] = \int f(\mathbf{x}) p(\mathbf{x}) d\mathbf{x}
= \int f(\mathbf{x}) q(\mathbf{x}) \frac{p(\mathbf{x})}{q(\mathbf{x})} d\mathbf{x}
\approx \frac{1}{N} \sum_{i=1}^{N} f(\mathbf{x}_i) \frac{p(\mathbf{x}_i)}{q(\mathbf{x}_i)}
\: \text{where} \:
\mathbf{x}_i \sim q(\mathbf{x})
$$
This means that by assigning each sample $\mathbf{x}_i$ its importance weight $w_i \equiv p(\mathbf{x}_i) / q(\mathbf{x}_i)$, we can estimate any posterior-related quantity as well as its integral (i.e. the evidence). In Nested Sampling, $q(\mathbf{x})$ is constructed/estimated from the actual sampling process.
Within an importance sampling framework, it is straightforward to update to a new target distribution $p^\prime(\mathbf{x})$ using the previous set of importance weights since
$$
w^\prime_i \equiv \frac{p^\prime(\mathbf{x}_i)}{q(\mathbf{x}_i)}
= \frac{p^\prime(\mathbf{x}_i)}{p(\mathbf{x}_i)}\frac{p(\mathbf{x}_i)}{q(\mathbf{x}_i)} =
\frac{p^\prime(\mathbf{x}_i)}{p(\mathbf{x}_i)} w_i
$$
Since the Nested Sampling weights are importance weights, it is also straightforward to update them to a new target distribution if we want to "swap out" our posteriors.
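In code that update is just an elementwise rescaling of the existing importance weights. A minimal numpy sketch (the arrays below are made-up stand-ins, not dynesty outputs; the update is done in log space for numerical stability):
import numpy as np

rng = np.random.default_rng(0)
# stand-ins: per-sample log-densities under the old and new targets,
# plus the current (normalized) importance weights
logp_old = rng.normal(size=1000)
logp_new = logp_old + rng.normal(scale=0.1, size=1000)   # a slightly "tweaked" target
weights = np.ones(1000) / 1000

# w'_i = w_i * p'(x_i) / p(x_i)
logw_new = np.log(weights) + (logp_new - logp_old)
logw_new -= logw_new.max()                 # stabilize before exponentiating
weights_new = np.exp(logw_new)
weights_new /= weights_new.sum()           # renormalize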
There are two important caveats to this:
1. This process can only work if the pre-existing samples have sufficient coverage, meaning that they span the majority of the new target distribution. If they don't encompass the majority of the new parameter space, the results will be inevitably biased.
2. In addition to reasonable coverage, samples must also be sufficiently dense relative to the new target distribution. If samples are sparse, then reweighting can lead to much noisier estimates.
As a result, importance reweighting is most useful when "tweaking" a distribution and least useful when trying to make big changes.
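A quick diagnostic for the second caveat is the (Kish) effective sample size of the reweighted samples; if it collapses to a small fraction of the number of samples, the reweighted estimates will be noisy. A sketch (the helper name is ours; the commented usage assumes a dynesty results object exposing logwt):
import numpy as np

def effective_sample_size(weights):
    # Kish effective sample size: (sum of weights)^2 / sum(weights^2)
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

# e.g. ess = effective_sample_size(np.exp(results.logwt - results.logwt.max()))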
3-D Multivariate Normal
We will demonstrate importance reweighting using 3-D multivariate Normal distributions. First, we will define the correlated version used in previous examples.
End of explanation
# prior transform
def prior_transform(u):
Transforms our unit cube samples `u` to a flat prior between -10. and 10. in each variable.
return 10. * (2. * u - 1.)
Explanation: We'll again define our prior (via prior_transform) to be uniform in each dimension from -10 to 10 and 0 everywhere else.
End of explanation
# initialize our nested sampler
dsampler = dynesty.DynamicNestedSampler(loglikelihood, prior_transform, ndim=3,
bound='single', sample='unif', rstate=rstate)
dsampler.run_nested(maxiter=50000, use_stop=False)
dres = dsampler.results
Explanation: Let's first generate samples from this target distribution.
End of explanation
C2 = np.identity(ndim) # set covariance to identity matrix
Cinv2 = linalg.inv(C2) # precision matrix
lnorm2 = -0.5 * (np.log(2 * np.pi) * ndim + np.log(linalg.det(C2))) # ln(normalization)
# 3-D uncorrelated (identity-covariance) multivariate normal log-likelihood
def loglikelihood2(x):
Multivariate normal log-likelihood.
return -0.5 * np.dot(x, np.dot(Cinv2, x)) + lnorm2
dsampler2 = dynesty.DynamicNestedSampler(loglikelihood2, prior_transform, ndim=3,
bound='single', sample='unif', rstate=rstate)
dsampler2.run_nested(maxiter=50000, use_stop=False)
dres2 = dsampler2.results
Explanation: Now let's generate samples from the uncorrelated version with the same priors.
End of explanation
# plot results
from dynesty import plotting as dyplot
lnz_truth = ndim * -np.log(2 * 10.) # analytic evidence solution
fig, axes = dyplot.runplot(dres, color='blue')
fig, axes = dyplot.runplot(dres2, color='red',
lnz_truth=lnz_truth, truth_color='black',
fig=(fig, axes))
fig.tight_layout()
# initialize figure
fig, axes = plt.subplots(3, 7, figsize=(35, 15))
axes = axes.reshape((3, 7))
[a.set_frame_on(False) for a in axes[:, 3]]
[a.set_xticks([]) for a in axes[:, 3]]
[a.set_yticks([]) for a in axes[:, 3]]
# plot initial run (left)
fg, ax = dyplot.cornerplot(dres, color='blue', truths=[0., 0., 0.], truth_color='black',
show_titles=True, max_n_ticks=3, title_kwargs={'y': 1.05},
quantiles=None, fig=(fig, axes[:, :3]))
# plot extended run (right)
fg, ax = dyplot.cornerplot(dres2, color='red', truths=[0., 0., 0.], truth_color='black',
show_titles=True, title_kwargs={'y': 1.05},
quantiles=None, max_n_ticks=3, fig=(fig, axes[:, 4:]))
Explanation: Comparing our results shows these distributions are somewhat different.
End of explanation
# compute new log-likelihoods
logl = np.array([loglikelihood(s) for s in dres2.samples])
logl2 = np.array([loglikelihood2(s) for s in dres.samples])
# reweight results
dres_rwt = dynesty.utils.reweight_run(dres, logp_new=logl2)
dres2_rwt = dynesty.utils.reweight_run(dres2, logp_new=logl)
# initialize figure
fig, axes = plt.subplots(3, 7, figsize=(35, 15))
axes = axes.reshape((3, 7))
[a.set_frame_on(False) for a in axes[:, 3]]
[a.set_xticks([]) for a in axes[:, 3]]
[a.set_yticks([]) for a in axes[:, 3]]
# plot initial run (left)
fg, ax = dyplot.cornerplot(dres_rwt, color='blue', truths=[0., 0., 0.], truth_color='black',
show_titles=True, max_n_ticks=3, title_kwargs={'y': 1.05},
quantiles=None, fig=(fig, axes[:, :3]))
# plot extended run (right)
fg, ax = dyplot.cornerplot(dres2_rwt, color='red', truths=[0., 0., 0.], truth_color='black',
show_titles=True, title_kwargs={'y': 1.05},
quantiles=None, max_n_ticks=3, fig=(fig, axes[:, 4:]))
# plot results
fig, axes = dyplot.runplot(dres_rwt, color='blue')
fig, axes = dyplot.runplot(dres2_rwt, color='red',
lnz_truth=lnz_truth, truth_color='black',
fig=(fig, axes))
fig.tight_layout()
Explanation: Let's use importance reweighting to adjust each of our samples to try to approximate the other distribution.
End of explanation |
483 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Titanic
Step1: Load the train and test datasets to create two DataFrames
Step2: Print the 'head' of the train and test dataframes
Step3: Understanding your data
Step4: Rose vs Jack, or Female vs Male
Passengers that survived vs passengers that passed away
Step5: As proportions
Step6: Males that survived vs males that passed away
Step7: Females that survived vs Females that passed away
Step8: Normalized male survival
Step9: Normalized female survival
Step10: Does age play a role?
Create the column Child and assign to 'NaN'
Step11: Assign 1 to passengers under 18, 0 to those 18 or older. Print the new column.
Step12: Print normalized Survival Rates for passengers under 18
Step13: Print normalized Survival Rates for passengers 18 or older
Step14: First prediction
Create a copy of test
Step15: Initialize a Survived column to 0
Step16: Set Survived to 1 if Sex equals "female" and print the Survived column from test_one
Step17: Cleaning and Formatting your Data
Convert the male and female groups to integer form
Step18: Impute the Embarked variable
Step19: Convert the Embarked classes to integer form
Step20: Print the Sex and Embarked columns
Step21: Creating your first decision tree
Import the Numpy library
Step22: Import 'tree' from scikit-learn library
Step23: Print the train data to see the available features
Step24: Fill the NaN values
Step25: Create the target and features numpy arrays
Step26: Fit your first decision tree
Step27: Look at the importance and score of the included features
Step28: Predict and submit to Kaggle
Impute the missing value with the median
Step29: Extract the features from the test set
Step30: Make your prediction using the test set
Step31: Create a data frame with two columns
Step32: Check that your data frame has 418 entries
Step33: Write your solution to a csv file with the name my_solution.csv
Step34: Overfitting and how to control it
Create a new array with the added features
Step35: Control overfitting by setting "max_depth" to 10 and "min_samples_split" to 5
Step36: Print the score of the new decison tree
Step37: Feature-engineering for our Titanic data set
Step38: A Random Forest analysis in Python
Import the RandomForestClassifier
Step39: We want the Pclass, Age, Sex, Fare,SibSp, Parch, and Embarked variables
Step40: Building and fitting my_forest
Step41: Print the score of the fitted random forest
Step42: Compute predictions on our test set features then print the length of the prediction vector
Step43: Interpreting and Comparing
Request and print the .feature_importances_ attribute
Step44: Compute and print the mean accuracy score for both models | Python Code:
import pandas as pd
Explanation: Titanic: Machine Learning from Disaster
Get the Data with Pandas
Import the Pandas library
End of explanation
train_url = "http://s3.amazonaws.com/assets.datacamp.com/course/Kaggle/train.csv"
train = pd.read_csv(train_url)
test_url = "http://s3.amazonaws.com/assets.datacamp.com/course/Kaggle/test.csv"
test = pd.read_csv(test_url)
Explanation: Load the train and test datasets to create two DataFrames
End of explanation
print(train.head())
print(test.head())
Explanation: Print the 'head' of the train and test dataframes
End of explanation
print(train.shape)
print(test.shape)
print(train.describe())
print(test.describe())
Explanation: Understanding your data
End of explanation
print(train["Survived"].value_counts())
Explanation: Rose vs Jack, or Female vs Male
Passengers that survived vs passengers that passed away
End of explanation
print(train["Survived"].value_counts(normalize=True))
Explanation: As proportions
End of explanation
print(train["Survived"][train["Sex"] == 'male'].value_counts())
Explanation: Males that survived vs males that passed away
End of explanation
print(train["Survived"][train["Sex"] == 'female'].value_counts())
Explanation: Females that survived vs Females that passed away
End of explanation
print(train["Survived"][train["Sex"] == 'male'].value_counts(normalize=True))
Explanation: Normalized male survival
End of explanation
print(train["Survived"][train["Sex"] == 'female'].value_counts(normalize=True))
Explanation: Normalized female survival
End of explanation
train["Child"] = float('NaN')
Explanation: Does age play a role?
Create the column Child and assign to 'NaN'
End of explanation
train["Child"][train["Age"] < 18] = 1
train["Child"][train["Age"] >= 18] = 0
print(train['Child'])
Explanation: Assign 1 to passengers under 18, 0 to those 18 or older. Print the new column.
End of explanation
print(train["Survived"][train["Child"] == 1].value_counts(normalize = True))
Explanation: Print normalized Survival Rates for passengers under 18
End of explanation
print(train["Survived"][train["Child"] == 0].value_counts(normalize = True))
Explanation: Print normalized Survival Rates for passengers 18 or older
End of explanation
test_one = test
Explanation: First prediction
Create a copy of test: test_one
End of explanation
test_one['Survived'] = 0
Explanation: Initialize a Survived column to 0
End of explanation
test_one['Survived'][test_one['Sex'] == "female"] = 1
print(test_one['Survived'])
Explanation: Set Survived to 1 if Sex equals "female" and print the Survived column from test_one
End of explanation
train["Sex"][train["Sex"] == "male"] = 0
train["Sex"][train["Sex"] == "female"] = 1
test["Sex"][test["Sex"] == "male"] = 0
test["Sex"][test["Sex"] == "female"] = 1
Explanation: Cleaning and Formatting your Data
Convert the male and female groups to integer form
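As an aside, chained indexing like train["Sex"][train["Sex"] == "male"] = 0 can trigger pandas' SettingWithCopyWarning; an equivalent sketch using .loc avoids that:
train.loc[train["Sex"] == "male", "Sex"] = 0
train.loc[train["Sex"] == "female", "Sex"] = 1
test.loc[test["Sex"] == "male", "Sex"] = 0
test.loc[test["Sex"] == "female", "Sex"] = 1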
End of explanation
train["Embarked"] = train["Embarked"].fillna('S')
test["Embarked"] = test["Embarked"].fillna('S')
Explanation: Impute the Embarked variable
End of explanation
train["Embarked"][train["Embarked"] == "S"] = 0
train["Embarked"][train["Embarked"] == "C"] = 1
train["Embarked"][train["Embarked"] == "Q"] = 2
test["Embarked"][test["Embarked"] == "S"] = 0
test["Embarked"][test["Embarked"] == "C"] = 1
test["Embarked"][test["Embarked"] == "Q"] = 2
Explanation: Convert the Embarked classes to integer form
End of explanation
print(train["Embarked"])
print(train["Sex"])
print(test["Embarked"])
print(test["Sex"])
Explanation: Print the Sex and Embarked columns
End of explanation
import numpy as np
Explanation: Creating your first decision tree
Import the Numpy library
End of explanation
from sklearn import tree
Explanation: Import 'tree' from scikit-learn library
End of explanation
print(train)
Explanation: Print the train data to see the available features
End of explanation
train[["Pclass", "Sex", "Age", "Fare"]] = train[["Pclass", "Sex", "Age", "Fare"]].fillna(train[["Pclass", "Sex", "Age", "Fare"]].median())
print(train)
Explanation: Fill the NaN values
End of explanation
target = train["Survived"].values
features_one = train[["Pclass", "Sex", "Age", "Fare"]].values
Explanation: Create the target and features numpy arrays: target, features_one
End of explanation
my_tree_one = tree.DecisionTreeClassifier()
my_tree_one = my_tree_one.fit(features_one, target)
Explanation: Fit your first decision tree: my_tree_one
End of explanation
print(my_tree_one.feature_importances_)
print(my_tree_one.score(features_one, target))
Explanation: Look at the importance and score of the included features
End of explanation
#test.Fare[152] = test.Fare.median()
test[["Pclass", "Sex", "Age", "Fare"]] = test[["Pclass", "Sex", "Age", "Fare"]].fillna(test[["Pclass", "Sex", "Age", "Fare"]].median())
Explanation: Predict and submit to Kaggle
Impute the missing value with the median
End of explanation
test_features = test[["Pclass", "Sex", "Age", "Fare"]].values
Explanation: Extract the features from the test set: Pclass, Sex, Age, and Fare.
End of explanation
first_prediction = my_tree_one.predict(test_features)
print(first_prediction)
Explanation: Make your prediction using the test set
End of explanation
PassengerId =np.array(test["PassengerId"]).astype(int)
print(PassengerId.shape)
first_solution = pd.DataFrame(first_prediction, PassengerId, columns = ["Survived"])
print(first_solution)
Explanation: Create a data frame with two columns: PassengerId & Survived. Survived contains your predictions
End of explanation
print(first_solution.shape)
Explanation: Check that your data frame has 418 entries
End of explanation
first_solution.to_csv("../submissions/first_solution.csv", index_label = ["PassengerId"])
Explanation: Write your solution to a csv file with the name my_solution.csv
End of explanation
features_two = train[["Pclass","Age","Sex","Fare", "SibSp", "Parch", "Embarked"]].values
Explanation: Overfitting and how to control it
Create a new array with the added features: features_two
End of explanation
max_depth = 10
min_samples_split = 5
my_tree_two = tree.DecisionTreeClassifier(max_depth = 10, min_samples_split = 5, random_state = 1)
my_tree_two = my_tree_two.fit(features_two, target)
Explanation: Control overfitting by setting "max_depth" to 10 and "min_samples_split" to 5 : my_tree_two
End of explanation
print(my_tree_two.score(features_two, target))
test_features_two = test[["Pclass","Age","Sex","Fare", "SibSp", "Parch", "Embarked"]].values
second_prediction = my_tree_two.predict(test_features_two)
print(second_prediction)
print(second_prediction.shape)
#PassengerId =np.array(test["PassengerId"]).astype(int)
second_solution = pd.DataFrame(second_prediction, PassengerId, columns = ["Survived"])
print(second_solution)
print(second_solution.shape)
second_solution.to_csv("../submissions/second_solution.csv", index_label = ["PassengerId"])
Explanation: Print the score of the new decison tree
End of explanation
# Create train_two with the newly defined feature
train_two = train.copy()
train_two["family_size"] = train_two["SibSp"] + train_two["Parch"] + 1
# Create a new feature set and add the new feature
features_three = train_two[["Pclass", "Sex", "Age", "Fare", "SibSp", "Parch", "family_size"]].values
# Define the tree classifier, then fit the model
my_tree_three = tree.DecisionTreeClassifier()
my_tree_three = my_tree_three.fit(features_three, target)
# Print the score of this decision tree
print(my_tree_three.score(features_three, target))
Explanation: Feature-engineering for our Titanic data set
End of explanation
from sklearn.ensemble import RandomForestClassifier
Explanation: A Random Forest analysis in Python
Import the RandomForestClassifier
End of explanation
features_forest = train[["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]].values
target = train["Survived"]
Explanation: We want the Pclass, Age, Sex, Fare,SibSp, Parch, and Embarked variables
End of explanation
forest = RandomForestClassifier(max_depth = 10, min_samples_split=2, n_estimators = 100, random_state = 1)
my_forest = forest.fit(features_forest, target)
Explanation: Building and fitting my_forest
End of explanation
print(my_forest.score(features_forest, target))
Explanation: Print the score of the fitted random forest
End of explanation
test_features = test[["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]].values
pred_forest = my_forest.predict(test_features)
print(len(pred_forest))
PassengerId =np.array(test["PassengerId"]).astype(int)
third_solution = pd.DataFrame(pred_forest, PassengerId, columns = ["Survived"])
print(third_solution)
print(third_solution.shape)
third_solution.to_csv("../submissions/third_solution.csv", index_label = ["PassengerId"])
Explanation: Compute predictions on our test set features then print the length of the prediction vector
End of explanation
print(my_tree_two.feature_importances_)
print(my_forest.feature_importances_)
Explanation: Interpreting and Comparing
Request and print the .feature_importances_ attribute
End of explanation
print(my_tree_two.score(features_two, target))
print(my_forest.score(features_forest, target))
Explanation: Compute and print the mean accuracy score for both models
End of explanation |
484 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Representation of data submission workflow components based on W3C-PROV
Step1: The model follows the concepts described in https://www.w3.org/TR/prov-primer/
Step2: Example name spaces
(from DOI
Step3: assign information to provenance graph nodes and edges
Step4: Transform submission object to a provenance graph | Python Code:
%load_ext autoreload
%autoreload 2
from prov.model import ProvDocument
d1 = ProvDocument()
d1.deserialize?
Explanation: Representation of data submission workflow components based on W3C-PROV
End of explanation
from IPython.display import display, Image
Image(filename='key-concepts.png')
from dkrz_forms import form_handler
#from project_cordex import cordex_dict
#from project_cordex import name_space
# add namespaces for submission provenance capture
#for key,value in name_space.iteritems():
# d1.add_namespace(key,value)
#d1.add_namespace()
# to do: look into some predefined vocabs, e.g. dublin core, iso19139,foaf etc.
d1.add_namespace("enes_entity",'http://www.enes.org/enes_entitiy#')
d1.add_namespace('enes_agent','http://www.enes.org/enes_agent#')
d1.add_namespace('data_collection','http://www.enes.org/enes_entity/file_collection')
d1.add_namespace('data_manager','http://www.enes.org/enes_agent/data_manager')
d1.add_namespace('data_provider','http://www.enes.org/enes_agent/data_provider')
d1.add_namespace('subm','http://www.enes.org/enes_entity/data_submsission')
d1.add_namespace('foaf','http://xmlns.com/foaf/0.1/')
Explanation: The model follows the concepts described in https://www.w3.org/TR/prov-primer/
End of explanation
# later: organize things in bundles
data_manager_ats = {'foaf:givenName':'Peter','foaf:mbox':'[email protected]'}
d1.entity('sub:empty')
def add_stage(agent,activity,in_state,out_state):
# in_state already exists, out_state is generated
d1.agent(agent, data_manager_ats)
d1.activity(activity)
d1.entity(out_state)
d1.wasGeneratedBy(out_state,activity)
d1.used(activity,in_state)
d1.wasAssociatedWith(activity,agent)
d1.wasDerivedFrom(out_state,in_state)
import json
form_file = open('/home/stephan/tmp/CORDEX/Kindermann_test1.json',"r")
json_info = form_file.read()
#json_info["__type__"] = "sf",
form_file.close()
sf_dict = json.loads(json_info)
sf = form_handler.dict_to_form(sf_dict)
print sf.__dict__
data_provider = sf.first_name+'_'+sf.last_name
submission_manager = sf.sub['responsible_person']
ingest_manager = sf.ing['responsible_person']
qa_manager = sf.ing['responsible_person']
publication_manager = sf.pub['responsible_person']
add_stage(agent='data_provider:test_user_id',activity='subm:submit',in_state="subm:empty",out_state='subm:out1_sub')
add_stage(agent='data_manager:peter_lenzen_id',activity='subm:review',in_state="subm:out1_sub",out_state='subm:out1_rev')
add_stage(agent='data_manager:peter_lenzen_id',activity='subm:ingest',in_state="subm:out1_rev",out_state='subm:out1_ing')
add_stage(agent='data_manager:hdh_id',activity='subm:check',in_state="subm:out1_ing",out_state='subm:out1_che')
add_stage(agent='data_manager:katharina_b_id',activity='subm:publish',in_state="subm:out1_che",out_state='subm:out1_pub')
add_stage(agent='data_manager:lta_id',activity='subm:archive',in_state="subm:out1_pub",out_state='subm:out1_arch')
Explanation: Example name spaces
(from DOI: 10.3390/ijgi5030038, more at https://github.com/tsunagun/vocab/blob/master/all_20130125.csv)
owl     Web Ontology Language                  http://www.w3.org/2002/07/owl#
dctype  DCMI Type Vocabulary                   http://purl.org/dc/dcmitype/
dco     DCO Ontology                           http://info.deepcarbon.net/schema#
prov    PROV Ontology                          http://www.w3.org/ns/prov#
skos    Simple Knowledge Organization System   http://www.w3.org/2004/02/skos/core#
foaf    FOAF Ontology                          http://xmlns.com/foaf/0.1/
vivo    VIVO Ontology                          http://vivoweb.org/ontology/core#
bibo    Bibliographic Ontology                 http://purl.org/ontology/bibo/
xsd     XML Schema Datatype                    http://www.w3.org/2001/XMLSchema#
rdf     Resource Description Framework         http://www.w3.org/1999/02/22-rdf-syntax-ns#
rdfs    Resource Description Framework Schema  http://www.w3.org/2000/01/rdf-schema#
End of explanation
%matplotlib inline
d1.plot()
d1.wasAttributedTo(data_submission,'????')
Explanation: assign information to provenance graph nodes and edges
End of explanation
#d1.get_records()
submission = d1.get_record('subm:out1_sub')[0]
review = d1.get_record('subm:out1_rev')[0]
ingest = d1.get_record('subm:out1_ing')[0]
check = d1.get_record('subm:out1_che')[0]
publication = d1.get_record('subm:out1_pub')[0]
lta = d1.get_record('subm:out1_arch')[0]
res = form_handler.prefix_dict(sf.sub,'sub',sf.sub.keys())
res['sub:status']="fertig"
print res
ing = form_handler.prefix_dict(sf.ing,'ing',sf.ing.keys())
che = form_handler.prefix_dict(sf.che,'che',sf.che.keys())
pub = form_handler.prefix_dict(sf.pub,'pub',sf.pub.keys())
submission.add_attributes(res)
ingest.add_attributes(ing)
check.add_attributes(che)
publication.add_attributes(pub)
che_act = d1.get_record('subm:check')
tst = che_act[0]
test_dict = {'subm:test':'test'}
tst.add_attributes(test_dict)
print tst
tst.FORMAL_ATTRIBUTES
tst.
che_act = d1.get_record('subm:check')
#tst.formal_attributes
#tst.FORMAL_ATTRIBUTES
tst.add_attributes({'foaf:name':'tst'})
print tst.attributes
#for i in tst:
# print i
#tst.insert([('subm:givenName','sk')])
import sys
sys.path.append('/home/stephan/Repos/ENES-EUDAT/submission_forms')
from dkrz_forms import form_handler
sf,repo = form_handler.init_form("CORDEX")
init_dict = sf.__dict__
sub_form = form_handler.prefix(sf,'subm',sf.__dict__.keys())
sub_dict = sub_form.__dict__
#init_state = d1.get_record('subm:empty')[0]
#init_state.add_attributes(init_dict)
sub_state = d1.get_record('subm:out1_sub')[0]
init_state.add_attributes(sub_dict)
tst_dict = {'test1':'val1','test2':'val2'}
tst = form_handler.submission_form(tst_dict)
print tst.__dict__
print result.__dict__
dict_from_class(sf)
Explanation: Transform submission object to a provenance graph
End of explanation |
485 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 4 - Inheritance and abstraction. Graphical User Interfaces (GUIs)
Learning Objectives
Describe inheritance in the context of object oriented programming
List situations in which inheritance is useful
Create an abstract class
Contrast control abstraction with data abstraction
Implement a simple graphic user interface
Last week we looked at several example projects and the classes we might use to implement them.
Example 1
Step1: There is one other situation we should consider. Occasionally we will want a class of a particular type to always implement a particular method even though we are unable to implement that method in our parent class. We need some way of raising an error when the parent class is inherited and the method is not implemented.
As a simple example consider a class representing length. We might create classes for meters, miles, feet, etc. Keeping the original units when performing operations (adding, subtracting, etc) would prevent rounding errors but each class would need custom logic.
Returning to our laboratory inventory system one way we can implement this is below
Step2: A disadvantage with this approach is we only see the error message when we call the method. The error is in the way we implemented the class so it would be more intuitive to get an error earlier, when we first create the object.
This can be achieved using the abstract method decorator.
Step3: Either of these approaches work well for adding new methods or completely changing the behaviour of a method. Often we only need to make a more subtle change. In this situation it can be useful to call a method from a parent class while only implementing our new functionality in the child class.
There are two approaches for this.
Step4: Using super() is usually the best approach, the reasons for this are covered in detail in this blog post
Multiple Inheritance
We are not limited to inheriting from a single class. It is possible to merge functionality from multiple different classes simply by inheriting from them.
When inheriting from multiple classes that contain a method or attribute with the same name there is a particular order in which the names are resolved.
Step5: A simple rule-of-thumb is that search is depth first. The details are a little more complicated.
isinstance
Often we need to check whether a particular variable is an instance of a particular class. For example, returning to our laboratory inventory system we would want to check that we only add instances of Item or its subclasses to our storage locations.
Step6: Duck typing
A popular alternative in python is duck typing, an approach named after the idea that,
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
What this means for programming is that instead of checking for a particular class, we simply check for the methods and attributes that are actually needed.
class Item(object):
def __init__(self, name, description, location):
self.name = name
self.description = description
self.location = location
def update_location(self, new_location):
pass
class Equipment(Item):
pass
class Consumable(Item):
def __init__(self, name, description, location, initial_quantity, current_quantity, storage_temp, flammability):
self.name = name
self.description = description
self.location = location
self.initial_quantity = initial_quantity
self.current_quantity = current_quantity
self.flammability = flammability
def update_quantity_remaining(self, amount):
pass
Explanation: Week 4 - Inheritance and abstraction. Graphical User Interfaces (GUIs)
Learning Objectives
Describe inheritance in the context of object oriented programming
List situations in which inheritance is useful
Create an abstract class
Contrast control abstraction with data abstraction
Implement a simple graphic user interface
Last week we looked at several example projects and the classes we might use to implement them.
Example 1: A Laboratory Inventory
I would like to keep track of all the items in the laboratory so I can easily find them the next time I need them. Both equipment and consumables would be tracked. We have multiple rooms, and items can be on shelves, in refrigerators, in freezers, etc. Items can also be in boxes containing other items in all these places.
The words in bold would all be good ideas to turn into classes. Now we know some of the classes we will need we can start to think about what each of these classes should do, what the methods will be. Let's consider the consumables class:
For consumables we will need to manage their use so there should be an initial quantity and a quantity remaining that is updated every time we use some. We want to make sure that temperature sensitive consumables are always stored at the correct temperature, and that flammables are stored in a flammables cabinet etc.
The consumable class will need a number of attributes:
Initial quantity
Current quantity
Storage temperature
Flammability
The consumable class will need methods to:
Update the quantity remaining
Check for improper storage?
The consumable class might interact with the shelf, refrigerator, freezer, and/or box classes.
Reading back through our description of consumables there is reference to a flammables cabinet that was not mentioned in our initial description of the problem. This is an iterative design process so we should go back and add a flammables cabinet class.
If we expand our list to all the classes we plan to use we get the following:
Items
Attributes
Name
Description
Location
Methods
Update location
Interactions
Every other class except items and consumables
Laboratory
Attributes
?
Methods
Search
Interactions
Every other class
Equipment
Attributes
Name
Description
Location
Methods
Update location
Interactions
Every other class except items and consumables
Consumables
Attributes
Name
Description
Location
Initial quantity
Current quantity
Storage temperature
Flammability
Methods
Update location
Update quantity remaining
Check for appropriate storage
Interactions
Every other class except equipment and items
Rooms
Attributes
Name
Description
Location
Storage locations within this location
Items stored here
Methods
Search
Interactions
Every other class
Shelves
Attributes
Name
Description
Location
Storage locations within this location
Items stored here
Methods
Search
Interactions
Every other class possible although refrigerator and freezer are unlikely
Refrigerators
Attributes
Name
Description
Location
Storage locations within this location
Items stored here
Temperature
Methods
Search
Interactions
Every other class possible although freezer and flammables cabinet unlikely
Freezers
Attributes
Name
Description
Location
Storage locations within this location
Items stored here
Temperature
Methods
Search
Interactions
Every other class possible although refrigerator and flammables cabinet unlikely
Boxes
Attributes
Name
Description
Location
Storage locations within this location
Items stored here
Methods
Search
Interactions
Every other class
Flammables Cabinet
Attributes
Name
Description
Location
Storage locations within this location
Items stored here
Methods
Search
Interactions
Every other class possible although refrigerator and freezer unlikely
Although this is a long list careful examination reveals that there is a lot of repetition.
Items and equipment are identical and consumables is similar, adding several extra attributes and methods.
Rooms, shelves, refrigerators, freezers, boxes and flammables cabinet are all similar, only differing in the occasional attribute.
Our three main groups are:
* Laboratory
* Items (Items, equipment, and consumables)
* Locations (Rooms, shelves, refrigerators, freezers, boxes and flammables cabinet)
So much duplication is problematic: it is difficult to maintain and carries a greater risk of bugs.
There is a better way - we can create a generic class with the shared functionality and then inherit from it when we create the other classes.
For example an Item class would contain the basic attributes and methods. The Equipment class could then inherit from this class without modification. The Consumable class would also inherit from Item and only add the extra attributes and methods uniquely needed by the Consumable class.
End of explanation
class Item(object):
def safely_stored(self):
raise NotImplementedError('override in subclass')
class Consumable(Item):
def safely_stored(self):
return True
a = Item()
a.safely_stored()
b = Consumable()
b.safely_stored()
Explanation: There is one other situation we should consider. Occasionally we will want a class of a particular type to always implement a particular method even though we are unable to implement that method in our parent class. We need some way of raising an error when the parent class is inherited and the method is not implemented.
As a simple example consider a class representing length. We might create classes for meters, miles, feet, etc. Keeping the original units when performing operations (adding, subtracting, etc) would prevent rounding errors but each class would need custom logic.
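A minimal sketch of that length idea, using the same raise-in-the-parent pattern (the class and method names here are ours):
class Length(object):
    def __init__(self, value):
        self.value = value
    def to_meters(self):
        raise NotImplementedError('override in subclass')

class Meters(Length):
    def to_meters(self):
        return self.value

class Feet(Length):
    def to_meters(self):
        return self.value * 0.3048

# each unit keeps its own value and only converts when asked
print(Feet(10).to_meters() + Meters(2).to_meters())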
Returning to our laboratory inventory system one way we can implement this is below:
End of explanation
from abc import ABCMeta, abstractmethod
class Item(metaclass=ABCMeta):
@abstractmethod
def safely_stored(self):
pass
class Consumable(Item):
def safely_stored(self):
return True
a = Item()
b = Consumable()
b.safely_stored()
Explanation: A disadvantage with this approach is we only see the error message when we call the method. The error is in the way we implemented the class so it would be more intuitive to get an error earlier, when we first create the object.
This can be achieved using the abstract method decorator.
End of explanation
class A(object):
def a(self):
print('a in class A')
class B(A):
def a(self):
A.a(self)
print('b in class B')
a = A()
a.a()
b = B()
b.a()
class A(object):
def a(self):
print('a in class A')
class B(A):
def a(self):
super().a()
print('b in class B')
a = A()
a.a()
b = B()
b.a()
Explanation: Either of these approaches work well for adding new methods or completely changing the behaviour of a method. Often we only need to make a more subtle change. In this situation it can be useful to call a method from a parent class while only implementing our new functionality in the child class.
There are two approaches for this.
End of explanation
class A(object):
def a(self):
print('A-a')
class A2(object):
def a(self):
print('A2-a')
class B(A, A2): # A comes first in the MRO, so A.a is the method inherited
pass
a = A()
a.a()
a2 = A2()
a2.a()
b = B()
b.a()
class A(object):
def a(self):
print('A-a')
class A2(object):
def a(self):
print('A2-a')
class B(A):
pass
class C(B, A2):
pass
a = A()
a.a()
a2 = A2()
a2.a()
c = C()
c.a()
Explanation: Using super() is usually the best approach, the reasons for this are covered in detail in this blog post
Multiple Inheritance
We are not limited to inheriting from a single class. It is possible to merge functionality from multiple different classes simply by inheriting from them.
When inheriting from multiple classes that contain a method or attribute with the same name there is a particular order in which the names are resolved.
End of explanation
isinstance(a, Item)
isinstance(b, Consumable)
isinstance(b, Item)
isinstance(a, Consumable)
Explanation: A simple rule-of-thumb is that search is depth first. The details are a little more complicated.
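The exact lookup order (the method resolution order, or MRO) can be inspected directly, which also shows why c.a() above printed 'A-a':
print(C.__mro__)
# -> (C, B, A, A2, object): A appears before A2, so A.a is the one that is found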
isinstance
Often we need to check whether a particular variable is an instance of a particular class. For example, returning to our laboratory inventory system we would want to check that we only add instances of Item or its subclasses to our storage locations.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot([1,2,3,4,5,4,3,2,1], color=(0,1,0)) # can substitute a tuple instead of a string, duck typing
Explanation: Duck typing
A popular alternative in python is duck typing, an approach named after the idea that,
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
What this means for programming is that instead of checking for a particular class, we simply check for the methods and attributes that are actually needed.
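A minimal sketch of that style (the function and class here are ours): rely on the attribute you need and handle its absence, rather than testing the type:
def describe(item):
    # duck typing: anything with a .name attribute will do, whatever its class
    try:
        return "Item called {}".format(item.name)
    except AttributeError:
        return "No name attribute to work with"

class Sample(object):
    name = "buffer A"

print(describe(Sample()))  # has a name, so it "quacks like" an item
print(describe(42))        # no name attribute, handled gracefully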
End of explanation |
486 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pi Day Fun
March 14, 2016
Updated since.
Step2: Reference Pi
Step5: The Youtube above describes how to use successive primes in successive terms to build a running product that converges to 2/pi.
Step7: Ramanujan<br />
Pi Contest
Below is a famous one from Ramanujan. Why it works I'm not sure anyone knows exactly.
Thanks to a change in Python 3.8, factorial no longer accepts the Decimal type. I've had to update the code for forward compatibility.
Here's a corresponding script on repl.it.
Step9: The generator below gives successive digits of Pi. | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo("HrRMnzANHHs")
Explanation: Pi Day Fun
March 14, 2016
Updated since.
End of explanation
from fractions import Fraction
from itertools import count, islice
from decimal import Decimal, localcontext
def convert(f):
get a Decimal from a Fraction (and multiply by 4)
return (Decimal(f.numerator) / Decimal(f.denominator)) * Decimal(4)
def pi_series():
"...converges very slowly"
denoms = count(1,2) # odd numbers from 1
total = Fraction(1,next(denoms)) # 1/1
while True:
yield total
total -= Fraction(1, next(denoms)) # - 1/3
total += Fraction(1, next(denoms)) # + 1/5 and so on
def nth(iterable, n, default=None):
"Returns the nth item or a default value"
return next(islice(iterable, n, None), default)
with localcontext() as ctx: # <-- context manager object
ctx.prec = 3000
pi = pi_series()
print("{0}".format(convert(nth(pi, 1000)))[:10])
Explanation: Reference Pi:
<pre>
3.14159265358979323846264338327950288419716939937510
58209749445923078164062862089986280348253421170679
82148086513282306647093844609550582231725359408128
48111745028410270193852110555964462294895493038196
44288109756659334461284756482337867831652712019091
45648566923460348610454326648213393607260249141273
72458700660631558817488152092096282925409171536436
78925903600113305305488204665213841469519415116094
33057270365759591953092186117381932611793105118548
07446237996274956735188575272489122793818301194912
98336733624406566430860213949463952247371907021798
60943702770539217176293176752384674818467669405132
00056812714526356082778577134275778960917363717872
14684409012249534301465495853710507922796892589235
42019956112129021960864034418159813629774771309960
51870721134999999837297804995105973173281609631859
50244594553469083026425223082533446850352619311881
71010003137838752886587533208381420617177669147303
59825349042875546873115956286388235378759375195778
18577805321712268066130019278766111959092164201989
</pre>
The alternating sum of the reciprocals of the odd numbers, accumulated as a running total, converges to pi/4, albeit slowly...
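For reference, the series being accumulated is the Leibniz formula:
$$
\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots
$$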
End of explanation
def Primes():
generate successive prime numbers (trial by division)
candidate = 1
_primes_so_far = [2] # first prime, only even prime
yield _primes_so_far[-1]
while True:
candidate += 2 # check odds only from now on
for prev in _primes_so_far:
if prev**2 > candidate:
yield candidate
_primes_so_far.append(candidate)
break
if not divmod(candidate, prev)[1]: # no remainder!
break # done looping
p = Primes()
print([next(p) for _ in range(100)]) # next 30 primes please!
def convert(f):
get a Decimal from a Fraction (by taking its reciprocal)
return (Decimal(f.denominator) / Decimal(f.numerator))
def Pi():
primes = Primes()
result = Fraction(1,1)
while True:
p = next(primes)
if divmod(p, 4)[1] == 1:
term = (1 + Fraction(1,p))
else:
term = (1 - Fraction(1,p))
result *= term
yield result
with localcontext() as ctx: # <-- context manager object
ctx.prec = 300 # feel free to boost
pi = Pi()
print("{0}".format(convert(nth(pi, 333)))[:10])
# print("{0}".format(convert(nth(pi, 3000)))[:20])
Explanation: The Youtube above describes how to use successive primes in successive terms to build a running product that converges to 2/pi.
End of explanation
from math import factorial as fact
def pieinsky():
Ramanujan's: converges relatively quickly
c1 = Decimal(4)
c2 = Decimal(1103)
c3 = Decimal(26390)
c4 = Decimal(396)
c5 = Decimal(9801)
# code formatted for readability (make it be one line)
root8 = Decimal('8').sqrt()
i = Decimal(0)
thesum = Decimal(0)
while True:
# explicit casts to int create forward compatibility
term = (fact(int(c1*i))*(c2 + c3*i))/(pow(fact(int(i)),4)*pow(c4,4*i))
thesum = thesum + term
yield 1/((root8/c5)*thesum)
i += 1
with localcontext() as ctx: # <-- context manager object
ctx.prec = 1000
pi = pieinsky()
print("{0}".format(nth(pi, 100))[:100])
Explanation: Ramanujan<br />
Pi Contest
Below is a famous one from Ramanujan. Why it works I'm not sure anyone knows exactly.
Thanks to a change in Python 3.8, factorial no longer accepts the Decimal type. I've had to update the code for forward compatibility.
Here's a corresponding script on repl.it.
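For reference, the constants 1103, 26390, 396 and 9801 in the code come from Ramanujan's series
$$
\frac{1}{\pi} = \frac{2\sqrt{2}}{9801} \sum_{k=0}^{\infty} \frac{(4k)!\,(1103 + 26390k)}{(k!)^4\,396^{4k}}
$$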
End of explanation
Another generator example: converging to Pi
https://mail.python.org/pipermail/edu-sig/2015-September/date.html
def pi():
k, a, b, a1, b1 = 2, 4, 1, 12, 4
while True:
p, q, k = k*k, 2*k+1, k+1
a, b, a1, b1 = a1, b1, p*a+q*a1, p*b+q*b1
d, d1 = a/b, a1/b1
while d == d1:
yield int(d)
a, a1 = 10*(a%b), 10*(a1%b1)
d, d1 = a/b, a1/b1
if __name__ == "__main__":
the_gen = pi()
for _ in range(100):
print(next(the_gen),end="")
print()
gen = pi()
type(gen)
next(gen)
next(gen)
Explanation: The generator below gives successive digits of Pi.
End of explanation |
487 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first proposed in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
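Formally, the two networks play a minimax game; with discriminator $D$ and generator $G$, the objective from the original GAN paper is
$$
\min_G \max_D \; \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
$$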
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, shape=(None,real_dim),name="inputs_real")
inputs_z = tf.placeholder(tf.float32, shape=(None,z_dim),name="inputs_z")
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha*h1,h1)
# Logits and tanh output
logits = tf.layers.dense(inputs=h1, units=out_dim, activation=None)
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
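Since the MNIST loader above returns pixel values in [0, 1], that rescaling is a one-liner; a sketch of what preparing a training batch might look like (the batch size of 64 is just an example):
batch = mnist.train.next_batch(64)           # (images, labels)
batch_images = batch[0].reshape((64, 784))   # flattened 28x28 images, pixel values in [0, 1]
batch_images = batch_images * 2 - 1          # rescale to [-1, 1] to match the tanh output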
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(inputs=x, units=n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha*h1,h1)
# Logits and sigmoid output
logits = tf.layers.dense(inputs=h1, units=1, activation=None)
out = tf.nn.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real,input_z=model_inputs(input_size,z_size)
# Generator network here
# generator(z, out_dim, n_units=128, reuse=False, alpha=0.01)
g_model = generator(input_z,input_size)
# g_model is the generator output
# Size of input image to discriminator
#input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
#z_size = 100
# Sizes of hidden layers in generator and discriminator
#g_hidden_size = 128
#d_hidden_size = 128
# Leak factor for leaky ReLU
#alpha = 0.01
# Label smoothing
#smooth = 0.1
# Discriminator network here
# discriminator(x, n_units=128, reuse=False, alpha=0.01):
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model,reuse=True)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
```python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
```
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
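To see what the label smoothing does numerically (a small illustrative check, separate from the graph itself): with smooth = 0.1 the "real" targets become 0.9 rather than hard 1s, while the "fake" targets stay at 0.
```python
# Illustrative only: the effect of label smoothing on the discriminator targets
import numpy as np
smooth = 0.1
real_labels = np.ones(4) * (1 - smooth)   # [0.9, 0.9, 0.9, 0.9]
fake_labels = np.zeros(4)                 # [0.0, 0.0, 0.0, 0.0]
print(real_labels, fake_labels)
```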
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
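As a quick illustration of why filtering on the name prefix works (a throwaway sketch, not part of the model): everything created inside a tf.variable_scope gets that scope name as a prefix on its variable names, so startswith('generator') and startswith('discriminator') cleanly split the two lists.
```python
# Purely illustrative: scope names become prefixes of variable names (creates one throwaway variable)
with tf.variable_scope('demo_scope'):
    w = tf.Variable(tf.zeros([2, 2]), name='weights')
print(w.name)                            # -> demo_scope/weights:0
print(w.name.startswith('demo_scope'))   # -> True
```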
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
488 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Splunk<> Graphistry
Graphistry brings modern visual analytics to event data in Splunk. The full platform is intended for enterprise teams, while this tutorial shares visibility techniques for researchers and hunters.
To use
Step1: 1. Imports
Step2: Graphistry
Step3: Splunk
Step4: 2. Get data
Step5: 3. Visualize!
A) Simple IP<>IP
Step6: B) IP<>IP + srcip<>protocol
Step7: 3. All<>All via Hypergraph
Step8: Node Colors | Python Code:
#splunk
SPLUNK = {
'host': 'MY.SPLUNK.com',
'scheme': 'https',
'port': 8089,
'username': 'MY_SPLUNK_USER',
'password': 'MY_SPLUNK_PWD'
}
Explanation: Splunk<> Graphistry
Graphistry brings modern visual analytics to event data in Splunk. The full platform is intended for enterprise teams, while this tutorial shares visibility techniques for researchers and hunters.
To use:
* Read along, start the prebuilt visualizations by clicking on them
* Plug in your Graphistry API Key & Splunk credentials to use for yourself
Further reading:
* UI Guide: https://labs.graphistry.com/graphistry/ui.html
* Python client tutorials & demos: https://github.com/graphistry/pygraphistry
* Graphistry API Key: https://www.graphistry.com/api-request
* DoD / VAST challenges: https://www.cs.umd.edu/hcil/varepository/benchmarks.php
0. Configure
End of explanation
import pandas as pd
Explanation: 1. Imports
End of explanation
!pip install graphistry
import graphistry
graphistry.__version__
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
Explanation: Graphistry
End of explanation
!pip install splunk-sdk
import splunklib
#Connect to Splunk. Replace settings with your own setup.
import splunklib.client as client
import splunklib.results as results
service = client.connect(**SPLUNK)
def extend(o, override):
for k in override.keys():
o[k] = override[k]
return o
STEP = 10000;
def splunkToPandas(qry, overrides={}):
kwargs_blockingsearch = extend({
"count": 0,
"earliest_time": "2010-01-24T07:20:38.000-05:00",
"latest_time": "now",
"search_mode": "normal",
"exec_mode": "blocking"
}, overrides)
job = service.jobs.create(qry, **kwargs_blockingsearch)
print "Search results:\n"
resultCount = job["resultCount"]
offset = 0;
print 'results', resultCount
out = None
while (offset < int(resultCount)):
print "fetching:", offset, '-', offset + STEP
kwargs_paginate = extend(kwargs_blockingsearch,
{"count": STEP,
"offset": offset})
# Get the search results and display them
blocksearch_results = job.results(**kwargs_paginate)
reader = results.ResultsReader(blocksearch_results)
lst = [x for x in reader]
df2 = pd.DataFrame(lst)
out = df2 if type(out) == type(None) else pd.concat([out, df2], ignore_index=True)
offset += STEP
return out
Explanation: Splunk
End of explanation
query = 'search index="vast" srcip=* destip=* | rename destip -> dest_ip, srcip -> src_ip | fields dest_ip _time src_ip protocol | eval time=_time | fields - _* '
%time df = splunkToPandas(query, {"sample_ratio": 1000})
#df = splunkToPandasAll('search index="vast" | head 10')
#df = pd.concat([ splunkToPandas('search index="vast" | head 10'), splunkToPandas('search index="vast" | head 10') ], ignore_index=True)
print 'results', len(df)
df.sample(5)
Explanation: 2. Get data
End of explanation
graphistry.bind(source='src_ip', destination='dest_ip').edges(df).plot()
Explanation: 3. Visualize!
A) Simple IP<>IP: 1326 nodes, 253K edges
End of explanation
def make_edges(df, src, dst):
out = df.copy()
out['src'] = df[src]
out['dst'] = df[dst]
return out
ip2ip = make_edges(df, 'src_ip', 'dest_ip')
srcip2protocol = make_edges(df, 'src_ip', 'protocol')
combined = pd.concat([ip2ip, srcip2protocol], ignore_index=True)
combined.sample(6)
graphistry.bind(source='src', destination='dst').edges(combined).plot()
Explanation: B) IP<>IP + srcip<>protocol: 1328 nodes, 506K edges
End of explanation
hg = graphistry.hypergraph(df, entity_types=[ 'src_ip', 'dest_ip', 'protocol'] )
print hg.keys()
hg['graph'].plot()
Explanation: 3. All<>All via Hypergraph: 254K nodes, 760K edges
End of explanation
nodes = pd.concat([
df[['src_ip']].rename(columns={'src_ip': 'id'}).assign(orig_col='src_ip'),
df[['dest_ip']].rename(columns={'dest_ip': 'id'}).assign(orig_col='dest_ip') ],
ignore_index=True).drop_duplicates(['id'])
#see https://labs.graphistry.com/docs/docs/palette.html
col2color = {
"src_ip": 90005,
"dest_ip": 46005
}
nodes_with_color = nodes.assign(color=nodes.apply(lambda row: col2color[ row['orig_col'] ], axis=1))
nodes_with_color.sample(3)
graphistry.bind(source='src_ip', destination='dest_ip').edges(df).nodes(nodes_with_color).bind(node='id', point_color='color').plot()
Explanation: Node Colors
End of explanation |
489 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get the Data
Step1: Make the column names ints not strings for handling
Step2: Turn population into bubble sizes. Use min_size and factor to tweak.
Step3: Use pandas categories and categorize & color the regions
Step4: Build the plot
Setting up the data
The plot animates with the slider showing the data over time from 1964 to 2013. We can think of each year as a separate static plot, and when the slider moves, we use the Callback to change the data source that is driving the plot.
We could use bokeh-server to drive this change, but as the data is not too big we can also pass all the datasets to the javascript at once and switch between them on the client side.
This means that we need to build one data source for each year that we have data for and are going to switch between using the slider. We build them and add them to a dictionary sources that holds them under a key that is the name of the year prefixed with a _.
Step5: sources looks like this
```
{'_1964'
Step6: Build the plot
Step7: Add the background year text
We add this first so it is below all the other glyphs
Step8: Add the bubbles and hover
We add the bubbles using the Circle glyph. We start from the first year of data and that is our source that drives the circles (the other sources will be used later).
plot.add_glyph returns the renderer, and we pass this to the HoverTool so that hover only happens for the bubbles on the page and not other glyph elements.
Step9: Add the legend
Finally we manually build the legend by adding circles and texts to the upper-right portion of the plot.
Step11: Add the slider and callback
Last, but not least, we add the slider widget and the JS callback code which changes the data of the renderer_source (powering the bubbles / circles) and the data of the text_source (powering background text). After we've set() the data we need to trigger() a change. slider, renderer_source, text_source are all available because we add them as args to Callback.
It is the combination of sources = %s % (js_source_array) in the JS and Callback(args=sources...) that provides the ability to look-up, by year, the JS version of our python-made ColumnDataSource.
Step12: Embed in a template and render
Last but not least, we use vplot to stick the chart and the slider together. And we embed that in a template we write using the script, div output from components.
We display it in IPython and save it as an html file. | Python Code:
fertility_df = pd.read_csv('data/fertility.csv', index_col='Country')
life_expectancy_df = pd.read_csv('data/life_expectancy.csv', index_col='Country')
population_df = pd.read_csv('data/population.csv', index_col='Country')
regions_df = pd.read_csv('data/regions.csv', index_col='Country')
Explanation: Get the Data
End of explanation
columns = list(fertility_df.columns)
years = list(range(int(columns[0]), int(columns[-1]) + 1))  # include the final year so every column gets renamed
rename_dict = dict(zip(columns, years))
fertility_df = fertility_df.rename(columns=rename_dict)
life_expectancy_df = life_expectancy_df.rename(columns=rename_dict)
population_df = population_df.rename(columns=rename_dict)
regions_df = regions_df.rename(columns=rename_dict)
Explanation: Make the column names ints not strings for handling
End of explanation
scale_factor = 200
population_df_size = np.sqrt(population_df/np.pi)/scale_factor
min_size = 3
population_df_size = population_df_size.where(population_df_size >= min_size).fillna(min_size)
Explanation: Turn population into bubble sizes. Use min_size and factor to tweak.
End of explanation
regions_df.Group = regions_df.Group.astype('category')
regions = list(regions_df.Group.cat.categories)
def get_color(r):
index = regions.index(r.Group)
return Spectral6[regions.index(r.Group)]
regions_df['region_color'] = regions_df.apply(get_color, axis=1)
zip(regions, Spectral6)
Explanation: Use pandas categories and categorize & color the regions
End of explanation
sources = {}
region_color = regions_df['region_color']
region_color.name = 'region_color'
for year in years:
fertility = fertility_df[year]
fertility.name = 'fertility'
life = life_expectancy_df[year]
life.name = 'life'
population = population_df_size[year]
population.name = 'population'
new_df = pd.concat([fertility, life, population, region_color], axis=1)
sources['_' + str(year)] = ColumnDataSource(new_df)
Explanation: Build the plot
Setting up the data
The plot animates with the slider showing the data over time from 1964 to 2013. We can think of each year as a separate static plot, and when the slider moves, we use the Callback to change the data source that is driving the plot.
We could use bokeh-server to drive this change, but as the data is not too big we can also pass all the datasets to the javascript at once and switch between them on the client side.
This means that we need to build one data source for each year that we have data for and are going to switch between using the slider. We build them and add them to a dictionary sources that holds them under a key that is the name of the year prefixed with a _.
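A quick, optional sanity check (illustrative; the exact name of the index column depends on your CSV files) that each per-year source carries the columns the glyphs will need:
```python
# Each per-year ColumnDataSource should expose the plotted columns plus the DataFrame index
print(sources['_%s' % years[0]].column_names)
# e.g. ['Country', 'fertility', 'life', 'population', 'region_color']
```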
End of explanation
dictionary_of_sources = dict(zip([x for x in years], ['_%s' % x for x in years]))
js_source_array = str(dictionary_of_sources).replace("'", "")
Explanation: sources looks like this
```
{'_1964': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165cc0>,
'_1965': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165b00>,
'_1966': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d1656a0>,
'_1967': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165ef0>,
'_1968': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9dac18>,
'_1969': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da9b0>,
'_1970': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da668>,
'_1971': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da0f0>...
```
We will pass this dictionary to the Callback. In doing so, we will find that in our javascript we have an object called, for example _1964 that refers to our ColumnDataSource. Note that we needed the prefixing _ as JS objects cannot begin with a number.
Finally we construct a string that we can insert into our javascript code to define an object.
The string looks like this: {1962: _1962, 1963: _1963, ....}
Note the keys of this object are integers and the values are the references to our ColumnDataSources from above. So that now, in our JS code, we have an object that's storing all of our ColumnDataSources and we can look them up.
End of explanation
# Set up the plot
xdr = Range1d(1, 9)
ydr = Range1d(20, 100)
plot = Plot(
x_range=xdr,
y_range=ydr,
title="",
plot_width=800,
plot_height=400,
outline_line_color=None,
toolbar_location=None,
)
AXIS_FORMATS = dict(
minor_tick_in=None,
minor_tick_out=None,
major_tick_in=None,
major_label_text_font_size="10pt",
major_label_text_font_style="normal",
axis_label_text_font_size="10pt",
axis_line_color='#AAAAAA',
major_tick_line_color='#AAAAAA',
major_label_text_color='#666666',
major_tick_line_cap="round",
axis_line_cap="round",
axis_line_width=1,
major_tick_line_width=1,
)
xaxis = LinearAxis(SingleIntervalTicker(interval=1), axis_label="Children per woman (total fertility)", **AXIS_FORMATS)
yaxis = LinearAxis(SingleIntervalTicker(interval=20), axis_label="Life expectancy at birth (years)", **AXIS_FORMATS)
plot.add_layout(xaxis, 'below')
plot.add_layout(yaxis, 'left')
Explanation: Build the plot
End of explanation
# Add the year in background (add before circle)
text_source = ColumnDataSource({'year': ['%s' % years[0]]})
text = Text(x=2, y=35, text='year', text_font_size='150pt', text_color='#EEEEEE')
plot.add_glyph(text_source, text)
Explanation: Add the background year text
We add this first so it is below all the other glyphs
End of explanation
# Add the circle
renderer_source = sources['_%s' % years[0]]
circle_glyph = Circle(
x='fertility', y='life', size='population',
fill_color='region_color', fill_alpha=0.8,
line_color='#7c7e71', line_width=0.5, line_alpha=0.5)
circle_renderer = plot.add_glyph(renderer_source, circle_glyph)
# Add the hover (only against the circle and not other plot elements)
tooltips = "@index"
plot.add_tools(HoverTool(tooltips=tooltips, renderers=[circle_renderer]))
Explanation: Add the bubbles and hover
We add the bubbles using the Circle glyph. We start from the first year of data and that is our source that drives the circles (the other sources will be used later).
plot.add_glyph returns the renderer, and we pass this to the HoverTool so that hover only happens for the bubbles on the page and not other glyph elements.
End of explanation
text_x = 7
text_y = 95
for i, region in enumerate(regions):
plot.add_glyph(Text(x=text_x, y=text_y, text=[region], text_font_size='10pt', text_color='#666666'))
plot.add_glyph(Circle(x=text_x - 0.1, y=text_y + 2, fill_color=Spectral6[i], size=10, line_color=None, fill_alpha=0.8))
text_y = text_y - 5
Explanation: Add the legend
Finally we manually build the legend by adding circles and texts to the upper-right portion of the plot.
End of explanation
# Add the slider
code = """
    var year = slider.get('value'),
        sources = %s,
        new_source_data = sources[year].get('data');
    renderer_source.set('data', new_source_data);
    renderer_source.trigger('change');
    text_source.set('data', {'year': [String(year)]});
    text_source.trigger('change');
""" % js_source_array
callback = Callback(args=sources, code=code)
slider = Slider(start=years[0], end=years[-1], value=1, step=1, title="Year", callback=callback)
callback.args["slider"] = slider
callback.args["renderer_source"] = renderer_source
callback.args["text_source"] = text_source
Explanation: Add the slider and callback
Last, but not least, we add the slider widget and the JS callback code which changes the data of the renderer_source (powering the bubbles / circles) and the data of the text_source (powering background text). After we've set() the data we need to trigger() a change. slider, renderer_source, text_source are all available because we add them as args to Callback.
It is the combination of sources = %s % (js_source_array) in the JS and Callback(args=sources...) that provides the ability to look-up, by year, the JS version of our python-made ColumnDataSource.
End of explanation
# Stick the plot and the slider together
layout = vplot(plot, hplot(slider))
with open('gapminder_template_simple.html', 'r') as f:
template = Template(f.read())
script, div = components(layout)
html = template.render(
title="Bokeh - Gapminder demo",
plot_script=script,
plot_div=div,
)
with open('gapminder_simple.html', 'w') as f:
f.write(html)
display(HTML(html))
Explanation: Embed in a template and render
Last but not least, we use vplot to stick the chart and the slider together. And we embed that in a template we write using the script, div output from components.
We display it in IPython and save it as an html file.
End of explanation |
490 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NOAA SPC Convective Outlook
Demonstrate the use of geoJSON and shapefile data with PlotGeometry in MetPy's simplified
plotting interface. This example walks through plotting the Day 1 Convective Outlook from NOAA
Storm Prediction Center. The geoJSON file was retrieved from the Storm Prediction Center's
archives <https
Step1: Read in the geoJSON file containing the convective outlook.
Step2: Preview the data.
Step3: Plot the shapes from the 'geometry' column. Give the shapes their fill and stroke color by
providing the 'fill' and 'stroke' columns. Use text from the 'LABEL' column as labels for the
shapes.
Step4: Add the geometry plot to a panel and container. | Python Code:
import geopandas
from metpy.cbook import get_test_data
from metpy.plots import MapPanel, PanelContainer, PlotGeometry
Explanation: NOAA SPC Convective Outlook
Demonstrate the use of geoJSON and shapefile data with PlotGeometry in MetPy's simplified
plotting interface. This example walks through plotting the Day 1 Convective Outlook from NOAA
Storm Prediction Center. The geoJSON file was retrieved from the Storm Prediction Center's
archives <https://www.spc.noaa.gov/archive/>_.
End of explanation
day1_outlook = geopandas.read_file(get_test_data('spc_day1otlk_20210317_1200_lyr.geojson'))
Explanation: Read in the geoJSON file containing the convective outlook.
End of explanation
day1_outlook
Explanation: Preview the data.
End of explanation
geo = PlotGeometry()
geo.geometry = day1_outlook['geometry']
geo.fill = day1_outlook['fill']
geo.stroke = day1_outlook['stroke']
geo.labels = day1_outlook['LABEL']
geo.label_fontsize = 'large'
Explanation: Plot the shapes from the 'geometry' column. Give the shapes their fill and stroke color by
providing the 'fill' and 'stroke' columns. Use text from the 'LABEL' column as labels for the
shapes.
End of explanation
panel = MapPanel()
panel.title = 'SPC Day 1 Convective Outlook (Valid 12z Mar 17 2021)'
panel.plots = [geo]
panel.area = [-120, -75, 25, 50]
panel.projection = 'lcc'
panel.layers = ['lakes', 'land', 'ocean', 'states', 'coastline', 'borders']
pc = PanelContainer()
pc.size = (12, 8)
pc.panels = [panel]
pc.show()
Explanation: Add the geometry plot to a panel and container.
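If you also want the map as a file (for a report or a web page), PanelContainer can write it out directly; a minimal sketch, assuming a writable working directory:
```python
# Optional: save the rendered panel to disk in addition to showing it
pc.save('spc_day1_outlook.png')
```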
End of explanation |
491 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: Load data
Step4: Neural Network
Step5: Training on unbiased dataset
Step6: Baseline (unconstrained)
Step7: Our method | Python Code:
import tensorflow as tf
from tensorflow.keras.datasets import mnist
import numpy as np
import copy
Explanation: Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
MNIST Simulation
We investigate the behavior of our method on a variant of the well-known MNIST task.
We take the MNIST dataset under the standard train/test split and then randomly select $20\%$ of the training data points and change their label to $2$, yielding a biased set of labels. On such a dataset, our method should be able to find appropriate weights so that training on the weighted dataset roughly corresponds to training on the true labels.
To this end, we train a classifier with a demographic-parity-like constraint on the predictions of digit $2$; i.e., we encourage a classifier to predict the digit $2$ at a rate of $10\%$, the rate appearing in the true labels.
End of explanation
(train_xs, train_ys), (test_xs, test_ys) = mnist.load_data()
train_xs = train_xs / 255.
test_xs = test_xs / 255.
train_xs = train_xs.reshape(-1, 28 * 28)
test_xs = test_xs.reshape(-1, 28 * 28)
print("Distribution Before")
for i in range(10):
print np.mean(train_ys == i)
train_ys_corrupted = np.copy(train_ys)
np.random.seed(12345)
idxs = np.random.choice(range(len(train_ys_corrupted)), size=len(train_ys_corrupted)/5, replace=False)
train_ys_corrupted[idxs] = 2
print("Distribution After")
for i in range(10):
print np.mean(train_ys_corrupted == i)
Explanation: Load data
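As a back-of-envelope check of the corruption above (illustrative arithmetic only): roughly 10% of MNIST training labels are already 2, and relabeling a random 20% of all points as 2 should push the observed rate of the label 2 to about 0.28, which is what the "Distribution After" printout reflects.
```python
# Expected rate of the label "2" after the random relabeling (illustrative)
p_two_true = 0.10   # approximate share of 2s in the clean labels
p_flipped = 0.20    # fraction of points relabeled to 2
print(p_two_true + p_flipped * (1 - p_two_true))   # ~0.28
```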
End of explanation
def weight_variable(shape, name="weight_variable"):
    """weight_variable generates a weight variable of a given shape."""
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial, name=name)
def bias_variable(shape, name="bias_variable"):
    """bias_variable generates a bias variable of a given shape."""
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial, name=name)
def run_simple_NN(X,
y,
X_test,
y_test,
weights,
num_iter=10000,
learning_rate=0.001,
batch_size=128,
display_steps=1000,
n_layers=1):
n_labels = np.max(y) + 1
n_features = X.shape[1]
weights_ = weights / (1. * np.sum(weights))
x = tf.placeholder(tf.float32, [None, n_features])
y_ = tf.placeholder(tf.float32, [None, n_labels])
N = 512
W_1 = weight_variable([784, N])
b_1 = bias_variable([N])
h_1 = tf.nn.relu(tf.matmul(x, W_1) + b_1)
W_2 = weight_variable([N, N])
b_2 = bias_variable([N])
h_2 = tf.nn.relu(tf.matmul(h_1, W_2) + b_2)
W_3 = weight_variable([N, N])
b_3 = bias_variable([N])
h_3 = tf.nn.relu(tf.matmul(h_2, W_3) + b_3)
W_4 = weight_variable([N, 10])
b_4 = bias_variable([10])
NN_logits =tf.nn.softmax(tf.matmul(h_3, W_4) + b_4)
loss = -tf.reduce_mean(tf.reduce_sum(y_ *tf.log(NN_logits+1e-6),1),0)
acc = tf.reduce_mean(
tf.cast(tf.equal(tf.arg_max(NN_logits,1), tf.arg_max(y_,1)), "float"))
train_step = tf.train.AdamOptimizer().minimize(loss)
correct_prediction = tf.equal(tf.argmax(NN_logits, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
def one_hot(ns):
return np.eye(n_labels)[ns]
y_onehot = one_hot(y)
y_test_onehot = one_hot(y_test)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(num_iter):
ns = np.random.choice(range(len(X)), size=50, replace=True, p=weights_)
if (i + 1) % display_steps == 0:
train_accuracy = accuracy.eval(feed_dict={x: X, y_: y_onehot})
test_accuracy = accuracy.eval(feed_dict={x: X_test, y_: y_test_onehot})
print("step %d, training accuracy %g, test accuracy %g" %
(i + 1, train_accuracy, test_accuracy))
train_step.run(
feed_dict={x: X[ns, :], y_: y_onehot[ns, :]})
testing_prediction = tf.argmax(NN_logits, 1).eval(feed_dict={x: X_test})
training_prediction = tf.argmax(NN_logits, 1).eval(feed_dict={x: X})
return training_prediction, testing_prediction
Explanation: Neural Network
End of explanation
weights = np.array([1] * len(train_ys))
test_predictions = run_simple_NN(train_xs, train_ys, test_xs, test_ys, weights)
Explanation: Training on unbiased dataset
End of explanation
weights = np.array([1] * len(train_ys))
test_predictions = run_simple_NN(train_xs, train_ys_corrupted, test_xs, test_ys, weights)
Explanation: Baseline (unconstrained)
End of explanation
def debias_weights(original_labels, protected_attributes, multipliers):
exponents = np.zeros(len(original_labels))
for i, m in enumerate(multipliers):
exponents -= m * protected_attributes[i]
weights = np.exp(exponents)/ (np.exp(exponents) + np.exp(-exponents))
weights = np.where(original_labels == 2, 1 - weights, weights)
return weights
multipliers = np.zeros(1)
learning_rate = 1.
n_iters = 100
protected_train = [(train_ys_corrupted == 2)]
for it in xrange(n_iters):
print("Iteration", it + 1, "multiplier", multipliers)
weights = debias_weights(train_ys_corrupted, protected_train, multipliers)
weights = weights / np.sum(weights)
print("Weights for 2", np.sum(weights[np.where(train_ys_corrupted==2)]))
train_prediction, test_predictions = run_simple_NN(train_xs, train_ys_corrupted, test_xs, test_ys, weights)
violation = np.mean(train_prediction == 2) - 0.1
multipliers -= learning_rate * violation
print()
print()
Explanation: Our method
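The loop above couples debias_weights to a simple update of the multiplier: when the network predicts the digit 2 more often than the 10% target, the violation is positive and the multiplier is pushed negative, which in turn shrinks the weight given to points labeled 2. A small standalone illustration of that weight response, using the same formula as debias_weights:
```python
# Illustrative: weight assigned to a point labeled "2" as the multiplier changes
for m in [1.0, 0.0, -0.5, -1.0, -2.0]:
    raw = np.exp(-m) / (np.exp(-m) + np.exp(m))   # weight before the label flip
    w_label2 = 1 - raw                            # weight actually used for label-2 points
    print(m, round(w_label2, 3))                  # negative multipliers -> smaller weights
```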
End of explanation |
492 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simplest possible example
Compute the fluxes of atmospheric leptons for a standard set of models at a fixed zenith angle.
Step1: Create an instance of an MCEqRun class. Most options are defined in the mceq_config module, and do not require change. Look into mceq_config.py or use the documentation.
If the initialization succeeds it will print out some information according to the debug level.
Step2: If everything succeeds then the last message should be something like
MCEqRun
Step3: Define variables and angles
Step4: Calculate average flux
Step5: Obtain solution at the surface
The fluxes of the particle (e.g., $\mu^+$) and the anti-particle ($\mu^-$) are summed in this example.
Step6: Plot using matplotlib
Step7: Save as in ASCII file for other types of processing
The block below will save the results in an ASCII file for further processing. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
#import solver related modules
from MCEq.core import MCEqRun
import mceq_config as config
#import primary model choices
import crflux.models as pm
Explanation: Simplest possible example
Compute the fluxes of atmospheric leptons for a standard set of models at a fixed zenith angle.
End of explanation
mceq_run = MCEqRun(
#provide the string of the interaction model
interaction_model='SIBYLL2.3c',
#primary cosmic ray flux model
primary_model = (pm.HillasGaisser2012, "H3a"),
# Zenith angle in degrees. 0=vertical, 90=horizontal
theta_deg=0.0
)
Explanation: Create an instance of an MCEqRun class. Most options are defined in the mceq_config module, and do not require change. Look into mceq_config.py or use the documentation.
If the initialization succeeds it will print out some information according to the debug level.
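How much is printed is controlled by the options in mceq_config; a minimal sketch of raising the verbosity before constructing the class (the attribute name is taken from mceq_config.py; treat it as an assumption if your version differs):
```python
# Optional: make initialization more (or less) chatty before creating MCEqRun
config.debug_level = 2   # assumed default is 1; higher values print more detail
```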
End of explanation
mceq_run.pman.print_particle_tables(0)
Explanation: If everything succeeds then the last message should be something like
MCEqRun::set_primary_model(): HillasGaisser2012 H3a.
List all available particle species
End of explanation
#Power of energy to scale the flux (the results will be returned as E**mag * flux)
mag = 3
#obtain energy grid (fixed) of the solution for the x-axis of the plots
e_grid = mceq_run.e_grid
#Dictionary for results
flux = {}
#Define a zenith angle, counted positively from vertical direction. Theta = 0. means vertical, theta = 90. horizontal
theta = 60.
Explanation: Define variables and angles
End of explanation
#Set the zenith angle
mceq_run.set_theta_deg(theta)
#Run the solver
mceq_run.solve()
Explanation: Calculate average flux
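The cell above solves for a single zenith angle; if you actually want a zenith-averaged flux, a minimal sketch (using only the calls already shown in this notebook; weight by solid angle if you need a proper directional average) is to loop over a few angles and average the solutions:
```python
# Illustrative: average the total muon flux over several zenith angles
angles = [0., 30., 60.]
mu_avg = np.zeros_like(mceq_run.e_grid)
for t in angles:
    mceq_run.set_theta_deg(t)
    mceq_run.solve()
    mu_avg += (mceq_run.get_solution('total_mu+', mag)
               + mceq_run.get_solution('total_mu-', mag))
mu_avg /= len(angles)
```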
End of explanation
#_conv means conventional (mostly pions and kaons)
mu_conv = (mceq_run.get_solution('conv_mu+', mag)
+ mceq_run.get_solution('conv_mu-', mag))
# _pr means prompt (the mother of the muon had a critical energy
# higher or equal to that of a D meson. Includes all charm and direct resonance
# contribution)
mu_pr = (mceq_run.get_solution('pr_mu+', mag)
+ mceq_run.get_solution('pr_mu-', mag))
# total means conventional + prompt
mu_total = (mceq_run.get_solution('total_mu+', mag)
+ mceq_run.get_solution('total_mu-', mag))
# Muon charge ratio
mu_charge = (mceq_run.get_solution('total_mu+', mag)
/ mceq_run.get_solution('total_mu-', mag))
# same meaning of prefixes for muon neutrinos as for muons
numu_conv = (mceq_run.get_solution('conv_numu', mag)
+ mceq_run.get_solution('conv_antinumu', mag))
numu_pr = (mceq_run.get_solution('pr_numu', mag)
+ mceq_run.get_solution('pr_antinumu', mag))
numu_total = (mceq_run.get_solution('total_numu', mag)
+ mceq_run.get_solution('total_antinumu', mag))
numu_ratio = (mceq_run.get_solution('total_numu', mag)
/ mceq_run.get_solution('total_antinumu', mag))
# same meaning of prefixes for electron neutrinos as for muons
nue_conv = (mceq_run.get_solution('conv_nue', mag)
+ mceq_run.get_solution('conv_antinue', mag))
nue_pr = (mceq_run.get_solution('pr_nue', mag)
+ mceq_run.get_solution('pr_antinue', mag))
nue_total = (mceq_run.get_solution('total_nue', mag)
+ mceq_run.get_solution('total_antinue', mag))
nue_ratio = (mceq_run.get_solution('total_nue', mag)
/ mceq_run.get_solution('total_antinue', mag))
# since there are no conventional tau neutrinos, prompt=total
nutau_total = (mceq_run.get_solution('total_nutau', mag)
+ mceq_run.get_solution('total_antinutau', mag))
nutau_pr = (mceq_run.get_solution('pr_nutau', mag)
+ mceq_run.get_solution('pr_antinutau', mag))
Explanation: Obtain solution at the surface
The fluxes of the particle (e.g., $\mu^+$) and the anti-particle ($\mu^-$) are summed in this example.
End of explanation
# for pref, lab in [('numu_',r'\nu_\mu'), ('nue_',r'\nu_e')]:
#Muons
plt.figure(figsize=(4.2, 3))
plt.loglog(e_grid, mu_total, color='k', ls='-', lw=1.5)
plt.loglog(e_grid, mu_conv, ls='-.', lw=1.5,
label=r'conventional $\mu$')
plt.loglog(e_grid, mu_pr, ls='--', lw=1.5,
label=r'prompt $\mu$')
plt.xlim(10,1e7)
plt.ylim(1e-5,10)
plt.xlabel(r"$E_{\mu}$ [GeV]")
plt.ylabel(r"$\Phi_{\mu}$ (E/GeV)$^{" + str(mag) +" }$" +
"(cm$^{2}$ s sr GeV)$^{-1}$")
plt.legend(loc='upper right',frameon=False,numpoints=1,fontsize='medium')
plt.tight_layout()
#To save the plot use
# plt.savefig('H3a_60_sib23c_whatever.pdf')
#Muon neutrinos
plt.figure(figsize=(4.2, 3))
plt.loglog(e_grid, numu_total, color='k', ls='-', lw=1.5)
plt.loglog(e_grid, numu_conv, ls='-.', lw=1.5,
label=r'conventional $\nu_\mu$')
plt.loglog(e_grid, numu_pr, ls='--', lw=1.5,
label=r'prompt $\nu_\mu$')
plt.xlim(10,1e7)
plt.ylim(1e-5,10)
plt.xlabel(r"$E_{\nu_\mu}$ [GeV]")
plt.ylabel(r"$\Phi_{\nu_\mu}$ (E/GeV)$^{" + str(mag) +" }$" +
"(cm$^{2}$ s sr GeV)$^{-1}$")
plt.legend(loc='upper right',frameon=False,numpoints=1,fontsize='medium')
plt.tight_layout()
#Electron neutrinos
plt.figure(figsize=(4.2, 3))
plt.loglog(e_grid, nue_total, color='k', ls='-', lw=1.5)
plt.loglog(e_grid, nue_conv, ls='-.', lw=1.5,
label=r'conventional $\nu_e$')
plt.loglog(e_grid, nue_pr, ls='--', lw=1.5,
label=r'prompt $\nu_e$')
plt.xlim(10,1e7)
plt.ylim(1e-5,10)
plt.xlabel(r"$E_{\nu_e}$ [GeV]")
plt.ylabel(r"$\Phi_{\nu_e}$ (E/GeV)$^{" + str(mag) +" }$" +
"(cm$^{2}$ s sr GeV)$^{-1}$")
plt.legend(loc='upper right',frameon=False,numpoints=1,fontsize='medium')
plt.tight_layout()
#Tau neutrinos
plt.figure(figsize=(4.2, 3))
plt.loglog(e_grid, nutau_total, color='k', ls='-', lw=1.5)
plt.loglog(e_grid, nutau_pr, ls='--', lw=1.5,
label=r'prompt $\nu_\tau$')
plt.xlim(10,1e7)
plt.ylim(1e-7,1e-2)
plt.xlabel(r"$E_{\nu_\tau}$ [GeV]")
plt.ylabel(r"$\Phi_{\nu_\tau}$ (E/GeV)$^{" + str(mag) +" }$" +
"(cm$^{2}$ s sr GeV)$^{-1}$")
plt.legend(loc='upper right',frameon=False,numpoints=1,fontsize='medium')
plt.tight_layout()
#Muons
plt.figure(figsize=(4.2, 3))
plt.semilogx(e_grid, mu_charge, color='k', ls='-', lw=1.5, label=r'$\mu^+/\mu^-$')
plt.semilogx(e_grid, numu_ratio, color='r', ls='-', lw=1.5, label=r'$\nu_\mu/\bar{\nu}_\mu$')
plt.semilogx(e_grid, nue_ratio, color='b', ls='-', lw=1.5, label=r'$\nu_e/\bar{\nu}_e$')
# plt.semilogx(e_grid, 0.1*numu_total/nue_total, color='cyan', ls='-',
# lw=1.5, label=r'$0.1 \cdot \nu_\mu/\nu_e$')
plt.xlim(10,1e7)
plt.ylim(1,2)
plt.xlabel(r"$E_{\rm lepton}$ [GeV]")
plt.ylabel(r"Flux ratios")
plt.legend(loc='upper left',frameon=False,numpoints=1,fontsize='medium')
plt.tight_layout()
Explanation: Plot using matplotlib
End of explanation
np.savetxt(open('H3a_theta_60_sib23c.txt','w'),
           list(zip(e_grid,   # list() so this also works under Python 3, where zip is lazy
                    mu_conv, mu_pr, mu_total,
                    numu_conv, numu_pr, numu_total,
                    nue_conv, nue_pr, nue_total,
                    nutau_pr)),
fmt='%6.5E',
header=('lepton flux scaled with E**{0}. Order (E, mu_conv, mu_pr, mu_total, ' +
'numu_conv, numu_pr, numu_total, nue_conv, nue_pr, nue_total, ' +
'nutau_pr').format(mag)
)
Explanation: Save as an ASCII file for other types of processing
The block below will save the results in an ASCII file for further processing.
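Reading the table back later is a one-liner (illustrative; the column order is the one documented in the header string above):
```python
# Reload the saved table; column 0 is the energy grid, column 3 the total muon flux
data = np.loadtxt('H3a_theta_60_sib23c.txt')
e_loaded, mu_total_loaded = data[:, 0], data[:, 3]
```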
End of explanation |
493 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Forest
A single decision tree - tasked to learn a dataset - might not be able to perform well due to the outliers and the breadth and depth complexity of the data.
So instead of relying on a single tree, random forests rely on a multitude of cleverly grown decision trees.
Each tree within the forest is allowed to become highly specialised in a specific area but still retains some general knowledge about most areas. When a random forest classifies, it is actually each tree in the forest working together to cast votes on what label they think a specific sample should be assigned.
Instead of sharing the entire dataset with each decision tree, the forest performs an operation which is essentially a train / test split of the training data. Each decision tree in the forest randomly samples from the overall training data set. Through doing so, each tree exists in an independent subspace and the variation between trees is controlled. This technique is known as tree bagging, or bootstrap aggregating.
In addition to the tree bagging of training samples at the forest level, each individual decision tree further 'feature bags' at each node-branch split. This is helpful because some datasets contain a feature that is very correlated to the target (the 'y'-label). By selecting a random sampling of features every split - if such a feature were to exist - it wouldn't show up on as many branches of the tree and there would be more diversity of the features examined.
Check my post to see more details about Random Forests!
Human activity prediction
As an example, we will predict human activity by looking at data from wearables.
For this, we train a random forest against a public domain Human Activity Dataset titled Wearable Computing
Step1: Pre-processing the data
What we want to do is to predict the activity class based on the accelerometer's data from the wearables.
Step2: Great, no NaNs here. Let's go on.
Step3: Extract the target values
Step4: Split the dataset into training and test
Step5: Train the Random Forest model
Step6: You can check the SKlearn documentation to see all possible parameters.
The ones used here
Step7: Note that it takes a much longer time to train a forest than a single decision tree.
This is the score based on the test dataset that we split earlier. Note how good it is.
Step8: These are the top 5 features used in the classification.
They are all related to the movements, no gender or age.
Step9: Example prediction
Let's use the wrong row - that we extracted earlier from the dataset - as a prediction example.
But first we need to correct it
Step10: Remember that these were the categories for the classes
Step11: The fourth one is "standing up". Seems that the model predicted correctly.
OutOfBag error instead of splitting into train and test
Since each tree within the forest is only trained using a subset of the overall training set, the forest ensemble has the ability to error test itself.
It does this by scoring each tree's predictions against that tree's out-of-bag samples. A tree's out of bag samples are those forest training samples that were withheld from a specific tree during training.
One of the advantages of using the out of bag (OOB) error is that it eliminates the need to split your data into training / testing sets before feeding it into the forest model, since that's part of the forest algorithm. However, using the OOB error metric often underestimates the actual performance improvement and the optimal number of training iterations.
Step12: Time needed is similar.
Let's check the score
Step13: The out-of-bag estimation is not far away from the more precise score estimated from the test dataset.
And now we predict the same user's movement. Class output shall be "standing up", the fourth one | Python Code:
import pandas as pd
import time
# Grab the DLA HAR dataset from the links above
# we assume that is stored in a dataset folder
#
# Load up the dataset into dataframe 'X'
#
X = pd.read_csv("../datasets/dataset-har-PUC-rio-ugulino.csv", sep=';', low_memory=False)
X.head(2)
X.describe()
Explanation: Random Forest
A single decision tree - tasked to learn a dataset - might not be able to perform well due to the outliers and the breadth and depth complexity of the data.
So instead of relying on a single tree, random forests rely on a multitude of cleverly grown decision trees.
Each tree within the forest is allowed to become highly specialised in a specific area but still retains some general knowledge about most areas. When a random forest classifies, it is actually each tree in the forest working together to cast votes on what label they think a specific sample should be assigned.
Instead of sharing the entire dataset with each decision tree, the forest performs an operation which is essentially a train / test split of the training data. Each decision tree in the forest randomly samples from the overall training data set. Through doing so, each tree exists in an independent subspace and the variation between trees is controlled. This technique is known as tree bagging, or bootstrap aggregating.
In addition to the tree bagging of training samples at the forest level, each individual decision tree further 'feature bags' at each node-branch split. This is helpful because some datasets contain a feature that is very correlated to the target (the 'y'-label). By selecting a random sampling of features every split - if such a feature were to exist - it wouldn't show up on as many branches of the tree and there would be more diversity of the features examined.
Check my post to see more details about Random Forests!
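A tiny standalone sketch of the bagging idea described above (illustrative, not part of the workflow below): each tree draws a bootstrap sample of the rows, and the rows it never sees become its out-of-bag samples, the same samples the OOB score at the end of this notebook relies on.
```python
# Bootstrap sampling in miniature: duplicates appear, and the left-out rows
# are that tree's "out-of-bag" samples
import numpy as np
rng = np.random.RandomState(0)
n_rows = 10
bootstrap_idx = rng.choice(n_rows, size=n_rows, replace=True)
print(sorted(bootstrap_idx))
print(sorted(set(range(n_rows)) - set(bootstrap_idx)))   # out-of-bag rows for this tree
```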
Human activity prediction
As an example, we will predict human activity by looking at data from wearables.
For this, we train a random forest against a public domain Human Activity Dataset titled Wearable Computing: Accelerometers' Data Classification of Body Postures and Movements, containing 165633 data points.
Within the dataset, there are five target activities:
- Sitting
- Sitting Down
- Standing
- Standing Up
- Walking
These activities were captured from 30 people wearing accelerometers mounted on their waist, left thigh, right arm, and right ankle.
Read the data
The original dataset can be found on the UCI Machine Learning Repository
A copy can be found also here on GitHub (URL is below) and on Kaggle
End of explanation
#
# An easy way to show which rows have NaNs in them:
print (X[pd.isnull(X).any(axis=1)])
Explanation: Pre-processing the data
What we want to do is to predict the activity class based on the accelerometer's data from the wearables.
End of explanation
#
# Encode the gender column: 0 as male, 1 as female
#
X.gender = X.gender.map({'Woman':1, 'Man':0})
#
# Clean up any column with commas in it
# so that they're properly represented as decimals instead
#
X.how_tall_in_meters = X.how_tall_in_meters.str.replace(',','.').astype(float)
X.body_mass_index = X.body_mass_index.str.replace(',','.').astype(float)
#
# Check data types
print (X.dtypes)
# column z4 is type "object". Something is wrong with the dataset.
#
# Convert that column into numeric
# Use errors='raise'. This will alert you if something ends up being
# problematic
#
#
# INFO: There is an error raised ... you will find it if you try the method
#
# print (X[pd.isnull(X).any(axis=1)])
# 122076 --> z4 = -14420-11-2011 04:50:23.713
#
# !! The data point #122076 is a wrong coded record,
# change it or drop it before calling the to_numeric methods:
#
#X.at[122076, 'z4'] = -144 // change to correct value
# I keep this value for later and drop it from the dataset
wrongRow = X.loc[122076]
X.drop(X.index[[122076]], inplace=True)
X.z4 = pd.to_numeric(X.z4, errors='raise')
print (X.dtypes)
# everything ok now
Explanation: Great, no NaNs here. Let's go on.
End of explanation
# Activity to predict is in "class" column
# Encode 'y' value as a dummies version of dataset's "class" column
#
y = pd.get_dummies(X['class'].copy())
# this produces a 5 column wide dummies dataframe as the y value
#
# Get rid of the user and class columns in X
#
X.drop(['class','user'], axis=1, inplace=True)
print (X.head(2))
print (y.head())
Explanation: Extract the target values
End of explanation
#
# Split data into test / train sets
#
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
random_state=7)
Explanation: Split the dataset into training and test
End of explanation
#
# Create an RForest classifier 'model'
#
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=30, max_depth= 20, random_state=0)
Explanation: Train the Random Forest model
End of explanation
print ("Fitting...")
s = time.time()
model.fit(X_train, y_train)
print("completed in: ", time.time() - s, "seconds")
Explanation: You can check the SKlearn documentation to see all possible parameters.
The ones used here:
n_estimators: integer, optional (default=100)
The number of trees in the forest. Note that this number changed from 10 to 100 (following the progress in computing performance and memory)
max_depth: integer or None, optional (default=None)
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
Setting a limit helps with the computing time and memory needed.Not setting a max depth will lead to have unpruned and fully grown trees which - depending on the dataset - will require large memory footprint.
oob_score: bool (default=False)
Whether to use out-of-bag samples to estimate the generalization accuracy.
random_state: int, RandomState instance or None, optional (default=None)
Controls both the randomness of the bootstrapping of the samples used when building trees (if bootstrap=True) and the sampling of the features to consider
And other useful / important:
criterion: string, optional (default=”gini”)
The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain.
Same as for the Trees.
bootstrap: boolean, optional (default=True)
Whether bootstrap samples are used when building trees. If False, the whole datset is used to build each tree.
End of explanation
print ("Scoring...")
s = time.time()
score = model.score(X_test, y_test)
print ("Score: ", round(score*100, 3))
print ("Scoring completed in: ", time.time() - s)
Explanation: Note that it takes a much longer time to train a forest than a single decision tree.
This is the score based on the test dataset that we split earlier. Note how good it is.
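If you want more detail than a single accuracy number (per-class precision and recall, for instance), a small sketch along these lines works; because y was one-hot encoded, the indicator columns are mapped back to class indices first:
```python
# Per-class breakdown of the test-set predictions (illustrative)
import numpy as np
from sklearn.metrics import classification_report
y_pred = model.predict(X_test)
print(classification_report(np.argmax(y_test.values, axis=1),
                            np.argmax(y_pred, axis=1),
                            target_names=list(y_test.columns)))
```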
End of explanation
# Extract feature importances
fi = pd.DataFrame({'feature': list(X_train.columns),
'importance': model.feature_importances_}).\
sort_values('importance', ascending = False)
# Display
fi.head()
Explanation: These are the top 5 features used in the classification.
They are all related to the movements, no gender or age.
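To see how quickly the importance drops off beyond the top five, a quick horizontal bar chart of the ranked importances is usually enough (illustrative):
```python
# Visualize the ranked importances (top 10)
import matplotlib.pyplot as plt
fi.head(10).plot.barh(x='feature', y='importance', legend=False)
plt.gca().invert_yaxis()   # most important feature on top
plt.xlabel('importance')
plt.tight_layout()
```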
End of explanation
outputClassPredictionExample = wrongRow['class']
forPredictionExample = wrongRow.drop(labels=['class','user']) # remove class and user
forPredictionExample.z4 = -144 # correct the value
print("We use this example for prediction later:")
print(forPredictionExample)
print("The class shall be: ", outputClassPredictionExample)
model.predict(forPredictionExample.values.reshape(1, -1))
Explanation: Example prediction
Let's use the wrong row - that we extracted earlier from the dataset - as a prediction example.
But first we need to correct it:
End of explanation
y_test.iloc[0]
Explanation: Remember that these were the categories for the classes:
End of explanation
modelOOB = RandomForestClassifier(n_estimators=30, max_depth= 20, random_state=0,
oob_score=True)
print ("Fitting...")
s = time.time()
modelOOB.fit(X, y)
print("completed in: ", time.time() - s, "seconds")
Explanation: The fourth one is "standing up". Seems that the model predicted correctly.
OutOfBag error instead of splitting into train and test
Since each tree within the forest is only trained using a subset of the overall training set, the forest ensemble has the ability to error test itself.
It does this by scoring each tree's predictions against that tree's out-of-bag samples. A tree's out of bag samples are those forest training samples that were withheld from a specific tree during training.
One of the advantages of using the out of bag (OOB) error is that it eliminates the need to split your data into training / testing sets before feeding it into the forest model, since that's part of the forest algorithm. However, using the OOB error metric often underestimates the actual performance improvement and the optimal number of training iterations.
End of explanation
# Display the OOB Score of data
scoreOOB = modelOOB.oob_score_
print ("OOB Score: ", round(scoreOOB*100, 3))
Explanation: Time needed is similar.
Let's check the score:
End of explanation
modelOOB.predict(forPredictionExample.values.reshape(1, -1))
Explanation: The out-of-bag estimation is not far away from the more precise score estimated from the test dataset.
And now we predict the same user's movement. Class output shall be "standing up", the fourth one
End of explanation |
494 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GeoWave Spatial Join Demo
This demo runs a distance join using a GPX dataset for Germany and the GDELT dataset. We use this demo to run a distance join with our tiered join algorithm on two large datasets to determine which GPX points are within a certain distance of GDELT events.
To run this join on Spark with a naive Spark SQL query would take 20+ hours, and might never return a result. With this algorithm and GeoWave's tiered indexing strategy we can complete the same join in 2-5 hours depending on the cluster size and configuration. This algorithm is not the answer to every join situation, however; for smaller datasets that can fit into memory you are performing extra work by running this join in its current implementation. For those datasets, native Spark joins are still a better option.
The current implementation of this algorithm considers the worst-case scenario for each dataset. This will be improved upon quickly over the next updates and releases. Currently, the algorithm will dynamically index each set even when the underlying indexing method for each RDD is the same. This requires touching every record in the dataset, which can be avoided for the majority of joins where the indexing methods are the same between both sets.
Simply focus a cell and use [SHIFT + ENTER] to run the code.
Import pixiedust
Start by importing pixiedust, which should succeed if all bootstrap and install steps were run correctly.
You should see a message below about opening the pixiedust database successfully, with no errors.
Depending on the version of pixiedust that gets installed, it may ask you to update.
If so, run this first cell.
Step1: Picking the right parallelism
It's important to pick a high enough parallelism to partition the data into small enough chunks to support the join. Relying on the default set by Spark for the cluster size when working with an extremely large set of data is a recipe for OOM errors on the executors.
If you're having trouble finding the right parallelism try looking at the Spark history server and checking what your largest partition size is. Aim for a max partition size of ~64MB preferably smaller.
Step2: Download and ingest the GPX data
NOTE: Depending on cluster size, the copy can sometimes fail. This appears to be a race condition with the copy command when downloading the files from S3, and it may cause the following import-into-Accumulo command to fail. You can check the Accumulo tables by looking at port 9995 of the EMR cluster; there should be 5 tables after importing.
Step3: Download GDELT Data
Download the GDELT data necessary to perform the join. You can either download the quickstart events, which amounts to roughly ~120k features, or download all events from 2010 onward, which is closer to ~500k+ features.
If you want the larger dataset run the cell below, but replace "TIME_REGEX" with "LARGER_TIME_REGEX"
Step4: Ingest GDELT Data
Depending on how many events were downloaded above this step could take anywhere from 10 minutes to hours. The CQL filter only ingests a small portion of the events over Europe.
Step5: Run the spatial join
Execute the cell below to run the spatial join. This will compare 285 million gpx points against ~100k-~500k gdelt events. The smallest run case takes anywhere from 2-5 hours depending on dataset and cluster size. The work is split into 3 jobs, the first two determining which tiers contain data and the last performing the join between tiers.
This would be the equivalent of running the following sql command from the sql_context
Step6: Create Map of join results
Once we have GeoServer layers for our join results, we can use folium to add the WMS layers and display the results on a map.
#!pip install --user --upgrade pixiedust
#Stop old session
spark.stop()
Explanation: GeoWave Spatial Join Demo
This demo runs a distance join between a GPX dataset for Germany and the GDELT dataset. We use this demo to run a distance join with our tiered join algorithm on two large datasets, to determine which GPX points are within a certain distance of GDELT events.
Running this join on Spark with a naive Spark SQL query would take 20+ hours to possibly get a result. With this algorithm and GeoWave's tiered indexing strategy we can complete the same join in 2-5 hours depending on the cluster size and configuration. This algorithm is not the answer to every join situation, however; for smaller datasets that fit into memory you are performing extra work by running this join in its current implementation. For those datasets, native Spark joins are still a better option.
The current implementation of this algorithm considers the worst-case scenario for each dataset. This will be improved upon quickly over the next updates and releases. Currently, the algorithm will dynamically index each set even when the underlying indexing method for each RDD is the same. This requires touching every record in the dataset, which can be avoided for the majority of joins where the indexing methods are the same between both sets.
Simply focus a cell and use [SHIFT + ENTER] to run the code.
Import pixiedust
Start by importing pixiedust, which should import cleanly if all bootstrap and install steps were run correctly.
You should see output below indicating that the pixiedust database was opened successfully with no errors.
Depending on the version of pixiedust that gets installed, it may ask you to update.
If so, run this first cell.
End of explanation
#Create new session with adequate parallelism
spark = SparkSession.builder\
.config('spark.serializer','org.apache.spark.serializer.KryoSerializer')\
.config('spark.kryo.registrator', 'org.locationtech.geowave.analytic.spark.GeoWaveRegistrator')\
.config('spark.default.parallelism', '6000')\
.getOrCreate()
print(spark.__dict__)
sc = spark.sparkContext
import pixiedust
import geowave_pyspark
pixiedust.enableJobMonitor()
# Print Spark info and create sql_context
print('Spark Version: {0}'.format(sc.version))
print('Python Version: {0}'.format(sc.pythonVer))
print('Application Name: {0}'.format(sc.appName))
print('Application ID: {0}'.format(sc.applicationId))
print('Spark Master: {0}'.format( sc.master))
Explanation: Picking the right parallelism
It's important to pick a high enough parallelism to partition the data into small enough chunks to support the join. Relying on the default set by Spark for the cluster size when working with an extremely large set of data is a recipe for OOM errors on the executors.
If you're having trouble finding the right parallelism, try looking at the Spark history server and checking what your largest partition size is. Aim for a max partition size of ~64MB, preferably smaller.
End of explanation
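As a rough illustration of the sizing guidance above, the arithmetic below estimates a parallelism value from an assumed on-disk dataset size and the ~64MB partition target. This is only a sketch added for clarity; the dataset size is an assumption to be replaced with your own measurement, and it was not part of the original demo.
# Sizing sketch (assumption: ~400 GB of input data; substitute your own number)
assumed_dataset_bytes = 400 * 1024 ** 3
target_partition_bytes = 64 * 1024 ** 2   # aim for <= ~64MB per partition
estimated_parallelism = int(assumed_dataset_bytes / target_partition_bytes)
print('suggested spark.default.parallelism >=', estimated_parallelism)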
%%bash
s3-dist-cp -D mapreduce.task.timeout=60000000 --src=s3://geowave-gpx-data/gpx --dest=hdfs://$HOSTNAME:8020/tmp/
%%bash
/opt/accumulo/bin/accumulo shell -u root -p secret -e "importtable geowave.germany_gpx_SPATIAL_IDX /tmp/spatial"
/opt/accumulo/bin/accumulo shell -u root -p secret -e "importtable geowave.germany_gpx_GEOWAVE_METADATA /tmp/metadata"
%%bash
# configure geowave connection params for store
geowave store add germany_gpx --gwNamespace geowave.germany_gpx -t accumulo -i accumulo -u root -p secret --zookeeper $HOSTNAME:2181
Explanation: Download and ingest the GPX data
NOTE: Depending on cluster size, sometimes the copy can fail. This appears to be a race condition in the copy command when downloading the files from S3. This may cause the following import-into-Accumulo command to fail. You can check the Accumulo tables by looking at port 9995 of the EMR cluster. There should be 5 tables after importing.
End of explanation
%%bash
cd /mnt/tmp
wget s3.amazonaws.com/geowave/latest/scripts/emr/quickstart/geowave-env.sh
source /mnt/tmp/geowave-env.sh
#setup a larger regex for every event after 2010
export LARGER_TIME_REGEX=201
mkdir gdelt
cd gdelt
wget http://data.gdeltproject.org/events/md5sums
for file in `cat md5sums | cut -d' ' -f3 | grep "^${TIME_REGEX}"` ; \
do wget http://data.gdeltproject.org/events/$file ; done
md5sum -c md5sums 2>&1 | grep "^${TIME_REGEX}"
Explanation: Download GDELT Data
Download the GDELT data necessary to perform the join. You can either download the quickstart events, which amounts to roughly 120k features, or download all events from 2010 onward, which is closer to 500k+ features.
If you want the larger dataset, run the cell below, but replace "TIME_REGEX" with "LARGER_TIME_REGEX"
End of explanation
%%bash
# We have to source here again because bash runs in a separate sub process each cell.
source /mnt/tmp/geowave-env.sh
# clear old potential runs
geowave store clear gdelt
geowave store rm gdelt
# configure geowave connection params for accumulo stores "gdelt"
geowave store add gdelt --gwNamespace geowave.gdelt -t accumulo -i accumulo -u root -p secret --zookeeper $HOSTNAME:2181
# configure a spatial index
geowave index add gdelt gdeltspatial -t spatial --partitionStrategy round_robin --numPartitions $NUM_PARTITIONS
# run the ingest for a 10x10 deg bounding box over Europe
geowave ingest localtogw /mnt/tmp/gdelt gdelt gdeltspatial -f gdelt \
--gdelt.cql "BBOX(geometry, 0, 50, 10, 60)"
#grab classes from jvm
hbase_options_class = sc._jvm.org.locationtech.geowave.datastore.hbase.cli.config.HBaseRequiredOptions
accumulo_options_class = sc._jvm.org.locationtech.geowave.datastore.accumulo.cli.config.AccumuloRequiredOptions
query_options_class = sc._jvm.org.locationtech.geowave.core.store.query.QueryOptions
geowave_rdd_class = sc._jvm.org.locationtech.geowave.analytic.spark.GeoWaveRDD
indexed_rdd_class = sc._jvm.org.locationtech.geowave.analytic.spark.GeoWaveIndexedRDD
rdd_loader_class = sc._jvm.org.locationtech.geowave.analytic.spark.GeoWaveRDDLoader
rdd_options_class = sc._jvm.org.locationtech.geowave.analytic.spark.RDDOptions
sf_df_class = sc._jvm.org.locationtech.geowave.analytic.spark.sparksql.SimpleFeatureDataFrame
byte_array_class = sc._jvm.org.locationtech.geowave.core.index.ByteArrayId
#grab classes for spatial join
join_runner_class = sc._jvm.org.locationtech.geowave.analytic.spark.spatial.SpatialJoinRunner
index_builder_class = sc._jvm.org.locationtech.geowave.core.geotime.ingest.SpatialDimensionalityTypeProvider.SpatialIndexBuilder
geom_intersects_class = sc._jvm.org.locationtech.geowave.analytic.spark.sparksql.udf.GeomIntersects
geom_distance_class = sc._jvm.org.locationtech.geowave.analytic.spark.sparksql.udf.GeomWithinDistance
udf_registry_class = sc._jvm.org.locationtech.geowave.analytic.spark.sparksql.udf.GeomFunctionRegistry
feature_data_adapter_class = sc._jvm.org.locationtech.geowave.adapter.vector.FeatureDataAdapter
feature_data_utils = sc._jvm.org.locationtech.geowave.adapter.vector.util.FeatureDataUtils
sft_builder_class = sc._jvm.org.geotools.feature.simple.SimpleFeatureTypeBuilder
datastore_utils_class = sc._jvm.org.locationtech.geowave.core.store.util.DataStoreUtils
udf_registry_class.registerGeometryFunctions(spark._jsparkSession)
spatial_encoders_class = sc._jvm.org.locationtech.geowave.analytic.spark.sparksql.GeoWaveSpatialEncoders
spatial_encoders_class.registerUDTs()
import os
#setup input datastore
gpx_store = accumulo_options_class()
gpx_store.setInstance('accumulo')
gpx_store.setUser('root')
gpx_store.setPassword('secret')
gpx_store.setZookeeper(os.environ['HOSTNAME'] + ':2181')
gpx_store.setGeowaveNamespace('geowave.germany_gpx')
#Setup osm datastore
gdelt_store = accumulo_options_class()
gdelt_store.setInstance('accumulo')
gdelt_store.setUser('root')
gdelt_store.setPassword('secret')
gdelt_store.setZookeeper(os.environ['HOSTNAME'] + ':2181')
gdelt_store.setGeowaveNamespace('geowave.gdelt')
#Setup output store
output_store = accumulo_options_class()
output_store.setInstance('accumulo')
output_store.setUser('root')
output_store.setPassword('secret')
output_store.setZookeeper(os.environ['HOSTNAME'] + ':2181')
output_store.setGeowaveNamespace('geowave.joined')
gpx_store_plugin = gpx_store.createPluginOptions()
gdelt_store_plugin = gdelt_store.createPluginOptions()
output_store_plugin = output_store.createPluginOptions()
#loading RDDs and setting up variables for join
# Create SpatialJoinRunner object
# You have to pass the wrapped java SparkSession object to java functions
join_runner = join_runner_class(spark._jsparkSession)
# Set data for left side rdd in join
join_runner.setLeftStore(gpx_store_plugin)
gpx_point = byte_array_class('gpxpoint')
join_runner.setLeftAdapterId(gpx_point)
# Set data for right side rdd in join
join_runner.setRightStore(gdelt_store_plugin)
gdelt_event = byte_array_class('gdeltevent')
join_runner.setRightAdapterId(gdelt_event)
# Set data for output store
join_runner.setOutputStore(output_store_plugin)
join_runner.setOutputLeftAdapterId(byte_array_class('gpxJoin'))
join_runner.setOutputRightAdapterId(byte_array_class('gdeltJoin'))
# Set predicate method for join
distance_predicate = geom_distance_class(0.01)
join_runner.setPredicate(distance_predicate)
# Set default partition count for spark objects
join_runner.setPartCount(6000)
Explanation: Ingest GDELT Data
Depending on how many events were downloaded above this step could take anywhere from 10 minutes to hours. The CQL filter only ingests a small portion of the events over Europe.
End of explanation
join_runner.run()
Explanation: Run the spatial join
Execute the cell below to run the spatial join. This will compare 285 million GPX points against roughly 100k to 500k GDELT events. The smallest run case takes anywhere from 2-5 hours depending on dataset and cluster size. The work is split into 3 jobs: the first two determine which tiers contain data, and the last performs the join between tiers.
This would be the equivalent of running the following SQL command from the sql_context:
"select gpx.*, gdelt.* from gpx, gdelt where geomDistance(gpx.geom,gdelt.geom) <= 0.01"
End of explanation
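For comparison only, the cell below sketches the naive Spark SQL formulation of the same join that the text above warns against at this scale. It is not part of the original demo and assumes two Spark DataFrames, gpx_df and gdelt_df, have already been built for the two feature sets (for example via GeoWave's SimpleFeatureDataFrame helper); those variable names are assumptions, not objects created earlier in this notebook.
# Naive approach (slow on datasets this large), shown only to contrast with the tiered join above.
# Assumes gpx_df and gdelt_df are Spark DataFrames of the two feature sets.
gpx_df.createOrReplaceTempView('gpx')
gdelt_df.createOrReplaceTempView('gdelt')
naive_join = spark.sql(
    "select gpx.*, gdelt.* from gpx, gdelt "
    "where geomDistance(gpx.geom, gdelt.geom) <= 0.01"
)
# naive_join.count()  # expect this to run for many hours on the full datasets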
%%bash
geowave store add gpx_joined --gwNamespace geowave.joined -t accumulo -i accumulo -u root -p secret --zookeeper $HOSTNAME:2181
# set up geoserver
geowave config geoserver "$HOSTNAME:8000"
# add the gpx join results layer
geowave gs layer add gpx_joined -id gdeltJoin
geowave gs style set gdeltJoin --styleName geowave:blue
# add the gdelt join results layer
geowave gs layer add gpx_joined -id gpxJoin
geowave gs style set gpxJoin --styleName point
import owslib
from owslib.wms import WebMapService
url = "http://" + os.environ['HOSTNAME'] + ":8000/geoserver/geowave/wms"
web_map_services = WebMapService(url)
#print layers available wms
print('\n'.join(web_map_services.contents.keys()))
import folium
from folium import Map
#grab wms info for centroids
layer = 'gdeltJoin'
wms = web_map_services.contents[layer]
#build center of map off centroid bbox
lon = (wms.boundingBox[0] + wms.boundingBox[2]) / 2.
lat = (wms.boundingBox[1] + wms.boundingBox[3]) / 2.
center = [lat, lon]
m = Map(location = center,zoom_start=10)
name = wms.title
gdelt = folium.raster_layers.WmsTileLayer(
url=url,
name=name,
fmt='image/png',
transparent=True,
layers=layer,
overlay=True,
COLORSCALERANGE='1.2,28',
)
gdelt.add_to(m)
layer = 'gpxJoin'
wms = web_map_services.contents[layer]
name = wms.title
gpx = folium.raster_layers.WmsTileLayer(
url=url,
name=name,
fmt='image/png',
transparent=True,
layers=layer,
overlay=True,
COLORSCALERANGE='1.2,28',
)
gpx.add_to(m)
folium.LayerControl().add_to(m)
m
Explanation: Create Map of join results
Once we have GeoServer layers for our join results, we can use folium to add the WMS layers and display the results on a map.
End of explanation |
495 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex SDK
Step1: Install the Google cloud-storage library as well.
Step2: Restart the Kernel
Once you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
Google Cloud SDK is already installed in Google Cloud Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend when possible, to choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your GCP account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
Note
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex SDK
Import the Vertex SDK into our Python environment.
Step11: Vertex AI constants
Set up the following constants for Vertex AI
Step12: AutoML constants
Next, set up constants unique to AutoML Text Classification datasets and training
Step13: Clients Vertex AI
The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).
You will use several clients in this tutorial, so set them all up upfront.
Dataset Service for managed datasets.
Model Service for managed models.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving. Note
Step14: Example output
Step15: Example output
Step16: Response
Step17: Example output
Step18: projects.locations.datasets.import
Request
Step19: Example output
Step20: Response
Step21: Example output
Step22: Example output
Step23: Response
Step24: Example output
Step25: projects.locations.trainingPipelines.get
Call
Step26: Response
Step27: Example output
Step28: Evaluate the model
projects.locations.models.evaluations.list
Call
Step29: Response
Step30: Example output
Step31: Response
Step32: Example output
Step33: Example output
Step34: Example output
Step35: Example output
Step36: Response
Step37: Example output
Step38: projects.locations.batchPredictionJobs.get
Call
Step39: Response
Step41: Example output
Step42: Example output
Step43: Example output
Step44: Response
Step45: Example output
Step46: projects.locations.endpoints.deployModel
Request
Step47: Example output
Step48: Response
Step49: Example output
Step50: projects.locations.endpoints.predict
Request
Step51: Example output
Step52: Response
Step53: Example output
Step54: Response
Step55: Example output | Python Code:
! pip3 install -U google-cloud-aiplatform --user
Explanation: Vertex SDK: AutoML natural language text classification model
Installation
Install the latest (preview) version of Vertex SDK.
End of explanation
! pip3 install google-cloud-storage
Explanation: Install the Google cloud-storage library as well.
End of explanation
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the Kernel
Once you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
Google Cloud SDK is already installed in Google Cloud Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend when possible, to choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see Region support for Vertex AI services
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Vertex, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
Explanation: Authenticate your GCP account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
Note: If you are on a Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
! gsutil mb -l $REGION gs://$BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al gs://$BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import base64
import json
import os
import sys
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex SDK
Import the Vertex SDK into our Python environment.
End of explanation
# API Endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex AI location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex AI constants
Set up the following constants for Vertex AI:
API_ENDPOINT: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex AI location root path for dataset, model and endpoint resources.
End of explanation
# Text Dataset type
TEXT_SCHEMA = "google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml"
# Text Labeling type
IMPORT_SCHEMA_TEXT_CLASSIFICATION = "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_classification_single_label_io_format_1.0.0.yaml"
# Text Training task
TRAINING_TEXT_CLASSIFICATION_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml"
Explanation: AutoML constants
Next, set up constants unique to AutoML Text Classification datasets and training:
Dataset Schemas: Tells the managed dataset service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the managed dataset service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Vertex AI Pipelines service the task (e.g., classification) to train the model for.
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
clients["job"] = create_job_client()
for client in clients.items():
print(client)
IMPORT_FILE = "gs://cloud-ml-data/NL-classification/happiness.csv"
! gsutil cat $IMPORT_FILE | head -n 10
Explanation: Clients Vertex AI
The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).
You will use several clients in this tutorial, so set them all up upfront.
Dataset Service for managed datasets.
Model Service for managed models.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving. Note: Prediction has a different service endpoint.
End of explanation
DATA_SCHEMA = TEXT_SCHEMA
dataset = {
"display_name": "happiness_" + TIMESTAMP,
"metadata_schema_uri": "gs://" + DATA_SCHEMA,
}
print(
MessageToJson(
aip.CreateDatasetRequest(parent=PARENT, dataset=dataset).__dict__["_pb"]
)
)
Explanation: Example output:
I went on a successful date with someone I felt sympathy and connection with.,affection
I was happy when my son got 90% marks in his examination,affection
I went to the gym this morning and did yoga.,exercise
We had a serious talk with some friends of ours who have been flaky lately. They understood and we had a good evening hanging out.,bonding
I went with grandchildren to butterfly display at Crohn Conservatory,affection
I meditated last night.,leisure
"I made a new recipe for peasant bread, and it came out spectacular!",achievement
I got gift from my elder brother which was really surprising me,affection
YESTERDAY MY MOMS BIRTHDAY SO I ENJOYED,enjoy_the_moment
Watching cupcake wars with my three teen children,affection
Create a dataset
projects.locations.datasets.create
Request
End of explanation
request = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"dataset": {
"displayName": "happiness_20210226015238",
"metadataSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml"
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/datasets/574578388396670976",
"displayName": "happiness_20210226015238",
"metadataSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml",
"labels": {
"aiplatform.googleapis.com/dataset_metadata_schema": "TEXT"
},
"metadata": {
"dataItemSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/dataitem/text_1.0.0.yaml"
}
}
End of explanation
LABEL_SCHEMA = IMPORT_SCHEMA_TEXT_CLASSIFICATION
import_config = {
"gcs_source": {"uris": [IMPORT_FILE]},
"import_schema_uri": LABEL_SCHEMA,
}
print(
MessageToJson(
aip.ImportDataRequest(
name=dataset_short_id, import_configs=[import_config]
).__dict__["_pb"]
)
)
Explanation: projects.locations.datasets.import
Request
End of explanation
request = clients["dataset"].import_data(
name=dataset_id, import_configs=[import_config]
)
Explanation: Example output:
{
"name": "574578388396670976",
"importConfigs": [
{
"gcsSource": {
"uris": [
"gs://cloud-ml-data/NL-classification/happiness.csv"
]
},
"importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_classification_single_label_io_format_1.0.0.yaml"
}
]
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
TRAINING_SCHEMA = TRAINING_TEXT_CLASSIFICATION_SCHEMA
task = json_format.ParseDict(
{
"multi_label": False,
},
Value(),
)
training_pipeline = {
"display_name": "happiness_" + TIMESTAMP,
"input_data_config": {"dataset_id": dataset_short_id},
"model_to_upload": {"display_name": "happiness_" + TIMESTAMP},
"training_task_definition": TRAINING_SCHEMA,
"training_task_inputs": task,
}
print(
MessageToJson(
aip.CreateTrainingPipelineRequest(
parent=PARENT, training_pipeline=training_pipeline
).__dict__["_pb"]
)
)
Explanation: Example output:
{}
Train a model
projects.locations.trainingPipelines.create
Request
End of explanation
request = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"trainingPipeline": {
"displayName": "happiness_20210226015238",
"inputDataConfig": {
"datasetId": "574578388396670976"
},
"trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml",
"trainingTaskInputs": {
"multi_label": false
},
"modelToUpload": {
"displayName": "happiness_20210226015238"
}
}
}
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the training pipeline
training_pipeline_id = request.name
# The short numeric ID for the training pipeline
training_pipeline_short_id = training_pipeline_id.split("/")[-1]
print(training_pipeline_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/trainingPipelines/2903115317607661568",
"displayName": "happiness_20210226015238",
"inputDataConfig": {
"datasetId": "574578388396670976"
},
"trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml",
"trainingTaskInputs": {},
"modelToUpload": {
"displayName": "happiness_20210226015238"
},
"state": "PIPELINE_STATE_PENDING",
"createTime": "2021-02-26T02:23:54.166560Z",
"updateTime": "2021-02-26T02:23:54.166560Z"
}
End of explanation
request = clients["pipeline"].get_training_pipeline(name=training_pipeline_id)
Explanation: projects.locations.trainingPipelines.get
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
while True:
response = clients["pipeline"].get_training_pipeline(name=training_pipeline_id)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_name = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
break
else:
model_id = response.model_to_upload.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(20)
print(model_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/trainingPipelines/2903115317607661568",
"displayName": "happiness_20210226015238",
"inputDataConfig": {
"datasetId": "574578388396670976"
},
"trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml",
"trainingTaskInputs": {},
"modelToUpload": {
"name": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"displayName": "happiness_20210226015238"
},
"state": "PIPELINE_STATE_SUCCEEDED",
"createTime": "2021-02-26T02:23:54.166560Z",
"startTime": "2021-02-26T02:23:54.396088Z",
"endTime": "2021-02-26T06:08:06.548524Z",
"updateTime": "2021-02-26T06:08:06.548524Z"
}
End of explanation
request = clients["model"].list_model_evaluations(parent=model_id)
Explanation: Evaluate the model
projects.locations.models.evaluations.list
Call
End of explanation
model_evaluations = [json.loads(MessageToJson(mel.__dict__["_pb"])) for mel in request]
print(json.dumps(model_evaluations, indent=2))
# The evaluation slice
evaluation_slice = request.model_evaluations[0].name
Explanation: Response
End of explanation
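As an optional aside (a sketch, not part of the original tutorial), the snippet below pulls the confusion matrix out of the parsed evaluation above into a NumPy array. It assumes the classification metrics schema with a confusionMatrix containing annotationSpecs and rows, as shown in the example output further below.
import numpy as np
# Extract the confusion matrix from the evaluation parsed above.
cm = model_evaluations[0]["metrics"]["confusionMatrix"]
labels = [spec["displayName"] for spec in cm["annotationSpecs"]]
matrix = np.array(cm["rows"], dtype=int)
print(labels)
print(matrix)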
request = clients["model"].get_model_evaluation(name=evaluation_slice)
Explanation: Example output:
```
[
{
"name": "projects/116273516712/locations/us-central1/models/2369051733671280640/evaluations/1541152463304785920",
"metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml",
"metrics": {
"confusionMatrix": {
"annotationSpecs": [
{
"displayName": "exercise",
"id": "952213353537732608"
},
{
"id": "1528674105841156096",
"displayName": "achievement"
},
{
"id": "3258056362751426560",
"displayName": "leisure"
},
{
"id": "3834517115054850048",
"displayName": "bonding"
},
{
"id": "5563899371965120512",
"displayName": "enjoy_the_moment"
},
{
"id": "6140360124268544000",
"displayName": "nature"
},
{
"id": "8446203133482237952",
"displayName": "affection"
}
],
"rows": [
[
19.0,
1.0,
0.0,
0.0,
0.0,
0.0,
0.0
],
[
0.0,
342.0,
5.0,
2.0,
13.0,
2.0,
13.0
],
[
2.0,
10.0,
42.0,
1.0,
12.0,
0.0,
2.0
],
[
0.0,
4.0,
0.0,
121.0,
1.0,
0.0,
4.0
],
[
2.0,
29.0,
3.0,
2.0,
98.0,
0.0,
6.0
],
[
0.0,
3.0,
0.0,
1.0,
0.0,
21.0,
1.0
],
[
0.0,
7.0,
0.0,
1.0,
6.0,
0.0,
409.0
]
]
},
"confidenceMetrics": [
{
"f1Score": 0.25,
"recall": 1.0,
"f1ScoreAt1": 0.88776374,
"precisionAt1": 0.88776374,
"precision": 0.14285715,
"recallAt1": 0.88776374
},
{
"confidenceThreshold": 0.05,
"recall": 0.9721519,
"f1Score": 0.8101266,
"recallAt1": 0.88776374,
"f1ScoreAt1": 0.88776374,
"precisionAt1": 0.88776374,
"precision": 0.69439423
},
# REMOVED FOR BREVITY
{
"f1Score": 0.0033698399,
"recall": 0.0016877637,
"confidenceThreshold": 1.0,
"recallAt1": 0.0016877637,
"f1ScoreAt1": 0.0033698399,
"precisionAt1": 1.0,
"precision": 1.0
}
],
"auPrc": 0.95903283,
"logLoss": 0.08260541
},
"createTime": "2021-02-26T06:07:48.967028Z",
"sliceDimensions": [
"annotationSpec"
]
}
]
```
projects.locations.models.evaluations.get
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
test_item = ! gsutil cat $IMPORT_FILE | head -n1
test_item, test_label = str(test_item[0]).split(",")
print(test_item, test_label)
Explanation: Example output:
```
{
"name": "projects/116273516712/locations/us-central1/models/2369051733671280640/evaluations/1541152463304785920",
"metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml",
"metrics": {
"confusionMatrix": {
"annotationSpecs": [
{
"displayName": "exercise",
"id": "952213353537732608"
},
{
"displayName": "achievement",
"id": "1528674105841156096"
},
{
"id": "3258056362751426560",
"displayName": "leisure"
},
{
"id": "3834517115054850048",
"displayName": "bonding"
},
{
"displayName": "enjoy_the_moment",
"id": "5563899371965120512"
},
{
"displayName": "nature",
"id": "6140360124268544000"
},
{
"id": "8446203133482237952",
"displayName": "affection"
}
],
"rows": [
[
19.0,
1.0,
0.0,
0.0,
0.0,
0.0,
0.0
],
[
0.0,
342.0,
5.0,
2.0,
13.0,
2.0,
13.0
],
[
2.0,
10.0,
42.0,
1.0,
12.0,
0.0,
2.0
],
[
0.0,
4.0,
0.0,
121.0,
1.0,
0.0,
4.0
],
[
2.0,
29.0,
3.0,
2.0,
98.0,
0.0,
6.0
],
[
0.0,
3.0,
0.0,
1.0,
0.0,
21.0,
1.0
],
[
0.0,
7.0,
0.0,
1.0,
6.0,
0.0,
409.0
]
]
},
"logLoss": 0.08260541,
"confidenceMetrics": [
{
"precision": 0.14285715,
"precisionAt1": 0.88776374,
"recall": 1.0,
"f1ScoreAt1": 0.88776374,
"recallAt1": 0.88776374,
"f1Score": 0.25
},
{
"f1Score": 0.8101266,
"recall": 0.9721519,
"precision": 0.69439423,
"confidenceThreshold": 0.05,
"recallAt1": 0.88776374,
"precisionAt1": 0.88776374,
"f1ScoreAt1": 0.88776374
},
# REMOVED FOR BREVITY
{
"confidenceThreshold": 1.0,
"f1Score": 0.0033698399,
"f1ScoreAt1": 0.0033698399,
"precisionAt1": 1.0,
"precision": 1.0,
"recall": 0.0016877637,
"recallAt1": 0.0016877637
}
],
"auPrc": 0.95903283
},
"createTime": "2021-02-26T06:07:48.967028Z",
"sliceDimensions": [
"annotationSpec"
]
}
```
Make batch predictions
Prepare files for batch prediction
End of explanation
import json
import tensorflow as tf
test_item_uri = "gs://" + BUCKET_NAME + "/test.txt"
with tf.io.gfile.GFile(test_item_uri, "w") as f:
f.write(test_item + "\n")
gcs_input_uri = "gs://" + BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": test_item_uri, "mime_type": "text/plain"}
f.write(json.dumps(data) + "\n")
! gsutil cat $gcs_input_uri
! gsutil cat $test_item_uri
Explanation: Example output:
I went on a successful date with someone I felt sympathy and connection with. affection
Make the batch input file
Let's now make a batch input file, which you store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each text file. The dictionary contains the key/value pairs:
content: The Cloud Storage path to the text file.
mimeType: The content type. In our example, it is a text/plain file.
End of explanation
batch_prediction_job = {
"display_name": "happiness_" + TIMESTAMP,
"model": model_id,
"input_config": {
"instances_format": "jsonl",
"gcs_source": {"uris": [gcs_input_uri]},
},
"output_config": {
"predictions_format": "jsonl",
"gcs_destination": {
"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"
},
},
"dedicated_resources": {
"machine_spec": {
"machine_type": "n1-standard-2",
"accelerator_count": 0,
},
"starting_replica_count": 1,
"max_replica_count": 1,
},
}
print(
MessageToJson(
aip.CreateBatchPredictionJobRequest(
parent=PARENT, batch_prediction_job=batch_prediction_job
).__dict__["_pb"]
)
)
Explanation: Example output:
{"content": "gs://migration-ucaip-trainingaip-20210226015238/test.txt", "mime_type": "text/plain"}
I went on a successful date with someone I felt sympathy and connection with.
projects.locations.batchPredictionJobs.create
Request
End of explanation
request = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"batchPredictionJob": {
"displayName": "happiness_20210226015238",
"model": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210226015238/test.jsonl"
]
}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015238/batch_output/"
}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
}
}
}
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
# The fully qualified ID for the batch job
batch_job_id = request.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/batchPredictionJobs/4770983263059574784",
"displayName": "happiness_20210226015238",
"model": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210226015238/test.jsonl"
]
}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015238/batch_output/"
}
},
"state": "JOB_STATE_PENDING",
"completionStats": {
"incompleteCount": "-1"
},
"createTime": "2021-02-26T09:37:44.471843Z",
"updateTime": "2021-02-26T09:37:44.471843Z"
}
End of explanation
request = clients["job"].get_batch_prediction_job(name=batch_job_id)
Explanation: projects.locations.batchPredictionJobs.get
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
def get_latest_predictions(gcs_out_dir):
Get the latest prediction subfolder using the timestamp in the subfolder name
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
response = clients["job"].get_batch_prediction_job(name=batch_job_id)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", response.state)
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
folder = get_latest_predictions(
response.output_config.gcs_destination.output_uri_prefix
)
! gsutil ls $folder/prediction*.jsonl
! gsutil cat $folder/prediction*.jsonl
break
time.sleep(60)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/batchPredictionJobs/4770983263059574784",
"displayName": "happiness_20210226015238",
"model": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210226015238/test.jsonl"
]
}
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015238/batch_output/"
}
},
"state": "JOB_STATE_PENDING",
"completionStats": {
"incompleteCount": "-1"
},
"createTime": "2021-02-26T09:37:44.471843Z",
"updateTime": "2021-02-26T09:37:44.471843Z"
}
End of explanation
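As an optional post-processing step (a sketch, not part of the original tutorial), the snippet below reads the predictions*.jsonl written by the batch job above and reports the top label per instance. It assumes the per-line structure with prediction.displayNames and prediction.confidences fields shown in the example output further below, and reuses the folder variable from the polling loop.
import json
import tensorflow as tf
# Parse the batch prediction output and print the highest-confidence label per instance.
for prediction_file in tf.io.gfile.glob(folder + "/prediction*.jsonl"):
    with tf.io.gfile.GFile(prediction_file, "r") as f:
        for line in f:
            record = json.loads(line)
            labels = record["prediction"]["displayNames"]
            confidences = record["prediction"]["confidences"]
            best = max(range(len(labels)), key=lambda i: confidences[i])
            print(record["instance"]["content"], "->", labels[best], round(confidences[best], 3))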
endpoint = {"display_name": "happiness_" + TIMESTAMP}
print(
MessageToJson(
aip.CreateEndpointRequest(parent=PARENT, endpoint=endpoint).__dict__["_pb"]
)
)
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210226015238/batch_output/prediction-happiness_20210226015238-2021-02-26T09:37:44.261133Z/predictions_00001.jsonl
{"instance":{"content":"gs://migration-ucaip-trainingaip-20210226015238/test.txt","mimeType":"text/plain"},"prediction":{"ids":["8446203133482237952","3834517115054850048","1528674105841156096","5563899371965120512","952213353537732608","3258056362751426560","6140360124268544000"],"displayNames":["affection","bonding","achievement","enjoy_the_moment","exercise","leisure","nature"],"confidences":[0.9183423,0.045685068,0.024327256,0.0057157497,0.0040851077,0.0012627868,5.8173126E-4]}}
Make online predictions
projects.locations.endpoints.create
Request
End of explanation
request = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"endpoint": {
"displayName": "happiness_20210226015238"
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The fully qualified ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/endpoints/7367713068517687296"
}
End of explanation
deployed_model = {
"model": model_id,
"display_name": "happiness_" + TIMESTAMP,
"automatic_resources": {"min_replica_count": 1, "max_replica_count": 1},
}
traffic_split = {"0": 100}
print(
MessageToJson(
aip.DeployModelRequest(
endpoint=endpoint_id,
deployed_model=deployed_model,
traffic_split=traffic_split,
).__dict__["_pb"]
)
)
Explanation: projects.locations.endpoints.deployModel
Request
End of explanation
request = clients["endpoint"].deploy_model(
endpoint=endpoint_id, deployed_model=deployed_model, traffic_split=traffic_split
)
Explanation: Example output:
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/7367713068517687296",
"deployedModel": {
"model": "projects/116273516712/locations/us-central1/models/2369051733671280640",
"displayName": "happiness_20210226015238",
"automaticResources": {
"minReplicaCount": 1,
"maxReplicaCount": 1
}
},
"trafficSplit": {
"0": 100
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The unique ID for the deployed model
deployed_model_id = result.deployed_model.id
print(deployed_model_id)
Explanation: Example output:
{
"deployedModel": {
"id": "418518105996656640"
}
}
End of explanation
test_item = ! gsutil cat $IMPORT_FILE | head -n1
test_item, test_label = str(test_item[0]).split(",")
instances_list = [{"content": test_item}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
request = aip.PredictRequest(
endpoint=endpoint_id,
)
request.instances.append(instances)
print(MessageToJson(request.__dict__["_pb"]))
Explanation: projects.locations.endpoints.predict
Request
End of explanation
request = clients["prediction"].predict(endpoint=endpoint_id, instances=instances)
Explanation: Example output:
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/7367713068517687296",
"instances": [
[
{
"content": "I went on a successful date with someone I felt sympathy and connection with."
}
]
]
}
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
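As a small optional addition (a sketch, not part of the original tutorial), the snippet below turns the prediction response into sorted (label, confidence) pairs. It assumes the response structure with displayNames and confidences shown in the example output below, and that request still holds the predict response from the call above.
# Rank the predicted labels by confidence from the online prediction response.
response_dict = json.loads(MessageToJson(request.__dict__["_pb"]))
prediction = response_dict["predictions"][0]
ranked = sorted(
    zip(prediction["displayNames"], prediction["confidences"]),
    key=lambda pair: pair[1],
    reverse=True,
)
for label, confidence in ranked:
    print(label, round(confidence, 3))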
request = clients["endpoint"].undeploy_model(
endpoint=endpoint_id, deployed_model_id=deployed_model_id, traffic_split={}
)
Explanation: Example output:
{
"predictions": [
{
"confidences": [
0.8867673277854919,
0.024743923917412758,
0.0034913308918476105,
0.07936617732048035,
0.0013463868526741862,
0.0002393187169218436,
0.0040455833077430725
],
"displayNames": [
"affection",
"achievement",
"enjoy_the_moment",
"bonding",
"leisure",
"nature",
"exercise"
],
"ids": [
"8446203133482237952",
"1528674105841156096",
"5563899371965120512",
"3834517115054850048",
"3258056362751426560",
"6140360124268544000",
"952213353537732608"
]
}
],
"deployedModelId": "418518105996656640"
}
projects.locations.endpoints.undeployModel
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
delete_dataset = True
delete_model = True
delete_endpoint = True
delete_pipeline = True
delete_batchjob = True
delete_bucket = True
# Delete the dataset using the Vertex AI fully qualified identifier for the dataset
try:
if delete_dataset:
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the model using the Vertex AI fully qualified identifier for the model
try:
if delete_model:
clients["model"].delete_model(name=model_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint
try:
if delete_endpoint:
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex AI fully qualified identifier for the training pipeline
try:
if delete_pipeline:
clients["pipeline"].delete_training_pipeline(name=training_pipeline_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex AI fully qualified identifier for the batch job
try:
if delete_batchjob:
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME
Explanation: Example output:
{}
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation |
496 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with CTF data
Step1: To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch change use_precomputed to
False. With use_precomputed = False running time of this script can
be several minutes even on a fast computer.
Step2: The data was collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the data and empty room data files are read to
construct instances of
Step3: In the memory saving mode we use preload=False and use the memory
efficient IO which loads the data on demand. However, filtering and some
other functions require the data to be preloaded into memory.
Step4: The data array consists of 274 MEG axial gradiometers, 26 MEG reference
sensors and 2 EEG electrodes (Cz and Pz). In addition
Step5: For noise reduction, a set of bad segments have been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades. The saccades are
removed by using SSP. We use pandas to read the data from the csv files. You
can also view the files with your favorite text editor.
Step6: Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
Step7: Visually inspect the effects of projections. Click on 'proj' button at the
bottom right corner to toggle the projectors on/off. EOG events can be
plotted by adding the event list as a keyword argument. As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
Step8: Typical preprocessing step is the removal of power line artifact (50 Hz or
60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the
original 60 Hz artifact and the harmonics. The power spectra are plotted
before and after the filtering to show the effect. The drop after 600 Hz
appears because the data was filtered during the acquisition. In memory
saving mode we do the filtering at evoked stage, which is not something you
usually would do.
Step9: We also lowpass filter the data at 100 Hz to remove the hf components.
Step10: Epoching and averaging.
First some parameters are defined and events extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values and are in T / m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
Step11: The event timing is adjusted by comparing the trigger times on detected
sound onsets on channel UADC001-4408.
Step12: We mark a set of bad channels that seem noisier than others. This can also
be done interactively with raw.plot by clicking the channel name
(or the line). The marked channels are added as bad when the browser window
is closed.
Step13: The epochs (trials) are created for MEG channels. First we find the picks
for MEG and EOG channels. Then the epochs are constructed using these picks.
The epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier with
saccades) you can use keyword reject_by_annotation=False.
Step14: We only use first 40 good epochs from each run. Since we first drop the bad
epochs, the indices of the epochs are no longer same as in the original
epochs collection. Investigation of the event timings reveals that first
epoch from the second run corresponds to index 182.
Step15: The averages for each conditions are computed.
Step16: A typical preprocessing step is the removal of the power line artifact (50 Hz or
60 Hz). Here we lowpass filter the data at 40 Hz, which will remove all
line artifacts (and high-frequency information). Normally this would be done
to raw data (with
Step17: Here we plot the ERF of standard and deviant conditions. In both conditions
we can see the P50 and N100 responses. The mismatch negativity is visible
only in the deviant condition around 100-200 ms. P200 is also visible around
170 ms in both conditions but much stronger in the standard condition. P300
is visible in the deviant condition only (decision making in preparation of the
button press). You can view the topographies from a certain time span by
painting an area by clicking and holding the left mouse button.
Step18: Show activations as topography figures.
Step19: We can see the MMN effect more clearly by looking at the difference between
the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
P300 are emphasised.
Step20: Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
Step21: The transformation is read from a file
Step22: To save time and memory, the forward solution is read from a file. Set
use_precomputed=False in the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contains MEG channels, we
only need the inner skull surface for making the forward solution. For more
information
Step23: The sources are computed using dSPM method and plotted on an inflated brain
surface. For interactive controls over the image, use keyword
time_viewer=True.
Standard condition.
Step24: Deviant condition.
Step25: Difference. | Python Code:
# Authors: Mainak Jas <[email protected]>
# Eric Larson <[email protected]>
# Jaakko Leppakangas <[email protected]>
#
# License: BSD-3-Clause
import os.path as op
import pandas as pd
import numpy as np
import mne
from mne import combine_evoked
from mne.minimum_norm import apply_inverse
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
print(__doc__)
Explanation: Working with CTF data: the Brainstorm auditory dataset
Here we compute the evoked from raw for the auditory Brainstorm
tutorial dataset. For comparison, see :footcite:TadelEtAl2011 and the
associated brainstorm site
<https://neuroimage.usc.edu/brainstorm/Tutorials/Auditory>_.
Experiment:
- One subject, 2 acquisition runs 6 minutes each.
- Each run contains 200 regular beeps and 40 easy deviant beeps.
- Random ISI: between 0.7s and 1.7s seconds, uniformly distributed.
- Button pressed when detecting a deviant with the right index finger.
The specifications of this dataset were discussed initially on the
FieldTrip bug tracker
<http://bugzilla.fieldtriptoolbox.org/show_bug.cgi?id=2300>__.
End of explanation
use_precomputed = True
Explanation: To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch change use_precomputed to
False. With use_precomputed = False running time of this script can
be several minutes even on a fast computer.
End of explanation
data_path = bst_auditory.data_path()
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
raw_fname1 = op.join(data_path, 'MEG', subject, 'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path, 'MEG', subject, 'S01_AEF_20131218_02.ds')
erm_fname = op.join(data_path, 'MEG', subject, 'S01_Noise_20131218_01.ds')
Explanation: The data was collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the data and empty room data files are read to
construct instances of :class:mne.io.Raw.
End of explanation
raw = read_raw_ctf(raw_fname1)
n_times_run1 = raw.n_times
# Here we ignore that these have different device<->head transforms
mne.io.concatenate_raws(
[raw, read_raw_ctf(raw_fname2)], on_mismatch='ignore')
raw_erm = read_raw_ctf(erm_fname)
Explanation: In the memory saving mode we use preload=False and use the memory
efficient IO which loads the data on demand. However, filtering and some
other functions require the data to be preloaded into memory.
End of explanation
raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})
if not use_precomputed:
# Leave out the two EEG channels for easier computation of forward.
raw.pick(['meg', 'stim', 'misc', 'eog', 'ecg']).load_data()
Explanation: The data array consists of 274 MEG axial gradiometers, 26 MEG reference
sensors and 2 EEG electrodes (Cz and Pz). In addition:
1 stim channel for marking presentation times for the stimuli
1 audio channel for the sent signal
1 response channel for recording the button presses
1 ECG bipolar
2 EOG bipolar (vertical and horizontal)
12 head tracking channels
20 unused channels
Notice also that the digitized electrode positions (stored in a .pos file)
were automatically loaded and added to the ~mne.io.Raw object.
The head tracking channels and the unused channels are marked as misc
channels. Here we define the EOG and ECG channels.
End of explanation
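As a quick optional sanity check (a sketch, not part of the original tutorial), the snippet below counts the channels per type after the assignment above; the counts should roughly match the numbers listed in the description.
# Count MEG, reference MEG, EOG and ECG channels after setting the channel types.
picks_meg = mne.pick_types(raw.info, meg=True, ref_meg=False)
picks_ref = mne.pick_types(raw.info, meg=False, ref_meg=True)
picks_eog = mne.pick_types(raw.info, meg=False, eog=True)
picks_ecg = mne.pick_types(raw.info, meg=False, ecg=True)
print('MEG: %d, reference: %d, EOG: %d, ECG: %d'
      % (len(picks_meg), len(picks_ref), len(picks_eog), len(picks_ecg)))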
annotations_df = pd.DataFrame()
offset = n_times_run1
for idx in [1, 2]:
csv_fname = op.join(data_path, 'MEG', 'bst_auditory',
'events_bad_0%s.csv' % idx)
df = pd.read_csv(csv_fname, header=None,
names=['onset', 'duration', 'id', 'label'])
print('Events from run {0}:'.format(idx))
print(df)
df['onset'] += offset * (idx - 1)
annotations_df = pd.concat([annotations_df, df], axis=0)
saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)
# Conversion from samples to times:
onsets = annotations_df['onset'].values / raw.info['sfreq']
durations = annotations_df['duration'].values / raw.info['sfreq']
descriptions = annotations_df['label'].values
annotations = mne.Annotations(onsets, durations, descriptions)
raw.set_annotations(annotations)
del onsets, durations, descriptions
Explanation: For noise reduction, a set of bad segments have been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades. The saccades are
removed by using SSP. We use pandas to read the data from the csv files. You
can also view the files with your favorite text editor.
End of explanation
saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,
baseline=(None, None),
reject_by_annotation=False)
projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,
desc_prefix='saccade')
if use_precomputed:
proj_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-eog-proj.fif')
projs_eog = mne.read_proj(proj_fname)[0]
else:
projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),
n_mag=1, n_eeg=0)
raw.add_proj(projs_saccade)
raw.add_proj(projs_eog)
del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory
Explanation: Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
End of explanation
raw.plot(block=True)
Explanation: Visually inspect the effects of projections. Click on the 'proj' button at the
bottom right corner to toggle the projectors on/off. EOG events can be
plotted by adding the event list as a keyword argument. As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
End of explanation
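A small optional sketch (my addition, not part of the original tutorial): the detected EOG events mentioned above can be passed to the browser through the events keyword.
eog_events = mne.preprocessing.find_eog_events(raw)
raw.plot(events=eog_events, block=True)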
if not use_precomputed:
raw.plot_psd(tmax=np.inf, picks='meg')
notches = np.arange(60, 181, 60)
raw.notch_filter(notches, phase='zero-double', fir_design='firwin2')
raw.plot_psd(tmax=np.inf, picks='meg')
Explanation: A typical preprocessing step is the removal of the power line artifact (50 Hz or
60 Hz). Here we notch filter the data at 60, 120 and 180 Hz to remove the
original 60 Hz artifact and its harmonics. The power spectra are plotted
before and after the filtering to show the effect. The drop after 600 Hz
appears because the data was low-pass filtered during acquisition. In the
memory-saving mode we do the filtering at the evoked stage, which is not
something you would usually do.
End of explanation
if not use_precomputed:
raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',
phase='zero-double', fir_design='firwin2')
Explanation: We also lowpass filter the data at 100 Hz to remove the high-frequency components.
End of explanation
tmin, tmax = -0.1, 0.5
event_id = dict(standard=1, deviant=2)
reject = dict(mag=4e-12, eog=250e-6)
# find events
events = mne.find_events(raw, stim_channel='UPPT001')
Explanation: Epoching and averaging.
First some parameters are defined and events extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values and are in T / m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
End of explanation
sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]
onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]
min_diff = int(0.5 * raw.info['sfreq'])
diffs = np.concatenate([[min_diff + 1], np.diff(onsets)])
onsets = onsets[diffs > min_diff]
assert len(onsets) == len(events)
diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']
print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'
% (np.mean(diffs), np.std(diffs)))
events[:, 0] = onsets
del sound_data, diffs
Explanation: The event timing is adjusted by comparing the trigger times with the detected
sound onsets on channel UADC001-4408.
End of explanation
raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']
Explanation: We mark a set of bad channels that seem noisier than others. This can also
be done interactively with raw.plot by clicking the channel name
(or the line). The marked channels are added as bad when the browser window
is closed.
End of explanation
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=['meg', 'eog'],
baseline=(None, 0), reject=reject, preload=False,
proj=True)
Explanation: The epochs (trials) are created for MEG channels. First we find the picks
for MEG and EOG channels. Then the epochs are constructed using these picks.
The epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier with
saccades) you can use keyword reject_by_annotation=False.
End of explanation
epochs.drop_bad()
epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],
epochs['standard'][182:222]])
epochs_standard.load_data() # Resampling to save memory.
epochs_standard.resample(600, npad='auto')
epochs_deviant = epochs['deviant'].load_data()
epochs_deviant.resample(600, npad='auto')
del epochs
Explanation: We only use the first 40 good epochs from each run. Since we first drop the bad
epochs, the indices of the epochs are no longer the same as in the original
epochs collection. Investigation of the event timings reveals that the first
epoch from the second run corresponds to index 182.
End of explanation
evoked_std = epochs_standard.average()
evoked_dev = epochs_deviant.average()
del epochs_standard, epochs_deviant
Explanation: The averages for each conditions are computed.
End of explanation
for evoked in (evoked_std, evoked_dev):
evoked.filter(l_freq=None, h_freq=40., fir_design='firwin')
Explanation: A typical preprocessing step is the removal of the power line artifact (50 Hz or
60 Hz). Here we lowpass filter the data at 40 Hz, which removes all
line artifacts (and high-frequency information). Normally this would be done
to the raw data (with :func:mne.io.Raw.filter), but to reduce the memory
consumption of this tutorial, we do it at the evoked stage. (At the raw stage,
you could alternatively notch filter with :func:mne.io.Raw.notch_filter.)
End of explanation
evoked_std.plot(window_title='Standard', gfp=True, time_unit='s')
evoked_dev.plot(window_title='Deviant', gfp=True, time_unit='s')
Explanation: Here we plot the ERF of the standard and deviant conditions. In both conditions
we can see the P50 and N100 responses. The mismatch negativity is visible
only in the deviant condition, around 100-200 ms. P200 is also visible around
170 ms in both conditions, but much stronger in the standard condition. P300
is visible in the deviant condition only (decision making in preparation of the
button press). You can view the topographies over a certain time span by
painting an area while clicking and holding the left mouse button.
End of explanation
times = np.arange(0.05, 0.301, 0.025)
evoked_std.plot_topomap(times=times, title='Standard', time_unit='s')
evoked_dev.plot_topomap(times=times, title='Deviant', time_unit='s')
Explanation: Show activations as topography figures.
End of explanation
evoked_difference = combine_evoked([evoked_dev, evoked_std], weights=[1, -1])
evoked_difference.plot(window_title='Difference', gfp=True, time_unit='s')
Explanation: We can see the MMN effect more clearly by looking at the difference between
the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
P300 are emphasised.
End of explanation
reject = dict(mag=4e-12)
cov = mne.compute_raw_covariance(raw_erm, reject=reject)
cov.plot(raw_erm.info)
del raw_erm
Explanation: Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
End of explanation
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
trans = mne.read_trans(trans_fname)
Explanation: The transformation is read from a file:
End of explanation
if use_precomputed:
fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-meg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
else:
src = mne.setup_source_space(subject, spacing='ico4',
subjects_dir=subjects_dir, overwrite=True)
model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,
bem=bem)
inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)
snr = 3.0
lambda2 = 1.0 / snr ** 2
del fwd
Explanation: To save time and memory, the forward solution is read from a file. Set
use_precomputed=False in the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contains MEG channels, we
only need the inner skull surface for making the forward solution. For more
information: CHDBBCEJ, :func:mne.setup_source_space,
bem-model, :func:mne.bem.make_watershed_bem.
End of explanation
stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_standard, brain
Explanation: The sources are computed using the dSPM method and plotted on an inflated brain
surface. For interactive controls over the image, use keyword
time_viewer=True.
Standard condition.
End of explanation
stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')
brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_deviant, brain
Explanation: Deviant condition.
End of explanation
stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')
brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.15, time_unit='s')
Explanation: Difference.
End of explanation |
497 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <a href="https
Step2: adjust_brightness
Shifts the brightness of an RGB image by a given amount
Step3: adjust_contrast
Adjusts the contrast of an RGB image by a given multiplicative amount.
Step4: adjust_gamma
Adjusts the gamma of an RGB image
Step5: adjust_hue
Adjust the hue of an RGB image by a given multiplicative amount
Step6: adjust_saturation
Adjusts the saturation of an RGB image by a given multiplicative amount
Step7: flip_left_right
Flips an image along the horizontal axis.
Assumes that the image is either ...HWC or ...CHW and flips the W axis
Step8: flip_up_down
Flips an image along the vertical axis.
Assumes that the image is either ...HWC or ...CHW and flips the H axis
Step9: gaussian_blur
Step10: random_brightness
adjust_brightness(...) with random delta in [-max_delta, max_delta]
Step11: random_contrast
adjust_contrast(...) with random factor in [lower, upper).
Step12: random_crop
Crop images randomly to specified sizes.
Given an input image, it crops the image to the specified crop_sizes. If the crop_sizes are smaller than the image's size, the offset for cropping is chosen at random
Step13: random_flip_left_right
50% chance of flip_left_right(...) otherwise returns image unchanged.
Step14: random_flip_up_down
50% chance of flip_up_down(...) otherwise returns image unchanged.
Step15: random_hue
adjust_hue(...) with random delta in [-max_delta, max_delta).
Step16: random_saturation
adjust_saturation(...) with random factor in [lower, upper)
Step17: rot90
Rotate an image counter-clockwise by 90 degrees.
Assumes that the image is either ...HWC or ...CHW
Step18: solarize
Applies solarization to an image.
All values above a given threshold will be inverted | Python Code:
%%capture
!pip install dm-pix
!git clone https://github.com/deepmind/dm_pix.git
import dm_pix as pix
import jax.numpy as jnp
import numpy as np
import PIL.Image as pil
from jax import random
IMAGE_PATH = '/content/dm_pix/examples/assets/jax_logo.jpg'
# Helper functions to read images and display them
def get_image(img_path) -> jnp.ndarray:
return jnp.array(pil.open(img_path), dtype=jnp.float32) / 255.
def imshow(image: jnp.ndarray) -> None:
Shows the input image using PIL/Pillow backend.
image = pil.fromarray(np.asarray(image * 255.).astype(np.uint8), "RGB")
display(image)
Explanation: <a href="https://colab.research.google.com/github/SupreethRao99/dm_pix/blob/master/examples/image_augmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
PIX
PIX is an image processing library in JAX, for JAX.
overview
JAX is a library resulting from the union of Autograd and XLA for high-performance machine learning research. It provides NumPy, SciPy, automatic differentiation and first class GPU/TPU support.
PIX is a library built on top of JAX with the goal of providing image processing functions and tools to JAX, in a way that they can be optimized and parallelised through jax.jit(), jax.vmap() and jax.pmap().
End of explanation
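As a quick illustration of that claim (my addition, not part of the original notebook), any PIX op can be wrapped in jax.jit or jax.vmap; the extra import jax is assumed here.
import jax
fast_brightness = jax.jit(pix.adjust_brightness)      # compiled version of the op
batched_brightness = jax.vmap(pix.adjust_brightness,  # same op over a stack of images,
                              in_axes=(0, None))      # sharing a single delta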
image = get_image(IMAGE_PATH)
delta = 0.42 #@param {type: "slider", min: 0, max: 1}
new_image = pix.adjust_brightness(
image=image,
delta=delta)
imshow(new_image)
Explanation: adjust_brightness
Shifts the brightness of an RGB image by a given amount
End of explanation
image = get_image(IMAGE_PATH)
factor = 0.42 #@param {type: "slider", min: 0, max: 1}
new_image = pix.adjust_contrast(
image=image,
factor=factor)
imshow(new_image)
Explanation: adjust_contrast
Adjusts the contrast of an RGB image by a given multiplicative amount.
End of explanation
image = get_image(IMAGE_PATH)
gamma = 3 #@param {type: "slider", min: 0, max: 10}
gain = 4 #@param{type: "slider",min:0, max:10}
new_image = pix.adjust_gamma(
image=image,
gain=gain,
gamma=gamma)
imshow(new_image)
Explanation: adjust_gamma
Adjusts the gamma of an RGB image
End of explanation
image = get_image(IMAGE_PATH)
delta = 0.7 #@param {type: "slider", min: 0, max: 1}
new_image = pix.adjust_hue(
image=image,
delta=delta)
imshow(new_image)
Explanation: adjust_hue
Adjust the hue of an RGB image by a given multiplicative amount
End of explanation
image = get_image(IMAGE_PATH)
factor = 0.42 #@param {type: "slider", min: 0, max: 1}
new_image = pix.adjust_saturation(
image=image,
factor=factor)
imshow(new_image)
Explanation: adjust_saturation
Adjusts the saturation of an RGB image by a given multiplicative amount
End of explanation
image = get_image(IMAGE_PATH)
new_image = pix.flip_left_right(
image=image)
imshow(new_image)
Explanation: flip_left_right
Flips an image along the horizontal axis.
Assumes that the image is either ...HWC or ...CHW and flips the W axis
End of explanation
image = get_image(IMAGE_PATH)
new_image = pix.flip_up_down(
image=image)
imshow(new_image)
Explanation: flip_up_down
Flips an image along the vertical axis.
Assumes that the image is either ...HWC or ...CHW and flips the H axis
End of explanation
image = get_image(IMAGE_PATH)
sigma = 5 #@param {type: "slider", min: 0, max: 10}
kernel_size = 5 #@param{type: "slider",min:0, max:10}
new_image = pix.gaussian_blur(
image=image,
sigma=sigma,
kernel_size=kernel_size)
imshow(new_image)
Explanation: gaussian_blur
End of explanation
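A brief added note: gaussian_blur convolves each channel with a Gaussian kernel, so smaller sigma and kernel_size values give a milder blur, e.g.
imshow(pix.gaussian_blur(image=image, sigma=1, kernel_size=3))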
key = random.PRNGKey(0) # change to see different brightness
image = get_image(IMAGE_PATH)
delta = 0.9
new_image = pix.random_brightness(
key=key,
image=image,
max_delta=delta)
imshow(new_image)
Explanation: random_brightness
adjust_brightness(...) with random delta in [-max_delta, max_delta]
End of explanation
key = random.PRNGKey(0) # change to see different contrast
image = get_image(IMAGE_PATH)
new_image = pix.random_contrast(
key=key,
image=image,
lower=0,
upper=5)
imshow(new_image)
Explanation: random_contrast
adjust_contrast(...) with random factor in [lower, upper).
End of explanation
key = random.PRNGKey(5) #change to see different crop
image = get_image(IMAGE_PATH)
new_image = pix.random_crop(
key=key,
image=image,
crop_sizes=(128,128,3))
imshow(new_image)
Explanation: random_crop
Crop images randomly to specified sizes.
Given an input image, it crops the image to the specified crop_sizes. If the crop_sizes are smaller than the image's size, the offset for cropping is chosen at random
End of explanation
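A possible extension (an assumption on my part, not shown in the notebook): cropping a whole batch with independent randomness by splitting the key and using jax.vmap.
import jax
keys = random.split(random.PRNGKey(0), 4)
batch = jnp.stack([image] * 4)  # toy batch built from the same image
crops = jax.vmap(lambda k, im: pix.random_crop(key=k, image=im, crop_sizes=(64, 64, 3)))(keys, batch)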
key = random.PRNGKey(1) #change to see different views
image = get_image(IMAGE_PATH)
new_image = pix.random_flip_left_right(
key=key,
image=image
)
imshow(new_image)
Explanation: random_flip_left_right
50% chance of flip_left_right(...) otherwise returns image unchanged.
End of explanation
key = random.PRNGKey(0) #change to see different views
image = get_image(IMAGE_PATH)
new_image = pix.random_flip_up_down(
key=key,
image=image
)
imshow(new_image)
Explanation: random_flip_up_down
50% chance of flip_up_down(...) otherwise returns image unchanged.
End of explanation
key = random.PRNGKey(0) #change to see different views
image = get_image(IMAGE_PATH)
delta = 0.7
new_image = pix.random_hue(
key=key,
image=image,
max_delta=delta)
imshow(new_image)
Explanation: random_hue
adjust_hue(...) with random delta in [-max_delta, max_delta).
End of explanation
key = random.PRNGKey(0) # change to see different saturation
image = get_image(IMAGE_PATH)
new_image = pix.random_saturation(
key=key,
image=image,
lower=0,
upper=5)
imshow(new_image)
Explanation: random_saturation
adjust_saturation(...) with random factor in [lower, upper)
End of explanation
image = get_image(IMAGE_PATH)
new_image = pix.rot90(
k=1,#number of times the rotation is applied
image=image)
imshow(new_image)
Explanation: rot90
Rotate an image counter-clockwise by 90 degrees.
Assumes that the image is either ...HWC or ...CHW
End of explanation
image = get_image(IMAGE_PATH)
new_image = pix.solarize(
threshold=0.6,
image=image)
imshow(new_image)
Explanation: solarize
Applies solarization to an image.
All values above a given threshold will be inverted
End of explanation |
498 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Romania Kendo Stats
25 years of Kendo History in Romania, visualized
Data cleaning workbook
Created by Dénes Csala | 2019 | MIT License
For any improvement suggestions and spotted processing mistakes drop me a message on Facebook.
If you would like to have your country/club data visualized in a similar manner, or any other data visualization and analytics consultancy inquiries contact me at [email protected]
This workbook guides you through the data cleaning stage for the Romania Kendo Stats visualization. This is a multi-stage process: you will need access to the raw data (liaise with the Secretary or another member in charge of data at the Romanian Kendo Association), and Python and Excel installed. Any Python packages will also be installed along the way, but we recommend using the Anaconda distribution of Python 3. If you would like to edit the visualization part, you will also need PowerBI Desktop.
The general structure of the repository is the following
Step1: First, download members data (Evidenta membrilor.xlsx) from the official data source, and create a macro-enabled Excel file from the Google Sheet. Then write a simple macro to extract the cell comments from the Club column in order to get info about club Transfers. Follow the instructions here. Save the new file as Evidenta membrilor.xlsm in the /data/manual folder. Use the members_loader module to process this file.
Step2: Members are loaded but a bit messy.
Step3: 2. Load and clean matches
Matches are loaded from Excel sheets in the /data folder, organized by year and competition. We are always looking for match list data; the cleaner and the more concentrated, the better. While this is not possible all the time, we have several demo import routines. These are stored in the matches_loader.py function library. Not all matches have textual data available; those will need to be processed through OCR first. Raw Excel data that can be processed right away can be found in the /data/raw folder, while the OCR-processed files are in /data/ocr. We use a separate workbook, ocr.ipynb, to walk you through the OCR process.
Step4: 2.1. Load matches
Step5: 2.2. Standardize names
Names in name_exceptions get replaced with their right hand side values before processing.
Step6: Names in name_equals get replaced with their right hand side values after processing.
Step7: Names in name_doubles handle situation where the default name abbreviation might lead to duplicates.
Step8: Normalize Romanian characters and define a name cleaner function to get Name IDs. Name IDs are unique competitor names in the form of
Step9: Names equalling any string in redflags_names get thrown out of the final dataset.
Names containing any string in redflags_names2 get thrown out of the final dataset.
Step10: Check if the name is not in the redflags. Ignore these entries.
Step11: Process all names for standardization. Create 3 variables
Step12: Link processed names to the names in members. The name_linker dictionary contains Name IDs (short names) as keys and sets of long names as values. Ideally, each set should contain only one element, so that the mapping is unique.
Step13: Do the opposite mapping in names_abbr
Step14: Save club mappings by short name, by year.
Step15: Add short names to members_clean.
Step16: Some names appear only in the short form, so we need to add them manually to the long list. We parse through all forms in which the name appears and choose the longest. We call this the inferred name.
Step17: Infer duplicates
Step18: 2.3. Infer clubs
Infer clubs from name if club is part of name in the competition. Club names in redflags_clubs get ignored. Clubs in club_equals get replaced after processing. The convention is to have 3 letter all-caps club names for Romanian clubs, 3 letter club names followed by a / and a two letter country code for foreign clubs.
Step19: Attach clubs to the all_players entries that have a club in their competition name data but whose club we don't already know from members.
Step20: Normalize club names and long names.
Step21: If the club is still not found, fill the gaps between years. Forward fill first, then backward fill if necessary.
Step22: We have extracted what was possible from the data. Now we do a save of short name to long name and club mappings (by year). We then edit this file manually, if necessary.
2.4. Manual club and long name overrides
Step23: Extend with manual data
Step24: Update and overwrite with club existence data
3. Update members
Extend members data with data mined from matches
Extend members with unregistered members. These are probably inactive now, or from abroad, and appear only in the year(s) in which they competed. We only register them as known to be active in that year. This is in contrast with the Inactive members from the registry, for whom we know when they went inactive.
Step25: Extend 0 dan down to starting year.
Step26: Update members
Step27: Prettify club names, and IDs
Step28: Fix unknown genders
Step29: Update members with manual gender data.
Step30: Save to /data/export.
Step31: 4. Update matches
Update and save cleaned match data
Step32: Clean up and save matches for display | Python Code:
import pandas as pd, numpy as np, json
import members_loader, matches_loader, clubs_loader, point_utils, save_utils
Explanation: Romania Kendo Stats
25 years of Kendo History in Romania, visualized
Data cleaning workbook
Created by Dénes Csala | 2019 | MIT License
For any improvement suggestions and spotted processing mistakes drop me a message on Facebook.
If you would like to have your country/club data visualized in a similar manner, or any other data visualization and analytics consultancy inquiries contact me at [email protected]
This workbook guides you through the data cleaning stage for the Romania Kendo Stats visualization. This is a multi-stage process: you will need access to the raw data (liaise with the Secretary or another member in charge of data at the Romanian Kendo Association), and Python and Excel installed. Any Python packages will also be installed along the way, but we recommend using the Anaconda distribution of Python 3. If you would like to edit the visualization part, you will also need PowerBI Desktop.
The general structure of the repository is the following:
- /data
- /raw: this is where you place the downloaded data from the official data source, sorted by year and competition; only keep the files that contain relevant match data
- /ocr: this is where the data gets saved after an OCR has been performed - this is necessary for some older files in image format
- /manual: this is where manually extracted matches from old image files get placed - they should follow the 2018 CN format, i.e. all matches in one sheet
- /export: this is where we save the data formatted for loading into the viz
- /clean: this is where all the processed, cleaned data ends up - they should follow the 2018 CN format, i.e. all matches in one sheet
- /scripts: this is the main code repository for all data processing scripts
- /viz: this is where the visualization files get saved - they are created using PowerBI and load data from /data/clean
1. Load and clean members
This section reads and cleans the RKA members list. Save it as the baseline.
End of explanation
members=members_loader.get_members('../data/manual/Evidenta membrilor.xlsm')
Explanation: First, download members data (Evidenta membrilor.xlsx) from the official data source, and create a macro-enabled Excel file from the Google Sheet. Then write a simple macro to extract the cell comments from the Club column in order to get info about club Transfers. Follow the instructions here. Save the new file as Evidenta membrilor.xlsm in the /data/manual folder. Use the members_loader module to process this file.
End of explanation
members.head(2)
members_clean=members_loader.cleaner(members).reset_index(drop=False)
members_clean.to_csv('../data/clean/members.csv')
Explanation: Members are loaded but a bit messy.
End of explanation
matches={i:{} for i in range(1993,2019)}
competitions={
2018:['CR','CN','SL'],
2017:['CR','CN','SL'],
2016:['CR','CN','SL'],
2015:['CR','CN','SL'],
2014:['CR','CN','SL'],
2013:['CR','CN','SL'],
2012:['CR','CN'],
2011:['CR','CN'],
2010:['CR','CN'],
2009:['CR','CN'],
1998:['CR'],
1997:['CR'],
1993:['CR']
}
Explanation: 2. Load and clean matches
Matches are loaded from Excel sheets in the /data folder, organized by year and competition. We are always looking for match list data; the cleaner and the more concentrated, the better. While this is not possible all the time, we have several demo import routines. These are stored in the matches_loader.py function library. Not all matches have textual data available; those will need to be processed through OCR first. Raw Excel data that can be processed right away can be found in the /data/raw folder, while the OCR-processed files are in /data/ocr. We use a separate workbook, ocr.ipynb, to walk you through the OCR process.
End of explanation
for year in competitions:
for competition in competitions[year]:
matches[year][competition]=matches_loader.get_matches(year,competition)
Explanation: 2.1. Load matches
End of explanation
name_exceptions={'Atanasovski':'Atanasovski A. (MAC)',
'Dobrovicescu (SON)':'Dobrovicescu T. (SON)',
'Ianăș':'Ianăș F.',
'Crăciun (Tamang) Sujata':'Crăciun S.',
'Abe (Carțiș) Emilia':'Abe E.',
'Dinu (Ioniță) Claudia-Andreea':'Dinu A.',
'Mureșan (Egri) Melinda':'Mureșan M.',
'Grădișteanu (Gușu) Rebeca':'Grădișteanu R.',
'Józsa (Gușu) Rodiana':'Józsa R.',
'Arabadjiyski': 'Arabadjiyski A.',
'Dudaș Francisc Andrei':'Dudaș F.',
'Dudaș Francisc':'Dudaș F.',
'Mandia':'Mandia F.',
'Stanev':'Stanev A.',
'Mochalov':'Mochalov O.',
'Sozzi':'Sozzi A.',
'Crăciunel':'Crăciunel I.',
'Craciunel':'Crăciunel I.',
'Sagaev':'Sagaev L.',
'Buzás':'Búzás C.',
'Csala':'Csala T.',
'Dimitrov':'Dimitrov M.',
'Józsa':'Józsa L.',
'Creangă':'Creangă A.',
'Duțescu':'Duțescu M.',
'Furtună':'Furtună G.',
'Gârbea':'Gârbea I.',
'Stupu':'Stupu I.',
'Mahika-Voiconi':'Mahika-Voiconi S.',
'Mahika':'Mahika-Voiconi S.',
'Stanciu':'Stanciu F.',
'Vrânceanu':'Vrânceanu R.',
'Wolfs':'Wolfs J.',
'Ducarme':'Ducarme A.',
'Sbârcea':'Sbârcea B.',
'Mocian':'Mocian A.',
'Hatvani':'Hatvani L.',
'Dusan':'Dusan N.',
'Borota':'Borota V.',
'Tsushima':'Tsushima K.',
'Tráser':'Tráser T.',
'Colțea':'Colțea A.',
'Brîcov':'Brîcov A.',
'Yamamoto':'Yamamoto M.',
'Crăciun':'Crăciun D.'}
Explanation: 2.2. Standardize names
Names in name_exceptions get replaced with their right hand side values before processing.
End of explanation
name_equals={'Chirea M.':'Chirea A.',
'Ghinet C.':'Ghineț C.',
'Anghelescu A.':'Anghelescu M.',
'Domnița M.':'Domniță M.',
'Bejgu N.':'Beygu N.',
'Canceu A.':'Canceu Ad.',
'Dinu C.':'Dinu A.',
'Grapa D.':'Grapă D.',
'Cristea C.':'Cristea Că.',
'Cismas O.':'Cismaș O.',
'Garbea I.':'Gârbea I.',
'Vitali O.':'Oncea V.',
'Ah-hu W.':'Ah-hu S.',
'Horvát M.':'Horváth M.',
'Ionita A.':'Ioniță A.',
'Medvedschi I.':'Medvețchi I.',
'Mahika S.':'Mahika-Voiconi S.',
'Mate L.':'Máté L.',
'Hentea L.':'Hentea A.',
'Stupu I.':'Stupu A.',
'Ah-Hu S.':'Ah-hu S.',
'Alexa I.':'Alexa A.',
'Albert V.':'Albert J.',
'Angelescu M.':'Angelescu M.',
'Apostu D.':'Apostu T.',
'Brâcov A.':'Brîcov A.',
'Zaporojan R.':'Zaporojan O.',
'Vasile C.':'Vasile I.',
'Dițu I.':'Dițu A.',
'Tudor-Duicu C.':'Tudor D.',
'Sandu M.':'Sandu Mar.',
'Radulescu A.':'Rădulescu An.',
'Péter C.':'Péter Cso.',
'Movatz E.':'Movatz V.',
'Molinger B.':'Molinger P.',
'Mitelea C.':'Mițelea C.',
'Macavei I.':'Macaveiu A.',
'Macavei A.' : 'Macaveiu A.',
'Macaveiu I.' : 'Macaveiu A.',
'Luca T.':'Luca Tr.',
'Leca L.':'Leca F.',
'Gutu E.':'Guțu E.',
'Angelescu A.':'Angelescu M.',
'Mehelean L.':'Mahalean L.',
'Catoriu D.':'Cantoriu D.',
'Călina A.':'Călina C.',
'Ștefu I.' : 'Ștefu L.',
'Țarălungă A.' : 'Țarălungă D.',
'Buzás C.':'Búzás C.',
'Korenshi E.':'Korenschi E.',
'Pleșa R.':'Pleșea R.',
'Galos A.':'Galoș A.',
'Győrfi G.':'Györfi G.',
'Győrfi S.':'Györfi S.',
'Ghineț G.':'Ghineț C.',
'Hostina E.':'Hoștină E.',
'Hostină E.':'Hoștină E.',
'Ianăs F.':'Ianăș F.',
'Ianas F.':'Ianăș F.',
'Tamang S.':'Crăciun S.',
'Taralunga D.':'Țarălungă D.',
'Lacatus M.':'Lăcătuș M.',
'Máthé L.':'Máté L.',
'Burinaru A.':'Burinaru Al.',
'Nastase M.':'Năstase E.',
'Oprisan A.':'Oprișan A.',
'Pârlea A.':'Pîrlea A.',
'Parlea A.':'Pîrlea A.',
'Sabau D.':'Sabău D.',
'Spriu C.':'Spiru C.',
'Crețiu T.':'Crețiu-Codreanu T.',
'Crețiu M.':'Crețiu-Codreanu M.',
'Bíró S.':'Biró S.',
'Oprișan B.':'Oprișan A.',
'Székely J.':'Székely P.',
'Bărbulescu M.' : 'Bărbulescu E.',
'Bejenariu G.' : 'Bejenaru G.',
'Bojan V.' : 'Bojan Vl.',
'Moise A.' : 'Moise Ad.',
'Măgirdicean R.' : 'Magirdicean Ră.',
'Pall D.':'Páll D.',
'Stănculascu C.':'Stănculescu C.',
'Vrânceanu M.': 'Vrânceanu L.',
'Georgescu A.':'Georgescu An.',
'Wasicek V.':'Wasicheck W.',
'Wasicsec W.':'Wasicheck W.',
'Wasichek W.' : 'Wasicheck W.',
'Wasicsek W.':'Wasicheck W.',
'Zolfoghari A.':'Zolfaghari A.'}
Explanation: Names in name_equals get replaced with their right hand side values after processing.
End of explanation
name_doubles={
'Cristea Cristina':'Cristea Cr.',
'Cristea Călin-Ștefan':'Cristea Că.',
'Sandu Marius-Cristian':'Sandu Mar.',
'Sandu Matei-Serban':'Sandu Mat.',
'Sandu Matei':'Sandu Mat.',
'Georgescu Andrei':'Georgescu An.',
'Georgescu Alexandra':'Georgescu Al.',
'Péter Csongor':'Péter Cso.',
'Péter Csanád':'Péter Csa.',
'Luca Mihnea':'Luca Mihn.',
'Luca Mihai-Cătălin':'Luca Miha.',
'Luca':'Luca Miha.',
'Luca M':'Luca Miha.',
'Luca M.':'Luca Miha.',
'Luca Mihai':'Luca Miha.',
'Luca Traian-Dan':'Luca Tr.',
'Luca Tudor':'Luca Tu.',
'Canceu Anamaria':'Canceu An.',
'Canceu Adriana-Maria':'Canceu Ad.',
'Cioată Daniel-Mihai':'Cioată M.',
'Cioată Dragoș':'Cioată D.',
'Burinaru Alexandra':'Burinaru Al.',
'Burinaru Andreea':'Burinaru An.',
'Kovács Andrei':'Kovács An.',
'Kovács Alexandru':'Kovács Al.',
'Cristea Adrian':'Cristea Ad.',
'Cristea Andrei':'Cristea An.',
'Cristea A.':'Cristea An.',
'Ungureanu Nicolae Marius':'Ungureanu M.',
'Ungureanu Nicoleta':'Ungureanu N.',
'Vincze Vlad':'Vincze Vl.',
'Vincze Valentina':'Vincze Va.',
'Bojan Vladimir':'Bojan Vl.',
'Bojan Voicu':'Bojan Vo.',
'Crețiu Codreanu Matei':'Crețiu-Codreanu M.',
'Crețiu Codreanu Tudor':'Crețiu-Codreanu T.',
'Pop Mugurel Voicu':'Pop-Mugurel V.',
'Pop Mihai':'Pop M.',
'Moise Alexandru':'Moise Al.',
'Moise Adrian':'Moise Ad.',
'Rădulescu Andrei-Savin':'Rădulescu An.',
'Rădulescu Adrian':'Rădulescu Ad.',
'Magirdicean Romeo':'Magirdicean Ro.',
'Magirdicean Răzvan Ionuț':'Magirdicean Ră.'}
Explanation: Names in name_doubles handle situation where the default name abbreviation might lead to duplicates.
End of explanation
letter_norm={'ţ':'ț','ş':'ș','Ş':'Ș'}
def name_cleaner(name):
name=str(name)
if name in name_doubles:
return name_doubles[name]
else:
for letter in letter_norm:
name=name.replace(letter,letter_norm[letter])
if name in name_exceptions:
name=name_exceptions[name]
        nc=name.replace('  ',' ').split('(')
rname=nc[0].strip()
rnames=rname.split(' ')
sname=rnames[0]+' '+rnames[1][0]+'.'
if sname in name_equals:
sname=name_equals[sname]
if sname in name_doubles:
print(name,sname)
return sname
Explanation: Normalize Romanian characters and define a name cleaner function to get Name IDs. Name IDs are unique competitor names in the form of: Surname, First letter of Name. If the First Letter of Name leads to a non-unique ID, the second letter is taken, and so forth, until a unique ID is found. It gets constructed as follows:
1. If the name is in doubles, return the solution directly
2. Normalize characters
3. If the name is in exceptions, clean it
4. Replace any double spaces, then split at ( (to split away the club, if embedded in the name)
5. Split into Surname and Name, store in rnames
6. Store Surname N. in sname
7. If sname is in equals, clean it
8. Return sname
End of explanation
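Two quick illustrations of the convention (the first name is hypothetical, the second resolves through name_exceptions above):
print(name_cleaner('Popescu Ion (IKD)'))   # -> 'Popescu I.'
print(name_cleaner('Csala'))               # -> 'Csala T.'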
redflags_names=['-','—','—',np.nan,'. ()','— ','- -.','- -. (-)','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','R','S',
'Kashi','Sankon','București','Victorii:','Sakura','Taiken','Ikada','Sonkei','CRK','Museido',
'Ichimon','Bushi Tokukai 1','Competitori – Shiai-sha','Echipa - roşu','Numele şi prenumele',
'Victorii:','Victorii: 0','Victorii: 1','Victorii: 2','Victorii: 3','Victorii: 4',
'Victorii: 5','?','Kyobukan','2/5','2/6','3/8','Finala','Kyobukan (0/0/0)','―',
'(clasament final după meci de baraj)','CRK (Bucuresti)','Kaybukan','Isshin (Cluj)',
'Ikada (Bucureşti)','Kyobukan (Braşov)','Puncte:','KASHI','Budoshin','Isshin',
'— (—)','4. B.','4. Baraj: Stupu M - Hostina','4. Baraj: Moise KM - Korenschi M',
'Bushi Tokukai (2/8/17)','CRK 2 (1/6/14)', 'CRK 2','CRK 1','Loc I.:','Loc',
'Bushi Tokukai 2 (M Ciuc)','Echipa suport']
redflags_names2=['Bushi Tokukai','Eliminatoriu','finala','Finala','Fianala','Ikada','Ichimon','Pool',
'Locul ','Lotul ','Loc ','Grupa ','Isshin','Meciul ','Victorii:','L1','1','2','3','4','5','6','7','8','9','0']
Explanation: Names equalling any string in redflags_names get thrown out of the final dataset.
Names containing any string in redflags_names2 get thrown out of the final dataset.
End of explanation
def name_ok(name):
name=str(name)
if name=='nan': return False
if name not in redflags_names:
if np.array([i not in name for i in redflags_names2]).all():
return True
return False
Explanation: Check if the name is not in the redflags. Ignore these entries.
End of explanation
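A few illustrative checks of the filter (the last name is hypothetical):
print(name_ok('Kashi'))        # False - listed in redflags_names
print(name_ok('Finala A'))     # False - contains a redflags_names2 substring
print(name_ok('Popescu Ion'))  # True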
all_players={}
all_players_r={}
all_players_unsorted=set()
for year in matches:
for competition in matches[year]:
for match in matches[year][competition]:
for color in ['aka','shiro']:
name=match[color]['name']
all_players_unsorted.add(name)
if name_ok(name):
name=name_cleaner(name)
rname=match[color]['name']
if rname not in all_players_r:all_players_r[rname]=name
if name not in all_players: all_players[name]={}
if year not in all_players[name]:all_players[name][year]={'names':set()}
all_players[name][year]['names'].add(rname)
if 'shinpan' in match:
for color in ['fukushin1','shushin','fukushin2']:
aka=match['aka']['name']
shiro=match['shiro']['name']
if (name_ok(aka)) and\
(name_ok(shiro)) and\
(name_cleaner(aka) in all_players) and\
(name_cleaner(shiro) in all_players):
rname=match['shinpan'][color]
all_players_unsorted.add(rname)
if name_ok(rname):
name=name_cleaner(rname)
if rname not in all_players_r:all_players_r[rname]=name
if name not in all_players: all_players[name]={}
if year not in all_players[name]:all_players[name][year]={'names':set()}
all_players[name][year]['names'].add(rname)
Explanation: Process all names for standardization. Create 3 variables:
1. all_players: keyed by cleaned Name ID, storing the raw name forms seen in each year
2. all_players_r: reverse relationship, from raw name form to cleaned Name ID
3. all_players_unsorted: unique set of all names processed
Process both competitor and shinpan names.
End of explanation
name_linker={}
for i in members_clean.index:
name=members_clean.loc[i]['name']
try:
cname=name_cleaner(name)
except:
print(name)
if cname not in name_linker:name_linker[cname]=set()
name_linker[cname].add(name)
Explanation: Link processed names to the names in members. The name_linker dictionary contains Name IDs (short names) as keys and sets of long names as values. Ideally, each set should contain only one element, so that the mapping is unique.
End of explanation
names_abbr={}
for name in name_linker:
if len(name_linker[name])>1:
#only for dev to create exceptions for duplicate person names.
print(name,name_linker[name])
for i in name_linker[name]:
names_abbr[i]=name
Explanation: Do the opposite mapping in names_abbr: long->short. Create exceptions for duplicate names.
End of explanation
names_abbr_list=[]
name_abbr2long={}
name_abbr2club={}
for i in members_clean.index:
name=members_clean.loc[i]['name']
club=members_clean.loc[i]['club']
year=members_clean.loc[i]['year']
names_abbr_list.append(names_abbr[name])
name_abbr2long[names_abbr[name]]=name
if names_abbr[name] not in name_abbr2club:name_abbr2club[names_abbr[name]]={}
if year not in name_abbr2club[names_abbr[name]]:
name_abbr2club[names_abbr[name]][year]=club
Explanation: Save club mappings by short name, by year.
End of explanation
members_clean['name_abbr']=names_abbr_list
Explanation: Add short names to members_clean.
End of explanation
for name in all_players:
if name not in name_abbr2long:
#infer using longest available name
names={len(j):j for i in all_players[name] for j in all_players[name][i]['names']}
if len(names)>0:
inferred_name=names[max(names.keys())]
if '(' in inferred_name:
inferred_name=inferred_name[:inferred_name.find('(')-1]
name_abbr2long[name]=inferred_name
Explanation: Some names appear only in the short form, so we need to add them manually to the long list. We parse through all forms in which the name appears and choose the longest. We call this the inferred name.
End of explanation
def levenshteinDistance(s1, s2):
if len(s1) > len(s2):
s1, s2 = s2, s1
distances = range(len(s1) + 1)
for i2, c2 in enumerate(s2):
distances_ = [i2+1]
for i1, c1 in enumerate(s1):
if c1 == c2:
distances_.append(distances[i1])
else:
distances_.append(1 + min((distances[i1], distances[i1 + 1], distances_[-1])))
distances = distances_
return distances[-1]
nkeys=np.sort(list(name_abbr2long.keys()))
for ii in range(len(name_abbr2long)):
i=nkeys[ii]
for jj in range(ii):
j=nkeys[jj]
if levenshteinDistance(name_abbr2long[i],name_abbr2long[j])<4:
print(name_abbr2long[i],':',name_abbr2long[j],' - ',i,':',j)
nkeys=np.sort(list(name_abbr2long.keys()))
for ii in range(len(name_abbr2long)):
i=nkeys[ii]
for jj in range(ii):
j=nkeys[jj]
if levenshteinDistance(i,j)<3:
print(i,':',j,' - ',name_abbr2long[i],':',name_abbr2long[j])
Explanation: Infer duplicates
End of explanation
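A quick sanity check of the distance function used above, with hypothetical names:
print(levenshteinDistance('Ionescu A.', 'Ionascu A.'))  # 1 - close enough to be flagged by the checks above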
redflags_clubs=['','N/A','RO1','RO2']
club_equals=clubs_loader.club_equals
Explanation: 2.3. Infer clubs
Infer clubs from name if club is part of name in the competition. Club names in redflags_clubs get ignored. Clubs in club_equals get replaced after processing. The convention is to have 3 letter all-caps club names for Romanian clubs, 3 letter club names followed by a / and a two letter country code for foreign clubs.
End of explanation
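To make the convention concrete, a purely hypothetical mapping (these pairs are illustrative and not taken from clubs_loader.club_equals):
# club_equals entries would look like {'Ikada Bucuresti': 'IKA', 'Mumeishi London': 'MUM/UK'}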
for name in all_players:
#if we dont already know the club for this year from the members register
if name not in name_abbr2club:
for year in all_players[name]:
for name_form in all_players[name][year]['names']:
if '(' in name_form:
club=name_form.split('(')[1].strip()[:-1]
if club not in redflags_clubs:
if name not in name_abbr2club:name_abbr2club[name]={}
name_abbr2club[name][year]=club
else:
for year in all_players[name]:
#else if no club info for particular year
if year not in name_abbr2club[name]:
for name_form in all_players[name][year]['names']:
if '(' in name_form:
club=name_form.split('(')[1].strip()[:-1]
if club not in redflags_clubs:
name_abbr2club[name][year]=club
Explanation: Attach clubs to the all_players entries that have a club in their competition name data but whose club we don't already know from members.
End of explanation
for name in name_abbr2club:
for year in name_abbr2club[name]:
if name_abbr2club[name][year] in club_equals:
name_abbr2club[name][year]=club_equals[name_abbr2club[name][year]]
for name in name_abbr2long:
    name_abbr2long[name]=name_abbr2long[name].replace('  ',' ').strip()
Explanation: Normalize club names and long names.
End of explanation
for name in all_players:
if name in name_abbr2club:
years=np.sort(list(all_players[name].keys()))
minyear1=min(years)
maxyear1=max(years)
minyear2=min(name_abbr2club[name].keys())
maxyear2=min(name_abbr2club[name].keys())
if len(years)>1:
for year in range(min(minyear1,minyear2),max(maxyear1,maxyear2)+1):
if year not in name_abbr2club[name]:
#get club from previous year
for y in range(years[0],year):
if y in name_abbr2club[name]:
name_abbr2club[name][year]=str(name_abbr2club[name][y])
break
if year not in name_abbr2club[name]:
#if still not found, get club from next year
for y in np.arange(years[-1],year,-1):
if y in name_abbr2club[name]:
name_abbr2club[name][year]=str(name_abbr2club[name][y])
break
if year not in name_abbr2club[name]:
#if still not found, get first known year
if year<minyear2:
name_abbr2club[name][year]=str(name_abbr2club[name][minyear2])
else:
name_abbr2club[name][year]=str(name_abbr2club[name][maxyear2])
Explanation: If the club is still not found, fill the gaps between years. Forward fill first, then backward fill if necessary.
End of explanation
manual_data_needed=[]
for i in manual_name_needed.union(manual_club_needed):
if i not in list(manual_data_override.index):
dummy={'name':i,'long_name':'','club':''}
if i in name_abbr2club:
            dummy['club']=name_abbr2club[i][max(list(name_abbr2club[i].keys()))]
if i in manual_club_needed:
if i in name_abbr2long:
dummy['long_name']=name_abbr2long[i]
manual_data_needed.append(dummy)
df=pd.DataFrame(manual_data_needed).set_index('name')
df=pd.concat([manual_data_override,df]).drop_duplicates().sort_index()
df.to_excel('../data/manual/members_manual.xlsx')
Explanation: We have extracted what was possible from the data. Now we do a save of short name to long name and club mappings (by year). We then edit this file manually, if necessary.
2.4. Manual club and long name overrides
End of explanation
for i in df['long_name'].replace('',np.nan).dropna().index:
name_abbr2long[i]=df.loc[i]['long_name']
all_players_r[name_abbr2long[i]]=i
manual_club_needed=set()
for name in all_players:
years=np.sort(list(all_players[name].keys()))
minyear=min(years)
maxyear=max(years)
for year in range(minyear,maxyear+1):
if name not in name_abbr2club:name_abbr2club[name]={}
if year not in name_abbr2club[name]:
if name in df['club'].replace('',np.nan).dropna().index:
name_abbr2club[name][year]=df.loc[name]['club']
else:
name_abbr2club[name][year]='XXX'
Explanation: Extend with manual data
End of explanation
unregistered_members=[]
for name in all_players:
if name not in set(members_clean['name_abbr'].values):
years=np.sort(list(name_abbr2club[name].keys()))
for year in range(min(years),max(years)+1):
if year in all_players[name]:
iyear=year
else:
iyear=max(years)
club,country=clubs_loader.club_cleaner(name_abbr2club[name][year])
if country=='RO':
activ='Active'
dan=''#dan=0
else:
activ='Abroad'
dan=''
unregistered_members.append({'name':name_abbr2long[name],'name_abbr':name,
'club':club,'active':activ,'year':year,'dan':dan,'country':country,'source':'matches'})
members_clean['country']='RO'
members_clean['source']='member list'
members_updated=pd.concat([members_clean,pd.DataFrame(unregistered_members)]).reset_index(drop=True)
Explanation: Update and overwrite with club existence data
3. Update members
Extend members data with data mined from matches
Extend members with unregistered members. These are probably inactive now, or from abroad, and appear only in the year(s) in which they competed. We only register them as known to be active in that year. This is in contrast with the Inactive members from the registry, for whom we know when they went inactive.
End of explanation
members_mu_dan_extensions=[]
members_by_name=members_updated.set_index(['name_abbr'])
for year in matches:
members_by_year=members_updated.set_index(['year']).loc[year]
for competition in matches[year]:
print(year,competition)
for k in matches[year][competition]:
aka=k['aka']['name']
shiro=k['shiro']['name']
if (name_ok(aka)) and\
(name_ok(shiro)) and\
(name_cleaner(aka) in all_players) and\
(name_cleaner(shiro) in all_players):
for a in ['aka','shiro']:
for h in k[a]:
if h=='name':
name=k[a][h]
rname=all_players_r[name]
if rname in list(members_by_name.index):
if rname not in members_by_year['name_abbr'].values:
dummy=members_by_name.loc[[rname]]
minyear=min(dummy['year'])
maxyear=max(dummy['year'])
if year>maxyear:
dummy=dummy[dummy['year']==maxyear]
yeardiff=min(dummy['year'])-year
else:
dummy=dummy[dummy['year']==minyear]
yeardiff=year-max(dummy['year'])
dummy=dummy.reset_index()
dummy['year']=year
dummy['dan']=0
dummy['age']=dummy['age']+yeardiff
dummy['source']='matches, mu dan'
members_mu_dan_extensions.append(dummy)
#if only appears in competition in one year, then not in members table
else:
print(rname,year)
#fix in unregistered_members
Explanation: Extend 0 dan down to starting year.
End of explanation
members_mu_dan_extensions=pd.concat(members_mu_dan_extensions)
members_updated=pd.concat([members_updated,members_mu_dan_extensions]).reset_index(drop=True)
Explanation: Update members
End of explanation
clubs=[]
pclubs=[]
countries=[]
for i in members_updated.index:
club=members_updated.loc[i]['club']
country=members_updated.loc[i]['country']
year=members_updated.loc[i]['year']
club,country=clubs_loader.club_cleaner(club,country)
club,pclub=clubs_loader.club_year(club,country,year)
clubs.append(club)
pclubs.append(pclub)
countries.append(country)
members_updated['club']=clubs
members_updated['pretty_club']=pclubs
members_updated['country']=countries
Explanation: Prettify club names, and IDs
End of explanation
manual_mf_data_override=pd.read_excel('../data/manual/members_mf_manual.xlsx')
manual_mf_data_needed=members_updated[(members_updated['gen']!='M')&(members_updated['gen']!='F')][['name_abbr','name']]\
.drop_duplicates()
df=manual_mf_data_needed#.merge(manual_mf_data_override[['name_abbr','gen']],'outer').drop_duplicates()
df.to_excel('../data/manual/members_mf_manual.xlsx')
Explanation: Fix unknown genders
End of explanation
members_updated=members_updated.reset_index(drop=True).drop_duplicates()
gens=[]
for i in members_updated.index:
name=members_updated.loc[i]['name_abbr']
if name in list(df.index):
gens.append(df.loc[name])
else:
gens.append(members_updated.loc[i]['gen'])
members_updated['gen']=gens
Explanation: Update members with manual gender data.
End of explanation
members_updated.to_csv('../data/export/members.csv')
clubs_updated=members_updated.groupby(['club','country','pretty_club','year'])[['name_abbr']].count()
clubs_updated=clubs_updated.reset_index().set_index('club').join(clubs_loader.club_year_df['Oraș'])
clubs_updated.to_csv('../data/export/clubs.csv')
Explanation: Save to /data/export.
End of explanation
master_matches=[]
for year in matches:
members_by_year=members_updated.set_index(['year']).loc[year].drop_duplicates()
for competition in matches[year]:
print(year,competition)
for k in matches[year][competition]:
good=True
match={'year':year,'competition':competition}
match['match_category'],match['match_teams'],match['match_phase']=point_utils.match_cleaner(year,k['match_type'])
if 'shinpan' in k:
for color in ['fukushin1','shushin','fukushin2']:
if color in k['shinpan']:
if k['shinpan'][color] in all_players_r:
#normalize shinpan names
match[color]=name_abbr2long[all_players_r[k['shinpan'][color]]]
aka=k['aka']['name']
shiro=k['shiro']['name']
if (name_ok(aka)) and\
(name_ok(shiro)) and\
(name_cleaner(aka) in all_players) and\
(name_cleaner(shiro) in all_players):
for a in ['aka','shiro']:
points=''
for h in k[a]:
if h=='name':
name=k[a][h]
#normalize competitor names
rname=all_players_r[name]
df=members_by_year[members_by_year['name_abbr']==rname]
match[a+' name']=name_abbr2long[rname]
else:
point=k[a][h]
if str(point)=='nan': point=''
points=points+point
good=point_utils.point_redflags(points)
if good:
match[a+' point1'],match[a+' point2'],match[a+' points'],\
match[a+' hansoku'],match['encho']=point_utils.points_cleaner(points)
else:
good=False
if good:
if 'outcome' in k:
match['encho']=point_utils.outcome_cleaner(k['outcome'])
else:
match['encho']=False
match['winner'],match['difference']=point_utils.outcome_from_points(match['aka points'],match['shiro points'])
master_matches.append(match)
Explanation: 4. Update matches
Update and save cleaned match data
End of explanation
data=pd.DataFrame(master_matches).reset_index(drop=True)
save_utils.save(data)
Explanation: Clean up and save matches for display
End of explanation |
499 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TV Script Generation
In this project, we'll generate our own Simpsons TV scripts using RNNs. We'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
We'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step1: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step3: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step5: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step6: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step7: Check Point
This is our first checkpoint.
Step8: Build the Neural Network
We'll build the components necessary to build a RNN by implementing the following functions below
Step10: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network.
Return the placeholders in the following tuple (Input, Targets, LearningRate)
Step12: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
Return the cell and initial state in the following tuple (Cell, InitialState)
Step14: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step16: Build RNN
We created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
Return the outputs and final_state in the following tuple (Outputs, FinalState)
Step18: Build the Neural Network
Apply the functions you implemented above to
Step20: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step21: Neural Network Training
Hyperparameters
Tune the following parameters
Step22: Build the Graph
Build the graph using the neural network we implemented.
Step23: Train
Train the neural network on the preprocessed data.
Step24: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step25: Checkpoint
Step27: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name().
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
Step29: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step30: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script we want to generate. | Python Code:
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, we'll generate our own Simpsons TV scripts using RNNs. We'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
We'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (20, 30)
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# Implement Function
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {index: word for index, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: index for index, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
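A quick round-trip check (illustrative only, separate from the provided unit test):
v2i, i2v = create_lookup_tables(['moe', 'homer', 'moe', 'bart'])
assert i2v[v2i['moe']] == 'moe'  # ids and words map back to each other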
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# Implement Function
token_dict = {
'.': '<<period>>',
',': '<<comma>>',
'"': '<<quotation_mark>>',
';': '<<semicolon>>',
'!': '<<exclamation_mark>>',
'?': '<<question_mark>>',
'(': '<<left_parentheses>>',
')': '<<right_parentheses>>',
'--': '<<dash>>',
'\n': '<<return>>',
}
return token_dict
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
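To see the effect on a single line, here is a small sketch using the token_lookup() above; note that the real preprocessing in helper also lowercases the text, which is not shown here.
sample_line = 'Moe_Szyslak: Hey, what can I get you?'
for symbol, token in token_lookup().items():
    sample_line = sample_line.replace(symbol, ' {} '.format(token))
print(sample_line)  # roughly: Moe_Szyslak: Hey <<comma>>  what can I get you <<question_mark>>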
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is our first checkpoint. If you need to restart the notebook, you can pick up from here, since the preprocessed data has been saved to disk.
End of explanation
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
We'll build the components necessary for the RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs, targets, learning_rate
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network.
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
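The name= arguments matter more than they might look: the generation notebook later pulls these placeholders back out of a reloaded graph by name. A minimal sketch, assuming the get_inputs() defined above:
check_graph = tf.Graph()
with check_graph.as_default():
    get_inputs()
# These by-name lookups are exactly what get_tensors() relies on later
print(check_graph.get_tensor_by_name('input:0'))
print(check_graph.get_tensor_by_name('targets:0'))
print(check_graph.get_tensor_by_name('learning_rate:0'))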
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# Implement Function
lstm_layers = 2
keep_prob = 0.5
# Build a separate LSTM cell (with output dropout) for each layer; reusing a
# single cell object for every layer breaks on newer TF 1.x releases
cell = tf.contrib.rnn.MultiRNNCell(
    [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(rnn_size),
                                   output_keep_prob=keep_prob)
     for _ in range(lstm_layers)])
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name="initial_state")
return cell, initial_state
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# Implement Function
# Trainable embedding matrix, initialized uniformly in [-1, 1)
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
# Look up the embedding vector for every word id in input_data
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
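As a quick shape check (a sketch; the sizes are arbitrary assumptions), the embedding matrix is (vocab_size, embed_dim) and the lookup maps an int-encoded batch of shape (batch_size, seq_length) to (batch_size, seq_length, embed_dim).
with tf.Graph().as_default():
    word_ids = tf.placeholder(tf.int32, [None, None])
    embedded = get_embed(word_ids, vocab_size=5000, embed_dim=256)
    print(embedded.get_shape())  # (?, ?, 256)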
def build_rnn(cell, inputs):
Create an RNN using an RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# Implement Function
# Unroll the cell over the (already embedded) input sequence
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
# Name the final state so it can be fetched by name during generation
final_state = tf.identity(state, name='final_state')
return outputs, final_state
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
We created an RNN cell in the get_init_cell() function. Time to use that cell to create an RNN.
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# Implement Function
# Use rnn_size as the embedding dimension to keep the interface simple
build_rnn_input = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, build_rnn_input)
# Project the RNN outputs to vocabulary-sized logits with a linear layer
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
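A short shape walk-through makes the data flow concrete: word ids (batch, seq) become embeddings (batch, seq, rnn_size), then RNN outputs (batch, seq, rnn_size), then logits (batch, seq, vocab_size). The sketch below uses made-up sizes purely for illustration.
with tf.Graph().as_default():
    demo_input = tf.placeholder(tf.int32, [None, None])
    demo_cell, _ = get_init_cell(tf.shape(demo_input)[0], rnn_size=128)
    demo_logits, _ = build_nn(demo_cell, 128, demo_input, vocab_size=5000)
    print(demo_logits.get_shape())  # (?, ?, 5000)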
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# Implement Function
n_batches = int(len(int_text) / (batch_size * seq_length))
# Drop the last few words so that only full batches remain
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
# Targets are the inputs shifted one word to the right
ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.asarray(list(zip(x_batches, y_batches)))
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
End of explanation
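A tiny worked example (a sketch) helps here: with 13 word ids, batch_size=2, and seq_length=3 there are 2 full batches, and each target sequence is its input sequence shifted one word to the right.
demo_batches = get_batches(list(range(1, 14)), batch_size=2, seq_length=3)
print(demo_batches.shape)  # (2, 2, 2, 3)
print(demo_batches[0][0])  # inputs:  [[1 2 3] [7 8 9]]
print(demo_batches[0][1])  # targets: [[2 3 4] [8 9 10]]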
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 512
# Sequence Length
seq_length = 32
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 15
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
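As a rough sanity check on these settings (a sketch that reuses the batching arithmetic from get_batches), it can be handy to print how many weight updates the chosen values imply.
n_train_batches = len(int_text) // (batch_size * seq_length)
print('{} batches per epoch, {} updates over {} epochs'.format(
    n_train_batches, n_train_batches * num_epochs, num_epochs))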
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network we implemented.
End of explanation
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data.
End of explanation
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# Implement Function
InputTensor = loaded_graph.get_tensor_by_name('input:0')
InitialStateTensor = loaded_graph.get_tensor_by_name('initial_state:0')
FinalStateTensor = loaded_graph.get_tensor_by_name('final_state:0')
ProbsTensor = loaded_graph.get_tensor_by_name('probs:0')
return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name().
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# Implement Function
# Greedily take the most probable word (see the sampling sketch below for a
# less repetitive alternative)
index = np.argmax(probabilities)
predicted_word = int_to_vocab[index]
return predicted_word
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
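The implementation above takes the argmax, which is deterministic and can get stuck repeating the same phrases. A hedged alternative is to sample from the distribution instead; pick_word_sampled below is a hypothetical helper for illustration, not part of the project template.
def pick_word_sampled(probabilities, int_to_vocab):
    # Re-normalize to guard against float32 round-off before sampling
    p = np.asarray(probabilities, dtype=np.float64)
    p /= p.sum()
    return int_to_vocab[np.random.choice(len(p), p=p)]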
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
# probs has shape (1, dyn_seq_length, vocab_size); use the distribution for the last word
pred_word = pick_word(probabilities[0][dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script. Set gen_length to the length of the TV script you want to generate.
End of explanation |
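Optionally (a sketch), the generated script can be written to disk so that runs with different hyperparameters are easy to compare later.
with open('generated_script.txt', 'w') as f:
    f.write(tv_script)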