Columns: Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k)
15,900
Given the following text description, write Python code to implement the functionality described below step by step Description: Physiology 1) Using the ion concentrations of interstitial and intracellular compartments and the Nernst equation, calculate the equilibrium potentials for Na+, K+, and Cl- Step1: 2) Assuming the resting potential for the plasma membrane is -70mV, explain whether each of the ions in question 1 would be expected to move into or out of the cell. Use an I-V plot to support your answer. Step2: IV graph
Python Code: from math import log # RT/F = 26.73 at room temperature rt_div_f = 26.73 nernst = lambda xO, xI, z: rt_div_f/z * log(1.0 * xO / xI) Na_Eq = nernst(145, 15, 1) K_Eq = nernst(4.5, 120, 1) Cl_Eq = nernst(116, 20, -1) print "Na+ equilibrium potential is %.2f mV" % (Na_Eq) print "K+ equilibrium potential is %.2f mV" % (K_Eq) print "Cl- equilibrium potential is %.2f mV" % (Cl_Eq) Explanation: Physiology 1) Using the ion concentrations of interstitial and intracellular compartments and the Nernst equation, calculate the equilibrium potentials for Na+, K+, and Cl- End of explanation # Values from Table 3.1 p57 in syllabus G_Na = 1 G_K = 100 G_Cl = 25 goldman = lambda Na_Out, Na_In, K_Out, K_In, Cl_Out, Cl_In: \ rt_div_f * log((G_Na * Na_Out + G_K * K_Out + G_Cl * Cl_In)/\ (1.0 * G_Na * Na_In + G_K * K_In + G_Cl * Cl_Out)) print "Potential at equalibrium is %.2f mV" % goldman(150, 15, 5, 150, 100, 10) Explanation: 2) Assuming the resting potential for the plasma membrane is -70mV, explain whether each of the ions in question 1 would be expected to move into or out of the cell. Use an I-V plot to support your answer. End of explanation %matplotlib inline import numpy as np import matplotlib.pyplot as plt plt.figure(figsize=(20,20)) x = np.arange(-100, 60, 0.1); iv_line = lambda G_val, E_x: G_val * x + ((0.0 - E_x) * G_val) K_line = iv_line(G_K, K_Eq) Na_line = iv_line(G_Na, Na_Eq) Cl_line = iv_line(G_Cl, Cl_Eq) Sum_line = K_line + Na_line + Cl_line plt.grid(True) K, = plt.plot(x, K_line, label="K") Na, = plt.plot(x, Na_line, label="Na") Cl, = plt.plot(x, Cl_line, label="Cl") Em, = plt.plot(x, Sum_line, label="Em") plt.legend(handles=[K, Na, Cl, Em]) plt.show() Explanation: IV graph End of explanation
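For reference, the two lambdas in this notebook implement the standard Nernst relation and a GHK-style resting-potential formula with the relative conductances G used as the weighting factors. Written out (note that the 26.73 mV value in the code comment corresponds to body temperature, about 37 °C; at room temperature RT/F is closer to 25.7 mV):

```latex
% Nernst potential for an ion X with valence z
E_X = \frac{RT}{zF}\,\ln\frac{[X]_{\mathrm{out}}}{[X]_{\mathrm{in}}}

% GHK-style resting potential, weighted as in the goldman lambda above
V_m = \frac{RT}{F}\,\ln\frac{G_{Na}[Na]_{\mathrm{out}} + G_{K}[K]_{\mathrm{out}} + G_{Cl}[Cl]_{\mathrm{in}}}
                            {G_{Na}[Na]_{\mathrm{in}} + G_{K}[K]_{\mathrm{in}} + G_{Cl}[Cl]_{\mathrm{out}}}
```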
15,901
Given the following text description, write Python code to implement the functionality described below step by step Description: Decoding (MVPA) Step1: Transformation classes Scaler The Step2: PSDEstimator The Step3: Source power comodulation (SPoC) Source Power Comodulation ( Step4: Decoding over time This strategy consists in fitting a multivariate predictive model on each time instant and evaluating its performance at the same instant on new epochs. The Step5: You can retrieve the spatial filters and spatial patterns if you explicitly use a LinearModel Step6: Temporal generalization Temporal generalization is an extension of the decoding over time approach. It consists in evaluating whether the model estimated at a particular time instant accurately predicts any other time instant. It is analogous to transferring a trained model to a distinct learning problem, where the problems correspond to decoding the patterns of brain activity recorded at distinct time instants. The object to for Temporal generalization is Step7: Plot the full (generalization) matrix
Python Code: import numpy as np import matplotlib.pyplot as plt from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LogisticRegression import mne from mne.datasets import sample from mne.decoding import (SlidingEstimator, GeneralizingEstimator, Scaler, cross_val_multiscore, LinearModel, get_coef, Vectorizer, CSP) data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' tmin, tmax = -0.200, 0.500 event_id = {'Auditory/Left': 1, 'Visual/Left': 3} # just use two raw = mne.io.read_raw_fif(raw_fname, preload=True) # The subsequent decoding analyses only capture evoked responses, so we can # low-pass the MEG data. Usually a value more like 40 Hz would be used, # but here low-pass at 20 so we can more heavily decimate, and allow # the examlpe to run faster. The 2 Hz high-pass helps improve CSP. raw.filter(2, 20) events = mne.find_events(raw, 'STI 014') # Set up pick list: EEG + MEG - bad channels (modify to your needs) raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more # Read epochs epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=('grad', 'eog'), baseline=(None, 0.), preload=True, reject=dict(grad=4000e-13, eog=150e-6), decim=10) epochs.pick_types(meg=True, exclude='bads') # remove stim and EOG X = epochs.get_data() # MEG signals: n_epochs, n_meg_channels, n_times y = epochs.events[:, 2] # target: Audio left or right Explanation: Decoding (MVPA) :depth: 3 .. include:: ../../links.inc Design philosophy Decoding (a.k.a. MVPA) in MNE largely follows the machine learning API of the scikit-learn package. Each estimator implements fit, transform, fit_transform, and (optionally) inverse_transform methods. For more details on this design, visit scikit-learn_. For additional theoretical insights into the decoding framework in MNE, see [1]_. For ease of comprehension, we will denote instantiations of the class using the same name as the class but in small caps instead of camel cases. Let's start by loading data for a simple two-class problem: End of explanation # Uses all MEG sensors and time points as separate classification # features, so the resulting filters used are spatio-temporal clf = make_pipeline(Scaler(epochs.info), Vectorizer(), LogisticRegression(solver='lbfgs')) scores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=1) # Mean scores across cross-validation splits score = np.mean(scores, axis=0) print('Spatio-temporal: %0.1f%%' % (100 * score,)) Explanation: Transformation classes Scaler The :class:mne.decoding.Scaler will standardize the data based on channel scales. In the simplest modes scalings=None or scalings=dict(...), each data channel type (e.g., mag, grad, eeg) is treated separately and scaled by a constant. This is the approach used by e.g., :func:mne.compute_covariance to standardize channel scales. If scalings='mean' or scalings='median', each channel is scaled using empirical measures. Each channel is scaled independently by the mean and standand deviation, or median and interquartile range, respectively, across all epochs and time points during :class:~mne.decoding.Scaler.fit (during training). The :meth:~mne.decoding.Scaler.transform method is called to transform data (training or test set) by scaling all time points and epochs on a channel-by-channel basis. To perform both the fit and transform operations in a single call, the :meth:~mne.decoding.Scaler.fit_transform method may be used. 
To invert the transform, :meth:~mne.decoding.Scaler.inverse_transform can be used. For scalings='median', scikit-learn_ version 0.17+ is required. <div class="alert alert-info"><h4>Note</h4><p>Using this class is different from directly applying :class:`sklearn.preprocessing.StandardScaler` or :class:`sklearn.preprocessing.RobustScaler` offered by scikit-learn_. These scale each *classification feature*, e.g. each time point for each channel, with mean and standard deviation computed across epochs, whereas :class:`mne.decoding.Scaler` scales each *channel* using mean and standard deviation computed across all of its time points and epochs.</p></div> Vectorizer Scikit-learn API provides functionality to chain transformers and estimators by using :class:sklearn.pipeline.Pipeline. We can construct decoding pipelines and perform cross-validation and grid-search. However scikit-learn transformers and estimators generally expect 2D data (n_samples * n_features), whereas MNE transformers typically output data with a higher dimensionality (e.g. n_samples * n_channels * n_frequencies * n_times). A Vectorizer therefore needs to be applied between the MNE and the scikit-learn steps like: End of explanation csp = CSP(n_components=3, norm_trace=False) clf = make_pipeline(csp, LogisticRegression(solver='lbfgs')) scores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=1) print('CSP: %0.1f%%' % (100 * scores.mean(),)) Explanation: PSDEstimator The :class:mne.decoding.PSDEstimator computes the power spectral density (PSD) using the multitaper method. It takes a 3D array as input, converts it into 2D and computes the PSD. FilterEstimator The :class:mne.decoding.FilterEstimator filters the 3D epochs data. Spatial filters Just like temporal filters, spatial filters provide weights to modify the data along the sensor dimension. They are popular in the BCI community because of their simplicity and ability to distinguish spatially-separated neural activity. Common spatial pattern :class:mne.decoding.CSP is a technique to analyze multichannel data based on recordings from two classes [2]_ (see also https://en.wikipedia.org/wiki/Common_spatial_pattern). Let $X \in R^{C\times T}$ be a segment of data with $C$ channels and $T$ time points. The data at a single time point is denoted by $x(t)$ such that $X=[x(t), x(t+1), ..., x(t+T-1)]$. Common spatial pattern (CSP) finds a decomposition that projects the signal in the original sensor space to CSP space using the following transformation: \begin{align}x_{CSP}(t) = W^{T}x(t) :label: csp\end{align} where each column of $W \in R^{C\times C}$ is a spatial filter and each row of $x_{CSP}$ is a CSP component. The matrix $W$ is also called the de-mixing matrix in other contexts. Let $\Sigma^{+} \in R^{C\times C}$ and $\Sigma^{-} \in R^{C\times C}$ be the estimates of the covariance matrices of the two conditions. CSP analysis is given by the simultaneous diagonalization of the two covariance matrices \begin{align}W^{T}\Sigma^{+}W = \lambda^{+} :label: diagonalize_p\end{align} \begin{align}W^{T}\Sigma^{-}W = \lambda^{-} :label: diagonalize_n\end{align} where $\lambda^{C}$ is a diagonal matrix whose entries are the eigenvalues of the following generalized eigenvalue problem \begin{align}\Sigma^{+}w = \lambda \Sigma^{-}w :label: eigen_problem\end{align} Large entries in the diagonal matrix corresponds to a spatial filter which gives high variance in one class but low variance in the other. Thus, the filter facilitates discrimination between the two classes. .. 
topic:: Examples * `sphx_glr_auto_examples_decoding_plot_decoding_csp_eeg.py` * `sphx_glr_auto_examples_decoding_plot_decoding_csp_timefreq.py` <div class="alert alert-info"><h4>Note</h4><p>The winning entry of the Grasp-and-lift EEG competition in Kaggle used the :class:`~mne.decoding.CSP` implementation in MNE and was featured as a `script of the week <sotw_>`_.</p></div> We can use CSP with these data with: End of explanation # Fit CSP on full data and plot csp.fit(X, y) csp.plot_patterns(epochs.info) csp.plot_filters(epochs.info, scalings=1e-9) Explanation: Source power comodulation (SPoC) Source Power Comodulation (:class:mne.decoding.SPoC) [3]_ identifies the composition of orthogonal spatial filters that maximally correlate with a continuous target. SPoC can be seen as an extension of the CSP where the target is driven by a continuous variable rather than a discrete variable. Typical applications include extraction of motor patterns using EMG power or audio patterns using sound envelope. .. topic:: Examples * `sphx_glr_auto_examples_decoding_plot_decoding_spoc_CMC.py` xDAWN :class:mne.preprocessing.Xdawn is a spatial filtering method designed to improve the signal to signal + noise ratio (SSNR) of the ERP responses [4]_. Xdawn was originally designed for P300 evoked potential by enhancing the target response with respect to the non-target response. The implementation in MNE-Python is a generalization to any type of ERP. .. topic:: Examples * `sphx_glr_auto_examples_preprocessing_plot_xdawn_denoising.py` * `sphx_glr_auto_examples_decoding_plot_decoding_xdawn_eeg.py` Effect-matched spatial filtering The result of :class:mne.decoding.EMS is a spatial filter at each time point and a corresponding time course [5]_. Intuitively, the result gives the similarity between the filter at each time point and the data vector (sensors) at that time point. .. topic:: Examples * `sphx_glr_auto_examples_decoding_plot_ems_filtering.py` Patterns vs. filters When interpreting the components of the CSP (or spatial filters in general), it is often more intuitive to think about how $x(t)$ is composed of the different CSP components $x_{CSP}(t)$. In other words, we can rewrite Equation :eq:csp as follows: \begin{align}x(t) = (W^{-1})^{T}x_{CSP}(t) :label: patterns\end{align} The columns of the matrix $(W^{-1})^T$ are called spatial patterns. This is also called the mixing matrix. The example sphx_glr_auto_examples_decoding_plot_linear_model_patterns.py discusses the difference between patterns and filters. These can be plotted with: End of explanation # We will train the classifier on all left visual vs auditory trials on MEG clf = make_pipeline(StandardScaler(), LogisticRegression(solver='lbfgs')) time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True) scores = cross_val_multiscore(time_decod, X, y, cv=5, n_jobs=1) # Mean scores across cross-validation splits scores = np.mean(scores, axis=0) # Plot fig, ax = plt.subplots() ax.plot(epochs.times, scores, label='score') ax.axhline(.5, color='k', linestyle='--', label='chance') ax.set_xlabel('Times') ax.set_ylabel('AUC') # Area Under the Curve ax.legend() ax.axvline(.0, color='k', linestyle='-') ax.set_title('Sensor space decoding') Explanation: Decoding over time This strategy consists in fitting a multivariate predictive model on each time instant and evaluating its performance at the same instant on new epochs. 
The :class:mne.decoding.SlidingEstimator will take as input a pair of features $X$ and targets $y$, where $X$ has more than 2 dimensions. For decoding over time the data $X$ is the epochs data of shape n_epochs x n_channels x n_times. As the last dimension of $X$ is the time, an estimator will be fit on every time instant. This approach is analogous to SlidingEstimator-based approaches in fMRI, where here we are interested in when one can discriminate experimental conditions and therefore figure out when the effect of interest happens. When working with linear models as estimators, this approach boils down to estimating a discriminative spatial filter for each time instant. Temporal decoding We'll use a Logistic Regression for a binary classification as machine learning model. End of explanation clf = make_pipeline(StandardScaler(), LinearModel(LogisticRegression(solver='lbfgs'))) time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True) time_decod.fit(X, y) coef = get_coef(time_decod, 'patterns_', inverse_transform=True) evoked = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0]) joint_kwargs = dict(ts_args=dict(time_unit='s'), topomap_args=dict(time_unit='s')) evoked.plot_joint(times=np.arange(0., .500, .100), title='patterns', **joint_kwargs) Explanation: You can retrieve the spatial filters and spatial patterns if you explicitly use a LinearModel End of explanation # define the Temporal generalization object time_gen = GeneralizingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True) scores = cross_val_multiscore(time_gen, X, y, cv=5, n_jobs=1) # Mean scores across cross-validation splits scores = np.mean(scores, axis=0) # Plot the diagonal (it's exactly the same as the time-by-time decoding above) fig, ax = plt.subplots() ax.plot(epochs.times, np.diag(scores), label='score') ax.axhline(.5, color='k', linestyle='--', label='chance') ax.set_xlabel('Times') ax.set_ylabel('AUC') ax.legend() ax.axvline(.0, color='k', linestyle='-') ax.set_title('Decoding MEG sensors over time') Explanation: Temporal generalization Temporal generalization is an extension of the decoding over time approach. It consists in evaluating whether the model estimated at a particular time instant accurately predicts any other time instant. It is analogous to transferring a trained model to a distinct learning problem, where the problems correspond to decoding the patterns of brain activity recorded at distinct time instants. The object to for Temporal generalization is :class:mne.decoding.GeneralizingEstimator. It expects as input $X$ and $y$ (similarly to :class:~mne.decoding.SlidingEstimator) but generates predictions from each model for all time instants. The class :class:~mne.decoding.GeneralizingEstimator is generic and will treat the last dimension as the one to be used for generalization testing. For convenience, here, we refer to it as different tasks. If $X$ corresponds to epochs data then the last dimension is time. This runs the analysis used in [6] and further detailed in [7]: End of explanation fig, ax = plt.subplots(1, 1) im = ax.imshow(scores, interpolation='lanczos', origin='lower', cmap='RdBu_r', extent=epochs.times[[0, -1, 0, -1]], vmin=0., vmax=1.) ax.set_xlabel('Testing Time (s)') ax.set_ylabel('Training Time (s)') ax.set_title('Temporal generalization') ax.axvline(0, color='k') ax.axhline(0, color='k') plt.colorbar(im, ax=ax) Explanation: Plot the full (generalization) matrix: End of explanation
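As a side note on the Scaler vs. StandardScaler distinction quoted earlier in this notebook, the difference is only in which axes the statistics are pooled over. A minimal NumPy sketch (the array shape is hypothetical, not taken from the notebook):

```python
import numpy as np

# Hypothetical epochs array: (n_epochs, n_channels, n_times)
X = np.random.randn(100, 203, 71)

# mne.decoding.Scaler-style scaling: one mean/std per channel,
# pooled over all epochs and time points
X_channel = (X - X.mean(axis=(0, 2), keepdims=True)) / X.std(axis=(0, 2), keepdims=True)

# sklearn StandardScaler-style scaling after vectorizing: one mean/std
# per (channel, time) feature, computed across epochs only
X2d = X.reshape(len(X), -1)
X_feature = (X2d - X2d.mean(axis=0)) / X2d.std(axis=0)
```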
15,902
Given the following text description, write Python code to implement the functionality described below step by step Description: Semi-Monocoque Theory Step1: Import Section class, which contains all calculations Step2: Initialization of sympy symbolic tool and pint for dimension analysis (not really implemented rn as not directly compatible with sympy) Step3: Define sympy parameters used for geometric description of sections Step4: We also define numerical values for each symbol in order to plot scaled section and perform calculations Step5: First example Step6: Define section and perform first calculations Step7: Plot of S1 section in original reference frame Define a dictionary of coordinates used by Networkx to plot section as a Directed graph. Note that arrows are actually just thicker stubs Step8: Plot of S1 section in inertial reference Frame Section is plotted wrt center of gravity and rotated (if necessary) so that x and y are principal axes. Center of Gravity and Shear Center are drawn Step9: Compute L matrix Step10: Compute H matrix Step11: Compute $\tilde{K}$ and $\tilde{M}$ as Step12: Compute eigenvalues and eigenvectors as Step13: Eigenvalues correspond to $\beta^2$ Step14: Eigenvectors are orthogonal as expected Step15: From $\beta_i^2$ we compute
Python Code: from pint import UnitRegistry import sympy import networkx as nx import numpy as np import matplotlib.pyplot as plt import sys %matplotlib inline from IPython.display import display Explanation: Semi-Monocoque Theory: corrective solutions End of explanation from Section import Section Explanation: Import Section class, which contains all calculations End of explanation ureg = UnitRegistry() sympy.init_printing() Explanation: Initialization of sympy symbolic tool and pint for dimension analysis (not really implemented rn as not directly compatible with sympy) End of explanation A, A0, t, t0, a, b, h, L, E, G = sympy.symbols('A A_0 t t_0 a b h L E G', positive=True) Explanation: Define sympy parameters used for geometric description of sections End of explanation values = [(A, 150 * ureg.millimeter**2),(A0, 250 * ureg.millimeter**2),(a, 80 * ureg.millimeter), \ (b, 20 * ureg.millimeter),(h, 35 * ureg.millimeter),(L, 2000 * ureg.millimeter), \ (t, 0.8 *ureg.millimeter),(E, 72e3 * ureg.MPa), (G, 27e3 * ureg.MPa)] datav = [(v[0],v[1].magnitude) for v in values] Explanation: We also define numerical values for each symbol in order to plot scaled section and perform calculations End of explanation stringers = {1:[(2*a,h),A], 2:[(a,h),A], 3:[(sympy.Integer(0),h),A], 4:[(sympy.Integer(0),sympy.Integer(0)),A], 5:[(2*a,sympy.Integer(0)),A]} #5:[(sympy.Rational(1,2)*a,h),A]} panels = {(1,2):t, (2,3):t, (3,4):t, (4,5):t, (5,1):t} Explanation: First example: Simple rectangular symmetric section Define graph describing the section: 1) stringers are nodes with parameters: - x coordinate - y coordinate - Area 2) panels are oriented edges with parameters: - thickness - lenght which is automatically calculated End of explanation S1 = Section(stringers, panels) S1.cycles Explanation: Define section and perform first calculations End of explanation start_pos={ii: [float(S1.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() } plt.figure(figsize=(12,8),dpi=300) nx.draw(S1.g,with_labels=True, arrows= True, pos=start_pos) plt.arrow(0,0,20,0) plt.arrow(0,0,0,20) #plt.text(0,0, 'CG', fontsize=24) plt.axis('equal') plt.title("Section in starting reference Frame",fontsize=16); Explanation: Plot of S1 section in original reference frame Define a dictionary of coordinates used by Networkx to plot section as a Directed graph. Note that arrows are actually just thicker stubs End of explanation positions={ii: [float(S1.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() } x_ct, y_ct = S1.ct.subs(datav) plt.figure(figsize=(12,8),dpi=300) nx.draw(S1.g,with_labels=True, pos=positions) plt.plot([0],[0],'o',ms=12,label='CG') plt.plot([x_ct],[y_ct],'^',ms=12, label='SC') #plt.text(0,0, 'CG', fontsize=24) #plt.text(x_ct,y_ct, 'SC', fontsize=24) plt.legend(loc='lower right', shadow=True) plt.axis('equal') plt.title("Section in pricipal reference Frame",fontsize=16); Explanation: Plot of S1 section in inertial reference Frame Section is plotted wrt center of gravity and rotated (if necessary) so that x and y are principal axes. 
Center of Gravity and Shear Center are drawn End of explanation S1.compute_L() S1.L Explanation: Compute L matrix: with 5 nodes we expect 2 dofs, one with symmetric load and one with antisymmetric load End of explanation S1.compute_H() S1.H Explanation: Compute H matrix End of explanation S1.compute_KM(A,h,t) S1.Ktilde S1.Mtilde Explanation: Compute $\tilde{K}$ and $\tilde{M}$ as: $$\tilde{K} = L^T \cdot \left[ \frac{A}{A_0} \right] \cdot L$$ $$\tilde{M} = H^T \cdot \left[ \frac{l}{l_0}\frac{t_0}{t} \right] \cdot L$$ End of explanation sol_data = (S1.Ktilde.inv()*(S1.Mtilde.subs(datav))).eigenvects() Explanation: Compute eigenvalues and eigenvectors as: $$\left| \mathbf{I} \cdot \beta^2 - \mathbf{\tilde{K}}^{-1} \cdot \mathbf{\tilde{M}} \right| = 0$$ We substitute some numerical values to simplify the expressions End of explanation β2 = [sol[0] for sol in sol_data] β2 Explanation: Eigenvalues correspond to $\beta^2$ End of explanation X = [sol[2][0] for sol in sol_data] X Explanation: Eigenvectors are orthogonal as expected End of explanation λ = [sympy.N(sympy.sqrt(E*A*h/(G*t)*βi).subs(datav)) for βi in β2] λ Explanation: From $\beta_i^2$ we compute: $$\lambda_i = \sqrt{\frac{E A_0 l_0}{G t_0} \beta_i^2}$$ substuting numerical values End of explanation
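The symbolic eigensolution above can be cross-checked numerically once the values in datav are substituted. This is only a sketch; it assumes S1.Ktilde and S1.Mtilde convert cleanly to floating-point arrays after substitution:

```python
import numpy as np
from scipy import linalg

# Numeric versions of the substituted sympy matrices
Kt = np.array(S1.Ktilde.subs(datav), dtype=float)
Mt = np.array(S1.Mtilde.subs(datav), dtype=float)

# Generalized problem Mt v = beta^2 Kt v gives the same beta^2 as above
beta2_num, vecs = linalg.eig(Mt, Kt)
print(np.sort(beta2_num.real))
```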
15,903
Given the following text description, write Python code to implement the functionality described below step by step Description: From FITS to HDF5 The purpose of this notebook is to get the data into a suitable data structure for preprocessing. FITS file format https Step1: HDUs A FITS file is comprised of segments called Header/Data Units (HDUs). The first HDU is called the 'Primary HDU'. The primary data array can contain a 1-999 dimensional array of numbers. A typical primary array could contain a 1 dimensional spectrum, a 2 dimensional image, a 3 dimensional data cube. Any number of additional HDUs may follow the primary array. These HDUs are referred to as 'extensions'. There are three types of standard extensions currently defined Step2: Header Units Every HDU consists of an ASCII formatted 'Header Unit' and 'Data Unit'. Each header unit contains a sequence of fixed-length 80 character long keyword records which have the form Step3: Data Units Note that the data unit is not required. The image pixels in the primary array or an image extension may have one of 5 supported data types Step4: FITS parsing Parse interesting things from the FITS file Step5: Save to HDF5
Python Code: %matplotlib inline import os import glob import random import h5py import astropy.io.fits import numpy as np import matplotlib.pyplot as plt # find the normalized spectra in data_path directory # add all filenames to the list fits_paths FITS_DIR = 'data/ondrejov/' fits_paths = glob.glob(FITS_DIR + '*.fits') len(fits_paths) # pick random fits random_fits = random.choice(fits_paths) random_fits Explanation: From FITS to HDF5 Purpose of this notebook is to get the data to suitable data structure for preprocessing. FITS file format https://fits.gsfc.nasa.gov/fits_primer.html Flexible Image Transport System is data format used within astronomy for transporting, analyzing, archiving scientific data files. It is design to store data sets consisting of multidimensiional arrays and two dimensional tables. End of explanation # open file with astropy hdulist = astropy.io.fits.open(random_fits) # display info about the HDUs hdulist.info() Explanation: HDUs A FITS file is comprised of segmets called Header/Data Units (HDUs). The first HDU is called the 'Primary HDU'. The primary data array can contain a 1-999 dimensional array of numbers. A typical primary array could contain a 1 dimensional spectrum, a 2 dimensional image, a 3 dimensional data cube. Any number of additional HDUs may follow the primary array. These HDUs are referred as 'extensions'. There are three types of standart extensions currently defined: Image Extension (XTENSION = 'IMAGE') ASCII Table Extension (XTENSION = 'TABLE') Binary Table Extension (XTENSION = 'BINTABLE') End of explanation hdulist[0].header hdulist[1].header Explanation: Header Units Every HDU consists of an ASCII formatted 'Header Unit' and 'Data Unit'. Each header unit contains a sequence of fixed-length 80 character long keyword record which have form: KEYNAME = value / comment string Non-printing ASCII character such as tabs, carriage-returns, line-feeds are not allowed anywhere in the header unit. End of explanation data = hdulist[1].data data flux = data.field('flux').astype(np.float64, order='C', copy=True) wave = data.field('spectral').astype(np.float64, order='C', copy=True) flux.shape, wave.shape plt.plot(wave, flux) plt.ylabel('flux') plt.xlabel('wavelength') plt.grid(True) Explanation: Data Units Note that the data unit is not required. The image pixels in primary array or an image extension may have one of 5 supported data types: 8-bit (unsigned) integer bytes 16-bit (signed) integer bytes 32-bit (signed) integer bytes 32-bit single precision floating point real numbers 64-bit double precision floating point real numbers The othe 2 standard extensions, ASCII tables and binary tables, contain tabular information organized into rows and columns. Binary tables are more compact and are faster to read and write then ASCII tables. All the entries within a column of a tables have the same datatype. The allowed data formats for an ASCII table column are: integer, signe and double precision floating point value, character string. Binary table also support logical, bit and complex data formats. 
End of explanation def parse_fits_id(path): return os.path.splitext(os.path.split(path)[-1])[0] # http://astropy.readthedocs.io/en/latest/io/fits/appendix/faq.html#i-m-opening-many-fits-files-in-a-loop-and-getting-oserror-too-many-open-files def parse_fits(filename): '''Parse normalized spectrum from fits file.''' try: with astropy.io.fits.open(filename, mmap=False) as hdulist: data = hdulist[1].data header = hdulist[1].header wave = data['spectral'].astype(np.float64, order='C', copy=True) flux = data['flux'].astype(np.float64, order='C', copy=True) date = header['DATE'] ra = header['RA'] dec = header['DEC'] except IOError as e: print(e, filename) return None, None return parse_fits_id(filename), { 'wave': wave, 'flux': flux, 'date': date, 'ra': ra, 'dec': dec, } parse_fits(random_fits) H_ALPHA = 6562.8 def in_range(start, stop, val=H_ALPHA): '''Check if val is in range [start, stop]''' return start <= val <= stop def in_wavelen(wavelens, val=H_ALPHA): '''Check if val is somewhere in-between wavelens array start and end.''' return wavelens is not None and in_range(wavelens[0], wavelens[-1], val) # the data stucture is dict where a key is spectrum id # and a value is dict of wavelen and flux spectra = { fits_id: data_dict for fits_id, data_dict in map(parse_fits, fits_paths) if in_wavelen(wave, H_ALPHA) } len(spectra) Explanation: FITS parsing Parse interesting thing from FITS file: wave flux date ra dec End of explanation with h5py.File('data/data.hdf5', 'w') as f: for ident, data in spectra.items(): if ident is None or data is None: continue wave = data['wave'] flux = data['flux'] group = 'spectra/' + ident dset = f.create_dataset(group, (2, wave.shape[0]), dtype=wave.dtype) dset[0, :] = wave dset[1, :] = flux dset.attrs['date'] = data['date'] dset.attrs['dec'] = data['dec'] dset.attrs['ra'] = data['ra'] Explanation: Save to HDF5 End of explanation
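A quick way to verify the write is to read the file back in. This sketch assumes the spectra/<id> layout created above, with the wavelength grid in row 0 and the flux in row 1 of each dataset:

```python
import h5py

# Re-open the HDF5 file and inspect the first stored spectrum
with h5py.File('data/data.hdf5', 'r') as f:
    for ident, dset in f['spectra'].items():
        wave, flux = dset[0, :], dset[1, :]
        print(ident, dset.attrs['date'], wave.shape, flux.shape)
        break  # just the first spectrum as a sanity check
```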
15,904
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook analyzes the predictions of the trading model. At different thresholds, how effective is the model at predicting larger-than-average range days? Step1: This file contains the ranked predictions of the test set. Step2: The probabilities are in descending order. Observe the greater number of True values at the top of the rankings versus the bottom. Step3: Let's plot the True/False ratios for each probability decile. These ratios should roughly reflect the trend in the calibration plot.
Python Code: %matplotlib inline import numpy as np import pandas as pd pwd cd output ls Explanation: This notebook analyzes the predictions of the trading model. <br/>At different thresholds, how effective is the model at predicting<br/> larger-than-average range days? End of explanation ranking_frame = pd.read_csv('rankings_20170425.csv') ranking_frame.columns Explanation: This file contains the ranked predictions of the test set. End of explanation ranking_frame.rrover.head(20) ranking_frame.rrover.tail(20) Explanation: The probabilities are in descending order. Observe the greater number of True values at the top of the rankings versus the bottom. End of explanation ranking_frame['bins'] = pd.qcut(ranking_frame.probability, 10, labels=False) grouped = ranking_frame.groupby('bins') def get_ratio(series): ratio = series.value_counts()[1] / series.size return ratio grouped['rrover'].apply(get_ratio).plot(kind='bar') Explanation: Let's plot the True/False ratios for each probability decile. These ratios should roughly reflect the trend in the calibration plot. End of explanation
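One caveat in the get_ratio helper above: series.value_counts()[1] raises a KeyError for any decile that happens to contain no True rows. Assuming rrover is a proper boolean column, its mean gives the same True fraction and avoids that edge case (a suggested alternative, not part of the original notebook):

```python
# Fraction of True values per probability decile via the boolean mean
grouped['rrover'].mean().plot(kind='bar')
```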
15,905
Given the following text description, write Python code to implement the functionality described below step by step Description: Section 5.5 superposition in time. Simulating varying head at x=0 using a sequence of sudden changes of head at $x=0$ and $t=0$ IHE, Delft, 20120-01-06 @T.N.Olsthoorn Context The aquifer is considered of constant transmissivity $kD$ and storage coefficient $S$ and to extend from $0 \le x \le \infty$. The partial differential equation is $$ kD \frac {\partial^2 s} {\partial x} = S \frac {\partial s} {\partial t} $$ The solution for a sudden change of head equal to $A [m]$ at $x=0$ and $t = 0$ is $$ s(x, t) = A \,\mathtt{erfc}(u), \,\,\,\, u=\sqrt{\frac {x^2 S} {4 kD t}} $$ where $\mathtt {erfc} () $ is the so-called complementary error function Step1: Convenience function to set up a graph Step2: Implementation Step3: Simulation by superposition in time Step4: The discharge at $x=0$ The implementation of the discharge at $x=0$ is straightforward given the mathematical expression derived above.
Python Code: import numpy as np from scipy.special import erfc from matplotlib import pyplot as plt from matplotlib import animation, rc from matplotlib.animation import FuncAnimation from matplotlib.patches import PathPatch, Path from IPython.display import HTML from scipy.special import erfc import pdb Explanation: Section 5.5 superposition in time. Simulating varying head at x=0 using a sequence of sudden changes of head at $x=0$ and $t=0$ IHE, Delft, 20120-01-06 @T.N.Olsthoorn Context The aquifer is considered of constant transmissivity $kD$ and storage coefficient $S$ and to extend from $0 \le x \le \infty$. The partial differential equation is $$ kD \frac {\partial^2 s} {\partial x} = S \frac {\partial s} {\partial t} $$ The solution for a sudden change of head equal to $A [m]$ at $x=0$ and $t = 0$ is $$ s(x, t) = A \,\mathtt{erfc}(u), \,\,\,\, u=\sqrt{\frac {x^2 S} {4 kD t}} $$ where $\mathtt {erfc} () $ is the so-called complementary error function: $$ \mathtt {erfc} (z) = \frac 2 {\sqrt {\pi} } \intop _z ^\infty e ^{-y^2}dy $$ And so its derivative is $$ \frac {d \mathtt{erfc}(z)} {d z} = - \frac 2 {\sqrt {\pi}} e ^{-z^2} $$ Therefore, the discharge equals $$ Q = -kD \frac {\partial s} {\partial x} = A \sqrt{\frac {kDS} {\pi t}} \mathtt{exp} \left( -u^2 \right) $$ and for $ x = 0 $ $$ Q_0 = A \sqrt{\frac {kD S} {\pi t}}$$ Superposition Any varying head can be approximated using a series of constant heads over short time intervals. This allows to use de 1D solution for a sudden head change to simulate the effect on an aquifer of a varying river head. It is assumed that the aquifer is in direct good contact with the surface water at $x=0$. The superpostion may be written as $$ s(x, t) = \sum _{i=1} ^{N} \left{ A_i \mathtt{erfc} \sqrt{\frac {x^2 S} {4 kD (t - t_i)}} \right}, \,\,\, t \ge t_i $$ Clearly, $a$ term $i$ is non-existent when $t < t_i$. This formula can be computed by looping over the amplituces and times pertaining to each moment on which the amplitude changes. With a series of amplitudues A, what matters is the change of amplitude. So we need $$ A = A_0, A_1 - A_0, A_2 - A_1, ... 
A_n - A_{n-1} $$ Loading modules End of explanation def newfig(title='?', xlabel='?', ylabel='?', xlim=None, ylim=None, xscale='linear', yscale='linear', size_inches=(14, 8), fontsize=15): '''Setup a new axis for plotting''' fig, ax = plt.subplots() fig.set_size_inches(size_inches) ax.set_title(title, fontsize=fontsize) ax.set_xlabel(xlabel, fontsize=fontsize) ax.set_ylabel(ylabel, fontsize=fontsize) ax.set_xscale(xscale) ax.set_yscale(yscale) if xlim is not None: ax.set_xlim(xlim) if ylim is not None: ax.set_ylim(ylim) ax.grid(True) return ax Explanation: Convenience function to set up a graph End of explanation # aquifer properties kD = 900 # m2/d S = 0.1 # [-] # A is the water level at x=0 at the change times tc A = np.array([1.5, 0.5, 1.0, -1.2, 0.5, -1.8, 0.3, -3, 1, 0.5, -0.3]) # m tend = 30 tc = tend * np.random.rand(len(A)) # switch times tc.sort() dA = np.diff(np.hstack((0, A))) # m, the head changes at x=0 at the change times # show the switch times s0 and A ax = newfig('River stage', 'time [m]', 's(x, t) [m]') # this is an advanced way of neatly printed a line of formatted numbers # making sure that the numbers in both lines are exactly above each other # you could also just print(s0) and print(A) print('A = ' + ' '.join([f'{a:6.1f}' for a in A]) + ' [m]') print('dA = ' + ' '.join([f'{dai:6.1f}' for dai in dA]) + ' [m]') print('tc = ' + ' '.join([f'{tci:6.1f}' for tci in tc]) + ' [d]') # plot the amplitudes and their changes ax.step(tc, A, 'k', label="s0 = river", lw=3, where='post') #ax.step(tc, A, label='dA = change', where='post') ax.legend() Explanation: Implementation End of explanation times = np.linspace(0, tend, 101) # simulatioin times in days x = 150. # choose a value for x for which the graphs will be made # plot heads ax = newfig(f"Effect of varying river stage on groundwater at $x$={x:.0f}", 'time [d]', 'head change relative to river stage [m]') fig = ax.figure lines = [] Y = np.zeros((len(dA) + 2, len(times))) s = np.zeros_like(times) for tci, Ai in zip(tc, A): # Switch times and jumps s[times > tci] = Ai line, = ax.plot(times, s, 'b', lw=3, label="River level") Y[0] = s lines.append(line) s = np.zeros_like(times) for it, (tci, dAi) in enumerate(zip(tc, dA)): # Switch times and jumps u = np.sqrt((x**2 * S)/(4 * kD * (times[times > tci] - tci))) ds = dAi * erfc(u) # logical indexing Y[it + 1][times > tci] = ds line, = ax.plot(times[times > tci], ds, label=f'tc = {tci:.0f} d') lines.append(line) s[times > tci] += ds line, = ax.plot(times, s, 'k', lw=3, label="Sum") lines.append(line) Y[-1] = line.get_ydata() text = ax.text(0.25, 0.8, 't = {:5.2f}'.format(0), transform=ax.transAxes, fontsize=15, bbox=dict(boxstyle='round', facecolor='gray', alpha=0.3)) ax.legend(loc='lower left') def init(): global lines, text for line in lines: line.set_data([], []) text.set_text('t = {:5.2f}'.format(0)) return lines + [text] def animate(it): global lines, times for y, line in zip(Y, lines): line.set_data(times[:it], y[:it]) text.set_text('t = {:5.2f} d'.format(times[it])) return lines + [text] # call the animator. blit=True means only re-draw the parts that have changed. print("Patience, computing and generating video takes about 30 sec. 
on mac...") anim = animation.FuncAnimation(fig, animate, init_func=init, frames=len(times), fargs=(), interval=20, blit=True, repeat=False) plt.close(anim._fig) if True: out = HTML(anim.to_html5_video()) out else: anim.save('Sudden_changes_super_position_in_t.mp4', fps=20, extra_args=['-vcodec', 'libx264']) print(anim.save_count, " frames saved.") !ffmpeg -i Sudden_changes_super_position_in_t.mp4 -y Sudden_changes_super_position_in_t.gif out pwd Explanation: Simulation by superposition in time End of explanation t = np.linspace(0, 2 * len(tc), 1001.) # more points for more detail ax = newfig('Discharge Q [m2/d] at x=0', 'time [d]','Q [m2/d]') Q0 = np.zeros_like(t) # initialize the discharge as zeroes for tci, dAi in zip(tc, dA): Q0[t > tci] += dAi * np.sqrt(kD * S / (np.pi * (t[t > tci] - tci)) ) ax.plot(t, Q0, 'b', label='Q0') from scipy.special import exp1 exp1(5) Explanation: The discharge at $x=0$ The implementation of the discharge at $x=0$ is straightforward given the mathematical expression derived above. End of explanation
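The superposition loop can also be wrapped into a small reusable function; this is a sketch using the same names (tc, dA, kD, S, erfc) already defined in the notebook:

```python
def head_superposition(x, t, tc, dA, kD, S):
    """Head change s(x, t) from a series of sudden changes dA at switch times tc."""
    t = np.asarray(t, dtype=float)
    s = np.zeros_like(t)
    for tci, dAi in zip(tc, dA):
        live = t > tci  # a change only contributes after its switch time
        u = np.sqrt(x**2 * S / (4 * kD * (t[live] - tci)))
        s[live] += dAi * erfc(u)
    return s

# Example: head at x = 150 m over the same simulation times
s_check = head_superposition(150.0, times, tc, dA, kD, S)
```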
15,906
Given the following text description, write Python code to implement the functionality described below step by step Description: Sampling from a Population The law of averages also holds when the random sample is drawn from individuals in a large population. As an example, we will study a population of flight delay times. The table united contains data for United Airlines domestic flights departing from San Francisco in the summer of 2015. The data are made publicly available by the Bureau of Transportation Statistics in the United States Department of Transportation. There are 13,825 rows, each corresponding to a flight. The columns are the date of the flight, the flight number, the destination airport code, and the departure delay time in minutes. Some delay times are negative; those flights left early. Step1: One flight departed 16 minutes early, and one was 580 minutes late. The other delay times were almost all between -10 minutes and 200 minutes, as the histogram below shows. Step2: For the purposes of this section, it is enough to zoom in on the bulk of the data and ignore the 0.8% of flights that had delays of more than 200 minutes. This restriction is just for visual convenience; the table still retains all the data. Step3: The height of the [0, 10) bar is just under 3% per minute, which means that just under 30% of the flights had delays between 0 and 10 minutes. That is confirmed by counting rows Step4: Empirical Distribution of the Sample Let us now think of the 13,825 flights as a population, and draw random samples from it with replacement. It is helpful to package our analysis code into a function. The function empirical_delays takes the sample size as its argument and returns the array of sampled flight delays. Step5: As we saw with the dice, as the sample size increases, the empirical histogram of the sample more closely resembles the histogram of the population. Compare these histograms to the population histogram above.
Python Code: united = Table.read_table('http://inferentialthinking.com/notebooks/united_summer2015.csv') united Explanation: Sampling from a Population The law of averages also holds when the random sample is drawn from individuals in a large population. As an example, we will study a population of flight delay times. The table united contains data for United Airlines domestic flights departing from San Francisco in the summer of 2015. The data are made publicly available by the Bureau of Transportation Statistics in the United States Department of Transportation. There are 13,825 rows, each corresponding to a flight. The columns are the date of the flight, the flight number, the destination airport code, and the departure delay time in minutes. Some delay times are negative; those flights left early. End of explanation united.column('Delay').min() united.column('Delay').max() delay_opts = { 'xlabel': 'Delay (minute)', 'ylabel': 'Percent per minute', 'xlim': (-20, 600), 'ylim': (0, 0.045), 'bins': 62, } nbi.hist(united.column('Delay'), options=delay_opts) Explanation: One flight departed 16 minutes early, and one was 580 minutes late. The other delay times were almost all between -10 minutes and 200 minutes, as the histogram below shows. End of explanation united.where('Delay', are.above(200)).num_rows/united.num_rows delay_opts = { 'xlabel': 'Delay (minute)', 'ylabel': 'Percent per minute', 'xlim': (-20, 200), 'ylim': (0, 0.045), 'bins': 22, } nbi.hist(united.column('Delay'), options=delay_opts) Explanation: For the purposes of this section, it is enough to zoom in on the bulk of the data and ignore the 0.8% of flights that had delays of more than 200 minutes. This restriction is just for visual convenience; the table still retains all the data. End of explanation united.where('Delay', are.between(0, 10)).num_rows/united.num_rows Explanation: The height of the [0, 10) bar is just under 3% per minute, which means that just under 30% of the flights had delays between 0 and 10 minutes. That is confirmed by counting rows: End of explanation def empirical_hist_delay(sample_size): return united.sample(sample_size).column('Delay') Explanation: Empirical Distribution of the Sample Let us now think of the 13,825 flights as a population, and draw random samples from it with replacement. It is helpful to package our analysis code into a function. The function empirical_delays takes the sample size as its argument and returns the array of sampled flight delays. End of explanation nbi.hist(empirical_hist_delay, options=delay_opts, sample_size=widgets.ToggleButtons(options=[10, 100, 1000, 10000], description='Sample Size: ')) Explanation: As we saw with the dice, as the sample size increases, the empirical histogram of the sample more closely resembles the histogram of the population. Compare these histograms to the population histogram above. End of explanation
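The same law-of-averages behaviour shows up in a single summary number: the mean delay of a random sample settles toward the population mean as the sample size grows. A small sketch using the table defined above:

```python
# Compare sample means of increasing size against the population mean delay
population_mean = united.column('Delay').mean()
for size in [10, 100, 1000, 10000]:
    sample_mean = united.sample(size).column('Delay').mean()
    print(size, round(sample_mean, 2), 'vs population', round(population_mean, 2))
```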
15,907
Given the following text description, write Python code to implement the functionality described below step by step Description: Regression Week 1 Step1: Useful pandas summary functions In order to make use of the closed form solution as well as take advantage of graphlab's built in functions we will review some important ones. In particular Step2: As we see we get the same answer both ways Step3: Aside Step4: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line Step5: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data! Step6: Predicting Values Now that we have the model parameters Step7: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above. Quiz Question Step8: Residual Sum of Squares Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals, and residual is just a fancy word for the difference between the predicted output and the true output. Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope Step9: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero! Step10: Now use your function to calculate the RSS on training data from the squarefeet model calculated above. Quiz Question Step11: Predict the squarefeet given price What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x). Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output! Step12: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be. Quiz Question Step13: New Model Step14: Test your Linear Regression Algorithm Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet. Quiz Question
Python Code: import pandas as pd import numpy as np dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':str, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int} sales = pd.read_csv('kc_house_data.csv', dtype=dtype_dict) train_data = pd.read_csv('kc_house_train_data.csv', dtype=dtype_dict) test_data = pd.read_csv('kc_house_test_data.csv', dtype=dtype_dict) sales.head() Explanation: Regression Week 1: Simple Linear Regression In this notebook we will use data on house sales in King County to predict house prices using simple (one input) linear regression. You will: * Use graphlab SArray and SFrame functions to compute important summary statistics * Write a function to compute the Simple Linear Regression weights using the closed form solution * Write a function to make predictions of the output given the input feature * Turn the regression around to predict the input given the output * Compare two different models for predicting house prices In this notebook you will be provided with some already complete code as well as some code that you should complete yourself in order to answer quiz questions. The code we provide to complte is optional and is there to assist you with solving the problems but feel free to ignore the helper code and write your own. Import module End of explanation # Let's compute the mean of the House Prices in King County in 2 different ways. prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray # recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses: sum_prices = prices.sum() num_houses = len(prices) # when prices is an SArray .size() returns its length avg_price_1 = sum_prices/num_houses avg_price_2 = prices.mean() # if you just want the average, the .mean() function print "average price via method 1: " + str(avg_price_1) print "average price via method 2: " + str(avg_price_2) Explanation: Useful pandas summary functions In order to make use of the closed form soltion as well as take advantage of graphlab's built in functions we will review some important ones. In particular: * Computing the sum of an SArray * Computing the arithmetic average (mean) of an SArray * multiplying SArrays by constants * multiplying SArrays by other SArrays End of explanation # if we want to multiply every price by 0.5 it's a simple as: half_prices = 0.5*prices # Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with * prices_squared = prices*prices sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up. 
print "the sum of price squared is: " + str(sum_prices_squared) Explanation: As we see we get the same answer both ways End of explanation def simple_linear_regression(input_feature, output): n = len(input_feature) x = input_feature y = output # compute the mean of input_feature and output x_mean = x.mean() y_mean = y.mean() # compute the product of the output and the input_feature and its mean sum_xy = (y * x).sum() xy_by_n = (y.sum() * x.sum())/n # compute the squared value of the input_feature and its mean x_square = (x**2).sum() xx_by_n = (x.sum() * x.sum())/n # use the formula for the slope slope = (sum_xy - xy_by_n) / (x_square - xx_by_n) # use the formula for the intercept intercept = y_mean - (slope * x_mean) return (intercept, slope) Explanation: Aside: The python notation x.xxe+yy means x.xx * 10^(yy). e.g 100 = 10^2 = 1*10^2 = 1e2 Build a generic simple linear regression function Armed with these SArray functions we can use the closed form solution found from lecture to compute the slope and intercept for a simple linear regression on observations stored as SArrays: input_feature, output. Complete the following function (or write your own) to compute the simple linear regression slope and intercept: End of explanation test_feature = np.array(range(5)) test_output = np.array(1 + 1*test_feature) (test_intercept, test_slope) = simple_linear_regression(test_feature, test_output) print "Intercept: " + str(test_intercept) print "Slope: " + str(test_slope) Explanation: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line: output = 1 + 1*input_feature then we know both our slope and intercept should be 1 End of explanation sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'].values, train_data['price'].values) print "Intercept: " + str(sqft_intercept) print "Slope: " + str(sqft_slope) Explanation: Now that we know it works let's build a regression model for predicting price based on sqft_living. Rembember that we train on train_data! End of explanation def get_regression_predictions(input_feature, intercept, slope): # calculate the predicted values: predicted_values = intercept + (slope * input_feature) return predicted_values Explanation: Predicting Values Now that we have the model parameters: intercept & slope we can make predictions. Using SArrays it's easy to multiply an SArray by a constant and add a constant value. Complete the following function to return the predicted output given the input_feature, slope and intercept: End of explanation my_house_sqft = 2650 estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope) print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price) Explanation: Now that we can calculate a prediction given the slop and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estiamted above. Quiz Question: Using your Slope and Intercept from (4), What is the predicted price for a house with 2650 sqft? 
End of explanation def get_residual_sum_of_squares(input_feature, output, intercept, slope): # First get the predictions predicted_values = intercept + (slope * input_feature) # then compute the residuals (since we are squaring it doesn't matter which order you subtract) residuals = output - predicted_values # square the residuals and add them up RSS = (residuals * residuals).sum() return(RSS) Explanation: Residual Sum of Squares Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output. Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope: End of explanation print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0 Explanation: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero! End of explanation rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope) print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft) Explanation: Now use your function to calculate the RSS on training data from the squarefeet model calculated above. Quiz Question: According to this function and the slope and intercept from the squarefeet model What is the RSS for the simple linear regression using squarefeet to predict prices on TRAINING data? End of explanation def inverse_regression_predictions(output, intercept, slope): # solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions: estimated_feature = (output - intercept)/slope return estimated_feature Explanation: Predict the squarefeet given price What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x). Comlplete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output! End of explanation my_house_price = 800000 estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope) print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet) Explanation: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that coses $800,000 to be. Quiz Question: According to this function and the regression slope and intercept from (3) what is the estimated square-feet for a house costing $800,000? End of explanation # Estimate the slope and intercept for predicting 'price' based on 'bedrooms' sqft_intercept, sqft_slope = simple_linear_regression(train_data['bedrooms'].values, train_data['price'].values) print "Intercept: " + str(sqft_intercept) print "Slope: " + str(sqft_slope) Explanation: New Model: estimate prices from bedrooms We have made one model for predicting house prices using squarefeet, but there are many other features in the sales SFrame. 
Use your simple linear regression function to estimate the regression parameters from predicting Prices based on number of bedrooms. Use the training data! End of explanation # Compute RSS when using bedrooms on TEST data: sqft_intercept, sqft_slope = simple_linear_regression(train_data['bedrooms'].values, train_data['price'].values) rss_prices_on_bedrooms = get_residual_sum_of_squares(test_data['bedrooms'].values, test_data['price'].values, sqft_intercept, sqft_slope) print 'The RSS of predicting Prices based on Bedrooms is : ' + str(rss_prices_on_bedrooms) # Compute RSS when using squarfeet on TEST data: sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'].values, train_data['price'].values) rss_prices_on_sqft = get_residual_sum_of_squares(test_data['sqft_living'].values, test_data['price'].values, sqft_intercept, sqft_slope) print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft) Explanation: Test your Linear Regression Algorithm Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet. Quiz Question: Which model (square feet or bedrooms) has lowest RSS on TEST data? Think about why this might be the case. End of explanation
15,908
Given the following text description, write Python code to implement the functionality described below step by step Description: pyplearnr demo Here I demonstrate pyplearnr, a wrapper for building/training/validating scikit learn pipelines using GridSearchCV or RandomizedSearchCV. Quick keyword arguments give access to optional feature selection (e.g. SelectKBest), scaling (e.g. standard scaling), use of feature interactions, and data transformations (e.g. PCA, t-SNE) before being fed to a classifier/regressor. After building the pipeline, data can be used to perform a nested (stratified if classification) k-folds cross-validation and output an object containing data from the process, including the best model. Various default pipeline step parameters for the grid-search for quick iteration over different pipelines, with the option to ignore/override them in a flexible way. This is an on-going project that I intend to update with more models and pre-processing options and also with corresponding defaults. Titanic dataset example Here I use the Titanic dataset I've cleaned and pickled in a separate tutorial. Import data Step1: By "cleaned" I mean I've derived titles (e.g. "Mr.", "Mrs.", "Dr.", etc) from the passenger names, imputed the missing Age values using polynomial regression with grid-searched 10-fold cross-validation, filled in the 3 missing Embarked values with the mode, and removed all fields that could be considered an id for that individual. Thus, there is no missing data. Set categorical features as type 'category' Step2: One-hot encode categorical features Step3: Now we have 17 features. Split into input/output data Step4: Null model Step5: Thus, null accuracy of ~62% if always predict death. Import data science library and initialize optimized pipeline collection Step6: Basic models w/ no pre-processing KNN Here we do a simple K-nearest neighbors (KNN) classification with stratified 10-fold (default) cross-validation with a grid search over the default of 1 to 30 nearest neighbors and the use of either "uniform" or "distance" weights Step7: The output of the train_model() method is an instance of my custom OptimizedPipeline class containing all of the data associated with the nested stratified k-folds cross-validation. This includes the data, its test/train splits (based on the test_size percentage keyword argument), the GridSearchCV or RandomizedGridSearchCV object, the Pipeline object that has been retrained using all of the data with the best parameters, test/train scores, and validation metrics/reports. A report can be printed immediately after the fit by setting the suppress_output keyword argument to True. It lists the steps in the pipeline, their optimized settings, the test/training accuracy (or L2 regression score), the grid search parameters, and the best parameters. If the estimator used is a classifier it also includes the confusion matrix, normalized confusion matrix, and a classification report containing precision/recall/f1-score for each class. This same report is also accessible by printing the OptimizedPipeline class instance Step8: Turns out that the best settings are 12 neighbors and the use of the 'uniform' weight. Note how I've set the random_state keyword agument to 6 so that the models can be compared using the same test/train split. The default parameters to grid-search over for k-nearest neighbors is 1 to 30 neighbors and either the 'uniform' or 'distance' weight. 
The defaults for the pre-processing steps, classifiers, and regressors can be viewed by using the get_default_pipeline_step_parameters() method with the number of features as the input
Step9: These default parameters can be ignored by setting the use_default_param_dist keyword argument to False.
The param_dist keyword argument can be used to keep default parameters (if use_default_param_dist set to True) or to be used as the sole source of parameters (if use_default_param_dist set to False). Here is a demonstration of generation of default parameters with those in param_dist being overridden
Step10: Note how the n_neighbors parameter was 30 to 499 instead of 1 to 30. Here's an example of only using param_dist for parameters
Step11: Note how the estimator__weights parameter isn't set for the KNN estimator.
Other models
This code currently supports K-nearest neighbors, logistic regression, support vector machines, multilayer perceptrons, random forest, and adaboost. We can loop through and pick the best model like this
Step12: Random forest performed the best with a test score of ~0.854. Let's look at the report
Step13: The optimal parameter was 96 for the n_estimators parameter for the RandomForestClassifier.
All models with standard scaling
We can set the scaling type using the scale_type keyword argument
Step14: Random forest without scaling still appears to have the best test score. Though that with scaling had closer test and train scores.
All models with SelectKBest feature selection
Setting the feature_selection_type keyword argument will use SelectKBest with f_classif for feature selection
Step15: Again, random_forest performs the best. Though K-nearest neighbors appears to have the smallest difference between testing and training sets.
All models with feature interaction
Setting the feature_interactions keyword argument to True will cause the use of feature interactions. The default is to only consider pairwise products, though this can be set higher by overriding using param_dist
Step16: This doesn't appear to result in many gains in this case.
All models with transformed data
Setting the transform_type to 'pca' or 't-sne' will apply Principal Component Analysis or t-distributed stochastic neighbor embedding, respectively, to the data before applying the estimator
Step17: Here's the use of t-SNE
Step18: Wow, that took forever. We can get a better idea on how long this will take by setting the num_parameter_combos keyword argument. Setting this will only allow that number of grid combinations to be used for each run
Step19: Applying t-sne to the data and then testing the 6 classifiers takes about 7 min. This could be optimized by pre-transforming the data once and then applying the classifiers. I'm thinking of creating some sort of container class that should be able to optimize this in the future.
SelectKBest, standard scaling, and all classifiers
Finally, here we apply feature selection and standard scaling for all 6 classifiers
Step20: With 48 different pre-processing/transformation/classification combinations, this has become rather unwieldy. Here I make a quick dataframe of the test/train scores and visualize
Step21: The best training score was achieved by the random forest classifier.
Step22: So the best model was random forest. Here's the report for the model
Python Code: import pandas as pd df = pd.read_pickle('trimmed_titanic_data.pkl') df.info() Explanation: pyplearnr demo Here I demonstrate pyplearnr, a wrapper for building/training/validating scikit learn pipelines using GridSearchCV or RandomizedSearchCV. Quick keyword arguments give access to optional feature selection (e.g. SelectKBest), scaling (e.g. standard scaling), use of feature interactions, and data transformations (e.g. PCA, t-SNE) before being fed to a classifier/regressor. After building the pipeline, data can be used to perform a nested (stratified if classification) k-folds cross-validation and output an object containing data from the process, including the best model. Various default pipeline step parameters for the grid-search for quick iteration over different pipelines, with the option to ignore/override them in a flexible way. This is an on-going project that I intend to update with more models and pre-processing options and also with corresponding defaults. Titanic dataset example Here I use the Titanic dataset I've cleaned and pickled in a separate tutorial. Import data End of explanation simulation_df = df.copy() categorical_features = ['Survived','Pclass','Sex','Embarked','Title'] for feature in categorical_features: simulation_df[feature] = simulation_df[feature].astype('category') simulation_df.info() Explanation: By "cleaned" I mean I've derived titles (e.g. "Mr.", "Mrs.", "Dr.", etc) from the passenger names, imputed the missing Age values using polynomial regression with grid-searched 10-fold cross-validation, filled in the 3 missing Embarked values with the mode, and removed all fields that could be considered an id for that individual. Thus, there is no missing data. Set categorical features as type 'category' End of explanation simulation_df = pd.get_dummies(simulation_df,drop_first=True) simulation_df.info() Explanation: One-hot encode categorical features End of explanation # Set output feature output_feature = 'Survived_1' # Get all column names column_names = list(simulation_df.columns) # Get input features input_features = [x for x in column_names if x != output_feature] # Split into features and responses X = simulation_df[input_features].copy() y = simulation_df[output_feature].copy() Explanation: Now we have 17 features. Split into input/output data End of explanation simulation_df['Survived_1'].value_counts().values/float(simulation_df['Survived_1'].value_counts().values.sum()) Explanation: Null model End of explanation import pyplearnr as ppl optimized_pipelines = {} Explanation: Thus, null accuracy of ~62% if always predict death. 
Import data science library and initialize optimized pipeline collection End of explanation %%time reload(dsl) estimator = 'knn' # Set pipeline keyword arguments optimized_pipeline_kwargs = { 'feature_selection_type': None, 'scale_type': None, 'feature_interactions': False, 'transform_type': None } # Initialize pipeline optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs) # Set pipeline fitting parameters fit_kwargs = { 'cv': 10, 'num_parameter_combos': None, 'n_jobs': -1, 'random_state': 6, 'suppress_output': True, 'use_default_param_dist': True, 'param_dist': None, 'test_size': 0.2 # 20% saved as test set } # Fit data optimized_pipeline.fit(X,y,**fit_kwargs) # Save optimized_pipelines[estimator] = optimized_pipeline Explanation: Basic models w/ no pre-processing KNN Here we do a simple K-nearest neighbors (KNN) classification with stratified 10-fold (default) cross-validation with a grid search over the default of 1 to 30 nearest neighbors and the use of either "uniform" or "distance" weights: End of explanation print optimized_pipeline Explanation: The output of the train_model() method is an instance of my custom OptimizedPipeline class containing all of the data associated with the nested stratified k-folds cross-validation. This includes the data, its test/train splits (based on the test_size percentage keyword argument), the GridSearchCV or RandomizedGridSearchCV object, the Pipeline object that has been retrained using all of the data with the best parameters, test/train scores, and validation metrics/reports. A report can be printed immediately after the fit by setting the suppress_output keyword argument to True. It lists the steps in the pipeline, their optimized settings, the test/training accuracy (or L2 regression score), the grid search parameters, and the best parameters. If the estimator used is a classifier it also includes the confusion matrix, normalized confusion matrix, and a classification report containing precision/recall/f1-score for each class. This same report is also accessible by printing the OptimizedPipeline class instance: End of explanation pre_processing_grid_parameters,classifier_grid_parameters,regression_grid_parameters = \ optimized_pipeline.get_default_pipeline_step_parameters(X.shape[0]) classifier_grid_parameters['knn'] Explanation: Turns out that the best settings are 12 neighbors and the use of the 'uniform' weight. Note how I've set the random_state keyword agument to 6 so that the models can be compared using the same test/train split. The default parameters to grid-search over for k-nearest neighbors is 1 to 30 neighbors and either the 'uniform' or 'distance' weight. 
The defaults for the pre-processing steps, classifiers, and regressors can be viewed by using the get_default_pipeline_step_parameters() method with the number of features as the input: End of explanation %%time reload(dsl) model_name = 'custom_override_%s'%(estimator_name) # Set custom parameters param_dist = { 'estimator__n_neighbors': range(30,500) } estimator = 'knn' # Set pipeline keyword arguments optimized_pipeline_kwargs = { 'feature_selection_type': None, 'scale_type': None, 'feature_interactions': False, 'transform_type': None } # Initialize pipeline optimized_pipeline = dsl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs) # Set pipeline fitting parameters fit_kwargs = { 'cv': 10, 'num_parameter_combos': None, 'n_jobs': -1, 'random_state': 6, 'suppress_output': False, 'use_default_param_dist': True, 'param_dist': param_dist, 'test_size': 0.2 # 20% saved as test set } # Fit data optimized_pipeline.fit(X,y,**fit_kwargs) # Save optimized_pipelines[model_name] = optimized_pipeline Explanation: These default parameters can be ignored by setting the use_default_param_dist keyword argument to False. The param_dist keyword argument can be used to keep default parameters (if use_default_param_dist set to True) or to be used as the sole source of parameters (if use_default_param_dist set to False). Here is a demonstration of generation of default parameters with those in param_dist being overridden: End of explanation %%time reload(dsl) model_name = 'from_scratch_%s'%(estimator_name) # Set custom parameters param_dist = { 'estimator__n_neighbors': range(10,30) } estimator = 'knn' # Set pipeline keyword arguments optimized_pipeline_kwargs = { 'feature_selection_type': None, 'scale_type': None, 'feature_interactions': False, 'transform_type': None } # Initialize pipeline optimized_pipeline = dsl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs) # Set pipeline fitting parameters fit_kwargs = { 'cv': 10, 'num_parameter_combos': None, 'n_jobs': -1, 'random_state': 6, 'suppress_output': False, 'use_default_param_dist': False, 'param_dist': param_dist, 'test_size': 0.2 # 20% saved as test set } # Fit data optimized_pipeline.fit(X,y,**fit_kwargs) # Save optimized_pipelines[model_name] = optimized_pipeline Explanation: Note how the n_neighbors parameter was 30 to 499 instead of 1 to 30. Here's an example of only using param_dist for parameters: End of explanation %%time reload(dsl) classifiers = ['knn','logistic_regression','svm', 'multilayer_perceptron','random_forest','adaboost'] for estimator in classifiers: # Set pipeline keyword arguments optimized_pipeline_kwargs = { 'feature_selection_type': None, 'scale_type': None, 'feature_interactions': False, 'transform_type': None } # Initialize pipeline optimized_pipeline = dsl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs) # Set pipeline fitting parameters fit_kwargs = { 'cv': 10, 'num_parameter_combos': None, 'n_jobs': -1, 'random_state': 6, 'suppress_output': True, 'use_default_param_dist': True, 'param_dist': None, 'test_size': 0.2 } # Fit data optimized_pipeline.fit(X,y,**fit_kwargs) # Save optimized_pipelines[estimator] = optimized_pipeline format_str = '{0:<22} {1:<15} {2:<15}' print format_str.format(*['model','train score','test score']) print format_str.format(*['','','']) for x in [[key,value.train_score_,value.test_score_] for key,value in optimized_pipelines.iteritems()]: print format_str.format(*x) Explanation: Note how the estimator__weights parameter isn't set for the KNN estimator. 
Other models This code currently supports K-nearest neighbors, logistic regression, support vector machines, multilayer perceptrons, random forest, and adaboost. We can loop through and pick the best model like this: End of explanation print optimized_pipelines['random_forest'] Explanation: Random forest performed the best with a test score of ~0.854. Lets look at the report: End of explanation %%time reload(dsl) classifiers = ['knn','logistic_regression','svm', 'multilayer_perceptron','random_forest','adaboost'] prefix = 'scale' for estimator in classifiers: # Set pipeline keyword arguments optimized_pipeline_kwargs = { 'feature_selection_type': None, 'scale_type': 'standard', 'feature_interactions': False, 'transform_type': None } # Initialize pipeline optimized_pipeline = dsl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs) # Set pipeline fitting parameters fit_kwargs = { 'cv': 10, 'num_parameter_combos': None, 'n_jobs': -1, 'random_state': 6, 'suppress_output': True, 'use_default_param_dist': True, 'param_dist': None, 'test_size': 0.2 } # Fit data optimized_pipeline.fit(X,y,**fit_kwargs) # Form name used to save optimized pipeline pipeline_name = '%s_%s'%(prefix,estimator) # Save optimized_pipelines[pipeline_name] = optimized_pipeline format_str = '{0:<30} {1:<15} {2:<15}' print format_str.format(*['model','train score','test score']) print format_str.format(*['','','']) for x in [[key,value.train_score_,value.test_score_] for key,value in optimized_pipelines.iteritems()]: print format_str.format(*x) Explanation: The optimal parameter was 96 for the n_estimators parameter for the RandomizedForestClassifier. All models with standard scaling We can set the scaling type using the scale_type keyword argument: End of explanation %%time reload(dsl) classifiers = ['knn','logistic_regression','svm', 'multilayer_perceptron','random_forest','adaboost'] prefix = 'select' for estimator in classifiers: # Set pipeline keyword arguments optimized_pipeline_kwargs = { 'feature_selection_type': 'select_k_best', 'scale_type': None, 'feature_interactions': False, 'transform_type': None } # Initialize pipeline optimized_pipeline = dsl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs) # Set pipeline fitting parameters fit_kwargs = { 'cv': 10, 'num_parameter_combos': None, 'n_jobs': -1, 'random_state': 6, 'suppress_output': True, 'use_default_param_dist': True, 'param_dist': None, 'test_size': 0.2 } # Fit data optimized_pipeline.fit(X,y,**fit_kwargs) # Form name used to save optimized pipeline pipeline_name = '%s_%s'%(prefix,estimator) # Save optimized_pipelines[pipeline_name] = optimized_pipeline format_str = '{0:<30} {1:<15} {2:<15} {3:<15}' print format_str.format(*['model','train score','test score','train-test']) print format_str.format(*['','','','']) for x in [[key,value.train_score_,value.test_score_,value.train_score_-value.test_score_] for key,value in optimized_pipelines.iteritems()]: print format_str.format(*x) Explanation: Random forest without scaling still appears to have the best test score. Though that with scaling had closer test and train scores. 
All models with SelectKBest feature selection Setting the feature_selection_type keyword argument will use SelectKBest with f_classif for feature selection: End of explanation %%time reload(dsl) classifiers = ['knn','logistic_regression','svm','multilayer_perceptron','random_forest','adaboost'] prefix = 'interact' for estimator in classifiers: # Set pipeline keyword arguments optimized_pipeline_kwargs = { 'feature_selection_type': None, 'scale_type': None, 'feature_interactions': True, 'transform_type': None } # Initialize pipeline optimized_pipeline = dsl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs) # Set pipeline fitting parameters fit_kwargs = { 'cv': 10, 'num_parameter_combos': None, 'n_jobs': -1, 'random_state': 6, 'suppress_output': True, 'use_default_param_dist': True, 'param_dist': None, 'test_size': 0.2 } # Fit data optimized_pipeline.fit(X,y,**fit_kwargs) # Form name used to save optimized pipeline pipeline_name = '%s_%s'%(prefix,estimator) # Save optimized_pipelines[pipeline_name] = optimized_pipeline format_str = '{0:<30} {1:<15} {2:<15} {3:<15}' print format_str.format(*['model','train score','test score','train-test']) print format_str.format(*['','','','']) for x in [[key,value.train_score_,value.test_score_,value.train_score_-value.test_score_] \ for key,value in optimized_pipelines.iteritems()]: print format_str.format(*x) Explanation: Again, random_forest performs the best. Though K-nearest neighbors appears to have the smallest difference between testing and training sets. All models with feature interaction Setting the feature_interactions keyword argument to True will cause the use of feature interactions. The default is to only consider pairwise products, though this be set to higher by overriding using param_dist: End of explanation %%time reload(dsl) classifiers = ['knn','logistic_regression','svm', 'multilayer_perceptron','random_forest','adaboost'] prefix = 'pca' for estimator in classifiers: # Set pipeline keyword arguments optimized_pipeline_kwargs = { 'feature_selection_type': None, 'scale_type': None, 'feature_interactions': None, 'transform_type': 'pca' } # Initialize pipeline optimized_pipeline = dsl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs) # Set pipeline fitting parameters fit_kwargs = { 'cv': 10, 'num_parameter_combos': None, 'n_jobs': -1, 'random_state': 6, 'suppress_output': True, 'use_default_param_dist': True, 'param_dist': None, 'test_size': 0.2 } # Fit data optimized_pipeline.fit(X,y,**fit_kwargs) # Form name used to save optimized pipeline pipeline_name = '%s_%s'%(prefix,estimator) # Save optimized_pipelines[pipeline_name] = optimized_pipeline format_str = '{0:<30} {1:<15} {2:<15} {3:<15}' print format_str.format(*['model','train score','test score','train-test']) print format_str.format(*['','','','']) for x in [[key,value.train_score_,value.test_score_,value.train_score_-value.test_score_] for key,value in optimized_pipelines.iteritems()]: print format_str.format(*x) Explanation: This doesn't appear to result in many gains in this case. 
All models with transformed data Setting the transform_type to 'pca' or 't-sne' will apply Principal Component Analysis or t-distributed stochastic neighbor embedding, respectively, to the data before applying the estimator: End of explanation %%time reload(dsl) classifiers = ['knn','logistic_regression','svm','multilayer_perceptron','random_forest','adaboost'] prefix = 't_sne' for estimator in classifiers: # Set pipeline keyword arguments optimized_pipeline_kwargs = { 'feature_selection_type': None, 'scale_type': None, 'feature_interactions': None, 'transform_type': 't-sne' } # Initialize pipeline optimized_pipeline = dsl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs) # Set pipeline fitting parameters fit_kwargs = { 'cv': 10, 'num_parameter_combos': None, 'n_jobs': -1, 'random_state': 6, 'suppress_output': True, 'use_default_param_dist': True, 'param_dist': None, 'test_size': 0.2 } # Fit data optimized_pipeline.fit(X,y,**fit_kwargs) # Form name used to save optimized pipeline pipeline_name = '%s_%s'%(prefix,estimator) # Save optimized_pipelines[pipeline_name] = optimized_pipeline format_str = '{0:<30} {1:<15} {2:<15} {3:<15}' print format_str.format(*['model','train score','test score','train-test']) print format_str.format(*['','','','']) for x in [[key,value.train_score_,value.test_score_,value.train_score_-value.test_score_] for key,value in optimized_pipelines.iteritems()]: print format_str.format(*x) Explanation: Here's the use of t-SNE: End of explanation %%time reload(dsl) classifiers = ['knn','logistic_regression','svm', 'multilayer_perceptron','random_forest','adaboost'] prefix = 't_sne_less_combo' for estimator in classifiers: # Set pipeline keyword arguments optimized_pipeline_kwargs = { 'feature_selection_type': None, 'scale_type': None, 'feature_interactions': None, 'transform_type': 't-sne' } # Initialize pipeline optimized_pipeline = dsl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs) # Set pipeline fitting parameters fit_kwargs = { 'cv': 10, 'num_parameter_combos': 1, 'n_jobs': -1, 'random_state': 6, 'suppress_output': True, 'use_default_param_dist': True, 'param_dist': None, 'test_size': 0.2 } # Fit data optimized_pipeline.fit(X,y,**fit_kwargs) # Form name used to save optimized pipeline pipeline_name = '%s_%s'%(prefix,estimator) # Save optimized_pipelines[pipeline_name] = optimized_pipeline Explanation: Wow, that took forever. We can get a better idea on how long this will take by setting the num_parameter_combos keyword argument. 
Setting this will only allow that number of grid combinations to be used for each run: End of explanation %%time reload(dsl) classifiers = ['knn','logistic_regression','svm', 'multilayer_perceptron','random_forest','adaboost'] prefix = 'select_standard' for estimator in classifiers: # Set pipeline keyword arguments optimized_pipeline_kwargs = { 'feature_selection_type': 'select_k_best', 'scale_type': 'standard', 'feature_interactions': None, 'transform_type': None } # Initialize pipeline optimized_pipeline = dsl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs) # Set pipeline fitting parameters fit_kwargs = { 'cv': 10, 'num_parameter_combos': None, 'n_jobs': -1, 'random_state': 6, 'suppress_output': True, 'use_default_param_dist': True, 'param_dist': None, 'test_size': 0.2 } # Fit data optimized_pipeline.fit(X,y,**fit_kwargs) # Form name used to save optimized pipeline pipeline_name = '%s_%s'%(prefix,estimator) # Save optimized_pipelines[pipeline_name] = optimized_pipeline format_str = '{0:<40} {1:<15} {2:<15} {3:<15}' print format_str.format(*['model','train score','test score','train-test']) print format_str.format(*['','','','']) for x in [[key,value.train_score_,value.test_score_,value.train_score_-value.test_score_] for key,value in optimized_pipelines.iteritems()]: print format_str.format(*x) len(optimized_pipelines) Explanation: Applying t-sne to the data and then testing the 6 classifiers takes about 7 min. This could be optimized by pre-transforming the data once and then applying the classifiers. I'm thinking of creating some sort of container class that should be able to optimize this in the future. SelectKBest, standard scaling, and all classifiers Finally, here we appply feature selection and standard scaling for all 6 classifiers: End of explanation %matplotlib inline model_indices = optimized_pipelines.keys() train_scores = [value.train_score_ for key,value in optimized_pipelines.iteritems()] test_scores = [value.test_score_ for key,value in optimized_pipelines.iteritems()] score_df = pd.DataFrame({'training_score':train_scores,'test_score':test_scores}, index=model_indices) score_df['test-train'] = score_df['test_score']-score_df['training_score'] score_df['test_score'].sort_values().plot(kind='barh',figsize=(10,20)) Explanation: With 48 different pre-processing/transformation/classification combinations, this has become rather unwieldy. Here I make a quick dataframe of the test/train scores and visualize: End of explanation score_df['test-train'].sort_values().plot(kind='barh',figsize=(10,20)) ax = score_df.plot(x=['test_score'],y='test-train',style='o',legend=None) ax.set_xlabel('test score') ax.set_ylabel('test-train') Explanation: The best training score was acheived by the random forest classifier. End of explanation print optimized_pipelines['random_forest'] Explanation: So the best model was random forest. Here's the report for the model: End of explanation
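For readers who want the same end result without the pyplearnr wrapper, a rough plain scikit-learn equivalent of the final feature-selection + scaling + random forest pipeline is sketched below (recent scikit-learn). The k and n_estimators grids are illustrative placeholders rather than pyplearnr's defaults, and X and y are assumed to be the one-hot-encoded Titanic features and labels built earlier in this notebook.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GridSearchCV

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=6)

pipe = Pipeline([
    ('select', SelectKBest(f_classif)),   # feature selection
    ('scale', StandardScaler()),          # standard scaling
    ('clf', RandomForestClassifier(random_state=6)),
])

param_grid = {
    'select__k': [5, 10, 'all'],          # illustrative grid
    'clf__n_estimators': [50, 100, 200],  # illustrative grid
}

search = GridSearchCV(pipe, param_grid, cv=10, n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_)
print('train accuracy: %.3f, test accuracy: %.3f'
      % (search.score(X_train, y_train), search.score(X_test, y_test)))
This is essentially what the wrapper automates: the step__parameter naming for the grid, the stratified k-fold grid search, and the train/test bookkeeping.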
15,909
Given the following text description, write Python code to implement the functionality described below step by step Description: Regressão Linear Este notebook mostra uma implementação básica de Regressão Linear e o uso da biblioteca MLlib do PySpark para a tarefa de regressão na base de dados Million Song Dataset do repositório UCI Machine Learning Repository. Nosso objetivo é predizer o ano de uma música através dos seus atributos de áudio. Neste notebook Step2: (1b) Usando LabeledPoint Na MLlib, bases de dados rotuladas devem ser armazenadas usando o objeto LabeledPoint. Escreva a função parsePoint que recebe como entrada uma amostra de dados, transforma os dados usandoo comando unicode.split, e retorna um LabeledPoint. Aplique essa função na variável samplePoints da célula anterior e imprima os atributos e rótulo utilizando os atributos LabeledPoint.features e LabeledPoint.label. Finalmente, calcule o número de atributos nessa base de dados. Step4: Visualização 1 Step5: (1c) Deslocando os rótulos Para melhor visualizar as soluções obtidas, calcular o erro de predição e visualizar a relação dos atributos com os rótulos, costuma-se deslocar os rótulos para iniciarem em zero. Dessa forma vamos verificar qual é a faixa de valores dos rótulos e, em seguida, subtrair os rótulos pelo menor valor encontrado. Em alguns casos também pode ser interessante normalizar tais valores dividindo pelo valor máximo dos rótulos. Step6: (1d) Conjuntos de treino, validação e teste Como próximo passo, vamos dividir nossa base de dados em conjunto de treino, validação e teste conforme discutido em sala de aula. Use o método randomSplit method com os pesos (weights) e a semente aleatória (seed) especificados na célula abaixo parar criar a divisão das bases. Em seguida, utilizando o método cache() faça o pré-armazenamento da base processada. Esse comando faz o processamento da base através das transformações e armazena em um novo RDD que pode ficar armazenado em memória, se couber, ou em um arquivo temporário. Step7: Part 2 Step10: (2b) Erro quadrático médio Para comparar a performance em problemas de regressão, geralmente é utilizado o Erro Quadrático Médio (RMSE). Implemente uma função que calcula o RMSE a partir de um RDD de tuplas (rótulo, predição). Step11: (2c) RMSE do baseline para os conjuntos de treino, validação e teste Vamos calcular o RMSE para nossa baseline. Primeiro crie uma RDD de (rótulo, predição) para cada conjunto, e então chame a função calcRMSE. Step12: Visualização 2 Step14: Parte 3 Step16: (3b) Use os pesos para fazer a predição Agora, implemente a função getLabeledPredictions que recebe como parâmetro o conjunto de pesos e um LabeledPoint e retorna uma tupla (rótulo, predição). Lembre-se que podemos predizer um rótulo calculando o produto interno dos pesos com os atributos. Step18: (3c) Gradiente descendente Finalmente, implemente o algoritmo gradiente descendente para regressão linear e teste a função em um exemplo. Step19: (3d) Treinando o modelo na base de dados Agora iremos treinar o modelo de regressão linear na nossa base de dados de treino e calcular o RMSE na base de validação. Lembrem-se que não devemos utilizar a base de teste até que o melhor parâmetro do modelo seja escolhido. Para essa tarefa vamos utilizar as funções linregGradientDescent, getLabeledPrediction e calcRMSE já implementadas. Step20: Visualização 3 Step21: Part 4 Step22: (4b) Predição Agora use o método LinearRegressionModel.predict() para fazer a predição de um objeto. 
Passe o atributo features de um LabeledPoint comp parâmetro. Step23: (4c) Avaliar RMSE Agora avalie o desempenho desse modelo no teste de validação. Use o método predict() para criar o RDD labelsAndPreds RDD, e então use a função calcRMSE() da Parte (2b) para calcular o RMSE. Step24: (4d) Grid search Já estamos superando o baseline em pelo menos dois anos na média, vamos ver se encontramos um conjunto de parâmetros melhor. Faça um grid search para encontrar um bom parâmetro de regularização. Tente valores para regParam dentro do conjunto 1e-10, 1e-5, e 1. Step25: Visualização 5 Step26: (4e) Grid Search para o valor de alfa e número de iterações Agora, vamos verificar diferentes valores para alfa e número de iterações para perceber o impacto desses parâmetros em nosso modelo. Especificamente tente os valores 1e-5 e 10 para alpha e os valores 500 e 5 para número de iterações. Avalie todos os modelos no conjunto de valdação. Reparem que com um valor baixo de alpha, o algoritmo necessita de muito mais iterações para convergir ao ótimo, enquanto um valor muito alto para alpha, pode fazer com que o algoritmo não encontre uma solução.
Python Code: # carregar base de dados import os.path fileName = os.path.join('Data', 'millionsong.txt') numPartitions = 2 rawData = sc.textFile(fileName, numPartitions) # EXERCICIO numPoints = rawData.count() print (numPoints) samplePoints = rawData.take(5) print (samplePoints) # TEST Load and check the data (1a) assert numPoints==6724, 'incorrect value for numPoints' print("OK") assert len(samplePoints)==5, 'incorrect length for samplePoints' print("OK") Explanation: Regressão Linear Este notebook mostra uma implementação básica de Regressão Linear e o uso da biblioteca MLlib do PySpark para a tarefa de regressão na base de dados Million Song Dataset do repositório UCI Machine Learning Repository. Nosso objetivo é predizer o ano de uma música através dos seus atributos de áudio. Neste notebook: Parte 1: Leitura e parsing da base de dados Visualização 1: Atributos Visualização 2: Deslocamento das variáveis de interesse Parte 2: Criar um preditor de referência Visualização 3: Valores Preditos vs. Verdadeiros Parte 3: Treinar e avaliar um modelo de regressão linear Visualização 4: Erro de Treino Parte 4: Treinar usando MLlib e ajustar os hiperparâmetros Visualização 5: Predições do Melhor modelo Visualização 6: Mapa de calor dos hiperparâmetros Parte 5: Adicionando interações entre atributos Parte 6: Aplicando na base de dados de Crimes de São Francisco Para referência, consulte os métodos relevantes do PySpark em Spark's Python API e do NumPy em NumPy Reference Parte 1: Leitura e parsing da base de dados (1a) Verificando os dados disponíveis Os dados da base que iremos utilizar estão armazenados em um arquivo texto. No primeiro passo vamos transformar os dados textuais em uma RDD e verificar a formatação dos mesmos. Altere a segunda célula para verificar quantas amostras existem nessa base de dados utilizando o método count method. Reparem que o rótulo dessa base é o primeiro registro, representando o ano. End of explanation from pyspark.mllib.regression import LabeledPoint import numpy as np # Here is a sample raw data point: # '2001.0,0.884,0.610,0.600,0.474,0.247,0.357,0.344,0.33,0.600,0.425,0.60,0.419' # In this raw data point, 2001.0 is the label, and the remaining values are features # EXERCICIO def parsePoint(line): Converts a comma separated unicode string into a `LabeledPoint`. Args: line (unicode): Comma separated unicode string where the first element is the label and the remaining elements are features. Returns: LabeledPoint: The line is converted into a `LabeledPoint`, which consists of a label and features. Point = list(map(float,line.split(','))) return LabeledPoint(Point[0],Point[1:]) parsedSamplePoints = list(map(parsePoint,samplePoints)) firstPointFeatures = parsedSamplePoints[0].features firstPointLabel = parsedSamplePoints[0].label print (firstPointFeatures, firstPointLabel) d = len(firstPointFeatures) print (d) # TEST Using LabeledPoint (1b) assert isinstance(firstPointLabel, float), 'label must be a float' expectedX0 = [0.8841,0.6105,0.6005,0.4747,0.2472,0.3573,0.3441,0.3396,0.6009,0.4257,0.6049,0.4192] assert np.allclose(expectedX0, firstPointFeatures, 1e-4, 1e-4), 'incorrect features for firstPointFeatures' assert np.allclose(2001.0, firstPointLabel), 'incorrect label for firstPointLabel' assert d == 12, 'incorrect number of features' print("OK") Explanation: (1b) Usando LabeledPoint Na MLlib, bases de dados rotuladas devem ser armazenadas usando o objeto LabeledPoint. 
Escreva a função parsePoint que recebe como entrada uma amostra de dados, transforma os dados usandoo comando unicode.split, e retorna um LabeledPoint. Aplique essa função na variável samplePoints da célula anterior e imprima os atributos e rótulo utilizando os atributos LabeledPoint.features e LabeledPoint.label. Finalmente, calcule o número de atributos nessa base de dados. End of explanation import matplotlib.pyplot as plt import matplotlib.cm as cm %matplotlib inline sampleMorePoints = rawData.take(50) parsedSampleMorePoints = map(parsePoint, sampleMorePoints) dataValues = list(map(lambda lp: lp.features.toArray(), parsedSampleMorePoints)) def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999', gridWidth=1.0): Template for generating the plot layout. plt.close() fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white') ax.axes.tick_params(labelcolor='#999999', labelsize='10') for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]: axis.set_ticks_position('none') axis.set_ticks(ticks) axis.label.set_color('#999999') if hideLabels: axis.set_ticklabels([]) plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-') map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right']) return fig, ax # generate layout and plot fig, ax = preparePlot(np.arange(.5, 11, 1), np.arange(.5, 49, 1), figsize=(8,7), hideLabels=True, gridColor='#eeeeee', gridWidth=1.1) image = plt.imshow(dataValues,interpolation='nearest', aspect='auto', cmap=cm.Greys) for x, y, s in zip(np.arange(-.125, 12, 1), np.repeat(-.75, 12), [str(x) for x in range(12)]): plt.text(x, y, s, color='#999999', size='10') plt.text(4.7, -3, 'Feature', color='#999999', size='11'), ax.set_ylabel('Observation') pass Explanation: Visualização 1: Atributos A próxima célula mostra uma forma de visualizar os atributos através de um mapa de calor. Nesse mapa mostramos os 50 primeiros objetos e seus atributos representados por tons de cinza, sendo o branco representando o valor 0 e o preto representando o valor 1. Esse tipo de visualização ajuda a perceber a variação dos valores dos atributos. 
End of explanation # EXERCICIO parsedDataInit = rawData.map(parsePoint) onlyLabels = parsedDataInit.map(lambda x: x.label) minYear = onlyLabels.min() maxYear = onlyLabels.max() print (maxYear, minYear) # TEST Find the range (1c) assert len(parsedDataInit.take(1)[0].features)==12, 'unexpected number of features in sample point' sumFeatTwo = parsedDataInit.map(lambda lp: lp.features[2]).sum() assert np.allclose(sumFeatTwo, 3158.96224351), 'parsedDataInit has unexpected values' yearRange = maxYear - minYear assert yearRange == 89, 'incorrect range for minYear to maxYear' print("OK") # EXERCICIO: subtraia os labels do valor mínimo parsedData = parsedDataInit.map(lambda p: LabeledPoint(p.label-minYear, p.features)) # Should be a LabeledPoint print (type(parsedData.take(1)[0])) # View the first point print ('\n{0}'.format(parsedData.take(1))) # TEST Shift labels (1d) oldSampleFeatures = parsedDataInit.take(1)[0].features newSampleFeatures = parsedData.take(1)[0].features assert np.allclose(oldSampleFeatures, newSampleFeatures), 'new features do not match old features' sumFeatTwo = parsedData.map(lambda lp: lp.features[2]).sum() assert np.allclose(sumFeatTwo, 3158.96224351), 'parsedData has unexpected values' minYearNew = parsedData.map(lambda lp: lp.label).min() maxYearNew = parsedData.map(lambda lp: lp.label).max() assert minYearNew == 0, 'incorrect min year in shifted data' assert maxYearNew == 89, 'incorrect max year in shifted data' print("OK") Explanation: (1c) Deslocando os rótulos Para melhor visualizar as soluções obtidas, calcular o erro de predição e visualizar a relação dos atributos com os rótulos, costuma-se deslocar os rótulos para iniciarem em zero. Dessa forma vamos verificar qual é a faixa de valores dos rótulos e, em seguida, subtrair os rótulos pelo menor valor encontrado. Em alguns casos também pode ser interessante normalizar tais valores dividindo pelo valor máximo dos rótulos. 
End of explanation # EXERCICIO weights = [.8, .1, .1] seed = 42 parsedTrainData, parsedValData, parsedTestData = parsedData.randomSplit(weights, seed) parsedTrainData.cache() parsedValData.cache() parsedTestData.cache() nTrain = parsedTrainData.count() nVal = parsedValData.count() nTest = parsedTestData.count() print (nTrain, nVal, nTest, nTrain + nVal + nTest) print (parsedData.count()) # TEST Training, validation, and test sets (1e) assert parsedTrainData.getNumPartitions() == numPartitions, 'parsedTrainData has wrong number of partitions' assert parsedValData.getNumPartitions() == numPartitions, 'parsedValData has wrong number of partitions' assert parsedTestData.getNumPartitions() == numPartitions,'parsedTestData has wrong number of partitions' assert len(parsedTrainData.take(1)[0].features) == 12, 'parsedTrainData has wrong number of features' sumFeatTwo = (parsedTrainData .map(lambda lp: lp.features[2]) .sum()) sumFeatThree = (parsedValData .map(lambda lp: lp.features[3]) .reduce(lambda x, y: x + y)) sumFeatFour = (parsedTestData .map(lambda lp: lp.features[4]) .reduce(lambda x, y: x + y)) assert np.allclose([sumFeatTwo, sumFeatThree, sumFeatFour],2526.87757656, 297.340394298, 184.235876654), 'parsed Train, Val, Test data has unexpected values' assert nTrain + nVal + nTest == 6724, 'unexpected Train, Val, Test data set size' assert nTrain == 5359, 'unexpected value for nTrain' assert nVal == 678, 'unexpected value for nVal' assert nTest == 687, 'unexpected value for nTest' print("OK") Explanation: (1d) Conjuntos de treino, validação e teste Como próximo passo, vamos dividir nossa base de dados em conjunto de treino, validação e teste conforme discutido em sala de aula. Use o método randomSplit method com os pesos (weights) e a semente aleatória (seed) especificados na célula abaixo parar criar a divisão das bases. Em seguida, utilizando o método cache() faça o pré-armazenamento da base processada. Esse comando faz o processamento da base através das transformações e armazena em um novo RDD que pode ficar armazenado em memória, se couber, ou em um arquivo temporário. End of explanation # EXERCICIO averageTrainYear = (parsedTrainData .map(lambda p: p.label) .mean() ) print (averageTrainYear) # TEST Average label (2a) assert np.allclose(averageTrainYear, 53.6792311), 'incorrect value for averageTrainYear' print("OK") Explanation: Part 2: Criando o modelo de baseline (2a) Rótulo médio O baseline é útil para verificarmos que nosso modelo de regressão está funcionando. Ele deve ser um modelo bem simples que qualquer algoritmo possa fazer melhor. Um baseline muito utilizado é fazer a mesma predição independente dos dados analisados utilizando o rótulo médio do conjunto de treino. Calcule a média dos rótulos deslocados para a base de treino, utilizaremos esse valor posteriormente para comparar o erro de predição. Use um método apropriado para essa tarefa, consulte o RDD API. End of explanation # EXERCICIO def squaredError(label, prediction): Calculates the the squared error for a single prediction. Args: label (float): The correct value for this observation. prediction (float): The predicted value for this observation. Returns: float: The difference between the `label` and `prediction` squared. return (label-prediction)**2 def calcRMSE(labelsAndPreds): Calculates the root mean squared error for an `RDD` of (label, prediction) tuples. Args: labelsAndPred (RDD of (float, float)): An `RDD` consisting of (label, prediction) tuples. 
Returns: float: The square root of the mean of the squared errors. return np.sqrt(labelsAndPreds.map(lambda lp: squaredError(lp[0],lp[1])).mean()) labelsAndPreds = sc.parallelize([(3., 1.), (1., 2.), (2., 2.)]) # RMSE = sqrt[((3-1)^2 + (1-2)^2 + (2-2)^2) / 3] = 1.291 exampleRMSE = calcRMSE(labelsAndPreds) print (exampleRMSE) # TEST Root mean squared error (2b) assert np.allclose(squaredError(3, 1), 4.), 'incorrect definition of squaredError' assert np.allclose(exampleRMSE, 1.29099444874), 'incorrect value for exampleRMSE' print("OK") Explanation: (2b) Erro quadrático médio Para comparar a performance em problemas de regressão, geralmente é utilizado o Erro Quadrático Médio (RMSE). Implemente uma função que calcula o RMSE a partir de um RDD de tuplas (rótulo, predição). End of explanation # EXERCICIO labelsAndPredsTrain = parsedTrainData.map(lambda p: (p.label, averageTrainYear)) rmseTrainBase = calcRMSE(labelsAndPredsTrain) labelsAndPredsVal = parsedValData.map(lambda p: (p.label, averageTrainYear)) rmseValBase = calcRMSE(labelsAndPredsVal) labelsAndPredsTest = parsedTestData.map(lambda p: (p.label, averageTrainYear)) rmseTestBase = calcRMSE(labelsAndPredsTest) print ('Baseline Train RMSE = {0:.3f}'.format(rmseTrainBase)) print ('Baseline Validation RMSE = {0:.3f}'.format(rmseValBase)) print ('Baseline Test RMSE = {0:.3f}'.format(rmseTestBase)) # TEST Training, validation and test RMSE (2c) assert np.allclose([rmseTrainBase, rmseValBase, rmseTestBase],[21.506125957738682, 20.877445428452468, 21.260493955081916]), 'incorrect RMSE value' print("OK") Explanation: (2c) RMSE do baseline para os conjuntos de treino, validação e teste Vamos calcular o RMSE para nossa baseline. Primeiro crie uma RDD de (rótulo, predição) para cada conjunto, e então chame a função calcRMSE. End of explanation from matplotlib.colors import ListedColormap, Normalize from matplotlib.cm import get_cmap cmap = get_cmap('YlOrRd') norm = Normalize() actual = np.asarray(parsedValData .map(lambda lp: lp.label) .collect()) error = np.asarray(parsedValData .map(lambda lp: (lp.label, lp.label)) .map(lambda lp: squaredError(lp[0], lp[1])) .collect()) clrs = cmap(np.asarray(norm(error)))[:,0:3] fig, ax = preparePlot(np.arange(0, 100, 20), np.arange(0, 100, 20)) plt.scatter(actual, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.5) ax.set_xlabel('Predicted'), ax.set_ylabel('Actual') pass predictions = np.asarray(parsedValData .map(lambda lp: averageTrainYear) .collect()) error = np.asarray(parsedValData .map(lambda lp: (lp.label, averageTrainYear)) .map(lambda lp: squaredError(lp[0], lp[1])) .collect()) norm = Normalize() clrs = cmap(np.asarray(norm(error)))[:,0:3] fig, ax = preparePlot(np.arange(53.0, 55.0, 0.5), np.arange(0, 100, 20)) ax.set_xlim(53, 55) plt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.3) ax.set_xlabel('Predicted'), ax.set_ylabel('Actual') Explanation: Visualização 2: Predição vs. real Vamos visualizar as predições no conjunto de validação. Os gráficos de dispersão abaixo plotam os pontos com a coordenada X sendo o valor predito pelo modelo e a coordenada Y o valor real do rótulo. O primeiro gráfico mostra a situação ideal, um modelo que acerta todos os rótulos. O segundo gráfico mostra o desempenho do modelo baseline. As cores dos pontos representam o erro quadrático daquela predição, quanto mais próxima do laranja, maior o erro. 
End of explanation from pyspark.mllib.linalg import DenseVector # EXERCICIO def gradientSummand(weights, lp): Calculates the gradient summand for a given weight and `LabeledPoint`. Note: `DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangably within this function. For example, they both implement the `dot` method. Args: weights (DenseVector): An array of model weights (betas). lp (LabeledPoint): The `LabeledPoint` for a single observation. Returns: DenseVector: An array of values the same length as `weights`. The gradient summand. return DenseVector((weights.dot(lp.features) - lp.label)*lp.features) exampleW = DenseVector([1, 1, 1]) exampleLP = LabeledPoint(2.0, [3, 1, 4]) summandOne = gradientSummand(exampleW, exampleLP) print (summandOne) exampleW = DenseVector([.24, 1.2, -1.4]) exampleLP = LabeledPoint(3.0, [-1.4, 4.2, 2.1]) summandTwo = gradientSummand(exampleW, exampleLP) print (summandTwo) # TEST Gradient summand (3a) assert np.allclose(summandOne, [18., 6., 24.]), 'incorrect value for summandOne' assert np.allclose(summandTwo, [1.7304,-5.1912,-2.5956]), 'incorrect value for summandTwo' print("OK") Explanation: Parte 3: Treinando e avaliando o modelo de regressão linear (3a) Gradiente do erro Vamos implementar a regressão linear através do gradiente descendente. Lembrando que para atualizar o peso da regressão linear fazemos: $$ \scriptsize \mathbf{w}_{i+1} = \mathbf{w}_i - \alpha_i \sum_j (\mathbf{w}_i^\top\mathbf{x}_j - y_j) \mathbf{x}_j \,.$$ onde $ \scriptsize i $ é a iteração do algoritmo, e $ \scriptsize j $ é o objeto sendo observado no momento. Primeiro, implemente uma função que calcula esse gradiente do erro para certo objeto: $ \scriptsize (\mathbf{w}^\top \mathbf{x} - y) \mathbf{x} \, ,$ e teste a função em dois exemplos. Use o método DenseVector dot para representar a lista de atributos (ele tem funcionalidade parecida com o np.array()). End of explanation # EXERCICIO def getLabeledPrediction(weights, observation): Calculates predictions and returns a (label, prediction) tuple. Note: The labels should remain unchanged as we'll use this information to calculate prediction error later. Args: weights (np.ndarray): An array with one weight for each features in `trainData`. observation (LabeledPoint): A `LabeledPoint` that contain the correct label and the features for the data point. Returns: tuple: A (label, prediction) tuple. return (observation.label, weights.dot(observation.features)) weights = np.array([1.0, 1.5]) predictionExample = sc.parallelize([LabeledPoint(2, np.array([1.0, .5])), LabeledPoint(1.5, np.array([.5, .5]))]) labelsAndPredsExample = predictionExample.map(lambda lp: getLabeledPrediction(weights, lp)) print (labelsAndPredsExample.collect()) # TEST Use weights to make predictions (3b) assert labelsAndPredsExample.collect() == [(2.0, 1.75), (1.5, 1.25)], 'incorrect definition for getLabeledPredictions' print("OK") Explanation: (3b) Use os pesos para fazer a predição Agora, implemente a função getLabeledPredictions que recebe como parâmetro o conjunto de pesos e um LabeledPoint e retorna uma tupla (rótulo, predição). Lembre-se que podemos predizer um rótulo calculando o produto interno dos pesos com os atributos. End of explanation # EXERCICIO def linregGradientDescent(trainData, numIters): Calculates the weights and error for a linear regression model trained with gradient descent. Note: `DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangably within this function. 
For example, they both implement the `dot` method. Args: trainData (RDD of LabeledPoint): The labeled data for use in training the model. numIters (int): The number of iterations of gradient descent to perform. Returns: (np.ndarray, np.ndarray): A tuple of (weights, training errors). Weights will be the final weights (one weight per feature) for the model, and training errors will contain an error (RMSE) for each iteration of the algorithm. # The length of the training data n = trainData.count() # The number of features in the training data d = len(trainData.first().features) w = np.zeros(d) alpha = 1.0 # We will compute and store the training error after each iteration errorTrain = np.zeros(numIters) for i in range(numIters): # Use getLabeledPrediction from (3b) with trainData to obtain an RDD of (label, prediction) # tuples. Note that the weights all equal 0 for the first iteration, so the predictions will # have large errors to start. labelsAndPredsTrain = trainData.map(lambda l: getLabeledPrediction(w,l)) errorTrain[i] = calcRMSE(labelsAndPredsTrain) # Calculate the `gradient`. Make use of the `gradientSummand` function you wrote in (3a). # Note that `gradient` sould be a `DenseVector` of length `d`. gradient = trainData.map(lambda l: gradientSummand(w, l)).sum() # Update the weights alpha_i = alpha / (n * np.sqrt(i+1)) w -= alpha_i*gradient return w, errorTrain # create a toy dataset with n = 10, d = 3, and then run 5 iterations of gradient descent # note: the resulting model will not be useful; the goal here is to verify that # linregGradientDescent is working properly exampleN = 10 exampleD = 3 exampleData = (sc .parallelize(parsedTrainData.take(exampleN)) .map(lambda lp: LabeledPoint(lp.label, lp.features[0:exampleD]))) print (exampleData.take(2)) exampleNumIters = 5 exampleWeights, exampleErrorTrain = linregGradientDescent(exampleData, exampleNumIters) print (exampleWeights) # TEST Gradient descent (3c) expectedOutput = [48.20389904, 34.53243006, 30.60284959] assert np.allclose(exampleWeights, expectedOutput), 'value of exampleWeights is incorrect' expectedError = [79.72766145, 33.64762907, 9.46281696, 9.45486926, 9.44889147] assert np.allclose(exampleErrorTrain, expectedError),'value of exampleErrorTrain is incorrect' print("OK") Explanation: (3c) Gradiente descendente Finalmente, implemente o algoritmo gradiente descendente para regressão linear e teste a função em um exemplo. End of explanation # EXERCICIO numIters = 50 weightsLR0, errorTrainLR0 = linregGradientDescent(parsedTrainData, numIters) labelsAndPreds = parsedValData.map(lambda lp: getLabeledPrediction(weightsLR0, lp)) rmseValLR0 = calcRMSE(labelsAndPreds) print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}'.format(rmseValBase, rmseValLR0)) # TEST Train the model (3d) expectedOutput = [ 22.64370481, 20.1815662, -0.21620107, 8.53259099, 5.94821844, -4.50349235, 15.51511703, 3.88802901, 9.79146177, 5.74357056, 11.19512589, 3.60554264] assert np.allclose(weightsLR0, expectedOutput), 'incorrect value for weightsLR0' print("OK") Explanation: (3d) Treinando o modelo na base de dados Agora iremos treinar o modelo de regressão linear na nossa base de dados de treino e calcular o RMSE na base de validação. Lembrem-se que não devemos utilizar a base de teste até que o melhor parâmetro do modelo seja escolhido. Para essa tarefa vamos utilizar as funções linregGradientDescent, getLabeledPrediction e calcRMSE já implementadas. 
End of explanation norm = Normalize() clrs = cmap(np.asarray(norm(np.log(errorTrainLR0))))[:,0:3] fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(2, 6, 1)) ax.set_ylim(2, 6) plt.scatter(list(range(0, numIters)), np.log(errorTrainLR0), s=14**2, c=clrs, edgecolors='#888888', alpha=0.75) ax.set_xlabel('Iteration'), ax.set_ylabel(r'$\log_e(errorTrainLR0)$') pass norm = Normalize() clrs = cmap(np.asarray(norm(errorTrainLR0[6:])))[:,0:3] fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(17, 22, 1)) ax.set_ylim(17.8, 21.2) plt.scatter(range(0, numIters-6), errorTrainLR0[6:], s=14**2, c=clrs, edgecolors='#888888', alpha=0.75) ax.set_xticklabels(map(str, range(6, 66, 10))) ax.set_xlabel('Iteration'), ax.set_ylabel(r'Training Error') pass Explanation: Visualização 3: Erro de Treino Vamos verificar o comportamento do algoritmo durante as iterações. Para isso vamos plotar um gráfico em que o eixo x representa a iteração e o eixo y o log do RMSE. O primeiro gráfico mostra as primeiras 50 iterações enquanto o segundo mostra as últimas 44 iterações. Note que inicialmente o erro cai rapidamente, quando então o gradiente descendente passa a fazer apenas pequenos ajustes. End of explanation from pyspark.mllib.regression import LinearRegressionWithSGD # Values to use when training the linear regression model numIters = 500 # iterations alpha = 1.0 # step miniBatchFrac = 1.0 # miniBatchFraction reg = 1e-1 # regParam regType = 'l2' # regType useIntercept = True # intercept # EXERCICIO firstModel = LinearRegressionWithSGD.train(parsedTrainData, iterations = numIters, step = alpha, miniBatchFraction = 1.0, regParam=reg,regType=regType, intercept=useIntercept) # weightsLR1 stores the model weights; interceptLR1 stores the model intercept weightsLR1 = firstModel.weights interceptLR1 = firstModel.intercept print (weightsLR1, interceptLR1) # TEST LinearRegressionWithSGD (4a) expectedIntercept = 13.332056210482524 expectedWeights = [15.9694010246,13.9897244172,0.669349383773,6.24618402989,4.00932179503,-2.30176663131,10.478805422,3.06385145385,7.14414111075,4.49826819526,7.87702565069,3.00732146613] assert np.allclose(interceptLR1, expectedIntercept), 'incorrect value for interceptLR1' assert np.allclose(weightsLR1, expectedWeights), 'incorrect value for weightsLR1' print("OK") Explanation: Part 4: Treino utilizando MLlib e Busca em Grade (Grid Search) (4a) LinearRegressionWithSGD Nosso teste inicial já conseguiu obter um desempenho melhor que o baseline, mas vamos ver se conseguimos fazer melhor introduzindo a ordenada de origem da reta além de outros ajustes no algoritmo. MLlib LinearRegressionWithSGD implementa o mesmo algoritmo da parte (3b), mas de forma mais eficiente para o contexto distribuído e com várias funcionalidades adicionais. Primeiro utilize a função LinearRegressionWithSGD para treinar um modelo com regularização L2 (Ridge) e com a ordenada de origem. Esse método retorna um LinearRegressionModel. Em seguida, use os atributos weights e intercept para imprimir o modelo encontrado. End of explanation # EXERCICIO samplePoint = parsedTrainData.take(1)[0] samplePrediction = firstModel.predict(samplePoint.features) print (samplePrediction) # TEST Predict (4b) assert np.allclose(samplePrediction, 56.4065674104), 'incorrect value for samplePrediction' Explanation: (4b) Predição Agora use o método LinearRegressionModel.predict() para fazer a predição de um objeto. Passe o atributo features de um LabeledPoint comp parâmetro. 
End of explanation

norm = Normalize()
clrs = cmap(np.asarray(norm(np.log(errorTrainLR0))))[:,0:3]

fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(2, 6, 1))
ax.set_ylim(2, 6)
plt.scatter(list(range(0, numIters)), np.log(errorTrainLR0), s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)
ax.set_xlabel('Iteration'), ax.set_ylabel(r'$\log_e(errorTrainLR0)$')
pass

norm = Normalize()
clrs = cmap(np.asarray(norm(errorTrainLR0[6:])))[:,0:3]

fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(17, 22, 1))
ax.set_ylim(17.8, 21.2)
plt.scatter(range(0, numIters-6), errorTrainLR0[6:], s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)
ax.set_xticklabels(map(str, range(6, 66, 10)))
ax.set_xlabel('Iteration'), ax.set_ylabel(r'Training Error')
pass

Explanation: Visualization 3: Training Error Let's look at how the algorithm behaves across the iterations. To do so, we plot a chart whose x axis is the iteration and whose y axis is the log of the RMSE. The first plot shows the first 50 iterations, while the second shows the last 44 iterations. Note that the error drops quickly at first, after which gradient descent only makes small adjustments. End of explanation

from pyspark.mllib.regression import LinearRegressionWithSGD
# Values to use when training the linear regression model
numIters = 500  # iterations
alpha = 1.0  # step
miniBatchFrac = 1.0  # miniBatchFraction
reg = 1e-1  # regParam
regType = 'l2'  # regType
useIntercept = True  # intercept

# EXERCISE
firstModel = LinearRegressionWithSGD.train(parsedTrainData, iterations = numIters, step = alpha,
                                           miniBatchFraction = 1.0, regParam=reg, regType=regType, intercept=useIntercept)

# weightsLR1 stores the model weights; interceptLR1 stores the model intercept
weightsLR1 = firstModel.weights
interceptLR1 = firstModel.intercept
print (weightsLR1, interceptLR1)

# TEST LinearRegressionWithSGD (4a)
expectedIntercept = 13.332056210482524
expectedWeights = [15.9694010246,13.9897244172,0.669349383773,6.24618402989,4.00932179503,-2.30176663131,10.478805422,3.06385145385,7.14414111075,4.49826819526,7.87702565069,3.00732146613]
assert np.allclose(interceptLR1, expectedIntercept), 'incorrect value for interceptLR1'
assert np.allclose(weightsLR1, expectedWeights), 'incorrect value for weightsLR1'
print("OK")

Explanation: Part 4: Training with MLlib and Grid Search (4a) LinearRegressionWithSGD Our first attempt already performs better than the baseline, but let's see whether we can do better by adding an intercept to the line and making a few other adjustments to the algorithm. MLlib's LinearRegressionWithSGD implements the same algorithm as part (3b), but more efficiently for the distributed setting and with several additional features. First, use the LinearRegressionWithSGD function to train a model with L2 (ridge) regularization and with an intercept. This method returns a LinearRegressionModel. Then use the attributes weights and intercept to print the fitted model. End of explanation

# EXERCISE
samplePoint = parsedTrainData.take(1)[0]
samplePrediction = firstModel.predict(samplePoint.features)
print (samplePrediction)

# TEST Predict (4b)
assert np.allclose(samplePrediction, 56.4065674104), 'incorrect value for samplePrediction'

Explanation: (4b) Prediction Now use the LinearRegressionModel.predict() method to make a prediction for a single observation. Pass the features attribute of a LabeledPoint as the argument.
End of explanation

# EXERCISE
labelsAndPreds = parsedValData.map(lambda lp: (lp.label, firstModel.predict(lp.features)))
rmseValLR1 = calcRMSE(labelsAndPreds)

print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}\n\tLR1 = {2:.3f}'.format(rmseValBase, rmseValLR0, rmseValLR1))

# TEST Evaluate RMSE (4c)
assert np.allclose(rmseValLR1, 19.025), 'incorrect value for rmseValLR1'

Explanation: (4c) Evaluate RMSE Now evaluate the performance of this model on the validation set. Use the predict() method to build the labelsAndPreds RDD, and then use the calcRMSE() function from Part (2b) to compute the RMSE. End of explanation

# EXERCISE
bestRMSE = rmseValLR1
bestRegParam = reg
bestModel = firstModel

numIters = 500
alpha = 1.0
miniBatchFrac = 1.0
for reg in [1e-10, 1e-5, 1.]:
    model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,
                                          miniBatchFrac, regParam=reg,
                                          regType='l2', intercept=True)
    labelsAndPreds = parsedValData.map(lambda lp: (lp.label, model.predict(lp.features)))
    rmseValGrid = calcRMSE(labelsAndPreds)
    print (rmseValGrid)

    if rmseValGrid < bestRMSE:
        bestRMSE = rmseValGrid
        bestRegParam = reg
        bestModel = model
rmseValLRGrid = bestRMSE

print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}\n\tLR1 = {2:.3f}\n\tLRGrid = {3:.3f}'.format(rmseValBase, rmseValLR0, rmseValLR1, rmseValLRGrid))

# TEST Grid search (4d)
assert np.allclose(16.6813542516, rmseValLRGrid), 'incorrect value for rmseValLRGrid'

Explanation: (4d) Grid search We are already beating the baseline by at least two years on average; let's see whether we can find a better set of parameters. Run a grid search to find a good regularization parameter. Try values for regParam from the set 1e-10, 1e-5, and 1. End of explanation

predictions = np.asarray(parsedValData
                         .map(lambda lp: bestModel.predict(lp.features))
                         .collect())
actual = np.asarray(parsedValData
                    .map(lambda lp: lp.label)
                    .collect())
error = np.asarray(parsedValData
                   .map(lambda lp: (lp.label, bestModel.predict(lp.features)))
                   .map(lambda lp: squaredError(lp[0], lp[1]))
                   .collect())

norm = Normalize()
clrs = cmap(np.asarray(norm(error)))[:,0:3]

fig, ax = preparePlot(np.arange(0, 120, 20), np.arange(0, 120, 20))
ax.set_xlim(15, 82), ax.set_ylim(-5, 105)
plt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=.5)
ax.set_xlabel('Predicted'), ax.set_ylabel(r'Actual')
pass

Explanation: Visualization 5: Predictions of the best model Now let's build a plot to check the performance of the best model. Notice in this plot that the number of darker points has dropped considerably compared to the baseline.
15,910
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="https Step1: Map <script type="text/javascript"> localStorage.setItem('language', 'language-py') </script> <table align="left" style="margin-right Step2: Examples In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration. Then, we apply Map in multiple ways to transform every element in the PCollection. Map accepts a function that returns a single element for every input element in the PCollection. Example 1 Step3: <table align="left" style="margin-right Step4: <table align="left" style="margin-right Step5: <table align="left" style="margin-right Step6: <table align="left" style="margin-right Step7: <table align="left" style="margin-right Step8: <table align="left" style="margin-right Step9: <table align="left" style="margin-right
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License") # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/documentation/transforms/python/elementwise/map-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a> <table align="left"><td><a target="_blank" href="https://beam.apache.org/documentation/transforms/python/elementwise/map"><img src="https://beam.apache.org/images/logos/full-color/name-bottom/beam-logo-full-color-name-bottom-100.png" width="32" height="32" />View the docs</a></td></table> End of explanation !pip install --quiet -U apache-beam Explanation: Map <script type="text/javascript"> localStorage.setItem('language', 'language-py') </script> <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.core.html#apache_beam.transforms.core.Map"><img src="https://beam.apache.org/images/logos/sdks/python.png" width="32px" height="32px" alt="Pydoc"/> Pydoc</a> </td> </table> <br/><br/><br/> Applies a simple 1-to-1 mapping function over each element in the collection. Setup To run a code cell, you can click the Run cell button at the top left of the cell, or select it and press Shift+Enter. Try modifying a code cell and re-running it to see what happens. To learn more about Colab, see Welcome to Colaboratory!. First, let's install the apache-beam module. End of explanation import apache_beam as beam with beam.Pipeline() as pipeline: plants = ( pipeline | 'Gardening plants' >> beam.Create([ ' 🍓Strawberry \n', ' 🥕Carrot \n', ' 🍆Eggplant \n', ' 🍅Tomato \n', ' 🥔Potato \n', ]) | 'Strip' >> beam.Map(str.strip) | beam.Map(print)) Explanation: Examples In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration. Then, we apply Map in multiple ways to transform every element in the PCollection. Map accepts a function that returns a single element for every input element in the PCollection. Example 1: Map with a predefined function We use the function str.strip which takes a single str element and outputs a str. It strips the input element's whitespaces, including newlines and tabs. 
End of explanation import apache_beam as beam def strip_header_and_newline(text): return text.strip('# \n') with beam.Pipeline() as pipeline: plants = ( pipeline | 'Gardening plants' >> beam.Create([ '# 🍓Strawberry\n', '# 🥕Carrot\n', '# 🍆Eggplant\n', '# 🍅Tomato\n', '# 🥔Potato\n', ]) | 'Strip header' >> beam.Map(strip_header_and_newline) | beam.Map(print)) Explanation: <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a> </td> </table> <br/><br/><br/> Example 2: Map with a function We define a function strip_header_and_newline which strips any '#', ' ', and '\n' characters from each element. End of explanation import apache_beam as beam with beam.Pipeline() as pipeline: plants = ( pipeline | 'Gardening plants' >> beam.Create([ '# 🍓Strawberry\n', '# 🥕Carrot\n', '# 🍆Eggplant\n', '# 🍅Tomato\n', '# 🥔Potato\n', ]) | 'Strip header' >> beam.Map(lambda text: text.strip('# \n')) | beam.Map(print)) Explanation: <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a> </td> </table> <br/><br/><br/> Example 3: Map with a lambda function We can also use lambda functions to simplify Example 2. End of explanation import apache_beam as beam def strip(text, chars=None): return text.strip(chars) with beam.Pipeline() as pipeline: plants = ( pipeline | 'Gardening plants' >> beam.Create([ '# 🍓Strawberry\n', '# 🥕Carrot\n', '# 🍆Eggplant\n', '# 🍅Tomato\n', '# 🥔Potato\n', ]) | 'Strip header' >> beam.Map(strip, chars='# \n') | beam.Map(print)) Explanation: <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a> </td> </table> <br/><br/><br/> Example 4: Map with multiple arguments You can pass functions with multiple arguments to Map. They are passed as additional positional arguments or keyword arguments to the function. In this example, strip takes text and chars as arguments. End of explanation import apache_beam as beam with beam.Pipeline() as pipeline: plants = ( pipeline | 'Gardening plants' >> beam.Create([ ('🍓', 'Strawberry'), ('🥕', 'Carrot'), ('🍆', 'Eggplant'), ('🍅', 'Tomato'), ('🥔', 'Potato'), ]) | 'Format' >> beam.MapTuple(lambda icon, plant: '{}{}'.format(icon, plant)) | beam.Map(print)) Explanation: <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a> </td> </table> <br/><br/><br/> Example 5: MapTuple for key-value pairs If your PCollection consists of (key, value) pairs, you can use MapTuple to unpack them into different function arguments. 
End of explanation import apache_beam as beam with beam.Pipeline() as pipeline: chars = pipeline | 'Create chars' >> beam.Create(['# \n']) plants = ( pipeline | 'Gardening plants' >> beam.Create([ '# 🍓Strawberry\n', '# 🥕Carrot\n', '# 🍆Eggplant\n', '# 🍅Tomato\n', '# 🥔Potato\n', ]) | 'Strip header' >> beam.Map( lambda text, chars: text.strip(chars), chars=beam.pvalue.AsSingleton(chars), ) | beam.Map(print)) Explanation: <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a> </td> </table> <br/><br/><br/> Example 6: Map with side inputs as singletons If the PCollection has a single value, such as the average from another computation, passing the PCollection as a singleton accesses that value. In this example, we pass a PCollection the value '# \n' as a singleton. We then use that value as the characters for the str.strip method. End of explanation import apache_beam as beam with beam.Pipeline() as pipeline: chars = pipeline | 'Create chars' >> beam.Create(['#', ' ', '\n']) plants = ( pipeline | 'Gardening plants' >> beam.Create([ '# 🍓Strawberry\n', '# 🥕Carrot\n', '# 🍆Eggplant\n', '# 🍅Tomato\n', '# 🥔Potato\n', ]) | 'Strip header' >> beam.Map( lambda text, chars: text.strip(''.join(chars)), chars=beam.pvalue.AsIter(chars), ) | beam.Map(print)) Explanation: <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a> </td> </table> <br/><br/><br/> Example 7: Map with side inputs as iterators If the PCollection has multiple values, pass the PCollection as an iterator. This accesses elements lazily as they are needed, so it is possible to iterate over large PCollections that won't fit into memory. End of explanation import apache_beam as beam def replace_duration(plant, durations): plant['duration'] = durations[plant['duration']] return plant with beam.Pipeline() as pipeline: durations = pipeline | 'Durations' >> beam.Create([ (0, 'annual'), (1, 'biennial'), (2, 'perennial'), ]) plant_details = ( pipeline | 'Gardening plants' >> beam.Create([ { 'icon': '🍓', 'name': 'Strawberry', 'duration': 2 }, { 'icon': '🥕', 'name': 'Carrot', 'duration': 1 }, { 'icon': '🍆', 'name': 'Eggplant', 'duration': 2 }, { 'icon': '🍅', 'name': 'Tomato', 'duration': 0 }, { 'icon': '🥔', 'name': 'Potato', 'duration': 2 }, ]) | 'Replace duration' >> beam.Map( replace_duration, durations=beam.pvalue.AsDict(durations), ) | beam.Map(print)) Explanation: <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a> </td> </table> <br/><br/><br/> Note: You can pass the PCollection as a list with beam.pvalue.AsList(pcollection), but this requires that all the elements fit into memory. 
Example 8: Map with side inputs as dictionaries If a PCollection is small enough to fit into memory, then that PCollection can be passed as a dictionary. Each element must be a (key, value) pair. Note that all the elements of the PCollection must fit into memory for this. If the PCollection won't fit into memory, use beam.pvalue.AsIter(pcollection) instead. End of explanation
15,911
Given the following text description, write Python code to implement the functionality described below step by step Description: Formulas Step1: import convention Step2: OLS regression using formulas Step3: Categorical variables Looking at the summary printed above, notice that patsy determined that elements of Region were text strings, so it treated Region as a categorical variable. patsy's default is also to include an intercept, so we automatically dropped one of the Region categories. If Region had been an integer variable that we wanted to treat explicitly as categorical, we could have done so by using the C( ) operator Step4: Operators We have already seen that "~" separates the left-hand side of the model from the right-hand side, and that "+" adds new columns to the design matrix. Removing variables The "-" sign can be used to remove columns/variables. For instance, we can remove the intercept from a model by Step5: Multiplicative interactions " Step6: Functions Step7: User defined function Step8: Using formuls with methods that do not(yet) support them Even if a given statsmodels function does not support formulas, you can still use patsy's formula language to produce design matrices. Those matrices can then be fed to the fitting function as endog and exog arguments.
Python Code: import numpy as np import statsmodels.api as sm Explanation: Formulas: Fitting models using R-style formulas loading modules and fucntions End of explanation from statsmodels.formula.api import ols import statsmodels.formula.api as smf dir(smf) Explanation: import convention End of explanation dta = sm.datasets.get_rdataset('Guerry','HistData', cache=True) df = dta.data[['Lottery', 'Literacy', 'Wealth', 'Region']].dropna() df.head() model = ols(formula='Lottery ~ Literacy + Wealth + Region', data=df).fit() print(model.summary()) Explanation: OLS regression using formulas End of explanation res = ols(formula='Lottery ~ Literacy + Wealth + C(Region)', data=df).fit() print(res.params) Explanation: Categorical variables Looking at the summary printed above, notice that patsy determined that elements of Region were text strings, so it treated Region as a categorical variable. patsy's default is also to include an intercept, so we automatically dropped one of the Region categories. If Region had been an integer variable that we wanted to treat explicitly as categorical, we could have done so by using the C( ) operator: End of explanation res = ols(formula='Lottery ~ Literacy + Wealth + C(Region) -1 ', data=df).fit() print(res.params) Explanation: Operators We have already seen that "~" separates the left-hand side of the model from the right-hand side, and that "+" adds new columns to the design matrix. Removing variables The "-" sign can be used to remove columns/variables. For instance, we can remove the intercept from a model by: End of explanation res1 = ols(formula='Lottery ~ Literacy : Wealth - 1', data=df).fit() res2 = ols(formula='Lottery ~ Literacy * Wealth - 1', data=df).fit() print(res1.params, '\n') print(res2.params) Explanation: Multiplicative interactions ":" adds a new column to the design matrix with the interaction of the other two columns. "*" will also include the individual columns that were multiplied together: End of explanation res = smf.ols(formula='Lottery ~ np.log(Literacy)', data=df).fit() print(res.params) Explanation: Functions End of explanation def log_plus_1(x): return np.log(x) + 1. res = smf.ols(formula='Lottery ~ log_plus_1(Literacy)', data=df).fit() print(res.params) Explanation: User defined function End of explanation import patsy f = 'Lottery ~ Literacy * Wealth' y,X = patsy.dmatrices(f, df, return_type='dataframe') print(y[:5]) print(X[:5]) print(sm.OLS(y, X).fit().summary()) Explanation: Using formuls with methods that do not(yet) support them Even if a given statsmodels function does not support formulas, you can still use patsy's formula language to produce design matrices. Those matrices can then be fed to the fitting function as endog and exog arguments. End of explanation
15,912
Given the following text description, write Python code to implement the functionality described below step by step Description: Curve fitting in python A.M.C. Dawes - 2015 An introduction to various curve fitting routines useful for physics work. The first cell is used to import additional features so they are available in our notebook. matplotlib provides plotting functions and numpy provides math and array functions. Step1: Next we define x as a linear space with 100 points that range from 0 to 10. Step2: y is mock data that we create by linear function with a slope of 1.45. We also add a small amount of random data to simulate noise as if this were a measured quantity. Step3: The data is pretty clearly linear, but we can fit a line to determine the slope. A 1st order polynomial is a line, so we use polyfit Step4: The fit is stored in a variable called fit which has several elements. We can print them out with nice labels using the following cell Step5: The main thing we want is the list of coefficients. These are the values in the polynomial that was a best fit. We can create a function (called f) that is the best fit polynomial. Then it is easy to plot both together and see that the fit is reasonable. Step6: General function fitting For more than just polynomials "When choosing a fit, Polynomial is almost always the wrong answer" Often there is a better model that describes the data. In most cases this is a known function; something like a power law or an exponential. In these cases, there are two options Step7: Then define a function that we expect models our system. In this case, exponential decay with an offset. Step8: Create a pure (i.e. exact) set of data with some parameters, and then simulate some data of the same system (by adding random noise). Step9: Now carry out the fit. curve_fit returns two outputs, the fit parameters, and the covariance matrix. We won't use the covariance matrix yet, but it's good practice to save it into a variable. Step10: We can see the parameters are a reasonable match to the pure function we created above. Next, we want to create a "best fit" data set but using the parameters in the model function func. The "splat" operator is handy for this, it unpacks the parameters array into function arguments a, b, and c. Step11: Looks pretty good as far as fits go. Let's check out the error Step12: To further illustrate the variation in this fit, repeat all the cells (to get new random noise in the data) and you'll see the fit changes. Sometimes, the error is as large as 10%. Compare this to a linear fit of log data and I bet you see much less variation in the fit! Modeling by rescaling data The "fit a line to anything" approach "With a small enough data set, you can always fit it to a line" Step13: Now to finally back out the exponential from the linear fit Step14: Clearly the tail is a bit off, the next iteration is to average the tail end and use that as the y shift instead of using just the last point.
Python Code: import matplotlib.pyplot as plt import numpy as np %matplotlib inline Explanation: Curve fitting in python A.M.C. Dawes - 2015 An introduction to various curve fitting routines useful for physics work. The first cell is used to import additional features so they are available in our notebook. matplotlib provides plotting functions and numpy provides math and array functions. End of explanation x = np.linspace(0,10,100) Explanation: Next we define x as a linear space with 100 points that range from 0 to 10. End of explanation y = 1.45 * x + 1.3*np.random.random(len(x)) plt.plot(x,y,".") Explanation: y is mock data that we create by linear function with a slope of 1.45. We also add a small amount of random data to simulate noise as if this were a measured quantity. End of explanation # execute the fit on the data; a 1-dim fit (line) fit = np.polyfit(x, y, 1,full=True) Explanation: The data is pretty clearly linear, but we can fit a line to determine the slope. A 1st order polynomial is a line, so we use polyfit: End of explanation print("coeffients:", fit[0]) print("residuals:", fit[1]) print("rank:", fit[2]) print("singular_values:", fit[3]) print("rcond:", fit[4]) Explanation: The fit is stored in a variable called fit which has several elements. We can print them out with nice labels using the following cell: End of explanation f = np.poly1d(fit[0]) # create a function using the fit parameters plt.plot(x,y) plt.plot(x,f(x)) Explanation: The main thing we want is the list of coefficients. These are the values in the polynomial that was a best fit. We can create a function (called f) that is the best fit polynomial. Then it is easy to plot both together and see that the fit is reasonable. End of explanation from scipy.optimize import curve_fit Explanation: General function fitting For more than just polynomials "When choosing a fit, Polynomial is almost always the wrong answer" Often there is a better model that describes the data. In most cases this is a known function; something like a power law or an exponential. In these cases, there are two options: 1. Convert the variables so that a plot will be linear (i.e. plot the log of your data, or the square root, or the square etc.). This is highly effective becuase a linear fit is always (yes always) more accurate than a fit of another function. 2. Perform a nonlinear fit to the function that models your data. We'll illustrate this below and show how even a "decent" fit gives several % error. First, we import the functions that do nonlinear fitting: End of explanation def func(x, a, b, c): return a * np.exp(-b * x) + c Explanation: Then define a function that we expect models our system. In this case, exponential decay with an offset. End of explanation y = func(x, 2.5, 0.6, 0.5) ydata = y + 0.2 * np.random.normal(size=len(x)) Explanation: Create a pure (i.e. exact) set of data with some parameters, and then simulate some data of the same system (by adding random noise). End of explanation parameters, covariance = curve_fit(func, x, ydata) parameters #the fit results for a, b, c Explanation: Now carry out the fit. curve_fit returns two outputs, the fit parameters, and the covariance matrix. We won't use the covariance matrix yet, but it's good practice to save it into a variable. 
End of explanation yfit = func(x, *parameters) # the splat operator unpacks an array into function arguments plt.plot(x,ydata,".") plt.plot(x,yfit) plt.plot(x,y) Explanation: We can see the parameters are a reasonable match to the pure function we created above. Next, we want to create a "best fit" data set but using the parameters in the model function func. The "splat" operator is handy for this, it unpacks the parameters array into function arguments a, b, and c. End of explanation plt.plot(x,((yfit-y)/y)*100) plt.title("Fit error %") Explanation: Looks pretty good as far as fits go. Let's check out the error: End of explanation ylog = np.log(ydata[:25] - ydata[-1]) plt.plot(x[:25],ylog,".") fitlog = np.polyfit(x[:25], ylog[:25], 1,full=True) fitlog ylog.shape flog = np.poly1d(fitlog[0]) plt.plot(x[:25],ylog) plt.plot(x[:25],flog(x[:25])) Explanation: To further illustrate the variation in this fit, repeat all the cells (to get new random noise in the data) and you'll see the fit changes. Sometimes, the error is as large as 10%. Compare this to a linear fit of log data and I bet you see much less variation in the fit! Modeling by rescaling data The "fit a line to anything" approach "With a small enough data set, you can always fit it to a line" End of explanation ylogfit = np.exp(flog(x)) plt.plot(x,ylogfit+ydata[-1]) plt.plot(x,ydata) Explanation: Now to finally back out the exponential from the linear fit: End of explanation yshift = np.average(ydata[-20:]) yshift ylog = np.log(ydata[:25] - yshift) fitlog = np.polyfit(x[:25], ylog[:25], 1,full=True) flog = np.poly1d(fitlog[0]) plt.plot(x[:25],ylog) plt.plot(x[:25],flog(x[:25])) ylogfit = np.exp(flog(x)) plt.plot(x,ylogfit+yshift) plt.plot(x,ydata) Explanation: Clearly the tail is a bit off, the next iteration is to average the tail end and use that as the y shift instead of using just the last point. End of explanation
15,913
Given the following text description, write Python code to implement the functionality described below step by step Description: Multithreading, Multiprocessing, and Subprocessing - Chris Sterling This is a primer to help you get started, not an all-encompassing guide I'll be posting this, so no need to frantically write code Here and there, I had to do some magic behind the scenes so that IPython notebook gave expected behavior - look at the notebook for more info A function Step1: Suppose we have a CPU bound function Step2: Concurrency What if we want to split the work? Libraries threading multiprocessing subprocessing Threads Wrap around C's PThread library Significantly slowed down significantly because of the GIL 2 threads - 1/2 the time? ...Sometimes... Basic operations Step3: More Threads, More Power! Spawn 10 threads all counting 1/10th the time...right? Step4: Uh...What? What it's supposed to look like Step5: Fork the work Step6: Multiprocessing Step7: Modified from http Step8: A Comparison Step9: But they do the same thing! ...Don't they? Something I personally still struggle with is when to use which Downside of processing is much more gets copied Remember, this is for threading in Python, not just in C When do I use which? Processes copy resources (heavier) Threads share resources (lighter) Depends on what you're trying to do Process Thread Then there's subprocess Subprocess doesn't carry around as much as multiprocess Carries the environment, but doesn't copy the active variables Great for calling external commands Basic Example Step10: Weakness .check_output is 'blocking' What if we wanted to get the data back concurrently though? The subprocess module is for spawning processes and doing things with their input/output - not for running functions. We can use Popen() Non-blocking Basic Example Step11: Communication Between Processes Use a pipe Shell pipe example Step12: Remember to always reap your zombie children Processes Use .kill() or .terminate() for processes Threads sys.exit() does not stop other threads unless they're in daemon mode daemon makes it easy for us so we don't have to explicitly exit threads (unless you're in iPython...) Used for background tasks Thread status gets ignored There is no true .kill() mechanism Can set t.Event() Example Step13: Challenges of Concurrency Race conditions - Things not executing in the correct order (solve with Queue!) Deadlock - A bunch of things all fighting over the same resources How do we solve? Program design (use thread safe packages) Mutex! Race condition example Step14: Mutex visualized Based on http Step15: Modified from http
Python Code: def count_down(n): while n > 0: n-=1 COUNT = 500000000 Explanation: Multithreading, Multiprocessing, and Subprocessing - Chris Sterling This is a primer to help you get started, not an all-encompassing guide I'll be posting this, so no need to frantically write code Here and there, I had to do some magic behind the scenes so that IPython notebook gave expected behavior - look at the notebook for more info A function End of explanation import time start = time.time() count_down(COUNT) end = time.time() nothing = end-start print(nothing) Explanation: Suppose we have a CPU bound function End of explanation from threading import Thread # Setup the worker functions t1 = Thread(target=count_down, args=(COUNT//2,)) t2 = Thread(target=count_down, args=(COUNT//2,)) start = time.time() t1.start; t2.start() t1.join; t2.join() end = time.time() two_thread = end-start print "Nothing:\t%.4f" % nothing print "2 Threads:\t%.4f" % two_thread Explanation: Concurrency What if we want to split the work? Libraries threading multiprocessing subprocessing Threads Wrap around C's PThread library Significantly slowed down significantly because of the GIL 2 threads - 1/2 the time? ...Sometimes... Basic operations: Thread(target=function_pointer, args=(fxn_arguments)) .start() .join() End of explanation NUM_THREADS = 10 threads = [Thread(target=count_down, args=(COUNT//NUM_THREADS,)) for x in range(NUM_THREADS)] start = time.time() # Run thread pool for t in threads: t.start() # Wait for the completed threads to exit for t in threads: t.join() end = time.time() multi_thread = end - start print "Nothing:\t%.4f" % nothing print "2 Threads:\t%.4f" % two_thread print "%d Threads:\t%.4f" % (NUM_THREADS, multi_thread) Explanation: More Threads, More Power! Spawn 10 threads all counting 1/10th the time...right? End of explanation import os a = 0 newpid = os.fork() if newpid == 0: print 'A new child (%d) is born!' % os.getpid( ) a+=1 print "The value of a in the child is %d" % a os._exit(0) else: pids = (os.getpid(), newpid) time.sleep(5) print "parent: %d, child: %d" % pids print "The value of a in the parent is %d" % a Explanation: Uh...What? What it's supposed to look like: What it actually looks like What it actually looks like The CPython GIL (Global Interpreter Lock) What if we want true concurrency though? In order to have a truly multi-threaded program, you need a multi-threaded language CPython 2 can only execute one line of code at a time because it passes around a single-threaded resource to access the Global Interpreter Only one line of code can really be executed because of this Forking! 
The C-like way of spawning a child End of explanation import os import time start = time.time() newpid = os.fork() if newpid==0: count_down(COUNT/2) os._exit(0) else: count_down(COUNT/2) end = time.time() forking = end-start print "Nothing:\t%.4f" % nothing print "Forking:\t%.4f" % forking Explanation: Fork the work: End of explanation import multiprocessing as mp import random import string start = time.time() p1 = mp.Process(target=count_down, args=(COUNT//2,)) p2 = mp.Process(target=count_down, args=(COUNT//2,)) p1.start(); p2.start() p1.join(); p2.join() end = time.time() two_proc = end - start print "Nothing:\t%.4f" % nothing print "Forking:\t%.4f" % forking print "2 Processes:\t%.4f" % two_proc Explanation: Multiprocessing End of explanation NUM_PROCS = 10 processes = [mp.Process(target=count_down, args=(COUNT//NUM_PROCS,)) for x in range(NUM_PROCS)] start = time.time() # Run processes for p in processes: p.start() # Wait for the completed processes to exit for p in processes: p.join() end = time.time() multi_proc = end - start print "Nothing:\t%.4f" % nothing print "Forking:\t%.4f" % forking print "2 Processes:\t%.4f" % two_proc print "%d Processes:\t%.4f" % (NUM_PROCS, multi_proc) Explanation: Modified from http://sebastianraschka.com/Articles/2014_multiprocessing_intro.html#Multi-Threading-vs.-Multi-Processing More processes...more power? Spawn 10 processes all counting End of explanation print "Nothing:\t%.4f" % nothing print "2 Threads:\t%.4f" % two_thread print "%d Threads:\t%.4f" % (NUM_THREADS, multi_thread) print "Forking:\t%.4f" % forking print "2 Processes:\t%.4f" % two_proc print "%d Processes:\t%.4f" % (NUM_PROCS, multi_proc) Explanation: A Comparison End of explanation import subprocess ls_output = subprocess.check_output(['ls']) print(ls_output) Explanation: But they do the same thing! ...Don't they? Something I personally still struggle with is when to use which Downside of processing is much more gets copied Remember, this is for threading in Python, not just in C When do I use which? Processes copy resources (heavier) Threads share resources (lighter) Depends on what you're trying to do Process Thread Then there's subprocess Subprocess doesn't carry around as much as multiprocess Carries the environment, but doesn't copy the active variables Great for calling external commands Basic Example: End of explanation from subprocess import Popen, PIPE import sys command = "ls -l".split(" ") proc = Popen(command, cwd='/', stdout=PIPE, stderr=PIPE) while True: out = proc.stdout.read(1) if out == '' and proc.poll() != None: break if out != '': sys.stdout.write(out) sys.stdout.flush() Explanation: Weakness .check_output is 'blocking' What if we wanted to get the data back concurrently though? The subprocess module is for spawning processes and doing things with their input/output - not for running functions. 
We can use Popen() Non-blocking Basic Example End of explanation from Queue import Queue from threading import Thread, current_thread, Event import scraping q = Queue() # Pop something out of the queue def worker(stop_event): while not stop_event.is_set(): url = q.get() # Internal mutex in queue print "Thread %s - Downloading '%s...%s'" % (current_thread().name, url[:7], url[len(url)-50:len(url)]) # Tell the queue you are done with the item # When queue is zero-ed, queue.join no longer blocks q.task_done() # Spawn 10 threads NUM_THREADS = 10 stop_event = Event() threads = [Thread(target=worker, name=x, args=(stop_event,)) for x in range(NUM_THREADS)] # Run threads for t in threads: t.daemon = True t.start() # Put the scraped URLS into the queue for url in scraping.main(): print "Putting into queue" q.put(url) # Wait for the queue to be empty q.join() print "The queue is now empty" stop_event.set() Explanation: Communication Between Processes Use a pipe Shell pipe example: ls -l | grep "a" Or use a queue! Queues! First In First Out Allow for producer/consumer relationship Natively thread-safe in Python Basic operations: q.put(a_variable) a_variable = q.get() - Locks the queue q.task_done() - Unlocks the queue End of explanation import atexit procs = [] # No matter what happens, kill all the remaining processes @atexit.register def kill_subprocesses(): for proc in procs: proc.kill() Explanation: Remember to always reap your zombie children Processes Use .kill() or .terminate() for processes Threads sys.exit() does not stop other threads unless they're in daemon mode daemon makes it easy for us so we don't have to explicitly exit threads (unless you're in iPython...) Used for background tasks Thread status gets ignored There is no true .kill() mechanism Can set t.Event() Example: while not stop_event: do_stuff() The entire Python program exits when no alive non-daemon threads are left. End of explanation from threading import Thread, RLock mutex = RLock() def processData(fp): mutex.acquire() fp.write("Some data") mutex.release() with open('a_new_file', 'w') as fp: t = Thread(target = processData, args = (fp,)) t.start() t.join() Explanation: Challenges of Concurrency Race conditions - Things not executing in the correct order (solve with Queue!) Deadlock - A bunch of things all fighting over the same resources How do we solve? Program design (use thread safe packages) Mutex! Race condition example: - what if order matters? Take the ATM example: Your bank account is at 0 (like mine) You have a check for 100 Need to wait for it to clear before paying a bill Deadlock example: What if two threads want to modify the same variable at the same time? Can lock up your program because system just doesn't know what to do with them Mute...what? Mutex – a mechanism for protecting chunks of code that aren't necessarily thread-safe Usage: Lock() RLock() mutex.acquire() mutex.release() Lock() v RLock() Another difference is that an acquired Lock can be released by any thread, while an acquired RLock can only be released by the thread which acquired it. 
End of explanation from PIL import Image from IPython.display import Image as I from IPython.display import display import threading w = 512 # image width h = 512 # image height image = Image.new("RGB", (w, h)) wh = w * h maxIt = 12 # max number of iterations allowed # drawing region (xa < xb & ya < yb) xa = -2.0 xb = 1.0 ya = -1.5 yb = 1.5 xd = xb - xa yd = yb - ya numThr = 5 # number of threads to run # lock = threading.Lock() class ManFrThread(threading.Thread): def __init__ (self, k): self.k = k threading.Thread.__init__(self) def run(self): # each thread only calculates its own share of pixels for i in range(k, wh, numThr): kx = i % w ky = int(i / w) a = xa + xd * kx / (w - 1.0) b = ya + yd * ky / (h - 1.0) x = a y = b for kc in range(maxIt): x0 = x * x - y * y + a y = 2.0 * x * y + b x = x0 if x * x + y * y > 4: # various color palettes can be created here red = (kc % 8) * 32 green = (16 - kc % 16) * 16 blue = (kc % 16) * 16 # lock.acquire() global image image.putpixel((kx, ky), (red, green, blue)) # lock.release() break tArr = [] for k in range(numThr): # create all threads tArr.append(ManFrThread(k)) for k in range(numThr): # start all threads tArr[k].start() for k in range(numThr): # wait until all threads finished tArr[k].join() image.save("MandelbrotFractal.png", "PNG") i = I(filename='MandelbrotFractal.png') display(i) Explanation: Mutex visualized Based on http://sebastianraschka.com/Articles/2014_multiprocessing_intro.html A Threading Example Make a fractal Building image concurrently End of explanation from subprocess import Popen, PIPE from threading import Thread from Queue import Queue, Empty io_q = Queue() # Sticks anything from the pipe into a queue def stream_watcher(identifier, stream): for line in stream: io_q.put((identifier, line)) if not stream.closed: stream.close() # Takes things out of the queue and prints them to the screen as they come def printer(): while True: try: # Block for 1 second. item = io_q.get(True, 1) except Empty: # No output in either streams for a second. Are we done? if proc.poll() is not None: break else: identifier, line = item print identifier + ':', line command = "ls -l".split(" ") proc = Popen(command, cwd='/usr/local/bin', stdout=PIPE, stderr=PIPE) stdout_t = Thread(target=stream_watcher, args=('STDOUT', proc.stdout)) stderr_t = Thread(target=stream_watcher, args=('STDERR', proc.stderr)) stdout_t.daemon = True; stderr_t.daemon = True stdout_t.start(); stderr_t.start() print_t = Thread(target=printer) print_t.daemon = True print_t.start() Explanation: Modified from http://code.activestate.com/recipes/577680-multi-threaded-mandelbrot-fractal/ Pipe AND queue communication Let's get crazy Subprocess to pipe to threads End of explanation
15,914
Given the following text description, write Python code to implement the functionality described below step by step Description: <CENTER> <p><font size="5"> Queuing theory Step1: 2) The discrete valued random variable $X$ follows a Poisson distribution if its probabilities depend on a parameter $\lambda$ and are such that $$ P(X=k)=\dfrac{\lambda^k}{k!}e^{-\lambda},\quad{\text for }\; k=0,1,2,\ldots $$ We denote by $\mathcal{P}(\lambda)$ the Poisson distribution with parameter $\lambda$. As for continuous valued distributions, samples from discrete distributions can be obtained via the inverse transform sampling method. Alternatively, one can use the statistics sublibrary of Scipy to draw samples from the a Poisson distribution (https Step2: Your answers for the exercise
Python Code: %matplotlib inline from pylab import * N = 10**5 lambda_ = 2.0 ######################################## # Supply the missing coefficient herein below V1 = -1.0/lambda_ data = V1*log(rand(N)) ######################################## m = mean(data) v = var(data) print("\u03BB={0}: m={1:1.2f}, \u03C3\u00B2={2:1.2f}" .format(lambda_,m,v)) #\u... for unicode caracters Explanation: <CENTER> <p><font size="5"> Queuing theory: from Markov chains to multiserver systems</font></p> <p><font size="5"> Python Lab </p> <p><font size="5"> Week I: simulation of random variables </p> </CENTER> In this first lab, we are going to introduce basics of random variables simulation, focusing on the simulation of exponential and Poisson distributions, that play a central role in mathematical modelling of queues. We will see how to draw samples from distributions by the inverse transform sampling method or by using the Statistics sublibrary of Scipy. We will use the inverse transform sampling method to draw samples from the exponential distribution. Then we will introduce the Poisson distribution. As explained in the general introduction to the labs (Week 0), to complete a lab, you will have to fill in undefined variables in the code. Then, the code will generate some variables named Vi, with i=1,... . You will find all the Vis generated form your results by running the last cell of code of the lab. You can check your results by answering to the exercise at the end of the lab section where you will be asked for the values of the Vis. Let $F(x)=P(X\leq x)$ denote the distribution function of some random variable $X$. When $F$ is continuous and strictly monotone increasing on the domain of $X$, then the random variable $U=F(X)$ with values in $[0,1]$ satisfies $$ P(U\leq u)=P(F(X)\leq u)=P(X\leq F^{-1}(u))=F(F^{-1}(u))=u,\qquad \forall u\in[0,1]. $$ Thus, $U$ is a uniform random variable over [0,1], what we note $U\sim\mathcal{U}{[0,1]}$. In other words, for all $a,b$, with $0\leq a\leq b\leq 1$, then $P(U\in[a,b])=b-a$. Conversly, the distribution function of the random variable $Y=F^{-1}(U)$ is $F$ when $U\sim\mathcal{U}{[0,1]}$. 1) Based on the above results explain how to draw samples from an $Exp(\lambda)$ distribution. Draw $N=10^5$ samples of an $Exp(\lambda)$ distribution, for $\lambda=2$. Calculate the mean $m$ and variance $\sigma^2$ of an $Exp(\lambda)$ distribution and compute the sample estimates of $m$ and $\sigma^2$. End of explanation from scipy.stats import poisson lambda_ = 20 N = 10**5 #################################### # Give parameters mu and size in function poisson.rvs # (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.poisson.html) sample = poisson.rvs(mu= lambda_, size= N) #################################### # mean and variance of sample vector mean_sample = mean(sample) var_sample = var(sample) print(("\u03BB = {0}\nestimated mean = {1:1.2f}\n" +"estimated var = {2:1.2f}") .format(lambda_,mean_sample, var_sample)) #------------------------ V2 = mean_sample Explanation: 2) The discrete valued random variable $X$ follows a Poisson distribution if its probabilities depend on a parameter $\lambda$ and are such that $$ P(X=k)=\dfrac{\lambda^k}{k!}e^{-\lambda},\quad{\text for }\; k=0,1,2,\ldots $$ We denote by $\mathcal{P}(\lambda)$ the Poisson distribution with parameter $\lambda$. As for continuous valued distributions, samples from discrete distributions can be obtained via the inverse transform sampling method. 
Alternatively, one can use the statistics sublibrary of Scipy to draw samples from the a Poisson distribution (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.poisson.html). Draw $N=10^5$ samples from the Poisson distribution with parameter $\lambda = 20$ and compute their sample mean and variance. Check that they are close to their theoretical values that are both equal to $\lambda$ Answer to question 2 End of explanation print("---------------------------\n" +"RESULTS SUPPLIED FOR LAB 1:\n" +"---------------------------") results = ("V"+str(k) for k in range(1,3)) for x in results: try: print(x+" = {0:.2f}".format(eval(x))) except: print(x+": variable is undefined") Explanation: Your answers for the exercise End of explanation
15,915
Given the following text description, write Python code to implement the functionality described below step by step Description: Setup $ mkvirtualenv aws_name_similarity $ pip install --upgrade pip $ pip install jellyfish jupyter scipy matplotlib $ jupyter notebook Step1: # Testing it out Step2: With real AWS service names
Python Code: from itertools import combinations import jellyfish from scipy.cluster import hierarchy import numpy as np import matplotlib.pyplot as plt Explanation: Setup $ mkvirtualenv aws_name_similarity $ pip install --upgrade pip $ pip install jellyfish jupyter scipy matplotlib $ jupyter notebook End of explanation # Strings to compare strs = [u"MARTHA", u"MARHTA", u"DWAYNE", u"DUANE", u"DIXON", u"DICKSONX"] # Calculating Jaro similarity and converting to distance (use Jaro-Winkler below) jaro_dists = [1 - jellyfish.jaro_distance(x,y) for x,y in combinations(strs, 2)] jaro_dists # Plot it ytdist = np.array(jaro_dists) Z = hierarchy.linkage(ytdist, 'single') plt.figure() hierarchy.set_link_color_palette(['m', 'c', 'y', 'k']) dn = hierarchy.dendrogram(Z, above_threshold_color='#bcbddc', orientation='left', labels=strs) hierarchy.set_link_color_palette(None) # reset to default after use plt.show() Explanation: # Testing it out End of explanation # I copied these from the AWS console. If anyone knows the AWS API endpoint to get an equivalent list, let me know! strs = [ u"API Gateway", u"Application Discovery Service", u"AppStream", u"AppStream 2.0", u"Athena", u"AWS IoT", u"Certificate Manager", u"CloudFormation", u"CloudFront", u"CloudSearch", u"CloudTrail", u"CloudWatch", u"CodeBuild", u"CodeCommit", u"CodeDeploy", u"CodePipeline", u"Cognito", u"Compliance Reports", u"Config", u"Data Pipeline", u"Device Farm", u"Direct Connect", u"Directory Service", u"DMS", u"DynamoDB", u"EC2", u"EC2 Container Service", u"Elastic Beanstalk", u"Elastic File System", u"Elastic Transcoder", u"ElastiCache", u"Elasticsearch Service", u"EMR", u"GameLift", u"Glacier", u"IAM", u"Inspector", u"Kinesis", u"Lambda", u"Lex", u"Lightsail", u"Machine Learning", u"Managed Services", u"Mobile Analytics", u"Mobile Hub", u"OpsWorks", u"Pinpoint", u"Polly", u"QuickSight", u"RDS", u"Redshift", u"Rekognition", u"Route 53", u"S3", u"Server Migration", u"Service Catalog", u"SES", u"Snowball", u"SNS", u"SQS", u"Step Functions", u"Storage Gateway", u"SWF", u"Trusted Advisor", u"VPC", u"WAF & Shield", u"WorkDocs", u"WorkMail", u"WorkSpaces" ] # Calculate similarity and convert to distance jaro_dists = [1 - jellyfish.jaro_winkler(x,y) for x,y in combinations(strs, 2)] ytdist = np.array(jaro_dists) Z = hierarchy.linkage(ytdist, 'single') plt.figure(figsize=(6, 10), facecolor='white') # The colors don't mean anything; anything below the color_threshold uses one of these colors plt.suptitle('Jaro-Winkler Similarity of AWS Service Names', y=.94, fontsize=16) plt.title('github.com/agussman | T:@percontate', fontsize=10) hierarchy.set_link_color_palette(['g', 'r', 'm', 'c']) dn = hierarchy.dendrogram(Z, color_threshold=0.25, above_threshold_color='#bcbddc', orientation='left', labels=strs) hierarchy.set_link_color_palette(None) # reset to default after use plt.show() Explanation: With real AWS service names End of explanation
15,916
Given the following text description, write Python code to implement the functionality described below step by step Description: Self-Driving Car Engineer Nanodegree Deep Learning Project Step1: Step 1 Step2: Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include Step3: Step 2 Step4: Model Architecture Train, Validate and Test the Model Step5: A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting. Step6: Evaluate the Model evaluate the performance of the model on the test set. Step7: Step 3 Step8: Predict the Sign Type for Each Image Step9: Analyze Performance Step10: Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image. tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids. Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tk.nn.top_k is used to choose the three classes with the highest probability Step11: Project Writeup Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file. Note
Python Code: # Load pickled data import pickle import cv2 # for grayscale and normalize # TODO: Fill this in based on where you saved the training and testing data training_file ='traffic-signs-data/train.p' validation_file='traffic-signs-data/valid.p' testing_file = 'traffic-signs-data/test.p' with open(training_file, mode='rb') as f: train = pickle.load(f) with open(validation_file, mode='rb') as f: valid = pickle.load(f) with open(testing_file, mode='rb') as f: test = pickle.load(f) X_trainLd, y_trainLd = train['features'], train['labels'] X_validLd, y_validLd = valid['features'], valid['labels'] X_test, y_test = test['features'], test['labels'] #X_trainLd=X_trainLd.astype(float) #y_trainLd=y_trainLd.astype(float) #X_validLd=X_validLd.astype(float) #y_validLd=y_validLd.astype(float) print("Xtrain shape : "+str(X_trainLd.shape)+" ytrain shape : "+str(y_trainLd.shape)) print("Xtrain shape : "+str(X_trainLd.shape)+" ytrain shape : "+str(y_trainLd.shape)) print("X_test shape : "+str(X_test.shape)+" y_test shape : "+str(y_test.shape)) from sklearn.model_selection import train_test_split Explanation: Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition Classifier In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project. The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file. Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Step 0: Load The Data End of explanation ### Replace each question mark with the appropriate value. ### Use python, pandas or numpy methods rather than hard coding the results import numpy as np # TODO: Number of training examples n_train = X_trainLd.shape[0] # TODO: Number of validation examples n_validation = X_validLd.shape[0] # TODO: Number of testing examples. n_test = X_test.shape[0] # TODO: What's the shape of an traffic sign image? image_shape = X_trainLd.shape[1:4] # TODO: How many unique classes/labels there are in the dataset. 
#n_classes = n_train+n_validation+n_test -- this doesn't seem correct 43 in excel file n_classes = 43 print("Number of training examples =", n_train) print("Number of testing examples =", n_test) print("Image data shape =", image_shape) print("Number of classes =", n_classes) Explanation: Step 1: Dataset Summary & Exploration The pickled data is a dictionary with 4 key/value pairs: 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels). 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id. 'sizes' is a list containing tuples, (width, height) representing the original width and height the image. 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas End of explanation import random ### Data exploration visualization code goes here. ### Feel free to use as many code cells as needed. import matplotlib.pyplot as plt # Visualizations will be shown in the notebook. %matplotlib inline index = random.randint(0, len(X_trainLd)) image = X_trainLd[100] #squeeze : Remove single-dimensional entries from the shape of an array. image = image.astype(float) #normalise def normit(img): size = img.shape[2] imagenorm = cv2.normalize(img, dst =image_shape, alpha=0, beta=25, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8UC1) image = img.astype(float) norm = (image-128.0)/128.0 return norm temp = normit(image) plt.figure(figsize=(1,1)) plt.imshow(temp.squeeze()) Explanation: Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python. NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others? End of explanation ### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include ### converting to grayscale, etc. ### Feel free to use as many code cells as needed. 
import cv2 from sklearn.utils import shuffle print("Test") ## xtrain grey_X_train = np.zeros(shape=[X_trainLd.shape[0],X_trainLd.shape[1],X_trainLd.shape[2]]) norm_X_train = np.zeros(shape=[X_trainLd.shape[0],X_trainLd.shape[1],X_trainLd.shape[2],3]) norm_X_train = norm_X_train.astype(float) X_train, y_train = shuffle(X_trainLd, y_trainLd) shuff_X_train, shuff_y_train =X_train, y_train X_valid, y_valid = X_validLd, y_validLd i=0 for p in X_train: t = normit(p) norm_X_train[i] = t i=i+1 print("after normalise") ##validate norm_X_valid = np.zeros(shape=[X_validLd.shape[0],X_validLd.shape[1],X_validLd.shape[2],3]) norm_X_valid=norm_X_valid.astype(float) i=0 for v in X_valid: tv = normit(v) #tempv = tv.reshape(32,32,1) norm_X_valid[i] = tv i=i+1 ##test norm_X_test=[] norm_X_test = np.zeros(shape=[X_test.shape[0],X_test.shape[1],X_test.shape[2],3]) norm_X_test=norm_X_test.astype(float) i=0 for testim in X_test: tt = normit(testim) norm_X_test[i] = tt i=i+1 print("fin") image22 = norm_X_train[110] ; imageb4 = X_train[110]; imagev=norm_X_valid[100]; imaget=norm_X_test[100] plt.figure(figsize=(1,1)) plt.imshow(imagev.squeeze()) plt.figure(figsize=(1,1)) plt.imshow(imaget.squeeze()) #squeeze : Remove single-dimensional entries from the shape of an array Explanation: Step 2: Design and Test a Model Architecture Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset. The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem: Neural network architecture (is the network over or underfitting?) Play around preprocessing techniques (normalization, rgb to grayscale, etc) Number of examples per label (some have more than others). Generate fake data. Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. End of explanation ### Define your architecture here. ### Feel free to use as many code cells as needed. import tensorflow as tf EPOCHS = 30 BATCH_SIZE = 128 #SMcM change to 256 from 128 #X_train=X_train.astype(float) X_train=norm_X_train #print(X_train[20]) #X_train=shuff_X_train #X_valid=norm_X_valid from tensorflow.contrib.layers import flatten def LeNet(x): # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer mu = 0.0 sigma = 0.1 #SMcM changed from 0.1 to 0.2 # SOLUTION: Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6. 
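    # With a 5x5 kernel, stride 1 and VALID padding, the output spatial size is
    # (32 - 5)/1 + 1 = 28, which is where the 28x28x6 shape above comes from.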
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5,3, 6), mean = mu, stddev = sigma)) #SMcM depth cahnged to 3 conv1_b = tf.Variable(tf.zeros(6)) conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b #try same should be better (padding) # SOLUTION: Activation. conv1 = tf.nn.relu(conv1) #conv1 = tf.nn.relu(conv1) #SMcM add an extra relu # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6. conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # SOLUTION: Layer 2: Convolutional. Output = 10x10x16. conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma)) conv2_b = tf.Variable(tf.zeros(16)) conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b # SOLUTION: Activation. conv2 = tf.nn.relu(conv2) # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16. conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # SOLUTION: Flatten. Input = 5x5x16. Output = 400. fc0 = flatten(conv2) # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120. fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma)) fc1_b = tf.Variable(tf.zeros(120)) fc1 = tf.matmul(fc0, fc1_W) + fc1_b # SOLUTION: Activation. fc1 = tf.nn.relu(fc1) # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84. fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma)) fc2_b = tf.Variable(tf.zeros(84)) fc2 = tf.matmul(fc1, fc2_W) + fc2_b # SOLUTION: Activation. fc2 = tf.nn.relu(fc2) # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43. fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma)) fc3_b = tf.Variable(tf.zeros(43)) logits = tf.matmul(fc2, fc3_W) + fc3_b return logits print("model") image22 = X_train[110] #squeeze : Remove single-dimensional entries from the shape of an array print(norm_X_train.shape) print(X_train.shape) plt.figure(figsize=(1,1)) plt.imshow(image22.squeeze()) #print(image22) Explanation: Model Architecture Train, Validate and Test the Model End of explanation ### Train your model here. ### Calculate and report the accuracy on the training and validation set. ### Once a final model architecture is selected, ### the accuracy on the test set should be calculated and reported as well. ### Feel free to use as many code cells as needed. 
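# Sketch (assumes you reuse the evaluate() helper defined further down in this
# cell): training-set accuracy can be reported next to validation accuracy each
# epoch, e.g. on a fixed subsample to keep the per-epoch cost down:
#   train_accuracy = evaluate(norm_X_train[:5000], y_train[:5000])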
#Features and Labels x = tf.placeholder(tf.float32, (None, 32, 32, 3)) y = tf.placeholder(tf.int32, (None)) one_hot_y = tf.one_hot(y, 43) print("start") #Training Pipeline rate = 0.0025 # SMCM decreased rate to .0008 from 0.001 logits = LeNet(x) cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits) loss_operation = tf.reduce_mean(cross_entropy) optimizer = tf.train.AdamOptimizer(learning_rate = rate) training_operation = optimizer.minimize(loss_operation) correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1)) accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) saver = tf.train.Saver() #Model Evaluation def evaluate(X_data, y_data): num_examples = len(X_data) total_accuracy = 0 sess = tf.get_default_session() for offset in range(0, num_examples, BATCH_SIZE): batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE] accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y}) total_accuracy += (accuracy * len(batch_x)) return total_accuracy / num_examples #Train the Model with tf.Session() as sess: sess.run(tf.global_variables_initializer()) num_examples = len(X_train) print("Training...") print() for i in range(EPOCHS): X_train, y_train = shuffle(X_train, y_train) for offset in range(0, num_examples, BATCH_SIZE): end = offset + BATCH_SIZE batch_x, batch_y = X_train[offset:end], y_train[offset:end] sess.run(training_operation, feed_dict={x: batch_x, y: batch_y}) validation_accuracy = evaluate(norm_X_valid, y_valid) print("EPOCH {} ...".format(i+1)) print("Validation Accuracy = {:.3f}".format(validation_accuracy)) print() saver.save(sess, './sign') print("Model saved") Explanation: A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting. End of explanation #evaluate the model with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) print("restored") test_accuracy = evaluate(norm_X_test, y_test) print("Test Accuracy = {:.3f}".format(test_accuracy)) Explanation: Evaluate the Model evaluate the performance of the model on the test set. End of explanation ### Load the images and plot them here. ### Feel free to use as many code cells as needed. #http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset #http://benchmark.ini.rub.de/Dataset/GTSRB_Online-Test-Images.zip Explanation: Step 3: Test a Model on New Images To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type. You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images End of explanation ### Run the predictions here and use the model to output the prediction for each image. ### Make sure to pre-process the images with the same pre-processing pipeline used earlier. ### Feel free to use as many code cells as needed. Explanation: Predict the Sign Type for Each Image End of explanation ### Calculate the accuracy for these 5 new images. ### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images. Explanation: Analyze Performance End of explanation ### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. 
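# Minimal sketch for this cell (the `web_images` array is hypothetical: five web
# images resized to 32x32x3 and normalised the same way as the training data):
#   softmax_probs = tf.nn.softmax(logits)
#   top5 = tf.nn.top_k(softmax_probs, k=5)
#   with tf.Session() as sess:
#       saver.restore(sess, tf.train.latest_checkpoint('.'))
#       values, indices = sess.run(top5, feed_dict={x: web_images})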
### Feel free to use as many code cells as needed. Explanation: Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image. tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids. Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tk.nn.top_k is used to choose the three classes with the highest probability: ``` (5, 6) array a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]]) ``` Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces: TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32)) Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices. End of explanation ### Visualize your network's feature maps here. ### Feel free to use as many code cells as needed. 
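# Note (an implementation detail to watch for, not part of the template): conv1
# and conv2 are local variables inside LeNet(), so to pass one of them as
# tf_activation below you would first need to expose it, e.g. by also returning
# it from LeNet() or by fetching the named tensor from tf.get_default_graph().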
# image_input: the test image being fed into the network to produce the feature maps # tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer # activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output # plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1): # Here make sure to preprocess your image_input in a way your network expects # with size, normalization, ect if needed # image_input = # Note: x should be the same name as your network's tensorflow data placeholder variable # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function activation = tf_activation.eval(session=sess,feed_dict={x : image_input}) featuremaps = activation.shape[3] plt.figure(plt_num, figsize=(15,15)) for featuremap in range(featuremaps): plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number if activation_min != -1 & activation_max != -1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray") elif activation_max != -1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray") elif activation_min !=-1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray") else: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray") Explanation: Project Writeup Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file. Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission. Step 4 (Optional): Visualize the Neural Network's State with Test Images This Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. 
The inputs to the function are a stimulus image (one used during training, or a new one you provide) and the TensorFlow variable that represents the layer's state during the training process; for instance, if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer, you could pass conv2 as the tf_activation variable. For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars, in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try a similar experiment to show that your trained network's weights are looking for interesting features: compare feature maps from images with and without a sign, or compare the feature maps of a trained network against those of a completely untrained one on the same sign image.
<figure>
 <img src="visualize_cnn.png" width="380" alt="Combined Image" />
 <figcaption>
 <p></p>
 <p style="text-align: center;"> Your output should look something like this (above)</p>
 </figcaption>
</figure>
 <p></p>
End of explanation
15,917
Given the following text description, write Python code to implement the functionality described below step by step Description: Python 浮点数运算 浮点数用来存储计算机中的小数,与现实世界中的十进制小数不同的是,浮点数通过二进制的形式来表示一个小数。在深入了解浮点数的实现之前,先来看几个 Python 浮点数计算有意思的例子: Step1: IEEE 浮点数表示法 这些看起来违反常识的“错误”并非 Python 的错,而是由浮点数的规则所决定的,即使放到其它语言中结果也是这样的。要理解计算机中浮点数的表示规则,先来看现实世界中十进制小数是如何表示的: 1.234 = 1 + 1/10 + 2/100 + 3/1000 可以用下面的公式来表示: $$d = \sum_{i=-n}^m10^i*d_i$$ 其中 $d_i$ 是十进制中 0~9 的数字。而如果是一个二进制的小数: 1.001 = 1 + 0/2 + 0/4 + 1/8 可以用下面的公式来表示: $$d = \sum_{i=-n}^m2^i*d_i$$ 其中 $d_i$ 是二进制中的 0 或 1。Python 中的浮点数都是双精度的,也就说采用 64 位来表示一个小数,那这 64 位分别有多少用来表示整数部分和小数部分呢?根据 IEEE 标准,考虑到符号位,双精度表示法是这样分配的: $$d = s * \sum_{i=-52}^{11} 2^i*d_i$$ 也就是说用1位表示符号位,11位表示整数部分,52位表示小数部分。正如十进制中我们无法精确表示某些分数(如10/3),浮点数中通过 d1/2 + d2/4 + ... 的方式也会出现这种情况,比如上面的例子中,十进制中简单的 0.1 就无法在二进制中精确描述,而只能通过近似表示法表示出来: Step2: 也就是说 0.1 是通过 3602879701896397/36028797018963968 来近似表示的,很明显这样近似的表示会导致许多差距很小的数字公用相同的近似表示数,例如: Step3: 在 Python 中所有这些可以用相同的近似数表示的数字统一采用最短有效数字来表示: Step4: 浮点数运算 既然有些浮点数是通过近似值表示的,那么在计算过程中就很容易出现误差,就像最开始的第二个例子一样: Step5: 为了解决运算中的问题,IEEE 标准还指定了一个舍入规则(round),即 Python 中内置的 round 方法,我们可以通过舍入的方式取得两个数的近似值,来判断其近似值是否相等: Step6: 当然这种舍入的方式并不一定是可靠的,依赖于舍入的选择的位数,位数太大,就失去了 round 的作用,太小,就会引入别的错误: Step7: Python 中使用更精确的浮点数可以通过 decimal 和 fractions 两个模块,从名字上也能猜到,decimal 表示完整的小数,而 fractions 通过分数的形式表示小数:
Python Code: 0.1 == 0.10000000000000000000001 0.1+0.1+0.1 == 0.3 Explanation: Python 浮点数运算 浮点数用来存储计算机中的小数,与现实世界中的十进制小数不同的是,浮点数通过二进制的形式来表示一个小数。在深入了解浮点数的实现之前,先来看几个 Python 浮点数计算有意思的例子: End of explanation (0.1).as_integer_ratio() Explanation: IEEE 浮点数表示法 这些看起来违反常识的“错误”并非 Python 的错,而是由浮点数的规则所决定的,即使放到其它语言中结果也是这样的。要理解计算机中浮点数的表示规则,先来看现实世界中十进制小数是如何表示的: 1.234 = 1 + 1/10 + 2/100 + 3/1000 可以用下面的公式来表示: $$d = \sum_{i=-n}^m10^i*d_i$$ 其中 $d_i$ 是十进制中 0~9 的数字。而如果是一个二进制的小数: 1.001 = 1 + 0/2 + 0/4 + 1/8 可以用下面的公式来表示: $$d = \sum_{i=-n}^m2^i*d_i$$ 其中 $d_i$ 是二进制中的 0 或 1。Python 中的浮点数都是双精度的,也就说采用 64 位来表示一个小数,那这 64 位分别有多少用来表示整数部分和小数部分呢?根据 IEEE 标准,考虑到符号位,双精度表示法是这样分配的: $$d = s * \sum_{i=-52}^{11} 2^i*d_i$$ 也就是说用1位表示符号位,11位表示整数部分,52位表示小数部分。正如十进制中我们无法精确表示某些分数(如10/3),浮点数中通过 d1/2 + d2/4 + ... 的方式也会出现这种情况,比如上面的例子中,十进制中简单的 0.1 就无法在二进制中精确描述,而只能通过近似表示法表示出来: End of explanation (0.10000000000000001).as_integer_ratio() Explanation: 也就是说 0.1 是通过 3602879701896397/36028797018963968 来近似表示的,很明显这样近似的表示会导致许多差距很小的数字公用相同的近似表示数,例如: End of explanation print(0.10000000000000001) Explanation: 在 Python 中所有这些可以用相同的近似数表示的数字统一采用最短有效数字来表示: End of explanation a = .1 + .1 + .1 b = .3 print(a.as_integer_ratio()) print(b.as_integer_ratio()) print(a == b) Explanation: 浮点数运算 既然有些浮点数是通过近似值表示的,那么在计算过程中就很容易出现误差,就像最开始的第二个例子一样: End of explanation round(a, 10) == round(b, 10) Explanation: 为了解决运算中的问题,IEEE 标准还指定了一个舍入规则(round),即 Python 中内置的 round 方法,我们可以通过舍入的方式取得两个数的近似值,来判断其近似值是否相等: End of explanation print(round(a, 17) == round(b, 17)) print(round(0.1, 1) == round(0.111, 1)) Explanation: 当然这种舍入的方式并不一定是可靠的,依赖于舍入的选择的位数,位数太大,就失去了 round 的作用,太小,就会引入别的错误: End of explanation from decimal import Decimal a = Decimal(0.1) b = Decimal(0.1000000000000001) c = Decimal(0.10000000000000001) print(a) print(b) print(c) a == b == c from fractions import Fraction f1 = Fraction(1, 10) # 0.1 print(float(f1)) f3 = Fraction(3, 10) # 0.3 print(float(f3)) print(f1 + f1 + f1 == f3) Explanation: Python 中使用更精确的浮点数可以通过 decimal 和 fractions 两个模块,从名字上也能猜到,decimal 表示完整的小数,而 fractions 通过分数的形式表示小数: End of explanation
15,918
Given the following text description, write Python code to implement the functionality described below step by step Description: Lab 3 - Multi Layer Perceptron with MNIST This lab corresponds to Module 3 of the "Deep Learning Explained" course. We assume that you have successfully completed Lab 1 (Downloading the MNIST data). In this lab, we train a multi-layer perceptron on MNIST data. This notebook provides the recipe using Python APIs. Introduction Problem We will continue to work on the same problem of recognizing digits in MNIST data. The MNIST data comprises of hand-written digits with little background noise. Step1: Goal Step2: In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available, else CPU). Step3: Data reading There are different ways one can read data into CNTK. The easiest way is to load the data in memory using NumPy / SciPy / Pandas readers. However, this can be done only for small data sets. Since deep learning requires large amount of data we have chosen in this course to show how to leverage built-in distributed readers that can scale to terrabytes of data with little extra effort. We are using the MNIST data you have downloaded using Lab 1 DataLoader notebook. The dataset has 60,000 training images and 10,000 test images with each image being 28 x 28 pixels. Thus the number of features is equal to 784 (= 28 x 28 pixels), 1 per pixel. The variable num_output_classes is set to 10 corresponding to the number of digits (0-9) in the dataset. In Lab 1, the data was downloaded and written to 2 CTF (CNTK Text Format) files, 1 for training, and 1 for testing. Each line of these text files takes the form Step4: <a id='#Model Creation'></a> Model Creation Our multi-layer perceptron will be relatively simple with 2 hidden layers (num_hidden_layers). The number of nodes in the hidden layer being a parameter specified by hidden_layers_dim. The figure below illustrates the entire model we will use in this tutorial in the context of MNIST data. If you are not familiar with the terms hidden_layer and number of hidden layers, please review the module 3 course videos. Each Dense layer (as illustrated below) shows the input dimensions, output dimensions and activation function that layer uses. Specifically, the layer below shows Step5: Network input and output Step6: Multi-layer Perceptron setup The code below is a direct translation of the model shown above. Step7: z will be used to represent the output of a network. We introduced sigmoid function in CNTK 102, in this tutorial you should try different activation functions in the hidden layer. You may choose to do this right away and take a peek into the performance later in the tutorial or run the preset tutorial and then choose to perform the suggested exploration. Suggested Exploration - Record the training error you get with sigmoid as the activation function - Now change to relu as the activation function and see if you can improve your training error Knowledge Check Step8: Training Below, we define the Loss function, which is used to guide weight changes during training. 
As explained in the lectures, we use the softmax function to map the accumulated evidences or activations to a probability distribution over the classes (Details of the softmax function and other activation functions). We minimize the cross-entropy between the label and predicted probability by the network. Step9: Evaluation Below, we define the Evaluation (or metric) function that is used to report a measurement of how well our model is performing. For this problem, we choose the classification_error() function as our metric, which returns the average error over the associated samples (treating a match as "1", where the model's prediction matches the "ground truth" label, and a non-match as "0"). Step10: Configure training The trainer strives to reduce the loss function by different optimization approaches, Stochastic Gradient Descent (sgd) being a basic one. Typically, one would start with random initialization of the model parameters. The sgd optimizer would calculate the loss or error between the predicted label against the corresponding ground-truth label and using gradient-decent generate a new set model parameters in a single iteration. The aforementioned model parameter update using a single observation at a time is attractive since it does not require the entire data set (all observation) to be loaded in memory and also requires gradient computation over fewer datapoints, thus allowing for training on large data sets. However, the updates generated using a single observation sample at a time can vary wildly between iterations. An intermediate ground is to load a small set of observations and use an average of the loss or error from that set to update the model parameters. This subset is called a minibatch. With minibatches we often sample observation from the larger training dataset. We repeat the process of model parameters update using different combination of training samples and over a period of time minimize the loss (and the error). When the incremental error rates are no longer changing significantly or after a preset number of maximum minibatches to train, we claim that our model is trained. One of the key parameter for optimization is called the learning_rate. For now, we can think of it as a scaling factor that modulates how much we change the parameters in any iteration. We will be covering more details in later tutorial. With this information, we are ready to create our trainer. Step11: First let us create some helper functions that will be needed to visualize different functions associated with training. Step12: <a id='#Run the trainer'></a> Run the trainer We are now ready to train our fully connected neural net. We want to decide what data we need to feed into the training engine. In this example, each iteration of the optimizer will work on minibatch_size sized samples. We would like to train on all 60000 observations. Additionally we will make multiple passes through the data specified by the variable num_sweeps_to_train_with. With these parameters we can proceed with training our simple multi-layer perceptron network. Step13: Let us plot the errors over the different training minibatches. Note that as we iterate the training loss decreases though we do see some intermediate bumps. Step14: Evaluation / Testing Now that we have trained the network, let us evaluate the trained network on the test data. This is done using trainer.test_minibatch. 
Step15: Note that this error is very comparable to our training error, indicating that our model has good "out of sample" error, a.k.a. generalization error. This implies that our model deals effectively with observations it never saw during training, which is key to avoiding overfitting.
This is a huge reduction in error compared to multi-class LR (from Lab 02). We have so far been dealing with aggregate measures of error. Let us now get the probabilities associated with individual data points. For each observation, the eval function returns the probability distribution across all the classes. The classifier is trained to recognize digits, hence has 10 classes. First let us route the network output through a softmax function, which maps the aggregated activations across the network to probabilities across the 10 classes. Step16: Let us test a small minibatch sample from the test data. Step17: As you can see above, our model is much better. Do you see any mismatches? Let us visualize one of the test images and its associated label. Do they match?
Python Code: # Figure 1 Image(url= "http://3.bp.blogspot.com/_UpN7DfJA0j4/TJtUBWPk0SI/AAAAAAAAABY/oWPMtmqJn3k/s1600/mnist_originals.png", width=200, height=200) Explanation: Lab 3 - Multi Layer Perceptron with MNIST This lab corresponds to Module 3 of the "Deep Learning Explained" course. We assume that you have successfully completed Lab 1 (Downloading the MNIST data). In this lab, we train a multi-layer perceptron on MNIST data. This notebook provides the recipe using Python APIs. Introduction Problem We will continue to work on the same problem of recognizing digits in MNIST data. The MNIST data comprises of hand-written digits with little background noise. End of explanation from __future__ import print_function # Use a function definition from future version (say 3.x from 2.7 interpreter) import matplotlib.image as mpimg import matplotlib.pyplot as plt import numpy as np import sys import os import cntk as C %matplotlib inline Explanation: Goal: Our goal is to train a classifier that will identify the digits in the MNIST dataset. Additionally, we aspire to achieve lower error rate with Multi-layer perceptron compared to Multi-class logistic regression. Approach: There are 4 stages in this lab: - Data reading: We will use the CNTK Text reader. - Data preprocessing: Covered in part A (suggested extension section). - Model creation: Multi-Layer Perceptron model. - Train-Test-Predict: This is the same workflow introduced in the lectures End of explanation # Select the right target device when this notebook is being tested: if 'TEST_DEVICE' in os.environ: if os.environ['TEST_DEVICE'] == 'cpu': C.device.try_set_default_device(C.device.cpu()) else: C.device.try_set_default_device(C.device.gpu(0)) # Test for CNTK version if not C.__version__ == "2.0": raise Exception("this lab is designed to work with 2.0. Current Version: " + C.__version__) # Ensure we always get the same amount of randomness np.random.seed(0) C.cntk_py.set_fixed_random_seed(1) C.cntk_py.force_deterministic_algorithms() # Define the data dimensions input_dim = 784 num_output_classes = 10 Explanation: In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available, else CPU). End of explanation # Read a CTF formatted text (as mentioned above) using the CTF deserializer from a file def create_reader(path, is_training, input_dim, num_label_classes): return C.io.MinibatchSource(C.io.CTFDeserializer(path, C.io.StreamDefs( labels = C.io.StreamDef(field='labels', shape=num_label_classes, is_sparse=False), features = C.io.StreamDef(field='features', shape=input_dim, is_sparse=False) )), randomize = is_training, max_sweeps = C.io.INFINITELY_REPEAT if is_training else 1) # Ensure the training and test data is generated and available for this tutorial. # We search in two locations in the toolkit for the cached MNIST data set. 
data_found = False for data_dir in [os.path.join("..", "Examples", "Image", "DataSets", "MNIST"), os.path.join("data", "MNIST")]: train_file = os.path.join(data_dir, "Train-28x28_cntk_text.txt") test_file = os.path.join(data_dir, "Test-28x28_cntk_text.txt") if os.path.isfile(train_file) and os.path.isfile(test_file): data_found = True break if not data_found: raise ValueError("Please generate the data by completing Lab1_MNIST_DataLoader") print("Data directory is {0}".format(data_dir)) Explanation: Data reading There are different ways one can read data into CNTK. The easiest way is to load the data in memory using NumPy / SciPy / Pandas readers. However, this can be done only for small data sets. Since deep learning requires large amount of data we have chosen in this course to show how to leverage built-in distributed readers that can scale to terrabytes of data with little extra effort. We are using the MNIST data you have downloaded using Lab 1 DataLoader notebook. The dataset has 60,000 training images and 10,000 test images with each image being 28 x 28 pixels. Thus the number of features is equal to 784 (= 28 x 28 pixels), 1 per pixel. The variable num_output_classes is set to 10 corresponding to the number of digits (0-9) in the dataset. In Lab 1, the data was downloaded and written to 2 CTF (CNTK Text Format) files, 1 for training, and 1 for testing. Each line of these text files takes the form: |labels 0 0 0 1 0 0 0 0 0 0 |features 0 0 0 0 ... (784 integers each representing a pixel) We are going to use the image pixels corresponding the integer stream named "features". We define a create_reader function to read the training and test data using the CTF deserializer. The labels are 1-hot encoded. Refer to Lab 1 for data format visualizations. End of explanation num_hidden_layers = 2 hidden_layers_dim = 400 #hidden_layers_dim = 50 Explanation: <a id='#Model Creation'></a> Model Creation Our multi-layer perceptron will be relatively simple with 2 hidden layers (num_hidden_layers). The number of nodes in the hidden layer being a parameter specified by hidden_layers_dim. The figure below illustrates the entire model we will use in this tutorial in the context of MNIST data. If you are not familiar with the terms hidden_layer and number of hidden layers, please review the module 3 course videos. Each Dense layer (as illustrated below) shows the input dimensions, output dimensions and activation function that layer uses. Specifically, the layer below shows: input dimension = 784 (1 dimension for each input pixel), output dimension = 400 (number of hidden nodes, a parameter specified by the user) and activation function being relu. In this model we have 2 dense layer called the hidden layers each with an activation function of relu. These are followed by the dense output layer with no activation. The output dimension (a.k.a. number of hidden nodes) in the 2 hidden layer is set to 400. The number of hidden layers is 2. The final output layer emits a vector of 10 values. Since we will be using softmax to normalize the output of the model we do not use an activation function in this layer. The softmax operation comes bundled with the loss function we will be using later in this tutorial. 
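As a quick sanity check (a minimal sketch, assuming the CTF files above were generated by Lab 1), you can pull a tiny minibatch straight from a reader and inspect the array shapes before any training:
reader_peek = create_reader(train_file, False, input_dim, num_output_classes)
peek = reader_peek.next_minibatch(4)
print(peek[reader_peek.streams.labels].asarray().shape)
print(peek[reader_peek.streams.features].asarray().shape)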
End of explanation input = C.input_variable(input_dim) label = C.input_variable(num_output_classes) Explanation: Network input and output: - input variable (a key CNTK concept): An input variable is a container in which we fill different observations in this case image pixels during model learning (a.k.a.training) and model evaluation (a.k.a. testing). Thus, the shape of the input must match the shape of the data that will be provided. For example, when data are images each of height 10 pixels and width 5 pixels, the input feature dimension will be 50 (representing the total number of image pixels). More on data and their dimensions to appear in separate tutorials. Knowledge Check What is the input dimension of your chosen model? This is fundamental to our understanding of variables in a network or model representation in CNTK. End of explanation def create_model(features): with C.layers.default_options(init = C.layers.glorot_uniform(), activation = C.ops.relu): #with C.layers.default_options(init = C.layers.glorot_uniform(), activation = C.ops.sigmoid): h = features for _ in range(num_hidden_layers): h = C.layers.Dense(hidden_layers_dim)(h) r = C.layers.Dense(num_output_classes, activation = None)(h) #r = C.layers.Dense(num_output_classes, activation = C.ops.sigmoid)(h) return r z = create_model(input) Explanation: Multi-layer Perceptron setup The code below is a direct translation of the model shown above. End of explanation # Scale the input to 0-1 range by dividing each pixel by 255. z = create_model(input/255.0) Explanation: z will be used to represent the output of a network. We introduced sigmoid function in CNTK 102, in this tutorial you should try different activation functions in the hidden layer. You may choose to do this right away and take a peek into the performance later in the tutorial or run the preset tutorial and then choose to perform the suggested exploration. Suggested Exploration - Record the training error you get with sigmoid as the activation function - Now change to relu as the activation function and see if you can improve your training error Knowledge Check: Name some of the different supported activation functions. Which activation function gives the least training error? End of explanation loss = C.cross_entropy_with_softmax(z, label) Explanation: Training Below, we define the Loss function, which is used to guide weight changes during training. As explained in the lectures, we use the softmax function to map the accumulated evidences or activations to a probability distribution over the classes (Details of the softmax function and other activation functions). We minimize the cross-entropy between the label and predicted probability by the network. End of explanation label_error = C.classification_error(z, label) Explanation: Evaluation Below, we define the Evaluation (or metric) function that is used to report a measurement of how well our model is performing. For this problem, we choose the classification_error() function as our metric, which returns the average error over the associated samples (treating a match as "1", where the model's prediction matches the "ground truth" label, and a non-match as "0"). 
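As a rough size check (plain arithmetic from the dimensions above, not a CNTK call), the number of learnable parameters in this fully connected network is:
n_params = (784 * 400 + 400) + (400 * 400 + 400) + (400 * 10 + 10)
print(n_params)  # 478410 weights and biases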
End of explanation # Instantiate the trainer object to drive the model training learning_rate = 0.2 lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch) learner = C.sgd(z.parameters, lr_schedule) trainer = C.Trainer(z, (loss, label_error), [learner]) Explanation: Configure training The trainer strives to reduce the loss function by different optimization approaches, Stochastic Gradient Descent (sgd) being a basic one. Typically, one would start with random initialization of the model parameters. The sgd optimizer would calculate the loss or error between the predicted label against the corresponding ground-truth label and using gradient-decent generate a new set model parameters in a single iteration. The aforementioned model parameter update using a single observation at a time is attractive since it does not require the entire data set (all observation) to be loaded in memory and also requires gradient computation over fewer datapoints, thus allowing for training on large data sets. However, the updates generated using a single observation sample at a time can vary wildly between iterations. An intermediate ground is to load a small set of observations and use an average of the loss or error from that set to update the model parameters. This subset is called a minibatch. With minibatches we often sample observation from the larger training dataset. We repeat the process of model parameters update using different combination of training samples and over a period of time minimize the loss (and the error). When the incremental error rates are no longer changing significantly or after a preset number of maximum minibatches to train, we claim that our model is trained. One of the key parameter for optimization is called the learning_rate. For now, we can think of it as a scaling factor that modulates how much we change the parameters in any iteration. We will be covering more details in later tutorial. With this information, we are ready to create our trainer. End of explanation # Define a utility function to compute the moving average sum. # A more efficient implementation is possible with np.cumsum() function def moving_average(a, w=5): if len(a) < w: return a[:] # Need to send a copy of the array return [val if idx < w else sum(a[(idx-w):idx])/w for idx, val in enumerate(a)] # Defines a utility that prints the training progress def print_training_progress(trainer, mb, frequency, verbose=1): training_loss = "NA" eval_error = "NA" if mb%frequency == 0: training_loss = trainer.previous_minibatch_loss_average eval_error = trainer.previous_minibatch_evaluation_average if verbose: print ("Minibatch: {0}, Loss: {1:.4f}, Error: {2:.2f}%".format(mb, training_loss, eval_error*100)) return mb, training_loss, eval_error # Initialize the parameters for the trainer minibatch_size = 64 #minibatch_size = 512 num_samples_per_sweep = 60000 num_sweeps_to_train_with = 10 num_minibatches_to_train = (num_samples_per_sweep * num_sweeps_to_train_with) / minibatch_size Explanation: First let us create some helper functions that will be needed to visualize different functions associated with training. End of explanation # Create the reader to training data set reader_train = create_reader(train_file, True, input_dim, num_output_classes) # Map the data streams to the input and labels. 
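# The keys below are the network's input variables defined earlier (`input` and
# `label`); the values are the named streams exposed by the CTF reader, so each
# minibatch is routed to the right variable during training.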
input_map = { label : reader_train.streams.labels, input : reader_train.streams.features } # Run the trainer on and perform model training training_progress_output_freq = 500 plotdata = {"batchsize":[], "loss":[], "error":[]} for i in range(0, int(num_minibatches_to_train)): # Read a mini batch from the training data file data = reader_train.next_minibatch(minibatch_size, input_map = input_map) trainer.train_minibatch(data) batchsize, loss, error = print_training_progress(trainer, i, training_progress_output_freq, verbose=1) if not (loss == "NA" or error =="NA"): plotdata["batchsize"].append(batchsize) plotdata["loss"].append(loss) plotdata["error"].append(error) Explanation: <a id='#Run the trainer'></a> Run the trainer We are now ready to train our fully connected neural net. We want to decide what data we need to feed into the training engine. In this example, each iteration of the optimizer will work on minibatch_size sized samples. We would like to train on all 60000 observations. Additionally we will make multiple passes through the data specified by the variable num_sweeps_to_train_with. With these parameters we can proceed with training our simple multi-layer perceptron network. End of explanation # Compute the moving average loss to smooth out the noise in SGD plotdata["avgloss"] = moving_average(plotdata["loss"]) plotdata["avgerror"] = moving_average(plotdata["error"]) # Plot the training loss and the training error import matplotlib.pyplot as plt plt.figure(1) plt.subplot(211) plt.plot(plotdata["batchsize"], plotdata["avgloss"], 'b--') plt.xlabel('Minibatch number') plt.ylabel('Loss') plt.title('Minibatch run vs. Training loss') plt.show() plt.subplot(212) plt.plot(plotdata["batchsize"], plotdata["avgerror"], 'r--') plt.xlabel('Minibatch number') plt.ylabel('Label Prediction Error') plt.title('Minibatch run vs. Label Prediction Error') plt.show() Explanation: Let us plot the errors over the different training minibatches. Note that as we iterate the training loss decreases though we do see some intermediate bumps. End of explanation # Read the training data reader_test = create_reader(test_file, False, input_dim, num_output_classes) test_input_map = { label : reader_test.streams.labels, input : reader_test.streams.features, } # Test data for trained model test_minibatch_size = 512 num_samples = 10000 num_minibatches_to_test = num_samples // test_minibatch_size test_result = 0.0 for i in range(num_minibatches_to_test): # We are loading test data in batches specified by test_minibatch_size # Each data point in the minibatch is a MNIST digit image of 784 dimensions # with one pixel per dimension that we will encode / decode with the # trained model. data = reader_test.next_minibatch(test_minibatch_size, input_map = test_input_map) eval_error = trainer.test_minibatch(data) test_result = test_result + eval_error # Average of evaluation errors of all test minibatches print("Average test error: {0:.2f}%".format(test_result*100 / num_minibatches_to_test)) Explanation: Evaluation / Testing Now that we have trained the network, let us evaluate the trained network on the test data. This is done using trainer.test_minibatch. End of explanation out = C.softmax(z) Explanation: Note, this error is very comparable to our training error indicating that our model has good "out of sample" error a.k.a. generalization error. This implies that our model can very effectively deal with previously unseen observations (during the training process). This is key to avoid the phenomenon of overfitting. 
This is a huge reduction in error compared to multi-class LR (from Lab 02). We have so far been dealing with aggregate measures of error. Let us now get the probabilities associated with individual data points. For each observation, the eval function returns the probability distribution across all the classes. The classifier is trained to recognize digits, hence has 10 classes. First let us route the network output through a softmax function. This maps the aggregated activations across the network to probabilities across the 10 classes. End of explanation # Read the data for evaluation reader_eval = create_reader(test_file, False, input_dim, num_output_classes) eval_minibatch_size = 25 eval_input_map = {input: reader_eval.streams.features} data = reader_test.next_minibatch(eval_minibatch_size, input_map = test_input_map) img_label = data[label].asarray() img_data = data[input].asarray() predicted_label_prob = [out.eval(img_data[i]) for i in range(len(img_data))] # Find the index with the maximum value for both predicted as well as the ground truth pred = [np.argmax(predicted_label_prob[i]) for i in range(len(predicted_label_prob))] gtlabel = [np.argmax(img_label[i]) for i in range(len(img_label))] print("Label :", gtlabel[:25]) print("Predicted:", pred) Explanation: Let us test a small minibatch sample from the test data. End of explanation # Plot a random image sample_number = 5 plt.imshow(img_data[sample_number].reshape(28,28), cmap="gray_r") plt.axis('off') img_gt, img_pred = gtlabel[sample_number], pred[sample_number] print("Image Label: ", img_pred) Explanation: As you can see above, our model is much better. Do you see any mismatches? Let us visualize one of the test images and its associated label. Do they match? End of explanation
15,919
Given the following text description, write Python code to implement the functionality described below step by step Description: H2O Tutorial Author Step1: Enable inline plotting in the Jupyter Notebook Step2: Intro to H2O Data Munging Read csv data into H2O. This loads the data into the H2O column compressed, in-memory, key-value store. Step3: View the top of the H2O frame. Step4: View the bottom of the H2O Frame Step5: Select a column fr["VAR_NAME"] Step6: Select a few columns Step7: Select a subset of rows Unlike in Pandas, columns may be identified by index or column name. Therefore, when subsetting by rows, you must also pass the column selection. Step8: Key attributes Step9: Select rows based on value Step10: Boolean masks can be used to subselect rows based on a criteria. Step11: Get summary statistics of the data and additional data distribution information. Step12: Set up the predictor and response column names Using H2O algorithms, it's easier to reference predictor and response columns by name in a single frame (i.e., don't split up X and y) Step13: Machine Learning With H2O H2O is a machine learning library built in Java with interfaces in Python, R, Scala, and Javascript. It is open source and well-documented. Unlike Scikit-learn, H2O allows for categorical and missing data. The basic work flow is as follows Step14: The performance of the model can be checked using the holdout dataset Step15: Train-Test Split Instead of taking the first 400 observations for training, we can use H2O to create a random test train split of the data. Step16: There was a massive jump in the R^2 value. This is because the original data is not shuffled. Cross validation H2O's machine learning algorithms take an optional parameter nfolds to specify the number of cross-validation folds to build. H2O's cross-validation uses an internal weight vector to build the folds in an efficient manner (instead of physically building the splits). In conjunction with the nfolds parameter, a user may specify the way in which observations are assigned to each fold with the fold_assignment parameter, which can be set to either Step17: However, you can still make use of the cross_val_score from Scikit-Learn Cross validation Step18: You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. Additionally, the H2ORandomForestEstimator is analgous to the scikit-learn RandomForestRegressor object with its own fit method Step19: There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage. Since the progress bar print out gets annoying let's disable that Step20: Grid Search Grid search in H2O is still under active development and it will be available very soon. However, it is possible to make use of Scikit's grid search infrastructure (with some performance penalties) Randomized grid search Step21: If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon to be released 0.16.2 and the older versions). The steps to perform a randomized grid search Step24: We might be tempted to think that we just had a large improvement; however we must be cautious. The function below creates a more detailed report. Step25: Based on the grid search report, we can narrow the parameters to search and rerun the analysis. 
The parameters below were chosen after a few runs Step26: Transformations Rule of machine learning Step27: Normalize Data Step28: Then, we can apply PCA and keep the top 5 components. Step29: Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers. Pipelines "Tranformers unite!" If your raw data is a mess and you have to perform several transformations before using it, use a pipeline to keep things simple. Steps Step30: This is so much easier!!! But, wait a second, we did worse after applying these transformations! We might wonder how different hyperparameters for the transformations impact the final score. Combining randomized grid search and pipelines "Yo dawg, I heard you like models, so I put models in your models to model models." Steps Step31: Currently Under Development (drop-in scikit-learn pieces)
Python Code: import pandas as pd import numpy from numpy.random import choice from sklearn.datasets import load_boston import h2o h2o.init() # transfer the boston data from pandas to H2O boston_data = load_boston() X = pd.DataFrame(data=boston_data.data, columns=boston_data.feature_names) X["Median_value"] = boston_data.target X = h2o.H2OFrame(python_obj=X.to_dict("list")) # select 10% for valdation r = X.runif(seed=123456789) train = X[r < 0.9,:] valid = X[r >= 0.9,:] h2o.export_file(train, "Boston_housing_train.csv", force=True) h2o.export_file(valid, "Boston_housing_test.csv", force=True) Explanation: H2O Tutorial Author: Spencer Aiello Contact: [email protected] This tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce through a complete example H2O's capabilities from Python. Also, to help those that are accustomed to Scikit Learn and Pandas, the demo will be specific call outs for differences between H2O and those packages; this is intended to help anyone that needs to do machine learning on really Big Data make the transition. It is not meant to be a tutorial on machine learning or algorithms. Detailed documentation about H2O's and the Python API is available at http://docs.h2o.ai. Setting up your system for this demo The following code creates two csv files using data from the Boston Housing dataset which is built into scikit-learn and adds them to the local directory End of explanation %matplotlib inline import matplotlib.pyplot as plt Explanation: Enable inline plotting in the Jupyter Notebook End of explanation fr = h2o.import_file("Boston_housing_train.csv") Explanation: Intro to H2O Data Munging Read csv data into H2O. This loads the data into the H2O column compressed, in-memory, key-value store. End of explanation fr.head() Explanation: View the top of the H2O frame. End of explanation fr.tail() Explanation: View the bottom of the H2O Frame End of explanation fr["CRIM"].head() # Tab completes Explanation: Select a column fr["VAR_NAME"] End of explanation columns = ["CRIM", "RM", "RAD"] fr[columns].head() Explanation: Select a few columns End of explanation fr[2:7,:] # explicitly select all columns with : Explanation: Select a subset of rows Unlike in Pandas, columns may be identified by index or column name. Therefore, when subsetting by rows, you must also pass the column selection. End of explanation # The columns attribute is exactly like Pandas print "Columns:", fr.columns, "\n" print "Columns:", fr.names, "\n" print "Columns:", fr.col_names, "\n" # There are a number of attributes to get at the shape print "length:", str( len(fr) ), "\n" print "shape:", fr.shape, "\n" print "dim:", fr.dim, "\n" print "nrow:", fr.nrow, "\n" print "ncol:", fr.ncol, "\n" # Use the "types" attribute to list the column types print "types:", fr.types, "\n" Explanation: Key attributes: * columns, names, col_names * len, shape, dim, nrow, ncol * types Note: Since the data is not in local python memory there is no "values" attribute. If you want to pull all of the data into the local python memory then do so explicitly with h2o.export_file and reading the data into python memory from disk. End of explanation fr.shape Explanation: Select rows based on value End of explanation mask = fr["CRIM"]>1 fr[mask,:].shape Explanation: Boolean masks can be used to subselect rows based on a criteria. End of explanation fr.describe() Explanation: Get summary statistics of the data and additional data distribution information. 
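If you want to eyeball a few rows locally (a small sketch; only sensible for small slices, since it copies data out of the H2O cluster, and it assumes your h2o version exposes as_data_frame), convert a slice to a pandas frame:
local_preview = fr[:5, :].as_data_frame(use_pandas=True)
print(local_preview)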
End of explanation x = fr.names y="Median_value" x.remove(y) Explanation: Set up the predictor and response column names Using H2O algorithms, it's easier to reference predictor and response columns by name in a single frame (i.e., don't split up X and y) End of explanation model = h2o.random_forest(x=fr[:400,x],y=fr[:400,y],seed=42) # Define and fit first 400 points model.predict(fr[400:,:]) # Predict the rest Explanation: Machine Learning With H2O H2O is a machine learning library built in Java with interfaces in Python, R, Scala, and Javascript. It is open source and well-documented. Unlike Scikit-learn, H2O allows for categorical and missing data. The basic work flow is as follows: * Fit the training data with a machine learning algorithm * Predict on the testing data Simple model End of explanation perf = model.model_performance(fr[400:,:]) perf.r2() # get the r2 on the holdout data perf.mse() # get the mse on the holdout data perf # display the performance object Explanation: The performance of the model can be checked using the holdout dataset End of explanation r = fr.runif(seed=12345) # build random uniform column over [0,1] train= fr[r<0.75,:] # perform a 75-25 split test = fr[r>=0.75,:] model = h2o.random_forest(x=train[x],y=train[y],seed=42) perf = model.model_performance(test) perf.r2() Explanation: Train-Test Split Instead of taking the first 400 observations for training, we can use H2O to create a random test train split of the data. End of explanation model = h2o.random_forest(x=fr[x],y=fr[y], nfolds=10) # build a 10-fold cross-validated model scores = numpy.array([m.r2() for m in model.xvals]) # iterate over the xval models using the xvals attribute print "Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96) print "Scores:", scores.round(2) Explanation: There was a massive jump in the R^2 value. This is because the original data is not shuffled. Cross validation H2O's machine learning algorithms take an optional parameter nfolds to specify the number of cross-validation folds to build. H2O's cross-validation uses an internal weight vector to build the folds in an efficient manner (instead of physically building the splits). In conjunction with the nfolds parameter, a user may specify the way in which observations are assigned to each fold with the fold_assignment parameter, which can be set to either: * AUTO: Perform random assignment * Random: Each row has a equal (1/nfolds) chance of being in any fold. * Modulo: Observations are in/out of the fold based by modding on nfolds End of explanation from sklearn.cross_validation import cross_val_score from h2o.cross_validation import H2OKFold from h2o.estimators.random_forest import H2ORandomForestEstimator from h2o.model.regression import h2o_r2_score from sklearn.metrics.scorer import make_scorer Explanation: However, you can still make use of the cross_val_score from Scikit-Learn Cross validation: H2O and Scikit-Learn End of explanation model = H2ORandomForestEstimator(seed=42) scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv scores = cross_val_score(model, fr[x], fr[y], scoring=scorer, cv=custom_cv) print "Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96) print "Scores:", scores.round(2) Explanation: You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. 
Additionally, the H2ORandomForestEstimator is analgous to the scikit-learn RandomForestRegressor object with its own fit method End of explanation h2o.__PROGRESS_BAR__=False h2o.no_progress() Explanation: There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage. Since the progress bar print out gets annoying let's disable that End of explanation from sklearn import __version__ sklearn_version = __version__ print sklearn_version Explanation: Grid Search Grid search in H2O is still under active development and it will be available very soon. However, it is possible to make use of Scikit's grid search infrastructure (with some performance penalties) Randomized grid search: H2O and Scikit-Learn End of explanation %%time from h2o.estimators.random_forest import H2ORandomForestEstimator # Import model from sklearn.grid_search import RandomizedSearchCV # Import grid search from scipy.stats import randint, uniform model = H2ORandomForestEstimator(seed=42) # Define model params = {"ntrees": randint(20,50), "max_depth": randint(1,10), "min_rows": randint(1,10), # scikit's min_samples_leaf "mtries": randint(2,fr[x].shape[1]),} # Specify parameters to test scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv random_search = RandomizedSearchCV(model, params, n_iter=30, scoring=scorer, cv=custom_cv, random_state=42, n_jobs=1) # Define grid search object random_search.fit(fr[x], fr[y]) print "Best R^2:", random_search.best_score_, "\n" print "Best params:", random_search.best_params_ Explanation: If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon to be released 0.16.2 and the older versions). The steps to perform a randomized grid search: 1. Import model and RandomizedSearchCV 2. Define model 3. Specify parameters to test 4. Define grid search object 5. Fit data to grid search object 6. Collect scores All the steps will be repeated from above. Because 0.16.1 is installed, we use scipy to define specific distributions ADVANCED TIP: Turn off reference counting for spawning jobs in parallel (n_jobs=-1, or n_jobs > 1). We'll turn it back on again in the aftermath of a Parallel job. If you don't want to run jobs in parallel, don't turn off the reference counting. Pattern is: >>> h2o.turn_off_ref_cnts() >>> .... parallel job .... >>> h2o.turn_on_ref_cnts() End of explanation def report_grid_score_detail(random_search, charts=True): Input fit grid search estimator. 
Returns df of scores with details df_list = [] for line in random_search.grid_scores_: results_dict = dict(line.parameters) results_dict["score"] = line.mean_validation_score results_dict["std"] = line.cv_validation_scores.std()*1.96 df_list.append(results_dict) result_df = pd.DataFrame(df_list) result_df = result_df.sort("score", ascending=False) if charts: for col in get_numeric(result_df): if col not in ["score", "std"]: plt.scatter(result_df[col], result_df.score) plt.title(col) plt.show() for col in list(result_df.columns[result_df.dtypes == "object"]): cat_plot = result_df.score.groupby(result_df[col]).mean() cat_plot.sort() cat_plot.plot(kind="barh", xlim=(.5, None), figsize=(7, cat_plot.shape[0]/2)) plt.show() return result_df def get_numeric(X): Return list of numeric dtypes variables return X.dtypes[X.dtypes.apply(lambda x: str(x).startswith(("float", "int", "bool")))].index.tolist() report_grid_score_detail(random_search).head() Explanation: We might be tempted to think that we just had a large improvement; however we must be cautious. The function below creates a more detailed report. End of explanation %%time params = {"ntrees": randint(30,40), "max_depth": randint(4,10), "mtries": randint(4,10),} custom_cv = H2OKFold(fr, n_folds=5, seed=42) # In small datasets, the fold size can have a big # impact on the std of the resulting scores. More random_search = RandomizedSearchCV(model, params, # folds --> Less examples per fold --> higher n_iter=10, # variation per sample scoring=scorer, cv=custom_cv, random_state=43, n_jobs=1) random_search.fit(fr[x], fr[y]) print "Best R^2:", random_search.best_score_, "\n" print "Best params:", random_search.best_params_ report_grid_score_detail(random_search) Explanation: Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs: End of explanation from h2o.transforms.preprocessing import H2OScaler from h2o.transforms.decomposition import H2OPCA Explanation: Transformations Rule of machine learning: Don't use your testing data to inform your training data. Unfortunately, this happens all the time when preparing a dataset for the final model. But on smaller datasets, you must be especially careful. At the moment, there are no classes for managing data transformations. On the one hand, this requires the user to tote around some extra state, but on the other, it allows the user to be more explicit about transforming H2OFrames. Basic steps: Remove the response variable from transformations. Import transformer Define transformer Fit train data to transformer Transform test and train data Re-attach the response variable. First let's normalize the data using the means and standard deviations of the training data. Then let's perform a principal component analysis on the training data and select the top 5 components. Using these components, let's use them to reduce the train and test design matrices. End of explanation y_train = train.pop("Median_value") y_test = test.pop("Median_value") norm = H2OScaler() norm.fit(train) X_train_norm = norm.transform(train) X_test_norm = norm.transform(test) print X_test_norm.shape X_test_norm Explanation: Normalize Data: Use the means and standard deviations from the training data. End of explanation pca = H2OPCA(n_components=5) pca.fit(X_train_norm) X_train_norm_pca = pca.transform(X_train_norm) X_test_norm_pca = pca.transform(X_test_norm) # prop of variance explained by top 5 components? 
print X_test_norm_pca.shape X_test_norm_pca[:5] model = H2ORandomForestEstimator(seed=42) model.fit(X_train_norm_pca,y_train) y_hat = model.predict(X_test_norm_pca) h2o_r2_score(y_test,y_hat) Explanation: Then, we can apply PCA and keep the top 5 components. End of explanation from h2o.transforms.preprocessing import H2OScaler from h2o.transforms.decomposition import H2OPCA from h2o.estimators.random_forest import H2ORandomForestEstimator from sklearn.pipeline import Pipeline # Import Pipeline <other imports not shown> model = H2ORandomForestEstimator(seed=42) pipe = Pipeline([("standardize", H2OScaler()), # Define pipeline as a series of steps ("pca", H2OPCA(n_components=5)), ("rf", model)]) # Notice the last step is an estimator pipe.fit(train, y_train) # Fit training data y_hat = pipe.predict(test) # Predict testing data (due to last step being an estimator) h2o_r2_score(y_test, y_hat) # Notice the final score is identical to before Explanation: Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers. Pipelines "Tranformers unite!" If your raw data is a mess and you have to perform several transformations before using it, use a pipeline to keep things simple. Steps: Import Pipeline, transformers, and model Define pipeline. The first and only argument is a list of tuples where the first element of each tuple is a name you give the step and the second element is a defined transformer. The last step is optionally an estimator class (like a RandomForest). Fit the training data to pipeline Either transform or predict the testing data End of explanation pipe = Pipeline([("standardize", H2OScaler()), ("pca", H2OPCA()), ("rf", H2ORandomForestEstimator(seed=42))]) params = {"standardize__center": [True, False], # Parameters to test "standardize__scale": [True, False], "pca__n_components": randint(2, 6), "rf__ntrees": randint(50,80), "rf__max_depth": randint(4,10), "rf__min_rows": randint(5,10), } # "rf__mtries": randint(1,4),} # gridding over mtries is # problematic with pca grid over # n_components above from sklearn.grid_search import RandomizedSearchCV from h2o.cross_validation import H2OKFold from h2o.model.regression import h2o_r2_score from sklearn.metrics.scorer import make_scorer custom_cv = H2OKFold(fr, n_folds=5, seed=42) random_search = RandomizedSearchCV(pipe, params, n_iter=30, scoring=make_scorer(h2o_r2_score), cv=custom_cv, random_state=42, n_jobs=1) random_search.fit(fr[x],fr[y]) results = report_grid_score_detail(random_search) results.head() Explanation: This is so much easier!!! But, wait a second, we did worse after applying these transformations! We might wonder how different hyperparameters for the transformations impact the final score. Combining randomized grid search and pipelines "Yo dawg, I heard you like models, so I put models in your models to model models." Steps: Import Pipeline, grid search, transformers, and estimators <Not shown below> Define pipeline Define parameters to test in the form: "(Step name)__(argument name)" A double underscore separates the two words. 
Define grid search Fit to grid search End of explanation best_estimator = random_search.best_estimator_ # fetch the pipeline from the grid search h2o_model = h2o.get_model(best_estimator._final_estimator._id) # fetch the model from the pipeline save_path = h2o.save_model(h2o_model, path=".", force=True) print save_path # assumes new session my_model = h2o.load_model(path=save_path) my_model.predict(fr) Explanation: Currently Under Development (drop-in scikit-learn pieces): * Richer set of transforms (only PCA and Scale are implemented) * Richer set of estimators (only RandomForest is available) * Full H2O Grid Search Other Tips: Model Save/Load It is useful to save constructed models to disk and reload them between H2O sessions. Here's how: End of explanation
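One optional follow-up, a sketch rather than part of the original tutorial: after reloading, the restored model can be scored exactly as before to confirm that nothing was lost in the round trip (this assumes fr is still loaded in the new session).
perf = my_model.model_performance(fr)  # reuses the model_performance call shown earlier in this tutorial
print perf.r2()                        # should agree with the metrics of the model that was saved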
15,920
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I realize my question is fairly similar to Vectorized moving window on 2D array in numpy, but the answers there don't quite satisfy my needs.
Problem: import numpy as np a = np.array([[1,2,3,4], [2,3,4,5], [3,4,5,6], [4,5,6,7]]) size = (3, 3) def window(arr, shape=(3, 3)): ans = [] # Find row and column window sizes r_win = np.floor(shape[0] / 2).astype(int) c_win = np.floor(shape[1] / 2).astype(int) x, y = arr.shape for i in range(x): xmin = max(0, i - r_win) xmax = min(x, i + r_win + 1) for j in range(y): ymin = max(0, j - c_win) ymax = min(y, j + c_win + 1) ans.append(arr[xmin:xmax, ymin:ymax]) return ans result = window(a, size)
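A hedged side note, not part of the original answer: for the interior windows only, NumPy 1.20+ ships a vectorized helper that produces the same 3x3 blocks without the Python loop; it does not reproduce the clipped edge windows that the function above returns.
from numpy.lib.stride_tricks import sliding_window_view
interior = sliding_window_view(a, (3, 3))  # shape (2, 2, 3, 3) for the 4x4 input above
Each interior[i, j] equals the full 3x3 block that window() builds around the interior point (i+1, j+1); the clipping logic above is still needed for the border cells.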
15,921
Given the following text description, write Python code to implement the functionality described below step by step Description: <center> <img src="http Step1: Hints Step2: Example 2 Considers the following IVP Step3: Example 3 Considers the following IVP Step4: Example 4 See classnotes! Step5: Example 7, for completeness. Plotting Step6: Example 8, for completeness. Plotting Step7: Example 9, for completeness. Plotting Step8: Example 10, for completeness. $$\begin{equation} f(t)= \begin{cases} t\,\sin(t), & 0\leq t < 2, \ \dfrac{\sin(\pi\,t)}{t}, & 2 < t. \end{cases} \end{equation}$$ Step9: Example 11, for completeness. Write a function that return $f(t)=\exp(-\alpha\,t^2)$ and $f'(t)=-2\,\alpha\,t\,\exp(-\alpha\,t^2)$.
Python Code: import numpy as np import scipy.sparse.linalg as sp import sympy as sym from scipy.linalg import toeplitz import ipywidgets as widgets from ipywidgets import IntSlider import matplotlib.pyplot as plt %matplotlib inline from matplotlib import cm from matplotlib.ticker import LinearLocator, FormatStrFormatter plt.style.use('ggplot') import matplotlib as mpl mpl.rcParams['font.size'] = 14 mpl.rcParams['axes.labelsize'] = 20 mpl.rcParams['xtick.labelsize'] = 14 mpl.rcParams['ytick.labelsize'] = 14 sym.init_printing() from scipy.integrate import solve_ivp from ipywidgets import interact Explanation: <center> <img src="http://sct.inf.utfsm.cl/wp-content/uploads/2020/04/logo_di.png" style="width:60%"> <h1> INF-495 - Modelado Computacional Aplicado </h1> <h2> Prof. Claudio Torres, Ph.D. </h2> <h2> Version: 1.02</h2> </center> Textbook: Computational Mathematical Modeling, An Integrated Approach Across Scales by Daniela Calvetti and Erkki Somersalo. Chapter 1 End of explanation # Warning: In general, we will use \dot{y}=f(t,y), y(0)=y_0, # so don't get mixed up with the notation from the textbook. def f_example_1_interact(alpha_input=1,T_max=5,p=2): # Example 1, numerically. def f_example_1(t, y, alpha): return -alpha*y # Initial condition y0=1 # time where we want your solution t = np.linspace(0, T_max, 100) sol = solve_ivp(f_example_1, [0,T_max], (y0,), t_eval=t, args=(alpha_input,)) plt.figure() plt.plot(t, sol.y[0], 'b', label='y(t)') for i in np.arange(1,np.ceil(T_max/(np.log(p)/alpha_input))): plt.axvline(x=i*np.log(p)/alpha_input) print('t: ',i*np.log(p)/alpha_input,', exp(t): ',np.exp(-alpha_input*i*np.log(p)/alpha_input)) plt.legend(loc='best') plt.xlabel('t') plt.ylabel(r'$\exp(-\alpha\,t)$') plt.title('What are the red lines?') plt.grid(True) plt.show() interact(f_example_1_interact,alpha_input=(0.1,10,0.1),T_max=(0.1,100,0.1),p=(2,10,1)) Explanation: Hints: I strongly suggest to review the following jupyter notebook: - [Intro to ODE] https://github.com/tclaudioe/Scientific-Computing/blob/master/SC1v2/11_ODE.ipynb 1.2 Ordinary differential equations A differential equation, in this case a scalar initial value problem (IVP), can be defined as follows: $$\begin{align} \dfrac{\mathrm{d}x}{\mathrm{d}t}(t) &= f(t,x(t)),\ x(0) &= x_0. \end{align}$$ For a known function $f(t,x)$ and $x_0$. Particularly one could integrate both sides of the first equation as follows, $$\begin{align} \int_0^{h} \dfrac{\mathrm{d}x}{\mathrm{d}t}(t)\,\mathrm{d}t & = \int_0^{h} f(t,x(t))\,\mathrm{d}t,\ x(h)-x(0) &= \int_0^{h} f(t,x(t))\,\mathrm{d}t, \end{align}$$ where the Fundamental Theorem of Calculus was used. Now, we solve for the unknow value $x(h)$: $$\begin{align} x(h) &= x_0 + \int_0^{h} f(t,x(t))\,\mathrm{d}t. \end{align}$$ Up to this point, we have not made any approximation. From this point, one can derive several numerical method, depending how you approximate the integral. For instance, if we consider $h=\Delta t$ and we use a Reimann sum from the left (only one interval from $0$ to $\Delta t$), we obtain the method called Forward Euler. But if we do the same but with the Reimann sum from the right, we obtain backward Euler. Another two options that are easy to see are the midpoint rule and the trapezoidal rule. Notice that in some cases mentioned, you would need a non-linear equation (or a system in higher dimension) for each time step. 
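A minimal sketch, not in the original notebook, of the forward Euler scheme that the left Riemann sum above leads to; the right-hand side, step size, and horizon in the usage line are illustrative assumptions.
import numpy as np

def forward_euler(f, t0, x0, h, n_steps):
    # Left Riemann sum over each sub-interval gives the update x_{k+1} = x_k + h*f(t_k, x_k).
    t = t0 + h*np.arange(n_steps + 1)
    x = np.zeros(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] + h*f(t[k], x[k])
    return t, x

# Usage on a decay problem dx/dt = -alpha*x with x(0) = 1 (alpha = 2 chosen arbitrarily).
t_fe, x_fe = forward_euler(lambda t, x: -2.0*x, 0.0, 1.0, 0.01, 500)
Replacing f(t[k], x[k]) by f(t[k+1], x[k+1]) in the update gives the backward Euler scheme mentioned above, which in general needs a nonlinear solve at every step.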
Example 1 Considers the following IVP: $$\begin{align} \dfrac{\mathrm{d}x}{\mathrm{d}t}(t) &= -\alpha\,x(t),\ x(0) &= x_0, \end{align}$$ for which we know the solution: $$\begin{align} x(t) &= x_0\,\exp(-\alpha\,t) \end{align}$$ End of explanation print(2**-1074,', ',2**-1075) # Example 2, numerically. def f_example_2(t,y): return np.sqrt(y) # Initial condition y0 = 0.0 # time where we want your solution t = np.linspace(0, 1, 100) sol_a = solve_ivp(f_example_2, [0,1], (y0,), t_eval=t) sol_b = solve_ivp(f_example_2, [0,1], (y0+np.power(2.0,-1074),), t_eval=t) plt.figure() plt.plot(t, sol_a.y[0], 'b', label='y_a(t)') plt.plot(t, sol_b.y[0], 'r', label='y_b(t)') plt.legend(loc='best') plt.xlabel('t') plt.grid(True) plt.show() Explanation: Example 2 Considers the following IVP: $$\begin{align} \dfrac{\mathrm{d}x}{\mathrm{d}t}(t) &= \sqrt{x(t)}\ x(0) &= 0, \end{align}$$ Recall that $2^{-1074}$ is the smallest number greater than 0 that ``double precision'' can store. See: - https://github.com/tclaudioe/Scientific-Computing/blob/master/SC1v2/02_floating_point_arithmetic.ipynb End of explanation def f_example_3_interact(T_max=0.5): # Example 3, numerically. def f_example_3(t,y): return y*y # Initial condition y0=1 # time where we want your solution t = np.linspace(0, T_max, 100) sol = solve_ivp(f_example_3, [0,T_max], (y0,), t_eval=t) plt.figure() plt.plot(t, sol.y[0], 'b', label='y(t)') #plt.plot(t, 1/(1-t), 'r--', label='y_e(t)') plt.legend(loc='best') plt.xlabel('t') plt.grid(True) plt.show() interact(f_example_3_interact,T_max=(0.1,1,0.001)) Explanation: Example 3 Considers the following IVP: $$\begin{align} \dfrac{\mathrm{d}x}{\mathrm{d}t}(t) &= x(t)^2\ x(0) &= 1, \end{align}$$ for which we know the solution: $$\begin{align} x(t) &= \dfrac{1}{1-t} \end{align}$$ End of explanation def f_example_4(t,y,alpha): return alpha*(1/2-y/(1+y)) # Initial condition y0=3 # time where we want your solution t = np.linspace(0, 10, 100) plt.figure(figsize=(10,8)) for j in np.arange(1,6): sol= odeint(f_example_4, y0, t, args=(4/j,), tfirst=True) plt.plot(t, sol, label=r'$\xi(t)$'+r', $\tau= $'+str(j)) plt.axvline(x=j,color='k') plt.text(j,0.5,r'$\tau= $'+str(j)) plt.legend(loc='best') plt.ylabel('Concentration') plt.xlabel('t') plt.ylim((0,3)) plt.grid(True) plt.show() def f_example_4b(t,y,alpha,phi_in): return phi_in-alpha*(y/(1+y)) # Initial condition y0=3 # time where we want your solution t = np.linspace(0, 1, 100) plt.figure(figsize=(10,8)) sol= odeint(f_example_4b, y0, t, args=(4,5), tfirst=True) plt.plot(t, sol, label=r'$\xi(t)$') plt.legend(loc='best') plt.ylabel('Concentration') plt.xlabel('t') plt.title(r'What happens then if $\phi_{in}\geq\alpha$?') plt.grid(True) plt.show() Explanation: Example 4 See classnotes! End of explanation t = np.linspace(0,4*np.pi,100) f = lambda t: (t**2/(np.pi**2+t**2))*np.sin(t) plt.figure() plt.plot(t,f(t),'b-',label='f(t)') plt.legend(loc='best') plt.xlabel('t') plt.grid(True) plt.show() Explanation: Example 7, for completeness. Plotting: $$\begin{align} f(t) &= \dfrac{t^2}{\pi^2+t^2}\,\sin(5\,t),\quad 0 \leq t \leq 4\,\pi \end{align}$$ End of explanation t = np.linspace(0,2,100) f = lambda t,tau: np.exp(-t/tau)*np.sin(2*np.pi*t) plt.figure() plt.plot(t,f(t,1),'b-',label=r'$f(t,\tau=1)$') plt.plot(t,f(t,0.5),'r--',label=r'$f(t,\tau=0.5)$') plt.legend(loc='best') plt.xlabel('t') plt.grid(True) plt.show() Explanation: Example 8, for completeness. Plotting: $$\begin{align} f(t,\tau) &= \exp(-t/\tau)\,\sin(2\,\pi\,t),\quad 0 \leq t \leq 2. 
\end{align}$$ End of explanation elev_widget = IntSlider(min=0, max=180, step=10, value=40) azim_widget = IntSlider(min=0, max=360, step=10, value=230) def example_9_interact(elev=40,azim=230): f = lambda t,u: (u**2)*(t/(1+t)) nt, nu = (100, 50) t = np.linspace(0, 5, nt) u = np.linspace(-1, 1, nu) tt, uu = np.meshgrid(t, u) zz = f(tt,uu) fig = plt.figure(figsize=(5,5)) ax = fig.gca(projection='3d') surf = ax.plot_surface(tt,uu,zz, cmap=cm.coolwarm, linewidth=0, antialiased=False) ax.view_init(elev,azim) ax.set_xlabel('t') ax.set_ylabel('u') ax.set_zlabel('f') plt.grid(True) plt.show() interact(example_9_interact,elev=elev_widget,azim=azim_widget) Explanation: Example 9, for completeness. Plotting: $$\begin{align} f(t,u) &= u^2\,\dfrac{t}{1+t},\quad 0 \leq t, u \in \mathbb{R}. \end{align}$$ End of explanation t = np.linspace(0,4,100) def f(t): if t<=2: return t*np.sin(t) else: return np.sin(np.pi*t)/t fv=np.vectorize(f, otypes=[np.float64]) plt.figure() plt.plot(t,fv(t),'b-',label='$f(t)$') plt.plot(t[t<=2],fv(t[t<=2]),'r--',label=r'$f_r(t)$') plt.plot(t[t>2],fv(t[t>2]),'r--') plt.legend(loc='best') plt.xlabel('t') plt.grid(True) plt.show() Explanation: Example 10, for completeness. $$\begin{equation} f(t)= \begin{cases} t\,\sin(t), & 0\leq t < 2, \ \dfrac{\sin(\pi\,t)}{t}, & 2 < t. \end{cases} \end{equation}$$ End of explanation def f_and_fprime(t,alpha): f = lambda t: np.exp(-alpha*t**2) fp = lambda t: -2*alpha*t*f(t) return np.array([f(t),fp(t)]) print(f_and_fprime(t=1,alpha=1)) Explanation: Example 11, for completeness. Write a function that return $f(t)=\exp(-\alpha\,t^2)$ and $f'(t)=-2\,\alpha\,t\,\exp(-\alpha\,t^2)$. End of explanation
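As a small follow-up that is not in the original notebook, the analytic derivative returned by f_and_fprime can be sanity-checked against a centered finite difference; the step size and tolerance below are arbitrary choices.
t0, alpha, h = 1.0, 1.0, 1e-6
f_t, fp_t = f_and_fprime(t0, alpha)
fd = (f_and_fprime(t0 + h, alpha)[0] - f_and_fprime(t0 - h, alpha)[0])/(2*h)
print(abs(fp_t - fd) < 1e-6)  # expected True: the centered difference error is O(h^2)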
15,922
Given the following text description, write Python code to implement the functionality described below step by step Description: Machine Learning Engineer Nanodegree Model Evaluation & Validation Project 1 Step1: Data Exploration In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results. Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively. Implementation Step3: Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset Step4: Question 2 - Goodness of Fit Assume that a dataset contains five data points and a model made the following predictions for the target variable Step5: Answer Step6: Question 3 - Training and Testing What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint Step7: Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint Step9: Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint Step10: Making Predictions Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on. Question 9 - Optimal Model What maximum depth does the optimal model have? How does this result compare to your guess in Question 6? Run the code block below to fit the decision tree regressor to the training data and produce an optimal model. Step11: Answer Step12: Answer
Python Code: # Import libraries necessary for this project import numpy as np import pandas as pd import visuals as vs # Supplementary code from sklearn.cross_validation import ShuffleSplit # Pretty display for notebooks %matplotlib inline # Load the Boston housing dataset data = pd.read_csv('housing.csv') prices = data['MEDV'] features = data.drop('MEDV', axis = 1) # Success print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape) Explanation: Machine Learning Engineer Nanodegree Model Evaluation & Validation Project 1: Predicting Boston Housing Prices Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting Started In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis. The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset: - 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed. - 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed. - The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded. - The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation. Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported. 
End of explanation print prices.head() print prices.describe() print np.mean(prices) # TODO: Minimum price of the data minimum_price = np.min(prices) # TODO: Maximum price of the data maximum_price = np.max(prices) # TODO: Mean price of the data mean_price = np.mean(prices) # TODO: Median price of the data median_price = np.median(prices) # TODO: Standard deviation of prices of the data std_price = np.std(prices) # Show the calculated statistics print "Statistics for Boston housing dataset:\n" print "Minimum price: ${:,.2f}".format(minimum_price) print "Maximum price: ${:,.2f}".format(maximum_price) print "Mean price: ${:,.2f}".format(mean_price) print "Median price ${:,.2f}".format(median_price) print "Standard deviation of prices: ${:,.2f}".format(std_price) Explanation: Data Exploration In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results. Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively. Implementation: Calculate Statistics For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model. In the code cell below, you will need to implement the following: - Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices. - Store each calculation in their respective variable. End of explanation # TODO: Import 'r2_score' from sklearn.metrics import r2_score def performance_metric(y_true, y_predict ): Calculates and returns the performance score between true and predicted values based on the metric chosen. # TODO: Calculate the performance score between 'y_true' and 'y_predict' score = r2_score(y_true, y_predict) # Return the score return score Explanation: Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood): - 'RM' is the average number of rooms among homes in the neighborhood. - 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor). - 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood. Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each. Hint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7? 
Answer: I would think that an increase in the average number of rooms in a home(RM) would lead to an increase in MEDV(prices).Justification the larger the home generally the more rooms although mansions to not fit this line of thought-square footage might be a better predictor; I would predict that an increase in the value of LSTAT would decrease MEDV-jusitification beacuse the lower the salary the less expendible income to take care of yard and home maintenence and the lower the slary the less free hours to work on the home upkeep, (although I personally do not believe this bias is true) and lastly an increase in PTRATIO would lead to a decrease in MEDV-justification- the higher the number of students in the class the less personal attention so parent would want a lower student teacher ratio. A school area that has a loweer student to teacher ratio would be more desirable and higher student to teacher ration would be less desirable Developing a Model In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions. Implementation: Define a Performance Metric It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions. The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 always fails to predict the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is no better than one that naively predicts the mean of the target variable. For the performance_metric function in the code cell below, you will need to implement the following: - Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict. - Assign the performance score to the score variable. End of explanation # Calculate the performance of this model score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3]) print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score) Explanation: Question 2 - Goodness of Fit Assume that a dataset contains five data points and a model made the following predictions for the target variable: | True Value | Prediction | | :-------------: | :--------: | | 3.0 | 2.5 | | -0.5 | 0.0 | | 2.0 | 2.1 | | 7.0 | 7.8 | | 4.2 | 5.3 | Would you consider this model to have successfully captured the variation of the target variable? Why or why not? Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination. 
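For reference, and as an addition that is not part of the original project template, the 0.923 reported by performance_metric for these five points can be verified by hand from the definition R^2 = 1 - SS_res/SS_tot:
import numpy as np
y_true = np.array([3.0, -0.5, 2.0, 7.0, 4.2])
y_pred = np.array([2.5, 0.0, 2.1, 7.8, 5.3])
ss_res = np.sum((y_true - y_pred)**2)         # 2.36
ss_tot = np.sum((y_true - y_true.mean())**2)  # 30.592
r2_by_hand = 1 - ss_res/ss_tot                # ~0.923, matching the reported score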
End of explanation from sklearn import cross_validation from sklearn.cross_validation import train_test_split # TODO: Import 'train_test_split' # TODO: Shuffle and split the data into training and testing subsets def shuffle_slpit_data(X, y): X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=1) return X_train, y_train, X_test, y_test # Success print "Angie, your data has been split" Explanation: Answer:I think the model has reasonably captured the variation in the data because it has fit a rsquared formula which delivers a 0.923 out of 1 or 92% correlation between the actual and predicted values. Possibility there is room to improve this Implementation: Shuffle and Split Data Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset. For the code cell below, you will need to implement the following: - Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets. - Split the data into 80% training and 20% testing. - Set the random_state for train_test_split to a value of your choice. This ensures results are consistent. - Assign the train and testing splits to X_train, X_test, y_train, and y_test. End of explanation # Produce learning curves for varying training set sizes and maximum depths vs.ModelLearning(features, prices) Explanation: Question 3 - Training and Testing What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint: What could go wrong with not having a way to test your model? Answer: Benefit: Splitting the data into multiple sets allows the alorgorithm to be tested against a set it has not seen before. It tests the robustness of the algorithm against unseen data such as would be encountered in the real world.What could go wrong if you do not have a way to test your model is that the model can learn and perform very well using all the data and the only way that you would know that it actually performs poorly on unseen data is when you go into production against unseen data and it performs poorly. If you had split and tested against the unseen held back data then you would have seen this during model creation. Analyzing Model Performance In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone. Learning Curves The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination. 
Run the code cell below and use these graphs to answer the following question. End of explanation vs.ModelComplexity(X_train, y_train) Explanation: Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint: Are the learning curves converging to particular scores? Answer: max_depth = 3 model. As the training points increase the scores goes from one to near 0.8. The testing score increases from just above 0.6 to just below 0.8. It appears having more training points would not benefit max depth 1,3 or 10. In the Max depth 6 graph the final data point score appears to possibly be less than the point before, so this depth could possible perform worse as more training points are added. In max_depth 3 graph it appears the learning curves might be converging towards the score of 0.8. The other graphs also appear to be converging except max_depth = 10 which appears to be more parallel. Complexity Curves The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function. Run the code cell below and use this graph to answer the following two questions. End of explanation from sklearn import grid_search from sklearn import tree from sklearn.metrics import mean_squared_error, make_scorer from sklearn.grid_search import GridSearchCV from sklearn.tree import DecisionTreeRegressor # TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV' def fit_model(X, y): Performs grid search over the 'max_depth' parameter for a decision tree regressor trained on the input data [X, y]. # Create cross-validation sets from the training data cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0) # TODO: Create a decision tree regressor object regressor = DecisionTreeRegressor() # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10 params = {'max_depth': (1,2,3,4,5,6,7,8,9,10)} # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer' scoring_function = make_scorer(score_func = mean_squared_error, greater_is_better = False) # TODO: Create the grid search object grid = GridSearchCV(estimator = regressor,param_grid = params,scoring = scoring_function,cv = cv_sets) # Fit the grid search object to the data to compute the optimal model grid = grid.fit(X, y) # Return the optimal model after fitting the data return grid.best_estimator_ try: grid = fit_model(features, prices) print "Yea, fit a model!", grid except: print "Something went wrong with fitting a model." Explanation: Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint: How do you know when a model is suffering from high bias or high variance? 
Answer: I had to look up the definitions of high bias and high variance from http://www.astroml.org/sklearn_tutorial/practical.html to help answer this question. High bias is underfitting the data. It is indicated when both the training and cross-validation errors are very high, which means the R^2 is low. If this is the case, I can add more features, use a more sophisticated model, use fewer samples, or decrease any regularization (penalty terms) that may be in the algorithm. They noted that adding more training data will not help matters if both lines have converged to a relatively high error. High variance is overfitting. It is indicated when the training error is much less than the cross-validation error. If this is the case, adding more training data may not help matters; the training error will climb and the cross-validation error will decrease until they begin to converge, and both lines tend to converge to a relatively high error. To fix high variance I can use fewer features, use more training samples, and/or increase regularization (add penalty terms). Looking at the graphs here, it appears that when max depth is 1 both the training error and the validation error are high (low R^2), which indicates high bias (underfitting). When the max depth is 10, the training error is low (high R^2) but the validation error is high (low R^2), which indicates high variance. The visual clues are the score points in relationship to the max depth. The shaded uncertainty appears to decrease fairly consistently on the training score at max depth 10, which makes sense if it is overfitting the data. Question 6 - Best-Guess Optimal Model Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition lead you to this answer? Answer: I would guess max depth 4 based upon the graph. It appears that overfitting (high variance) is beginning at max depth 6, and the error does not appear improved at max depth 5 as compared to 4. Evaluating Model Performance In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model. Question 7 - Grid Search What is the grid search technique and how it can be applied to optimize a learning algorithm? Answer: To answer this question I went to http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html and http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html#sklearn.metrics.make_scorer The grid search technique searches over the parameter values to return estimators. Because it can help determine the best parameters for the model, it can be useful for parameter tuning. Question 8 - Cross-Validation What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model? Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set? Answer: I had to research to answer this question. I used https://www.cs.cmu.edu/~schneide/tut5/node42.html to find the definition of the technique: "K-fold cross validation is one way to improve over the holdout method. The data set is divided into k subsets, and the holdout method is repeated k times. Each time, one of the k subsets is used as the test set and the other k-1 subsets are put together to form a training set" It seems to benefit grid search because each data point is in the test set exactly once (and in the training set k-1 times),
which then implies that using grid search without a cross validation set you could overfit or increase the variance Implementation: Fitting a Model Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms. For the fit_model function in the code cell below, you will need to implement the following: - Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object. - Assign this object to the 'regressor' variable. - Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable. - Use make_scorer from sklearn.metrics to create a scoring function object. - Pass the performance_metric function as a parameter to the object. - Assign this scoring function to the 'scoring_fnc' variable. - Use GridSearchCV from sklearn.grid_search to create a grid search object. - Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object. - Assign the GridSearchCV object to the 'grid' variable. End of explanation # Fit the training data to the model using grid search reg = fit_model(X_train, y_train) # Produce the value for 'max_depth' print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']) Explanation: Making Predictions Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on. Question 9 - Optimal Model What maximum depth does the optimal model have? How does this result compare to your guess in Question 6? Run the code block below to fit the decision tree regressor to the training data and produce an optimal model. End of explanation # Produce a matrix for client data client_data = [[5, 17, 15], # Client 1 [4, 32, 22], # Client 2 [8, 3, 12]] # Client 3 # Show predictions for i, price in enumerate(reg.predict(client_data)): print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price) Explanation: Answer: It choose max depth 5 and I quessed that max depth would be 4 for the optimal model Question 10 - Predicting Selling Prices Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients: | Feature | Client 1 | Client 2 | Client 3 | | :---: | :---: | :---: | :---: | | Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms | | Neighborhood poverty level (as %) | 17% | 32% | 3% | | Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 | What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features? 
Hint: Use the statistics you calculated in the Data Exploration section to help justify your response. Run the code block below to have your optimized model make predictions for each client's home. End of explanation vs.PredictTrials(features, prices, fit_model, client_data) Explanation: Answer: The statistics show that the: Minimum price: 105,000.00 Maximum price: 1,024,800.00 Mean price: 454,342.94 Median price 438,900.00 Standard deviation of prices: 165,171.13 so the values appear within reason based upon the statistics of the sales prices the poverty levels and the student teacher ratios. Although client 2 home has an extremely low valuation it would be interesting to see if the current comps maintain that low level over time. To elaborate as requested: client 1 has 5 rooms, a median poverty level and a median student teacher povery level as compared to the other 2 so a predicted selling price of 419,000 appears within the median of the 3 estimates(consistant with the median statistical home price), as is the case for home 2 which has fewer rooms than client 1 but the poverty level is the highest of the 3 and the student teacher ratio is also the highest of the 3 so the lowest valuation matches the the expectation of closest to the minimum price of the 3; also client 3 has 8 rooms, 3% poverty level and the best student teacher ratio which should place it closest to the maximum price of the three. In summation, the features of the clients homes(room.poverty level,student teacher level) which are relatively consistant with anticipated poor valuations and are consistant in their relative comparison to the statistical results also based upon rooms,poverty level and student/teacher ratio. The model appears consistant with the data and features presented. Although one cannot say of this result that these three features are the best predictor of home prices in totality Sensitivity An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on. End of explanation
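For readers without the visuals helper, a rough hand-rolled version of the same sensitivity check might look like the sketch below; the ten seeds and the focus on client 1 are assumptions, while train_test_split, fit_model, features, prices, and client_data are the objects defined earlier.
trial_prices = []
for seed in range(10):
    # Retrain on a different 80/20 split each time and record the prediction for client 1.
    X_tr, X_te, y_tr, y_te = train_test_split(features, prices, test_size=0.2, random_state=seed)
    reg_trial = fit_model(X_tr, y_tr)
    trial_prices.append(reg_trial.predict([client_data[0]])[0])
print "Range in prices: ${:,.2f}".format(max(trial_prices) - min(trial_prices))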
15,923
Given the following text description, write Python code to implement the functionality described below step by step Description: TFX pipeline example - Chicago Taxi tips prediction Overview Tensorflow Extended (TFX) is a Google-production-scale machine learning platform based on TensorFlow. It provides a configuration framework to express ML pipelines consisting of TFX components, which brings the user large-scale ML task orchestration, artifact lineage, as well as the power of various TFX libraries. Kubeflow Pipelines can be used as the orchestrator supporting the execution of a TFX pipeline. This sample demonstrates how to author a ML pipeline in TFX and run it on a KFP deployment. Permission This pipeline requires Google Cloud Storage permission to run. If KFP was deployed through K8S marketplace, please make sure "Allow access to the following Cloud APIs" is checked when creating the cluster. <img src="check_permission.png"> Otherwise, follow instructions in the guideline to guarantee at least, that the service account has storage.admin role. Step1: Note Step2: In this example we'll need TFX SDK later than 0.21 to leverage the RuntimeParameter feature. RuntimeParameter in TFX DSL Currently, TFX DSL only supports parameterizing field in the PARAMETERS section of ComponentSpec, see here. This prevents runtime-parameterizing the pipeline topology. Also, if the declared type of the field is a protobuf, the user needs to pass in a dictionary with exactly the same names for each field, and specify one or more value as RuntimeParameter objects. In other word, the dictionary should be able to be passed in to ParseDict() method and produce the correct pb message. Step3: TFX Components Please refer to the official guide for the detailed explanation and purpose of each TFX component.
Python Code: !python3 -m pip install pip --upgrade --quiet --user !python3 -m pip install kfp --upgrade --quiet --user pip install tfx==1.4.0 tensorflow==2.5.1 --quiet --user Explanation: TFX pipeline example - Chicago Taxi tips prediction Overview Tensorflow Extended (TFX) is a Google-production-scale machine learning platform based on TensorFlow. It provides a configuration framework to express ML pipelines consisting of TFX components, which brings the user large-scale ML task orchestration, artifact lineage, as well as the power of various TFX libraries. Kubeflow Pipelines can be used as the orchestrator supporting the execution of a TFX pipeline. This sample demonstrates how to author a ML pipeline in TFX and run it on a KFP deployment. Permission This pipeline requires Google Cloud Storage permission to run. If KFP was deployed through K8S marketplace, please make sure "Allow access to the following Cloud APIs" is checked when creating the cluster. <img src="check_permission.png"> Otherwise, follow instructions in the guideline to guarantee at least, that the service account has storage.admin role. End of explanation # Set `PATH` to include user python binary directory and a directory containing `skaffold`. PATH=%env PATH %env PATH={PATH}:/home/jupyter/.local/bin Explanation: Note: if you're warned by WARNING: The script {LIBRARY_NAME} is installed in '/home/jupyter/.local/bin' which is not on PATH. You might need to fix by running the next cell and restart the kernel. End of explanation import json import os import kfp import tensorflow_model_analysis as tfma from tfx import v1 as tfx # In TFX MLMD schema, pipeline name is used as the unique id of each pipeline. # Assigning workflow ID as part of pipeline name allows the user to bypass # some schema checks which are redundant for experimental pipelines. pipeline_name = 'taxi_pipeline_with_parameters' # Path of pipeline data root, should be a GCS path. # Note that when running on KFP, the pipeline root is always a runtime parameter. # The value specified here will be its default. pipeline_root = os.path.join('gs://{{kfp-default-bucket}}', 'tfx_taxi_simple', kfp.dsl.RUN_ID_PLACEHOLDER) # Location of input data, should be a GCS path under which there is a csv file. data_root = '/opt/conda/lib/python3.7/site-packages/tfx/examples/chicago_taxi_pipeline/data/simple' # Path to the module file, GCS path. # Module file is one of the recommended way to provide customized logic for component # includeing Trainer and Transformer. # See https://github.com/tensorflow/tfx/blob/93ea0b4eda5a6000a07a1e93d93a26441094b6f5/tfx/components/trainer/component.py#L38 taxi_module_file_param = tfx.dsl.experimental.RuntimeParameter( name='module-file', default='/opt/conda/lib/python3.7/site-packages/tfx/examples/chicago_taxi_pipeline/taxi_utils_native_keras.py', ptype=str, ) # Path that ML models are pushed, should be a GCS path. # TODO: CHANGE the GCS bucket name to yours. serving_model_dir = os.path.join('gs://your-bucket', 'serving_model', 'tfx_taxi_simple') push_destination = tfx.dsl.experimental.RuntimeParameter( name='push_destination', default=json.dumps({'filesystem': {'base_directory': serving_model_dir}}), ptype=str, ) Explanation: In this example we'll need TFX SDK later than 0.21 to leverage the RuntimeParameter feature. RuntimeParameter in TFX DSL Currently, TFX DSL only supports parameterizing field in the PARAMETERS section of ComponentSpec, see here. This prevents runtime-parameterizing the pipeline topology. 
Also, if the declared type of the field is a protobuf, the user needs to pass in a dictionary with exactly the same names for each field, and specify one or more value as RuntimeParameter objects. In other word, the dictionary should be able to be passed in to ParseDict() method and produce the correct pb message. End of explanation example_gen = tfx.components.CsvExampleGen(input_base=data_root) statistics_gen = tfx.components.StatisticsGen(examples=example_gen.outputs['examples']) schema_gen = tfx.components.SchemaGen( statistics=statistics_gen.outputs['statistics'], infer_feature_shape=False) example_validator = tfx.components.ExampleValidator( statistics=statistics_gen.outputs['statistics'], schema=schema_gen.outputs['schema']) # The module file used in Transform and Trainer component is paramterized by # _taxi_module_file_param. transform = tfx.components.Transform( examples=example_gen.outputs['examples'], schema=schema_gen.outputs['schema'], module_file=taxi_module_file_param) # The numbers of steps in train_args are specified as RuntimeParameter with # name 'train-steps' and 'eval-steps', respectively. trainer = tfx.components.Trainer( module_file=taxi_module_file_param, examples=transform.outputs['transformed_examples'], schema=schema_gen.outputs['schema'], transform_graph=transform.outputs['transform_graph'], train_args=tfx.proto.TrainArgs(num_steps=10), eval_args=tfx.proto.EvalArgs(num_steps=5)) # Set the TFMA config for Model Evaluation and Validation. eval_config = tfma.EvalConfig( model_specs=[ tfma.ModelSpec( signature_name='serving_default', label_key='tips_xf', preprocessing_function_names=['transform_features']) ], metrics_specs=[ tfma.MetricsSpec( # The metrics added here are in addition to those saved with the # model (assuming either a keras model or EvalSavedModel is used). # Any metrics added into the saved model (for example using # model.compile(..., metrics=[...]), etc) will be computed # automatically. metrics=[ tfma.MetricConfig(class_name='ExampleCount') ], # To add validation thresholds for metrics saved with the model, # add them keyed by metric name to the thresholds map. thresholds = { 'binary_accuracy': tfma.MetricThreshold( value_threshold=tfma.GenericValueThreshold( lower_bound={'value': 0.5}), change_threshold=tfma.GenericChangeThreshold( direction=tfma.MetricDirection.HIGHER_IS_BETTER, absolute={'value': -1e-10})) } ) ], slicing_specs=[ # An empty slice spec means the overall slice, i.e. the whole dataset. tfma.SlicingSpec(), # Data can be sliced along a feature column. In this case, data is # sliced along feature column trip_start_hour. tfma.SlicingSpec(feature_keys=['trip_start_hour']) ]) # The name of slicing column is specified as a RuntimeParameter. evaluator = tfx.components.Evaluator( examples=example_gen.outputs['examples'], model=trainer.outputs['model'], eval_config=eval_config) pusher = tfx.components.Pusher( model=trainer.outputs['model'], model_blessing=evaluator.outputs['blessing'], push_destination=push_destination) # Create the DSL pipeline object. # This pipeline obj carries the business logic of the pipeline, but no runner-specific information # was included. dsl_pipeline = tfx.dsl.Pipeline( pipeline_name=pipeline_name, pipeline_root=pipeline_root, components=[ example_gen, statistics_gen, schema_gen, example_validator, transform, trainer, evaluator, pusher ], enable_cache=True, beam_pipeline_args=['--direct_num_workers=%d' % 0], ) # Specify a TFX docker image. 
For the full list of tags please see: # https://hub.docker.com/r/tensorflow/tfx/tags tfx_image = 'gcr.io/tfx-oss-public/tfx:1.4.0' config = tfx.orchestration.experimental.KubeflowDagRunnerConfig( kubeflow_metadata_config=tfx.orchestration.experimental .get_default_kubeflow_metadata_config(), tfx_image=tfx_image) kfp_runner = tfx.orchestration.experimental.KubeflowDagRunner(config=config) # KubeflowDagRunner compiles the DSL pipeline object into KFP pipeline package. # By default it is named <pipeline_name>.tar.gz kfp_runner.run(dsl_pipeline) run_result = kfp.Client( host='1234567abcde-dot-us-central2.pipelines.googleusercontent.com' # Put your KFP endpoint here ).create_run_from_pipeline_package( pipeline_name + '.tar.gz', arguments={ # Uncomment following lines in order to use custom GCS bucket/module file/training data. # 'pipeline-root': 'gs://<your-gcs-bucket>/tfx_taxi_simple/' + kfp.dsl.RUN_ID_PLACEHOLDER, # 'module-file': '<gcs path to the module file>', # delete this line to use default module file. # 'data-root': '<gcs path to the data>' # delete this line to use default data. }) Explanation: TFX Components Please refer to the official guide for the detailed explanation and purpose of each TFX component. End of explanation
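A possible follow-up (not part of the original sample): once create_run_from_pipeline_package returns, you may want to block until the run finishes and check its final state. The sketch below assumes the run_result object created above and reuses the placeholder KFP endpoint; the exact client API and return shape can differ between kfp releases, so treat it as an illustrative sketch rather than the sample's official workflow.
# Optionally wait for the submitted run to complete and inspect its status.
# `run_result` comes from create_run_from_pipeline_package above; the host is a placeholder.
client = kfp.Client(host='1234567abcde-dot-us-central2.pipelines.googleusercontent.com')
run_detail = client.wait_for_run_completion(run_result.run_id, timeout=3600)
status = run_detail.run.status  # e.g. 'Succeeded', 'Failed', 'Error'
print('Run %s finished with status: %s' % (run_result.run_id, status))
if status != 'Succeeded':
    raise RuntimeError('TFX pipeline run did not succeed: %s' % status)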
15,924
Given the following text description, write Python code to implement the functionality described below step by step Description: <div align="right">Python 3.6 Jupyter Notebook</div> Visual communication Geocoding and markdown examples Your completion of the notebook exercises will be graded based on your ability to do the following Step1: 1. Introduction to geocoding Geocoding is the process of transforming a description of a location into a spatial (physical) location on the earth’s surface. You can geocode by entering one location’s description at a time or by simultaneously providing multiple descriptions in a table. 1.1 Geocoder There are several geocoding libraries and services available. This notebook demonstrates the use of the Geocoder Python library, using Google Maps as the provider. Start by geocoding a single city and country combination. You can change the values of the city, should you wish to do so. Step2: You can use the same library to find the location, based on IP addresses. When executing this in your virtual analysis environment, the location of the server will be returned. Step3: 1.2 Input data Step4: Consider the forms and applications that you complete regularly. While you may be happy to share your personal information with the company providing you with a product or service, it is highly likely that you would be unhappy if that company started sharing your information publicly. Many people provide this data on social media and public forums, and do not necessarily consider the potential consequences. One of the techniques to hide sensitive data is to only release aggregated data. The greatest disadvantage of this approach is that you are still able to identify people in low-density areas of the data set. You need to be extremely careful when designing applications that utilize personal data to ensure that you do not breach the trust of the users who have supplied you with their data. Names, surnames, telephone numbers, and email addresses have been removed, however, you may still be able to identify students. This will be demonstrated later in this course. 1.2.2 Prepare the data Step5: 1.2.3 Retrieve the data for a specific city Step6: 1.2.4 Plot the students per country Step7: 1.2.5 Plot the students per industry Step8: <br> <div class="alert alert-info"> <b>Exercise 1 Start Step9: <br> <div class="alert alert-info"> <b>Exercise 1 End.</b> </div> Exercise complete Step10: 1.3 Geocoding the data Next, geocode the cities in the student registrations list in order to display their locations on a map. Important Step11: If you opted to execute the cell above, wait for it to complete. The "In[ ]" will show "In[*]" while being executed, and will change to "In[number]" when complete. If this step has been completed successfully, you will not have to load the data set in the following cell. Should you choose to execute the cell, no harm will be done. You will simply overwrite your geocoded data set with the supplied geocoded data set. If you opted to not execute the cell above, you will need to execute the cell below to retrieve the data set that has already been geocoded for you, in order to proceed. Step12: 1.4 Saving and retrieving your result In some cases, you may want to save result sets. You can use this technique to store copies of intermediary results when you do not wish to perform the calculations again when resuming your analysis. This technique may also be used to output the result so that it may be shared or used in other applications. 
This example demonstrates how to save the file as a CSV in the current working directory, "module_1". Step13: 1.5 Plotting the geocoded data on a map Visit the Folium documentation or browse the GitHub repository for further instructions and examples on how to plot geocoded data. Feel free to change the map and try visualizing the count of students per country or the count of students per industry per country. Step14: 2. Communicating your analysis In the orientation module notebook as well as the second notebook in this module, the markdown mechanism was briefly introduced. This mechanism has been used to provide instructions and images to you within these notebooks. You can select "Help" and then "Markdown" in the Jupyter menu at the top of the screen to take you to additional links. Use a couple of the cells below to demonstrate your ability to communicate your ideas using markdown. You can state your intention, describe your steps, and include code, comments, and visualizations in the cells below. <br> <div class="alert alert-info"> <b>Exercise 2 Start
Python Code: # Load relevant libraries. import pandas as pd import numpy as np import matplotlib import folium import geocoder from tqdm import tqdm %pylab inline pylab.rcParams['figure.figsize'] = (10, 8) Explanation: <div align="right">Python 3.6 Jupyter Notebook</div> Visual communication Geocoding and markdown examples Your completion of the notebook exercises will be graded based on your ability to do the following: Apply: Are you able to execute code (using the supplied examples) that performs the required functionality on supplied or generated data sets? Evaluate: Are you able to interpret the results and justify your interpretation based on the observed data? Create: Are you able to produce notebooks that serve as computational records of a session, and which can be used to share your insights with others? Notebook objectives By the end of this notebook, you will be expected to be able to use geocoding within Python and communicate your ideas using markdown. List of exercises Exercise 1: Plot student count. Exercise 2: Markdown. Notebook introduction Working with data helps you make informed decisions. There is a wealth of information in the form of articles about being "data driven". There have also been technological and systems development best practices for a couple of decades, many of which contain great input and guidelines. One of the biggest problems we are facing with tools, technology, and best practices is the rate of change. David Shrier discusses the concept of the half-life of data in the video content in this module. The half-life of tools, technologies, and best practices in the information technology industry is also shortening. Your enrollment in this course demonstrates your ability to see value in data-driven approaches, and the opportunities that advances in technology bring. As you continue your journey, you will discover additional sources of information, such as the rich communities on GitHub, where users share code and learn from others. This notebook works through an example containing data from the students enrolled in this course. In many cases, you will need to enrich your existing data sets, as changing the collection process is not always an option. This notebook demonstrates how country and city locations (in text format) can be utilized to geocode cities to locations that can be plotted on a map. While you should not share answers with classmates, you are encouraged to ask for assistance, and post examples and syntax structures that you have found helpful, on the forums. <div class="alert alert-warning"> <b>Note</b>:<br> It is strongly recommended that you save and checkpoint after applying significant changes or completing exercises. This allows you to return the notebook to a previous state should you wish to do so. On the Jupyter menu, select "File", then "Save and Checkpoint" from the dropdown menu that appears. </div> Load libraries and set options End of explanation # Let's geocode a city in the format of the data set that we have available. g = geocoder.google('Adelaide, Australia') # Print the latitude and longitude for the city. g.latlng Explanation: 1. Introduction to geocoding Geocoding is the process of transforming a description of a location into a spatial (physical) location on the earth’s surface. You can geocode by entering one location’s description at a time or by simultaneously providing multiple descriptions in a table. 1.1 Geocoder There are several geocoding libraries and services available. 
This notebook demonstrates the use of the Geocoder Python library, using Google Maps as the provider. Start by geocoding a single city and country combination. You can change the values of the city, should you wish to do so. End of explanation # Find your location based on your IP address. mylocation = geocoder.ip('me') # Print your location. mylocation.latlng Explanation: You can use the same library to find the location, based on IP addresses. When executing this in your virtual analysis environment, the location of the server will be returned. End of explanation # Load student location data and display the header. df = pd.read_csv('students_raw.csv') df.head() Explanation: 1.2 Input data: Student location An earlier snapshot of the student group has been extracted, a new unique identifier generated, and the industry, country, and city have been included. The aim here is to show you what can be achieved with minimal input. 1.2.1 Load the data End of explanation # Step 1: Group the data to hide the user id. df1 = pd.DataFrame(df.groupby(['country', 'city', 'industry'])['id'].count()).reset_index() df1 = df1.rename(columns = {'id':'student_count'}) df1.head(10) Explanation: Consider the forms and applications that you complete regularly. While you may be happy to share your personal information with the company providing you with a product or service, it is highly likely that you would be unhappy if that company started sharing your information publicly. Many people provide this data on social media and public forums, and do not necessarily consider the potential consequences. One of the techniques to hide sensitive data is to only release aggregated data. The greatest disadvantage of this approach is that you are still able to identify people in low-density areas of the data set. You need to be extremely careful when designing applications that utilize personal data to ensure that you do not breach the trust of the users who have supplied you with their data. Names, surnames, telephone numbers, and email addresses have been removed, however, you may still be able to identify students. This will be demonstrated later in this course. 1.2.2 Prepare the data End of explanation # Return all rows for New York. df1.loc[df1['city'] == 'New York'] Explanation: 1.2.3 Retrieve the data for a specific city End of explanation # Plot the count of students per country. country_counts = df1.groupby(['country'])['student_count'].sum() country_counts.plot(kind='bar') Explanation: 1.2.4 Plot the students per country End of explanation # Plot the count of students per industry. industry_counts = df1.groupby(['industry'])['student_count'].sum() industry_counts.plot(kind='bar') Explanation: 1.2.5 Plot the students per industry End of explanation # Your code here. Explanation: <br> <div class="alert alert-info"> <b>Exercise 1 Start: Plot student count.</b> </div> Instructions Plot the count of students per city for a specific country. Create a data frame with the list of cities in your country that are present in this data set. Should the data set be too sparsely or densely populated for your country, you are welcome to select another. Use the variable name "df3" for your subset. Create a bar plot for the cities in this country, indicating the number of students in each city using the sum method. Hint: Create a new subset of the data set first: new_df = df.loc[df['column'] == 'value'] End of explanation # We tested the geocoder library with town and country as input. 
# Let's create a new column in our dataframe that contains these values. df1['geocode_input'] = df1['city'] + ', ' + df1['country'] # We also create two additional columns for lattitude and longitude. df1['lat'], df1['long'] = [0, 0] # Display the head of the updated dataframe. df1.head() Explanation: <br> <div class="alert alert-info"> <b>Exercise 1 End.</b> </div> Exercise complete: This is a good time to "Save and Checkpoint". 1.2.6 Prepare the data frame for geocoding End of explanation # Now we use Geocoder in a loop to geocode the cities and update our dataframe. # Wait until the In[*] indicator on the lefthand side changes to a number before proceeding. # Uncomment the lines below by removing the '#' from the start of the line should you wish to execute the code. #for i in tqdm(range(len(df1))): # g = geocoder.google(df1.loc[i,'geocode_input']) # df1.loc[i,'lat'], df1.loc[i,'long'] = g.lat, g.lng #print('Geocoding complete!') Explanation: 1.3 Geocoding the data Next, geocode the cities in the student registrations list in order to display their locations on a map. Important: Executing the cell below is optional. This cell will run through a loop and geocode each of the city and country combinations provided as input. This process may take at least 5 minutes as the response time is influenced by the target server capacity. On a course such as this one, where there is a large number of students, you may experience a delay in response. This opportunity will therefore be used to demonstrate how you can save intermediary results as an output file, which you can then load when resuming your analysis at a later stage, without having to redo all of the processing. Note: The code in the cell below can be uncommented and executed should you wish to do so, but it is not required to complete the notebook. End of explanation # Load geocoded dataset if you chose not to execute. df1 = pd.read_csv('grouped_geocoded.csv',index_col=0) # Let's look at the dataframe again to see the populated values for latitude and longitude. df1.head() Explanation: If you opted to execute the cell above, wait for it to complete. The "In[ ]" will show "In[*]" while being executed, and will change to "In[number]" when complete. If this step has been completed successfully, you will not have to load the data set in the following cell. Should you choose to execute the cell, no harm will be done. You will simply overwrite your geocoded data set with the supplied geocoded data set. If you opted to not execute the cell above, you will need to execute the cell below to retrieve the data set that has already been geocoded for you, in order to proceed. End of explanation # To save the output to a file you can use the command below and replace "filename_export" with a name of your choice. df1.to_csv('filename_export.csv') # To load the file you just generated, you can replace the filename below with the one you entered in the previous cell. # Create a new Pandas dataframe with the file created in the previous cell. new_df = pd.read_csv('filename_export.csv') Explanation: 1.4 Saving and retrieving your result In some cases, you may want to save result sets. You can use this technique to store copies of intermediary results when you do not wish to perform the calculations again when resuming your analysis. This technique may also be used to output the result so that it may be shared or used in other applications. This example demonstrates how to save the file as a CSV in the current working directory, "module_1". 
End of explanation # Set map center and zoom level. mapc = [0, 30] zoom = 2 # Create map object. map_osm = folium.Map(location=mapc, zoom_start=zoom) # Plot your server location. folium.CircleMarker(mylocation.latlng, radius=50, popup='My Server Location', fill_color='red' ).add_to(map_osm) # Plot each of the locations that you geocoded. for j in range(len(df1)): folium.Marker([df1.ix[j,'lat'], df1.ix[j,'long']], icon=folium.Icon(color='green',icon='info-sign') ).add_to(map_osm) # Show the map. map_osm # Feel free to experiment here with mapping options. # You can copy and paste the code from the cell above and change markers, zoom level, # or add additional features demonstrated on the Folium site in this cell. Explanation: 1.5 Plotting the geocoded data on a map Visit the Folium documentation or browse the GitHub repository for further instructions and examples on how to plot geocoded data. Feel free to change the map and try visualizing the count of students per country or the count of students per industry per country. End of explanation # Your code answer here. (#3) # Your code answer here. (#4) # Your code answer here. (#5) Explanation: 2. Communicating your analysis In the orientation module notebook as well as the second notebook in this module, the markdown mechanism was briefly introduced. This mechanism has been used to provide instructions and images to you within these notebooks. You can select "Help" and then "Markdown" in the Jupyter menu at the top of the screen to take you to additional links. Use a couple of the cells below to demonstrate your ability to communicate your ideas using markdown. You can state your intention, describe your steps, and include code, comments, and visualizations in the cells below. <br> <div class="alert alert-info"> <b>Exercise 2 Start: Markdown.</b> </div> Instructions Getting comfortable with markdown. Notebook documents contain the inputs and outputs of an interactive session, as well as additional text that accompanies the code but is not meant for execution. In this way, notebook files can serve as a complete computational record of a session. To this purpose, this exercise requires you to complete the section below using a combination of markdown instructions and code. The content that you submit will not be evaluated critically, but it is important that you display the ability to master the basics of markdown. In separate cells, create each of the following features as they relate to your understanding of big data and social analytics: 1. A markdown cell containing a combination of headings, text, and lists. 2. A markdown cell containing a link or image. 3. A code cell with no output (for example, loading a library) that contains comments in the cell describing the function of the cell. 4. A code cell with an output (for example, df.head() or a calculation). 5. A code cell that produces a graph or plot. Hints: Make use of sections in previous notebooks, source external components (with references), or generate your own content. You may also obtain additional help from the Jupyter menu bar at the top of this notebook, under "Help", "Markdown", or review the Jupyter Markdown Documentation. Note: Ensure that the cell type is "Markdown" in the menu bar above, and that you "Execute" the markdown cells before saving, downloading, and submitting your notebook to the Online Campus. Should you need to edit a markdown cell after executing it, double-click the white space on the left of the cell. Your markdown answer here. 
(#1) Your markdown answer here. (#2) End of explanation
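As an optional extension of the map above (not part of the original exercise), the sketch below draws one circle per country, sized by the total number of students. It assumes the df1 dataframe created earlier, with 'country', 'lat', 'long' and 'student_count' columns, and approximates each country's position by the mean coordinates of its geocoded cities.
# Aggregate the geocoded data to one row per country.
country_df = df1.groupby('country').agg({'student_count': 'sum',
                                         'lat': 'mean',
                                         'long': 'mean'}).reset_index()

# Plot one circle per country, scaled by the number of students.
country_map = folium.Map(location=[0, 30], zoom_start=2)
for _, row in country_df.iterrows():
    folium.CircleMarker([row['lat'], row['long']],
                        radius=5 + 2 * row['student_count'],
                        popup='%s: %i students' % (row['country'], row['student_count']),
                        fill_color='blue').add_to(country_map)
country_map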
15,925
Given the following text description, write Python code to implement the functionality described below step by step Description: Compare fit of mixture model where the nulldistribution is either with or without prethreshold In this notebook, I did a first effort to see if we can apply the thresholdfree peakdistribution in our method to estimate the alternative distribution on a simulated dataset. Import packages and set working directory Step1: Define peak density function Step2: Simulate and export data from 10 subjects Step3: Perform group analysis and extract peaks from Tstat-map Step4: Plot observed distribution of peaks with theoretical distribution (under H_0) Step5: Compute p-values based on theoretical distribution (by numerical integration) Step6: Compute proportion of activation based on BUM model Step7: Plot histogram of p-values with expected distribution (beta and uniform) Step8: Apply power procedure WITH threshold Step10: Adjust power procedure without threshold Step11: Figures for JSM Step12: $P(T>t | H_0, t>u)$
Python Code: import matplotlib % matplotlib inline import numpy as np import scipy import scipy.stats as stats import scipy.optimize as optimize import scipy.integrate as integrate from __future__ import print_function, division import os import math from nipy.labs.utils.simul_multisubject_fmri_dataset import surrogate_3d_dataset from nipype.interfaces import fsl import nibabel as nib import matplotlib.pyplot as plt import pandas as pd from palettable.colorbrewer.qualitative import Paired_12 import scipy.stats as stats os.chdir("/Users/Joke/Documents/Onderzoek/Studie_7_neuropower_improved/WORKDIR/") Explanation: Compare fit of mixture model where the nulldistribution is either with or without prethreshold In this notebook, I did a first effort to see if we can apply the thresholdfree peakdistribution in our method to estimate the alternative distribution on a simulated dataset. Import packages and set working directory End of explanation def peakdens1D(x,k): f1 = (3-k**2)**0.5/(6*math.pi)**0.5*np.exp(-3*x**2/(2*(3-k**2))) f2 = 2*k*x*math.pi**0.5/6**0.5*stats.norm.pdf(x)*stats.norm.cdf(k*x/(3-k**2)**0.5) out = f1*f2 return out def peakdens2D(x,k): f1 = 3**0.5*k**2*(x**2-1)*stats.norm.pdf(x)*stats.norm.cdf(k*x/(2-k**2)**0.5) f2 = k*x*(3*(2-k**2))**0.5/(2*math.pi) * np.exp(-x**2/(2-k**2)) f31 = 6**0.5/(math.pi*(3-k**2))**0.5*np.exp(-3*x**2/(2*(3-k**2))) f32 = stats.norm.cdf(k*x/((3-k**2)*(2-k**2))**0.5) out = f1+f2+f31*f32 return out def peakdens3D(x,k): fd1 = 144*stats.norm.pdf(x)/(29*6**(0.5)-36) fd211 = k**2.*((1.-k**2.)**3. + 6.*(1.-k**2.)**2. + 12.*(1.-k**2.)+24.)*x**2. / (4.*(3.-k**2.)**2.) fd212 = (2.*(1.-k**2.)**3. + 3.*(1.-k**2.)**2.+6.*(1.-k**2.)) / (4.*(3.-k**2.)) fd213 = 3./2. fd21 = (fd211 + fd212 + fd213) fd22 = np.exp(-k**2.*x**2./(2.*(3.-k**2.))) / (2.*(3.-k**2.))**(0.5) fd23 = stats.norm.cdf(2.*k*x / ((3.-k**2.)*(5.-3.*k**2.))**(0.5)) fd2 = fd21*fd22*fd23 fd31 = (k**2.*(2.-k**2.))/4.*x**2. - k**2.*(1.-k**2.)/2. - 1. fd32 = np.exp(-k**2.*x**2./(2.*(2.-k**2.))) / (2.*(2.-k**2.))**(0.5) fd33 = stats.norm.cdf(k*x / ((2.-k**2.)*(5.-3.*k**2.))**(0.5)) fd3 = fd31 * fd32 * fd33 fd41 = (7.-k**2.) + (1-k**2)*(3.*(1.-k**2.)**2. + 12.*(1.-k**2.) + 28.)/(2.*(3.-k**2.)) fd42 = k*x / (4.*math.pi**(0.5)*(3.-k**2.)*(5.-3.*k**2)**0.5) fd43 = np.exp(-3.*k**2.*x**2/(2.*(5-3.*k**2.))) fd4 = fd41*fd42 * fd43 fd51 = math.pi**0.5*k**3./4.*x*(x**2.-3.) f521low = np.array([-10.,-10.]) f521up = np.array([0.,k*x/2.**(0.5)]) f521mu = np.array([0.,0.]) f521sigma = np.array([[3./2., -1.],[-1.,(3.-k**2.)/2.]]) fd521,i = stats.mvn.mvnun(f521low,f521up,f521mu,f521sigma) f522low = np.array([-10.,-10.]) f522up = np.array([0.,k*x/2.**(0.5)]) f522mu = np.array([0.,0.]) f522sigma = np.array([[3./2., -1./2.],[-1./2.,(2.-k**2.)/2.]]) fd522,i = stats.mvn.mvnun(f522low,f522up,f522mu,f522sigma) fd5 = fd51*(fd521+fd522) out = fd1*(fd2+fd3+fd4+fd5) return out Explanation: Define peak density function End of explanation smooth_FWHM = 3 smooth_sigma = smooth_FWHM/(2*math.sqrt(2*math.log(2))) dimensions = (50,50,50) positions = np.array([[60,40,40], [40,80,40], [50,30,60]]) amplitudes = np.array([1.,1.,1.]) width = 5. 
seed=123 mask = nib.load("0mask.nii") nsub=10 noise = surrogate_3d_dataset(n_subj=nsub, shape=dimensions, mask=mask, sk=smooth_sigma,noise_level=1.0, width=5.0,out_text_file=None, out_image_file=None, seed=seed) signal = surrogate_3d_dataset(n_subj=nsub, shape=dimensions, mask=mask, sk=smooth_sigma,noise_level=0.0, pos=positions, ampli=amplitudes, width=10.0,out_text_file=None, out_image_file=None, seed=seed) low_values_indices = signal < 0.1 signal[low_values_indices] = 0 high_values_indices = signal > 0 signal[high_values_indices] = 1 data = noise+signal fig,axs=plt.subplots(1,3,figsize=(13,3)) fig.subplots_adjust(hspace = .5, wspace=0.3) axs=axs.ravel() axs[0].imshow(noise[1,:,:,40]) axs[1].imshow(signal[1,:,:,40]) axs[2].imshow(data[1,:,:,40]) fig.show() data = data.transpose((1,2,3,0)) img=nib.Nifti1Image(data,np.eye(4)) img.to_filename(os.path.join("simulated_dataset.nii.gz")) Explanation: Simulate and export data from 10 subjects End of explanation model=fsl.L2Model(num_copes=nsub) model.run() flameo=fsl.FLAMEO(cope_file='simulated_dataset.nii.gz', cov_split_file='design.grp', design_file='design.mat', t_con_file='design.con', mask_file='0mask.nii', run_mode='ols', terminal_output='none') flameo.run() from StringIO import StringIO # This is for reading a string into a pandas df import tempfile import shutil tstat = nib.load("stats/tstat1.nii.gz").get_data() minimum = np.nanmin(tstat) newdata = tstat - minimum #little trick because fsl.model.Cluster ignores negative values img=nib.Nifti1Image(newdata,np.eye(4)) img.to_filename(os.path.join("tstat1_allpositive.nii.gz")) input_file = os.path.join("tstat1_allpositive.nii.gz") # 0) Creating a temporary directory for the temporary file to save the local cluster file tmppath = tempfile.mkdtemp() # 1) Running the command and saving output to screen into df cmd = "cluster -i %s --thresh=0 --num=10000 --olmax=%s/locmax.txt --connectivity=26" %(input_file,tmppath) output = StringIO(os.popen(cmd).read()) #Joke - If you need the output for the max stuffs, you can get it in this variable, # you can read it into a pandas data frame df = pd.DataFrame.from_csv(output, sep="\t", parse_dates=False) df # 2) Now let's read in the temporary file, and delete the directory and everything in it peaks = pd.read_csv("%s/locmax.txt" %tmppath,sep="\t").drop('Unnamed: 5',1) peaks.Value = peaks.Value + minimum shutil.rmtree(tmppath) peaks[:5] Explanation: Perform group analysis and extract peaks from Tstat-map End of explanation xn = np.arange(-10,10,0.01) yn = [] for x in xn: yn.append(peakdens3D(x,1)) twocol = Paired_12.mpl_colors plt.figure(figsize=(7,5)) plt.hist(peaks.Value,lw=0,facecolor=twocol[0],normed=True,bins=np.arange(-5,10,0.3),label="observed distribution") plt.xlim([-1,10]) plt.ylim([0,0.6]) plt.plot(xn,yn,color=twocol[1],lw=3,label="theoretical distribution under H_0") plt.title("histogram") plt.xlabel("peak height") plt.ylabel("density") plt.legend(loc="upper left",frameon=False) plt.show() Explanation: Plot observed distribution of peaks with theoretical distribution (under H_0) End of explanation y = [] for x in peaks.Value: y.append(1-integrate.quad(lambda x: peakdens3D(x,1), -20, x)[0]) ynew = [10**(-6) if x<10**(-6) else x for x in y] peaks.P = ynew Explanation: Compute p-values based on theoretical distribution (by numerical integration) End of explanation bum = BUM.bumOptim(peaks.P,starts=100) bum["pi1"] Explanation: Compute proportion of activation based on BUM model End of explanation twocol = Paired_12.mpl_colors 
plt.figure(figsize=(7,5)) plt.hist(peaks.P,lw=0,facecolor=twocol[0],normed=True,bins=np.arange(0,1,0.1),label="observed distribution") plt.hlines(1-bum["pi1"],0,1,color=twocol[1],lw=3,label="null part of distribution") plt.plot(xn,stats.beta.pdf(xn,bum["a"],1)+1-bum["pi1"],color=twocol[3],lw=3,label="alternative part of distribution") plt.xlim([0,1]) plt.ylim([0,4]) plt.title("histogram") plt.xlabel("peak height") plt.ylabel("density") plt.legend(loc="upper right",frameon=False) plt.show() Explanation: Plot histogram of p-values with expected distribution (beta and uniform) End of explanation powerthres = neuropower.peakmixmodfit(peaks.Value[peaks.Value>3],bum["pi1"],3) print(powerthres["mu"]) print(powerthres["sigma"]) twocol = Paired_12.mpl_colors plt.figure(figsize=(7,5)) plt.hist(peaks.Value[peaks.Value>3],lw=0,facecolor=twocol[0],normed=True,bins=np.arange(3,10,0.3),label="observed distribution") plt.xlim([3,10]) plt.ylim([0,1]) plt.plot(xn,neuropower.nulprobdens(3,xn)*(1-bum["pi1"]),color=twocol[3],lw=3,label="null distribution") plt.plot(xn,neuropower.altprobdens(powerthres["mu"],powerthres["sigma"],3,xn)*(bum["pi1"]),color=twocol[5],lw=3, label="alternative distribution") plt.plot(xn,neuropower.mixprobdens(powerthres["mu"],powerthres["sigma"],bum["pi1"],3,xn),color=twocol[1],lw=3,label="total distribution") plt.title("histogram") plt.xlabel("peak height") plt.ylabel("density") plt.legend(loc="upper right",frameon=False) plt.show() Explanation: Apply power procedure WITH threshold End of explanation def altprobdens(mu,sigma,peaks): out = scipy.stats.norm(mu,sigma).pdf(peaks) return out def mixprobdens(mu,sigma,pi1,peaks): f0=[(1-pi1)*peakdens3D(p,1) for p in peaks] fa=[pi1*altprobdens(mu,sigma,p) for p in peaks] f=[x + y for x, y in zip(f0, fa)] return(f) def mixprobdensSLL(pars,pi1,peaks): mu=pars[0] sigma=pars[1] f = mixprobdens(mu,sigma,pi1,peaks) LL = -sum(np.log(f)) return(LL) def nothrespeakmixmodfit(peaks,pi1): Searches the maximum likelihood estimator for the mixture distribution of null and alternative start = [5,0.5] opt = scipy.optimize.minimize(mixprobdensSLL,start,method='L-BFGS-B',args=(pi1,peaks),bounds=((2.5,50),(0.1,50))) out={'maxloglikelihood': opt.fun, 'mu': opt.x[0], 'sigma': opt.x[1]} return out modelfit = nothrespeakmixmodfit(peaks.Value,bum["pi1"]) twocol = Paired_12.mpl_colors plt.figure(figsize=(7,5)) plt.hist(peaks.Value,lw=0,facecolor=twocol[0],normed=True,bins=np.arange(-2,10,0.3),label="observed distribution") plt.xlim([-2,10]) plt.ylim([0,0.5]) plt.plot(xn,[(1-bum["pi1"])*peakdens3D(p,1) for p in xn],color=twocol[3],lw=3,label="null distribution") plt.plot(xn,bum["pi1"]*altprobdens(modelfit["mu"],modelfit["sigma"],xn),color=twocol[5],lw=3,label="alternative distribution") plt.plot(xn,mixprobdens(modelfit["mu"],modelfit["sigma"],bum["pi1"],xn),color=twocol[1],lw=3,label="fitted distribution") plt.title("histogram") plt.xlabel("peak height") plt.ylabel("density") plt.legend(loc="upper right",frameon=False) plt.show() Explanation: Adjust power procedure without threshold End of explanation xn = np.arange(-10,10,0.01) newcol = ["#8C1515","#4D4F53","#000000","#B3995D"] plt.figure(figsize=(5,3)) plt.xlim([1.7,7.8]) plt.ylim([0,2]) k = -1 for u in range(2,6): k = k+1 print(k) plt.plot(xn,neuropower.nulprobdens(u,xn),color=newcol[k],lw=3,label="u=%s" %(u)) plt.vlines(u,0,2,color=newcol[k],lw=1,linestyle="--") plt.legend(loc="upper right",frameon=False) plt.show() Explanation: Figures for JSM End of explanation plt.figure(figsize=(5,3)) 
plt.hlines(1-0.30,0,1,color=newcol[1],lw=3,label="null distribution") plt.plot(xn,stats.beta.pdf(xn,0.2,1)+1-0.3,color=newcol[0],lw=3,label="alternative distribution") plt.xlim([0,1]) plt.ylim([0,4]) plt.title("") plt.xlabel("") plt.ylabel("") plt.legend(loc="upper right",frameon=False) plt.show() plt.figure(figsize=(5,3)) plt.xlim([2,6]) plt.ylim([0,1]) plt.plot(xn,neuropower.nulprobdens(2,xn)*0.3,color=newcol[3],lw=3,label="null distribution") plt.plot(xn,neuropower.altprobdens(3,1,2,xn)*0.7,color=newcol[1],lw=3, label="alternative distribution") plt.plot(xn,neuropower.mixprobdens(3,1,0.7,2,xn),color=newcol[0],lw=3,label="total distribution") plt.title("") plt.xlabel("") plt.ylabel("") plt.legend(loc="upper right",frameon=False) plt.show() y1 = [] ran = range(10,51) for n in ran: delta = 3/10**0.5 new = delta*n**0.5 y1.append(1-neuropower.altcumdens(new,1,2,4)) plt.figure(figsize=(5,3)) plt.plot(ran,y1,color=newcol[0],lw=3) plt.xlim([10,np.max(ran)]) plt.ylim([0,1]) plt.title("") plt.xlabel("") plt.ylabel("") plt.show() Explanation: $P(T>t | H_0, t>u)$ End of explanation
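As an added sanity check (not in the original analysis), the sketch below verifies numerically that the 3D peak density with k=1 integrates to approximately 1, and computes the truncated null tail probability P(T>t | T>u, H_0) by numerical integration. It assumes the peakdens3D function defined earlier and reuses the scipy.integrate import from the top of the notebook; the thresholds u=2 and t=4 are arbitrary example values.
# Sanity check: the peak density should integrate to ~1 over a wide range.
total, _ = integrate.quad(lambda x: peakdens3D(x, 1), -20, 20)
print("Integral of peakdens3D over (-20, 20): %.4f" % total)

def null_tail_prob(t, u, k=1):
    # P(T > t | T > u, H_0) for t >= u, as a ratio of two tail integrals.
    num, _ = integrate.quad(lambda x: peakdens3D(x, k), t, 20)
    den, _ = integrate.quad(lambda x: peakdens3D(x, k), u, 20)
    return num / den

print("P(T > 4 | T > 2, H_0) = %.4f" % null_tail_prob(4, 2))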
15,926
Given the following text description, write Python code to implement the functionality described below step by step Description: Using ChemicalEnvironments Chemical Environments were created as a way to parse SMIRKS strings and make changes in chemical perception space. In this workbook, we will show you have chemical environments are initiated and used to make changes to a SMIRKS pattern. Authors * Caitlin C. Bannan from Mobley Group at University of California, Irvine Basic Structure of ChemicalEnvironments ChemicalEnvironments are initiated with the following input variables Step1: Default Chemical Environments All Chemical Environments can be initated using SMIRKS strings. If a ChemicalEnvironment is initiated with no SMIRKS pattern, it is an empty structure. However, there are 5 subtypes of ChemicalEnvironments that match the types of parameters found in the SMIRNOFF format. If they are initiated with no SMIRKS pattern, their structure matches a generic for that parameter type, for example [* Step2: Initiating ChemicalEnvironments from SMIRKS Strings ChemicalEnvironments can be initialized by SMIRKS strings. Here we attempt to show the robustness of this parsing. These patterns are intentionally complicated and therefore can be hard to read by humans. Here are some of the key features we would like to test Step3: Structure of ChemicalEnvironments Up until now, we have discussed only how to initiate ChemicalEnvironments. Now we will explain how they are structured and how to use them to make changes to your SMIRKS pattern (and therefor the fragment you are describing). To begin with, the overall structure of ChemicalEnvironments is similar to how a chemist might think about a fragment. We use NetworkX graphs to store information about the pieces. Nodes store information about Atoms and edges (connecting nodes) store information about Bonds. Both of these sub-structures, Atoms and Bonds store information about the input SMIRKS pattern in a broken down way so it can be easily editted. The words Atoms and Bonds are capitalized as they are classes in and of themselves. Both Atoms and Bonds have two types of information * ORtypes - things that are OR'd together in the SMIRKS string using a comma (',') - These have two subtypes Step4: Changing ORtypes and ANDtypes For both ORtypes and ANDtypes for Atoms and Bonds there are "get" and "set" methods. The set methods completely rewrite that type. There are also methods for add ORtypes and ANDtypes where you add a single entry to the existing list. Here we will use the set ORtypes to change atom1 to be a trivalent carbon or a divalent nitrogen. Then we will also add an ORType and ANDType to atom2 so that it could refer to an oxygen ('#8') or trivalent and neutral nitrogen ('#7X3+0') and in one ring ('R1'). Final SMIRKS string Step5: Adding new Atoms The addAtom method is used to introduce atoms bound to existing atoms. You can add an empty atom or specify information about the new bond and new atom. Here are the parameters for the addAtom method Step6: Removing Atoms The removeAtom method works how you would expect. It removes the specified atom and the bond connecting it to the fragment. You cannot remove indexed atoms (if you want to remove their OR and AND decorators you can set them to empty lists). The other option with the removeAtom method is to say only remove it if the atom is undecorated. This is done by setting the input variable isEmpty to True (default is False). 
When isEmpty is True, the atom is only removed if it has 1 ORtype and no ANDtypes. The removeAtom method returns True if the atom was removed and False if it was not. As an example, we will remove the hydrogen in the beta position to atom3 that was added above. New SMIRKS pattern Step7: Other ChemicalEnvironment Methods There are a variety of other methods that let you get information about the stored fragment. This includes
Python Code: # import necessary functions from openff.toolkit.typing.chemistry import environment as env from openeye import oechem Explanation: Using ChemicalEnvironments Chemical Environments were created as a way to parse SMIRKS strings and make changes in chemical perception space. In this workbook, we will show you have chemical environments are initiated and used to make changes to a SMIRKS pattern. Authors * Caitlin C. Bannan from Mobley Group at University of California, Irvine Basic Structure of ChemicalEnvironments ChemicalEnvironments are initiated with the following input variables: * smirks = any SMIRKS string (if None an empty environment is created) * label = this could be anything, a number/str/int, stored at ChemicalEnvironment.label * replacements = This is a list of two tuples in the form (short, smirks) to substitute short hand in your SMIRKS strings. This is used to check if your input SMIRKS string or created Chemical Environment are Valid. SMIRKS Strings Here we use the word SMIRKS to mean SMARTS patterns with indexed atoms, we are not using Chemical Environments to parse SMIRKS strings that describe reactions. That means these SMIRKS patterns should not contain multiple molecules ('.') or reaction arrows ('&gt;&gt;'). Here we will try to explain the SMIRKS patterns used here, but SMARTS and SMIRKS are a complex language. SMARTS/SMIRKS strings are similar to SMILES strings with increased complexity. For more details about this language see the Daylight tutorials: * SMILES * SMARTS * SMIRKS End of explanation # NBVAL_SKIP Env = env.ChemicalEnvironment() atomEnv = env.AtomChemicalEnvironment() bondEnv = env.BondChemicalEnvironment() angleEnv = env.AngleChemicalEnvironment() torsionEnv = env.TorsionChemicalEnvironment() impropEnv = env.ImproperChemicalEnvironment() EnvList = [Env, atomEnv, bondEnv, angleEnv, torsionEnv, impropEnv] names = ['generic', 'Atom','Bond','Angle','Torsion','(Improper'] for idx, Env in enumerate(EnvList): print("%10s: %s" % (names[idx], Env.asSMIRKS())) Explanation: Default Chemical Environments All Chemical Environments can be initated using SMIRKS strings. If a ChemicalEnvironment is initiated with no SMIRKS pattern, it is an empty structure. However, there are 5 subtypes of ChemicalEnvironments that match the types of parameters found in the SMIRNOFF format. If they are initiated with no SMIRKS pattern, their structure matches a generic for that parameter type, for example [*:1]~[*:2] for a bond (that is any atom connected to any other atom by any bond). The 5 subtypes are listed below with their expected number of indexed atoms and the corresponding SMIRKS structure: AtomChemicalEnvironment expects 1 indexed atom default/generic SMIRKS "[*:1]" BondChemicalEnvironment expects 2 indexed atoms default/generic SMIRKS: "[*:1]~[*:2]" AngleChemicalEnvironment expects 3 indexed atoms default/generic SMIRKS: "[*:1]~[*:2]~[*:3]" TorsionChemicalEnvironment expects 4 indexed atoms in a proper dihedral angle default/generic SMIRKS: "[*:1]~[*:2]~[*:3]~[*:4]" ImproperChemicalEnvironment expects 4 indexed atoms in an improper dihedral angle default/generic SMIRKS: "[*:1]~[*:2](~[*:3])~[*:4]" Here we show how these are initiated. Note that the generic environment is blank, it has the potential to become a SMIRKS pattern, but currently nothing is stored in it. While the subtypes have the shape described above, but wildcards ('*' for atoms and '~' for bonds). 
End of explanation # NBVAL_SKIP # define the two replacements strings replacements = [ ('ewg1', '[#7!-1,#8!-1,#16!-1,#9,#17,#35,#53]'), ('ewg2', '[#7!-1,#8,#16]')] # define complicated SMIRKS patterns SMIRKS = ['[#6$(*~[#6]=[#8])$(*-,=$ewg2)]', # complex recursive SMIRKS 'CCC', # SMILES "[#1:1]-CCC", # simple hybrid '[#6:1]1(-;!@[#1,#6])=;@[#6]-;@[#6]1', # Complicated ring 'C(O-[#7,#8])CC=[*]', # Hybrid SMIRKS "[#6$([#6X4](~[$ewg1])(~[#8]~[#1])):1]-[#6X2H2;+0:2]-,=,:;!@;!#[$ewg2:3]-[#4:4]", # its just long "[#6$([#6X4](~[$ewg1])(~[#8]~[#1])):1]1=CCCC1", # another ring ] for smirk in SMIRKS: qmol = oechem.OEQMol() tmp_smirks = oechem.OESmartsLexReplace(smirk, replacements) parseable = env.OEParseSmarts(qmol, tmp_smirks) print("Input SMIRKS: %s" % smirk) print("\t parseable by OpenEye Tools: %s" % parseable) Env = env.ChemicalEnvironment(smirks = smirk, replacements = replacements) print("\t Chemical Environment SMIRKS: %s\n" % Env.asSMIRKS()) Explanation: Initiating ChemicalEnvironments from SMIRKS Strings ChemicalEnvironments can be initialized by SMIRKS strings. Here we attempt to show the robustness of this parsing. These patterns are intentionally complicated and therefore can be hard to read by humans. Here are some of the key features we would like to test: SMILES strings are SMIRKS strings (i.e. 'CCC' should be stored as 3 atoms bonded in a row). Replacement strings, such as "$ewg1" to mean "[#7!-1,#8!-1,#16!-1,#9,#17,#35,#53]" Complex recursive SMIRKS such as "[#6$(*([#6]=[#8])-,=$ewg2))]" Ring indexing, as in SMILES, SMARTS and SMIKRS use a number after an atom to describe the atoms in a ring, such as "[#6:1]1(-;!@[#1,#6])=;@[#6]-;@[#6]1" to show a cyclopropene ring where atom 1 is in the double bond and is bound to a hydrogen or carbon outside the ring. Hybrid SMIRKS with atomic symbols for the atoms. These do not have to use the square brackets, for example "C(O-[#7,#8])C[C+0]=[*]" In this set-up we will show that these SMIRKS patterns are parseable with the OpenEye Toolkits, then create a ChemicalEnvironment from the SMIRKS string and then print the ChemicalEnvironment as a SMIRKS string. Note that due to the nature of SMIRKS patterns the ChemicalEnvironment smirks may not identically match the input SMIRKS. A key difference is that every atom in a ChemicalEnvironment SMIRKS will be inside square brackets. Also, "blank" bonds, for example in "CCC" will be converted to their literal meaning, single or aromatic. End of explanation smirks = "[#6X3,#7:1]~;@[#8;r:2]~;@[#6X3,#7:3]" angle = env.ChemicalEnvironment(smirks = smirks) # get atom1 and print information atom1 = angle.selectAtom(1) print("Atom 1: '%s'" % atom1.asSMIRKS()) print("ORTypes") for (base, decs) in atom1.getORtypes(): print("\tBase: %s" % base) str_decs = ["'%s'" % d for d in decs] str_decs = ','.join(str_decs) print("\tDecorators: [%s]" % str_decs) print("ANDTypes:", atom1.getANDtypes()) print() # get bond1 and print information bond1 = angle.selectBond(1) print("Bond 1: '%s'" % bond1.asSMIRKS()) print("ORTypes: ", bond1.getORtypes()) print("ANDTypes: ", bond1.getANDtypes()) Explanation: Structure of ChemicalEnvironments Up until now, we have discussed only how to initiate ChemicalEnvironments. Now we will explain how they are structured and how to use them to make changes to your SMIRKS pattern (and therefor the fragment you are describing). To begin with, the overall structure of ChemicalEnvironments is similar to how a chemist might think about a fragment. 
We use NetworkX graphs to store information about the pieces. Nodes store information about Atoms and edges (connecting nodes) store information about Bonds. Both of these sub-structures, Atoms and Bonds store information about the input SMIRKS pattern in a broken down way so it can be easily editted. The words Atoms and Bonds are capitalized as they are classes in and of themselves. Both Atoms and Bonds have two types of information * ORtypes - things that are OR'd together in the SMIRKS string using a comma (',') - These have two subtypes: - ORbases - typically an atomic number - ORdecorators - typically information that might be true for 1 possible atomic number, but not others * ANDtypes - things that are AND'd together in the SMIRKS string using a semi-colon (';') This starts to sound complicated, so to try to illustrate how this works, we will use an actual Angle found in the SMIRNOFF99Frosst force field. Here is the SMIRKS String: "[#6X3,#7:1]~;@[#8;r:2]~;@[#6X3,#7:3]" atom 1 and atom 3 ORtypes '#6X3' - a trivalent carbon ORbase = '#6' ORdecorators = ['X3'] '#7' is a nitrogen ORbase = '#7' ORdecorators [] ANDtypes [] (None) atom 2 ORtypes '#8' ORbase = '#8' ORdecorators = [] ANDtypes ['r'] it is in a ring bond 1 and 2 are identical ORtypes = None (generic bond ~) ANDtypes = ['@'] it is in a ring Selecting Atoms and Bonds Here we will use the selectAtom and selectBond functions to get a specific atom or bond and then print its information. The 'select' methods ( selectAtom() or selectBond() ) takes an argument descriptor which can be used to select a certain atom or type of atom. Descriptor input option: * None - returns a random atom * int - returns that atom or bond by index * 'Indexed' - returns a random indexed atom * 'Unindexed' - returns a random non-indexed atom * 'Alpha' - returns a random atom alpha to an indexed atom * 'Beta' - returns a random atom beta to an indexed atom End of explanation # Change atom1's ORtypes with the setORtype method new_ORtypes = [ ('#6', ['X3']), ('#7', ['X2']) ] atom1.setORtypes(new_ORtypes) print("New Atom 1: %s " % atom1.asSMIRKS()) # Change atom2's AND and OR types with the add*type methods atom2 = angle.selectAtom(2) atom2.addANDtype('R1') atom2.addORtype('#7', ['X3', '+0']) print("New Atom 2: %s" % atom2.asSMIRKS()) print("\nNew SMIRKS: %s" % angle.asSMIRKS()) Explanation: Changing ORtypes and ANDtypes For both ORtypes and ANDtypes for Atoms and Bonds there are "get" and "set" methods. The set methods completely rewrite that type. There are also methods for add ORtypes and ANDtypes where you add a single entry to the existing list. Here we will use the set ORtypes to change atom1 to be a trivalent carbon or a divalent nitrogen. Then we will also add an ORType and ANDType to atom2 so that it could refer to an oxygen ('#8') or trivalent and neutral nitrogen ('#7X3+0') and in one ring ('R1'). 
Final SMIRKS string: "[#6X3,#7X2:1]~;@[#8,#7X3+0;r;R1:2]~;@[#6X3,#7:3]" End of explanation atom3 = angle.selectAtom(3) alpha_ORtypes = [('#8', ['X2', 'H1'])] alpha_ANDtypes = ['R0'] alpha_bondANDtypes = ['!@'] beta_ORtypes = [('#1', [])] alpha = angle.addAtom(atom3, bondANDtypes = alpha_bondANDtypes, newORtypes = alpha_ORtypes, newANDtypes = alpha_ANDtypes) beta = angle.addAtom(alpha, newORtypes = beta_ORtypes) print("Alpha Atom SMIRKS: %s" % alpha.asSMIRKS()) print("Beta Atom SMIRKS: %s" % beta.asSMIRKS()) print() print("New overall SMIRKS: %s" % angle.asSMIRKS()) Explanation: Adding new Atoms The addAtom method is used to introduce atoms bound to existing atoms. You can add an empty atom or specify information about the new bond and new atom. Here are the parameters for the addAtom method: Parameters ----------- bondToAtom: atom object, required atom the new atom will be bound to bondORtypes: list of tuples, optional strings that will be used for the ORtypes for the new bond bondANDtypes: list of strings, optional strings that will be used for the ANDtypes for the new bond newORtypes: list of strings, optional strings that will be used for the ORtypes for the new atom newANDtypes: list of strings, optional strings that will be used for the ANDtypes for the new atom newAtomIndex: int, optional integer label that could be used to index the atom in a SMIRKS string beyondBeta: boolean, optional if True, allows bonding beyond beta position The addAtom method returns the created atom. Here we will add an alpha atom (oxygen) to atom 3 that is not in a ring and then a beta atom (hydrogen) bound to the alpha atom. New SMIRKS pattern: "[#6X3,#7X2:1]~;@[#8,#7+0X3;R1:2]~;@[#6X3,#7:3]~;!@[#8X2H1;R0]~[#1]" End of explanation removed = angle.removeAtom(beta) print("The hydrogen beta to atom3 was remove: ", removed) print("Updated SMIRKS string: %s" % angle.asSMIRKS()) Explanation: Removing Atoms The removeAtom method works how you would expect. It removes the specified atom and the bond connecting it to the fragment. You cannot remove indexed atoms (if you want to remove their OR and AND decorators you can set them to empty lists). The other option with the removeAtom method is to say only remove it if the atom is undecorated. This is done by setting the input variable isEmpty to True (default is False). When isEmpty is True, the atom is only removed if it has 1 ORtype and no ANDtypes. The removeAtom method returns True if the atom was removed and False if it was not. As an example, we will remove the hydrogen in the beta position to atom3 that was added above. New SMIRKS pattern: "New overall SMIRKS: [#6X3,#7X2:1]~;@[#8,#7+0X3;R1:2]~;@[#6X3,#7:3]~;!@[#8X2H1;R0]" End of explanation # 1. Getting information about an atom or bond in an environment (i.e. isAlpha returns a boolean) # Check if the alpha atom above is any of the following print("Above a carbon atom ('%s') was added in the alpha position to atom 3. This atom is ..." % alpha.asSMIRKS()) print("\t Indexed: ", angle.isIndexed(alpha)) print("\t Unindexed: ", angle.isUnindexed(alpha)) print("\t Alpha: ", angle.isAlpha(alpha)) print("\t Beta: ", angle.isBeta(alpha)) # NOTE - These methods can take an atom or a bond as an argument # 2. 
Get atoms or bonds in each type of position, for example getIndexedAtoms or getAlphaBonds # We will print the SMIRKS for each indexed atom: indexed = angle.getIndexedAtoms() print("Here are the SMIRKS strings for the Indexed atoms in the example angle:") for a in indexed: print("\tAtom %i: '%s'" % (a.index, a.asSMIRKS())) print() bonds = angle.getBonds() print("Here are the SMIRKS strings for ALL bonds in the example angle:") for b in bonds: print("\t'%s'" % b.asSMIRKS()) # 3. Report the minimum order of a bond with Bond.getOrder bond1 = angle.selectBond(1) print("Bond 1 (between atoms 1 and 2) has a minimum order of %i" % bond1.getOrder()) # 4. Report the valence and bond order around an atom can be reported with getValence and getBondORder atom3 = angle.selectAtom(3) print("Atom 3 has a valency of %i" % angle.getValence(atom3)) print("Atom 3 has a minimum bond order of %i" % angle.getBondOrder(atom3)) # 5. Get a bond between two atoms (or determine if the atoms are bonded) with getBond(atom1, atom2) # Check for bonds between each pair of indexed atoms atom_pairs = [ (1,2), (2,3), (1,3) ] for (A,B) in atom_pairs: atomA = angle.selectAtom(A) atomB = angle.selectAtom(B) # check if there is a bond between the two atoms bond = angle.getBond(atomA, atomB) if bond is None: print("There is no bond between Atom %i and Atom %i" % (A, B)) else: print("The bond between Atom %i and Atom %i has the pattern '%s'" % (A, B, bond.asSMIRKS())) # 6. Get atoms bound to a specified atom with getNeighbors # get the neighbors for each indexed atom for A in [1,2,3]: atomA = angle.selectAtom(A) print("Atom %i has the following neighbors" % A) for a in angle.getNeighbors(atomA): print("\t '%s' " % a.asSMIRKS()) print() Explanation: Other ChemicalEnvironment Methods There are a variety of other methods that let you get information about the stored fragment. This includes: Getting information about an atom or bond in an environment (i.e. isAlpha returns a boolean) Get atoms or bonds in each type of position: getAtoms or getBonds returns all atoms or bonds getIndexedAtoms or getIndexedBonds getAlphaAtoms or getAlphaBonds getBetaAtoms or getBetaBonds getUnindexedAtoms or getUnindexedBonds Report the minimum order of a bond with Bond.getOrder Note this is the minimum so a bond that is single or double ('-,=') will report the order as 1 Report the valence and bond order around an atom can be reported with getValence and getBondORder Get a bond between two atoms (or determine if the atoms are bonded) with getBond(atom1, atom2) Get atoms bound to a specified atom with getNeighbors Here we will show how each of these method types is used: End of explanation
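As a small convenience sketch (not part of the original notebook), the helper below summarizes a list of SMIRKS patterns using only ChemicalEnvironment methods already demonstrated above (getAtoms, getIndexedAtoms and getBonds); the example SMIRKS strings are arbitrary.
def summarize_smirks(smirks_list, replacements=None):
    # Print a one-line summary (atom, indexed-atom and bond counts) per SMIRKS.
    for smirks in smirks_list:
        chem_env = env.ChemicalEnvironment(smirks=smirks, replacements=replacements)
        n_atoms = len(chem_env.getAtoms())
        n_indexed = len(chem_env.getIndexedAtoms())
        n_bonds = len(chem_env.getBonds())
        print("%-45s atoms=%i indexed=%i bonds=%i" % (smirks, n_atoms, n_indexed, n_bonds))

summarize_smirks(["[*:1]~[*:2]",
                  "[#6X4:1]-[#1:2]",
                  "[#6X3,#7:1]~;@[#8;r:2]~;@[#6X3,#7:3]"])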
15,927
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Linear Regression Step5: Partitioning the data Before we can begin training our model and testing it, it is important to first properly partition the data into three subsets Step9: Training and Validating the model Now that we have partitioned our data, we can now begin training our model. We will be creating two different models that we will compare using Cross-Validation out of which we will pick one module to test. Step10: Now, given the two trained models, we want to determine which model is more accurate at making predictions on unseen data. To do this, we will 'test' both models on the validation set created earlier and determine which one performs better on this set. Remember that we are still in the training phase! The better performing set will be used during the testing phase. Step11: Once you have chosen one of the models from above, train the model on training set combined with the validation set to complete the training phase. Pandas has a useful method to concantenate two datasets that can help you here. Step12: Testing the model At this point, you should have selected one of the models from above as the model that you will use to predict median house values. We are in the testing phase! We will now test the selected model to see how well it performs on unseen test data. Use the test data set that you created at the beginning of the exercise to test your model. Before we can begin testing though, we should train the selected model on the training set combined with the validation set
Python Code: # import libraries import matplotlib import IPython import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib as mpl import pylab import seaborn as sns import sklearn as sk %matplotlib inline housing = Read the .csv file containing the housing data Explanation: Linear Regression: Part 2 In the previous exercise, we learned how to build a simple linear model on our dataset. You were asked to build a linear model to predict housing prices. This exercise will be very similar but now, we will incorporate training, validation and testing. The goal of this exercise is to give you a simple, hands-on experience with the process of training, validating and testing your data. The exercise is split up in a manner that will lead you through the entire process step-by-step starting from partitioning your data all the way to testing your generated model. This should reinforce the knowledge gained about what role each set plays in the supervised learning process. Additionally, it will give you a basic idea of problems like model and feature selection which are a common and crucial part of the supervised learning process. If you are unclear about the purpose of each dataset, go back to the Cross-Validation and Overfitting slides and review slide 4 before starting this exercise. As data, we will be using a slightly modified version of the Boston Housing Dataset. Good luck! End of explanation housing_training_set = Training set goes here housing_validation_set = Validation set goes here housing_test_set = Testing set goes here Explanation: Partitioning the data Before we can begin training our model and testing it, it is important to first properly partition the data into three subsets: Training, Validation and Test Set. We will be using the Holdout method for the purposes of Cross-Validation. Make sure that there is NO data overlap between these datasets. Also, remember that the Test set is only used once we are fully satisfied that our model is properly trained. End of explanation # Define your two predictors and response here X_1 = First Model Predictor X_2 = Second Model Predictor Y = Model Response from sklearn.linear_model import LinearRegression # Define your two models with different features (ex. 'tax', 'pratio') here. Feel free to change the names of the models. lin_mod_param1 = LinearRegression() lin_mod_param2 = LinearRegression() # Train both models on the training data Explanation: Training and Validating the model Now that we have partitioned our data, we can now begin training our model. We will be creating two different models that we will compare using Cross-Validation out of which we will pick one module to test. End of explanation # Use the validation set to evaluate the performance of both models # Hint you can use a method provided by sklearn.linear_model.LinearRegression for this purpose Explanation: Now, given the two trained models, we want to determine which model is more accurate at making predictions on unseen data. To do this, we will 'test' both models on the validation set created earlier and determine which one performs better on this set. Remember that we are still in the training phase! The better performing set will be used during the testing phase. End of explanation # Concatenate the training and validation set # Train your model on the combined dataset Explanation: Once you have chosen one of the models from above, train the model on training set combined with the validation set to complete the training phase. 
Pandas has a useful method to concatenate two datasets that can help you here.
End of explanation
# Test your model here
Explanation: Testing the model
At this point, you should have selected one of the models from above as the model that you will use to predict median house values. We are in the testing phase! We will now test the selected model to see how well it performs on unseen test data. Use the test data set that you created at the beginning of the exercise to test your model. Before we can begin testing, though, we should train the selected model on the training set combined with the validation set.
End of explanation
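As a hedged illustration of the final training and testing steps — 'tax' is one of the predictors mentioned earlier, while 'medv' is an assumed name for the median house value column; substitute whichever feature and response you actually chose:
# Combine training and validation rows for the final fit, then score once on the test set.
combined = pd.concat([housing_training_set, housing_validation_set])

final_model = LinearRegression()
final_model.fit(combined[['tax']], combined['medv'])          # 'medv' is an assumed column name

test_r2 = final_model.score(housing_test_set[['tax']], housing_test_set['medv'])
print('Test R^2:', test_r2)
LinearRegression.score reports R^2 by default, which is a reasonable single summary number for this exercise.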
15,928
Given the following text description, write Python code to implement the functionality described below step by step Description: Leakage Coefficient Summary This notebook summarizes the leakage coefficient fitted from 4 dsDNA samples. Import software Step1: Data files Step2: Plot style Step3: Average leakage Mean per sample Step4: Mean per sample (weighted on the number of bursts) Step5: Mean per channel Step6: Mean per channel (weighted on the number of bursts) Step7: Transform table in tidy form Step8: NOTE Step9: Figure Step10: Now I will transform leakage_kde in "tidy form" for easier plotting. For info on "data tidying" see Step11: Save Per-channel mean Step12: Per-sample mean Step13: Global mean
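A small self-contained sketch (with made-up numbers) of the burst-weighted mean and standard deviation used in the "weighted on the number of bursts" steps:
import numpy as np
import pandas as pd

# Toy per-channel leakage values and burst counts, for illustration only.
lk = pd.Series([0.031, 0.029, 0.034, 0.030])
nb = pd.Series([900, 1200, 700, 1000])

wmean = (lk * nb).sum() / nb.sum()
wstd = np.sqrt((((lk - wmean) ** 2) * nb).sum() / (nb.sum() - 1))
print(wmean, wstd)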
Python Code: import os import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl from cycler import cycler import seaborn as sns %matplotlib inline %config InlineBackend.figure_format='retina' # for hi-dpi displays figure_size = (5, 4) default_figure = lambda: plt.subplots(figsize=figure_size) save_figures = True def savefig(filename, **kwargs): if not save_figures: return import os dir_ = 'figures/' kwargs_ = dict(dpi=300, bbox_inches='tight') #frameon=True, facecolor='white', transparent=False) kwargs_.update(kwargs) plt.savefig(dir_ + filename, **kwargs_) Explanation: Leakage Coefficient Summary This notebook summarize the leakage coefficient fitted from 4 dsDNA samples. Import software End of explanation bsearch_str = 'DexDem' leakage_kde = pd.read_csv( 'results/Multi-spot - leakage coefficient all values KDE %s.csv' % bsearch_str, index_col=0) leakage_gauss = pd.read_csv( 'results/Multi-spot - leakage coefficient all values gauss %s.csv' % bsearch_str, index_col=0) nbursts = pd.read_csv( 'results/Multi-spot - leakage coefficient all values nbursts %s.csv' % bsearch_str, index_col=0) for df in (leakage_kde, leakage_gauss, nbursts): df.columns.name = 'Channel' for dx in (leakage_gauss, leakage_kde, nbursts): dx.columns = pd.Index(np.arange(1, 9), name='Spot') PLOT_DIR = './figure/' Explanation: Data files End of explanation bmap = sns.color_palette("Set1", 9) colors_dark = np.array(bmap)[(1,0,2,3,4,8), :] colors_dark4 = np.array(bmap)[(1,0,2,8), :] bmap = sns.color_palette('Paired', 12) colors_light = np.array(bmap)[(0,4,2,8,6,10), :] colors_light4 = np.array(bmap)[(0,4,2,8), :] colors_light[-1] = colors_light4[-1] = [.8, .8, .8] colors_paired = np.zeros((colors_dark.shape[0]*2, colors_dark.shape[1])) colors_paired[::2] = colors_dark colors_paired[1::2] = colors_light colors_paired4 = colors_paired[(0, 1, 2, 3, 4, 5, 10, 11), :] sns.palplot(colors_paired4) sns.set(style='ticks', font_scale=1.4, palette=colors_paired) fig, ax = plt.subplots() ax2 = ax.twinx() kws = dict(lw=2, marker='o', ms=8) for i, did in enumerate(('7d', '12d', '17d', 'DO')): (100*leakage_kde).loc[did].plot(label='%s KDE' % did, ax=ax, color=colors_light4[i], **kws) nbursts.loc[did].plot(ax=ax2, ls='--', lw=2.5, color=colors_dark4[i]) for i, did in enumerate(('7d', '12d', '17d', 'DO')): (100*leakage_gauss).loc[did].plot(label='%s Gauss' % did, ax=ax, color=colors_dark4[i], **kws) handles, lab = ax.get_legend_handles_labels() h = handles#[1::2] + handles[::2] l = lab[1::2] + lab[::2] ax.legend(ncol=2, loc=1, bbox_to_anchor=(1, 0.5), borderaxespad=0.) 
ax.set_ylim(0) ax2.set_ylim(0, 3200) plt.xlim(0.75, 8.25) plt.xlabel('Channel') ax.set_ylabel('Leakage %') ax2.set_ylabel('# Bursts') sns.despine(offset=10, trim=True, right=False) savefig('multi-spot leakage KDE vs Gauss.svg') # bmap = sns.color_palette("Set1", 9) # colors = np.array(bmap)[(1,0,2,3,4,8,6,7), :] # sns.set_palette(colors) # sns.palplot(colors) sns.swarmplot(data=leakage_kde); plt.figure() sns.swarmplot(data=leakage_kde.T); Explanation: Plot style End of explanation lk_s = pd.DataFrame(index=['mean', 'std'], columns=leakage_kde.index, dtype=float) lk_s.loc['mean'] = leakage_kde.mean(1)*100 lk_s.loc['std'] = leakage_kde.std(1)*100 lk_s = lk_s.round(5) lk_s Explanation: Average leakage Mean per sample: End of explanation nbursts leakage_kde lk_sw = pd.DataFrame(index=['mean', 'std'], columns=leakage_kde.index, dtype=float) lk_sw.loc['mean'] = (nbursts*leakage_kde).sum(1)/nbursts.sum(1)*100 lk_sw.loc['std'] = np.sqrt((((leakage_kde.T*100 - lk_sw.loc['mean']).T**2) * nbursts).sum(1) / (nbursts.sum(1) - 1)) #lk_sw['mean'] = (nbursts * lk_sw).sum(1) / nbursts.sum(1).sum() lk_sw = lk_sw.round(5) lk_sw lk_swg = pd.DataFrame(index=['mean', 'std'], columns=leakage_gauss.index, dtype=float) lk_swg.loc['mean'] = (nbursts*leakage_gauss).sum(1)/nbursts.sum(1)*100 lk_swg.loc['std'] = np.sqrt((((leakage_gauss.T*100 - lk_swg.loc['mean']).T**2) * nbursts).sum(1) / (nbursts.sum(1) - 1)) #lk_sw['mean'] = (nbursts * lk_sw).sum(1) / nbursts.sum(1).sum() lk_swg = lk_swg.round(5) lk_swg lk_sw_m = pd.concat((lk_sw.loc['mean'], lk_swg.loc['mean']), axis=1, keys=['KDE', 'Gauss']) lk_sw_m lk_sw_s = pd.concat((lk_sw.loc['std'], lk_swg.loc['std']), axis=1, keys=['KDE', 'Gauss']) lk_sw_s sns.set_style('ticks') lk_sw_m.plot(yerr=lk_sw_s, lw=5, alpha=0.6) plt.xlim(-0.2, 3.2) plt.xticks(range(4), lk_sw_s.index) sns.despine(trim=True, offset=10) lk_sw_m.plot.bar(yerr=lk_sw_s, alpha=0.8) sns.despine(trim=True, offset=10) sns.swarmplot(data=leakage_kde*100, size=8, palette=colors_dark); plt.ylim(0) lk_sw_m.loc[:,'KDE'].plot(lw=3, alpha=0.8, color='k') plt.xlim(-0.2, 3.2) plt.xticks(range(4), lk_sw_s.index) sns.despine(trim=True, offset=10) Explanation: Mean per sample (weighted on the number of bursts): Number of bursts in D-only population: End of explanation lk_c = pd.DataFrame(index=['mean', 'std'], columns=leakage_kde.columns, dtype=float) lk_c.loc['mean'] = leakage_kde.mean()*100 lk_c.loc['std'] = leakage_kde.std()*100 #lk_c['mean'] = lk_c.mean(1) lk_c = lk_c.round(5) lk_c Explanation: Mean per channel: End of explanation lk_cw = pd.DataFrame(index=['mean', 'std'], columns=leakage_kde.columns, dtype=float) lk_cw.loc['mean'] = (nbursts*leakage_kde).sum()/nbursts.sum()*100 lk_cw.loc['std'] = np.sqrt((((leakage_kde*100 - lk_cw.loc['mean'])**2) * nbursts).sum(0) / (nbursts.sum(0) - 1)) #lk_cw['mean'] = lk_cw.mean(1) lk_cw = lk_cw.round(5) lk_cw lk_cwg = pd.DataFrame(index=['mean', 'std'], columns=leakage_gauss.columns) lk_cwg.loc['mean'] = (nbursts*leakage_gauss).sum()/nbursts.sum()*100 lk_cwg.loc['std'] = np.sqrt((((leakage_kde*100 - lk_cwg.loc['mean'])**2) * nbursts).sum(0) / (nbursts.sum(0) - 1)) #lk_cwg['mean'] = lk_cwg.mean(1) lk_cwg = lk_cwg.round(5) lk_cwg lk_cw_m = pd.concat((lk_cw.loc['mean'], lk_cwg.loc['mean']), axis=1, keys=['KDE', 'Gauss']) lk_cw_m.T lk_cw_s = pd.concat((lk_cw.loc['std'], lk_cwg.loc['std']), axis=1, keys=['KDE', 'Gauss']) lk_cw_s.T sns.set_palette(colors_dark) kws = dict(lw=5, marker='o', ms=8, alpha=0.5) lk_cw.loc['mean'].plot(yerr=lk_cw.loc['std'], **kws) 
lk_cwg.loc['mean'].plot(yerr=lk_cwg.loc['std'], **kws)
plt.ylim(0, 4)
plt.xlim(0.75, 8.25)
sns.despine(trim=True)
lk_cw_m.plot.bar(alpha=0.8)
#sns.despine(trim=True, offset=10)
Explanation: Mean per channel (weighted on the number of bursts):
End of explanation
leakage_kde_t = pd.melt(leakage_kde.reset_index(), id_vars=['Sample'],
                        value_name='leakage_kde').apply(pd.to_numeric, errors='ignore')
leakage_kde_t.leakage_kde *= 100
leakage_kde_t.head()
_ = lk_cw_m.copy().assign(Spot=range(1, 9)).set_index('Spot')
_.head()
sns.set_palette(colors_dark4)
sns.swarmplot(x='Spot', y='leakage_kde', data=leakage_kde_t, size=6, hue='Sample');
_ = lk_cw_m.copy().assign(Spot=range(8)).set_index('Spot')
_.loc[:,'KDE'].plot(lw=3, alpha=0.8, color='k')
plt.ylim(0)
plt.xlim(-0.25, 7.25)
sns.despine(trim=True)
Explanation: Transform table in tidy form:
End of explanation
leakage_kde_wmean = (leakage_kde*nbursts).sum().sum() / nbursts.sum().sum()
leakage_kde_wmean
Explanation: NOTE: There is a per-channel trend that cannot be ascribed to the background because we performed a D-emission burst search and selection and the leakage vs ch does not resemble the D-background vs channel curve. The effect is probably due to slight PDE variations (detectors + optics) that slightly change $\gamma$ on a per-spot basis. Weighted mean of the weighted mean
End of explanation
For info on "data tidying" see: http://stackoverflow.com/questions/37490771/seaborn-categorical-plot-with-hue-from-dataframe-rows/ https://www.ibm.com/developerworks/community/blogs/jfp/entry/Tidy_Data_In_Python End of explanation lk_cw.to_csv('results/Multi-spot - leakage coefficient mean per-ch KDE %s.csv' % bsearch_str) Explanation: Save Per-channel mean End of explanation lk_sw.to_csv('results/Multi-spot - leakage coefficient mean per-sample KDE %s.csv' % bsearch_str) Explanation: Per-sample mean End of explanation '%.5f' % leakage_kde_wmean with open('results/Multi-spot - leakage coefficient KDE wmean %s.csv' % bsearch_str, 'w') as f: f.write('%.5f' % leakage_kde_wmean) Explanation: Global mean End of explanation
15,929
Given the following text description, write Python code to implement the functionality described below step by step Description: Thin Plate Splines (TPS) Transforms Step1: Let's create the landmarks used in Principal Warps paper (http Step2: Let's visualize the TPS Step3: This proves that the result is correct Step4: Here is another example with a deformed diamond.
Python Code: import numpy as np from menpo.transform import ThinPlateSplines from menpo.shape import PointCloud Explanation: Thin Plate Splines (TPS) Transforms End of explanation # landmarks used in Principal Warps paper # http://user.engineering.uiowa.edu/~aip/papers/bookstein-89.pdf src_landmarks = np.array([[3.6929, 10.3819], [6.5827, 8.8386], [6.7756, 12.0866], [4.8189, 11.2047], [5.6969, 10.0748]]) tgt_landmarks = np.array([[3.9724, 6.5354], [6.6969, 4.1181], [6.5394, 7.2362], [5.4016, 6.4528], [5.7756, 5.1142]]) src = PointCloud(src_landmarks) tgt = PointCloud(tgt_landmarks) tps = ThinPlateSplines(src, tgt) Explanation: Let's create the landmarks used in Principal Warps paper (http://user.engineering.uiowa.edu/~aip/papers/bookstein-89.pdf) End of explanation %matplotlib inline tps.view(); Explanation: Let's visualize the TPS End of explanation np.allclose(tps.apply(src_landmarks), tgt_landmarks) Explanation: This proves that the result is correct End of explanation # deformed diamond src_landmarks = np.array([[ 0, 1.0], [-1, 0.0], [ 0,-1.0], [ 1, 0.0]]) tgt_landmarks = np.array([[ 0, 0.75], [-1, 0.25], [ 0,-1.25], [ 1, 0.25]]) src = PointCloud(src_landmarks) tgt = PointCloud(tgt_landmarks) tps = ThinPlateSplines(src, tgt) %matplotlib inline tps.view(); np.allclose(tps.apply(src_landmarks), tgt_landmarks) Explanation: Here is another example with a deformed diamond. End of explanation
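As a purely illustrative aside, independent of the menpo API used above: the warp is driven by the thin plate spline kernel U(r) = r^2 log(r^2) from the Principal Warps paper, and the pairwise kernel matrix between landmarks can be built in a few lines of NumPy.
import numpy as np

def tps_kernel_matrix(points):
    # Pairwise squared distances between landmarks.
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    # U(r) = r^2 * log(r^2), with U(0) taken as 0.
    with np.errstate(divide='ignore', invalid='ignore'):
        K = d2 * np.log(d2)
    K[~np.isfinite(K)] = 0.0
    return K

print(tps_kernel_matrix(src_landmarks))   # 4x4 matrix for the diamond landmarks above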
15,930
Given the following text description, write Python code to implement the functionality described below step by step Description: Chapter 5 Statistics Describing a Single Set of Data Step4: Central Tendencies Step7: Dispersion Step8: Correlation
Python Code: num_friends = [100,49,41,40,25,21,21,19,19,18,18,16,15,15,15,15,14,14,13,13,13,13,12,12,11,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,8,8,8,8,8,8,8,8,8,8,8,8,8,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1] from collections import Counter %matplotlib inline from matplotlib import pyplot as plt # Figure 5-1. A histogram of friend counts friend_counts = Counter(num_friends) xs = range(101) ys = [friend_counts[x] for x in xs] plt.bar(xs, ys) plt.axis([0, 101, 0, 25]) plt.title("Histogram of Friend Counts") plt.xlabel('# of friends') plt.ylabel('# of people') plt.show() num_points = len(num_friends) num_points largest_value = max(num_friends) largest_value smallest_value = min(num_friends) smallest_value sorted_values = sorted(num_friends) smallest_value = sorted_values[0] smallest_value second_smallest_value = sorted_values[1] second_smallest_value second_largest_value = sorted_values[-2] second_largest_value Explanation: Chapter 5 Statistics Describing a Single Set of Data End of explanation def mean(x): return sum(x) / len(x) mean(num_friends) def median(v): finds the middle-most value n = len(v) sorted_v = sorted(v) midpoint = n // 2 if n % 2 == 1: # if odd, return the middle value return sorted_v[midpoint] else: # if even, return the average of middle values lo = midpoint - 1 hi = midpoint return (sorted_v[lo] + sorted_v[hi]) / 2 median(num_friends) def quantile(x, p): returns the pth-percentile value in x p_index = int(p * len(x)) return sorted(x)[p_index] quantile(num_friends, 0.10) quantile(num_friends, 0.25) quantile(num_friends, 0.70) quantile(num_friends, 0.90) def mode(x): returns a list, might be more than one mode counts = Counter(x) max_count = max(counts.values()) return [x_i for x_i, count in counts.items() if count == max_count] mode(num_friends) Explanation: Central Tendencies End of explanation def data_range(x): return max(x) - min(x) data_range(num_friends) def de_mean(x): translates x by substracting its mean x_bar = mean(x) return [x_i - x_bar for x_i in x] def variance(x): assume x has at least 2 elements n = len(x) deviation = de_mean(x) return sum(dev ** 2 for dev in deviation) / (n - 1) variance(num_friends) import math def standard_deviation(x): return math.sqrt(variance(x)) standard_deviation(num_friends) def interquartile_range(x): return quantile(x, 0.75) - quantile(x, 0.25) interquartile_range(num_friends) Explanation: Dispersion End of explanation def covariance(x, y): n = len(x) return sum(x_i * y_i for x_i, y_i in zip(de_mean(x), de_mean(y))) / (n - 1) daily_minutes = 
[1,68.77,51.25,52.08,38.36,44.54,57.13,51.4,41.42,31.22,34.76,54.01,38.79,47.59,49.1,27.66,41.03,36.73,48.65,28.12,46.62,35.57,32.98,35,26.07,23.77,39.73,40.57,31.65,31.21,36.32,20.45,21.93,26.02,27.34,23.49,46.94,30.5,33.8,24.23,21.4,27.94,32.24,40.57,25.07,19.42,22.39,18.42,46.96,23.72,26.41,26.97,36.76,40.32,35.02,29.47,30.2,31,38.11,38.18,36.31,21.03,30.86,36.07,28.66,29.08,37.28,15.28,24.17,22.31,30.17,25.53,19.85,35.37,44.6,17.23,13.47,26.33,35.02,32.09,24.81,19.33,28.77,24.26,31.98,25.73,24.86,16.28,34.51,15.23,39.72,40.8,26.06,35.76,34.76,16.13,44.04,18.03,19.65,32.62,35.59,39.43,14.18,35.24,40.13,41.82,35.45,36.07,43.67,24.61,20.9,21.9,18.79,27.61,27.21,26.61,29.77,20.59,27.53,13.82,33.2,25,33.1,36.65,18.63,14.87,22.2,36.81,25.53,24.62,26.25,18.21,28.08,19.42,29.79,32.8,35.99,28.32,27.79,35.88,29.06,36.28,14.1,36.63,37.49,26.9,18.58,38.48,24.48,18.95,33.55,14.24,29.04,32.51,25.63,22.22,19,32.73,15.16,13.9,27.2,32.01,29.27,33,13.74,20.42,27.32,18.23,35.35,28.48,9.08,24.62,20.12,35.26,19.92,31.02,16.49,12.16,30.7,31.22,34.65,13.13,27.51,33.2,31.57,14.1,33.42,17.44,10.12,24.42,9.82,23.39,30.93,15.03,21.67,31.09,33.29,22.61,26.89,23.48,8.38,27.81,32.35,23.84] covariance(num_friends, daily_minutes) def correlation(x, y): stdev_x = standard_deviation(x) stdev_y = standard_deviation(y) if stdev_x > 0 and stdev_y > 0: return covariance(x, y) / (stdev_x * stdev_y) else: return 0 correlation(num_friends, daily_minutes) outlier = num_friends.index(100) num_friends_good = [x for i, x in enumerate(num_friends) if i != outlier] daily_minutes_good = [y for i, y in enumerate(daily_minutes) if i != outlier] correlation(num_friends_good, daily_minutes_good) Explanation: Correlation End of explanation
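A quick sanity check: the hand-written statistics above should agree with NumPy's implementations (sample variance uses ddof=1), up to floating-point error.
import numpy as np

assert np.isclose(variance(num_friends), np.var(num_friends, ddof=1))
assert np.isclose(standard_deviation(num_friends), np.std(num_friends, ddof=1))
assert np.isclose(correlation(num_friends_good, daily_minutes_good),
                  np.corrcoef(num_friends_good, daily_minutes_good)[0, 1])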
15,931
Given the following text description, write Python code to implement the functionality described below step by step Description: Download, Parse and Interrogate Apple Health Export Data The first part of this program is all about getting the Apple Health export and putting it into an analyzable format. At that point it can be analysed anywhere. The second part of this program is concerned with using SAS Scripting Wrapper for Analytics Transfer (SWAT) Python library to transfer the data to SAS Viya, and analyze it there. The SWAT package provides native python language access to the SAS Viya codebase. https Step1: Authenticate with Google This will open a browser to let you beging the process of authentication with an existing Google Drive account. This process will be separate from Python. For this to work, you will need to set up a Other Authentication OAuth credential at https Step2: Download the most recent Apple Health export file Now that we are authenticated into Google Drive, use PyDrive to access the API and get to files stored. Google Drive allows multiple files with the same name, but it indexes them with the ID to keep them separate. In this block, we make one pass of the file list where the file name is called export.zip, and save the row that corresponds with the most recent date. We will use that file id later to download the correct file that corresponds with the most recent date. Apple Health export names the file export.zip, and at the time this was written, there is no other option. Step3: Download the file from Google Drive Ensure that the file downloaded is the latest file generated Step4: Unzip the most current file to a holding directory Step5: Parse Apple Health Export document Step6: List XML headers by element count Step7: List types for "Record" Header Step8: Extract Values to Data Frame TODO Step9: import calmap ts = pd.Series(HR_df['HeartRate'].values, index=HR_df['Days']) ts.index = pd.to_datetime(ts.index) tstot = ts.groupby(ts.index).median() plt.rcParams['figure.figsize'] = 16, 8 import warnings warnings.simplefilter(action='ignore', category=FutureWarning) calmap.yearplot(data=tstot,year=2017) Flag Chemotherapy Days for specific analysis The next two cells provide the ability to introduce cycles that start on specific days and include this data in the datasets so that they can be overlaid in graphics. In the example below, there are three cycles of 21 days. The getDelta function returns the cycle number when tpp == 0 and the days since day 0 when tpp == 2. This allows the overlaying of the cycles, with the days since day 0 being overlaid. Step10: Boxplots Using Seaborn
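A hedged sketch of the cycle bookkeeping described above, written with plain Python dates rather than the notebook's DataFrame machinery; the start dates below are placeholders, not the real treatment dates.
from datetime import date

cycle_starts = [date(2018, 1, 26), date(2018, 2, 16), date(2018, 3, 9)]   # placeholder day-0 dates

def cycle_and_offset(d, starts, cycle_length=21):
    # Days since each day 0; keep the cycle whose offset falls inside [0, cycle_length).
    deltas = [(d - s).days for s in starts]
    hits = [(i + 1, dt) for i, dt in enumerate(deltas) if 0 <= dt < cycle_length]
    return hits[-1] if hits else (None, None)

print(cycle_and_offset(date(2018, 2, 20), cycle_starts))   # -> (2, 4): cycle 2, day 4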
Python Code: import xml.etree.ElementTree as et import pandas as pd import numpy as np from datetime import * import matplotlib.pyplot as plt import re import os.path import zipfile import pytz %matplotlib inline plt.rcParams['figure.figsize'] = 16, 8 Explanation: Download, Parse and Interrogate Apple Health Export Data The first part of this program is all about getting the Apple Health export and putting it into an analyzable format. At that point it can be analysed anywhere. The second part of this program is concerned with using SAS Scripting Wrapper for Analytics Transfer (SWAT) Python library to transfer the data to SAS Viya, and analyze it there. The SWAT package provides native python language access to the SAS Viya codebase. https://github.com/sassoftware/python-swat This file was created from a desire to get my hands on data collected by Apple Health, notably heart rate information collected by Apple Watch. For this to work, this file needs to be in a location accessible to Python code. A little bit of searching told me that iCloud file access is problematic and that there were already a number of ways of doing this with the Google API if the file was saved to Google Drive. I chose PyDrive. So for the end to end program to work with little user intervention, you will need to sign up for Google Drive, set up an application in the Google API and install Google Drive app to your iPhone. This may sound involved, and it is not necessary if you simply email the export file to yourself and copy it to a filesystem that Python can see. If you choose to do that, all of the Google Drive portion can be removed. I like the Google Drive process though as it enables a minimal manual work scenario. This version requires the user to grant Google access, requiring some additional clicks, but it is not too much. I think it is possible to automate this to run without user intervention as well using security files. The first step to enabling this process is exporting the data from Apple Health. As of this writing, open Apple Health and click on your user icon or photo. Near the bottom of the next page in the app will be a button or link called Export Health Data. Clicking on this will generate a xml file, zipped up. THe next dialog will ask you where you want to save it. Options are to email, save to iCloud, message etc... Select Google Drive. Google Drive allows multiple files with the same name and this is accounted for by this program. End of explanation # Authenticate into Google Drive from pydrive.auth import GoogleAuth gauth = GoogleAuth() gauth.LocalWebserverAuth() Explanation: Authenticate with Google This will open a browser to let you beging the process of authentication with an existing Google Drive account. This process will be separate from Python. For this to work, you will need to set up a Other Authentication OAuth credential at https://console.developers.google.com/apis/credentials, save the secret file in your root directory and a few other things that are detailed at https://pythonhosted.org/PyDrive/. The PyDrive instructions also show you how to set up your Google application. There are other methods for accessing the Google API from python, but this one seems pretty nice. The first time through the process, regular sign in and two factor authentication is required (if you require two factor auth) but after that it is just a process of telling Google that it is ok for your Google application to access Drive. 
End of explanation from pydrive.drive import GoogleDrive drive = GoogleDrive(gauth) file_list = drive.ListFile({'q': "'root' in parents and trashed=false"}).GetList() # Step through the file list and find the most current export.zip file id, then use # that later to download the file to the local machine. # This may look a little old school, but these file lists will never be massive and # it is readable and easy one pass way to get the most current file using the # least (or low) amount of resouces selection_dt = datetime.strptime("2000-01-01T01:01:01.001Z","%Y-%m-%dT%H:%M:%S.%fZ") print("Matching Files") for file1 in file_list: if re.search("^export-*\d*.zip",file1['title']): dt = datetime.strptime(file1['createdDate'],"%Y-%m-%dT%H:%M:%S.%fZ") if dt > selection_dt: selection_id = file1['id'] selection_dt = dt print(' title: %s, id: %s createDate: %s' % (file1['title'], file1['id'], file1['createdDate'])) if not os.path.exists('healthextract'): os.mkdir('healthextract') Explanation: Download the most recent Apple Health export file Now that we are authenticated into Google Drive, use PyDrive to access the API and get to files stored. Google Drive allows multiple files with the same name, but it indexes them with the ID to keep them separate. In this block, we make one pass of the file list where the file name is called export.zip, and save the row that corresponds with the most recent date. We will use that file id later to download the correct file that corresponds with the most recent date. Apple Health export names the file export.zip, and at the time this was written, there is no other option. End of explanation for file1 in file_list: if file1['id'] == selection_id: print('Downloading this file: %s, id: %s createDate: %s' % (file1['title'], file1['id'], file1['createdDate'])) file1.GetContentFile("healthextract/export.zip") Explanation: Download the file from Google Drive Ensure that the file downloaded is the latest file generated End of explanation zip_ref = zipfile.ZipFile('healthextract/export.zip', 'r') zip_ref.extractall('healthextract') zip_ref.close() Explanation: Unzip the most current file to a holding directory End of explanation path = "healthextract/apple_health_export/export.xml" e = et.parse(path) #this was from an older iPhone, to demonstrate how to join files legacy = et.parse("healthextract/apple_health_legacy/export.xml") #<<TODO: Automate this process #legacyFilePath = "healthextract/apple_health_legacy/export.xml" #if os.path.exists(legacyFilePath): # legacy = et.parse("healthextract/apple_health_legacy/export.xml") #else: # os.mkdir('healthextract/apple_health_legacy') Explanation: Parse Apple Health Export document End of explanation pd.Series([el.tag for el in e.iter()]).value_counts() Explanation: List XML headers by element count End of explanation pd.Series([atype.get('type') for atype in e.findall('Record')]).value_counts() Explanation: List types for "Record" Header End of explanation import pytz #Extract the heartrate values, and get a timestamp from the xml # there is likely a more efficient way, though this is very fast def txloc(xdate,fmt): eastern = pytz.timezone('US/Eastern') dte = xdate.astimezone(eastern) return datetime.strftime(dte,fmt) def xmltodf(eltree, element,outvaluename): dt = [] v = [] for atype in eltree.findall('Record'): if atype.get('type') == element: dt.append(datetime.strptime(atype.get("startDate"),"%Y-%m-%d %H:%M:%S %z")) v.append(atype.get("value")) myd = pd.DataFrame({"Create":dt,outvaluename:v}) colDict = 
{"Year":"%Y","Month":"%Y-%m", "Week":"%Y-%U","Day":"%d","Hour":"%H","Days":"%Y-%m-%d","Month-Day":"%m-%d"} for col, fmt in colDict.items(): myd[col] = myd['Create'].dt.tz_convert('US/Eastern').dt.strftime(fmt) myd[outvaluename] = myd[outvaluename].astype(float).astype(int) print('Extracting ' + outvaluename + ', type: ' + element) return(myd) HR_df = xmltodf(e,"HKQuantityTypeIdentifierHeartRate","HeartRate") EX_df = xmltodf(e,"HKQuantityTypeIdentifierAppleExerciseTime","Extime") EX_df.head() #comment this cell out if no legacy exports. # extract legacy data, create series for heartrate to join with newer data #HR_df_leg = xmltodf(legacy,"HKQuantityTypeIdentifierHeartRate","HeartRate") #HR_df = pd.concat([HR_df_leg,HR_df]) #import pytz #eastern = pytz.timezone('US/Eastern') #st = datetime.strptime('2017-08-12 23:45:00 -0400', "%Y-%m-%d %H:%M:%S %z") #ed = datetime.strptime('2017-08-13 00:15:00 -0400', "%Y-%m-%d %H:%M:%S %z") #HR_df['c2'] = HR_df['Create'].dt.tz_convert('US/Eastern').dt.strftime("%Y-%m-%d") #HR_df[(HR_df['Create'] >= st) & (HR_df['Create'] <= ed) ].head(10) #reset plot - just for tinkering plt.rcParams['figure.figsize'] = 30, 8 HR_df.boxplot(by='Month',column="HeartRate", return_type='axes') plt.grid(axis='x') plt.title('All Months') plt.ylabel('Heart Rate') plt.ylim(40,140) dx = HR_df[HR_df['Year']=='2019'].boxplot(by='Week',column="HeartRate", return_type='axes') plt.title('All Weeks') plt.ylabel('Heart Rate') plt.xticks(rotation=90) plt.grid(axis='x') [plt.axvline(_x, linewidth=1, color='blue') for _x in [10,12]] plt.ylim(40,140) monthval = '2019-03' #monthval1 = '2017-09' #monthval2 = '2017-10' #HR_df[(HR_df['Month']==monthval1) | (HR_df['Month']== monthval2)].boxplot(by='Month-Day',column="HeartRate", return_type='axes') HR_df[HR_df['Month']==monthval].boxplot(by='Month-Day',column="HeartRate", return_type='axes') plt.grid(axis='x') plt.rcParams['figure.figsize'] = 16, 8 plt.title('Daily for Month: '+ monthval) plt.ylabel('Heart Rate') plt.xticks(rotation=90) plt.ylim(40,140) HR_df[HR_df['Month']==monthval].boxplot(by='Hour',column="HeartRate") plt.title('Hourly for Month: '+ monthval) plt.ylabel('Heart Rate') plt.grid(axis='x') plt.ylim(40,140) Explanation: Extract Values to Data Frame TODO: Abstraction of the next code block End of explanation # This isnt efficient yet, just a first swipe. It functions as intended. def getDelta(res,ttp,cyclelength): mz = [x if (x >= 0) & (x < cyclelength) else 999 for x in res] if ttp == 0: return(mz.index(min(mz))+1) else: return(mz[mz.index(min(mz))]) #chemodays = np.array([date(2017,4,24),date(2017,5,16),date(2017,6,6),date(2017,8,14)]) chemodays = np.array([date(2018,1,26),date(2018,2,2),date(2018,2,9),date(2018,2,16),date(2018,2,26),date(2018,3,2),date(2018,3,19),date(2018,4,9),date(2018,5,1),date(2018,5,14),date(2018,6,18),date(2018,7,10),date(2018,8,6)]) HR_df = xmltodf(e,"HKQuantityTypeIdentifierHeartRate","HeartRate") #I dont think this is efficient yet... 
a = HR_df['Create'].apply(lambda x: [x.days for x in x.date()-chemodays]) HR_df['ChemoCycle'] = a.apply(lambda x: getDelta(x,0,21)) HR_df['ChemoDays'] = a.apply(lambda x: getDelta(x,1,21)) import seaborn as sns plotx = HR_df[HR_df['ChemoDays']<=21] plt.rcParams['figure.figsize'] = 24, 8 ax = sns.boxplot(x="ChemoDays", y="HeartRate", hue="ChemoCycle", data=plotx, palette="Set2",notch=1,whis=0,width=0.75,showfliers=False) plt.ylim(65,130) #the next statement puts the chemodays variable as a rowname, we need to fix that plotx_med = plotx.groupby('ChemoDays').median() #this puts chemodays back as a column in the frame. I need to see if there is a way to prevent the effect plotx_med.index.name = 'ChemoDays' plotx_med.reset_index(inplace=True) snsplot = sns.pointplot(x='ChemoDays', y="HeartRate", data=plotx_med,color='Gray') Explanation: import calmap ts = pd.Series(HR_df['HeartRate'].values, index=HR_df['Days']) ts.index = pd.to_datetime(ts.index) tstot = ts.groupby(ts.index).median() plt.rcParams['figure.figsize'] = 16, 8 import warnings warnings.simplefilter(action='ignore', category=FutureWarning) calmap.yearplot(data=tstot,year=2017) Flag Chemotherapy Days for specific analysis The next two cells provide the ability to introduce cycles that start on specific days and include this data in the datasets so that they can be overlaid in graphics. In the example below, there are three cycles of 21 days. The getDelta function returns the cycle number when tpp == 0 and the days since day 0 when tpp == 2. This allows the overlaying of the cycles, with the days since day 0 being overlaid. End of explanation import seaborn as sns sns.set(style="ticks", palette="muted", color_codes=True) sns.boxplot(x="Month", y="HeartRate", data=HR_df,whis=np.inf, color="c") # Add in points to show each observation snsplot = sns.stripplot(x="Month", y="HeartRate", data=HR_df,jitter=True, size=1, alpha=.15, color=".3", linewidth=0) hr_only = HR_df[['Create','HeartRate']] hr_only.tail() hr_only.to_csv('~/Downloads/stc_hr.csv') Explanation: Boxplots Using Seaborn End of explanation
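One more hedged illustration that fits the series built above: pandas can collapse the raw heart-rate readings to a daily median directly, assuming HR_df from the earlier cells.
daily_hr = (HR_df.set_index('Create')['HeartRate']
                 .resample('D')
                 .median())
daily_hr.plot(figsize=(16, 4), title='Daily median heart rate')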
15,932
Given the following text description, write Python code to implement the functionality described. Description: Count of arrays in which all adjacent elements are such that one of them divides the other Python3 program to count the number of arrays of size n such that every element is in range [1...m] and adjacent elements are divisible ; For storing factors ; For storing multiples ; Calculating the factors and multiples of elements [1...m] ; Initialising for size-1 arrays for each i <= m ; Calculating the number of arrays possible of size i and starting with j ; For all previous possible values, adding the number of factors ; Adding the number of multiples ; Calculating the total count of arrays which start from [1...m] ; Driver program
Python Code: MAX = 1000

def numofArray(n, m):
    dp = [[0 for i in range(MAX)] for j in range(MAX)]
    # For storing factors.
    di = [[] for i in range(MAX)]
    # For storing multiples.
    mu = [[] for i in range(MAX)]

    # Calculating the factors and multiples of elements [1...m].
    for i in range(1, m + 1):
        for j in range(2 * i, m + 1, i):
            di[j].append(i)
            mu[i].append(j)
        di[i].append(i)

    # Initialising for size-1 arrays for each i <= m.
    for i in range(1, m + 1):
        dp[1][i] = 1

    # Calculating the number of arrays possible of size i and starting with j.
    for i in range(2, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = 0
            # For all previous possible values, adding the number of factors.
            for x in di[j]:
                dp[i][j] += dp[i - 1][x]
            # Adding the number of multiples.
            for x in mu[j]:
                dp[i][j] += dp[i - 1][x]

    # Calculating the total count of arrays which start from [1...m].
    ans = 0
    for i in range(1, m + 1):
        ans += dp[n][i]
        di[i].clear()
        mu[i].clear()
    return ans

# Driver program
if __name__ == "__main__":
    n = m = 3
    print(numofArray(n, m))
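A brute-force cross-check for small inputs, handy for convincing yourself the dynamic programming above counts the right arrays: enumerate every array in [1, m]^n and test the adjacency condition directly.
from itertools import product

def brute_force(n, m):
    count = 0
    for arr in product(range(1, m + 1), repeat=n):
        if all(a % b == 0 or b % a == 0 for a, b in zip(arr, arr[1:])):
            count += 1
    return count

print(brute_force(3, 3))   # should match numofArray(3, 3)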
15,933
Given the following text description, write Python code to implement the functionality described below step by step Description: Comparison of the accuracy of a cutting plane active learning procedure using the (i) analytic center; (ii) Chebyshev center; and (iii) random center on the Iris flower data set The set up Step1: Importing and processing the Iris data set In this experiment we work with the classic Iris flower data set. The Iris flower data set consists of 3 classes of 50 instances where each class corresponds to a different species of the Iris flower. For each instance there are 4 features. This data set is useful as it is known that one of the classes is linearly separable from the other 2. (In the other experiment the data set used, the Pima Indians diabetes data set, is not linearly separable.) For simplicity, we work with the first two features of the data set, sepal length in cm and sepal width in cm, and label the class of Iris Setosa flowers 1 and the other two classes, Iris Versicolour and Iris Virginica, -1. We will randomly divide the data set into two halves, to be used for training and testing. Step2: Experimental procedure See Section 7.5 of the report. Logistic regression Step3: The experiment
Python Code: import numpy as np import active import experiment import logistic_regression as logr from sklearn import datasets # The Iris dataset is imported from here. from IPython.display import display import matplotlib.pyplot as plt %matplotlib inline %load_ext autoreload %autoreload 1 %aimport active %aimport experiment %aimport logistic_regression np.set_printoptions(precision=4) plt.rcParams['axes.labelsize'] = 15 plt.rcParams['axes.titlesize'] = 15 plt.rcParams['xtick.labelsize'] = 15 plt.rcParams['ytick.labelsize'] = 15 plt.rcParams['legend.fontsize'] = 15 plt.rcParams['figure.titlesize'] = 18 Explanation: Comparison of the accuracy of a cutting plane active learning procedure using the (i) analytic center; (ii) Chebyshev center; and (iii) random center on the Iris flower data set The set up End of explanation # This code was adapted from # http://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html# iris = datasets.load_iris() X = iris.data[:, :2] # Take the first two features. Y = iris.target print('X has shape', X.shape) print('Y has shape', Y.shape) x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 plt.figure(2, figsize=(12, 7)) plt.clf() plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired) plt.xlabel('Sepal length (cm)') plt.ylabel('Sepal width (cm)') plt.title('The Iris flower data set') plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.xticks(()) plt.yticks(()) plt.savefig('iris.png', dpi=600, bbox_inches='tight', transparent=True) plt.show() bias = np.ones((X.shape[0], 1)) # Add a bias variable set to 1. X = np.hstack((X, bias)) Y[Y==1] = -1 Y[Y==2] = -1 Y[Y==0] = +1 np.random.seed(1) size = X.shape[0] index = np.arange(size) np.random.shuffle(index) training_index = index[:int(size/2)] testing_index = index[int(size/2):] X_iris_training = X[training_index] Y_iris_training = Y[training_index] X_iris_testing = X[testing_index] Y_iris_testing = Y[testing_index] n = 10 iterations = 75 X_testing = X_iris_testing Y_testing = Y_iris_testing X_training = X_iris_training Y_training = Y_iris_training Explanation: Importing and processing the Iris data set In this experiment we work with the classic Iris flower data set. The Iris flower data set consists of 3 classes of 50 instances where each class corresponds to a different species of the Iris flower. For each instance there are 4 features. This data set is useful as it is known that one of the classes is linearly seperable from the other 2. (In the other experiment the data set used, the Pima Indians diabetes data set, is not linearly seperable.) For simplicity, we work with the first two features of the data set, sepal length in cm and sepal width in cm, and label the class of Iris Setosa flowers 1 and the other two classes, Iris Versicolour and Iris Virginica, -1. We will randomly divide the data set into two halves, to be used for training and testing. End of explanation Y_training[Y_training== -1] = 0 Y_testing[Y_testing==-1] = 0 Y_training Y_testing average_accuracies_logr = \ logr.experiment(n, iterations, X_testing, Y_testing, X_training, Y_training) print(average_accuracies_logr) w_best = logr.train(X_training, Y_training) print('w_best is', w_best) predictions = logr.predict(w_best, X_testing) print('Using w_best the accuracy is', \ logr.compute_accuracy(predictions, Y_testing)) Explanation: Experimental procedure See Section 7.5 of the report. 
Logistic regression End of explanation Y_training[Y_training==0] = -1 Y_testing[Y_testing==0] = -1 Y_training Y_testing average_accuracies_ac = \ experiment.experiment(n, iterations, X_testing, Y_testing, X_training, Y_training, center='ac', sample=1, M=None) average_accuracies_cc = \ experiment.experiment(n, iterations, X_testing, Y_testing, X_training, Y_training, center='cc', sample=1, M=None) average_accuracies_rand = \ experiment.experiment(n, iterations, X_testing, Y_testing, X_training, Y_training, center='random', sample=1, M=None) plt.figure(figsize=(12,7)) queries = np.arange(1, iterations + 1) plt.plot(queries, average_accuracies_logr, 'mx-', label='LR', markevery=5, lw=1.5, ms=10, markerfacecolor='none', markeredgewidth=1.5, markeredgecolor = 'm') plt.plot(queries, average_accuracies_ac, 'r^-', label='AC', markevery=5, lw=1.5, ms=10, markerfacecolor='none', markeredgewidth=1.5, markeredgecolor = 'r') plt.plot(queries, average_accuracies_cc, 'go-', label='CC', markevery=5, lw=1.5, ms=10, markerfacecolor='none', markeredgewidth=1.5, markeredgecolor = 'g') plt.plot(queries, average_accuracies_rand, 'bs-', label='Random', markevery=5, lw=1.5, ms=10, markerfacecolor='none', markeredgewidth=1.5, markeredgecolor = 'b') plt.xlabel('Number of iterations') plt.ylabel('Accuracy averaged over %d tests' % n) plt.title('Average accuracy of a cutting plane active learning procedure (Iris flower data set)') plt.legend(loc='best') plt.savefig('iris_experiment.png', dpi=600, bbox_inches='tight', transparent=True) plt.show() Explanation: The experiment End of explanation
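Since the active and experiment modules above are project-specific, here is an independent, hedged sketch of one of the centers being compared — the Chebyshev center of a polytope {x : Ax <= b} — computed with a single linear program in SciPy; the unit-square example is only for illustration.
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    # Maximise r subject to a_i^T x + ||a_i|| r <= b_i and r >= 0.
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    c = np.zeros(A.shape[1] + 1)
    c[-1] = -1.0                      # linprog minimises, so minimise -r
    res = linprog(c, A_ub=np.hstack([A, norms]), b_ub=b,
                  bounds=[(None, None)] * A.shape[1] + [(0, None)])
    return res.x[:-1], res.x[-1]      # (center, radius)

A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])   # unit square
b = np.array([0.0, 0.0, 1.0, 1.0])
print(chebyshev_center(A, b))         # roughly ((0.5, 0.5), 0.5)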
15,934
Given the following text description, write Python code to implement the functionality described below step by step Description: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. Step1: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! Step2: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. Step3: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). Step4: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. Step5: Splitting the data into training, testing, and validation sets We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). Step7: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters Step8: Unit tests Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project. Step9: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. 
That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of iterations This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. Step10: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
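The hyperparameter advice above can be turned into a small, hedged search loop; it assumes the NeuralNetwork class, the MSE helper, and the train/validation frames constructed in the code that follows, and the grids are only examples.
X_train, y_train = train_features.values, train_targets.cnt.values[:, None]
X_val, y_val = val_features.values, val_targets.cnt.values[:, None]

best = None
for lr in (0.1, 0.5, 1.0):
    for hidden in (5, 10, 20):
        net = NeuralNetwork(X_train.shape[1], hidden, 1, lr)
        for _ in range(1000):                      # short illustrative run
            batch = np.random.choice(len(X_train), size=128)
            net.train(X_train[batch], y_train[batch])
        val_loss = MSE(net.run(X_val), y_val)
        if best is None or val_loss < best[0]:
            best = (val_loss, lr, hidden)
print('best (val_loss, lr, hidden):', best)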
Python Code: %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt Explanation: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. End of explanation data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() Explanation: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! End of explanation rides[:24*10].plot(x='dteday', y='cnt') Explanation: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. End of explanation dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() Explanation: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). End of explanation quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std Explanation: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. 
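For example, going back from the standardized scale to ride counts only needs the stored mean and std in scaled_features:
mean, std = scaled_features['cnt']
scaled_value = data['cnt'].iloc[0]          # any standardized value
print(scaled_value * std + mean)            # back on the original ride-count scale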
End of explanation # Save data for approximately the last 21 days test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] Explanation: Splitting the data into training, testing, and validation sets We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. End of explanation # Hold out the last 60 days or so of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). End of explanation from numba import jitclass, jit from numba import int64, float64, float32 import numpy as np from collections import OrderedDict from numba import jitclass, jit from numba import int64, float64 spec = OrderedDict({ 'input_nodes': int64, 'hidden_nodes': int64, 'output_nodes': int64, 'weights_input_to_hidden': float64[:, :], 'weights_hidden_to_output': float64[:, :], 'lr': float64 }) class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes self.weights_input_to_hidden = np.ones((self.input_nodes, self.hidden_nodes)) / 10 self.weights_hidden_to_output = np.ones((self.hidden_nodes, self.output_nodes)) / 10 # Initialize weights # self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, # (self.input_nodes, self.hidden_nodes)) # self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, # (self.hidden_nodes, self.output_nodes)) self.lr = learning_rate def __repr__(self): return '<NeuralNetwork: {:,} -> {:,} -> {:,}; lr: {:}>'.format( self.input_nodes, self.hidden_nodes, self.output_nodes, self.lr ) def activation_function(self, x): return 1 / (1 + np.exp(-x)) def train(self, features, targets): ''' Train the network on batch of features and targets. 
Arguments --------- features: 2D array, each row is one data record, each column is a feature targets: 1D array of target values ''' n_records = features.shape[0] # Eg: 4 (input) -> 2 (hidden) -> 1 (output) # (n_records, 4), (n_records, 1) X, y = features, targets ### Forward pass ### # (n_records, 1), (n_records, 2) final_outputs, hidden_outputs = self._run(X) ### Backward pass ### # (n_records, 1) error = y - final_outputs # Output error # (n_records, 1) output_error_term = error # because f'(x) = 1 # Calculate for each node in the hidden layer's contribution to the error # (n_recors, 1) @ (1, 2) = (n_records, 2) hidden_error = output_error_term @ self.weights_hidden_to_output.T # Backpropagated error terms # (n_records, 2) * (n_records, 2) = (n_records, 2) hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs) # Weight step (input to hidden) # (4, n_records) * (n_records, 2) = (4, 2) delta_weights_i_h = X.T @ hidden_error_term # Weight step (hidden to output) # (2, n_records) * (n_records, 1) = (2, 1) delta_weights_h_o = hidden_outputs.T @ output_error_term # Update the weights self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records def _run(self, features): # Hidden layer # (n, 4) @ (4, 2) = (n, 2) hidden_inputs = features @ self.weights_input_to_hidden hidden_outputs = self.activation_function(hidden_inputs) # Output layer # (n, 2) @ (2, 1) = (n, 1) final_inputs = hidden_outputs @ self.weights_hidden_to_output # (n, 1) final_outputs = final_inputs # f(x) = x return final_outputs, hidden_outputs def run(self, features): ''' Run a forward pass through the network with input features Arguments --------- features: 1D array of feature values ''' final_outputs, _ = self._run(features) return final_outputs inputs = np.array([[0.5, -0.2, 0.1, 0.2], [0.5, -0.2, 0.1, 0.2]]) targets = np.array([[0.4], [0.4]]) network = NeuralNetwork(4, 2, 1, 0.5) network.train(inputs, targets) inputs = np.array([[1.0, 0.0], [0.0, 1]]) targets = np.array([[1.0], [0.0]]) network = NeuralNetwork(2, 1, 1, 0.3) network.train(inputs, targets) print(network.weights_input_to_hidden) print(network.weights_hidden_to_output) inputs = np.array([[1.0, 0.0]]) targets = np.array([[1.0]]) network = NeuralNetwork(2, 1, 1, 0.3) network.train(inputs, targets) print(np.round(network.weights_input_to_hidden, 6)) print(np.round(network.weights_hidden_to_output, 6)) print('') network.train(np.array([[0.0, 1.0]]), np.array([[0.0]])) print(np.round(network.weights_input_to_hidden, 8)) print(np.round(network.weights_hidden_to_output, 6)) class NeuralNetwork2(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes self.weights_input_to_hidden = np.ones((self.input_nodes, self.hidden_nodes)) / 10 self.weights_hidden_to_output = np.ones((self.hidden_nodes, self.output_nodes)) / 10 # Initialize weights # self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, # (self.input_nodes, self.hidden_nodes)) # self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, # (self.hidden_nodes, self.output_nodes)) self.lr = learning_rate self.activation_function = lambda x : 1/(1 + np.exp(-x)) def train(self, features, targets): ''' Train the network on batch of features and targets. 
Arguments --------- features: 2D array, each row is one data record, each column is a feature targets: 1D array of target values ''' n_records = features.shape[0] delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape) delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape) for X, y in zip(features, targets): ### Forward pass ### hidden_inputs = np.dot(X, self.weights_input_to_hidden) hidden_outputs = self.activation_function(hidden_inputs) final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # since the last layer just passes on its value, we don't have to apply the sigmoid here. final_outputs = final_inputs ### Backward pass ### error = y - final_outputs # The derivative of the activation function y=x is 1 output_error_term = error * 1.0 hidden_error = np.dot(self.weights_hidden_to_output, error) # Backpropagated error terms hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs) # Weight step (input to hidden) delta_weights_i_h += hidden_error_term * X[:,None] # Weight step (hidden to output) delta_weights_h_o += output_error_term * hidden_outputs[:,None] # Weights update self.weights_hidden_to_output += self.lr*delta_weights_h_o/n_records self.weights_input_to_hidden += self.lr*delta_weights_i_h/n_records def run(self, features): ''' Run a forward pass through the network with input features Arguments --------- features: 1D array of feature values ''' # Forward pass hidden_inputs = np.dot(features, self.weights_input_to_hidden) hidden_outputs = self.activation_function(hidden_inputs) final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) final_outputs = final_inputs return final_outputs inputs = np.array([[1.0, 0.0], [0.0, 1]]) targets = np.array([[1.0], [0.0]]) network = NeuralNetwork2(2, 1, 1, 0.3) network.train(inputs, targets) print(network.weights_input_to_hidden) print(network.weights_hidden_to_output) inputs = np.array([[1.0, 0.0]]) targets = np.array([[1.0]]) network = NeuralNetwork2(2, 1, 1, 0.3) network.train(inputs, targets) print(network.weights_input_to_hidden) print(network.weights_hidden_to_output) print('') network.train(np.array([[0.0, 1.0]]), np.array([[0.0]])) print(network.weights_input_to_hidden) print(network.weights_hidden_to_output) def MSE(y, Y): return np.mean((y-Y)**2) Explanation: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.png" width=300px> The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. 
This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method. End of explanation import unittest inputs = np.array([[0.5, -0.2, 0.1]]) targets = np.array([[0.4]]) test_w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]]) test_w_h_o = np.array([[0.3], [-0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328], [-0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, -0.20185996], [0.39775194, 0.50074398], [-0.29887597, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) _ = unittest.TextTestRunner().run(suite) Explanation: Unit tests Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project. 
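If a test fails and the one-line report is not detailed enough, one optional follow-up (a sketch that reuses only the standard-library objects already created in the cell above) is to rerun the same suite with a higher verbosity level:
# Optional re-run with per-test output; TestMethods is the class defined in the cell above.
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
_ = unittest.TextTestRunner(verbosity=2).run(suite)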
End of explanation %%timeit -n 1 -r 1 import sys # declare global variables because %%timeit will # put the whole cell in a closure global losses global network ### Set the hyperparameters here ### iterations = 4000 learning_rate = 1.3 hidden_nodes = 7 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} train_features_arr = np.array(train_features) val_features_arr = np.array(val_features) train_targets_cnt = np.array(train_targets.cnt, ndmin=2).T val_targets_cnt = np.array(val_targets.cnt, ndmin=2).T for ii in range(iterations): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) X, y = train_features_arr[batch], train_targets_cnt[batch] network.train(X, y) # Printing out the training progress train_loss = MSE(network.run(train_features_arr), train_targets_cnt) val_loss = MSE(network.run(val_features_arr), val_targets_cnt) sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) sys.stdout.flush() losses['train'].append(train_loss) losses['validation'].append(val_loss) # give room for timeit result print('\n') fig, ax = plt.subplots(figsize=(7,4)) ax.plot(losses['train'], label='Training loss') ax.plot(losses['validation'], label='Validation loss') ax.legend() ax.set_xlabel('epoch') ax.set_ylabel('loss') _ = plt.ylim([0, 1]) Explanation: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of iterations This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model with not generalize well to other data, this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. 
Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. End of explanation fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(np.array(test_features))*std + mean ax.plot(predictions[:,0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.loc[test_data.index, 'dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) Explanation: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. End of explanation
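As a quick numeric complement to the prediction plot, the sketch below scores the trained network on the test split in the same way the training and validation losses were computed earlier. It assumes the network, test_features, and test_targets variables and the MSE() helper defined in the preceding cells; nothing new is introduced.
# Sketch: test-set loss, mirroring how the train/validation losses were computed above.
test_features_arr = np.array(test_features)
test_targets_cnt = np.array(test_targets['cnt'], ndmin=2).T
test_loss = MSE(network.run(test_features_arr), test_targets_cnt)
print('Test loss: {:.3f}'.format(test_loss))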
15,935
Given the following text description, write Python code to implement the functionality described below step by step Description: Setup Setup that is specific only to Jupyter notebooks Step1: Setup to use Python libraries/modules cf. www.coolprop.org/coolprop/examples.html Step2: Examples from CoolProp's Examples cf. www.coolprop.org/coolprop/examples.html Step3: High-Level Interface cf. http Step4: Nitrogen
Python Code: from pathlib import Path import sys notebook_directory_parent = Path.cwd().resolve().parent if str(notebook_directory_parent) not in sys.path: sys.path.append(str(notebook_directory_parent)) Explanation: Setup Setup that is specific only to Jupyter notebooks End of explanation # Import the things you need from CoolProp.CoolProp import PropsSI import CoolProp Explanation: Setup to use Python libraries/modules cf. www.coolprop.org/coolprop/examples.html End of explanation # Print some information on the currently used version of coolprop print(CoolProp.__version__, CoolProp.__gitrevision__) # Density of carbon dioxide at 100 bar and 25 C PropsSI('D', 'T', 298.15, 'P', 100e5, 'CO2') # Saturated vapor enthalpy [J/kg] of R134a at 25 C PropsSI('H', 'T', 298.15, 'Q', 1, 'R134a') Explanation: Examples from CoolProp's Examples cf. www.coolprop.org/coolprop/examples.html End of explanation # Saturation temperature of Water at 1 atm in K PropsSI('T', 'P', 101325, 'Q', 0, 'Water') Explanation: High-Level Interface cf. http://www.coolprop.org/coolprop/HighLevelAPI.html#high-level-api T is the output property returned 2nd, 4th parameters are specified input pair of properties that determine state point where output property will be calculated output property and input pair properties are text strings 3rd, 5th parameters are values of input pair 6th and last parameter is fluid for which output property will be calculated End of explanation PropsSI('T', 'P', 101325, 'Q', 0, 'N2') PropsSI('T', 'P', 101325, 'Q', 0, 'Nitrogen') Explanation: Nitrogen End of explanation
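To make the six-argument call pattern described above concrete, here is one more small sketch. It reuses only the PropsSI signature already demonstrated (output property, an input pair of properties with their values, then the fluid name); the particular state point of 298.15 K and 101325 Pa is an arbitrary choice for illustration, not taken from the CoolProp documentation.
# Density [kg/m^3] and enthalpy [J/kg] of nitrogen at an arbitrary state point.
rho_n2 = PropsSI('D', 'T', 298.15, 'P', 101325, 'Nitrogen')
h_n2 = PropsSI('H', 'T', 298.15, 'P', 101325, 'Nitrogen')
print(rho_n2, h_n2)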
15,936
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Add the paths of the AC-calibration tools and plotting functions Step1: Create a PSDMeasurement object holding the power spectra of one calibration Step2: Create a PSDFit object and fit the PSDs
Python Code: import sys sys.path.append('../..') import pyotc Explanation: add paths of ac-calibration tools and plotting functions End of explanation directory = '../exampleData/height_calibration_single_psds/' fname = 'B01_1000.dat' pm = pyotc.PSDMeasurement(warn=False) pm.load(directory, fname) Explanation: create PSDMeasurement object - holding the power spectra of one calibration End of explanation fmin = 0 fmax = 45e3 pf = pyotc.PSDFit(pm, bounds=(fmin, fmax)) pf.setup_fit(lp_filter=True, lp_fixed=False, f3dB=8000, alpha=0.3) pf.fit_psds(fitreport=True) fig = pf.plot_fits(showLegend=False) fig pf.print_pc_results() pf.print_ac_results() pf.write_results_to_file() Explanation: create PSDfit object and fit the psds End of explanation
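The relative sys.path.append('../..') above assumes the notebook is run from a directory two levels below the pyotc package. A slightly more explicit variant is sketched below; the two-levels-up location is an assumption carried over from that cell, not something fixed by pyotc itself.
# Sketch: add the package root to sys.path via an explicit, resolved path.
import sys
from pathlib import Path

package_root = Path.cwd().resolve().parent.parent
if str(package_root) not in sys.path:
    sys.path.append(str(package_root))
import pyotc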
15,937
Given the following text description, write Python code to implement the functionality described below step by step Description: Before we get started, a couple of reminders to keep in mind when using iPython notebooks Step1: Fixing Data Types Step2: Note when running the above cells that we are actively changing the contents of our data variables. If you try to run these cells multiple times in the same session, an error will occur. Investigating the Data Step3: Problems in the Data Step4: Missing Engagement Records Step5: Checking for More Problem Records Step6: Tracking Down the Remaining Problems Step7: Refining the Question Step8: Getting Data from First Week Step9: Exploring Student Engagement Step10: Debugging Data Analysis Code Step11: Lessons Completed in First Week Step12: Number of Visits in First Week Step13: Splitting out Passing Students Step14: Comparing the Two Student Groups Step15: Making Histograms Step16: Improving Plots and Sharing Findings
Python Code: import unicodecsv ## Longer version of code (replaced with shorter, equivalent version below) # enrollments = [] # f = open('enrollments.csv', 'rb') # reader = unicodecsv.DictReader(f) # for row in reader: # enrollments.append(row) # f.close() with open('enrollments.csv', 'rb') as f: reader = unicodecsv.DictReader(f) enrollments = list(reader) ##################################### # 1 # ##################################### ## Read in the data from daily_engagement.csv and project_submissions.csv ## and store the results in the below variables. ## Then look at the first row of each table. daily_engagement = '' project_submissions = Explanation: Before we get started, a couple of reminders to keep in mind when using iPython notebooks: Remember that you can see from the left side of a code cell when it was last run if there is a number within the brackets. When you start a new notebook session, make sure you run all of the cells up to the point where you last left off. Even if the output is still visible from when you ran the cells in your previous session, the kernel starts in a fresh state so you'll need to reload the data, etc. on a new session. The previous point is useful to keep in mind if your answers do not match what is expected in the lesson's quizzes. Try reloading the data and run all of the processing steps one by one in order to make sure that you are working with the same variables and data that are at each quiz stage. Load Data from CSVs End of explanation from datetime import datetime as dt # Takes a date as a string, and returns a Python datetime object. # If there is no date given, returns None def parse_date(date): if date == '': return None else: return dt.strptime(date, '%Y-%m-%d') # Takes a string which is either an empty string or represents an integer, # and returns an int or None. def parse_maybe_int(i): if i == '': return None else: return int(i) # Clean up the data types in the enrollments table for enrollment in enrollments: enrollment['cancel_date'] = parse_date(enrollment['cancel_date']) enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel']) enrollment['is_canceled'] = enrollment['is_canceled'] == 'True' enrollment['is_udacity'] = enrollment['is_udacity'] == 'True' enrollment['join_date'] = parse_date(enrollment['join_date']) enrollments[0] # Clean up the data types in the engagement table for engagement_record in daily_engagement: engagement_record['lessons_completed'] = int(float(engagement_record['lessons_completed'])) engagement_record['num_courses_visited'] = int(float(engagement_record['num_courses_visited'])) engagement_record['projects_completed'] = int(float(engagement_record['projects_completed'])) engagement_record['total_minutes_visited'] = float(engagement_record['total_minutes_visited']) engagement_record['utc_date'] = parse_date(engagement_record['utc_date']) daily_engagement[0] # Clean up the data types in the submissions table for submission in project_submissions: submission['completion_date'] = parse_date(submission['completion_date']) submission['creation_date'] = parse_date(submission['creation_date']) project_submissions[0] Explanation: Fixing Data Types End of explanation ##################################### # 2 # ##################################### ## Find the total number of rows and the number of unique students (account keys) ## in each table. Explanation: Note when running the above cells that we are actively changing the contents of our data variables. 
If you try to run these cells multiple times in the same session, an error will occur. Investigating the Data End of explanation ##################################### # 3 # ##################################### ## Rename the "acct" column in the daily_engagement table to "account_key". Explanation: Problems in the Data End of explanation ##################################### # 4 # ##################################### ## Find any one student enrollments where the student is missing from the daily engagement table. ## Output that enrollment. Explanation: Missing Engagement Records End of explanation ##################################### # 5 # ##################################### ## Find the number of surprising data points (enrollments missing from ## the engagement table) that remain, if any. Explanation: Checking for More Problem Records End of explanation # Create a set of the account keys for all Udacity test accounts udacity_test_accounts = set() for enrollment in enrollments: if enrollment['is_udacity']: udacity_test_accounts.add(enrollment['account_key']) len(udacity_test_accounts) # Given some data with an account_key field, removes any records corresponding to Udacity test accounts def remove_udacity_accounts(data): non_udacity_data = [] for data_point in data: if data_point['account_key'] not in udacity_test_accounts: non_udacity_data.append(data_point) return non_udacity_data # Remove Udacity test accounts from all three tables non_udacity_enrollments = remove_udacity_accounts(enrollments) non_udacity_engagement = remove_udacity_accounts(daily_engagement) non_udacity_submissions = remove_udacity_accounts(project_submissions) print len(non_udacity_enrollments) print len(non_udacity_engagement) print len(non_udacity_submissions) Explanation: Tracking Down the Remaining Problems End of explanation ##################################### # 6 # ##################################### ## Create a dictionary named paid_students containing all students who either ## haven't canceled yet or who remained enrolled for more than 7 days. The keys ## should be account keys, and the values should be the date the student enrolled. paid_students = Explanation: Refining the Question End of explanation # Takes a student's join date and the date of a specific engagement record, # and returns True if that engagement record happened within one week # of the student joining. def within_one_week(join_date, engagement_date): time_delta = engagement_date - join_date return time_delta.days < 7 ##################################### # 7 # ##################################### ## Create a list of rows from the engagement table including only rows where ## the student is one of the paid students you just found, and the date is within ## one week of the student's join date. paid_engagement_in_first_week = Explanation: Getting Data from First Week End of explanation from collections import defaultdict # Create a dictionary of engagement grouped by student. # The keys are account keys, and the values are lists of engagement records. engagement_by_account = defaultdict(list) for engagement_record in paid_engagement_in_first_week: account_key = engagement_record['account_key'] engagement_by_account[account_key].append(engagement_record) # Create a dictionary with the total minutes each student spent in the classroom during the first week. 
# The keys are account keys, and the values are numbers (total minutes) total_minutes_by_account = {} for account_key, engagement_for_student in engagement_by_account.items(): total_minutes = 0 for engagement_record in engagement_for_student: total_minutes += engagement_record['total_minutes_visited'] total_minutes_by_account[account_key] = total_minutes import numpy as np # Summarize the data about minutes spent in the classroom total_minutes = total_minutes_by_account.values() print 'Mean:', np.mean(total_minutes) print 'Standard deviation:', np.std(total_minutes) print 'Minimum:', np.min(total_minutes) print 'Maximum:', np.max(total_minutes) Explanation: Exploring Student Engagement End of explanation ##################################### # 8 # ##################################### ## Go through a similar process as before to see if there is a problem. ## Locate at least one surprising piece of data, output it, and take a look at it. Explanation: Debugging Data Analysis Code End of explanation ##################################### # 9 # ##################################### ## Adapt the code above to find the mean, standard deviation, minimum, and maximum for ## the number of lessons completed by each student during the first week. Try creating ## one or more functions to re-use the code above. Explanation: Lessons Completed in First Week End of explanation ###################################### # 10 # ###################################### ## Find the mean, standard deviation, minimum, and maximum for the number of ## days each student visits the classroom during the first week. Explanation: Number of Visits in First Week End of explanation ###################################### # 11 # ###################################### ## Create two lists of engagement data for paid students in the first week. ## The first list should contain data for students who eventually pass the ## subway project, and the second list should contain data for students ## who do not. subway_project_lesson_keys = ['746169184', '3176718735'] passing_engagement = non_passing_engagement = Explanation: Splitting out Passing Students End of explanation ###################################### # 12 # ###################################### ## Compute some metrics you're interested in and see how they differ for ## students who pass the subway project vs. students who don't. A good ## starting point would be the metrics we looked at earlier (minutes spent ## in the classroom, lessons completed, and days visited). Explanation: Comparing the Two Student Groups End of explanation ###################################### # 13 # ###################################### ## Make histograms of the three metrics we looked at earlier for both ## students who passed the subway project and students who didn't. You ## might also want to make histograms of any other metrics you examined. Explanation: Making Histograms End of explanation ###################################### # 14 # ###################################### ## Make a more polished version of at least one of your visualizations ## from earlier. Try importing the seaborn library to make the visualization ## look better, adding axis labels and a title, and changing one or more ## arguments to the hist() function. Explanation: Improving Plots and Sharing Findings End of explanation
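For exercises 13 and 14, one possible shape for the final plot is sketched below; it is not the official solution, just an illustration of the pieces the prompt asks for (a histogram, axis labels, a title, and seaborn for styling). It reuses total_minutes_by_account from the cells above, and any other first-week metric could be substituted.
# Sketch for a polished histogram of first-week classroom time.
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_style('whitegrid')  # purely cosmetic restyling of matplotlib output

plt.hist(list(total_minutes_by_account.values()), bins=20)
plt.xlabel('Total minutes visited in the first week')
plt.ylabel('Number of students')
plt.title('First-week classroom time per student')
plt.show()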
15,938
Given the following text description, write Python code to implement the functionality described below step by step Description: Building a LAS file from scratch This example shows Step1: Step 1 Create some synthetic data, and make some of the values in the middle null values (numpy.nan specifically). Note that of course every curve in a LAS file is recorded against a reference/index, either depth or time, so we create that array too. Step2: Step 2 Create an empty LASFile object and review its header section Step3: Let's add some information to the header Step4: Next, let's make a new item in the ~Parameters section for the operator. To do this we need to make a new HeaderItem Step5: And finally, add some free text to the ~Other section Step6: Step 3 Add the curves to the LAS file using the add_curve method Step7: Step 4 Now let's write out two files Step8: Step 5 And finally let's read in the resulting v1.2 file and see if the data is there correctly...
Python Code: import lasio print(lasio.__version__) import datetime import numpy import os import matplotlib.pyplot as plt %matplotlib inline Explanation: Building a LAS file from scratch This example shows: Creating a pretend/synthetic data curve that we'll call "SYNTH", including some null values Creating an empty LASFile object with a default header Adding some information to the header Adding the synthetic data to the LASFile object Writing it to disk as both a LAS version 1.2 and 2.0 file Re-loading the file and checking that the null values are interpreted correctly End of explanation depths = numpy.arange(10, 50, 0.5) synth = numpy.log10(depths) * 10 + numpy.random.random(len(depths)) synth[15:25] = numpy.nan # Add some null values in the middle plt.plot(depths, synth) Explanation: Step 1 Create some synthetic data, and make some of the values in the middle null values (numpy.nan specifically). Note that of course every curve in a LAS file is recorded against a reference/index, either depth or time, so we create that array too. End of explanation l = lasio.LASFile() l.header Explanation: Step 2 Create an empty LASFile object and review its header section End of explanation l.well["DATE"].value = str(datetime.datetime.today()) Explanation: Let's add some information to the header: the date the operator (in the ~P section) a description of the file in the ~O (Other) section. First, let's change the date. Note that when changing the value of a HeaderItem like the well["DATE"] object above, you must be careful to change the value attribute rather than the HeaderItem itself. (This will be made easier in the future.) End of explanation # HeaderItem = namedlist("HeaderItem", ["mnemonic", "unit", "value", "descr"]) l.params["ENGI"] = lasio.HeaderItem("ENGI", "", "[email protected]", "Creator of this file...") Explanation: Next, let's make a new item in the ~Parameters section for the operator. To do this we need to make a new HeaderItem: End of explanation l.other = "Example of how to create a LAS file from scratch using las_reader" Explanation: And finally, add some free text to the ~Other section: End of explanation l.add_curve("DEPT", depths, unit="m") l.add_curve("SYNTH", synth, descr="Synthetic data") Explanation: Step 3 Add the curves to the LAS file using the add_curve method: End of explanation fn = "scratch_example_v2.las" if os.path.exists(fn): # Remove file if it already exists os.remove(fn) with open(fn, mode="w") as f: # Write LAS file to disk l.write(f) with open(fn, mode="r") as f: # Show the result... print(f.read()) fn2 = "scratch_example_v1.2.las" if os.path.exists(fn2): # Remove file if it already exists os.remove(fn2) with open(fn2, mode="w") as f: # Write LAS file to disk l.write(f, version=1.2) with open(fn2, mode="r") as f: # Show the result... print(f.read()) Explanation: Step 4 Now let's write out two files: one according to the LAS file specification version 1.2, and one according to 2.0. Note that by default an empty LASFile object is version 2.0. End of explanation l_v12 = lasio.read(fn) print("Reading in %s" % fn) print(l_v12.keys()) plt.plot(l_v12["DEPT"], l_v12["SYNTH"]) print(l_v12.well["DATE"]) os.remove(fn) os.remove(fn2) Explanation: Step 5 And finally let's read in the resulting v1.2 file and see if the data is there correctly... End of explanation
15,939
Given the following text description, write Python code to implement the functionality described below step by step Description: 국민대, 파이썬, 데이터 W10 NumPy 101 Table of Contents NumPy Basic NumPy Exercises NumPy Example Step1: 1. NumPy Basic 1. Data Container 데이터를 담는 그릇, Data Container라고 합니다. Python 에서는 List/Tuple/Dictionary 등과 같이 Data container가 있습니다. 마찬가지로 NumPy에서도 데이터를 담는 그릇으로 Multi Dimensional Array 즉, ndarray가 있습니다. 모양새가 List/Tuple과 흡사합니다. 2. Why not just use a list of lists? 그렇다면 그냥 Python의 list나 tuple을 써도 데이터를 담을 수 있는데 왜 NumPy를 쓸까요? 바로 NumPy의 강력한 기능 때문입니다. a powerful n-dimensional array object 그 외에 선형 대수, 푸리에 변환, 랜덤 생성 능력 등 2. NumPy Essential Step2: 1. Creation Step3: ndrarray를 생성하는 다양한 방법 Step4: 2. Indexing and Slicing Step5: 3. NumPy Exercises 1. np라는 이름으로 NumPy 패키지를 가져와보세요. Step6: 2. 0인 원소 10개가 들어있는 ndarray를 만들어봅시다. 3. 5부터 26까지 들어있는 ndarray를 만들어봅시다. 4. 램덤값으로 3x3x3 array를 만들어봅시다. 4. NumPy Example Step7: pyplot은 matplotplib 패키지에서 가장 많이 쓰이는 것 중입니다. 그래프로 그리는(plot) 기능이 담겨져 있는 모듈입니다. 일반적으로 plt라는 별칭을 많이 사용합니다. 1. Bernoulli's Trials (베르누이 시행) 임의의 결과가 '성공' 또는 '실패'의 두 가지 중 하나인 실험(위키피디아) 즉, 1 or -1 두 개의 값을 1,000번 임의로 표현하여 누적된 그래프로 표현하는 시작을 베르누이 시행을 만들어내는 것으로 하겠습니다. 순수하게 Python의 내장 함수만을 이용한 것과 NumPy를 이용한 것, 두 가지를 함께 살펴보며 NumPy의 강점을 살펴보겠습니다. 1) Pure Python Step8: 모듈(확장자가 py 파일) 안을 살펴보면 여러가지 클래스와 함수가 있음을 알 수 있습니다. 그 중에 randint라는 함수를 살펴보겠습니다. Step9: 위에 randint() 함수를 통해서 100개의 1 또는 0 값이 있는 리스트를 아래와 같이 만들어 보겠습니다. Step10: 2) NumPy Way Step11: 2. Random Walk 이제 본격적으로 랜덤하게 +1이 되든 -1이 되든 누적된 1,000 걸음을 걸어보도록 하겠습니다. 여기서도 처음에는 Pure(순수하게) Python만을 이용해 만들어본 후 NumPy로 만들어보도록 하겠습니다. 1) Pure Python Step12: 잠깐!! range와 xrange에 대해서 python2|python3|difference --|--|-- xrange(10)|range(10)|evaluate lazily, 순서열 range(10)|list(range(10))|10개에 해당하는 메모리 생성 Step13: 2) NumPy Way Step14: 3) Pure Python Vs. NumPy Way Step15: 3. Random Walks using where() 만들어진 ndarray 객체에서 np.where를 이용해 조건절을 만들어 원하는 형태로 바꿔보겠습니다. Step16: 즉, arr == 0에서 arr 다차원 배열의 원소 각각이 0이면 -1을 아니면 1을 넣으라 라는 뜻입니다. Step17: 4. Random Walks using cumsum() Step18: 생각해보기 np.cumsum(steps)와 steps.cumsum()의 차이점은 무엇일까요!? 5. More
Python Code: from IPython.display import Image Explanation: 국민대, 파이썬, 데이터 W10 NumPy 101 Table of Contents NumPy Basic NumPy Exercises NumPy Example: Random Walks Coding Convention import numpy as np End of explanation import numpy as np Explanation: 1. NumPy Basic 1. Data Container 데이터를 담는 그릇, Data Container라고 합니다. Python 에서는 List/Tuple/Dictionary 등과 같이 Data container가 있습니다. 마찬가지로 NumPy에서도 데이터를 담는 그릇으로 Multi Dimensional Array 즉, ndarray가 있습니다. 모양새가 List/Tuple과 흡사합니다. 2. Why not just use a list of lists? 그렇다면 그냥 Python의 list나 tuple을 써도 데이터를 담을 수 있는데 왜 NumPy를 쓸까요? 바로 NumPy의 강력한 기능 때문입니다. a powerful n-dimensional array object 그 외에 선형 대수, 푸리에 변환, 랜덤 생성 능력 등 2. NumPy Essential End of explanation a = np.array([1, 2, 3, 4]) a help(np.array) type(a) a.dtype a = np.array([1, 2, 3], dtype=float) a a.dtype Image(filename='images/numpy_dtype.png') Image(filename='images/numpy_dtype.png') a = np.array([1, 2, 3], dtype=float) a b = np.array( [ [1, 8, 5, 1, [1, 2]], [1, 8, 5, 1, [1, 2]], [1, 8, 5, 1, [1, 2]], ] , dtype=object) b.dtype b print(a.ndim, b.ndim) print(a.shape, b.shape) Explanation: 1. Creation End of explanation np.arange(10) np.arange(1, 10, 2) np.linspace(0, 1, 6) np.linspace(0, 1, 6, endpoint=False) np.ones((3,3)) np.zeros((3,3)) np.eye(3) np.diag(np.array([1, 2, 3, 4])) help(np.random) np.random.random((3, 4, 2)) Explanation: ndrarray를 생성하는 다양한 방법 End of explanation a = np.arange(10) a a[3] a[::-1] b = np.diag(np.arange(1, 4)) b b[2, 2] b[1, 2] = 10 b Explanation: 2. Indexing and Slicing End of explanation import numpy as np Explanation: 3. NumPy Exercises 1. np라는 이름으로 NumPy 패키지를 가져와보세요. End of explanation import matplotlib.pyplot as plt %matplotlib inline Explanation: 2. 0인 원소 10개가 들어있는 ndarray를 만들어봅시다. 3. 5부터 26까지 들어있는 ndarray를 만들어봅시다. 4. 램덤값으로 3x3x3 array를 만들어봅시다. 4. NumPy Example: Random Walks +1 혹은 -1, 두 개의 값 중에 임의의 수로 1,000번의 걸음을 만들어 이를 누적 그래프로 확인해보겠습니다. End of explanation import random type(random) Explanation: pyplot은 matplotplib 패키지에서 가장 많이 쓰이는 것 중입니다. 그래프로 그리는(plot) 기능이 담겨져 있는 모듈입니다. 일반적으로 plt라는 별칭을 많이 사용합니다. 1. Bernoulli's Trials (베르누이 시행) 임의의 결과가 '성공' 또는 '실패'의 두 가지 중 하나인 실험(위키피디아) 즉, 1 or -1 두 개의 값을 1,000번 임의로 표현하여 누적된 그래프로 표현하는 시작을 베르누이 시행을 만들어내는 것으로 하겠습니다. 순수하게 Python의 내장 함수만을 이용한 것과 NumPy를 이용한 것, 두 가지를 함께 살펴보며 NumPy의 강점을 살펴보겠습니다. 1) Pure Python End of explanation random.randint(0, 1) type(random.randint(0, 1)) # randint 함수의 반환값이 int임을 알 수 있습니다. Explanation: 모듈(확장자가 py 파일) 안을 살펴보면 여러가지 클래스와 함수가 있음을 알 수 있습니다. 그 중에 randint라는 함수를 살펴보겠습니다. End of explanation lst = [] for i in range(100): lst.append(random.randint(0, 1)) print(lst) Explanation: 위에 randint() 함수를 통해서 100개의 1 또는 0 값이 있는 리스트를 아래와 같이 만들어 보겠습니다. End of explanation import numpy as np np.random.randint(0, 2, size=100) Explanation: 2) NumPy Way End of explanation def pwalks(steps=1000): position = 0 walk = [position] max_steps = int(steps) for i in range(max_steps): step = 1 if random.randint(0, 1) else -1 position += step walk.append(position) return walk plt.suptitle('Random Walk with +1/-1 Steps') plt.plot(pwalks()) Explanation: 2. Random Walk 이제 본격적으로 랜덤하게 +1이 되든 -1이 되든 누적된 1,000 걸음을 걸어보도록 하겠습니다. 여기서도 처음에는 Pure(순수하게) Python만을 이용해 만들어본 후 NumPy로 만들어보도록 하겠습니다. 1) Pure Python End of explanation %timeit range(10) %timeit list(range(10)) Explanation: 잠깐!! 
range와 xrange에 대해서 python2|python3|difference --|--|-- xrange(10)|range(10)|evaluate lazily, 순서열 range(10)|list(range(10))|10개에 해당하는 메모리 생성 End of explanation def nwalks(steps=1000): position = 0 walk = [position] max_steps = int(steps) for movement in np.random.randint(0, 2, size=max_steps): step = 1 if movement else -1 position += step walk.append(position) return walk plt.suptitle("Random Walk with +1/-1 Steps") plt.plot(nwalks()) np.random.randint(0, 2, size=1000) type(np.random.randint(0, 2, size=1000)) Explanation: 2) NumPy Way End of explanation %timeit pwalks() %timeit nwalks() Explanation: 3) Pure Python Vs. NumPy Way End of explanation arr = np.random.randint(0, 2, size=100) arr np.where(arr == 0, -1, 1) Explanation: 3. Random Walks using where() 만들어진 ndarray 객체에서 np.where를 이용해 조건절을 만들어 원하는 형태로 바꿔보겠습니다. End of explanation def wwalks(steps=1000): position = 0 walk = [position] max_steps = int(steps) arr = np.random.randint(0, 2, size=max_steps) for step in np.where(arr == 0, -1, 1): position += step walk.append(position) return walk plt.plot(wwalks()) %timeit nwalks() %timeit wwalks() Explanation: 즉, arr == 0에서 arr 다차원 배열의 원소 각각이 0이면 -1을 아니면 1을 넣으라 라는 뜻입니다. End of explanation def cwalks(steps=1000): position = 0 walk = [position] max_steps = int(steps) arr = np.random.randint(0, 2, size=max_steps) steps = np.where(arr == 0, -1, 1) walk = np.cumsum(steps) return walk plt.plot(cwalks()) %timeit wwalks() %timeit cwalks() Explanation: 4. Random Walks using cumsum() End of explanation position = 0 max_steps = 1000 walk = [position] arr = np.random.randint(0, 2, size=max_steps) steps = np.where(arr == 0, -1, 1) walk = np.cumsum(steps) walk.max() walk.min() np.abs(walk) (np.abs(walk) >= 10) (np.abs(walk) >= 10).argmax() Explanation: 생각해보기 np.cumsum(steps)와 steps.cumsum()의 차이점은 무엇일까요!? 5. More End of explanation
15,940
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 The TensorFlow Authors. Step1: モデルの保存と復元 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: サンプルデータセットの取得 ここでは、重みの保存と読み込みをデモするために、MNIST データセットを使います。デモの実行を速くするため、最初の 1,000 件のサンプルだけを使います。 Step3: モデルの定義 簡単なシーケンシャルモデルを構築することから始めます。 Step4: トレーニング中にチェックポイントを保存する 再トレーニングせずにトレーニング済みモデルを使用したり、トレーニングプロセスを中断したところから再開することもできます。tf.keras.callbacks.ModelCheckpoint コールバックを使用すると、トレーニング中でもトレーニングの終了時でもモデルを継続的に保存できます。 チェックポイントコールバックの使い方 トレーニング中にのみ重みを保存する tf.keras.callbacks.ModelCheckpoint コールバックを作成します。 Step5: この結果、エポックごとに更新される一連のTensorFlowチェックポイントファイルが作成されます。 Step6: 2 つのモデルが同じアーキテクチャを共有している限り、それらの間で重みを共有できます。したがって、重みのみからモデルを復元する場合は、元のモデルと同じアーキテクチャでモデルを作成してから、その重みを設定します。 次に、トレーニングされていない新しいモデルを再構築し、テストセットで評価します。トレーニングされていないモデルは、偶然誤差(10% 以下の正解率)で実行されます。 Step7: 次に、チェックポイントから重みをロードし、再び評価します。 Step8: チェックポイントコールバックのオプション このコールバックには、チェックポイントに一意な名前をつけたり、チェックポイントの頻度を調整するためのオプションがあります。 新しいモデルをトレーニングし、5 エポックごとに一意な名前のチェックポイントを保存します。 Step9: 次に、できあがったチェックポイントを確認し、最後のものを選択します。 Step10: 注意 Step11: これらのファイルは何? 上記のコードは、バイナリ形式でトレーニングされた重みのみを含む checkpoint 形式のファイルのコレクションに重みを格納します。チェックポイントには、次のものが含まれます。 1 つ以上のモデルの重みのシャード。 どの重みがどのシャードに格納されているかを示すインデックスファイル。 一台のマシンでモデルをトレーニングしている場合は、接尾辞が .data-00000-of-00001 のシャードが 1 つあります。 手動で重みを保存する Model.save_weights メソッドを使用して手動で重みを保存します。デフォルトでは、tf.keras、特に save_weights は、<code>.ckpt</code> 拡張子を持つ TensorFlow の<a>チェックポイント</a>形式を使用します (HDF5 に .h5 拡張子を付けて保存する方法については、モデルの保存とシリアル化ガイドを参照してください)。 Step12: モデル全体の保存 model.save を呼ぶことで、モデルのアーキテクチャや重み、トレーニングの設定を単一のファイル/フォルダに保存できます。これにより、オリジナルの Python コード (*) にアクセスせずにモデルを使えるように、モデルをエクスポートできます。オプティマイザの状態も復旧されるため、中断したところからトレーニングを再開できます。 モデル全体を 2 つの異なるファイル形式 (SavedModelとHDF5) で保存できます。TensorFlow SavedModel 形式は、TF2.x のデフォルトのファイル形式ですが、モデルは HDF5 形式で保存できます。モデル全体を 2 つのファイル形式で保存する方法の詳細については、以下の説明をご覧ください。 完全に動作するモデルを保存すると TensorFlow.js (Saved Model、HDF5) で読み込んで、ブラウザ上でトレーニングや実行したり、TensorFlow Lite (Saved Model、HDF5) を用いてモバイルデバイス上で実行できるよう変換することもできるので非常に便利です。 カスタムのオブジェクト (クラスを継承したモデルやレイヤー) は保存や読み込みを行うとき、特別な注意を必要とします。以下のカスタムオブジェクトの保存*を参照してください。 SavedModel フォーマットとして SavedModel 形式は、モデルをシリアル化するもう 1 つの方法です。この形式で保存されたモデルは、tf.keras.models.load_model を使用して復元でき、TensorFlow Serving と互換性があります。SavedModel をサービングおよび検査する方法についての詳細は、SavedModel ガイドを参照してください。以下のセクションでは、モデルを保存および復元する手順を示します。 Step13: SavedModel 形式は、protobuf バイナリと TensorFlow チェックポイントを含むディレクトリです。保存されたモデルディレクトリを調べます。 Step14: 保存したモデルから新しい Keras モデルを再度読み込みます。 Step15: 復元されたモデルは、元のモデルと同じ引数でコンパイルされます。読み込まれたモデルで評価と予測を実行してみてください。 Step16: HDF5ファイルとして Keras は HDF5 の標準に従ったベーシックな保存形式も提供します。 Step17: 保存したファイルを使ってモデルを再作成します。 Step18: 正解率を検査します。
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. Explanation: Copyright 2019 The TensorFlow Authors. End of explanation !pip install pyyaml h5py # Required to save models in HDF5 format import os import tensorflow as tf from tensorflow import keras print(tf.version.VERSION) Explanation: モデルの保存と復元 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/save_and_load"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/keras/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td> </table> モデルの進行状況は、トレーニング中およびトレーニング後に保存できます。モデルが中断したところから再開できるので、長いトレーニング時間を回避できます。また、保存することによりモデルを共有したり、他の人による作業の再現が可能になります。研究モデルや手法を公開する場合、ほとんどの機械学習の実践者は次を共有します。 モデルを構築するプログラム モデルのトレーニング済みモデルの重みやパラメータ このデータを共有することで、他の人がモデルがどの様に動作するかを理解したり、新しいデータに試してみたりすることが容易になります。 注意: TensorFlow モデルはコードであり、信頼できないコードに注意する必要があります。詳細については、TensorFlow を安全に使用するをご覧ください。 オプション 使用している API に応じて、さまざまな方法で TensorFlow モデルを保存できます。このガイドでは、高レベル API である tf.keras を使用して、TensorFlow でモデルを構築およびトレーニングします。他のアプローチについては、TensorFlow 保存と復元ガイドまたは Eager で保存するを参照してください。 設定 インストールとインポート TensorFlow をインストールし、依存関係インポートします。 End of explanation (train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data() train_labels = train_labels[:1000] 
test_labels = test_labels[:1000] train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0 test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0 Explanation: サンプルデータセットの取得 ここでは、重みの保存と読み込みをデモするために、MNIST データセットを使います。デモの実行を速くするため、最初の 1,000 件のサンプルだけを使います。 End of explanation # Define a simple sequential model def create_model(): model = tf.keras.models.Sequential([ keras.layers.Dense(512, activation='relu', input_shape=(784,)), keras.layers.Dropout(0.2), keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[tf.metrics.SparseCategoricalAccuracy()]) return model # Create a basic model instance model = create_model() # Display the model's architecture model.summary() Explanation: モデルの定義 簡単なシーケンシャルモデルを構築することから始めます。 End of explanation checkpoint_path = "training_1/cp.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) # Create a callback that saves the model's weights cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, save_weights_only=True, verbose=1) # Train the model with the new callback model.fit(train_images, train_labels, epochs=10, validation_data=(test_images, test_labels), callbacks=[cp_callback]) # Pass callback to training # This may generate warnings related to saving the state of the optimizer. # These warnings (and similar warnings throughout this notebook) # are in place to discourage outdated usage, and can be ignored. Explanation: トレーニング中にチェックポイントを保存する 再トレーニングせずにトレーニング済みモデルを使用したり、トレーニングプロセスを中断したところから再開することもできます。tf.keras.callbacks.ModelCheckpoint コールバックを使用すると、トレーニング中でもトレーニングの終了時でもモデルを継続的に保存できます。 チェックポイントコールバックの使い方 トレーニング中にのみ重みを保存する tf.keras.callbacks.ModelCheckpoint コールバックを作成します。 End of explanation os.listdir(checkpoint_dir) Explanation: この結果、エポックごとに更新される一連のTensorFlowチェックポイントファイルが作成されます。 End of explanation # Create a basic model instance model = create_model() # Evaluate the model loss, acc = model.evaluate(test_images, test_labels, verbose=2) print("Untrained model, accuracy: {:5.2f}%".format(100 * acc)) Explanation: 2 つのモデルが同じアーキテクチャを共有している限り、それらの間で重みを共有できます。したがって、重みのみからモデルを復元する場合は、元のモデルと同じアーキテクチャでモデルを作成してから、その重みを設定します。 次に、トレーニングされていない新しいモデルを再構築し、テストセットで評価します。トレーニングされていないモデルは、偶然誤差(10% 以下の正解率)で実行されます。 End of explanation # Loads the weights model.load_weights(checkpoint_path) # Re-evaluate the model loss, acc = model.evaluate(test_images, test_labels, verbose=2) print("Restored model, accuracy: {:5.2f}%".format(100 * acc)) Explanation: 次に、チェックポイントから重みをロードし、再び評価します。 End of explanation # Include the epoch in the file name (uses `str.format`) checkpoint_path = "training_2/cp-{epoch:04d}.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) batch_size = 32 # Create a callback that saves the model's weights every 5 epochs cp_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_path, verbose=1, save_weights_only=True, save_freq=5*batch_size) # Create a new model instance model = create_model() # Save the weights using the `checkpoint_path` format model.save_weights(checkpoint_path.format(epoch=0)) # Train the model with the new callback model.fit(train_images, train_labels, epochs=50, batch_size=batch_size, callbacks=[cp_callback], validation_data=(test_images, test_labels), verbose=0) Explanation: チェックポイントコールバックのオプション このコールバックには、チェックポイントに一意な名前をつけたり、チェックポイントの頻度を調整するためのオプションがあります。 新しいモデルをトレーニングし、5 エポックごとに一意な名前のチェックポイントを保存します。 End of explanation os.listdir(checkpoint_dir) latest = tf.train.latest_checkpoint(checkpoint_dir) latest Explanation: 
次に、できあがったチェックポイントを確認し、最後のものを選択します。 End of explanation # Create a new model instance model = create_model() # Load the previously saved weights model.load_weights(latest) # Re-evaluate the model loss, acc = model.evaluate(test_images, test_labels, verbose=2) print("Restored model, accuracy: {:5.2f}%".format(100 * acc)) Explanation: 注意: デフォルトの TensorFlow 形式では、最新の 5 つのチェックポイントのみが保存されます。 テストのため、モデルをリセットし最後のチェックポイントを読み込みます。 End of explanation # Save the weights model.save_weights('./checkpoints/my_checkpoint') # Create a new model instance model = create_model() # Restore the weights model.load_weights('./checkpoints/my_checkpoint') # Evaluate the model loss, acc = model.evaluate(test_images, test_labels, verbose=2) print("Restored model, accuracy: {:5.2f}%".format(100 * acc)) Explanation: これらのファイルは何? 上記のコードは、バイナリ形式でトレーニングされた重みのみを含む checkpoint 形式のファイルのコレクションに重みを格納します。チェックポイントには、次のものが含まれます。 1 つ以上のモデルの重みのシャード。 どの重みがどのシャードに格納されているかを示すインデックスファイル。 一台のマシンでモデルをトレーニングしている場合は、接尾辞が .data-00000-of-00001 のシャードが 1 つあります。 手動で重みを保存する Model.save_weights メソッドを使用して手動で重みを保存します。デフォルトでは、tf.keras、特に save_weights は、<code>.ckpt</code> 拡張子を持つ TensorFlow の<a>チェックポイント</a>形式を使用します (HDF5 に .h5 拡張子を付けて保存する方法については、モデルの保存とシリアル化ガイドを参照してください)。 End of explanation # Create and train a new model instance. model = create_model() model.fit(train_images, train_labels, epochs=5) # Save the entire model as a SavedModel. !mkdir -p saved_model model.save('saved_model/my_model') Explanation: モデル全体の保存 model.save を呼ぶことで、モデルのアーキテクチャや重み、トレーニングの設定を単一のファイル/フォルダに保存できます。これにより、オリジナルの Python コード (*) にアクセスせずにモデルを使えるように、モデルをエクスポートできます。オプティマイザの状態も復旧されるため、中断したところからトレーニングを再開できます。 モデル全体を 2 つの異なるファイル形式 (SavedModelとHDF5) で保存できます。TensorFlow SavedModel 形式は、TF2.x のデフォルトのファイル形式ですが、モデルは HDF5 形式で保存できます。モデル全体を 2 つのファイル形式で保存する方法の詳細については、以下の説明をご覧ください。 完全に動作するモデルを保存すると TensorFlow.js (Saved Model、HDF5) で読み込んで、ブラウザ上でトレーニングや実行したり、TensorFlow Lite (Saved Model、HDF5) を用いてモバイルデバイス上で実行できるよう変換することもできるので非常に便利です。 カスタムのオブジェクト (クラスを継承したモデルやレイヤー) は保存や読み込みを行うとき、特別な注意を必要とします。以下のカスタムオブジェクトの保存*を参照してください。 SavedModel フォーマットとして SavedModel 形式は、モデルをシリアル化するもう 1 つの方法です。この形式で保存されたモデルは、tf.keras.models.load_model を使用して復元でき、TensorFlow Serving と互換性があります。SavedModel をサービングおよび検査する方法についての詳細は、SavedModel ガイドを参照してください。以下のセクションでは、モデルを保存および復元する手順を示します。 End of explanation # my_model directory !ls saved_model # Contains an assets folder, saved_model.pb, and variables folder. !ls saved_model/my_model Explanation: SavedModel 形式は、protobuf バイナリと TensorFlow チェックポイントを含むディレクトリです。保存されたモデルディレクトリを調べます。 End of explanation new_model = tf.keras.models.load_model('saved_model/my_model') # Check its architecture new_model.summary() Explanation: 保存したモデルから新しい Keras モデルを再度読み込みます。 End of explanation # Evaluate the restored model loss, acc = new_model.evaluate(test_images, test_labels, verbose=2) print('Restored model, accuracy: {:5.2f}%'.format(100 * acc)) print(new_model.predict(test_images).shape) Explanation: 復元されたモデルは、元のモデルと同じ引数でコンパイルされます。読み込まれたモデルで評価と予測を実行してみてください。 End of explanation # Create and train a new model instance. model = create_model() model.fit(train_images, train_labels, epochs=5) # Save the entire model to a HDF5 file. # The '.h5' extension indicates that the model should be saved to HDF5. 
model.save('my_model.h5') Explanation: HDF5ファイルとして Keras は HDF5 の標準に従ったベーシックな保存形式も提供します。 End of explanation # Recreate the exact same model, including its weights and the optimizer new_model = tf.keras.models.load_model('my_model.h5') # Show the model architecture new_model.summary() Explanation: 保存したファイルを使ってモデルを再作成します。 End of explanation loss, acc = new_model.evaluate(test_images, test_labels, verbose=2) print('Restored model, accuracy: {:5.2f}%'.format(100 * acc)) Explanation: 正解率を検査します。 End of explanation
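The section above on saving whole models notes that custom objects (カスタムオブジェクト), such as subclassed layers or models, need special handling when the model is loaded back. A minimal sketch of that case is shown below; MyCustomLayer is a hypothetical class name used only for illustration, while the custom_objects argument itself is part of the tf.keras API.
# Sketch: loading a saved model that contains a custom layer class.
# `MyCustomLayer` is hypothetical; pass every custom class the saved model uses.
restored = tf.keras.models.load_model(
    'my_model.h5',
    custom_objects={'MyCustomLayer': MyCustomLayer})
restored.summary()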
15,941
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. Step2: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following Step5: Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x. Step8: One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint Step10: Randomize Data As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation. Step12: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step17: Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note Step20: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling Step23: Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option Step26: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. Step29: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). 
Shortcut option Step32: Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model Step35: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following Step37: Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy. Step38: Hyperparameters Tune the following parameters Step40: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. Step42: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. Step45: Checkpoint The model has been saved to disk. Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
Python Code: DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile('cifar-10-python.tar.gz'): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', 'cifar-10-python.tar.gz', pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open('cifar-10-python.tar.gz') as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) Explanation: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. End of explanation %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 3 sample_id = 9999 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) Explanation: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions. End of explanation def normalize(x): Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data x_min = np.min(x) x_max = np.max(x) return (x - x_min)/(x_max - x_min) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_normalize(normalize) Explanation: Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x. End of explanation from sklearn import preprocessing def one_hot_encode(x): One hot encode a list of sample labels. Return a one-hot encoded vector for each label. 
: x: List of sample Labels : return: Numpy array of one-hot encoded labels return np.eye(10)[x] DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_one_hot_encode(one_hot_encode) Explanation: One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel. End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode) Explanation: Randomize Data As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb')) Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation import tensorflow as tf def neural_net_image_input(image_shape): Return a Tensor for a bach of image input : image_shape: Shape of the images : return: Tensor for image input. return tf.placeholder(tf.float32, shape=[None, *image_shape], name='x') def neural_net_label_input(n_classes): Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. return tf.placeholder(tf.float32, shape=[None, n_classes], name='y') def neural_net_keep_prob_input(): Return a Tensor for keep probability : return: Tensor for keep probability. return tf.placeholder(tf.float32, name='keep_prob') DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input) Explanation: Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup. 
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size. End of explanation def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor input_depth = int(x_tensor.shape[3]) output_depth = conv_num_outputs W_shape = [*conv_ksize, input_depth, output_depth] W = tf.Variable(tf.random_normal(W_shape, stddev=0.1)) b = tf.Variable(tf.zeros(output_depth)) conv_strides_shape = [1, *conv_strides, 1] x = tf.nn.conv2d(x_tensor, W, strides=conv_strides_shape, padding='SAME') x = tf.nn.bias_add(x, b) x = tf.nn.relu(x) pool_ksize_shape = [1, *pool_ksize, 1] pool_strides_shape = [1, *pool_strides, 1] x = tf.nn.max_pool(x, pool_ksize_shape, pool_strides_shape, padding='SAME') return x DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_con_pool(conv2d_maxpool) Explanation: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. You're free to use any TensorFlow package for all the other layers. End of explanation def flatten(x_tensor): Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. 
: return: A tensor of size (Batch Size, Flattened Image Size). return tf.contrib.layers.flatten(x_tensor) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_flatten(flatten) Explanation: Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation def fully_conn(x_tensor, num_outputs): Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=tf.nn.relu) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_fully_conn(fully_conn) Explanation: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. End of explanation def output(x_tensor, num_outputs): Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_output(output) Explanation: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this. End of explanation def conv_net(x, keep_prob): Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. 
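: note: in the cells below keep_prob is fed as 0.5 while training and as 1.0 whenever loss or accuracy is evaluated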
: return: Tensor that represents logits # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) x = conv2d_maxpool(x, 64, (3, 3), (1, 1), (3, 3), (2, 2)) x = conv2d_maxpool(x, 64, (4, 4), (1, 1), (3, 3), (2, 2)) x = conv2d_maxpool(x, 64, (3, 3), (1, 1), (2, 2), (2, 2)) # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) x = flatten(x) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) x = fully_conn(x, 512) x = tf.nn.dropout(x, keep_prob) x = fully_conn(x, 192) x = tf.nn.dropout(x, keep_prob) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) x = output(x, 10) # TODO: return output return x DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net) Explanation: Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob. End of explanation def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data # TODO: Implement Function session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability}) DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_train_nn(train_neural_network) Explanation: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network. 
End of explanation def print_stats(session, feature_batch, label_batch, cost, accuracy): Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.}) valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.}) print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_accuracy)) Explanation: Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy. End of explanation # TODO: Tune Parameters epochs = 50 batch_size = 512 keep_probability = 0.5 Explanation: Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or start overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... * Set keep_probability to the probability of keeping a node using dropout End of explanation DON'T MODIFY ANYTHING IN THIS CELL print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) Explanation: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. End of explanation DON'T MODIFY ANYTHING IN THIS CELL save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path) Explanation: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): Test the saved model against the test dataset test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() Explanation: Checkpoint The model has been saved to disk. Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. End of explanation
15,942
Given the following text description, write Python code to implement the functionality described below step by step Description: Self-Adaptive Evolution Strategy (SAES) TODO Step2: (1+1)-$\sigma$-Self-Adaptation-ES Step3: Some explanations about $\sigma$ and $\tau$ Step4: Other inplementations PyAI Import required modules Step5: Define the objective function
Python Code: # Init matplotlib %matplotlib inline import matplotlib matplotlib.rcParams['figure.figsize'] = (8, 8) # Setup PyAI import sys sys.path.insert(0, '/Users/jdecock/git/pub/jdhp/pyai') # Set the objective function #from pyai.optimize.functions import sphere as func from pyai.optimize.functions import sphere2d as func #from pyai.optimize.functions import additive_gaussian_noise as noise from pyai.optimize.functions import multiplicative_gaussian_noise as noise #from pyai.optimize.functions import additive_poisson_noise as noise #func.noise = noise # Comment this line to use a deterministic objective function xmin = func.bounds[0] # TODO xmax = func.bounds[1] # TODO Explanation: Self-Adaptive Evolution Strategy (SAES) TODO: * http://www.scholarpedia.org/article/Evolution_strategies * Matlab / mathematica implementation * https://en.wikipedia.org/wiki/Evolution_strategy * https://en.wikipedia.org/wiki/Evolution_window http://deap.readthedocs.io/en/master/index.html The SAES algorithm TODO: this is wrong, redo this section ! Notations: * ${\cal{N}}$ denotes some independent standard Gaussian random variable * $d$ is the dimension of input vectors, $d \in \mathbb{N}^_+$ * $n$ is the current iteration index (or generation index), $n \in \mathbb{N}^_+$ Algorithm's parameters: * $K > 0$, * $\zeta \geq 0$, * $\lambda > \mu > 0$ Input: * an initial parent population $\boldsymbol{x}{1,i} \in \mathbb{R}^d$ * an initial scalar $\sigma{1,i} = 1$ with $i \in {1, \dots, \mu }$ $n\leftarrow 1$ while (stop condition) do $\quad$ Generate $\lambda$ individuals $i_j$ independently with $j \in { 1, \dots, \lambda }$ using \begin{eqnarray} \sigma_j & = & \sigma_{n, mod(j-1, \mu) + 1} \times \exp\left( \frac{1}{2d} \cal{N} \right) \ i_j & = & \boldsymbol{x}_{n, mod(j-1, \mu) + 1} + \sigma_j \cal{N}. \end{eqnarray} $\quad$ Evaluate each of them $\lceil Kn^\zeta \rceil$ times and average their fitness values $\quad$ Define $j_1, \dots, j_{\lambda}$ so that $$ \mathbb{E}{\lceil Kn^\zeta \rceil}[f(i{j_1})]\leq \mathbb{E}{\lceil Kn^\zeta \rceil}[f(i{j_2})] \leq \dots \leq \mathbb{E}{\lceil Kn^\zeta \rceil}[f(i{j_{\lambda}})] $$ $\quad$ where $\mathbb{E}_m$ denotes a sample average over $m$ resamplings. $\quad$ Update: compute $\boldsymbol{x}{n+1, k}$ and $\sigma{n+1, k}$ using \begin{eqnarray} \sigma_{n+1,k} &=& \sigma_{j_{k}}, \quad k \in {1, \dots, \mu}\ {\boldsymbol{x}{n+1,k}} &=& i{j_{k}}, \quad k \in {1, \dots, \mu} \end{eqnarray} $\quad$ $n\leftarrow n+1$ end while For more information, see http://www.scholarpedia.org/article/Evolution_strategies. A Python inplementation End of explanation import numpy as np import math This is a simple Python implementation of the (mu/1, lambda)-sigmaSA-ES as discussed in http://www.scholarpedia.org/article/Evolution_Strategies mu = 3 # mu: the number of parents lmb = 12 # lambda: the number of children rho = 1 # rho: number of parents per child selection_operator = '+' d = 2 # number of dimension of the solution space num_gen = 10 tau = 1./math.sqrt(2.*d) # self-adaptation learning rate # Init the population ########################## # "pop" array layout: # - the first mu lines contain parents # - the next lambda lines contain children # - the first column contains the individual's strategy (sigma) # - the last column contains the individual's assess (f(x)) # - the other columns contain the individual value (x) pop = np.full([mu+lmb, d+2], np.nan) pop[:mu, 0] = 1. 
# init the parents strategy to 1.0 pop[:mu, 1:-1] = np.random.normal(0., 1., size=[mu,d]) # init the parents value pop[:mu, -1] = func(pop[:mu, 1:-1].T) # evaluate parents print("Initial population:\n", pop) ## Sort parents #pop = pop[pop[:,-1].argsort()] #print(pop) for gen in range(num_gen): # Make children ################################ if rho == 1: # Each child is made from one randomly selected parent pop[mu:,:] = pop[np.random.randint(mu, size=lmb)] elif rho == mu: # Recombine all parents for each child raise NotImplemented() # TODO elif 1 < rho < mu: # Recombine rho randomly selected parents for each child raise NotImplemented() # TODO else: raise ValueError() pop[mu:,-1] = np.nan #print("Children:\n", pop) # Mutate children's sigma ###################### pop[mu:,0] = pop[mu:,0] * np.exp(tau * np.random.normal(size=lmb)) #print("Mutated children (sigma):\n", pop) # Mutate children's value ###################### pop[mu:,1:-1] = pop[mu:,1:-1] + pop[mu:,1:-1] * np.random.normal(size=[lmb,d]) #print("Mutated children (value):\n", pop) # Evaluate children ############################ pop[mu:, -1] = func(pop[mu:, 1:-1].T) #print("Evaluated children:\n", pop) # Select the best individuals ################## if selection_operator == '+': # *plus-selection* operator pop = pop[pop[:,-1].argsort()] elif selection_operator == ',': # *comma-selection* operator pop[:lmb,:] = pop[pop[mu:,-1].argsort()] # TODO: check this... else: raise ValueError() pop[mu:, :] = np.nan #print("Selected individuals for the next generation:\n", pop) print("Result:\n", pop[:mu, :]) Explanation: (1+1)-$\sigma$-Self-Adaptation-ES End of explanation tau import random sigma_list = [1.] for i in range(1000): sigma_list.append(sigma_list[-1] * math.exp(tau * random.normalvariate(0., 1.))) # mutate sigma #sigma = sigma * exp(tau*randn) # mutate sigma plt.loglog(sigma_list); x = np.linspace(-4, 4, 100) y1 = np.exp(1./math.sqrt(1.*d) * x) y2 = np.exp(1./math.sqrt(2.*d) * x) y3 = np.exp(1./math.sqrt(3.*d) * x) y4 = np.exp(1./(2.*d) * x) plt.plot(x, y1, label="tau1") plt.plot(x, y2, label="tau2") plt.plot(x, y3, label="tau3") plt.plot(x, y4, label="tau4") plt.legend(); tau1 = 1./math.sqrt(1.*d) tau2 = 1./math.sqrt(2.*d) tau3 = 1./math.sqrt(3.*d) tau4 = 1./(2.*d) x1 = np.exp(tau1 * np.random.normal(size=[100000])) x2 = np.exp(tau2 * np.random.normal(size=[100000])) x3 = np.exp(tau3 * np.random.normal(size=[100000])) x4 = np.exp(tau4 * np.random.normal(size=[100000])) bins = np.linspace(0, 10, 100) plt.hist(x1, bins=bins, alpha=0.5, label=r"$\exp\left(\frac{1}{\sqrt{d}} \mathcal{N}(0,1)\right)$", lw=2, histtype='step') plt.hist(x2, bins=bins, alpha=0.5, label=r"$\exp\left(\frac{1}{\sqrt{2d}} \mathcal{N}(0,1)\right)$", lw=2, histtype='step') plt.hist(x3, bins=bins, alpha=0.5, label=r"$\exp\left(\frac{1}{\sqrt{3d}} \mathcal{N}(0,1)\right)$", lw=2, histtype='step') plt.hist(x4, bins=bins, alpha=0.5, label=r"$\exp\left(\frac{1}{2d} \mathcal{N}(0,1)\right)$", lw=2, histtype='step') plt.xlim(-0.25, 7) plt.axvline(1, color='k', linestyle='dotted') plt.legend(fontsize='x-large'); Explanation: Some explanations about $\sigma$ and $\tau$ End of explanation # Init matplotlib %matplotlib inline import matplotlib matplotlib.rcParams['figure.figsize'] = (8, 8) # Setup PyAI import sys sys.path.insert(0, '/Users/jdecock/git/pub/jdhp/pyai') import numpy as np import time from pyai.optimize import SAES # Plot functions from pyai.optimize.utils import plot_contour_2d_solution_space from pyai.optimize.utils import plot_2d_solution_space from 
pyai.optimize.utils import array_list_to_array from pyai.optimize.utils import plot_fx_wt_iteration_number from pyai.optimize.utils import plot_err_wt_iteration_number from pyai.optimize.utils import plot_err_wt_execution_time from pyai.optimize.utils import plot_err_wt_num_feval Explanation: Other inplementations PyAI Import required modules End of explanation ## Objective function: Rosenbrock function (Scipy's implementation) #func = scipy.optimize.rosen # Set the objective function #from pyai.optimize.functions import sphere as func from pyai.optimize.functions import sphere2d as func #from pyai.optimize.functions import additive_gaussian_noise as noise from pyai.optimize.functions import multiplicative_gaussian_noise as noise #from pyai.optimize.functions import additive_poisson_noise as noise func.noise = noise # Comment this line to use a deterministic objective function xmin = func.bounds[0] # TODO xmax = func.bounds[1] # TODO %%time saes = SAES() func.do_eval_logs = True func.reset_eval_counters() func.reset_eval_logs() res = saes.minimize(func, init_pop_mu=0., init_pop_sigma=1.) func.do_eval_logs = False eval_x_array = np.array(func.eval_logs_dict['x']).T eval_error_array = np.array(func.eval_logs_dict['fx']) - func(func.arg_min) res plot_contour_2d_solution_space(func, xmin=xmin, xmax=xmax, xstar=res, xvisited=eval_x_array, title="SAES"); plot_err_wt_num_feval(eval_error_array, x_log=True, y_log=True) %%time eval_error_array_list = [] NUM_RUNS = 100 for run_index in range(NUM_RUNS): saes = SAES() func.do_eval_logs = True func.reset_eval_counters() func.reset_eval_logs() res = saes.minimize(func, init_pop_mu=0., init_pop_sigma=1., lmb=6) func.do_eval_logs = False eval_error_array = np.array(func.eval_logs_dict['fx']) - func(func.arg_min) print("x* =", res) eval_error_array_list.append(eval_error_array); plot_err_wt_num_feval(array_list_to_array(eval_error_array_list), x_log=True, y_log=True, plot_option="mean") Explanation: Define the objective function End of explanation
15,943
Given the following text description, write Python code to implement the functionality described below step by step Description: Intro to Cython Why Cython Outline Step1: Now, let's time this Step2: Not too bad, but this can add up. Let's see if Cython can do better Step3: That's a little bit faster, which is nice since all we did was to call Cython on the exact same code. But can we do better? Step4: The final bit of "easy" Cython optimization is "declaring" the variables inside the function Step5: 4X speedup with so little effort is pretty nice. What else can we do? Cython has a nice "-a" flag (for annotation) that can provide clues about why your code is slow. Step6: That's a lot of yellow still! How do we reduce this? Exercise Step7: Part 2 Step8: Rubbish! How do we fix this? Exercise Step9: Part 3 Step10: Exercise (if time) Write a parallel matrix multiplication routine. Part 4 Step11: Example Step12: Using Cython in production code Use setup.py to build your Cython files. ```python from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext import numpy as np setup( cmdclass = {'build_ext'
Python Code: def f(x): y = x**4 - 3*x return y def integrate_f(a, b, n): dx = (b - a) / n dx2 = dx / 2 s = f(a) * dx2 for i in range(1, n): s += f(a + i * dx) * dx s += f(b) * dx2 return s Explanation: Intro to Cython Why Cython Outline: Speed up Python code Interact with NumPy arrays Release GIL and get parallel performance Wrap C/C++ code Part 1: speed up your Python code We want to integrate the function $f(x) = x^4 - 3x$. End of explanation from scipy.integrate import quad %timeit quad(f, -100,100) %timeit integrate_f(-100, 100, int(1e5)) Explanation: Now, let's time this: End of explanation %load_ext cython %%cython def f2(x): y = x**4 - 3*x return y def integrate_f2(a, b, n): dx = (b - a) / n dx2 = dx / 2 s = f2(a) * dx2 for i in range(1, n): s += f2(a + i * dx) * dx s += f2(b) * dx2 return s %timeit integrate_f2(-100, 100, int(1e5)) Explanation: Not too bad, but this can add up. Let's see if Cython can do better: End of explanation %%cython def f3(double x): y = x**4 - 3*x return y def integrate_f3(double a, double b, int n): dx = (b - a) / n dx2 = dx / 2 s = f3(a) * dx2 for i in range(1, n): s += f3(a + i * dx) * dx s += f3(b) * dx2 return s %timeit integrate_f3(-100, 100, int(1e5)) Explanation: That's a little bit faster, which is nice since all we did was to call Cython on the exact same code. But can we do better? End of explanation %%cython def f4(double x): y = x**4 - 3*x return y def integrate_f4(double a, double b, int n): cdef: double dx = (b - a) / n double dx2 = dx / 2 double s = f4(a) * dx2 int i = 0 for i in range(1, n): s += f4(a + i * dx) * dx s += f4(b) * dx2 return s %timeit integrate_f4(-100, 100, int(1e5)) Explanation: The final bit of "easy" Cython optimization is "declaring" the variables inside the function: End of explanation %%cython -a def f4(double x): y = x**4 - 3*x return y def integrate_f4(double a, double b, int n): cdef: double dx = (b - a) / n double dx2 = dx / 2 double s = f4(a) * dx2 int i = 0 for i in range(1, n): s += f4(a + i * dx) * dx s += f4(b) * dx2 return s Explanation: 4X speedup with so little effort is pretty nice. What else can we do? Cython has a nice "-a" flag (for annotation) that can provide clues about why your code is slow. End of explanation %%cython -a #cython: cdivision=True #import cython cdef double f5(double x): y = x**4 - 3*x return y def integrate_f6(double a, double b, int n): cdef: double dx = (b - a) / n double dx2 = dx / 2 double s = f5(a) * dx2 int i = 0 for i in range(1, n): s += f5(a + i * dx) * dx s += f5(b) * dx2 return s %timeit integrate_f6(-100, 100, int(1e5)) Explanation: That's a lot of yellow still! How do we reduce this? Exercise: change the f4 declaration to C End of explanation import numpy as np def mean3filter(arr): arr_out = np.empty_like(arr) for i in range(1, arr.shape[0] - 1): arr_out[i] = np.sum(arr[i-1 : i+1]) / 3 arr_out[0] = (arr[0] + arr[1]) / 2 arr_out[-1] = (arr[-1] + arr[-2]) / 2 return arr_out %timeit mean3filter(np.random.rand(1e5)) %%cython import cython import numpy as np @cython.boundscheck(False) def mean3filter2(double[::1] arr): cdef double[::1] arr_out = np.empty_like(arr) cdef int i for i in range(1, arr.shape[0]-1): arr_out[i] = np.sum(arr[i-1 : i+1]) / 3 arr_out[0] = (arr[0] + arr[1]) / 2 arr_out[-1] = (arr[-1] + arr[-2]) / 2 return np.asarray(arr_out) %timeit np.convolve(np.random.rand(1e5), np.array([1.,1.,1.]), 'same') %timeit mean3filter2(np.random.rand(1e5)) Explanation: Part 2: work with NumPy arrays This is a very small subset of Python. 
Most scientific application deal not with single values, but with arrays of data. End of explanation %%cython -a import cython import numpy as np @cython.boundscheck(False) def mean3filter2a(double[::1] arr): # ::1 means that the array is contiguous cdef double[::1] arr_out = np.empty_like(arr) cdef int i for i in range(1, arr.shape[0]-1): #for j in range(3): arr_out[i] = arr[i-1] + arr[i] + arr[i+1] arr_out[i] *= 0.333333333333333333333333 #arr_out[i] = np.sum(arr[i-1 : i+1]) / 3 arr_out[0] = (arr[0] + arr[1]) / 2 arr_out[-1] = (arr[-1] + arr[-2]) / 2 return np.asarray(arr_out) %timeit mean3filter2a(np.random.rand(1e5)) Explanation: Rubbish! How do we fix this? Exercise: use %%cython -a to speed up the code End of explanation %%cython -a import cython from cython.parallel import prange import numpy as np @cython.boundscheck(False) def mean3filter3a(double[::1] arr, double[::1] out): cdef int i, j, k = arr.shape[0]-1 for i in range(1, k-1): for j in range(i-1, i+1): out[i] += arr[j] out[i] /= 3 out[0] = (arr[0] + arr[1]) / 2 out[-1] = (arr[-1] + arr[-2]) / 2 return np.asarray(out) %%cython -a import cython from cython.parallel import prange import numpy as np @cython.boundscheck(False) def mean3filter3(double[::1] arr, double[::1] out): cdef int i, j, k = arr.shape[0]-1 with nogil: for i in prange(1, k-1, schedule='static', chunksize=(k-2) // 2, num_threads=4): for j in range(i-1, i+1): out[i] += arr[j] out[i] /= 3 out[0] = (arr[0] + arr[1]) / 2 out[-1] = (arr[-1] + arr[-2]) / 2 return np.asarray(out) %%cython -a import cython from cython.parallel import prange import numpy as np @cython.boundscheck(False) def mean3filter3b(double[::1] arr, double[::1] out): cdef int i, j, k = arr.shape[0]-1 for i in range(1, k-1): for j in range(i-1, i+1): out[i] += arr[j] with nogil: for i in prange(1, k-1, schedule='static', chunksize=(k-2) // 2, num_threads=4): out[i] /= 3 out[0] = (arr[0] + arr[1]) / 2 out[-1] = (arr[-1] + arr[-2]) / 2 return np.asarray(out) del rin, rout rin = np.random.rand(1e8) rout = np.empty_like(rin) %timeit mean3filter3b(rin, rout) %timeit mean3filter3(rin, rout) Explanation: Part 3: write parallel code Warning:: Dragons afoot. End of explanation %%cython -a # distutils: language=c++ import cython from libcpp.vector cimport vector @cython.boundscheck(False) def build_list_with_vector(double[::1] in_arr): cdef vector[double] out cdef int i for i in range(in_arr.shape[0]): out.push_back(in_arr[i]) return out build_list_with_vector(np.random.rand(10)) Explanation: Exercise (if time) Write a parallel matrix multiplication routine. 
Part 4: interact with C/C++ code End of explanation %%cython -a #distutils: language=c++ from cython.operator cimport dereference as deref, preincrement as inc from libcpp.vector cimport vector from libcpp.map cimport map as cppmap cdef class Graph: cdef cppmap[int, vector[int]] _adj cpdef int has_node(self, int node): return self._adj.find(node) != self._adj.end() cdef void add_node(self, int new_node): cdef vector[int] out if not self.has_node(new_node): self._adj[new_node] = out def add_edge(self, int u, int v): self.add_node(u) self.add_node(v) self._adj[u].push_back(v) self._adj[v].push_back(u) def __getitem__(self, int u): return self._adj[u] cdef vector[int] _degrees(self): cdef vector[int] deg cdef int first = 0 cdef vector[int] edges cdef cppmap[int, vector[int]].iterator it = self._adj.begin() while it != self._adj.end(): deg.push_back(deref(it).second.size()) it = inc(it) return deg def degrees(self): return self._degrees() g0 = Graph() g0.add_edge(1, 5) g0.add_edge(1, 6) g0[1] g0.has_node(1) g0.degrees() import networkx as nx g = nx.barabasi_albert_graph(100000, 6) with open('graph.txt', 'w') as fout: for u, v in g.edges_iter(): fout.write('%i,%i\n' % (u, v)) %timeit list(g.degree()) myg = Graph() def line2edges(line): u, v = map(int, line.rstrip().split(',')) return u, v edges = map(line2edges, open('graph.txt')) for u, v in edges: myg.add_edge(u, v) %timeit mydeg = myg.degrees() Explanation: Example: C++ int graph End of explanation import numpy as np from mean3 import mean3filter mean3filter(np.random.rand(10)) Explanation: Using Cython in production code Use setup.py to build your Cython files. ```python from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext import numpy as np setup( cmdclass = {'build_ext': build_ext}, ext_modules = [ Extension("prange_demo", ["prange_demo.pyx"], include_dirs=[np.get_include()], extra_compile_args=['-fopenmp'], extra_link_args=['-fopenmp', '-lgomp']), ] ) ``` Exercise Write a Cython module with a setup.py to run the mean-3 filter, then import from the notebook. End of explanation
15,944
Given the following text description, write Python code to implement the functionality described below step by step Description: Trading Framework This framework is developed based on Peter Henry https Step1: Create a new OpenAI Gym environment with the customised Trading environment .initialise_simulator() must be invoked after env.make('trading-v0') . Within this function, provide these arguments Step2: States map states_map is a discretized observation space bounded by the extreme values of features with an interval of 0.5. Also I use trade duration and boolean of active trade. This observations are dinamical, while the algorithm runs. Step4: The magic (Deep Q-Network) The point of Baselines OpenAI is set of high-quality implementations of reinforcement learning algorithms. A lot of projects for reinforcement trading uses their own implementations, causing small bugs or hard to maintenance/improvement. There are a lot of good resources to drill down on this topic. But well above, the core of Q_learning and DQNs can express with the next diagrams Learning resources Step5: And define run_test function to use in the end of every episode Step6: Running the enviroment! At this point, we can start up the enviroment and run the episodes. The most important is
Python Code: csv = "data/EURUSD60.csv" Explanation: Trading Framework This framework is developed based on Peter Henry https://github.com/Henry-bee/gym_trading/ which in turn on developed of Tito Ingargiola's https://github.com/hackthemarket/gym-trading. First, define the address for the CSV data End of explanation env = gym.make('trading-v0') env.initialise_simulator(csv, trade_period=50, train_split=0.7) Explanation: Create a new OpenAI Gym environment with the customised Trading environment .initialise_simulator() must be invoked after env.make('trading-v0') . Within this function, provide these arguments: csv_name: Address of the data trade_period: (int), Max of duration of each trades. Default: 1000 train_split: (float), Percentage of data set for training. Default: 0.7 End of explanation env.sim.states Explanation: States map states_map is a discretized observation space bounded by the extreme values of features with an interval of 0.5. Also I use trade duration and boolean of active trade. This observations are dinamical, while the algorithm runs. End of explanation def model(inpt, num_actions, scope, reuse=False): This model takes as input an observation and returns values of all actions. with tf.variable_scope(scope, reuse=reuse): out = inpt out = layers.fully_connected(out, num_outputs=128, activation_fn=tf.nn.tanh) out = layers.fully_connected(out, num_outputs=64, activation_fn=tf.nn.tanh) out = layers.fully_connected(out, num_outputs=32, activation_fn=tf.nn.tanh) out = layers.fully_connected(out, num_outputs=num_actions, activation_fn=None) return out Explanation: The magic (Deep Q-Network) The point of Baselines OpenAI is set of high-quality implementations of reinforcement learning algorithms. A lot of projects for reinforcement trading uses their own implementations, causing small bugs or hard to maintenance/improvement. There are a lot of good resources to drill down on this topic. But well above, the core of Q_learning and DQNs can express with the next diagrams Learning resources: http://karpathy.github.io/2016/05/31/rl/ http://minpy.readthedocs.io/en/latest/tutorial/rl_policy_gradient_tutorial/rl_policy_gradient.html http://pemami4911.github.io/blog/2016/08/21/ddpg-rl.html http://kvfrans.com/simple-algoritms-for-solving-cartpole/ https://medium.com/@awjuliani/super-simple-reinforcement-learning-tutorial-part-1-fd544fab149 https://dataorigami.net/blogs/napkin-folding/79031811-multi-armed-bandits Set the model So, let's get our hands dirty. 
First set our network End of explanation def run_test(env, act, episodes=1, final_test=False): obs = env._reset(train=False) start = env.sim.train_end_index + 1 end = env.sim.count - 2 for episode in range(episodes): done = False while done is False: action = act(obs[None]) obs, reward, done, info = env.step(action) if not final_test: journal = pd.DataFrame(env.portfolio.journal) profit = journal["Profit"].sum() return env.portfolio.average_profit_per_trade, profit else: print("Training period %s - %s" % (env.sim.date_time[start], env.sim.date_time[end])) print("Average Reward is %s" % (env.portfolio.average_profit_per_trade)) if final_test: env._generate_summary_stats() Explanation: And define run_test function to use in the end of every episode End of explanation with U.make_session(8): act, train, update_target, debug = deepq.build_train( make_obs_ph=lambda name: U.BatchInput(env.observation_space.shape, name=name), q_func=model, num_actions=env.action_space.n, optimizer=tf.train.AdamOptimizer(learning_rate=5e-4), ) replay_buffer = ReplayBuffer(50000) # Create the schedule for exploration starting from 1 (every action is random) down to # 0.02 (98% of actions are selected according to values predicted by the model). exploration = LinearSchedule(schedule_timesteps=10000, initial_p=1.0, final_p=0.02) # Initialize the parameters and copy them to the target network. U.initialize() update_target() episode_rewards = [0.0] obs = env.reset() l_mean_episode_reward = [] for t in itertools.count(): # Take action and update exploration to the newest value action = act(obs[None], update_eps=exploration.value(t))[0] new_obs, rew, done, _ = env.step(action) # Store transition in the replay buffer. replay_buffer.add(obs, action, rew, new_obs, float(done)) obs = new_obs episode_rewards[-1] += rew is_solved = np.mean(episode_rewards[-101:-1]) > 500 or t >= 300000 is_solved = is_solved and len(env.portfolio.journal) > 2 if done: journal = pd.DataFrame(env.portfolio.journal) profit = journal["Profit"].sum() try: print("-------------------------------------") print("steps | {:}".format(t)) print("episodes | {}".format(len(episode_rewards))) print("% time spent exploring | {}".format(int(100 * exploration.value(t)))) print("--") l_mean_episode_reward.append(round(np.mean(episode_rewards[-101:-1]), 1)) print("mean episode reward | {:}".format(l_mean_episode_reward[-1])) print("Total operations | {}".format(len(env.portfolio.journal))) print("Avg duration trades | {}".format(round(journal["Trade Duration"].mean(), 2))) print("Total profit | {}".format(round(profit), 1)) print("Avg profit per trade | {}".format(round(env.portfolio.average_profit_per_trade, 3))) print("--") reward_test, profit = run_test(env=env, act=act) print("Total profit test: > {}".format(round(profit, 2))) print("Avg profit per trade test > {}".format(round(reward_test, 3))) print("-------------------------------------") except Exception as e: print("Exception: ", e) # Update target network periodically. obs = env.reset() episode_rewards.append(0) if is_solved: # Show off the result env._generate_summary_stats() run_test(env, act, final_test=True) break else: # Minimize the error in Bellman's equation on a batch sampled from replay buffer. 
if t > 500: obses_t, actions, rewards, obses_tp1, dones = replay_buffer.sample(32) train(obses_t, actions, rewards, obses_tp1, dones, np.ones_like(rewards)) if t % 500 == 0: update_target() %matplotlib inline import matplotlib.pyplot as plt plt.plot(l_mean_episode_reward) plt.xlabel('Episode') plt.ylabel('Total Reward') plt.show() Explanation: Running the enviroment! At this point, we can start up the enviroment and run the episodes. The most important is: Set the episode_rewards with the reward that we want. For example if we want maximice each trade: episode_rewards[-1] += rew Set the solved function. The training will stop when the outcome of function get True. For example: is_solved = np.mean(episode_rewards[-101:-1]) &gt; 1000 or t == 100000 Instanciate deepq.build_train. That is the core of Baseline of OpenAI. build_train Creates the train function: Parameters make_obs_ph: str -> tf.placeholder or TfInput -> a function that takes a name and creates a placeholder of input with that name q_func: (tf.Variable, int, str, bool) -> tf.Variable -> the model that takes the following inputs: observation_in: object -> the output of observation placeholder num_actions: int -> number of actions scope: str reuse: bool -> should be passed to outer variable scope and returns a tensor of shape (batch_size, num_actions) with values of every action. num_actions: int -> number of actions reuse: bool -> whether or not to reuse the graph variables optimizer: tf.train.Optimizer -> optimizer to use for the Q-learning objective. grad_norm_clipping: float or None -> clip gradient norms to this value. If None no clipping is performed. gamma: float -> discount rate. double_q: bool -> if true will use Double Q Learning (https://arxiv.org/abs/1509.06461).In general it is a good idea to keep it enabled. scope: str or VariableScope -> optional scope for variable_scope. reuse: bool or None -> whether or not the variables should be reused. To be able to reuse the scope must be given. Returns act: (tf.Variable, bool, float) -> tf.Variable -> function to select and action given observation. train: (object, np.array, np.array, object, np.array, np.array) -> np.array -> optimize the error in Bellman's equation.See the top of the file for details. update_target: () -> () -> copy the parameters from optimized Q function to the target Q function. debug: {str: function} -> a bunch of functions to print debug data like q_values. End of explanation
15,945
Given the following text description, write Python code to implement the functionality described below step by step Description: Sentiment Analysis with an RNN In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels. The architecture for this network is shown below. <img src="assets/network_diagram.png" width=400px> Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own. From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function. We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label. Step1: Data preprocessing The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit. You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string. First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words. Step2: Encoding the words The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network. Exercise Step3: Encoding the labels Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1. Exercise Step4: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters. Exercise Step5: Turns out its the final review that has zero length. But that might not always be the case, so let's make it more general. Step6: Exercise Step7: Training, Validation, Test With our data in nice shape, we'll split it into training, validation, and test sets. 
Exercise Step8: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like Step9: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability. Exercise Step10: Embedding Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights. Exercise Step11: LSTM cell <img src="assets/network_diagram.png" width=400px> Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph. To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation Step12: RNN forward pass <img src="assets/network_diagram.png" width=400px> Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network. outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state) Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer. Exercise Step13: Output We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[ Step14: Validation accuracy Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass. Step15: Batching This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size]. Step16: Training Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists. Step17: Testing
Python Code: import numpy as np import tensorflow as tf with open('../sentiment_network/reviews.txt', 'r') as f: reviews = f.read() with open('../sentiment_network/labels.txt', 'r') as f: labels = f.read() reviews[:2000] Explanation: Sentiment Analysis with an RNN In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels. The architecture for this network is shown below. <img src="assets/network_diagram.png" width=400px> Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own. From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function. We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label. End of explanation from string import punctuation all_text = ''.join([c for c in reviews if c not in punctuation]) reviews = all_text.split('\n') all_text = ' '.join(reviews) words = all_text.split() all_text[:2000] words[:100] Explanation: Data preprocessing The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit. You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string. First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words. End of explanation from collections import Counter counts = Counter(words) vocab = sorted(counts, key=counts.get, reverse=True) vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)} reviews_ints = [] for each in reviews: reviews_ints.append([vocab_to_int[word] for word in each.split()]) Explanation: Encoding the words The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network. Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0. 
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints. End of explanation labels = labels.split('\n') labels = np.array([1 if each == 'positive' else 0 for each in labels]) review_lens = Counter([len(x) for x in reviews_ints]) print("Zero-length reviews: {}".format(review_lens[0])) print("Maximum review length: {}".format(max(review_lens))) Explanation: Encoding the labels Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1. Exercise: Convert labels from positive and negative to 1 and 0, respectively. End of explanation non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0] len(non_zero_idx) reviews_ints[-1] Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters. Exercise: First, remove the review with zero length from the reviews_ints list. End of explanation reviews_ints = [reviews_ints[ii] for ii in non_zero_idx] labels = np.array([labels[ii] for ii in non_zero_idx]) Explanation: Turns out its the final review that has zero length. But that might not always be the case, so let's make it more general. End of explanation seq_len = 200 features = np.zeros((len(reviews_ints), seq_len), dtype=int) for i, row in enumerate(reviews_ints): features[i, -len(row):] = np.array(row)[:seq_len] features[:10,:100] Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector. This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data. End of explanation split_frac = 0.8 split_idx = int(len(features)*0.8) train_x, val_x = features[:split_idx], features[split_idx:] train_y, val_y = labels[:split_idx], labels[split_idx:] test_idx = int(len(val_x)*0.5) val_x, test_x = val_x[:test_idx], val_x[test_idx:] val_y, test_y = val_y[:test_idx], val_y[test_idx:] print("\t\t\tFeature Shapes:") print("Train set: \t\t{}".format(train_x.shape), "\nValidation set: \t{}".format(val_x.shape), "\nTest set: \t\t{}".format(test_x.shape)) Explanation: Training, Validation, Test With our data in nice shape, we'll split it into training, validation, and test sets. Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data. 
End of explanation lstm_size = 256 lstm_layers = 1 batch_size = 500 learning_rate = 0.001 Explanation: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like: Feature Shapes: Train set: (20000, 200) Validation set: (2500, 200) Test set: (2500, 200) Build the graph Here, we'll build the graph. First up, defining the hyperparameters. lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc. lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting. batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory. learning_rate: Learning rate End of explanation n_words = len(vocab_to_int) # Create the graph object graph = tf.Graph() # Add nodes to the graph with graph.as_default(): inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs') labels_ = tf.placeholder(tf.int32, [None, None], name='labels') keep_prob = tf.placeholder(tf.float32, name='keep_prob') Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability. Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder. End of explanation # Size of the embedding vectors (number of units in the embedding layer) embed_size = 300 with graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1)) embed = tf.nn.embedding_lookup(embedding, inputs_) Explanation: Embedding Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights. Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer as 200 units, the function will return a tensor with size [batch_size, 200]. End of explanation with graph.as_default(): # Your basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) # Getting an initial state of all zeros initial_state = cell.zero_state(batch_size, tf.float32) Explanation: LSTM cell <img src="assets/network_diagram.png" width=400px> Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. 
This isn't actually building the graph, just defining the type of cells we want in our graph. To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation: tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;) you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like lstm = tf.contrib.rnn.BasicLSTMCell(num_units) to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) Most of the time, you're network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell: cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list. So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell. Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell. Here is a tutorial on building RNNs that will help you out. End of explanation with graph.as_default(): outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state) Explanation: RNN forward pass <img src="assets/network_diagram.png" width=400px> Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network. outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state) Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer. Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed. End of explanation with graph.as_default(): predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid) cost = tf.losses.mean_squared_error(labels_, predictions) optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) Explanation: Output We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], the calculate the cost from that and labels_. 
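As an aside, and only as a hedged sketch that the rest of this notebook does not use: because the labels are 0/1, a sigmoid cross-entropy loss on raw logits is a common alternative to the mean squared error used above, assuming the same TensorFlow 1.x API as the rest of this notebook.
with graph.as_default():
    # Raw scores (no sigmoid) so the numerically stable cross-entropy op can be applied
    logits = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=None)
    xent = tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.cast(labels_, tf.float32),
                                                   logits=logits)
    cost_xent = tf.reduce_mean(xent)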
End of explanation with graph.as_default(): correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) Explanation: Validation accuracy Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass. End of explanation def get_batches(x, y, batch_size=100): n_batches = len(x)//batch_size x, y = x[:n_batches*batch_size], y[:n_batches*batch_size] for ii in range(0, len(x), batch_size): yield x[ii:ii+batch_size], y[ii:ii+batch_size] Explanation: Batching This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size]. End of explanation epochs = 10 with graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=graph) as sess: sess.run(tf.global_variables_initializer()) iteration = 1 for e in range(epochs): state = sess.run(initial_state) for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 0.5, initial_state: state} loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed) if iteration%5==0: print("Epoch: {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Train loss: {:.3f}".format(loss)) if iteration%25==0: val_acc = [] val_state = sess.run(cell.zero_state(batch_size, tf.float32)) for x, y in get_batches(val_x, val_y, batch_size): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: val_state} batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed) val_acc.append(batch_acc) print("Val acc: {:.3f}".format(np.mean(val_acc))) iteration +=1 saver.save(sess, "checkpoints/sentiment.ckpt") Explanation: Training Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists. End of explanation test_acc = [] with tf.Session(graph=graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('/output/checkpoints')) test_state = sess.run(cell.zero_state(batch_size, tf.float32)) for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: test_state} batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed) test_acc.append(batch_acc) print("Test accuracy: {:.3f}".format(np.mean(test_acc))) Explanation: Testing End of explanation
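A possible follow-up, sketched here as a hypothetical helper that is not part of the original notebook: scoring a single new review with the restored graph. It assumes the vocab_to_int mapping, seq_len, batch_size and the tensors defined above, a review containing at least one known word, and it tiles the review into a full batch so the zero state built with batch_size can be reused.
def score_review(text, sess):
    # Rough tokenization with the training vocabulary; unknown words are skipped
    tokens = [vocab_to_int[word] for word in text.lower().split() if word in vocab_to_int]
    tokens = tokens[:seq_len]
    batch = np.zeros((batch_size, seq_len), dtype=int)
    batch[:, -len(tokens):] = tokens
    state = sess.run(cell.zero_state(batch_size, tf.float32))
    feed = {inputs_: batch, keep_prob: 1, initial_state: state}
    return sess.run(predictions, feed_dict=feed)[0, 0]
# Example, inside a session after saver.restore(...): score_review("this movie was great fun", sess)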
15,946
Given the following text description, write Python code to implement the functionality described below step by step Description: UTSC Machine Learning WorkShop Cross-validation for feature selection with Linear Regression From the video series Step1: MSE is more popular than MAE because MSE "punishes" larger errors. But, RMSE is even more popular than MSE because RMSE is interpretable in the "y" units. Step2: TASK Select the best polynomial order for feature Girth to use in the tree problem. Step3: Feature engineering and selection within cross-validation iterations Normally, feature engineering and selection occurs before cross-validation Instead, perform all feature engineering and selection within each cross-validation iteration More reliable estimate of out-of-sample performance since it better mimics the application of the model to out-of-sample data
Python Code: import pandas as pd import numpy as np from sklearn.linear_model import LinearRegression from sklearn.feature_selection import SelectKBest, f_regression from sklearn.cross_validation import cross_val_score # read in the advertising dataset data = pd.read_csv('data/Advertising.csv', index_col=0) # create a Python list of three feature names feature_cols = ['TV', 'Radio', 'Newspaper'] # use the list to select a subset of the DataFrame (X) X = data[feature_cols] # select the Sales column as the response (y) y = data.Sales # 10-fold cross-validation with all three features lm = LinearRegression() MAEscores = cross_val_score(lm, X, y, cv=10, scoring='mean_absolute_error') print MAEscores Explanation: UTSC Machine Learning WorkShop Cross-validation for feature selection with Linear Regression From the video series: Introduction to machine learning with scikit-learn Agenda Put together what we learned, using cross-validation to select features for linear regression models. Practice on a different problem. Cross-validation example: feature selection Model Evaluation Metrics for Regression For classification problems, we have only used classification accuracy as our evaluation metric. What metrics can we use for regression problems? Mean Absolute Error (MAE) is the mean of the absolute value of the errors: $$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$ Mean Squared Error (MSE) is the mean of the squared errors: $$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$ Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors: $$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$ Read More http://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter Goal: Select whether the Newspaper feature should be included in the linear regression model on the advertising dataset End of explanation # The MSE scores can be calculated by: scores = cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error') print scores # fix the sign of MSE scores mse_scores = -scores print mse_scores # convert from MSE to RMSE rmse_scores = np.sqrt(mse_scores) print rmse_scores # calculate the average RMSE print rmse_scores.mean() # 10-fold cross-validation with two features (excluding Newspaper) feature_cols = ['TV', 'Radio'] X = data[feature_cols] print np.sqrt(-cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error')).mean() Explanation: MSE is more popular than MAE because MSE "punishes" larger errors. But, RMSE is even more popular than MSE because RMSE is interpretable in the "y" units. End of explanation import pydataset from pydataset import data trees=data('trees') # set up the features and the target feature_cols=["Girth", "Height"] X=trees[feature_cols] y=trees.Volume # find the cross validation score scores = cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error') print scores # find the cross validation score for higher polynomial features trees['squared'] = trees['Girth']**2 trees.head() Explanation: TASK Select the best polynomial order for feature Girth to use in the tree problem.
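One possible way to attack the task, shown only as an illustrative sketch rather than the workshop's official solution: extend the trees['squared'] pattern above to several powers of Girth and compare the cross-validated RMSE of each candidate model, reusing the same (older) scikit-learn scoring convention used in this notebook.
for degree in (1, 2, 3, 4):
    cols = ["Height"]
    for p in range(1, degree + 1):
        name = "Girth_%d" % p
        trees[name] = trees["Girth"] ** p
        cols.append(name)
    rmse = np.sqrt(-cross_val_score(LinearRegression(), trees[cols], y, cv=10,
                                    scoring='mean_squared_error')).mean()
    print degree, rmse
The order with the lowest cross-validated RMSE is the one to keep; higher orders that only reduce training error will tend to score worse here.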
End of explanation from IPython.core.display import HTML def css_styling(): styles = open("styles/custom.css", "r").read() return HTML(styles) css_styling() Explanation: Feature engineering and selection within cross-validation iterations Normally, feature engineering and selection occurs before cross-validation Instead, perform all feature engineering and selection within each cross-validation iteration More reliable estimate of out-of-sample performance since it better mimics the application of the model to out-of-sample data End of explanation
15,947
Given the following text description, write Python code to implement the functionality described below step by step Description: Now that we understand variables, we can start to develop more complex code structures which can build more interesting functionality into our scripts. Up to this point, our scripts have been pretty basic, and limited to only executing in a top-down order, with one command or operation per line. The following two concepts, conditionals and loops, are the two basic 'flow control' structures which can actually alter the sequence in which our code is executed, thereby creating more complex behavior and more interesting functionality. 2. Conditionals Conditionals are structures within the code which can execute different lines of code based on certain 'conditions' being met. In Python, the most basic type of conditional will test a boolean to see if it is True, and then execute some code if it passes Step1: Here, since b is in fact True, it passes the test, causing the code that is inset after the 'if b Step2: will skip both print lines if b is False. However, by deleting the indent on the last line, you take that line out of the nested structure, and it will now execute regardless of whether b is True or False Step3: On the other hand, if you inset the last line one level further Step4: You will get an error saying IndentationError Step5: In this case, when b is True the first statement will execute, and when b is False the second statement will execute. Try this code both ways to see. In addition to using booleans, you can also create conditionals using various comparison operators. For example, a conditional can test the size of a number Step6: Or the contents of a string Step7: In this example I use the double equals '==' operator to check if one thing equals another. This is the standard way to check equality, since the single equals '=' is reserved for assigning values to variables. The most common comparison operators are Step8: This creates a chain of tests that happen in order. If the first test passes, that block of code is executed, and the rest of the conditional is skipped. If it fails, the second test (after the 'elif Step9: 3. Loops Loops are the second primary type of 'flow control' structure, and they can be used to make code repeat multiple times under specific conditions. The most basic type of loop is one that iterates over each value within a list Step10: The 'for item in list Step11: If you run this code, you will see that the entries are not returned in the same order that they are typed. This is because dictionaries, unlike lists, do not enforce a specific order. However, iterating through the keys using the .key() function will ensure that you go through each item in the dictionary. In addition to iterating through every item in a list or dictionary, loops are often used to simply repeat a particular piece of code a specific number of times. For this, Python's range() function is very useful, which takes in an integer value and returns a list of integers starting at 0, up to but not including that value Step12: Using the range() function, we can set up a basic loop like Step13: This will simply run the code inside the loop five times, since in effect we are creating a list of five sequential numbers, and then iterating over every item in that list. In addition, we are also storing each successive number in the variable 'i', which we can also use within the loop. 
A common example is to combine both strategies by tying the range() function to the length of a list (using the len() function), and then using the iterating number to get items from that list Step14: Although this might seem redundant given the first example, there are times when you want to build a loop that has access to both an item within a list, as well as an iterator which specifies its index. In such cases, you can use a special function called enumerate() which takes in a list and returns both the item and its index Step15: While the 'for' loop will serve most purposes, there is another kind of loop which will iterate over a piece of code until a certain condition is met Step16: In this case, the loop will keep going while its condition is satisfied, and only stop once the variable 'i' reaches a value greater than or equal to 5. This type of loop can be useful if you do not know how long the loop should be run for, or if you want to make the termination criteria somehow dynamic relative to other activities within the script. It requires a bit more setup, however, as the value tested must first be initialized (i = 0), and there has to be code within the loop which changes that value in such a way that it eventually meets the exit criteria. The '+=' notation here is a shorthand in Python for adding a value to a variable. You can write the same thing explicitly like Step17: This type of loop is inherently more dangerous than a 'for' loop, because it can easily create a situation where the loop can never exit. In theory, such a loop will run indefinitely, although in practice it will most certainly cause Python to crash. The most dangerous kind of loop is also the simplest
Python Code: b = True if b: print 'b is True' Explanation: Now that we understand variables, we can start to develop more complex code structures which can build more interesting functionality into our scripts. Up to this point, our scripts have been pretty basic, and limited to only executing in a top-down order, with one command or operation per line. The following two concepts, conditionals and loops, are the two basic 'flow control' structures which can actually alter the sequence in which our code is executed, thereby creating more complex behavior and more interesting functionality. 2. Conditionals Conditionals are structures within the code which can execute different lines of code based on certain 'conditions' being met. In Python, the most basic type of conditional will test a boolean to see if it is True, and then execute some code if it passes: End of explanation b = False if b: print 'b is True' print 'b is False' Explanation: Here, since b is in fact True, it passes the test, causing the code that is inset after the 'if b:' line to execute. Try to run the code again, this time setting b to False to see that nothing happens. In this case, if b does not pass the test, the entire block of inset code after the first conditional line is skipped over and ignored. In this code, 'if b:' is shorthand for 'if b is True:'. If you want to test for Falseness, you could use the Python shorthand 'if not b:' or write the full 'if b is False:'. In Python, a line ending with a ':' followed by inset lines of code is a basic syntax for creating hierarchical structure, and is used with all higher codes structures including conditionals, loops, functions, and objects. The trick is that Python is very particular about how these insets are specified. You have the option of using TABS or a series of spaces, but you cannot mix and match, and you have to be very explicit about the number of each that you use based on the level of the structure. For instance, this code: End of explanation b = False if b: print 'b is True' print 'b is False' Explanation: will skip both print lines if b is False. However, by deleting the indent on the last line, you take that line out of the nested structure, and it will now execute regardless of whether b is True or False: End of explanation b = False if b: print 'b is True' print 'b is False' Explanation: On the other hand, if you inset the last line one level further: End of explanation b = True if b: print 'b is True' else: print 'b is False' Explanation: You will get an error saying IndentationError: unexpected indent which means that something is wrong with your indenting. In this case you have indented to a level that does not exist in the code structure. Such errors are extremely common and can be quite annoying, since they may come either from improper indentation, mixing spaces with TABs or both. On the bright side, this focus on proper indenting enforces a visual clarity in Python scripts that is often missing in other languages. Moving on, if a conditional test does not pass and the first block of code is passed over, it can be caught by an 'else' statement: End of explanation num = 7 if num > 5: print 'num is greater than 5' Explanation: In this case, when b is True the first statement will execute, and when b is False the second statement will execute. Try this code both ways to see. In addition to using booleans, you can also create conditionals using various comparison operators. 
For example, a conditional can test the size of a number: End of explanation t = 'this is text' if t == 'this is text': print 'the text matches' Explanation: Or the contents of a string: End of explanation num1 = 3 num2 = 7 if num1 > 5: print 'num1 is greater than 5' elif num2 > 5: print 'num2 is greater than 5' else: print "they're both too small!" Explanation: In this example I use the double equals '==' operator to check if one thing equals another. This is the standard way to check equality, since the single equals '=' is reserved for assigning values to variables. The most common comparison operators are: == (equal) != (not equal) &gt; (greater than) &gt;= (greater than or equal) &lt; (less than) &lt;= (less than or equal) You can use the 'elif:' (a concatenation of else and if) statement to chain together conditions to create more complex logics: End of explanation num1 = 3 num2 = 7 if num1 < 5 and num2 < 5: print "they're both too small!" if num1 < 5 or num2 < 5: print "at least one of them is too small!" Explanation: This creates a chain of tests that happen in order. If the first test passes, that block of code is executed, and the rest of the conditional is skipped. If it fails, the second test (after the 'elif:') is analyzed, and so on. If none of the tests pass, the code following the else: statement is executed). Experiment with different values for num1 and num2 above to see how the printed statement changes. Finally, you can also combine multiple tests within a single line by using the 'and' and 'or' keywords: End of explanation fruits = ['apples', 'oranges', 'bananas'] for fruit in fruits: print fruit Explanation: 3. Loops Loops are the second primary type of 'flow control' structure, and they can be used to make code repeat multiple times under specific conditions. The most basic type of loop is one that iterates over each value within a list: End of explanation dict = {'a': 1, 'b': 2, 'c': 3} for key in dict.keys(): print dict[key] Explanation: The 'for item in list:' structure is the basic way to construct loops in Python. It basically runs the inset code within the structure once for each item in the list, each time setting the current item to the variable specified before the 'in'. In this case, it will run the 'print' code three times, once for each fruit in the list. Every time the code is run, the variable 'fruit' is set to a different fruit in the list in order. This is often used to apply a certain kind of analysis or processing to every element within a list. You can do the same basic kind of iteration on a dictionary using the .keys() function, which will return a list of all the keys in the dictionary, and allow you to iterate over each entry: End of explanation print range(5) Explanation: If you run this code, you will see that the entries are not returned in the same order that they are typed. This is because dictionaries, unlike lists, do not enforce a specific order. However, iterating through the keys using the .key() function will ensure that you go through each item in the dictionary. In addition to iterating through every item in a list or dictionary, loops are often used to simply repeat a particular piece of code a specific number of times. 
For this, Python's range() function is very useful, which takes in an integer value and returns a list of integers starting at 0, up to but not including that value: End of explanation for i in range(5): print 'Hello' Explanation: Using the range() function, we can set up a basic loop like: End of explanation fruits = ['apples', 'oranges', 'bananas'] for i in range(len(fruits)): print fruits[i] Explanation: This will simply run the code inside the loop five times, since in effect we are creating a list of five sequential numbers, and then iterating over every item in that list. In addition, we are also storing each successive number in the variable 'i', which we can also use within the loop. A common example is to combine both strategies by tying the range() function to the length of a list (using the len() function), and then using the iterating number to get items from that list: End of explanation fruits = ['apples', 'oranges', 'bananas'] for i, fruit in enumerate(fruits): print 'the', fruit, 'are in position', i Explanation: Although this might seem redundant given the first example, there are times when you want to build a loop that has access to both an item within a list, as well as an iterator which specifies its index. In such cases, you can use a special function called enumerate() which takes in a list and returns both the item and its index: End of explanation i = 0 while i < 5: print i i += 1 Explanation: While the 'for' loop will serve most purposes, there is another kind of loop which will iterate over a piece of code until a certain condition is met: End of explanation i = i + 1 Explanation: In this case, the loop will keep going while it's condition is satisfied, and only stop once the variable 'i' obtains a value greater or equal to 5. This type of loop can be useful if you do not know how long the loop should be run for, or if you want to make the termination criteria somehow dynamic relative to other activities within the script. It requires a bit more setup, however, as the value tested must first be initialized (i = 0), and there has to be code within the loop which changes that value in such a way that it eventually meets the exit criteria. The '+=' notation here is a shorthand in Python for adding a value to a variable. You can write the same thing explicitly like: End of explanation # while True: # print 'infinity' Explanation: This type of loop is inherently more dangerous than a 'for' loop, because it can easily create a situation where the loop can never exit. In theory, such a loop will run indefinitely, although in practice it will most certainly cause Python to crash. The most dangerous kind of loop is also the simplest: End of explanation
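A small illustrative sketch, not part of the original lesson, of one way to keep an open-ended loop safe: count the iterations and break out once a limit is reached.
count = 0
while True:
    count += 1
    if count >= 5:
        print 'breaking out after', count, 'iterations'
        break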
15,948
Given the following text description, write Python code to implement the functionality described below step by step Description: Implementation of a Devito self adjoint variable density visco- acoustic isotropic modeling operator <br>-- Linearized Ops -- This operator is contributed by Chevron Energy Technology Company (2020) This operator is based on simplfications of the systems presented in Step1: Instantiate the Devito grid for a two dimensional problem We define the grid the same as in the previous notebook outlining implementation for the nonlinear forward. Step2: Define velocity, buoyancy and $\frac{\omega_c}{Q}$ model parameters We have the following constants and fields to define Step4: Define the simulation time range and the acquisition geometry Simulation time range Step5: Plot the model We plot the following Functions Step6: Define pressure wavefields We need two wavefields for Jacobian operations, one computed during the finite difference evolution of the nonlinear forward operator $u_0(t,x,z)$, and one computed during the finite difference evolution of the Jacobian operator $\delta u(t,x,z)$. For this example workflow we will require saving all time steps from the nonlinear forward operator for use in the Jacobian operators. There are other ways to implement this requirement, including checkpointing, but that is way outside the scope of this illustrative workflow. Step7: Implement and run the nonlinear operator We next transcribe the time update expression for the nonlinear operator above into a Devito Eq. Then we add the source injection and receiver extraction and build an Operator that will generate the c code for performing the modeling. We copy the time update expression from the first implementation notebook, but omit the source term $q$ because for the nonlinear operator we explicitly inject the source using src_term. We think of this as solving for the background wavefield $u_0$ not the total wavefield $u$, and hence we use $v_0$ for velocity instead of $v$. $$ \begin{aligned} u_0(t+\Delta_t) &= \frac{\Delta_t^2 v_0^2}{b} \left[ \overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ u_0 \right) + \overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ u_0 \right) + \overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ u_0 \right) + q \right] \[5pt] &\quad +\ u_0(t) \left(2 - \frac{\Delta_t^2 \omega_c}{Q} \right) - u_0(t-\Delta_t) \left(1 - \frac{\Delta_t\ \omega_c}{Q} \right) \end{aligned} $$ Self adjoint means support for nonlinear and linearized ops Note that this stencil can be used for all of the operations we need, modulo the different source terms for the nonlinear and linearized forward evolutions Step8: Implement and run the Jacobian forward operator We next transcribe the time update expression for the linearized operator into a Devito Eq. Note that the source injection for the linearized operator is very different, and involves the Born source derived above everywhere in space. Please refer to the first notebook for the derivation of the time update equation if you don't follow this step. 
$$ \begin{aligned} \delta u(t+\Delta_t) &= \frac{\Delta_t^2 v_0^2}{b} \left[ \overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ \delta u \right) + \overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ \delta u \right) + \overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ \delta u \right) + q \right] \[5pt] &\quad +\ \delta u(t) \left(2 - \frac{\Delta_t^2 \omega_c}{Q} \right) - \delta u(t-\Delta_t) \left(1 - \frac{\Delta_t\ \omega_c}{Q} \right) \[10pt] q &= \frac{2\ b\ \delta m}{m_0^3} L_t\left[u_0\right] \end{aligned} $$ Source injection and receiver extraction for linearized forward operator Note the source for the linearized forward operator is the Born source $q$, so we do not require a source injection term as with the nonlinear operator. As this is a forward operator, we are mapping into receiver gathers and therefore need to define both a container and an extraction term for receiver data. Step9: Plot the computed nonlinear and linearized forward wavefields Below we show the nonlinear and Born scattered wavefields at the end of the finite difference evolution. You can clearly see both forward and backward scattered energy from the velocity perturbation in the linearized forward (Born) wavefield, with appropriate polatiry reversals in the events. Step10: An alternative implementation for the linearized forward We would like to acknowledge Mathias Louboutin for an alternative method of implementing the linearized forward operator that is very efficient and perhaps novel. The driver code that implements the Jacobian forward operator (examples/seismic/self_adjoint/operators.py) can solve for both the nonlinear and linearized finite difference evolutions simultaneously. This implies significant performance gains with respect to cache pressure. We outline below this code for your enjoyment, with line numbers added Step11: Plot the image Below we plot the velocity perturbation and the "image" recovered from the linearized Jacobian adjoint. We normalize both fields to their maximum absolute value. Note that with a single source and this transmission geometry, we should expect to see significant horizontal smearing in the image.
Python Code: import numpy as np from examples.seismic import RickerSource, Receiver, TimeAxis from devito import (Grid, Function, TimeFunction, SpaceDimension, Constant, Eq, Operator, solve, configuration, norm) from devito.finite_differences import Derivative from devito.builtins import gaussian_smooth from examples.seismic.self_adjoint import setup_w_over_q import matplotlib as mpl import matplotlib.pyplot as plt from matplotlib import cm from timeit import default_timer as timer # These lines force images to be displayed in the notebook, and scale up fonts %matplotlib inline mpl.rc('font', size=14) # Make white background for plots, not transparent plt.rcParams['figure.facecolor'] = 'white' # We define 32 bit floating point as the precision type dtype = np.float32 # Set logging to debug, captures statistics on the performance of operators # configuration['log-level'] = 'DEBUG' configuration['log-level'] = 'INFO' Explanation: Implementation of a Devito self adjoint variable density visco- acoustic isotropic modeling operator <br>-- Linearized Ops -- This operator is contributed by Chevron Energy Technology Company (2020) This operator is based on simplfications of the systems presented in: <br>Self-adjoint, energy-conserving second-order pseudoacoustic systems for VTI and TTI media for reverse time migration and full-waveform inversion (2016) <br>Kenneth Bube, John Washbourne, Raymond Ergas, and Tamas Nemeth <br>SEG Technical Program Expanded Abstracts <br>https://library.seg.org/doi/10.1190/segam2016-13878451.1 Introduction The goal of this tutorial set is to generate and prove correctness of modeling and inversion capability in Devito for variable density visco- acoustics using an energy conserving form of the wave equation. We describe how the linearization of the energy conserving self adjoint system with respect to modeling parameters allows using the same modeling system for all nonlinear and linearized forward and adjoint finite difference evolutions. There are three notebooks in this series: 1. Implementation of a Devito self adjoint variable density visco- acoustic isotropic modeling operator -- Nonlinear Ops Implement the nonlinear modeling operations. sa_01_iso_implementation1.ipynb 2. Implementation of a Devito self adjoint variable density visco- acoustic isotropic modeling operator -- Linearized Ops Implement the linearized (Jacobian) forward and adjoint modeling operations. sa_02_iso_implementation2.ipynb 3. Implementation of a Devito self adjoint variable density visco- acoustic isotropic modeling operator -- Correctness Testing Tests the correctness of the implemented operators. sa_03_iso_correctness.ipynb There are similar series of notebooks implementing and testing operators for VTI and TTI anisotropy (README.md). Below we continue the implementation of our self adjoint wave equation with dissipation only Q attenuation, and linearize the modeling operator with respect to the model parameter velocity. We show how to implement finite difference evolutions to compute the action of the forward and adjoint Jacobian. Outline Define symbols The nonlinear operator The Jacobian opeator Create the Devito grid and model fields The simulation time range and acquistion geometry Implement and run the nonlinear forward operator Implement and run the Jacobian forward operator Implement and run the Jacobian adjoint operator References Table of symbols There are many more symbols for the Jacobian than we had in the previous notebook. 
We need to introduce terminology for the nonlinear operator, including total, background and perturbation fields for several variables. | Symbol &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | Description | Dimensionality | |:---|:---|:---| | $\overleftarrow{\partial_t}$ | shifted first derivative wrt $t$ | shifted 1/2 sample backward in time | | $\partial_{tt}$ | centered second derivative wrt $t$ | centered in time | | $\overrightarrow{\partial_x},\ \overrightarrow{\partial_y},\ \overrightarrow{\partial_z}$ | + shifted first derivative wrt $x,y,z$ | shifted 1/2 sample forward in space | | $\overleftarrow{\partial_x},\ \overleftarrow{\partial_y},\ \overleftarrow{\partial_z}$ | - shifted first derivative wrt $x,y,z$ | shifted 1/2 sample backward in space | | $\omega_c = 2 \pi f$ | center angular frequency | constant | | $b(x,y,z)$ | buoyancy $(1 / \rho)$ | function of space | | $Q(x,y,z)$ | Attenuation at frequency $\omega_c$ | function of space | | $m(x,y,z)$ | Total P wave velocity ($m_0+\delta m$) | function of space | | $m_0(x,y,z)$ | Background P wave velocity | function of space | | $\delta m(x,y,z)$ | Perturbation to P wave velocity | function of space | | $u(t,x,y,z)$ | Total pressure wavefield ($u_0+\delta u$)| function of time and space | | $u_0(t,x,y,z)$ | Background pressure wavefield | function of time and space | | $\delta u(t,x,y,z)$ | Perturbation to pressure wavefield | function of time and space | | $q(t,x,y,z)$ | Source wavefield | function of time, localized in space to source location | | $r(t,x,y,z)$ | Receiver wavefield | function of time, localized in space to receiver locations | | $F[m; q]$ | Forward nonlinear modeling operator | Nonlinear in $m$, linear in $q$: $\quad$ maps $m \rightarrow r$ | | $\nabla F[m; q]\ \delta m$ | Forward Jacobian modeling operator | Linearized at $[m; q]$: $\quad$ maps $\delta m \rightarrow \delta r$ | | $\bigl( \nabla F[m; q] \bigr)^\top\ \delta r$ | Adjoint Jacobian modeling operator | Linearized at $[m; q]$: $\quad$ maps $\delta r \rightarrow \delta m$ | | $\Delta_t, \Delta_x, \Delta_y, \Delta_z$ | sampling rates for $t, x, y , z$ | $t, x, y , z$ | A word about notation We use the arrow symbols over derivatives $\overrightarrow{\partial_x}$ as a shorthand notation to indicate that the derivative is taken at a shifted location. For example: $\overrightarrow{\partial_x}\ u(t,x,y,z)$ indicates that the $x$ derivative of $u(t,x,y,z)$ is taken at $u(t,x+\frac{\Delta x}{2},y,z)$. $\overleftarrow{\partial_z}\ u(t,x,y,z)$ indicates that the $z$ derivative of $u(t,x,y,z)$ is taken at $u(t,x,y,z-\frac{\Delta z}{2})$. $\overleftarrow{\partial_t}\ u(t,x,y,z)$ indicates that the $t$ derivative of $u(t,x,y,z)$ is taken at $u(t-\frac{\Delta_t}{2},x,y,z)$. We usually drop the $(t,x,y,z)$ notation from wavefield variables unless required for clarity of exposition, so that $u(t,x,y,z)$ becomes $u$. The Nonlinear operator The nonlinear operator is the solution to the self adjoint scalar isotropic variable density visco- acoustic wave equation shown immediately below, and maps the velocity model vector $m$ into the receiver wavefield vector $r$. 
$$ \frac{b}{m^2} \left( \frac{\omega_c}{Q}\overleftarrow{\partial_t}\ u + \partial_{tt}\ u \right) = \overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ u \right) + \overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ u \right) + \overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ u \right) + q $$ In operator notation, where the operator is nonlinear with respect to model $m$ to the left of semicolon inside the square brackets, and linear with respect to the source $q$ to the right of semicolon inside the square brackets. $$ F[m; q] = r $$ The Jacobian operator In this section we linearize about a background model and take the derivative of the nonlinear operator to obtain the Jacobian forward operator. In operator notation, where the derivative of the modeling operator is now linear in the model perturbation vector $\delta m$, the Jacobian operator maps a perturbation in the velocity model $\delta m$ into a perturbation in the receiver wavefield $\delta r$. $$ \nabla F[m; q]\ \delta m = \delta r $$ 1. We begin by simplifying notation To simplify the treatment below we introduce the operator $L_t[\cdot]$, accounting for the time derivatives inside the parentheses on the left hand side of the wave equation. $$ L_t[\cdot] \equiv \frac{\omega_c}{Q} \overleftarrow{\partial_t}[\cdot] + \partial_{tt}[\cdot] $$ Next we re-write the wave equation using this notation. $$ \frac{b}{m^2} L_t[u] = \overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ u \right) + \overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ u \right) + \overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ u \right) + q $$ 2. Linearize To linearize we treat the total model as the sum of background and perturbation models $\left(m = m_0 + \delta m\right)$, and the total pressure as the sum of background and perturbation pressures $\left(u = u_0 + \delta u\right)$. $$ \frac{b}{(m_0+\delta m)^2} L_t[u_0+\delta u] = \overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x} (u_0+\delta u) \right) + \overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y} (u_0+\delta u) \right) + \overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z} (u_0+\delta u) \right) + q $$ Note that model parameters for this variable density isotropic visco-acoustic physics is only velocity, we do not treat perturbations to density. We also write the PDE for the background model, which we subtract after linearization to simplify the final expression. $$ \frac{b}{m_0^2} L_t[u_0] = \overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x} u_0 \right) + \overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y} u_0 \right) + \overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z} u_0 \right) + q $$ 3. Take derivative w.r.t. 
model parameters Next we take the derivative with respect to velocity, keep only terms up to first order in the perturbations, subtract the background model PDE equation, and finally arrive at the following linearized equation: $$ \frac{b}{m_0^2} L_t\left[\delta u\right] = \overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x} \delta u \right) + \overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y} \delta u \right) + \overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z} \delta u \right) + \frac{2\ b\ \delta m}{m_0^3} L_t\left[u_0\right] $$ Note that the source $q$ in the original equation above has disappeared due to subtraction of the background PDE, and has been replaced by the Born source: $$ q = \frac{\displaystyle 2\ b\ \delta m}{\displaystyle m_0^3} L_t\left[u_0\right] $$ This is the same equation as used for the nonlinear forward, only now in the perturbed wavefield $\delta u$ and with the Born source. The adjoint of the Jacobian operator In this section we introduce the adjoint of the Jacobian operator we derived above. The Jacobian adjoint operator maps a perturbation in receiver wavefield $\delta r$ into a perturbation in velocity model $\delta m$. In operator notation: $$ \bigl( \nabla F[m; q] \bigr)^\top\ \delta r = \delta m $$ 1. Solve the time reversed wave equation with the receiver perturbation as source The PDE for the adjoint of the Jacobian is solved for the perturbation to the pressure wavefield $\delta u$ by using the same wave equation as the nonlinear forward and the Jacobian forward, with the time reversed receiver wavefield perturbation $\widetilde{\delta r}$ injected as source. Note that we use $\widetilde{\delta u}$ and $\widetilde{\delta r}$ to indicate that we solve this finite difference evolution time reversed. $$ \frac{b}{m_0^2} L_t\left[\widetilde{\delta u}\right] = \overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ \widetilde{\delta u} \right) + \overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ \widetilde{\delta u} \right) + \overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ \widetilde{\delta u} \right) + \widetilde{\delta r} $$ 2. Compute zero lag correlation We compute the perturbation to the velocity model by zero lag correlation of the wavefield perturbation $\widetilde{\delta u}$ solved in step 1 as shown in the following expression: $$ \delta m(x,y,z) = \sum_t \left{ \widetilde{\delta u}(t,x,y,z)\ \frac{\displaystyle 2\ b}{\displaystyle m_0^3} L_t\bigl[u_0(t,x,y,z)\bigr] \right} $$ Note that this correlation can be more formally derived by examining the equations for two Green's functions, one for the background model ($m_0$) and wavefield ($u_0$), and one for for the total model $(m_0 + \delta m)$ and wavefield $(u_0 + \delta u)$, and subtracting to derive the equation for Born scattering. Implementation Next we assemble the Devito objects needed to implement these linearized operators. Imports We have grouped all imports used in this notebook here for consistency. End of explanation # Define dimensions for the interior of the model nx,nz = 301,301 dx,dz = 10.0,10.0 # Grid spacing in m shape = (nx, nz) # Number of grid points spacing = (dx, dz) # Domain size is now 5 km by 5 km origin = (0., 0.) # Origin of coordinate system, specified in m. 
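# Note: with nx,nz = 301 points and 10 m spacing, the interior extent computed next
# works out to 3000 m x 3000 m (3 km x 3 km)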
extent = tuple([s*(n-1) for s, n in zip(spacing, shape)]) # Define dimensions for the model padded with absorbing boundaries npad = 50 # number of points in absorbing boundary region (all sides) nxpad,nzpad = nx + 2 * npad, nz + 2 * npad shape_pad = np.array(shape) + 2 * npad origin_pad = tuple([o - s*npad for o, s in zip(origin, spacing)]) extent_pad = tuple([s*(n-1) for s, n in zip(spacing, shape_pad)]) # Define the dimensions # Note if you do not specify dimensions, you get in order x,y,z x = SpaceDimension(name='x', spacing=Constant(name='h_x', value=extent_pad[0]/(shape_pad[0]-1))) z = SpaceDimension(name='z', spacing=Constant(name='h_z', value=extent_pad[1]/(shape_pad[1]-1))) # Initialize the Devito grid grid = Grid(extent=extent_pad, shape=shape_pad, origin=origin_pad, dimensions=(x, z), dtype=dtype) print("shape; ", shape) print("origin; ", origin) print("spacing; ", spacing) print("extent; ", extent) print("") print("shape_pad; ", shape_pad) print("origin_pad; ", origin_pad) print("extent_pad; ", extent_pad) print("") print("grid.shape; ", grid.shape) print("grid.extent; ", grid.extent) print("grid.spacing_map;", grid.spacing_map) Explanation: Instantiate the Devito grid for a two dimensional problem We define the grid the same as in the previous notebook outlining implementation for the nonlinear forward. End of explanation # NBVAL_IGNORE_OUTPUT # Create the velocity and buoyancy fields as in the nonlinear notebook space_order = 8 # Wholespace velocity m0 = Function(name='m0', grid=grid, space_order=space_order) m0.data[:] = 1.5 # Perturbation to velocity: a square offset from the center of the model dm = Function(name='dm', grid=grid, space_order=space_order) size = 10 x0 = shape_pad[0]//2 z0 = shape_pad[1]//2 dm.data[:] = 0.0 dm.data[x0-size:x0+size, z0-size:z0+size] = 1.0 # Constant density b = Function(name='b', grid=grid, space_order=space_order) b.data[:,:] = 1.0 / 1.0 # Initialize the attenuation profile for Q=100 model fpeak = 0.010 w = 2.0 * np.pi * fpeak qmin = 0.1 qmax = 1000.0 wOverQ = Function(name='wOverQ', grid=grid, space_order=space_order) setup_w_over_q(wOverQ, w, qmin, 100.0, npad) Explanation: Define velocity, buoyancy and $\frac{\omega_c}{Q}$ model parameters We have the following constants and fields to define: | &nbsp; Symbol &nbsp; | Description | |:---:|:---| | $$m0(x,z)$$ | Background velocity model | | $$\delta m(x,z)$$ | Perturbation to velocity model | | $$b(x,z)=\frac{1}{\rho(x,z)}$$ | Buoyancy (reciprocal density) | | $$\omega_c = 2 \pi f_c$$ | Center angular frequency | | $$\frac{1}{Q(x,z)}$$ | Inverse Q model used in the modeling system | End of explanation def compute_critical_dt(v): Determine the temporal sampling to satisfy CFL stability. This method replicates the functionality in the Model class. Note we add a safety factor, reducing dt by a factor 0.75 due to the w/Q attentuation term. 
Parameters ---------- v : Function velocity coeff = 0.38 if len(v.grid.shape) == 3 else 0.42 dt = 0.75 * v.dtype(coeff * np.min(v.grid.spacing) / (np.max(v.data))) return v.dtype("%.5e" % dt) t0 = 0.0 # Simulation time start tn = 1200.0 # Simulation time end (1 second = 1000 msec) dt = compute_critical_dt(m0) time_range = TimeAxis(start=t0, stop=tn, step=dt) print("Time min, max, dt, num; %10.6f %10.6f %10.6f %d" % (t0, tn, dt, int(tn//dt) + 1)) print("time_range; ", time_range) # Source at 1/4 X, 1/2 Z, Ricker with 10 Hz center frequency src_nl = RickerSource(name='src_nl', grid=grid, f0=fpeak, npoint=1, time_range=time_range) src_nl.coordinates.data[0,0] = dx * 1 * nx//4 src_nl.coordinates.data[0,1] = dz * shape[1]//2 # Receivers at 3/4 X, line in Z rec_nl = Receiver(name='rec_nl', grid=grid, npoint=nz, time_range=time_range) rec_nl.coordinates.data[:,0] = dx * 3 * nx//4 rec_nl.coordinates.data[:,1] = np.linspace(0.0, dz*(nz-1), nz) print("src_coordinate X; %+12.4f" % (src_nl.coordinates.data[0,0])) print("src_coordinate Z; %+12.4f" % (src_nl.coordinates.data[0,1])) print("rec_coordinates X min/max; %+12.4f %+12.4f" % \ (np.min(rec_nl.coordinates.data[:,0]), np.max(rec_nl.coordinates.data[:,0]))) print("rec_coordinates Z min/max; %+12.4f %+12.4f" % \ (np.min(rec_nl.coordinates.data[:,1]), np.max(rec_nl.coordinates.data[:,1]))) Explanation: Define the simulation time range and the acquisition geometry Simulation time range: In this notebook we run 3 seconds of simulation using the sample rate related to the CFL condition as implemented in examples/seismic/self_adjoint/utils.py. We also use the convenience TimeRange as defined in examples/seismic/source.py. Acquisition geometry: source: - X coordinate: left sode of model - Z coordinate: middle of model - We use a 10 Hz center frequency RickerSource wavelet source as defined in examples/seismic/source.py receivers: - X coordinate: right side of model - Z coordinate: vertical line in model - We use a vertical line of Receivers as defined with a PointSource in examples/seismic/source.py End of explanation # NBVAL_INGNORE_OUTPUT # Note: flip sense of second dimension to make the plot positive downwards plt_extent = [origin_pad[0], origin_pad[0] + extent_pad[0], origin_pad[1] + extent_pad[1], origin_pad[1]] vmin, vmax = 1.5, 2.0 pmin, pmax = -1, +1 bmin, bmax = 0.9, 1.1 q = w / wOverQ.data[:] x1 = 0.0 x2 = dx * nx z1 = 0.0 z2 = dz * nz abcX = [x1,x1,x2,x2,x1] abcZ = [z1,z2,z2,z1,z1] plt.figure(figsize=(12,12)) plt.subplot(2, 2, 1) plt.imshow(np.transpose(m0.data), cmap=cm.jet, vmin=vmin, vmax=vmax, extent=plt_extent) plt.plot(abcX, abcZ, 'gray', linewidth=4, linestyle=':', label="Absorbing Boundary") plt.plot(src_nl.coordinates.data[:, 0], src_nl.coordinates.data[:, 1], \ 'red', linestyle='None', marker='*', markersize=15, label="Source") plt.plot(rec_nl.coordinates.data[:, 0], rec_nl.coordinates.data[:, 1], \ 'black', linestyle='None', marker='^', markersize=2, label="Receivers") plt.colorbar(orientation='horizontal', label='Velocity (m/msec)') plt.xlabel("X Coordinate (m)") plt.ylabel("Z Coordinate (m)") plt.title("Background Velocity") plt.subplot(2, 2, 2) plt.imshow(np.transpose(1 / b.data), cmap=cm.jet, vmin=bmin, vmax=bmax, extent=plt_extent) plt.plot(abcX, abcZ, 'gray', linewidth=4, linestyle=':', label="Absorbing Boundary") plt.plot(src_nl.coordinates.data[:, 0], src_nl.coordinates.data[:, 1], \ 'red', linestyle='None', marker='*', markersize=15, label="Source") plt.plot(rec_nl.coordinates.data[:, 0], rec_nl.coordinates.data[:, 1], \ 
'black', linestyle='None', marker='^', markersize=2, label="Receivers") plt.colorbar(orientation='horizontal', label='Density (kg/m^3)') plt.xlabel("X Coordinate (m)") plt.ylabel("Z Coordinate (m)") plt.title("Background Density") plt.subplot(2, 2, 3) plt.imshow(np.transpose(dm.data), cmap="seismic", vmin=pmin, vmax=pmax, extent=plt_extent) plt.plot(abcX, abcZ, 'gray', linewidth=4, linestyle=':', label="Absorbing Boundary") plt.plot(src_nl.coordinates.data[:, 0], src_nl.coordinates.data[:, 1], \ 'red', linestyle='None', marker='*', markersize=15, label="Source") plt.plot(rec_nl.coordinates.data[:, 0], rec_nl.coordinates.data[:, 1], \ 'black', linestyle='None', marker='^', markersize=2, label="Receivers") plt.colorbar(orientation='horizontal', label='Velocity (m/msec)') plt.xlabel("X Coordinate (m)") plt.ylabel("Z Coordinate (m)") plt.title("Velocity Perturbation") plt.subplot(2, 2, 4) plt.imshow(np.transpose(np.log10(q.data)), cmap=cm.jet, vmin=np.log10(qmin), vmax=np.log10(qmax), extent=plt_extent) plt.plot(abcX, abcZ, 'white', linewidth=4, linestyle=':', label="Absorbing Boundary") plt.plot(src_nl.coordinates.data[:, 0], src_nl.coordinates.data[:, 1], \ 'red', linestyle='None', marker='*', markersize=15, label="Source") plt.plot(rec_nl.coordinates.data[:, 0], rec_nl.coordinates.data[:, 1], \ 'black', linestyle='None', marker='^', markersize=2, label="Receivers") plt.colorbar(orientation='horizontal', label='log10 $Q_p$') plt.xlabel("X Coordinate (m)") plt.ylabel("Z Coordinate (m)") plt.title("log10 of $Q_p$ Profile") plt.tight_layout() None Explanation: Plot the model We plot the following Functions: - Background Velocity - Background Density - Velocity perturbation - Q Model Each subplot also shows: - The location of the absorbing boundary as a dotted line - The source location as a red star - The line of receivers as a black vertical line End of explanation # Define the TimeFunctions for nonlinear and Jacobian operations nt = time_range.num u0 = TimeFunction(name="u0", grid=grid, time_order=2, space_order=space_order, save=nt) duFwd = TimeFunction(name="duFwd", grid=grid, time_order=2, space_order=space_order, save=None) duAdj = TimeFunction(name="duAdj", grid=grid, time_order=2, space_order=space_order, save=None) # Get the dimensions for t, x, z t,x,z = u0.dimensions Explanation: Define pressure wavefields We need two wavefields for Jacobian operations, one computed during the finite difference evolution of the nonlinear forward operator $u_0(t,x,z)$, and one computed during the finite difference evolution of the Jacobian operator $\delta u(t,x,z)$. For this example workflow we will require saving all time steps from the nonlinear forward operator for use in the Jacobian operators. There are other ways to implement this requirement, including checkpointing, but that is way outside the scope of this illustrative workflow. 
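As a rough back-of-the-envelope aside that is not in the original notebook: the save=nt wavefield keeps every time step of the padded grid in memory, so it is worth checking the footprint before scaling the problem up.
# Approximate storage for the saved nonlinear wavefield (ignoring halo regions)
saved_bytes = nt * np.prod(grid.shape) * np.dtype(dtype).itemsize
print("approximate u0 storage; %.1f MB" % (saved_bytes / (1024.0 * 1024.0)))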
End of explanation # NBVAL_IGNORE_OUTPUT # The nonlinear forward time update equation eq_time_update_nl_fwd = (t.spacing**2 * m0**2 / b) * \ ((b * u0.dx(x0=x+x.spacing/2)).dx(x0=x-x.spacing/2) + (b * u0.dz(x0=z+z.spacing/2)).dz(x0=z-z.spacing/2)) + \ (2 - t.spacing * wOverQ) * u0 + \ (t.spacing * wOverQ - 1) * u0.backward stencil_nl = Eq(u0.forward, eq_time_update_nl_fwd) # Update the dimension spacing_map to include the time dimension # Please refer to the first implementation notebook for more information spacing_map = grid.spacing_map spacing_map.update({t.spacing : dt}) print("spacing_map; ", spacing_map) # Source injection and Receiver extraction src_term_nl = src_nl.inject(field=u0.forward, expr=src_nl * t.spacing**2 * m0**2 / b) rec_term_nl = rec_nl.interpolate(expr=u0.forward) # Instantiate and run the operator for the nonlinear forward op_nl = Operator([stencil_nl] + src_term_nl + rec_term_nl, subs=spacing_map) u0.data[:] = 0 op_nl.apply() None # Continuous integration hooks # We ensure the norm of these computed wavefields is repeatable # print(norm(u0)) assert np.isclose(norm(u0), 3098.012, atol=0, rtol=1e-2) Explanation: Implement and run the nonlinear operator We next transcribe the time update expression for the nonlinear operator above into a Devito Eq. Then we add the source injection and receiver extraction and build an Operator that will generate the c code for performing the modeling. We copy the time update expression from the first implementation notebook, but omit the source term $q$ because for the nonlinear operator we explicitly inject the source using src_term. We think of this as solving for the background wavefield $u_0$ not the total wavefield $u$, and hence we use $v_0$ for velocity instead of $v$. $$ \begin{aligned} u_0(t+\Delta_t) &= \frac{\Delta_t^2 v_0^2}{b} \left[ \overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ u_0 \right) + \overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ u_0 \right) + \overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ u_0 \right) + q \right] \[5pt] &\quad +\ u_0(t) \left(2 - \frac{\Delta_t^2 \omega_c}{Q} \right) - u_0(t-\Delta_t) \left(1 - \frac{\Delta_t\ \omega_c}{Q} \right) \end{aligned} $$ Self adjoint means support for nonlinear and linearized ops Note that this stencil can be used for all of the operations we need, modulo the different source terms for the nonlinear and linearized forward evolutions: 1. the nonlinear forward (solved forward in time, $q$ is the usual source ) 2. the Jacobian forward (solved forward in time, $q$ is the Born source ) 3. the Jacobian adjoint (solved backward in time, $q$ is the time reversed receiver wavefield) Source injection and receiver extraction for nonlinear forward operator Source injection and receiver extraction follow the implementation shown in the first notebook, please refer there for more information. 
End of explanation # NBVAL_IGNORE_OUTPUT # The linearized forward time update equation eq_time_update_ln_fwd = (t.spacing**2 * m0**2 / b) * \ ((b * duFwd.dx(x0=x+x.spacing/2)).dx(x0=x-x.spacing/2) + (b * duFwd.dz(x0=z+z.spacing/2)).dz(x0=z-z.spacing/2) + 2 * b * dm * m0**-3 * (wOverQ * u0.dt(x0=t-t.spacing/2) + u0.dt2)) +\ (2 - t.spacing * wOverQ) * duFwd + \ (t.spacing * wOverQ - 1) * duFwd.backward stencil_ln_fwd = Eq(duFwd.forward, eq_time_update_ln_fwd) # Receiver container and receiver extraction for the linearized operator rec_ln = Receiver(name='rec_ln', grid=grid, npoint=nz, time_range=time_range) rec_ln.coordinates.data[:,:] = rec_nl.coordinates.data[:,:] rec_term_ln_fwd = rec_ln.interpolate(expr=duFwd.forward) # Instantiate and run the operator for the linearized forward op_ln_fwd = Operator([stencil_ln_fwd] + rec_term_ln_fwd, subs=spacing_map) duFwd.data[:] = 0 op_ln_fwd.apply() None # Continuous integration hooks # We ensure the norm of these computed wavefields is repeatable # print(norm(duFwd)) assert np.isclose(norm(duFwd), 227.063, atol=0, rtol=1e-3) Explanation: Implement and run the Jacobian forward operator We next transcribe the time update expression for the linearized operator into a Devito Eq. Note that the source injection for the linearized operator is very different, and involves the Born source derived above everywhere in space. Please refer to the first notebook for the derivation of the time update equation if you don't follow this step. $$ \begin{aligned} \delta u(t+\Delta_t) &= \frac{\Delta_t^2 v_0^2}{b} \left[ \overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ \delta u \right) + \overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ \delta u \right) + \overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ \delta u \right) + q \right] \[5pt] &\quad +\ \delta u(t) \left(2 - \frac{\Delta_t^2 \omega_c}{Q} \right) - \delta u(t-\Delta_t) \left(1 - \frac{\Delta_t\ \omega_c}{Q} \right) \[10pt] q &= \frac{2\ b\ \delta m}{m_0^3} L_t\left[u_0\right] \end{aligned} $$ Source injection and receiver extraction for linearized forward operator Note the source for the linearized forward operator is the Born source $q$, so we do not require a source injection term as with the nonlinear operator. As this is a forward operator, we are mapping into receiver gathers and therefore need to define both a container and an extraction term for receiver data. 
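As an optional quick look at the recorded linearized data, sketched here with the same plotting conventions as the rest of the notebook (it is not part of the original): display the Born shot gather before moving on to the wavefield snapshots below.
# NBVAL_IGNORE_OUTPUT
# Display the Born (linearized) shot gather recorded along the receiver line
amax_rec = np.max(np.abs(rec_ln.data))
plt.figure(figsize=(6, 8))
plt.imshow(rec_ln.data, cmap="seismic", vmin=-amax_rec, vmax=+amax_rec, aspect='auto',
           extent=[0.0, dz * (nz - 1), tn, t0])
plt.colorbar(orientation='horizontal', label='Amplitude')
plt.xlabel("Receiver Z Coordinate (m)")
plt.ylabel("Time (msec)")
plt.title("Linearized (Born) shot gather")
plt.tight_layout()
None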
End of explanation # NBVAL_IGNORE_OUTPUT # Plot the two wavefields, each normalized to own maximum kt = nt - 2 amax_nl = 1.0 * np.max(np.abs(u0.data[kt,:,:])) amax_ln = 0.1 * np.max(np.abs(duFwd.data[kt,:,:])) print("amax nl; %12.6f" % (amax_nl)) print("amax ln t=%.2fs; %12.6f" % (dt * kt / 1000, amax_ln)) plt.figure(figsize=(12,12)) plt.subplot(1, 2, 1) plt.imshow(np.transpose(u0.data[kt,:,:]), cmap="seismic", vmin=-amax_nl, vmax=+amax_nl, extent=plt_extent) plt.colorbar(orientation='horizontal', label='Amplitude') plt.plot(abcX, abcZ, 'gray', linewidth=4, linestyle=':', label="Absorbing Boundary") plt.plot(src_nl.coordinates.data[:, 0], src_nl.coordinates.data[:, 1], \ 'red', linestyle='None', marker='*', markersize=15, label="Source") plt.plot(rec_nl.coordinates.data[:, 0], rec_nl.coordinates.data[:, 1], \ 'black', linestyle='None', marker='^', markersize=2, label="Receivers") plt.xlabel("X Coordinate (m)") plt.ylabel("Z Coordinate (m)") plt.title("Nonlinear wavefield at t=%.2fs" % (dt * kt / 1000)) plt.subplot(1, 2, 2) plt.imshow(np.transpose(duFwd.data[kt,:,:]), cmap="seismic", vmin=-amax_ln, vmax=+amax_ln, extent=plt_extent) plt.colorbar(orientation='horizontal', label='Amplitude') plt.plot(abcX, abcZ, 'gray', linewidth=4, linestyle=':', label="Absorbing Boundary") plt.plot(src_nl.coordinates.data[:, 0], src_nl.coordinates.data[:, 1], \ 'red', linestyle='None', marker='*', markersize=15, label="Source") plt.plot(rec_nl.coordinates.data[:, 0], rec_nl.coordinates.data[:, 1], \ 'black', linestyle='None', marker='^', markersize=2, label="Receivers") plt.xlabel("X Coordinate (m)") plt.ylabel("Z Coordinate (m)") plt.title("Born wavefield at t=%.2fs" % (dt * kt / 1000)) plt.tight_layout() None Explanation: Plot the computed nonlinear and linearized forward wavefields Below we show the nonlinear and Born scattered wavefields at the end of the finite difference evolution. You can clearly see both forward and backward scattered energy from the velocity perturbation in the linearized forward (Born) wavefield, with appropriate polatiry reversals in the events. End of explanation # NBVAL_IGNORE_OUTPUT # New Function to hold the output from the adjoint dmAdj = Function(name='dmAdj', grid=grid, space_order=space_order) # The linearized adjoint time update equation # Note the small differencess from the linearized forward above eq_time_update_ln_adj = (t.spacing**2 * m0**2 / b) * \ ((b * duAdj.dx(x0=x+x.spacing/2)).dx(x0=x-x.spacing/2) + (b * duAdj.dz(x0=z+z.spacing/2)).dz(x0=z-z.spacing/2)) +\ (2 - t.spacing * wOverQ) * duAdj + \ (t.spacing * wOverQ - 1) * duAdj.forward stencil_ln_adj = Eq(duAdj.backward, eq_time_update_ln_adj) # Equation to sum the zero lag correlation dm_update = Eq(dmAdj, dmAdj + duAdj * (2 * b * m0**-3 * (wOverQ * u0.dt(x0=t-t.spacing/2) + u0.dt2))) # Receiver injection, time reversed rec_term_ln_adj = rec_ln.inject(field=duAdj.backward, expr=rec_ln * t.spacing**2 * m0**2 / b) # Instantiate and run the operator for the linearized forward op_ln_adj = Operator([dm_update] + [stencil_ln_adj] + rec_term_ln_adj, subs=spacing_map) op_ln_adj.apply() None # Continuous integration hooks # We ensure the norm of these computed wavefields is repeatable # print(norm(duAdj)) assert np.isclose(norm(duAdj), 19218.924, atol=0, rtol=1e-3) Explanation: An alternative implementation for the linearized forward We would like to acknowledge Mathias Louboutin for an alternative method of implementing the linearized forward operator that is very efficient and perhaps novel. 
The driver code that implements the Jacobian forward operator (examples/seismic/self_adjoint/operators.py) can solve for both the nonlinear and linearized finite difference evolutions simultaneously. This implies significant performance gains with respect to cache pressure. We outline below this code for your enjoyment, with line numbers added: 
01 # Time update equations for nonlinear and linearized operators 
02 eqn1 = iso_stencil(u0, model, forward=True) 
03 eqn2 = iso_stencil(du, model, forward=True, 
04 q=2 * b * dm * v**-3 * (wOverQ * u0.dt(x0=t-t.spacing/2) + u0.dt2)) 
05 
06 # Inject the source into the nonlinear wavefield at u0(t+dt) 
07 src_term = src.inject(field=u0.forward, expr=src * t.spacing**2 * v**2 / b) 
08 
09 # Extract receiver wavefield from the linearized wavefield, at du(t) 
10 rec_term = rec.interpolate(expr=du) 
11 
12 # Create the operator 
13 Operator(eqn1 + src_term + eqn2 + rec_term, subs=spacing_map, 
14 name='ISO_JacobianFwdOperator', **kwargs) 
One important thing to note about this code is the precedence of operations specified on the construction of the operator at line 13. It is guaranteed by Devito that eqn1 will 'run' before eqn2. This means that this specific order will occur in the generated code: 1. The nonlinear wavefield is advanced in time 2. The nonlinear source is injected in the nonlinear wavefield 3. The linearized wavefield is advanced in time 4. The linearized wavefield is interpolated at the receiver locations As an exercise, you might implement this operator and print the generated c code to confirm this (a minimal sketch using the objects defined in this notebook appears at the end of this example). Implement and run the Jacobian adjoint operator The linearized Jacobian adjoint uses the same time update equation as written above so we do not reproduce it here. Note that the finite difference evolution for the Jacobian adjoint runs time-reversed and a receiver wavefield is injected as the source term. For this example we will inject the recorded linearized wavefield, which will provide an "image" of the Born scatterer. Zero lag temporal correlation to build image We rewrite the zero lag temporal correlation that builds up the image from above. The sum is achieved in Devito via Eq(dm, dm + <...>), where <...> is the operand of the zero lag correlation shown immediately below. $$ \delta m(x,y,z) = \sum_t \left\{ \widetilde{\delta u}(t,x,y,z)\ \frac{\displaystyle 2\ b}{\displaystyle m_0^3} L_t\left[u_0(t,x,y,z)\right] \right\} $$ Note we instantiate a new Function dmAdj to hold the output from the linearized adjoint operator. Source injection and receiver extraction for linearized adjoint operator Note the source for the linearized adjoint operator is the receiver wavefield, injected time-reversed. As this is an adjoint operator, we are mapping into the model domain and therefore do not need to define receivers.
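Before plotting, a quick sanity check that is not part of the original notebook but follows naturally from the Jacobian forward/adjoint pair built above: with d = J(dm) stored in rec_ln.data by the linearized forward, and dmAdj = J'(d) produced by the adjoint, the two inner products of the dot-product test should be close if the discrete operators are adjoints of one another. The sketch below only uses objects already defined in this notebook; run it before the plotting cell that follows, since that cell rescales dm.data and dmAdj.data in place.

```python
# Hypothetical dot-product (adjoint) test -- an illustration, not part of the original notebook.
# lhs = <J dm, d> with d = J dm, i.e. the energy of the linearized receiver data.
# rhs = <dm, J' d>, using the image dmAdj computed by the adjoint operator above.
lhs = np.dot(rec_ln.data.astype(np.float64).flatten(),
             rec_ln.data.astype(np.float64).flatten())
rhs = np.dot(dm.data.astype(np.float64).flatten(),
             dmAdj.data.astype(np.float64).flatten())
print("dot test: lhs=%12.6e rhs=%12.6e ratio=%.6f" % (lhs, rhs, lhs / rhs))
```

A large mismatch between the two values would point to an inconsistency between the forward and adjoint implementations.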
End of explanation # NBVAL_IGNORE_OUTPUT amax1 = 0.5 * np.max(np.abs(dm.data[:])) amax2 = 0.5 * np.max(np.abs(dmAdj.data[:])) print("amax dm; %12.6e" % (amax1)) print("amax dmAdj %12.6e" % (amax2)) dm.data[:] = dm.data / amax1 dmAdj.data[:] = dmAdj.data / amax2 plt.figure(figsize=(12,8)) plt.subplot(1, 2, 1) plt.imshow(np.transpose(dm.data), cmap="seismic", vmin=-1, vmax=+1, extent=plt_extent, aspect="auto") plt.plot(abcX, abcZ, 'gray', linewidth=4, linestyle=':', label="Absorbing Boundary") plt.plot(src_nl.coordinates.data[:, 0], src_nl.coordinates.data[:, 1], \ 'red', linestyle='None', marker='*', markersize=15, label="Source") plt.plot(rec_nl.coordinates.data[:, 0], rec_nl.coordinates.data[:, 1], \ 'black', linestyle='None', marker='^', markersize=2, label="Receivers") plt.colorbar(orientation='horizontal', label='Velocity (m/msec)') plt.xlabel("X Coordinate (m)") plt.ylabel("Z Coordinate (m)") plt.title("Velocity Perturbation") plt.subplot(1, 2, 2) plt.imshow(np.transpose(dmAdj.data), cmap="seismic", vmin=-1, vmax=+1, extent=plt_extent, aspect="auto") plt.plot(abcX, abcZ, 'gray', linewidth=4, linestyle=':', label="Absorbing Boundary") plt.plot(src_nl.coordinates.data[:, 0], src_nl.coordinates.data[:, 1], \ 'red', linestyle='None', marker='*', markersize=15, label="Source") plt.plot(rec_nl.coordinates.data[:, 0], rec_nl.coordinates.data[:, 1], \ 'black', linestyle='None', marker='^', markersize=2, label="Receivers") plt.colorbar(orientation='horizontal', label='Velocity (m/msec)') plt.xlabel("X Coordinate (m)") plt.ylabel("Z Coordinate (m)") plt.title("Output from Jacobian adjoint") plt.tight_layout() None Explanation: Plot the image Below we plot the velocity perturbation and the "image" recovered from the linearized Jacobian adjoint. We normalize both fields to their maximum absolute value. Note that with a single source and this transmission geometry, we should expect to see significant horizontal smearing in the image. End of explanation
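Returning to the exercise suggested above (building a single Operator that advances the nonlinear and linearized wavefields together, as the library's ISO_JacobianFwdOperator does): the sketch below is assembled from the stencils and injection/extraction terms already defined in this notebook, not the library implementation, and it assumes the usual Devito Operator interface with the ccode accessor for inspecting generated code. The name op_jacobian_fwd is made up for this illustration.

```python
# Sketch only: combine the nonlinear and linearized updates in one time loop.
op_jacobian_fwd = Operator([stencil_nl] + src_term_nl + [stencil_ln_fwd] + rec_term_ln_fwd,
                           subs=spacing_map, name='JacobianFwdSketch')
# Reset the wavefields and run both evolutions together
u0.data[:] = 0
duFwd.data[:] = 0
op_jacobian_fwd.apply()
# Print the generated C code to confirm that the nonlinear update and source injection
# are scheduled before the linearized update and receiver interpolation
print(op_jacobian_fwd.ccode)
```

Because the equations are passed in that order, the generated time loop updates u0, injects the source, then updates duFwd and interpolates it at the receivers, which is the ordering described above.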
15,949
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: write a function to read the directory recursively into a dataset
Python Code:: import tensorflow as tf from tensorflow.keras.utils import image_dataset_from_directory PATH = ".../Citrus/Leaves" ds = image_dataset_from_directory(PATH, validation_split=0.2, subset="training", image_size=(256,256), interpolation="bilinear", crop_to_aspect_ratio=True, seed=42, shuffle=True, batch_size=32)
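A companion snippet that is not part of the answer above but is the usual next step when validation_split is used: requesting the matching validation subset with the same directory, split fraction and seed so the two subsets stay disjoint. It reuses the PATH placeholder and the import from the code above.

```python
# Illustrative companion call: the validation portion of the same 80/20 split.
val_ds = image_dataset_from_directory(PATH,
                                      validation_split=0.2,
                                      subset="validation",
                                      image_size=(256, 256),
                                      interpolation="bilinear",
                                      crop_to_aspect_ratio=True,
                                      seed=42,
                                      shuffle=True,
                                      batch_size=32)
```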
15,950
Given the following text description, write Python code to implement the functionality described below step by step Description: This example assumes the notebook server has been called with ipython notebook --pylab inline and the trunk version of numba at Github. Step1: Numba provides two major decorators Step2: The speed-up is even more pronounced the more inner loops in the code. Here is an image processing example Step4: You can call Numba-created functions from other Numba-created functions and get even more amazing speed-ups. Step5: Numba works very well for numerical calculations and infers types for variables. You can over-ride this inference by passing in a locals dictionary to the autojit decorator. Notice how the code below shows both Python object manipulation and native manipulation Step6: Basic complex support is available as well. Some functions are still being implemented, however. Step7: We can even create a function that takes a structured array as input.
Python Code: import numpy as np from numba import autojit, jit, double %pylab inline Explanation: This example assumes the notebook server has been called with ipython notebook --pylab inline and the trunk version of numba at Github. End of explanation def sum(arr): M, N = arr.shape sum = 0.0 for i in range(M): for j in range(N): sum += arr[i,j] return sum fastsum = jit('f8(f8[:,:])')(sum) flexsum = autojit(sum) arr2d = np.arange(600,dtype=float).reshape(20,30) print(sum(arr2d)) print(fastsum(arr2d)) print(flexsum(arr2d)) print(flexsum(arr2d.astype(int))) %timeit sum(arr2d) %timeit fastsum(arr2d) 416 / .921# speedup %timeit arr2d.sum() 7.86 / .921 # even provides a speedup over general-purpose NumPy sum Explanation: Numba provides two major decorators: jit and autojit. The jit decorator returns a compiled version of the function using the input types and the output types of the function. You can specify the type using out_type(in_type, ...) syntax. Array inputs can be specified using [:,:] appended to the type. The autojit decorator does not require you to specify any types. It watches for what types you call the function with and infers the type of the return. If there is a previously compiled version of the code available it uses it, if not it generates machine code for the function and then executes that code. End of explanation @jit('void(f8[:,:],f8[:,:],f8[:,:])') def filter(image, filt, output): M, N = image.shape m, n = filt.shape for i in range(m//2, M-m//2): for j in range(n//2, N-n//2): result = 0.0 for k in range(m): for l in range(n): result += image[i+k-m//2,j+l-n//2]*filt[k, l] output[i,j] = result import urllib bytes = urllib.urlopen('http://www.cs.tut.fi/~foi/SA-DCT/original/image_Lake512.png').read() from matplotlib.pyplot import imread import cStringIO image = imread(cStringIO.StringIO(bytes)).astype('double') import time filt = np.ones((15,15),dtype='double') filt /= filt.sum() output = image.copy() filter(image, filt, output) gray() imshow(output) start = time.time() filter(image[:100,:100], filt, output[:100,:100]) fast = time.time() - start start = time.time() filter.py_func(image[:100,:100], filt, output[:100,:100]) slow = time.time() - start print("Python: %f s; Numba: %f ms; Speed up is %f" % (slow, fast*1000, slow / fast)) imshow(image) gray() Explanation: The speed-up is even more pronounced the more inner loops in the code. Here is an image processing example: End of explanation @autojit def mandel(x, y, max_iters): Given the real and imaginary parts of a complex number, determine if it is a candidate for membership in the Mandelbrot set given a fixed number of iterations. i = 0 c = complex(x, y) z = 0.0j for i in range(max_iters): z = z*z + c if (z.real*z.real + z.imag*z.imag) >= 4: return i return 255 @autojit def create_fractal(min_x, max_x, min_y, max_y, image, iters): height = image.shape[0] width = image.shape[1] pixel_size_x = (max_x - min_x) / width pixel_size_y = (max_y - min_y) / height for x in range(width): real = min_x + x * pixel_size_x for y in range(height): imag = min_y + y * pixel_size_y color = mandel(real, imag, iters) image[y, x] = color return image image = np.zeros((500, 750), dtype=np.uint8) imshow(create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)) jet() %timeit create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20) %timeit create_fractal.py_func(-2.0, 1.0, -1.0, 1.0, image, 20) 2.14/16.3e-3 Explanation: You can call Numba-created functions from other Numba-created functions and get even more amazing speed-ups. 
End of explanation from numba import double, autojit class MyClass(object): def mymethod(self, arg): return arg * 2 @autojit(locals=dict(mydouble=double)) # specify types for local variables def call_method(obj): print(obj.mymethod("hello")) # object result mydouble = obj.mymethod(10.2) # native double print(mydouble * 2) # native multiplication call_method(MyClass()) Explanation: Numba works very well for numerical calculations and infers types for variables. You can over-ride this inference by passing in a locals dictionary to the autojit decorator. Notice how the code below shows both Python object manipulation and native manipulation End of explanation @autojit def complex_support(real, imag): c = complex(real, imag) return (c ** 2).conjugate() c = 2.0 + 4.0j complex_support(c.real, c.imag), (c**2).conjugate() Explanation: Basic complex support is available as well. Some functions are still being implemented, however. End of explanation from numba import struct, jit, double import numpy as np record_type = struct([('x', double), ('y', double)]) record_dtype = record_type.get_dtype() a = np.array([(1.0, 2.0), (3.0, 4.0)], dtype=record_dtype) @jit(argtypes=[record_type[:]]) def hypot(data): # return types of numpy functions are inferred result = np.empty_like(data, dtype=np.float64) # notice access to structure elements 'x' and 'y' via attribute access # You can also index by field name or field index: # data[i].x == data[i]['x'] == data[i][0] for i in range(data.shape[0]): result[i] = np.sqrt(data[i].x * data[i].x + data[i].y * data[i].y) return result print(hypot(a)) # Notice inferred return type print(hypot.signature) # Notice native sqrt calls and for.body direct access to memory... print(hypot.lfunc) print(hypot.signature) # inspect function signature, note inferred return type [line for line in str(hypot.lfunc).splitlines() if 'sqrt' in line] # note native math calls Explanation: We can even create a function that takes a structured array as input. End of explanation
15,951
Given the following text description, write Python code to implement the functionality described below step by step Description: The issues associated with validation and cross-validation are some of the most important aspects of the practice of machine learning. Selecting the optimal model for your data is vital, and is a piece of the problem that is not often appreciated by machine learning practitioners. Of core importance is the following question Step1: Learning Curves What the right model for a dataset is depends critically on how much data we have. More data allows us to be more confident about building a complex model. Lets built some intuition on why that is. Look at the following datasets Step2: They all come from the same underlying process. But if you were asked to make a prediction, you would be more likely to draw a straight line for the left-most one, as there are only very few datapoints, and no real rule is apparent. For the dataset in the middle, some structure is recognizable, though the exact shape of the true function is maybe not obvious. With even more data on the right hand side, you would probably be very comfortable with drawing a curved line with a lot of certainty. A great way to explore how a model fit evolves with different dataset sizes are learning curves. A learning curve plots the validation error for a given model against different training set sizes. But first, take a moment to think about what we're going to see Step3: You can see that for the model with kernel = linear, the validation score doesn't really decrease as more data is given. Notice that the validation error generally decreases with a growing training set, while the training error generally increases with a growing training set. From this we can infer that as the training size increases, they will converge to a single value. From the above discussion, we know that kernel = linear underfits the data. This is indicated by the fact that both the training and validation errors are very high. When confronted with this type of learning curve, we can expect that adding more training data will not help matters
Python Code: import numpy as np import matplotlib.pyplot as plt from sklearn.pipeline import Pipeline from sklearn.svm import SVR from sklearn import cross_validation np.random.seed(0) n_samples = 200 kernels = ['linear', 'poly', 'rbf'] true_fun = lambda X: X ** 3 X = np.sort(5 * (np.random.rand(n_samples) - .5)) y = true_fun(X) + .01 * np.random.randn(n_samples) plt.figure(figsize=(14, 5)) for i in range(len(kernels)): ax = plt.subplot(1, len(kernels), i + 1) plt.setp(ax, xticks=(), yticks=()) model = SVR(kernel=kernels[i], C=5) model.fit(X[:, np.newaxis], y) # Evaluate the models using crossvalidation scores = cross_validation.cross_val_score(model, X[:, np.newaxis], y, scoring="mean_squared_error", cv=10) X_test = np.linspace(3 * -.5, 3 * .5, 100) plt.plot(X_test, model.predict(X_test[:, np.newaxis]), label="Model") plt.plot(X_test, true_fun(X_test), label="True function") plt.scatter(X, y, label="Samples") plt.xlabel("x") plt.ylabel("y") plt.xlim((-3 * .5, 3 * .5)) plt.ylim((-1, 1)) plt.legend(loc="best") plt.title("Kernel {}\nMSE = {:.2e}(+/- {:.2e})".format( kernels[i], -scores.mean(), scores.std())) plt.show() Explanation: The issues associated with validation and cross-validation are some of the most important aspects of the practice of machine learning. Selecting the optimal model for your data is vital, and is a piece of the problem that is not often appreciated by machine learning practitioners. Of core importance is the following question: If our estimator is underperforming, how should we move forward? Use simpler or more complicated model? Add more features to each observed data point? Add more training samples? The answer is often counter-intuitive. In particular, sometimes using a more complicated model will give worse results. Also, sometimes adding training data will not improve your results. The ability to determine what steps will improve your model is what separates the successful machine learning practitioners from the unsuccessful. Learning Curves and Validation Curves One way to address this issue is to use what are often called Learning Curves. Given a particular dataset and a model we'd like to fit (e.g. using feature creation and linear regression), we'd like to tune our value of the hyperparameter kernel to give us the best fit. We can visualize the different regimes with the following plot, modified from the sklearn examples here End of explanation import numpy as np import matplotlib.pyplot as plt from sklearn import cross_validation np.random.seed(0) n_samples = 200 true_fun = lambda X: X ** 3 X = np.sort(5 * (np.random.rand(n_samples) - .5)) y = true_fun(X) + .02 * np.random.randn(n_samples) X = X[:, None] y = y f, axarr = plt.subplots(1, 3) axarr[0].scatter(X[::20], y[::20]) axarr[0].set_xlim((-3 * .5, 3 * .5)) axarr[0].set_ylim((-1, 1)) axarr[1].scatter(X[::10], y[::10]) axarr[1].set_xlim((-3 * .5, 3 * .5)) axarr[1].set_ylim((-1, 1)) axarr[2].scatter(X, y) axarr[2].set_xlim((-3 * .5, 3 * .5)) axarr[2].set_ylim((-1, 1)) plt.show() Explanation: Learning Curves What the right model for a dataset is depends critically on how much data we have. More data allows us to be more confident about building a complex model. Lets built some intuition on why that is. 
Look at the following datasets: End of explanation from sklearn.learning_curve import learning_curve from sklearn.svm import SVR training_sizes, train_scores, test_scores = learning_curve(SVR(kernel='linear'), X, y, cv=10, scoring="mean_squared_error", train_sizes=[.6, .7, .8, .9, 1.]) # Use the negative because we want to minimize squared error plt.plot(training_sizes, -train_scores.mean(axis=1), label="training scores") plt.plot(training_sizes, -test_scores.mean(axis=1), label="test scores") plt.ylim((0, 50)) plt.legend(loc='best') Explanation: They all come from the same underlying process. But if you were asked to make a prediction, you would be more likely to draw a straight line for the left-most one, as there are only very few datapoints, and no real rule is apparent. For the dataset in the middle, some structure is recognizable, though the exact shape of the true function is maybe not obvious. With even more data on the right hand side, you would probably be very comfortable with drawing a curved line with a lot of certainty. A great way to explore how a model fit evolves with different dataset sizes are learning curves. A learning curve plots the validation error for a given model against different training set sizes. But first, take a moment to think about what we're going to see: Questions: As the number of training samples are increased, what do you expect to see for the training error? For the validation error? Would you expect the training error to be higher or lower than the validation error? Would you ever expect this to change? We can run the following code to plot the learning curve for a kernel = linear model: End of explanation from sklearn.learning_curve import learning_curve from sklearn.svm import SVR training_sizes, train_scores, test_scores = learning_curve(SVR(kernel='rbf'), X, y, cv=10, scoring="mean_squared_error", train_sizes=[.6, .7, .8, .9, 1.]) # Use the negative because we want to minimize squared error plt.plot(training_sizes, -train_scores.mean(axis=1), label="training scores") plt.plot(training_sizes, -test_scores.mean(axis=1), label="test scores") plt.ylim((0, 50)) plt.legend(loc='best') Explanation: You can see that for the model with kernel = linear, the validation score doesn't really decrease as more data is given. Notice that the validation error generally decreases with a growing training set, while the training error generally increases with a growing training set. From this we can infer that as the training size increases, they will converge to a single value. From the above discussion, we know that kernel = linear underfits the data. This is indicated by the fact that both the training and validation errors are very high. When confronted with this type of learning curve, we can expect that adding more training data will not help matters: both lines will converge to a relatively high error. When the learning curves have converged to a high error, we have an underfitting model. An underfitting model can be improved by: Using a more sophisticated model (i.e. in this case, increase complexity of the kernel parameter) Gather more features for each sample. Decrease regularization in a regularized model. A underfitting model cannot be improved, however, by increasing the number of training samples (do you see why?) Now let's look at an overfit model: End of explanation
15,952
Given the following text description, write Python code to implement the functionality described below step by step Description: Segmentation Segmentation is the division of an image into "meaningful" regions. If you've seen The Terminator, you've seen image segmentation Step1: We can try to create a nicer visualization for labels Step2: Notice that some spices are broken up into "light" and "dark" parts. We have multiple parameters to control this Step3: Yikes! It looks like a little too much merging went on! This is because of the intertwining of the labels. One way to avoid this is to blur the image before segmentation. Because this is such a common use-case, a Gaussian blur is included in SLIC--just pass in the sigma parameter Step4: Getting there! But it looks like some regions are merged together. We can alleviate this by increasing the number of segments Step5: That's looking pretty good! Some regions are still too squiggly though... Let's try jacking up the compactness Step6: <span class="exercize">SLIC explorer</span> Write an interactive tool to explore the SLIC parameter space. A skeleton is given below. ```python from IPython.html import widgets def func(slider_kwarg=0.5, dropdown_kwarg='option0') Step7: Image types Step8: The watershed algorithm finds the regions between these edges. It does so by envisioning the pixel intensity as height on a topographic map. It then "floods" the map from the bottom up, starting from seed points. These flood areas are called "watershed basins" and when they meet, they form the image segmentation. Let's look at a one-dimensional example Step9: Answers the question Step10: Then, we find the peaks in that image--the background points furthest away from any edges--which will act as the seeds. Step11: We are now ready to perform the watershed Step12: Examining the resulting segmentation We have more prior knowledge that we can include in this processing problem. For one--the coins are round! Step13: <span class="exercize">Seeds of doubt</span> We can see that watershed gives a very good segmentation, but some coins are missing. Why? Can you suggest better seed points for the watershed operation? Discussion Watershed and SLIC are too simple to be used as final segmentation outputs. In fact, their output is often called a superpixel, a kind of minimal segment. These are then used for further processing. Downstream processing methods are slated to be added to scikit-image in the next version. See Vighnesh Birodkar's GSoC project and his recent (and excellent) PR. These are beyond the scope of this tutorial but come chat to me after if you are interested in segmentation! <div style="height
Python Code: import numpy as np from matplotlib import pyplot as plt import skdemo plt.rcParams['image.cmap'] = 'spectral' from skimage import io, segmentation as seg, color url = '../images/spice_1.jpg' image = io.imread(url) labels = seg.slic(image, n_segments=18, compactness=10) skdemo.imshow_all(image, labels.astype(float) / labels.max()) print(labels) Explanation: Segmentation Segmentation is the division of an image into "meaningful" regions. If you've seen The Terminator, you've seen image segmentation: <img src="../2014-scipy/images/terminator-vision.png" width="700px"/> In scikit-image, you can find segmentation functions in the segmentation package, with one exception: the watershed function is in morphology, because it's a bit of both. We'll use two algorithms, SLIC and watershed, and discuss applications of each. There are two kinds of segmentation: contrast-based and boundary-based. The first is used when the regions of the image you are trying to divide have different characteristics, such as a red flower on a green background. The second is used when you want to segment an image in which borders between objects are prominent, but objects themselves are not very distinct. For example, a pile of oranges. Image types: contrast SLIC (Simple Linear Iterative Clustering) is a segmentation algorithm of the first kind: it clusters pixels in both space and color. Therefore, regions of space that are similar in color will end up in the same segment. Let's try to segment this image: <img src="../images/spice_1.jpg" width="400px"/> (Photo by Flickr user Clyde Robinson, used under CC-BY 2.0 license.) The SLIC function takes two parameters: the desired number of segments, and the "compactness", which is the relative weighting of the space and color dimensions. The higher the compactness, the more "square" the returned segments. End of explanation def mean_color(image, labels): out = np.zeros_like(image) for label in np.unique(labels): indices = np.nonzero(labels == label) out[indices] = np.mean(image[indices], axis=0) return out skdemo.imshow_all(image, mean_color(image, labels)) Explanation: We can try to create a nicer visualization for labels: each segment will be represented by its average color. End of explanation labels = seg.slic(image, n_segments=18, compactness=10, enforce_connectivity=True) label_image = mean_color(image, labels) skdemo.imshow_all(image, label_image) Explanation: Notice that some spices are broken up into "light" and "dark" parts. We have multiple parameters to control this: enforce_connectivity: Do some post-processing so that small regions get merged to adjacent big regions. End of explanation labels = seg.slic(image, n_segments=18, compactness=10, sigma=2, enforce_connectivity=True) label_image = mean_color(image, labels) skdemo.imshow_all(image, label_image) Explanation: Yikes! It looks like a little too much merging went on! This is because of the intertwining of the labels. One way to avoid this is to blur the image before segmentation. Because this is such a common use-case, a Gaussian blur is included in SLIC--just pass in the sigma parameter: End of explanation labels = seg.slic(image, n_segments=24, compactness=10, sigma=2, enforce_connectivity=True) label_image = mean_color(image, labels) skdemo.imshow_all(image, label_image) Explanation: Getting there! But it looks like some regions are merged together. 
We can alleviate this by increasing the number of segments: End of explanation labels = seg.slic(image, n_segments=24, compactness=40, sigma=2, enforce_connectivity=True) label_image = mean_color(image, labels) skdemo.imshow_all(image, label_image) Explanation: That's looking pretty good! Some regions are still too squiggly though... Let's try jacking up the compactness: End of explanation url2 = '../images/spices.jpg' Explanation: <span class="exercize">SLIC explorer</span> Write an interactive tool to explore the SLIC parameter space. A skeleton is given below. ```python from IPython.html import widgets def func(slider_kwarg=0.5, dropdown_kwarg='option0'): s = some_func(image, arg1=slider_kwarg, arg2=dropdown_kwarg) skdemo.imshow_all(image, s) widgets.interact(func, slider_kwarg=(start, stop, step), dropdown_kwarg=['option0', 'option1']) ``` <span class="exercize">Select the spices</span> Try segmenting the following image with a modification to the same tool: <img src="../images/spices.jpg" width="400px"/> "Spices" photo by Flickr user Riyaad Minty. https://www.flickr.com/photos/riym/3326786046 Used under the Creative Commons CC-BY 2.0 license. Note: this image is more challenging to segment because the color regions are different from one part of the image to the other. Try the slic_zero parameter in combination with different values for n_segments. End of explanation from skimage import data from skimage import filters from matplotlib import pyplot as plt, cm coins = data.coins() edges = filters.sobel(coins) plt.imshow(edges, cmap='gray'); Explanation: Image types: boundary images Often, the contrast between regions is not sufficient to distinguish them, but there is a clear boundary between the two. Using an edge detector on these images, followed by a watershed, often gives very good segmentation. For example, look at the output of the Sobel filter on the coins image: End of explanation from skimage.morphology import watershed from scipy import ndimage as ndi x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) y = np.array([1, 0, 1, 2, 1, 3, 2, 0, 2, 4, 1, 0]) seeds = ndi.label(y == 0)[0] seed_positions = np.argwhere(seeds)[:, 0] print("Seeds:", seeds) print("Seed positions:", seed_positions) # ------------------------------- # result = watershed(y, seeds) # ------------------------------- # # You can ignore the code below--it's just # to make a pretty plot of the results. plt.figure(figsize=(10, 5)) plt.plot(y, '-o', label='Image slice', linewidth=3) plt.plot(seed_positions, np.zeros_like(seed_positions), 'r^', label='Seeds', markersize=15) for n, label in enumerate(np.unique(result)): mask = (result == label) plt.bar(x[mask][:-1], result[mask][:-1], width=1, label='Region %d' % n, alpha=0.1) plt.vlines(np.argwhere(np.diff(result)) + 0.5, -0.2, 4.1, 'm', linewidth=3, linestyle='--') plt.legend(loc='upper left', numpoints=1) plt.axis('off') plt.ylim(-0.2, 4.1); Explanation: The watershed algorithm finds the regions between these edges. It does so by envisioning the pixel intensity as height on a topographic map. It then "floods" the map from the bottom up, starting from seed points. These flood areas are called "watershed basins" and when they meet, they form the image segmentation. Let's look at a one-dimensional example: End of explanation threshold = 0.4 # Euclidean distance transform # How far do we ave to travel from a non-edge to find an edge? 
non_edges = (edges < threshold) distance_from_edge = ndi.distance_transform_edt(non_edges) plt.imshow(distance_from_edge, cmap='gray'); Explanation: Answers the question: which seed flooded this point? Let's find some seeds for coins. First, we compute the distance transform of a thresholded version of edges: End of explanation from skimage import feature # -------------------------------------------------# peaks = feature.peak_local_max(distance_from_edge) print("Peaks shape:", peaks.shape) # -------------------------------------------------# peaks_image = np.zeros(coins.shape, np.bool) peaks_image[tuple(np.transpose(peaks))] = True seeds, num_seeds = ndi.label(peaks_image) plt.imshow(edges, cmap='gray') plt.plot(peaks[:, 1], peaks[:, 0], 'ro'); plt.axis('image') Explanation: Then, we find the peaks in that image--the background points furthest away from any edges--which will act as the seeds. End of explanation ws = watershed(edges, seeds) from skimage import color plt.imshow(color.label2rgb(ws, coins)); Explanation: We are now ready to perform the watershed: End of explanation from skimage.measure import regionprops regions = regionprops(ws) ws_updated = ws.copy() for region in regions: if region.eccentricity > 0.6: ws_updated[ws_updated == region.label] = 0 plt.imshow(color.label2rgb(ws_updated, coins, bg_label=0)); Explanation: Examining the resulting segmentation We have more prior knowledge that we can include in this processing problem. For one--the coins are round! End of explanation %reload_ext load_style %load_style ../themes/tutorial.css Explanation: <span class="exercize">Seeds of doubt</span> We can see that watershed gives a very good segmentation, but some coins are missing. Why? Can you suggest better seed points for the watershed operation? Discussion Watershed and SLIC are too simple to be used as final segmentation outputs. In fact, their output is often called a superpixel, a kind of minimal segment. These are then used for further processing. Downstream processing methods are slated to be added to scikit-image in the next version. See Vighnesh Birodkar's GSoC project and his recent (and excellent) PR. These are beyond the scope of this tutorial but come chat to me after if you are interested in segmentation! <div style="height: 400px;"></div> End of explanation
15,953
Given the following text description, write Python code to implement the functionality described below step by step Description: SF Purchases Example In this example, interact is used to build a UI for exploring San Francisco department purchases by city agency data. Step1: You can take a quick look at the first 5 rows of the data set using a slice. Pandas knows how to display this as a table in IPython. Step2: Notice that the totals are of type object (strings) instead of numbers. Step3: Remove the dollar sign from the strings and cast them to numbers. Step4: Now the data can be explored using matplotlib and interact. The following function plots the costs of the selected parameter type.
Python Code: # Import Pandas and then load the data. from pandas import read_csv df = read_csv('SFDeptPurchases.csv') Explanation: SF Purchases Example In this example, interact is used to build a UI for exploring San Francisco department purchases by city agency data. End of explanation df[:5] Explanation: You can take a quick look at the first 5 rows of the data set using a slice. Pandas knows how to display this as a table in IPython. End of explanation df[:5]['Total'] Explanation: Notice that the totals are of type object (strings) instead of numbers. End of explanation df['Total'] = df['Total'].str.replace(r'[$,]', '').convert_objects(convert_numeric=True) df[:5]['Total'] Explanation: Remove the dollar sign from the strings and cast them to numbers. End of explanation %matplotlib inline from matplotlib import pyplot as plt from pandas import DataFrame def plot_by(df, column='Dept Name', count=10, ascending=False): # Group the data by the column specified and sum the totals. data = df.groupby(column)['Total'].sum().dropna() # Sort the data. data = DataFrame(data, columns=['Total']).sort('Total', ascending=ascending) # Plot the subset of the sorted data that the user is interested in. data = data[:count].plot(kind='bar') # Plot settings. plt.title('%s Costs' % column) plt.ylabel('Cost ($)') from IPython.html.widgets import interact, fixed interact(plot_by, df=fixed(df), column=df.columns.tolist(), count=(5,15)); Explanation: Now the data can be explored using matplotlib and interact. The following function plots the costs of the selected parameter type. End of explanation
15,954
Given the following text description, write Python code to implement the functionality described below step by step Description: Limb Darkening Setup Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details. Step2: We'll just add an 'lc' dataset Step3: Relevant Parameters Step4: Note that ld_coeffs isn't visible (relevant) if ld_func=='interp' Step5: Influence on Light Curves (fluxes)
Python Code: !pip install -I "phoebe>=2.1,<2.2" Explanation: Limb Darkening Setup Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). End of explanation %matplotlib inline import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary() Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details. End of explanation b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01') Explanation: We'll just add an 'lc' dataset End of explanation print b['ld_func_bol@primary'] print b['ld_func_bol@primary'].choices print b['ld_coeffs_bol@primary'] print b['ld_func@lc01'] print b['ld_func@lc01@primary'].choices Explanation: Relevant Parameters End of explanation b['ld_func@lc01@primary'] = 'logarithmic' print b['ld_coeffs@lc01@primary'] Explanation: Note that ld_coeffs isn't visible (relevant) if ld_func=='interp' End of explanation b.run_compute(model='mymodel') afig, mplfig = b['lc01@mymodel'].plot(show=True) Explanation: Influence on Light Curves (fluxes) End of explanation
15,955
Given the following text description, write Python code to implement the functionality described below step by step Description: Regression Week 4 Step1: Load in house sales data Dataset is from house sales in King County, the region where the city of Seattle, WA is located. Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features. Import useful functions from previous notebook As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_num_data() from the second notebook of Week 2. Step3: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights Step4: Computing the Derivative We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term. Cost(w) = SUM[ (prediction - output)^2 ] + l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2). Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to w[i] can be written as Step5: To test your feature derivartive run the following Step6: Gradient Descent Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function. The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a maximum number of iterations and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set 100 by default. (Use default parameter values in Python.) With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria. Step7: Visualizing effect of L2 penalty The L2 penalty gets its name because it causes weights to have small L2 norms than otherwise. Let's see how large weights get penalized. Let us consider a simple model with 1 feature Step8: Let us split the dataset into training set and test set. Make sure to use seed=0 Step9: In this part, we will only use 'sqft_living' to predict 'price'. Use the get_numpy_data function to get a Numpy versions of your data with only this feature, for both the train_data and the test_data. Step10: Let's set the parameters for our optimization Step11: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights Step12: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. 
Call your weights Step13: This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.) Step14: Compute the RSS on the TEST data for the following three sets of weights Step15: QUIZ QUESTIONS 1. What is the value of the coefficient for sqft_living that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization? 2. Comparing the lines you fit with the with no regularization versus high regularization, which one is steeper? 3. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)? Step16: Running a multiple regression with L2 penalty Let us now consider a model with 2 features Step17: We need to re-inialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations. Step18: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights Step19: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights Step20: Compute the RSS on the TEST data for the following three sets of weights Step21: Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house? Step22: QUIZ QUESTIONS 1. What is the value of the coefficient for sqft_living that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization? 2. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)? 3. We make prediction for the first house in the test set using two sets of weights (no regularization vs high regularization). Which weights make better prediction <u>for that particular house</u>?
Python Code: import graphlab Explanation: Regression Week 4: Ridge Regression (gradient descent) In this notebook, you will implement ridge regression via gradient descent. You will: * Convert an SFrame into a Numpy array * Write a Numpy function to compute the derivative of the regression weights with respect to a single feature * Write gradient descent function to compute the regression weights given an initial weight vector, step size, tolerance, and L2 penalty Fire up graphlab create Make sure you have the latest version of GraphLab Create (>= 1.7) End of explanation sales = graphlab.SFrame('kc_house_data.gl/') Explanation: Load in house sales data Dataset is from house sales in King County, the region where the city of Seattle, WA is located. End of explanation import numpy as np # note this allows us to refer to numpy as np instead def get_numpy_data(data_sframe, features, output): data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame # add the column 'constant' to the front of the features list so that we can extract it along with the others: features = ['constant'] + features # this is how you combine two lists # select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant): features_sframe = data_sframe[features] # the following line will convert the features_SFrame into a numpy matrix: feature_matrix = features_sframe.to_numpy() # assign the column of data_sframe associated with the output to the SArray output_sarray output_sarray = data_sframe[output] # the following will convert the SArray into a numpy array by first converting it to a list output_array = output_sarray.to_numpy() return(feature_matrix, output_array) Explanation: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features. Import useful functions from previous notebook As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_num_data() from the second notebook of Week 2. End of explanation def predict_output(feature_matrix, weights): # assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array # create the predictions vector by using np.dot() predictions = [] for col in range(feature_matrix.shape[0]): predictions.append(np.dot(feature_matrix[col,], weights)) return(predictions) Explanation: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights: End of explanation def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant): # If feature_is_constant is True, derivative is twice the dot product of errors and feature if feature_is_constant: derivative = 2 * np.dot(errors, feature) # Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight else: derivative = 2 * np.dot(errors, feature) + 2 * l2_penalty * weight return derivative Explanation: Computing the Derivative We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term. Cost(w) = SUM[ (prediction - output)^2 ] + l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2). 
Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to w[i] can be written as: 2*SUM[ error*[feature_i] ]. The derivative of the regularization term with respect to w[i] is: 2*l2_penalty*w[i]. Summing both, we get 2*SUM[ error*[feature_i] ] + 2*l2_penalty*w[i]. That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself, plus 2*l2_penalty*w[i]. We will not regularize the constant. Thus, in the case of the constant, the derivative is just twice the sum of the errors (without the 2*l2_penalty*w[0] term). Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors, plus 2*l2_penalty*w[i]. With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points). To decide when to we are dealing with the constant (so we don't regularize it) we added the extra parameter to the call feature_is_constant which you should set to True when computing the derivative of the constant and False otherwise. End of explanation (example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') my_weights = np.array([1., 10.]) test_predictions = predict_output(example_features, my_weights) errors = test_predictions - example_output # prediction errors # next two lines should print the same values print feature_derivative_ridge(errors, example_features[:,1], my_weights[1], 1, False) print np.sum(errors*example_features[:,1])*2+20. print '' # next two lines should print the same values print feature_derivative_ridge(errors, example_features[:,0], my_weights[0], 1, True) print np.sum(errors)*2. Explanation: To test your feature derivartive run the following: End of explanation def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100): weights = np.array(initial_weights) # make sure it's a numpy array #while not reached maximum number of iterations: for _iter in range(max_iterations): # compute the predictions based on feature_matrix and weights using your predict_output() function predictions = predict_output(feature_matrix, weights) # compute the errors as predictions - output errors = predictions - output for i in xrange(len(weights)): # loop over each weight # Recall that feature_matrix[:,i] is the feature column associated with weights[i] # compute the derivative for weight[i]. #(Remember: when i=0, you are computing the derivative of the constant!) derivative = feature_derivative_ridge(errors, feature_matrix[:,i], weights[i], l2_penalty, bool(i == 0)) # subtract the step size times the derivative from the current weight weights[i] -= step_size * derivative return weights Explanation: Gradient Descent Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. 
Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function. The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a maximum number of iterations and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set 100 by default. (Use default parameter values in Python.) With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria. End of explanation simple_features = ['sqft_living'] my_output = 'price' Explanation: Visualizing effect of L2 penalty The L2 penalty gets its name because it causes weights to have small L2 norms than otherwise. Let's see how large weights get penalized. Let us consider a simple model with 1 feature: End of explanation train_data,test_data = sales.random_split(.8,seed=0) Explanation: Let us split the dataset into training set and test set. Make sure to use seed=0: End of explanation (simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output) (simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output) Explanation: In this part, we will only use 'sqft_living' to predict 'price'. Use the get_numpy_data function to get a Numpy versions of your data with only this feature, for both the train_data and the test_data. End of explanation initial_weights = np.array([0., 0.]) step_size = 1e-12 max_iterations=1000 Explanation: Let's set the parameters for our optimization: End of explanation simple_weights_0_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, 0, max_iterations) Explanation: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights: simple_weights_0_penalty we'll use them later. End of explanation simple_weights_high_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, 1e11, max_iterations) Explanation: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights: simple_weights_high_penalty we'll use them later. End of explanation import matplotlib.pyplot as plt %matplotlib inline plt.plot(simple_feature_matrix,output,'k.', simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_0_penalty),'b-', simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_high_penalty),'r-') Explanation: This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.) 
End of explanation predictions_1 = predict_output(simple_test_feature_matrix, initial_weights) residuals_1 = [(predictions_1[i] - test_output[i]) ** 2 for i in range(len(predictions_1))] print sum(residuals_1) predictions_2 = predict_output(simple_test_feature_matrix, simple_weights_0_penalty) residuals_2 = [(predictions_2[i] - test_output[i]) ** 2 for i in range(len(predictions_2))] print sum(residuals_2) predictions_3 = predict_output(simple_test_feature_matrix, simple_weights_high_penalty) residuals_3 = [(predictions_3[i] - test_output[i]) ** 2 for i in range(len(predictions_3))] print sum(residuals_3) Explanation: Compute the RSS on the TEST data for the following three sets of weights: 1. The initial weights (all zeros) 2. The weights learned with no regularization 3. The weights learned with high regularization Which weights perform best? End of explanation simple_weights_0_penalty simple_weights_high_penalty Explanation: QUIZ QUESTIONS 1. What is the value of the coefficient for sqft_living that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization? 2. Comparing the lines you fit with the with no regularization versus high regularization, which one is steeper? 3. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)? End of explanation model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors. my_output = 'price' (feature_matrix, train_output) = get_numpy_data(train_data, model_features, my_output) (test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output) Explanation: Running a multiple regression with L2 penalty Let us now consider a model with 2 features: ['sqft_living', 'sqft_living15']. First, create Numpy versions of your training and test data with these two features. End of explanation initial_weights = np.array([0.0,0.0,0.0]) step_size = 1e-12 max_iterations = 1000 Explanation: We need to re-inialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations. End of explanation multiple_weights_0_penalty = ridge_regression_gradient_descent(feature_matrix, train_output, initial_weights, step_size, 0, max_iterations) Explanation: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights: multiple_weights_0_penalty End of explanation multiple_weights_high_penalty = ridge_regression_gradient_descent(feature_matrix, train_output, initial_weights, step_size, 1e11, max_iterations) Explanation: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. 
Call your weights: multiple_weights_high_penalty End of explanation predictions_4 = predict_output(test_feature_matrix, initial_weights) residuals_4 = [(predictions_4[i] - test_output[i]) ** 2 for i in range(len(predictions_4))] print sum(residuals_4) predictions_5 = predict_output(test_feature_matrix, multiple_weights_0_penalty) residuals_5 = [(predictions_5[i] - test_output[i]) ** 2 for i in range(len(predictions_5))] print sum(residuals_5) predictions_6 = predict_output(test_feature_matrix, multiple_weights_high_penalty) residuals_6 = [(predictions_6[i] - test_output[i]) ** 2 for i in range(len(predictions_6))] print sum(residuals_6) Explanation: Compute the RSS on the TEST data for the following three sets of weights: 1. The initial weights (all zeros) 2. The weights learned with no regularization 3. The weights learned with high regularization Which weights perform best? End of explanation first = test_data[0] a, b, c= multiple_weights_0_penalty p_0 = a + b * first['sqft_living'] + c * first['sqft_living15'] print p_0 d, e, f = multiple_weights_high_penalty p_high = d + e * first['sqft_living'] + f * first['sqft_living15'] print p_high first['price'] Explanation: Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house? End of explanation multiple_weights_0_penalty multiple_weights_high_penalty Explanation: QUIZ QUESTIONS 1. What is the value of the coefficient for sqft_living that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization? 2. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)? 3. We make prediction for the first house in the test set using two sets of weights (no regularization vs high regularization). Which weights make better prediction <u>for that particular house</u>? End of explanation
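As a small addendum to the RSS comparisons above: the same numbers can be computed without the explicit Python loops. A minimal vectorised sketch, reusing the predict_output function from earlier in the notebook:

```python
import numpy as np

def rss(feature_matrix, true_output, weights):
    # Residual sum of squares for one set of weights.
    errors = predict_output(feature_matrix, weights) - true_output
    return np.dot(errors, errors)

# e.g. rss(test_feature_matrix, test_output, multiple_weights_high_penalty)
```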
15,956
Given the following text description, write Python code to implement the functionality described below step by step Description: Template for test Step1: Controlling for Random Negatve vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation. Included is N Phosphorylation however no benchmarks are available, yet. Training data is from phospho.elm and benchmarks are from dbptm. Note Step2: Y Phosphorylation Step3: T Phosphorylation
Python Code: from pred import Predictor from pred import sequence_vector from pred import chemical_vector Explanation: Template for test End of explanation par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"] for i in par: print("y", i) y = Predictor() y.load_data(file="Data/Training/clean_s_filtered.csv") y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0) y.supervised_training("xgb") y.benchmark("Data/Benchmarks/phos_stripped.csv", "S") del y print("x", i) x = Predictor() x.load_data(file="Data/Training/clean_s_filtered.csv") x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1) x.supervised_training("xgb") x.benchmark("Data/Benchmarks/phos_stripped.csv", "S") del x Explanation: Controlling for Random Negatve vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation. Included is N Phosphorylation however no benchmarks are available, yet. Training data is from phospho.elm and benchmarks are from dbptm. Note: SMOTEEN seems to preform best End of explanation par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"] for i in par: print("y", i) y = Predictor() y.load_data(file="Data/Training/clean_Y_filtered.csv") y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0) y.supervised_training("xgb") y.benchmark("Data/Benchmarks/phos_stripped.csv", "Y") del y print("x", i) x = Predictor() x.load_data(file="Data/Training/clean_Y_filtered.csv") x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1) x.supervised_training("xgb") x.benchmark("Data/Benchmarks/phos_stripped.csv", "Y") del x Explanation: Y Phosphorylation End of explanation par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"] for i in par: print("y", i) y = Predictor() y.load_data(file="Data/Training/clean_t_filtered.csv") y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0) y.supervised_training("xgb") y.benchmark("Data/Benchmarks/phos_stripped.csv", "T") del y print("x", i) x = Predictor() x.load_data(file="Data/Training/clean_t_filtered.csv") x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1) x.supervised_training("xgb") x.benchmark("Data/Benchmarks/phos_stripped.csv", "T") del x Explanation: T Phosphorylation End of explanation
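The Predictor class above comes from the project's own pred module, so the resampling happens behind its imbalance_function argument. For readers who want to see what one of those techniques does in isolation, here is a rough, standalone sketch using the imbalanced-learn and xgboost packages — the function name and arguments are placeholders for illustration, not the pred API itself:

```python
# Illustration only: rebalance a training split with SMOTEENN (or ADASYN), then fit XGBoost.
from imblearn.combine import SMOTEENN
from imblearn.over_sampling import ADASYN
from xgboost import XGBClassifier

def train_with_resampling(X_train, y_train, method="SMOTEENN"):
    sampler = SMOTEENN() if method == "SMOTEENN" else ADASYN()
    # fit_resample on recent imbalanced-learn releases (older ones call it fit_sample);
    # only the training split is resampled — benchmark/test data stay untouched.
    X_res, y_res = sampler.fit_resample(X_train, y_train)
    model = XGBClassifier()
    model.fit(X_res, y_res)
    return model
```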
15,957
Given the following text description, write Python code to implement the functionality described below step by step Description: Elements and the periodic table This data came from Penn State CS professor Doug Hogan. Thanks to UCF undergraduates Sam Borges, for finding the data set, and Lissa Galguera, for formatting it. Step1: Getting the data Step2: Looking at some relationships
Python Code: # Import modules that contain functions we need import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt Explanation: Elements and the periodic table This data came from Penn State CS professor Doug Hogan. Thanks to UCF undergraduates Sam Borges, for finding the data set, and Lissa Galguera, for formatting it. End of explanation # Read in data that will be used for the calculations. # The data needs to be in the same directory(folder) as the program # Using pandas read_csv method, we can create a data frame data = pd.read_csv("./data/elements.csv") # If you're not using a Binder link, you can get the data with this instead: #data = pd.read_csv("http://php.scripts.psu.edu/djh300/cmpsc221/pt-data1.csv")" # displays the first several rows of the data set data.head() # the names of all the columns in the dataset data.columns Explanation: Getting the data End of explanation ax = data.plot('Atomic Number', 'Atomic Radius (pm)', title="Atomic Radius vs. Atomic Number", legend=False) ax.set(xlabel="x label", ylabel="y label") data.plot('Atomic Number', 'Mass') data[['Name', 'Year Discovered']].sort_values(by='Year Discovered') Explanation: Looking at some relationships End of explanation
15,958
Given the following text description, write Python code to implement the functionality described below step by step Description: Phone Digits Given a phone number create a list of all the possible words that you can make given a dictionary from numbers to letters. In python there is a itertools.permutations('abc') that would print all permutations given some input. ```python import itertools itertools.permutations('abc') [i for i in itertools.permutations('abc')] output permutations ``` Step1: Print Longest Common Subsequence This is a good problem for working out variations where you count contiguous subsequence versus non continuous The move with longest common subsequence is to start from the back of the strings and see if the letters are the same. Then increment with a dynamic programming approach where Step2: Time Travelling dictionary Design a time traveling dictionary, has a get and put function where the get function takes a time and returns the corresponding value at the time. Step3: Alien Dictionary Given a sorted dictionary of an alien language, find order of characters ```python Input Step4: Binary Search
Python Code: letters_map = {'2':'ABC', '3':'DEF', '4':'GHI', '5':'JKL', '6':'MNO', '7':'PQRS', '8':'TUV', '9':'WXYZ'} def printWords(number, ): #number is phone number def printWordsUtil(numb, curr_digit, output, n): if curr_digit == n: print('%s ' % output) return for i in range(len(letters_map[numb[curr_digit]])): output[curr_digit] = letters_map[number[curr_digit]][i] printWordsUtil(numb, curr_digit+1, output, n) if numb[curr_digit] == 0 or numb[curr_digit] == 1: return def gen_phone(digits): results = [] lookup = { '0': ' ', '1': ' ', '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl', '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz', } def decode_next(s, i): if i == len(digits): results.append(s) return for c in lookup[digits[i]]: decode_next(s + c, i + 1) decode_next('', 0) return results Explanation: Phone Digits Given a phone number create a list of all the possible words that you can make given a dictionary from numbers to letters. In python there is a itertools.permutations('abc') that would print all permutations given some input. ```python import itertools itertools.permutations('abc') [i for i in itertools.permutations('abc')] output permutations ``` End of explanation # Dynamic programming implementation of LCS problem # Returns length of LCS for X[0..m-1], Y[0..n-1] def lcs(X, Y, m, n): L = [[0 for x in xrange(n+1)] for x in xrange(m+1)] # Following steps build L[m+1][n+1] in bottom up fashion. Note # that L[i][j] contains length of LCS of X[0..i-1] and Y[0..j-1] for i in xrange(m+1): for j in xrange(n+1): if i == 0 or j == 0: L[i][j] = 0 elif X[i-1] == Y[j-1]: L[i][j] = L[i-1][j-1] + 1 else: L[i][j] = max(L[i-1][j], L[i][j-1]) # Following code is used to print LCS index = L[m][n] # Create a character array to store the lcs string lcs = [""] * (index+1) lcs[index] = "\0" # Start from the right-most-bottom-most corner and # one by one store characters in lcs[] i = m j = n while i > 0 and j > 0: # If current character in X[] and Y are same, then # current character is part of LCS if X[i-1] == Y[j-1]: lcs[index-1] = X[i-1] i-=1 j-=1 index-=1 # If not same, then find the larger of two and # go in the direction of larger value elif L[i-1][j] > L[i][j-1]: i-=1 else: j-=1 print "LCS of " + X + " and " + Y + " is " + "".join(lcs) # Driver program X = "AGGTAB" Y = "GXTXAYB" m = len(X) n = len(Y) lcs(X, Y, m, n) passed in a list of dictionaries also passed a character passed single characted to int if a character does not exist in the dict then the defualt value it zero find the highest possisble value for a character in the dicts now design it to take an abatrary operator and reutrn the highest value based on the operator and then have it return ascending and descending order Explanation: Print Longest Common Subsequence This is a good problem for working out variations where you count contiguous subsequence versus non continuous The move with longest common subsequence is to start from the back of the strings and see if the letters are the same. 
Then increment with a dynamic programming approach where End of explanation import time import math class TimeTravelDict: def __init__(self): self.dict = {} def get(self, key, time): if not self.dict[key]: return -1 most_recent, value = math.inf, None for a, b in self.dict[key]: if b < time: if (time - b) < most_recent: most_recent = b value = a if value == None: return -1 else: return value def put(self, key, value): if not key in self.dict: self.dict[key] = [(value, time.time())] self.dict[key].append((value, time.time())) print(self.dict[key]) tt = TimeTravelDict() tt.put('a', 11) tt.put('a', 12) tt.put('a', 13) tt.put('a', 14) tt.get('a', 1513571590.2447577) Explanation: Time Travelling dictionary Design a time traveling dictionary, has a get and put function where the get function takes a time and returns the corresponding value at the time. End of explanation #[2::][1::2] import collections words = ["baa", "", "abcd", "abca", "cab", "cad"] def alienOrder(words): pre, suc = collections.defaultdict(set), collections.defaultdict(set) for pair in zip(words, words[1:]): print(pair) for a, b in zip(*pair): if a != b: suc[a].add(b) pre[b].add(a) break print('succ %s' % suc) print('pred %s' % pre) chars = set(''.join(words)) print('chars %s' % chars) print(set(pre)) free = chars - set(pre) print('free %s' % free) order = '' while free: a = free.pop() order += a for b in suc[a]: pre[b].discard(a) if not pre[b]: free.add(b) if set(order) == chars: return order else: False # return order * (set(order) == chars) alienOrder(words) Explanation: Alien Dictionary Given a sorted dictionary of an alien language, find order of characters ```python Input: words[] = {"baa", "abcd", "abca", "cab", "cad"} Output: Order of characters is 'b', 'd', 'a', 'c' Note that words are sorted and in the given language "baa" comes before "abcd", therefore 'b' is before 'a' in output. Similarly we can find other orders. Input: words[] = {"caa", "aaa", "aab"} Output: Order of characters is 'c', 'a', 'b' ``` The idea is to create a graph of characters a then find topological sorting of the graph. 1. Create a graph g with number of vertices equal to the size of alphabet in the given language. For example, if the alphabet size is 5, then there can be 5 characters in words. Initially there are no edges in graph. 2. DO the following for every pair of adjacent words in given sorted array. 1. Let the current pair of words be word1 and word2. One by one compare characters of both words and find the mismatching characters. 2. Create an edge in g from mismatching character of word1 to that of word2. 3. Print topological sorting of the above created graph. End of explanation def binarySearch(alist, value): mini = 0 maxi = len(alist) while mini <= maxi: print('here') pivot = (maxi - mini) // 2 current_value = alist[pivot] if current_value < value: mini = pivot + 1 elif current_value > value: maxi = pivot -1 else: return pivot return pivot or -1 test1 = [0, 5, 10 , 23, 46, 49, 78] test2 = [0, 5, 10] test3 = [0] print(binarySearch(test1, 49)) print(binarySearch(test2, 10)) binarySearch(test3, 90) Explanation: Binary Search End of explanation
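To finish the thought in the truncated sentence above ("…a dynamic programming approach where"): the table entry L[i][j] holds the LCS length of the prefixes X[:i] and Y[:j], filled with L[i][j] = L[i-1][j-1] + 1 when the last characters match and max(L[i-1][j], L[i][j-1]) otherwise. A compact, length-only sketch of that recurrence:

```python
def lcs_length(X, Y):
    m, n = len(X), len(Y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

# lcs_length("AGGTAB", "GXTXAYB") == 4  (the subsequence "GTAB")
```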
15,959
Given the following text description, write Python code to implement the functionality described below step by step Description: $$ \def\CC{\bf C} \def\QQ{\bf Q} \def\RR{\bf R} \def\ZZ{\bf Z} \def\NN{\bf N} $$ Fonctions def Step1: Une fonction rassemble un ensemble d'instructions qui permettent d'atteindre un certain objectif commun. Les fonctions permettent de séparer un programme en morceaux qui correspondent à la façon dont on pense à la résolution d'un problème. La syntaxe pour la définition d'une fonction est Step2: La ligne d'en-tête commence avec def et se termine par un deux-points. Le choix du nom de la fonction suit exactement les mêmes règles que pour le choix du nom d'une variable. Un bloc constitué d'une ou plusieurs instructions Python, chacune indentée du même nombre d'espace (la convention est d'utiliser 4 espaces) par rapport à la ligne d'en-tête. Nous avons déjà vu la boucle for qui suit ce modèle. Le nom de la fonction est suivi par certains paramètres entre parenthèses. La liste des paramètres peut être vide, ou il peut contenir un certain nombre de paramètres séparés les uns des autres par des virgules. Dans les deux cas, les parenthèses sont nécessaires. Les paramètres spécifient les informations, le cas échéant, que nous devons fournir pour pouvoir utiliser la nouvelle fonction. La ou les valeurs de retour d'une fonction sont retournées avec la commande return. Par exemple, la fonction qui retourne la somme de deux valeurs s'écrit Step3: La fonction qui calcule le volume d'un parallépipède rectangle s'écrit Step4: On peut rassemble le code sur la température de l'eau que l'on a écrit plus au sein d'une fonction etat_de_leau qui dépend du paramètre temperature Step5: Cette fonction permet de tester le code sur la température de l'eau plus facilement
Python Code: from __future__ import division, print_function # Python 3 Explanation: $$ \def\CC{\bf C} \def\QQ{\bf Q} \def\RR{\bf R} \def\ZZ{\bf Z} \def\NN{\bf N} $$ Fonctions def End of explanation def FONCTION( PARAMETRES ): INSTRUCTIONS Explanation: Une fonction rassemble un ensemble d'instructions qui permettent d'atteindre un certain objectif commun. Les fonctions permettent de séparer un programme en morceaux qui correspondent à la façon dont on pense à la résolution d'un problème. La syntaxe pour la définition d'une fonction est: End of explanation def somme(a, b): return a + b somme(4,7) Explanation: La ligne d'en-tête commence avec def et se termine par un deux-points. Le choix du nom de la fonction suit exactement les mêmes règles que pour le choix du nom d'une variable. Un bloc constitué d'une ou plusieurs instructions Python, chacune indentée du même nombre d'espace (la convention est d'utiliser 4 espaces) par rapport à la ligne d'en-tête. Nous avons déjà vu la boucle for qui suit ce modèle. Le nom de la fonction est suivi par certains paramètres entre parenthèses. La liste des paramètres peut être vide, ou il peut contenir un certain nombre de paramètres séparés les uns des autres par des virgules. Dans les deux cas, les parenthèses sont nécessaires. Les paramètres spécifient les informations, le cas échéant, que nous devons fournir pour pouvoir utiliser la nouvelle fonction. La ou les valeurs de retour d'une fonction sont retournées avec la commande return. Par exemple, la fonction qui retourne la somme de deux valeurs s'écrit: End of explanation def volume(largeur, hauteur, profondeur): v = volume(2,3,4) v Explanation: La fonction qui calcule le volume d'un parallépipède rectangle s'écrit: End of explanation def etat_de_leau(temperature): if temperature < 0: print("L'eau est solide") elif temperature == 0: print("L'eau est en transition de phase solide-liquide") elif temperature < 100: print("L'eau est liquide") elif temperature == 100: print("L'eau est en transition de phase liquide-gaz") else: print("L'eau est un gaz") Explanation: On peut rassemble le code sur la température de l'eau que l'on a écrit plus au sein d'une fonction etat_de_leau qui dépend du paramètre temperature : End of explanation etat_de_leau(23) etat_de_leau(-23) etat_de_leau(0) etat_de_leau(0.1) etat_de_leau(102) Explanation: Cette fonction permet de tester le code sur la température de l'eau plus facilement: End of explanation
15,960
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="https Step1: Filter <script type="text/javascript"> localStorage.setItem('language', 'language-py') </script> <table align="left" style="margin-right Step2: Examples In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration. Then, we apply Filter in multiple ways to filter out produce by their duration value. Filter accepts a function that keeps elements that return True, and filters out the remaining elements. Example 1 Step3: <table align="left" style="margin-right Step4: <table align="left" style="margin-right Step5: <table align="left" style="margin-right Step6: <table align="left" style="margin-right Step7: <table align="left" style="margin-right
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License") # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master//Users/dcavazos/src/beam/examples/notebooks/documentation/transforms/python/element-wise/filter-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a> <table align="left"><td><a target="_blank" href="https://beam.apache.org/documentation/transforms/python/elementwise/filter"><img src="https://beam.apache.org/images/logos/full-color/name-bottom/beam-logo-full-color-name-bottom-100.png" width="32" height="32" />View the docs</a></td></table> End of explanation !pip install --quiet -U apache-beam Explanation: Filter <script type="text/javascript"> localStorage.setItem('language', 'language-py') </script> <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.core.html#apache_beam.transforms.core.Filter"><img src="https://beam.apache.org/images/logos/sdks/python.png" width="32px" height="32px" alt="Pydoc"/> Pydoc</a> </td> </table> <br/><br/><br/> Given a predicate, filter out all elements that don't satisfy that predicate. May also be used to filter based on an inequality with a given value based on the comparison ordering of the element. Setup To run a code cell, you can click the Run cell button at the top left of the cell, or select it and press Shift+Enter. Try modifying a code cell and re-running it to see what happens. To learn more about Colab, see Welcome to Colaboratory!. First, let's install the apache-beam module. End of explanation import apache_beam as beam def is_perennial(plant): return plant['duration'] == 'perennial' with beam.Pipeline() as pipeline: perennials = ( pipeline | 'Gardening plants' >> beam.Create([ {'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'}, {'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'}, {'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'}, {'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'}, {'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'}, ]) | 'Filter perennials' >> beam.Filter(is_perennial) | beam.Map(print) ) Explanation: Examples In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration. Then, we apply Filter in multiple ways to filter out produce by their duration value. Filter accepts a function that keeps elements that return True, and filters out the remaining elements. Example 1: Filtering with a function We define a function is_perennial which returns True if the element's duration equals 'perennial', and False otherwise. 
End of explanation import apache_beam as beam with beam.Pipeline() as pipeline: perennials = ( pipeline | 'Gardening plants' >> beam.Create([ {'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'}, {'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'}, {'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'}, {'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'}, {'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'}, ]) | 'Filter perennials' >> beam.Filter( lambda plant: plant['duration'] == 'perennial') | beam.Map(print) ) Explanation: <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a> </td> </table> <br/><br/><br/> Example 2: Filtering with a lambda function We can also use lambda functions to simplify Example 1. End of explanation import apache_beam as beam def has_duration(plant, duration): return plant['duration'] == duration with beam.Pipeline() as pipeline: perennials = ( pipeline | 'Gardening plants' >> beam.Create([ {'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'}, {'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'}, {'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'}, {'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'}, {'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'}, ]) | 'Filter perennials' >> beam.Filter(has_duration, 'perennial') | beam.Map(print) ) Explanation: <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a> </td> </table> <br/><br/><br/> Example 3: Filtering with multiple arguments You can pass functions with multiple arguments to Filter. They are passed as additional positional arguments or keyword arguments to the function. In this example, has_duration takes plant and duration as arguments. End of explanation import apache_beam as beam with beam.Pipeline() as pipeline: perennial = pipeline | 'Perennial' >> beam.Create(['perennial']) perennials = ( pipeline | 'Gardening plants' >> beam.Create([ {'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'}, {'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'}, {'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'}, {'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'}, {'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'}, ]) | 'Filter perennials' >> beam.Filter( lambda plant, duration: plant['duration'] == duration, duration=beam.pvalue.AsSingleton(perennial), ) | beam.Map(print) ) Explanation: <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a> </td> </table> <br/><br/><br/> Example 4: Filtering with side inputs as singletons If the PCollection has a single value, such as the average from another computation, passing the PCollection as a singleton accesses that value. 
In this example, we pass a PCollection the value 'perennial' as a singleton. We then use that value to filter out perennials. End of explanation import apache_beam as beam with beam.Pipeline() as pipeline: valid_durations = pipeline | 'Valid durations' >> beam.Create([ 'annual', 'biennial', 'perennial', ]) valid_plants = ( pipeline | 'Gardening plants' >> beam.Create([ {'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'}, {'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'}, {'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'}, {'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'}, {'icon': '🥔', 'name': 'Potato', 'duration': 'PERENNIAL'}, ]) | 'Filter valid plants' >> beam.Filter( lambda plant, valid_durations: plant['duration'] in valid_durations, valid_durations=beam.pvalue.AsIter(valid_durations), ) | beam.Map(print) ) Explanation: <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a> </td> </table> <br/><br/><br/> Example 5: Filtering with side inputs as iterators If the PCollection has multiple values, pass the PCollection as an iterator. This accesses elements lazily as they are needed, so it is possible to iterate over large PCollections that won't fit into memory. End of explanation import apache_beam as beam with beam.Pipeline() as pipeline: keep_duration = pipeline | 'Duration filters' >> beam.Create([ ('annual', False), ('biennial', False), ('perennial', True), ]) perennials = ( pipeline | 'Gardening plants' >> beam.Create([ {'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'}, {'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'}, {'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'}, {'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'}, {'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'}, ]) | 'Filter plants by duration' >> beam.Filter( lambda plant, keep_duration: keep_duration[plant['duration']], keep_duration=beam.pvalue.AsDict(keep_duration), ) | beam.Map(print) ) Explanation: <table align="left" style="margin-right:1em"> <td> <a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/element_wise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a> </td> </table> <br/><br/><br/> Note: You can pass the PCollection as a list with beam.pvalue.AsList(pcollection), but this requires that all the elements fit into memory. Example 6: Filtering with side inputs as dictionaries If a PCollection is small enough to fit into memory, then that PCollection can be passed as a dictionary. Each element must be a (key, value) pair. Note that all the elements of the PCollection must fit into memory for this. If the PCollection won't fit into memory, use beam.pvalue.AsIter(pcollection) instead. End of explanation
15,961
Given the following text description, write Python code to implement the functionality described below step by step Description: Project 1 Used Vehicle Price Prediction Introduction 1.2 Million listings scraped from TrueCar.com - Price, Mileage, Make, Model dataset from Kaggle Step1: Exercise P1.1 (50%) Develop a machine learning model that predicts the price of the of car using as an input ['Year', 'Mileage', 'State', 'Make', 'Model'] Submit the prediction of the testing set to Kaggle https Step2: Submission example
Python Code: %matplotlib inline import pandas as pd data = pd.read_csv('https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/dataTrain_carListings.zip') data.head() data.shape data.Price.describe() data.plot(kind='scatter', y='Price', x='Year') data.plot(kind='scatter', y='Price', x='Mileage') data.columns Explanation: Project 1 Used Vehicle Price Prediction Introduction 1.2 Million listings scraped from TrueCar.com - Price, Mileage, Make, Model dataset from Kaggle: data Each observation represents the price of an used car End of explanation data_test = pd.read_csv('https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/dataTest_carListings.zip', index_col=0) data_test.head() data_test.shape Explanation: Exercise P1.1 (50%) Develop a machine learning model that predicts the price of the of car using as an input ['Year', 'Mileage', 'State', 'Make', 'Model'] Submit the prediction of the testing set to Kaggle https://www.kaggle.com/c/miia4200-20191-p1-usedcarpriceprediction Evaluation: 25% - Performance of the model in the Kaggle Private Leaderboard 25% - Notebook explaining the modeling process End of explanation import numpy as np np.random.seed(42) y_pred = pd.DataFrame(np.random.rand(data_test.shape[0]) * 75000 + 5000, index=data_test.index, columns=['Price']) y_pred.to_csv('test_submission.csv', index_label='ID') y_pred.head() Explanation: Submission example End of explanation
15,962
Given the following text description, write Python code to implement the functionality described below step by step Description: Variance Reduction in Hull-White Monte Carlo Simulation Using Moment Matching Goutham Balaraman In an earlier blog post on how the Hull-White Monte Carlo simulations are notorious for not coverging with some of the expected moments. In this post, I would like to touch upon a variance reduction technique called moment matching that can be employed to fix this issue of convergence. The idea behind moment matching is rather simple. Lets consider the specific example of short rate model. For the short rate model, it is known that the average of stochastic discount factors generated from each path has to agree with the model (or the give yield curve) discount factors. The idea of moment matching is to correct the short rates generated by the term structure model such that the average of stochastic discount factors from the simulation matches the model discount factors. Step1: For simplicity, we use a constant forward rate as the given interest rate term structure. The method discussed here would work with any market yield curve as well. Step2: Here, I setup the Monte Carlo simulation of the Hull-White process. The result of the generate_paths function below is the time grid and a matrix of short rates generated by the model. This is discussed in detaio in the Hull-White simulation post. Step3: Here is a plot of the generated short rates. Step4: The model zero coupon bond price $B(0, T)$ is given as Step5: The plots below show the zero coupon bond price and mean of short rates with and without the moment matching.
Python Code: import QuantLib as ql import numpy as np import matplotlib.pyplot as plt %matplotlib inline from scipy.integrate import cumtrapz ql.__version__ Explanation: Variance Reduction in Hull-White Monte Carlo Simulation Using Moment Matching Goutham Balaraman In an earlier blog post on how the Hull-White Monte Carlo simulations are notorious for not coverging with some of the expected moments. In this post, I would like to touch upon a variance reduction technique called moment matching that can be employed to fix this issue of convergence. The idea behind moment matching is rather simple. Lets consider the specific example of short rate model. For the short rate model, it is known that the average of stochastic discount factors generated from each path has to agree with the model (or the give yield curve) discount factors. The idea of moment matching is to correct the short rates generated by the term structure model such that the average of stochastic discount factors from the simulation matches the model discount factors. End of explanation sigma = 0.01 a = 0.001 timestep = 360 length = 30 # in years forward_rate = 0.05 day_count = ql.Thirty360() todays_date = ql.Date(15, 1, 2015) ql.Settings.instance().evaluationDate = todays_date yield_curve = ql.FlatForward( todays_date, ql.QuoteHandle(ql.SimpleQuote(forward_rate)), day_count) spot_curve_handle = ql.YieldTermStructureHandle(yield_curve) Explanation: For simplicity, we use a constant forward rate as the given interest rate term structure. The method discussed here would work with any market yield curve as well. End of explanation hw_process = ql.HullWhiteProcess(spot_curve_handle, a, sigma) rng = ql.GaussianRandomSequenceGenerator( ql.UniformRandomSequenceGenerator( timestep, ql.UniformRandomGenerator(125))) seq = ql.GaussianPathGenerator(hw_process, length, timestep, rng, False) def generate_paths(num_paths, timestep): arr = np.zeros((num_paths, timestep+1)) for i in range(num_paths): sample_path = seq.next() path = sample_path.value() time = [path.time(j) for j in range(len(path))] value = [path[j] for j in range(len(path))] arr[i, :] = np.array(value) return np.array(time), arr Explanation: Here, I setup the Monte Carlo simulation of the Hull-White process. The result of the generate_paths function below is the time grid and a matrix of short rates generated by the model. This is discussed in detaio in the Hull-White simulation post. End of explanation num_paths = 128 time, paths = generate_paths(num_paths, timestep) for i in range(num_paths): plt.plot(time, paths[i, :], lw=0.8, alpha=0.6) plt.title("Hull-White Short Rate Simulation") plt.show() Explanation: Here is a plot of the generated short rates. End of explanation def stoch_df(paths, time): return np.mean( np.exp(-cumtrapz(paths, time, initial=0.)),axis=0 ) B_emp = stoch_df(paths, time) logB_emp = np.log(B_emp) B_yc = np.array([yield_curve.discount(t) for t in time]) logB_yc = np.log(B_yc) deltaT = time[1:] - time[:-1] deltaB_emp = logB_emp[1:]-logB_emp[:-1] deltaB_yc = logB_yc[1:] - logB_yc[:-1] new_paths = paths.copy() new_paths[:,1:] += (deltaB_emp/deltaT - deltaB_yc/deltaT) Explanation: The model zero coupon bond price $B(0, T)$ is given as: $$B(0, T) = E\left[\exp\left(-\int_0^T r(t)dt \right) \right]$$ where $r(t)$ is the short rate generated by the model. The expectation of the stochastic discount factor at time $T$ is the price of the zero coupon bond at that time. 
In a simulation the paths are generated in a time grid and the discretization introduces some error. The empirical estimation of the zero coupon bond price from a Monte Carlo simulation $\hat{B}(0, t_m)$ maturing at time $t_m$ is given as: $$\hat{B}(0, t_m) = \frac{1}{N}\sum_{i=1}^{N} \exp\left(-\sum_{j=0}^{m-1} \hat{r}i(t_j)[t{j+1}-t_j] \right)$$ where $\hat{r}_i(t_j)$ is the short rate for the path $i$ at time $t_j$ on the time grid. The expression for the moment matched short rates is given as [1]: $$ r^c_i(t_j) = \hat{r}i(t_j) + \frac{\log \hat{B}(0, t{j+1}) - \log \hat{B}(0, t_{j})}{t_{j+1} - t_j} - \frac{\log B(0, t_{j+1}) - \log B(0, t_{j})}{t_{j+1} - t_j}$$ End of explanation plt.plot(time, stoch_df(paths, time),"r-.", label="Original", lw=2) plt.plot(time, stoch_df(new_paths, time),"g:", label="Corrected", lw=2) plt.plot(time,B_yc, "k--",label="Market", lw=1) plt.title("Zero Coupon Bond Price") plt.legend() def alpha(forward, sigma, a, t): return forward + 0.5* np.power(sigma/a*(1.0 - np.exp(-a*t)), 2) avg = [np.mean(paths[:, i]) for i in range(timestep+1)] new_avg = [np.mean(new_paths[:, i]) for i in range(timestep+1)] plt.plot(time, avg, "r-.", lw=3, alpha=0.6, label="Original") plt.plot(time, new_avg, "g:", lw=3, alpha=0.6, label="Corrected") plt.plot(time,alpha(forward_rate, sigma, a, time), "k--", lw=2, alpha=0.6, label="Model") plt.title("Mean of Short Rates") plt.legend(loc=0) Explanation: The plots below show the zero coupon bond price and mean of short rates with and without the moment matching. End of explanation
15,963
Given the following text description, write Python code to implement the functionality described below step by step Description: 仿照求$ \sum_{i=1}^mi + \sum_{i=1}^ni + \sum_{i=1}^ki$的完整代码,写程序,可求m!+n!+k! Step1: 写函数可返回1 - 1/3 + 1/5 - 1/7...的前n项的和。在主程序中,分别令n=1000及100000,打印4倍该函数的和。 Step2: 将task3中的练习1及练习4改写为函数,并进行调用。 练习 1:写程序,可由键盘读入用户姓名例如Mr. right,让用户输入出生的月份与日期,判断用户星座,假设用户是金牛座,则输出,Mr. right,你是非常有性格的金牛座!。 练习 4:英文单词单数转复数,要求输入一个英文动词(单数形式),能够得到其复数形式,或给出单数转复数形式的建议(提示,some_string.endswith(some_letter)函数可以判断某字符串结尾字符,可尝试运行:'myname'.endswith('me'),liupengyuan'.endswith('n'))。 Step3: 挑战性练习:写程序,可以求从整数m到整数n累加的和,间隔为k,求和部分需用函数实现,主程序中由用户输入m,n,k调用函数验证正确性。
Python Code: def compute_sum(n): i=0 sum=0 while i<n: i=i+1 sum+=i return sum m=int(input('plz input m: ')) n=int(input('plz input n: ')) k=int(input('plz input k: ')) print(compute_sum(m) + compute_sum(n) + compute_sum(k)) Explanation: 仿照求$ \sum_{i=1}^mi + \sum_{i=1}^ni + \sum_{i=1}^ki$的完整代码,写程序,可求m!+n!+k! End of explanation def compute_sum(n): i=0 total=0 while i<n: i+=1 if i%2==0: total-=1/(2*i-1) else: total+=1/(2*i-1) return total print(compute_sum(1000)) print('*4: ',4*compute_sum(1000)) print(compute_sum(10000)) print('*4: ',4*compute_sum(10000)) Explanation: 写函数可返回1 - 1/3 + 1/5 - 1/7...的前n项的和。在主程序中,分别令n=1000及100000,打印4倍该函数的和。 End of explanation def Constellation(n,m,d): if (m>=3 and d>21) or (m<=4 and d<19): return (n,'你是白羊座') elif (m>=4 and d>20) or (m<=5 and d<20): return (n,'你是金牛座') elif (m>=5 and d>21) or (m<=6 and d<21): return (n,'你是双子座') elif (m>=6 and d>22) or (m<=7 and d<22): return (n,'你是巨蟹座') elif (m>=7 and d>23) or (m<=8 and d<22): return (n,'你是狮子座') elif (m>=8 and d>23) or (m<=9 and d<22): return (n,'你是处女座') elif (m>=9 and d>23) or (m<=10 and d<23): return (n,'你是天秤座') elif (m>=10 and d>24) or (m<=11 and d<23): return (n,'你是天蝎座') elif (m>=11 and d>23) or (m<=12 and d<21): return (n,'你是射手座') elif (m>=12 and d>22) or (m<=1 and d<19): return (n,'你是摩羯座') elif (m>=1 and d>20) or (m<=2 and d<18): return (n,'你是水瓶座') elif (m>=2 and d>19) or (m<=3 and d<20): return (n,'你是双鱼座') n=str(input('plz input name:')) m=int(input('plz input birth_mon: ')) k=int(input('plz input birth_day: ')) print(Constellation(n,m,k)) def Plurality(word): if ( word.endswith('ch') or word.endswith('sh') or word.endswith('s') or word.endswith('x') ): print(word,'es',sep='') else: print(word,'s',sep='') w=str(input('plz input a word')) Plurality(w) Explanation: 将task3中的练习1及练习4改写为函数,并进行调用。 练习 1:写程序,可由键盘读入用户姓名例如Mr. right,让用户输入出生的月份与日期,判断用户星座,假设用户是金牛座,则输出,Mr. right,你是非常有性格的金牛座!。 练习 4:英文单词单数转复数,要求输入一个英文动词(单数形式),能够得到其复数形式,或给出单数转复数形式的建议(提示,some_string.endswith(some_letter)函数可以判断某字符串结尾字符,可尝试运行:'myname'.endswith('me'),liupengyuan'.endswith('n'))。 End of explanation def count_sum(m,n,k): i=0 total=0 while i+k<n: i=i+k total+=i+k return total m=int(input('plz input m:')) n=int(input('plz input n:')) k=int(input('plz input k:')) print(count_sum(m,n,k)) Explanation: 挑战性练习:写程序,可以求从整数m到整数n累加的和,间隔为k,求和部分需用函数实现,主程序中由用户输入m,n,k调用函数验证正确性。 End of explanation
15,964
Given the following text description, write Python code to implement the functionality described below step by step Description: Finding stories in data with Python and Jupyter notebooks Journocoders London, April 13, 2017 David Blood/@davidcblood/[first] dot [last] at ft.com Introduction The Jupyter notebook provides an intuitive, flexible and shareable way to work with code. It's a potentially invaluable tool for journalists who need to analyse data quickly and reproducibly, particularly as part of a graphics-oriented workflow. This aim of this tutorial is to help you become familiar with the notebook and its role in a Python data analysis toolkit. We'll start with a demographic dataset and explore and analyse it visually in the notebook to see what it can tell us about people who voted ‘leave’ in the UK's EU referendum. To finish, we'll output a production-quality graphic using Bokeh. You'll need access to an empty Python 3 Jupyter notebook, ideally running on your local machine, although a cloud-based Jupyter environment is fine too. You're ready to start the tutorial when you're looking at this screen Step1: There shouldn't be any output from that cell, but if you get any error messages, it's most likely because you don't have one or more of these modules installed on your system. Running pip3 install pandas matplotlib numpy seaborn bokeh from the command line should take care of that. If not, holler and I'll try to help you. As well as running your code, hitting shift-return in that first cell should have automatically created an empty cell below it. In that cell, we're going to use the read_csv method provided by pandas to, um, read our CSV. When pandas reads data from a CSV file, it automagically puts it into something called a dataframe. It's not important at this point to understand what a dataframe is or how it differs from other Python data structures. All you need to know for now is that it's an object containing structured data that's stored in memory for the duration of your notebook session. We'll also assign our new dataframe to another variable—df—so we can do things with it down the line. We do all of this like so (remember to hit shift-return) Step2: See how easy that was? Now let's check that df is in fact a dataframe. Using the .head(n=[number]) method on any dataframe will return the first [number] rows of that dataframe. Let's take a look at the first ten Step3: Looks good! (FYI Step4: Yikes, not much of a relationship there. Let's try a different variable Step5: Hmm, that distribution looks better—there's a stronger, negative correlation there—but it's still a little unclear what we're looking at. Let's add some context. We know from our provisional data-munging (that we didn't do) that many of the boroughs of London were among the strongest ‘remain’ areas in the country. We can add an additional column called is_london to our dataframe and set the values of that column to either True or False depending on whether the value in the row's region_name column is London Step6: Those names should look familiar. That's numpy's .where method coming in handy there to help us generate a new column of data based on the values of another column—in this case, region_name. At this point, we're going to abandon Matplotlib like merciless narcissists and turn our attention to the younger, hotter Seaborn. Though it sounds like one of the factions from Game of Thrones, it's actually another plotting module that includes some handy analytical shortcuts and statistical methods. 
One of those analytical shortcuts is the FacetGrid. If you've ever used OpenRefine, you're probably familiar with the concept of faceting. I'll fumblingly describe it here as a method whereby data is apportioned into distinct matrices according to the values of a single field. You get the idea. Right now, we're going to facet on the is_london column so that we can distinguish the London boroughs from the rest of the UK Step7: Now we're cooking with gas! We can see a slight negative correlation in the distribution of the data points and we can see how London compares to all the other regions of the country. Whatever var2 is, we now know that the London boroughs generally have higher levels of it than most of the rest of the UK, and that it has a (weak) negative correlation with ‘leave’ vote percentage. So what's to stop you faceting on is_london but with a different variable plotted along the x axis? The answer is Step8: What's more, faceting isn't limited to just highlighting specific data points. We can also pass FacetGrid a col (column) argument with the name of a column that we'd like to use to further segment our data. So let's create another True/False (Boolean) column to flag the areas with the largest populations—the ones with electorates of 100,000 people or more—and plot a new facet grid Step9: Now we're able to make the following statements based solely on a visual inspection of this facet grid Step10: Try passing the remaining variables (var5-var9) to the pair grid. You should be able to see which of the variables in the dataset correlate most strongly with ‘leave’ vote percentage and whether the correlations are positive or negative. 4. Go into detail Seaborn also provides a heatmap method that we can use to quickly compare the correlation coefficient of each pair of variables (the value between -1 and 1 that describes the strength of the relationship between them). We can pass all the columns we're interested in to the heatmap in one go, because heatmaps are easier to read than pair grids Step11: By now, you should have a pretty good idea which variables are worth reporting as being significant demographic factors in the ‘leave’ vote. If you wanted to take your analysis even further, you could also report on whether London boroughs returned higher or lower ‘leave’ vote percentages than we would expect based on the values of any correlating variable. A convenient way to do this would be to use Seaborn's built-in linear regression plotting Step12: Reading this plot, we're able to say that, all things being equal, most of the London boroughs have lower ‘leave’ vote percentages than we would expect based on their levels of var2 alone. This suggests—rightly—that variables other than var2 are in play in determining London's lower-than-expected levels of ‘leave’ voting. 5. Make a graphic and get it out of the notebook Everyone knows that data journalism without pretty graphics is just boring. While the Matplotlib and Seaborn scatter plots get the job done, they're not exactly 😍 For that, we need Bokeh. You can pretty much throw a stone and hit a data visualisation library these days, but Bokeh is a good fit for Jupyter notebooks because it's made for Python and can work with dataframes and all that other good stuff we've got going on in here. 
So let's fire it up by telling it that, like Matplotlib, we want it to plot in the notebook Step13: Because we want this to be our output graphic, we're going to be much fussier about how it looks, so there's quite a bit of configuration involved here
Python Code: import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns from bokeh.plotting import figure, show from bokeh.io import output_notebook %matplotlib inline Explanation: Finding stories in data with Python and Jupyter notebooks Journocoders London, April 13, 2017 David Blood/@davidcblood/[first] dot [last] at ft.com Introduction The Jupyter notebook provides an intuitive, flexible and shareable way to work with code. It's a potentially invaluable tool for journalists who need to analyse data quickly and reproducibly, particularly as part of a graphics-oriented workflow. This aim of this tutorial is to help you become familiar with the notebook and its role in a Python data analysis toolkit. We'll start with a demographic dataset and explore and analyse it visually in the notebook to see what it can tell us about people who voted ‘leave’ in the UK's EU referendum. To finish, we'll output a production-quality graphic using Bokeh. You'll need access to an empty Python 3 Jupyter notebook, ideally running on your local machine, although a cloud-based Jupyter environment is fine too. You're ready to start the tutorial when you're looking at this screen: 1. Bring your data into the notebook In Python-world, people often use the pandas module for working with data. You don't have to—there are other modules that do similar things—but it's the most well-known and comprehensive (probably). Let's import pandas into our project and assign it to the variable pd, because that's easier to type than pandas. While we're at it, let's import all the other modules we'll need for this tutorial and also let Matplotlib know that we want it to plot charts here in the notebook rather than in a separate window. Enter the following code into the first cell in your notebook and hit shift-return to run the code block—don't copy-and-paste it. The best way to develop an understanding of the code is to type it out yourself: End of explanation url = 'https://raw.githubusercontent.com/davidbjourno/finding-stories-in-data/master/data/leave-demographics.csv' # Pass in the URL of the CSV file: df = pd.read_csv(url) Explanation: There shouldn't be any output from that cell, but if you get any error messages, it's most likely because you don't have one or more of these modules installed on your system. Running pip3 install pandas matplotlib numpy seaborn bokeh from the command line should take care of that. If not, holler and I'll try to help you. As well as running your code, hitting shift-return in that first cell should have automatically created an empty cell below it. In that cell, we're going to use the read_csv method provided by pandas to, um, read our CSV. When pandas reads data from a CSV file, it automagically puts it into something called a dataframe. It's not important at this point to understand what a dataframe is or how it differs from other Python data structures. All you need to know for now is that it's an object containing structured data that's stored in memory for the duration of your notebook session. We'll also assign our new dataframe to another variable—df—so we can do things with it down the line. We do all of this like so (remember to hit shift-return): End of explanation df.head(n=10) Explanation: See how easy that was? Now let's check that df is in fact a dataframe. Using the .head(n=[number]) method on any dataframe will return the first [number] rows of that dataframe. 
Let's take a look at the first ten: End of explanation # Configure Matplotlib's pyplot method (plt) to plot at a size of 8x8 inches and # a resolution of 72 dots per inch plt.figure( figsize=(8, 8), dpi=72 ) # Plot the data as a scatter plot g = plt.scatter( x=df['var1'], # The values we want to plot along the x axis y=df['leave'], # The values we want to plot along the y axis s=50, # The size… c='#0571b0', # …colour… alpha=0.5 # …and opacity we want the data point markers to be ) Explanation: Looks good! (FYI: .tail(n=[number]) will give you the last [number] rows.) By now, you may have noticed that some of the row headers in this CSV aren't particularly descriptive (var1, var2 etc.). This is the game: by the end of this tutorial, you should be able to identify the variables that correlated most strongly with the percentage of ‘leave’ votes (the leave column), i.e. which factors were the most predictive of people voting ‘leave’. At the end of the meetup, before we all go down the pub, you can tell me which variables you think correlated most strongly and I'll tell you what each of them are 😁 2. Explore the data The main advantage of the workflow we're using here is that it enables us to inspect a dataset visually, which can often be the quickest way to identify patterns, trends or outliers in data. A common first step in this process is to use scatter plots to visualise the relationship, if any, between two variables. So let's use Matplotlib to create a first, super basic scatter plot: End of explanation plt.figure( figsize=(8, 8), dpi=72 ) g = plt.scatter( x=df['var2'], # Plotting var2 along the x axis this time y=df['leave'], s=50, c='#0571b0', alpha=0.5 ) Explanation: Yikes, not much of a relationship there. Let's try a different variable: End of explanation df['is_london'] = np.where(df['region_name'] == 'London', True, False) # Print all the rows in the dataframe in which is_london is equal to True df[df['is_london'] == True] Explanation: Hmm, that distribution looks better—there's a stronger, negative correlation there—but it's still a little unclear what we're looking at. Let's add some context. We know from our provisional data-munging (that we didn't do) that many of the boroughs of London were among the strongest ‘remain’ areas in the country. We can add an additional column called is_london to our dataframe and set the values of that column to either True or False depending on whether the value in the row's region_name column is London: End of explanation # Set the chart background colour (completely unnecessary, I just don't like the # default) sns.set_style('darkgrid', { 'axes.facecolor': '#efefef' }) # Tell Seaborn that what we want from it is a FacetGrid, and assign this to the # variable ‘fg’ fg = sns.FacetGrid( data=df, # Use our dataframe as the input data hue='is_london', # Highlight the data points for which is_london == True palette=['#0571b0', '#ca0020'], # Define a tasteful blue/red colour combo size=7 # Make the plots size 7, whatever that means ) # Tell Seaborn that what we want to do with our FacetGrid (fg) is visualise it # as a scatter plot fg.map( plt.scatter, 'var2', # Values to plot along the x axis 'leave', # Values to plot along the y axis alpha=0.5 ) Explanation: Those names should look familiar. That's numpy's .where method coming in handy there to help us generate a new column of data based on the values of another column—in this case, region_name. 
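A small aside the walkthrough skips: a couple of other pandas one-liners give a quick feel for a new dataframe alongside .head() — for example:

```python
df.shape       # (number of rows, number of columns)
df.dtypes      # the type pandas inferred for each column
df.describe()  # summary statistics for the numeric columns
```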
At this point, we're going to abandon Matplotlib like merciless narcissists and turn our attention to the younger, hotter Seaborn. Though it sounds like one of the factions from Game of Thrones, it's actually another plotting module that includes some handy analytical shortcuts and statistical methods. One of those analytical shortcuts is the FacetGrid. If you've ever used OpenRefine, you're probably familiar with the concept of faceting. I'll fumblingly describe it here as a method whereby data is apportioned into distinct matrices according to the values of a single field. You get the idea. Right now, we're going to facet on the is_london column so that we can distinguish the London boroughs from the rest of the UK: End of explanation # Plot the chart above with a different variable along the x axis. Explanation: Now we're cooking with gas! We can see a slight negative correlation in the distribution of the data points and we can see how London compares to all the other regions of the country. Whatever var2 is, we now know that the London boroughs generally have higher levels of it than most of the rest of the UK, and that it has a (weak) negative correlation with ‘leave’ vote percentage. So what's to stop you faceting on is_london but with a different variable plotted along the x axis? The answer is: nothing! Try doing that exact thing right now: End of explanation df['is_largest'] = np.where(df['electorate'] >= 100000, True, False) g = sns.FacetGrid( df, hue='is_london', col='is_largest', palette=['#0571b0', '#ca0020'], size=7 ) g.map( plt.scatter, 'var2', 'leave', alpha=0.5 ) Explanation: What's more, faceting isn't limited to just highlighting specific data points. We can also pass FacetGrid a col (column) argument with the name of a column that we'd like to use to further segment our data. So let's create another True/False (Boolean) column to flag the areas with the largest populations—the ones with electorates of 100,000 people or more—and plot a new facet grid: End of explanation # Just adding the first four variables, plus leave, to start with—you'll see why columns = [ 'var1', 'var2', 'var3', 'var4', 'leave', 'is_london' ] g = sns.PairGrid( data=df[columns], hue='is_london', palette=['#0571b0', '#ca0020'] ) g.map_offdiag(plt.scatter); Explanation: Now we're able to make the following statements based solely on a visual inspection of this facet grid: Most of the less populous areas (electorate < 100,000) voted ‘leave’ Most of the less populous areas had var2 levels below 35. Only two—both London boroughs—had levels higher than 35 There is a stronger correlation between the strength of the ‘leave’ vote and the level of var2 among the more populous areas So you see how faceting can come in handy when you come to a dataset cold and need to start to understand it quickly. As yet, we still don't have much of a story, just a few observations—not exactly Pulitzer material. The next and most important step is to narrow down which of the variables in the dataset were the most indicative of ‘leave’ vote percentage. The good news is that we don't have to repeat the facet grid steps above for every variable, because Seaborn provides another useful analytical shortcut called a PairGrid. 3. Optimise for efficiency Apparently there's an equivalent to the pair grid in R called a correlogram or something (I wouldn't know). But the pair grid is super sweet because it allows us to check for correlations across a large number of variables at once. 
By passing the PairGrid function an array of column headers from our dataset, we can plot each of those variables against every other variable in one amazing ultra-grid: End of explanation plt.figure( figsize=(15, 15), dpi=72 ) columns = [ # ALL THE COLUMNS 'var1', 'var2', 'var3', 'var4', 'var5', 'var6', 'var7', 'var8', 'var9', 'leave' ] # Calculate the standard correlation coefficient of each pair of columns correlations = df[columns].corr(method='pearson') sns.heatmap( data=correlations, square=True, xticklabels=correlations.columns.values, yticklabels=correlations.columns.values, # The Matplotlib colormap to use # (https://matplotlib.org/examples/color/colormaps_reference.html) cmap='plasma' ) Explanation: Try passing the remaining variables (var5-var9) to the pair grid. You should be able to see which of the variables in the dataset correlate most strongly with ‘leave’ vote percentage and whether the correlations are positive or negative. 4. Go into detail Seaborn also provides a heatmap method that we can use to quickly compare the correlation coefficient of each pair of variables (the value between -1 and 1 that describes the strength of the relationship between them). We can pass all the columns we're interested in to the heatmap in one go, because heatmaps are easier to read than pair grids: End of explanation columns = ['var2', 'leave'] g = sns.lmplot( data=df, x=columns[0], y=columns[1], hue='is_london', palette=['#0571b0', '#ca0020'], size=7, fit_reg=False, ) sns.regplot( data=df, x=columns[0], y=columns[1], scatter=False, color='#0571b0', ax=g.axes[0, 0] ) Explanation: By now, you should have a pretty good idea which variables are worth reporting as being significant demographic factors in the ‘leave’ vote. If you wanted to take your analysis even further, you could also report on whether London boroughs returned higher or lower ‘leave’ vote percentages than we would expect based on the values of any correlating variable. A convenient way to do this would be to use Seaborn's built-in linear regression plotting: End of explanation output_notebook() Explanation: Reading this plot, we're able to say that, all things being equal, most of the London boroughs have lower ‘leave’ vote percentages than we would expect based on their levels of var2 alone. This suggests—rightly—that variables other than var2 are in play in determining London's lower-than-expected levels of ‘leave’ voting. 5. Make a graphic and get it out of the notebook Everyone knows that data journalism without pretty graphics is just boring. While the Matplotlib and Seaborn scatter plots get the job done, they're not exactly 😍 For that, we need Bokeh. You can pretty much throw a stone and hit a data visualisation library these days, but Bokeh is a good fit for Jupyter notebooks because it's made for Python and can work with dataframes and all that other good stuff we've got going on in here. 
So let's fire it up by telling it that, like Matplotlib, we want it to plot in the notebook: End of explanation color_map = {False: '#0571b0', True: '#ca0020'} # Instantiate our plot p = figure( plot_width=600, plot_height=422, background_fill_color='#d3d3d3', title='Leave demographics' ) # Add a circle renderer to the plot p.circle( x=df['var2'], y=df['leave'], # Size the markers according to the size of the electorate (scaled down) size=df['electorate'] / 20000, fill_color=df['is_london'].map(color_map), line_color=df['is_london'].map(color_map), line_width=1, alpha=0.5 ) # Configure the plot's x axis p.xaxis.axis_label = 'var5' p.xgrid.grid_line_color = None # Configure the plot's y axis p.yaxis.axis_label = 'Percentage voting leave' p.ygrid.grid_line_color = '#999999' p.ygrid.grid_line_alpha = 1 p.ygrid.grid_line_dash = [6, 4] # Show the plot show(p) Explanation: Because we want this to be our output graphic, we're going to be much fussier about how it looks, so there's quite a bit of configuration involved here: End of explanation
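The circle plot above only renders inline. If you also want a file you can share outside the notebook, one possible follow-up (sketched here with a placeholder file name) is to write the same figure to a standalone HTML page with Bokeh's output_file and save:
# Write the finished figure to a standalone HTML file as well; the file name is
# just an example and can be anything you like.
from bokeh.plotting import output_file, save
output_file('leave_demographics.html', title='Leave demographics')
save(p)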
15,965
Given the following text description, write Python code to implement the functionality described below step by step Description: 今回のレポートでは、①オートエンコーダの作成、②再帰型ニューラルネットワークの作成を試みた。 ①コブダクラス型生産関数を再現できるオートエンコーダの作成が目標である。 Step1: 定義域は0≤x≤1である。 <P>コブ・ダグラス型生産関数は以下の通りである。</P> <P>z = x_1**0.5*x_2*0.5</P> Step2: NNのクラスはすでにNN.pyからimportしてある。 Step3: 以下に使い方を説明する。 初めに、このコブ・ダグラス型生産関数を用いる。 Step4: 入力層、中間層、出力層を作る関数を実行する。引数には層の数を用いる。 Step5: <p>nn.set_hidden_layer()は同時にシグモイド関数で変換する前の中間層も作る。</p> <p>set_output_layer()は同時にシグモイド関数で変換する前の出力層、さらに教師データを入れる配列も作る。</p> nn.setup()で入力層ー中間層、中間層ー出力層間の重みを入れる配列を作成する。 nn.initialize()で重みを初期化する。重みは-1/√d ≤ w ≤ 1/√d (dは入力層及び中間層の数)の範囲で一様分布から決定される。 Step6: nn.supervised_function(f, idata)は教師データを作成する。引数は関数とサンプルデータをとる。 Step7: nn.simulate(N, eta)は引数に更新回数と学習率をとる。普通はN=1で行うべきかもしれないが、工夫として作成してみた。N回学習した後に出力層を返す。 Step8: nn.calculation()は学習せずに入力層から出力層の計算を行う。nn.simulate()内にも用いられている。 次に実際に学習を行う。サンプルデータは、 Step9: の組み合わせである。 Step10: 例えば(0, 0)を入力すると0.52328635を返している(つまりa[0]とb[0]を入力して、c[0]の値を返している)。 ここでは交差検定は用いていない。 Step11: 確率的勾配降下法を100回繰り返したが見た感じから近づいている。回数を10000回に増やしてみる。 Step12: 見た感じ随分近づいているように見える。 最後に交差検定を行う。 初めに学習回数が極めて少ないNNである。 Step13: 次に十分大きく(100回に)してみる。 Step14: 誤差の平均であるので小さい方よい。 学習回数を増やした結果、精度が上がった。 最後にオートエンコーダを作成する。回数を増やした方がよいことが分かったため、10000回学習させてみる。 Step15: 十分再現できていることが分かる。 ②ゲーム理論で用いられるTit for Tatを再現してみる。二人のプレーヤーが互いにRNNで相手の行動を予測し、相手の行動に対してTit for Tatに基づいた行動を選択する。 Step16: 最初の行動はRNNで指定できないので、所与となる。この初期値と裏切りに対する感応度で収束の仕方が決まる。 協調を1、裏切りを0としている。RNNの予測値は整数値でないが、p=(RNNの出力値)で次回に協調を行う。 例1:1期目に、プレーヤー1が協力、プレーヤー2が裏切り。 Step17: 下の図より、最初は交互に相手にしっぺ返しをしているが、やがて両者が裏切り合うこと状態に収束する。 Step18: 例2:1期目に、プレーヤー1が協力、プレーヤー2が協力。ただし、プレーヤー2は相手の裏切りをかなり警戒している。 警戒を表すためにp=(RNNの出力値 - 0.2)とする。p<0の場合はp=0に直す。 Step19: 例3:次に相手の行動を完全には観測できない場合を考える。t期の相手の行動をt+1期にノイズが加わって知る。例えば、1期目に相手が協調したことを、確率90%で2期目に正しく知れるが、10%で裏切りと誤って伝わる場合である。 ノイズは20%の確率で加わるものとする。その他の条件は例1と同じにした。
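One concrete detail in the description above is the weight initialisation rule: each weight is drawn uniformly from the interval [-1/√d, 1/√d], where d is the number of units feeding the layer. A minimal NumPy sketch of just that rule (separate from the NN class used in the code below) might look like this:
# Sketch of the initialisation rule described above: weights drawn uniformly
# from [-1/sqrt(d), 1/sqrt(d)], with d the number of input (fan-in) units.
import numpy as np
def init_weights(n_in, n_out, rng=np.random):
    bound = 1.0 / np.sqrt(n_in)
    return rng.uniform(-bound, bound, size=(n_in, n_out))
W_input_hidden = init_weights(2, 2)   # e.g. 2 inputs feeding 2 hidden units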
Python Code: %matplotlib inline import numpy as np import pylab as pl import math from sympy import * import matplotlib.pyplot as plt import matplotlib.animation as animation from mpl_toolkits.mplot3d import Axes3D from NN import NN Explanation: 今回のレポートでは、①オートエンコーダの作成、②再帰型ニューラルネットワークの作成を試みた。 ①コブダクラス型生産関数を再現できるオートエンコーダの作成が目標である。 End of explanation def example1(x_1, x_2): z = x_1**0.5*x_2*0.5 return z fig = pl.figure() ax = Axes3D(fig) X = np.arange(0, 1, 0.1) Y = np.arange(0, 1, 0.1) X, Y = np.meshgrid(X, Y) Z = example1(X, Y) ax.plot_surface(X, Y, Z, rstride=1, cstride=1) pl.show() Explanation: 定義域は0≤x≤1である。 <P>コブ・ダグラス型生産関数は以下の通りである。</P> <P>z = x_1**0.5*x_2*0.5</P> End of explanation nn = NN() Explanation: NNのクラスはすでにNN.pyからimportしてある。 End of explanation x_1 = Symbol('x_1') x_2 = Symbol('x_2') f = x_1**0.5*x_2*0.5 Explanation: 以下に使い方を説明する。 初めに、このコブ・ダグラス型生産関数を用いる。 End of explanation nn.set_input_layer(2) nn.set_hidden_layer(2) nn.set_output_layer(2) Explanation: 入力層、中間層、出力層を作る関数を実行する。引数には層の数を用いる。 End of explanation nn.setup() nn.initialize() Explanation: <p>nn.set_hidden_layer()は同時にシグモイド関数で変換する前の中間層も作る。</p> <p>set_output_layer()は同時にシグモイド関数で変換する前の出力層、さらに教師データを入れる配列も作る。</p> nn.setup()で入力層ー中間層、中間層ー出力層間の重みを入れる配列を作成する。 nn.initialize()で重みを初期化する。重みは-1/√d ≤ w ≤ 1/√d (dは入力層及び中間層の数)の範囲で一様分布から決定される。 End of explanation idata = [1, 2] nn.supervised_function(f, idata) Explanation: nn.supervised_function(f, idata)は教師データを作成する。引数は関数とサンプルデータをとる。 End of explanation nn.simulate(1, 0.1) Explanation: nn.simulate(N, eta)は引数に更新回数と学習率をとる。普通はN=1で行うべきかもしれないが、工夫として作成してみた。N回学習した後に出力層を返す。 End of explanation X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) print X, Y Explanation: nn.calculation()は学習せずに入力層から出力層の計算を行う。nn.simulate()内にも用いられている。 次に実際に学習を行う。サンプルデータは、 End of explanation X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) a = np.array([]) b = np.array([]) c = np.array([]) nn = NN() nn.set_network() for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) for i in range(100): l = np.random.choice([i for i in range(len(a))]) m = nn.main2(1, f, [a[l], b[l]], 0.5) for x in X: for y in Y: idata = [x, y] c = np.append(c, nn.realize(f, idata)) a b c Explanation: の組み合わせである。 End of explanation fig = pl.figure() ax = Axes3D(fig) ax.scatter(a, b, c) pl.show() Explanation: 例えば(0, 0)を入力すると0.52328635を返している(つまりa[0]とb[0]を入力して、c[0]の値を返している)。 ここでは交差検定は用いていない。 End of explanation X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) a = np.array([]) b = np.array([]) c = np.array([]) nn = NN() nn.set_network() for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) for i in range(10000): l = np.random.choice([i for i in range(len(a))]) m = nn.main2(1, f, [a[l], b[l]], 0.5) for x in X: for y in Y: idata = [x, y] c = np.append(c, nn.realize(f, idata)) fig = pl.figure() ax = Axes3D(fig) ax.scatter(a, b, c) pl.show() Explanation: 確率的勾配降下法を100回繰り返したが見た感じから近づいている。回数を10000回に増やしてみる。 End of explanation X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) a = np.array([]) b = np.array([]) c = np.array([]) for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) evl = np.array([]) for i in range(len(a)): nn = NN() nn.set_network() for j in range(1): l = np.random.choice([i for i in range(len(a))]) if l != i: nn.main2(1, f, [a[l], b[l]], 0.5) idata = [a[i], b[i]] est = nn.realize(f, idata) evl = np.append(evl, math.fabs(est - nn.supervised_data)) np.average(evl) Explanation: 見た感じ随分近づいているように見える。 最後に交差検定を行う。 初めに学習回数が極めて少ないNNである。 End of explanation X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) a = np.array([]) b = np.array([]) c = 
np.array([]) nn = NN() nn.set_network(h=7) for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) evl = np.array([]) for i in range(len(a)): nn = NN() nn.set_network() for j in range(100): l = np.random.choice([i for i in range(len(a))]) if l != i: nn.main2(1, f, [a[l], b[l]], 0.5) idata = [a[i], b[i]] evl = np.append(evl, math.fabs(nn.realize(f, idata) - nn.supervised_data)) np.average(evl) Explanation: 次に十分大きく(100回に)してみる。 End of explanation nn = NN() nn.set_network() X = np.arange(0, 1, 0.05) Y = np.arange(0, 1, 0.05) a = np.array([]) b = np.array([]) c = np.array([]) for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) evl = np.array([]) s = [i for i in range(len(a))] for j in range(1000): l = np.random.choice(s) nn.main2(1, f, [a[l], b[l]], 0.5) c = np.array([]) for i in range(len(a)): idata = [a[i], b[i]] c = np.append(c, nn.realize(f, idata)) fig = pl.figure() ax = Axes3D(fig) ax.scatter(a, b, c) pl.show() Explanation: 誤差の平均であるので小さい方よい。 学習回数を増やした結果、精度が上がった。 最後にオートエンコーダを作成する。回数を増やした方がよいことが分かったため、10000回学習させてみる。 End of explanation from NN import RNN Explanation: 十分再現できていることが分かる。 ②ゲーム理論で用いられるTit for Tatを再現してみる。二人のプレーヤーが互いにRNNで相手の行動を予測し、相手の行動に対してTit for Tatに基づいた行動を選択する。 End of explanation nn1 = RNN() nn1.set_network() nn2 = RNN() nn2.set_network() idata1 = [[1, 0]] idata2 = [[0, 1]] sdata1 = [[0]] sdata2 = [[1]] for t in range(20): for i in range(10): nn1.main2(idata1, sdata2, 0.9) nn2.main2(idata2, sdata1, 0.9) idata1.append([sdata1[-1][0], sdata2[-1][0]]) idata2.append([idata1[-1][1], idata1[-1][0]]) n1r = nn1.realize(idata1) n2r = nn2.realize(idata1) sdata1.append([np.random.choice([1, 0], p=[n1r, 1-n1r])]) sdata2.append([np.random.choice([1, 0], p=[n2r, 1-n2r])]) idata.append([sdata1[-1][0], sdata2[-1][0]]) print nn1.realize(idata1), nn2.realize(idata), idata1 Explanation: 最初の行動はRNNで指定できないので、所与となる。この初期値と裏切りに対する感応度で収束の仕方が決まる。 協調を1、裏切りを0としている。RNNの予測値は整数値でないが、p=(RNNの出力値)で次回に協調を行う。 例1:1期目に、プレーヤー1が協力、プレーヤー2が裏切り。 End of explanation p1 = [] p2 = [] for i in range(len(idata1)): p1.append(idata1[i][0]) for i in range(len(idata2)): p2.append(idata2[i][0]) plt.plot(p1, label='player1') plt.plot(p2, label='player2') Explanation: 下の図より、最初は交互に相手にしっぺ返しをしているが、やがて両者が裏切り合うこと状態に収束する。 End of explanation nn1 = RNN() nn1.set_network() nn2 = RNN() nn2.set_network() idata1 = [[1, 1]] idata2 = [[1, 1]] sdata1 = [[1]] sdata2 = [[1]] for t in range(20): for i in range(10): nn1.main2(idata1, sdata2, 0.9) nn2.main2(idata2, sdata1, 0.9) idata1.append([sdata1[-1][0], sdata2[-1][0]]) idata2.append([idata1[-1][1], idata1[-1][0]]) n1r = nn1.realize(idata1) n2r = nn2.realize(idata1) prob1 = n1r prob2 = n2r - 0.3 if prob2 < 0: prob2 = 0 sdata1.append([np.random.choice([1, 0], p=[prob1, 1-prob1])]) sdata2.append([np.random.choice([1, 0], p=[prob2, 1-prob2])]) idata.append([sdata1[-1][0], sdata2[-1][0]]) print nn1.realize(idata1), nn2.realize(idata), idata1 p1 = [] p2 = [] for i in range(len(idata1)): p1.append(idata1[i][0]) for i in range(len(idata2)): p2.append(idata2[i][0]) plt.plot(p1, label='player1') plt.plot(p2, label='player2') Explanation: 例2:1期目に、プレーヤー1が協力、プレーヤー2が協力。ただし、プレーヤー2は相手の裏切りをかなり警戒している。 警戒を表すためにp=(RNNの出力値 - 0.2)とする。p<0の場合はp=0に直す。 End of explanation nn1 = RNN() nn1.set_network() nn2 = RNN() nn2.set_network() idata1 = [[1, 0]] idata2 = [[0, 1]] sdata1 = [[0]] sdata2 = [[1]] for t in range(20): for i in range(10): nn1.main2(idata1, sdata2, 0.9) nn2.main2(idata2, sdata1, 0.9) idata1.append([sdata1[-1][0], np.random.choice([sdata2[-1][0], 1-sdata2[-1][0]], p=[0.8, 0.2])]) 
idata2.append([sdata2[-1][0], np.random.choice([sdata1[-1][0], 1-sdata1[-1][0]], p=[0.8, 0.2])]) n1r = nn1.realize(idata1) n2r = nn2.realize(idata1) prob1 = n1r prob2 = n2r sdata1.append([np.random.choice([1, 0], p=[prob1, 1-prob1])]) sdata2.append([np.random.choice([1, 0], p=[prob2, 1-prob2])]) idata.append([sdata1[-1][0], sdata2[-1][0]]) print nn1.realize(idata1), nn2.realize(idata), idata1 p1 = [] p2 = [] for i in range(len(idata1)): p1.append(idata1[i][0]) for i in range(len(idata2)): p2.append(idata2[i][0]) plt.plot(p1, label='player1') plt.plot(p2, label='player2') Explanation: 例3:次に相手の行動を完全には観測できない場合を考える。t期の相手の行動をt+1期にノイズが加わって知る。例えば、1期目に相手が協調したことを、確率90%で2期目に正しく知れるが、10%で裏切りと誤って伝わる場合である。 ノイズは20%の確率で加わるものとする。その他の条件は例1と同じにした。 End of explanation
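To make the observation-noise step of example 3 easier to see on its own, here is a small self-contained helper that mirrors the np.random.choice call used above; it is only an illustration and is not part of the NN/RNN classes.
# With probability 0.2 the opponent's last action (1 = cooperate, 0 = defect)
# is reported flipped; otherwise it is passed through unchanged.
import numpy as np
def observe(action, noise=0.2, rng=np.random):
    return rng.choice([action, 1 - action], p=[1 - noise, noise])
print(observe(1))   # usually 1, occasionally 0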
15,966
Given the following text description, write Python code to implement the functionality described below step by step Description: pandas 패키지의 소개 pandas 패키지 Index를 가진 자료형인 R의 data.frame 자료형을 Python에서 구현 참고 자료 http Step1: Vectorized Operation Step2: 명시적인 Index를 가지는 Series 생성시 index 인수로 Index 지정 Index 원소는 각 데이터에 대한 key 역할을 하는 Label dict Step3: Series Indexing 1 Step4: Series Indexing 2 Step5: dict 연산 Step6: dict 데이터를 이용한 Series 생성 별도의 index를 지정하면 지정한 자료만으로 생성 Step7: Index 기준 연산 Step8: Index 이름 Step9: Index 변경 Step10: DataFrame Multi-Series 동일한 Row 인덱스를 사용하는 복수 Series Series를 value로 가지는 dict 2차원 행렬 DataFrame을 행렬로 생각하면 각 Series는 행렬의 Column의 역할 NumPy Array와 차이점 각 Column(Series)마다 type이 달라도 된다. Column Index (Row) Index와 Column Index를 가진다. 각 Column(Series)에 Label 지정 가능 (Row) Index와 Column Label을 동시에 사용하여 자료 접근 가능 Step11: 명시적인 Column/Row Index를 가지는 DataFrame Step12: Single Column Access Step13: Cloumn Data Update Step14: Add Column Step15: Delete Column Step16: inplace 옵션 함수/메소드는 두 가지 종류 그 객체 자체를 변형 해당 객체는 그대로 두고 변형된 새로운 객체를 출력 DataFrame 메소드 대부분은 inplace 옵션을 가짐 inplace=True이면 출력을 None으로 하고 객체 자체를 변형 inplace=False이면 객체 자체는 보존하고 변형된 새로운 객체를 출력 Step17: drop 메소드를 사용한 Row/Column 삭제 del 함수 inplace 연산 drop 메소드 삭제된 Series/DataFrame 출력 Series는 Row 삭제 DataFrame은 axis 인수로 Row/Column 선택 axis=0(디폴트) Step18: Nested dict를 사용한 DataFrame 생성 Step19: Series dict를 사용한 DataFrame 생성 Step20: NumPy array로 변환 Step21: DataFrame의 Column Indexing Single Label key Single Label attribute Label List Fancy Indexing
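Before the step-by-step cells below, one small illustration of the index-aligned arithmetic mentioned in the summary above, reusing the same state-population data that appears later: labels present on only one side become NaN, while Series.add with fill_value substitutes a default instead.
# Index alignment in Series arithmetic: missing labels give NaN unless a
# fill_value is supplied.
import pandas as pd
sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000}
s3 = pd.Series(sdata)
s4 = pd.Series(sdata, index=['California', 'Ohio', 'Oregon', 'Texas'])
print(s3 + s4)                   # California and Utah become NaN
print(s3.add(s4, fill_value=0))  # missing labels treated as 0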
Python Code: s = pd.Series([4, 7, -5, 3]) s s.values type(s.values) s.index type(s.index) Explanation: pandas 패키지의 소개 pandas 패키지 Index를 가진 자료형인 R의 data.frame 자료형을 Python에서 구현 참고 자료 http://pandas.pydata.org/ http://pandas.pydata.org/pandas-docs/stable/10min.html http://pandas.pydata.org/pandas-docs/stable/tutorials.html pandas 자료형 Series 시계열 데이터 Index를 가지는 1차원 NumPy Array DataFrame 복수 필드 시계열 데이터 또는 테이블 데이터 Index를 가지는 2차원 NumPy Array Index Label: 각각의 Row/Column에 대한 이름 Name: 인덱스 자체에 대한 이름 <img src="https://docs.google.com/drawings/d/12FKb94RlpNp7hZNndpnLxmdMJn3FoLfGwkUAh33OmOw/pub?w=602&h=446" style="width:60%; margin:0 auto 0 auto;"> Series Row Index를 가지는 자료열 생성 추가/삭제 Indexing 명시적인 Index를 가지지 않는 Series End of explanation s * 2 np.exp(s) Explanation: Vectorized Operation End of explanation s2 = pd.Series([4, 7, -5, 3], index=["d", "b", "a", "c"]) s2 s2.index Explanation: 명시적인 Index를 가지는 Series 생성시 index 인수로 Index 지정 Index 원소는 각 데이터에 대한 key 역할을 하는 Label dict End of explanation s2['a'] s2["b":"c"] s2[['a', 'b']] Explanation: Series Indexing 1: Label Indexing Single Label Label Slicing 마지막 원소 포함 Label을 원소로 가지는 Label (Label을 사용한 List Fancy Indexing) 주어진 순서대로 재배열 End of explanation s2[2] s2[1:4] s2[[2, 1]] s2[s2 > 0] Explanation: Series Indexing 2: Integer Indexing Single Integer Integer Slicing 마지막 원소를 포함하지 않는 일반적인 Slicing Integer List Indexing (List Fancy Indexing) Boolearn Fancy Indexing End of explanation "a" in s2, "e" in s2 for k, v in s2.iteritems(): print(k, v) s2["d":"a"] Explanation: dict 연산 End of explanation sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000} s3 = pd.Series(sdata) s3 states = ['California', 'Ohio', 'Oregon', 'Texas'] s4 = pd.Series(sdata, index=states) s4 pd.isnull(s4) pd.notnull(s4) s4.isnull() s4.notnull() Explanation: dict 데이터를 이용한 Series 생성 별도의 index를 지정하면 지정한 자료만으로 생성 End of explanation print(s3.values, s4.values) s3.values + s4.values s3 + s4 Explanation: Index 기준 연산 End of explanation s4 s4.name = "population" s4 s4.index.name = "state" s4 Explanation: Index 이름 End of explanation s s.index s.index = ['Bob', 'Steve', 'Jeff', 'Ryan'] s s.index Explanation: Index 변경 End of explanation data = { 'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'], 'year': [2000, 2001, 2002, 2001, 2002], 'pop': [1.5, 1.7, 3.6, 2.4, 2.9] } df = pd.DataFrame(data) df pd.DataFrame(data, columns=['year', 'state', 'pop']) df.dtypes Explanation: DataFrame Multi-Series 동일한 Row 인덱스를 사용하는 복수 Series Series를 value로 가지는 dict 2차원 행렬 DataFrame을 행렬로 생각하면 각 Series는 행렬의 Column의 역할 NumPy Array와 차이점 각 Column(Series)마다 type이 달라도 된다. Column Index (Row) Index와 Column Index를 가진다. 
각 Column(Series)에 Label 지정 가능 (Row) Index와 Column Label을 동시에 사용하여 자료 접근 가능 End of explanation df2 = pd.DataFrame(data, columns=['year', 'state', 'pop', 'debt'], index=['one', 'two', 'three', 'four', 'five']) df2 Explanation: 명시적인 Column/Row Index를 가지는 DataFrame End of explanation df["state"] type(df["state"]) df.state Explanation: Single Column Access End of explanation df2['debt'] = 16.5 df2 df2['debt'] = np.arange(5) df2 df2['debt'] = pd.Series([-1.2, -1.5, -1.7], index=['two', 'four', 'five']) df2 Explanation: Cloumn Data Update End of explanation df2['eastern'] = df2.state == 'Ohio' df2 Explanation: Add Column End of explanation del df2['eastern'] df2 Explanation: Delete Column End of explanation x = [3, 6, 1, 4] sorted(x) x x.sort() x Explanation: inplace 옵션 함수/메소드는 두 가지 종류 그 객체 자체를 변형 해당 객체는 그대로 두고 변형된 새로운 객체를 출력 DataFrame 메소드 대부분은 inplace 옵션을 가짐 inplace=True이면 출력을 None으로 하고 객체 자체를 변형 inplace=False이면 객체 자체는 보존하고 변형된 새로운 객체를 출력 End of explanation s = pd.Series(np.arange(5.), index=['a', 'b', 'c', 'd', 'e']) s s2 = s.drop('c') s2 s s.drop(["b", "c"]) df = pd.DataFrame(np.arange(16).reshape((4, 4)), index=['Ohio', 'Colorado', 'Utah', 'New York'], columns=['one', 'two', 'three', 'four']) df df.drop(['Colorado', 'Ohio']) df.drop('two', axis=1) df.drop(['two', 'four'], axis=1) Explanation: drop 메소드를 사용한 Row/Column 삭제 del 함수 inplace 연산 drop 메소드 삭제된 Series/DataFrame 출력 Series는 Row 삭제 DataFrame은 axis 인수로 Row/Column 선택 axis=0(디폴트): Row axis=1: Column End of explanation pop = { 'Nevada': { 2001: 2.4, 2002: 2.9 }, 'Ohio': { 2000: 1.5, 2001: 1.7, 2002: 3.6 } } df3 = pd.DataFrame(pop) df3 Explanation: Nested dict를 사용한 DataFrame 생성 End of explanation pdata = { 'Ohio': df3['Ohio'][:-1], 'Nevada': df3['Nevada'][:2] } pd.DataFrame(pdata) Explanation: Series dict를 사용한 DataFrame 생성 End of explanation df3.values df2.values Explanation: NumPy array로 변환 End of explanation df2 df2["year"] df2.year df2[["state", "debt", "year"]] df2[["year"]] Explanation: DataFrame의 Column Indexing Single Label key Single Label attribute Label List Fancy Indexing End of explanation
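As a follow-up to the column-indexing summary above: the notes also state that a DataFrame can be accessed by row index and column label at the same time, and one way to do that with the df2 frame defined earlier is .loc (shown purely as an illustration).
# Row label and column label together via .loc; label slices include the end point.
print(df2.loc['three', 'state'])
print(df2.loc['one':'three', ['year', 'pop']])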
15,967
Given the following text description, write Python code to implement the functionality described below step by step Description: This document is intended for intermediate to advanced users. It deals with the internals of the MoveStrategy and MoveScheme objects, as well how to create custom versions of them. For most users, the default behaviors are sufficient. Step1: MoveStrategy and MoveScheme After you've set up your ensembles, you need to create a scheme to sample those ensembles. This is done by the MoveStrategy and MoveScheme objects. OpenPathSampling uses a simple default scheme for any network, in which first you choose a type of move to do (shooting, replica exchange, etc), and then you choose a specific instance of that move type (i.e., which ensembles to use). This default scheme works for most cases, but you might find yourself in a situation where the default scheme isn't very efficient, or where you think you have an idea for a more efficient scheme. OpenPathSampling makes it easy to modify the underlying move scheme. Definitions of terms move scheme Step2: OpenPathSampling comes with a nice tool to visualize the move scheme. There are two main columns in the output of this visualization Step3: MoveSchemes are built from MoveStrategy objects In the end, you must give your PathSimulator object a single MoveScheme. However, this scheme might involve several different strategies (for example, whether you want to do one-way shooting or two-way shooting is one strategy decision, and it each can be combined with either nearest-neightbor replica exchange strategy or all-replica exchange strategy Step4: Now when we visualize this, note the difference in the replica exchange block Step5: What if you changed your mind, or wanted to go the other way? Of course, you could just create a new scheme from scratch. However, you can also append a NearestNeighborRepExStrategy after the AllSetRepExStrategy and, from that, return to nearest-neighbor replica exchange. For NearestNeighborRepExStrategy, the default is replace=True Step6: Combination strategies OpenPathSampling provides a few shortcuts to strategies which combine several substrategies into a whole. DefaultMoveStrategy The DefaultMoveStrategy converts the move scheme to one which follows the default OpenPathSampling behavior. TODO Step7: Examples of practical uses In the examples above, we saw how to change from nearest neighbor replica exchange to all (in-set) replica exchange, and we saw how to switch to a single replica move strategy. In the next examples, we'll look at several other uses for move strategies. Adding a specific extra replica exchange move In the examples above, we showed how to get either a nearest neighbor replica exchange attempt graph, or an all in-set replica exchange attempt graph. If you want something in-between, there's also the NthNearestNeighborRepExStrategy, which works like those above. But what if (probably in addition to one of these schemes) you want to allow a certain few replica exchange? For example, in a multiple interface set approach you might want to include a few exchanges between interfaces in different sets which share the same initial state. To do this, we start with an acceptable strategy (we'll assume the default NearestNeighborRepExStrategy is our starting point) and we add more moves using SelectedPairsRepExStrategy, with replace=False. Step8: Now we have 7 replica exchange movers (5 not including MS-outer), as can be seen in the move tree visualization. 
Step9: First crossing shooting point selection for some ensembles For ensembles which are far from the state, sometimes uniform shooting point selection doesn't work. If the number of frames inside the interface is much larger than the number outside the interface, then you are very likely to select a shooting point inside the interface. If that point is far enough from the interface, it may be very unlikely for the trial path to cross the interface. One remedy for this is to use the first frame after the first crossing of the interface as the shooting point. This leads to 100% acceptance of the shooting move (every trial satisfies the ensemble, and since there is only one such point -- which is conserved in the trial -- the selection probability is equal in each direction.) The downside of this approach is that the paths decorrelate much more slowly, since only that one point is allowed for shooting (path reversal moves change which is the "first" crossing, otherwise there would be never be complete decorrelation). So while it may be necessary to do it for outer interfaces, doing the same for inner interfaces may slow convergence. The trick we'll show here is to apply the first crossing shooting point selection only to the outer interfaces. This can increase the acceptance probability of the outer interfaces without affecting the decorrelation of the inner interfaces. Step10: Two different kinds of shooting for one ensemble In importance sampling approaches like TIS, you're seeking a balance between two sampling goals. On the one hand, most of space has a negligible (or zero) contribution to the property being measured, so you don't want your steps to be so large that your trials are never accepted. On the other hand, if you make very small steps, it takes a long time to diffuse through the important region (i.e., to decorrelate). One approach which could be used to fix this would be to allow two different kinds of moves Step11: In the visualization of this, you'll see that we have 2 blocks of shooting moves Step12: RepEx-Shoot-RepEx One of the mains goals of OpenPathSampling is to allow users to develop new approaches. New move strategies certainly represents one direction of possible research. This particular example also shows you how to implement such features. It includes both implementation of a custom PathMover and a custom MoveStrategy. Say that, instead of doing the standard replica exchange and shooting moves, you wanted to combine them all into one move buy first doing all the replica exchanges in one order, then doing all the shooting moves, then doing all the replica exchanges in the other order. To implement this, we'll create a custom subclass of MoveStrategy. When making the movers for this strategy, we'll use the built-in SequentialMover object to create the move we're interested in. Step13: You'll notice that the combo_mover we defined above is within a RandomChoiceMover Step14: Modifying the probabilities of moves The DefaultStrategy includes default choices for the probability of making each move type, and then treats all moves within a given type with equal probability. Above, we described how to change the probability of a specific move type; now we're going to discuss changing the probability of a specific move within that type. One approach would be to create a custom MoveStrategy at the GLOBAL level. However, in this section we're going to use a different paradigm to approach this problem. 
Instead of using a MoveStrategy to change the MoveScheme, we will manually modify it. Keep in mind that this involves really diving into the guts of the MoveScheme object, with all the caveats that involves. Although this paradigm can be used in this and other cases, it is only recommended for advanced users. One you've created the move decision tree, you can make any custom modifications to it that you would desire. However, it is important to remember that modifying certain aspects can lead to a nonsensical result. For example, appending a move to a RandomChoiceMover without also appending an associated weight will lead to nonsense. For the most part, it is better to use MoveStrategy objects to modify your move decision tree. But to make your own MoveStrategy subclasses, you will need to know how to work with the details of the MoveScheme and the move decision tree. In this example, we find the shooting movers associated with a certain ensemble, and double the probability of choosing that ensemble if a shooting move is selected.
Python Code: %matplotlib inline import openpathsampling as paths from openpathsampling.visualize import PathTreeBuilder, PathTreeBuilder from IPython.display import SVG, HTML import openpathsampling.high_level.move_strategy as strategies # TODO: handle this better # real fast setup of a small network from openpathsampling import VolumeFactory as vf cvA = paths.FunctionCV(name="xA", f=lambda s : s.xyz[0][0]) cvB = paths.FunctionCV(name="xB", f=lambda s : -s.xyz[0][0]) stateA = paths.CVDefinedVolume(cvA, float("-inf"), -0.5).named("A") stateB = paths.CVDefinedVolume(cvB, float("-inf"), -0.5).named("B") interfacesA = paths.VolumeInterfaceSet(cvA, float("-inf"), [-0.5, -0.3, -0.1]) interfacesB = paths.VolumeInterfaceSet(cvB, float("-inf"), [-0.5, -0.3, -0.1]) network = paths.MSTISNetwork( [(stateA, interfacesA), (stateB, interfacesB)], ms_outers=paths.MSOuterTISInterface.from_lambdas( {interfacesA: 0.0, interfacesB: 0.0} ) ) Explanation: This document is intended for intermediate to advanced users. It deals with the internals of the MoveStrategy and MoveScheme objects, as well how to create custom versions of them. For most users, the default behaviors are sufficient. End of explanation scheme = paths.DefaultScheme(network) Explanation: MoveStrategy and MoveScheme After you've set up your ensembles, you need to create a scheme to sample those ensembles. This is done by the MoveStrategy and MoveScheme objects. OpenPathSampling uses a simple default scheme for any network, in which first you choose a type of move to do (shooting, replica exchange, etc), and then you choose a specific instance of that move type (i.e., which ensembles to use). This default scheme works for most cases, but you might find yourself in a situation where the default scheme isn't very efficient, or where you think you have an idea for a more efficient scheme. OpenPathSampling makes it easy to modify the underlying move scheme. Definitions of terms move scheme: for a given simulation, the move scheme is the "move decision tree". Every step of the MC is done by starting with some root move, and tracing a series of decision points to generate (and then accept) a trial. move strategy: a general approach to building a move scheme (or a subset thereof). SRTIS is a move strategy. Nearest-neighbor replica exchange is a move strategy. All-replica exchange is a move strategy. So we use "strategy" to talk about the general idea, and "scheme" to talk about a specific implementation of that idea. This document will describe both how to modify the default scheme for one-time modifications and how to develop new move strategies to be re-used on many problems. For the simplest cases, you don't need to get into all of this. All you need to do is to use the DefaultScheme, getting the move decision tree as follows: End of explanation move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme) SVG(move_vis.svg()) Explanation: OpenPathSampling comes with a nice tool to visualize the move scheme. There are two main columns in the output of this visualization: at the left, you see a visualization of the move decision tree. On the right, you see the input and output ensembles for each PathMover. The move decision tree part of the visualization should be read as follows: each RandomChoiceMover (or related movers, such as OneWayShooting) randomly select one of the movers at the next level of indentation. Any form of SequentialMover performs the moves at the next level of indentation in the order from top to bottom. 
The input/output ensembles part shows possible input ensembles to the move marked with a green bar at the top, and possible output ensembles to the move marked with a red bar on the bottom. The example below shows this visualization for the default scheme with this network. End of explanation # example: switching between AllSetRepEx and NearestNeighborRepEx scheme = paths.DefaultScheme(network) scheme.append(strategies.AllSetRepExStrategy()) Explanation: MoveSchemes are built from MoveStrategy objects In the end, you must give your PathSimulator object a single MoveScheme. However, this scheme might involve several different strategies (for example, whether you want to do one-way shooting or two-way shooting is one strategy decision, and it each can be combined with either nearest-neightbor replica exchange strategy or all-replica exchange strategy: these strategy decisions are completely independent.) Creating a strategy A strategy should be thought of as a way to either add new PathMovers to a MoveScheme or to change those PathMovers which already exist in some way. Every MoveStrategy therefore has an ensembles parameter. If the ensembles parameter is not given, it is assumed that the user intended all normal ensembles in the scheme's transitions. Every strategy also has an initialization parameter called group. This defines the "category" of the move. There are several standard categories (described below), but you can also create custom categories (some examples are given later). Finally, there is another parameter which can be given in the initialization of any strategy, but which must be given as a named parameter. This is replace, which is a boolean stating whether the movers created using this should replace those in the scheme at this point. Strategy groups Intuitively, we often think of moves in groups: the shooting moves, the replica exchange moves, etc. For organizational and analysis purposes, we include that structure in the MoveScheme, and each MoveStrategy must declare what groups it applies to. OpenPathSampling allows users to define arbitrary groups (using strings as labels). The standard schemes use the following groups: 'shooting' 'repex' 'pathreversal' 'minus' Strategy levels In order to apply the strategies in a reasonable order, OpenPathSampling distinguishes several levels at which move strategies work. For example, one level determines which swaps define the replica exchange strategy to be used (SIGNATURE), and another level determines whether the swaps are done as replica exchange or ensemble hopping (GROUP). Yet another level creates the structures that determine when to do a replica exchange vs. when to do a shooting move (GLOBAL). When building the move decision tree, the strategies are applied in the order of their levels. Each level is given a numerical value, meaning that it is simple to create custom orderings. Here are the built-in levels, their numeric values, and brief description: levels.SIGNATURE = 10: levels.MOVER = 30: levels.GROUP = 50: levels.SUPERGROUP = 70: levels.GLOBAL = 90: Applying the strategy to a move scheme To add a strategy to the move scheme, you use MoveScheme's .append() function. This function can take two arguments: the list of items to append (which is required) and the levels associated with each item. By default, every strategy has a level associated with it, so under most circumstances you don't need to use the levels argument. Now let's look at a specific example. 
Say that, instead of doing nearest-neighbor replica exchange (as is the default), we wanted to allow all exchanges within each transition. This is as easy as appending an AllSetRepExStrategy to our scheme. End of explanation move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme) SVG(move_vis.svg()) Explanation: Now when we visualize this, note the difference in the replica exchange block: we have 6 movers instead of 4, and now we allow the exchanges between the innermost and outermost ensembles. End of explanation scheme.append(strategies.NearestNeighborRepExStrategy(), force=True) move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme) SVG(move_vis.svg()) Explanation: What if you changed your mind, or wanted to go the other way? Of course, you could just create a new scheme from scratch. However, you can also append a NearestNeighborRepExStrategy after the AllSetRepExStrategy and, from that, return to nearest-neighbor replica exchange. For NearestNeighborRepExStrategy, the default is replace=True: this is required in order to replace the AllSetRepExStrategy. Also, to obtain the new move decision tree, you have to pass the argument rebuild=True. This is because, once you've built the tree once, the function scheme.mover_decision_tree() will otherwise skip building the scheme and return the root of the already-built decision tree. This allows advanced custom changes, as discussed much later in this document. End of explanation # example: single replica Explanation: Combination strategies OpenPathSampling provides a few shortcuts to strategies which combine several substrategies into a whole. DefaultMoveStrategy The DefaultMoveStrategy converts the move scheme to one which follows the default OpenPathSampling behavior. TODO: note that this isn't always the same as the default scheme you get from an empty move scheme. If other movers exist, they are converted to the default strategy. So if you added movers which are not part of the default for your network, they will still get included in the scheme. SingleReplicaStrategy The SingleReplicaStrategy converts all replica exchanges to ensemble hops (bias parameter required). It then reshapes the move decision tree so that is organized by ensemble, TODO End of explanation ens00 = network.sampling_transitions[0].ensembles[0] ens02 = network.sampling_transitions[0].ensembles[2] extra_repex = strategies.SelectedPairsRepExStrategy(ensembles=[ens00, ens02], replace=False) scheme = paths.DefaultScheme(network) scheme.append(extra_repex) Explanation: Examples of practical uses In the examples above, we saw how to change from nearest neighbor replica exchange to all (in-set) replica exchange, and we saw how to switch to a single replica move strategy. In the next examples, we'll look at several other uses for move strategies. Adding a specific extra replica exchange move In the examples above, we showed how to get either a nearest neighbor replica exchange attempt graph, or an all in-set replica exchange attempt graph. If you want something in-between, there's also the NthNearestNeighborRepExStrategy, which works like those above. But what if (probably in addition to one of these schemes) you want to allow a certain few replica exchange? For example, in a multiple interface set approach you might want to include a few exchanges between interfaces in different sets which share the same initial state. 
To do this, we start with an acceptable strategy (we'll assume the default NearestNeighborRepExStrategy is our starting point) and we add more moves using SelectedPairsRepExStrategy, with replace=False. End of explanation move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme) SVG(move_vis.svg()) Explanation: Now we have 7 replica exchange movers (5 not including MS-outer), as can be seen in the move tree visualization. End of explanation # select the outermost ensemble in each sampling transition special_ensembles = [transition.ensembles[-1] for transition in network.sampling_transitions] alternate_shooting = strategies.OneWayShootingStrategy( selector=paths.UniformSelector(), # TODO: change this ensembles=special_ensembles ) # note that replace=True is the default scheme = paths.DefaultScheme(network) scheme.movers = {} # TODO: this will be removed, and lines on either side combined, when all is integrated scheme.append(alternate_shooting) move_decision_tree = scheme.move_decision_tree() # TODO: find a way to visualize Explanation: First crossing shooting point selection for some ensembles For ensembles which are far from the state, sometimes uniform shooting point selection doesn't work. If the number of frames inside the interface is much larger than the number outside the interface, then you are very likely to select a shooting point inside the interface. If that point is far enough from the interface, it may be very unlikely for the trial path to cross the interface. One remedy for this is to use the first frame after the first crossing of the interface as the shooting point. This leads to 100% acceptance of the shooting move (every trial satisfies the ensemble, and since there is only one such point -- which is conserved in the trial -- the selection probability is equal in each direction.) The downside of this approach is that the paths decorrelate much more slowly, since only that one point is allowed for shooting (path reversal moves change which is the "first" crossing, otherwise there would be never be complete decorrelation). So while it may be necessary to do it for outer interfaces, doing the same for inner interfaces may slow convergence. The trick we'll show here is to apply the first crossing shooting point selection only to the outer interfaces. This can increase the acceptance probability of the outer interfaces without affecting the decorrelation of the inner interfaces. End of explanation # example: add extra shooting (in a different group, preferably) extra_shooting = strategies.OneWayShootingStrategy( selector=paths.UniformSelector(), # TODO: change this group='small_step_shooting' ) scheme = paths.DefaultScheme(network) scheme.append(extra_shooting) Explanation: Two different kinds of shooting for one ensemble In importance sampling approaches like TIS, you're seeking a balance between two sampling goals. On the one hand, most of space has a negligible (or zero) contribution to the property being measured, so you don't want your steps to be so large that your trials are never accepted. On the other hand, if you make very small steps, it takes a long time to diffuse through the important region (i.e., to decorrelate). One approach which could be used to fix this would be to allow two different kinds of moves: one which makes small changes with a relatively high acceptance probability to get accepted samples, and one which makes larger changes in an attempt to decorrelate. 
This section will show you how to do that by adding a small_step_shooting group which does uses the first crossing shooting point selection. (In reality, a better way to get this effect would be to use the standard one-way shooting to do the small steps, and use two-way shooting -- not yet implemented -- to get the larger steps.) End of explanation move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme) SVG(move_vis.svg()) Explanation: In the visualization of this, you'll see that we have 2 blocks of shooting moves: one is the pre-existing group called 'shooting', and the other is this new group 'small_step_shooting'. End of explanation # example: custom subclass of `MoveStrategy` class RepExShootRepExStrategy(strategies.MoveStrategy): _level = strategies.levels.GROUP # we define an init function mainly to set defaults for `replace` and `group` def __init__(self, ensembles=None, group="repex_shoot_repex", replace=True, network=None): super(RepExShootRepExStrategy, self).__init__( ensembles=ensembles, group=group, replace=replace ) def make_movers(self, scheme): # if we replace, we remove these groups from the scheme.movers dictionary if self.replace: repex_movers = scheme.movers.pop('repex') shoot_movers = scheme.movers.pop('shooting') else: repex_movers = scheme.movers['repex'] shoot_movers = scheme.movers['shooting'] # combine into a list for the SequentialMover mover_list = repex_movers + shoot_movers + list(reversed(repex_movers)) combo_mover = paths.SequentialMover(mover_list) return [combo_mover] repex_shoot_repex = RepExShootRepExStrategy() scheme = paths.DefaultScheme(network) scheme.append(repex_shoot_repex) Explanation: RepEx-Shoot-RepEx One of the mains goals of OpenPathSampling is to allow users to develop new approaches. New move strategies certainly represents one direction of possible research. This particular example also shows you how to implement such features. It includes both implementation of a custom PathMover and a custom MoveStrategy. Say that, instead of doing the standard replica exchange and shooting moves, you wanted to combine them all into one move buy first doing all the replica exchanges in one order, then doing all the shooting moves, then doing all the replica exchanges in the other order. To implement this, we'll create a custom subclass of MoveStrategy. When making the movers for this strategy, we'll use the built-in SequentialMover object to create the move we're interested in. End of explanation # TODO: there appears to be a bug in MoveTreeBuilder with this scheme move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme) SVG(move_vis.svg()) Explanation: You'll notice that the combo_mover we defined above is within a RandomChoiceMover: that random choice is for the group 'repex_shoot_repex', which has only this one member. In this, we have used the default replace=True, which removes the old groups for the shooting movers and replica exchange movers. If you would like to keep the old shooting and replica exchange moves around as well, you can use replace=False. 
End of explanation # TODO: This is done differently (and more easily) now # example: getting into the details #scheme = paths.DefaultScheme(network) #move_decision_tree = scheme.move_decision_tree() #ens = network.sampling_transitions[0].ensembles[-1] #shooting_chooser = [m for m in move_decision_tree.movers if m.movers==scheme.movers['shooting']][0] #idx_ens = [shooting_chooser.movers.index(m) # for m in shooting_chooser.movers # if m.ensemble_signature==((ens,), (ens,))] #print shooting_chooser.weights #for idx in idx_ens: # shooting_chooser.weights[idx] *= 2 #print shooting_chooser.weights Explanation: Modifying the probabilities of moves The DefaultStrategy includes default choices for the probability of making each move type, and then treats all moves within a given type with equal probability. Above, we described how to change the probability of a specific move type; now we're going to discuss changing the probability of a specific move within that type. One approach would be to create a custom MoveStrategy at the GLOBAL level. However, in this section we're going to use a different paradigm to approach this problem. Instead of using a MoveStrategy to change the MoveScheme, we will manually modify it. Keep in mind that this involves really diving into the guts of the MoveScheme object, with all the caveats that involves. Although this paradigm can be used in this and other cases, it is only recommended for advanced users. One you've created the move decision tree, you can make any custom modifications to it that you would desire. However, it is important to remember that modifying certain aspects can lead to a nonsensical result. For example, appending a move to a RandomChoiceMover without also appending an associated weight will lead to nonsense. For the most part, it is better to use MoveStrategy objects to modify your move decision tree. But to make your own MoveStrategy subclasses, you will need to know how to work with the details of the MoveScheme and the move decision tree. In this example, we find the shooting movers associated with a certain ensemble, and double the probability of choosing that ensemble if a shooting move is selected. End of explanation
15,968
Given the following text description, write Python code to implement the functionality described below step by step Description: Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All). Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below Step1: Exercise 1 - Shell basics Work through as much of the Software Carpentry lesson on the Unix Shell as you can. Run through the Setup section just below, then open a terminal through Jupyter to run through the exercises. After you have completed the first few sections of the tutorial, return to this notebook. Execute all of the cells, and answer all of the questions. Setup - getting required files To get started, you'll need to have the required files in your directory. Use wget to get them Step2: Note Step3: What is the difference between the two previous cells, and what does the single dot mean? YOUR ANSWER HERE Step4: What do the double dots mean? YOUR ANSWER HERE Step5: Working with Files and Directories The following cells come from the next section of the tutorial. Step6: You can't use the nano editor here in Jupyter, so we'll use the touch command to create an empty file instead. Step7: Removing files and directories. Step8: Renaming and copying files.
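One optional aside before the notebook cells: every file operation the shell escapes below perform also has a pure-Python equivalent, which can be handy when no shell is available. A minimal sketch with pathlib (illustration only; the exercise itself uses shell escapes):
# Pure-Python counterparts of a few of the shell commands used below.
from pathlib import Path
Path('thesis').mkdir(exist_ok=True)       # mkdir thesis
(Path('thesis') / 'draft.txt').touch()    # touch thesis/draft.txt
(Path('thesis') / 'draft.txt').unlink()   # rm thesis/draft.txt
Path('thesis').rmdir()                    # rmdir thesis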
Python Code: NAME = "" COLLABORATORS = "" Explanation: Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All). Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below: End of explanation !wget http://swcarpentry.github.io/shell-novice/data/shell-novice-data.zip !unzip -l shell-novice-data.zip !unzip shell-novice-data.zip Explanation: Exercise 1 - Shell basics Work through as much of the Software Carpentry lesson on the Unix Shell as you can. Run through the Setup section just below, then open a terminal through Jupyter to run through the exercises. After you have completed the first few sections of the tutorial, return to this notebook. Execute all of the cells, and answer all of the questions. Setup - getting required files To get started, you'll need to have the required files in your directory. Use wget to get them: End of explanation !whoami !pwd !ls -F !ls -F !ls -F data-shell/ !ls -aF !ls -af . Explanation: Note: you only need to do this once per session while using Jupyter on datanotebook.org. You can open a terminal now and work through the steps, and return to this notebook a little later, and the files will be available either way. That's because you're working in the same local directory. However! If you download this file, close your Jupyter session for the night, then come back tomorrow and open up a new Jupyter session on the server again, you'll need to get those sample files again. Just execute the cells above to do it. Okay, let's get on with the exercise! Navigating Files and Directories As you work through this section of the tutorial, complete the steps here as well, using the ! shell escape command. Execute each cell as you go. These steps aren't exactly the same as what's in the tutorial, where the file layout is a little different and where they're not using a notebook like we are. That's okay. Just consider this practice. End of explanation !ls -F .. Explanation: What is the difference between the two previous cells, and what does the single dot mean? YOUR ANSWER HERE End of explanation !ls data-shell/north-pacific-gyre/2012-07-03/ Explanation: What do the double dots mean? YOUR ANSWER HERE End of explanation !ls -F !mkdir thesis import os assert "thesis" in os.listdir() !ls -F Explanation: Working with Files and Directories The following cells come from the next section of the tutorial. End of explanation !touch thesis/draft.txt assert "draft.txt" in os.listdir("thesis") !ls -F thesis Explanation: You can't use the nano editor here in Jupyter, so we'll use the touch command to create an empty file instead. End of explanation !rm thesis/draft.txt assert "draft.txt" not in os.listdir("thesis") !rm thesis !rmdir thesis assert "thesis" not in os.listdir() !ls Explanation: Removing files and directories. End of explanation !touch draft.txt assert "draft.txt" in os.listdir() !mv draft.txt quotes.txt assert "quotes.txt" in os.listdir() assert "draft.txt" not in os.listdir() !ls !cp quotes.txt quotations.txt assert "quotes.txt" in os.listdir() assert "quotations.txt" in os.listdir() Explanation: Renaming and copying files. End of explanation
15,969
Given the following text description, write Python code to implement the functionality described below step by step Description: Analyzer Analyzer is a python program that tries to gauge the evolvability and maintainability of java software. To achieve this, it tries to measure the complexity of the software under evalutation. A. What is software evolvability and maintainability? We define software evolvability as the ease with which a software system or a component can evolve while preserving its design as much as possible. In the case of OO class libraries, we restrict the preservation of the design to the preservation of the library interface. This is important when we consider that the evolution of a system that uses a library is directly influenced by the evolvability of the library. For instance, a system that uses version i of a library can easily be upgraded with version i+1 of the same library if the new version preserves the interface of the older one. B. What is software complexity? As the Wikipedia article (https Step1: Analyzing one project Step2: 1. Commit frequency Step3: 2. Distinct committers Step4: 3. Class reference count Step5: 4. Inheritance count Step6: 5. Lines of code Step7: 6. Number of methods Step8: 7. Halstead complexity measures To measure the Halstead complexity, following metrics are taken into account Step9: b) Number of operands Step10: Complexity measures Step11: 8. Cyclomatic complexity Step12: Analyzing Apache Java projects Step13: Construct model with Apache projects Step14: We break the projects down in batches of five to make the analysis manageable Step15: Clustering Apache Java projects Step16: Clustering results The clustering shows us that with four clusters, we can cover the whole graph. This gives us four clearly defined areas in which all projects can be mapped. The task is now to discover what parameters have the largest importance in this clustering. We can do this by examining the features of the four projects closest to the centroids and comparing them. Step17: Tabulating groovy and synapse Step18: Construct a prediction model with the Apache projects Labeling all projects using four clusters Step19: Construct a Support Vector Classification model Step20: Test it Step21: Analyze JetBrains kotlin project
Python Code: # Imports and directives %matplotlib inline import numpy as np from math import log import matplotlib.pyplot as plt from matplotlib.mlab import PCA as mlabPCA import javalang import os, re, requests, zipfile, json, operator from collections import Counter import colorsys import random from StringIO import StringIO from subprocess import Popen, PIPE from sklearn.cluster import KMeans from tabulate import tabulate from sklearn import svm # Variables USER = 'apache' # github user of the repo that is analysed REPO = 'tomcat' # repository to investigate BASE_PATH = '/Users/philippepossemiers/Documents/Dev/Spark/data/analyzer/' # local expansion path COMMENT_LINES = ['/*', '//', '*/', '* '] # remove comments from code KEY_WORDS = ['abstract','continue','for','new','switch','assert','default','goto','synchronized', 'boolean','do','if','private','this','break','double','implements','protected','throw', 'byte','else','public','throws','case','enum','instanceof','return','transient', 'catch','extends','int','short','try','char','final','interface','static','void', 'class','finally','long','strictfp','volatile','const','float','native','super','while' 'true','false','null'] TOP = 25 # number of items to show in graphs # list of operators to find in source code OPERATORS = ['\+\+','\-\-','\+=','\-=','\*\*','==','!=','>=','<=','\+','=','\-','\*','/','%','!','&&', \ '\|\|','\?','instanceof','~','<<','>>','>>>','&','\^','<','>'] # list of variable types to find in source code OPERANDS = ['boolean','byte','char','short','int','long','float','double','String'] GIT_COMMIT_FIELDS = ['author_name', 'committer name', 'date', 'message', 'name'] GIT_LOG_FORMAT = ['%an', '%cn', '%ad', '%s'] GIT_LOG_FORMAT = '%x1f'.join(GIT_LOG_FORMAT) + '%x1e' # List of Apache Java projects on github APACHE_PROJECTS = ['abdera', 'accumulo', 'ace', 'activemq', 'airavata', 'ambari', 'ant', 'ant-antlibs-antunit', \ 'any23', 'archiva', 'aries', 'webservices-axiom', 'axis2-java', \ 'bigtop', 'bookkeeper', 'bval', 'calcite', 'camel', 'cassandra', 'cayenne', \ 'chainsaw', 'chukwa', 'clerezza', 'commons-bcel', \ 'commons-beanutils', 'commons-bsf', 'commons-chain', 'commons-cli', 'commons-codec', \ 'commons-collections', 'commons-compress', 'commons-configuration', 'commons-daemon', \ 'commons-dbcp', 'commons-dbutils', 'commons-digester', 'commons-discovery', \ 'commons-email', 'commons-exec', 'commons-fileupload', 'commons-functor', 'httpcomponents-client', \ 'commons-io', 'commons-jci', 'commons-jcs', 'commons-jelly', 'commons-jexl', 'commons-jxpath', \ 'commons-lang', 'commons-launcher', 'commons-logging', 'commons-math', \ 'commons-net', 'commons-ognl', 'commons-pool', 'commons-proxy', 'commons-rng', 'commons-scxml', \ 'commons-validator', 'commons-vfs', 'commons-weaver', 'continuum', 'crunch', \ 'ctakes', 'curator', 'cxf', 'derby', 'directmemory', \ 'directory-server', 'directory-studio', 'drill', 'empire-db', 'falcon', 'felix', 'flink', \ 'flume', 'fop', 'directory-fortress-core', 'ftpserver', 'geronimo', 'giraph', 'gora', \ 'groovy', 'hadoop', 'hama', 'harmony', 'hbase', 'helix', 'hive', 'httpcomponents-client', \ 'httpcomponents-core', 'jackrabbit', 'jena', 'jmeter', 'lens', 'log4j', \ 'lucene-solr', 'maven', 'maven-doxia', 'metamodel', 'mina', 'mrunit', 'myfaces', 'nutch', 'oozie', \ 'openjpa', 'openmeetings', 'openwebbeans', 'orc', 'phoenix', 'pig', 'poi','rat', 'river', \ 'shindig', 'sling', \ 'sqoop', 'struts', 'synapse', 'syncope', 'tajo', 'tika', 'tiles', 'tomcat', 'tomee', \ 'vxquery', 'vysper', 'whirr', 
'wicket', 'wink', 'wookie', 'xmlbeans', 'zeppelin', 'zookeeper'] print len(APACHE_PROJECTS) # Global dictionaries joined = [] # list with all source files commit_dict = {} # commits per class reference_dict = {} # number of times a class is referenced lines_dict = {} # number of lines per class methods_dict = {} # number of functions per class operators_dict = {} # number of operators per class operands_dict = {} # number of operands per class halstead_dict = {} # Halstead complexity measures cyclomatic_dict = {} # cyclomatic complexity # Utility functions # TODO : check if we can use this def sanitize(contents): lines = contents.split('\n') # remove stop lines for stop_line in COMMENT_LINES: lines = [line.lower().lstrip().replace(';', '') for line in lines if stop_line not in line and line <> ''] return '\n'.join(lines) def find_whole_word(word): return re.compile(r'\b({0})\b'.format(word), flags=re.IGNORECASE).search def all_files(directory): for path, dirs, files in os.walk(directory): for f in files: yield os.path.join(path, f) def build_joined(repo): src_list = [] repo_url = 'https://github.com/' + repo[0] + '/' + repo[1] os.chdir(BASE_PATH) os.system('git clone {}'.format(repo_url)) # get all java source files src_files = [f for f in all_files(BASE_PATH + repo[1]) if f.endswith('.java')] for f in src_files: try: # read contents code = open(f, 'r').read() # https://github.com/c2nes/javalang tree = javalang.parse.parse(code) # create tuple with package + class name and code + tree + file path src_list.append((tree.package.name + '.' + tree.types[0].name, (code, tree, f))) except: pass return src_list def parse_git_log(repo_dir, src): # first the dictionary with all classes # and their commit count total = 0 p = Popen('git log --name-only --pretty=format:', shell=True, stdout=PIPE, cwd=repo_dir) (log, _) = p.communicate() log = log.strip('\n\x1e').split('\x1e') log = [r.strip().split('\n') for r in log] log = [r for r in log[0] if '.java' in r] log2 = [] for f1 in log: for f2 in src: if f2[1][2].find(f1) > -1: log2.append(f2[0]) cnt_dict = Counter(log2) for key, value in cnt_dict.items(): total += value cnt_dict['total'] = total # and then the list of commits as dictionaries p = Popen('git log --format="%s"' % GIT_LOG_FORMAT, shell=True, stdout=PIPE, cwd=repo_dir) (log, _) = p.communicate() log = log.strip('\n\x1e').split("\x1e") log = [row.strip().split("\x1f") for row in log] log = [dict(zip(GIT_COMMIT_FIELDS, row)) for row in log] # now get list of distinct committers committers = len(set([x['committer name'] for x in log])) cnt_dict['committers'] = committers return cnt_dict def count_inheritance(src): count = 0 for name, tup in src: if find_whole_word('extends')(tup[0]): count += 1 return count def count_references(src): names, tups = zip(*src) dict = {e : 0 for i, e in enumerate(names)} total = 0 for name in names: c_name = name[name.rfind('.'):] for tup in tups: if find_whole_word(c_name)(tup[0]): dict[name] += 1 total += 1 dict['total'] = total # sort by amount of references return {k: v for k, v in dict.iteritems() if v > 1} def count_lines(src): dict = {e : 0 for i, e in enumerate(src)} total = 0 for name, tup in src: dict[name] = 0 lines = tup[0].split('\n') for line in lines: if line != '\n': dict[name] += 1 total += 1 dict['total'] = total # sort by amount of lines return {k: v for k, v in dict.iteritems()} # constructors not counted def count_methods(src): dict = {e : 0 for i, e in enumerate(src)} total = 0 for name, tup in src: dict[name] = len(tup[1].types[0].methods) 
total += dict[name] dict['total'] = total # sort by amount of functions return {k: v for k, v in dict.iteritems()} def count_operators(src): dict = {key: 0 for key in OPERATORS} for name, tup in src: for op in OPERATORS: # if operator is in list, match it without anything preceding or following it # eg +, but not ++ or += if op in ['\+','\-','!','=']: # regex excludes followed_by (?!) and preceded_by (?<!) dict[op] += len(re.findall('(?!\-|\*|&|>|<|>>)(?<!\-|\+|=|\*|&|>|<)' + op, tup[0])) else: dict[op] += len(re.findall(op, tup[0])) # TODO : correct bug with regex for the '++' dict['\+'] -= dict['\+\+'] total = 0 distinct = 0 for key in dict: if dict[key] > 0: total += dict[key] distinct += 1 dict['total'] = total dict['distinct'] = distinct return dict def count_operands(src): dict = {key: 0 for key in OPERANDS} for name, tup in src: lines = tup[0].split('\n') for line in lines: for op in OPERANDS: if op in line: dict[op] += 1 + line.count(',') total = 0 distinct = 0 for key in dict: if dict[key] > 0: total += dict[key] distinct += 1 dict['total'] = total dict['distinct'] = distinct return dict def calc_cyclomatic_complexity(src): dict = {} total = 0 for name, tup in src: dict[name] = 1 dict[name] += len(re.findall('if|else|for|switch|while', tup[0])) total += dict[name] dict['total'] = total # sort by amount of complexity return {k: v for k, v in dict.iteritems()} def make_hbar_plot(dictionary, title, x_label, top=TOP): # show top classes vals = sorted(dictionary.values(), reverse=True)[:top] lbls = sorted(dictionary, key=dictionary.get, reverse=True)[:top] # make plot fig = plt.figure(figsize=(10, 7)) fig.suptitle(title, fontsize=15) ax = fig.add_subplot(111) # set ticks y_pos = np.arange(len(lbls)) + 0.5 ax.barh(y_pos, vals, align='center', alpha=0.4, color='lightblue') ax.set_yticks(y_pos) ax.set_yticklabels(lbls) ax.set_xlabel(x_label) plt.show() pass # Clustering def random_centroid_selector(total_clusters , clusters_plotted): random_list = [] for i in range(0, clusters_plotted): random_list.append(random.randint(0, total_clusters - 1)) return random_list def plot_cluster(kmeansdata, centroid_list, names, num_cluster, title): mlab_pca = mlabPCA(kmeansdata) cutoff = mlab_pca.fracs[1] users_2d = mlab_pca.project(kmeansdata, minfrac=cutoff) centroids_2d = mlab_pca.project(centroid_list, minfrac=cutoff) # make plot fig = plt.figure(figsize=(20, 15)) fig.suptitle(title, fontsize=15) ax = fig.add_subplot(111) plt.xlim([users_2d[:, 0].min() - 3, users_2d[:, 0].max() + 3]) plt.ylim([users_2d[:, 1].min() - 3, users_2d[:, 1].max() + 3]) random_list = random_centroid_selector(num_cluster, 50) for i, position in enumerate(centroids_2d): if i in random_list: plt.scatter(centroids_2d[i, 0], centroids_2d[i, 1], marker='o', c='red', s=100) for i, position in enumerate(users_2d): plt.scatter(users_2d[i, 0], users_2d[i, 1], marker='o', c='lightgreen') for label, x, y in zip(names, users_2d[:, 0], users_2d[:, 1]): ax.annotate( label, xy = (x, y), xytext=(-15, 15), textcoords = 'offset points', ha='right', va='bottom', bbox = dict(boxstyle='round,pad=0.5', fc='white', alpha=0.5), arrowprops = dict(arrowstyle = '->', connectionstyle='arc3,rad=0')) pass Explanation: Analyzer Analyzer is a python program that tries to gauge the evolvability and maintainability of java software. To achieve this, it tries to measure the complexity of the software under evalutation. A. What is software evolvability and maintainability? 
We define software evolvability as the ease with which a software system or a component can evolve while preserving its design as much as possible. In the case of OO class libraries, we restrict the preservation of the design to the preservation of the library interface. This is important when we consider that the evolution of a system that uses a library is directly influenced by the evolvability of the library. For instance, a system that uses version i of a library can easily be upgraded with version i+1 of the same library if the new version preserves the interface of the older one.
B. What is software complexity? As the Wikipedia article (https://en.wikipedia.org/wiki/Programming_complexity) on programming complexity states: "As the number of entities increases, the number of interactions between them would increase exponentially, and it would get to a point where it would be impossible to know and understand all of them. Similarly, higher levels of complexity in software increase the risk of unintentionally interfering with interactions and so increases the chance of introducing defects when making changes. In more extreme cases, it can make modifying the software virtually impossible."
C. How can we measure software complexity? To measure software complexity, we have to break this down into metrics. Therefore, we propose to use the metrics proposed by Sanjay Misra and Ferid Cafer in their paper 'ESTIMATING COMPLEXITY OF PROGRAMS IN PYTHON LANGUAGE'. To quote from this paper: "Complexity of a system depends on the following factors: 1. Complexity due to classes. Class is a basic unit of object oriented software development. All the functions are distributed in different classes. Further classes in the object-oriented code either are in inheritance hierarchy or distinctly distributed. Accordingly, the complexity of all the classes is due to classes in inheritance hierarchy and the complexity of distinct classes. 2. Complexity due to global factors: The second important factor, which is normally neglected in calculating complexity of object-oriented codes, is the complexity of global factors in main program. 3. Complexity due to coupling: Coupling is one of the important factors for increasing complexity of object-oriented code."
Within the Analyzer program, we try to measure complexity using the following metrics: Commit frequency. This can find the 'hotspots' in code where many changes were performed and which can be problem zones. This idea was proposed by Adam Tornhill in 'Your Code as a Crime Scene'. Distinct number of committers. This metric tells us how many people worked on the code, thereby increasing complexity. Class reference count. This metric measures the degree of coupling between classes by counting the references to them. Inheritance count. This is a measure of the coupling that exists because of inheritance. Lines of code. A rather crude metric that tries to measure the length of our software system. Number of methods. This is a measure of the complexity of the system. Halstead complexity measures: https://en.wikipedia.org/wiki/Halstead_complexity_measures Cyclomatic Complexity: https://en.wikipedia.org/wiki/Cyclomatic_complexity
D. Interpreting the metrics Now we try to interpret these measures by clustering, or grouping together, the results from analyzing 134 open-source Apache Java projects. To do that, we will use the k-means algorithm, a classic machine-learning algorithm originally developed in 1957.
Clustering is an unsupervised learning technique and we use clustering algorithms for exploring data. Using clustering allows us to group similar software projects together, and we can explore the trends in each cluster independently. End of explanation # first build list of source files joined = build_joined((USER, REPO)) Explanation: Analyzing one project End of explanation commit_dict = parse_git_log(BASE_PATH + REPO, joined) make_hbar_plot(commit_dict, 'Commit frequency', 'Commits', TOP) Explanation: 1. Commit frequency End of explanation print 'Distinct committers : ' + str(commit_dict['committers']) Explanation: 2. Distinct committers End of explanation reference_dict = count_references(joined) make_hbar_plot(reference_dict, 'Top 25 referenced classes', 'References', TOP) Explanation: 3. Class reference count End of explanation inheritance_count = count_inheritance(joined) print 'Inheritance count : ' + inheritance_count Explanation: 4. Inheritance count End of explanation lines_dict = count_lines(joined) make_hbar_plot(lines_dict, 'Largest 25 classes', 'Lines of code', TOP) Explanation: 5. Lines of code End of explanation methods_dict = count_methods(joined) make_hbar_plot(methods_dict, 'Top 25 classes in nr of methods', 'Number of methods', TOP) Explanation: 6. Number of methods End of explanation operators_dict = count_operators(joined) make_hbar_plot(operators_dict, 'Top 25 operators', 'Number of operators', TOP) Explanation: 7. Halstead complexity measures To measure the Halstead complexity, following metrics are taken into account : * the number of distinct operators (https://docs.oracle.com/javase/tutorial/java/nutsandbolts/opsummary.html) * the number of distinct operands * the total number of operators * the total number of operands a) Number of operators End of explanation operands_dict = count_operands(joined) make_hbar_plot(operands_dict, 'Top 25 operand types', 'Number of operands', TOP) Explanation: b) Number of operands End of explanation halstead_dict['PROGRAM_VOCABULARY'] = operators_dict['distinct'] + operands_dict['distinct'] halstead_dict['PROGRAM_LENGTH'] = round(operators_dict['total'] + operands_dict['total'], 0) halstead_dict['VOLUME'] = round(halstead_dict['PROGRAM_LENGTH'] * log(halstead_dict['PROGRAM_VOCABULARY'], 2), 0) halstead_dict['DIFFICULTY'] = (operators_dict['distinct'] / 2) * (operands_dict['total'] / operands_dict['distinct']) halstead_dict['EFFORT'] = round(halstead_dict['VOLUME'] * halstead_dict['DIFFICULTY'], 0) halstead_dict['TIME'] = round(halstead_dict['EFFORT'] / 18, 0) halstead_dict['BUGS'] = round(halstead_dict['VOLUME'] / 3000, 0) print halstead_dict Explanation: Complexity measures End of explanation cyclomatic_dict = calc_cyclomatic_complexity(joined) make_hbar_plot(cyclomatic_dict, 'Top 25 classes with cyclomatic complexity', 'Level of complexity', TOP) Explanation: 8. Cyclomatic complexity End of explanation # featurize all metrics def make_features(repo, dict): features = [] for key, value in dict.items(): features.append(int(value)) return features # iterate all repos and build # dictionary with all metrics def make_rows(repos): rows = [] try: for repo in repos: dict = {} joined = build_joined(repo) github_dict = parse_git_log(BASE_PATH + repo[1], joined) dict['commits'] = github_dict['total'] #dict['committers'] = github_dict['committers'] Uncomment this line for the next run. 
# Was added at the last minute dict['references'] = count_references(joined)['total'] dict['inheritance'] = count_inheritance(joined) dict['lines'] = count_lines(joined)['total'] dict['methods'] = count_methods(joined)['total'] operators_dict = count_operators(joined) operands_dict = count_operands(joined) dict['program_vocabulary'] = operators_dict['distinct'] + operands_dict['distinct'] dict['program_length'] = round(operators_dict['total'] + operands_dict['total'], 0) dict['volume'] = round(dict['program_length'] * log(dict['program_vocabulary'], 2), 0) dict['difficulty'] = (operators_dict['distinct'] / 2) * (operands_dict['total'] / operands_dict['distinct']) dict['effort'] = round(dict['volume'] * dict['difficulty'], 0) dict['time'] = round(dict['effort'] / 18, 0) dict['bugs'] = round(dict['volume'] / 3000, 0) dict['cyclomatic'] = calc_cyclomatic_complexity(joined)['total'] rows.append(make_features(repo, dict)) except: pass return rows def cluster_repos(arr, nr_clusters): kmeans = KMeans(n_clusters=nr_clusters) kmeans.fit(arr) centroids = kmeans.cluster_centers_ labels = kmeans.labels_ return (centroids, labels) Explanation: Analyzing Apache Java projects End of explanation repositories = [('apache', x) for x in APACHE_PROJECTS] Explanation: Construct model with Apache projects End of explanation rows = make_rows(repositories[:5]) rows.extend(make_rows(repositories[5:10])) rows.extend(make_rows(repositories[10:15])) rows.extend(make_rows(repositories[15:20])) rows.extend(make_rows(repositories[20:25])) rows.extend(make_rows(repositories[25:30])) rows.extend(make_rows(repositories[30:35])) rows.extend(make_rows(repositories[35:40])) rows.extend(make_rows(repositories[40:45])) rows.extend(make_rows(repositories[45:50])) rows.extend(make_rows(repositories[50:55])) rows.extend(make_rows(repositories[55:60])) rows.extend(make_rows(repositories[60:65])) rows.extend(make_rows(repositories[65:70])) rows.extend(make_rows(repositories[70:75])) rows.extend(make_rows(repositories[75:80])) rows.extend(make_rows(repositories[80:85])) rows.extend(make_rows(repositories[85:90])) rows.extend(make_rows(repositories[90:95])) rows.extend(make_rows(repositories[95:100])) rows.extend(make_rows(repositories[100:105])) rows.extend(make_rows(repositories[105:110])) rows.extend(make_rows(repositories[110:115])) rows.extend(make_rows(repositories[115:120])) rows.extend(make_rows(repositories[120:125])) rows.extend(make_rows(repositories[125:130])) rows.extend(make_rows(repositories[130:133])) rows.extend(make_rows(repositories[133:134])) print rows Explanation: We break the projects down in batches of five to make the analysis manageable End of explanation # TWO clusters NR_CLUSTERS = 2 arr = np.array(rows) tup = cluster_repos(arr, NR_CLUSTERS) centroids = tup[0] plot_cluster(arr, centroids, APACHE_PROJECTS, NR_CLUSTERS, str(NR_CLUSTERS) + ' Clusters') # THREE clusters NR_CLUSTERS = 3 arr = np.array(rows) tup = cluster_repos(arr, NR_CLUSTERS) centroids = tup[0] plot_cluster(arr, centroids, APACHE_PROJECTS, NR_CLUSTERS, str(NR_CLUSTERS) + ' Clusters') # FOUR clusters NR_CLUSTERS = 4 arr = np.array(rows) tup = cluster_repos(arr, NR_CLUSTERS) centroids = tup[0] plot_cluster(arr, centroids, APACHE_PROJECTS, NR_CLUSTERS, str(NR_CLUSTERS) + ' Clusters') Explanation: Clustering Apache Java projects End of explanation names = [x[1] for x in repositories] print names.index('synapse') print names.index('tomcat') print names.index('groovy') print names.index('hama') Explanation: Clustering results The clustering shows us 
that with four clusters, we can cover the whole graph. This gives us four clearly defined areas in which all projects can be mapped. The task is now to discover which parameters have the largest importance in this clustering. We can do this by examining the features of the four projects closest to the centroids and comparing them.
End of explanation
headers = ['Repo', 'Com', 'Ref', 'Inh', 'Line', 'Meth', 'Voc', \
           'Len', 'Vol', 'Diff', 'Eff', 'Time', 'Bug', 'Cycl']
print tabulate([[names[118]] + [x for x in rows[118]], [names[123]] + [x for x in rows[123]], \
                [names[82]] + [x for x in rows[82]], [names[84]] + [x for x in rows[84]]], headers=headers)
Explanation: Tabulating groovy and synapse
End of explanation
# FOUR clusters
NR_CLUSTERS = 4
arr = np.array(rows)
tup = cluster_repos(arr, NR_CLUSTERS)
labels = tup[1]
Explanation: Construct a prediction model with the Apache projects
Labeling all projects using four clusters
End of explanation
clf = svm.SVC(gamma=0.001, C=100.)
clf.fit(rows, labels)
Explanation: Construct a Support Vector Classification model
End of explanation
print labels
print clf.predict(rows[3])
print clf.predict(rows[34])
Explanation: Test it
End of explanation
#repositories = [('qos-ch', 'slf4j'), ('mockito', 'mockito'), ('elastic', 'elasticsearch')]
repositories = [('JetBrains', 'kotlin')]
rows = make_rows(repositories)
print clf.predict(rows[0])
print tabulate([['Kotlin'] + [x for x in rows[0]]], headers=headers)
Explanation: Analyze JetBrains kotlin project
End of explanation
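One caveat before reusing this model: the metrics collected above live on very different scales (total lines of code dwarfs the inheritance count), and both k-means and SVC are distance based, so the largest-valued features dominate the clustering and the classification. Below is a minimal sketch, not part of the original workflow, of how the same pipeline could be re-run on standardized features; it reuses the rows matrix built above and assumes scikit-learn's StandardScaler is available.
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn import svm

# Standardize every metric to zero mean and unit variance so that
# 'lines of code' does not drown out 'inheritance count'.
scaler = StandardScaler()
scaled_rows = scaler.fit_transform(rows)

# Re-run the clustering on the scaled feature matrix.
scaled_labels = KMeans(n_clusters=4).fit_predict(scaled_rows)

# Fit the classifier on the same scaled representation.
scaled_clf = svm.SVC(gamma=0.001, C=100.)
scaled_clf.fit(scaled_rows, scaled_labels)

# New projects must be transformed with the same scaler before prediction,
# e.g. scaled_clf.predict(scaler.transform(rows[0:1]))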
15,970
Given the following text description, write Python code to implement the functionality described below step by step Description: File (Revision) upload example To run this example, you'll need the Ovation Python API. Install with pip Step1: Connection You use a connection.Session to interact with the Ovaiton REST API. Use the connect method to create an authenticated Session. Step2: Upload a file (revision) The Python API wraps the Ovation REST API, using the awesome requests library. The Session provides some convenient additions to make working with Ovation's API a little easier. For example, it automatically sets the content type to JSON and handles URL creation from path and host. The example below shows retrieving a project by ID, adding a new File and uploading a new Revision (a version) of that file using the ovation.revisions.upload_revision convenience method. Step3: You can upload an entire folder or individual files to the Project. First, let's upload a folder Step4: Alternatively, we can create a folder and upload individual files Step5: For advanced users, we can create a File and then upload a Revision. Step6: upload_revision is also how you can upload a new Revision to an existing file. Download a revision The Ovation API generates a temporary authenticated URL for downloading a Revision. This example uses the ovation.revisions.download_revision function to get this authenticated URL and then to download it to the local file system, returning the downloaded file's path
Python Code: import ovation.core as core
from ovation.session import connect
from ovation.upload import upload_revision, upload_file, upload_folder
from ovation.download import download_revision

from pprint import pprint
from getpass import getpass
from tqdm import tqdm_notebook as tqdm
Explanation: File (Revision) upload example
To run this example, you'll need the Ovation Python API. Install with pip:
pip install ovation
End of explanation
session = connect(input('Ovation email: '), org=input('Organization (enter for default): ') or 0)
Explanation: Connection
You use a connection.Session to interact with the Ovation REST API. Use the connect method to create an authenticated Session.
End of explanation
project_id = input('Project UUID: ')
# Get a project by ID
proj = session.get(session.path('project', project_id))
Explanation: Upload a file (revision)
The Python API wraps the Ovation REST API, using the awesome requests library. The Session provides some convenient additions to make working with Ovation's API a little easier. For example, it automatically sets the content type to JSON and handles URL creation from path and host.
The example below shows retrieving a project by ID, adding a new File and uploading a new Revision (a version) of that file using the ovation.revisions.upload_revision convenience method.
End of explanation
folder = upload_folder(session, proj, '/path/to/project_fastq_folder')
Explanation: You can upload an entire folder or individual files to the Project. First, let's upload a folder:
End of explanation
import glob
folder = core.create_folder(session, proj, 'FASTQ')
# glob.glob (not os.glob) lists the local files; the upload_file argument order
# (session, destination folder, local path) is assumed from the imports above.
for f in glob.glob('/path/to/project_fastq_folder/*.fastq'):
    upload_file(session, folder, f)
Explanation: Alternatively, we can create a folder and upload individual files:
End of explanation
# Create a new File (project_url is assumed to be the project's API path)
project_url = session.path('project', project_id)
r = session.post(project_url,
                 data={'entities': [{'type': 'File',
                                     'attributes': {'name': 'example.vcf'}}]})
file = r[0]
pprint(file)

# Create a new Revision (version) of the new File by uploading a local file
revision = upload_revision(session, file, '/Users/barry/Desktop/example.vcf')
pprint(revision)
Explanation: For advanced users, we can create a File and then upload a Revision.
End of explanation
file_path = download_revision(session, revision._id)
Explanation: upload_revision is also how you can upload a new Revision to an existing file.
Download a revision
The Ovation API generates a temporary authenticated URL for downloading a Revision. This example uses the ovation.revisions.download_revision function to get this authenticated URL and then to download it to the local file system, returning the downloaded file's path:
End of explanation
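As a final, optional sanity check (not part of the original example), you can confirm that the downloaded Revision actually landed on disk. The sketch below only uses the file_path returned by download_revision above and the Python standard library.
import os

# Report the downloaded file's location and size, or flag a failed download.
if os.path.exists(file_path):
    print('Downloaded {} ({:.2f} MB)'.format(file_path, os.path.getsize(file_path) / 1e6))
else:
    print('Download failed: {} not found'.format(file_path))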
15,971
Given the following text description, write Python code to implement the functionality described below step by step Description: Background Example notebook for the visualization of metagenomic data using MinHash signatures calculated with sourmash compute, classified with sourmash gather, and compared with sourmash compare. Signatures were computed with --scaled 10000 and -k 31 python sourmash compute --scaled 10000 -k 31 < filename > - Signatures used in the example below can be found in the data directory Taxonomic classification was performed using sourmash gather and the sourmash genbank sbt database. More databases are available! python sourmash gather -k 31 genbank-k31.sbt.json < filename > Metagenomes were compared using sourmash compare python sourmash compare -k 31 < filename > 1) Import data visualization tools Step1: 2) Convert sourmash output (i.e. csv) to dataframe for visualization Step2: Terms intersect_bp - base pairs shared by the query and the match f_orig_query - fraction of the query f_match - fraction of the match found f_unique_to_query - fraction of the query that is unique to the match name - name of the match filename - search database used md5 - unique identifier for data used to generate the signature 3) Compare taxa across metagenomes Step3: 3) Compare metagenomes with sourmash compare Step4: 4) Visualize metagenome comparisons
Python Code: #Import matplotlib %matplotlib inline #Import pandas, seaborn, and ipython display import pandas as pd import seaborn as sns from IPython.display import display, HTML Explanation: Background Example notebook for the visualiztion of metagenomic data using MinHash signatures calculated with sourmash compute, classified with sourmash gather, and compared with sourmash compare. Signatures were computed with a --scaled 10000 and -k 31 python sourmash compute --scaled 10000 -k 31 &lt; filename &gt; - Signatures used in the example below can be found in the data directory Taxonomic classification was performed using sourmash gather and the sourmash genbank sbt database. More databases are available! python sourmash gather -k 31 genbank-k31.sbt.json &lt; filename &gt; Metagenomes were compared using sourmash compare python sourmash compare -k 31 &lt; filename &gt; 1) Import data visualiztion tools End of explanation #Read in taxonmic classification results from sourmash with pandas #Dataframe name, read in csv file mg_1_table = pd.read_csv("../data/mg_1") mg_2_table = pd.read_csv("../data/mg_2") mg_3_table = pd.read_csv("../data/mg_3") mg_4_table = pd.read_csv("../data/mg_4") mg_5_table = pd.read_csv("../data/mg_5") mg_6_table = pd.read_csv("../data/mg_6") mg_7_table = pd.read_csv("../data/mg_7") mg_8_table = pd.read_csv("../data/mg_8") #Display taxonomic classification results for 8 metagenomes #Display data frames as tabels with display() #Remove dataframe by commenting out using the "#" symbol #Display all dataframes display(mg_1_table) display(mg_2_table) display(mg_3_table) display(mg_4_table) display(mg_5_table) display(mg_6_table) display(mg_7_table) display(mg_8_table) Explanation: 2) Convert sourmash output (i.e. csv) to dataframe from visualization End of explanation #Combined output into a single file named all_gather_results.csv !head -1 ../data/mg_1 \ > all_gather_results.csv; tail -n +2 -q ../data/mg_{1..8} >> all_gather_results.csv sns.set(style="darkgrid") #Ploting the frequency of detection of each match across the 8 metagenomes dx = pd.read_csv('all_gather_results.csv', header = 0) dx['name'].value_counts().plot(kind="barh", fontsize=16, figsize=(12,12)) #plt.savefig('<file name>.pdf', bbox_inches='tight') #Ploting average of the fraction of match detected across all metagenomes newdx = dx[['f_match', 'name']].copy() newdx newdx_byname = newdx.set_index('name') newdx_byname.groupby(level=0).mean().plot(kind="barh", fontsize=16, figsize=(12,12)) #plt.savefig('<insert name>.pdf', bbox_inches='tight') Explanation: Terms intersect_bp - baspairs in shared by the query and the match f_orig_query - fraction of the query f_match - fraction of the match found f_unique_to_query - fraction of the query that is unique to the match name - name of the match filename - search database used md5 - unique identifier for data used to generate the signature 3) Compare taxa across metagenomes End of explanation #Calculate jaccard distance using sourmash compare and generate results in a csv named mg_compare #Path to sourmash install, "compare", path to signatures, output format, output filename !~/dev/sourmash/sourmash compare ../data/mg_*sig --csv mg_compare Explanation: 3) Compare metagenomes with sourmash compare End of explanation #Generate similarity matrix with hierchical clustering import seaborn as sns import matplotlib.pyplot as plt sns.set(context="paper", font="monospace") sns.set(font_scale=1.4) #Define clustermap color scheme cmap = sns.cubehelix_palette(8, start=2, rot=0, 
dark=0, light=.95, as_cmap=True)

# Load the dataset
df = pd.read_csv("mg_compare", header=0)

# Draw the clustermap using seaborn
o = sns.clustermap(df, vmax=1, vmin=0, square=True, linewidths=.005, cmap=cmap)

#Bold labels and rotate
plt.setp(o.ax_heatmap.get_yticklabels(), rotation=0, fontweight="bold")
plt.setp(o.ax_heatmap.get_xticklabels(), rotation=90, fontweight="bold")

#Set context with seaborn
sns.set(context="paper", font="monospace")

#Save figure
#plt.savefig(<filename>.pdf)
Explanation: 4) Visualize metagenome comparisons
End of explanation
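One optional refinement, not in the original notebook, is to label the clustermap axes with the signature names and save the figure programmatically. The sketch below assumes mg_compare was written by sourmash compare --csv as above, so that its column names are the signature names and its rows appear in the same order.
# Re-read the comparison matrix and reuse its column names as row labels
df = pd.read_csv("mg_compare", header=0)
df.index = df.columns

# Redraw with labelled rows/columns and write the figure to disk
labelled = sns.clustermap(df, vmax=1, vmin=0, square=True, linewidths=.005,
                          cmap=cmap, xticklabels=True, yticklabels=True)
labelled.savefig("mg_clustermap.pdf")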
15,972
Given the following text description, write Python code to implement the functionality described below step by step Description: ML Workbench Sample --- Image Classification <br><br> Introduction of ML Workbench ML Workbench provides an easy command line interface for machine learning life cycle, which involves four stages Step1: Setup Step2: Next cell, we will create a dataset representing our training data. Step3: Analyze Analysis step includes computing numeric stats (i.e. min/max), categorical classes, text vocabulary and frequency, etc. Run "%%ml analyze --help" for usage. The analysis results will be used for transforming raw data into numeric features that the model can deal with. For example, to convert categorical value to a one-hot vector ("Monday" becomes [1, 0, 0, 0, 0, 0, 0]). The data may be very large, so sometimes a cloud run is needed by adding --cloud flag. Cloud run will start BigQuery jobs, which may incur some costs. In this case, analysis step only collects unique labels. Note that we run analysis only on training data, but not evaluation data. Step4: Transform With analysis results we can transform raw data into numeric features. This needs to be done for both training and eval data. The data may be very large, so sometimes a cloud pipeline is needed by adding --cloud. Cloud run is implemented by DataFlow jobs, so it may incur some costs. In this case, transform is required. It downloads image, resizes it, and generate embeddings from each image by running a pretrained TensorFlow graph. Note that it creates two jobs --- one for training data and one for eval data. Step5: After transformation is done, create a new dataset referencing the training data. Step6: Train Training starts from transformed data. If training work is too much to do on the local VM, --cloud is recommended so training happens in cloud, in a distributed way. Run %%ml train --help for details. Training in cloud is implemented with Cloud ML Engine. It may incur some costs. Step7: After training is complete, you should see model files like the following. Step8: Batch Prediction Batch prediction performs prediction in a batched fashion. The data can be large, and is specified by files. Note that, we use the "evaluation_model" which sits in "evaluation_model_dir". There are two models created in training. One is a regular model under "model" dir, the other is "evaluation_model". The difference is the regular one takes prediction data without target and the evaluation model takes data with target and output the target as is. So evaluation model is good for evaluating the quality of the model because the targets and predicted values are included in output. Step9: Prediction results are in JSON format. We can load the results into BigQuery table and performa analysis. Step10: Check wrong predictions. Step11: Online Prediction and Build Your Own Prediction Client Please see "Flower Classification (small dataset experience)" notebook for how to deploy the trained model and build your own prediction client. Cleanup
Python Code: # ML Workbench magics (%%ml) are under google.datalab.contrib namespace. It is not enabled by default and you need to import it before use. import google.datalab.contrib.mlworkbench.commands Explanation: ML Workbench Sample --- Image Classification <br><br> Introduction of ML Workbench ML Workbench provides an easy command line interface for machine learning life cycle, which involves four stages: * analyze: gather stats and metadata of the training data, such as numeric stats, vocabularies, etc. Analysis results are used in transforming raw data into numeric features, which can be consumed by training directly. * transform: explicitly transform raw data into numeric features which can be used for training. * train: training model using transformed data. * predict/batch_predict: given a few instances of prediction data, make predictions instantly / with large number of instances of prediction data, make predictions in a batched fassion. There are "local" and "cloud" run mode for each stage. "cloud" run mode is recommended if your data is big. <br><br> ML Workbench supports numeric, categorical, text, image training data. For each type, there are a set of "transforms" to choose from. The "transforms" indicate how to convert the data into numeric features. For images, it is converted to fixed size vectors representing high level features. <br><br> Transfer learning using Inception Package - Cloud Run Experience With Large Data ML Workbench supports image transforms (image to vec) with transfer learning. This notebook continues the codifies the capabilities discussed in this blog post. In a nutshell, it uses the pre-trained inception model as a starting point and then uses transfer learning to train it further on additional, customer-specific images. For explanation, simple flower images are used. Compared to training from scratch, the time and costs are drastically reduced. This notebook does preprocessing, training and prediction by calling CloudML API instead of running them "locally" in the Datalab container. It uses full data. End of explanation # Create a temp GCS bucket. If the bucket already exists and you don't have permissions, rename it. !gsutil mb gs://flower-datalab-demo-bucket-large-data Explanation: Setup End of explanation %%ml dataset create name: flower_data_full format: csv train: gs://cloud-datalab/sampledata/flower/train3000.csv eval: gs://cloud-datalab/sampledata/flower/eval670.csv schema: - name: image_url type: STRING - name: label type: STRING Explanation: Next cell, we will create a dataset representing our training data. End of explanation %%ml analyze --cloud output: gs://flower-datalab-demo-bucket-large-data/analysis data: flower_data_full features: image_url: transform: image_to_vec label: transform: target # Check analysis results !gsutil list gs://flower-datalab-demo-bucket-large-data/analysis Explanation: Analyze Analysis step includes computing numeric stats (i.e. min/max), categorical classes, text vocabulary and frequency, etc. Run "%%ml analyze --help" for usage. The analysis results will be used for transforming raw data into numeric features that the model can deal with. For example, to convert categorical value to a one-hot vector ("Monday" becomes [1, 0, 0, 0, 0, 0, 0]). The data may be very large, so sometimes a cloud run is needed by adding --cloud flag. Cloud run will start BigQuery jobs, which may incur some costs. In this case, analysis step only collects unique labels. 
Note that we run analysis only on training data, but not evaluation data. End of explanation # Remove previous results !gsutil -m rm gs://flower-datalab-demo-bucket-large-data/transform %%ml transform --cloud analysis: gs://flower-datalab-demo-bucket-large-data/analysis output: gs://flower-datalab-demo-bucket-large-data/transform data: flower_data_full Explanation: Transform With analysis results we can transform raw data into numeric features. This needs to be done for both training and eval data. The data may be very large, so sometimes a cloud pipeline is needed by adding --cloud. Cloud run is implemented by DataFlow jobs, so it may incur some costs. In this case, transform is required. It downloads image, resizes it, and generate embeddings from each image by running a pretrained TensorFlow graph. Note that it creates two jobs --- one for training data and one for eval data. End of explanation %%ml dataset create name: flower_data_full_transformed format: transformed train: gs://flower-datalab-demo-bucket-large-data/transform/train-* eval: gs://flower-datalab-demo-bucket-large-data/transform/eval-* Explanation: After transformation is done, create a new dataset referencing the training data. End of explanation # Remove previous training results. !gsutil -m rm -r gs://flower-datalab-demo-bucket-large-data/train %%ml train --cloud output: gs://flower-datalab-demo-bucket-large-data/train analysis: gs://flower-datalab-demo-bucket-large-data/analysis data: flower_data_full_transformed model_args: model: dnn_classification hidden-layer-size1: 100 top-n: 0 cloud_config: region: us-central1 scale_tier: BASIC Explanation: Train Training starts from transformed data. If training work is too much to do on the local VM, --cloud is recommended so training happens in cloud, in a distributed way. Run %%ml train --help for details. Training in cloud is implemented with Cloud ML Engine. It may incur some costs. End of explanation # List the model files !gsutil list gs://flower-datalab-demo-bucket-large-data/train/model Explanation: After training is complete, you should see model files like the following. End of explanation %%ml batch_predict --cloud model: gs://flower-datalab-demo-bucket-large-data/train/evaluation_model output: gs://flower-datalab-demo-bucket-large-data/evaluation cloud_config: region: us-central1 data: csv: gs://cloud-datalab/sampledata/flower/eval670.csv # after prediction is done, check the output !gsutil list -l -h gs://flower-datalab-demo-bucket-large-data/evaluation # Take a look at the file. !gsutil cat -r -500 gs://flower-datalab-demo-bucket-large-data/evaluation/prediction.results-00000-of-00006 Explanation: Batch Prediction Batch prediction performs prediction in a batched fashion. The data can be large, and is specified by files. Note that, we use the "evaluation_model" which sits in "evaluation_model_dir". There are two models created in training. One is a regular model under "model" dir, the other is "evaluation_model". The difference is the regular one takes prediction data without target and the evaluation model takes data with target and output the target as is. So evaluation model is good for evaluating the quality of the model because the targets and predicted values are included in output. 
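As an optional extra check, a single output shard can also be inspected locally with pandas, since each line of the prediction output is one JSON record. The snippet below is only a sketch; it copies the shard shown above to a local path of your choosing before parsing it.
# Copy one shard locally, then parse it as newline-delimited JSON.
!gsutil cp gs://flower-datalab-demo-bucket-large-data/evaluation/prediction.results-00000-of-00006 /tmp/predictions.json
import pandas as pd
preds = pd.read_json('/tmp/predictions.json', lines=True)
preds.head()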
End of explanation import google.datalab.bigquery as bq schema = [ {'name': 'predicted', 'type': 'STRING'}, {'name': 'target', 'type': 'STRING'}, {'name': 'daisy', 'type': 'FLOAT'}, {'name': 'dandelion', 'type': 'FLOAT'}, {'name': 'roses', 'type': 'FLOAT'}, {'name': 'sunflowers', 'type': 'FLOAT'}, {'name': 'tulips', 'type': 'FLOAT'}, ] bq.Dataset('image_classification_results').create() t = bq.Table('image_classification_results.flower').create(schema = schema, overwrite = True) t.load('gs://flower-datalab-demo-bucket-large-data/evaluation/prediction.results-*', mode='overwrite', source_format='json') Explanation: Prediction results are in JSON format. We can load the results into BigQuery table and performa analysis. End of explanation %%bq query SELECT * FROM image_classification_results.flower WHERE predicted != target %%ml evaluate confusion_matrix --plot bigquery: image_classification_results.flower %%ml evaluate accuracy bigquery: image_classification_results.flower Explanation: Check wrong predictions. End of explanation !gsutil -m rm -rf gs://flower-datalab-demo-bucket-large-data Explanation: Online Prediction and Build Your Own Prediction Client Please see "Flower Classification (small dataset experience)" notebook for how to deploy the trained model and build your own prediction client. Cleanup End of explanation
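If you prefer scikit-learn's metrics to the %%ml evaluate magics above, a per-class report can be computed from the same prediction table. This is an optional sketch: it assumes the google.datalab.bigquery Query API returns a pandas DataFrame via execute().result().to_dataframe(), and that the table columns follow the schema defined earlier (predicted and target).
import google.datalab.bigquery as bq
from sklearn.metrics import classification_report, confusion_matrix

# Pull predicted/target pairs into a DataFrame (API chain assumed as noted above).
results = bq.Query('SELECT predicted, target FROM image_classification_results.flower').execute().result().to_dataframe()

# Per-class precision/recall/F1 plus the raw confusion matrix.
print(classification_report(results['target'], results['predicted']))
print(confusion_matrix(results['target'], results['predicted']))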
15,973
Given the following text description, write Python code to implement the functionality described below step by step Description: Tech - calcul matriciel avec numpy numpy est la librairie incontournable pour faire des calculs en Python. Ces fonctionnalités sont disponibles dans tous les langages et utilisent les optimisations processeurs. Il est hautement improbable d'écrire un code aussi rapide sans l'utiliser. numpy implémente ce qu'on appelle les opérations matricielles basiques ou plus communément appelées BLAS. Quelque soit le langage, l'implémentation est réalisée en langage bas niveau (C, fortran, assembleur) et a été peaufinée depuis 50 ans au gré des améliorations matérielles. Step1: Enoncé La librairie numpy propose principalement deux types Step2: La maîtrise du nan nan est une convention pour désigner une valeur manquante. Elle réagit de façon un peut particulière. Elle n'est égale à aucune autre y compris elle-même. Step3: Il faut donc utiliser une fonction spéciale isnan. Step4: La maîtrise des types Un tableau est défini par ses dimensions et le type unique des éléments qu'il contient. Step5: C'est le même type pour toute la matrice. Il existe plusieurs type d'entiers et des réels pour des questions de performances. Step6: Un changement de type et le calcul est plus long. La maîtrise du broadcasting Le broadcasting signifie que certaines opérations ont un sens même si les dimensions des tableaux ne sont pas tout à fait égales. Step7: La maîtrise des index Step8: La maîtrise des fonctions On peut regrouper les opérations que numpy propose en différents thèmes. Mais avant il L'initialisation Step9: Q2 Step10: Q3
Python Code: from jyquickhelper import add_notebook_menu add_notebook_menu() %matplotlib inline Explanation: Tech - calcul matriciel avec numpy numpy est la librairie incontournable pour faire des calculs en Python. Ces fonctionnalités sont disponibles dans tous les langages et utilisent les optimisations processeurs. Il est hautement improbable d'écrire un code aussi rapide sans l'utiliser. numpy implémente ce qu'on appelle les opérations matricielles basiques ou plus communément appelées BLAS. Quelque soit le langage, l'implémentation est réalisée en langage bas niveau (C, fortran, assembleur) et a été peaufinée depuis 50 ans au gré des améliorations matérielles. End of explanation import numpy mat = numpy.array([[0, 5, 6, -3], [6, 7, -4, 8], [-5, 8, -4, 9]]) mat mat[:2], mat[:, :2], mat[0, 3], mat[0:2, 0:2] Explanation: Enoncé La librairie numpy propose principalement deux types : array et matrix. Pour faire simple, prenez toujours le premier. Ca évite les erreurs. Les array sont des tableaux à plusieurs dimensions. La maîtrise du slice Le slice est l'opérateur : (décrit sur la page indexing). Il permet de récupérer une ligne, une colonne, un intervalle de valeurs. End of explanation numpy.nan == numpy.nan numpy.nan == 4 Explanation: La maîtrise du nan nan est une convention pour désigner une valeur manquante. Elle réagit de façon un peut particulière. Elle n'est égale à aucune autre y compris elle-même. End of explanation numpy.isnan(numpy.nan) Explanation: Il faut donc utiliser une fonction spéciale isnan. End of explanation matint = numpy.array([0, 1, 2]) matint.shape, matint.dtype Explanation: La maîtrise des types Un tableau est défini par ses dimensions et le type unique des éléments qu'il contient. End of explanation %timeit matint * matint matintf = matint.astype(numpy.float64) matintf.shape, matintf.dtype %timeit matintf * matintf %timeit matintf * matint Explanation: C'est le même type pour toute la matrice. Il existe plusieurs type d'entiers et des réels pour des questions de performances. End of explanation mat mat + 1000 mat + numpy.array([0, 10, 100, 1000]) mat + numpy.array([[0, 10, 100]]).T Explanation: Un changement de type et le calcul est plus long. La maîtrise du broadcasting Le broadcasting signifie que certaines opérations ont un sens même si les dimensions des tableaux ne sont pas tout à fait égales. End of explanation mat = numpy.array([[0, 5, 6, -3], [6, 7, -4, 8], [-5, 8, -4, 9]]) mat mat == 5 mat == numpy.array([[0, -4, 9]]).T (mat == numpy.array([[0, -4, 9]]).T).astype(numpy.int64) mat * (mat == numpy.array([[0, -4, 9]]).T).astype(numpy.int64) Explanation: La maîtrise des index End of explanation import numpy O = numpy.array([[15., 20., 13.], [4., 9., 5.]]) O def chi_square(O): N = numpy.sum(O) pis = numpy.sum(O, axis=1, keepdims=True) / N pjs = numpy.sum(O, axis=0, keepdims=True) / N pispjs = pis @ pjs chi = pispjs * ((O / N - pispjs) / pispjs) ** 2 return numpy.sum(chi) * N chi_square(O) Explanation: La maîtrise des fonctions On peut regrouper les opérations que numpy propose en différents thèmes. Mais avant il L'initialisation : array, empty, zeros, ones, full, identity, rand, randn, randint Les opérations basiques : +, -, *, /, @, dot Les transformations : transpose, hstack, vstack, reshape, squeeze, expend_dims Les opérations de réduction : minimum, maximum, argmin, argmax, sum, mean, prod, var, std Tout le reste comme la génération de matrices aléatoires, le calcul des valeurs, vecteurs propres, des fonctions commme take, ... 
Q1 : calculer la valeur du $\chi_2$ d'un tableau de contingence La formule est là. Et il faut le faire sans boucle. Vous pouvez comparer avec la fonction chisquare de la librairie scipy qui est une extension de numpy. $$\chi_2 = N \sum_{i,j} p_{i.} p_{.j} \left( \frac{\frac{O_{ij}}{N} - p_{i.} p_{.j}}{p_{i.} p_{.j}}\right)^2$$ Q2 : calculer une distribution un peu particulière La fonction histogram permet de calculer la distribution empirique de variables. Pour cette question, on tire un vecteur aléatoire de taille 10 avec la fonction rand, on les trie par ordre croissant, on recommence plein de fois, on calcule la distribution du plus grand nombre, du second plus grand nombre, ..., du plus petit nombre. Q3 : on veut créer une matrice identité un million par un million Vous pouvez essayer sans réfléchir ou lire cette page d'abord : csr_matrix. Q4 : vous devez créer l'application StopCovid Il existe une machine qui reçoit la position de 3 millions de téléphones portable. On veut identifier les cas contacts (rapidement). Réponses Q1 : calculer la valeur du $\chi_2$ d'un tableau de contingence La formule est là. Et il faut le faire sans boucle. Vous pouvez comparer avec la fonction chisquare de la librairie scipy qui est une extension de numpy. $$\chi_2 = N \sum_{i,j} p_{i.} p_{.j} \left( \frac{\frac{O_{ij}}{N} - p_{i.} p_{.j}}{p_{i.} p_{.j}}\right)^2$$ End of explanation rnd = numpy.random.rand(10) rnd numpy.sort(rnd) def tirage(n): rnd = numpy.random.rand(n) trie = numpy.sort(rnd) return trie[-1] tirage(10) def plusieurs_tirages(N, n): rnd = numpy.random.rand(N, n) return numpy.max(rnd, axis=1) plusieurs_tirages(5, 10) t = plusieurs_tirages(5000, 10) hist = numpy.histogram(t) hist import matplotlib.pyplot as plt plt.plot(hist[1][1:], hist[0] / hist[0].sum()); Explanation: Q2 : calculer une distribution un peu particulière La fonction histogram permet de calculer la distribution empirique de variables. Pour cette question, on tire un vecteur aléatoire de taille 10 avec la fonction rand, on les trie par ordre croissant, on recommence plein de fois, on calcule la distribution du plus grand nombre, du second plus grand nombre, ..., du plus petit nombre. End of explanation import numpy from scipy.sparse import csr_matrix ide = csr_matrix((1000000, 1000000), dtype=numpy.float64) ide.setdiag(1.) Explanation: Q3 : on veut créer une matrice identité un million par un million Vous pouvez essayer sans réfléchir ou lire cette page d'abord : csr_matrix. $(10^6)^2=10^{12}$>10 Go, bref ça ne tient pas en mémoire sauf si on a une grosse machine. Les matrices creuses (ou sparses en anglais), sont adéquates pour représenter des matrices dont la grande majorité des coefficients sont nuls car ceux-ci ne sont pas stockés. Concrètement, la matrice enregistre uniquement les coordonnées des coefficients et les valeurs non nuls. End of explanation
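Question Q4 (finding contact pairs among millions of phone positions) is left without a worked answer above. One possible approach, offered here as a suggestion rather than the author's solution, is a spatial index: scipy's cKDTree can return every pair of points closer than a given radius without comparing all pairs. The sketch below uses random 2D positions and an arbitrary 2-metre contact radius purely for illustration.
import numpy
from scipy.spatial import cKDTree

# Simulated positions for 3 million phones (x, y coordinates in metres).
positions = numpy.random.rand(3000000, 2) * 100000

# Build a KD-tree and ask for every pair of phones closer than 2 metres.
tree = cKDTree(positions)
contact_pairs = tree.query_pairs(r=2.0)

print("number of contact pairs:", len(contact_pairs))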
15,974
Given the following text description, write Python code to implement the functionality described below step by step Description: Inspect Graph Edges Your graph edges are represented by a list of tuples of length 3. The first two elements are the node names linked by the edge. The third is the dictionary of edge attributes. Step1: Nodes Similarly, your nodes are represented by a list of tuples of length 2. The first element is the node ID, followed by the dictionary of node attributes. Step2: Visualize Manipulate Colors and Layout Positions Step3: Colors Step4: Solving the Chinese Postman Problem is quite simple conceptually Step6: CPP Step 2 Step8: Step 2.3 Step9: For a visual prop, the fully connected graph of odd degree node pairs is plotted below. Note that you preserve the X, Y coordinates of each node, but the edges do not necessarily represent actual trails. For example, two nodes could be connected by a single edge in this graph, but the shortest path between them could be 5 hops through even degree nodes (not shown here). Step10: Step 2.4 Step11: The matching output (odd_matching_dupes) is a dictionary. Although there are 36 edges in this matching, you only want 18. Each edge-pair occurs twice (once with node 1 as the key and a second time with node 2 as the key of the dictionary). Step12: To illustrate how this fits in with the original graph, you plot the same min weight pairs (blue lines), but over the trail map (faded) instead of the complete graph. Again, note that the blue lines are the bushwhacking route (as the crow flies edges, not actual trails). You still have a little bit of work to do to find the edges that comprise the shortest route between each pair in Step 3. Step14: Step 2.5 Step15: CPP Step 3 Step17: Correct Circuit Now let's define a function that utilizes the original graph to tell you which trails to use to get from node A to node B. Although verbose in code, this logic is actually quite simple. You simply transform the naive circuit which included edges that did not exist in the original graph to a Eulerian circuit using only edges that exist in the original graph. You loop through each edge in the naive Eulerian circuit (naive_euler_circuit). Wherever you encounter an edge that does not exist in the original graph, you replace it with the sequence of edges comprising the shortest path between its nodes using the original graph Step18: Stats Step20: Create CPP Graph Your first step is to convert the list of edges to walk in the Euler circuit into an edge list with plot-friendly attributes.
Python Code: # Preview first 5 edges list(g.edges(data=True))[0:5] Explanation: Inspect Graph Edges Your graph edges are represented by a list of tuples of length 3. The first two elements are the node names linked by the edge. The third is the dictionary of edge attributes. End of explanation # Preview first 10 nodes list(g.nodes(data=True))[0:10] ## Summary Stats print('# of edges: {}'.format(g.number_of_edges())) print('# of nodes: {}'.format(g.number_of_nodes())) Explanation: Nodes Similarly, your nodes are represented by a list of tuples of length 2. The first element is the node ID, followed by the dictionary of node attributes. End of explanation # Define node positions data structure (dict) for plotting node_positions = {node[0]: (node[1]['X'], -node[1]['Y']) for node in g.nodes(data=True)} # Preview of node_positions with a bit of hack (there is no head/slice method for dictionaries). dict(list(node_positions.items())[0:5]) Explanation: Visualize Manipulate Colors and Layout Positions: First you need to manipulate the node positions from the graph into a dictionary. This will allow you to recreate the graph using the same layout as the actual trail map. Y is negated to transform the Y-axis origin from the topleft to the bottomleft. End of explanation # Define data structure (list) of edge colors for plotting edge_colors = [e[2]['attr_dict']['color'] for e in g.edges(data=True)] # Preview first 10 edge_colors[0:10] plt.figure(figsize=(8, 6)) nx.draw(g, pos=node_positions, edge_color=edge_colors, node_size=10, node_color='black') plt.title('Graph Representation of Sleeping Giant Trail Map', size=15) plt.show() Explanation: Colors: Now you manipulate the edge colors from the graph into a simple list so that you can visualize the trails by their color. End of explanation list(g.nodes(data=True)) # Calculate list of nodes with odd degree nodes_odd_degree = [v for v, d in g.degree() if d % 2 ==1] # Preview (nodes_odd_degree[0:5]) print('Number of nodes of odd degree: {}'.format(len(nodes_odd_degree))) print('Number of total nodes: {}'.format(len(g.nodes()))) Explanation: Solving the Chinese Postman Problem is quite simple conceptually: Find all nodes with odd degree (very easy). (Find all trail intersections where the number of trails touching that intersection is an odd number) Add edges to the graph such that all nodes of odd degree are made even. These added edges must be duplicates from the original graph (we'll assume no bushwhacking for this problem). The set of edges added should sum to the minimum distance possible (hard...np-hard to be precise). (In simpler terms, minimize the amount of double backing on a route that hits every trail) Given a starting point, find the Eulerian tour over the augmented dataset (moderately easy). (Once we know which trails we'll be double backing on, actually calculate the route from beginning to end) CPP Step 1: Find Nodes of Odd Degree This is a pretty straightforward counting computation. You see that 36 of the 76 nodes have odd degree. These are mostly the dead-end trails (degree 1) and intersections of 3 trails. There are a handful of degree 5 nodes. End of explanation # Compute all pairs of odd nodes. in a list of tuples odd_node_pairs = list(itertools.combinations(nodes_odd_degree, 2)) # Preview pairs of odd degree nodes odd_node_pairs[0:10] print('Number of pairs: {}'.format(len(odd_node_pairs))) def get_shortest_paths_distances(graph, pairs, edge_weight_name): Compute shortest distance between each pair of nodes in a graph. 
Return a dictionary keyed on node pairs (tuples). distances = {} for pair in pairs: distances[pair] = nx.dijkstra_path_length(graph, pair[0], pair[1], weight=edge_weight_name) return distances # Compute shortest paths. Return a dictionary with node pairs keys and a single value equal to shortest path distance. odd_node_pairs_shortest_paths = get_shortest_paths_distances(g, odd_node_pairs, 'distance') # Preview with a bit of hack (there is no head/slice method for dictionaries). dict(list(odd_node_pairs_shortest_paths.items())[0:10]) Explanation: CPP Step 2: Find Min Distance Pairs This is really the meat of the problem. You'll break it down into 5 parts: Compute all possible pairs of odd degree nodes. Compute the shortest path between each node pair calculated in 1. Create a complete graph connecting every node pair in 1. with shortest path distance attributes calculated in 2. Compute a minimum weight matching of the graph calculated in 3. (This boils down to determining how to pair the odd nodes such that the sum of the distance between the pairs is as small as possible). Augment the original graph with the shortest paths between the node pairs calculated in 4. Step 2.1: Compute Node Pairs You use the itertools combination function to compute all possible pairs of the odd degree nodes. Your graph is undirected, so we don't care about order: For example, (a,b) == (b,a). End of explanation def create_complete_graph(pair_weights, flip_weights=True): Create a completely connected graph using a list of vertex pairs and the shortest path distances between them Parameters: pair_weights: list[tuple] from the output of get_shortest_paths_distances flip_weights: Boolean. Should we negate the edge attribute in pair_weights? g = nx.Graph() for k, v in pair_weights.items(): wt_i = - v if flip_weights else v g.add_edge(k[0], k[1], attr_dict={'distance': v, 'weight': wt_i}) return g # Generate the complete graph g_odd_complete = create_complete_graph(odd_node_pairs_shortest_paths, flip_weights=True) # Counts print('Number of nodes: {}'.format(len(g_odd_complete.nodes()))) print('Number of edges: {}'.format(len(g_odd_complete.edges()))) Explanation: Step 2.3: Create Complete Graph A complete graph is simply a graph where every node is connected to every other node by a unique edge. create_complete_graph is defined to calculate it. The flip_weights parameter is used to transform the distance to the weight attribute where smaller numbers reflect large distances and high numbers reflect short distances. This sounds a little counter intuitive, but is necessary for Step 2.4 where you calculate the minimum weight matching on the complete graph. Ideally you'd calculate the minimum weight matching directly, but NetworkX only implements a max_weight_matching function which maximizes, rather than minimizes edge weight. We hack this a bit by negating (multiplying by -1) the distance attribute to get weight. This ensures that order and scale by distance are preserved, but reversed. End of explanation # Plot the complete graph of odd-degree nodes plt.figure(figsize=(8, 6)) pos_random = nx.random_layout(g_odd_complete) nx.draw_networkx_nodes(g_odd_complete, node_positions, node_size=20, node_color="red") nx.draw_networkx_edges(g_odd_complete, node_positions, alpha=0.1) plt.axis('off') plt.title('Complete Graph of Odd-degree Nodes') plt.show() Explanation: For a visual prop, the fully connected graph of odd degree node pairs is plotted below. 
Note that you preserve the X, Y coordinates of each node, but the edges do not necessarily represent actual trails. For example, two nodes could be connected by a single edge in this graph, but the shortest path between them could be 5 hops through even degree nodes (not shown here).
End of explanation
# Compute min weight matching.
# Note: max_weight_matching uses the 'weight' attribute by default as the attribute to maximize.
odd_matching_dupes = nx.algorithms.max_weight_matching(g_odd_complete, True)
print('Number of edges in matching: {}'.format(len(odd_matching_dupes)))
Explanation: Step 2.4: Compute Minimum Weight Matching
This is the most complex step in the CPP. You need to find the odd degree node pairs whose combined distance is as small as possible. So for your problem, this boils down to selecting the optimal 18 edges (36 odd degree nodes / 2) from the hairball of a graph generated in 2.3. Both the implementation and intuition of this optimization are beyond the scope of this tutorial: think 800+ lines of code and a body of academic literature. The code implemented in the NetworkX function max_weight_matching is based on Galil, Zvi (1986) [2], which employs an O(n^3) time algorithm.
End of explanation
odd_matching_dupes
list(odd_matching_dupes)
# Convert matching to list of deduped tuples
odd_matching = list(odd_matching_dupes)
# Counts
print('Number of edges in matching (deduped): {}'.format(len(odd_matching)))
plt.figure(figsize=(8, 6))
# Plot the complete graph of odd-degree nodes
nx.draw(g_odd_complete, pos=node_positions, node_size=20, alpha=0.05)
# Create a new graph to overlay on g_odd_complete with just the edges from the min weight matching
g_odd_complete_min_edges = nx.Graph(odd_matching)
nx.draw(g_odd_complete_min_edges, pos=node_positions, node_size=20, edge_color='blue', node_color='red')
plt.title('Min Weight Matching on Complete Graph')
plt.show()
Explanation: The matching output (odd_matching_dupes) holds the matched pairs of odd-degree nodes. In older versions of NetworkX this was returned as a dictionary with 36 entries, because each pair appeared twice (once with node 1 as the key and a second time with node 2 as the key); newer versions return a set of 18 tuples directly. Either way, you only want the 18 unique pairs, so the matching is converted to a deduplicated list of tuples.
End of explanation
plt.figure(figsize=(8, 6))
# Plot the original trail map graph
nx.draw(g, pos=node_positions, node_size=20, alpha=0.1, node_color='black')
# Plot graph to overlay with just the edges from the min weight matching
nx.draw(g_odd_complete_min_edges, pos=node_positions, node_size=20, alpha=1, node_color='red', edge_color='blue')
plt.title('Min Weight Matching on Original Graph')
plt.show()
Explanation: To illustrate how this fits in with the original graph, you plot the same min weight pairs (blue lines), but over the trail map (faded) instead of the complete graph. Again, note that the blue lines are the bushwhacking route (as-the-crow-flies edges, not actual trails). You still have a little bit of work to do to find the edges that comprise the shortest route between each pair in Step 3.
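If you want a concrete preview of that remaining work, here is a minimal illustrative sketch (an addition, not one of the original tutorial cells) that expands a single matched pair into the real trail segments it stands for, assuming g and odd_matching are defined as above:
# Expand one matched pair into actual trail edges on the original graph
pair = odd_matching[0]
path = nx.shortest_path(g, pair[0], pair[1], weight='distance')
print('Matched pair: {}'.format(pair))
print('Actual trail route: {}'.format(' => '.join(path)))
print('Route mileage: {0:.2f}'.format(nx.dijkstra_path_length(g, pair[0], pair[1], weight='distance')))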
End of explanation
def add_augmenting_path_to_graph(graph, min_weight_pairs):
    """Add the min weight matching edges to the original graph
    Parameters:
        graph: NetworkX graph (original graph from trailmap)
        min_weight_pairs: list[tuples] of node pairs from min weight matching
    Returns:
        augmented NetworkX graph
    """
    # We need to make the augmented graph a MultiGraph so we can add parallel edges
    graph_aug = nx.MultiGraph(graph.copy())
    for pair in min_weight_pairs:
        graph_aug.add_edge(pair[0],
                           pair[1],
                           attr_dict={'distance': nx.dijkstra_path_length(graph, pair[0], pair[1]),
                                      'trail': 'augmented'}
                           )
    return graph_aug
# Create augmented graph: add the min weight matching edges to g
g_aug = add_augmenting_path_to_graph(g, odd_matching)
# Counts
print('Number of edges in original graph: {}'.format(len(g.edges())))
print('Number of edges in augmented graph: {}'.format(len(g_aug.edges())))
Explanation: Step 2.5: Augment the Original Graph
Now you augment the original graph with the edges from the matching calculated in 2.4. A simple function to do this is defined below, which also marks these new edges as 'augmented'. You'll need to know this in Step 3, when you actually create the Eulerian circuit through the graph.
End of explanation
naive_euler_circuit = list(nx.eulerian_circuit(g_aug, source='b_end_east'))
print('Length of eulerian circuit: {}'.format(len(naive_euler_circuit)))
naive_euler_circuit[0:10]
Explanation: CPP Step 3: Compute Eulerian Circuit
Now that you have a graph with even degree, the hard optimization work is over. As Euler famously postulated in 1736 with the Seven Bridges of Königsberg problem, there exists a path which visits each edge exactly once if all nodes have even degree. Carl Hierholzer formally proved this result later in the 1870s. There are many Eulerian circuits with the same distance that can be constructed. You can get 90% of the way there with the NetworkX eulerian_circuit function. However, there are some limitations.
Naive Circuit
Nonetheless, let's start with the simple yet incomplete solution:
End of explanation
def create_eulerian_circuit(graph_augmented, graph_original, starting_node=None):
    """Create the Eulerian circuit using only edges from the original graph."""
    euler_circuit = []
    naive_circuit = list(nx.eulerian_circuit(graph_augmented, source=starting_node))
    for edge in naive_circuit:
        edge_data = graph_augmented.get_edge_data(edge[0], edge[1])
        #print(edge_data[0])
        if edge_data[0]['attr_dict']['trail'] != 'augmented':
            # If `edge` exists in original graph, grab the edge attributes and add to eulerian circuit.
            edge_att = graph_original[edge[0]][edge[1]]
            euler_circuit.append((edge[0], edge[1], edge_att))
        else:
            aug_path = nx.shortest_path(graph_original, edge[0], edge[1], weight='distance')
            aug_path_pairs = list(zip(aug_path[:-1], aug_path[1:]))
            print('Filling in edges for augmented edge: {}'.format(edge))
            print('Augmenting path: {}'.format(' => '.join(aug_path)))
            print('Augmenting path pairs: {}\n'.format(aug_path_pairs))
            # If `edge` does not exist in original graph, find the shortest path between its nodes and
            # add the edge attributes for each link in the shortest path.
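            # (Added note) For example, if an augmented edge (u, w) is backed by the shortest path
            # u => x => w in the original graph, the loop below appends the two real edges (u, x)
            # and (x, w), each with its original trail attributes; u, x and w are placeholder names.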
for edge_aug in aug_path_pairs: edge_aug_att = graph_original[edge_aug[0]][edge_aug[1]] euler_circuit.append((edge_aug[0], edge_aug[1], edge_aug_att)) return euler_circuit # Create the Eulerian circuit euler_circuit = create_eulerian_circuit(g_aug, g, 'b_end_east') print('Length of Eulerian circuit: {}'.format(len(euler_circuit))) ## CPP Solution # Preview first 20 directions of CPP solution for i, edge in enumerate(euler_circuit[0:20]): print(i, edge) Explanation: Correct Circuit Now let's define a function that utilizes the original graph to tell you which trails to use to get from node A to node B. Although verbose in code, this logic is actually quite simple. You simply transform the naive circuit which included edges that did not exist in the original graph to a Eulerian circuit using only edges that exist in the original graph. You loop through each edge in the naive Eulerian circuit (naive_euler_circuit). Wherever you encounter an edge that does not exist in the original graph, you replace it with the sequence of edges comprising the shortest path between its nodes using the original graph End of explanation # Computing some stats total_mileage_of_circuit = sum([edge[2]['attr_dict']['distance'] for edge in euler_circuit]) total_mileage_on_orig_trail_map = sum(nx.get_edge_attributes(g, 'distance').values()) _vcn = pd.value_counts(pd.value_counts([(e[0]) for e in euler_circuit]), sort=False) node_visits = pd.DataFrame({'n_visits': _vcn.index, 'n_nodes': _vcn.values}) _vce = pd.value_counts(pd.value_counts([sorted(e)[0] + sorted(e)[1] for e in nx.MultiDiGraph(euler_circuit).edges()])) edge_visits = pd.DataFrame({'n_visits': _vce.index, 'n_edges': _vce.values}) # Printing stats print('Mileage of circuit: {0:.2f}'.format(total_mileage_of_circuit)) print('Mileage on original trail map: {0:.2f}'.format(total_mileage_on_orig_trail_map)) print('Mileage retracing edges: {0:.2f}'.format(total_mileage_of_circuit-total_mileage_on_orig_trail_map)) #print('Percent of mileage retraced: {0:.2f}%\n'.format((1-total_mileage_of_circuit/total_mileage_on_orig_trail_map)*-100)) print('Number of edges in circuit: {}'.format(len(euler_circuit))) print('Number of edges in original graph: {}'.format(len(g.edges()))) print('Number of nodes in original graph: {}\n'.format(len(g.nodes()))) print('Number of edges traversed more than once: {}\n'.format(len(euler_circuit)-len(g.edges()))) print('Number of times visiting each node:') print(node_visits.to_string(index=False)) print('\nNumber of times visiting each edge:') print(edge_visits.to_string(index=False)) Explanation: Stats End of explanation def create_cpp_edgelist(euler_circuit): Create the edgelist without parallel edge for the visualization Combine duplicate edges and keep track of their sequence and # of walks Parameters: euler_circuit: list[tuple] from create_eulerian_circuit cpp_edgelist = {} for i, e in enumerate(euler_circuit): edge = frozenset([e[0], e[1]]) if edge in cpp_edgelist: cpp_edgelist[edge][2]['sequence'] += ', ' + str(i) cpp_edgelist[edge][2]['visits'] += 1 else: cpp_edgelist[edge] = e cpp_edgelist[edge][2]['sequence'] = str(i) cpp_edgelist[edge][2]['visits'] = 1 return list(cpp_edgelist.values()) cpp_edgelist = create_cpp_edgelist(euler_circuit) print('Number of edges in CPP edge list: {}'.format(len(cpp_edgelist))) cpp_edgelist[0:3] g_cpp = nx.Graph(cpp_edgelist) plt.figure(figsize=(14, 10)) visit_colors = {1:'lightgray', 2:'blue', 3: 'red', 4 : 'black', 5 : 'green'} edge_colors = [visit_colors[e[2]['visits']] for e in 
g_cpp.edges(data=True)] node_colors = ['red' if node in nodes_odd_degree else 'lightgray' for node in g_cpp.nodes()] nx.draw_networkx(g_cpp, pos=node_positions, node_size=20, node_color=node_colors, edge_color=edge_colors, with_labels=False) plt.axis('off') plt.show() plt.figure(figsize=(14, 10)) edge_colors = [e[2]['attr_dict']['color'] for e in g_cpp.edges(data=True)] nx.draw_networkx(g_cpp, pos=node_positions, node_size=10, node_color='black', edge_color=edge_colors, with_labels=False, alpha=0.5) bbox = {'ec':[1,1,1,0], 'fc':[1,1,1,0]} # hack to label edges over line (rather than breaking up line) edge_labels = nx.get_edge_attributes(g_cpp, 'sequence') nx.draw_networkx_edge_labels(g_cpp, pos=node_positions, edge_labels=edge_labels, bbox=bbox, font_size=6) plt.axis('off') plt.show() visit_colors = {1:'lightgray', 2:'blue', 3: 'red', 4 : 'black', 5 : 'green'} edge_cnter = {} g_i_edge_colors = [] for i, e in enumerate(euler_circuit, start=1): edge = frozenset([e[0], e[1]]) if edge in edge_cnter: edge_cnter[edge] += 1 else: edge_cnter[edge] = 1 # Full graph (faded in background) nx.draw_networkx(g_cpp, pos=node_positions, node_size=6, node_color='gray', with_labels=False, alpha=0.07) # Edges walked as of iteration i euler_circuit_i = copy.deepcopy(euler_circuit[0:i]) for i in range(len(euler_circuit_i)): edge_i = frozenset([euler_circuit_i[i][0], euler_circuit_i[i][1]]) euler_circuit_i[i][2]['visits_i'] = edge_cnter[edge_i] g_i = nx.Graph(euler_circuit_i) g_i_edge_colors = [visit_colors[e[2]['visits_i']] for e in g_i.edges(data=True)] nx.draw_networkx_nodes(g_i, pos=node_positions, node_size=6, alpha=0.6, node_color='lightgray', with_labels=False, linewidths=0.1) nx.draw_networkx_edges(g_i, pos=node_positions, edge_color=g_i_edge_colors, alpha=0.8) plt.axis('off') plt.savefig('img{}.png'.format(i), dpi=120, bbox_inches='tight') plt.close() import glob import numpy as np import imageio import os def make_circuit_video(image_path, movie_filename, fps=7): # sorting filenames in order filenames = glob.glob(image_path + 'img*.png') filenames_sort_indices = np.argsort([int(os.path.basename(filename).split('.')[0][3:]) for filename in filenames]) filenames = [filenames[i] for i in filenames_sort_indices] # make movie with imageio.get_writer(movie_filename, mode='I', fps=fps) as writer: for filename in filenames: image = imageio.imread(filename) writer.append_data(image) make_circuit_video('', 'cpp_route_animation.gif', fps=3) Explanation: Create CPP Graph Your first step is to convert the list of edges to walk in the Euler circuit into an edge list with plot-friendly attributes. End of explanation
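As a final sanity check on the CPP solution, here is a short illustrative snippet (an addition, not part of the original tutorial) that verifies the computed circuit is a closed walk covering every trail at least once, assuming g and euler_circuit are defined as above:
# The circuit should start and end at the same node ('b_end_east')
print('Closed walk: {}'.format(euler_circuit[0][0] == euler_circuit[-1][1]))
# Every edge of the original graph should be walked at least once
walked = set(frozenset([e[0], e[1]]) for e in euler_circuit)
required = set(frozenset([u, v]) for u, v in g.edges())
print('All original edges covered: {}'.format(required.issubset(walked)))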
15,975
Given the following text description, write Python code to implement the functionality described below step by step Description: Mediation analysis with duration data This notebook demonstrates mediation analysis when the mediator and outcome are duration variables, modeled using proportional hazards regression. These examples are based on simulated data. Step1: Make the notebook reproducible. Step2: Specify a sample size. Step3: Generate an exposure variable. Step4: Generate a mediator variable. Step5: Generate an outcome variable. Step6: Build a dataframe containing all the relevant variables. Step7: Run the full simulation and analysis, under a particular population structure of mediation. Step8: Run the example with full mediation Step9: Run the example with partial mediation Step10: Run the example with no mediation
Python Code: import pandas as pd import numpy as np import statsmodels.api as sm from statsmodels.stats.mediation import Mediation Explanation: Mediation analysis with duration data This notebook demonstrates mediation analysis when the mediator and outcome are duration variables, modeled using proportional hazards regression. These examples are based on simulated data. End of explanation np.random.seed(3424) Explanation: Make the notebook reproducible. End of explanation n = 1000 Explanation: Specify a sample size. End of explanation exp = np.random.normal(size=n) Explanation: Generate an exposure variable. End of explanation def gen_mediator(): mn = np.exp(exp) mtime0 = -mn * np.log(np.random.uniform(size=n)) ctime = -2 * mn * np.log(np.random.uniform(size=n)) mstatus = (ctime >= mtime0).astype(int) mtime = np.where(mtime0 <= ctime, mtime0, ctime) return mtime0, mtime, mstatus Explanation: Generate a mediator variable. End of explanation def gen_outcome(otype, mtime0): if otype == "full": lp = 0.5 * mtime0 elif otype == "no": lp = exp else: lp = exp + mtime0 mn = np.exp(-lp) ytime0 = -mn * np.log(np.random.uniform(size=n)) ctime = -2 * mn * np.log(np.random.uniform(size=n)) ystatus = (ctime >= ytime0).astype(int) ytime = np.where(ytime0 <= ctime, ytime0, ctime) return ytime, ystatus Explanation: Generate an outcome variable. End of explanation def build_df(ytime, ystatus, mtime0, mtime, mstatus): df = pd.DataFrame( { "ytime": ytime, "ystatus": ystatus, "mtime": mtime, "mstatus": mstatus, "exp": exp, } ) return df Explanation: Build a dataframe containing all the relevant variables. End of explanation def run(otype): mtime0, mtime, mstatus = gen_mediator() ytime, ystatus = gen_outcome(otype, mtime0) df = build_df(ytime, ystatus, mtime0, mtime, mstatus) outcome_model = sm.PHReg.from_formula( "ytime ~ exp + mtime", status="ystatus", data=df ) mediator_model = sm.PHReg.from_formula("mtime ~ exp", status="mstatus", data=df) med = Mediation( outcome_model, mediator_model, "exp", "mtime", outcome_predict_kwargs={"pred_only": True}, ) med_result = med.fit(n_rep=20) print(med_result.summary()) Explanation: Run the full simulation and analysis, under a particular population structure of mediation. End of explanation run("full") Explanation: Run the example with full mediation End of explanation run("partial") Explanation: Run the example with partial mediation End of explanation run("no") Explanation: Run the example with no mediation End of explanation
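As an optional check on the simulation design (an addition, not part of the original statsmodels example), note that the mediator's event time is exponential with mean mn while its independent censoring time has mean 2 * mn, so roughly two thirds of the mediator observations should be uncensored. A minimal sketch, reusing gen_mediator from above:
# Empirical vs. theoretical uncensored fraction for the mediator
mtime0, mtime, mstatus = gen_mediator()
print('Observed (uncensored) fraction: {:.2f} (theory: {:.2f})'.format(mstatus.mean(), 2 / 3))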