project | source | language | content
---|---|---|---
fastSOM | cran | R | Package ‘fastSOM’
October 13, 2022
Type Package
Version 1.0.1
Date 2019-11-19
Title Fast Calculation of Spillover Measures
Imports parallel
Description Functions for computing spillover measures, especially spillover
tables and spillover indices, as well as their average, minimal, and maximal
values.
License GPL (>= 2)
NeedsCompilation yes
Author <NAME> [aut, cre],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2019-11-19 12:40:06 UTC
R topics documented:
fastSOM-package
soi
soi_avg_est
soi_avg_exact
soi_from_sot
sot
sot_avg_est
sot_avg_exact
fastSOM-package Fast Calculation of Spillover Measures
Description
This package comprises various functions for computing spillover measures, especially spillover
tables and spillover indices as proposed by Diebold and Yilmaz (2009) as well as their estimated
and exact average, minimal, and maximal values.
Details
Package: fastSOM
Type: Package
Version: 1.0.1
Date: 2019-11-19
License: GPL (>=2)
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] Diebold, <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers,
with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers?
Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1):
172-179
soi Calculation of the Spillover Index
Description
This function calculates the spillover index as proposed by Diebold and Yilmaz (2009, see References).
Usage
soi(Sigma, A, ncores = 1, ...)
Arguments
Sigma Either a covariance matrix or a list thereof.
A Either a 3-dimensional array with A[,,h] being MA coefficient matrices of the
same dimension as Sigma or a list thereof.
ncores Number of cores, only relevant if Sigma is a list of matrices. Missing ncores or
ncores=1 means no parallelization (just one core is used). ncores=0 means automatic
detection of the number of available cores. Any other integer determines
the maximal number of cores to be used.
... Further arguments, especially perm which is used to reorder variables. If perm is
missing, then the original ordering of the model variables will be used. If perm
is a permutation of 1:N, then the spillover index for the model with variables
reordered according to perm will be calculated.
Details
The spillover index was introduced by Diebold and Yilmaz in 2009 (see References). It is based on
a variance decomposition of the forecast error variances of an N-dimensional MA(∞) process. The
underlying idea is to decompose the forecast error of each variable into own variance shares and
cross variance shares. The latter are interpreted as contributions of shocks of one variable to the
error variance in forecasting another variable (see also sot). The spillover index then is a number
between 0 and 100, describing the relative amount of forecast error variances that can be explained
by shocks coming from other variables in the model.
The typical application of the ’list’ version of soi is a rolling windows approach when Sigma and
A are lists representing the corresponding quantities at different points in time (rolling windows).
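For instance, a minimal sketch of the 'list' version (the rolling-window inputs Sigma_list and A_list are assumed to have been produced elsewhere; constructing them is not part of fastSOM):
# Sigma_list and A_list hold one covariance matrix and one MA coefficient
# array per window; soi then returns one spillover index value per window
soi_series <- soi(Sigma_list, A_list, ncores = 2)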
Value
Returns a single numeric value or a list thereof.
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers,
with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers?
Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1):
172-179
See Also
fastSOM-package, sot
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N),nrow=N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H),dim=c(N,N,H))
# calculate the spillover index
soi(Sigma, A)
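# the spillover index depends on the ordering of the variables; a reordering can
# be passed via the perm argument (the random permutation below is illustrative)
soi(Sigma, A, perm = sample(1:N))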
soi_avg_est Estimation of Average, Minimal, and Maximal Spillover Index
Description
Calculates an estimate of the average, the minimum, and the maximum spillover index based on
different permutations.
Usage
soi_avg_est(Sigma, A, ncores = 1, ...)
Arguments
Sigma Either a covariance matrix or a list thereof.
A Either a 3-dimensional array with A[,,h] being MA coefficient matrices of the
same dimension as Sigma or a list thereof.
ncores Number of cores. Missing ncores or ncores=1 means no parallelization (just
one core is used). ncores=0 means automatic detection of the number of available
cores. Any other integer determines the maximal number of cores to be
used.
... Further arguments, especially perms which is used to reorder variables. If perms
is missing, then 10,000 randomly created permutations of 1:N will be used as
reorderings of the model variables. If perms is defined, it has to be either a
matrix with each column being a permutation of 1:N, or, alternatively, an integer
value defining the number of randomly created permutations.
Details
The spillover index introduced by Diebold and Yilmaz (2009) (see References) depends on the
ordering of the model variables. While soi_avg_exact provides a fast algorithm for exact calculation
of average, minimum, and maximum of the spillover index over all permutations, there might be
reasons to prefer to estimate these quantities using a limited number of permutations (mainly to
save time when N is large). This is exactly what soi_avg_est does.
The typical application of the ’list’ version of soi_avg_est is a rolling windows approach when
Sigma and A are lists representing the corresponding quantities at different points in time (rolling
windows).
Value
The ’single’ version returns a list containing the estimated average, minimal, and maximal spillover
index as well as permutations that generated the minimal and maximal value. The ’list’ version
returns a list consisting of three vectors (the average, minimal, and maximal spillover index values)
and two matrices (the columns of which are the permutations generating the minima and maxima).
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers,
with application to global equity markets, Economic Journal 119(534): 158-171.
[2] Kloessner, S. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers?
Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1):
172-179
See Also
fastSOM-package, soi_avg_exact
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N),nrow=N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H),dim=c(N,N,H))
# calculate estimates of the average, minimal,
# and maximal spillover index and determine the corresponding ordering
# of the model variables
soi_avg_est(Sigma, A)
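# perms can also be supplied explicitly (see Arguments); using 1000 random
# permutations here instead of the default 10,000 is purely illustrative
soi_avg_est(Sigma, A, perms = 1000)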
soi_avg_exact Exact Calculation of Average, Minimal, and Maximal Spillover Index
Description
Calculates the Average, Minimal, and Maximal Spillover Index exactly.
Usage
soi_avg_exact(Sigma, A, ncores = 1)
Arguments
Sigma Either a covariance matrix or a list thereof.
A Either a 3-dimensional array with A[,,h] being MA coefficient matrices of the
same dimension as Sigma or a list thereof.
ncores Number of cores. Missing ncores or ncores=1 means no parallelization (just
one core is used). ncores=0 means automatic detection of the number of available
cores. Any other integer determines the maximal number of cores to be
used.
Details
The spillover index introduced by Diebold and Yilmaz (2009) (see References) depends on the
ordering of the model variables. While soi_avg_est provides an algorithm to estimate the average,
minimum, and maximum of the spillover index over all permutations, soi_avg_exact calculates
these quantities exactly. Notice, however, that for large dimensions N, this might be quite
time- as well as memory-consuming. If only the exact average of the spillover index is wanted,
soi_from_sot(sot_avg_exact(Sigma,A,ncores)$Average) should be used.
The typical application of the ’list’ version of soi_avg_exact is a rolling windows approach when
Sigma and A are lists representing the corresponding quantities at different points in time (rolling
windows).
Value
The ’single’ version returns a list containing the exact average, minimal, and maximal spillover
index as well as permutations that generated the minimal and maximal value. The ’list’ version
returns a list consisting of three vectors (the average, minimal, and maximal spillover index values)
and two matrices (the columns of which are the permutations generating the minima and maxima).
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers,
with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers?
Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1):
172-179
See Also
fastSOM-package, soi_avg_est
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N),nrow=N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H),dim=c(N,N,H))
# calculate the exact average, minimal,
# and maximal spillover index and determine the corresponding ordering
# of the model variables
soi_avg_exact(Sigma, A)
soi_from_sot Calculation of the Spillover Index for a given Spillover Table
Description
Given a spillover table, this function calculates the corresponding spillover index.
Usage
soi_from_sot(input_table)
Arguments
input_table Either a spillover table or a list thereof
Details
The spillover index was introduced by Diebold and Yilmaz in 2009 (see References). It is based on
a variance decomposition of the forecast error variances of an N-dimensional MA(∞) process. The
underlying idea is to decompose the forecast error of each variable into own variance shares and
cross variance shares. The latter are interpreted as contributions of shocks of one variable to the
error variance in forecasting another variable (see also sot). The spillover index then is a number
between 0 and 100, describing the relative amount of forecast error variances that can be explained
by shocks coming from other variables in the model.
The typical application of the ’list’ version of soi_from_sot is a rolling windows approach when
input_table is a list representing the corresponding spillover tables at different points in time
(rolling windows).
Value
Numeric value or a list thereof.
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers,
with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers?
Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1):
172-179
See Also
fastSOM-package, soi, sot
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N),nrow=N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H),dim=c(N,N,H))
# calculate spillover table
SOT <- sot(Sigma,A)
# calculate spillover index from spillover table
soi_from_sot(SOT)
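# the same value can be obtained directly from Sigma and A
soi(Sigma, A)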
sot Calculation of Spillover Tables
Description
This function calculates an N x N-dimensional spillover table.
Usage
sot(Sigma, A, ncores = 1, ...)
Arguments
Sigma Either a covariance matrix or a list thereof.
A Either a 3-dimensional array with A[,,h] being MA coefficient matrices of the
same dimension as Sigma or a list thereof.
ncores Number of cores, only relevant if Sigma is a list of matrices. Missing ncores or
ncores=1 means no parallelization (just one core is used). ncores=0 means automatic
detection of the number of available cores. Any other integer determines
the maximal number of cores to be used.
... Further arguments, especially perm which is used to reorder variables. If perm is
missing, then the original ordering of the model variables will be used. If perm
is a permutation of 1:N, then the spillover index for the model with variables
reordered according to perm will be calculated.
Details
The (i, j)-entry of a spillover table represents the relative contribution of shocks in variable j (the
column variable) to the forecasting error variance of variable i (the row variable). Hence,
off-diagonal values are interpreted as spillovers, while the own variance shares appear on the diagonal.
An overall spillover measure is given by soi.
The typical application of the ’list’ version of sot is a rolling windows approach when Sigma and
A are lists representing the corresponding quantities at different points in time (rolling windows).
Value
Matrix, or a list thereof, of dimension N x N with non-negative entries summing up to 100 for each
row.
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers,
with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers?
Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1):
172-179
See Also
fastSOM-package, soi
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N),nrow=N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H),dim=c(N,N,H))
# calculate spillover table
sot(Sigma,A)
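# each row of a spillover table is non-negative and sums to 100 (see Value)
rowSums(sot(Sigma, A))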
sot_avg_est Estimation of the Average, Minimal, and Maximal Entries of a
Spillover Table
Description
Calculates estimates of the average, minimal, and maximal entries of a spillover table.
Usage
sot_avg_est(Sigma, A, ncores = 1, ...)
Arguments
Sigma Either a covariance matrix or a list thereof.
A Either a 3-dimensional array with A[,,h] being MA coefficient matrices of the
same dimension as Sigma or a list thereof.
ncores Number of cores. Missing ncores or ncores=1 means no parallelization (just
one core is used). ncores=0 means automatic detection of the number of available
cores. Any other integer determines the maximal number of cores to be
used.
... Further arguments, especially perms which is used to reorder variables. If perms
is missing, then 10,000 randomly created permutations of 1:N will be used as
reorderings of the model variables. If perms is defined, it has to be either a
matrix with each column being a permutation of 1:N, or, alternatively, an integer
value defining the number of randomly created permutations.
Details
The spillover tables introduced by Diebold and Yilmaz (2009) (see References) depend on the
ordering of the model variables. While sot_avg_exact provides a fast algorithm for exact calculation
of average, minimum, and maximum of the spillover table over all permutations, there might be
reasons to prefer to estimate these quantities using a limited number of permutations (mainly to save
time when N is large). This is exactly what sot_avg_est does.
The typical application of the ’list’ version of sot_avg_est is a rolling windows approach when
Sigma and A are lists representing the corresponding quantities at different points in time (rolling
windows).
Value
The ’single’ version returns a list containing the estimated average, minimal, and maximal values for the
spillover table. The ’list’ version returns a list with three elements (Average, Minimum, Maximum)
which themselves are lists of the corresponding tables.
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers,
with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers?
Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1):
172-179
See Also
fastSOM-package, sot_avg_exact
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N),nrow=N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H),dim=c(N,N,H))
# calculate estimates of the average, minimal,
# and maximal entries within a spillover table
sot_avg_est(Sigma, A)
sot_avg_exact Calculation of the Exact Values for Average, Minimal, and Maximal
Entries of a Spillover Table
Description
Calculates the exact values of the average, the minimum, and the maximum entries of a spillover
table based on different permutations.
Usage
sot_avg_exact(Sigma, A, ncores = 1)
Arguments
Sigma Either a covariance matrix or a list thereof.
A Either a 3-dimensional array with A[,,h] being MA coefficient matrices of the
same dimension as Sigma or a list thereof.
ncores Number of cores, only relevant for ’list’ version. In this case, missing ncores or
ncores=1 means no parallelization (just one core is used), ncores=0 means automatic
detection of the number of available cores, any other integer determines
the maximal number of cores to be used.
Details
The spillover tables introduced by Diebold and Yilmaz (2009) (see References) depend on the
ordering of the model variables. While sot_avg_est provides an algorithm to estimate average,
minimal, and maximal values of the spillover table over all permutations, sot_avg_exact calculates
these quantities exactly. Notice, however, that for large dimensions N, this might be quite time- as
well as memory-consuming.
The typical application of the ’list’ version of sot_avg_exact is a rolling windows approach when
Sigma and A are lists representing the corresponding quantities at different points in time (rolling
windows).
Value
The ’single’ version returns a list containing the exact average, minimal, and maximal values for the
spillover table. The ’list’ version returns a list with three elements (Average, Minimum, Maximum)
which themselves are lists of the corresponding tables.
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers,
with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers?
Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1):
172-179
See Also
fastSOM-package, sot_avg_est
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N),nrow=N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H),dim=c(N,N,H))
# calculate the exact average, minimal,
# and maximal entries within a spillover table
sot_avg_exact(Sigma, A) |
shadow | ctan | TeX | # The shadow package†
Footnote †: This manual corresponds to shadow.sty v1.3, dated 19 February 2003.
<NAME>
<EMAIL>
19 February 2003
The command \shabox has the same meaning as the LaTeX command \fbox except for the fact that a "shadow" is added to the bottom and the right side of the box. It computes the right dimension of the box even if the text spans more than one line; in this case a warning message is given.
There are three parameters governing:
1. the width of the lines delimiting the box: \sboxrule
2. the separation between the edge of the box and its contents: \sboxsep
3. the dimension of the shadow: \sdim
**Syntax:**
\shabox{text}
where text is the text to be put in the framed box. It can be an entire paragraph.
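A minimal usage sketch (the lengths set below are illustrative values rather than the package defaults, and assume the three parameters are ordinary length registers):
```latex
\documentclass{article}
\usepackage{shadow}
\begin{document}
\setlength{\sboxrule}{0.8pt} % width of the lines delimiting the box
\setlength{\sboxsep}{4pt}    % separation between the box edge and its contents
\setlength{\sdim}{4pt}       % dimension of the shadow
\shabox{This text is framed, with a shadow on the right and at the bottom.}
\end{document}
```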
Adapted from the file dropshadow.tex by <EMAIL>.
Changes:
1. Works in a double column environment.
2. When there is an inline shadow box, it will be centered on the line (in V1.1 the box was aligned with the baseline). (Courtesy of <NAME>)
3. Added a number of missing % signs; no other cleanup done (FMi) |
currr | cran | R | Package ‘currr’
February 17, 2023
Title Apply Mapping Functions in Frequent Saving
Version 0.1.2
Description Implementations of the family of map() functions with frequent saving of the intermediate
results. The contained functions let you start the evaluation of the iterations where you stopped
(reading the already evaluated ones from cache), and work with the currently evaluated iterations
while remaining ones are running in a background job. Parallel computing is also easier with the
workers parameter.
License MIT + file LICENSE
URL https://github.com/MarcellGranat/currr
BugReports https://github.com/MarcellGranat/currr/issues
Depends R (>= 4.1.0)
Imports dplyr, tidyr, readr, stringr, broom, pacman, tibble,
clisymbols, job, rstudioapi, scales, parallel, purrr, crayon,
stats
Encoding UTF-8
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-4036-1500>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-02-17 12:20:20 UTC
R topics documented:
cp_map
cp_map_chr
cp_map_dbl
cp_map_dfc
cp_map_dfr
cp_map_lgl
remove_currr_cache
saving_map
saving_map_nodot
cp_map Wrapper function of purrr::map. Apply a function to each element
of a vector, but save the intermediate data after a given number of
iterations.
Description
The map functions transform their input by applying a function to each element of a list or atomic
vector and returning an object of the same length as the input. cp_map functions work exactly the
same way, but create a secret folder in your current working directory and save the results whenever
they reach a given checkpoint. This way, if you rerun the code, it reads the results from the cache
folder and starts to evaluate where you finished.
• cp_map() always returns a list.
• map_lgl(), map_dbl() and map_chr() return an atomic vector of the indicated type (or die
trying). For these functions, .f must return a length-1 vector of the appropriate type.
Usage
cp_map(.x, .f, ..., name = NULL, cp_options = list())
Arguments
.x A list or atomic vector.
.f A function, specified in one of the following ways:
• A named function, e.g. mean.
• An anonymous function, e.g. \(x) x + 1 or function(x) x + 1.
• A formula, e.g. ~ .x + 1. You must use .x to refer to the first argument.
Only recommended if you require backward compatibility with older ver-
sions of R.
... Additional arguments passed on to the mapped function.
name Name for the subfolder in the cache folder. If you do not specify, then cp_map
uses the name of the function combined with the name of x. This is dangerous,
since this generated name can appear multiple times in your code. Also, changing
x will result in a rerun of the code, although you may want to avoid this. (If a subset
of .x matches the cached one and the function is the same, then elements
of this subset won't be evaluated, but rather read from the cache.)
cp_options Options for the evaluation: wait, n_checkpoint, workers, fill.
• wait: An integer specifying after how many iterations the console shows
the intermediate results (default 1). If its value is between 0 and 1, it is
taken as the proportion of iterations to wait for (for example, if the length of .x
is 100 and you set it to 0.5, you get back the result after 50 iterations). Set
to Inf to get back the results only after the full evaluation. If its value is not
equal to Inf, then the evaluation runs in a background job.
• n_checkpoint: Number of checkpoints at which intermediate results are saved
(default = 100).
• workers: Number of CPU cores to use (the parallel package is called in the
background). Set to 1 (default) to avoid parallel computing.
• fill: Whether the length of the result should be the same as .x when you get
back a not fully evaluated result (default TRUE).
You can set these options also with options(currr.n_checkpoint = 200).
Additional options: currr.unchanged_message (TRUE/FALSE), currr.progress_length
Value
A list.
See Also
Other map variants: cp_map_chr(), cp_map_dbl(), cp_map_dfc(), cp_map_dfr(), cp_map_lgl()
Examples
# Run them on console!
# (functions need writing and reading access to your working directory and they also print)
avg_n <- function(.data, .col, x) {
Sys.sleep(.01)
.data |>
dplyr::pull({{ .col }}) |>
(\(m) mean(m) * x) ()
}
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = 2, name = "iris_mean")
# same function, read from cache
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = 2, name = "iris_mean")
remove_currr_cache()
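# a sketch of the documented cp_options (the values here are only illustrative):
# save at 20 checkpoints, use 2 workers, and return only after the full evaluation
cp_map(
  .x = 1:10, .f = avg_n, .data = iris, .col = 2, name = "iris_mean_opts",
  cp_options = list(wait = Inf, n_checkpoint = 20, workers = 2)
)
remove_currr_cache("iris_mean_opts")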
cp_map_chr Wrapper function of purrr::map. Apply a function to each element
of a vector, but save the intermediate data after a given number of
iterations.
Description
The map functions transform their input by applying a function to each element of a list or atomic
vector and returning an object of the same length as the input. cp_map functions work exactly the
same way, but create a secret folder in your current working directory and save the results whenever
they reach a given checkpoint. This way, if you rerun the code, it reads the results from the cache
folder and starts to evaluate where you finished.
• cp_map() always returns a list.
• map_lgl(), map_dbl() and map_chr() return an atomic vector of the indicated type (or die
trying). For these functions, .f must return a length-1 vector of the appropriate type.
Usage
cp_map_chr(.x, .f, ..., name = NULL, cp_options = list())
Arguments
.x A list or atomic vector.
.f A function, specified in one of the following ways:
• A named function, e.g. mean.
• An anonymous function, e.g. \(x) x + 1 or function(x) x + 1.
• A formula, e.g. ~ .x + 1. You must use .x to refer to the first argument.
Only recommended if you require backward compatibility with older ver-
sions of R.
... Additional arguments passed on to the mapped function.
name Name for the subfolder in the cache folder. If you do not specify, then cp_map
uses the name of the function combined with the name of x. This is dangerous,
since this generated name can appear multiple times in your code. Also, changing
x will result in a rerun of the code, although you may want to avoid this. (If a subset
of .x matches the cached one and the function is the same, then elements
of this subset won't be evaluated, but rather read from the cache.)
cp_options Options for the evaluation: wait, n_checkpoint, workers, fill.
• wait: An integer specifying after how many iterations the console shows
the intermediate results (default 1). If its value is between 0 and 1, it is
taken as the proportion of iterations to wait for (for example, if the length of .x
is 100 and you set it to 0.5, you get back the result after 50 iterations). Set
to Inf to get back the results only after the full evaluation. If its value is not
equal to Inf, then the evaluation runs in a background job.
• n_checkpoint: Number of checkpoints at which intermediate results are saved
(default = 100).
• workers: Number of CPU cores to use (the parallel package is called in the
background). Set to 1 (default) to avoid parallel computing.
• fill: Whether the length of the result should be the same as .x when you get
back a not fully evaluated result (default TRUE).
You can set these options also with options(currr.n_checkpoint = 200).
Additional options: currr.unchanged_message (TRUE/FALSE), currr.progress_length
Value
A character vector.
See Also
Other map variants: cp_map_dbl(), cp_map_dfc(), cp_map_dfr(), cp_map_lgl(), cp_map()
Examples
# Run them on console!
# (functions need writing and reading access to your working directory and they also print)
avg_n <- function(.data, .col, x) {
Sys.sleep(.01)
.data |>
dplyr::pull({{ .col }}) |>
(\(m) mean(m) * x) ()
}
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
# same function, read from cache
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
remove_currr_cache()
cp_map_dbl Wrapper function of purrr::map. Apply a function to each element
of a vector, but save the intermediate data after a given number of
iterations.
Description
The map functions transform their input by applying a function to each element of a list or atomic
vector and returning an object of the same length as the input. cp_map functions work exactly the
same way, but create a secret folder in your current working directory and save the results whenever
they reach a given checkpoint. This way, if you rerun the code, it reads the results from the cache
folder and starts to evaluate where you finished.
• cp_map() always returns a list.
• map_lgl(), map_dbl() and map_chr() return an atomic vector of the indicated type (or die
trying). For these functions, .f must return a length-1 vector of the appropriate type.
Usage
cp_map_dbl(.x, .f, ..., name = NULL, cp_options = list())
Arguments
.x A list or atomic vector.
.f A function, specified in one of the following ways:
• A named function, e.g. mean.
• An anonymous function, e.g. \(x) x + 1 or function(x) x + 1.
• A formula, e.g. ~ .x + 1. You must use .x to refer to the first argument.
Only recommended if you require backward compatibility with older ver-
sions of R.
... Additional arguments passed on to the mapped function.
name Name for the subfolder in the cache folder. If you do not specify, then cp_map
uses the name of the function combined with the name of x. This is dangerous,
since this generated name can appear multiple times in your code. Also, changing
x will result in a rerun of the code, although you may want to avoid this. (If a subset
of .x matches the cached one and the function is the same, then elements
of this subset won't be evaluated, but rather read from the cache.)
cp_options Options for the evaluation: wait, n_checkpoint, workers, fill.
• wait: An integer specifying after how many iterations the console shows
the intermediate results (default 1). If its value is between 0 and 1, it is
taken as the proportion of iterations to wait for (for example, if the length of .x
is 100 and you set it to 0.5, you get back the result after 50 iterations). Set
to Inf to get back the results only after the full evaluation. If its value is not
equal to Inf, then the evaluation runs in a background job.
• n_checkpoint: Number of checkpoints at which intermediate results are saved
(default = 100).
• workers: Number of CPU cores to use (the parallel package is called in the
background). Set to 1 (default) to avoid parallel computing.
• fill: Whether the length of the result should be the same as .x when you get
back a not fully evaluated result (default TRUE).
You can set these options also with options(currr.n_checkpoint = 200).
Additional options: currr.unchanged_message (TRUE/FALSE), currr.progress_length
Value
A numeric vector.
See Also
Other map variants: cp_map_chr(), cp_map_dfc(), cp_map_dfr(), cp_map_lgl(), cp_map()
Examples
# Run them on console!
# (functions need writing and reading access to your working directory and they also print)
avg_n <- function(.data, .col, x) {
Sys.sleep(.01)
.data |>
dplyr::pull({{ .col }}) |>
(\(m) mean(m) * x) ()
}
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
# same function, read from cache
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
remove_currr_cache()
cp_map_dfc Wrapper function of purrr::map. Apply a function to each element
of a vector, but save the intermediate data after a given number of
iterations.
Description
The map functions transform their input by applying a function to each element of a list or atomic
vector and returning an object of the same length as the input. cp_map functions work exactly the
same way, but create a secret folder in your current working directory and save the results whenever
they reach a given checkpoint. This way, if you rerun the code, it reads the results from the cache
folder and starts to evaluate where you finished.
• cp_map() always returns a list.
• map_lgl(), map_dbl() and map_chr() return an atomic vector of the indicated type (or die
trying). For these functions, .f must return a length-1 vector of the appropriate type.
Usage
cp_map_dfc(.x, .f, ..., name = NULL, cp_options = list())
Arguments
.x A list or atomic vector.
.f A function, specified in one of the following ways:
• A named function, e.g. mean.
• An anonymous function, e.g. \(x) x + 1 or function(x) x + 1.
• A formula, e.g. ~ .x + 1. You must use .x to refer to the first argument.
Only recommended if you require backward compatibility with older ver-
sions of R.
... Additional arguments passed on to the mapped function.
name Name for the subfolder in the cache folder. If you do not specify, then cp_map
uses the name of the function combined with the name of x. This is dangerous,
since this generated name can appear multiple times in your code. Also, changing
x will result in a rerun of the code, although you may want to avoid this. (If a subset
of .x matches the cached one and the function is the same, then elements
of this subset won't be evaluated, but rather read from the cache.)
cp_options Options for the evaluation: wait, n_checkpoint, workers, fill.
• wait: An integer specifying after how many iterations the console shows
the intermediate results (default 1). If its value is between 0 and 1, it is
taken as the proportion of iterations to wait for (for example, if the length of .x
is 100 and you set it to 0.5, you get back the result after 50 iterations). Set
to Inf to get back the results only after the full evaluation. If its value is not
equal to Inf, then the evaluation runs in a background job.
• n_checkpoint: Number of checkpoints at which intermediate results are saved
(default = 100).
• workers: Number of CPU cores to use (the parallel package is called in the
background). Set to 1 (default) to avoid parallel computing.
• fill: Whether the length of the result should be the same as .x when you get
back a not fully evaluated result (default TRUE).
You can set these options also with options(currr.n_checkpoint = 200).
Additional options: currr.unchanged_message (TRUE/FALSE), currr.progress_length
Value
A tibble.
See Also
Other map variants: cp_map_chr(), cp_map_dbl(), cp_map_dfr(), cp_map_lgl(), cp_map()
Examples
# Run them on console!
# (functions need writing and reading access to your working directory and they also print)
avg_n <- function(.data, .col, x) {
Sys.sleep(.01)
.data |>
dplyr::pull({{ .col }}) |>
(\(m) mean(m) * x) ()
}
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
# same function, read from cache
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
remove_currr_cache()
cp_map_dfr Wrapper function of purrr::map. Apply a function to each element
of a vector, but save the intermediate data after a given number of
iterations.
Description
The map functions transform their input by applying a function to each element of a list or atomic
vector and returning an object of the same length as the input. cp_map functions work exactly the
same way, but create a secret folder in your current working directory and save the results whenever
they reach a given checkpoint. This way, if you rerun the code, it reads the results from the cache
folder and starts to evaluate where you finished.
• cp_map() always returns a list.
• map_lgl(), map_dbl() and map_chr() return an atomic vector of the indicated type (or die
trying). For these functions, .f must return a length-1 vector of the appropriate type.
Usage
cp_map_dfr(.x, .f, ..., name = NULL, cp_options = list())
Arguments
.x A list or atomic vector.
.f A function, specified in one of the following ways:
• A named function, e.g. mean.
• An anonymous function, e.g. \(x) x + 1 or function(x) x + 1.
• A formula, e.g. ~ .x + 1. You must use .x to refer to the first argument.
Only recommended if you require backward compatibility with older ver-
sions of R.
... Additional arguments passed on to the mapped function.
name Name for the subfolder in the cache folder. If you do not specify, then cp_map
uses the name of the function combined with the name of x. This is dangerous,
since this generated name can appear multiple times in your code. Also, changing
x will result in a rerun of the code, although you may want to avoid this. (If a subset
of .x matches the cached one and the function is the same, then elements
of this subset won't be evaluated, but rather read from the cache.)
cp_options Options for the evaluation: wait, n_checkpoint, workers, fill.
• wait: An integer specifying after how many iterations the console shows
the intermediate results (default 1). If its value is between 0 and 1, it is
taken as the proportion of iterations to wait for (for example, if the length of .x
is 100 and you set it to 0.5, you get back the result after 50 iterations). Set
to Inf to get back the results only after the full evaluation. If its value is not
equal to Inf, then the evaluation runs in a background job.
• n_checkpoint: Number of checkpoints at which intermediate results are saved
(default = 100).
• workers: Number of CPU cores to use (the parallel package is called in the
background). Set to 1 (default) to avoid parallel computing.
• fill: Whether the length of the result should be the same as .x when you get
back a not fully evaluated result (default TRUE).
You can set these options also with options(currr.n_checkpoint = 200).
Additional options: currr.unchanged_message (TRUE/FALSE), currr.progress_length
Value
A tibble.
See Also
Other map variants: cp_map_chr(), cp_map_dbl(), cp_map_dfc(), cp_map_lgl(), cp_map()
Examples
# Run them on console!
# (functions need writing and reading access to your working directory and they also print)
avg_n <- function(.data, .col, x) {
Sys.sleep(.01)
.data |>
dplyr::pull({{ .col }}) |>
(\(m) mean(m) * x) ()
}
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
# same function, read from cache
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
remove_currr_cache()
cp_map_lgl Wrapper function of purrr::map. Apply a function to each element
of a vector, but save the intermediate data after a given number of
iterations.
Description
The map functions transform their input by applying a function to each element of a list or atomic
vector and returning an object of the same length as the input. cp_map functions work exactly the
same way, but create a secret folder in your current working directory and save the results whenever
they reach a given checkpoint. This way, if you rerun the code, it reads the results from the cache
folder and starts to evaluate where you finished.
• cp_map() always returns a list.
• map_lgl(), map_dbl() and map_chr() return an atomic vector of the indicated type (or die
trying). For these functions, .f must return a length-1 vector of the appropriate type.
Usage
cp_map_lgl(.x, .f, ..., name = NULL, cp_options = list())
Arguments
.x A list or atomic vector.
.f A function, specified in one of the following ways:
• A named function, e.g. mean.
• An anonymous function, e.g. \(x) x + 1 or function(x) x + 1.
• A formula, e.g. ~ .x + 1. You must use .x to refer to the first argument.
Only recommended if you require backward compatibility with older ver-
sions of R.
... Additional arguments passed on to the mapped function.
name Name for the subfolder in the cache folder. If you do not specify, then cp_map
uses the name of the function combined with the name of x. This is dangerous,
since this generated name can appear multiple times in your code. Also, changing
x will result in a rerun of the code, although you may want to avoid this. (If a subset
of .x matches the cached one and the function is the same, then elements
of this subset won't be evaluated, but rather read from the cache.)
cp_options Options for the evaluation: wait, n_checkpoint, workers, fill.
• wait: An integer specifying after how many iterations the console shows
the intermediate results (default 1). If its value is between 0 and 1, it is
taken as the proportion of iterations to wait for (for example, if the length of .x
is 100 and you set it to 0.5, you get back the result after 50 iterations). Set
to Inf to get back the results only after the full evaluation. If its value is not
equal to Inf, then the evaluation runs in a background job.
• n_checkpoint: Number of checkpoints at which intermediate results are saved
(default = 100).
• workers: Number of CPU cores to use (the parallel package is called in the
background). Set to 1 (default) to avoid parallel computing.
• fill: Whether the length of the result should be the same as .x when you get
back a not fully evaluated result (default TRUE).
You can set these options also with options(currr.n_checkpoint = 200).
Additional options: currr.unchanged_message (TRUE/FALSE), currr.progress_length
Value
A logical vector.
See Also
Other map variants: cp_map_chr(), cp_map_dbl(), cp_map_dfc(), cp_map_dfr(), cp_map()
Examples
# Run them on console!
# (functions need writing and reading access to your working directory and they also print)
avg_n <- function(.data, .col, x) {
Sys.sleep(.01)
.data |>
dplyr::pull({{ .col }}) |>
(\(m) mean(m) * x) ()
}
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
# same function, read from cache
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
remove_currr_cache()
remove_currr_cache Remove currr’s intermediate data from the folder.
Description
Remove currr’s intermediate data from the folder.
Usage
remove_currr_cache(list = NULL)
Arguments
list A character vector specifying the name of the caches you want to remove (files
in the .currr.data folder). If empty (default), all caches will be removed.
Value
No return value, called for side effects
saving_map Run a map with the function, but save after a given number of
executions. This is an internal function; you are not supposed to use it
manually, but it can be called for a background job only if exported.
Description
Run a map with the function, but save after a given number of executions. This is an internal
function; you are not supposed to use it manually, but it can be called for a background job only if exported.
Usage
saving_map(.ids, .f, name, n_checkpoint = 100, currr_folder, ...)
Arguments
.ids Placement of .x to work with.
.f Called function.
name Name for saving.
n_checkpoint Number of checkpoints.
currr_folder Folder where cache files are stored.
... Additional arguments passed on to the mapped function.
Value
No return value, called for side effects
saving_map_nodot Run a map with the function, but save after a given number of
executions. This is an internal function; you are not supposed to use it
manually, but it can be called for a background job only if exported. This
function differs from saving_map, since it does not have a ... input. This is
necessary because job::job fails if ... is not provided for the cp_map
call.
Description
Run a map with the function, but save after a given number of executions. This is an internal
function; you are not supposed to use it manually, but it can be called for a background job only if exported.
This function differs from saving_map, since it does not have a ... input. This is necessary because
job::job fails if ... is not provided for the cp_map call.
Usage
saving_map_nodot(.ids, .f, name, n_checkpoint = 100, currr_folder)
Arguments
.ids Placement of .x to work with.
.f Called function.
name Name for saving.
n_checkpoint Number of checkpoints.
currr_folder Folder where cache files are stored.
Value
No return value, called for side effects |
@types/system-task | npm | JavaScript | [Installation](#installation)
===
> `npm install --save @types/system-task`
[Summary](#summary)
===
This package contains type definitions for system-task (<https://github.com/leocwlam/system-task>).
[Details](#details)
===
Files were exported from <https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/system-task>.
[index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/system-task/index.d.ts)
---
```
declare class SystemTask {
type: string;
constructor(taskType?: string, isAsyncProcess?: boolean, logMethod?: any);
/**
* @async
*/
log(type: string, message: string, detail?: any): void;
/**
* @async
*/
insertPreprocessItemsHandler(task: SystemTask): Promise<any>;
/**
* @async
*/
preprocessHandler(task: SystemTask, preProcessItem: any): Promise<any>;
/**
* @async
*/
processHandler(task: SystemTask, processItem: any): Promise<any>;
/**
* @async
*/
cleanupHandler(task: SystemTask, cleanupItems: any[]): Promise<any>;
isValidProcess(): void;
/**
* @async
*/
start(): void;
}
declare function asyncProcess(items: any[], executeAsyncCall: any, task: SystemTask, errors: any[]): Promise<any>;
declare function syncProcess(items: any[], executeSyncCall: any, task: SystemTask, errors: any[]): Promise<any>;
declare namespace SystemTask {
/**
* @async
*/
const SyncProcess: typeof syncProcess;
/**
* @async
*/
const AsyncProcess: typeof asyncProcess;
}
export = SystemTask;
```
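A small usage sketch based on these typings; the task type, handler bodies, and item shapes below are illustrative assumptions, not part of the published API.
```
import SystemTask = require("system-task");

class DemoTask extends SystemTask {
    constructor() {
        super("demo", true, console.log); // taskType, isAsyncProcess, logMethod
    }

    // Provide the items to be processed (illustrative data).
    async insertPreprocessItemsHandler(task: SystemTask): Promise<any> {
        return ["item-1", "item-2"];
    }

    // Handle a single item.
    async processHandler(task: SystemTask, processItem: any): Promise<any> {
        return { done: processItem };
    }
}

const task = new DemoTask();
task.isValidProcess();
task.start();
```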
### [Additional Details](#additional-details)
* Last updated: Wed, 18 Oct 2023 11:45:06 GMT
* Dependencies: none
[Credits](#credits)
===
These definitions were written by [<NAME>](https://github.com/leocwlam). |
re_memory | rust | Rust | Crate re_memory
===
Run-time memory tracking and profiling.
See `AccountingAllocator` and `accounting_allocator`.
Re-exports
---
* `pub use accounting_allocator::AccountingAllocator;`
Modules
---
* accounting_allocator: Track allocations and memory use.
* util
Structs
---
* CountAndSize: Number of allocations and their total size.
* MemoryHistory: Tracks memory use over time.
* MemoryLimit
* MemoryUse
* RamLimitWarner
Functions
---
* total_ram_in_bytes: Amount of available RAM on this machine.
Struct re_memory::accounting_allocator::AccountingAllocator
===
```
pub struct AccountingAllocator<InnerAllocator> { /* private fields */ }
```
Install this as the global allocator to get memory usage tracking.
Use `set_tracking_callstacks` or `turn_on_tracking_if_env_var` to turn on memory tracking.
Collect the stats with `tracking_stats`.
Usage:
```
use re_memory::AccountingAllocator;
#[global_allocator]
static GLOBAL: AccountingAllocator<std::alloc::System> = AccountingAllocator::new(std::alloc::System);
```
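A sketch of enabling tracking and collecting stats at run time, assuming the signatures implied by the module summary below (`set_tracking_callstacks(bool)`, `tracking_stats() -> Option<TrackingStatistics>`):
```
use re_memory::AccountingAllocator;

#[global_allocator]
static GLOBAL: AccountingAllocator<std::alloc::System> =
    AccountingAllocator::new(std::alloc::System);

fn main() {
    // Opt in to the (slightly expensive) callstack tracking of large allocations.
    re_memory::accounting_allocator::set_tracking_callstacks(true);

    let _buffer = vec![0_u8; 8 * 1024 * 1024]; // some allocation to account for

    // Gather statistics from the live tracking, if enabled.
    if let Some(stats) = re_memory::accounting_allocator::tracking_stats() {
        let _ = stats; // inspect the returned `TrackingStatistics` here
    }
}
```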
Implementations
---
### impl<InnerAllocator> AccountingAllocator<InnerAllocator>
#### pub const fn new(allocator: InnerAllocator) -> Self
Trait Implementations
---
### impl<InnerAllocator: Default> Default for AccountingAllocator<InnerAllocator>
#### fn default() -> AccountingAllocator<InnerAllocator>
Returns the “default value” for a type.
### impl<InnerAllocator: GlobalAlloc> GlobalAlloc for AccountingAllocator<InnerAllocator>
#### unsafe fn alloc(&self, layout: Layout) -> *mut u8
Allocate memory as described by the given `layout`.
#### unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut u8
Behaves like `alloc`, but also ensures that the contents are set to zero before being returned.
#### unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout)
Deallocate the block of memory at the given `ptr` pointer with the given `layout`.
#### unsafe fn realloc(&self, old_ptr: *mut u8, layout: Layout, new_size: usize) -> *mut u8
Shrink or grow a block of memory to the given `new_size` in bytes. The block is described by the given `ptr` pointer and `layout`.
Auto Trait Implementations
---
### impl<InnerAllocator> RefUnwindSafe for AccountingAllocator<InnerAllocator> where InnerAllocator: RefUnwindSafe
### impl<InnerAllocator> Send for AccountingAllocator<InnerAllocator> where InnerAllocator: Send
### impl<InnerAllocator> Sync for AccountingAllocator<InnerAllocator> where InnerAllocator: Sync
### impl<InnerAllocator> Unpin for AccountingAllocator<InnerAllocator> where InnerAllocator: Unpin
### impl<InnerAllocator> UnwindSafe for AccountingAllocator<InnerAllocator> where InnerAllocator: UnwindSafe
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Module re_memory::accounting_allocator
===
Track allocations and memory use.
Structs
---
* AccountingAllocator: Install this as the global allocator to get memory usage tracking.
* TrackingStatistics
Functions
---
* global_allocs: Total number of live allocations, and the number of live bytes allocated as tracked by `AccountingAllocator`.
* is_tracking_callstacks: Are we doing (slightly expensive) tracking of the callstacks of large allocations?
* set_tracking_callstacks: Should we do (slightly expensive) tracking of the callstacks of large allocations?
* tracking_stats: Gather statistics from the live tracking, if enabled.
* turn_on_tracking_if_env_var: Turn on callstack tracking (slightly expensive) if a given env-var is set.
Struct re_memory::CountAndSize
===
```
pub struct CountAndSize {
pub count: usize,
pub size: usize,
}
```
Number of allocations and their total size.
Fields
---
`count: usize`: Number of allocations.
`size: usize`: Number of bytes.
Implementations
---
### impl CountAndSize
#### pub const ZERO: Self = _
#### pub fn add(&mut self, size: usize)
Add an allocation.
#### pub fn sub(&mut self, size: usize)
Remove an allocation.
Trait Implementations
---
### impl Clone for CountAndSize
#### fn clone(&self) -> CountAndSize
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for CountAndSize
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for CountAndSize
#### fn default() -> CountAndSize
Returns the “default value” for a type.
### impl Hash for CountAndSize
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized
Feeds a slice of this type into the given `Hasher`.
### impl PartialEq for CountAndSize
#### fn eq(&self, other: &CountAndSize) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Copy for CountAndSize
### impl Eq for CountAndSize
### impl StructuralEq for CountAndSize
### impl StructuralPartialEq for CountAndSize
Auto Trait Implementations
---
### impl RefUnwindSafe for CountAndSize
### impl Send for CountAndSize
### impl Sync for CountAndSize
### impl Unpin for CountAndSize
### impl UnwindSafe for CountAndSize
Struct re_memory::MemoryHistory
===
```
pub struct MemoryHistory {
pub resident: History<i64>,
pub counted: History<i64>,
pub counted_gpu: History<i64>,
pub counted_store: History<i64>,
pub counted_blueprint: History<i64>,
}
```
Tracks memory use over time.
Fields
---
`resident: History<i64>`: Bytes allocated by the application according to the operating system.
Resident Set Size (RSS) on Linux, Android, Mac, iOS.
Working Set on Windows.
`counted: History<i64>`: Bytes used by the application according to our own memory allocator’s accounting.
This can be smaller than `Self::resident` because our memory allocator may not return all the memory we free to the OS.
`counted_gpu: History<i64>`: VRAM bytes used by the application according to its own accounting, if a tracker was installed.
Values are usually a rough estimate, as the actual amount of VRAM used depends a lot on the specific GPU and driver. Typically only raw buffer & texture sizes are accounted for.
`counted_store: History<i64>`: Bytes used by the datastore according to its own accounting.
`counted_blueprint: History<i64>`: Bytes used by the blueprint store according to its own accounting.
Implementations
---
### impl MemoryHistory
#### pub fn is_empty(&self) -> bool
#### pub fn capture(
&mut self,
counted_gpu: Option<i64>,
counted_store: Option<i64>,
counted_blueprint: Option<i64>
)
Add data to history
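A minimal sketch of recording a sample (all optional accounting inputs omitted here; what is useful depends on which trackers are installed):
```
let mut history = re_memory::MemoryHistory::default();
// One sample with no GPU/store/blueprint accounting supplied.
history.capture(None, None, None);
assert!(!history.is_empty());
```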
Trait Implementations
---
### impl Default for MemoryHistory
#### fn default() -> Self
Returns the “default value” for a type.
Auto Trait Implementations
---
### impl RefUnwindSafe for MemoryHistory
### impl Send for MemoryHistory
### impl Sync for MemoryHistory
### impl Unpin for MemoryHistory
### impl UnwindSafe for MemoryHistory
Struct re_memory::MemoryLimit
===
```
pub struct MemoryLimit {
pub limit: Option<i64>,
}
```
Fields
---
`limit: Option<i64>`: Limit in bytes.
This is primarily compared to what is reported by `crate::AccountingAllocator` (‘counted’).
We limit based on this instead of `resident` (RSS) because `counted` is what we have immediate control over, while RSS depends on what our allocator (MiMalloc) decides to do.
Implementations
---
### impl MemoryLimit
#### pub fn parse(limit: &str) -> Result<Self, String>
The limit can either be absolute (e.g. “16GB”) or relative (e.g. “50%”).
#### pub fn is_exceeded_by(&self, mem_use: &MemoryUse) -> Option<f32>
Returns how large a fraction of memory we should free to get down to the exact limit.
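For illustration, a hedged sketch of how these two methods might be used together with `MemoryUse::capture` (documented below); the “16GB” string and the logging are assumptions, not part of the crate docs:

```
use re_memory::{MemoryLimit, MemoryUse};

fn main() {
    // The limit string can be absolute (e.g. "16GB") or relative (e.g. "50%").
    let limit = MemoryLimit::parse("16GB").expect("invalid memory limit string");

    // Compare the limit against a snapshot of current memory use.
    let mem_use = MemoryUse::capture();
    if let Some(fraction_to_free) = limit.is_exceeded_by(&mem_use) {
        // `fraction_to_free` is the fraction of memory that should be freed
        // to get back down to the configured limit.
        eprintln!("Over budget: free ~{:.0}% of memory", fraction_to_free * 100.0);
    }
}
```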
Trait Implementations
---
### impl Clone for MemoryLimit
#### fn clone(&self) -> MemoryLimit
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for MemoryLimit
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for MemoryLimit
#### fn default() -> MemoryLimit
Returns the “default value” for a type.
### impl PartialEq<MemoryLimit> for MemoryLimit
#### fn eq(&self, other: &MemoryLimit) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Copy for MemoryLimit
### impl Eq for MemoryLimit
### impl StructuralEq for MemoryLimit
### impl StructuralPartialEq for MemoryLimit
Auto Trait Implementations
---
### impl RefUnwindSafe for MemoryLimit
### impl Send for MemoryLimit
### impl Sync for MemoryLimit
### impl Unpin for MemoryLimit
### impl UnwindSafe for MemoryLimit
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
T: Clone,
#### fn __clone_box(&self, _: Private) -> *mut ()
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct re_memory::MemoryUse
===
```
pub struct MemoryUse {
pub resident: Option<i64>,
pub counted: Option<i64>,
}
```
Fields
---
`resident: Option<i64>`: Bytes allocated by the application according to the operating system.
Resident Set Size (RSS) on Linux, Android, Mac, iOS.
Working Set on Windows.
`None` if unknown.
`counted: Option<i64>`: Bytes used by the application according to our own memory allocator’s accounting.
This can be smaller than `Self::resident` because our memory allocator may not return all the memory we free to the OS.
`None` if `crate::AccountingAllocator` is not used.
Implementations
---
### impl MemoryUse
#### pub fn capture() -> Self
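A small sketch of taking two snapshots and diffing them; it assumes the `Sub` implementation listed below subtracts field-wise and that the fields stay `Option<i64>`:

```
use re_memory::MemoryUse;

fn main() {
    let before = MemoryUse::capture();

    // Allocate something noticeable so the counters move.
    let _buffer = vec![0u8; 64 * 1024 * 1024];

    let after = MemoryUse::capture();

    // Assumed field-wise subtraction via the `Sub` impl documented below.
    let delta = after - before;
    println!("resident delta: {:?}, counted delta: {:?}", delta.resident, delta.counted);
}
```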
Trait Implementations
---
### impl Clone for MemoryUse
#### fn clone(&self) -> MemoryUse
Returns a copy of the value. Read more
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for MemoryUse
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl PartialEq<MemoryUse> for MemoryUse
#### fn eq(&self, other: &MemoryUse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Sub<MemoryUse> for MemoryUse
#### type Output = MemoryUse
The resulting type after applying the `-` operator.
#### fn sub(self, rhs: Self) -> Self::Output
Performs the `-` operation.
### impl Eq for MemoryUse
### impl StructuralEq for MemoryUse
### impl StructuralPartialEq for MemoryUse
Auto Trait Implementations
---
### impl RefUnwindSafe for MemoryUse
### impl Send for MemoryUse
### impl Sync for MemoryUse
### impl Unpin for MemoryUse
### impl UnwindSafe for MemoryUse
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
T: Clone,
#### fn __clone_box(&self, _: Private) -> *mut ()
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct re_memory::RamLimitWarner
===
```
pub struct RamLimitWarner { /* private fields */ }
```
Implementations
---
### impl RamLimitWarner
#### pub fn warn_at_fraction_of_max(fraction: f32) -> Self
#### pub fn update(&mut self)
Warns if we have exceeded the limit.
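A brief sketch of the intended call pattern, assuming only the two methods documented above; the 0.75 threshold and the call site are illustrative:

```
use re_memory::RamLimitWarner;

fn main() {
    // Warn once the process exceeds 75% of the machine's total RAM.
    let mut warner = RamLimitWarner::warn_at_fraction_of_max(0.75);

    // Call periodically, e.g. once per frame or per iteration of a work loop;
    // it warns only if the limit has been exceeded.
    warner.update();
}
```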
Auto Trait Implementations
---
### impl RefUnwindSafe for RamLimitWarner
### impl Send for RamLimitWarner
### impl Sync for RamLimitWarner
### impl Unpin for RamLimitWarner
### impl UnwindSafe for RamLimitWarner
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Function re_memory::total_ram_in_bytes
===
```
pub fn total_ram_in_bytes() -> u64
```
Amount of available RAM on this machine.
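A trivial usage sketch (the GiB conversion is only for display and is not part of the crate docs):

```
use re_memory::total_ram_in_bytes;

fn main() {
    let bytes = total_ram_in_bytes();
    let gib = bytes as f64 / (1024.0 * 1024.0 * 1024.0);
    println!("Total RAM: {gib:.1} GiB");
}
```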
### What brands can learn from a best-selling author who uses Amazon Ads
See how this author used campaign insights to sell more than 150,000 copies of his books.
Discover how brands big and small create engaging campaigns, reach audiences, and measure results.
Discover the power of purpose-driven brands and how consumer sentiments around sustainability, DEI, and brand trust impact customer preference in our 2023 Higher Impact report.
Discover how to build your campaigns with Amazon Ads products, services, and strategies through our courses and certifications.
unBoxed is an annual conference hosted by Amazon Ads. It announces new solutions and innovations to help companies connect with customers, grow their brand, and prepare for the future. In 2023, we’re showcasing fresh insights from industry leaders, training and certification classes, engaging breakouts, interactive demos, after-hours events, and an exclusive musical performance.
The unBoxed conference is held at the <NAME> Convention Center in New York City. In 2023, the conference is taking place October 24 through October 26.
unBoxed attendees include marketing leaders, partners, agencies, small-business owners, advertising experts, and more. Hosted by Amazon Ads, the conference showcases how our latest product offerings can help businesses move forward.
Date: 2023-10-12
October 12, 2023 | By <NAME>, Senior Content Strategist
The world of advertising is constantly evolving. To keep up, savvy advertisers are always trying to improve their skills and learn the latest ways to grow their business, and Amazon Ads offers a robust library of certifications to help advertisers enhance their Amazon Ads knowledge.

From October 24 to 26, our annual conference unBoxed returns to New York City, where you can learn about tools and innovations to help accomplish your goals and scale your reach. And this year, we’re offering Educate & Elevate, a pre-event on October 24 designed to help you up-level and up-skill your Amazon Ads knowledge with our certification program. Certifications, available for free on the learning console, validate your proficiency in Amazon Ads products and solutions. Interested in brushing up on your Amazon Ads knowledge before you join us at unBoxed? To help kick off your learning journey, we tapped a few partners hosting our hands-on-keyboard sessions and Educate & Elevate certification prep workshops to share the Amazon Ads Certifications they think every advertiser should take.
“The Amazon Advertising Foundations Certification is a necessary base for all additional Amazon Ads knowledge and a perfect foundation to build off,” says <NAME>, director of data intelligence at Perpetua.
<NAME>, co-founder of Acorn-i, also prioritizes this certification, which, along with the Amazon Ads Retail Certification, is required for all staff at her company. “The [certifications] provide a valuable foundation to build understanding of retail and ads, and how the two are interconnected,” Leon says. “I would advocate all new sellers, vendors, and Amazon practitioners to take these modules annually, and then build on these with the advanced modules depending on role function or area of expertise that needs enhancing.”
The Amazon Ads Retail Certification covers topics like understanding the diverse selling partner types, getting familiar with Amazon’s fulfillment solutions, and exploring the core concepts of selling in the Amazon store. <NAME>, co-founder and CEO of Ad Advance, thinks it’s a great starting point to learn about Amazon’s retail landscape. “For advertising professionals who manage online commerce operations and campaigns on behalf of Amazon’s selling partners, this certification is like a compass, guiding you through Amazon.com,” Shelerud says. “It has some challenging topics, which makes it good for all areas of expertise.”

<NAME>, CEO of BetterAMS, called this certification a personal favorite. “I never managed the retail side of the business before,” says Wishon. “So [the courses in this certification] helped me understand the challenges our brands were facing and provided insights into areas of the retail business that could be directly influencing the success of our advertising campaigns.”
In addition to the Amazon Ads Retail Certification, Wishon recommends taking the Amazon Ads Campaign Optimization Certification. “This is a comprehensive assessment of knowledge and expertise in all of the core Amazon Ads,” Wishon explains. “It not only tests your ability to understand and explain advertising concepts, but it also requires you to demonstrate your skills in hands-on optimization. It’s a valuable asset for anyone who manages campaigns.”
“This certification gives you the nuances of the Amazon DSP advertising solution,” says <NAME>, CEO and co-founder of Intentwise. “It’s critical for all advertisers who are advising clients or optimizing DSP campaigns because it teaches you the ins and outs of programmatic advertising on Amazon, whether endemic or non-endemic.”
<NAME> also recommends the Amazon Marketing Cloud Certification. “It helps you tie everything back together and know how all the Amazon advertising solutions work in cohesion,” she says. Leon is also a fan of this certification, saying it’s useful for “practitioners who want to understand deeper analytics and create bespoke insights for clients.” She adds that it’s a good challenge for the intermediate Amazon Ads advertiser.
Shelerud also suggests the Amazon Video Ads Certification. “This certification equips advertisers with the skills to effectively incorporate video into their ad portfolios. It ensures that you’re ready to tap into the immense potential of video ads and engage customers in a more relevant way,” Shelerud says. “Even a newer seller can take advantage of higher-converting placements through Sponsored Brands video.”

Reddy echoes this by noting the courses within this certification “give advertisers the knowledge they need to add video campaigns to their strategy.” He adds, “As video advertising continues to be more accessible, it’s critical to understand all the different video ad products offered by Amazon Ads and the different goals for each.”

In-person tickets to unBoxed have sold out, but don’t miss out on the action. You can register for our livestream to watch our keynotes. If you’re joining us in person at unBoxed, register for Educate & Elevate to learn about Amazon Ads Certifications by selecting the “seat icon” on the Educate & Elevate agenda sessions. Pre-registration is required.
Since 2021, Amazon Ads has conducted the annual Higher Impact study1 to better understand consumer values and how they impact brand preference. Over the past few years, consumers have navigated numerous challenges, including growing economic uncertainty. With rising inflation in 2022, consumer confidence has ebbed and flowed, with more consumers being increasingly mindful of their spending.

Yet, our research shows consumers, especially among younger generations, still want to support brands whose values align with their own.2 Explore what this study suggests about diversity, equity, and inclusion (DEI), sustainability, and brand trust.
## Balancing shifting priorities
With rising inflation, consumers are re-evaluating their priorities.
Over the past three years, consumers have been watching their spending, with 84% of global consumers stating they are re-evaluating their needs to shop more effectively: a 9% increase from 2022. And more than ever before, consumers said they have modified their way of life to concentrate more on things that are of value.
Percentage of consumers who agree they have modified their way of life to concentrate more on things that are of value
Despite being more budget-conscious, 7 in 10 consumers—an 11% increase from 2022—said they make a point to support brands that donate money or supplies to causes that are important to them, with Italian shoppers leading the way at 84%.
Consumers who identify as women or nonbinary were also more likely to support brands that donate to causes they care about, compared to those who identify as men.
### So what issues are most top of mind?
From poverty to health care and the environment, consumers have a lot on their minds. But for the first time in our study, economic uncertainty ranked in the top three causes that are most important to them globally.
# Top 5 causes that are most important to consumers, globally:
* Health care / Health care access (32%)
* Health and wellness (27%)
* Economic uncertainty (23%)
* Environmentalism (21%)
* Poverty (20%)
# Top 5 causes that are most important to U.S. consumers:
* Mental health awareness (29%)
* Health care / Health care access (26%)
* Economic uncertainty (25%)
* Human rights and social issues (21%)
* Homelessness / Unhoused persons (20%)
Brands that want to successfully weather this period of economic uncertainty should be moving beyond the traditional promotions and messaging strategies, and embracing value-based innovation that delivers meaningful, positive impact on consumers, the business, and the brand. Ultimately, building empathy, emotional connection, trust, and familiarity is key here.
— <NAME>, global chief client officer, WPP
### Top 3 causes by generation
What matters to consumers will vary widely by geography and age. Among the different generations surveyed, Gen Z adults are more likely to care about social causes while Millennials, Gen X, and Boomers 3 are more likely to care about health care and the economy.
### Top causes among adult Gen Z consumers:
* Mental health awareness (26%)
* Human rights and social issues (23%)
* Animal rights (19%)
### Top causes among Millennial consumers:
* Health care / Health care access (31%)
* Health and wellness (26%)
* Economic uncertainty (24%)
### Top causes among Gen X consumers:
* Health care / Health care access (35%)
* Health and wellness (28%)
* Economic uncertainty (25%)
### Top causes among Boomer consumers:
* Health care / Health care access (40%)
* Economic uncertainty (30%)
* Environmentalism (26%)
### Why brands should care
Because consumers are voting with their dollars, it’s important for brands to pay attention to what their customers care deeply about, especially as they are more closely monitoring their spending.
7 in 10 (70%) of global consumers believe they can vote with their dollars and look to support brands that are good citizens.
8 in 10 (82%) of global consumers said it was important to support or buy from small-business owners during times of economic uncertainty, with the greatest agreement coming from older generations.
* Boomers: 85%
* Gen X: 83%
* Millennials: 83%
* Gen Z: 79%
More than 60% of sales in Amazon’s store are from independent sellers—and almost all of those are small and medium-sized businesses. The small businesses selling and thriving in Amazon’s store are at the heart of their local communities, and they include many women-owned, Black-owned, and military family–owned businesses, as well as artisans who create handcrafted goods.
As consumers take action to support brands that align with their values and support causes that matter to them, they recognize that these brands are earning their trust.
## Brand trust
Earning consumer trust is essential for brands—here’s what you can do.
From functional benefits like low prices to quality and reliability of products or services to commitments on social and environmental causes, there are many ways brands can earn trust with consumers—but not all weigh the same in their minds.
### Top ways a brand can earn consumer trust:
* Good value for money (39%)
* Low prices (26%)
* Quality products and services (26%)
* Products and services that are consistent and reliable (20%)
And while value for their money reigns as the top way to earn trust across all generations, social and ethical issues significantly influence some generations, with Gen Z adults ranking employee treatment (14%) higher than other generations. Boomers and Gen Z adults saw eye to eye on brands protecting the environment as an important way to earn trust (18% and 17%, respectively), more so than Gen X and Millennials.

As consumers are re-evaluating their needs more than ever, where they shop and how they spend their money is critical to them—and trust in brands’ goods and services is key. When consumers lose trust in a brand, it can be for a variety of reasons.
### Top ways a brand can lose consumer trust:
* Not offering good value for money (35%)
* Offering poor-quality products/services (35%)
* Not offering consistent and reliable products/services (24%)
* Providing poor or unfair customer service (24%)
* Having a poor experience with the brand (23%)
Losing trust with consumers looks different among generations: Gen X and Boomers ranked not offering good value for money as the top way to lose trust (40% and 39%, respectively), while Gen Z and Millennials were more concerned with poor product quality.
### Consumers expect more from brands, and earning their trust continues to evolve with their values.
Globally, nearly 8 in 10 (78%) consumers are tired of brands acting like they are exempt from environmental responsibility (7% increase YoY).
And there’s work to be done by brands to earn more trust with consumers when it comes to their messaging around the key issues of sustainability and DEI.
Just over half of respondents (58%) trust the credibility of sustainability and DEI messaging from brands.

As brands continue to earn and maintain their trust with shoppers globally, they’ll need to consider more opportunities to showcase their efforts around sustainability, DEI, and other key issues important to consumers if they plan to be the brands those shoppers have come to know and rely on—especially in the face of evolving global issues and economic uncertainty.
If a brand consistently delivers on their product promise to maintain performance and delight customers, they will earn and maintain consumer trust every day. Clever campaigns definitely garner attention, but won’t sustain performance if brands haven't aligned their brand basics with their consumers. Great marketing, when informed with a solid brand strategy and consumer understanding, can fire on all brand cylinders.
— <NAME>, SVP head of omnichannel and emerging marketplaces, Publicis
## Sustainability
The environment remains an important focus area for global consumers—even amid other pressing priorities.
The number of global consumers who said they seek out brands that are sustainable in their business practices is up 6% compared to last year. With a growing number of (sometimes competing) priorities, consumers are trying to balance their needs, values, and budgets. Still, more than half of consumers (52%) said they’re willing to pay more for a product that has a third-party sustainability certification, and as much as 62% of adult Gen Z consumers were willing to pay the higher price tag, compared to just 41% of Boomers.
### And sustainability encompasses many areas, with the top environmental issues for global consumers including:
* Climate change / Global warming (33%)
* Plastic waste (24%)
* Water pollution and drinking water quality (22%)
* Oceans and ocean pollution (19%)
* Air pollution and air quality (19%)
* Loss of wildlife habitats and parks (18%)
### Your sustainability commitment matters
Consumers globally agree that taking care of the environment matters, but younger consumers and those in parts of Western Europe are clear on where they stand, and willing to invest more in brands that share their values.
As younger generations’ spending power increases, their values may play a large part in their spending habits, as 6 in 10 (65%) adult Gen Z and Millennials said it was important for brands that they purchase from are committed to sustainability.
Value-led brands in action: Inside Logitech’s climate action efforts to innovate products with sustainability in mind
### To authentically engage sustainably minded shoppers, a brand’s messaging must be in terms they understand.
Consumers have heightened awareness when it comes to terms associated with sustainability. The top five most familiar sustainability terms for consumers include:
* Climate change
* Global warming
* Eco-friendly
* Biodegradable
* Organic
However, consumers are less familiar with terms like carbon neutral (59%), net-zero carbon emissions (58%), and greenwashing (41%), which had the lowest familiarity.

Accurately and authentically educating consumers around the use of sustainability terms is important for brands, especially to continue building trust with consumers and credibility in their sustainability efforts. It’s important that brands understand their sustainability messaging positioning and that they are not greenwashing in any way, for example, by conflating capabilities or oversimplifying sustainability efforts or requirements, and that they are being as transparent as possible.
Value-led brands in action: See how General Mills and Amazon Ads supported the National Park Foundation during Earth Month.
### Ways brands can help sustainably minded consumers navigate their shopping with greater ease
Brands can take many actions to showcase their more-sustainable efforts, especially when it comes to their product offerings.
Combined with material and measurable progress, metrics can speak volumes. Consumers want more transparency into a brand’s supply chain, details of their sourcing, and methods of disposal. Communicating these endeavors via product descriptions, on packaging materials, and through a brand's own presence can help inform a prospective buyer and help leading companies stand out from their competition.
— <NAME>, global head of strategic partnerships, GroupM
Consumers are doing their own research when it comes to searching for more-sustainable options prior to buying—and they’re most likely to trust products with credible third-party certifications (35%, which is up 2% YoY) when making their decisions.

According to consumers, the most trustworthy sources when researching sustainability/sustainable options prior to purchase:
* Third-party certifications (35%)
* Search engines (33%)
* Sustainability experts and advocates (29%)
Brands should consider the value opportunity of working to qualify their products or services via credible third-party certifications to help sustainably minded consumers navigate their shopping experiences.
Programs like Climate Pledge Friendly (CPF) help make that possible by highlighting products with sustainability certifications via their CPF badge on products.
In 2022, sales of U.S. Climate Pledge Friendly (CPF) products in fashion, health and beauty, grocery, and auto increased 84% year over year. CPF uses sustainability certifications to highlight products that support their commitment to help preserve the natural world, while encouraging their selling partners to prioritize more-sustainable practices.
## Diversity, equity, and inclusion (DEI)
Greater diversity, equity, and inclusion is important to consumers, and younger generations are demanding more from brands.
Globally, consumers continue to value DEI as 7 in 10 (73%) believe it’s important that brands they buy from take action to promote DEI. That’s up 7% from the previous year, and Gen Z adults have a higher expectation of brands around this (77%).
Value-led brands in action: Mastercard on working with Amazon Ads to celebrate Black women-owned small businesses
### So how can brands meet these expectations authentically?
Consumers see value in companies incorporating DEI into their core offerings, like their products or services, as well as expecting them to go beyond their corporate walls and take action in communities.
According to consumers, the most authentic ways for brands to demonstrate DEI commitments include:
* Broader actions and support beyond a brand’s core offerings (52%)
* Directly through a brand’s core offerings (48%)
This means that brands need to be thinking about both types of actionable opportunities (such as internal corporate actions and external community actions) for authentically committing to DEI in ways that consumers expect.

With younger generations, corporate commitments to DEI rank higher (77% for Gen Z adults, and 75% for Millennials); therefore, when brands look to the future, they may want to consider a combination of ensuring they incorporate DEI into their corporate brand DNA and through their actions in their communities. These efforts can help earn trust from consumers and future generations of shoppers.
We are building diversity into the fabric of who we are as a brand and what we represent to our customers. We remain steadfast on our diversity, equity, and inclusion (DEI) journey and have continued to scale our work through technology.
— <NAME>, vice president inclusive experience and technology, Amazon
### Consumers expect brands to focus on specific aspects of DEI
There are many areas of DEI that consumers care about, and these are important for brands to note when engaging with consumers so they provide the best support possible on these sensitive topics.
# Top DEI areas of most importance to global consumers include:
* Gender equality (29%)
* Racial equity (27%)
* Income (20%)
* Education (20%)
* Age (20%)
* Emotional, psychological, or mental health conditions (19%)
* Physical disability (19%)
# Top 4 DEI areas of most importance to U.S. consumers:
* Racial equity (31%)
* Gender equality (24%)
* Emotional, psychological, or mental health conditions (23%)
* Income (19%)
Some priorities weigh more importantly than others to certain generations. While gender equality is the most important area across all generations, Boomers rank age (31%) as second in priority, in contrast to both adult Gen Z and Millennials who believe racial equity is the next most important area.
Younger generations of consumers want to see visible, sustained investments in diversity, equity, and inclusion to demonstrate it is a long-term priority. They’re expecting to see actions from companies and brands that not only reflect their communities, but elevate and lift those communities as well.
— <NAME>, vice president people, experience, and technology, Amazon
### Brands need to meet consumers’ evolving DEI expectations—especially in certain industries.
As brands across the world work to authentically reconcile years of underrepresentation, bias, and inaccessibility, consumers have their sights set on certain industries to drive impactful change.
Value-led brands in action: How a mom’s search for a doll that looked like her daughter became the successful brand <NAME>
Amazon launched the Black Business Accelerator (BBA) to help build sustainable diversity and provide growth opportunities for Black-owned businesses. The BBA aims to drive economic equity for Black-owned businesses, providing entrepreneurs with resources to thrive as business leaders.
Nearly half of the consumers across the seven countries surveyed think it is important that the companies/brands they purchase from are committed to DEI, in industries such as grocery (49%), fashion (49%), travel and hospitality (48%), entertainment (48%), and consumer products (48%).
When brands show their commitment to DEI by not only incorporating it into their company DNA but by the actions they take in their communities, consumers will take notice.
Real change is going to take time, but as long as we remain committed to embracing authenticity in everything we do, we’ll continue to move in the right direction and create an industry that is inclusive, welcoming, and empowering for all.
— <NAME>, vice president of commercial marketing, PepsiCo Beverages North America
## Conclusion
As brands look to make deeper, authentic connections with consumers, they need to consider a myriad of elements affecting shoppers today. From consumer values to the global economy, these factors weigh significantly in the minds, and wallets, of shoppers. While economic uncertainty has made them re-evaluate their spending and what they are willing to pay for, consumer values still significantly factor in their decision-making of which brands they buy from, which they trust, and what they expect of brands on these important topics. Now is the time for brands to clearly demonstrate their commitments through authentic actions, both internal and external, of how they are aligned with consumers’ values.
By engaging and connecting on shared values, brands can earn the trust and loyalty that drive long-term relationships. That’s why we’ve commissioned this research for a third year in a row, as consumer sentiments are evolving quickly. To better understand consumer needs and motivations, Amazon Ads commissioned Environics Research to conduct an online survey of 7,213 consumers across seven key countries.
* Canada (1,003 consumers)
* France (1,019 consumers)
* Germany (1,008 consumers)
* Italy (1,011 consumers)
* Spain (1,042 consumers)
* United Kingdom (1,003 consumers)
* United States (1,127 consumers)
Percentage of consumers who agree with the following statements | Global | Canada | France | Germany | Italy | Spain | UK | US |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
More brands should do their part in helping the world, especially during difficult times like these. | 88% | 88% | 87% | 84% | 92% | 90% | 87% | 84% |
It’s important to me that brands take action in times of humanitarian crises and natural disasters. | 82% | 81% | 85% | 79% | 87% | 89% | 77% | 79% |
I am more likely to purchase products or services from brands whose values align with my own. | 81% | 80% | 82% | 74% | 86% | 85% | 77% | 79% |
I absolutely love it when a brand can make me feel inspired in some way. | 79% | 77% | 77% | 74% | 81% | 86% | 77% | 81% |
I make a point to support brands that donate money or supplies to causes that are important to me. | 71% | 68% | 68% | 66% | 84% | 79% | 62% | 70% |
I believe I can vote with my dollars and look to support brands that are good citizens. | 70% | 73% | 68% | 65% | 70% | 70% | 65% | 75% |
I am more likely to purchase an item from a brand that is willing to take a stand on social issues and conflicts. | 69% | 66% | 70% | 67% | 77% | 72% | 65% | 65% |
I am prepared to pay more for brands, products, and services that are truly authentic. | 68% | 63% | 65% | 64% | 78% | 67% | 67% | 71% |
I’m tired of brands acting like they are exempt from environmental responsibility. | 78% | 77% | 79% | 75% | 84% | 79% | 77% | 75% |
I avoid using the services or products of companies which I consider to have a poor environmental record. | 67% | 67% | 69% | 66% | 71% | 74% | 65% | 62% |
I actively seek out brands that are sustainable in their business practices. | 66% | 63% | 72% | 65% | 74% | 69% | 57% | 63% |
I am prepared to pay more for an environmentally friendly product. | 61% | 59% | 65% | 55% | 70% | 62% | 55% | 60% |
I will not compromise on the products or services I purchase because it is important for me to support brands that align with my values no matter how much they cost or the state of the current economy. | 61% | 53% | 62% | 57% | 70% | 67% | 55% | 59% |
Everyone should have their own lifestyle, religious beliefs, and sexual preferences, even if it makes them different from everyone else. | 87% | 90% | 84% | 84% | 90% | 89% | 89% | 86% |
If you want to learn and grow in life, it is essential to meet and converse with different kinds of people, who come from all kinds of backgrounds. | 85% | 87% | 83% | 81% | 88% | 84% | 87% | 85% |
I learn a great deal from meeting people who are different from me. | 84% | 88% | 80% | 77% | 86% | 86% | 85% | 85% |
It’s important to me that brands I buy from take action to promote diversity, equity, and inclusion. | 73% | 70% | 77% | 63% | 84% | 78% | 70% | 69% |
I want to see more diversity in advertising (i.e., people of all genders, races, and sexual identities). | 68% | 66% | 69% | 55% | 72% | 75% | 69% | 70% |
I trust the credibility of diversity, equity, and inclusion (DEI) messaging from brands. | 58% | 56% | 58% | 51% | 65% | 65% | 54% | 57% |
I want to be the one who decides when and where I interact with a brand. | 90% | 90% | 86% | 87% | 93% | 92% | 90% | 90% |
Whatever the type of product, whenever I buy something, price is always very important. | 84% | 89% | 82% | 78% | 82% | 86% | 88% | 85% |
I am increasingly re-evaluating my needs in order to shop more effectively. | 84% | 83% | 86% | 78% | 90% | 88% | 79% | 81% |
I try to gather a lot of information about products before I make an important purchase. | 80% | 83% | 78% | 79% | 85% | 76% | 82% | 81% |
I have recently modified my way of life to concentrate more on things that are really important to me. | 77% | 79% | 74% | 77% | 77% | 77% | 75% | 78% |
Most small businesses do their best to provide high-quality goods and services to their customers. | 83% | 87% | 81% | 83% | 77% | 79% | 86% | 86% |
It’s important for me to support / buy from small-business owners, especially during times of economic uncertainty. | 82% | 81% | 85% | 80% | 83% | 86% | 78% | 83% |
Small businesses generally try to strike a fair balance between profits and the public interest. | 74% | 75% | 77% | 71% | 71% | 68% | 76% | 77% |
1 Amazon Ads with Environics Research, 2023 Higher Impact report, CA, DE, ES, FR, IT, UK, and US.
2 YoY comparisons reflect regional respondent sample changes, with Japan included in the 2022 data analysis but not in 2021 or 2023. Italy and Spain are new to the 2023 research and were not included in the 2021 or 2022 research.
3 This study categorizes generational demographics based on birth year: Adult Gen Z (1995-2005), Millennials (1980-1994), Gen X (1965-1979), and Boomers (1945-1964).
Date: 2023-09-09
September 9, 2023 | By <NAME>, Sr. Marketing Manager
At the 2023 Cannes Lions International Festival of Creativity, Amazon Ads celebrated nine incredible small businesses by showcasing their products and stories to attendees at Amazon Port.
More than 60% of sales in Amazon’s store are from independent sellers—and almost all of those are small and medium-size businesses. These small businesses selling in the Amazon store are often at the heart of their local communities, and include businesses owned by artisans, women, Black entrepreneurs, and military families. Independent selling partners, entrepreneurs, and small businesses continue to find success with Amazon more than 20 years after the virtual shelves opened.

We had a chance to speak to several of the founders that we partnered with at Cannes Lions about their business growth and experience selling with Amazon. While diverse in their product offerings and their paths to success, these founders share a common thread: Advertising during retail tentpole moments helps drive their business success on Amazon and helps foster meaningful customer connection when it matters most.
## What is a retail tentpole event?
Retail tentpole events, also known as key shopping moments, are large-scale opportunities for brand building and driving sales on the Amazon store and third-party destinations. All year round, brands can capitalize on these events to boost traffic on their product pages, increase sales, build customer loyalty, and drive momentum toward long-term profitability.
Retail tentpole events
## Three tips on how small businesses can utilize tentpole events
There are three key ways your business can fuel your funnel, drive conversions, and achieve year-round results. It all starts with utilizing retail tentpole moments effectively.
### 1. Get your campaign in front of the right audience
<NAME>, the founder and CEO of Mother’s Shea, a mother-and-daughter social enterprise dedicated to empowering women through nature’s wonder balm, says, “By leveraging targeted advertising campaigns, we have been able to effectively showcase Mother’s Shea products to a relevant audience, increase visibility, and drive more traffic to the listings.” Akuete continues, “The 2022 holiday season with Amazon surpassed our expectations. As a small, Black-owned family business, we depend on high-traffic periods to attract first-time customers and ensure they become repeat customers. Our 2022 holiday sales were up 112% compared to 2021, with traffic up 87% and conversion up by 10% compared to the pre-holiday season.” With robust tools from Amazon Ads for measuring and analyzing campaign performance, Mother’s Shea was able to optimize strategies and improve return on investment.

<NAME>, the founder and CEO of Jumping Fox Design, a creative and woman-owned stationery business in California, says that Amazon Ads has helped a new brand like theirs quickly gain exposure for their private-label products in highly competitive product categories. “When we released our ring binder collection, we were able to gain the No. 1 New Release badge within a few weeks and entered the top 50 in the category within a few months because of our Amazon Ads campaign,” says Li.
### 2. Leverage multi-product campaigns to drive conversions at every touchpoint
<NAME>, founder of Raw Elements USA, a certified-natural, reef-and environmentally safe sunscreen company, says, “By leveraging Sponsored Brands and Sponsored Products, we can tell our brand story at the right time, to the right consumer. We know that, most of the time, consumers are browsing on Amazon for products before making their purchasing decision. Through education and advertising on Amazon, we believe it has fast-tracked the buying decision to purchase clean vs. chemical sunscreen. For example, a customer may be looking for sunscreen for their child and see our tinted products. The combination of products gives us control over the audience we want to reach and when we reach them.”
### 3. Integrate Amazon Ads across tentpole retail moments to drive conversions and sustain customer interest
From 2021 to 2022, the sales for Cure Hydration, a premium, hydrating electrolyte drink mix, grew 3x on Amazon. Founder and CEO <NAME> says the brand is on a similar trajectory for 2023. “Advertising has helped drive our success during retail tentpole events. We usually see a 35% lift in sales when these events happen,” Picasso says. “We have found that it is important to tailor ad strategies for each event, factoring in category changes, increased traffic, and discounts, all of which are key to ensuring the best possible budget allocation and returns during these events.”

“We’ve been able to reach millions of customers around the world by running ads on Amazon, which has been a huge driver of our revenue growth,” says <NAME>, founder and CEO of Simply Gum, a brand that makes treats with simple ingredients in all their products. “Our ads have an extremely high return on investment because we’re meeting our customers where they are already shopping, which helps us focus our ad spend and maximize our returns. For example, Prime Day has really proven to be one of the most important days of the year for our business, not only in terms of sales lift, but also in terms of generating brand awareness and exposure. Running ads simultaneously further amplifies this effect.”
# Make it memorable with Amazon Ads
Date: 2022-11-01
### Key learnings from Amazon’s holiday shopping season
See the best-selling products and categories.
A holiday plan for the entire year
Here’s how your brand can play a meaningful role in 2023 holidays across the globe, at family dinners, gift exchanges, and festive parties.
October is the unofficial start of the holiday season. Across the globe, celebrations for holidays including Diwali start as early as October, and holidays including Lunar New Year last through February.
Holiday marketing consists of ads and campaigns that reach holiday shoppers before, during, and after major holidays or big events.
With consumers starting holiday shopping as early as October, it’s good to start raising awareness early.1 And remember, even though it’s good to start raising awareness early, shopping also continues right through the holiday season.
The most important holidays for marketing and advertising depend on your location and audience. Important holidays include Diwali, Halloween, Thanksgiving, Black Friday, Cyber Monday, El Buen Fin, White Friday, Hanukkah, Christmas, Boxing Day, New Year’s Day, Easter, Golden Week, and Mother’s Day.
# Insights
Explore our library of content designed to inspire the future of your brand and optimize your advertising strategy.
Our robust guides can help beginners learn the foundational advertising skills to keep growing their business.
Learn why reach can be an important factor in your ad campaigns and marketing strategy.
Targeting is crucial to your Sponsored Products campaigns, and can help to match shopper intent to your products.
This marketing strategy finds the perfect way to bring awareness to your offerings.
Case Study
Learn how Thule and Global Overview used Sponsored Display video to grow sales and brand awareness.
Get hands-on guidance from our ad specialists to help you learn how to launch campaigns.
This entry-level webinar covers how to use sponsored ads to help customers discover and purchase your products.
In this webinar, we’ll walk beginners through the concepts to help set and adjust campaign budgets and bids.
Maximizing Efficiency & ROI for Amazon Agencies with PPC Entourage
Get an exclusive chance to join <NAME> and PPC Entourage's Mike Zagare for an intimate fireside chat with <NAME> from D8aDriven. Expect enlightening discussions as they shed light on the nuances of Amazon advertising agency management and share exclusive insights from their impressive collective experience.
Amazon Ads holiday summit: Unlock your brand’s potential this shopping season
Join the Amazon Ads holiday summit on September 26. The free virtual event will teach tips and tactics for sponsored ads to help your brand this holiday season.
Your brand in new places: Learn Sponsored Display tips
Sponsored Display has been observed to be one of the most effective ad products in driving new-to-brand sales. Learn tips from a guest speaker.
Drive growth across borders: Global advertising tips from an Amazon Ads Partner
You’ve already started selling in new countries. Now, we’ll introduce you to how advertising can help you grow your business across borders with a guest speaker.
Sponsored ads best practices from award-winning partners
Join us live to learn sponsored ads best practices from the Amazon Ads 2022 Partner Awards winners.
Help your books stand out with Sponsored Brands
Learn in this webinar how Sponsored Brands campaigns can help you promote your books to a wide, relevant audience and inspire reader loyalty
Build your customer loyalty
In this free webinar we’ll teach you how to foster customer loyalty with a range of free and paid self-service solutions from Amazon Ads
Expand and optimize your sponsored ads targeting
This intermediate webinar focuses on optimizing your manual campaigns and how different targeting strategies can help you meet your business goals.
Reach a relevant audience with product targeting
Product targeting can help you connect with more customers, drive consideration of your products with relevant audiences, and increase sales when shoppers are ready to buy.
Plan and optimize your Posts strategy
Learn how Brand Follow and Posts work together to help grow loyalty. We will show you best practices and explore different strategies to try, based on your advertising goals.
Get ready to advertise with this Amazon Ads webinar
Get ready to advertise with this beginner-level Amazon Ads webinar to learn how to create top-selling products and improve sales conversion.
# Stores
### Leverage Stores for brand growth
Learn how to set up a Store to help reach more customers. Plus, you’ll find out best practices for optimizing and designing your Store.
No matter the size of your brand, Stores give you an immersive place to introduce audiences to your story, mission, and products.
Amazon Stores allow you to showcase your brand and products in a multipage, immersive shopping experience.
Help shoppers explore your full range of products with your own branded URL on Amazon.
Use predesigned templates and drag-and-drop tiles to create a Store that fits your brand and spotlights your best-selling products, without ever writing a line of code.
Metrics like sales, visits, page views, and traffic sources help you better understand how to best serve your shoppers.
Stores with 3+ pages have 83% higher shopper dwell time and 32% higher attributed sales per visitor.1
On average, Stores updated within the past 90 days have 21% more repeat visitors and 35% higher attributed sales per visitor.2
These free courses will teach you how to best use our products and solutions to grow your brand on Amazon and start reaching new audiences.
Create pages for your products using our templates and features.
Add videos, text, and images to show your products in action.
Submit your Store for review. It will typically be reviewed within 24 hours.
In your Store, shoppable images help customers see your products in context, engage with their details, and add them directly to their carts.
Stores are available for sellers enrolled in Amazon Brand Registry, vendors, and agencies. You do not need to advertise on Amazon to create a Store.
Create your Store using Amazon's self-service Store builder, available through the advertising console. You can use templates to build your pages. You can use the drag-and-drop content tiles to add, remove, or rearrange content in the template, or you can use the tiles to build your own design. After you’re finished designing your online Store, submit it for review. Learn more about building and optimizing your Store.
There is no cost to create a store.
The Stores insights dashboard includes metrics such as daily visitors, page views, and sales generated from your Store. If you promote your Store in external marketing activities, you can also add a tag to the URL to analyze traffic sources to your Store.
On Amazon, Stores can be reached through the brand byline (the brand name link displayed under or above product names on a product’s detail page). You can also send shoppers directly to your Store via the short URL, such as amazon.com/BRANDNAME. You can also drive traffic to your Store from your own sites and social media, or sponsored advertising both on and off Amazon. Shoppers can share Stores with their friends via Facebook, Twitter, and Pinterest. Stores are also discoverable through shopping results on Amazon.
Yes. Stores templates and widgets are all designed for mobile web, app, and desktop. Learn how to optimize your Store for mobile.
At this time Stores does not support advertisers who do not sell on Amazon. These advertisers can continue to use campaign landing pages.
You can use your Store as the Amazon landing page for your Sponsored Brands campaigns. Display ad campaigns can also direct traffic to your Store or a sub-page within your Store.
Sources:
1 Amazon Internal, May 2020
2 Amazon Internal, WW, May 2020
### Amazon DSP and Sponsored ads
Create audio, video, and display campaigns for multiple businesses or brands.
Let an Amazon Ads consultant manage your business’s Amazon DSP campaigns.
CPC (cost per click) is a metric that determines how much advertisers pay for the ads they place on websites or social media, based on the number of clicks the ad receives. CPC is important for marketers to consider, since it measures the price of a brand’s paid advertising campaigns. The goal of marketers should be to reduce the price of clicks while also cultivating high-quality clicks, and consequently satisfied customers.
## What is CPC?
CPC is the cost per click that an ad receives. It’s a metric that applies to all types of ads, whether they have text, images, or videos. It applies to ads that appear on the results pages of search engines, display ads, and ads that appear on social media, too. Thinking about CPC should be one of the Sponsored Products best practices used by brands, because finding an accurate bid on certain keywords helps determine the value of advertising campaigns.
## What’s the difference between CPC vs CPM?
It’s important for brands to measure their digital marketing metrics, including comparing CPC vs CPM (cost per mille, or the cost per 1,000 ad impressions). The click-through rate (CTR) of an ad, landing page, or article is the rate of the number of clicks to another page that it receives. CPC is based on the number of actual clicks the ad receives, while CPM is based on the number of times an ad is viewed, regardless of whether customers click on it or not. Brands can use both metrics, considering the implications of each, for a more comprehensive view of the performance of their ad campaigns.
## How to calculate cost per click
CPC equals the average amount paid for each click on an ad: the total ad spend divided by the number of clicks received. A high number of clicks, or visits on an ad, means that the ad is getting attention from customers. Various advertisers can bid on ad placement on websites and popular keywords, and so each brand’s optimal CPC is determined by its ad ranking as well as the ranking of other related brands and products. The more in-demand a keyword is in the auction and the higher the ad placement, the higher the advertising costs.
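To make the arithmetic concrete, here is a small illustrative snippet in Rust; the spend and click figures are made up, and this is not an Amazon Ads API:

```
// Illustrative only: average CPC = total spend / number of clicks.
fn cost_per_click(total_spend: f64, clicks: u64) -> Option<f64> {
    if clicks == 0 {
        None // no clicks yet, so CPC is undefined
    } else {
        Some(total_spend / clicks as f64)
    }
}

fn main() {
    // Hypothetical campaign: $250.00 spent, 500 clicks.
    if let Some(cpc) = cost_per_click(250.0, 500) {
        println!("Average CPC: ${cpc:.2}"); // prints $0.50
    }
}
```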
### What is average cost per click?
The average cost per click is easy: It’s the average amount spent on each click for an ad. Typically, ads placed on search engine pages will cost more than ads placed on a brand’s website. The rank of ads changes frequently, so there won’t be a set number for the CPC of a brand’s ad.
### What is maximum cost per click?
The maximum cost per click is a brand’s bid on ad placement and keywords, which means it’s the highest price they’re willing to pay. Luckily, though, the brand will typically not pay the entire maximum cost per click. Based on related brands and quality scores in search, the actual CPC will often be lower than that initial bid.
Once a brand decides on its maximum cost per click, it needs to decide if it wants to use manual cost per click bidding or automatic cost per click bidding. Manual cost per click bidding means that the brand chooses its own individual bid amounts. For a more automatic option, brands can use enhanced cost per click bidding instead.
An alternate option to manual cost per click bidding is enhanced cost per click bidding, where the brand sets their overall budget and then has their bids automated based on that. Enhanced cost per click bidding is a feature that’s been added to search engines including Google AdWords and Microsoft Bing. Sponsored Display uses automated bidding to adjust your bid based on conversion.
## Advantages and disadvantages of pay per click (PPC) advertising
There are upsides and downsides to PPC advertising, which is measured by CPC and CPM: CPC is more directly correlated to the purchases made by customers, while CPM could help reach a goal of increasing brand awareness. This will often result in CPC advertising being higher priced but also more valuable in getting customers to the next step in their shopping journey, since CPC is directly linked to the click-through rate (CTR) of customers who have viewed the ads.
## How do you decrease CPC?
The goal of all brands should be a low CPC, meaning ads are optimized to deliver high value at a low cost. Additionally, CPC should be proportionate to a brand’s overall profits, since the ultimate goal of CPC is to drive sales. Most brands don’t want to spend more on ads than they earn in revenue, so budgeting can be crucial.
## CPC FAQs
### What’s the difference between PPC and CPC?
PPC and CPC describe the same thing: PPC is the system of brands paying per click on an ad, and CPC is the metric used to measure those clicks.
### How can brands begin advertising and determining CPC?
Sponsored Products and Sponsored Brands use CPC to determine how much advertisers need to pay, based on the clicks of customers. By becoming aware of their CPC optimization and figuring out what their cost per click currently is, and what it should be, brands can start making their ad campaigns even smarter.
# What is CPM?
Global programmatic advertising spending has doubled in the past four years and is expected to grow by more than $40 billion by 2023.1 As the online advertising industry continues to expand (programmatic advertising now accounts for more than 89% of all digital display ad spending), it’s important to understand some of the key terms that make digital advertising tick.2 Let’s dig into CPM, or cost per mille, which is one of the popular pricing models used in programmatic advertising.
Programmatic advertising is the automated buying and selling of digital advertising inventory, including display and video formats. You can use a demand-side platform (DSP), which is software that automates purchasing and management of digital advertising inventory from multiple publishers. A supply-side platform or sell-side platform (SSP) is software used by publishers to automate the sale and management of their advertising inventory. Certain types of programmatic ads are measured by cost per mille (CPM), which means cost per thousand impressions. CPM is a pricing model where you pay a certain amount for 1,000 impressions, or the number of times your ad appears. CPM is popular with larger publishers, where advertisers pay a set price based on the number of impressions each placement receives on a monthly or quarterly basis.
CPM is one of many types of possible pricing models in digital advertising. Because digital advertising can be measured with different marketing metrics—how often an ad appears, is clicked, leads to a sale, and more—pricing can be tailored to the intended function of the ad. Certain pricing methods may be more appropriate for specific advertising campaigns. CPM is often used for advertisers focusing on brand awareness or delivering a specific message, because this pricing model is more focused on exposure as opposed to a cost-per-click model.
CPM is only one of a variety of digital advertising pricing models. While CPM is cost per thousand impressions, another type of pricing is cost per click (CPC), which is where advertisers pay each time consumers click an ad. CPA, as opposed to CPM, is cost per acquisition, where the advertiser only pays when consumers make a purchase after clicking on an ad.
Display ad costs vary, but what makes them one of the most cost-effective advertising methods is the flexibility. With some traditional advertising, brands can’t change the visuals, call to action (CTA), or message after an ad begins running. That means that if the ad isn’t effective, the cost per action may be higher. Since display advertising is dynamic, and based on pricing models like CPM, this allows advertisers to change course during a campaign and gives brands greater flexibility to optimize campaigns and maximize the efficiency of their budget. Amazon’s Sponsored Display uses a cost per thousand viewable impressions (vCPM) pricing structure. This means an advertiser is charged when their ad has been viewed by shoppers. Sponsored Display adheres to the MRC definition for an ad view: at least 50% of the ad should have been in the shopper’s viewport for at least 1 second for it to be registered as a viewed impression.
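As a rough illustration of how such a viewability rule could be applied, the sketch below counts only impressions that meet the 50%-in-view-for-1-second threshold and bills them at a vCPM rate; the impression records and the $4.00 rate are hypothetical.

```python
# Each record: (fraction of the ad in the viewport, seconds in view) -- hypothetical data
impressions = [(0.80, 2.0), (0.40, 3.0), (0.55, 0.5), (1.00, 1.2), (0.50, 1.0)]

# Count impressions meeting the threshold: at least 50% in view for at least 1 second
viewable = sum(1 for fraction, seconds in impressions if fraction >= 0.5 and seconds >= 1.0)

vcpm_rate = 4.00                      # hypothetical cost per 1,000 viewable impressions
cost = viewable / 1000 * vcpm_rate    # amount charged for the viewable impressions

print(f"Viewable impressions: {viewable} of {len(impressions)}")  # 3 of 5
print(f"Charge at ${vcpm_rate:.2f} vCPM: ${cost:.4f}")            # $0.0120
```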
Amazon’s audio ads are sold on a CPM basis. These audio ads campaigns are measured through impressions, average impression frequency, cumulative campaign reach, audio start, audio complete, effective cost per audio complete (eCPAC), and more. Amazon DSP’s display ads also use a CPM model.
1 Statista, Global Programmatic Advertising Spending, 2021. 2 eMarketer, Programmatic Digital Display Ad Spending Forecast, June 2021.
Your step-by-step guide to getting started with Sponsored Products.
## Here are the themes we’ll cover in this guide
## Before you start advertising
Here's the first step you must take to get ready to advertise.
### Check eligibility requirements
To advertise, you must have an active professional seller or vendor account and have products in one or more of the eligible categories. At this time, we do not support adult products, used products, refurbished products, and products in closed categories.
Your products must also be eligible for the featured offer. The featured offer is the offer displayed on a product detail page with an Add to Cart or Buy Now button. A key feature of the Amazon website is that multiple sellers can offer the same product, so you may be one of many sellers who are considered for the featured offer placement.
For additional information on how to increase your chances of winning the featured offer, check our help page here. Please note, products and features may not be available in all marketplaces.
## Define your goals
Before you create your first campaign, know what you want to accomplish.
Before you create your first campaign, it’s important to know what business goals you want to accomplish through advertising. Establishing your goals up front will help you choose which products to advertise, decide how to structure your campaigns, and better analyze performance. Consider if you’re trying to:
* Increase brand or product visibility
* Generate traffic to your product detail page
* Improve sales performance
> Before you start doing any advertising, it's really important to set a goal, whether your goal is growth and awareness, sales, or customer retention and loyalty. — <NAME>, Netrush
### Are you trying to...
Drive sales of a new product?
Improve sales of low performing products or clear inventory?
Generate traffic to your product detail pages?
Increase brand visibility?
## Determine what products to advertise
Choose products that can help you meet your goals.
### Advertise the right products for your brand
Choose products that can help you meet your goals, grouping similar products together.
Make sure they’re winning the featured offer at the highest rate—ideally 90% or higher. If you are a seller, you can check your featured offer rate under the ‘Reports’ tab in Seller Central. Click on ‘Business Reports,’ and under the section labeled ‘By ASIN,’ click on ‘Detail Page Sales and Traffic by Child Item’. Here, you can sort by ‘featured offer percentage’ to find your best-performing ASINs. It’s best to look for a high featured offer percentage paired with a high number of sessions to the product detail page. These are your most frequently viewed ASINs.
Your product must be in stock and priced competitively in order to present the featured offer, so take into account product pricing and availability when deciding on items to advertise. If your products aren’t presenting the featured offer or are out of stock, your ad will not display.
## Audit your product detail pages
Checking your product detail pages for retail readiness.
### Make sure your product detail pages are ready
Remember that shoppers who click your ad will be taken to your product detail page, and a strong product detail page can help convert the click into a sale. Check your product detail pages.
# Do they have…
* Accurate, descriptive titles?
* High-quality images?
* Relevant and useful product information?
* At least 5 bullet points?
* Relevant product description?
* Search terms metadata?
Help customers discover and purchase products that you sell on Amazon
Simply put, these ads let you promote individual listings to shoppers as they’re browsing and discovering items to buy on Amazon. Launch your first ad in minutes with no previous experience needed.
You pay only when your ad is clicked.
You choose how much you’re willing to spend.
You target ads by keywords or products.
Ads appear in shopping results and on product detail pages.
> When you want to showcase a new product, you put it where consumers can easily see it. Sponsored Products is like being in the front of the store. — <NAME>, Founder, Empire Case
## Create your first campaign
How to get started creating Sponsored Products campaigns.
### Ready to create your Sponsored Products campaign?
For sellers: Start by going to the Advertising tab in Seller Central and selecting ‘Campaign Manager,’ then click the ‘Create campaign’ button. For vendors: Go to the advertising console and select ‘Register,’ then choose one of the vendor account options to log in.
Next, click the ‘Create campaign’ button, choose Sponsored Products, and follow these steps to launch your campaign in minutes.
# 1. Pick your products
Help create demand for new items or give your bestsellers an extra lift. Group similar items and make sure they’re priced competitively enough to present the featured offer.
# 2. Give your campaign a name
Keep it straightforward, so you can find it easily later.
# 3. Set the budget you want
Just $10 a day can help you get clicks and sales.
# 4. Choose your duration
We recommend running your campaign immediately to start generating traffic. To boost sales on Amazon year-round, set your campaign with no end date.
# 5. Select your targeting type
Pick automatic targeting to target relevant keywords and products.
# 6. Choose your bid and launch
Select how much to spend per click, and you're ready to launch your campaign.
Need more help? Register for one of our webinars to learn from Sponsored Products specialists or view additional video resources in Seller University or on our YouTube channel.
Thank you for reading
# Brand awareness
How self-service ads can start the cycle of discovery
Brand awareness refers to the level of familiarity consumers have with a particular brand. It is measured by how well consumers can recognize the brand’s logo, name, products, and other assets. An effective brand strategy defines a brand’s target audience, develops a unique selling proposition, and creates a consistent brand experience across touchpoints.
Brand awareness is the beginning of a consumer’s interest in a product or service. It is the first step on the path to purchase, as well as the starting point of their relationship with a brand. Brand awareness, or recognition, refers to customers’ ability to recognize a product or service by name.

Ever wonder why you recognize, remember, and have an association with a company, even if you don’t use their products? It’s because they have strong brand awareness, which means consumers are familiar with, or aware of, their brands. A business’s brand is much more than just a logo or a tagline. It’s a combination of what products they sell, how they tell their story, their aesthetic, the customer experience they deliver, what the company stands for, and more. For example, think about your best friend. The first time you met that person, they made an initial impression on you. Additional interactions with that person informed your feelings about them over time. And based on that, you’ve developed a sense of who they are and what they stand for. In your mind, your best friend has a brand. It’s formed through the combination of all your experiences with them. A customer’s perception of a brand is based on a range of inputs over time. Companies build brands by delivering a consistent message and experience across touchpoints. That consistency—that repetition of messaging and experience—is fundamental to making your brand memorable, which is key to building brand awareness. Brand awareness helps your brand become top of mind with potential customers when they begin to consider purchase decisions. After all, a strong brand is important, but to grow your business, you need consumers to know about it. Brand awareness is important because it helps foster trust and allows brands to tell their story and build equity with consumers.

According to a 2022 global survey from Statista, 5 out of 10 consumers said they would be willing to spend extra for a brand with an image that appealed to them. “In 2022, the aggregate value of the world’s 100 most valuable brands increased by over 22% and reached a record $8.7 trillion,” according to Statista. “By comparison, this figure stood at around $5 trillion just two years earlier.” Brand awareness is also important because it helps develop a strong identity through which a company can share its values and mission. This type of connection is important to consumers. According to a 2022 Amazon Ads and Environics Research report, 79% of global consumers say they are more likely to purchase from brands whose values align with their own. Brand awareness works to help familiarize customers with a brand or product through promotions, advertising, social media, and more. A successful brand awareness campaign will work to help differentiate a brand or product from others. Many brands may connect brand awareness with consideration: The more consumers who are aware of your brand or product, the more likely they may be to consider purchasing.

Though the purchase journey is not linear, the traditional marketing funnel still provides a useful way to visualize it and to demonstrate the importance of awareness. Awareness is at the very top of the funnel, where there are consumers who may be interested in learning more about your products. Here, a brand that can grab customers’ attention with a positive experience will help raise awareness and possibly inspire them to seek more information.
When customers begin to seek information, they enter the next phase of the funnel: consideration, when they are considering making a purchase. Their intent to purchase has increased, based on the inspiration they’ve received at the awareness level. Those who are compelled further, through additional information, then enter the conversion phase, when they’ll look to make a purchase. Throughout the process, your potential customers are narrowing down their options. Companies that already have brand awareness with customers are ahead of the curve, because they don’t have to explain who they are and what makes them different. Essentially, they’ve already introduced themselves, so they can focus on delivering more specific information that is relevant to a potential buyer’s purchase decision.

Let’s say you’ve just heard about a cutting-edge new television that piqued your interest. And let’s say that two companies are selling the same TV at a similar price—the first is a company you know nothing about, and the other is a company with strong brand awareness. Even if you’ve never purchased a product from the second company, its brand awareness is a strength that lends credibility to its product. And that’s why building brand awareness is so important. Everyone’s attention is limited. With countless brands vying for the same consumers’ attention, it is useful to be the first brand they think of when considering a product in your category. Big brands know this, and that’s why we know them. It’s no coincidence that many consumers have existing associations between these brands and what they offer. And it’s no mistake that these well-known brands have long invested in increasing awareness.

Companies can help increase brand awareness through promotions, social media, influencer programs, and, of course, brand advertising. Brands are also finding other creative ways to increase awareness through streaming, content marketing, immersive storytelling, interactive advertising, experiential advertising, and more. According to a 2022 Statista survey, the leading goal of content marketing worldwide is to “create brand awareness.”

For example, brands can use Sponsored Display video to make a product and brand story more memorable while customers are shopping or consuming the content they love. Advertisers who used video creative for Sponsored Display and selected the “optimize for conversion” bid optimization saw on average 86% of sales from new-to-brand shoppers, compared to 78% for those using image creative.1 Sponsored Brands can help brands get noticed with creative ads that showcase a brand or product in shopping results. Advertisers who used all Sponsored Brands ad formats saw, on average, 79% of their sales from new-to-brand customers.2 Amazon Ads offers several ways to measure brand awareness, such as new-to-brand metrics. Sponsored Brands offers unique reporting with new-to-brand metrics, which help you measure the total number of first-time customers or total first-time sales in the past 12 months.

Knowing that many non-Amazon channels play a role in the customer journey, you can gain visibility into how audiences discover your products with Amazon Attribution. The console helps you unify your advertising measurement across search, social, video, display, and email. By understanding the impact of your digital advertising across touchpoints, you can better drive awareness and achieve your brand marketing goals.
No matter what your budget is, there are solutions that can help you drive measurable outcomes.
There are a number of brand awareness metrics and KPIs that marketers use to measure consumers’ familiarity with a brand or product. These brand awareness metrics allow marketers to track and understand the effects of their marketing efforts and if their campaigns are helping consumers learn about their offerings, values, or brand story.
In marketing, impressions are a simple metric used to indicate how many times an ad is shown or how many consumers see it.
When brands talk about “increasing traffic,” they refer to customers who take an action online to visit a product page or some other resource that exists on a brand’s site. More in-depth reporting—beyond page views—includes Store visitors, unique visitors, Store page views, and sales generated from your Store.
New-to-brand metrics give visibility to advertisers to determine whether an ad-attributed purchase was made by an existing customer or a new buyer for the brand’s product on Amazon within a year.
Amazon Brand Lift helps advertisers quantify how their Amazon Ads campaigns are driving marketing objectives such as awareness, purchase intent, and ad recall.
Reach in marketing is the measurement of the size of the audience that has seen your ads or campaign content. Reach measures your actual audience, and marketing reach measures the potential customers a campaign could reach.
Brands of all sizes have driven awareness and grown their businesses on Amazon by using a range of complementary brand-building solutions—whether they’re large companies, or first-time authors promoting a new self-published Kindle novel, and everyone in between. To raise awareness and develop a deeper relationship with potential customers, Amazon Ads offers a range of relevant and effective solutions. Sponsored Brands, for example, helps brands engage shoppers as they browse and discover products on Amazon. You can create a custom headline to share your brand message and showcase products to customers when they’re looking to buy. If they click on the logo in your ad, they’re taken to a custom landing page or to a Store, where you can showcase your brand products further through a more immersive shopping experience. A self-service Store helps raise brand awareness at no additional cost, as do Posts and streaming with Amazon Live.
Here are some examples of the creative and successful strategies that businesses have used to grow their brand awareness with consumers.
Case Study
In 2022, Loftie set out to build awareness of their mission and products to help customers sleep better with an improved tech/life balance. To achieve these goals, the small business launched a campaign that included Sponsored Products, Sponsored Brands, and more Sponsored Display ads, especially near the holidays. Loftie also focused on Sponsored Brands video, which they used to highlight the key benefits of their products for customers. With video ads, Loftie was able to communicate with customers how their alarm clock stands out from others, which helped increase consideration and allowed them to better stand out in their category. In 2022, Loftie’s Sponsored Products campaigns saw an average return on ad spend (ROAS) of $5.66 and an advertising cost of sale (ACOS) of 17.68%. Meanwhile, the brand’s Sponsored Brands campaigns had a click-through rate of 1.06%, which put them in the 75th percentile for their category.3
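ROAS and ACOS are reciprocal views of the same ratio: ROAS is ad-attributed sales divided by ad spend, while ACOS is ad spend divided by ad-attributed sales. The sketch below uses hypothetical spend and sales figures that roughly reproduce the ratios reported above.

```python
ad_spend = 1000.00          # hypothetical ad spend
attributed_sales = 5660.00  # hypothetical ad-attributed sales

roas = attributed_sales / ad_spend  # return on ad spend
acos = ad_spend / attributed_sales  # advertising cost of sale

print(f"ROAS: ${roas:.2f} in attributed sales per $1 of ad spend")  # $5.66
print(f"ACOS: {acos:.2%}")                                          # 17.67%
```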
Case Study
In 2022, Volkswagen wanted to find a creative car marketing campaign to inform consumers about their new ID.4 electric SUV. The Amazon Ads Brand Innovation Lab, Amazon Web Services, and Volkswagen developed the first ever Test Drive With Alexa, a program that allows customers in select regions to take Alexa-guided test-drives of Volkswagen’s ID.4 EV SUV. After test-driving the car with Alexa, 28% of participants requested more information about the Volkswagen ID.4, and a majority requested a follow-up from a dealer.4 Brand measurement from the campaign showed a statistically significant lift in awareness among 18- to 34-year-olds.4
Case Study
In 2022, KITKAT approached Amazon Ads looking to reach new adult Gen Z and millennial audiences in new ways. The team wanted to explore how they could position the heritage chocolate bar in the best way to appeal to this digitally savvy audience. Amazon Ads and KITKAT identified Twitch premium video as the ideal solution to help the brand build awareness and affinity with the gaming community. The Twitch premium video ads enable brands to connect with Twitch’s hyper-engaged community by incorporating ads into live broadcasts across desktop, mobile, tablet, and connected TV devices. The campaign was able to raise unaided brand awareness by 52%, which surpassed the average benchmark of unaided brand awareness across similar campaigns by 3x. Having seen the ad, audiences were also more than 2x more likely to associate KITKAT with gaming.5 Twitch viewers who watched the ad were also twice as likely to attribute the KITKAT message, “Even the biggest champ needs a break,” to the right brand, compared to the average benchmark across similar campaigns.

1 Amazon internal data, 2023. 2 Amazon internal data, 2022. 3 Advertiser-provided data, 2023. 4 Advertiser-provided data, 2022. 5 Twitch and Amazon Ads internal data, June 2022.
# Drive reach that matters
Amazon Ads has billions of shopping, streaming, and browsing signals to help connect your brand with the right customers, at the right time, in brand-safe placements.
Reach more customers interested in products like yours across Amazon, Twitch, and third-party sites and apps with Sponsored Display. With customizable creative elements and video formats, advertisers can showcase their brand and products through immersive storytelling.
Advertisers who use Sponsored Display Audiences on average have seen up to 82% of their ad-attributed sales driven by new-to-brand customers.1
Sponsored Brands can help customers discover your brand and products with creative ads that appear in relevant Amazon shopping results.
On average, advertisers who activated a Sponsored Brands ad campaign grew their branded searches by +59% over a 4 week period.2
## Video ads
Reach a leaned-in audience at scale with engaging Streaming TV and online video ad solutions. Connect with customers in Streaming TV exclusive originals on Amazon Freevee, livestreamed entertainment on Twitch, live sports including Thursday Night Football, top TV and network broadcaster apps, and Fire TV Channels, or run your in-stream and out-stream video ads on Amazon-owned sites and across the web on leading third-party publisher sites.
Advertisers in the US can reach an average monthly audience of 155MM+ across Amazon ad-supported Streaming TV including Amazon Freevee, Amazon Publisher Direct, Fire TV Channels, Thursday Night Football, and Twitch.4
## Audio ads
Reach unique, high-quality audiences across our audio supply, including Amazon Music, Alexa News, Twitch audio ads, and Amazon Publisher Direct. Customize with genre insights or layer in first-party insights to reach the right audience at scale.
Amazon streaming audio ads help you reach more than 77% of total ad-supported audio streaming hours on Alexa-enabled devices across Amazon Music ad-supported tier, Alexa News, and Amazon Publisher Direct (APD).5
## Display ads
Scale your reach to more customers with display ads across Amazon properties and devices like Fire TV, Fire tablet, Echo Show, and the Prime Video app, and premium third-party content.
Fire TV display ads provided a median 125% incremental reach when added to STV video ad campaign(s).6
## Custom and out-of-home ads
Connect to new customers in innovative and impactful ways. Our custom sponsorships are attention-capturing, tailor-made experiences that run on Amazon properties such as Prime Video live sports broadcasts, Amazon Music concerts, gaming events, and Amazon Freevee, as well as on-package advertising, in-show virtual product placements, and more.
### What is marketing reach? Here's everything you need to know.
There are many ways that reach can be an important factor in your advertising campaigns and marketing strategy. We’re breaking down the basics here for advertisers of all levels.
Brand reach is the total number of potential customers that come into contact with your brand. This can include customers reached through brand awareness campaigns, online or in-person advertising, social media, thought leadership, and more.
Brand reach is important because it is indicative of the number of potential customers your brand could access. The larger your brand’s reach, the greater the likelihood your brand has to build awareness and loyalty to drive sales.
Brand reach can be increased in a number of ways, including brand awareness campaigns, product advertising, and telling your brand story. Most importantly, a brand must be able to identify its desired audience and increase its presence in the areas where they shop and live.
The way to measure brand reach is largely based on the objective of the brand and its initiatives when expanding reach. These metrics can include site traffic, sales conversions, video views, and more.
Marketing reach refers to the measurement of the size of the audience that has seen your ads or campaign content. Reach measures your actual audience, and marketing reach measures the potential customers a campaign could reach. These can refer to specific audience segments or to a broader percentage of the population.
Sources: 1 Amazon internal, 2021, US. 2 Amazon internal, 2022, WW. 3 Amazon internal data, US, 8/1/2022 – 08/26/2022. 4 Amazon Internal, Q4 2022, US. 5 Amazon internal, 2022, US. 6 Amazon internal, 2023, US.
# Building brand awareness with Amazon Ads
In this free webinar, we’ll share how you can help build awareness for your brand and create new opportunities for shoppers to discover your products at these key moments.
Capture attention across content and commerce using our unparalleled insights and innovative ad formats.
Use Sponsored Display video to make your product and brand story more memorable while customers are shopping or consuming the content they love. Use engaging content like tutorials and unboxing videos to help audiences better understand your product, and layer in first-party insights to reach relevant audiences.
Advertisers who used video creative for Sponsored Display and selected the "optimize for conversion" bid optimization saw on average 86% of sales from new-to-brand shoppers, compared to 78% for those using image creative.1
Get noticed with creative ads featuring custom headlines, videos, and images that showcase your brand or products in related shopping results.
Streaming TV and online video ads can help you tell your brand story and optimize engagement with unique and relevant audiences across exclusive properties like Amazon.com, Thursday Night Football on Prime Video, Amazon Freevee, Twitch, and premium third-party supply.
Amazon ad-supported Streaming TV reaches an average audience of 155MM+ monthly, including Amazon Freevee, Amazon Publisher Direct, Fire TV Channels, Thursday Night Football, and Twitch.4
Get your brand message heard across exclusive first-party and premium third-party audio content, including Amazon Music’s ad-supported tier, Alexa News, Twitch, and Amazon Publisher Direct.
66% of Amazon Audio Streamers agree streaming audio ads help them discover new brands and products, which is 2x greater than what they reported for other media channels like email, newspaper, billboards, or radio messaging.5
Device ads are display placements that run on certain Amazon devices with screens (e.g., Fire TV, Fire tablet, and Echo Show). Device display ad solutions deliver high-impact, full-screen, and immersive ad placements that make it easy for customers to engage and take action.
Brand sponsorships help create favorable brand association and increased recall by reaching customers alongside their passion points. Run non-skippable, seamless ads like virtual product placements in popular movies and TV shows, on-package advertising, Amazon Music concert sponsorships, or collaborate with our in-house team to customize a unique solution for your brand.
Brand awareness is a customer’s knowledge, perception, and familiarity with a brand. A customer’s perception of a brand is based on a range of inputs over time. Companies build brands by delivering a consistent message and experience across touchpoints. That consistency, including the repetition of messaging and experience, is fundamental to making your brand memorable, which is key to building brand awareness.
Brand awareness is important because it helps your brand become top of mind with potential customers when they begin to consider purchase decisions. Though the purchase journey is not linear, the traditional marketing funnel still provides a useful visualization and demonstrates the importance of awareness. Brand awareness is often the beginning of that purchase journey.
Companies use a range of techniques to increase awareness for their brands, such as promotions, social media campaigns, influencer programs, and, most notably, brand advertising. Many of the ad solutions listed above can help increase brand awareness.
There are a number of key performance indicators that help measure brand awareness, including ad recall and frequency. Amazon Ads offers several ways to measure brand awareness, such as new-to-brand metrics. Knowing that many non-Amazon channels play a role in the customer journey, you can gain visibility into how audiences discover your products with Amazon Attribution. Amazon Ads also offers advanced measurement solutions, including Brand Lift studies, campaign metrics, and creative testing.
Amazon Brand Registry helps you protect your intellectual property (IP), manage your listings, and grow your business—regardless of whether you sell in the Amazon store—for free. You know your brand and IP the best. Simply enroll in Brand Registry, and share information about your brand. We’ll give you peace of mind by activating proactive protections that help stop bad listings and bad actors. Click here to get started.
Sources: 1 Amazon internal, 2023. 2 Amazon internal, 2022, WW. 3 Amazon internal data, US, 8/1/2022 – 08/26/2022. 4 Kantar, 2022, US. 5 Audio in the Path to Purchase Study, A18-64, 2022, US. 6 Kantar, 2022, US.
With Sponsored Brands products and free brand shopping experiences like Stores, Posts, and Amazon Live, you have an opportunity to reach and inspire shoppers on Amazon.
Maximize customer engagement and product education by driving traffic to your Store on Amazon or brand site with the help of unique Amazon Ads insights and ad formats.
Take shoppers looking for products like yours directly to your product detail page.
Suggested products are 46 times more likely to be clicked on if advertised, compared to a product that is not suggested.1
Sponsored Brands can help customers discover your brand and products with creative ads that appear in relevant Amazon shopping results, and drive traffic to your fully branded Store on Amazon.
On average, shoppers spend 2x longer on a brand’s Store and product detail pages after clicking on that brand’s Sponsored Brands campaign.2
Use Amazon audience signals to link interested customers to your product detail page with Sponsored Display. Our rich shopping and contextual signals help advertisers reach the right audiences in the right context.
Drive action with high-impact display ads across Amazon properties and devices like Fire TV, Fire tablet, Echo Show, and the Prime Video app, and align with premium Amazon and third-party content.
Interactive video ads performed 2x better at driving consideration, and interactive audio ads were 1.3x more effective on the same metric, compared to video and audio campaigns without interactive ad features.5
71% Amazon Audio Streamers said listening to streaming audio ads made them more likely to purchase a product.6
When brands talk about “increasing traffic,” they refer to customers who take an action online to visit a product page or some other resource that exists on a brand’s site. Driving traffic can result from a call to action the brand has provided, ads that create intrigue or interest, or simply following a related link.
Driving traffic is important because it’s the step on the customer journey that brings customers to a brand’s product. By driving traffic to a brand’s site, a customer can learn more about the brand or the product that interests them, or even purchase the product.
Traffic can be increased in a number of ways, including brand awareness campaigns, on-page advertising, or calls to action. Amazon Ads offers solutions that help brands reach new customers, including the ones mentioned above.
The most common way to measure traffic is through traffic reports that tell a brand how many people have visited their page. More in-depth reporting—beyond page views—includes Store visitors, unique visitors, Store page views, and sales generated from your Store.
Sources: 1 Amazon internal data, US, 8/1/2022 – 08/26/2022. 2 Amazon internal, 2022, US. 3 Amazon internal, 2022, WW. 4 Kantar study, 2022, US. 5 Amazon internal data and Kantar, US, 2022. 6 Audio in the Path to Purchase Study, A18-64, 2022, US.
Shift Marketplace leverages Sponsored Products and Sponsored Display campaigns to create new revenue streams for Alexandra Workwear.
Leverage insights and signals that optimize the customer experience and drive sales for your brand.
Help boost sales of individual products with a single click to conversion to take shoppers directly to your product detail page with Sponsored Products. An always-on, cost-per-click solution, Sponsored Products helps any advertiser quickly plan, launch, and optimize with agility.
Help grow sales by getting your products in front of in-category shoppers in prominent shopping result placements. Sponsored Brands are cost-per-click (CPC) ads that can feature your brand logo, a custom headline, and multiple products. These ads appear in relevant shopping results and help drive conversions for shoppers looking for similar products.
Help drive sales by using Amazon audience signals to attract in-market shoppers to the destination of your choice with Sponsored Display. This programmatic solution helps advertisers easily discover, reach, and engage the right audiences in the most relevant contexts across their shopping and entertainment journeys both on and off Amazon, using machine learning and multi-format, dynamically optimized creatives.
Drive sales by engaging and re-engaging with shoppers, on and off Amazon, with our display ads that can run across Amazon properties, Amazon device screens like Fire TV, Fire tablet, and Echo Show, and premium third-party content.
Interactive video ads were +1.1X more likely to generate conversion metrics than video campaigns without interactive features.4
Interactive audio ads were +2.3X more likely to generate conversion metrics than audio campaigns without Alexa CTAs.5
A conversion is the point in the purchase journey when a customer purchases a product after deciding it is best-suited to them.
Conversions are ultimately the end goal of most marketing campaigns, and impact a business’s bottom line.
Increasing sales through marketing is a multiprong strategy that typically involves increasing brand awareness, establishing a presence within the market, and growing consideration of your brand.
The main way to measure increased sales is through purchases. Amazon Ads also has measurement solutions that help brands measure return on ad spend (ROAS) and the percentage of new-to-brand customers.
Sources: 1 Amazon internal data, WW, 6/1/2019 – 7/30/2021. 2 Amazon internal, 2022, WW. 3 Amazon internal data, WW, 2022. 4, 5 Amazon internal, Kantar, 2022, US.
# 6 tactics to help build customer loyalty
### 6 tactics to help build customer loyalty
Learn how building customer loyalty with Amazon Ads solutions can help drive more sales and encourage business growth.
Create relationships that last. Amazon Ads shopping insights and engaging ad formats help your brand reconnect with the right audiences on Amazon and beyond.
Sponsored Products help customers find your products by quickly creating ads that appear in related shopping results and product pages. With an always-on strategy, we can help you connect with customers when they’re ready to buy again.
Help cultivate brand loyalty with Sponsored Display by re-engaging with audiences who have purchased your products. This programmatic display solution helps advertisers easily re-engage the right audiences in the most relevant contexts.
Build lasting relationships and engage with shoppers through Brand Follow. Shoppers can simply hit the “Follow” button in Stores, Posts, or Amazon Live to stay connected and learn the latest deals from their favorite brands in the Amazon store.
Help drive loyalty by re-engaging audiences on and off Amazon who have previously purchased your product with display ads. Reconnect with your customers across the Amazon homepage, product detail pages, shopping results pages, and Twitch, across Amazon devices like Fire TV, Fire tablet, and Echo Show, as well as in premium third-party content.
Help boost more sales by remarketing to audiences of existing customers with interactive video ads on Streaming TV and online video. With interactive ads, customers can use either their voice or remote to add products to their cart, learn more, or even buy directly, without ever leaving their couch or the Amazon Freevee app on their Fire TV. For advertisers who do not sell on Amazon, consumers can also interact by scanning a QR code with their smartphone, which will take them to a branded landing page.
Customer loyalty is an ongoing relationship between your brand and your customers. Loyal customers have a higher propensity to engage with your brand on a continual basis, make repeat purchases, and recommend your business to others.
Customer loyalty is an important long-term business strategy. Customer loyalty can help brands withstand changes in shopping behaviors, preferred channels, or major industry disruptions.
Engaging customers in your brand’s story and values helps drive meaningful connections, going beyond a generic transactional relationship they may be used to from other brands. Communicating with customers on a regular basis makes them feel special, helps keep your products top of mind, and deepens the emotional connection with your brand. Loyal customers are more likely to purchase more often from brands they already know and trust.
Customer loyalty can be measured through repeat purchases, return on ad spend, percent of new-to-brand customers, number of reviews, follower count, and campaign metrics on Amazon Attribution.
### Generate sales, stay top of mind, and increase brand love
Amazon sponsored ads help advertisers of all sizes create brand affinity, increase sales, and stand out to shoppers both on and off Amazon.
# Create a strategy with simple-to-use tools
Set up campaigns in just a few minutes and control your costs so you know how much you’re spending on your advertising.
Brands can grow with ads that connect with new customers and inspire current customers to keep coming back.
# Stay top of mind with your audience
Be in front of customers where they are actively spending their time on Amazon and beyond.
### Small businesses using Amazon Ads attributed 30% of their sales to our ads.1
With its highly visible ad placements, self-service interface, and diverse campaign optimization options, Sponsored Products can help advertisers of all experience levels elevate their product discoverability and sell their products efficiently on the Amazon store.
Sponsored Brands promotes brand discovery and consideration by using static and video brand creatives in the shopper journey, and helps increase traffic to a brand Store or product detail page. By using highly engaging and prominently placed ads on Amazon, Sponsored Brands helps advertisers grow brands and engage shoppers with brand stories and product offerings.
Designed for businesses of any size and budget, whether or not you sell on Amazon, Sponsored Display can help advertisers easily discover and reach relevant audiences. Tailor display advertising campaigns based on goals. With its dynamically optimized creatives, businesses can engage audiences across their shopping and entertainment journeys both on and off Amazon.
## Stores
Stores is a free way for brands to educate, convert, and build long-term relationships with shoppers by featuring their product portfolio and telling their brand story.
This course, designed for new advertisers, covers foundational topics and strategies to help you run high-quality campaigns. Through this bootcamp, we'll teach you how to set up campaigns to match your goals, analyze performance data, and implement best practices to improve performance.
Sponsored ads is a suite of self-service ad solutions that work together to help advertisers engage with shoppers across their buying journey to achieve their marketing goals.
Sponsored Products and Sponsored Brands are cost-per-click (CPC) ads. This means you place a bid on the maximum amount you’re willing to pay when a shopper clicks an ad for your product. Sponsored Display also supports CPC, as well as cost-per-thousand viewable impression (vCPM) billing options. If you’re just getting started in digital advertising, Stores, Posts, and Brand Follow are free.
Sponsored ads products are run through the Amazon Ads console and Amazon Ads API.
Sponsored ads may be displayed on the Amazon store homepage, on top of, alongside, or within shopping results, and on desktop and mobile. Sponsored Display ads in particular can be found across Amazon’s owned and operated pages as well as off Amazon on third-party apps and websites.
Sponsored ads solutions use Amazon’s first-party signals and machine learning to make faster and more accurate optimizations and recommendations for you when you set up and launch your campaigns.
Sponsored ads offers unified, reliable, near-real-time, and actionable measurement solutions so you have more transparency in measuring your campaign performance, and can accurately quantify the return on your marketing investment.
Sources: 1 Amazon internal data, January – December 2022. U.S. advertisers using Amazon Ads including Sponsored Products, Sponsored Brands, and/or Sponsored Display. Aggregated average results are based on past observations and not indicative of future performance.
## Your brand.
Turned up.
Whether audiences are going through their morning routines, working from home, cooking in their kitchens, or entertaining friends and family, we make it easy to connect with an audience of millions during those unique listening moments in the connected home and beyond.
### Be part of their favorite streaming moments
Get your brand message heard with 10-to-30-second audio ads, which play periodically during breaks between the audio content they love.
### Access quality inventory
Tell your brand story across exclusive first-party and premium third-party audio content, including Amazon Music ad-supported tier, Alexa News, Twitch, and Amazon Publisher Direct.
### Make noise on a growing channel
Reach audiences on Amazon Music ad-supported tier — the most popular streaming audio service on smart speakers with the most share of time spent listening.1 Our monthly unique audience grew by +95% year over year across our supply.2 That means our supply helps you achieve greater unique reach than ever before.3
### Elevate your ad campaigns with the power of Alexa
Engage audiences with brand messaging they find relevant using just their voice, now with interactive audio ads. Listeners can simply reply aloud to Alexa call-to-actions to take actions—such as adding an item to their cart, requesting more info via email as well as a push notification through the companion app, or setting a reminder—without disrupting their streaming audio content.
## Audio advertising insights
### +77%
Smart speaker ownership has grown by +77% over the past two years.5
### 2.0x brand favorability
Amazon audio ads campaigns deliver +1.1x higher ad awareness, +2x higher favorability, and +1.9x higher purchase intent compared to third-party benchmarks for audio in the industry.6
### 1.5x higher brand lift
Amazon audio ads that include an Alexa call-to-action are +1.5x more likely to generate statistically significant lift in at least one brand metric such as awareness, consideration, or purchase intent compared to Amazon standard audio ad creatives.7
Explore three insights from a new study by Amazon Ads and Wondery about how audio can impact the emotions and moods of audiences.
## Get started with audio ads
From brainstorming all the way to execution, we want to make it easy for your brand to tell your story using audio ads. Connect with an Amazon Ads account executive to start building your audio ads strategy.
Already have an Amazon DSP account?
## FAQs
Amazon audio ads are non-skippable 10-to-30-second ads that play for listeners across first-party and third-party streaming audio services. On Alexa-enabled devices, Amazon Ads also offers interactive audio ads, which let listeners simply reply by voice to an Alexa call-to-action to take actions, such as adding an item to their cart, requesting more info via email as well as a push notification through the companion app, or setting a reminder—without disrupting their streaming audio content.
Advertisers can buy audio ads whether or not they sell products in the Amazon store. Audio ads require working with an Amazon Ads account executive, with a typical minimum budget of $50,000 (US). We offer both managed-service and self-service buying capabilities, which allow advertisers any necessary control and flexibility. Contact your account executive for more information. Ads are sold on a CPM (cost-per-thousand impressions) basis.
Your brand messaging will be heard during ad breaks across premium audio content, including exclusive first-party Amazon Music ad-supported tier, Twitch, News, and third-party Amazon Publisher Direct (APD). They surface across desktop, mobile, tablet, and connected TV environments, but the majority are delivered on Alexa-enabled devices. Take a look at some industry-specific examples here.
Reporting includes impressions, average impression frequency, cumulative campaign reach, audio start, audio complete, effective cost per audio complete (eCPAC), and more.
Sources: 1 Edison Share of Ear, US, Q3 2021. 2 Amazon Internal Data, US, Q1 2021 vs. Q2 2022. 3 Amazon Internal Data, US, Q2 2022. 4 Edison Share of Ear, US, Q1 2022, 2019 vs 2021. 5 MRI Simmons Cord Evolution Study, Winter 2020 vs Winter 2022, USA 18+. 6 Kantar Research, US, 2020-21. 7 3p Kantar research, 2021, US.
# Custom advertising solutions
## What are custom advertising solutions?
Amazon's Brand Innovation Lab works with brands to develop campaigns that engage consumers at multiple stages of the marketing funnel, building brand awareness, consideration, and conversion. Our global team of storytellers and innovators includes strategists, creatives, design technologists, engineers, and more, collaborating to create solutions customized to our advertisers' goals.
## Why use custom advertising solutions
Capture customers' attention and imagination with innovative, tailor-made experiences.
# Comprehensive campaigns
We develop end-to-end creative campaigns that use Amazon's first-party shopping insights to find the best way to deliver your message to your audience at the right time, in the right way.
# Innovative solutions
If we see a way for a brand to enhance the customer experience not included in our existing suite of advertising products, we'll work with our design technologists and engineers to build it.
# Work across Amazon
Our campaigns unite Amazon’s online, off-line, and subsidiary brands to create cross-channel advertising campaigns that capture customers’ attention.
Learn all the ways you can bridge your online and offline marketing through branded experiences and custom campaigns.
## How advertisers can use custom advertising solutions
We build campaigns that help drive brand awareness and engagement among Amazon customers. We combine Amazon’s retail insights with information provided in the advertiser’s creative brief to develop concepts and executions that deliver experiences that serve both your brand and your customers. We work to achieve a wide variety of goals for advertisers, from brand awareness to consideration, conversion, and retention.
Blog
From cat castles to iconic characters from the entertainment world, Amazon Ads’ on-box advertising surprises and delights customers.
In this overview, we discuss what experiential marketing is, share experiential brand marketing examples, and offer solutions from Amazon Ads.
## How to get started
Contact your Amazon Ads account executive for more information about working with the Brand Innovation Lab or get in touch with our sales team.
Businesses with a product or brand to promote can buy custom solutions, whether or not they sell products on Amazon. Custom programs require working with an ad consultant and are subject to a required minimum spend.
Campaign placements can include home page takeovers, Fire TV placements, customized destination pages, on-box advertising, and multi-channel campaigns that include in-store displays in Amazon 4-star stores. If you'd like to use assets created for custom solutions in other channels, such as social media, we can work with advertisers to negotiate usage rights.
The time of production depends on complexity and scope, and can range from several weeks to several months.
You don't need to sell on Amazon to work with our team. We’ve created custom solutions for auto manufacturers, movie and television launches, and financial services.
### Plan your advertising at scale, with insights you can trust
Our suite of planning tools helps you better understand your audiences, as well as discover new and relevant audiences. Our media planning solutions help you make informed and insights-driven decisions, helping you get the most out of every dollar spent.
# Planning solutions that benefit all advertisers
Whether or not you sell in an Amazon store, our suite of insights and planning solutions can help you grow your business.
# Connect with the right audience
Reach thousands of differentiated and unique audience segments based on a range of signals with insights that can be applied to both Amazon-owned properties and third-party sites.
# Insights to unlock your potential
Through first-party insights and billions of signals from streaming and shopping channels, you can plan impactful marketing strategies that drive business results.
## Amazon audience insights
With easy-to-use tools and billions of proprietary audience signals informed by online and offline touchpoints, Amazon helps brands build insight-driven audience strategies. Brands have the flexibility to discover new audiences while seamlessly accessing our robust menu of à la carte segments to reach relevant, known audiences.
## Media planning
Optimize your marketing strategy with our cross-channel planning solutions—tools that use insights from each stage of the customer journey to better inform how to leverage your ad dollars. By putting the customer at the center of the process, your brand can embrace experimentation, learn from measurement, and even work in tandem with your agency.
## Brand insights
Amazon Ads insights help make sense of billions of global signals to help you find the right audience, channels, budget, and customer touchpoints for your campaign on and off Amazon.
Audience insights are informed by billions of signals, including Amazon shopping signals, to build a holistic picture of a particular audience. Audience insights enable brands and advertisers to better understand their customers’ interests and behaviors as well as discover and engage customers across the purchase journey—both on and off Amazon.
Amazon audience insights are available through our advertising console as well as on Amazon DSP, providing advertisers access to a large catalog of audience segments that are easy to use and ready to go. You can select specific audiences that align to your product’s or brand’s core customer—for example, “outdoor enthusiasts” or “environmentally conscious shoppers.”
Audience insights can be used to better understand which customers or audience segments are engaging with your ads and how they’re reacting or converting. Amazon audience segments include in-market, lifestyle, interests, and life events.
### Make connections that matter
Discover and reach relevant audiences through tailored campaigns informed by insights. With billions of proprietary audience signals and a breadth of simple tools, we can help you build an insight-driven audience strategy to make more meaningful connections.
# Reach audiences where they are
Be where audiences are shopping, browsing, and streaming, and use our breadth of signals to understand how to reach them at the right place and time.
# Connect through messages that matter
Amazon Ads harnesses billions of unique, proprietary signals to help you reach relevant audiences on Amazon and beyond, even when ad identifiers are not present.
# Have a flexible audience strategy
Build your audience strategy based on your unique business objectives. Discover new audiences, leverage our robust menu of segments, and combine our signals with your insights to make better connections.
## Audience insights
Compare brand-specific insights with existing market research to analyze the difference between your audiences on- and off-Amazon. Identify new campaign opportunities and look for audience behaviors to inspire new ideas.
## Persona Builder
Dive deeper to understand which products and categories interest your audience. Brands of all sizes can create and get insights on custom, composite audience expressions that reflect today’s complex purchase pathways.
## Amazon Audiences
Access thousands of à la carte audience segments fueled by our billions of unique and proprietary signals. We then apply these signals to reach relevant audiences and drive high-value outcomes on Amazon.com and beyond.
## Advertiser Audiences
Incorporate your own audiences into your Amazon Ads campaigns by onboarding directly into Amazon DSP or through Amazon Marketing Cloud (AMC). Reach new audiences, extend reach, remarket to loyal customers, or optimize campaigns.
A target audience is a set of signals and conditions used to reach a group of consumers with relevant media and/or creative.
Your campaigns will be more successful if you have identified target audiences with a considered strategy of how to engage with them. Ads that are sent to broad audiences without considering relevance tend to have less engagement than ads sent to relevant audiences. Identifying target audiences also helps you optimize your media budget by only reaching audiences who might be interested in your product.
Audience segmentation is the practice of organizing your customers into distinct audience groups so you can reach them with your ads and campaigns. By dividing your customers into audience segments, it becomes easier to reach them with relevant ads. In contrast, ads built around a broad or generic statement are less likely to appeal to customers.
Identifying your audience will vary by advertiser, but in general you will want to understand the signals available, and evaluate past campaign behavior and your current campaign goals, in order to identify which audiences you want to reach with a given campaign.
### Build an insights-driven strategy to increase reach, performance, and sales
Amazon Ads Media Planning Suite helps advertisers reach and convert audiences across the customer journey using machine learning and tools that connect channels. Advertisers can leverage first- and third-party insights to develop a strategy for products on and off Amazon.
# A strategic blueprint to set expectations
Media planning acts as a map for how to effectively reach and engage audiences through relevant channels. This plan helps answer your campaign’s who, what, where, when, how, and why questions.
# Customer-centric approach
Amazon Ads media planning bases all decisions around the customer to create a plan that is effective, while never losing sight of the core audience.
# Context, with concept
Media planning involves a deep understanding of the target audience’s media consumption habits, preferences, and behaviors, along with a keen sense of awareness of the ever-evolving media landscape.
## Cross-channel planning
Advertisers can create cross-channel plans across display, video, and audio, on and off Amazon. This enables advertisers to reach users on a variety of platforms and devices, increasing the likelihood of engagement and conversion across the shopper journey.
## Channel planning
With the Amazon Ads channel planner, advertisers can create detailed, channel-specific media plans leveraging Amazon’s unique first- and third-party insights, optimizing how to allocate media investments across Amazon Ads. Within the same interface, marketers can capture the details of their channel plans using customizable inputs while keeping goals consistent across the various channels.
## Application programming interfaces (APIs)
Through APIs, advertisers are able to curate the right channels and tailor their messaging strategy, while seamlessly integrating their plan into agency workflows. Tech specialists can automate tedious tasks and understand how to optimize their ad budgets.
Media planning is the process advertisers undergo before buying and launching ads to gauge effectiveness and maximize ROI (return on investment). It is a critical first step in any ad campaign. The tangible outcome of the media planning process is a media plan document to guide an ad campaign.
Media planning is important because it helps set a parameter for what ad placements might work best for your brand and where to best place those advertisements.
Media buying is what happens after your media plan is complete—they work hand in hand. Media planning sets the parameters for the media buying. Media buying involves evaluating all media advertising options within your budget parameters in order to determine which audiences, ad types, and combination of media channels will help deliver the best possible campaign results, then purchasing those ads.
Media buying is important because strategically purchased media can impact a campaign’s success. It’s not enough to have compelling copy and visuals—ads must be placed in the right locations and at the right times and frequencies, so that the right audiences see the ad.
### Bring your brand story to life
From initial creative ideation and strategy, to ad production and editing, our suite of creative solutions can help you elevate your marketing and drive more engaging ad experiences on and off Amazon.
# Create engaging assets
Our production services and tools help produce effective ad creative. Use your product name or ASIN to build ad creative with our customizable templates at no additional charge.
# Explore innovative ideas
Our creative ideation and innovation services combine a deep knowledge of our ad products as well as custom creative executions to develop unique opportunities to reach your audience. Collaborating with Amazon Ads creative experts can help highlight new creative opportunities such as interactive enhancements, custom ad experiences, and more.
# Connect with your audience
Our suite of creative effectiveness solutions helps you build creative that is relevant and optimized for performance at scale.
## Creative services
Our creative experts help develop a tailored strategy for your brand and business goals. Then use our in-house creative production and partner services to adjust or design new creative assets.
## Creative tools
We offer simple tools to create or enhance your ad campaigns with compelling visual assets. These tools help simplify the creative process for brands of all sizes across various ad formats.
Amazon Ads creative solutions are a suite of tools and services designed to help advertisers of all types and sizes to bring their brand story to life. Our creative solutions include self-service tools as well as hands-on support across creative ideation and strategy, creative production and editing, ad policy, and creative effectiveness.
Creative solutions enable brands to unlock new advertising opportunities, for example, by providing video creatives to first-time video advertisers or creative strategy consultation to brands looking for new ways to tell their brand story. Additionally, creative solutions provide access to creative insights and creative testing, as well as ad policy education, helping brands run efficient campaigns that achieve their business goals.
### Bring your ideas to life
Our creative services for advertisers and brands of all types and sizes can help with everything from ideation and strategy to ad content production and editing. Reach your brand’s full creative potential with engaging, interactive, and memorable ad experiences.
# Innovative ideas tailored for your brand and audiences
With deep knowledge of Amazon Ads products, formats (e.g. video, audio, interactive), and custom executions, our creative experts leverage available audience insights to develop campaigns that perfectly fit your brand identity and business goals.
# Creative production and editing, no matter the brand size
Whether you have existing assets or are starting from scratch, Amazon’s creative production and editing services can help you produce compliant and inspiring creatives across all formats, from static banners and Stores, to video and audio.
## Creative strategy services
Creative strategy services help you develop and amplify your brand's creative presence across Amazon's suite of ad products, using insights and testing capabilities.
## Brand Innovation Lab
Going beyond the boundaries of established ad products, Amazon Ads Brand Innovation Lab helps advertisers explore completely custom, experiential creative executions by partnering with advertisers and agencies to deliver unforgettable experiences with brands customers love.
## Production and editing services
With hands-on assistance available across video, audio, and more, our in-house creative production and editing services fit a wide range of budgets.
## Creative onboarding and on-demand support
With education and hands-on support, self-service advertisers using Amazon DSP or Amazon Ads console can request on-demand creative reviews to help better ensure success for their brand’s creative.
Amazon Ads creative services enable brands to unlock new advertising opportunities by providing hands-on support across creative ideation and strategy, creative production and editing, as well as consultation on ad policies and creative effectiveness.
### Tools to capture your brand’s story
Our creative tools can help brands of all sizes and types tell their story. Whether you’re just getting started, experimenting with different formats, or looking to elevate your visuals, our simple tools help capture your brand’s beauty.
# No design experience necessary
Use your existing assets and copy to make beautiful new display and video assets in minutes
# Enhancements to your existing creative
Add seasonal messaging, include product details, and incorporate interactive elements to create engaging and relevant experiences that break through
# Multiple creative versions without the time and hassle
Our quick, no-cost, and easy-to-use tools allow you to test different visuals or call-to-action text to identify what resonates best with your audience
## Amazon Video Builder
Video Builder is a self-service tool that uses images and text to let you create video at no cost. Simply sign in to your advertising console account, select a template, upload images and text, then publish your video to a Sponsored Brands video, Sponsored Display video, or Online Video campaign. You can also customize videos with your logo, background images, or music options.
# 15,000
Videos created by over 15,000 advertisers1
# 50,000
Over 50,000 videos created using the tool2
# 99%
99% moderation approval rate3
## eCommerce Display Creative
Our eCommerce Display Creative pulls info directly from your product pages on Amazon to auto-generate display assets for desktop and mobile placements. Amazon Ads automatically optimizes between creative elements that will drive the best performance for your campaign objective. These assets can be customized to feature a tailored background, logo, headline, or other copy.
## Streaming TV Studio
Easily augment your existing video assets using customizable templates. Add overlays and end slates or build in Amazon’s trusted brand elements like ratings, product details, and voice calls to action on Alexa.
Our creative tools create or enhance ad campaigns with compelling visual assets for brands of all sizes. Brands can experiment with and instantly use these tools across various ad formats to make more engaging and relevant ads.
Creative tools allow advertisers to tailor communications and engage with audiences in a fresh way. With options that encompass visuals, audio, and text, creative tools allow brands to break through and introduce consumers to their products or services.
All of these solutions are currently free to use in the advertising console, on Amazon DSP, or via managed service through an Amazon Ads account executive.
Sources: 1-3 Amazon internal data, WW, 2021
# Your non-Amazon campaigns, measured.
## What is Amazon Attribution?
Amazon Attribution is an advertising and analytics measurement solution that gives marketers insight into how their non-Amazon marketing channels perform on Amazon.
### Measure
Understand the impact of your cross-channel digital marketing activities.
### Optimize
Make in-flight optimizations using on-demand advertising analytics to help maximize impact and ensure efficiency.
### Plan
Learn which of your strategies maximize return on investment and drive sales to build future marketing plans.
In this course for beginners, learn how to create attribution tags to measure your non-Amazon advertising media, and how to gain insights to optimize performance.
## How advertisers can use Amazon Attribution
While Amazon Advertising helps drive consideration for your brand and products across multiple touch points, we know there are a number of non-Amazon channels that also play key roles in the shopping journey. With Amazon Attribution measurement, you can gain visibility into how these non-Amazon touch points help customers discover and consider your products on Amazon. Using these advertising analytics and insights, you can optimize and plan your digital strategy based on what you know resonates with your customers and drives value for your brand on Amazon.
Understand which non-Amazon strategies are helping you reach your goals.
Access full-funnel advertising analytics with metrics including clicks, detail page views, Add to Carts, and sales.
Discover new sales opportunities by learning more about how shoppers engage with your brand on Amazon.
Get insight into campaign performance in-flight with Amazon conversion metrics for your campaigns.
Grow return on investment by ensuring your marketing campaigns are driving value for your brand on Amazon.
## Get started with Amazon Attribution
Once your account is created, sign in to add the products for which you want to measure conversions. From there, generate tags for each of your marketing strategies, and then implement tags across your search ads, social ads, display ads, video ads, and email marketing.
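To make this concrete, the sketch below shows, in a hedged way, how a console-generated attribution tag is typically appended to an ad’s click-through URL. The tag value and parameter names (`maas`, `ref_`) are placeholders for illustration only; real tags are generated per strategy in the Amazon Attribution console.

```python
# Hypothetical sketch: appending a console-generated Amazon Attribution tag to an
# ad's destination URL. The parameter names and values below are placeholders;
# real tags come from the Amazon Attribution console.
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def tag_destination_url(destination_url: str, attribution_tag: dict) -> str:
    """Return the destination URL with the attribution tag parameters appended."""
    parts = urlparse(destination_url)
    query = dict(parse_qsl(parts.query))
    query.update(attribution_tag)  # add the tag parameters to any existing query string
    return urlunparse(parts._replace(query=urlencode(query)))

# Placeholder tag for illustration only.
example_tag = {"maas": "EXAMPLE_TAG_VALUE", "ref_": "aa_maas"}
print(tag_destination_url("https://www.amazon.com/dp/B000000000", example_tag))
```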
Already registered?
Amazon Attribution is currently available for professional sellers enrolled in Amazon Brand Registry, vendors, KDP authors, and agencies with clients who sell products on Amazon. Eligible sellers and vendors can access Amazon Attribution measurement through either the self-service console or tool providers integrated with the Amazon Ads API.
Amazon Attribution measures non-Amazon Ads media such as search ads, social ads, display ads, video ads, and email marketing.
At this time, there is no cost associated with participating in Amazon Attribution.
Amazon Attribution reports include clicks, as well as Amazon conversion metrics, such as detail page views, Add to Carts, and purchases. Reporting is available via downloadable reports and within the console.
Yes, you can grant permission for users directly within the console. Select "Manage" at the top navigation, and then select "User management" to add users.
Yes, Amazon Attribution is now available through the Amazon Ads API. If you’re a tool provider, visit our API getting started guide to learn more.
Amazon Brand Lift studies are designed to be an insightful, easy, quick, and privacy-safe way for advertisers to quantify the impact of upper- and mid-funnel campaigns.
## Why should I use Amazon Brand Lift?
Your brand can use Amazon Brand Lift to measure customer awareness, attitudes, preferences, favorability, intent, and ad recall. With participation from the sizable, representative, and engaged Amazon Shopper Panel community, Amazon Brand Lift helps provide objective and concrete measurement results.
# Detailed insights for your business goals
Amazon Brand Lift allows you to see the overall lift in brand metrics from your ads as well as analyze the aggregated survey results by audience segments including age range, household income, gender identity, ad frequency, ad type, and device.
# Easy and intuitive reports
Choose which campaign to measure, build the survey questions using pre-populated templates, and review your study results on demand. You can also receive results directly through the Amazon Ads API.
# Quick turnaround
You can request a study while campaigns are upcoming or in flight, and results are ready in as few as 10 business days after submission—giving you a fast understanding of ongoing performance.
# Privacy by design
The Amazon Shopper Panel is an opt-in program that gives panelists clear explanations about how Amazon will use their reporting. All reporting is anonymized and aggregated.
## How do I create an Amazon Brand Lift campaign?
1. Go to Studies in the Amazon DSP console, and select “Create study”
2. On the “Create study” page, under “Brand lift,” select “Continue”
3. Choose orders to study
4. Name your study, products, or brands, and select benchmark categories
5. Choose questions and objectives
6. Submit your study for review
7. Interpret your study summary
## What ad campaigns are eligible for Amazon Brand Lift?
## Who can use Amazon Brand Lift?
Amazon Brand Lift is available to all advertisers running Amazon DSP campaigns (managed service and self-service) in the US, UK, and CA. It is also available via Amazon Ads API.
Amazon Brand Lift is Amazon’s first-party brand lift solution, available in the Amazon DSP console. Amazon Brand Lift quantifies the impact of upper- and mid-funnel ad campaigns on key brand objectives like awareness, intent, and ad recall.
Amazon Brand Lift is free as long as campaigns meet spend and impression minimums. Minimums vary by marketplace; please visit the Amazon DSP or contact your account executive to learn more.
Amazon Brand Lift calculates results and generates a report that shows absolute lift, which is the difference between the ad-exposed group’s and control group’s response rate to the qualifying responses for a given question. Results will show a summary of lift for each question, plus a deeper dive on individual questions, including demographic and audience breakouts.
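As a quick illustration of that arithmetic, the sketch below computes absolute lift from invented survey counts (the figures are not from any real study).

```python
# Absolute lift, as described above: the ad-exposed group's qualifying response
# rate minus the control group's qualifying response rate. All figures invented.
exposed_qualifying, exposed_total = 450, 1_000
control_qualifying, control_total = 380, 1_000

exposed_rate = exposed_qualifying / exposed_total    # 45.0%
control_rate = control_qualifying / control_total    # 38.0%
absolute_lift = exposed_rate - control_rate          # 0.07, i.e., 7 percentage points

print(f"Absolute lift: {absolute_lift:.1%}")
```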
Brand lift studies help advertisers measure the impact of their ad campaigns on shoppers’ perceptions of their brand. They evaluate whether customers who have seen an ad are more likely to respond favorably when asked about a brand, compared to a similar group of customers who haven’t seen an ad. This helps brands understand how brand awareness, perception, and loyalty are changing over time.
Brand lift studies use surveys, issued to both an ad-exposed audience and an unexposed control group, to measure the impact of advertising on customer perceptions of your brand.
## Near real-time access to hourly metrics and campaign messages
Amazon Marketing Stream is a push-based messaging system that delivers hourly Amazon Ads campaign metrics and information on campaign changes in near real time, through the Amazon Ads API.
## Optimize campaigns more effectively
Obtain performance metrics, summarized hourly, that provide intraday insights to help drive advanced campaign optimization.
## Respond quickly to campaign changes
Access timely information, such as budget consumption, that can help advanced API users write responsive applications to drive further optimization.
## Improve operational efficiency
Push-based information delivered in near real time eliminates the need to aggregate metrics over time to understand hour-over-hour changes in performance.
[Example data visualization: conversion rate (%) and cost per click (CPC, $) by hour]
## How to use Amazon Marketing Stream
### Manage campaigns intraday
Analyze hourly variations to help optimize your campaigns, for example by increasing bids during hours with stronger performance.
### Build responsive applications
Take actions in near real time on changes such as campaigns going out of budget.
### Keep local reporting store in sync
Help maintain campaign reporting in sync with Amazon Ads’ reporting on a continuous basis as Amazon Marketing Stream pushes changes to dimensional campaign information in near real time.
### Develop push notifications
Keep advertisers informed by setting up timely, trigger-based notifications within your campaign management interface.
## Examples of how Amazon Marketing Stream is used
Solution provider Quartile used Amazon Marketing Stream’s hourly performance metrics to update advertiser campaigns more frequently, helping improve campaign efficiency.
Utilizing Amazon Marketing Stream helped solution provider Flywheel Digital and their clients meet campaign goals, helping increase clicks, sales, and ROAS while keeping media spend consistent.
## Get started with Amazon Marketing Stream
Visit our Amazon Marketing Stream onboarding guide, which walks through the process of getting API access, integrating with AWS, and subscribing to campaign data sets using the Amazon Marketing Stream API.
Amazon Marketing Stream is currently limited to tool providers and direct advertisers who are integrated with the Amazon Ads API.
To help set up for success with Amazon Marketing Stream, here are a few things you need:
* Technical integration with the Amazon Ads API: If you already have this, there is no separate access for Amazon Marketing Stream required. You’ll use the same access token to subscribe to Amazon Marketing Stream metrics. If you’re not yet integrated, follow the steps to onboarding available in the Amazon Ads advanced tools center.
* An AWS account, which is required to create an SQS end point where you wish to receive Amazon Marketing Stream data sets.
* Access to software development resources.
Yes—at least initially, you’ll need a developer resource to establish the Amazon Marketing Stream subscription API connection. In addition, we recommend a solid understanding of how AWS works in order to most efficiently use Amazon Marketing Stream.
You can use any type of AWS database to receive Amazon Marketing Stream data sets.
Once the Amazon Marketing Stream subscription API returns successfully, you’ll start receiving data sets at your AWS SQS endpoint. Alternatively, the data can be moved from the SQS endpoint into S3 and queried there with a query service. You can decide which method is best for your unique use case.
You can create services based on near real-time reporting and campaign changes in your AWS database. This means creating responsive applications to monitor incoming information in near real time.
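For example, a responsive application can simply poll the SQS queue that receives Amazon Marketing Stream messages and react to each one. The sketch below is a minimal illustration using boto3; the queue URL is a placeholder and the message fields accessed (such as `dataset_id`) are assumptions, so check the Amazon Marketing Stream documentation for the actual message schema.

```python
# Minimal sketch of a responsive consumer for Amazon Marketing Stream messages
# delivered to an AWS SQS queue. Queue URL and message fields are assumptions.
import json
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/marketing-stream"  # placeholder

sqs = boto3.client("sqs", region_name="us-east-1")

def poll_once() -> None:
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
    )
    for message in response.get("Messages", []):
        body = json.loads(message["Body"])
        # React here: update a local reporting store, or trigger a bid/budget
        # adjustment through the Amazon Ads API.
        print("Received stream message:", body.get("dataset_id", "<unknown>"))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])

if __name__ == "__main__":
    poll_once()
```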
Amazon Marketing Stream can currently be used to obtain hourly changes to Sponsored Products performance metrics (traffic and conversion). These insights can help inform campaign optimizations, such as automating hourly bids through the Amazon Ads API. For Sponsored Display and Sponsored Brands reporting, you can leverage the ad console and reporting API, which will provide daily performance metrics.
# Campaign reporting
### Return on ad spend (ROAS) explained
ROAS (return on ad spend) is a metric that shows the effectiveness of an advertising campaign by dividing ad-attributed revenue by ad spend.
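As a quick worked example (with invented figures):

```python
# ROAS = ad-attributed revenue / ad spend. Figures below are illustrative only.
ad_attributed_revenue = 12_000.00  # sales attributed to the campaign
ad_spend = 3_000.00                # total campaign spend

roas = ad_attributed_revenue / ad_spend
print(f"ROAS: {roas:.1f}")         # 4.0, i.e., every $1 of ad spend returned $4 in revenue
```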
Campaign reporting provides advertisers with metrics available through the Amazon Ads console and Amazon DSP reporting. Find information on standard industry metrics such as impressions, clicks, and sales, as well as proprietary Amazon metrics such as new-to-brand, brand halo, and Subscribe & Save.
Our reporting and measurement solutions help all marketers—from small businesses to leading advertising agencies to established brands—accurately assess the impact of their campaigns and make it easy to plan, optimize, and gauge marketing strategies.
Success metrics fall into two categories: industry standard (click-through rate, return on ad spend, detail page view rate, etc.) and e-commerce goals unique to Amazon (advertising cost of sales, Subscribe & Save, etc.).
New-to-brand metrics are generated from first-time customers of your brand and help you understand ad-attributed purchases.
Gross and invalid traffic metrics enable reconciliation of Amazon DSP metrics with third-party metrics. Invalid traffic (IVT) metrics are nuanced and can be hard to interpret, so reconcile gross metrics before using IVT metrics to rule out measurement-related discrepancies.
Reach and frequency metrics show the audience volume that was exposed to ads, as well as the quantity of exposures.
Viewability metrics include the percentage of total impressions served that are viewable.
Conversion metrics measure attributed ad impact off Amazon, such as on the advertiser’s website or through a measurement partner.
Campaign reporting and measurement solutions are tools to accurately measure the impact of advertising and make it easy to plan and optimize campaigns and marketing strategies.
Our reporting includes traffic performance insights of advertised products, such as performance by keyword and detail page views, as well as advanced retail and attribution sales insights to compare activity on Amazon before, during, and after campaigns.
Campaign reporting includes both industry standard metrics and Amazon proprietary metrics. Campaign reporting helps you better understand your campaign’s impact on how customers discover, research, and purchase your products.
Amazon Attribution, Amazon DSP, audio ads, Sponsored Brands, Sponsored Display, Sponsored Products, Stores, and video ads.
Campaign reporting metrics are available through the Amazon Ads Console and Amazon DSP reporting.
Omnichannel Metrics (OCM) are a way for advertisers to measure the aggregated, total impact of their ad tactics on shopping activities on and off Amazon while campaigns are still mid-flight.
## Why should I use Omnichannel Metrics?
OCM enables advertisers to measure the aggregated, total impact of their ad tactics. OCM insights can help advertisers adjust budget allocation, optimize campaign tactics, and maximize media investment ROI.
# Create a full-funnel strategy
OCM creates a unified view of your campaigns, offering a holistic full-funnel marketing strategy. Advertisers can measure impact across upper-, mid-, and lower-funnel strategies and tactics.
# Turn measurements into actionable solutions
Line-item level reporting in OCM shows the relationship between ad audiences, placements, and creatives on outcomes. OCM enables the agility to optimize and drive campaign efficiency.
# Access a holistic view
OCM includes insights from across channels both on and off Amazon, as well as shopping activities online and offline.
# Prioritize your security
All Amazon Ads products offer privacy and security, and all reporting is aggregated and anonymized.
## How do I start using Omnichannel Metrics?
### Create an Omnichannel Metrics campaign
Go to the “Studies” tab of the Amazon DSP campaign manager, and select “Create study” under “Omnichannel sales.” Enter your brand or products, then select which orders to include.
### Submit your study
After you submit your study, it will be automatically reviewed and approved or rejected within one business day. You may edit your study at any time prior to the start date.
### Optimize your in-flight campaign
Make line-item adjustments to help improve your return on ad spend (ROAS). You can also use Amazon DSP budget optimization to automatically shift budget between line items.
## Who can use Omnichannel Metrics?
Omnichannel Metrics is available for CPG and grocery advertisers, including both managed-service and self-service accounts, with eligible Amazon DSP campaigns. OCM is currently available in the U.S.
Omnichannel marketing is a cohesive strategy across all your advertising channels, and OCM from Amazon Ads provides a holistic view of its measurement and performance. OCM provides measurement to influence the optimization of campaigns both on and off Amazon.
Omnichannel Metrics charges a usage fee. The fee is 4% of media spend and applies to all impressions measured using OCM.
OCM provides line-item reporting, using metrics to help connect the performance of ads, placements, and creative in your campaigns. Advertisers can then set up an OCM study, and reports will be refreshed and delivered on a weekly basis, even while campaigns are mid-flight.
Multichannel marketing is a strategy that runs campaigns across multiple, separately managed channels, while omnichannel marketing coordinates those channels into a single, holistic customer experience.
OCM combines first-party and third-party signals to obtain a holistic view of ad-attributed impact. OCM reporting shares metrics for Amazon retail, off-Amazon, and combined (i.e., omnichannel) metrics. Off-Amazon metrics have a projection factor to compensate for unmatched audiences for ad attribution and/or incomplete third-party transaction coverage.
OCM is available for CPG and grocery advertisers, including both managed-service and self-service accounts, with eligible Amazon DSP campaigns. OCM is eligible to measure campaigns across Streaming TV, video, audio, and display Amazon DSP campaigns.
### Technology solutions and flexible insights-driven tools to help grow your business
Our suite of ad tech solutions can support your full-funnel marketing goals on or off Amazon. Launch streamlined campaigns through the Amazon Ads console, set up advanced programmatic buys on Amazon DSP, or get managed service through an Amazon Ads account executive or accredited third-party partner. Unlock a holistic view of your media investment and audiences with Amazon Marketing Cloud (AMC).
## Amazon DSP
Programmatically buy ads to reach new and existing audiences on and off Amazon. Use exclusive insights and shopping signals to connect with the most relevant audiences.
## Amazon Ad Server
Amazon Ad Server (AAS) is a global, multichannel ad server used to create, distribute, customize, measure, and optimize campaigns across a variety of screens.
## Amazon Marketing Cloud
Amazon Marketing Cloud (AMC) is a secure, privacy-safe, and cloud-based clean room solution, with which advertisers can easily perform custom analytics and build custom audiences using pseudonymized Amazon Ads signals and their own inputs to plan, activate, measure, and optimize cross-media ad investments.
## Amazon Ads API
Automate, scale, and optimize your advertising from campaign management and performance data to reporting. Our API enables users to develop flexible solutions to meet your needs and goals, and to integrate more deeply with Amazon Ads.
## Managed service
Our managed-service option is designed for advertisers who want access to Amazon DSP with consultative resources or for those with limited programmatic advertising experience. Budget minimums apply. Contact an Amazon Ads account executive for more information.
Advertising technology, also known as ad tech, is used to buy, manage, and measure digital advertising. The term describes the tools and software that advertisers use to reach audiences to deliver and measure digital advertising campaigns. Common ad tech tools, such as demand-side platforms, are technologies that enable advertisers to buy impressions and select audiences across many publisher sites.
Marketing technology, also known as martech, describes the software that marketers use to optimize their marketing efforts and achieve their objectives. It leverages technology to plan, execute, and measure campaigns and other marketing tactics.
# Amazon Marketing Cloud
## What is Amazon Marketing Cloud?
Amazon Marketing Cloud (AMC) is a secure, privacy-safe, and cloud-based clean room solution, in which advertisers can easily perform analytics and build audiences across pseudonymized signals, including Amazon Ads signals as well as their own inputs.
### Holistic measurement
Built on Amazon Web Services (AWS), AMC can help advertisers with campaign measurement, audience analysis, media optimization, and more.
### Flexible analytics
Structure custom queries to explore unique marketing questions and address top business priorities. Use our instructional queries to build queries faster.
### Cross-media insights
Conduct analysis with signals across video, audio, display, and search to gain a holistic and in-depth understanding of the customer journey.
### Custom audiences
Utilize ad engagement and conversion signals across channels and sources to build bespoke audiences for direct activation via Amazon DSP.
### Insight expansion
Subscribe to Paid Features (beta) powered by Amazon Ads and third-party providers to expand the scope and depth of insights.
### Privacy-safe environment
AMC only accepts pseudonymized information. All information in an advertiser’s AMC instance is handled in strict accordance with Amazon’s privacy policies, and your own signals cannot be exported or accessed by Amazon. Advertisers can only access aggregated, anonymous outputs from AMC.
Build your foundational clean room knowledge or take specialized trainings designed for analytics practitioners, developers, and admins, based on your role and interests.
Certification
Our free AMC Certification helps advertisers with SQL experience learn to analyze marketing efforts across channels and generate insights, and then validates AMC knowledge with an assessment.
## How advertisers can use Amazon Marketing Cloud
### Campaign deep-dive
Inspect campaign reach, frequency, and total impact across the marketing funnel.
### Media mix analysis
Understand a media channel’s incremental value and the effectiveness of different media combinations.
### Audience insights
Learn about the composition of ad-exposed audiences and attributes of engaging audience groups.
### Journey assessment
Analyze the sequence, frequency, and types of audience interactions on path to conversion.
### Custom attribution
Tailor how you credit different touch points to understand the full contribution of different media and campaigns.
### Omni-channel impact
Evaluate how Amazon Ads campaigns drive engagement and sales wherever customers spend their time.
Goodway Group used Amazon Marketing Cloud to help SpoonfulONE deepen understanding of media investments and identify opportunities for business growth.
Buy Box Experts helped a home security company reach new shoppers and improve advertising ROI using Amazon Marketing Cloud.
## Get started with Amazon Marketing Cloud
Contact your Amazon Ads account executive to learn more.
Already have an Amazon Marketing Cloud account?
## FAQ
Amazon Marketing Cloud is available at no cost to eligible advertisers via web-based UI and API. Advertisers will have a dedicated AMC clean room environment set up for them. Advertisers need to have an executed Amazon DSP Master Service Agreement (MSA), planned campaigns or campaigns live in the last 28 days at Amazon DSP, and a technical resource familiar with SQL. For AMC API users, advertisers should also have an Amazon Web Services (AWS) account. Contact your Amazon Ads account executive for more information.
Advertisers can access hundreds of fields about their Amazon Ads campaigns via Amazon Marketing Cloud including ad-attributed impressions, clicks, and conversions. Advertisers can also choose to upload their own pseudonymous inputs into a dedicated Amazon Marketing Cloud clean room environment and join the inputs with Amazon Ads campaign signals for analysis.
Amazon Marketing Cloud prioritizes privacy and security by design. AMC only accepts pseudonymized inputs, and all information in your AMC instance is handled in strict accordance with Amazon’s privacy policies. You can only access aggregated and anonymous outputs from AMC. The information you choose to upload stays within your dedicated AMC instance, and cannot be exported or accessed by Amazon.
Amazon Marketing Cloud can report on cross-channel media performance and complement the reporting capabilities available in Amazon DSP. AMC reports are outputs of SQL query-based analysis. The results are delivered via a downloadable CSV file and/or sent to your Amazon Web Services S3 bucket. Amazon DSP reporting contains pre-aggregated standard metrics, delivered via Amazon DSP console on-demand or API, and can report on performance of your media purchase and operations in Amazon DSP.
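To make the SQL-based workflow more concrete, here is a hedged sketch of the kind of aggregate query an analyst might submit to an AMC instance. The table and column names below are placeholders, not actual AMC schema names; consult the AMC instructional queries for real field names.

```python
# Hedged sketch of an AMC-style aggregate query. Table and column names are
# placeholders; real schema names come from the AMC instructional queries.
AMC_QUERY = """
SELECT
    campaign,
    COUNT(DISTINCT user_id)  AS reached_users,       -- outputs are aggregated and anonymized
    SUM(impressions)         AS total_impressions,
    SUM(total_product_sales) AS attributed_sales
FROM amc_impressions_and_conversions                 -- placeholder table name
GROUP BY campaign
HAVING COUNT(DISTINCT user_id) >= 100                -- respect aggregation thresholds
"""

# The query text would be submitted through the AMC web UI or the AMC API;
# results are returned as a CSV download or delivered to your S3 bucket.
print(AMC_QUERY)
```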
AMC allows advertisers to expand and customize audience options on top of using the Amazon DSP audiences. Advertisers can use cross-source and cross-media signals, such as ad engagement signals and advertisers’ own signals, over a lookback window of up to 12.5 months, to build audiences in AMC. Users have control over how to construct custom queries, and the flexibility to generate audiences that best meet their advertising needs and business goals. We recommend that advertisers start by exploring Amazon DSP audience options first, and use AMC to build custom audiences that address more sophisticated use cases involving multi-channel advertising, ad engagement considerations, detailed segmentation, or other bespoke needs.
Amazon Marketing Cloud's custom attribution focuses on assessing Amazon Ads campaign performance, while Amazon Attribution measures non-Amazon media’s effectiveness in driving on-Amazon engagement. We view these solutions as complementary and recommend using the two together for a more comprehensive view of marketing attribution.
# Amazon Ads API
## What does the API offer?
The Amazon Ads API provides a way to automate, scale, and optimize advertising. Campaign and performance data for Sponsored Products, Sponsored Brands, and Sponsored Display are available through the API, enabling programmatic access for campaign management and reporting. Amazon Attribution (beta) insights are also available through the Amazon Ads API. Amazon Attribution can help measure the full-funnel impact of non-Amazon Ads media such as search ads, social ads, display ads, video ads, and email marketing. Insights throughout the shopping journey, including clicks, detail page views, and purchases, can be used to optimize campaign ROI.
The API enables users to develop flexible solutions that meet their needs and goals, and to integrate more deeply with Amazon Ads. The API offers most of the functionality of the advertising console while enabling programmatic management, allowing advertisers to manage ads or ad groups based on pre-defined conditions.
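As a rough sketch of what programmatic management can look like, the snippet below lists Sponsored Products campaigns over HTTPS. The endpoint path, header names, and token handling follow publicly documented Amazon Ads API conventions as best understood here; treat them as assumptions and confirm against the current API reference before use.

```python
# Hedged sketch: listing Sponsored Products campaigns via the Amazon Ads API.
# Endpoint, headers, and credentials below are illustrative placeholders.
import requests

API_HOST = "https://advertising-api.amazon.com"      # North America endpoint (assumed)
ACCESS_TOKEN = "Atza|EXAMPLE_LWA_ACCESS_TOKEN"        # obtained via Login with Amazon OAuth
CLIENT_ID = "amzn1.application-oa2-client.EXAMPLE"    # your LWA client ID
PROFILE_ID = "1234567890"                             # advertising profile (scope)

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Amazon-Advertising-API-ClientId": CLIENT_ID,
    "Amazon-Advertising-API-Scope": PROFILE_ID,
}

response = requests.get(f"{API_HOST}/v2/sp/campaigns", headers=headers, timeout=30)
response.raise_for_status()
for campaign in response.json():
    print(campaign.get("campaignId"), campaign.get("name"), campaign.get("state"))
```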
The Advertising API is a robust tool for users with the development resources to manage implementation and versioning updates.
## Typical API users include
* Advertising solution providers who integrate with the API and offer paid management and reporting tools to agencies and advertisers who lack the development resources and bandwidth to directly integrate with the API
* Agencies that have internal engineering resources and that manage a significant volume of Sponsored Products and Sponsored Brands campaigns for advertising clients
* Advertisers who manage their ad spend, have internal engineering resources, and directly run a significant volume of Sponsored Products and Sponsored Brands campaigns
## Request API access
Select the option that best describes your company:
### Partner
Businesses that build applications and software solutions to automate and help optimize advertising on behalf of others can access the Amazon Ads API as a third party. Request registration to call the Amazon Ads API and license your application to others.
Request Amazon Ads API access via Partner Network
### Direct advertiser
Advertisers can use the Amazon Ads API to automate, scale, and optimize advertising activities and reporting. Request registration to access the Amazon Ads API on behalf of your own advertising account.
Request API access
## Frequently asked questions
There are no fees for utilizing the API. Standard Selling on Amazon account fees, as well as the campaign costs for using Sponsored Products, Sponsored Brands or Sponsored Display will apply.
We offer a sandbox environment for development and testing purposes. Ads created in this environment will not show on Amazon, and therefore, no fees will apply.
* If you are using a third-party solution to manage your selling on Amazon inventory, order, and pricing information, or are using an agency to manage your online advertising, we recommend asking your partner if they offer API integration in their solution. For a list of regions in which the API is currently available to solution providers and advertising agencies, see the “API Endpoints” section of “How to use the Amazon Ads API.”
* Alternatively, you can use the bulk operations feature for Sponsored Products and Sponsored Brands; sellers can access this in Seller Central, while vendors can access this in the Advertising Console.
If you’re looking for an agency to help plan and manage your Amazon Ads campaigns or a tool provider who can offer unique technology solutions, you can turn to the Amazon Ads partner directory for support. The partner directory is an online resource that makes it easy to find a partner that’s right for your business. When searching on the partner directory, you can filter by criteria like “products,” “marketplaces,” or “service model” to identify a partner most relevant to your business.
Developers interested in building applications using the Amazon Ads API should request access. Advertisers looking to work with a third-party tool that uses the API do not need to request API access.
### Maximize the impact of your creative
Amazon Ad Server allows you to create, distribute, and optimize your messaging, including ad serving, creative authoring, and dynamic creative optimization.
## What is Amazon Ad Server?
Amazon Ad Server is a global, multichannel ad server used to build, distribute, and measure campaigns across demand-side platforms (DSPs) and publishers worldwide. Amazon Ad Server (formerly Sizmek Ad Suite) offers brands and agencies the tools for creating engaging ads and centralizing applicable insights.
## Why should I use Amazon Ad Server?
Amazon Ad Server gives you the creative control, flexibility, and efficiency to engage audiences with relevant, impactful ads.
### Build creative easily, at any skill level
Our tools provide full creative flexibility and ease of use that fit your workflow, whether you have a robust creative team or limited resources.
### Optimize campaigns to connect with audiences
Launch campaigns quickly and make in-flight adjustments with total control and efficiency, so you can serve the right creative at the right time.
### Analyze performance with centralized reporting
Power your measurement solutions with on-demand reporting that generates the insights you need to better understand campaign delivery and performance.
## Benefits of Amazon Ad Server
Amazon Ad Server offers flexible solutions for creative authoring, campaign management, and cross-media measurement.
### Easy creative authoring
Amazon Ad Server provides intuitive, efficient creative authoring tools that team members of any skill level can use to build and upload engaging creative.
### Creative relevance and optimization
Our Dynamic Creative Optimization (DCO) technology helps you scale and optimize creative using a variety of shopping and streaming signals.
### Creative analytics
Amazon Ad Server delivers creative insights by ad, version, or asset sliced against dimensions like audience and site.
### Efficient campaign management
Common, repetitive tasks are handled with rules-based automation to help you focus on strategic objectives and launch faster.
### Cross-media analytics
Amazon Ad Server provides consolidated, MRC-accredited, media-agnostic reporting that can be exported to internal tools or combined with Amazon-specific insights in Amazon Marketing Cloud.
### Tailored service
Amazon Ad Server is fully self-service with an intuitive, efficient user interface (UI). Our technology is also backed by expert service that you can tailor to fit your needs, from completely full service to à la carte assistance.
## Sign in to Amazon Ad Server
To learn more about Amazon Ad Server, get in touch with our sales team.
Amazon Ad Server is available globally to advertisers and agencies running digital campaigns. Our creative tools are available to those responsible for developing ads, and APIs and available data feeds are directly accessible to advertisers and their partners.
Sizmek Ad Suite is the former name of Amazon Ad Server.
Our DCO solution provides you with the ability to create thousands of ads, all customized to your specific audiences based on a variety of real-time signals. Any aspect of an ad can be dynamic, and driven by external inputs or manually configured. Dynamic creative ads have multiple options to control optimization and offer version-level reporting and version-level control.
We provide high-impact ad experiences on a variety of screens by offering adaptive and responsive ad units and auto-transcoding for optimal video quality.
Amazon Ad Server is certified for measurement and reporting of clicks, and of display, video, and rich media served advertising impressions within desktop, mobile web, and mobile in-application environments (excluding CTV/OTT environments); display and video viewable impressions and related metrics (desktop only); and the detection of general invalid traffic.
An ad server is used by advertisers and publishers to optimize, manage, and distribute ads across a multitude of paid channels. It uses advertising campaign parameters to dynamically serve relevant ads.
As audiences browse the web, listen to podcasts or stream video, ad servers talk to one another to showcase a digital campaign in fractions of a second. This process loops in ad servers to dynamically serve up the most relevant ads by adapting to campaign parameters, simplifying the ad buying and planning process for digital advertising.
# Industry insights
### Automotive
Build awareness and reduce friction in the purchase journey to help sell more cars and see satisfied customers drive away.
Explore inspiring case studies, detailed research, and informational courses to help you understand how to use Amazon Ads solutions to craft marketing strategies specific to your industry.
How can Amazon Ads fit into the marketing strategy for your automotive business? Learn about the advertising solutions we can offer automotive marketers, on and off Amazon.
## Rev up with your creative engine
With ad-creation advancements like new interactive features that improve automation while remaining user-friendly, Amazon Ads can help fine-tune marketing strategies. Innovative opportunities are endless when partnering with the Amazon Ads Brand Innovation Lab for custom creative offerings.
## The automotive industry today
Automotive marketing helps you connect with prospective car shoppers, both near-market audiences (shoppers who might be 3-6 months out from purchasing a car) and in-market shoppers (higher-intent shoppers who are less than 3 months from purchasing a car). In today's automotive industry, brands and dealerships are innovating vehicle technology while at the same time adapting their buying process in order to match changing customer preferences and the digital marketing landscape. Meanwhile, advertising remains an important channel for brand messaging and building relationships with customers.
## Automotive marketing trends
2021 forecasted global light vehicle sales are expected to exceed 68MM.1
65% of US dealers expect accelerated development of online vehicle sales and booking platforms.2
14% increase in total expected automotive ad spending worldwide in 2021.3
Explore Amazon Ads’ courses and certifications to learn how to best use our products and solutions to reach more customers and optimize your strategy.
## Challenges facing automotive brands
### Shift from linear TV
TV ads have been a staple of automotive marketing for decades due to TV’s effectiveness at delivering memorable ads and messaging. But viewers are shifting away from linear to streaming services, spending 57% more time streaming in 2020 compared to the previous year.4
### Driving early consideration
The automotive shopping journey has moved increasingly to online channels, and potential customers are researching and forming opinions heading into their purchases. In fact, nearly half (48%) of car buyers were already decided on the vehicle make or model prior to purchase.5
### Vehicle electrification
While electric vehicles (EVs) represented only 2.5% of all global new car sales in 2019, they are forecast to represent 32% by 2030.6 This change is being driven by a combination of global shifts in consumer sentiment and energy policy.
## Insights for automotive marketers
83% of Amazon shoppers are either cord-cutters—audiences who switch from pay TV subscriptions to streaming media services—or cord-stackers—audiences who subscribe to both pay TV and streaming TV.7
Frequent Amazon shoppers visit an average 2.6 car dealerships versus only 2 for less-frequent Amazon shoppers.8
Frequent Amazon shoppers are 3 times more likely to purchase an electric or hybrid vehicle compared to less-frequent Amazon shoppers.9
63% of car buyers experienced a life-changing event such as a family, home, or hobby change before purchasing a vehicle.10
## Automotive advertising strategies
### Build awareness with streaming TV
Use Amazon Streaming TV ads to engage shopping, automotive, and lifestyle audiences that you can’t access with any other media provider. You also have the opportunity to reach cord-cutters and cord-nevers who use Fire TV devices. Plus, with 55MM+ viewers of ad-supported streaming content, we can help your campaigns scale at a designated market area (DMA) level.11
### Discover unique audiences
Access unique vehicle in-market audiences using shopping signals from Amazon Vehicles and Amazon's Your Garage. Amazon can also help you develop a near-market strategy that aligns vehicle promotions with your customers’ life events and interests, such as advertising EV models to shoppers who have indicated interests in green or Climate Pledge Friendly products.
### Advertise programmatically across screens
With Amazon DSP, you can gain a singular view of your audiences and connect seamlessly between desktop and mobile devices. Amazon Ad Server is a global, multichannel ad server that offers creative tools, campaign management and optimization, and Media Rating Council-accredited measurement.
Sources:
1 Scotiabank; Bloomberg; Ward’s, Global, October 2020
2 IHS Markit, US, October 2020
3 WARC’s Global Ad Trends: State of the Industry 2020/1, Global, November 2020
4 Conviva’s State of Streaming, Global, Q3 2020
5 Kantar and Amazon Advertising auto shopping study, US, April 2020
6 Deloitte Insights – Electric Vehicles, Global, July 2020
7 Kantar and Amazon Advertising TV viewers study, US, August 2020
8 Kantar and Amazon Advertising auto shopping study, US, April 2020
9 Kantar and Amazon Advertising auto shopping study, US, April 2020
10 Kantar and Amazon Advertising auto shopping study, US, April 2020
11 Nielsen Media Impact and Amazon internal, US, 2021
Beauty is a rapidly growing global industry. Revenue from the beauty and personal care industry amounted to more than $500 billion in 2021.1 And the industry has an expected compound annual growth rate of 4.76%.2 Discover how your beauty brand can stand out with the help of Amazon Ads.
## The beauty industry today
Beauty shoppers are increasingly purchasing products online, and brands have an opportunity to stay top of mind for shoppers along their customer journey. Based on a survey we conducted with Kantar:
### 28%
Online shoppers are looking for lower prices.3
### 22%
Wanted to avoid shopping in-store.4
### 21%
Are searching for hard-to-find brands and products that brick-and-mortar stores don’t typically carry.5
### 39%
Searching for products that are available online.6
## Beauty marketing on Amazon
Amazon is a key retailer for beauty product discovery.7 The beauty category at Amazon includes mass beauty and luxury beauty brands. And these brands span cosmetics, hair care, skincare, fragrance, beauty appliances, and nail.
## Understanding beauty marketing
45% of beauty purchases are semi-planned. Semi-planned purchases include purchases made by shoppers who wanted to buy a product but didn’t know exactly which one, or who knew the brand or product they wanted but didn’t know which model.8
## Beauty marketing by the numbers
84% of premium skincare shoppers that visited Amazon recall seeing an ad.9
1 in 2 premium skincare shoppers own an Amazon device such as Alexa or Fire TV.10
Amazon beauty shoppers said hydration is their number 1 skincare need.11
## Challenges facing beauty brands
### Omni-channel shopping
Advertisers are challenged to adapt to omni-channel retail. While online media, including streaming TV ads, streaming audio ads, social media, and retail media, influence 77% of retail decisions, 90% of CPG purchases are still made at brick-and-mortar outlets.12 With the increased number of customers buying products online, brands may consider adapting their advertising strategies to better reach relevant audiences.
### Changing consumption trends
Online shopping is growing in popularity. 89% of shoppers who began shopping online during 2020/2021 are extremely or very likely to continue buying products online.13
### Reaching cross-category shoppers
Mass beauty shoppers also tend to shop in the professional and luxury skincare categories. Amazon Ads can help advertisers stay top-of-mind across beauty sub-categories.14
Optimizing beauty ads could have a long-term payoff. Discover six tips to get you started on creating beauty ads for your brand.
## Beauty advertising tips
1. Running Streaming TV ads with Amazon Ads helps your brand reach beauty and lifestyle audiences. You also have the opportunity to reach cord-cutters and cord-nevers who use Fire TV devices and services like Prime Video and Amazon Freevee.
2. Optimize your strategy by aligning your beauty promotions with your customers’ interests. With Amazon DSP, brands can reach audiences on and off Amazon with video and display ads that introduce your brand and top products to audiences where they already are. Also, with Amazon audio ads you can be a part of a growing channel filled with engaged audiences.
3. More than half of premium skincare shoppers surveyed said brand name was important in their purchasing decision.15 Stand out to audiences with Amazon Ads solutions like Sponsored Brands and Sponsored Products.
Sources:
1-2 Beauty and personal care report, Statista, global, 2021
3-8 Kantar and Amazon Ads Beauty Audience Study, November 2021, US
9-11 Kantar and Amazon Advertising P2P Lite Study, June 2021
12 NielsenIQ Omnichannel Fundamentals, 2021
13 Kantar and Amazon Ads Beauty Audience Study, November 2021, US
14 Amazon Internal, January 2021 – December 2021, US
15 Amazon Internal, August 2021
How can Amazon Ads fit into your consumer electronics marketing strategy? Whether you're driving brand awareness or bringing your products to new audiences, learn about the advertising solutions we can offer your business, on and off Amazon.
For consumer electronics, Amazon spans the entire customer journey, encompassing both online and off-line touch points to help drive brand discovery, consideration, and sales. For high-consideration purchases within consumer electronics, such as TVs, laptops, and premium smartphones, Amazon Ads offers tools to help build brand awareness and drive sales.
## How Amazon Advertising can help consumer electronics brands reach shoppers
Customers don’t have to shop for consumer electronics on Amazon in order for you to reach them. With hundreds of millions of active customer accounts worldwide, we’re able to reach shoppers at scale even if they don’t directly engage with the consumer electronics category on Amazon. Based on a custom survey by Kantar,1 Amazon can help you reach significant segments of consumer electronics buyers.
While 49% of smartphone purchases are made in store, 83% of smartphone buyers conduct research online prior to purchasing. Amazon also helps shoppers discover new brands, as 64% of smartphone buyers discover a new brand or product on Amazon.
86% of laptop buyers conduct research online prior to purchase, but 37% still buy in store. 63% of laptop buyers discovered a new brand or product on Amazon and tend to have a larger consideration set.
Amazon reaches TV purchasers, even though 60% of shoppers ultimately purchase in store. TV buyers come to Amazon to compare prices and read reviews and recommendations.
## Consumer electronics marketing examples
### Create your consumer electronics marketing plan
Register to get started with your consumer electronics marketing strategy, or contact your Amazon Ads account executive.
## Consumer electronics advertising strategies
### Brand advertising
Drive brand awareness through ad products such as Streaming TV ads, audio ads, and Amazon DSP with a focus on ads off Amazon or ads with link-out creative to your brand site.
### Digital commerce enablement
Drive sales both on Amazon and off-line through ad products such as video and display ads through the Amazon DSP with either link-in or link-out creative.
### Performance marketing
Drive on-Amazon sales through ad products such as Sponsored Display, Sponsored Products, and Sponsored Brands.
1 Kantar, 2019, US
2 Amazon quarterly earnings, Q1 2020. Active customer accounts represent accounts that have placed an order during the preceding 12-month period.
# Entertainment marketing
### <NAME>. creates immersive advertainment for The Batman
<NAME>. created an immersive, interactive page on IMDb for the movie The Batman, combining advertising and entertainment with the help of Amazon Ads.
How can Amazon Ads fit into your marketing strategy? Learn about the advertising solutions we can offer your business to help you engage entertainment audiences, on and off Amazon.
Stories are powerful. Brands make them better by building worlds that fans can become part of. Amazon Ads can help you connect with audiences while they're immersed.
The growth of streaming viewership continues to shape the entertainment industry. This includes the increase in cord-cutters (audiences who switch from pay TV subscriptions to streaming TV, also known as OTT, or over-the-top, services) and cord-stackers (audiences who subscribe to both pay TV and streaming TV). This evolution of the industry extends to theatrical entertainment. Film studios are leaning in to streaming as a distribution method for new film releases, with many studios releasing their movies to streaming services and theaters on the same day.
By the end of 2024, the number of US cord-cutter households is expected to reach 46.6 million.1
US consumers on average use seven streaming video services.2
Time spent watching ad-supported streaming services has increased 50% year over year.3
Worldwide consumer spending on streaming video is expected to reach almost $103 billion by 2025.4
Entertainment brands must put effort toward raising awareness in order to find viewership for their content. US consumers now have more than 300 different video streaming services to choose from.5
Outside of movie and TV content, entertainment brands are also competing with gaming, music, and social media for consumers' attention.6
As more options have become available, consumers are making decisions about how many services they are willing to pay for. In fact, 31% of streamers say that they are likely to stop using one of their existing services.7
83% of Amazon customers stream video content.8
More than 60% of Amazon shoppers say they consult IMDb before deciding to watch a TV program.9
42% of streamers who shop on Amazon say that content is the primary reason they decide to subscribe to a streaming service.10
55% of moviegoers have purchased DVDs from Amazon, and 51% have bought or rented digital movies from Amazon.11
Amazon’s entertainment lifestyle audiences are interested in certain types of content based on streaming signals across Amazon-supported apps and devices. Combine Amazon audiences with your existing audience sources on Amazon DSP to reach viewers both on and off Amazon.
Use Streaming TV ads to extend the reach of your linear TV campaign to unique, highly engaged viewers. Show up alongside IMDb TV Original hit shows and movies, during live sports, like Thursday Night Football, across the top TV networks and broadcasters, and on the News app on Fire TV.
Use Amazon Advertising’s custom solutions to drive audience awareness of your upcoming premiere, service launch, or home entertainment release. Our tailor-made campaigns create memorable brand experiences that excite and surprise audiences.
With audio ads, you can access unique, quality inventory to tell your story on Amazon Music's free ad-supported tier. Reach listeners across Alexa-enabled devices, including Echo and Fire TV, as well as on mobile and desktop.
1 eMarketer, September 2020, US
2 The NPD Group, TV Switching Study, Jan 2021
3 Nielsen Total Audience Study, August 2020, US
4 Statista, 2020, WW
5 Nielsen Total Audience Study, August 2020
6 Deloitte Digital Trends survey, 15th edition, 2021, US
7 The NPD Group, TV Switching Study, Jan 2021, US
8 Kantar and Amazon Advertising TV viewers study, August 2020, US
9 Kantar and Amazon Advertising TV viewers study, August 2020, US
10 Kantar and Amazon Advertising TV viewers study, August 2020, US
11 Comscore custom survey, 2019, US. The study surveyed 1,050 moviegoers who went to the movies in the last six months, and is representative of the US population of moviegoers.
Fashion is a rapidly growing global industry. The revenue of the worldwide apparel market segment was estimated at $0.99 trillion in 2022, and is predicted to increase to approximately $1.37 trillion by 2025.1 Discover how your brand can stand out with the help of Amazon Ads.
## Fashion shoppers today
Fashion shoppers are expert researchers who show an increasing preference for variety, selection, and values-based buying.
### 65%
65% of consumers surveyed own 5 or more apparel brands in their closet 2
### 82%
82% mix and match items from different brands when putting together an outfit 3
### 3 in 5
3 in 5 apparel shoppers exhibit omnichannel behaviors 4
### 69%
The research stage is often quite short: 69% of apparel shoppers reported that they begin their pre-purchase research within one day of their purchase 5
### 71%
71% of consumers reported that they prefer buying from brands that align with their values 6
## Challenges facing fashion brands
### Omnichannel shopping
Brands are being challenged to keep up with digital methods of purchase. Overall online sales penetration is expected to grow from 39% in 2022 to 53% in 2025.7 Fashion companies are estimated to ramp up their investments in technology, from between 1.6% and 1.8% of sales in 2021 to between 3% and 3.5% by 2030.8
Consumers are becoming less attached to brands. Today, 69% of apparel shoppers say they’re undecided on brand at the start of their shopping journey for apparel (a 30% increase from 2020).9 Amazon Ads can help improve your brand’s visibility during the discovery phase of fashion customers’ shopping journeys.
### Shift in shopping behaviors
As consumers adjust their spending alongside changing economic conditions, they may be turning to brands that they trust. About 71% of longtime customers reported that they have empathy for companies when they need to increase their prices due to inflation or shortages; 73% of consumers reported that they are willing to continue buying from brands that increase their prices if they feel valued as a customer.10
## Reach fashion shoppers on and off Amazon
Amazon is a fashion destination for discovery, and can help drive conversion in and outside of the Amazon store.
### 62%
62% of surveyed apparel shoppers who have visited the Amazon store along their shopping journey have reported that they discovered a new brand or product11
### 71%
71% of surveyed apparel shoppers reported that they purchased a new brand or product after discovering it in the Amazon store 12
### 66%
66% of surveyed apparel shoppers who purchased a new brand or product after visiting the Amazon store reported that they purchased the apparel elsewhere (outside of the Amazon store) 13
### 1.8X
Surveyed apparel shoppers who visited the Amazon store reported they are 1.6x more likely to shop for apparel at least once a month, and 1.8x more likely to buy new apparel items to keep up with the latest trends compared to total apparel shoppers 14
## How Amazon Ads is helping fashion marketers reach and engage audiences
## Fashion marketing, by the numbers
Percentage of surveyed apparel shoppers who visited the Amazon store and recall seeing an ad along their path to purchase 15
Optimal number of touchpoints (ad products) that advertisers needed to reach apparel shoppers and to see a lift in new-to-brand sales and repeat customer growth 16
Percentage of surveyed apparel shoppers who reported that they encountered an Amazon offering (Amazon.com, Fire TV, Amazon Music, etc.) along their path to purchase 17
## Fashion advertising tips
A majority (83%) of apparel shoppers surveyed by MRI reported that they have engaged with a streaming TV service in the last seven days.18 With so many consumers frequently streaming content, Amazon Ads can help your brand reach fashion audiences across Fire TV devices and services like Prime Video and Amazon Freevee.
Surveyed apparel shoppers who visited Amazon’s store reported that they are 1.3x more likely to buy new apparel items each season to keep up with the latest trends, compared to total apparel shoppers.19 With Amazon DSP, brands can reach engaged fashion audiences on and off Amazon with video and display ads that introduce your brand and top products to audiences where they already are. Also, with Amazon audio ads, you can be a part of a growing channel filled with engaged audiences.
78% of surveyed apparel shoppers reported that they agree that when shopping for fashion, the overall look is more important than the brand.20 Amazon Ads can help your brand’s eye-catching products stay top of mind with solutions like Sponsored Brands and Sponsored Products.
Sources:
1 Statista.com, Fashion Worldwide, 2022
2, 3 Kantar Apparel Custom Study, US, May 2022
4 Kantar and Amazon Ads apparel path to purchase study, US, May 2022
5 Kantar Apparel Custom Study, US, May 2022
6 Smallbiztrends.com, Brand Values Alignment, 2020
7 Statista.com, Fashion Worldwide, 2022
8 BoF State of Fashion special edition: Technology, 2022
9 Kantar Apparel Custom Study, US, May 2022
10 Poll on Consumer Behavior During COVID-19, Ipsos, May 2022, US
11, 12 Kantar and Amazon Ads apparel path to purchase study, US, May 2022
13, 14 Kantar Apparel Custom Study, US, May 2022
15 Kantar Apparel Custom Study, US, May 2022
16 Amazon Internal, US, Jun 2021 - Jun 2022 (* Results are representative of the performance of 98,542 advertisers and are not indicative of future performance)
17 Kantar Apparel Custom Study, US, May 2022
18 MRI Simmons, US, Spring 2022
19 Kantar Apparel Custom Study, US, May 2022
20 MRI Simmons, US, Spring 2022
# Financial Services advertising
### Reach new audiences with banking and financial marketing strategies
Discover how to build financial marketing strategies with Amazon Ads
How can Amazon Ads fit into your financial services marketing strategy? Learn about the advertising solutions we can offer your financial services company, whether it’s insurance, banking, brokerage, or tax.
Financial services marketing helps you connect with consumers and companies looking for service providers to fulfill their financial management needs. A complete marketing strategy reaches prospective customers and encourages them to consider your business when they're in the market for financial solutions like yours, before eventually purchasing and becoming loyal customers.
Financial services advertising is projected to grow 9.7% in the US, and is the second largest spending category after retail.1 Financial services advertisers are continuously looking for ways to connect with consumers to drive awareness and consideration of their brand and products, as well as lower-funnel actions, like adoption and retention. In a highly competitive marketplace, advertising can help them stand out.
79% of financial services customers say that the experience a company provides is as important as its products and services.2
71% of Gen Zers believe that financial services brands should help them achieve personal goals and aspirations.3
87% of financial services marketers say adopting or refining customer journey strategies is a priority.4
Financial services brands are looking to differentiate their brands and products to reach unique audiences with complementary solutions. A strong marketing strategy requires analyzing insights to effectively reach audiences in a fragmented landscape.
48% of consumers know which financial services brand they are going to select before they open a new credit card, insurance policy, or brokerage account.5 It’s important for you to reach these audiences before they’re ready to purchase, and while they’re still open to considering your products and solutions.
An effective marketing strategy reaches consumers at just the right time. More than 30% of insurance, banking, and brokerage customers say that their most recent purchase of a new policy, credit or debit card, loan, or online brokerage account was due to a life-event change.6
Insurance as a category covers a broad scope of products, spanning health, property and casualty (home, auto, and renters), life, travel, and many more niche types of insurance. Any risk or liability can be insured. Based on a custom survey by Kantar, 68% of insurance customers who purchased insurance in the last 6 months shop on Amazon at least 1+ times per month.7

The banking and payments industry comprises financial institutions such as traditional banks, financial technology (fintech) companies, and network payments providers. They offer a variety of products and services, including credit or debit cards, checking and savings accounts, and payments and lending solutions. The Kantar survey also found that 88% of consumers who searched for a credit card in the last 6 months visited Amazon within a few days of their application.8

Online brokerage firms allow customers to buy and sell various securities online. Aside from allowing customers to transact online, they also provide resources for investment information and advice. Of the Kantar survey respondents, 61% of those who used a brokerage firm in the last 12 months had also visited Amazon in the last 30 days prior to the survey.9
Our upper-funnel solutions help you extend your reach to audiences on and off Amazon, and outside the home.
We can help you reach audiences at relevant moments when they may be considering new financial services products.
Amazon Ads measurement solutions go beyond return on ad spend to help inform and customize your future marketing campaigns.
1 Scotiabank; Bloomberg; Ward’s, Global, October 2020
2 IHS Markit, US, October 2020
3 WARC’s Global Ad Trends: State of the Industry 2020/1, Global, November 2020
4 Conviva’s State of Streaming, Global, Q3 2020
5 Kantar and Amazon Ads auto shopping study, US, April 2020
6 Deloitte Insights – Electric Vehicles, Global, July 2020
7 Kantar and Amazon Ads TV viewers study, US, August 2020
8 Kantar and Amazon Ads auto shopping study, US, April 2020
9 Kantar and Amazon Ads auto shopping study, US, April 2020
10 Kantar and Amazon Ads auto shopping study, US, April 2020
11 Nielsen Media Impact and Amazon internal, US, 2021
Shopping online for items like vegetables and eggs is growing in popularity. With the online grocery industry expected to become a $250 billion-plus business by 2025 in the US, learn how your brand can stand out.1
## Food for thought
Whether you’re a big brand or just starting out, Amazon Ads can help support your grocery marketing goals. Hear from our customers and learn what inspires them as they create campaigns that reach relevant audiences.
### Nestle USA
How to keep consumers engaged.
### GT's Living Foods
How to connect with a broader audience.
### CAVU Venture Partners
How to build a purposeful brand.
## Understanding grocery marketing
Over the last few years, online grocery shopping has accelerated. As consumers adopt new, digital grocery shopping methods, brands have the opportunity to reach these new customers both on and offline with Amazon’s advertising solutions.
## The grocery industry today
More consumers are using online grocery shopping in order to accommodate their lifestyle needs. For example, Amazon online grocery shoppers, which includes shoppers on Amazon.com, Amazon Fresh, and Whole Foods Market, are looking for convenience and ways to save time.2 As the industry evolves, brands will need to find new ways of reaching audiences and keep their products top-of-mind for shoppers.
## Grocery marketing trends
83% of grocery shoppers expect flexible shipping and fulfillment options such as buy-online and pick-up in stores.3
33% of US consumers are buying their groceries mostly in stores, and 48% are omni-channel shoppers that purchase both in physical stores and online.4
Surveyed Amazon online grocery shoppers are 40% more likely to engage in online touchpoints before grocery shopping than in-person grocery customers.5
Dive deeper into understanding how to use Amazon Ads’ products and solutions to build your campaigns and reach more customers.
## Challenges facing grocery brands
### Adapting to omni-channel shopping
The share of U.S. grocery spending that happens online has increased from 3-4% to 10-15% since 2020.6 With this shift in consumer shopping behavior, advertisers need to adapt their strategies to better reach new audiences.
### Fragmented channels
There are more places for brands to advertise, but fragmented channels can be confusing to shoppers who are looking for consistent and accessible information when grocery shopping online.
### Responding to digital disruptions
While the past few years have increased grocery demand and revenue, they have also increased the use of contactless payments and online grocery with delivery and pickup. Advertisers should consider how their advertising strategies appeal to new consumer behaviors to drive sales, even as consumers spend less time engaging with their products in-store.
## How brands are reaching audiences with an Amazon Ads grocery marketing strategy
## Grocery advertising tips
### Engage customers with omni-channel ad solutions
1 in 5 Amazon online grocery shoppers can be reached through ad-enabled streaming services.7 By using Streaming TV ads with Amazon, you can engage shopping, grocery, and lifestyle audiences.
### Reach customers as they shop
Amazon online grocery shoppers are twice as likely as in-person shoppers to shop for groceries a couple of times a week.8 Brands have the opportunity to connect with these shoppers at multiple touchpoints along their online shopping journey. And with Amazon landing pages, brands can help audiences discover new products.
### Help customers find your brand
About 29% of customers are in an exploratory mindset, and 27% are browsing, when they shop for groceries on Amazon.9 Amazon Ads can help your brand build an ad strategy centered on aligning grocery promotions with customers’ needs. With Amazon DSP, brands can reach audiences on and off Amazon, introducing them to your top products. Brands can also connect with audiences during screen-free moments with Amazon audio ads.
Sources:
1 Grocery’s New Reality: The Pandemic’s Lasting Impact on US Grocery Shopping Behavior, Mercatus in Collaboration w/Incisiv, 2020
2 Grocery shopping audience study, Kantar and Amazon Ads, December 2020
3 Salesforce State of the Connected Consumer, 2020, US
4 Supermarket News, U.S. grocery shoppers head back to stores as COVID-19 vaccinations rise, 2021
5 Supermarket News, U.S. grocery shoppers head back to stores as COVID-19 vaccinations rise, 2021, US
6 Kantar and Amazon Advertising Grocery Study, 2020, US
7 Grocery shopping audience study, Kantar and Amazon Ads, December 2020
8, 9 Grocery shopping audience study, Kantar and Amazon Ads, December 2020
The health and personal care (HPC) category has achieved material growth over the past two years and is projected to continue expanding until at least 2026.1 At Amazon, the health and personal care category consists of nutrition and wellness, healthcare, baby products, household, and medical supplies and equipment. Learn how Amazon Ads solutions can help audiences discover your products and keep your brand top of mind for customers.
## Understanding the health and personal care industry today
The consumer packaged goods (CPG) industry, which includes the health and personal care category, grew at a rate of 2.7% between 2020 and 2021.2 As the industry continues to grow, your brand has the opportunity to drive awareness, consideration, and conversion with Amazon Ads solutions.
## Health and personal care marketing by the numbers
CPG and Grocery brands that used three or more Amazon Ads solutions saw on average a 50% repeat customer growth compared to brands that paired together one or two ad types.4
41% of CPG and Grocery brands observed an increase in repeat sales growth when engaging with shoppers through Streaming TV ads.5
## Challenges facing health and personal care brands
### Adapting to omni-channel retail
Many CPG advertisers want to adopt an omnichannel approach. According to a third-party study conducted by IRI in 2020, online media (including streaming TV ads, streaming audio ads, social media, and retail media) influence 77% of retail decisions, yet 90% of CPG purchases are still made at brick-and-mortar stores.6 Brands may consider adapting their advertising strategies to better reach their desired audiences along their path to purchase.
### More brands for shoppers to consider
Consumers have more health and personal care product options than ever before.7 Brands can develop marketing strategies that help them not just stand out, but also stay top of mind for customers.
### Earning customer trust during periods of inventory challenges
It can be challenging to navigate an effective advertising strategy that reaches audiences while earning their trust when brands experience “high inventory challenges.” At Amazon Ads, we define a high inventory challenge as category ASINs that are out of stock for two or more days in a week.
## How brands can reach shoppers with an always-on strategy
When brands face high inventory challenges, they may approach their advertising strategies in different ways. Amazon Ads analyzed three key approaches that brands may choose in order to measure advertising’s impact on the lift in units sold following high inventory challenges:
### No advertising
ASINs that did not have any Amazon Ads support either during the high inventory challenge stage or when brands began to see improvement in inventory.
### Advertising only in the recovery stage
ASINs that did not have any Amazon Ads support during their inventory challenges stage but did have Amazon Ads support when brands began to see inventory improvements in the immediate four weeks following high inventory challenges, also known as the recovery stage.
### Always-on advertising
ASINs with Amazon Ads support during the high inventory challenge stage and later in the recovery stage.
> Periods of low inventory can be used to build brand awareness and customer loyalty with upper-funnel solutions. Later, when a brand’s inventory enters the recovery stage, lower-funnel solutions can be activated to drive conversion.
> <NAME>, CPG Lead at Amazon Ads
## Household
Household brands that had Amazon Ads support in the recovery stage observed 15.9x lift in units sold compared to brands that had an inconsistent approach in the same time period.8
Household brands that had an always-on advertising strategy observed on average a 15.0x increase in units sold during the recovery stage compared to brands that did not have an always-on strategy.9
74% of household brands leveraged more than one Amazon Ad product during recovery stages.10
Household brands that used four Amazon Ad solutions saw a 77% sales growth lift compared to brands that used less than three solutions.11
## Nutrition and wellness
Nutrition and wellness brands that had Amazon Ads support in the recovery stage observed 8.4x lift in units sold compared to brands that had an inconsistent approach in the same time period.12
On average 77% of health and wellness brands leveraged more than one Amazon Ad product during recovery stages.14
Nutrition and wellness brands that used four Amazon Ad solutions saw on average an 85% sales growth lift compared to brands that used less than three solutions.15
## Baby products
Baby product brands that had Amazon Ads support in the recovery stage observed 43.1x lift in units sold compared to brands that had an inconsistent approach in the same time period.16
Baby product brands that had an always-on advertising strategy observed on average a 59.1x increase in units sold during the recovery stage compared to brands that did not have an always-on strategy.17
Baby product brands that used four Amazon Ad solutions saw on average an 83% sales growth lift compared to brands that used less than three solutions.19
## Healthcare
Healthcare brands that had Amazon Ads support in the recovery stage observed a 17.6x lift in units sold compared to brands that had an inconsistent approach in the same time period. 20
Healthcare brands that had an always-on advertising strategy observed on average a 76.1x increase in units sold during the recovery stage compared to brands that did not have an always-on strategy. 21
Healthcare brands that used four Amazon Ad solutions saw on average a 77% sales growth lift compared to brands that used less than three solutions. 23
## Medical supplies and equipment
Medical supplies and equipment brands that had Amazon Ads support in the recovery stage observed 52.6x lift in units sold compared to brands that had an inconsistent approach in the same time period. 24
On average, 40% of medical supplies and equipment brands leveraged more than one Amazon Ad product during recovery stages. 26
Medical supplies and equipment brands that used four Amazon Ad solutions saw on average a 73% sales growth lift compared to brands that used less than three solutions. 27
## Health and personal care advertising tips
Running Streaming TV ads with Amazon Ads means that you get access to shopping, beauty, and lifestyle audiences that you can’t access with any other media provider. You also have the opportunity to reach cord-cutters and cord-nevers who use Fire TV devices.
Livestream can help connect brands with shoppers in real-time and in interactive ways. You can use livestreaming solutions through Amazon Live and Twitch, and work with trusted creators and influencers to showcase your brand to an engaged audience.
Amazon Ads can help you build a near-market strategy centered on aligning CPG promotions with your customers’ interests and seasonal tentpole moments. For example, Amazon DSP can help your brand introduce relevant product offerings to audiences on and off Amazon. Also, with audio ads you can connect with audiences during screen-free moments.
1 Consumer Health in the U.S., Euromonitor, October 2021, USA
2 E-Marketer, February 2022
4-5 Amazon Internal, 06/01/2021 – 05/31/2022, US (* ASINs with a minimum of 4 weeks and a maximum of 13 weeks of continuous ‘high’ inventory challenges were considered for this analysis)
6 IRI, E-Commerce Opportunities: What, When and How to Achieve Growth in the Digital Space, Statista, May 2020
7 Amazon Internal, 2021 Q1 (** Lift is calculated as the percentage increase in average units sold in the recovery stage versus average units sold during the high inventory challenge stage)
8-10 Amazon Internal, 2022, US
11 Amazon Internal, Jan 2021 – Dec 2021, US
12-14 Amazon Internal, 2022, US
15 Amazon Internal, Jan 2021 – Dec 2021, US
16-18 Amazon Internal, 2022, US
19 Amazon Internal, Jan 2021 – Dec 2021, US
20-22 Amazon Internal, 2022, US
23 Amazon Internal, Jan 2021 – Dec 2021, US
24-26 Amazon Internal, 2022, US
27 Amazon Internal, Jan 2021 – Dec 2021, US
From home decor to kitchen appliances, the home goods industry is growing as consumers buy and rent homes, reimagine living spaces, and upgrade their environments. Discover how Amazon Ads can help your home goods business grow.
## Home goods and furniture shoppers today
Home goods shoppers in the US have been on a robust shopping spree since 2020, accelerating growth for the industry.1 The global home decor industry, which includes categories like home furniture, home textiles, floorcare, wall decor, and lighting, reached a value of $641.4 billion in 2020.2 As home goods shoppers start easing their home decor and home appliance purchases, Amazon Ads can help your brand to try and expand brand awareness, increase product consideration, and inspire purchase.
## Home goods and furniture marketing insights
Furniture and home furnishings digital sales are projected to reach $193.77 billion by 2026.3
Digital touchpoints are important in mattress shoppers’ purchase journeys, as 74% of mattress buyers browse online before buying a mattress.4
## Amazon Ads home goods and furniture insights
64% of small kitchen appliance shoppers surveyed who visited the Amazon store reported that they are more likely to discover new products through advertising.5
81% of all mattress buyers surveyed who visit Amazon discover a new brand or product.6
66% of home environment shoppers surveyed recall seeing an ad specific to the category.7
54% of shoppers surveyed reported Amazon provided information to make them feel confident in their floorcare purchase.8
## Opportunities for home goods and furniture marketing
### Continuing to grow after a surge in sales
Many home goods brands saw a surge of sales in 2020.9 Digital revenue for categories like furniture and homeware grew by 14.5%, earning nearly $53 million in revenue in 2020. This $6.6 million year-over-year increase is considered unprecedented, according to industry experts.10 Growth for the home and furniture category is expected to plateau in terms of sales starting in 2023.11 Knowing this, home goods brands may want to develop marketing strategies to help build brand trust and loyalty with customers.
### Adapting to audiences’ digital mindset
As the home goods industry rapidly grows, it has become more competitive, with brands vying to reach and engage shoppers.12 Brands looking to increase consumer awareness may want to consider developing a holistic, always-on marketing approach, especially as more shoppers use digital resources to browse and shop for products.
### Helping shoppers discover products around life moments
Whether consumers are moving into a new home or getting married, those life moments can inspire consumers to seek out products that will help them settle into their new lifestyles. In fact, 32% of small kitchen appliance shoppers reported that moving into a new home was the reason for their purchase.13 And 23% of mattress shoppers also selected a life event as a reason for their purchase.14
In this blog post, we learn that consumers have been thinking more about the air they breathe. Discover what the path to purchase looks like for air purifier shoppers and how brands can engage with them.
## How do I advertise my home goods and furniture products?
Take your marketing campaign to the next level by teaming up with our Brand Innovation Lab. By working with them, your brand can create custom marketing experiences that surprise and delight audiences in inventive ways.
1, 2 “Worldwide Home Decor Industry to 2026,” Research and Markets, March 2021, US
3 eMarketer, June 2022, US
4 Kantar and Amazon Ads, Mattress path to purchase, March 2021, US
5 “2021 Deep Dive: Home Goods Industry Data and Trends,” ROIRevolution, 2021, US
6, 7 Furniture digital shopping revenue in the United States from 2017 to 2025, Statista, 2022, US
8 “2021 Deep Dive: Home Goods Industry Data and Trends,” ROIRevolution, 2021, US
9 Kantar and Amazon Ads, Small kitchen appliance path to purchase, July 2022, US
10-12 Kantar and Amazon Ads, Mattress path to purchase, March 2021, US
13 Kantar and Amazon Ads, Home environment path to purchase, December 2021, US
14 Kantar and Amazon Ads, Floorcare path to purchase, November 2021, US
Customers are out shopping for the home improvement products they need, from power tools to gardening equipment. By 2025, the tools and home improvement category is expected to reach over $620 billion in the US alone and is growing at a rate of 4% annually.1 Discover how your brand can potentially grow, increase sales, and help customers with Amazon Ads solutions.
## Home improvement shoppers today
Home improvement customers are evolving. For example, homeowners are looking for more energy-efficient products as the costs of fuel and gas increase.2 And larger living spaces and new DIYers are helping the industry expand.3 Further, government incentives and tax credits for more sustainable construction have stimulated the home improvement industry.4 Customers are also using digital shopping resources to inform upgrading and remodeling their homes. As the industry changes, your brand may want to consider working with Amazon Ads to reach different customer segments.
## Home improvement shopping trends
A recent study conducted by Kantar and Amazon Ads found the following insights about the tools and home improvement audience:
60% of tools and home improvement shoppers surveyed reported that they engaged with an online touchpoint during their path to purchase.5
34% of tools and home improvement shoppers surveyed reported that they looked up products online while in-store.6
59% of home improvement shoppers surveyed reported that they browse Amazon to help them learn about products, and then they purchase those products elsewhere.7
## Opportunities for home improvement marketing
63% of tools and home improvement shoppers surveyed reported that they are open to discovering new brands early on in their purchasing journey. Therefore, brands cannot solely rely on past sales to drive future purchases. Instead, brands should consider developing campaigns that foster brand trust, love, and loyalty through messaging, creative, and promotional marketing strategies.8
### Reaching new audiences where they are
As new shoppers enter the home improvement category, brands may want to consider adapting to audiences’ media consumption preferences, including shifting from linear TV to streaming TV and digital videos. Ninety-five percent of tools and home improvement shoppers surveyed reported that they are using a streaming TV service or device.9
### Creating digital experiences
As more shoppers use digital sources to browse, learn about, and purchase home improvement products, brands have an opportunity to develop elevated digital shopping experiences for their customers. Virtual reality (VR) and augmented reality (AR) present new opportunities for brands to help audiences visualize products before making a purchase. As the home improvement industry grows, VR/AR and the metaverse are expected to increase spending on home and home-related categories to $697 billion.10
## Amazon Ads home improvement insights
44% of home improvement shoppers surveyed reported that they visit the Amazon store during their shopping journey to browse and discover products. And 59% of those shoppers who visited Amazon’s store reported that they then make a product purchase elsewhere.11
Shoppers who visited Amazon are 1.5x more likely to be motivated by advertising compared to those who did not visit Amazon, with 58% reporting they were motivated to make a purchase after seeing an ad.12
74% of tools and home improvement shoppers surveyed reported that they stream TV shows and movies using an Amazon-owned service or device.13
## Home improvement marketing tips
Take your marketing campaign to the next level by teaming up with our Brand Innovation Lab. By working with them, your brand can create custom marketing experiences that surprise and delight audiences in inventive ways.
1 North American Hardware and Paint Association, Hardware Retailing, US, January 2022
2-4 Home improvement marketing, Global Marketing Insights, US, 2021
5-7 Tools and home improvement path to purchase, Kantar and Amazon Ads, US, May 2022
8, 9 Tools and home improvement path to purchase, Kantar and Amazon Ads, US, May 2022
10 “The Metaverse: Evolutionary or Revolutionary?” <NAME>, November 2021
11-13 Tools and home improvement path to purchase, Kantar and Amazon Ads, US, May 2022
* The tools and home improvement path to purchase study includes the following categories: power and hand tools, paint and supplies, smart home and security, and lighting
Consumers are discovering more ways to experience travel and leisure with the help of online resources and streaming content. With the global hospitality industry expected to reach $5,297.78 billion in 2025, growing at a compound annual growth rate of 6%, learn how your brand can stand out to customers.1
## What is hospitality marketing?
Hospitality marketing helps advertisers in travel, restaurants, and consumer services bring awareness and consideration of their products and services to consumers. Hospitality marketing strategies can play an important role in helping brands drive customer engagement and stay top-of-mind.
## The hospitality industry today
There’s a renewed sense of adventure among consumers. This comes at a time when the hospitality industry is changing to meet consumers’ needs. Today, more consumers are focused not just on going on adventures and seeking new experiences, but also on the larger impact of their activities in areas like health, wellness, and the environment.2 With these new considerations, consumers are reimagining how they dine out, spend leisure time, and travel as the world adapts to hybrid work models, explores sustainable travel options, and considers plant-based meals.3
## Hospitality marketing trends
### Expected increase in bookings
Gross bookings for global online travel agencies are expected to grow 40% in 2022, which should bring volumes back to pre-pandemic levels.4
### Adults are comfortable going out to eat
67% of US adults say they feel comfortable going out to eat at restaurants.5
### Spending on sustainability
Nearly 60% of consumers say they are willing to spend more to make their trip more sustainable.6
## Insights for hospitality marketers
As of the end of 2021, 71% of Amazon shoppers plan to travel in the next 12 months.7
81% of Amazon.com shoppers who intend to travel in the next 12 months shop on Amazon weekly.8
Food advertising is the number one most-interacted online ad category among consumers.9
Develop your skills through courses and certifications to deliver successful campaigns in the hospitality industry with Amazon Ads. Learn about the latest product innovations, Amazon insights, and advertising best practices.
## Challenges facing hospitality brands
### Preparing for uncertain needs
An increase in spontaneous travel means brands may want to consider preparing to reach consumers who will book trips with shorter lead times. Over one-quarter, 28%, of consumers are saying “yes” to more last-minute trips, and 25% are making no plans for the trip in favor of being spontaneous when they arrive to their destination.10
### Communicating loyalty benefits
64% of Gen Z and 61% of millennial consumers participate in loyalty programs at one or two of the table-service restaurants they frequent, surveys show.11 Brands may want to consider reaching this generation of decision-makers through creative advertising that focuses on value and exclusive experiences.
### Navigating fragmented channels
Consumers are increasingly more likely to book travel, dining, and entertainment from their smartphones.12 70% of consumers use their phones to find fun things to do, 66% use their phones to research destinations, and 58% use their phones to plan accommodations.13 Brands may want to consider shifting their marketing strategy to increase visibility among mobile-first consumers.
## Amazon Ads solutions
With Amazon DSP, brands can programmatically buy ads to reach new and existing audiences on and off Amazon. Our exclusive insights and shopping signals could help power hospitality brands to make more informed decisions that may drive growth.
Amazon’s Brand Innovation Lab can help hospitality brands stand out to audiences in various ways through innovative and tailored campaigns. Creative ads help capture audiences’ attentions and imaginations through a variety of placements, including homepage takeovers, Fire TV placements, customized destination pages, on-box advertising, and multichannel campaigns.
Viewers today are leaning into streaming services. With Amazon’s Streaming TV ads, hospitality brands can stay top of mind by showing up alongside audiences’ favorite movies, TV shows, news, and live sports.
Sources:
1 “Global Hospitality Market Report 2021: Market is Expected to Reach $5297.78 Billion in 2025 - Forecast to 2030,” Research and Markets, global, June 2021
2, 3 “Travel’s Theme for 2022? ‘Go Big’,” The New York Times, Feb. 2022
4 Capital IQ, Skift Research Data as of Nov. 2021
5 “Tracking the Return to Normal: Dining,” Morning Consult, Feb. 2022
6 Expedia Group Media Solutions + Skift, Oct. 2021
7-9 Kantar Amazon Audience Streaming Survey, Dec. 2021
10 “The 2021 Upgrade,” Hotels.com, Jan. 2021
11 “The Digital Divide Report: Minding The Loyalty Gap,” PYMNTS.com, Nov. 2021
12, 13 Ad Colony Travel Survey, July 2021
Consumers are more connected than ever before, with twenty-five connected devices in the average US household.1 As customers look for services to support their needs, learn how Amazon Ads can help them discover your brand.
## Make the next switch their last
Amazon Ads can help you connect with telecom customers who are still figuring things out, including their phone, Wi-Fi, and more.
## The telecom industry today
As remote work continues to expand, so do customers’ connectivity needs.2 They’re looking for more entertainment options as they spend more time at home watching their favorite shows, playing their favorite games, and making important calls.3 Many are also upgrading their services to meet their needs: of the two-thirds of households that have smart devices, 39% paid to increase their internet speed.4 As consumer behaviors evolve, brands providing these essential services have an opportunity to reach new customers, creating campaigns that can help drive awareness and consideration while also building trust.
## Telecom marketing trends
Streaming is an important consideration for telecom customers. 83% of Amazon shoppers are streamers, and they’re 125% more likely to switch internet service providers.5
Subscriptions to virtual multichannel video programming distributors – vMVPDs – are growing, and are expected to reach 15M households in 2022.6
vMVPD customers aren’t brand loyal; they use a total of 4.5 services on average, and the majority have used their current providers for less than a year.7
## Challenges facing the telecom industry
### Navigating 5G
5G has arrived, and brands may need to educate consumers about the technology and its benefits to drive adoption and prevent marketplace confusion.
### Customer retention
vMVPD customers consider an average of 2.5 services before signing up, with 95% enrolling in a free trial before committing.8 With more options available than ever before, brands may want to consider new ways to help drive loyalty after trial periods end.
### Connectivity matters
Connectivity issues present challenges for both subscribers and businesses. As working from home continues, 35% of US workers say they don’t have internet fast enough to handle tasks like video calls.9
## Insights for telecom marketers
30% of vMVPD customers who shop in Amazon’s store visit daily.10
65% of vMVPD customers would be open to switching to a standard cable or satellite service.11
Amazon shoppers are 40% more likely than the average adult to play video games.12
60% of college students say they spend more time on Amazon than other online sources when looking to purchase a smartphone. 95% say the service plan is important to their decision.13
## Telecom marketing solutions
Reach relevant audiences across wireless and cable and drive brand awareness with ad products such as Streaming TV ads, audio ads, and Amazon DSP with a focus on ads off Amazon or ads with link-out creative to your brand site.
Use Amazon shopping insights on Amazon Marketing Cloud to reach new relevant audiences, on and off Amazon, across desktop and mobile devices. Use Amazon first-party insights to accurately measure your advertising’s impact on how consumers discover, research, and purchase services across various touchpoints. Consider remarketing opportunities that help drive audiences to conversion and encourage brand loyalty.
Amazon’s Custom Advertising team can help your brand stand out to customers by creating campaigns around upcoming promotions, device launches, and bundled deals. The Custom Advertising team helps brands bring their stories to life with memorable experiences that can surprise and delight customers.
Provide telecom audiences with an easy way to take the next step with your brand through audio, video and interactive ad offerings on Amazon.
Environics Research, “Brands With Purpose Global Consumer Themes,” Amazon Ads, 2022, U.S., U.K., Canada, Germany, Japan.
How can Amazon Ads fit into the marketing strategy for your toys and games business? Whether you're seeking to drive brand awareness or conversion, learn about the advertising solutions we can offer your business, on and off Amazon.
## It’s playtime, all the time
Amazon Ads can help brands connect with customers looking for toys year-round, in the moments that matter most.
## The toys and games industry today
Marketing toys and games requires understanding the rapid evolution of the industry. Global revenue is projected to reach $357 billion USD by 2023.1 Sales increases in many categories are partially due to more adult consumers purchasing for themselves, and some toy brands are creating new products just for them. Meanwhile, sales of toys and games based on popular characters from cartoons, TV shows, and movies are also helping to propel growth. Customer attitudes are also shifting, with many toy manufacturers focusing on eco-friendly toys and packaging to address customer sustainability concerns.
## Toys and games marketing trends
$357 billion projected toy industry revenue (2023).2
$35 billion worldwide sales of licensed toys (2019).3
38% of US parents say their children prefer toys featuring well-known characters.4
## Challenges facing toys and games brands
### Greater variety of options
Over two-thirds of toy shoppers make their toy purchases online.5 Online browsing offers shoppers more selection than brick-and-mortar, which means that product discoverability is key.
### Adapting to shifting customer behaviors
As toy shoppers increasingly move to online purchases, adults are less likely to make impulse buys driven by child requests.6 Find new ways to engage shoppers and drive purchase intent by telling your brand and product story.
### Thinking beyond the holiday season
Over 80% of toy purchases are gifts,7 but marketing toys shouldn't be limited to the holiday season. Brands need to keep their products top of mind when year-round gift-giving opportunities arise.
## Insights for toys and games marketing
44% of toy shoppers say they research their purchase on Amazon, regardless of where they make their purchase.8
A toy purchase on Amazon is 3x more likely than the average Amazon purchase to be a gift.9
97% of global e-commerce sessions end without a purchase,10 so it's important to treat each touch point as an opportunity to build a relationship rather than just drive an immediate sale.
A recent Amazon Ads analysis found that brands who combined display advertising with sponsored ads for their toy advertising achieved a 33% uplift in conversion, compared to the toy brands who only ran sponsored ads.11
## Toys and games advertising strategies
### Reach toy shoppers at scale
Maximize consideration and conversion on Amazon, both during seasonal shopping events like Prime Day and when audiences are shopping for gifts year-round, by maintaining an always-on approach for your toys advertising.
### Create cross-channel experiences
Amazon Ads offers solutions that can help you reach toys and games shoppers who engage with Amazon across devices and channels, such as video, audio, and out-of-home.
### Make data-driven decisions
Use Amazon shopping insights on Amazon DSP to reach new relevant toys audiences, on and off Amazon, across desktop and mobile devices. Use Amazon first-party data to accurately measure your advertising’s impact on how consumers discover, research, and purchase toys across touch points.
1 Statista, Consumer Market Outlook 2020
2 Statista, Consumer Market Outlook 2020
3 Global Licensing Industry Survey, 2019
4 Statista Global Survey: Toys & Games, 2020 (numbers are 33% for UK and 27% for DE)
5 Kantar custom toy shopper study, US, April 2020
6 The Toy Association, 2020
7 Kantar custom toy shopper study, US, April 2020
8 Kantar custom toy shopper study, US, April 2020
9 Statista, benchmark for global ecommerce conversion rate, 2018
10 Amazon internal, US, June 2020
11 Amazon internal, US, June 2020
12 Kantar custom toy shopper study, US, April 2020
# Channels
Thoughtfully engage audiences throughout their daily experiences. We provide access to premium inventory beyond Amazon through direct publisher integrations, helping your brand seamlessly extend reach to where your audience spends their time.
## Unparalleled reach across our brand-safe channels
### The reach you need, without hidden fees across premium video, display, and audio publishers
Amazon Publisher Direct provides Amazon Ads buyers with direct and reserved access to thousands of publishers and helps bring buyers and sellers closer together to deliver a simple and transparent supply chain.
Marketing channels are mediums that marketers can access to advertise their product or brand. Marketing channels include a variety of destinations where audiences spend their time, such as streaming TV and audio apps, or in stores where audiences are browsing and shopping.
Marketing channels enable you to deliver information about your product or brand to audiences, helping you reach customers and meet your business or campaign goals.
## Our philosophy
We insist on the highest standards and present Amazon customers with timely, relevant, and beautiful advertising that enhances their shopping experience. Our specs and policies are in alignment with the CBA Better Ads Standards and explain how to create and evaluate ad experiences.
With Amazon DSP, advertisers can reach Amazon shoppers everywhere.
Static full-screen images in rotation on Amazon Echo Show devices.
Engage customers with your brand through Amazon Music’s free ad-supported tier.
Unique Amazon ad units that are closely integrated with the shopping experience.
eCommerce creatives introduce Amazon features like Add to Cart and Customer Reviews into display ad units.
Banner ad placements on Amazon’s Shopping Apps and the mobile version of Amazon.com.
Our Fire tablets with Special Offers provide a unique ad experience for customers.
Engage customers with your brand through a featured banner on the home screen.
Promote your products or offers with simple to navigate pages, built for TV.
Our Kindle reading devices with Special Offers provide a unique ad experience for customers.
Standard sizes for desktop ad units.
Follow these creative guidelines to meet our content specification requirements for Stores.
Advertise your brand with solutions across IMDb.
Advertise your brand with solutions on Twitch.
Engage customers with your content on Prime Video.
Amazon Ads offers registered sellers, vendors, book vendors, Kindle Direct Publishing (KDP) authors, app developers, and/or agencies a range of options to help you achieve your advertising goals (refer to each product’s page for eligibility criteria).
To get started with self-service advertising products, including Sponsored Products, Sponsored Brands, Sponsored Display, and Stores, visit the Register page and choose one of the options to enroll. Display ads, video ads, and ads run through the Amazon DSP can be managed independently or with an Amazon Ads account executive. Contact us to get started.

Sellers sell products directly to Amazon customers. If you manage your products in Seller Central, you’re a seller. Vendors sell their items directly to Amazon, who then sells them to customers. If you manage your products in Vendor Central, you’re a vendor.
CPC (cost-per-click) or PPC (pay-per-click) advertising is a type of paid advertising where ads display at no charge (ad impressions, or views, are free) and the advertiser is charged only when a customer clicks the ad. Sponsored ads, such as Sponsored Products and Sponsored Brands, run on the CPC model.
CPM (cost-per-mille) advertising is a type of paid advertising where you are charged a set price for every 1,000 impressions of your ads.
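To make the two pricing models concrete, here is a minimal sketch in Python. The bid values and volumes are hypothetical, not actual Amazon Ads rates: under CPC you pay per click, while under CPM you pay a fixed rate per 1,000 impressions.

```python
# Illustrative only: hypothetical bids and volumes, not actual Amazon Ads rates.

def cpc_spend(clicks: int, cost_per_click: float) -> float:
    """Cost-per-click: impressions are free; you pay only when the ad is clicked."""
    return clicks * cost_per_click

def cpm_spend(impressions: int, cost_per_thousand: float) -> float:
    """Cost-per-mille: you pay a set price for every 1,000 impressions served."""
    return impressions / 1000 * cost_per_thousand

# Example: 50,000 impressions that generate 400 clicks.
print(cpc_spend(clicks=400, cost_per_click=0.75))          # 300.0
print(cpm_spend(impressions=50_000, cost_per_thousand=4))  # 200.0
```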
Display ads are banners or images that appear on websites. Display advertising is available through Sponsored Display or Amazon DSP.
* Sponsored Products, Sponsored Brands, and Sponsored Display are cost-per-click ads, meaning you pay only when customers click your ad, and you control your campaign budget.
* The cost of display ad and video ad campaigns can vary depending on format and placement.
* Audio ads are sold on a CPM basis.
* Advertising through a managed-service option with an Amazon Ads account executive (display ads, video ads, and ads that are run through the Amazon DSP) typically requires a minimum spend of $50,000 (US). Contact an Amazon Ads account executive for more information.
* You can create a Store for free.
A keyword is a single word or combination of words that you add to your Sponsored Products and Sponsored Brands campaigns. Keywords are matched against the shopping queries that customers use to look for products on Amazon, and they determine when your ads may appear. Note that keywords are only used for Sponsored Products and Sponsored Brands. Sponsored Display ads instead target products, product categories, or customers’ interests.
To start, we recommend using the suggested keywords when creating your campaigns. Sponsored Products campaigns may use automatic targeting, which selects relevant keywords automatically. Once your automatic campaign is running, you can check your advertising reports to see which keywords are resulting in ad clicks and sales. You can use the top-performing keywords to create a Sponsored Brands campaign or a Sponsored Products campaign with manual targeting, where you select keywords to target and set individual bids for them.
Make sure that your keywords reference metadata contained on your advertised product’s detail pages. For example, an ad will not receive impressions for the keyword “beach towels” if the campaign contains only bath towel products.
We offer free personalized support from an Amazon Ads specialist on Sponsored Products, Sponsored Brands, Sponsored Display, and Managed Display campaigns to help you reach your goals. Our dedicated ad specialists will help set up, review, and optimize your sponsored ad campaigns, as well as assist with organic brand-building tools such as Stores and Posts. They will share tailored campaign recommendations on keywords, bids, budgets, and more to help you maximize your chances of success on Amazon Ads. To check your eligibility for this program and get started, fill out this form and one of our ad specialists will reach out to you.

Sponsored Products is a cost-per-click, keyword-targeted advertising solution that enables you to promote the products you sell with ads that may appear in highly visible placements on Amazon. You select your products to advertise and choose keywords to target, or let Amazon’s systems target relevant keywords automatically. You control how much you want to spend on your bids and budgets and can measure your ads’ performance. The ads serve on both desktop and mobile browsers as well as on the Amazon mobile app. When customers click your ad, they are taken to the advertised product’s detail page.
Your ads may be displayed at the top of, alongside, or within shopping results and on product detail pages. Ads may appear on desktop, tablet, and mobile.
Sponsored Products is available for Amazon professional sellers and retail vendors in the advertising console, and Kindle Direct Publishing (KDP) authors in the KDP dashboard.
* An active Amazon professional seller account
* Ability to ship to all addresses in the marketplace you are advertising in
* Product listings in one or more of the available categories (must be new)
* Listings that are eligible for the Featured Offer1
1 If you create an ad for a product listing that is not eligible for the Featured Offer, your ad will not display to Amazon customers. Ads that are not eligible are flagged in the campaign manager.
Vendors can be any of the following (must meet at least one criterion):
* Hardlines vendor
* Softlines vendor
* Supplier Express vendor (aka Vendor Express vendors) with a confirmed purchase order or direct fulfillment order
* Media vendor
* Books vendor
* Consumables vendor
In addition, listings advertised on Sponsored Products must be eligible for the Featured Offer.
At this time, we do not support adult products, used products, refurbished products, or products in closed categories.
Sponsored Products may help you increase sales by displaying ads when shoppers look for relevant products on Amazon.com. There are no monthly fees—you pay only when your ad is clicked. Consider using Sponsored Products for product visibility, new offers, unique selections, offers with low glance views, clearance items, and seasonal promotions.
Sponsored Products uses a cost-per-click, auction-based pricing model. You set the maximum amount that you are willing to pay when a customer clicks an ad for your product. The more competitive your bid is, the more likely it is that your ad will be displayed.
Sponsored Products allow broad, phrase, and exact matches.
Negative matches are matching types that prevent ads from being triggered by certain shopping queries (words or phrases).
Negative keywords help prevent your ads from appearing on shopping results pages that don’t meet your performance goals. This extra level of control can help to reduce costs by excluding keywords where an advertiser might be overinvesting. This can help to improve ad performance metrics, such as click-through rate (CTR), advertising cost of sales (ACOS), and cost per click (CPC).
When creating a campaign, an advertiser enters a bid for the keywords that they want to target. A bid is the maximum amount you are willing to pay when a customer clicks an ad.
For Sponsored Products, you’ll set a daily budget for your campaign. The daily budget is a daily amount you are willing to spend on a campaign over a calendar month. For example, if you set your daily budget at $100, you may receive up to $3,100 worth of clicks in that calendar month (assuming a full 31-day month). Daily budgets are not paced throughout the day, meaning a smaller daily budget could be spent in a few minutes if there are a large number of shoppers interested in your advertised products. Regardless, you control the total amount you want to spend—the final cost will never be more than the amount you’ve set for your campaign’s duration.
Yes, you can increase or decrease your daily budget once your ads are live.
Invoicing is also available for vendors on an invite-only basis.
Before you create your first campaign, it’s important to know what business goals you want to accomplish through advertising. Establishing your goals up front will help you choose which products to advertise, decide how to structure your campaigns, and better analyze performance. We recommend Sponsored Products as the simplest way to start advertising. No images or custom copy are needed, and ads go live immediately. You may use automatic targeting, which uses Amazon’s shopping insights to help you learn what shoppers are looking for. It also dynamically adapts to trends in the marketplace and seasonality. Remember that your product must be in stock and priced competitively in order to become the Featured Offer, so take into account product pricing and availability when deciding to advertise. If your product isn't the Featured Offer or is out of stock, your ad will not display. If this is your initial campaign, we recommend you take an always-on approach and allow it to run for 2-3 weeks. This will help you gather enough data and insights to understand what is working and what isn’t.
Ad display is dynamic based on your campaign parameters. You can see your ad impressions, clicks, and conversions on the reporting page.
Sponsored Brands are keyword-targeted ads that appear in shopping results on Amazon. They allow brands to promote multiple products or titles with a custom headline and logo within the ad creative. Ads take customers to a product detail page or Store.
Your ads may be displayed on top of, alongside, or within shopping results. Ads may appear on both desktop and mobile.
If a customer clicks one of the specific product images, they will be taken directly to the product detail page for that product. If a customer clicks the hero image or the ad copy, they will be taken to a customized landing page, such as a Store.
Sponsored Brands are available for Amazon professional sellers who are enrolled in Amazon Brand Registry and retail vendors in the advertising console.
Sellers (must meet ALL criteria):
* An active Amazon professional seller account
* Ability to ship to all addresses in the marketplace you are advertising in
* Product listings in one or more of the available categories (must be new)
* Registered in Amazon Brand Registry
Vendors (must meet at least ONE criterion):
* Hardlines vendor
* Softlines vendor
* Media vendor
* Books vendor
* Consumables (non-Pantry/Fresh) vendor
With Sponsored Brands, you can drive sales on Amazon, as well as brand awareness with ads located in high-visibility placements.
Sponsored Brands can help you achieve a variety of goals, from generating awareness of a new product to promoting seasonal items or creating more demand for a best seller. We provide a range of tools and reports that make it easy to analyze campaign performance and measure success. This includes a search term report to help you see what keywords are generating clicks and sales, and advertising cost of sales (ACOS), which represents ad spend as a percentage of sales.
Sponsored Brands uses a cost-per-click, auction-based pricing model. You set the maximum amount that you are willing to pay when your ad is clicked. The more competitive your bid is, the more likely it is that your ad will be displayed.
Sponsored Brands allow broad, phrase, and exact matches.
When creating a campaign, an advertiser enters a bid for the keywords that they want to target. A bid is the maximum amount you are willing to pay when a customer clicks an ad.
When you create a Sponsored Brands campaign, you’ll decide your advertising budget. You can set a daily budget or a lifetime budget. The daily budget is the total amount you are willing to spend per day on a campaign. Each day is capped at the amount you have set, but you might not always hit your daily budget limit. The lifetime budget is the total amount that you are willing to spend on one Sponsored Brands campaign for as long as it runs. Once your campaign has reached the lifetime budget limit you have set, it will stop running ads. Daily budgets are not paced throughout the day, meaning a smaller daily budget could be spent in a few minutes if there are a large number of shoppers interested in your advertised products. Lifetime budgets are paced throughout the day so that your entire budget will incrementally accrue clicks and will not spend your entire budget in one day. You will not be able to switch between a daily budget and a lifetime budget after you have selected one. Regardless of which option you select, you have control over the total amount you want to spend—the final cost will never be more than the amount you’ve set for your campaign’s duration.
You can increase or decrease your daily budget once your ads are live. You can only increase your lifetime budget once your ads are live.
Invoicing is also available for vendors on an invite-only basis.
Ad display is dynamic based on your campaign parameters. You can see your ad impressions, clicks, and conversions on the reporting page.
Sponsored Display is a self-service display advertising solution that helps you grow your business by quickly creating campaigns that reach relevant audiences both on and off Amazon.
Sponsored Display on Fire TV, or Fire TV ads, are self-service display ads that allow you to promote apps, movies, and TV shows to viewers on Fire TV.
Sponsored Display is available for professional sellers enrolled in Amazon Brand Registry, vendors, and agencies with clients who sell products on Amazon. In order to advertise, your products must be in one or more eligible categories.
* An active Amazon professional seller account
* Ability to ship to all addresses in the marketplace you are advertising in
Vendors must sell products within approved hardlines, softlines, books, and consumable categories on Amazon. Vendors who do not sell products directly on Amazon are not eligible to use Sponsored Display.
Fire TV ads are available to Fire TV app developers, Prime Video Channels, and Prime Video Direct publishers.
Sponsored Display enables you to quickly—in just a few clicks—set up display campaigns that run both on and off Amazon.* Simply select your audience, set your bid and daily budget, choose your products to advertise, and create your campaign. Ad creatives are automatically generated with the same familiar features as Sponsored Products and Sponsored Brands, including a product image, pricing, badging, star rating, and Shop now button that links back to the product's detail page, making it easy for customers to browse or buy.** Sponsored Display uses automation and machine learning to optimize your campaigns. Bids automatically adjust based on likelihood of conversion while still allowing you to change your bid or pause your campaign. From the list of products you add to your campaign, the Sponsored Display views strategy also dynamically promotes the most relevant ASIN that has the highest chance of conversion. *Ad creative displays on or off Amazon depending on the targeting strategy and audiences you choose. **Shop now button may be included in ad creative based on placement.
These ads appear as sponsored tiles in the “Sponsored” row on the Fire TV home screen. They are shown to viewers based on genre or app interest. Ad creatives are automatically generated using the image already associated with your app or content.
Your ads may appear both on and off Amazon, across desktop and mobile sites and apps, based on the audiences or product targeting strategy you choose.
| Audiences or product targeting selected | Description | Ad placement |
| --- | --- | --- |
| Views* | Engage audiences who viewed the detail pages of your advertised products or similar products within the last 30 days but haven’t yet purchased. | Off Amazon on third-party websites and apps. |
| Interests** | Engage audiences whose shopping activities on Amazon demonstrate an interest in product categories related to your promoted product. | On product detail pages, or other product-related pages. |
| Products | Target specific products on Amazon that are similar or complementary to your promoted product. | On product detail pages, or other product-related pages. |
| Categories | Target a range of product categories on Amazon that are similar or complementary to your promoted product. | On product detail pages, or other product-related pages. |
*This audience is not yet available for book vendors.
**This audience is not yet available for sellers.
Reach audiences who showed interest in categories related to your promoted product. Reengage audiences off Amazon who previously viewed your product detail page but haven’t yet purchased. You can also target the detail pages of specific products or product categories on Amazon.
Without extensive resources you can quickly create a display ad campaign in minutes that reaches relevant audiences both on and off Amazon to help achieve your business objectives.
Fire TV ads help you reach cord-cutters as they watch and browse streaming TV content, and with interest-based targeting, you can engage viewers who are looking for content like yours. Campaigns can be created in a few minutes and use existing Fire TV catalog assets to automatically generate your ads.
Sponsored Display ads are purchased on a cost-per-click (CPC) basis. CPC advertising is a type of paid advertising where ads display at no charge—ad impressions, or views, are free—and you’re charged only when a customer clicks your ad. There is no minimum ad investment required. Advertisers choose their daily bid and budget.
Fire TV ads are cost-per-click (CPC), so you pay only when viewers click your ad. Advertisers choose their own bids and budget.
Sponsored Display can help you increase product awareness, consideration, and conversion by displaying your ad to shoppers both on and off Amazon. Ad creative may include a Shop now button that links back to the product's detail page on Amazon. Advertisers can use the same familiar campaign metrics available within our sponsored ads suite to understand campaign performance.
You can use Fire TV ads to help drive app installs or promote video streams, rentals, purchases, and channel subscriptions. A customizable reporting dashboard allows you to monitor your campaign performance.
No. Sponsored Display uses shopping signals to automatically reach audiences who may be interested in your promoted product.
Vendors must customize the logo or headline of ad creatives that reach audiences based on interest, or which target specific products or product categories on Amazon.
Stores are a multipage, immersive shopping experience on Amazon that allow you to showcase your brand and products. Creating a Store is free and doesn't require any web development skills. Easily create custom layouts with rich multimedia content by using drag-and-drop tiles or predesigned templates.
Stores appear on the Amazon website on mobile, app, and desktop.
Stores are available for sellers who are registered in Amazon Brand Registry, vendors, and agencies representing vendors. You do not need to advertise on Amazon to create a Store, but you must be selling products on Amazon. Amazon DSP customers can also create a Store but must have an advertising console account in addition to their Amazon DSP account.
You must have a registered and active trademark submitted and approved by Amazon. Learn more about enrolling your brand in Amazon Brand Registry. Only sellers must enroll in Amazon Brand Registry—vendors do not need to enroll.
Building a Store helps drive shopping engagement, with a curated destination for customers to not only shop your products but also learn more about your brand.
Key features include:
* Unique design: Choose from a selection of design templates with varying store layouts and customizable features to best showcase your brand.
* Custom curation: Feature a dynamic or handpicked assortment of products along with optional multimedia content to enhance the customer shopping experience.
* Integrated promotion: Use built-in social features like social sharing buttons, coupled with promotional extensions such as Sponsored Brands, to drive Store awareness and traffic.
Creating a Store is free.
The Stores insights dashboard provides you with daily and aggregate views of your Store's performance. Metrics by traffic source and by page are available, including:
* Daily visitors: Total unique users or devices that viewed one or more pages on your Store in a single day.
* Views: Number of page views during this time period. Includes repeat views.
* Sales: Estimated total sales generated by Store visitors within 14 days of their last visit. Units and sales data are only available as of December 25, 2017.
* Units sold: Estimated total units purchased by store visitors within 14 days of their last visit.
* Views/Visitor: Average number of unique pages viewed by a daily visitor to your Store.
You can access analytics from the store builder, or from the Stores main page.
The time it takes to create a Store depends on what you would like to create. We’ve provided you with templates and tiles to make it easy to create pages quickly without design expertise. Before your Store can be published, we review it using a moderation process to make sure that it is up to the high standards we set for the customer shopping experience across Amazon. Keep in mind that moderation will take up to 72 hours, and your Store may be rejected if it does not meet our content acceptance policy. So plan ahead and publish your Store with plenty of time before major sales, deals, or holiday events.
Yes. Stores templates and widgets are all designed to be responsive, so they work on any screen size or device type. To see how your Store will look on mobile or desktop before publishing it, use the Preview link available in the Store Builder.
Display ads, powered by Amazon DSP, are a flexible ad format that you can use to reach your desired audiences on and off Amazon using either Amazon-generated creative or your own.
Display ads appear on Amazon websites, apps, and devices, as well as on sites and apps not owned by Amazon.
Customers may be taken to a product detail page, a Store, a custom landing page, or an external website.
Display ads help you reach, inspire, and reengage customers, with the right message, on and off Amazon.
Video ads combine sight, sound, and motion to share your brand story and engage your audience on and off Amazon.
Streaming TV ads are video ads that appear before, during, or after streaming content. These ads cannot always be skipped and therefore are typically viewed until completion of the ad.
Amazon Streaming TV ads appear alongside content on connected TVs, publisher channels and networks, IMDb, and IMDb TV.
Out-stream video ads are video ads that appear outside of video content, such as on a website or app. They often appear on a web page in the space reserved for a display ad.
Out-stream video ads appear both on Amazon subsidiaries like IMDb and across the web as standalone videos on desktop, mobile, or tablet. Formats include in-feed video, in-article/in-read video, video in-banner, and interstitial video.
For video ads that can be clicked, the customer can be taken to a product page on Amazon, your own website, or another destination across the internet.
Video ads empower you to tell stories and make emotional connections with customers throughout their decision journeys. They can help you reach relevant audiences at scale by showcasing your brand and products in trusted environments and alongside high-quality content.
Amazon audio ads are ads between 10 and 30 seconds long that play periodically in breaks between songs on Amazon Music's free ad-supported tier.
Audio ads are played on the free tier of Amazon Music across Alexa-enabled devices, including Echo and Fire TV, as well as on mobile and desktop.
Amazon audio ads help you connect with audiences wherever they are listening to the free tier of Amazon Music, even if they aren't watching their screens.
Audio ads are sold on a CPM (cost-per-thousand impressions) basis. They typically require a minimum spend of $25,000 (US). Contact an Amazon Ads account executive for more information.
Advertisers can provide a companion banner for their audio ads, which appears on Echo Show devices, Fire TV, and in the Amazon Music app and webplayer on mobile and desktop. Companion banners are clickable in the Amazon Music app and webplayer.
Amazon DSP is a demand-side platform that allows advertisers and agencies to programmatically reach audiences across the web.
A demand-side platform is software that provides automated, centralized media buying from multiple sources.
Programmatic advertising is the automated buying and selling of digital advertising inventory. Advertising inventory is the space for ads on a given website.
You can purchase display ads, video ads, and audio ads using Amazon DSP.
Amazon DSP programmatically delivers ads across Amazon.com and Amazon subsidiaries, like IMDb. Additionally, advertisers have access to direct inventory from leading publisher sites through Amazon Publisher Services as well as large third-party exchanges. This inventory includes high-quality sites on desktop and mobile web display, mobile app, and video pre-roll.
Display ads take customers to a product detail page, a Store, a custom landing page, or an external website. For video ads that can be clicked, the customer can be taken to a product detail page on Amazon, your own website, or another destination across the internet.
Amazon DSP is available to both advertisers who sell products on Amazon and those who do not. Amazon DSP is best suited to advertisers who want to programmatically buy display and video ads at scale.
Advertisers can enhance their reach by leveraging their existing audience using pixels, data management platforms (DMP), or advertiser-hashed audiences. In doing so, advertisers can deliver and optimize relevant ads to the same audiences across devices and ad formats to help drive greater relevance and improve campaign performance.
Pricing for ads through Amazon DSP varies depending on format and placement. Self-service customers are in full control of their campaigns, and there are no management fees. The managed-service option typically requires a minimum spend of $50,000 (US). Contact an Amazon Ads account executive for more information.
Self-service and managed-service options are available with Amazon DSP. Self-service customers are in full control of their campaigns, and there are no management fees. The managed-service option is a great solution for companies that want access to Amazon DSP inventory with white glove service or those with limited programmatic experience. To register for Amazon DSP, contact an Amazon Ads account executive.
Amazon Live is a fun and unique shopping experience on Amazon. Brands can create interactive, shoppable livestreams, either for free using the Amazon Live Creator app or by producing them in partnership with Amazon Live.
Livestreams appear in several locations on Amazon.com/Live, in the Amazon mobile app under Amazon Live, and in the Amazon Live Shopping app on Fire TV. Brand-created livestreams can appear on detail pages and other relevant placements where Amazon shoppers browse. Amazon-produced livestreams appear in placements such as the Amazon.com home page, event pages, and category pages.
The Amazon Live Creator app is available in the US to vendors who have a Store and professional sellers enrolled in Amazon Brand Registry.
Amazon Live allows you to engage with shoppers in real time in order to inspire, educate, and entertain. Livestreams can help you drive consideration and sales by providing live product demonstrations, connecting with customers through live chat, and displaying special offers.
Brands can livestream for free using the Amazon Live Creator app. Amazon-produced livestreams require a minimum spend. Contact an Amazon Ads account executive for more information.
Brands can collaborate with the Amazon Custom Advertising team to create custom advertising solutions. Together, they develop innovative campaigns that combine our advertising solutions with formats outside of existing ad products.
Custom advertising solutions can appear in placements such as home page takeovers, Fire TV placements, customized destination pages, and even non-digital formats such as on-box advertising and in-store displays.
Businesses can buy custom advertising solutions whether or not they sell products on Amazon.
Custom advertising solutions can help brands achieve their goals through innovative, customized experiences across Amazon’s online and physical stores. The Custom Advertising team is a global team of strategists, creatives, design technologists, engineers, and more.
Custom programs require working with an ad consultant and are subject to a required minimum spend. Contact an Amazon Ads account executive for more information.
Currently in beta, Posts is a shoppable feed on Amazon that allows you to reach relevant shoppers as they browse your categories on Amazon. Shoppers can click through Posts to explore your brand’s feed, and discover product pages directly from your feed.
Posts appear on the Amazon mobile shopping app (iOS and Android) and on mobile web in your brand’s feed, on detail pages, in feeds for related products, and in category-based feeds.
Posts is available for sellers enrolled in Amazon Brand Registry, vendors, and agencies representing vendors. Amazon DSP customers can also create Posts but must have an advertising console account in addition to their Amazon DSP account.
With Posts, you can deliver your brand story to in-market audiences, helping drive brand and product discovery and consideration with curated lifestyle imagery.
There is no cost to use Posts.
Amazon Ad Server is a global, multichannel ad server used to create, distribute, customize, measure, and optimize campaigns.
Amazon Ad Server is available globally to advertisers and agencies running digital campaigns.
Amazon Ad Server offers multiple options for creative authoring, streamlined campaign management tools, advanced dynamic creative optimization capabilities, and Media Rating Council-accredited measurement. Amazon Ad Server’s integration with Amazon Attribution allows advertisers in the US, Canada, UK, France, Germany, Italy, and Spain access to aggregated reporting and audience insights that show sales and conversion metrics on and off Amazon.
Impressions are the total number of times your ad was displayed.
Clicks are the number of times your ad was clicked.
Click-through rate is the total clicks divided by the total impressions.
Attribution is the assigning of credit to an ad that a customer was exposed to before taking a desired action, such as a purchase. We use a last-touch attribution model that accounts for various factors, including how the customer interacted with the ad.
Detail page views are the number of times customers viewed one of your product detail pages after viewing or clicking your ad.
Spend is the total dollar value of accrued clicks (CPC) or impressions (CPM).
Advertising cost of sales is your total spend divided by your total sales as a percentage. For example, if you spend $5 on ads and generate $25 in sales, your ACOS is 20%—a straightforward measure of your advertising’s profitability. Once you have launched your campaign, you can view all reports on the reporting page.
Return on ad spend is your total sales divided by your total spend as a percentage.
Account and campaign reports are available for all ad products. For some ad types, product- and keyword-level reports are available as well.
Thank you for submitting your application to register for Amazon Ads. If your application was rejected, you will receive an email outlining the reason for rejection and next steps.
Thank you for submitting your application to register for Amazon Ads. If we require additional information to process your application, you will receive an email from Amazon Ads. Please respond to the email with the requested information.
To retrieve your password, click "Forgot your password?" on the sign-in page.
You can change the name by going to Manage your accounts in the advertising console and clicking “Edit name.”
To advertise a book, it must be:
* Eligible on your KDP Bookshelf
* Available on Amazon in the marketplace you're advertising in
* Meets the Book Advertising Guidelines and Acceptance Policies, and Amazon Ads Guidelines and Acceptance Policies
Advertisements must be appropriate for a general audience, be available in the English language, and meet the Book Advertising Guidelines and Acceptance Policies for advertising. When you generate creative for your advertisements using the advertising console or KDP dashboard, your book's cover, title, and content are part of the advertisement. These and any custom ad images you upload will be reviewed for compliance with the policies.
Campaigns may be rejected due to poor-quality images or covers, unreadable text, typos in text, content not appropriate for a general audience, and content that is not localized for use in the United States.
A partner is an advertising agency or tool provider who works with Amazon advertisers to manage and optimize campaigns. These campaigns help advertisers achieve business objectives and connect with Amazon audiences in meaningful ways.
Please ensure you are the representative of your organization who is authorized to accept the Partner Network Terms & Conditions on behalf of your business and to create and operate the Partner Network account. As the assigned partner admin, you will be able to add additional users with other access levels in the account setup process.
We recommend using the Amazon account that is associated with your business email address to register your business on the Partner Network. Sign in with the existing Amazon account credentials you use to support advertisers. If you do not have an Amazon account, we recommend you create a new one. If you have an existing manager account, we recommend using it to register on the Partner Network so that the users and advertisers already linked to that manager account are activated. If you have multiple manager accounts, we recommend selecting the manager account that best represents your business, as it will be evaluated for advanced partner status.
You can register on the Partner Network by providing information required to identify your business. You can also create multiple Partner Network accounts for your organization based on regions or distinct legal entities, if you want them to be represented individually. Each Partner Network account will require a distinct business registration number and will be evaluated independently for assessment to designate advanced partner status. If multiple administrators submit a registration request for the same business, only the first registration will be considered.
Your local, state, or federal government issues business registration numbers. You can find yours on the legal documents of your incorporated business.
We recommend you complete the account setup by adding users, linking your Login with Amazon applications and advertisers, and submitting your partner directory profile listing, which will be reflected in the Amazon Ads partner directory. For further guidance, refer to the welcome email that was sent to the email address used to register for the Partner Network.
A partner admin can click on “Publish Directory Profile” to submit the Partner Network profile details that will be listed on the Amazon Ads partner directory. You can showcase your capabilities on the partner directory, which can be accessed by advertisers looking for partners who offer expertise across a diverse range of Amazon Ads specialties. When publishing your partner directory listing, please provide accurate, up-to-date information to describe your business.
The Amazon Ads partner directory includes a list of partners (agencies and tool integrators) that have tools and offerings that can help advertisers achieve business objectives and connect with Amazon audiences in meaningful ways. If you are looking for Amazon Ads support through partners, it is a good place to start. We encourage you to do your own due diligence to find the partner that best meets your unique advertising needs.
The check mark indicates that Amazon Ads has verified at least two associates in the partner’s organization who have the relevant product certification through the learning console and recent campaign management activity.
Simply contact the partner directly by navigating to their profile, where you can click on a "Contact provider" button. This will connect you directly to the partner's contact page.
Ready to grow your business? Join the Partner Network to build your Amazon Ads expertise, access resources, and get discovered by advertisers.
## What is the Partner Network?
## Why join the Partner Network?
### Partner status
Earn advanced or verified partner status by demonstrating expertise, engaging with Amazon Ads, and delivering results for advertisers.
### Resources, events, and webinars
Build your knowledge through product user guides, case studies, and best practices.
### View certifications
Access Amazon Ads learning certifications completed by your organization’s employees.
### Partner directory
Display your expertise through a listing in the enhanced partner directory.
### Developer resources
Access technical documentation and latest release information.
## Who can join the Partner Network?
The Partner Network is designed for agencies and tool providers that offer a broad range of products, services, and areas of expertise to support advertiser needs. Whether you specialize in an advertising product or offer advertising services, you can join the Partner Network to learn more about new products, access training and resources, as well as manage your Amazon Ads business.
### Agencies
Support advertisers with expertise in hands-on management and optimization of digital advertising campaigns.
### Tool providers
Offer advertisers specialized tools or software as a service (SaaS) solutions to manage and optimize digital advertising campaigns.
### Other advertising support
Provide advertisers education and training in digital or online advertising.
## How do I get started?
## Step 1
Identify the representative of your organization authorized to accept the Partner Network terms and conditions and create the Partner Network account on behalf of your business.
## Step 2
Complete the Partner Network registration form with information required to identify your business.
## Step 3
Review and accept the Partner Network terms and conditions.
## Step 4
Complete the account setup by adding users and advertisers, and submitting your directory profile listing, which will be reflected in the partner directory once approved.
## Step 5
Explore resources, events, and webinars.
## Additional training and education for partners
Choose from the partner fundamentals, digital ad strategist, retail media specialist, developer, or data and analytics learning plan, and get started for free in the learning console.
Certifications allow partners to validate their proficiency in specific Amazon Ads products and solutions. Upon successful completion, you’ll earn an Amazon Ads Certification digital badge.
# Cookie Notice
We use cookies and similar tools (collectively, “cookies”) for the purposes described below. Blocking cookies may impact your experience of our site. You may review and change your choices at any time by clicking on ‘Cookie preferences’ in the footer of this page. Operational cookies: Operational cookies can't be deactivated to the extent we use them to provide you our services, for example:
* To recognize you when you sign in to use our services
* To deliver content, display features, products, and services, which might be of interest to you
* To prevent fraudulent activity
* To improve security
* To keep track of your preferences (such as language)
We also use cookies to understand how customers use our services so we can make improvements. For example, we use cookies to conduct research and diagnostics to improve Amazon’s products and services, and to measure and understand the performance of our services. Performance cookies: Performance cookies provide anonymous statistics about how customers use our site so we can improve site experience and performance. Approved third parties may perform analytics on our behalf, but they cannot use the data for their own purposes. Advertising cookies: Advertising cookies may be set through our site by us or our advertising partners and help us deliver relevant marketing content.
Package ‘ormBigData’
October 14, 2022
Title Fitting Semiparametric Cumulative Probability Models for Big
Data
Version 0.0.1
Description A big data version for fitting cumulative probability models using the orm() func-
tion. See Liu et al. (2017) <DOI:10.1002/sim.7433> for details.
Depends R (>= 3.5.0)
License GPL (>= 2)
Encoding UTF-8
RoxygenNote 7.1.1
Imports rms (>= 5.1-4),Hmisc (>= 4.3-0),doParallel (>=
1.0.11),parallel (>= 3.5.2),foreach (>= 1.2.0),iterators (>=
1.0.0),SparseM (>= 1.77),benchmarkme (>= 1.0.4)
NeedsCompilation no
Author <NAME> [cre, aut],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2021-06-10 08:40:02 UTC
R topics documented:
ormB... 2
ormBD Cumulative Probability Model for Big Data
Description
Fits cumulative probability models (CPMs) for big data. CPMs can be fit with the orm() function
in the rms package. When the sample size or the number of distinct values is very large, fitting a
CPM may be very slow or infeasible due to demand on CPU time or storage. This function provides
three alternative approaches. In the divide-and-combine approach, the data are evenly divided into
subsets, a CPM is fit to each subset, followed by a final step to aggregate all the information. In
the binning and rounding approaches, a new outcome variable is defined and a CPM is fit to the
new outcome variable. In the binning approach, the outcomes are ordered and then grouped into
equal-quantile bins, and the median of each bin is assigned as the new outcome for the observations
in the bin. In the rounding approach, the outcome variable is either rounded to a decimal place or a
power of ten, or rounded to significant digits.
Usage
ormBD(
formula,
data,
subset = NULL,
na.action = na.delete,
target_num = 10000,
approach = c("binning", "rounding", "divide-combine"),
rd_type = c("skewness", "signif", "decplace"),
mem_limit = 0.75,
log = NULL,
model = FALSE,
x = FALSE,
y = FALSE,
method = c("orm.fit", "model.frame", "model.matrix"),
...
)
Arguments
formula a formula object
data data frame to use. Default is the current frame.
subset logical expression or vector of subscripts defining a subset of observations to
analyze
na.action function to handle NAs in the data. Default is ’na.delete’, which deletes any
observation having response or predictor missing, while preserving the attributes
of the predictors and maintaining frequencies of deletions due to each variable
in the model. This is usually specified using options(na.action="na.delete").
target_num the desired number of observations in a subset for the ’divide-and-combine’
method; the target number of bins for the ’binning’ method; the desired number
of distinct outcome values after rounding for the ’rounding’ method. Default to
10,000. Please see Details.
approach the type of method to analyze the data. Can take value ’binning’, ’rounding’,
and ’divide-combine’. Default is ’binning’.
rd_type the type of round, either rounding to a decimal place or a power of ten (rd_type
= ’decplace’) or to significant digits (rd_type = ’signif’). Default is ’skewness’,
which is to determine the rounding type according to the skewness of the out-
come: ’decplace’ if skewness < 2 and ’signif’ otherwise.
mem_limit the fraction of system memory to be used in the ’divide-and-combine’ method.
Default is 0.75, which is 75 percent of system memory. Range from 0 to 1.
log a parameter for parallel::makeCluster() when the ’divide-and-combine’ method
is used. See the help page for makeCluster for more detail.
model a parameter for orm(). Explicitly included here so that the ’divide-and-combine’
method gives the correct output. See the help page for orm for more detail.
x a parameter for orm(). Explicitly included here so that the ’divide-and-combine’
method gives the correct output. See the help page for orm for more detail.
y a parameter for orm(). Explicitly included here so that the ’divide-and-combine’
method gives the correct output. See the help page for orm for more detail.
method a parameter for orm(). Explicitly included here so that the ’divide-and-combine’
method gives the correct output. See the help page for orm for more detail.
... other arguments that will be passed to orm
Details
In the divide-and-combine approach, the data are evenly divided into subsets. The desired number
of observations in each subset is specified by ’target_num’. As this number may not evenly divide
the whole dataset, a number closest to it will be determined and used instead. A CPM is fit for
each subset with the orm() function. The results from all subsets are then aggregated to compute
the final estimates of the intercept function alpha and the beta coefficients, their standard errors, and
the variance-covariance matrix for the beta coefficients.
In the binning approach, observations are grouped into equal-quantile bins according to their out-
come. The number of bins are specified by ’target_num’. A new outcome variable is defined to
takes value median[y, y in B] for observations in bin B. A CPM is fit with the orm() function for the
new outcome variable.
In the rounding approach, by default the outcome is rounded to a decimal place or a power of ten
unless the skewness of the outcome is greater than 2, in which case the outcome is rounded to sig-
nificant digits. The desired number of distinct outcomes after rounding is specified by ’target_num’.
Because rounding can yield too few or too many distinct values compared to the target number spec-
ified by ’target_num’, a refinement step is implemented so that the final number of distinct rounded
values is close to ’target_num’. Details are in Li et al. (2021). A CPM is fit with the orm() function
for the new rounded outcome.
Value
The returned object has class ’ormBD’. It contains the following components in addition to those
mentioned under the optional arguments and those generated by orm().
call calling expression
approach the type of method used to analyze the data
target_num the ’target_num’ argument in the function call
... others, same as for orm
Author(s)
<NAME>
Department of Computer and Data Sciences
Case Western Reserve University
<NAME>
Department of Population and Public Health Sciences
University of Southern California
References
Liu et al. "Modeling continuous response variables using ordinal regression." Statistics in Medicine,
(2017) 36:4316-4335.
Li et al. "Fitting semiparametric cumulative probability models for big data." (2021) (to be submit-
ted)
See Also
orm na.delete get_ram registerDoParallel SparseM.solve
Examples
## generate a small example data and run one of the three methods
set.seed(1)
n <- 200
x1 = rnorm(n); x2 = rnorm(n)
tmpdata = data.frame(x1 = x1, x2 = x2, y = rnorm(n) + x1 + 2*x2)
modbinning <- ormBD(y ~ x1 + x2, data = tmpdata, family = loglog,
approach = "binning", target_num = 100)
## modrounding <- ormBD(y ~ x1 + x2, data = tmpdata, family = loglog,
## approach = "rounding", target_num = 100)
## moddivcomb <- ormBD(y ~ x1 + x2, data = tmpdata, family = loglog,
## approach = "divide-combine", target_num = 100)
README
---
### Authentication configuration
This module defines necessary interfaces to implement server and client type authenticators:
* Server type authenticators perform authentication for incoming HTTP/gRPC requests and are typically used in receivers.
* Client type authenticators perform client-side authentication for outgoing HTTP/gRPC requests and are typically used in exporters.
The currently known authenticators are:
* Server Authenticators
+ [Basic Auth Extension](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/basicauthextension)
+ [Bearer Token Extension](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/bearertokenauthextension)
+ [OIDC Extension](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/oidcauthextension)
* Client Authenticators
+ [ASAP Client Authentication Extension](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/asapauthextension)
+ [Basic Auth Extension](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/basicauthextension)
+ [Bearer Token Extension](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/bearertokenauthextension)
+ [OAuth2 Client Extension](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/oauth2clientauthextension)
+ [Sigv4 Extension](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/sigv4authextension)
Examples:
```
extensions:
oidc:
# see the blog post on securing the otelcol for information
# on how to setup an OIDC server and how to generate the TLS certs
# required for this example
# https://medium.com/opentelemetry/securing-your-opentelemetry-collector-1a4f9fa5bd6f
issuer_url: http://localhost:8080/auth/realms/opentelemetry
audience: account
oauth2client:
client_id: someclientid
client_secret: someclientsecret
token_url: https://example.com/oauth2/default/v1/token
scopes: ["api.metrics"]
# tls settings for the token client
tls:
insecure: true
ca_file: /var/lib/mycert.pem
cert_file: certfile
key_file: keyfile
# timeout for the token client
timeout: 2s
receivers:
otlp/with_auth:
protocols:
grpc:
endpoint: localhost:4318
tls:
cert_file: /tmp/certs/cert.pem
key_file: /tmp/certs/cert-key.pem
auth:
## oidc is the extension name to use as the authenticator for this receiver
authenticator: oidc
otlphttp/withauth:
endpoint: http://localhost:9000
auth:
authenticator: oauth2client
```
#### Creating an authenticator
New authenticators can be added by creating a new extension that also implements the appropriate interface (`configauth.ServerAuthenticator` or `configauth.ClientAuthenticator`).
Generic authenticators that may be used by a good number of users might be accepted as part of the contrib distribution. If you have an interest in contributing an authenticator, open an issue with your proposal. For other cases, you'll need to include your custom authenticator as part of your custom OpenTelemetry Collector, perhaps being built using the [OpenTelemetry Collector Builder](https://github.com/open-telemetry/opentelemetry-collector/tree/main/cmd/builder).
Documentation
---
### Overview
Package configauth implements the configuration settings to ensure authentication on incoming requests, and allows exporters to add authentication on outgoing requests.
### Index
* [type Authentication](#Authentication)
* + [func (a Authentication) GetClientAuthenticator(extensions map[component.ID]component.Component) (auth.Client, error)](#Authentication.GetClientAuthenticator)
+ [func (a Authentication) GetServerAuthenticator(extensions map[component.ID]component.Component) (auth.Server, error)](#Authentication.GetServerAuthenticator)
### Constants
This section is empty.
### Variables
This section is empty.
### Functions
This section is empty.
### Types
#### type [Authentication](https://github.com/open-telemetry/opentelemetry-collector/blob/config/configauth/v0.87.0/config/configauth/configauth.go#L24)
```
type Authentication struct {
	// AuthenticatorID specifies the name of the extension to use in order to authenticate the incoming data point.
	AuthenticatorID component.ID `mapstructure:"authenticator"`
}
```
Authentication defines the auth settings for the receiver.
#### func (Authentication) [GetClientAuthenticator](https://github.com/open-telemetry/opentelemetry-collector/blob/config/configauth/v0.87.0/config/configauth/configauth.go#L45)
```
func (a Authentication) GetClientAuthenticator(extensions map[component.ID]component.Component) (auth.Client, error)
```
GetClientAuthenticator attempts to select the appropriate auth.Client from the list of extensions,
based on the component id of the extension. If an authenticator is not found, an error is returned.
This should be only used by HTTP clients.
#### func (Authentication) [GetServerAuthenticator](https://github.com/open-telemetry/opentelemetry-collector/blob/config/configauth/v0.87.0/config/configauth/configauth.go#L31)
```
func (a Authentication) GetServerAuthenticator(extensions map[component.ID]component.Component) (auth.Server, error)
```
GetServerAuthenticator attempts to select the appropriate auth.Server from the list of extensions,
based on the requested extension name. If an authenticator is not found, an error is returned.
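The godoc above only lists the signatures. As a rough, illustrative sketch (not part of this package's documentation), a component's start-up code might resolve the authenticator named in its `auth` block as shown below; the `startWithAuth` helper and the `example` package are hypothetical, and the use of `host.GetExtensions()` assumes the collector's standard `component.Host` interface:

```
package example

import (
	"go.opentelemetry.io/collector/component"
	"go.opentelemetry.io/collector/config/configauth"
)

// startWithAuth is a hypothetical helper: cfg.AuthenticatorID carries the
// `authenticator:` value from the component's `auth` block in the YAML config.
func startWithAuth(cfg configauth.Authentication, host component.Host) error {
	// Look up the configured server authenticator among the loaded extensions.
	srv, err := cfg.GetServerAuthenticator(host.GetExtensions())
	if err != nil {
		return err // the referenced extension is missing or is not an authenticator
	}
	_ = srv // wrap incoming HTTP/gRPC request handling with this auth.Server
	return nil
}
```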
Artificery
===
[![Module Version](https://img.shields.io/hexpm/v/artificery.svg)](https://hex.pm/packages/artificery)
[![Hex Docs](https://img.shields.io/badge/hex-docs-lightgreen.svg)](https://hexdocs.pm/artificery/)
[![Total Download](https://img.shields.io/hexpm/dt/artificery.svg)](https://hex.pm/packages/artificery)
[![License](https://img.shields.io/hexpm/l/artificery.svg)](https://github.com/bitwalker/artificery/blob/master/LICENSE)
[![Last Updated](https://img.shields.io/github/last-commit/bitwalker/artificery.svg)](https://github.com/bitwalker/artificery/commits/master)
Artificery is a toolkit for generating command line applications. It handles argument parsing, validation/transformation, generating help, and provides an easy way to define commands, their arguments, and options.
Installation
---
Just add Artificery to your deps:
```
defp deps do
[
# You can get the latest version information via `mix hex.info artificery`
{:artificery, "~> x.x"}
]
end
```
Then run [`mix deps.get`](https://hexdocs.pm/mix/Mix.Tasks.Deps.Get.html) and you are ready to get started!
Defining a CLI
---
Let's assume you have an application named `:myapp`, let's define a module,
`MyCliModule` which will be the entry point for the command line interface:
```
defmodule MyCliModule do
  use Artificery
end
```
The above will set up the Artificery internals for your CLI: it defines an entry point for the command line, handles argument parsing, and imports the macros for defining commands, options, and arguments.
### Commands
Let's add a simple "hello" command, which will greet the caller:
```
defmodule MyCliModule do
use Artificery
command :hello, "Says hello" do
argument :name, :string, "The name of the person to greet", required: true
  end
end
```
We've introduced two of the macros Artificery imports: `command`, for defining top-level and nested commands; and `argument` for defining positional arguments for the current command. **Note**: `argument` can only be used inside of
`command`, as it applies to the current command being defined, and has no meaning globally.
This command could be invoked (via escript) like so: `./myapp hello bitwalker`.
Right now this will print an error stating that the command is defined, but no matching implementation was exported. We define that like so:
```
def hello(_argv, %{name: name}) do
Artificery.Console.notice "Hello #{name}!"
end
```
**Note**: Command handlers are expected to have an arity of 2, where the first argument is a list of unhandled arguments/options passed on the command line,
and the second is a map containing all of the formally defined arguments/options.
This goes in the same module as the command definition, but you can use
`defdelegate` to put the implementation elsewhere. The thing to note is that the function needs to be named the same as the command. You can change this, however, by passing an extra parameter to `command`, like so:
```
command :hello, [callback: :say_hello], "Says hello" do
  argument :name, :string, "The name of the person to greet", required: true
end
```
The above will invoke `say_hello/2` rather than `hello/2`.
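The handler body itself is unchanged; only the function name follows the callback. Mirroring the earlier `hello/2` implementation, it might look like this:

```
def say_hello(_argv, %{name: name}) do
  Artificery.Console.notice "Hello #{name}!"
end
```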
### Command Flags
There are two command flags you can set currently to alter some of Artificery's behaviour: `callback: atom` and `hidden: boolean`. The former will change the callback function invoked when dispatching a command, as shown above, and the latter, when true, will hide the command from display in the `help` output. You may also apply `:hidden` to options (but not arguments).
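For instance, reusing the flag-list syntax shown above for `:callback`, a maintenance command could be kept out of the `help` output; the `:migrate` command and its argument here are purely illustrative:

```
# Not shown in `help` output, but still invocable
command :migrate, [hidden: true], "Runs internal data migrations" do
  argument :version, :string, "The migration version to run", required: true
end
```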
### Options
Let's add a `--greeting=string` option to the `hello` command:
```
command :hello, "Says hello" do
argument :name, :string, "The name of the person to greet", required: true
  option :greeting, :string, "Sets a different greeting than \"Hello <name>\"!"
end
```
And adjust our implementation:
```
def hello(_argv, %{name: name} = opts) do
greeting = Map.get(opts, :greeting, "Hello")
greet(greeting, name)
end

defp greet(greeting, name), do: Artificery.Console.notice("#{greeting} #{name}!")
```
And we're done!
### Subcommands
When you have more complex command line interfaces, it is common to divide up
"topics" or top-level commands into subcommands, you see this in things like Heroku's CLI, e.g. `heroku keys:add`. Artificery supports this by allowing you to nest `command` within another `command`. Artificery is smart about how it parses arguments, so you can have options/arguments at the top-level as well as in subcommands, e.g. `./myapp info --format=json processes`. The options map received by the `processes` command will contain all of the options for commands above it.
```
defmodule MyCliModule do
use Artificery
command :info, "Get info about :myapp" do
option :format, :string, "Sets the output format"
command :processes, "Prints information about processes running in :myapp"
  end
end
```
**Note**: As you may have noticed above, the `processes` command doesn't have a
`do` block, because it doesn't define any arguments or options; this form is supported for convenience.
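A handler for the nested command above can therefore read the parent's `--format` option straight from its options map; the default value used here is just an assumption for the sketch:

```
def processes(_argv, opts) do
  # :format is inherited from the parent `info` command, if the user passed it
  format = Map.get(opts, :format, "table")
  Artificery.Console.info "Listing :myapp processes (format: #{format})"
end
```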
### Global Options
You may define global options which apply to all commands by defining them outside `command`:
```
defmodule MyCliModule do
use Artificery
option :debug, :boolean, "When set, produces debugging output"
...
end
```
Now all commands defined in this module will receive `debug: true | false` in their options map,
and can act accordingly.
### Reusing Options
You can define reusable options via `defoption/3` or `defoption/4`. These are effectively the same as `option/3` and `option/4`, except they do not define an option in any context, they are defined abstractly and intended to be used via
`option/1` or `option/2`, as shown below:
```
defoption :host, :string, "The hostname of the server to connect to",
alias: :h
command :ping, "Pings the host to verify connectivity" do
# With no overridden flags
# option :host
# With overrides
option :host, help: "The host to ping", default: "localhost"
end
command :query, "Queries the host" do
# Can be shared across commands, even used globally
option :host, required: true
  argument :query, :string, required: true
end
```
### Option/Argument Transforms
You can provide transforms for options or arguments to convert them to the data types your commands desire as part of the option definition, like so:
```
# Options
option :ip, :string, "The IP address of the host to connect to",
transform: fn raw ->
case :inet.parse_address(String.to_charlist(raw)) do
{:ok, ip} ->
ip
{:error, reason} ->
raise "invalid value for --ip, got: #{raw}, error: #{inspect reason}"
end
end
# Arguments
argument :ip, :string, "The IP address of the host to connect to",
transform: ...
```
Now the command (and any subcommands) where this option is defined will get a parsed IP address, rather than a raw string, allowing you to do the conversion in one place, rather than in each command handler.
Currently this macro supports functions in anonymous form (like in the example above), or one of the following forms:
```
# Function capture, must have arity 1
transform: &String.to_atom/1

# Local function as an atom, must have arity 1
transform: :to_ip_address
# Module/function/args tuple, where the raw value is passed as the first argument
# This form is invoked via `apply/3`
transform: {String, :to_char_list, []}
```
### Pre-Dispatch Handling
For those cases where you need to perform some action before command handlers are invoked, perhaps to apply global behaviour to all commands, start applications, or whatever else you may need, Artificery provides a hook for that, `pre_dispatch/3`.
This is actually a callback defined as part of the [`Artificery`](Artificery.html) behaviour, but is given a default implementation. You can override this implementation though to provide your own pre-dispatch step.
The default implementation is basically the following:
```
def pre_dispatch(%Artificery.Command{}, _argv, %{} = options) do
  {:ok, options}
end
```
You can either return `{:ok, options}` or raise an error; no other return values are permitted. This allows you to extend or filter `options`, handle additional arguments in `argv`, or take action based on the current command.
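As an example, an override might start the applications your handlers rely on and stamp the options map (a minimal sketch; `:logger` and the `:started_at` key are illustrative choices, not requirements):
```
def pre_dispatch(%Artificery.Command{} = _cmd, _argv, options) do
  # Make sure runtime dependencies are running before any handler is invoked
  {:ok, _started} = Application.ensure_all_started(:logger)

  # Extend the options map that every handler will receive
  {:ok, Map.put(options, :started_at, System.system_time(:second))}
end
```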
Writing Output / Logging
---
Artificery provides a `Console` module which contains a number of functions for logging or writing output to standard output/standard error. A list of the basic functions it provides is below, with a short usage sketch after the list:
* `configure/1`, takes a list of options which configures the logger, currently the only option is `:verbosity`
* `debug/1`, writes a debug message to stderr (colored cyan if terminal supports color)
* `info/1`, writes an info message to stdout (no color)
* `notice/1`, writes an informational notice to stdout (bright blue)
* `success/1`, writes a success message to stdout (bright green)
* `warn/1`, writes a warning to stderr (yellow)
* `error/1`, writes an error to stderr (red), and also halts/terminates the process with a non-zero exit code
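Putting a few of these together, a handler might report its progress like this (a minimal sketch; the `deploy` command and `do_deploy/1` are illustrative):
```
def deploy(_argv, opts) do
  alias Artificery.Console

  # Only shown when verbosity permits it
  Console.debug "options: #{inspect(opts)}"
  Console.info "Starting deploy"

  case do_deploy(opts) do
    :ok ->
      Console.success "Deploy complete!"

    {:error, reason} ->
      # Prints in red and exits with a non-zero status
      Console.error "Deploy failed: #{inspect(reason)}"
  end
end
```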
In addition to writing messages to the terminal, `Console` can also display a spinner/loading animation while some long-running work is being performed, and the spinner's message can be updated with progress information.
The following example is a trivial one: it reads from a file in a loop, updating the spinner's status while it reads. There are obviously cleaner ways of writing this, but hopefully it makes the capabilities clear.
```
def load_data(_argv, %{path: path}) do
  alias Artificery.Console

  unless File.exists?(path) do
    Console.error "No such file: #{path}"
  end

  # A state machine defined as a recursive anonymous function
  # Each state updates the spinner status and is reflected in the console
  loader = fn
    :opening, _size, _bytes_read, _file, loader ->
      Console.update_spinner("opening #{path}")
      %{size: size} = File.stat!(path)
      loader.(:reading, size, 0, File.open!(path), loader)

    :reading, size, bytes_read, file, loader ->
      progress = Float.round(bytes_read / size * 100)
      Console.update_spinner("reading..#{progress}%")

      case IO.read(file, :line) do
        :eof ->
          loader.(:done, size, bytes_read, file, loader)

        {:error, _reason} = err ->
          Console.update_spinner("read error!")
          File.close(file)
          err

        new_data ->
          loader.(:reading, size, bytes_read + byte_size(new_data), file, loader)
      end

    :done, _size, bytes_read, file, _loader ->
      Console.update_spinner("done! (total bytes read #{bytes_read})")
      File.close(file)
      :ok
  end

  results =
    Console.spinner "Loading data.." do
      loader.(:opening, 0, 0, nil, loader)
    end

  case results do
    {:error, reason} ->
      Console.error "Failed to load data from #{path}: #{inspect reason}"

    :ok ->
      Console.success "Load complete!"
  end
end
```
Handling Input
---
Artificery exposes some functions for working with interactive user sessions:
* `yes?/1`, asks the user a question and expects a yes/no response, returns a boolean
* `ask/2`, queries the user for information they need to provide
### Example
Let's shoot for a slightly more amped up `hello` command:
```
def hello(_argv, _opts) do
  name = Console.ask "What is your name?", validator: &is_valid_name/1
  Console.success "Hello #{name}!"
end

defp is_valid_name(name) when byte_size(name) > 1, do: :ok
defp is_valid_name(_), do: {:error, "You must tell me your name or I can't greet you!"}
```
The above will accept any name longer than one character; it's obviously not super robust, but it shows the general idea.
The `ask` function also supports transforming responses, and providing defaults in the case where you want to accept blank answers.
Check the docs for more information!
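Similarly, `yes?/1` is useful for confirmation prompts; a minimal sketch (the `destroy` command is illustrative):
```
def destroy(_argv, %{name: name}) do
  alias Artificery.Console

  if Console.yes?("Really destroy #{name}?") do
    Console.success "#{name} destroyed"
  else
    Console.info "Aborted"
  end
end
```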
Producing An Escript
---
To use your newly created CLI as an escript, simply add the following to your
`mix.exs`:
```
def project do
  [
    ...
    escript: escript()
  ]
end

...

defp escript do
  [main_module: MyCliModule]
end
```
The `main_module` to use is the module in which you added `use Artificery`,
i.e. the module in which you defined the commands your application exposes.
Finally, run [`mix escript.build`](https://hexdocs.pm/mix/Mix.Tasks.Escript.Build.html) to generate the escript executable. You can then run `./yourapp help` to test it out.
Using In Releases
---
If you want to define the CLI as part of a larger application, and consume it via custom commands in Distillery, it is very straightforward to do. You'll need to define a custom command and add it to your release configuration:
```
# rel/config.exs
release :myapp do
  set commands: [
    mycli: "rel/commands/mycli.sh"
  ]
end
```
Then in `rel/commands/mycli.sh` add the following:
```
#!/usr/bin/env bash
elixir -e "MyCliModule.main" -- "$@"
```
Since the code for your application will already be on the path in a release, we simply need to invoke the CLI module and pass in arguments. We add `--` between the `elixir` arguments and those provided on the command line so that they are passed through to our CLI rather than being interpreted by `elixir` itself. Artificery handles this, so you simply need to ensure that you add `--` when invoking via `elixir` like this.
You can then invoke your CLI via the custom command, for example, `bin/myapp mycli help` to print the help text.
Roadmap
---
* [ ] Support validators
I'm open to suggestions; just open an issue titled `RFC: <feature you are requesting>`.
License
---
Copyright (c) 2018 <NAME>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at <http://www.apache.org/licenses/LICENSE-2.0>.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
Artificery behaviour
===
This module defines the behaviour and public API for Artificery command line applications.
Usage
---
To get started, simply `use` this module, and you can start defining your CLI right away:
```
use Artificery

command :hello, "Say hello" do
  option :name, :string, "The name of the person to greet"
end

...

def hello(_argv, %{name: name}) do
  Console.success "Hello #{name}!"
end
```
This module exports a few macros for building up complex CLI applications, so please review the documentation for more information on each, and how to use them.
Summary
===
[Types](#types)
---
[argv()](#t:argv/0)
[options()](#t:options/0)
[Functions](#functions)
---
[argument(name, type)](#argument/2)
Like `option`, but rather than a command switch, it defines a positional argument.
[argument(name, type, help)](#argument/3)
Like [`argument/2`](#argument/2), but takes either help text or a keyword list of flags.
[argument(name, type, help, flags)](#argument/4)
Like [`argument/3`](#argument/3), but takes a name, type, help text and keyword list of flags.
[command(name, help)](#command/2)
Defines a new command with the given name and either help text or flags.
[command(name, help, help)](#command/3)
Defines a new command with the given name, flags or help text, and definition,
or flags, help text, and no definition.
[command(name, flags, help, list)](#command/4)
Defines a new command with the given name, flags, help text, and definition.
[defoption(name, type, flags)](#defoption/3)
Defines an option which can be imported into one or more commands.
[defoption(name, type, help, flags)](#defoption/4)
Like [`defoption/3`](#defoption/3), but takes the option name, type, help, and flags.
[option(name)](#option/1)
Imports an option defined via `defoption` into the current scope.
[option(name, overrides)](#option/2)
When used in the following form
[option(name, type, flags)](#option/3)
Similar to `defoption`, but defines an option inline.
[option(name, type, help, flags)](#option/4)
[Callbacks](#callbacks)
---
[pre_dispatch(arg1, argv, options)](#c:pre_dispatch/3)
Types
===
Functions
===
Callbacks
===
Artificery.Console
===
A minimal logger.
Summary
===
[Functions](#functions)
---
[ask(question, opts \\ [])](#ask/2)
Ask the user to provide data in response to a question.
[configure(opts)](#configure/1)
Updates the logger configuration with the given options.
[debug(device \\ :stdio, msg)](#debug/2)
Prints a debug message, only visible when verbose mode is on.
[error(device \\ :stdio, msg)](#error/2)
Prints an error message, and then halts the process.
[halt(code)](#halt/1)
Terminates the process with the given status code.
[info(device \\ :stdio, msg)](#info/2)
Prints an info message.
[notice(device \\ :stdio, msg)](#notice/2)
Prints a notice
[spinner(msg, opts \\ [], list)](#spinner/3)
Provides a spinner while some long-running work is being done.
[success(device \\ :stdio, msg)](#success/2)
Prints a success message
[update_spinner(status)](#update_spinner/1)
Updates a running spinner with the provided status text.
[warn(device \\ :stdio, msg)](#warn/2)
Prints a warning message
[write(msg)](#write/1)
Write text or iodata to standard output.
[write(msg, styles)](#write/2)
[write(device, msg, styles)](#write/3)
[yes?(question)](#yes?/1)
Ask the user a question which requires a yes/no answer, returns a boolean.
Functions
===
Artificery.Console.Table
===
A printer for tabular data.
Summary
===
[Functions](#functions)
---
[format(title, header, rows, opts \\ [])](#format/4)
Given a title, header, and rows, formats the data as a table.
[print(title, header, rows, opts \\ [])](#print/4)
Given a title, header, and rows, prints the data as a table.
Functions
===
Artificery.Entry
===
This module defines the entrypoint for Artificery command line applications,
which handles argument parsing, dispatch, and help generation.
API Reference
===
Modules
---
[Artificery](Artificery.html)
This module defines the behaviour and public API for Artificery command line applications.
[Artificery.Console](Artificery.Console.html)
A minimal logger.
[Artificery.Console.Table](Artificery.Console.Table.html)
A printer for tabular data.
[Artificery.Entry](Artificery.Entry.html)
This module defines the entrypoint for Artificery command line applications,
which handles argument parsing, dispatch, and help generation.
Crate rusoto_docdb
===
Amazon DocumentDB API documentation
If you’re using the service, you’re probably looking for DocdbClient and Docdb.
Structs
---
AddSourceIdentifierToSubscriptionMessage: Represents the input to AddSourceIdentifierToSubscription.
AddSourceIdentifierToSubscriptionResult
AddTagsToResourceMessage: Represents the input to AddTagsToResource.
ApplyPendingMaintenanceActionMessage: Represents the input to ApplyPendingMaintenanceAction.
ApplyPendingMaintenanceActionResult
AvailabilityZone: Information about an Availability Zone.
Certificate: A certificate authority (CA) certificate for an account.
CertificateMessage
CloudwatchLogsExportConfiguration: The configuration setting for the log types to be enabled for export to Amazon CloudWatch Logs for a specific instance or cluster. The `EnableLogTypes` and `DisableLogTypes` arrays determine which logs are exported (or not exported) to CloudWatch Logs. The values within these arrays depend on the engine that is being used.
CopyDBClusterParameterGroupMessage: Represents the input to CopyDBClusterParameterGroup.
CopyDBClusterParameterGroupResult
CopyDBClusterSnapshotMessage: Represents the input to CopyDBClusterSnapshot.
CopyDBClusterSnapshotResult
CreateDBClusterMessage: Represents the input to CreateDBCluster.
CreateDBClusterParameterGroupMessage: Represents the input of CreateDBClusterParameterGroup.
CreateDBClusterParameterGroupResult
CreateDBClusterResult
CreateDBClusterSnapshotMessage: Represents the input of CreateDBClusterSnapshot.
CreateDBClusterSnapshotResult
CreateDBInstanceMessage: Represents the input to CreateDBInstance.
CreateDBInstanceResult
CreateDBSubnetGroupMessage: Represents the input to CreateDBSubnetGroup.
CreateDBSubnetGroupResult
CreateEventSubscriptionMessage: Represents the input to CreateEventSubscription.
CreateEventSubscriptionResult
CreateGlobalClusterMessage: Represents the input to CreateGlobalCluster.
CreateGlobalClusterResult
DBCluster: Detailed information about a cluster.
DBClusterMember: Contains information about an instance that is part of a cluster.
DBClusterMessage: Represents the output of DescribeDBClusters.
DBClusterParameterGroup: Detailed information about a cluster parameter group.
DBClusterParameterGroupDetails: Represents the output of DBClusterParameterGroup.
DBClusterParameterGroupNameMessage: Contains the name of a cluster parameter group.
DBClusterParameterGroupsMessage: Represents the output of DBClusterParameterGroups.
DBClusterRole: Describes an Identity and Access Management (IAM) role that is associated with a cluster.
DBClusterSnapshot: Detailed information about a cluster snapshot.
DBClusterSnapshotAttribute: Contains the name and values of a manual cluster snapshot attribute. Manual cluster snapshot attributes are used to authorize other accounts to restore a manual cluster snapshot.
DBClusterSnapshotAttributesResult: Detailed information about the attributes that are associated with a cluster snapshot.
DBClusterSnapshotMessage: Represents the output of DescribeDBClusterSnapshots.
DBEngineVersion: Detailed information about an engine version.
DBEngineVersionMessage: Represents the output of DescribeDBEngineVersions.
DBInstance: Detailed information about an instance.
DBInstanceMessage: Represents the output of DescribeDBInstances.
DBInstanceStatusInfo: Provides a list of status information for an instance.
DBSubnetGroup: Detailed information about a subnet group.
DBSubnetGroupMessage: Represents the output of DescribeDBSubnetGroups.
DeleteDBClusterMessage: Represents the input to DeleteDBCluster.
DeleteDBClusterParameterGroupMessage: Represents the input to DeleteDBClusterParameterGroup.
DeleteDBClusterResult
DeleteDBClusterSnapshotMessage: Represents the input to DeleteDBClusterSnapshot.
DeleteDBClusterSnapshotResult
DeleteDBInstanceMessage: Represents the input to DeleteDBInstance.
DeleteDBInstanceResult
DeleteDBSubnetGroupMessage: Represents the input to DeleteDBSubnetGroup.
DeleteEventSubscriptionMessage: Represents the input to DeleteEventSubscription.
DeleteEventSubscriptionResult
DeleteGlobalClusterMessage: Represents the input to DeleteGlobalCluster.
DeleteGlobalClusterResult
DescribeCertificatesMessage
DescribeDBClusterParameterGroupsMessage: Represents the input to DescribeDBClusterParameterGroups.
DescribeDBClusterParametersMessage: Represents the input to DescribeDBClusterParameters.
DescribeDBClusterSnapshotAttributesMessage: Represents the input to DescribeDBClusterSnapshotAttributes.
DescribeDBClusterSnapshotAttributesResult
DescribeDBClusterSnapshotsMessage: Represents the input to DescribeDBClusterSnapshots.
DescribeDBClustersMessage: Represents the input to DescribeDBClusters.
DescribeDBEngineVersionsMessage: Represents the input to DescribeDBEngineVersions.
DescribeDBInstancesMessage: Represents the input to DescribeDBInstances.
DescribeDBSubnetGroupsMessage: Represents the input to DescribeDBSubnetGroups.
DescribeEngineDefaultClusterParametersMessage: Represents the input to DescribeEngineDefaultClusterParameters.
DescribeEngineDefaultClusterParametersResult
DescribeEventCategoriesMessage: Represents the input to DescribeEventCategories.
DescribeEventSubscriptionsMessage: Represents the input to DescribeEventSubscriptions.
DescribeEventsMessage: Represents the input to DescribeEvents.
DescribeGlobalClustersMessage
DescribeOrderableDBInstanceOptionsMessage: Represents the input to DescribeOrderableDBInstanceOptions.
DescribePendingMaintenanceActionsMessage: Represents the input to DescribePendingMaintenanceActions.
DocdbClient: A client for the Amazon DocDB API.
Endpoint: Network information for accessing a cluster or instance. Client programs must specify a valid endpoint to access these Amazon DocumentDB resources.
EngineDefaults: Contains the result of a successful invocation of the `DescribeEngineDefaultClusterParameters` operation.
Event: Detailed information about an event.
EventCategoriesMap: An event source type, accompanied by one or more event category names.
EventCategoriesMessage: Represents the output of DescribeEventCategories.
EventSubscription: Detailed information about an event to which you have subscribed.
EventSubscriptionsMessage: Represents the output of DescribeEventSubscriptions.
EventsMessage: Represents the output of DescribeEvents.
FailoverDBClusterMessage: Represents the input to FailoverDBCluster.
FailoverDBClusterResult
Filter: A named set of filter values, used to return a more specific list of results. You can use a filter to match a set of resources by specific criteria, such as IDs. Wildcards are not supported in filters.
GlobalCluster: A data type representing an Amazon DocumentDB global cluster.
GlobalClusterMember: A data structure with information about any primary and secondary clusters associated with an Amazon DocumentDB global clusters.
GlobalClustersMessage
ListTagsForResourceMessage: Represents the input to ListTagsForResource.
ModifyDBClusterMessage: Represents the input to ModifyDBCluster.
ModifyDBClusterParameterGroupMessage: Represents the input to ModifyDBClusterParameterGroup.
ModifyDBClusterResult
ModifyDBClusterSnapshotAttributeMessage: Represents the input to ModifyDBClusterSnapshotAttribute.
ModifyDBClusterSnapshotAttributeResult
ModifyDBInstanceMessage: Represents the input to ModifyDBInstance.
ModifyDBInstanceResult
ModifyDBSubnetGroupMessage: Represents the input to ModifyDBSubnetGroup.
ModifyDBSubnetGroupResult
ModifyEventSubscriptionMessage: Represents the input to ModifyEventSubscription.
ModifyEventSubscriptionResult
ModifyGlobalClusterMessage: Represents the input to ModifyGlobalCluster.
ModifyGlobalClusterResult
OrderableDBInstanceOption: The options that are available for an instance.
OrderableDBInstanceOptionsMessage: Represents the output of DescribeOrderableDBInstanceOptions.
Parameter: Detailed information about an individual parameter.
PendingCloudwatchLogsExports: A list of the log types whose configuration is still pending. These log types are in the process of being activated or deactivated.
PendingMaintenanceAction: Provides information about a pending maintenance action for a resource.
PendingMaintenanceActionsMessage: Represents the output of DescribePendingMaintenanceActions.
PendingModifiedValues: One or more modified settings for an instance. These modified settings have been requested, but haven't been applied yet.
RebootDBInstanceMessage: Represents the input to RebootDBInstance.
RebootDBInstanceResult
RemoveFromGlobalClusterMessage: Represents the input to RemoveFromGlobalCluster.
RemoveFromGlobalClusterResult
RemoveSourceIdentifierFromSubscriptionMessage: Represents the input to RemoveSourceIdentifierFromSubscription.
RemoveSourceIdentifierFromSubscriptionResult
RemoveTagsFromResourceMessage: Represents the input to RemoveTagsFromResource.
ResetDBClusterParameterGroupMessage: Represents the input to ResetDBClusterParameterGroup.
ResourcePendingMaintenanceActions: Represents the output of ApplyPendingMaintenanceAction.
RestoreDBClusterFromSnapshotMessage: Represents the input to RestoreDBClusterFromSnapshot.
RestoreDBClusterFromSnapshotResult
RestoreDBClusterToPointInTimeMessage: Represents the input to RestoreDBClusterToPointInTime.
RestoreDBClusterToPointInTimeResult
StartDBClusterMessage
StartDBClusterResult
StopDBClusterMessage
StopDBClusterResult
Subnet: Detailed information about a subnet.
Tag: Metadata assigned to an Amazon DocumentDB resource consisting of a key-value pair.
TagListMessage: Represents the output of ListTagsForResource.
UpgradeTarget: The version of the database engine that an instance can be upgraded to.
VpcSecurityGroupMembership: Used as a response element for queries on virtual private cloud (VPC) security group membership.
Enums
---
AddSourceIdentifierToSubscriptionError: Errors returned by AddSourceIdentifierToSubscription
AddTagsToResourceError: Errors returned by AddTagsToResource
ApplyPendingMaintenanceActionError: Errors returned by ApplyPendingMaintenanceAction
CopyDBClusterParameterGroupError: Errors returned by CopyDBClusterParameterGroup
CopyDBClusterSnapshotError: Errors returned by CopyDBClusterSnapshot
CreateDBClusterError: Errors returned by CreateDBCluster
CreateDBClusterParameterGroupError: Errors returned by CreateDBClusterParameterGroup
CreateDBClusterSnapshotError: Errors returned by CreateDBClusterSnapshot
CreateDBInstanceError: Errors returned by CreateDBInstance
CreateDBSubnetGroupError: Errors returned by CreateDBSubnetGroup
CreateEventSubscriptionError: Errors returned by CreateEventSubscription
CreateGlobalClusterError: Errors returned by CreateGlobalCluster
DeleteDBClusterError: Errors returned by DeleteDBCluster
DeleteDBClusterParameterGroupError: Errors returned by DeleteDBClusterParameterGroup
DeleteDBClusterSnapshotError: Errors returned by DeleteDBClusterSnapshot
DeleteDBInstanceError: Errors returned by DeleteDBInstance
DeleteDBSubnetGroupError: Errors returned by DeleteDBSubnetGroup
DeleteEventSubscriptionError: Errors returned by DeleteEventSubscription
DeleteGlobalClusterError: Errors returned by DeleteGlobalCluster
DescribeCertificatesError: Errors returned by DescribeCertificates
DescribeDBClusterParameterGroupsError: Errors returned by DescribeDBClusterParameterGroups
DescribeDBClusterParametersError: Errors returned by DescribeDBClusterParameters
DescribeDBClusterSnapshotAttributesError: Errors returned by DescribeDBClusterSnapshotAttributes
DescribeDBClusterSnapshotsError: Errors returned by DescribeDBClusterSnapshots
DescribeDBClustersError: Errors returned by DescribeDBClusters
DescribeDBEngineVersionsError: Errors returned by DescribeDBEngineVersions
DescribeDBInstancesError: Errors returned by DescribeDBInstances
DescribeDBSubnetGroupsError: Errors returned by DescribeDBSubnetGroups
DescribeEngineDefaultClusterParametersError: Errors returned by DescribeEngineDefaultClusterParameters
DescribeEventCategoriesError: Errors returned by DescribeEventCategories
DescribeEventSubscriptionsError: Errors returned by DescribeEventSubscriptions
DescribeEventsError: Errors returned by DescribeEvents
DescribeGlobalClustersError: Errors returned by DescribeGlobalClusters
DescribeOrderableDBInstanceOptionsError: Errors returned by DescribeOrderableDBInstanceOptions
DescribePendingMaintenanceActionsError: Errors returned by DescribePendingMaintenanceActions
FailoverDBClusterError: Errors returned by FailoverDBCluster
ListTagsForResourceError: Errors returned by ListTagsForResource
ModifyDBClusterError: Errors returned by ModifyDBCluster
ModifyDBClusterParameterGroupError: Errors returned by ModifyDBClusterParameterGroup
ModifyDBClusterSnapshotAttributeError: Errors returned by ModifyDBClusterSnapshotAttribute
ModifyDBInstanceError: Errors returned by ModifyDBInstance
ModifyDBSubnetGroupError: Errors returned by ModifyDBSubnetGroup
ModifyEventSubscriptionError: Errors returned by ModifyEventSubscription
ModifyGlobalClusterError: Errors returned by ModifyGlobalCluster
RebootDBInstanceError: Errors returned by RebootDBInstance
RemoveFromGlobalClusterError: Errors returned by RemoveFromGlobalCluster
RemoveSourceIdentifierFromSubscriptionError: Errors returned by RemoveSourceIdentifierFromSubscription
RemoveTagsFromResourceError: Errors returned by RemoveTagsFromResource
ResetDBClusterParameterGroupError: Errors returned by ResetDBClusterParameterGroup
RestoreDBClusterFromSnapshotError: Errors returned by RestoreDBClusterFromSnapshot
RestoreDBClusterToPointInTimeError: Errors returned by RestoreDBClusterToPointInTime
StartDBClusterError: Errors returned by StartDBCluster
StopDBClusterError: Errors returned by StopDBCluster
Traits
---
Docdb: Trait representing the capabilities of the Amazon DocDB API. Amazon DocDB clients implement this trait.
Struct rusoto_docdb::DocdbClient
===
```
pub struct DocdbClient { /* private fields */ }
```
A client for the Amazon DocDB API.
Implementations
---
### impl DocdbClient
#### pub fn new(region: Region) -> DocdbClient
Creates a client backed by the default tokio event loop.
The client will use the default credentials provider and tls client.
#### pub fn new_with<P, D>( request_dispatcher: D, credentials_provider: P, region: Region) -> DocdbClient where P: ProvideAwsCredentials + Send + Sync + 'static, D: DispatchSignedRequest + Send + Sync + 'static,
#### pub fn new_with_client(client: Client, region: Region) -> DocdbClient
Trait Implementations
---
### impl Clone for DocdbClient
#### fn clone(&self) -> DocdbClient
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Docdb for DocdbClient
#### fn add_source_identifier_to_subscription<'life0, 'async_trait>( &'life0 self, input: AddSourceIdentifierToSubscriptionMessage) -> Pin<Box<dyn Future<Output = Result<AddSourceIdentifierToSubscriptionResult, RusotoError<AddSourceIdentifierToSubscriptionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Adds a source identifier to an existing event notification subscription.
#### fn add_tags_to_resource<'life0, 'async_trait>( &'life0 self, input: AddTagsToResourceMessage) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<AddTagsToResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Adds metadata tags to an Amazon DocumentDB resource. You can use these tags with cost allocation reporting to track costs that are associated with Amazon DocumentDB resources or in a `Condition` statement in an Identity and Access Management (IAM) policy for Amazon DocumentDB.
#### fn apply_pending_maintenance_action<'life0, 'async_trait>( &'life0 self, input: ApplyPendingMaintenanceActionMessage) -> Pin<Box<dyn Future<Output = Result<ApplyPendingMaintenanceActionResult, RusotoError<ApplyPendingMaintenanceActionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Applies a pending maintenance action to a resource (for example, to an Amazon DocumentDB instance).
#### fn copy_db_cluster_parameter_group<'life0, 'async_trait>( &'life0 self, input: CopyDBClusterParameterGroupMessage) -> Pin<Box<dyn Future<Output = Result<CopyDBClusterParameterGroupResult, RusotoError<CopyDBClusterParameterGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Copies the specified cluster parameter group.
#### fn copy_db_cluster_snapshot<'life0, 'async_trait>( &'life0 self, input: CopyDBClusterSnapshotMessage) -> Pin<Box<dyn Future<Output = Result<CopyDBClusterSnapshotResult, RusotoError<CopyDBClusterSnapshotError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Copies a snapshot of a cluster.
To copy a cluster snapshot from a shared manual cluster snapshot, `SourceDBClusterSnapshotIdentifier` must be the Amazon Resource Name (ARN) of the shared cluster snapshot. You can only copy a shared DB cluster snapshot, whether encrypted or not, in the same Region.
To cancel the copy operation after it is in progress, delete the target cluster snapshot identified by `TargetDBClusterSnapshotIdentifier` while that cluster snapshot is in the *copying* status.
#### fn create_db_cluster<'life0, 'async_trait>( &'life0 self, input: CreateDBClusterMessage) -> Pin<Box<dyn Future<Output = Result<CreateDBClusterResult, RusotoError<CreateDBClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a new Amazon DocumentDB cluster.
#### fn create_db_cluster_parameter_group<'life0, 'async_trait>( &'life0 self, input: CreateDBClusterParameterGroupMessage) -> Pin<Box<dyn Future<Output = Result<CreateDBClusterParameterGroupResult, RusotoError<CreateDBClusterParameterGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a new cluster parameter group.
Parameters in a cluster parameter group apply to all of the instances in a cluster.
A cluster parameter group is initially created with the default parameters for the database engine used by instances in the cluster. In Amazon DocumentDB, you cannot make modifications directly to the `default.docdb3.6` cluster parameter group. If your Amazon DocumentDB cluster is using the default cluster parameter group and you want to modify a value in it, you must first create a new parameter group or copy an existing parameter group, modify it, and then apply the modified parameter group to your cluster. For the new cluster parameter group and associated settings to take effect, you must then reboot the instances in the cluster without failover. For more information, see Modifying Amazon DocumentDB Cluster Parameter Groups.
#### fn create_db_cluster_snapshot<'life0, 'async_trait>( &'life0 self, input: CreateDBClusterSnapshotMessage) -> Pin<Box<dyn Future<Output = Result<CreateDBClusterSnapshotResult, RusotoError<CreateDBClusterSnapshotError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a snapshot of a cluster.
#### fn create_db_instance<'life0, 'async_trait>( &'life0 self, input: CreateDBInstanceMessage) -> Pin<Box<dyn Future<Output = Result<CreateDBInstanceResult, RusotoError<CreateDBInstanceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a new instance.
#### fn create_db_subnet_group<'life0, 'async_trait>( &'life0 self, input: CreateDBSubnetGroupMessage) -> Pin<Box<dyn Future<Output = Result<CreateDBSubnetGroupResult, RusotoError<CreateDBSubnetGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a new subnet group. Subnet groups must contain at least one subnet in at least two Availability Zones in the Region.
#### fn create_event_subscription<'life0, 'async_trait>( &'life0 self, input: CreateEventSubscriptionMessage) -> Pin<Box<dyn Future<Output = Result<CreateEventSubscriptionResult, RusotoError<CreateEventSubscriptionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates an Amazon DocumentDB event notification subscription. This action requires a topic Amazon Resource Name (ARN) created by using the Amazon DocumentDB console, the Amazon SNS console, or the Amazon SNS API. To obtain an ARN with Amazon SNS, you must create a topic in Amazon SNS and subscribe to the topic. The ARN is displayed in the Amazon SNS console.
You can specify the type of source (`SourceType`) that you want to be notified of. You can also provide a list of Amazon DocumentDB sources (`SourceIds`) that trigger the events, and you can provide a list of event categories (`EventCategories`) for events that you want to be notified of. For example, you can specify `SourceType = db-instance`, `SourceIds = mydbinstance1, mydbinstance2` and `EventCategories = Availability, Backup`.
If you specify both the `SourceType` and `SourceIds` (such as `SourceType = db-instance` and `SourceIdentifier = myDBInstance1`), you are notified of all the `db-instance` events for the specified source. If you specify a `SourceType` but do not specify a `SourceIdentifier`, you receive notice of the events for that source type for all your Amazon DocumentDB sources. If you do not specify either the `SourceType` or the `SourceIdentifier`, you are notified of events generated from all Amazon DocumentDB sources belonging to your customer account.
#### fn create_global_cluster<'life0, 'async_trait>( &'life0 self, input: CreateGlobalClusterMessage) -> Pin<Box<dyn Future<Output = Result<CreateGlobalClusterResult, RusotoError<CreateGlobalClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates an Amazon DocumentDB global cluster that can span multiple Regions. The global cluster contains one primary cluster with read-write capability, and up to five read-only secondary clusters. Global clusters use storage-based fast replication across Regions with latencies of less than one second, using dedicated infrastructure with no impact to your workload's performance.
You can create a global cluster that is initially empty, and then add a primary and a secondary to it. Or you can specify an existing cluster during the create operation, and this cluster becomes the primary of the global cluster.
This action only applies to Amazon DocumentDB clusters.
#### fn delete_db_cluster<'life0, 'async_trait>( &'life0 self, input: DeleteDBClusterMessage) -> Pin<Box<dyn Future<Output = Result<DeleteDBClusterResult, RusotoError<DeleteDBClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a previously provisioned cluster. When you delete a cluster, all automated backups for that cluster are deleted and can't be recovered. Manual DB cluster snapshots of the specified cluster are not deleted.
#### fn delete_db_cluster_parameter_group<'life0, 'async_trait>( &'life0 self, input: DeleteDBClusterParameterGroupMessage) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteDBClusterParameterGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a specified cluster parameter group. The cluster parameter group to be deleted can't be associated with any clusters.
#### fn delete_db_cluster_snapshot<'life0, 'async_trait>( &'life0 self, input: DeleteDBClusterSnapshotMessage) -> Pin<Box<dyn Future<Output = Result<DeleteDBClusterSnapshotResult, RusotoError<DeleteDBClusterSnapshotError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a cluster snapshot. If the snapshot is being copied, the copy operation is terminated.
The cluster snapshot must be in the `available` state to be deleted.
#### fn delete_db_instance<'life0, 'async_trait>( &'life0 self, input: DeleteDBInstanceMessage) -> Pin<Box<dyn Future<Output = Result<DeleteDBInstanceResult, RusotoError<DeleteDBInstanceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a previously provisioned instance.
#### fn delete_db_subnet_group<'life0, 'async_trait>( &'life0 self, input: DeleteDBSubnetGroupMessage) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteDBSubnetGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a subnet group.
The specified database subnet group must not be associated with any DB instances.
#### fn delete_event_subscription<'life0, 'async_trait>( &'life0 self, input: DeleteEventSubscriptionMessage) -> Pin<Box<dyn Future<Output = Result<DeleteEventSubscriptionResult, RusotoError<DeleteEventSubscriptionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an Amazon DocumentDB event notification subscription.
#### fn delete_global_cluster<'life0, 'async_trait>( &'life0 self, input: DeleteGlobalClusterMessage) -> Pin<Box<dyn Future<Output = Result<DeleteGlobalClusterResult, RusotoError<DeleteGlobalClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a global cluster. The primary and secondary clusters must already be detached or deleted before attempting to delete a global cluster.
This action only applies to Amazon DocumentDB clusters.
#### fn describe_certificates<'life0, 'async_trait>( &'life0 self, input: DescribeCertificatesMessage) -> Pin<Box<dyn Future<Output = Result<CertificateMessage, RusotoError<DescribeCertificatesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of certificate authority (CA) certificates provided by Amazon DocumentDB for this account.
#### fn describe_db_cluster_parameter_groups<'life0, 'async_trait>( &'life0 self, input: DescribeDBClusterParameterGroupsMessage) -> Pin<Box<dyn Future<Output = Result<DBClusterParameterGroupsMessage, RusotoError<DescribeDBClusterParameterGroupsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of `DBClusterParameterGroup` descriptions. If a `DBClusterParameterGroupName` parameter is specified, the list contains only the description of the specified cluster parameter group.
#### fn describe_db_cluster_parameters<'life0, 'async_trait>( &'life0 self, input: DescribeDBClusterParametersMessage) -> Pin<Box<dyn Future<Output = Result<DBClusterParameterGroupDetails, RusotoError<DescribeDBClusterParametersError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns the detailed parameter list for a particular cluster parameter group.
#### fn describe_db_cluster_snapshot_attributes<'life0, 'async_trait>( &'life0 self, input: DescribeDBClusterSnapshotAttributesMessage) -> Pin<Box<dyn Future<Output = Result<DescribeDBClusterSnapshotAttributesResult, RusotoError<DescribeDBClusterSnapshotAttributesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of cluster snapshot attribute names and values for a manual DB cluster snapshot.
When you share snapshots with other accounts, `DescribeDBClusterSnapshotAttributes` returns the `restore` attribute and a list of IDs for the accounts that are authorized to copy or restore the manual cluster snapshot. If `all` is included in the list of values for the `restore` attribute, then the manual cluster snapshot is public and can be copied or restored by all accounts.
#### fn describe_db_cluster_snapshots<'life0, 'async_trait>( &'life0 self, input: DescribeDBClusterSnapshotsMessage) -> Pin<Box<dyn Future<Output = Result<DBClusterSnapshotMessage, RusotoError<DescribeDBClusterSnapshotsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns information about cluster snapshots. This API operation supports pagination.
#### fn describe_db_clusters<'life0, 'async_trait>( &'life0 self, input: DescribeDBClustersMessage) -> Pin<Box<dyn Future<Output = Result<DBClusterMessage, RusotoError<DescribeDBClustersError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns information about provisioned Amazon DocumentDB clusters. This API operation supports pagination. For certain management features such as cluster and instance lifecycle management, Amazon DocumentDB leverages operational technology that is shared with Amazon RDS and Amazon Neptune. Use the `filterName=engine,Values=docdb` filter parameter to return only Amazon DocumentDB clusters.
#### fn describe_db_engine_versions<'life0, 'async_trait>( &'life0 self, input: DescribeDBEngineVersionsMessage) -> Pin<Box<dyn Future<Output = Result<DBEngineVersionMessage, RusotoError<DescribeDBEngineVersionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of the available engines.
#### fn describe_db_instances<'life0, 'async_trait>( &'life0 self, input: DescribeDBInstancesMessage) -> Pin<Box<dyn Future<Output = Result<DBInstanceMessage, RusotoError<DescribeDBInstancesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns information about provisioned Amazon DocumentDB instances. This API supports pagination.
#### fn describe_db_subnet_groups<'life0, 'async_trait>( &'life0 self, input: DescribeDBSubnetGroupsMessage) -> Pin<Box<dyn Future<Output = Result<DBSubnetGroupMessage, RusotoError<DescribeDBSubnetGroupsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of `DBSubnetGroup` descriptions. If a `DBSubnetGroupName` is specified, the list will contain only the descriptions of the specified `DBSubnetGroup`.
#### fn describe_engine_default_cluster_parameters<'life0, 'async_trait>( &'life0 self, input: DescribeEngineDefaultClusterParametersMessage) -> Pin<Box<dyn Future<Output = Result<DescribeEngineDefaultClusterParametersResult, RusotoError<DescribeEngineDefaultClusterParametersError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns the default engine and system parameter information for the cluster database engine.
#### fn describe_event_categories<'life0, 'async_trait>( &'life0 self, input: DescribeEventCategoriesMessage) -> Pin<Box<dyn Future<Output = Result<EventCategoriesMessage, RusotoError<DescribeEventCategoriesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Displays a list of categories for all event source types, or, if specified, for a specified source type.
#### fn describe_event_subscriptions<'life0, 'async_trait>( &'life0 self, input: DescribeEventSubscriptionsMessage) -> Pin<Box<dyn Future<Output = Result<EventSubscriptionsMessage, RusotoError<DescribeEventSubscriptionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists all the subscription descriptions for a customer account. The description for a subscription includes `SubscriptionName`, `SNSTopicARN`, `CustomerID`, `SourceType`, `SourceID`, `CreationTime`, and `Status`.
If you specify a `SubscriptionName`, lists the description for that subscription.
#### fn describe_events<'life0, 'async_trait>( &'life0 self, input: DescribeEventsMessage) -> Pin<Box<dyn Future<Output = Result<EventsMessage, RusotoError<DescribeEventsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns events related to instances, security groups, snapshots, and DB parameter groups for the past 14 days. You can obtain events specific to a particular DB instance, security group, snapshot, or parameter group by providing the name as a parameter. By default, the events of the past hour are returned.
#### fn describe_global_clusters<'life0, 'async_trait>( &'life0 self, input: DescribeGlobalClustersMessage) -> Pin<Box<dyn Future<Output = Result<GlobalClustersMessage, RusotoError<DescribeGlobalClustersError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns information about Amazon DocumentDB global clusters. This API supports pagination.
This action only applies to Amazon DocumentDB clusters.
#### fn describe_orderable_db_instance_options<'life0, 'async_trait>( &'life0 self, input: DescribeOrderableDBInstanceOptionsMessage) -> Pin<Box<dyn Future<Output = Result<OrderableDBInstanceOptionsMessage, RusotoError<DescribeOrderableDBInstanceOptionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of orderable instance options for the specified engine.
#### fn describe_pending_maintenance_actions<'life0, 'async_trait>( &'life0 self, input: DescribePendingMaintenanceActionsMessage) -> Pin<Box<dyn Future<Output = Result<PendingMaintenanceActionsMessage, RusotoError<DescribePendingMaintenanceActionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of resources (for example, instances) that have at least one pending maintenance action.
#### fn failover_db_cluster<'life0, 'async_trait>( &'life0 self, input: FailoverDBClusterMessage) -> Pin<Box<dyn Future<Output = Result<FailoverDBClusterResult, RusotoError<FailoverDBClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Forces a failover for a cluster.
A failover for a cluster promotes one of the Amazon DocumentDB replicas (read-only instances) in the cluster to be the primary instance (the cluster writer).
If the primary instance fails, Amazon DocumentDB automatically fails over to an Amazon DocumentDB replica, if one exists. You can force a failover when you want to simulate a failure of a primary instance for testing.
source#### fn list_tags_for_resource<'life0, 'async_trait>( &'life0 self, input: ListTagsForResourceMessage) -> Pin<Box<dyn Future<Output = Result<TagListMessage, RusotoError<ListTagsForResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists all tags on an Amazon DocumentDB resource.
source#### fn modify_db_cluster<'life0, 'async_trait>( &'life0 self, input: ModifyDBClusterMessage) -> Pin<Box<dyn Future<Output = Result<ModifyDBClusterResult, RusotoError<ModifyDBClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modifies a setting for an Amazon DocumentDB cluster. You can change one or more database configuration parameters by specifying these parameters and the new values in the request.
source#### fn modify_db_cluster_parameter_group<'life0, 'async_trait>( &'life0 self, input: ModifyDBClusterParameterGroupMessage) -> Pin<Box<dyn Future<Output = Result<DBClusterParameterGroupNameMessage, RusotoError<ModifyDBClusterParameterGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modifies the parameters of a cluster parameter group. To modify more than one parameter, submit a list of the following: `ParameterName`, `ParameterValue`, and `ApplyMethod`. A maximum of 20 parameters can be modified in a single request.
Changes to dynamic parameters are applied immediately. Changes to static parameters require a reboot or maintenance window before the change can take effect.
After you create a cluster parameter group, you should wait at least 5 minutes before creating your first cluster that uses that cluster parameter group as the default parameter group. This allows Amazon DocumentDB to fully complete the create action before the parameter group is used as the default for a new cluster. This step is especially important for parameters that are critical when creating the default database for a cluster, such as the character set for the default database defined by the `character_set_database` parameter.
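As an illustrative sketch (not taken from the generated crate docs), a request that changes a single dynamic parameter might be built as shown below. The snake_case field names, the required `parameters` vector, and the `Parameter` fields are assumed from rusoto's code-generation conventions and should be checked against this crate; the parameter group name, Region, and Tokio runtime are assumptions as well.
```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, ModifyDBClusterParameterGroupMessage, Parameter};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = DocdbClient::new(Region::UsEast1);
// Hypothetical parameter group name; substitute your own.
let request = ModifyDBClusterParameterGroupMessage {
db_cluster_parameter_group_name: "my-docdb-params".to_string(),
parameters: vec![Parameter {
parameter_name: Some("audit_logs".to_string()),
parameter_value: Some("enabled".to_string()),
// "immediate" applies to dynamic parameters; static ones need "pending-reboot".
apply_method: Some("immediate".to_string()),
..Default::default()
}],
};
let response = client.modify_db_cluster_parameter_group(request).await?;
println!("modified group: {:?}", response.db_cluster_parameter_group_name);
Ok(())
}
```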
source#### fn modify_db_cluster_snapshot_attribute<'life0, 'async_trait>( &'life0 self, input: ModifyDBClusterSnapshotAttributeMessage) -> Pin<Box<dyn Future<Output = Result<ModifyDBClusterSnapshotAttributeResult, RusotoError<ModifyDBClusterSnapshotAttributeError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Adds an attribute and values to, or removes an attribute and values from, a manual cluster snapshot.
To share a manual cluster snapshot with other accounts, specify `restore` as the `AttributeName`, and use the `ValuesToAdd` parameter to add a list of IDs of the accounts that are authorized to restore the manual cluster snapshot. Use the value `all` to make the manual cluster snapshot public, which means that it can be copied or restored by all accounts. Do not add the `all` value for any manual cluster snapshots that contain private information that you don't want available to all accounts. If a manual cluster snapshot is encrypted, it can be shared, but only by specifying a list of authorized account IDs for the `ValuesToAdd` parameter. You can't use `all` as a value for that parameter in this case.
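A minimal sketch of sharing a manual snapshot with one other account is shown below; the snapshot identifier and account ID are hypothetical, and the field names are assumed from rusoto's usual code generation rather than quoted from this crate.
```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, ModifyDBClusterSnapshotAttributeMessage};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = DocdbClient::new(Region::UsEast1);
// Authorize a single (hypothetical) account to restore the manual snapshot.
let request = ModifyDBClusterSnapshotAttributeMessage {
db_cluster_snapshot_identifier: "my-manual-snapshot".to_string(),
attribute_name: "restore".to_string(),
values_to_add: Some(vec!["123456789012".to_string()]),
..Default::default()
};
client.modify_db_cluster_snapshot_attribute(request).await?;
Ok(())
}
```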
source#### fn modify_db_instance<'life0, 'async_trait>( &'life0 self, input: ModifyDBInstanceMessage) -> Pin<Box<dyn Future<Output = Result<ModifyDBInstanceResult, RusotoError<ModifyDBInstanceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modifies settings for an instance. You can change one or more database configuration parameters by specifying these parameters and the new values in the request.
source#### fn modify_db_subnet_group<'life0, 'async_trait>( &'life0 self, input: ModifyDBSubnetGroupMessage) -> Pin<Box<dyn Future<Output = Result<ModifyDBSubnetGroupResult, RusotoError<ModifyDBSubnetGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modifies an existing subnet group. Subnet groups must contain at least one subnet in at least two Availability Zones in the Region.
source#### fn modify_event_subscription<'life0, 'async_trait>( &'life0 self, input: ModifyEventSubscriptionMessage) -> Pin<Box<dyn Future<Output = Result<ModifyEventSubscriptionResult, RusotoError<ModifyEventSubscriptionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modifies an existing Amazon DocumentDB event notification subscription.
source#### fn modify_global_cluster<'life0, 'async_trait>( &'life0 self, input: ModifyGlobalClusterMessage) -> Pin<Box<dyn Future<Output = Result<ModifyGlobalClusterResult, RusotoError<ModifyGlobalClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modify a setting for an Amazon DocumentDB global cluster. You can change one or more configuration parameters (for example: deletion protection), or the global cluster identifier by specifying these parameters and the new values in the request.
This action only applies to Amazon DocumentDB clusters.
source#### fn reboot_db_instance<'life0, 'async_trait>( &'life0 self, input: RebootDBInstanceMessage) -> Pin<Box<dyn Future<Output = Result<RebootDBInstanceResult, RusotoError<RebootDBInstanceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
You might need to reboot your instance, usually for maintenance reasons. For example, if you make certain changes, or if you change the cluster parameter group that is associated with the instance, you must reboot the instance for the changes to take effect.
Rebooting an instance restarts the database engine service. Rebooting an instance results in a momentary outage, during which the instance status is set to *rebooting*.
source#### fn remove_from_global_cluster<'life0, 'async_trait>( &'life0 self, input: RemoveFromGlobalClusterMessage) -> Pin<Box<dyn Future<Output = Result<RemoveFromGlobalClusterResult, RusotoError<RemoveFromGlobalClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Detaches an Amazon DocumentDB secondary cluster from a global cluster. The cluster becomes a standalone cluster with read-write capability instead of being read-only and receiving data from a primary in a different region.
This action only applies to Amazon DocumentDB clusters.
source#### fn remove_source_identifier_from_subscription<'life0, 'async_trait>( &'life0 self, input: RemoveSourceIdentifierFromSubscriptionMessage) -> Pin<Box<dyn Future<Output = Result<RemoveSourceIdentifierFromSubscriptionResult, RusotoError<RemoveSourceIdentifierFromSubscriptionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Removes a source identifier from an existing Amazon DocumentDB event notification subscription.
source#### fn remove_tags_from_resource<'life0, 'async_trait>( &'life0 self, input: RemoveTagsFromResourceMessage) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<RemoveTagsFromResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Removes metadata tags from an Amazon DocumentDB resource.
source#### fn reset_db_cluster_parameter_group<'life0, 'async_trait>( &'life0 self, input: ResetDBClusterParameterGroupMessage) -> Pin<Box<dyn Future<Output = Result<DBClusterParameterGroupNameMessage, RusotoError<ResetDBClusterParameterGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modifies the parameters of a cluster parameter group to the default value. To reset specific parameters, submit a list of the following: `ParameterName` and `ApplyMethod`. To reset the entire cluster parameter group, specify the `DBClusterParameterGroupName` and `ResetAllParameters` parameters.
When you reset the entire group, dynamic parameters are updated immediately and static parameters are set to `pending-reboot` to take effect on the next DB instance reboot.
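The sketch below illustrates resetting an entire (hypothetical) cluster parameter group back to its defaults, assuming a Tokio runtime and rusoto's usual snake_case field names; verify both against the crate before use.
```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, ResetDBClusterParameterGroupMessage};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = DocdbClient::new(Region::UsEast1);
// Reset every parameter in the group to its default value.
let request = ResetDBClusterParameterGroupMessage {
db_cluster_parameter_group_name: "my-docdb-params".to_string(),
reset_all_parameters: Some(true),
..Default::default()
};
let response = client.reset_db_cluster_parameter_group(request).await?;
println!("reset group: {:?}", response.db_cluster_parameter_group_name);
Ok(())
}
```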
source#### fn restore_db_cluster_from_snapshot<'life0, 'async_trait>( &'life0 self, input: RestoreDBClusterFromSnapshotMessage) -> Pin<Box<dyn Future<Output = Result<RestoreDBClusterFromSnapshotResult, RusotoError<RestoreDBClusterFromSnapshotError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a new cluster from a snapshot or cluster snapshot.
If a snapshot is specified, the target cluster is created from the source DB snapshot with a default configuration and default security group.
If a cluster snapshot is specified, the target cluster is created from the source cluster restore point with the same configuration as the original source DB cluster, except that the new cluster is created with the default security group.
source#### fn restore_db_cluster_to_point_in_time<'life0, 'async_trait>( &'life0 self, input: RestoreDBClusterToPointInTimeMessage) -> Pin<Box<dyn Future<Output = Result<RestoreDBClusterToPointInTimeResult, RusotoError<RestoreDBClusterToPointInTimeError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Restores a cluster to an arbitrary point in time. Users can restore to any point in time before `LatestRestorableTime` for up to `BackupRetentionPeriod` days. The target cluster is created from the source cluster with the same configuration as the original cluster, except that the new cluster is created with the default security group.
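As a rough, unverified sketch, a point-in-time restore of the latest restorable state might look like the following; the cluster identifiers are hypothetical, and the `use_latest_restorable_time` field name is assumed from the corresponding AWS API member rather than confirmed from this crate.
```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, RestoreDBClusterToPointInTimeMessage};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = DocdbClient::new(Region::UsEast1);
// Restore the most recent restorable state of a source cluster into a new cluster.
let request = RestoreDBClusterToPointInTimeMessage {
db_cluster_identifier: "my-restored-cluster".to_string(),
source_db_cluster_identifier: "my-source-cluster".to_string(),
use_latest_restorable_time: Some(true),
..Default::default()
};
let result = client.restore_db_cluster_to_point_in_time(request).await?;
println!("restored cluster status: {:?}", result.db_cluster.map(|c| c.status));
Ok(())
}
```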
source#### fn start_db_cluster<'life0, 'async_trait>( &'life0 self, input: StartDBClusterMessage) -> Pin<Box<dyn Future<Output = Result<StartDBClusterResult, RusotoError<StartDBClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Restarts the stopped cluster that is specified by `DBClusterIdentifier`. For more information, see Stopping and Starting an Amazon DocumentDB Cluster.
source#### fn stop_db_cluster<'life0, 'async_trait>( &'life0 self, input: StopDBClusterMessage) -> Pin<Box<dyn Future<Output = Result<StopDBClusterResult, RusotoError<StopDBClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Stops the running cluster that is specified by `DBClusterIdentifier`. The cluster must be in the *available* state. For more information, see Stopping and Starting an Amazon DocumentDB Cluster.
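A short sketch of the stop/start pair is given below; the cluster identifier is hypothetical, and in practice you would wait for the cluster to reach the *stopped* state before starting it again rather than issuing the two calls back to back.
```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, StartDBClusterMessage, StopDBClusterMessage};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = DocdbClient::new(Region::UsEast1);
let id = "my-dev-cluster".to_string(); // hypothetical cluster identifier
// Stop the cluster; it must currently be in the "available" state.
client.stop_db_cluster(StopDBClusterMessage { db_cluster_identifier: id.clone() }).await?;
// ... later, once the cluster is stopped, start it again.
client.start_db_cluster(StartDBClusterMessage { db_cluster_identifier: id }).await?;
Ok(())
}
```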
Auto Trait Implementations
---
### impl !RefUnwindSafe for DocdbClient
### impl Send for DocdbClient
### impl Sync for DocdbClient
### impl Unpin for DocdbClient
### impl !UnwindSafe for DocdbClient
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mutT)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Trait rusoto_docdb::Docdb
===
```
pub trait Docdb {
fn add_source_identifier_to_subscription<'life0, 'async_trait>(
&'life0 self,
input: AddSourceIdentifierToSubscriptionMessage
) -> Pin<Box<dyn Future<Output = Result<AddSourceIdentifierToSubscriptionResult, RusotoError<AddSourceIdentifierToSubscriptionError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn add_tags_to_resource<'life0, 'async_trait>(
&'life0 self,
input: AddTagsToResourceMessage
) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<AddTagsToResourceError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn apply_pending_maintenance_action<'life0, 'async_trait>(
&'life0 self,
input: ApplyPendingMaintenanceActionMessage
) -> Pin<Box<dyn Future<Output = Result<ApplyPendingMaintenanceActionResult, RusotoError<ApplyPendingMaintenanceActionError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn copy_db_cluster_parameter_group<'life0, 'async_trait>(
&'life0 self,
input: CopyDBClusterParameterGroupMessage
) -> Pin<Box<dyn Future<Output = Result<CopyDBClusterParameterGroupResult, RusotoError<CopyDBClusterParameterGroupError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn copy_db_cluster_snapshot<'life0, 'async_trait>(
&'life0 self,
input: CopyDBClusterSnapshotMessage
) -> Pin<Box<dyn Future<Output = Result<CopyDBClusterSnapshotResult, RusotoError<CopyDBClusterSnapshotError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_db_cluster<'life0, 'async_trait>(
&'life0 self,
input: CreateDBClusterMessage
) -> Pin<Box<dyn Future<Output = Result<CreateDBClusterResult, RusotoError<CreateDBClusterError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_db_cluster_parameter_group<'life0, 'async_trait>(
&'life0 self,
input: CreateDBClusterParameterGroupMessage
) -> Pin<Box<dyn Future<Output = Result<CreateDBClusterParameterGroupResult, RusotoError<CreateDBClusterParameterGroupError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_db_cluster_snapshot<'life0, 'async_trait>(
&'life0 self,
input: CreateDBClusterSnapshotMessage
) -> Pin<Box<dyn Future<Output = Result<CreateDBClusterSnapshotResult, RusotoError<CreateDBClusterSnapshotError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_db_instance<'life0, 'async_trait>(
&'life0 self,
input: CreateDBInstanceMessage
) -> Pin<Box<dyn Future<Output = Result<CreateDBInstanceResult, RusotoError<CreateDBInstanceError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_db_subnet_group<'life0, 'async_trait>(
&'life0 self,
input: CreateDBSubnetGroupMessage
) -> Pin<Box<dyn Future<Output = Result<CreateDBSubnetGroupResult, RusotoError<CreateDBSubnetGroupError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_event_subscription<'life0, 'async_trait>(
&'life0 self,
input: CreateEventSubscriptionMessage
) -> Pin<Box<dyn Future<Output = Result<CreateEventSubscriptionResult, RusotoError<CreateEventSubscriptionError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn create_global_cluster<'life0, 'async_trait>(
&'life0 self,
input: CreateGlobalClusterMessage
) -> Pin<Box<dyn Future<Output = Result<CreateGlobalClusterResult, RusotoError<CreateGlobalClusterError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_db_cluster<'life0, 'async_trait>(
&'life0 self,
input: DeleteDBClusterMessage
) -> Pin<Box<dyn Future<Output = Result<DeleteDBClusterResult, RusotoError<DeleteDBClusterError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_db_cluster_parameter_group<'life0, 'async_trait>(
&'life0 self,
input: DeleteDBClusterParameterGroupMessage
) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteDBClusterParameterGroupError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_db_cluster_snapshot<'life0, 'async_trait>(
&'life0 self,
input: DeleteDBClusterSnapshotMessage
) -> Pin<Box<dyn Future<Output = Result<DeleteDBClusterSnapshotResult, RusotoError<DeleteDBClusterSnapshotError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_db_instance<'life0, 'async_trait>(
&'life0 self,
input: DeleteDBInstanceMessage
) -> Pin<Box<dyn Future<Output = Result<DeleteDBInstanceResult, RusotoError<DeleteDBInstanceError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_db_subnet_group<'life0, 'async_trait>(
&'life0 self,
input: DeleteDBSubnetGroupMessage
) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteDBSubnetGroupError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_event_subscription<'life0, 'async_trait>(
&'life0 self,
input: DeleteEventSubscriptionMessage
) -> Pin<Box<dyn Future<Output = Result<DeleteEventSubscriptionResult, RusotoError<DeleteEventSubscriptionError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn delete_global_cluster<'life0, 'async_trait>(
&'life0 self,
input: DeleteGlobalClusterMessage
) -> Pin<Box<dyn Future<Output = Result<DeleteGlobalClusterResult, RusotoError<DeleteGlobalClusterError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_certificates<'life0, 'async_trait>(
&'life0 self,
input: DescribeCertificatesMessage
) -> Pin<Box<dyn Future<Output = Result<CertificateMessage, RusotoError<DescribeCertificatesError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_db_cluster_parameter_groups<'life0, 'async_trait>(
&'life0 self,
input: DescribeDBClusterParameterGroupsMessage
) -> Pin<Box<dyn Future<Output = Result<DBClusterParameterGroupsMessage, RusotoError<DescribeDBClusterParameterGroupsError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_db_cluster_parameters<'life0, 'async_trait>(
&'life0 self,
input: DescribeDBClusterParametersMessage
) -> Pin<Box<dyn Future<Output = Result<DBClusterParameterGroupDetails, RusotoError<DescribeDBClusterParametersError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_db_cluster_snapshot_attributes<'life0, 'async_trait>(
&'life0 self,
input: DescribeDBClusterSnapshotAttributesMessage
) -> Pin<Box<dyn Future<Output = Result<DescribeDBClusterSnapshotAttributesResult, RusotoError<DescribeDBClusterSnapshotAttributesError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_db_cluster_snapshots<'life0, 'async_trait>(
&'life0 self,
input: DescribeDBClusterSnapshotsMessage
) -> Pin<Box<dyn Future<Output = Result<DBClusterSnapshotMessage, RusotoError<DescribeDBClusterSnapshotsError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_db_clusters<'life0, 'async_trait>(
&'life0 self,
input: DescribeDBClustersMessage
) -> Pin<Box<dyn Future<Output = Result<DBClusterMessage, RusotoError<DescribeDBClustersError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_db_engine_versions<'life0, 'async_trait>(
&'life0 self,
input: DescribeDBEngineVersionsMessage
) -> Pin<Box<dyn Future<Output = Result<DBEngineVersionMessage, RusotoError<DescribeDBEngineVersionsError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_db_instances<'life0, 'async_trait>(
&'life0 self,
input: DescribeDBInstancesMessage
) -> Pin<Box<dyn Future<Output = Result<DBInstanceMessage, RusotoError<DescribeDBInstancesError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_db_subnet_groups<'life0, 'async_trait>(
&'life0 self,
input: DescribeDBSubnetGroupsMessage
) -> Pin<Box<dyn Future<Output = Result<DBSubnetGroupMessage, RusotoError<DescribeDBSubnetGroupsError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_engine_default_cluster_parameters<'life0, 'async_trait>(
&'life0 self,
input: DescribeEngineDefaultClusterParametersMessage
) -> Pin<Box<dyn Future<Output = Result<DescribeEngineDefaultClusterParametersResult, RusotoError<DescribeEngineDefaultClusterParametersError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_event_categories<'life0, 'async_trait>(
&'life0 self,
input: DescribeEventCategoriesMessage
) -> Pin<Box<dyn Future<Output = Result<EventCategoriesMessage, RusotoError<DescribeEventCategoriesError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_event_subscriptions<'life0, 'async_trait>(
&'life0 self,
input: DescribeEventSubscriptionsMessage
) -> Pin<Box<dyn Future<Output = Result<EventSubscriptionsMessage, RusotoError<DescribeEventSubscriptionsError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_events<'life0, 'async_trait>(
&'life0 self,
input: DescribeEventsMessage
) -> Pin<Box<dyn Future<Output = Result<EventsMessage, RusotoError<DescribeEventsError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_global_clusters<'life0, 'async_trait>(
&'life0 self,
input: DescribeGlobalClustersMessage
) -> Pin<Box<dyn Future<Output = Result<GlobalClustersMessage, RusotoError<DescribeGlobalClustersError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_orderable_db_instance_options<'life0, 'async_trait>(
&'life0 self,
input: DescribeOrderableDBInstanceOptionsMessage
) -> Pin<Box<dyn Future<Output = Result<OrderableDBInstanceOptionsMessage, RusotoError<DescribeOrderableDBInstanceOptionsError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn describe_pending_maintenance_actions<'life0, 'async_trait>(
&'life0 self,
input: DescribePendingMaintenanceActionsMessage
) -> Pin<Box<dyn Future<Output = Result<PendingMaintenanceActionsMessage, RusotoError<DescribePendingMaintenanceActionsError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn failover_db_cluster<'life0, 'async_trait>(
&'life0 self,
input: FailoverDBClusterMessage
) -> Pin<Box<dyn Future<Output = Result<FailoverDBClusterResult, RusotoError<FailoverDBClusterError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn list_tags_for_resource<'life0, 'async_trait>(
&'life0 self,
input: ListTagsForResourceMessage
) -> Pin<Box<dyn Future<Output = Result<TagListMessage, RusotoError<ListTagsForResourceError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn modify_db_cluster<'life0, 'async_trait>(
&'life0 self,
input: ModifyDBClusterMessage
) -> Pin<Box<dyn Future<Output = Result<ModifyDBClusterResult, RusotoError<ModifyDBClusterError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn modify_db_cluster_parameter_group<'life0, 'async_trait>(
&'life0 self,
input: ModifyDBClusterParameterGroupMessage
) -> Pin<Box<dyn Future<Output = Result<DBClusterParameterGroupNameMessage, RusotoError<ModifyDBClusterParameterGroupError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn modify_db_cluster_snapshot_attribute<'life0, 'async_trait>(
&'life0 self,
input: ModifyDBClusterSnapshotAttributeMessage
) -> Pin<Box<dyn Future<Output = Result<ModifyDBClusterSnapshotAttributeResult, RusotoError<ModifyDBClusterSnapshotAttributeError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn modify_db_instance<'life0, 'async_trait>(
&'life0 self,
input: ModifyDBInstanceMessage
) -> Pin<Box<dyn Future<Output = Result<ModifyDBInstanceResult, RusotoError<ModifyDBInstanceError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn modify_db_subnet_group<'life0, 'async_trait>(
&'life0 self,
input: ModifyDBSubnetGroupMessage
) -> Pin<Box<dyn Future<Output = Result<ModifyDBSubnetGroupResult, RusotoError<ModifyDBSubnetGroupError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn modify_event_subscription<'life0, 'async_trait>(
&'life0 self,
input: ModifyEventSubscriptionMessage
) -> Pin<Box<dyn Future<Output = Result<ModifyEventSubscriptionResult, RusotoError<ModifyEventSubscriptionError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn modify_global_cluster<'life0, 'async_trait>(
&'life0 self,
input: ModifyGlobalClusterMessage
) -> Pin<Box<dyn Future<Output = Result<ModifyGlobalClusterResult, RusotoError<ModifyGlobalClusterError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn reboot_db_instance<'life0, 'async_trait>(
&'life0 self,
input: RebootDBInstanceMessage
) -> Pin<Box<dyn Future<Output = Result<RebootDBInstanceResult, RusotoError<RebootDBInstanceError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn remove_from_global_cluster<'life0, 'async_trait>(
&'life0 self,
input: RemoveFromGlobalClusterMessage
) -> Pin<Box<dyn Future<Output = Result<RemoveFromGlobalClusterResult, RusotoError<RemoveFromGlobalClusterError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn remove_source_identifier_from_subscription<'life0, 'async_trait>(
&'life0 self,
input: RemoveSourceIdentifierFromSubscriptionMessage
) -> Pin<Box<dyn Future<Output = Result<RemoveSourceIdentifierFromSubscriptionResult, RusotoError<RemoveSourceIdentifierFromSubscriptionError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn remove_tags_from_resource<'life0, 'async_trait>(
&'life0 self,
input: RemoveTagsFromResourceMessage
) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<RemoveTagsFromResourceError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn reset_db_cluster_parameter_group<'life0, 'async_trait>(
&'life0 self,
input: ResetDBClusterParameterGroupMessage
) -> Pin<Box<dyn Future<Output = Result<DBClusterParameterGroupNameMessage, RusotoError<ResetDBClusterParameterGroupError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn restore_db_cluster_from_snapshot<'life0, 'async_trait>(
&'life0 self,
input: RestoreDBClusterFromSnapshotMessage
) -> Pin<Box<dyn Future<Output = Result<RestoreDBClusterFromSnapshotResult, RusotoError<RestoreDBClusterFromSnapshotError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn restore_db_cluster_to_point_in_time<'life0, 'async_trait>(
&'life0 self,
input: RestoreDBClusterToPointInTimeMessage
) -> Pin<Box<dyn Future<Output = Result<RestoreDBClusterToPointInTimeResult, RusotoError<RestoreDBClusterToPointInTimeError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn start_db_cluster<'life0, 'async_trait>(
&'life0 self,
input: StartDBClusterMessage
) -> Pin<Box<dyn Future<Output = Result<StartDBClusterResult, RusotoError<StartDBClusterError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
fn stop_db_cluster<'life0, 'async_trait>(
&'life0 self,
input: StopDBClusterMessage
) -> Pin<Box<dyn Future<Output = Result<StopDBClusterResult, RusotoError<StopDBClusterError>>> + Send + 'async_trait> where
'life0: 'async_trait,
Self: 'async_trait;
}
```
Trait representing the capabilities of the Amazon DocDB API. Amazon DocDB clients implement this trait.
Required Methods
---
source#### fn add_source_identifier_to_subscription<'life0, 'async_trait>( &'life0 self, input: AddSourceIdentifierToSubscriptionMessage) -> Pin<Box<dyn Future<Output = Result<AddSourceIdentifierToSubscriptionResult, RusotoError<AddSourceIdentifierToSubscriptionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Adds a source identifier to an existing event notification subscription.
source#### fn add_tags_to_resource<'life0, 'async_trait>( &'life0 self, input: AddTagsToResourceMessage) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<AddTagsToResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Adds metadata tags to an Amazon DocumentDB resource. You can use these tags with cost allocation reporting to track costs that are associated with Amazon DocumentDB resources or in a `Condition` statement in an Identity and Access Management (IAM) policy for Amazon DocumentDB.
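The sketch below tags a (hypothetical) cluster ARN with a single cost-allocation key/value pair; the ARN and account number are placeholders, and the `Tag` field names are assumed from rusoto's code generation.
```
use rusoto_core::Region;
use rusoto_docdb::{AddTagsToResourceMessage, Docdb, DocdbClient, Tag};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = DocdbClient::new(Region::UsEast1);
// Attach one cost-allocation tag to a hypothetical cluster ARN.
let request = AddTagsToResourceMessage {
resource_name: "arn:aws:rds:us-east-1:123456789012:cluster:my-cluster".to_string(),
tags: vec![Tag {
key: Some("cost-center".to_string()),
value: Some("analytics".to_string()),
}],
};
client.add_tags_to_resource(request).await?;
Ok(())
}
```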
source#### fn apply_pending_maintenance_action<'life0, 'async_trait>( &'life0 self, input: ApplyPendingMaintenanceActionMessage) -> Pin<Box<dyn Future<Output = Result<ApplyPendingMaintenanceActionResult, RusotoError<ApplyPendingMaintenanceActionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Applies a pending maintenance action to a resource (for example, to an Amazon DocumentDB instance).
source#### fn copy_db_cluster_parameter_group<'life0, 'async_trait>( &'life0 self, input: CopyDBClusterParameterGroupMessage) -> Pin<Box<dyn Future<Output = Result<CopyDBClusterParameterGroupResult, RusotoError<CopyDBClusterParameterGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Copies the specified cluster parameter group.
source#### fn copy_db_cluster_snapshot<'life0, 'async_trait>( &'life0 self, input: CopyDBClusterSnapshotMessage) -> Pin<Box<dyn Future<Output = Result<CopyDBClusterSnapshotResult, RusotoError<CopyDBClusterSnapshotError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Copies a snapshot of a cluster.
To copy a cluster snapshot from a shared manual cluster snapshot, `SourceDBClusterSnapshotIdentifier` must be the Amazon Resource Name (ARN) of the shared cluster snapshot. You can only copy a shared DB cluster snapshot, whether encrypted or not, in the same Region.
To cancel the copy operation after it is in progress, delete the target cluster snapshot identified by `TargetDBClusterSnapshotIdentifier` while that cluster snapshot is in the *copying* status.
source#### fn create_db_cluster<'life0, 'async_trait>( &'life0 self, input: CreateDBClusterMessage) -> Pin<Box<dyn Future<Output = Result<CreateDBClusterResult, RusotoError<CreateDBClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a new Amazon DocumentDB cluster.
source#### fn create_db_cluster_parameter_group<'life0, 'async_trait>( &'life0 self, input: CreateDBClusterParameterGroupMessage) -> Pin<Box<dyn Future<Output = Result<CreateDBClusterParameterGroupResult, RusotoError<CreateDBClusterParameterGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a new cluster parameter group.
Parameters in a cluster parameter group apply to all of the instances in a cluster.
A cluster parameter group is initially created with the default parameters for the database engine used by instances in the cluster. In Amazon DocumentDB, you cannot make modifications directly to the `default.docdb3.6` cluster parameter group. If your Amazon DocumentDB cluster is using the default cluster parameter group and you want to modify a value in it, you must first create a new parameter group or copy an existing parameter group, modify it, and then apply the modified parameter group to your cluster. For the new cluster parameter group and associated settings to take effect, you must then reboot the instances in the cluster without failover. For more information, see Modifying Amazon DocumentDB Cluster Parameter Groups.
source#### fn create_db_cluster_snapshot<'life0, 'async_trait>( &'life0 self, input: CreateDBClusterSnapshotMessage) -> Pin<Box<dyn Future<Output = Result<CreateDBClusterSnapshotResult, RusotoError<CreateDBClusterSnapshotError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a snapshot of a cluster.
source#### fn create_db_instance<'life0, 'async_trait>( &'life0 self, input: CreateDBInstanceMessage) -> Pin<Box<dyn Future<Output = Result<CreateDBInstanceResult, RusotoError<CreateDBInstanceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a new instance.
source#### fn create_db_subnet_group<'life0, 'async_trait>( &'life0 self, input: CreateDBSubnetGroupMessage) -> Pin<Box<dyn Future<Output = Result<CreateDBSubnetGroupResult, RusotoError<CreateDBSubnetGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a new subnet group. Subnet groups must contain at least one subnet in at least two Availability Zones in the Region.
source#### fn create_event_subscription<'life0, 'async_trait>( &'life0 self, input: CreateEventSubscriptionMessage) -> Pin<Box<dyn Future<Output = Result<CreateEventSubscriptionResult, RusotoError<CreateEventSubscriptionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates an Amazon DocumentDB event notification subscription. This action requires a topic Amazon Resource Name (ARN) created by using the Amazon DocumentDB console, the Amazon SNS console, or the Amazon SNS API. To obtain an ARN with Amazon SNS, you must create a topic in Amazon SNS and subscribe to the topic. The ARN is displayed in the Amazon SNS console.
You can specify the type of source (`SourceType`) that you want to be notified of. You can also provide a list of Amazon DocumentDB sources (`SourceIds`) that trigger the events, and you can provide a list of event categories (`EventCategories`) for events that you want to be notified of. For example, you can specify `SourceType = db-instance`, `SourceIds = mydbinstance1, mydbinstance2` and `EventCategories = Availability, Backup`.
If you specify both the `SourceType` and `SourceIds` (such as `SourceType = db-instance` and `SourceIdentifier = myDBInstance1`), you are notified of all the `db-instance` events for the specified source. If you specify a `SourceType` but do not specify a `SourceIdentifier`, you receive notice of the events for that source type for all your Amazon DocumentDB sources. If you do not specify either the `SourceType` or the `SourceIdentifier`, you are notified of events generated from all Amazon DocumentDB sources belonging to your customer account.
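A hedged sketch of the subscription described above is shown below; the subscription name, SNS topic ARN, and instance identifiers are hypothetical, and the optional field names (`source_type`, `source_ids`, `event_categories`, `enabled`) are assumed from rusoto's code generation rather than quoted from this crate.
```
use rusoto_core::Region;
use rusoto_docdb::{CreateEventSubscriptionMessage, Docdb, DocdbClient};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = DocdbClient::new(Region::UsEast1);
// Subscribe to availability and backup events for two hypothetical instances.
let request = CreateEventSubscriptionMessage {
subscription_name: "my-docdb-events".to_string(),
sns_topic_arn: "arn:aws:sns:us-east-1:123456789012:docdb-events".to_string(),
source_type: Some("db-instance".to_string()),
source_ids: Some(vec!["mydbinstance1".to_string(), "mydbinstance2".to_string()]),
event_categories: Some(vec!["availability".to_string(), "backup".to_string()]),
enabled: Some(true),
..Default::default()
};
client.create_event_subscription(request).await?;
Ok(())
}
```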
source#### fn create_global_cluster<'life0, 'async_trait>( &'life0 self, input: CreateGlobalClusterMessage) -> Pin<Box<dyn Future<Output = Result<CreateGlobalClusterResult, RusotoError<CreateGlobalClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates an Amazon DocumentDB global cluster that can span multiple Regions. The global cluster contains one primary cluster with read-write capability and up to five read-only secondary clusters. Global clusters use storage-based fast replication across Regions, with latencies of less than one second, using dedicated infrastructure with no impact on your workload’s performance.
You can create a global cluster that is initially empty, and then add a primary and a secondary to it. Or you can specify an existing cluster during the create operation, and this cluster becomes the primary of the global cluster.
This action only applies to Amazon DocumentDB clusters.
source#### fn delete_db_cluster<'life0, 'async_trait>( &'life0 self, input: DeleteDBClusterMessage) -> Pin<Box<dyn Future<Output = Result<DeleteDBClusterResult, RusotoError<DeleteDBClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a previously provisioned cluster. When you delete a cluster, all automated backups for that cluster are deleted and can't be recovered. Manual DB cluster snapshots of the specified cluster are not deleted.
source#### fn delete_db_cluster_parameter_group<'life0, 'async_trait>( &'life0 self, input: DeleteDBClusterParameterGroupMessage) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteDBClusterParameterGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a specified cluster parameter group. The cluster parameter group to be deleted can't be associated with any clusters.
source#### fn delete_db_cluster_snapshot<'life0, 'async_trait>( &'life0 self, input: DeleteDBClusterSnapshotMessage) -> Pin<Box<dyn Future<Output = Result<DeleteDBClusterSnapshotResult, RusotoError<DeleteDBClusterSnapshotError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a cluster snapshot. If the snapshot is being copied, the copy operation is terminated.
The cluster snapshot must be in the `available` state to be deleted.
source#### fn delete_db_instance<'life0, 'async_trait>( &'life0 self, input: DeleteDBInstanceMessage) -> Pin<Box<dyn Future<Output = Result<DeleteDBInstanceResult, RusotoError<DeleteDBInstanceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a previously provisioned instance.
source#### fn delete_db_subnet_group<'life0, 'async_trait>( &'life0 self, input: DeleteDBSubnetGroupMessage) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<DeleteDBSubnetGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a subnet group.
The specified database subnet group must not be associated with any DB instances.
source#### fn delete_event_subscription<'life0, 'async_trait>( &'life0 self, input: DeleteEventSubscriptionMessage) -> Pin<Box<dyn Future<Output = Result<DeleteEventSubscriptionResult, RusotoError<DeleteEventSubscriptionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes an Amazon DocumentDB event notification subscription.
source#### fn delete_global_cluster<'life0, 'async_trait>( &'life0 self, input: DeleteGlobalClusterMessage) -> Pin<Box<dyn Future<Output = Result<DeleteGlobalClusterResult, RusotoError<DeleteGlobalClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Deletes a global cluster. The primary and secondary clusters must already be detached or deleted before attempting to delete a global cluster.
This action only applies to Amazon DocumentDB clusters.
source#### fn describe_certificates<'life0, 'async_trait>( &'life0 self, input: DescribeCertificatesMessage) -> Pin<Box<dyn Future<Output = Result<CertificateMessage, RusotoError<DescribeCertificatesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of certificate authority (CA) certificates provided by Amazon DocumentDB for this account.
source#### fn describe_db_cluster_parameter_groups<'life0, 'async_trait>( &'life0 self, input: DescribeDBClusterParameterGroupsMessage) -> Pin<Box<dyn Future<Output = Result<DBClusterParameterGroupsMessage, RusotoError<DescribeDBClusterParameterGroupsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of `DBClusterParameterGroup` descriptions. If a `DBClusterParameterGroupName` parameter is specified, the list contains only the description of the specified cluster parameter group.
source#### fn describe_db_cluster_parameters<'life0, 'async_trait>( &'life0 self, input: DescribeDBClusterParametersMessage) -> Pin<Box<dyn Future<Output = Result<DBClusterParameterGroupDetails, RusotoError<DescribeDBClusterParametersError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns the detailed parameter list for a particular cluster parameter group.
source#### fn describe_db_cluster_snapshot_attributes<'life0, 'async_trait>( &'life0 self, input: DescribeDBClusterSnapshotAttributesMessage) -> Pin<Box<dyn Future<Output = Result<DescribeDBClusterSnapshotAttributesResult, RusotoError<DescribeDBClusterSnapshotAttributesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of cluster snapshot attribute names and values for a manual DB cluster snapshot.
When you share snapshots with other accounts, `DescribeDBClusterSnapshotAttributes` returns the `restore` attribute and a list of IDs for the accounts that are authorized to copy or restore the manual cluster snapshot. If `all` is included in the list of values for the `restore` attribute, then the manual cluster snapshot is public and can be copied or restored by all accounts.
source#### fn describe_db_cluster_snapshots<'life0, 'async_trait>( &'life0 self, input: DescribeDBClusterSnapshotsMessage) -> Pin<Box<dyn Future<Output = Result<DBClusterSnapshotMessage, RusotoError<DescribeDBClusterSnapshotsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns information about cluster snapshots. This API operation supports pagination.
source#### fn describe_db_clusters<'life0, 'async_trait>( &'life0 self, input: DescribeDBClustersMessage) -> Pin<Box<dyn Future<Output = Result<DBClusterMessage, RusotoError<DescribeDBClustersError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns information about provisioned Amazon DocumentDB clusters. This API operation supports pagination. For certain management features such as cluster and instance lifecycle management, Amazon DocumentDB leverages operational technology that is shared with Amazon RDS and Amazon Neptune. Use the `filterName=engine,Values=docdb` filter parameter to return only Amazon DocumentDB clusters.
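The sketch below applies the engine filter mentioned above so that only DocumentDB clusters (and not RDS or Neptune resources) are returned; the Region and Tokio runtime are assumptions, and the `Filter` field names follow rusoto's usual conventions.
```
use rusoto_core::Region;
use rusoto_docdb::{DescribeDBClustersMessage, Docdb, DocdbClient, Filter};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = DocdbClient::new(Region::UsEast1);
// Restrict the listing to clusters whose engine is "docdb".
let request = DescribeDBClustersMessage {
filters: Some(vec![Filter {
name: "engine".to_string(),
values: vec!["docdb".to_string()],
}]),
..Default::default()
};
let page = client.describe_db_clusters(request).await?;
for cluster in page.db_clusters.unwrap_or_default() {
println!("{:?}", cluster.db_cluster_identifier);
}
Ok(())
}
```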
source#### fn describe_db_engine_versions<'life0, 'async_trait>( &'life0 self, input: DescribeDBEngineVersionsMessage) -> Pin<Box<dyn Future<Output = Result<DBEngineVersionMessage, RusotoError<DescribeDBEngineVersionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of the available engines.
source#### fn describe_db_instances<'life0, 'async_trait>( &'life0 self, input: DescribeDBInstancesMessage) -> Pin<Box<dyn Future<Output = Result<DBInstanceMessage, RusotoError<DescribeDBInstancesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns information about provisioned Amazon DocumentDB instances. This API supports pagination.
source#### fn describe_db_subnet_groups<'life0, 'async_trait>( &'life0 self, input: DescribeDBSubnetGroupsMessage) -> Pin<Box<dyn Future<Output = Result<DBSubnetGroupMessage, RusotoError<DescribeDBSubnetGroupsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of `DBSubnetGroup` descriptions. If a `DBSubnetGroupName` is specified, the list will contain only the descriptions of the specified `DBSubnetGroup`.
source#### fn describe_engine_default_cluster_parameters<'life0, 'async_trait>( &'life0 self, input: DescribeEngineDefaultClusterParametersMessage) -> Pin<Box<dyn Future<Output = Result<DescribeEngineDefaultClusterParametersResult, RusotoError<DescribeEngineDefaultClusterParametersError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns the default engine and system parameter information for the cluster database engine.
source#### fn describe_event_categories<'life0, 'async_trait>( &'life0 self, input: DescribeEventCategoriesMessage) -> Pin<Box<dyn Future<Output = Result<EventCategoriesMessage, RusotoError<DescribeEventCategoriesError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Displays a list of categories for all event source types, or, if specified, for a specified source type.
source#### fn describe_event_subscriptions<'life0, 'async_trait>( &'life0 self, input: DescribeEventSubscriptionsMessage) -> Pin<Box<dyn Future<Output = Result<EventSubscriptionsMessage, RusotoError<DescribeEventSubscriptionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists all the subscription descriptions for a customer account. The description for a subscription includes `SubscriptionName`, `SNSTopicARN`, `CustomerID`, `SourceType`, `SourceID`, `CreationTime`, and `Status`.
If you specify a `SubscriptionName`, lists the description for that subscription.
source#### fn describe_events<'life0, 'async_trait>( &'life0 self, input: DescribeEventsMessage) -> Pin<Box<dyn Future<Output = Result<EventsMessage, RusotoError<DescribeEventsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns events related to instances, security groups, snapshots, and DB parameter groups for the past 14 days. You can obtain events specific to a particular DB instance, security group, snapshot, or parameter group by providing the name as a parameter. By default, the events of the past hour are returned.
source#### fn describe_global_clusters<'life0, 'async_trait>( &'life0 self, input: DescribeGlobalClustersMessage) -> Pin<Box<dyn Future<Output = Result<GlobalClustersMessage, RusotoError<DescribeGlobalClustersError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns information about Amazon DocumentDB global clusters. This API supports pagination.
This action only applies to Amazon DocumentDB clusters.
source#### fn describe_orderable_db_instance_options<'life0, 'async_trait>( &'life0 self, input: DescribeOrderableDBInstanceOptionsMessage) -> Pin<Box<dyn Future<Output = Result<OrderableDBInstanceOptionsMessage, RusotoError<DescribeOrderableDBInstanceOptionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of orderable instance options for the specified engine.
source#### fn describe_pending_maintenance_actions<'life0, 'async_trait>( &'life0 self, input: DescribePendingMaintenanceActionsMessage) -> Pin<Box<dyn Future<Output = Result<PendingMaintenanceActionsMessage, RusotoError<DescribePendingMaintenanceActionsError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Returns a list of resources (for example, instances) that have at least one pending maintenance action.
source#### fn failover_db_cluster<'life0, 'async_trait>( &'life0 self, input: FailoverDBClusterMessage) -> Pin<Box<dyn Future<Output = Result<FailoverDBClusterResult, RusotoError<FailoverDBClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Forces a failover for a cluster.
A failover for a cluster promotes one of the Amazon DocumentDB replicas (read-only instances) in the cluster to be the primary instance (the cluster writer).
If the primary instance fails, Amazon DocumentDB automatically fails over to an Amazon DocumentDB replica, if one exists. You can force a failover when you want to simulate a failure of a primary instance for testing.
source#### fn list_tags_for_resource<'life0, 'async_trait>( &'life0 self, input: ListTagsForResourceMessage) -> Pin<Box<dyn Future<Output = Result<TagListMessage, RusotoError<ListTagsForResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Lists all tags on an Amazon DocumentDB resource.
source#### fn modify_db_cluster<'life0, 'async_trait>( &'life0 self, input: ModifyDBClusterMessage) -> Pin<Box<dyn Future<Output = Result<ModifyDBClusterResult, RusotoError<ModifyDBClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modifies a setting for an Amazon DocumentDB cluster. You can change one or more database configuration parameters by specifying these parameters and the new values in the request.
source#### fn modify_db_cluster_parameter_group<'life0, 'async_trait>( &'life0 self, input: ModifyDBClusterParameterGroupMessage) -> Pin<Box<dyn Future<Output = Result<DBClusterParameterGroupNameMessage, RusotoError<ModifyDBClusterParameterGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modifies the parameters of a cluster parameter group. To modify more than one parameter, submit a list of the following: `ParameterName`, `ParameterValue`, and `ApplyMethod`. A maximum of 20 parameters can be modified in a single request.
Changes to dynamic parameters are applied immediately. Changes to static parameters require a reboot or maintenance window before the change can take effect.
After you create a cluster parameter group, you should wait at least 5 minutes before creating your first cluster that uses that cluster parameter group as the default parameter group. This allows Amazon DocumentDB to fully complete the create action before the parameter group is used as the default for a new cluster. This step is especially important for parameters that are critical when creating the default database for a cluster, such as the character set for the default database defined by the `character_set_database` parameter.
source#### fn modify_db_cluster_snapshot_attribute<'life0, 'async_trait>( &'life0 self, input: ModifyDBClusterSnapshotAttributeMessage) -> Pin<Box<dyn Future<Output = Result<ModifyDBClusterSnapshotAttributeResult, RusotoError<ModifyDBClusterSnapshotAttributeError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Adds an attribute and values to, or removes an attribute and values from, a manual cluster snapshot.
To share a manual cluster snapshot with other accounts, specify `restore` as the `AttributeName`, and use the `ValuesToAdd` parameter to add a list of IDs of the accounts that are authorized to restore the manual cluster snapshot. Use the value `all` to make the manual cluster snapshot public, which means that it can be copied or restored by all accounts. Do not add the `all` value for any manual cluster snapshots that contain private information that you don't want available to all accounts. If a manual cluster snapshot is encrypted, it can be shared, but only by specifying a list of authorized account IDs for the `ValuesToAdd` parameter. You can't use `all` as a value for that parameter in this case.
source#### fn modify_db_instance<'life0, 'async_trait>( &'life0 self, input: ModifyDBInstanceMessage) -> Pin<Box<dyn Future<Output = Result<ModifyDBInstanceResult, RusotoError<ModifyDBInstanceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modifies settings for an instance. You can change one or more database configuration parameters by specifying these parameters and the new values in the request.
source#### fn modify_db_subnet_group<'life0, 'async_trait>( &'life0 self, input: ModifyDBSubnetGroupMessage) -> Pin<Box<dyn Future<Output = Result<ModifyDBSubnetGroupResult, RusotoError<ModifyDBSubnetGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modifies an existing subnet group. Subnet groups must contain at least one subnet in at least two Availability Zones in the Region.
source#### fn modify_event_subscription<'life0, 'async_trait>( &'life0 self, input: ModifyEventSubscriptionMessage) -> Pin<Box<dyn Future<Output = Result<ModifyEventSubscriptionResult, RusotoError<ModifyEventSubscriptionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modifies an existing Amazon DocumentDB event notification subscription.
source#### fn modify_global_cluster<'life0, 'async_trait>( &'life0 self, input: ModifyGlobalClusterMessage) -> Pin<Box<dyn Future<Output = Result<ModifyGlobalClusterResult, RusotoError<ModifyGlobalClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modify a setting for an Amazon DocumentDB global cluster. You can change one or more configuration parameters (for example: deletion protection), or the global cluster identifier by specifying these parameters and the new values in the request.
This action only applies to Amazon DocumentDB clusters.
source#### fn reboot_db_instance<'life0, 'async_trait>( &'life0 self, input: RebootDBInstanceMessage) -> Pin<Box<dyn Future<Output = Result<RebootDBInstanceResult, RusotoError<RebootDBInstanceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
You might need to reboot your instance, usually for maintenance reasons. For example, if you make certain changes, or if you change the cluster parameter group that is associated with the instance, you must reboot the instance for the changes to take effect.
Rebooting an instance restarts the database engine service. Rebooting an instance results in a momentary outage, during which the instance status is set to *rebooting*.
source#### fn remove_from_global_cluster<'life0, 'async_trait>( &'life0 self, input: RemoveFromGlobalClusterMessage) -> Pin<Box<dyn Future<Output = Result<RemoveFromGlobalClusterResult, RusotoError<RemoveFromGlobalClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Detaches an Amazon DocumentDB secondary cluster from a global cluster. The cluster becomes a standalone cluster with read-write capability instead of being read-only and receiving data from a primary in a different region.
This action only applies to Amazon DocumentDB clusters.
source#### fn remove_source_identifier_from_subscription<'life0, 'async_trait>( &'life0 self, input: RemoveSourceIdentifierFromSubscriptionMessage) -> Pin<Box<dyn Future<Output = Result<RemoveSourceIdentifierFromSubscriptionResult, RusotoError<RemoveSourceIdentifierFromSubscriptionError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Removes a source identifier from an existing Amazon DocumentDB event notification subscription.
source#### fn remove_tags_from_resource<'life0, 'async_trait>( &'life0 self, input: RemoveTagsFromResourceMessage) -> Pin<Box<dyn Future<Output = Result<(), RusotoError<RemoveTagsFromResourceError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Removes metadata tags from an Amazon DocumentDB resource.
source#### fn reset_db_cluster_parameter_group<'life0, 'async_trait>( &'life0 self, input: ResetDBClusterParameterGroupMessage) -> Pin<Box<dyn Future<Output = Result<DBClusterParameterGroupNameMessage, RusotoError<ResetDBClusterParameterGroupError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Modifies the parameters of a cluster parameter group to the default value. To reset specific parameters, submit a list of the following: `ParameterName` and `ApplyMethod`. To reset the entire cluster parameter group, specify the `DBClusterParameterGroupName` and `ResetAllParameters` parameters.
When you reset the entire group, dynamic parameters are updated immediately and static parameters are set to `pending-reboot` to take effect on the next DB instance reboot.
source#### fn restore_db_cluster_from_snapshot<'life0, 'async_trait>( &'life0 self, input: RestoreDBClusterFromSnapshotMessage) -> Pin<Box<dyn Future<Output = Result<RestoreDBClusterFromSnapshotResult, RusotoError<RestoreDBClusterFromSnapshotError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Creates a new cluster from a snapshot or cluster snapshot.
If a snapshot is specified, the target cluster is created from the source DB snapshot with a default configuration and default security group.
If a cluster snapshot is specified, the target cluster is created from the source cluster restore point with the same configuration as the original source DB cluster, except that the new cluster is created with the default security group.
source#### fn restore_db_cluster_to_point_in_time<'life0, 'async_trait>( &'life0 self, input: RestoreDBClusterToPointInTimeMessage) -> Pin<Box<dyn Future<Output = Result<RestoreDBClusterToPointInTimeResult, RusotoError<RestoreDBClusterToPointInTimeError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Restores a cluster to an arbitrary point in time. Users can restore to any point in time before `LatestRestorableTime` for up to `BackupRetentionPeriod` days. The target cluster is created from the source cluster with the same configuration as the original cluster, except that the new cluster is created with the default security group.
source#### fn start_db_cluster<'life0, 'async_trait>( &'life0 self, input: StartDBClusterMessage) -> Pin<Box<dyn Future<Output = Result<StartDBClusterResult, RusotoError<StartDBClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Restarts the stopped cluster that is specified by `DBClusterIdentifier`. For more information, see Stopping and Starting an Amazon DocumentDB Cluster.
source#### fn stop_db_cluster<'life0, 'async_trait>( &'life0 self, input: StopDBClusterMessage) -> Pin<Box<dyn Future<Output = Result<StopDBClusterResult, RusotoError<StopDBClusterError>>> + Send + 'async_trait>> where 'life0: 'async_trait, Self: 'async_trait,
Stops the running cluster that is specified by `DBClusterIdentifier`. The cluster must be in the *available* state. For more information, see Stopping and Starting an Amazon DocumentDB Cluster.
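The sketch below stops a cluster and later restarts it. It is illustrative only: it assumes a Tokio runtime, default credentials, and that `StopDBClusterMessage` and `StartDBClusterMessage` expose a `db_cluster_identifier` field mirroring the `DBClusterIdentifier` parameter above; the cluster name is a placeholder.
```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, StartDBClusterMessage, StopDBClusterMessage};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = DocdbClient::new(Region::UsEast1);
    // The cluster must be in the `available` state before it can be stopped.
    client
        .stop_db_cluster(StopDBClusterMessage {
            db_cluster_identifier: "sample-cluster".to_string(),
            ..Default::default()
        })
        .await?;
    // ...later, bring it back up.
    client
        .start_db_cluster(StartDBClusterMessage {
            db_cluster_identifier: "sample-cluster".to_string(),
            ..Default::default()
        })
        .await?;
    Ok(())
}
```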
Implementors
---
source### impl Docdb for DocdbClient
Struct rusoto_docdb::AddSourceIdentifierToSubscriptionMessage
===
```
pub struct AddSourceIdentifierToSubscriptionMessage {
pub source_identifier: String,
pub subscription_name: String,
}
```
Represents the input to AddSourceIdentifierToSubscription.
Fields
---
`source_identifier: String`The identifier of the event source to be added:
* If the source type is an instance, a `DBInstanceIdentifier` must be provided.
* If the source type is a security group, a `DBSecurityGroupName` must be provided.
* If the source type is a parameter group, a `DBParameterGroupName` must be provided.
* If the source type is a snapshot, a `DBSnapshotIdentifier` must be provided.
`subscription_name: String`The name of the Amazon DocumentDB event notification subscription that you want to add a source identifier to.
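A minimal sketch of building this request and sending it through the `Docdb` trait; the instance and subscription names are placeholders, and a Tokio runtime with default credentials is assumed.
```
use rusoto_core::Region;
use rusoto_docdb::{AddSourceIdentifierToSubscriptionMessage, Docdb, DocdbClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = DocdbClient::new(Region::UsEast1);
    let request = AddSourceIdentifierToSubscriptionMessage {
        // Assuming the subscription's source type is instance, so a
        // DBInstanceIdentifier is supplied here.
        source_identifier: "sample-instance".to_string(),
        subscription_name: "my-event-subscription".to_string(),
    };
    let result = client.add_source_identifier_to_subscription(request).await?;
    println!("{:?}", result.event_subscription);
    Ok(())
}
```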
Trait Implementations
---
source### impl Clone for AddSourceIdentifierToSubscriptionMessage
source#### fn clone(&self) -> AddSourceIdentifierToSubscriptionMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for AddSourceIdentifierToSubscriptionMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for AddSourceIdentifierToSubscriptionMessage
source#### fn default() -> AddSourceIdentifierToSubscriptionMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<AddSourceIdentifierToSubscriptionMessage> for AddSourceIdentifierToSubscriptionMessage
source#### fn eq(&self, other: &AddSourceIdentifierToSubscriptionMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AddSourceIdentifierToSubscriptionMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for AddSourceIdentifierToSubscriptionMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for AddSourceIdentifierToSubscriptionMessage
### impl Send for AddSourceIdentifierToSubscriptionMessage
### impl Sync for AddSourceIdentifierToSubscriptionMessage
### impl Unpin for AddSourceIdentifierToSubscriptionMessage
### impl UnwindSafe for AddSourceIdentifierToSubscriptionMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::AddSourceIdentifierToSubscriptionResult
===
```
pub struct AddSourceIdentifierToSubscriptionResult {
pub event_subscription: Option<EventSubscription>,
}
```
Fields
---
`event_subscription: Option<EventSubscription>`
Trait Implementations
---
source### impl Clone for AddSourceIdentifierToSubscriptionResult
source#### fn clone(&self) -> AddSourceIdentifierToSubscriptionResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for AddSourceIdentifierToSubscriptionResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for AddSourceIdentifierToSubscriptionResult
source#### fn default() -> AddSourceIdentifierToSubscriptionResult
Returns the “default value” for a type. Read more
source### impl PartialEq<AddSourceIdentifierToSubscriptionResult> for AddSourceIdentifierToSubscriptionResult
source#### fn eq(&self, other: &AddSourceIdentifierToSubscriptionResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AddSourceIdentifierToSubscriptionResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for AddSourceIdentifierToSubscriptionResult
Auto Trait Implementations
---
### impl RefUnwindSafe for AddSourceIdentifierToSubscriptionResult
### impl Send for AddSourceIdentifierToSubscriptionResult
### impl Sync for AddSourceIdentifierToSubscriptionResult
### impl Unpin for AddSourceIdentifierToSubscriptionResult
### impl UnwindSafe for AddSourceIdentifierToSubscriptionResult
Struct rusoto_docdb::AddTagsToResourceMessage
===
```
pub struct AddTagsToResourceMessage {
pub resource_name: String,
pub tags: Vec<Tag>,
}
```
Represents the input to AddTagsToResource.
Fields
---
`resource_name: String`The Amazon DocumentDB resource that the tags are added to. This value is an Amazon Resource Name (ARN).
`tags: Vec<Tag>`The tags to be assigned to the Amazon DocumentDB resource.
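A hedged sketch of tagging a cluster by its ARN. The `Tag` shape used here (`key`/`value`, both `Option<String>`) is assumed rather than quoted from this page, and the ARN is a placeholder.
```
use rusoto_core::Region;
use rusoto_docdb::{AddTagsToResourceMessage, Docdb, DocdbClient, Tag};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = DocdbClient::new(Region::UsEast1);
    client
        .add_tags_to_resource(AddTagsToResourceMessage {
            resource_name: "arn:aws:rds:us-east-1:123456789012:cluster:sample-cluster".to_string(),
            tags: vec![Tag {
                key: Some("environment".to_string()),
                value: Some("staging".to_string()),
            }],
        })
        .await?;
    Ok(())
}
```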
Trait Implementations
---
source### impl Clone for AddTagsToResourceMessage
source#### fn clone(&self) -> AddTagsToResourceMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for AddTagsToResourceMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for AddTagsToResourceMessage
source#### fn default() -> AddTagsToResourceMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<AddTagsToResourceMessage> for AddTagsToResourceMessage
source#### fn eq(&self, other: &AddTagsToResourceMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AddTagsToResourceMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for AddTagsToResourceMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for AddTagsToResourceMessage
### impl Send for AddTagsToResourceMessage
### impl Sync for AddTagsToResourceMessage
### impl Unpin for AddTagsToResourceMessage
### impl UnwindSafe for AddTagsToResourceMessage
Struct rusoto_docdb::ApplyPendingMaintenanceActionMessage
===
```
pub struct ApplyPendingMaintenanceActionMessage {
pub apply_action: String,
pub opt_in_type: String,
pub resource_identifier: String,
}
```
Represents the input to ApplyPendingMaintenanceAction.
Fields
---
`apply_action: String`The pending maintenance action to apply to this resource.
Valid values: `system-update`, `db-upgrade`
`opt_in_type: String`A value that specifies the type of opt-in request or undoes an opt-in request. An opt-in request of type `immediate` can't be undone.
Valid values:
* `immediate` - Apply the maintenance action immediately.
* `next-maintenance` - Apply the maintenance action during the next maintenance window for the resource.
* `undo-opt-in` - Cancel any existing `next-maintenance` opt-in requests.
`resource_identifier: String`The Amazon Resource Name (ARN) of the resource that the pending maintenance action applies to.
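An illustrative sketch that opts a cluster in to a pending `system-update` during its next maintenance window; the ARN is a placeholder, and a Tokio runtime with default credentials is assumed.
```
use rusoto_core::Region;
use rusoto_docdb::{ApplyPendingMaintenanceActionMessage, Docdb, DocdbClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = DocdbClient::new(Region::UsEast1);
    let result = client
        .apply_pending_maintenance_action(ApplyPendingMaintenanceActionMessage {
            apply_action: "system-update".to_string(),
            opt_in_type: "next-maintenance".to_string(),
            resource_identifier: "arn:aws:rds:us-east-1:123456789012:cluster:sample-cluster"
                .to_string(),
        })
        .await?;
    println!("{:?}", result.resource_pending_maintenance_actions);
    Ok(())
}
```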
Trait Implementations
---
source### impl Clone for ApplyPendingMaintenanceActionMessage
source#### fn clone(&self) -> ApplyPendingMaintenanceActionMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ApplyPendingMaintenanceActionMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ApplyPendingMaintenanceActionMessage
source#### fn default() -> ApplyPendingMaintenanceActionMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<ApplyPendingMaintenanceActionMessage> for ApplyPendingMaintenanceActionMessage
source#### fn eq(&self, other: &ApplyPendingMaintenanceActionMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ApplyPendingMaintenanceActionMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ApplyPendingMaintenanceActionMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for ApplyPendingMaintenanceActionMessage
### impl Send for ApplyPendingMaintenanceActionMessage
### impl Sync for ApplyPendingMaintenanceActionMessage
### impl Unpin for ApplyPendingMaintenanceActionMessage
### impl UnwindSafe for ApplyPendingMaintenanceActionMessage
Struct rusoto_docdb::ApplyPendingMaintenanceActionResult
===
```
pub struct ApplyPendingMaintenanceActionResult {
pub resource_pending_maintenance_actions: Option<ResourcePendingMaintenanceActions>,
}
```
Fields
---
`resource_pending_maintenance_actions: Option<ResourcePendingMaintenanceActions>`
Trait Implementations
---
source### impl Clone for ApplyPendingMaintenanceActionResult
source#### fn clone(&self) -> ApplyPendingMaintenanceActionResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ApplyPendingMaintenanceActionResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ApplyPendingMaintenanceActionResult
source#### fn default() -> ApplyPendingMaintenanceActionResult
Returns the “default value” for a type. Read more
source### impl PartialEq<ApplyPendingMaintenanceActionResult> for ApplyPendingMaintenanceActionResult
source#### fn eq(&self, other: &ApplyPendingMaintenanceActionResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ApplyPendingMaintenanceActionResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ApplyPendingMaintenanceActionResult
Auto Trait Implementations
---
### impl RefUnwindSafe for ApplyPendingMaintenanceActionResult
### impl Send for ApplyPendingMaintenanceActionResult
### impl Sync for ApplyPendingMaintenanceActionResult
### impl Unpin for ApplyPendingMaintenanceActionResult
### impl UnwindSafe for ApplyPendingMaintenanceActionResult
Struct rusoto_docdb::AvailabilityZone
===
```
pub struct AvailabilityZone {
pub name: Option<String>,
}
```
Information about an Availability Zone.
Fields
---
`name: Option<String>`The name of the Availability Zone.
Trait Implementations
---
source### impl Clone for AvailabilityZone
source#### fn clone(&self) -> AvailabilityZone
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for AvailabilityZone
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for AvailabilityZone
source#### fn default() -> AvailabilityZone
Returns the “default value” for a type. Read more
source### impl PartialEq<AvailabilityZone> for AvailabilityZone
source#### fn eq(&self, other: &AvailabilityZone) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AvailabilityZone) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for AvailabilityZone
Auto Trait Implementations
---
### impl RefUnwindSafe for AvailabilityZone
### impl Send for AvailabilityZone
### impl Sync for AvailabilityZone
### impl Unpin for AvailabilityZone
### impl UnwindSafe for AvailabilityZone
Struct rusoto_docdb::Certificate
===
```
pub struct Certificate {
pub certificate_arn: Option<String>,
pub certificate_identifier: Option<String>,
pub certificate_type: Option<String>,
pub thumbprint: Option<String>,
pub valid_from: Option<String>,
pub valid_till: Option<String>,
}
```
A certificate authority (CA) certificate for an account.
Fields
---
`certificate_arn: Option<String>`The Amazon Resource Name (ARN) for the certificate.
Example: `arn:aws:rds:us-east-1::cert:rds-ca-2019`
`certificate_identifier: Option<String>`The unique key that identifies a certificate.
Example: `rds-ca-2019`
`certificate_type: Option<String>`The type of the certificate.
Example: `CA`
`thumbprint: Option<String>`The thumbprint of the certificate.
`valid_from: Option<String>`The starting date-time from which the certificate is valid.
Example: `2019-07-31T17:57:09Z`
`valid_till: Option<String>`The date-time after which the certificate is no longer valid.
Example: `2024-07-31T17:57:09Z`
Trait Implementations
---
source### impl Clone for Certificate
source#### fn clone(&self) -> Certificate
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for Certificate
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for Certificate
source#### fn default() -> Certificate
Returns the “default value” for a type. Read more
source### impl PartialEq<Certificate> for Certificate
source#### fn eq(&self, other: &Certificate) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &Certificate) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for Certificate
Auto Trait Implementations
---
### impl RefUnwindSafe for Certificate
### impl Send for Certificate
### impl Sync for Certificate
### impl Unpin for Certificate
### impl UnwindSafe for Certificate
Struct rusoto_docdb::CertificateMessage
===
```
pub struct CertificateMessage {
pub certificates: Option<Vec<Certificate>>,
pub marker: Option<String>,
}
```
Fields
---
`certificates: Option<Vec<Certificate>>`A list of certificates for this account.
`marker: Option<String>`An optional pagination token provided if the number of records retrieved is greater than `MaxRecords`. If this parameter is specified, the marker specifies the next record in the list. Including the value of `Marker` in the next call to `DescribeCertificates` results in the next page of certificates.
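A hedged sketch of walking every page of `DescribeCertificates` by feeding `marker` back into the next request. `DescribeCertificatesMessage` and its `marker` field are assumed from the AWS API shape rather than quoted from this page.
```
use rusoto_core::Region;
use rusoto_docdb::{DescribeCertificatesMessage, Docdb, DocdbClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = DocdbClient::new(Region::UsEast1);
    let mut marker: Option<String> = None;
    loop {
        let page = client
            .describe_certificates(DescribeCertificatesMessage {
                marker: marker.clone(),
                ..Default::default()
            })
            .await?;
        for cert in page.certificates.unwrap_or_default() {
            println!("{:?} valid until {:?}", cert.certificate_identifier, cert.valid_till);
        }
        match page.marker {
            Some(next) => marker = Some(next), // more records remain
            None => break,                     // last page reached
        }
    }
    Ok(())
}
```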
Trait Implementations
---
source### impl Clone for CertificateMessage
source#### fn clone(&self) -> CertificateMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CertificateMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CertificateMessage
source#### fn default() -> CertificateMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<CertificateMessage> for CertificateMessage
source#### fn eq(&self, other: &CertificateMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CertificateMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CertificateMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for CertificateMessage
### impl Send for CertificateMessage
### impl Sync for CertificateMessage
### impl Unpin for CertificateMessage
### impl UnwindSafe for CertificateMessage
Struct rusoto_docdb::CloudwatchLogsExportConfiguration
===
```
pub struct CloudwatchLogsExportConfiguration {
pub disable_log_types: Option<Vec<String>>,
pub enable_log_types: Option<Vec<String>>,
}
```
The configuration setting for the log types to be enabled for export to Amazon CloudWatch Logs for a specific instance or cluster.
The `EnableLogTypes` and `DisableLogTypes` arrays determine which logs are exported (or not exported) to CloudWatch Logs. The values within these arrays depend on the engine that is being used.
Fields
---
`disable_log_types: Option<Vec<String>>`The list of log types to disable.
`enable_log_types: Option<Vec<String>>`The list of log types to enable.
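A hedged sketch of enabling one log type on an existing cluster via `ModifyDBCluster`. The `ModifyDBClusterMessage` field names used (`db_cluster_identifier`, `cloudwatch_logs_export_configuration`) and the `"audit"` log type are assumptions, not quotes from this page.
```
use rusoto_core::Region;
use rusoto_docdb::{CloudwatchLogsExportConfiguration, Docdb, DocdbClient, ModifyDBClusterMessage};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = DocdbClient::new(Region::UsEast1);
    client
        .modify_db_cluster(ModifyDBClusterMessage {
            db_cluster_identifier: "sample-cluster".to_string(),
            cloudwatch_logs_export_configuration: Some(CloudwatchLogsExportConfiguration {
                enable_log_types: Some(vec!["audit".to_string()]),
                disable_log_types: None,
            }),
            ..Default::default()
        })
        .await?;
    Ok(())
}
```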
Trait Implementations
---
source### impl Clone for CloudwatchLogsExportConfiguration
source#### fn clone(&self) -> CloudwatchLogsExportConfiguration
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CloudwatchLogsExportConfiguration
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CloudwatchLogsExportConfiguration
source#### fn default() -> CloudwatchLogsExportConfiguration
Returns the “default value” for a type. Read more
source### impl PartialEq<CloudwatchLogsExportConfiguration> for CloudwatchLogsExportConfiguration
source#### fn eq(&self, other: &CloudwatchLogsExportConfiguration) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CloudwatchLogsExportConfiguration) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CloudwatchLogsExportConfiguration
Auto Trait Implementations
---
### impl RefUnwindSafe for CloudwatchLogsExportConfiguration
### impl Send for CloudwatchLogsExportConfiguration
### impl Sync for CloudwatchLogsExportConfiguration
### impl Unpin for CloudwatchLogsExportConfiguration
### impl UnwindSafe for CloudwatchLogsExportConfiguration
Struct rusoto_docdb::CopyDBClusterParameterGroupMessage
===
```
pub struct CopyDBClusterParameterGroupMessage {
pub source_db_cluster_parameter_group_identifier: String,
pub tags: Option<Vec<Tag>>,
pub target_db_cluster_parameter_group_description: String,
pub target_db_cluster_parameter_group_identifier: String,
}
```
Represents the input to CopyDBClusterParameterGroup.
Fields
---
`source_db_cluster_parameter_group_identifier: String`The identifier or Amazon Resource Name (ARN) for the source cluster parameter group.
Constraints:
* Must specify a valid cluster parameter group.
* If the source cluster parameter group is in the same Region as the copy, specify a valid parameter group identifier; for example, `my-db-cluster-param-group`, or a valid ARN.
* If the source parameter group is in a different Region than the copy, specify a valid cluster parameter group ARN; for example, `arn:aws:rds:us-east-1:123456789012:sample-cluster:sample-parameter-group`.
`tags: Option<Vec<Tag>>`The tags that are to be assigned to the parameter group.
`target_db_cluster_parameter_group_description: String`A description for the copied cluster parameter group.
`target_db_cluster_parameter_group_identifier: String`The identifier for the copied cluster parameter group.
Constraints:
* Cannot be null, empty, or blank.
* Must contain from 1 to 255 letters, numbers, or hyphens.
* The first character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: `my-cluster-param-group1`
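A minimal sketch of a same-Region copy using exactly the four fields listed above; identifiers are placeholders, and a Tokio runtime with default credentials is assumed.
```
use rusoto_core::Region;
use rusoto_docdb::{CopyDBClusterParameterGroupMessage, Docdb, DocdbClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = DocdbClient::new(Region::UsEast1);
    let copied = client
        .copy_db_cluster_parameter_group(CopyDBClusterParameterGroupMessage {
            source_db_cluster_parameter_group_identifier: "my-db-cluster-param-group".to_string(),
            target_db_cluster_parameter_group_identifier: "my-cluster-param-group1".to_string(),
            target_db_cluster_parameter_group_description: "Copy of my-db-cluster-param-group"
                .to_string(),
            tags: None,
        })
        .await?;
    println!("{:?}", copied.db_cluster_parameter_group);
    Ok(())
}
```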
Trait Implementations
---
source### impl Clone for CopyDBClusterParameterGroupMessage
source#### fn clone(&self) -> CopyDBClusterParameterGroupMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CopyDBClusterParameterGroupMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CopyDBClusterParameterGroupMessage
source#### fn default() -> CopyDBClusterParameterGroupMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<CopyDBClusterParameterGroupMessage> for CopyDBClusterParameterGroupMessage
source#### fn eq(&self, other: &CopyDBClusterParameterGroupMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CopyDBClusterParameterGroupMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CopyDBClusterParameterGroupMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for CopyDBClusterParameterGroupMessage
### impl Send for CopyDBClusterParameterGroupMessage
### impl Sync for CopyDBClusterParameterGroupMessage
### impl Unpin for CopyDBClusterParameterGroupMessage
### impl UnwindSafe for CopyDBClusterParameterGroupMessage
Struct rusoto_docdb::CopyDBClusterParameterGroupResult
===
```
pub struct CopyDBClusterParameterGroupResult {
pub db_cluster_parameter_group: Option<DBClusterParameterGroup>,
}
```
Fields
---
`db_cluster_parameter_group: Option<DBClusterParameterGroup>`
Trait Implementations
---
source### impl Clone for CopyDBClusterParameterGroupResult
source#### fn clone(&self) -> CopyDBClusterParameterGroupResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CopyDBClusterParameterGroupResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CopyDBClusterParameterGroupResult
source#### fn default() -> CopyDBClusterParameterGroupResult
Returns the “default value” for a type. Read more
source### impl PartialEq<CopyDBClusterParameterGroupResult> for CopyDBClusterParameterGroupResult
source#### fn eq(&self, other: &CopyDBClusterParameterGroupResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CopyDBClusterParameterGroupResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CopyDBClusterParameterGroupResult
Auto Trait Implementations
---
### impl RefUnwindSafe for CopyDBClusterParameterGroupResult
### impl Send for CopyDBClusterParameterGroupResult
### impl Sync for CopyDBClusterParameterGroupResult
### impl Unpin for CopyDBClusterParameterGroupResult
### impl UnwindSafe for CopyDBClusterParameterGroupResult
Struct rusoto_docdb::CopyDBClusterSnapshotMessage
===
```
pub struct CopyDBClusterSnapshotMessage {
pub copy_tags: Option<bool>,
pub kms_key_id: Option<String>,
pub pre_signed_url: Option<String>,
pub source_db_cluster_snapshot_identifier: String,
pub tags: Option<Vec<Tag>>,
pub target_db_cluster_snapshot_identifier: String,
}
```
Represents the input to CopyDBClusterSnapshot.
Fields
---
`copy_tags: Option<bool>`Set to `true` to copy all tags from the source cluster snapshot to the target cluster snapshot, and otherwise `false`. The default is `false`.
`kms_key_id: Option<String>`The KMS key ID for an encrypted cluster snapshot. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
If you copy an encrypted cluster snapshot from your account, you can specify a value for `KmsKeyId` to encrypt the copy with a new KMS encryption key. If you don't specify a value for `KmsKeyId`, then the copy of the cluster snapshot is encrypted with the same KMS key as the source cluster snapshot.
If you copy an encrypted cluster snapshot that is shared from another account, then you must specify a value for `KmsKeyId`.
To copy an encrypted cluster snapshot to another Region, set `KmsKeyId` to the KMS key ID that you want to use to encrypt the copy of the cluster snapshot in the destination Region. KMS encryption keys are specific to the Region that they are created in, and you can't use encryption keys from one Region in another Region.
If you copy an unencrypted cluster snapshot and specify a value for the `KmsKeyId` parameter, an error is returned.
`pre_signed_url: Option<String>`The URL that contains a Signature Version 4 signed request for the `CopyDBClusterSnapshot` API action in the Region that contains the source cluster snapshot to copy. You must use the `PreSignedUrl` parameter when copying a cluster snapshot from another Region.
If you are using an Amazon Web Services SDK tool or the CLI, you can specify `SourceRegion` (or `--source-region` for the CLI) instead of specifying `PreSignedUrl` manually. Specifying `SourceRegion` autogenerates a pre-signed URL that is a valid request for the operation that can be executed in the source Region.
The presigned URL must be a valid request for the `CopyDBClusterSnapshot` API action that can be executed in the source Region that contains the cluster snapshot to be copied. The presigned URL request must contain the following parameter values:
* `SourceRegion` - The ID of the region that contains the snapshot to be copied.
* `SourceDBClusterSnapshotIdentifier` - The identifier for the encrypted cluster snapshot to be copied. This identifier must be in the Amazon Resource Name (ARN) format for the source Region. For example, if you are copying an encrypted cluster snapshot from the us-east-1 Region, then your `SourceDBClusterSnapshotIdentifier` looks something like the following: `arn:aws:rds:us-east-1:12345678012:sample-cluster:sample-cluster-snapshot`.
* `TargetDBClusterSnapshotIdentifier` - The identifier for the new cluster snapshot to be created. This parameter isn't case sensitive.
`source_db_cluster_snapshot_identifier: String`The identifier of the cluster snapshot to copy. This parameter is not case sensitive.
Constraints:
* Must specify a valid system snapshot in the *available* state.
* If the source snapshot is in the same Region as the copy, specify a valid snapshot identifier.
* If the source snapshot is in a different Region than the copy, specify a valid cluster snapshot ARN.
Example: `my-cluster-snapshot1`
`tags: Option<Vec<Tag>>`The tags to be assigned to the cluster snapshot.
`target_db_cluster_snapshot_identifier: String`The identifier of the new cluster snapshot to create from the source cluster snapshot. This parameter is not case sensitive.
Constraints:
* Must contain from 1 to 63 letters, numbers, or hyphens.
* The first character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: `my-cluster-snapshot2`
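A minimal sketch of a same-Region copy of an unencrypted snapshot, so neither `kms_key_id` nor `pre_signed_url` is needed; snapshot names are placeholders, and a Tokio runtime with default credentials is assumed.
```
use rusoto_core::Region;
use rusoto_docdb::{CopyDBClusterSnapshotMessage, Docdb, DocdbClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = DocdbClient::new(Region::UsEast1);
    let copied = client
        .copy_db_cluster_snapshot(CopyDBClusterSnapshotMessage {
            source_db_cluster_snapshot_identifier: "my-cluster-snapshot1".to_string(),
            target_db_cluster_snapshot_identifier: "my-cluster-snapshot2".to_string(),
            copy_tags: Some(true), // carry the source snapshot's tags to the copy
            ..Default::default()
        })
        .await?;
    println!("{:?}", copied.db_cluster_snapshot);
    Ok(())
}
```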
Trait Implementations
---
source### impl Clone for CopyDBClusterSnapshotMessage
source#### fn clone(&self) -> CopyDBClusterSnapshotMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for CopyDBClusterSnapshotMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for CopyDBClusterSnapshotMessage
source#### fn default() -> CopyDBClusterSnapshotMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<CopyDBClusterSnapshotMessage> for CopyDBClusterSnapshotMessage
source#### fn eq(&self, other: &CopyDBClusterSnapshotMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CopyDBClusterSnapshotMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CopyDBClusterSnapshotMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for CopyDBClusterSnapshotMessage
### impl Send for CopyDBClusterSnapshotMessage
### impl Sync for CopyDBClusterSnapshotMessage
### impl Unpin for CopyDBClusterSnapshotMessage
### impl UnwindSafe for CopyDBClusterSnapshotMessage
Struct rusoto_docdb::CopyDBClusterSnapshotResult
===
```
pub struct CopyDBClusterSnapshotResult {
pub db_cluster_snapshot: Option<DBClusterSnapshot>,
}
```
Fields
---
`db_cluster_snapshot: Option<DBClusterSnapshot>`
Trait Implementations
---
`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`.
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateDBClusterMessage
===
```
pub struct CreateDBClusterMessage {
pub availability_zones: Option<Vec<String>>,
pub backup_retention_period: Option<i64>,
pub db_cluster_identifier: String,
pub db_cluster_parameter_group_name: Option<String>,
pub db_subnet_group_name: Option<String>,
pub deletion_protection: Option<bool>,
pub enable_cloudwatch_logs_exports: Option<Vec<String>>,
pub engine: String,
pub engine_version: Option<String>,
pub global_cluster_identifier: Option<String>,
pub kms_key_id: Option<String>,
pub master_user_password: Option<String>,
pub master_username: Option<String>,
pub port: Option<i64>,
pub pre_signed_url: Option<String>,
pub preferred_backup_window: Option<String>,
pub preferred_maintenance_window: Option<String>,
pub storage_encrypted: Option<bool>,
pub tags: Option<Vec<Tag>>,
pub vpc_security_group_ids: Option<Vec<String>>,
}
```
Represents the input to CreateDBCluster.
Fields
---
`availability_zones: Option<Vec<String>>`A list of Amazon EC2 Availability Zones that instances in the cluster can be created in.
`backup_retention_period: Option<i64>`The number of days for which automated backups are retained. You must specify a minimum value of 1.
Default: 1
Constraints:
* Must be a value from 1 to 35.
`db_cluster_identifier: String`The cluster identifier. This parameter is stored as a lowercase string.
Constraints:
* Must contain from 1 to 63 letters, numbers, or hyphens.
* The first character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: `my-cluster`
`db_cluster_parameter_group_name: Option<String>`The name of the cluster parameter group to associate with this cluster.
`db_subnet_group_name: Option<String>`A subnet group to associate with this cluster.
Constraints: Must match the name of an existing `DBSubnetGroup`. Must not be default.
Example: `mySubnetgroup`
`deletion_protection: Option<bool>`Specifies whether this cluster can be deleted. If `DeletionProtection` is enabled, the cluster cannot be deleted unless it is modified and `DeletionProtection` is disabled. `DeletionProtection` protects clusters from being accidentally deleted.
`enable_cloudwatch_logs_exports: Option<Vec<String>>`A list of log types that need to be enabled for exporting to Amazon CloudWatch Logs. You can enable audit logs or profiler logs. For more information, see Auditing Amazon DocumentDB Events and Profiling Amazon DocumentDB Operations.
`engine: String`The name of the database engine to be used for this cluster.
Valid values: `docdb`
`engine_version: Option<String>`The version number of the database engine to use. The `--engine-version` will default to the latest major engine version. For production workloads, we recommend explicitly declaring this parameter with the intended major engine version.
`global_cluster_identifier: Option<String>`The cluster identifier of the new global cluster.
`kms_key_id: Option<String>`The KMS key identifier for an encrypted cluster.
The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are creating a cluster using the same account that owns the KMS encryption key that is used to encrypt the new cluster, you can use the KMS key alias instead of the ARN for the KMS encryption key.
If an encryption key is not specified in `KmsKeyId`:
* If the `StorageEncrypted` parameter is `true`, Amazon DocumentDB uses your default encryption key.
KMS creates the default encryption key for your account. Your account has a different default encryption key for each Region.
`master_user_password: Option<String>`The password for the master database user. This password can contain any printable ASCII character except forward slash (/), double quote ("), or the "at" symbol (@).
Constraints: Must contain from 8 to 100 characters.
`master_username: Option<String>`The name of the master user for the cluster.
Constraints:
* Must be from 1 to 63 letters or numbers.
* The first character must be a letter.
* Cannot be a reserved word for the chosen database engine.
`port: Option<i64>`The port number on which the instances in the cluster accept connections.
`pre_signed_url: Option<String>`Not currently supported.
`preferred_backup_window: Option<String>`The daily time range during which automated backups are created if automated backups are enabled using the `BackupRetentionPeriod` parameter.
The default is a 30-minute window selected at random from an 8-hour block of time for each Region.
Constraints:
* Must be in the format `hh24:mi-hh24:mi`.
* Must be in Universal Coordinated Time (UTC).
* Must not conflict with the preferred maintenance window.
* Must be at least 30 minutes.
`preferred_maintenance_window: Option<String>`The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
Format: `ddd:hh24:mi-ddd:hh24:mi`
The default is a 30-minute window selected at random from an 8-hour block of time for each Region, occurring on a random day of the week.
Valid days: Mon, Tue, Wed, Thu, Fri, Sat, Sun
Constraints: Minimum 30-minute window.
`storage_encrypted: Option<bool>`Specifies whether the cluster is encrypted.
`tags: Option<Vec<Tag>>`The tags to be assigned to the cluster.
`vpc_security_group_ids: Option<Vec<String>>`A list of EC2 VPC security groups to associate with this cluster.
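The sketch below fills in the required fields plus a few common options and calls `create_db_cluster`; the region, credentials, and all identifier and password values are illustrative assumptions, not values mandated by this reference:
```
use rusoto_core::Region;
use rusoto_docdb::{CreateDBClusterMessage, Docdb, DocdbClient};
#[tokio::main]
async fn main() {
let client = DocdbClient::new(Region::UsEast1);
let request = CreateDBClusterMessage {
db_cluster_identifier: "my-cluster".to_string(),
engine: "docdb".to_string(),
master_username: Some("masteruser".to_string()),
master_user_password: Some("change-me-8-to-100-chars".to_string()),
storage_encrypted: Some(true),
// Everything else keeps its Default (None) value.
..Default::default()
};
match client.create_db_cluster(request).await {
Ok(result) => println!("created: {:?}", result.db_cluster),
Err(err) => eprintln!("create_db_cluster failed: {}", err),
}
}
```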
Trait Implementations
---
`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`.
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateDBClusterParameterGroupMessage
===
```
pub struct CreateDBClusterParameterGroupMessage {
pub db_cluster_parameter_group_name: String,
pub db_parameter_group_family: String,
pub description: String,
pub tags: Option<Vec<Tag>>,
}
```
Represents the input of CreateDBClusterParameterGroup.
Fields
---
`db_cluster_parameter_group_name: String`The name of the cluster parameter group.
Constraints:
* Must not match the name of an existing `DBClusterParameterGroup`.
This value is stored as a lowercase string.
`db_parameter_group_family: String`The cluster parameter group family name.
`description: String`The description for the cluster parameter group.
`tags: Option<Vec<Tag>>`The tags to be assigned to the cluster parameter group.
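Because all three required fields are plain `String`s, the message can be constructed directly; the names and the parameter group family shown below are placeholders, not values taken from this reference:
```
use rusoto_docdb::CreateDBClusterParameterGroupMessage;
fn main() {
let request = CreateDBClusterParameterGroupMessage {
db_cluster_parameter_group_name: "my-param-group".to_string(),
db_parameter_group_family: "docdb4.0".to_string(), // placeholder family name
description: "Custom parameters for my cluster".to_string(),
tags: None,
};
// The message derives Debug, so it can be inspected before being passed
// to DocdbClient::create_db_cluster_parameter_group.
println!("{:?}", request);
}
```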
Trait Implementations
---
`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`.
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateDBClusterParameterGroupResult
===
```
pub struct CreateDBClusterParameterGroupResult {
pub db_cluster_parameter_group: Option<DBClusterParameterGroup>,
}
```
Fields
---
`db_cluster_parameter_group: Option<DBClusterParameterGroup>`
Trait Implementations
---
`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`.
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateDBClusterResult
===
```
pub struct CreateDBClusterResult {
pub db_cluster: Option<DBCluster>,
}
```
Fields
---
`db_cluster: Option<DBCluster>`
Trait Implementations
---
`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`.
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateDBClusterSnapshotMessage
===
```
pub struct CreateDBClusterSnapshotMessage {
pub db_cluster_identifier: String,
pub db_cluster_snapshot_identifier: String,
pub tags: Option<Vec<Tag>>,
}
```
Represents the input of CreateDBClusterSnapshot.
Fields
---
`db_cluster_identifier: String`The identifier of the cluster to create a snapshot for. This parameter is not case sensitive.
Constraints:
* Must match the identifier of an existing `DBCluster`.
Example: `my-cluster`
`db_cluster_snapshot_identifier: String`The identifier of the cluster snapshot. This parameter is stored as a lowercase string.
Constraints:
* Must contain from 1 to 63 letters, numbers, or hyphens.
* The first character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: `my-cluster-snapshot1`
`tags: Option<Vec<Tag>>`The tags to be assigned to the cluster snapshot.
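A short sketch pairing the two required identifiers and submitting the request (the region, identifiers, and Tokio runtime are assumptions):
```
use rusoto_core::Region;
use rusoto_docdb::{CreateDBClusterSnapshotMessage, Docdb, DocdbClient};
#[tokio::main]
async fn main() {
let client = DocdbClient::new(Region::UsEast1);
let request = CreateDBClusterSnapshotMessage {
db_cluster_identifier: "my-cluster".to_string(),
db_cluster_snapshot_identifier: "my-cluster-snapshot1".to_string(),
tags: None,
};
match client.create_db_cluster_snapshot(request).await {
Ok(result) => println!("snapshot: {:?}", result.db_cluster_snapshot),
Err(err) => eprintln!("create_db_cluster_snapshot failed: {}", err),
}
}
```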
Trait Implementations
---
`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`.
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateDBClusterSnapshotResult
===
```
pub struct CreateDBClusterSnapshotResult {
pub db_cluster_snapshot: Option<DBClusterSnapshot>,
}
```
Fields
---
`db_cluster_snapshot: Option<DBClusterSnapshot>`
Trait Implementations
---
`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`.
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateDBInstanceMessage
===
```
pub struct CreateDBInstanceMessage {
pub auto_minor_version_upgrade: Option<bool>,
pub availability_zone: Option<String>,
pub db_cluster_identifier: String,
pub db_instance_class: String,
pub db_instance_identifier: String,
pub engine: String,
pub preferred_maintenance_window: Option<String>,
pub promotion_tier: Option<i64>,
pub tags: Option<Vec<Tag>>,
}
```
Represents the input to CreateDBInstance.
Fields
---
`auto_minor_version_upgrade: Option<bool>`This parameter does not apply to Amazon DocumentDB. Amazon DocumentDB does not perform minor version upgrades regardless of the value set.
Default: `false`
`availability_zone: Option<String>`The Amazon EC2 Availability Zone that the instance is created in.
Default: A random, system-chosen Availability Zone in the endpoint's Region.
Example: `us-east-1d`
`db_cluster_identifier: String`The identifier of the cluster that the instance will belong to.
`db_instance_class: String`The compute and memory capacity of the instance; for example, `db.r5.large`.
`db_instance_identifier: String`The instance identifier. This parameter is stored as a lowercase string.
Constraints:
* Must contain from 1 to 63 letters, numbers, or hyphens.
* The first character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: `mydbinstance`
`engine: String`The name of the database engine to be used for this instance.
Valid value: `docdb`
`preferred_maintenance_window: Option<String>`The time range each week during which system maintenance can occur, in Universal Coordinated Time (UTC).
Format: `ddd:hh24:mi-ddd:hh24:mi`
The default is a 30-minute window selected at random from an 8-hour block of time for each Region, occurring on a random day of the week.
Valid days: Mon, Tue, Wed, Thu, Fri, Sat, Sun
Constraints: Minimum 30-minute window.
`promotion_tier: Option<i64>`A value that specifies the order in which an Amazon DocumentDB replica is promoted to the primary instance after a failure of the existing primary instance.
Default: 1
Valid values: 0-15
`tags: Option<Vec<Tag>>`The tags to be assigned to the instance. You can assign up to 10 tags to an instance.
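A hedged example of adding an instance to an existing cluster; the instance class, identifiers, and region are placeholders reused from the field descriptions above:
```
use rusoto_core::Region;
use rusoto_docdb::{CreateDBInstanceMessage, Docdb, DocdbClient};
#[tokio::main]
async fn main() {
let client = DocdbClient::new(Region::UsEast1);
let request = CreateDBInstanceMessage {
db_cluster_identifier: "my-cluster".to_string(),
db_instance_identifier: "mydbinstance".to_string(),
db_instance_class: "db.r5.large".to_string(),
engine: "docdb".to_string(),
promotion_tier: Some(1),
..Default::default()
};
match client.create_db_instance(request).await {
Ok(result) => println!("instance: {:?}", result.db_instance),
Err(err) => eprintln!("create_db_instance failed: {}", err),
}
}
```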
Trait Implementations
---
`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`.
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateDBInstanceResult
===
```
pub struct CreateDBInstanceResult {
pub db_instance: Option<DBInstance>,
}
```
Fields
---
`db_instance: Option<DBInstance>`
Trait Implementations
---
`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`.
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateDBSubnetGroupMessage
===
```
pub struct CreateDBSubnetGroupMessage {
pub db_subnet_group_description: String,
pub db_subnet_group_name: String,
pub subnet_ids: Vec<String>,
pub tags: Option<Vec<Tag>>,
}
```
Represents the input to CreateDBSubnetGroup.
Fields
---
`db_subnet_group_description: String`The description for the subnet group.
`db_subnet_group_name: String`The name for the subnet group. This value is stored as a lowercase string.
Constraints: Must contain no more than 255 letters, numbers, periods, underscores, spaces, or hyphens. Must not be default.
Example: `mySubnetgroup`
`subnet_ids: Vec<String>`The Amazon EC2 subnet IDs for the subnet group.
`tags: Option<Vec<Tag>>`The tags to be assigned to the subnet group.
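Constructing the message directly; the subnet IDs below are placeholders, and the request would then be passed to the client's `create_db_subnet_group` call:
```
use rusoto_docdb::CreateDBSubnetGroupMessage;
fn main() {
let request = CreateDBSubnetGroupMessage {
db_subnet_group_name: "mysubnetgroup".to_string(),
db_subnet_group_description: "Subnets for my DocumentDB cluster".to_string(),
subnet_ids: vec![
"subnet-11111111".to_string(), // placeholder subnet IDs
"subnet-22222222".to_string(),
],
tags: None,
};
println!("{:?}", request);
}
```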
Trait Implementations
---
`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`.
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateDBSubnetGroupResult
===
```
pub struct CreateDBSubnetGroupResult {
pub db_subnet_group: Option<DBSubnetGroup>,
}
```
Fields
---
`db_subnet_group: Option<DBSubnetGroup>`
Trait Implementations
---
`Clone`, `Debug`, `Default`, `PartialEq`, and `StructuralPartialEq`.
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe`.
Blanket Implementations
---
`Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateEventSubscriptionMessage
===
```
pub struct CreateEventSubscriptionMessage {
pub enabled: Option<bool>,
pub event_categories: Option<Vec<String>>,
pub sns_topic_arn: String,
pub source_ids: Option<Vec<String>>,
pub source_type: Option<String>,
pub subscription_name: String,
pub tags: Option<Vec<Tag>>,
}
```
Represents the input to CreateEventSubscription.
Fields
---
`enabled: Option<bool>`A Boolean value; set to `true` to activate the subscription, or set to `false` to create the subscription but not activate it.
`event_categories: Option<Vec<String>>`A list of event categories for a `SourceType` that you want to subscribe to.
`sns_topic_arn: String`The Amazon Resource Name (ARN) of the SNS topic created for event notification. Amazon SNS creates the ARN when you create a topic and subscribe to it.
`source_ids: Option<Vec<String>>`The list of identifiers of the event sources for which events are returned. If not specified, then all sources are included in the response. An identifier must begin with a letter and must contain only ASCII letters, digits, and hyphens; it can't end with a hyphen or contain two consecutive hyphens.
Constraints:
* If `SourceIds` are provided, `SourceType` must also be provided.
* If the source type is an instance, a `DBInstanceIdentifier` must be provided.
* If the source type is a security group, a `DBSecurityGroupName` must be provided.
* If the source type is a parameter group, a `DBParameterGroupName` must be provided.
* If the source type is a snapshot, a `DBSnapshotIdentifier` must be provided.
`source_type: Option<String>`The type of source that is generating the events. For example, if you want to be notified of events generated by an instance, you would set this parameter to `db-instance`. If this value is not specified, all events are returned.
Valid values: `db-instance`, `db-cluster`, `db-parameter-group`, `db-security-group`, `db-cluster-snapshot`
`subscription_name: String`The name of the subscription.
Constraints: The name must be fewer than 255 characters.
`tags: Option<Vec<Tag>>`The tags to be assigned to the event subscription.
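A sketch that subscribes an SNS topic to cluster events; the subscription name, topic ARN, and event category are placeholder assumptions:
```
use rusoto_core::Region;
use rusoto_docdb::{CreateEventSubscriptionMessage, Docdb, DocdbClient};
#[tokio::main]
async fn main() {
let client = DocdbClient::new(Region::UsEast1);
let request = CreateEventSubscriptionMessage {
subscription_name: "my-docdb-events".to_string(),
sns_topic_arn: "arn:aws:sns:us-east-1:123456789012:my-topic".to_string(),
source_type: Some("db-cluster".to_string()),
event_categories: Some(vec!["failure".to_string()]), // placeholder category
enabled: Some(true),
..Default::default()
};
match client.create_event_subscription(request).await {
Ok(result) => println!("{:?}", result),
Err(err) => eprintln!("create_event_subscription failed: {}", err),
}
}
```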
Trait Implementations
---
### impl Clone for CreateEventSubscriptionMessage
### impl Debug for CreateEventSubscriptionMessage
### impl Default for CreateEventSubscriptionMessage
### impl PartialEq<CreateEventSubscriptionMessage> for CreateEventSubscriptionMessage
### impl StructuralPartialEq for CreateEventSubscriptionMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateEventSubscriptionMessage
### impl Send for CreateEventSubscriptionMessage
### impl Sync for CreateEventSubscriptionMessage
### impl Unpin for CreateEventSubscriptionMessage
### impl UnwindSafe for CreateEventSubscriptionMessage
Blanket Implementations
---
The standard blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateEventSubscriptionResult
===
```
pub struct CreateEventSubscriptionResult {
pub event_subscription: Option<EventSubscription>,
}
```
Fields
---
`event_subscription: Option<EventSubscription>`
Trait Implementations
---
### impl Clone for CreateEventSubscriptionResult
### impl Debug for CreateEventSubscriptionResult
### impl Default for CreateEventSubscriptionResult
### impl PartialEq<CreateEventSubscriptionResult> for CreateEventSubscriptionResult
### impl StructuralPartialEq for CreateEventSubscriptionResult
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateEventSubscriptionResult
### impl Send for CreateEventSubscriptionResult
### impl Sync for CreateEventSubscriptionResult
### impl Unpin for CreateEventSubscriptionResult
### impl UnwindSafe for CreateEventSubscriptionResult
Blanket Implementations
---
The standard blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateGlobalClusterMessage
===
```
pub struct CreateGlobalClusterMessage {
pub database_name: Option<String>,
pub deletion_protection: Option<bool>,
pub engine: Option<String>,
pub engine_version: Option<String>,
pub global_cluster_identifier: String,
pub source_db_cluster_identifier: Option<String>,
pub storage_encrypted: Option<bool>,
}
```
Represents the input to CreateGlobalCluster.
Fields
---
`database_name: Option<String>`The name for your database of up to 64 alpha-numeric characters. If you do not provide a name, Amazon DocumentDB will not create a database in the global cluster you are creating.
`deletion_protection: Option<bool>`The deletion protection setting for the new global cluster. The global cluster can't be deleted when deletion protection is enabled.
`engine: Option<String>`The name of the database engine to be used for this cluster.
`engine_version: Option<String>`The engine version of the global cluster.
`global_cluster_identifier: String`The cluster identifier of the new global cluster.
`source_db_cluster_identifier: Option<String>`The Amazon Resource Name (ARN) to use as the primary cluster of the global cluster. This parameter is optional.
`storage_encrypted: Option<bool>`The storage encryption setting for the new global cluster.
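As a rough sketch (the identifiers and source cluster ARN below are placeholders), only `global_cluster_identifier` is required; the optional fields can be set as needed and the remainder left at their defaults.
```
use rusoto_docdb::CreateGlobalClusterMessage;
fn build_global_cluster_request() -> CreateGlobalClusterMessage {
CreateGlobalClusterMessage {
// Required: identifier of the new global cluster.
global_cluster_identifier: "my-global-cluster".to_string(),
// Optional: engine settings and the existing primary cluster to attach.
engine: Some("docdb".to_string()),
source_db_cluster_identifier: Some(
"arn:aws:rds:us-east-1:123456789012:cluster:my-primary-cluster".to_string(),
),
deletion_protection: Some(true),
storage_encrypted: Some(true),
..Default::default()
}
}
```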
Trait Implementations
---
### impl Clone for CreateGlobalClusterMessage
### impl Debug for CreateGlobalClusterMessage
### impl Default for CreateGlobalClusterMessage
### impl PartialEq<CreateGlobalClusterMessage> for CreateGlobalClusterMessage
### impl StructuralPartialEq for CreateGlobalClusterMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateGlobalClusterMessage
### impl Send for CreateGlobalClusterMessage
### impl Sync for CreateGlobalClusterMessage
### impl Unpin for CreateGlobalClusterMessage
### impl UnwindSafe for CreateGlobalClusterMessage
Blanket Implementations
---
The standard blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::CreateGlobalClusterResult
===
```
pub struct CreateGlobalClusterResult {
pub global_cluster: Option<GlobalCluster>,
}
```
Fields
---
`global_cluster: Option<GlobalCluster>`
Trait Implementations
---
### impl Clone for CreateGlobalClusterResult
### impl Debug for CreateGlobalClusterResult
### impl Default for CreateGlobalClusterResult
### impl PartialEq<CreateGlobalClusterResult> for CreateGlobalClusterResult
### impl StructuralPartialEq for CreateGlobalClusterResult
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateGlobalClusterResult
### impl Send for CreateGlobalClusterResult
### impl Sync for CreateGlobalClusterResult
### impl Unpin for CreateGlobalClusterResult
### impl UnwindSafe for CreateGlobalClusterResult
Blanket Implementations
---
The standard blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::DBCluster
===
```
pub struct DBCluster {
pub associated_roles: Option<Vec<DBClusterRole>>,
pub availability_zones: Option<Vec<String>>,
pub backup_retention_period: Option<i64>,
pub cluster_create_time: Option<String>,
pub db_cluster_arn: Option<String>,
pub db_cluster_identifier: Option<String>,
pub db_cluster_members: Option<Vec<DBClusterMember>>,
pub db_cluster_parameter_group: Option<String>,
pub db_subnet_group: Option<String>,
pub db_cluster_resource_id: Option<String>,
pub deletion_protection: Option<bool>,
pub earliest_restorable_time: Option<String>,
pub enabled_cloudwatch_logs_exports: Option<Vec<String>>,
pub endpoint: Option<String>,
pub engine: Option<String>,
pub engine_version: Option<String>,
pub hosted_zone_id: Option<String>,
pub kms_key_id: Option<String>,
pub latest_restorable_time: Option<String>,
pub master_username: Option<String>,
pub multi_az: Option<bool>,
pub percent_progress: Option<String>,
pub port: Option<i64>,
pub preferred_backup_window: Option<String>,
pub preferred_maintenance_window: Option<String>,
pub read_replica_identifiers: Option<Vec<String>>,
pub reader_endpoint: Option<String>,
pub replication_source_identifier: Option<String>,
pub status: Option<String>,
pub storage_encrypted: Option<bool>,
pub vpc_security_groups: Option<Vec<VpcSecurityGroupMembership>>,
}
```
Detailed information about a cluster.
Fields
---
`associated_roles: Option<Vec<DBClusterRole>>`Provides a list of the Identity and Access Management (IAM) roles that are associated with the cluster. IAM roles that are associated with a cluster grant permission for the cluster to access other Amazon Web Services services on your behalf.
`availability_zones: Option<Vec<String>>`Provides the list of Amazon EC2 Availability Zones that instances in the cluster can be created in.
`backup_retention_period: Option<i64>`Specifies the number of days for which automatic snapshots are retained.
`cluster_create_time: Option<String>`Specifies the time when the cluster was created, in Universal Coordinated Time (UTC).
`db_cluster_arn: Option<String>`The Amazon Resource Name (ARN) for the cluster.
`db_cluster_identifier: Option<String>`Contains a user-supplied cluster identifier. This identifier is the unique key that identifies a cluster.
`db_cluster_members: Option<Vec<DBClusterMember>>`Provides the list of instances that make up the cluster.
`db_cluster_parameter_group: Option<String>`Specifies the name of the cluster parameter group for the cluster.
`db_subnet_group: Option<String>`Specifies information on the subnet group that is associated with the cluster, including the name, description, and subnets in the subnet group.
`db_cluster_resource_id: Option<String>`The Region-unique, immutable identifier for the cluster. This identifier is found in CloudTrail log entries whenever the KMS key for the cluster is accessed.
`deletion_protection: Option<bool>`Specifies whether this cluster can be deleted. If `DeletionProtection` is enabled, the cluster cannot be deleted unless it is modified and `DeletionProtection` is disabled. `DeletionProtection` protects clusters from being accidentally deleted.
`earliest_restorable_time: Option<String>`The earliest time to which a database can be restored with point-in-time restore.
`enabled_cloudwatch_logs_exports: Option<Vec<String>>`A list of log types that this cluster is configured to export to Amazon CloudWatch Logs.
`endpoint: Option<String>`Specifies the connection endpoint for the primary instance of the cluster.
`engine: Option<String>`Provides the name of the database engine to be used for this cluster.
`engine_version: Option<String>`Indicates the database engine version.
`hosted_zone_id: Option<String>`Specifies the ID that Amazon Route 53 assigns when you create a hosted zone.
`kms_key_id: Option<String>`If `StorageEncrypted` is `true`, the KMS key identifier for the encrypted cluster.
`latest_restorable_time: Option<String>`Specifies the latest time to which a database can be restored with point-in-time restore.
`master_username: Option<String>`Contains the master user name for the cluster.
`multi_az: Option<bool>`Specifies whether the cluster has instances in multiple Availability Zones.
`percent_progress: Option<String>`Specifies the progress of the operation as a percentage.
`port: Option<i64>`Specifies the port that the database engine is listening on.
`preferred_backup_window: Option<String>`Specifies the daily time range during which automated backups are created if automated backups are enabled, as determined by the `BackupRetentionPeriod`.
`preferred_maintenance_window: Option<String>`Specifies the weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
`read_replica_identifiers: Option<Vec<String>>`Contains one or more identifiers of the secondary clusters that are associated with this cluster.
`reader_endpoint: Option<String>`The reader endpoint for the cluster. The reader endpoint for a cluster load balances connections across the Amazon DocumentDB replicas that are available in a cluster. As clients request new connections to the reader endpoint, Amazon DocumentDB distributes the connection requests among the Amazon DocumentDB replicas in the cluster. This functionality can help balance your read workload across multiple Amazon DocumentDB replicas in your cluster.
If a failover occurs, and the Amazon DocumentDB replica that you are connected to is promoted to be the primary instance, your connection is dropped. To continue sending your read workload to other Amazon DocumentDB replicas in the cluster, you can then reconnect to the reader endpoint.
`replication_source_identifier: Option<String>`Contains the identifier of the source cluster if this cluster is a secondary cluster.
`status: Option<String>`Specifies the current state of this cluster.
`storage_encrypted: Option<bool>`Specifies whether the cluster is encrypted.
`vpc_security_groups: Option<Vec<VpcSecurityGroupMembership>>`Provides a list of virtual private cloud (VPC) security groups that the cluster belongs to.
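Because every field in this response shape is optional, reading a cluster usually means unwrapping `Option`s. A small sketch that prefers the load-balanced `reader_endpoint` for read traffic and falls back to the primary `endpoint` (the helper name is illustrative):
```
use rusoto_docdb::DBCluster;
/// Endpoint to use for read traffic: prefer the load-balanced reader
/// endpoint, fall back to the primary instance endpoint.
fn read_endpoint(cluster: &DBCluster) -> Option<&str> {
cluster
.reader_endpoint
.as_deref()
.or(cluster.endpoint.as_deref())
}
```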
Trait Implementations
---
### impl Clone for DBCluster
### impl Debug for DBCluster
### impl Default for DBCluster
### impl PartialEq<DBCluster> for DBCluster
### impl StructuralPartialEq for DBCluster
Auto Trait Implementations
---
### impl RefUnwindSafe for DBCluster
### impl Send for DBCluster
### impl Sync for DBCluster
### impl Unpin for DBCluster
### impl UnwindSafe for DBCluster
Blanket Implementations
---
The standard blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::DBClusterMember
===
```
pub struct DBClusterMember {
pub db_cluster_parameter_group_status: Option<String>,
pub db_instance_identifier: Option<String>,
pub is_cluster_writer: Option<bool>,
pub promotion_tier: Option<i64>,
}
```
Contains information about an instance that is part of a cluster.
Fields
---
`db_cluster_parameter_group_status: Option<String>`Specifies the status of the cluster parameter group for this member of the DB cluster.
`db_instance_identifier: Option<String>`Specifies the instance identifier for this member of the cluster.
`is_cluster_writer: Option<bool>`A value that is `true` if the cluster member is the primary instance for the cluster and `false` otherwise.
`promotion_tier: Option<i64>`A value that specifies the order in which an Amazon DocumentDB replica is promoted to the primary instance after a failure of the existing primary instance.
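A typical use of the membership list is locating the cluster's current writer. A minimal sketch over the documented fields (the helper name is illustrative):
```
use rusoto_docdb::DBCluster;
/// Instance identifier of the cluster's current writer, if the membership
/// list and the writer flag are present in the response.
fn writer_instance(cluster: &DBCluster) -> Option<&str> {
cluster
.db_cluster_members
.as_ref()?
.iter()
.find(|member| member.is_cluster_writer == Some(true))
.and_then(|member| member.db_instance_identifier.as_deref())
}
```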
Trait Implementations
---
### impl Clone for DBClusterMember
### impl Debug for DBClusterMember
### impl Default for DBClusterMember
### impl PartialEq<DBClusterMember> for DBClusterMember
### impl StructuralPartialEq for DBClusterMember
Auto Trait Implementations
---
### impl RefUnwindSafe for DBClusterMember
### impl Send for DBClusterMember
### impl Sync for DBClusterMember
### impl Unpin for DBClusterMember
### impl UnwindSafe for DBClusterMember
Blanket Implementations
---
The standard blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::DBClusterMessage
===
```
pub struct DBClusterMessage {
pub db_clusters: Option<Vec<DBCluster>>,
pub marker: Option<String>,
}
```
Represents the output of DescribeDBClusters.
Fields
---
`db_clusters: Option<Vec<DBCluster>>`A list of clusters.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
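When iterating over paginated results, each page's `marker` is fed into the next request until it comes back as `None`. A sketch of the per-page bookkeeping, based only on the documented fields (the code that issues the DescribeDBClusters call is omitted):
```
use rusoto_docdb::{DBCluster, DBClusterMessage};
/// Collects the clusters from one page and returns the pagination marker to
/// pass into the next DescribeDBClusters request, if any.
fn take_page(page: DBClusterMessage, all: &mut Vec<DBCluster>) -> Option<String> {
if let Some(mut clusters) = page.db_clusters {
all.append(&mut clusters);
}
page.marker
}
```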
Trait Implementations
---
### impl Clone for DBClusterMessage
### impl Debug for DBClusterMessage
### impl Default for DBClusterMessage
### impl PartialEq<DBClusterMessage> for DBClusterMessage
### impl StructuralPartialEq for DBClusterMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DBClusterMessage
### impl Send for DBClusterMessage
### impl Sync for DBClusterMessage
### impl Unpin for DBClusterMessage
### impl UnwindSafe for DBClusterMessage
Blanket Implementations
---
The standard blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::DBClusterParameterGroup
===
```
pub struct DBClusterParameterGroup {
pub db_cluster_parameter_group_arn: Option<String>,
pub db_cluster_parameter_group_name: Option<String>,
pub db_parameter_group_family: Option<String>,
pub description: Option<String>,
}
```
Detailed information about a cluster parameter group.
Fields
---
`db_cluster_parameter_group_arn: Option<String>`The Amazon Resource Name (ARN) for the cluster parameter group.
`db_cluster_parameter_group_name: Option<String>`Provides the name of the cluster parameter group.
`db_parameter_group_family: Option<String>`Provides the name of the parameter group family that this cluster parameter group is compatible with.
`description: Option<String>`Provides the customer-specified description for this cluster parameter group.
Trait Implementations
---
### impl Clone for DBClusterParameterGroup
### impl Debug for DBClusterParameterGroup
### impl Default for DBClusterParameterGroup
### impl PartialEq<DBClusterParameterGroup> for DBClusterParameterGroup
### impl StructuralPartialEq for DBClusterParameterGroup
Auto Trait Implementations
---
### impl RefUnwindSafe for DBClusterParameterGroup
### impl Send for DBClusterParameterGroup
### impl Sync for DBClusterParameterGroup
### impl Unpin for DBClusterParameterGroup
### impl UnwindSafe for DBClusterParameterGroup
Blanket Implementations
---
The standard blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::DBClusterParameterGroupDetails
===
```
pub struct DBClusterParameterGroupDetails {
pub marker: Option<String>,
pub parameters: Option<Vec<Parameter>>,
}
```
Represents the output of DBClusterParameterGroup.
Fields
---
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`parameters: Option<Vec<Parameter>>`Provides a list of parameters for the cluster parameter group.
Trait Implementations
---
### impl Clone for DBClusterParameterGroupDetails
### impl Debug for DBClusterParameterGroupDetails
### impl Default for DBClusterParameterGroupDetails
### impl PartialEq<DBClusterParameterGroupDetails> for DBClusterParameterGroupDetails
### impl StructuralPartialEq for DBClusterParameterGroupDetails
Auto Trait Implementations
---
### impl RefUnwindSafe for DBClusterParameterGroupDetails
### impl Send for DBClusterParameterGroupDetails
### impl Sync for DBClusterParameterGroupDetails
### impl Unpin for DBClusterParameterGroupDetails
### impl UnwindSafe for DBClusterParameterGroupDetails
Blanket Implementations
---
The standard blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::DBClusterParameterGroupNameMessage
===
```
pub struct DBClusterParameterGroupNameMessage {
pub db_cluster_parameter_group_name: Option<String>,
}
```
Contains the name of a cluster parameter group.
Fields
---
`db_cluster_parameter_group_name: Option<String>`The name of a cluster parameter group.
Constraints:
* Must be from 1 to 255 letters or numbers.
* The first character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
This value is stored as a lowercase string.
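The constraints above can be checked client-side before a request is sent. The following helper is purely illustrative and not part of the crate; it assumes hyphens are permitted anywhere except at the end or doubled, per the constraints.
```
/// Illustrative client-side check mirroring the documented name constraints;
/// not part of rusoto_docdb itself.
fn is_valid_cluster_parameter_group_name(name: &str) -> bool {
let len_ok = !name.is_empty() && name.len() <= 255;
let first_ok = name
.chars()
.next()
.map_or(false, |c| c.is_ascii_alphabetic());
let chars_ok = name
.chars()
.all(|c| c.is_ascii_alphanumeric() || c == '-');
let hyphen_ok = !name.ends_with('-') && !name.contains("--");
len_ok && first_ok && chars_ok && hyphen_ok
}
```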
Trait Implementations
---
### impl Clone for DBClusterParameterGroupNameMessage
### impl Debug for DBClusterParameterGroupNameMessage
### impl Default for DBClusterParameterGroupNameMessage
### impl PartialEq<DBClusterParameterGroupNameMessage> for DBClusterParameterGroupNameMessage
### impl StructuralPartialEq for DBClusterParameterGroupNameMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DBClusterParameterGroupNameMessage
### impl Send for DBClusterParameterGroupNameMessage
### impl Sync for DBClusterParameterGroupNameMessage
### impl Unpin for DBClusterParameterGroupNameMessage
### impl UnwindSafe for DBClusterParameterGroupNameMessage
Blanket Implementations
---
The standard blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::DBClusterParameterGroupsMessage
===
```
pub struct DBClusterParameterGroupsMessage {
pub db_cluster_parameter_groups: Option<Vec<DBClusterParameterGroup>>,
pub marker: Option<String>,
}
```
Represents the output of DBClusterParameterGroups.
Fields
---
`db_cluster_parameter_groups: Option<Vec<DBClusterParameterGroup>>`A list of cluster parameter groups.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
Trait Implementations
---
### impl Clone for DBClusterParameterGroupsMessage
### impl Debug for DBClusterParameterGroupsMessage
### impl Default for DBClusterParameterGroupsMessage
### impl PartialEq<DBClusterParameterGroupsMessage> for DBClusterParameterGroupsMessage
### impl StructuralPartialEq for DBClusterParameterGroupsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DBClusterParameterGroupsMessage
### impl Send for DBClusterParameterGroupsMessage
### impl Sync for DBClusterParameterGroupsMessage
### impl Unpin for DBClusterParameterGroupsMessage
### impl UnwindSafe for DBClusterParameterGroupsMessage
Blanket Implementations
---
The standard blanket implementations apply: `Any`, `Borrow<T>`, `BorrowMut<T>`, `From<T>`, `Instrument`, `Into<U>`, `Same<T>`, `ToOwned`, `TryFrom<U>`, `TryInto<U>`, and `WithSubscriber`.
Struct rusoto_docdb::DBClusterRole
===
```
pub struct DBClusterRole {
pub role_arn: Option<String>,
pub status: Option<String>,
}
```
Describes an Identity and Access Management (IAM) role that is associated with a cluster.
Fields
---
`role_arn: Option<String>`The Amazon Resource Name (ARN) of the IAM role that is associated with the DB cluster.
`status: Option<String>`Describes the state of association between the IAM role and the cluster. The `Status` property returns one of the following values:
* `ACTIVE` - The IAM role ARN is associated with the cluster and can be used to access other Amazon Web Services services on your behalf.
* `PENDING` - The IAM role ARN is being associated with the cluster.
* `INVALID` - The IAM role ARN is associated with the cluster, but the cluster cannot assume the IAM role to access other Amazon Web Services services on your behalf.
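Only roles in the `ACTIVE` state are usable by the cluster, so a common pattern is filtering `DBCluster::associated_roles` on that status. A minimal sketch over the documented fields (the helper name is illustrative):
```
use rusoto_docdb::DBCluster;
/// ARNs of the IAM roles the cluster can currently use, i.e. associated
/// roles whose status is ACTIVE.
fn active_role_arns(cluster: &DBCluster) -> Vec<&str> {
cluster
.associated_roles
.as_deref()
.unwrap_or(&[])
.iter()
.filter(|role| role.status.as_deref() == Some("ACTIVE"))
.filter_map(|role| role.role_arn.as_deref())
.collect()
}
```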
Trait Implementations
---
### impl Clone for DBClusterRole
### impl Debug for DBClusterRole
### impl Default for DBClusterRole
### impl PartialEq<DBClusterRole> for DBClusterRole
### impl StructuralPartialEq for DBClusterRole
Auto Trait Implementations
---
### impl RefUnwindSafe for DBClusterRole
### impl Send for DBClusterRole
### impl Sync for DBClusterRole
### impl Unpin for DBClusterRole
### impl UnwindSafe for DBClusterRole
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mutT)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::DBClusterSnapshot
===
```
pub struct DBClusterSnapshot {
pub availability_zones: Option<Vec<String>>,
pub cluster_create_time: Option<String>,
pub db_cluster_identifier: Option<String>,
pub db_cluster_snapshot_arn: Option<String>,
pub db_cluster_snapshot_identifier: Option<String>,
pub engine: Option<String>,
pub engine_version: Option<String>,
pub kms_key_id: Option<String>,
pub master_username: Option<String>,
pub percent_progress: Option<i64>,
pub port: Option<i64>,
pub snapshot_create_time: Option<String>,
pub snapshot_type: Option<String>,
pub source_db_cluster_snapshot_arn: Option<String>,
pub status: Option<String>,
pub storage_encrypted: Option<bool>,
pub vpc_id: Option<String>,
}
```
Detailed information about a cluster snapshot.
Fields
---
`availability_zones: Option<Vec<String>>`Provides the list of Amazon EC2 Availability Zones that instances in the cluster snapshot can be restored in.
`cluster_create_time: Option<String>`Specifies the time when the cluster was created, in Universal Coordinated Time (UTC).
`db_cluster_identifier: Option<String>`Specifies the cluster identifier of the cluster that this cluster snapshot was created from.
`db_cluster_snapshot_arn: Option<String>`The Amazon Resource Name (ARN) for the cluster snapshot.
`db_cluster_snapshot_identifier: Option<String>`Specifies the identifier for the cluster snapshot.
`engine: Option<String>`Specifies the name of the database engine.
`engine_version: Option<String>`Provides the version of the database engine for this cluster snapshot.
`kms_key_id: Option<String>`If `StorageEncrypted` is `true`, the KMS key identifier for the encrypted cluster snapshot.
`master_username: Option<String>`Provides the master user name for the cluster snapshot.
`percent_progress: Option<i64>`Specifies the percentage of the estimated data that has been transferred.
`port: Option<i64>`Specifies the port that the cluster was listening on at the time of the snapshot.
`snapshot_create_time: Option<String>`Provides the time when the snapshot was taken, in UTC.
`snapshot_type: Option<String>`Provides the type of the cluster snapshot.
`source_db_cluster_snapshot_arn: Option<String>`If the cluster snapshot was copied from a source cluster snapshot, the ARN for the source cluster snapshot; otherwise, a null value.
`status: Option<String>`Specifies the status of this cluster snapshot.
`storage_encrypted: Option<bool>`Specifies whether the cluster snapshot is encrypted.
`vpc_id: Option<String>`Provides the virtual private cloud (VPC) ID that is associated with the cluster snapshot.
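As a quick illustration of working with the fields above, the following sketch formats a one-line summary of a snapshot; the helper name and the sample identifier, status, and version strings are assumptions made for the example only:
```
use rusoto_docdb::DBClusterSnapshot;
/// Formats a short human-readable summary of a cluster snapshot
/// (hypothetical helper; field meanings as documented above).
fn summarize(snapshot: &DBClusterSnapshot) -> String {
    format!(
        "{} [{}] engine {} {} ({}%)",
        snapshot.db_cluster_snapshot_identifier.as_deref().unwrap_or("<unknown>"),
        snapshot.status.as_deref().unwrap_or("<no status>"),
        snapshot.engine.as_deref().unwrap_or("?"),
        snapshot.engine_version.as_deref().unwrap_or("?"),
        snapshot.percent_progress.unwrap_or(0),
    )
}
fn main() {
    let snap = DBClusterSnapshot {
        db_cluster_snapshot_identifier: Some("example-snapshot".to_string()),
        status: Some("available".to_string()),
        engine: Some("docdb".to_string()),
        engine_version: Some("4.0.0".to_string()),
        percent_progress: Some(100),
        ..Default::default()
    };
    println!("{}", summarize(&snap));
}
```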
Trait Implementations
---
source### impl Clone for DBClusterSnapshot
source#### fn clone(&self) -> DBClusterSnapshot
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DBClusterSnapshot
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DBClusterSnapshot
source#### fn default() -> DBClusterSnapshot
Returns the “default value” for a type. Read more
source### impl PartialEq<DBClusterSnapshot> for DBClusterSnapshot
source#### fn eq(&self, other: &DBClusterSnapshot) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DBClusterSnapshot) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DBClusterSnapshot
Auto Trait Implementations
---
### impl RefUnwindSafe for DBClusterSnapshot
### impl Send for DBClusterSnapshot
### impl Sync for DBClusterSnapshot
### impl Unpin for DBClusterSnapshot
### impl UnwindSafe for DBClusterSnapshot
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `DBClusterRole` above.
Struct rusoto_docdb::DBClusterSnapshotAttribute
===
```
pub struct DBClusterSnapshotAttribute {
pub attribute_name: Option<String>,
pub attribute_values: Option<Vec<String>>,
}
```
Contains the name and values of a manual cluster snapshot attribute.
Manual cluster snapshot attributes are used to authorize other accounts to restore a manual cluster snapshot.
Fields
---
`attribute_name: Option<String>`The name of the manual cluster snapshot attribute.
The attribute named `restore` refers to the list of accounts that have permission to copy or restore the manual cluster snapshot.
`attribute_values: Option<Vec<String>>`The values for the manual cluster snapshot attribute.
If the `AttributeName` field is set to `restore`, then this element returns a list of IDs of the accounts that are authorized to copy or restore the manual cluster snapshot. If a value of `all` is in the list, then the manual cluster snapshot is public and available for any account to copy or restore.
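The following sketch turns the field description above into a small check for whether a manual snapshot is public; the helper name and the literal strings `"restore"` and `"all"` come directly from the field documentation, while everything else is illustrative:
```
use rusoto_docdb::DBClusterSnapshotAttribute;
/// Returns true when the `restore` attribute marks the manual snapshot as
/// public, i.e. its value list contains "all" (per the field docs above).
fn is_public(attr: &DBClusterSnapshotAttribute) -> bool {
    attr.attribute_name.as_deref() == Some("restore")
        && attr
            .attribute_values
            .as_ref()
            .map_or(false, |values| values.iter().any(|v| v == "all"))
}
fn main() {
    let attr = DBClusterSnapshotAttribute {
        attribute_name: Some("restore".to_string()),
        attribute_values: Some(vec!["all".to_string()]),
    };
    assert!(is_public(&attr));
}
```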
Trait Implementations
---
source### impl Clone for DBClusterSnapshotAttribute
source#### fn clone(&self) -> DBClusterSnapshotAttribute
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DBClusterSnapshotAttribute
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DBClusterSnapshotAttribute
source#### fn default() -> DBClusterSnapshotAttribute
Returns the “default value” for a type. Read more
source### impl PartialEq<DBClusterSnapshotAttribute> for DBClusterSnapshotAttribute
source#### fn eq(&self, other: &DBClusterSnapshotAttribute) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DBClusterSnapshotAttribute) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DBClusterSnapshotAttribute
Auto Trait Implementations
---
### impl RefUnwindSafe for DBClusterSnapshotAttribute
### impl Send for DBClusterSnapshotAttribute
### impl Sync for DBClusterSnapshotAttribute
### impl Unpin for DBClusterSnapshotAttribute
### impl UnwindSafe for DBClusterSnapshotAttribute
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `DBClusterRole` above.
Struct rusoto_docdb::DBClusterSnapshotAttributesResult
===
```
pub struct DBClusterSnapshotAttributesResult {
pub db_cluster_snapshot_attributes: Option<Vec<DBClusterSnapshotAttribute>>,
pub db_cluster_snapshot_identifier: Option<String>,
}
```
Detailed information about the attributes that are associated with a cluster snapshot.
Fields
---
`db_cluster_snapshot_attributes: Option<Vec<DBClusterSnapshotAttribute>>`The list of attributes and values for the cluster snapshot.
`db_cluster_snapshot_identifier: Option<String>`The identifier of the cluster snapshot that the attributes apply to.
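A minimal sketch of extracting the account IDs authorized to restore a snapshot from this result type; the helper name and the sample account ID are hypothetical, and the `"restore"` attribute name is taken from the `DBClusterSnapshotAttribute` documentation above:
```
use rusoto_docdb::{DBClusterSnapshotAttribute, DBClusterSnapshotAttributesResult};
/// Collects the account IDs listed under the `restore` attribute
/// (hypothetical helper based on the field documentation above).
fn restore_accounts(result: &DBClusterSnapshotAttributesResult) -> Vec<String> {
    result
        .db_cluster_snapshot_attributes
        .as_deref()
        .unwrap_or(&[])
        .iter()
        .filter(|attr| attr.attribute_name.as_deref() == Some("restore"))
        .flat_map(|attr| attr.attribute_values.clone().unwrap_or_default())
        .collect()
}
fn main() {
    let result = DBClusterSnapshotAttributesResult {
        db_cluster_snapshot_identifier: Some("example-snapshot".to_string()),
        db_cluster_snapshot_attributes: Some(vec![DBClusterSnapshotAttribute {
            attribute_name: Some("restore".to_string()),
            attribute_values: Some(vec!["123456789012".to_string()]),
        }]),
    };
    println!("{:?}", restore_accounts(&result));
}
```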
Trait Implementations
---
source### impl Clone for DBClusterSnapshotAttributesResult
source#### fn clone(&self) -> DBClusterSnapshotAttributesResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DBClusterSnapshotAttributesResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DBClusterSnapshotAttributesResult
source#### fn default() -> DBClusterSnapshotAttributesResult
Returns the “default value” for a type. Read more
source### impl PartialEq<DBClusterSnapshotAttributesResult> for DBClusterSnapshotAttributesResult
source#### fn eq(&self, other: &DBClusterSnapshotAttributesResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DBClusterSnapshotAttributesResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DBClusterSnapshotAttributesResult
Auto Trait Implementations
---
### impl RefUnwindSafe for DBClusterSnapshotAttributesResult
### impl Send for DBClusterSnapshotAttributesResult
### impl Sync for DBClusterSnapshotAttributesResult
### impl Unpin for DBClusterSnapshotAttributesResult
### impl UnwindSafe for DBClusterSnapshotAttributesResult
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `DBClusterRole` above.
Struct rusoto_docdb::DBClusterSnapshotMessage
===
```
pub struct DBClusterSnapshotMessage {
pub db_cluster_snapshots: Option<Vec<DBClusterSnapshot>>,
pub marker: Option<String>,
}
```
Represents the output of DescribeDBClusterSnapshots.
Fields
---
`db_cluster_snapshots: Option<Vec<DBClusterSnapshot>>`Provides a list of cluster snapshots.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
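Since this type is one page of DescribeDBClusterSnapshots output, a caller typically collects the snapshots and feeds a non-empty `marker` back into the next request. The sketch below shows only the response-handling side; the helper name is hypothetical and no service call is made:
```
use rusoto_docdb::DBClusterSnapshotMessage;
/// Extracts the snapshot identifiers from one page of output and returns the
/// pagination marker to pass to the next request, if any (hypothetical helper).
fn read_page(page: &DBClusterSnapshotMessage) -> (Vec<String>, Option<String>) {
    let ids = page
        .db_cluster_snapshots
        .as_deref()
        .unwrap_or(&[])
        .iter()
        .filter_map(|s| s.db_cluster_snapshot_identifier.clone())
        .collect();
    (ids, page.marker.clone())
}
fn main() {
    let page = DBClusterSnapshotMessage::default();
    let (ids, next_marker) = read_page(&page);
    println!("{} snapshots, more pages: {}", ids.len(), next_marker.is_some());
}
```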
Trait Implementations
---
source### impl Clone for DBClusterSnapshotMessage
source#### fn clone(&self) -> DBClusterSnapshotMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DBClusterSnapshotMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DBClusterSnapshotMessage
source#### fn default() -> DBClusterSnapshotMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DBClusterSnapshotMessage> for DBClusterSnapshotMessage
source#### fn eq(&self, other: &DBClusterSnapshotMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DBClusterSnapshotMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DBClusterSnapshotMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DBClusterSnapshotMessage
### impl Send for DBClusterSnapshotMessage
### impl Sync for DBClusterSnapshotMessage
### impl Unpin for DBClusterSnapshotMessage
### impl UnwindSafe for DBClusterSnapshotMessage
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `DBClusterRole` above.
Struct rusoto_docdb::DBEngineVersion
===
```
pub struct DBEngineVersion {
pub db_engine_description: Option<String>,
pub db_engine_version_description: Option<String>,
pub db_parameter_group_family: Option<String>,
pub engine: Option<String>,
pub engine_version: Option<String>,
pub exportable_log_types: Option<Vec<String>>,
pub supports_log_exports_to_cloudwatch_logs: Option<bool>,
pub valid_upgrade_target: Option<Vec<UpgradeTarget>>,
}
```
Detailed information about an engine version.
Fields
---
`db_engine_description: Option<String>`The description of the database engine.
`db_engine_version_description: Option<String>`The description of the database engine version.
`db_parameter_group_family: Option<String>`The name of the parameter group family for the database engine.
`engine: Option<String>`The name of the database engine.
`engine_version: Option<String>`The version number of the database engine.
`exportable_log_types: Option<Vec<String>>`The types of logs that the database engine has available for export to Amazon CloudWatch Logs.
`supports_log_exports_to_cloudwatch_logs: Option<bool>`A value that indicates whether the engine version supports exporting the log types specified by `ExportableLogTypes` to CloudWatch Logs.
`valid_upgrade_target: Option<Vec<UpgradeTarget>>`A list of engine versions that this database engine version can be upgraded to.
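A small sketch of combining the two CloudWatch-related fields above; the helper name and the sample log type strings (`"audit"`, `"profiler"`) are illustrative assumptions:
```
use rusoto_docdb::DBEngineVersion;
/// Lists the log types that can be exported to CloudWatch Logs for this
/// engine version, or an empty list when exports are unsupported.
fn exportable_logs(version: &DBEngineVersion) -> Vec<String> {
    if version.supports_log_exports_to_cloudwatch_logs == Some(true) {
        version.exportable_log_types.clone().unwrap_or_default()
    } else {
        Vec::new()
    }
}
fn main() {
    let version = DBEngineVersion {
        engine: Some("docdb".to_string()),
        engine_version: Some("4.0.0".to_string()),
        supports_log_exports_to_cloudwatch_logs: Some(true),
        exportable_log_types: Some(vec!["audit".to_string(), "profiler".to_string()]),
        ..Default::default()
    };
    println!("{:?}", exportable_logs(&version));
}
```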
Trait Implementations
---
source### impl Clone for DBEngineVersion
source#### fn clone(&self) -> DBEngineVersion
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DBEngineVersion
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DBEngineVersion
source#### fn default() -> DBEngineVersion
Returns the “default value” for a type. Read more
source### impl PartialEq<DBEngineVersion> for DBEngineVersion
source#### fn eq(&self, other: &DBEngineVersion) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DBEngineVersion) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DBEngineVersion
Auto Trait Implementations
---
### impl RefUnwindSafe for DBEngineVersion
### impl Send for DBEngineVersion
### impl Sync for DBEngineVersion
### impl Unpin for DBEngineVersion
### impl UnwindSafe for DBEngineVersion
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `DBClusterRole` above.
Struct rusoto_docdb::DBEngineVersionMessage
===
```
pub struct DBEngineVersionMessage {
pub db_engine_versions: Option<Vec<DBEngineVersion>>,
pub marker: Option<String>,
}
```
Represents the output of DescribeDBEngineVersions.
Fields
---
`db_engine_versions: Option<Vec<DBEngineVersion>>`Detailed information about one or more engine versions.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
Trait Implementations
---
source### impl Clone for DBEngineVersionMessage
source#### fn clone(&self) -> DBEngineVersionMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DBEngineVersionMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DBEngineVersionMessage
source#### fn default() -> DBEngineVersionMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DBEngineVersionMessage> for DBEngineVersionMessage
source#### fn eq(&self, other: &DBEngineVersionMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DBEngineVersionMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DBEngineVersionMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DBEngineVersionMessage
### impl Send for DBEngineVersionMessage
### impl Sync for DBEngineVersionMessage
### impl Unpin for DBEngineVersionMessage
### impl UnwindSafe for DBEngineVersionMessage
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `DBClusterRole` above.
Struct rusoto_docdb::DBInstance
===
```
pub struct DBInstance {
pub auto_minor_version_upgrade: Option<bool>,
pub availability_zone: Option<String>,
pub backup_retention_period: Option<i64>,
pub ca_certificate_identifier: Option<String>,
pub db_cluster_identifier: Option<String>,
pub db_instance_arn: Option<String>,
pub db_instance_class: Option<String>,
pub db_instance_identifier: Option<String>,
pub db_instance_status: Option<String>,
pub db_subnet_group: Option<DBSubnetGroup>,
pub dbi_resource_id: Option<String>,
pub enabled_cloudwatch_logs_exports: Option<Vec<String>>,
pub endpoint: Option<Endpoint>,
pub engine: Option<String>,
pub engine_version: Option<String>,
pub instance_create_time: Option<String>,
pub kms_key_id: Option<String>,
pub latest_restorable_time: Option<String>,
pub pending_modified_values: Option<PendingModifiedValues>,
pub preferred_backup_window: Option<String>,
pub preferred_maintenance_window: Option<String>,
pub promotion_tier: Option<i64>,
pub publicly_accessible: Option<bool>,
pub status_infos: Option<Vec<DBInstanceStatusInfo>>,
pub storage_encrypted: Option<bool>,
pub vpc_security_groups: Option<Vec<VpcSecurityGroupMembership>>,
}
```
Detailed information about an instance.
Fields
---
`auto_minor_version_upgrade: Option<bool>`Does not apply. Amazon DocumentDB does not perform minor version upgrades regardless of the value set.
`availability_zone: Option<String>`Specifies the name of the Availability Zone that the instance is located in.
`backup_retention_period: Option<i64>`Specifies the number of days for which automatic snapshots are retained.
`ca_certificate_identifier: Option<String>`The identifier of the CA certificate for this DB instance.
`db_cluster_identifier: Option<String>`Contains the name of the cluster that the instance is a member of if the instance is a member of a cluster.
`db_instance_arn: Option<String>`The Amazon Resource Name (ARN) for the instance.
`db_instance_class: Option<String>`Contains the name of the compute and memory capacity class of the instance.
`db_instance_identifier: Option<String>`Contains a user-provided database identifier. This identifier is the unique key that identifies an instance.
`db_instance_status: Option<String>`Specifies the current state of this database.
`db_subnet_group: Option<DBSubnetGroup>`Specifies information on the subnet group that is associated with the instance, including the name, description, and subnets in the subnet group.
`dbi_resource_id: Option<String>`The Region-unique, immutable identifier for the instance. This identifier is found in CloudTrail log entries whenever the KMS key for the instance is accessed.
`enabled_cloudwatch_logs_exports: Option<Vec<String>>`A list of log types that this instance is configured to export to CloudWatch Logs.
`endpoint: Option<Endpoint>`Specifies the connection endpoint.
`engine: Option<String>`Provides the name of the database engine to be used for this instance.
`engine_version: Option<String>`Indicates the database engine version.
`instance_create_time: Option<String>`Provides the date and time that the instance was created.
`kms_key_id: Option<String>` If `StorageEncrypted` is `true`, the KMS key identifier for the encrypted instance.
`latest_restorable_time: Option<String>`Specifies the latest time to which a database can be restored with point-in-time restore.
`pending_modified_values: Option<PendingModifiedValues>`Specifies that changes to the instance are pending. This element is included only when changes are pending. Specific changes are identified by subelements.
`preferred_backup_window: Option<String>` Specifies the daily time range during which automated backups are created if automated backups are enabled, as determined by the `BackupRetentionPeriod`.
`preferred_maintenance_window: Option<String>`Specifies the weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
`promotion_tier: Option<i64>`A value that specifies the order in which an Amazon DocumentDB replica is promoted to the primary instance after a failure of the existing primary instance.
`publicly_accessible: Option<bool>`Not supported. Amazon DocumentDB does not currently support public endpoints. The value of `PubliclyAccessible` is always `false`.
`status_infos: Option<Vec<DBInstanceStatusInfo>>`The status of a read replica. If the instance is not a read replica, this is blank.
`storage_encrypted: Option<bool>`Specifies whether or not the instance is encrypted.
`vpc_security_groups: Option<Vec<VpcSecurityGroupMembership>>`Provides a list of VPC security group elements that the instance belongs to.
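As an illustration of checking instance state with the fields above: `pending_modified_values` is present only when changes are pending, so its absence plus an `"available"` status is a simple readiness test. The helper name and the `"available"` status string are assumptions for this sketch:
```
use rusoto_docdb::DBInstance;
/// Returns true when the instance reports the "available" status and
/// no modifications are pending (status string is illustrative).
fn is_ready(instance: &DBInstance) -> bool {
    instance.db_instance_status.as_deref() == Some("available")
        && instance.pending_modified_values.is_none()
}
fn main() {
    let instance = DBInstance {
        db_instance_identifier: Some("example-instance".to_string()),
        db_instance_status: Some("available".to_string()),
        ..Default::default()
    };
    println!("ready: {}", is_ready(&instance));
}
```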
Trait Implementations
---
source### impl Clone for DBInstance
source#### fn clone(&self) -> DBInstance
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DBInstance
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DBInstance
source#### fn default() -> DBInstance
Returns the “default value” for a type. Read more
source### impl PartialEq<DBInstance> for DBInstance
source#### fn eq(&self, other: &DBInstance) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DBInstance) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DBInstance
Auto Trait Implementations
---
### impl RefUnwindSafe for DBInstance
### impl Send for DBInstance
### impl Sync for DBInstance
### impl Unpin for DBInstance
### impl UnwindSafe for DBInstance
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `DBClusterRole` above.
Struct rusoto_docdb::DBInstanceMessage
===
```
pub struct DBInstanceMessage {
pub db_instances: Option<Vec<DBInstance>>,
pub marker: Option<String>,
}
```
Represents the output of DescribeDBInstances.
Fields
---
`db_instances: Option<Vec<DBInstance>>`Detailed information about one or more instances.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
Trait Implementations
---
source### impl Clone for DBInstanceMessage
source#### fn clone(&self) -> DBInstanceMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DBInstanceMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DBInstanceMessage
source#### fn default() -> DBInstanceMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DBInstanceMessage> for DBInstanceMessage
source#### fn eq(&self, other: &DBInstanceMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DBInstanceMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DBInstanceMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DBInstanceMessage
### impl Send for DBInstanceMessage
### impl Sync for DBInstanceMessage
### impl Unpin for DBInstanceMessage
### impl UnwindSafe for DBInstanceMessage
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `DBClusterRole` above.
Struct rusoto_docdb::DBInstanceStatusInfo
===
```
pub struct DBInstanceStatusInfo {
pub message: Option<String>,
pub normal: Option<bool>,
pub status: Option<String>,
pub status_type: Option<String>,
}
```
Provides a list of status information for an instance.
Fields
---
`message: Option<String>`Details of the error if there is an error for the instance. If the instance is not in an error state, this value is blank.
`normal: Option<bool>`A Boolean value that is `true` if the instance is operating normally, or `false` if the instance is in an error state.
`status: Option<String>`Status of the instance. For a `StatusType` of read replica, the values can be `replicating`, `error`, `stopped`, or `terminated`.
`status_type: Option<String>`This value is currently "`read replication`."
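A short sketch of flagging an unhealthy read replica from this status entry; the helper name and the sample message are hypothetical, while the `"read replication"` status type and `normal` semantics come from the field docs above:
```
use rusoto_docdb::DBInstanceStatusInfo;
/// Reports whether a read-replica status entry signals a problem,
/// following the field semantics documented above.
fn replica_has_error(info: &DBInstanceStatusInfo) -> bool {
    info.status_type.as_deref() == Some("read replication") && info.normal == Some(false)
}
fn main() {
    let info = DBInstanceStatusInfo {
        status_type: Some("read replication".to_string()),
        status: Some("error".to_string()),
        normal: Some(false),
        message: Some("example error detail".to_string()),
    };
    println!("problem: {}", replica_has_error(&info));
}
```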
Trait Implementations
---
source### impl Clone for DBInstanceStatusInfo
source#### fn clone(&self) -> DBInstanceStatusInfo
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DBInstanceStatusInfo
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DBInstanceStatusInfo
source#### fn default() -> DBInstanceStatusInfo
Returns the “default value” for a type. Read more
source### impl PartialEq<DBInstanceStatusInfo> for DBInstanceStatusInfo
source#### fn eq(&self, other: &DBInstanceStatusInfo) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DBInstanceStatusInfo) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DBInstanceStatusInfo
Auto Trait Implementations
---
### impl RefUnwindSafe for DBInstanceStatusInfo
### impl Send for DBInstanceStatusInfo
### impl Sync for DBInstanceStatusInfo
### impl Unpin for DBInstanceStatusInfo
### impl UnwindSafe for DBInstanceStatusInfo
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `DBClusterRole` above.
Struct rusoto_docdb::DBSubnetGroup
===
```
pub struct DBSubnetGroup {
pub db_subnet_group_arn: Option<String>,
pub db_subnet_group_description: Option<String>,
pub db_subnet_group_name: Option<String>,
pub subnet_group_status: Option<String>,
pub subnets: Option<Vec<Subnet>>,
pub vpc_id: Option<String>,
}
```
Detailed information about a subnet group.
Fields
---
`db_subnet_group_arn: Option<String>`The Amazon Resource Name (ARN) for the DB subnet group.
`db_subnet_group_description: Option<String>`Provides the description of the subnet group.
`db_subnet_group_name: Option<String>`The name of the subnet group.
`subnet_group_status: Option<String>`Provides the status of the subnet group.
`subnets: Option<Vec<Subnet>>`Detailed information about one or more subnets within a subnet group.
`vpc_id: Option<String>`Provides the virtual private cloud (VPC) ID of the subnet group.
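A brief sketch of summarizing a subnet group using the fields above; the `"Complete"` status string and the group name are illustrative assumptions, and the `Subnet` element type is documented elsewhere in this crate:
```
use rusoto_docdb::DBSubnetGroup;
fn main() {
    // Sample value; the derived Default fills the unspecified fields.
    let group = DBSubnetGroup {
        db_subnet_group_name: Some("example-subnet-group".to_string()),
        subnet_group_status: Some("Complete".to_string()),
        subnets: Some(Vec::new()),
        ..Default::default()
    };
    println!(
        "{}: {} ({} subnets)",
        group.db_subnet_group_name.as_deref().unwrap_or("?"),
        group.subnet_group_status.as_deref().unwrap_or("?"),
        group.subnets.as_ref().map_or(0, |s| s.len()),
    );
}
```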
Trait Implementations
---
source### impl Clone for DBSubnetGroup
source#### fn clone(&self) -> DBSubnetGroup
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DBSubnetGroup
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DBSubnetGroup
source#### fn default() -> DBSubnetGroup
Returns the “default value” for a type. Read more
source### impl PartialEq<DBSubnetGroup> for DBSubnetGroup
source#### fn eq(&self, other: &DBSubnetGroup) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DBSubnetGroup) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DBSubnetGroup
Auto Trait Implementations
---
### impl RefUnwindSafe for DBSubnetGroup
### impl Send for DBSubnetGroup
### impl Sync for DBSubnetGroup
### impl Unpin for DBSubnetGroup
### impl UnwindSafe for DBSubnetGroup
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `DBClusterRole` above.
Struct rusoto_docdb::DBSubnetGroupMessage
===
```
pub struct DBSubnetGroupMessage {
pub db_subnet_groups: Option<Vec<DBSubnetGroup>>,
pub marker: Option<String>,
}
```
Represents the output of DescribeDBSubnetGroups.
Fields
---
`db_subnet_groups: Option<Vec<DBSubnetGroup>>`Detailed information about one or more subnet groups.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
Trait Implementations
---
source### impl Clone for DBSubnetGroupMessage
source#### fn clone(&self) -> DBSubnetGroupMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DBSubnetGroupMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DBSubnetGroupMessage
source#### fn default() -> DBSubnetGroupMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DBSubnetGroupMessage> for DBSubnetGroupMessage
source#### fn eq(&self, other: &DBSubnetGroupMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DBSubnetGroupMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DBSubnetGroupMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DBSubnetGroupMessage
### impl Send for DBSubnetGroupMessage
### impl Sync for DBSubnetGroupMessage
### impl Unpin for DBSubnetGroupMessage
### impl UnwindSafe for DBSubnetGroupMessage
Blanket Implementations
---
The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) are identical to those listed for `DBClusterRole` above.
Struct rusoto_docdb::DeleteDBClusterMessage
===
```
pub struct DeleteDBClusterMessage {
pub db_cluster_identifier: String,
pub final_db_snapshot_identifier: Option<String>,
pub skip_final_snapshot: Option<bool>,
}
```
Represents the input to DeleteDBCluster.
Fields
---
`db_cluster_identifier: String`The cluster identifier for the cluster to be deleted. This parameter isn't case sensitive.
Constraints:
* Must match an existing `DBClusterIdentifier`.
`final_db_snapshot_identifier: Option<String>` The cluster snapshot identifier of the new cluster snapshot created when `SkipFinalSnapshot` is set to `false`.
Specifying this parameter and also setting the `SkipFinalSnapshot` parameter to `true` results in an error.
Constraints:
* Must be from 1 to 255 letters, numbers, or hyphens.
* The first character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
`skip_final_snapshot: Option<bool>` Determines whether a final cluster snapshot is created before the cluster is deleted. If `true` is specified, no cluster snapshot is created. If `false` is specified, a cluster snapshot is created before the DB cluster is deleted.
If `SkipFinalSnapshot` is `false`, you must specify a `FinalDBSnapshotIdentifier` parameter.
Default: `false`
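Below is a minimal, hypothetical usage sketch (not part of the generated reference): it assumes the crate's `DocdbClient`, the generated `Docdb` trait method `delete_db_cluster`, a Tokio runtime, and made-up identifiers. With `skip_final_snapshot` set to `false`, a final snapshot identifier is supplied, per the constraint above.
```
use rusoto_core::Region;
use rusoto_docdb::{DeleteDBClusterMessage, Docdb, DocdbClient};
#[tokio::main]
async fn main() {
let client = DocdbClient::new(Region::UsEast1);
// Hypothetical identifiers; replace with your own.
let request = DeleteDBClusterMessage {
db_cluster_identifier: "sample-cluster".to_string(),
// SkipFinalSnapshot is false, so a final snapshot identifier is required
// and must follow the naming constraints listed above.
skip_final_snapshot: Some(false),
final_db_snapshot_identifier: Some("sample-cluster-final".to_string()),
};
match client.delete_db_cluster(request).await {
Ok(result) => println!("cluster being deleted: {:?}", result.db_cluster),
Err(err) => eprintln!("DeleteDBCluster failed: {}", err),
}
}
```
If `skip_final_snapshot` were `Some(true)` instead, `final_db_snapshot_identifier` would have to be omitted to avoid the error described above.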
Trait Implementations
---
source### impl Clone for DeleteDBClusterMessage
source#### fn clone(&self) -> DeleteDBClusterMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteDBClusterMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteDBClusterMessage
source#### fn default() -> DeleteDBClusterMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteDBClusterMessage> for DeleteDBClusterMessage
source#### fn eq(&self, other: &DeleteDBClusterMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBClusterMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBClusterMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBClusterMessage
### impl Send for DeleteDBClusterMessage
### impl Sync for DeleteDBClusterMessage
### impl Unpin for DeleteDBClusterMessage
### impl UnwindSafe for DeleteDBClusterMessage
Struct rusoto_docdb::DeleteDBClusterParameterGroupMessage
===
```
pub struct DeleteDBClusterParameterGroupMessage {
pub db_cluster_parameter_group_name: String,
}
```
Represents the input to DeleteDBClusterParameterGroup.
Fields
---
`db_cluster_parameter_group_name: String`
The name of the cluster parameter group.
Constraints:
* Must be the name of an existing cluster parameter group.
* You can't delete a default cluster parameter group.
* Cannot be associated with any clusters.
Trait Implementations
---
source### impl Clone for DeleteDBClusterParameterGroupMessage
source#### fn clone(&self) -> DeleteDBClusterParameterGroupMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteDBClusterParameterGroupMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteDBClusterParameterGroupMessage
source#### fn default() -> DeleteDBClusterParameterGroupMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteDBClusterParameterGroupMessage> for DeleteDBClusterParameterGroupMessage
source#### fn eq(&self, other: &DeleteDBClusterParameterGroupMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBClusterParameterGroupMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBClusterParameterGroupMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBClusterParameterGroupMessage
### impl Send for DeleteDBClusterParameterGroupMessage
### impl Sync for DeleteDBClusterParameterGroupMessage
### impl Unpin for DeleteDBClusterParameterGroupMessage
### impl UnwindSafe for DeleteDBClusterParameterGroupMessage
Struct rusoto_docdb::DeleteDBClusterResult
===
```
pub struct DeleteDBClusterResult {
pub db_cluster: Option<DBCluster>,
}
```
Fields
---
`db_cluster: Option<DBCluster>`
Trait Implementations
---
source### impl Clone for DeleteDBClusterResult
source#### fn clone(&self) -> DeleteDBClusterResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteDBClusterResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteDBClusterResult
source#### fn default() -> DeleteDBClusterResult
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteDBClusterResult> for DeleteDBClusterResult
source#### fn eq(&self, other: &DeleteDBClusterResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBClusterResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBClusterResult
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBClusterResult
### impl Send for DeleteDBClusterResult
### impl Sync for DeleteDBClusterResult
### impl Unpin for DeleteDBClusterResult
### impl UnwindSafe for DeleteDBClusterResult
Struct rusoto_docdb::DeleteDBClusterSnapshotMessage
===
```
pub struct DeleteDBClusterSnapshotMessage {
pub db_cluster_snapshot_identifier: String,
}
```
Represents the input to DeleteDBClusterSnapshot.
Fields
---
`db_cluster_snapshot_identifier: String`
The identifier of the cluster snapshot to delete.
Constraints: Must be the name of an existing cluster snapshot in the `available` state.
Trait Implementations
---
source### impl Clone for DeleteDBClusterSnapshotMessage
source#### fn clone(&self) -> DeleteDBClusterSnapshotMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteDBClusterSnapshotMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteDBClusterSnapshotMessage
source#### fn default() -> DeleteDBClusterSnapshotMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteDBClusterSnapshotMessage> for DeleteDBClusterSnapshotMessage
source#### fn eq(&self, other: &DeleteDBClusterSnapshotMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBClusterSnapshotMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBClusterSnapshotMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBClusterSnapshotMessage
### impl Send for DeleteDBClusterSnapshotMessage
### impl Sync for DeleteDBClusterSnapshotMessage
### impl Unpin for DeleteDBClusterSnapshotMessage
### impl UnwindSafe for DeleteDBClusterSnapshotMessage
Struct rusoto_docdb::DeleteDBClusterSnapshotResult
===
```
pub struct DeleteDBClusterSnapshotResult {
pub db_cluster_snapshot: Option<DBClusterSnapshot>,
}
```
Fields
---
`db_cluster_snapshot: Option<DBClusterSnapshot>`
Trait Implementations
---
source### impl Clone for DeleteDBClusterSnapshotResult
source#### fn clone(&self) -> DeleteDBClusterSnapshotResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteDBClusterSnapshotResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteDBClusterSnapshotResult
source#### fn default() -> DeleteDBClusterSnapshotResult
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteDBClusterSnapshotResult> for DeleteDBClusterSnapshotResult
source#### fn eq(&self, other: &DeleteDBClusterSnapshotResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBClusterSnapshotResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBClusterSnapshotResult
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBClusterSnapshotResult
### impl Send for DeleteDBClusterSnapshotResult
### impl Sync for DeleteDBClusterSnapshotResult
### impl Unpin for DeleteDBClusterSnapshotResult
### impl UnwindSafe for DeleteDBClusterSnapshotResult
Struct rusoto_docdb::DeleteDBInstanceMessage
===
```
pub struct DeleteDBInstanceMessage {
pub db_instance_identifier: String,
}
```
Represents the input to DeleteDBInstance.
Fields
---
`db_instance_identifier: String`
The instance identifier for the instance to be deleted. This parameter isn't case sensitive.
Constraints:
* Must match the name of an existing instance.
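As a hedged sketch only (this page does not describe a full deletion workflow): the helper below deletes a set of already-known instances through the generated `delete_db_instance` method. The client setup, runtime, helper name, and identifiers are assumptions, not part of this reference.
```
use rusoto_core::Region;
use rusoto_docdb::{DeleteDBInstanceMessage, Docdb, DocdbClient};
// Hypothetical helper: delete instances whose identifiers are already known.
async fn delete_instances(client: &DocdbClient, instance_ids: &[&str]) {
for id in instance_ids.iter().copied() {
let request = DeleteDBInstanceMessage {
db_instance_identifier: id.to_string(),
};
match client.delete_db_instance(request).await {
Ok(_) => println!("deletion started for {}", id),
Err(err) => eprintln!("DeleteDBInstance({}) failed: {}", id, err),
}
}
}
#[tokio::main]
async fn main() {
let client = DocdbClient::new(Region::UsEast1);
delete_instances(&client, &["sample-instance-1", "sample-instance-2"]).await;
}
```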
Trait Implementations
---
source### impl Clone for DeleteDBInstanceMessage
source#### fn clone(&self) -> DeleteDBInstanceMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteDBInstanceMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteDBInstanceMessage
source#### fn default() -> DeleteDBInstanceMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteDBInstanceMessage> for DeleteDBInstanceMessage
source#### fn eq(&self, other: &DeleteDBInstanceMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBInstanceMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBInstanceMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBInstanceMessage
### impl Send for DeleteDBInstanceMessage
### impl Sync for DeleteDBInstanceMessage
### impl Unpin for DeleteDBInstanceMessage
### impl UnwindSafe for DeleteDBInstanceMessage
Struct rusoto_docdb::DeleteDBInstanceResult
===
```
pub struct DeleteDBInstanceResult {
pub db_instance: Option<DBInstance>,
}
```
Fields
---
`db_instance: Option<DBInstance>`
Trait Implementations
---
source### impl Clone for DeleteDBInstanceResult
source#### fn clone(&self) -> DeleteDBInstanceResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteDBInstanceResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteDBInstanceResult
source#### fn default() -> DeleteDBInstanceResult
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteDBInstanceResult> for DeleteDBInstanceResult
source#### fn eq(&self, other: &DeleteDBInstanceResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBInstanceResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBInstanceResult
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBInstanceResult
### impl Send for DeleteDBInstanceResult
### impl Sync for DeleteDBInstanceResult
### impl Unpin for DeleteDBInstanceResult
### impl UnwindSafe for DeleteDBInstanceResult
Struct rusoto_docdb::DeleteDBSubnetGroupMessage
===
```
pub struct DeleteDBSubnetGroupMessage {
pub db_subnet_group_name: String,
}
```
Represents the input to DeleteDBSubnetGroup.
Fields
---
`db_subnet_group_name: String`
The name of the database subnet group to delete.
You can't delete the default subnet group.
Constraints:
* Must match the name of an existing `DBSubnetGroup`.
* Must not be default.
Example: `mySubnetgroup`
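A short sketch using the example name above; it assumes the generated `delete_db_subnet_group` method (which returns no payload on success), the crate's `DocdbClient`, and a Tokio runtime.
```
use rusoto_core::Region;
use rusoto_docdb::{DeleteDBSubnetGroupMessage, Docdb, DocdbClient};
#[tokio::main]
async fn main() {
let client = DocdbClient::new(Region::UsEast1);
let request = DeleteDBSubnetGroupMessage {
// Must be an existing, non-default subnet group.
db_subnet_group_name: "mySubnetgroup".to_string(),
};
if let Err(err) = client.delete_db_subnet_group(request).await {
eprintln!("DeleteDBSubnetGroup failed: {}", err);
}
}
```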
Trait Implementations
---
source### impl Clone for DeleteDBSubnetGroupMessage
source#### fn clone(&self) -> DeleteDBSubnetGroupMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteDBSubnetGroupMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteDBSubnetGroupMessage
source#### fn default() -> DeleteDBSubnetGroupMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteDBSubnetGroupMessage> for DeleteDBSubnetGroupMessage
source#### fn eq(&self, other: &DeleteDBSubnetGroupMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBSubnetGroupMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBSubnetGroupMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBSubnetGroupMessage
### impl Send for DeleteDBSubnetGroupMessage
### impl Sync for DeleteDBSubnetGroupMessage
### impl Unpin for DeleteDBSubnetGroupMessage
### impl UnwindSafe for DeleteDBSubnetGroupMessage
Struct rusoto_docdb::DeleteEventSubscriptionMessage
===
```
pub struct DeleteEventSubscriptionMessage {
pub subscription_name: String,
}
```
Represents the input to DeleteEventSubscription.
Fields
---
`subscription_name: String`
The name of the Amazon DocumentDB event notification subscription that you want to delete.
Trait Implementations
---
source### impl Clone for DeleteEventSubscriptionMessage
source#### fn clone(&self) -> DeleteEventSubscriptionMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteEventSubscriptionMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteEventSubscriptionMessage
source#### fn default() -> DeleteEventSubscriptionMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteEventSubscriptionMessage> for DeleteEventSubscriptionMessage
source#### fn eq(&self, other: &DeleteEventSubscriptionMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteEventSubscriptionMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteEventSubscriptionMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteEventSubscriptionMessage
### impl Send for DeleteEventSubscriptionMessage
### impl Sync for DeleteEventSubscriptionMessage
### impl Unpin for DeleteEventSubscriptionMessage
### impl UnwindSafe for DeleteEventSubscriptionMessage
Struct rusoto_docdb::DeleteEventSubscriptionResult
===
```
pub struct DeleteEventSubscriptionResult {
pub event_subscription: Option<EventSubscription>,
}
```
Fields
---
`event_subscription: Option<EventSubscription>`
Trait Implementations
---
source### impl Clone for DeleteEventSubscriptionResult
source#### fn clone(&self) -> DeleteEventSubscriptionResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteEventSubscriptionResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteEventSubscriptionResult
source#### fn default() -> DeleteEventSubscriptionResult
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteEventSubscriptionResult> for DeleteEventSubscriptionResult
source#### fn eq(&self, other: &DeleteEventSubscriptionResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteEventSubscriptionResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteEventSubscriptionResult
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteEventSubscriptionResult
### impl Send for DeleteEventSubscriptionResult
### impl Sync for DeleteEventSubscriptionResult
### impl Unpin for DeleteEventSubscriptionResult
### impl UnwindSafe for DeleteEventSubscriptionResult
Struct rusoto_docdb::DeleteGlobalClusterMessage
===
```
pub struct DeleteGlobalClusterMessage {
pub global_cluster_identifier: String,
}
```
Represents the input to DeleteGlobalCluster.
Fields
---
`global_cluster_identifier: String`
The cluster identifier of the global cluster being deleted.
Trait Implementations
---
source### impl Clone for DeleteGlobalClusterMessage
source#### fn clone(&self) -> DeleteGlobalClusterMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteGlobalClusterMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteGlobalClusterMessage
source#### fn default() -> DeleteGlobalClusterMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteGlobalClusterMessage> for DeleteGlobalClusterMessage
source#### fn eq(&self, other: &DeleteGlobalClusterMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteGlobalClusterMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteGlobalClusterMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteGlobalClusterMessage
### impl Send for DeleteGlobalClusterMessage
### impl Sync for DeleteGlobalClusterMessage
### impl Unpin for DeleteGlobalClusterMessage
### impl UnwindSafe for DeleteGlobalClusterMessage
Struct rusoto_docdb::DeleteGlobalClusterResult
===
```
pub struct DeleteGlobalClusterResult {
pub global_cluster: Option<GlobalCluster>,
}
```
Fields
---
`global_cluster: Option<GlobalCluster>`
Trait Implementations
---
source### impl Clone for DeleteGlobalClusterResult
source#### fn clone(&self) -> DeleteGlobalClusterResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DeleteGlobalClusterResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DeleteGlobalClusterResult
source#### fn default() -> DeleteGlobalClusterResult
Returns the “default value” for a type. Read more
source### impl PartialEq<DeleteGlobalClusterResult> for DeleteGlobalClusterResult
source#### fn eq(&self, other: &DeleteGlobalClusterResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteGlobalClusterResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteGlobalClusterResult
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteGlobalClusterResult
### impl Send for DeleteGlobalClusterResult
### impl Sync for DeleteGlobalClusterResult
### impl Unpin for DeleteGlobalClusterResult
### impl UnwindSafe for DeleteGlobalClusterResult
Struct rusoto_docdb::DescribeCertificatesMessage
===
```
pub struct DescribeCertificatesMessage {
pub certificate_identifier: Option<String>,
pub filters: Option<Vec<Filter>>,
pub marker: Option<String>,
pub max_records: Option<i64>,
}
```
Fields
---
`certificate_identifier: Option<String>`
The user-supplied certificate identifier. If this parameter is specified, information for only the specified certificate is returned. If this parameter is omitted, a list of up to `MaxRecords` certificates is returned. This parameter is not case sensitive.
Constraints:
* Must match an existing `CertificateIdentifier`.
`filters: Option<Vec<Filter>>`
This parameter is not currently supported.
`marker: Option<String>`
An optional pagination token provided by a previous `DescribeCertificates` request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>`
The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints:
* Minimum: 20
* Maximum: 100
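A hypothetical pagination sketch: it assumes the generated `describe_certificates` method and that its response exposes `certificates` and `marker` fields matching the marker/`MaxRecords` behavior described above; the client setup and runtime are likewise assumptions.
```
use rusoto_core::Region;
use rusoto_docdb::{DescribeCertificatesMessage, Docdb, DocdbClient};
#[tokio::main]
async fn main() {
let client = DocdbClient::new(Region::UsEast1);
let mut marker: Option<String> = None;
loop {
let request = DescribeCertificatesMessage {
certificate_identifier: None, // omit to list up to MaxRecords certificates
filters: None,                // not currently supported
max_records: Some(20),        // smallest allowed page size
marker: marker.clone(),
};
let page = client
.describe_certificates(request)
.await
.expect("DescribeCertificates failed");
for cert in page.certificates.unwrap_or_default() {
println!("{:?}", cert.certificate_identifier);
}
// A marker in the response means more records remain.
marker = page.marker;
if marker.is_none() {
break;
}
}
}
```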
Trait Implementations
---
source### impl Clone for DescribeCertificatesMessage
source#### fn clone(&self) -> DescribeCertificatesMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DescribeCertificatesMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DescribeCertificatesMessage
source#### fn default() -> DescribeCertificatesMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DescribeCertificatesMessage> for DescribeCertificatesMessage
source#### fn eq(&self, other: &DescribeCertificatesMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeCertificatesMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeCertificatesMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeCertificatesMessage
### impl Send for DescribeCertificatesMessage
### impl Sync for DescribeCertificatesMessage
### impl Unpin for DescribeCertificatesMessage
### impl UnwindSafe for DescribeCertificatesMessage
Struct rusoto_docdb::DescribeDBClusterParameterGroupsMessage
===
```
pub struct DescribeDBClusterParameterGroupsMessage {
pub db_cluster_parameter_group_name: Option<String>,
pub filters: Option<Vec<Filter>>,
pub marker: Option<String>,
pub max_records: Option<i64>,
}
```
Represents the input to DescribeDBClusterParameterGroups.
Fields
---
`db_cluster_parameter_group_name: Option<String>`
The name of a specific cluster parameter group to return details for.
Constraints:
* If provided, must match the name of an existing `DBClusterParameterGroup`.
`filters: Option<Vec<Filter>>`
This parameter is not currently supported.
`marker: Option<String>`
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>`
The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token (marker) is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
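A brief lookup sketch under the same assumptions as the earlier examples (the generated `describe_db_cluster_parameter_groups` method, a response carrying a `db_cluster_parameter_groups` list, and a hypothetical group name).
```
use rusoto_core::Region;
use rusoto_docdb::{DescribeDBClusterParameterGroupsMessage, Docdb, DocdbClient};
#[tokio::main]
async fn main() {
let client = DocdbClient::new(Region::UsEast1);
let request = DescribeDBClusterParameterGroupsMessage {
// If provided, must match an existing DBClusterParameterGroup.
db_cluster_parameter_group_name: Some("custom-docdb-params".to_string()),
filters: None,
marker: None,
max_records: None,
};
match client.describe_db_cluster_parameter_groups(request).await {
Ok(result) => {
for group in result.db_cluster_parameter_groups.unwrap_or_default() {
println!("{:?}", group.db_cluster_parameter_group_name);
}
}
Err(err) => eprintln!("DescribeDBClusterParameterGroups failed: {}", err),
}
}
```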
Trait Implementations
---
source### impl Clone for DescribeDBClusterParameterGroupsMessage
source#### fn clone(&self) -> DescribeDBClusterParameterGroupsMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for DescribeDBClusterParameterGroupsMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for DescribeDBClusterParameterGroupsMessage
source#### fn default() -> DescribeDBClusterParameterGroupsMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<DescribeDBClusterParameterGroupsMessage> for DescribeDBClusterParameterGroupsMessage
source#### fn eq(&self, other: &DescribeDBClusterParameterGroupsMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeDBClusterParameterGroupsMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeDBClusterParameterGroupsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBClusterParameterGroupsMessage
### impl Send for DescribeDBClusterParameterGroupsMessage
### impl Sync for DescribeDBClusterParameterGroupsMessage
### impl Unpin for DescribeDBClusterParameterGroupsMessage
### impl UnwindSafe for DescribeDBClusterParameterGroupsMessage
Struct rusoto_docdb::DescribeDBClusterParametersMessage
===
```
pub struct DescribeDBClusterParametersMessage {
pub db_cluster_parameter_group_name: String,
pub filters: Option<Vec<Filter>>,
pub marker: Option<String>,
pub max_records: Option<i64>,
pub source: Option<String>,
}
```
Represents the input to DescribeDBClusterParameters.
Fields
---
`db_cluster_parameter_group_name: String`The name of a specific cluster parameter group to return parameter details for.
Constraints:
* If provided, must match the name of an existing `DBClusterParameterGroup`.
`filters: Option<Vec<Filter>>`This parameter is not currently supported.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>` The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token (marker) is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
`source: Option<String>` A value that indicates to return only parameters for a specific source. Parameter sources can be `engine`, `service`, or `customer`.
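Only `db_cluster_parameter_group_name` is required; the remaining fields can come from `Default`. A minimal sketch with a placeholder group name, restricted to engine-defined parameters:
```
use rusoto_docdb::DescribeDBClusterParametersMessage;
// List only the engine-defined parameters of one cluster parameter group.
let request = DescribeDBClusterParametersMessage {
db_cluster_parameter_group_name: "my-parameter-group".to_string(),
source: Some("engine".to_string()),
..Default::default()
};
```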
Trait Implementations
---
### impl Clone for DescribeDBClusterParametersMessage
#### fn clone(&self) -> DescribeDBClusterParametersMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeDBClusterParametersMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeDBClusterParametersMessage
#### fn default() -> DescribeDBClusterParametersMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeDBClusterParametersMessage> for DescribeDBClusterParametersMessage
#### fn eq(&self, other: &DescribeDBClusterParametersMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeDBClusterParametersMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeDBClusterParametersMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBClusterParametersMessage
### impl Send for DescribeDBClusterParametersMessage
### impl Sync for DescribeDBClusterParametersMessage
### impl Unpin for DescribeDBClusterParametersMessage
### impl UnwindSafe for DescribeDBClusterParametersMessage
Struct rusoto_docdb::DescribeDBClusterSnapshotAttributesMessage
===
```
pub struct DescribeDBClusterSnapshotAttributesMessage {
pub db_cluster_snapshot_identifier: String,
}
```
Represents the input to DescribeDBClusterSnapshotAttributes.
Fields
---
`db_cluster_snapshot_identifier: String`The identifier for the cluster snapshot to describe the attributes for.
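With a single required field, building the request is a one-liner; the snapshot identifier below is a placeholder:
```
use rusoto_docdb::DescribeDBClusterSnapshotAttributesMessage;
let request = DescribeDBClusterSnapshotAttributesMessage {
db_cluster_snapshot_identifier: "my-cluster-snapshot".to_string(),
};
```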
Trait Implementations
---
### impl Clone for DescribeDBClusterSnapshotAttributesMessage
#### fn clone(&self) -> DescribeDBClusterSnapshotAttributesMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeDBClusterSnapshotAttributesMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeDBClusterSnapshotAttributesMessage
#### fn default() -> DescribeDBClusterSnapshotAttributesMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeDBClusterSnapshotAttributesMessage> for DescribeDBClusterSnapshotAttributesMessage
#### fn eq(&self, other: &DescribeDBClusterSnapshotAttributesMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeDBClusterSnapshotAttributesMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeDBClusterSnapshotAttributesMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBClusterSnapshotAttributesMessage
### impl Send for DescribeDBClusterSnapshotAttributesMessage
### impl Sync for DescribeDBClusterSnapshotAttributesMessage
### impl Unpin for DescribeDBClusterSnapshotAttributesMessage
### impl UnwindSafe for DescribeDBClusterSnapshotAttributesMessage
Struct rusoto_docdb::DescribeDBClusterSnapshotAttributesResult
===
```
pub struct DescribeDBClusterSnapshotAttributesResult {
pub db_cluster_snapshot_attributes_result: Option<DBClusterSnapshotAttributesResult>,
}
```
Fields
---
`db_cluster_snapshot_attributes_result: Option<DBClusterSnapshotAttributesResult>`
Trait Implementations
---
### impl Clone for DescribeDBClusterSnapshotAttributesResult
#### fn clone(&self) -> DescribeDBClusterSnapshotAttributesResult
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeDBClusterSnapshotAttributesResult
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeDBClusterSnapshotAttributesResult
#### fn default() -> DescribeDBClusterSnapshotAttributesResult
Returns the “default value” for a type.
### impl PartialEq<DescribeDBClusterSnapshotAttributesResult> for DescribeDBClusterSnapshotAttributesResult
#### fn eq(&self, other: &DescribeDBClusterSnapshotAttributesResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeDBClusterSnapshotAttributesResult) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeDBClusterSnapshotAttributesResult
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBClusterSnapshotAttributesResult
### impl Send for DescribeDBClusterSnapshotAttributesResult
### impl Sync for DescribeDBClusterSnapshotAttributesResult
### impl Unpin for DescribeDBClusterSnapshotAttributesResult
### impl UnwindSafe for DescribeDBClusterSnapshotAttributesResult
Struct rusoto_docdb::DescribeDBClusterSnapshotsMessage
===
```
pub struct DescribeDBClusterSnapshotsMessage {
pub db_cluster_identifier: Option<String>,
pub db_cluster_snapshot_identifier: Option<String>,
pub filters: Option<Vec<Filter>>,
pub include_public: Option<bool>,
pub include_shared: Option<bool>,
pub marker: Option<String>,
pub max_records: Option<i64>,
pub snapshot_type: Option<String>,
}
```
Represents the input to DescribeDBClusterSnapshots.
Fields
---
`db_cluster_identifier: Option<String>`The ID of the cluster to retrieve the list of cluster snapshots for. This parameter can't be used with the `DBClusterSnapshotIdentifier` parameter. This parameter is not case sensitive.
Constraints:
* If provided, must match the identifier of an existing `DBCluster`.
`db_cluster_snapshot_identifier: Option<String>`A specific cluster snapshot identifier to describe. This parameter can't be used with the `DBClusterIdentifier` parameter. This value is stored as a lowercase string.
Constraints:
* If provided, must match the identifier of an existing `DBClusterSnapshot`.
* If this identifier is for an automated snapshot, the `SnapshotType` parameter must also be specified.
`filters: Option<Vec<Filter>>`This parameter is not currently supported.
`include_public: Option<bool>`Set to `true` to include manual cluster snapshots that are public and can be copied or restored by any account, and otherwise `false`. The default is `false`.
`include_shared: Option<bool>`Set to `true` to include shared manual cluster snapshots from other accounts that this account has been given permission to copy or restore, and otherwise `false`. The default is `false`.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>` The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token (marker) is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
`snapshot_type: Option<String>`The type of cluster snapshots to be returned. You can specify one of the following values:
* `automated` - Return all cluster snapshots that Amazon DocumentDB has automatically created for your account.
* `manual` - Return all cluster snapshots that you have manually created for your account.
* `shared` - Return all manual cluster snapshots that have been shared to your account.
* `public` - Return all cluster snapshots that have been marked as public.
If you don't specify a `SnapshotType` value, then both automated and manual cluster snapshots are returned. You can include shared cluster snapshots with these results by setting the `IncludeShared` parameter to `true`. You can include public cluster snapshots with these results by setting the `IncludePublic` parameter to `true`.
The `IncludeShared` and `IncludePublic` parameters don't apply for `SnapshotType` values of `manual` or `automated`. The `IncludePublic` parameter doesn't apply when `SnapshotType` is set to `shared`. The `IncludeShared` parameter doesn't apply when `SnapshotType` is set to `public`.
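A minimal sketch that lists the automated and manual snapshots of one cluster and also pulls in snapshots shared from other accounts; the cluster identifier is a placeholder and the remaining fields keep their `Default` values:
```
use rusoto_docdb::DescribeDBClusterSnapshotsMessage;
// No snapshot_type: automated and manual snapshots are both returned,
// and include_shared adds snapshots shared by other accounts.
let request = DescribeDBClusterSnapshotsMessage {
db_cluster_identifier: Some("my-cluster".to_string()),
include_shared: Some(true),
..Default::default()
};
```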
Trait Implementations
---
### impl Clone for DescribeDBClusterSnapshotsMessage
#### fn clone(&self) -> DescribeDBClusterSnapshotsMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeDBClusterSnapshotsMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeDBClusterSnapshotsMessage
#### fn default() -> DescribeDBClusterSnapshotsMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeDBClusterSnapshotsMessage> for DescribeDBClusterSnapshotsMessage
#### fn eq(&self, other: &DescribeDBClusterSnapshotsMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeDBClusterSnapshotsMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeDBClusterSnapshotsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBClusterSnapshotsMessage
### impl Send for DescribeDBClusterSnapshotsMessage
### impl Sync for DescribeDBClusterSnapshotsMessage
### impl Unpin for DescribeDBClusterSnapshotsMessage
### impl UnwindSafe for DescribeDBClusterSnapshotsMessage
Struct rusoto_docdb::DescribeDBClustersMessage
===
```
pub struct DescribeDBClustersMessage {
pub db_cluster_identifier: Option<String>,
pub filters: Option<Vec<Filter>>,
pub marker: Option<String>,
pub max_records: Option<i64>,
}
```
Represents the input to DescribeDBClusters.
Fields
---
`db_cluster_identifier: Option<String>`The user-provided cluster identifier. If this parameter is specified, information from only the specific cluster is returned. This parameter isn't case sensitive.
Constraints:
* If provided, must match an existing `DBClusterIdentifier`.
`filters: Option<Vec<Filter>>`A filter that specifies one or more clusters to describe.
Supported filters:
* `db-cluster-id` - Accepts cluster identifiers and cluster Amazon Resource Names (ARNs). The results list only includes information about the clusters identified by these ARNs.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>` The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token (marker) is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
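A minimal sketch using the supported `db-cluster-id` filter; the cluster names are placeholders, and the `Filter` type is assumed to expose `name` and `values` fields as it does elsewhere in this crate:
```
use rusoto_docdb::{DescribeDBClustersMessage, Filter};
// Restrict the result list to two clusters by identifier.
let request = DescribeDBClustersMessage {
filters: Some(vec![Filter {
name: "db-cluster-id".to_string(), // assumed field names on Filter
values: vec!["cluster-a".to_string(), "cluster-b".to_string()],
}]),
..Default::default()
};
```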
Trait Implementations
---
### impl Clone for DescribeDBClustersMessage
#### fn clone(&self) -> DescribeDBClustersMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeDBClustersMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeDBClustersMessage
#### fn default() -> DescribeDBClustersMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeDBClustersMessage> for DescribeDBClustersMessage
#### fn eq(&self, other: &DescribeDBClustersMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeDBClustersMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeDBClustersMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBClustersMessage
### impl Send for DescribeDBClustersMessage
### impl Sync for DescribeDBClustersMessage
### impl Unpin for DescribeDBClustersMessage
### impl UnwindSafe for DescribeDBClustersMessage
Struct rusoto_docdb::DescribeDBEngineVersionsMessage
===
```
pub struct DescribeDBEngineVersionsMessage {
pub db_parameter_group_family: Option<String>,
pub default_only: Option<bool>,
pub engine: Option<String>,
pub engine_version: Option<String>,
pub filters: Option<Vec<Filter>>,
pub list_supported_character_sets: Option<bool>,
pub list_supported_timezones: Option<bool>,
pub marker: Option<String>,
pub max_records: Option<i64>,
}
```
Represents the input to DescribeDBEngineVersions.
Fields
---
`db_parameter_group_family: Option<String>`The name of a specific parameter group family to return details for.
Constraints:
* If provided, must match an existing `DBParameterGroupFamily`.
`default_only: Option<bool>`Indicates that only the default version of the specified engine or engine and major version combination is returned.
`engine: Option<String>`The database engine to return.
`engine_version: Option<String>`The database engine version to return.
Example: `3.6.0`
`filters: Option<Vec<Filter>>`This parameter is not currently supported.
`list_supported_character_sets: Option<bool>`If this parameter is specified and the requested engine supports the `CharacterSetName` parameter for `CreateDBInstance`, the response includes a list of supported character sets for each engine version.
`list_supported_timezones: Option<bool>`If this parameter is specified and the requested engine supports the `TimeZone` parameter for `CreateDBInstance`, the response includes a list of supported time zones for each engine version.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>` The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token (marker) is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
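A minimal sketch asking only for the default version of a single engine; the engine name is illustrative and the remaining fields keep their `Default` values:
```
use rusoto_docdb::DescribeDBEngineVersionsMessage;
// Only the default engine version, nothing else.
let request = DescribeDBEngineVersionsMessage {
engine: Some("docdb".to_string()), // placeholder engine name
default_only: Some(true),
..Default::default()
};
```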
Trait Implementations
---
### impl Clone for DescribeDBEngineVersionsMessage
#### fn clone(&self) -> DescribeDBEngineVersionsMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeDBEngineVersionsMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeDBEngineVersionsMessage
#### fn default() -> DescribeDBEngineVersionsMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeDBEngineVersionsMessage> for DescribeDBEngineVersionsMessage
#### fn eq(&self, other: &DescribeDBEngineVersionsMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeDBEngineVersionsMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeDBEngineVersionsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBEngineVersionsMessage
### impl Send for DescribeDBEngineVersionsMessage
### impl Sync for DescribeDBEngineVersionsMessage
### impl Unpin for DescribeDBEngineVersionsMessage
### impl UnwindSafe for DescribeDBEngineVersionsMessage
Struct rusoto_docdb::DescribeDBInstancesMessage
===
```
pub struct DescribeDBInstancesMessage {
pub db_instance_identifier: Option<String>,
pub filters: Option<Vec<Filter>>,
pub marker: Option<String>,
pub max_records: Option<i64>,
}
```
Represents the input to DescribeDBInstances.
Fields
---
`db_instance_identifier: Option<String>`The user-provided instance identifier. If this parameter is specified, information from only the specific instance is returned. This parameter isn't case sensitive.
Constraints:
* If provided, must match the identifier of an existing `DBInstance`.
`filters: Option<Vec<Filter>>`A filter that specifies one or more instances to describe.
Supported filters:
* `db-cluster-id` - Accepts cluster identifiers and cluster Amazon Resource Names (ARNs). The results list includes only the information about the instances that are associated with the clusters that are identified by these ARNs.
* `db-instance-id` - Accepts instance identifiers and instance ARNs. The results list includes only the information about the instances that are identified by these ARNs.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>` The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token (marker) is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
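The `marker`/`max_records` pair supports paging. The sketch below drives that loop end to end; it assumes the usual rusoto pattern of a `DocdbClient` implementing a `Docdb` trait with an async `describe_db_instances` method, and response fields `db_instances` and `marker`, none of which are shown on this page:
```
use rusoto_docdb::{DescribeDBInstancesMessage, Docdb, DocdbClient};
// Page through every instance, 20 records at a time. The client type, the
// describe_db_instances method, and the db_instances/marker response fields
// are assumptions based on the usual rusoto layout, not taken from this page.
async fn list_all_instances(client: &DocdbClient) -> Result<(), Box<dyn std::error::Error>> {
let mut marker: Option<String> = None;
loop {
let page = client
.describe_db_instances(DescribeDBInstancesMessage {
max_records: Some(20),
marker: marker.clone(),
..Default::default()
})
.await?;
for instance in page.db_instances.unwrap_or_default() {
println!("{:?}", instance); // DBInstance derives Debug
}
marker = page.marker;
if marker.is_none() {
break; // no pagination token means this was the last page
}
}
Ok(())
}
```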
Trait Implementations
---
### impl Clone for DescribeDBInstancesMessage
#### fn clone(&self) -> DescribeDBInstancesMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeDBInstancesMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeDBInstancesMessage
#### fn default() -> DescribeDBInstancesMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeDBInstancesMessage> for DescribeDBInstancesMessage
#### fn eq(&self, other: &DescribeDBInstancesMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeDBInstancesMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeDBInstancesMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBInstancesMessage
### impl Send for DescribeDBInstancesMessage
### impl Sync for DescribeDBInstancesMessage
### impl Unpin for DescribeDBInstancesMessage
### impl UnwindSafe for DescribeDBInstancesMessage
Struct rusoto_docdb::DescribeDBSubnetGroupsMessage
===
```
pub struct DescribeDBSubnetGroupsMessage {
pub db_subnet_group_name: Option<String>,
pub filters: Option<Vec<Filter>>,
pub marker: Option<String>,
pub max_records: Option<i64>,
}
```
Represents the input to DescribeDBSubnetGroups.
Fields
---
`db_subnet_group_name: Option<String>`The name of the subnet group to return details for.
`filters: Option<Vec<Filter>>`This parameter is not currently supported.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>` The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token (marker) is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
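A minimal sketch that looks up one subnet group by name instead of listing them all; the name is a placeholder and the other fields keep their `Default` values:
```
use rusoto_docdb::DescribeDBSubnetGroupsMessage;
let request = DescribeDBSubnetGroupsMessage {
db_subnet_group_name: Some("my-subnet-group".to_string()),
..Default::default()
};
```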
Trait Implementations
---
### impl Clone for DescribeDBSubnetGroupsMessage
#### fn clone(&self) -> DescribeDBSubnetGroupsMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeDBSubnetGroupsMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeDBSubnetGroupsMessage
#### fn default() -> DescribeDBSubnetGroupsMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeDBSubnetGroupsMessage> for DescribeDBSubnetGroupsMessage
#### fn eq(&self, other: &DescribeDBSubnetGroupsMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeDBSubnetGroupsMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeDBSubnetGroupsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBSubnetGroupsMessage
### impl Send for DescribeDBSubnetGroupsMessage
### impl Sync for DescribeDBSubnetGroupsMessage
### impl Unpin for DescribeDBSubnetGroupsMessage
### impl UnwindSafe for DescribeDBSubnetGroupsMessage
Struct rusoto_docdb::DescribeEngineDefaultClusterParametersMessage
===
```
pub struct DescribeEngineDefaultClusterParametersMessage {
pub db_parameter_group_family: String,
pub filters: Option<Vec<Filter>>,
pub marker: Option<String>,
pub max_records: Option<i64>,
}
```
Represents the input to DescribeEngineDefaultClusterParameters.
Fields
---
`db_parameter_group_family: String`The name of the cluster parameter group family to return the engine parameter information for.
`filters: Option<Vec<Filter>>`This parameter is not currently supported.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>` The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token (marker) is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
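Only the parameter group family is required. A minimal sketch; the family name below is a placeholder and should match the engine version you actually run:
```
use rusoto_docdb::DescribeEngineDefaultClusterParametersMessage;
let request = DescribeEngineDefaultClusterParametersMessage {
db_parameter_group_family: "docdb4.0".to_string(), // placeholder family
..Default::default()
};
```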
Trait Implementations
---
### impl Clone for DescribeEngineDefaultClusterParametersMessage
#### fn clone(&self) -> DescribeEngineDefaultClusterParametersMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeEngineDefaultClusterParametersMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeEngineDefaultClusterParametersMessage
#### fn default() -> DescribeEngineDefaultClusterParametersMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeEngineDefaultClusterParametersMessage> for DescribeEngineDefaultClusterParametersMessage
#### fn eq(&self, other: &DescribeEngineDefaultClusterParametersMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeEngineDefaultClusterParametersMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeEngineDefaultClusterParametersMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeEngineDefaultClusterParametersMessage
### impl Send for DescribeEngineDefaultClusterParametersMessage
### impl Sync for DescribeEngineDefaultClusterParametersMessage
### impl Unpin for DescribeEngineDefaultClusterParametersMessage
### impl UnwindSafe for DescribeEngineDefaultClusterParametersMessage
Struct rusoto_docdb::DescribeEngineDefaultClusterParametersResult
===
```
pub struct DescribeEngineDefaultClusterParametersResult {
pub engine_defaults: Option<EngineDefaults>,
}
```
Fields
---
`engine_defaults: Option<EngineDefaults>`
Trait Implementations
---
### impl Clone for DescribeEngineDefaultClusterParametersResult
#### fn clone(&self) -> DescribeEngineDefaultClusterParametersResult
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeEngineDefaultClusterParametersResult
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeEngineDefaultClusterParametersResult
#### fn default() -> DescribeEngineDefaultClusterParametersResult
Returns the “default value” for a type.
### impl PartialEq<DescribeEngineDefaultClusterParametersResult> for DescribeEngineDefaultClusterParametersResult
#### fn eq(&self, other: &DescribeEngineDefaultClusterParametersResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeEngineDefaultClusterParametersResult) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeEngineDefaultClusterParametersResult
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeEngineDefaultClusterParametersResult
### impl Send for DescribeEngineDefaultClusterParametersResult
### impl Sync for DescribeEngineDefaultClusterParametersResult
### impl Unpin for DescribeEngineDefaultClusterParametersResult
### impl UnwindSafe for DescribeEngineDefaultClusterParametersResult
Struct rusoto_docdb::DescribeEventCategoriesMessage
===
```
pub struct DescribeEventCategoriesMessage {
pub filters: Option<Vec<Filter>>,
pub source_type: Option<String>,
}
```
Represents the input to DescribeEventCategories.
Fields
---
`filters: Option<Vec<Filter>>`This parameter is not currently supported.
`source_type: Option<String>`The type of source that is generating the events.
Valid values: `db-instance`, `db-parameter-group`, `db-security-group`
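A minimal sketch restricted to instance-level event categories; the unset field keeps its `Default` value:
```
use rusoto_docdb::DescribeEventCategoriesMessage;
let request = DescribeEventCategoriesMessage {
source_type: Some("db-instance".to_string()),
..Default::default()
};
```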
Trait Implementations
---
### impl Clone for DescribeEventCategoriesMessage
#### fn clone(&self) -> DescribeEventCategoriesMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeEventCategoriesMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeEventCategoriesMessage
#### fn default() -> DescribeEventCategoriesMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeEventCategoriesMessage> for DescribeEventCategoriesMessage
#### fn eq(&self, other: &DescribeEventCategoriesMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeEventCategoriesMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeEventCategoriesMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeEventCategoriesMessage
### impl Send for DescribeEventCategoriesMessage
### impl Sync for DescribeEventCategoriesMessage
### impl Unpin for DescribeEventCategoriesMessage
### impl UnwindSafe for DescribeEventCategoriesMessage
Struct rusoto_docdb::DescribeEventSubscriptionsMessage
===
```
pub struct DescribeEventSubscriptionsMessage {
pub filters: Option<Vec<Filter>>,
pub marker: Option<String>,
pub max_records: Option<i64>,
pub subscription_name: Option<String>,
}
```
Represents the input to DescribeEventSubscriptions.
Fields
---
`filters: Option<Vec<Filter>>`This parameter is not currently supported.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>` The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token (marker) is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
`subscription_name: Option<String>`The name of the Amazon DocumentDB event notification subscription that you want to describe.
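As with the other `Describe*` inputs, only the fields that narrow the query need to be set. A sketch with a hypothetical subscription name and the smallest allowed page size:
```
use rusoto_docdb::DescribeEventSubscriptionsMessage;
// Look up a single subscription by name and keep pages small;
// 20 is the documented minimum for `max_records`.
let request = DescribeEventSubscriptionsMessage {
subscription_name: Some("my-docdb-events".to_string()), // hypothetical name
max_records: Some(20),
..Default::default()
};
```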
Trait Implementations
---
### impl Clone for DescribeEventSubscriptionsMessage
#### fn clone(&self) -> DescribeEventSubscriptionsMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeEventSubscriptionsMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeEventSubscriptionsMessage
#### fn default() -> DescribeEventSubscriptionsMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeEventSubscriptionsMessage> for DescribeEventSubscriptionsMessage
#### fn eq(&self, other: &DescribeEventSubscriptionsMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeEventSubscriptionsMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeEventSubscriptionsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeEventSubscriptionsMessage
### impl Send for DescribeEventSubscriptionsMessage
### impl Sync for DescribeEventSubscriptionsMessage
### impl Unpin for DescribeEventSubscriptionsMessage
### impl UnwindSafe for DescribeEventSubscriptionsMessage
Blanket Implementations
---
The blanket implementations for this type are identical to those listed under `DescribeEventCategoriesMessage` above (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `ToOwned`, `Same`, `Instrument`, `WithSubscriber`).
Struct rusoto_docdb::DescribeEventsMessage
===
```
pub struct DescribeEventsMessage {
pub duration: Option<i64>,
pub end_time: Option<String>,
pub event_categories: Option<Vec<String>>,
pub filters: Option<Vec<Filter>>,
pub marker: Option<String>,
pub max_records: Option<i64>,
pub source_identifier: Option<String>,
pub source_type: Option<String>,
pub start_time: Option<String>,
}
```
Represents the input to DescribeEvents.
Fields
---
`duration: Option<i64>`The number of minutes to retrieve events for.
Default: 60
`end_time: Option<String>` The end of the time interval for which to retrieve events, specified in ISO 8601 format.
Example: 2009-07-08T18:00Z
`event_categories: Option<Vec<String>>`A list of event categories that trigger notifications for an event notification subscription.
`filters: Option<Vec<Filter>>`This parameter is not currently supported.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>` The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token (marker) is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
`source_identifier: Option<String>`The identifier of the event source for which events are returned. If not specified, then all sources are included in the response.
Constraints:
* If `SourceIdentifier` is provided, `SourceType` must also be provided.
* If the source type is `DBInstance`, a `DBInstanceIdentifier` must be provided.
* If the source type is `DBSecurityGroup`, a `DBSecurityGroupName` must be provided.
* If the source type is `DBParameterGroup`, a `DBParameterGroupName` must be provided.
* If the source type is `DBSnapshot`, a `DBSnapshotIdentifier` must be provided.
* Cannot end with a hyphen or contain two consecutive hyphens.
`source_type: Option<String>`The event source to retrieve events for. If no value is specified, all events are returned.
`start_time: Option<String>` The beginning of the time interval to retrieve events for, specified in ISO 8601 format.
Example: 2009-07-08T18:00Z
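A sketch of a request for one instance's events over a fixed window; per the constraints above, `source_type` accompanies `source_identifier` (the identifier and timestamps are illustrative):
```
use rusoto_docdb::DescribeEventsMessage;
// Events for a single instance between two ISO 8601 timestamps.
let request = DescribeEventsMessage {
source_identifier: Some("sample-instance".to_string()), // hypothetical identifier
source_type: Some("db-instance".to_string()),
start_time: Some("2009-07-08T18:00Z".to_string()),
end_time: Some("2009-07-08T20:00Z".to_string()),
..Default::default()
};
```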
Trait Implementations
---
### impl Clone for DescribeEventsMessage
#### fn clone(&self) -> DescribeEventsMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeEventsMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeEventsMessage
#### fn default() -> DescribeEventsMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeEventsMessage> for DescribeEventsMessage
#### fn eq(&self, other: &DescribeEventsMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeEventsMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeEventsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeEventsMessage
### impl Send for DescribeEventsMessage
### impl Sync for DescribeEventsMessage
### impl Unpin for DescribeEventsMessage
### impl UnwindSafe for DescribeEventsMessage
Blanket Implementations
---
The blanket implementations for this type are identical to those listed under `DescribeEventCategoriesMessage` above (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `ToOwned`, `Same`, `Instrument`, `WithSubscriber`).
Struct rusoto_docdb::DescribeGlobalClustersMessage
===
```
pub struct DescribeGlobalClustersMessage {
pub filters: Option<Vec<Filter>>,
pub global_cluster_identifier: Option<String>,
pub marker: Option<String>,
pub max_records: Option<i64>,
}
```
Fields
---
`filters: Option<Vec<Filter>>`A filter that specifies one or more global DB clusters to describe.
Supported filters: `db-cluster-id` accepts cluster identifiers and cluster Amazon Resource Names (ARNs). The results list will only include information about the clusters identified by these ARNs.
`global_cluster_identifier: Option<String>`The user-supplied cluster identifier. If this parameter is specified, information from only the specific cluster is returned. This parameter isn't case-sensitive.
`marker: Option<String>`An optional pagination token provided by a previous `DescribeGlobalClusters` request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>`The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token called a marker is included in the response so that you can retrieve the remaining results.
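A sketch that narrows the listing to one global cluster via the documented `db-cluster-id` filter. It assumes the crate's `Filter` type exposes `name` and `values` fields, as filter-bearing AWS inputs generally do; the cluster identifier is a placeholder:
```
use rusoto_docdb::{DescribeGlobalClustersMessage, Filter};
// List only the global cluster(s) matching the given identifier.
let request = DescribeGlobalClustersMessage {
filters: Some(vec![Filter {
name: "db-cluster-id".to_string(),
values: vec!["my-global-cluster".to_string()], // placeholder identifier
}]),
..Default::default()
};
```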
Trait Implementations
---
### impl Clone for DescribeGlobalClustersMessage
#### fn clone(&self) -> DescribeGlobalClustersMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeGlobalClustersMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeGlobalClustersMessage
#### fn default() -> DescribeGlobalClustersMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeGlobalClustersMessage> for DescribeGlobalClustersMessage
#### fn eq(&self, other: &DescribeGlobalClustersMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeGlobalClustersMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeGlobalClustersMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeGlobalClustersMessage
### impl Send for DescribeGlobalClustersMessage
### impl Sync for DescribeGlobalClustersMessage
### impl Unpin for DescribeGlobalClustersMessage
### impl UnwindSafe for DescribeGlobalClustersMessage
Blanket Implementations
---
The blanket implementations for this type are identical to those listed under `DescribeEventCategoriesMessage` above (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `ToOwned`, `Same`, `Instrument`, `WithSubscriber`).
Struct rusoto_docdb::DescribeOrderableDBInstanceOptionsMessage
===
```
pub struct DescribeOrderableDBInstanceOptionsMessage {
pub db_instance_class: Option<String>,
pub engine: String,
pub engine_version: Option<String>,
pub filters: Option<Vec<Filter>>,
pub license_model: Option<String>,
pub marker: Option<String>,
pub max_records: Option<i64>,
pub vpc: Option<bool>,
}
```
Represents the input to DescribeOrderableDBInstanceOptions.
Fields
---
`db_instance_class: Option<String>`The instance class filter value. Specify this parameter to show only the available offerings that match the specified instance class.
`engine: String`The name of the engine to retrieve instance options for.
`engine_version: Option<String>`The engine version filter value. Specify this parameter to show only the available offerings that match the specified engine version.
`filters: Option<Vec<Filter>>`This parameter is not currently supported.
`license_model: Option<String>`The license model filter value. Specify this parameter to show only the available offerings that match the specified license model.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>` The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token (marker) is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
`vpc: Option<bool>`The virtual private cloud (VPC) filter value. Specify this parameter to show only the available VPC or non-VPC offerings.
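`engine` is the only non-`Option` field, so it must always be supplied; the other fields simply narrow the offerings. A sketch (the instance class is illustrative; `docdb` is the DocumentDB engine name):
```
use rusoto_docdb::DescribeOrderableDBInstanceOptionsMessage;
// Orderable options for one engine, restricted to VPC offerings
// of a particular instance class.
let request = DescribeOrderableDBInstanceOptionsMessage {
engine: "docdb".to_string(),
db_instance_class: Some("db.r5.large".to_string()), // illustrative class
vpc: Some(true),
..Default::default()
};
```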
Trait Implementations
---
### impl Clone for DescribeOrderableDBInstanceOptionsMessage
#### fn clone(&self) -> DescribeOrderableDBInstanceOptionsMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribeOrderableDBInstanceOptionsMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribeOrderableDBInstanceOptionsMessage
#### fn default() -> DescribeOrderableDBInstanceOptionsMessage
Returns the “default value” for a type.
### impl PartialEq<DescribeOrderableDBInstanceOptionsMessage> for DescribeOrderableDBInstanceOptionsMessage
#### fn eq(&self, other: &DescribeOrderableDBInstanceOptionsMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribeOrderableDBInstanceOptionsMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribeOrderableDBInstanceOptionsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeOrderableDBInstanceOptionsMessage
### impl Send for DescribeOrderableDBInstanceOptionsMessage
### impl Sync for DescribeOrderableDBInstanceOptionsMessage
### impl Unpin for DescribeOrderableDBInstanceOptionsMessage
### impl UnwindSafe for DescribeOrderableDBInstanceOptionsMessage
Blanket Implementations
---
The blanket implementations for this type are identical to those listed under `DescribeEventCategoriesMessage` above (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `ToOwned`, `Same`, `Instrument`, `WithSubscriber`).
Struct rusoto_docdb::DescribePendingMaintenanceActionsMessage
===
```
pub struct DescribePendingMaintenanceActionsMessage {
pub filters: Option<Vec<Filter>>,
pub marker: Option<String>,
pub max_records: Option<i64>,
pub resource_identifier: Option<String>,
}
```
Represents the input to DescribePendingMaintenanceActions.
Fields
---
`filters: Option<Vec<Filter>>`A filter that specifies one or more resources to return pending maintenance actions for.
Supported filters:
* `db-cluster-id` - Accepts cluster identifiers and cluster Amazon Resource Names (ARNs). The results list includes only pending maintenance actions for the clusters identified by these ARNs.
* `db-instance-id` - Accepts instance identifiers and instance ARNs. The results list includes only pending maintenance actions for the DB instances identified by these ARNs.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`max_records: Option<i64>` The maximum number of records to include in the response. If more records exist than the specified `MaxRecords` value, a pagination token (marker) is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
`resource_identifier: Option<String>`The ARN of a resource to return pending maintenance actions for.
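A sketch that asks for the pending actions of a single resource by ARN (the ARN below is a placeholder):
```
use rusoto_docdb::DescribePendingMaintenanceActionsMessage;
// Pending maintenance actions for one resource only.
let request = DescribePendingMaintenanceActionsMessage {
resource_identifier: Some(
"arn:aws:rds:us-east-1:123456789012:db:sample-instance".to_string(), // placeholder ARN
),
..Default::default()
};
```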
Trait Implementations
---
### impl Clone for DescribePendingMaintenanceActionsMessage
#### fn clone(&self) -> DescribePendingMaintenanceActionsMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DescribePendingMaintenanceActionsMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DescribePendingMaintenanceActionsMessage
#### fn default() -> DescribePendingMaintenanceActionsMessage
Returns the “default value” for a type.
### impl PartialEq<DescribePendingMaintenanceActionsMessage> for DescribePendingMaintenanceActionsMessage
#### fn eq(&self, other: &DescribePendingMaintenanceActionsMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DescribePendingMaintenanceActionsMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DescribePendingMaintenanceActionsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribePendingMaintenanceActionsMessage
### impl Send for DescribePendingMaintenanceActionsMessage
### impl Sync for DescribePendingMaintenanceActionsMessage
### impl Unpin for DescribePendingMaintenanceActionsMessage
### impl UnwindSafe for DescribePendingMaintenanceActionsMessage
Blanket Implementations
---
The blanket implementations for this type are identical to those listed under `DescribeEventCategoriesMessage` above (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `ToOwned`, `Same`, `Instrument`, `WithSubscriber`).
Struct rusoto_docdb::Endpoint
===
```
pub struct Endpoint {
pub address: Option<String>,
pub hosted_zone_id: Option<String>,
pub port: Option<i64>,
}
```
Network information for accessing a cluster or instance. Client programs must specify a valid endpoint to access these Amazon DocumentDB resources.
Fields
---
`address: Option<String>`Specifies the DNS address of the instance.
`hosted_zone_id: Option<String>`Specifies the ID that Amazon Route 53 assigns when you create a hosted zone.
`port: Option<i64>`Specifies the port that the database engine is listening on.
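Because every field is optional, client code has to handle missing values when turning an `Endpoint` into something a driver can connect to. A small helper sketch (the fallback port is an assumption, chosen as the conventional MongoDB port):
```
use rusoto_docdb::Endpoint;
// Build a "host:port" string from an Endpoint, if an address is present.
fn endpoint_address(ep: &Endpoint) -> Option<String> {
let host = ep.address.as_ref()?;
let port = ep.port.unwrap_or(27017); // assumed fallback port
Some(format!("{}:{}", host, port))
}
```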
Trait Implementations
---
### impl Clone for Endpoint
#### fn clone(&self) -> Endpoint
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Endpoint
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for Endpoint
#### fn default() -> Endpoint
Returns the “default value” for a type.
### impl PartialEq<Endpoint> for Endpoint
#### fn eq(&self, other: &Endpoint) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Endpoint) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for Endpoint
Auto Trait Implementations
---
### impl RefUnwindSafe for Endpoint
### impl Send for Endpoint
### impl Sync for Endpoint
### impl Unpin for Endpoint
### impl UnwindSafe for Endpoint
Blanket Implementations
---
The blanket implementations for this type are identical to those listed under `DescribeEventCategoriesMessage` above (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `ToOwned`, `Same`, `Instrument`, `WithSubscriber`).
Struct rusoto_docdb::EngineDefaults
===
```
pub struct EngineDefaults {
pub db_parameter_group_family: Option<String>,
pub marker: Option<String>,
pub parameters: Option<Vec<Parameter>>,
}
```
Contains the result of a successful invocation of the `DescribeEngineDefaultClusterParameters` operation.
Fields
---
`db_parameter_group_family: Option<String>`The name of the cluster parameter group family to return the engine parameter information for.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`parameters: Option<Vec<Parameter>>`The parameters of a particular cluster parameter group family.
Trait Implementations
---
### impl Clone for EngineDefaults
#### fn clone(&self) -> EngineDefaults
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for EngineDefaults
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for EngineDefaults
#### fn default() -> EngineDefaults
Returns the “default value” for a type.
### impl PartialEq<EngineDefaults> for EngineDefaults
#### fn eq(&self, other: &EngineDefaults) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &EngineDefaults) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for EngineDefaults
Auto Trait Implementations
---
### impl RefUnwindSafe for EngineDefaults
### impl Send for EngineDefaults
### impl Sync for EngineDefaults
### impl Unpin for EngineDefaults
### impl UnwindSafe for EngineDefaults
Blanket Implementations
---
The blanket implementations for this type are identical to those listed under `DescribeEventCategoriesMessage` above (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `ToOwned`, `Same`, `Instrument`, `WithSubscriber`).
Struct rusoto_docdb::Event
===
```
pub struct Event {
pub date: Option<String>,
pub event_categories: Option<Vec<String>>,
pub message: Option<String>,
pub source_arn: Option<String>,
pub source_identifier: Option<String>,
pub source_type: Option<String>,
}
```
Detailed information about an event.
Fields
---
`date: Option<String>`Specifies the date and time of the event.
`event_categories: Option<Vec<String>>`Specifies the category for the event.
`message: Option<String>`Provides the text of this event.
`source_arn: Option<String>`The Amazon Resource Name (ARN) for the event.
`source_identifier: Option<String>`Provides the identifier for the source of the event.
`source_type: Option<String>`Specifies the source type for this event.
Trait Implementations
---
### impl Clone for Event
#### fn clone(&self) -> Event
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Event
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for Event
#### fn default() -> Event
Returns the “default value” for a type.
### impl PartialEq<Event> for Event
#### fn eq(&self, other: &Event) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Event) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for Event
Auto Trait Implementations
---
### impl RefUnwindSafe for Event
### impl Send for Event
### impl Sync for Event
### impl Unpin for Event
### impl UnwindSafe for Event
Blanket Implementations
---
The blanket implementations for this type are identical to those listed under `DescribeEventCategoriesMessage` above (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `ToOwned`, `Same`, `Instrument`, `WithSubscriber`).
Struct rusoto_docdb::EventCategoriesMap
===
```
pub struct EventCategoriesMap {
pub event_categories: Option<Vec<String>>,
pub source_type: Option<String>,
}
```
An event source type, accompanied by one or more event category names.
Fields
---
`event_categories: Option<Vec<String>>`The event categories for the specified source type.
`source_type: Option<String>`The source type that the returned categories belong to.
Trait Implementations
---
### impl Clone for EventCategoriesMap
#### fn clone(&self) -> EventCategoriesMap
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for EventCategoriesMap
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for EventCategoriesMap
#### fn default() -> EventCategoriesMap
Returns the “default value” for a type.
### impl PartialEq<EventCategoriesMap> for EventCategoriesMap
#### fn eq(&self, other: &EventCategoriesMap) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &EventCategoriesMap) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for EventCategoriesMap
Auto Trait Implementations
---
### impl RefUnwindSafe for EventCategoriesMap
### impl Send for EventCategoriesMap
### impl Sync for EventCategoriesMap
### impl Unpin for EventCategoriesMap
### impl UnwindSafe for EventCategoriesMap
Blanket Implementations
---
The blanket implementations for this type are identical to those listed under `DescribeEventCategoriesMessage` above (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `ToOwned`, `Same`, `Instrument`, `WithSubscriber`).
Struct rusoto_docdb::EventCategoriesMessage
===
```
pub struct EventCategoriesMessage {
pub event_categories_map_list: Option<Vec<EventCategoriesMap>>,
}
```
Represents the output of DescribeEventCategories.
Fields
---
`event_categories_map_list: Option<Vec<EventCategoriesMap>>`A list of event category maps.
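Both the list and the nested fields of each `EventCategoriesMap` are optional, so consuming the output is mostly a matter of unwrapping layers. A sketch that prints each source type with its categories:
```
use rusoto_docdb::EventCategoriesMessage;
// Print every source type together with its event categories.
fn print_categories(output: &EventCategoriesMessage) {
for map in output.event_categories_map_list.iter().flatten() {
let source = map.source_type.as_deref().unwrap_or("<unknown>");
let categories = map.event_categories.clone().unwrap_or_default();
println!("{}: {}", source, categories.join(", "));
}
}
```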
Trait Implementations
---
### impl Clone for EventCategoriesMessage
#### fn clone(&self) -> EventCategoriesMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for EventCategoriesMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for EventCategoriesMessage
#### fn default() -> EventCategoriesMessage
Returns the “default value” for a type.
### impl PartialEq<EventCategoriesMessage> for EventCategoriesMessage
#### fn eq(&self, other: &EventCategoriesMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &EventCategoriesMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for EventCategoriesMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for EventCategoriesMessage
### impl Send for EventCategoriesMessage
### impl Sync for EventCategoriesMessage
### impl Unpin for EventCategoriesMessage
### impl UnwindSafe for EventCategoriesMessage
Blanket Implementations
---
The blanket implementations for this type are identical to those listed under `DescribeEventCategoriesMessage` above (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `ToOwned`, `Same`, `Instrument`, `WithSubscriber`).
Struct rusoto_docdb::EventSubscription
===
```
pub struct EventSubscription {
pub cust_subscription_id: Option<String>,
pub customer_aws_id: Option<String>,
pub enabled: Option<bool>,
pub event_categories_list: Option<Vec<String>>,
pub event_subscription_arn: Option<String>,
pub sns_topic_arn: Option<String>,
pub source_ids_list: Option<Vec<String>>,
pub source_type: Option<String>,
pub status: Option<String>,
pub subscription_creation_time: Option<String>,
}
```
Detailed information about an event to which you have subscribed.
Fields
---
`cust_subscription_id: Option<String>`The Amazon DocumentDB event notification subscription ID.
`customer_aws_id: Option<String>`The Amazon Web Services customer account that is associated with the Amazon DocumentDB event notification subscription.
`enabled: Option<bool>`A Boolean value indicating whether the subscription is enabled. A value of `true` indicates that the subscription is enabled.
`event_categories_list: Option<Vec<String>>`A list of event categories for the Amazon DocumentDB event notification subscription.
`event_subscription_arn: Option<String>`The Amazon Resource Name (ARN) for the event subscription.
`sns_topic_arn: Option<String>`The topic ARN of the Amazon DocumentDB event notification subscription.
`source_ids_list: Option<Vec<String>>`A list of source IDs for the Amazon DocumentDB event notification subscription.
`source_type: Option<String>`The source type for the Amazon DocumentDB event notification subscription.
`status: Option<String>`The status of the Amazon DocumentDB event notification subscription.
Constraints:
Can be one of the following: `creating`, `modifying`, `deleting`, `active`, `no-permission`, `topic-not-exist`
The `no-permission` status indicates that Amazon DocumentDB no longer has permission to post to the SNS topic. The `topic-not-exist` status indicates that the topic was deleted after the subscription was created.
`subscription_creation_time: Option<String>`The time at which the Amazon DocumentDB event notification subscription was created.
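The documented status values make it straightforward to spot subscriptions that have stopped delivering notifications. A small check, sketched against the `no-permission` and `topic-not-exist` states described above:
```
use rusoto_docdb::EventSubscription;
// True when the subscription can no longer post to its SNS topic.
fn needs_attention(sub: &EventSubscription) -> bool {
matches!(
sub.status.as_deref(),
Some("no-permission") | Some("topic-not-exist")
)
}
```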
Trait Implementations
---
### impl Clone for EventSubscription
#### fn clone(&self) -> EventSubscription
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for EventSubscription
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for EventSubscription
#### fn default() -> EventSubscription
Returns the “default value” for a type.
### impl PartialEq<EventSubscription> for EventSubscription
#### fn eq(&self, other: &EventSubscription) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &EventSubscription) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for EventSubscription
Auto Trait Implementations
---
### impl RefUnwindSafe for EventSubscription
### impl Send for EventSubscription
### impl Sync for EventSubscription
### impl Unpin for EventSubscription
### impl UnwindSafe for EventSubscription
Blanket Implementations
---
The blanket implementations for this type are identical to those listed under `DescribeEventCategoriesMessage` above (`Any`, `Borrow`, `BorrowMut`, `From`, `Into`, `TryFrom`, `TryInto`, `ToOwned`, `Same`, `Instrument`, `WithSubscriber`).
Struct rusoto_docdb::EventSubscriptionsMessage
===
```
pub struct EventSubscriptionsMessage {
pub event_subscriptions_list: Option<Vec<EventSubscription>>,
pub marker: Option<String>,
}
```
Represents the output of DescribeEventSubscriptions.
Fields
---
`event_subscriptions_list: Option<Vec<EventSubscription>>`A list of event subscriptions.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
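The `marker` field drives pagination: when it is present, it is fed back into the next `DescribeEventSubscriptions` request; when it is absent, the last page has been reached. A sketch of that hand-off:
```
use rusoto_docdb::{DescribeEventSubscriptionsMessage, EventSubscriptionsMessage};
// Build the follow-up request for the next page, or None on the last page.
fn next_page_request(
previous: &EventSubscriptionsMessage,
) -> Option<DescribeEventSubscriptionsMessage> {
previous.marker.as_ref().map(|marker| DescribeEventSubscriptionsMessage {
marker: Some(marker.clone()),
..Default::default()
})
}
```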
Trait Implementations
---
source### impl Clone for EventSubscriptionsMessage
source#### fn clone(&self) -> EventSubscriptionsMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for EventSubscriptionsMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for EventSubscriptionsMessage
source#### fn default() -> EventSubscriptionsMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<EventSubscriptionsMessage> for EventSubscriptionsMessage
source#### fn eq(&self, other: &EventSubscriptionsMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &EventSubscriptionsMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for EventSubscriptionsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for EventSubscriptionsMessage
### impl Send for EventSubscriptionsMessage
### impl Sync for EventSubscriptionsMessage
### impl Unpin for EventSubscriptionsMessage
### impl UnwindSafe for EventSubscriptionsMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::EventsMessage
===
```
pub struct EventsMessage {
pub events: Option<Vec<Event>>,
pub marker: Option<String>,
}
```
Represents the output of DescribeEvents.
Fields
---
`events: Option<Vec<Event>>`Detailed information about one or more events.
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
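Because `marker` is a pagination token, callers typically keep issuing DescribeEvents requests until no marker is returned. The following is only an illustrative sketch of consuming one page of output; how the page is obtained (and how the marker is fed into the next request) depends on the client code elsewhere.

```
use rusoto_docdb::EventsMessage;
// A sketch: consume one page of DescribeEvents output.
fn handle_page(page: EventsMessage) {
// `events` is optional; treat a missing list as empty.
let events = page.events.unwrap_or_default();
println!("received {} events", events.len());
// A returned `marker` means more records exist; pass it as the marker of
// the next DescribeEvents request to fetch the following page.
if let Some(marker) = page.marker {
println!("more events available after marker {}", marker);
}
}
```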
Trait Implementations
---
source### impl Clone for EventsMessage
source#### fn clone(&self) -> EventsMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for EventsMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for EventsMessage
source#### fn default() -> EventsMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<EventsMessage> for EventsMessage
source#### fn eq(&self, other: &EventsMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &EventsMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for EventsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for EventsMessage
### impl Send for EventsMessage
### impl Sync for EventsMessage
### impl Unpin for EventsMessage
### impl UnwindSafe for EventsMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::FailoverDBClusterMessage
===
```
pub struct FailoverDBClusterMessage {
pub db_cluster_identifier: Option<String>,
pub target_db_instance_identifier: Option<String>,
}
```
Represents the input to FailoverDBCluster.
Fields
---
`db_cluster_identifier: Option<String>`A cluster identifier to force a failover for. This parameter is not case sensitive.
Constraints:
* Must match the identifier of an existing `DBCluster`.
`target_db_instance_identifier: Option<String>`The name of the instance to promote to the primary instance.
You must specify the instance identifier for an Amazon DocumentDB replica in the cluster. For example, `mydbcluster-replica1`.
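For illustration, a minimal construction sketch; the cluster and instance identifiers are placeholders for existing resources, and the message would normally be passed to the crate's FailoverDBCluster operation.

```
use rusoto_docdb::FailoverDBClusterMessage;
fn main() {
// Placeholder identifiers; both must refer to existing resources.
let request = FailoverDBClusterMessage {
db_cluster_identifier: Some("mydbcluster".to_string()),
target_db_instance_identifier: Some("mydbcluster-replica1".to_string()),
};
println!("{:?}", request);
}
```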
Trait Implementations
---
source### impl Clone for FailoverDBClusterMessage
source#### fn clone(&self) -> FailoverDBClusterMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for FailoverDBClusterMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for FailoverDBClusterMessage
source#### fn default() -> FailoverDBClusterMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<FailoverDBClusterMessage> for FailoverDBClusterMessage
source#### fn eq(&self, other: &FailoverDBClusterMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &FailoverDBClusterMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for FailoverDBClusterMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for FailoverDBClusterMessage
### impl Send for FailoverDBClusterMessage
### impl Sync for FailoverDBClusterMessage
### impl Unpin for FailoverDBClusterMessage
### impl UnwindSafe for FailoverDBClusterMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::FailoverDBClusterResult
===
```
pub struct FailoverDBClusterResult {
pub db_cluster: Option<DBCluster>,
}
```
Fields
---
`db_cluster: Option<DBCluster>`
Trait Implementations
---
source### impl Clone for FailoverDBClusterResult
source#### fn clone(&self) -> FailoverDBClusterResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for FailoverDBClusterResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for FailoverDBClusterResult
source#### fn default() -> FailoverDBClusterResult
Returns the “default value” for a type. Read more
source### impl PartialEq<FailoverDBClusterResult> for FailoverDBClusterResult
source#### fn eq(&self, other: &FailoverDBClusterResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &FailoverDBClusterResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for FailoverDBClusterResult
Auto Trait Implementations
---
### impl RefUnwindSafe for FailoverDBClusterResult
### impl Send for FailoverDBClusterResult
### impl Sync for FailoverDBClusterResult
### impl Unpin for FailoverDBClusterResult
### impl UnwindSafe for FailoverDBClusterResult
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::Filter
===
```
pub struct Filter {
pub name: String,
pub values: Vec<String>,
}
```
A named set of filter values, used to return a more specific list of results. You can use a filter to match a set of resources by specific criteria, such as IDs.
Wildcards are not supported in filters.
Fields
---
`name: String`The name of the filter. Filter names are case sensitive.
`values: Vec<String>`One or more filter values. Filter values are case sensitive.
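A minimal construction sketch; the filter name shown is only a placeholder, since the valid names depend on the describe operation the filter is passed to.

```
use rusoto_docdb::Filter;
fn main() {
// Placeholder filter name; names and values are case sensitive.
let filter = Filter {
name: "db-cluster-id".to_string(),
values: vec!["mydbcluster".to_string()],
};
println!("{:?}", filter);
}
```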
Trait Implementations
---
source### impl Clone for Filter
source#### fn clone(&self) -> Filter
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for Filter
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for Filter
source#### fn default() -> Filter
Returns the “default value” for a type. Read more
source### impl PartialEq<Filter> for Filter
source#### fn eq(&self, other: &Filter) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &Filter) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for Filter
Auto Trait Implementations
---
### impl RefUnwindSafe for Filter
### impl Send for Filter
### impl Sync for Filter
### impl Unpin for Filter
### impl UnwindSafe for Filter
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::GlobalCluster
===
```
pub struct GlobalCluster {
pub database_name: Option<String>,
pub deletion_protection: Option<bool>,
pub engine: Option<String>,
pub engine_version: Option<String>,
pub global_cluster_arn: Option<String>,
pub global_cluster_identifier: Option<String>,
pub global_cluster_members: Option<Vec<GlobalClusterMember>>,
pub global_cluster_resource_id: Option<String>,
pub status: Option<String>,
pub storage_encrypted: Option<bool>,
}
```
A data type representing an Amazon DocumentDB global cluster.
Fields
---
`database_name: Option<String>`The default database name within the new global cluster.
`deletion_protection: Option<bool>`The deletion protection setting for the new global cluster.
`engine: Option<String>`The Amazon DocumentDB database engine used by the global cluster.
`engine_version: Option<String>`Indicates the database engine version.
`global_cluster_arn: Option<String>`The Amazon Resource Name (ARN) for the global cluster.
`global_cluster_identifier: Option<String>`Contains a user-supplied global cluster identifier. This identifier is the unique key that identifies a global cluster.
`global_cluster_members: Option<Vec<GlobalClusterMember>>`The list of cluster IDs for secondary clusters within the global cluster. Currently limited to one item.
`global_cluster_resource_id: Option<String>`The Region-unique, immutable identifier for the global database cluster. This identifier is found in AWS CloudTrail log entries whenever the AWS KMS customer master key (CMK) for the cluster is accessed.
`status: Option<String>`Specifies the current state of this global cluster.
`storage_encrypted: Option<bool>`The storage encryption setting for the global cluster.
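For illustration, a sketch of inspecting a returned `GlobalCluster`, using only the fields documented above and on `GlobalClusterMember` below; where and how the value is obtained is left to the surrounding client code.

```
use rusoto_docdb::GlobalCluster;
// A sketch: summarize a GlobalCluster value returned by a describe call.
fn summarize(cluster: &GlobalCluster) {
println!(
"global cluster {:?}: status {:?}, encrypted: {:?}",
cluster.global_cluster_identifier, cluster.status, cluster.storage_encrypted
);
// `is_writer` marks the primary (read-write) member of the global cluster.
for member in cluster.global_cluster_members.iter().flatten() {
if member.is_writer == Some(true) {
println!("primary member: {:?}", member.db_cluster_arn);
}
}
}
```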
Trait Implementations
---
source### impl Clone for GlobalCluster
source#### fn clone(&self) -> GlobalCluster
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GlobalCluster
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GlobalCluster
source#### fn default() -> GlobalCluster
Returns the “default value” for a type. Read more
source### impl PartialEq<GlobalCluster> for GlobalCluster
source#### fn eq(&self, other: &GlobalCluster) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GlobalCluster) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GlobalCluster
Auto Trait Implementations
---
### impl RefUnwindSafe for GlobalCluster
### impl Send for GlobalCluster
### impl Sync for GlobalCluster
### impl Unpin for GlobalCluster
### impl UnwindSafe for GlobalCluster
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::GlobalClusterMember
===
```
pub struct GlobalClusterMember {
pub db_cluster_arn: Option<String>,
pub is_writer: Option<bool>,
pub readers: Option<Vec<String>>,
}
```
A data structure with information about any primary and secondary clusters associated with an Amazon DocumentDB global cluster.
Fields
---
`db_cluster_arn: Option<String>`The Amazon Resource Name (ARN) for each Amazon DocumentDB cluster.
`is_writer: Option<bool>` Specifies whether the Amazon DocumentDB cluster is the primary cluster (that is, has read-write capability) for the Amazon DocumentDB global cluster with which it is associated.
`readers: Option<Vec<String>>`The Amazon Resource Name (ARN) for each read-only secondary cluster associated with the Aurora global cluster.
Trait Implementations
---
source### impl Clone for GlobalClusterMember
source#### fn clone(&self) -> GlobalClusterMember
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GlobalClusterMember
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GlobalClusterMember
source#### fn default() -> GlobalClusterMember
Returns the “default value” for a type. Read more
source### impl PartialEq<GlobalClusterMember> for GlobalClusterMember
source#### fn eq(&self, other: &GlobalClusterMember) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GlobalClusterMember) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GlobalClusterMember
Auto Trait Implementations
---
### impl RefUnwindSafe for GlobalClusterMember
### impl Send for GlobalClusterMember
### impl Sync for GlobalClusterMember
### impl Unpin for GlobalClusterMember
### impl UnwindSafe for GlobalClusterMember
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::GlobalClustersMessage
===
```
pub struct GlobalClustersMessage {
pub global_clusters: Option<Vec<GlobalCluster>>,
pub marker: Option<String>,
}
```
Fields
---
`global_clusters: Option<Vec<GlobalCluster>>`
`marker: Option<String>`
Trait Implementations
---
source### impl Clone for GlobalClustersMessage
source#### fn clone(&self) -> GlobalClustersMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for GlobalClustersMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for GlobalClustersMessage
source#### fn default() -> GlobalClustersMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<GlobalClustersMessage> for GlobalClustersMessage
source#### fn eq(&self, other: &GlobalClustersMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &GlobalClustersMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for GlobalClustersMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for GlobalClustersMessage
### impl Send for GlobalClustersMessage
### impl Sync for GlobalClustersMessage
### impl Unpin for GlobalClustersMessage
### impl UnwindSafe for GlobalClustersMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::ListTagsForResourceMessage
===
```
pub struct ListTagsForResourceMessage {
pub filters: Option<Vec<Filter>>,
pub resource_name: String,
}
```
Represents the input to ListTagsForResource.
Fields
---
`filters: Option<Vec<Filter>>`This parameter is not currently supported.
`resource_name: String`The Amazon DocumentDB resource with tags to be listed. This value is an Amazon Resource Name (ARN).
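A minimal construction sketch; the ARN below is only a placeholder for a real Amazon DocumentDB resource.

```
use rusoto_docdb::ListTagsForResourceMessage;
fn main() {
// Placeholder ARN; `filters` is not currently supported and is left unset.
let request = ListTagsForResourceMessage {
resource_name: "arn:aws:rds:us-east-1:123456789012:cluster:mydbcluster".to_string(),
filters: None,
};
println!("{:?}", request);
}
```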
Trait Implementations
---
source### impl Clone for ListTagsForResourceMessage
source#### fn clone(&self) -> ListTagsForResourceMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ListTagsForResourceMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ListTagsForResourceMessage
source#### fn default() -> ListTagsForResourceMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<ListTagsForResourceMessage> for ListTagsForResourceMessage
source#### fn eq(&self, other: &ListTagsForResourceMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListTagsForResourceMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListTagsForResourceMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for ListTagsForResourceMessage
### impl Send for ListTagsForResourceMessage
### impl Sync for ListTagsForResourceMessage
### impl Unpin for ListTagsForResourceMessage
### impl UnwindSafe for ListTagsForResourceMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::ModifyDBClusterMessage
===
```
pub struct ModifyDBClusterMessage {
pub apply_immediately: Option<bool>,
pub backup_retention_period: Option<i64>,
pub cloudwatch_logs_export_configuration: Option<CloudwatchLogsExportConfiguration>,
pub db_cluster_identifier: String,
pub db_cluster_parameter_group_name: Option<String>,
pub deletion_protection: Option<bool>,
pub engine_version: Option<String>,
pub master_user_password: Option<String>,
pub new_db_cluster_identifier: Option<String>,
pub port: Option<i64>,
pub preferred_backup_window: Option<String>,
pub preferred_maintenance_window: Option<String>,
pub vpc_security_group_ids: Option<Vec<String>>,
}
```
Represents the input to ModifyDBCluster.
Fields
---
`apply_immediately: Option<bool>`A value that specifies whether the changes in this request and any pending changes are asynchronously applied as soon as possible, regardless of the `PreferredMaintenanceWindow` setting for the cluster. If this parameter is set to `false`, changes to the cluster are applied during the next maintenance window.
The `ApplyImmediately` parameter affects only the `NewDBClusterIdentifier` and `MasterUserPassword` values. If you set this parameter value to `false`, the changes to the `NewDBClusterIdentifier` and `MasterUserPassword` values are applied during the next maintenance window. All other changes are applied immediately, regardless of the value of the `ApplyImmediately` parameter.
Default: `false`
`backup_retention_period: Option<i64>`The number of days for which automated backups are retained. You must specify a minimum value of 1.
Default: 1
Constraints:
* Must be a value from 1 to 35.
`cloudwatch_logs_export_configuration: Option<CloudwatchLogsExportConfiguration>`The configuration setting for the log types to be enabled for export to Amazon CloudWatch Logs for a specific instance or cluster. The `EnableLogTypes` and `DisableLogTypes` arrays determine which logs are exported (or not exported) to CloudWatch Logs.
`db_cluster_identifier: String`The cluster identifier for the cluster that is being modified. This parameter is not case sensitive.
Constraints:
* Must match the identifier of an existing `DBCluster`.
`db_cluster_parameter_group_name: Option<String>`The name of the cluster parameter group to use for the cluster.
`deletion_protection: Option<bool>`Specifies whether this cluster can be deleted. If `DeletionProtection` is enabled, the cluster cannot be deleted unless it is modified and `DeletionProtection` is disabled. `DeletionProtection` protects clusters from being accidentally deleted.
`engine_version: Option<String>`The version number of the database engine to which you want to upgrade. Modifying engine version is not supported on Amazon DocumentDB.
`master_user_password: Option<String>`The password for the master database user. This password can contain any printable ASCII character except forward slash (/), double quote ("), or the "at" symbol (@).
Constraints: Must contain from 8 to 100 characters.
`new_db_cluster_identifier: Option<String>`The new cluster identifier for the cluster when renaming a cluster. This value is stored as a lowercase string.
Constraints:
* Must contain from 1 to 63 letters, numbers, or hyphens.
* The first character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: `my-cluster2`
`port: Option<i64>`The port number on which the cluster accepts connections.
Constraints: Must be a value from `1150` to `65535`.
Default: The same port as the original cluster.
`preferred_backup_window: Option<String>`The daily time range during which automated backups are created if automated backups are enabled, using the `BackupRetentionPeriod` parameter.
The default is a 30-minute window selected at random from an 8-hour block of time for each Region.
Constraints:
* Must be in the format `hh24:mi-hh24:mi`.
* Must be in Universal Coordinated Time (UTC).
* Must not conflict with the preferred maintenance window.
* Must be at least 30 minutes.
`preferred_maintenance_window: Option<String>`The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
Format: `ddd:hh24:mi-ddd:hh24:mi`
The default is a 30-minute window selected at random from an 8-hour block of time for each Region, occurring on a random day of the week.
Valid days: Mon, Tue, Wed, Thu, Fri, Sat, Sun
Constraints: Minimum 30-minute window.
`vpc_security_group_ids: Option<Vec<String>>`A list of virtual private cloud (VPC) security groups that the cluster will belong to.
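For illustration, a minimal sketch that renames a cluster and changes its backup retention; the identifiers are placeholders, and fields left unspecified keep their defaults, i.e. the corresponding settings are not modified.

```
use rusoto_docdb::ModifyDBClusterMessage;
fn main() {
// Rename a cluster and extend backup retention to 7 days.
let request = ModifyDBClusterMessage {
db_cluster_identifier: "mydbcluster".to_string(),
new_db_cluster_identifier: Some("my-cluster2".to_string()),
backup_retention_period: Some(7),
// Without this, the rename is deferred to the next maintenance window.
apply_immediately: Some(true),
..Default::default()
};
println!("{:?}", request);
}
```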
Trait Implementations
---
source### impl Clone for ModifyDBClusterMessage
source#### fn clone(&self) -> ModifyDBClusterMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ModifyDBClusterMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ModifyDBClusterMessage
source#### fn default() -> ModifyDBClusterMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<ModifyDBClusterMessage> for ModifyDBClusterMessage
source#### fn eq(&self, other: &ModifyDBClusterMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ModifyDBClusterMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ModifyDBClusterMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBClusterMessage
### impl Send for ModifyDBClusterMessage
### impl Sync for ModifyDBClusterMessage
### impl Unpin for ModifyDBClusterMessage
### impl UnwindSafe for ModifyDBClusterMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::ModifyDBClusterParameterGroupMessage
===
```
pub struct ModifyDBClusterParameterGroupMessage {
pub db_cluster_parameter_group_name: String,
pub parameters: Vec<Parameter>,
}
```
Represents the input to ModifyDBClusterParameterGroup.
Fields
---
`db_cluster_parameter_group_name: String`The name of the cluster parameter group to modify.
`parameters: Vec<Parameter>`A list of parameters in the cluster parameter group to modify.
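A minimal construction sketch, assuming `Parameter` (documented elsewhere in this crate) also implements `Default`; the placeholder entry would be filled in per the `Parameter` struct's own fields, and the group name is a placeholder.

```
use rusoto_docdb::{ModifyDBClusterParameterGroupMessage, Parameter};
fn main() {
// Placeholder entry: the values to change on it are described by the
// `Parameter` struct documented elsewhere in this crate.
let request = ModifyDBClusterParameterGroupMessage {
db_cluster_parameter_group_name: "my-parameter-group".to_string(),
parameters: vec![Parameter::default()],
};
println!("{:?}", request);
}
```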
Trait Implementations
---
source### impl Clone for ModifyDBClusterParameterGroupMessage
source#### fn clone(&self) -> ModifyDBClusterParameterGroupMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ModifyDBClusterParameterGroupMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ModifyDBClusterParameterGroupMessage
source#### fn default() -> ModifyDBClusterParameterGroupMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<ModifyDBClusterParameterGroupMessage> for ModifyDBClusterParameterGroupMessage
source#### fn eq(&self, other: &ModifyDBClusterParameterGroupMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ModifyDBClusterParameterGroupMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ModifyDBClusterParameterGroupMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBClusterParameterGroupMessage
### impl Send for ModifyDBClusterParameterGroupMessage
### impl Sync for ModifyDBClusterParameterGroupMessage
### impl Unpin for ModifyDBClusterParameterGroupMessage
### impl UnwindSafe for ModifyDBClusterParameterGroupMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::ModifyDBClusterResult
===
```
pub struct ModifyDBClusterResult {
pub db_cluster: Option<DBCluster>,
}
```
Fields
---
`db_cluster: Option<DBCluster>`
Trait Implementations
---
source### impl Clone for ModifyDBClusterResult
source#### fn clone(&self) -> ModifyDBClusterResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ModifyDBClusterResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ModifyDBClusterResult
source#### fn default() -> ModifyDBClusterResult
Returns the “default value” for a type. Read more
source### impl PartialEq<ModifyDBClusterResult> for ModifyDBClusterResult
source#### fn eq(&self, other: &ModifyDBClusterResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ModifyDBClusterResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ModifyDBClusterResult
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBClusterResult
### impl Send for ModifyDBClusterResult
### impl Sync for ModifyDBClusterResult
### impl Unpin for ModifyDBClusterResult
### impl UnwindSafe for ModifyDBClusterResult
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Struct rusoto_docdb::ModifyDBClusterSnapshotAttributeMessage
===
```
pub struct ModifyDBClusterSnapshotAttributeMessage {
pub attribute_name: String,
pub db_cluster_snapshot_identifier: String,
pub values_to_add: Option<Vec<String>>,
pub values_to_remove: Option<Vec<String>>,
}
```
Represents the input to ModifyDBClusterSnapshotAttribute.
Fields
---
`attribute_name: String`The name of the cluster snapshot attribute to modify.
To manage authorization for other accounts to copy or restore a manual cluster snapshot, set this value to `restore`.
`db_cluster_snapshot_identifier: String`The identifier for the cluster snapshot to modify the attributes for.
`values_to_add: Option<Vec<String>>`A list of cluster snapshot attributes to add to the attribute specified by `AttributeName`.
To authorize other accounts to copy or restore a manual cluster snapshot, set this list to include one or more account IDs. To make the manual cluster snapshot restorable by any account, set it to `all`. Do not add the `all` value for any manual cluster snapshots that contain private information that you don't want to be available to all accounts.
`values_to_remove: Option<Vec<String>>`A list of cluster snapshot attributes to remove from the attribute specified by `AttributeName`.
To remove authorization for other accounts to copy or restore a manual cluster snapshot, set this list to include one or more account identifiers. To remove authorization for any account to copy or restore the cluster snapshot, set it to `all` . If you specify `all`, an account whose account ID is explicitly added to the `restore` attribute can still copy or restore a manual cluster snapshot.
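For illustration, a minimal sketch that shares a manual cluster snapshot with one additional account by adding its ID to the `restore` attribute; the snapshot identifier and account ID are placeholders.

```
use rusoto_docdb::ModifyDBClusterSnapshotAttributeMessage;
fn main() {
// Placeholder snapshot identifier and account ID.
let request = ModifyDBClusterSnapshotAttributeMessage {
attribute_name: "restore".to_string(),
db_cluster_snapshot_identifier: "my-cluster-snapshot".to_string(),
values_to_add: Some(vec!["123456789012".to_string()]),
values_to_remove: None,
};
println!("{:?}", request);
}
```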
Trait Implementations
---
### impl Clone for ModifyDBClusterSnapshotAttributeMessage
#### fn clone(&self) -> ModifyDBClusterSnapshotAttributeMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ModifyDBClusterSnapshotAttributeMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ModifyDBClusterSnapshotAttributeMessage
#### fn default() -> ModifyDBClusterSnapshotAttributeMessage
Returns the “default value” for a type.
### impl PartialEq<ModifyDBClusterSnapshotAttributeMessage> for ModifyDBClusterSnapshotAttributeMessage
#### fn eq(&self, other: &ModifyDBClusterSnapshotAttributeMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ModifyDBClusterSnapshotAttributeMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ModifyDBClusterSnapshotAttributeMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBClusterSnapshotAttributeMessage
### impl Send for ModifyDBClusterSnapshotAttributeMessage
### impl Sync for ModifyDBClusterSnapshotAttributeMessage
### impl Unpin for ModifyDBClusterSnapshotAttributeMessage
### impl UnwindSafe for ModifyDBClusterSnapshotAttributeMessage
Struct rusoto_docdb::ModifyDBClusterSnapshotAttributeResult
===
```
pub struct ModifyDBClusterSnapshotAttributeResult {
pub db_cluster_snapshot_attributes_result: Option<DBClusterSnapshotAttributesResult>,
}
```
Fields
---
`db_cluster_snapshot_attributes_result: Option<DBClusterSnapshotAttributesResult>`
Trait Implementations
---
### impl Clone for ModifyDBClusterSnapshotAttributeResult
#### fn clone(&self) -> ModifyDBClusterSnapshotAttributeResult
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ModifyDBClusterSnapshotAttributeResult
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ModifyDBClusterSnapshotAttributeResult
#### fn default() -> ModifyDBClusterSnapshotAttributeResult
Returns the “default value” for a type.
### impl PartialEq<ModifyDBClusterSnapshotAttributeResult> for ModifyDBClusterSnapshotAttributeResult
#### fn eq(&self, other: &ModifyDBClusterSnapshotAttributeResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ModifyDBClusterSnapshotAttributeResult) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ModifyDBClusterSnapshotAttributeResult
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBClusterSnapshotAttributeResult
### impl Send for ModifyDBClusterSnapshotAttributeResult
### impl Sync for ModifyDBClusterSnapshotAttributeResult
### impl Unpin for ModifyDBClusterSnapshotAttributeResult
### impl UnwindSafe for ModifyDBClusterSnapshotAttributeResult
Struct rusoto_docdb::ModifyDBInstanceMessage
===
```
pub struct ModifyDBInstanceMessage {
pub apply_immediately: Option<bool>,
pub auto_minor_version_upgrade: Option<bool>,
pub ca_certificate_identifier: Option<String>,
pub db_instance_class: Option<String>,
pub db_instance_identifier: String,
pub new_db_instance_identifier: Option<String>,
pub preferred_maintenance_window: Option<String>,
pub promotion_tier: Option<i64>,
}
```
Represents the input to ModifyDBInstance.
Fields
---
`apply_immediately: Option<bool>`Specifies whether the modifications in this request and any pending modifications are asynchronously applied as soon as possible, regardless of the `PreferredMaintenanceWindow` setting for the instance.
If this parameter is set to `false`, changes to the instance are applied during the next maintenance window. Some parameter changes can cause an outage and are applied on the next reboot.
Default: `false`
`auto_minor_version_upgrade: Option<bool>`This parameter does not apply to Amazon DocumentDB. Amazon DocumentDB does not perform minor version upgrades regardless of the value set.
`ca_certificate_identifier: Option<String>`Indicates the certificate that needs to be associated with the instance.
`db_instance_class: Option<String>`The new compute and memory capacity of the instance; for example, `db.r5.large`. Not all instance classes are available in all Regions.
If you modify the instance class, an outage occurs during the change. The change is applied during the next maintenance window, unless `ApplyImmediately` is specified as `true` for this request.
Default: Uses existing setting.
`db_instance_identifier: String`The instance identifier. This value is stored as a lowercase string.
Constraints:
* Must match the identifier of an existing `DBInstance`.
`new_db_instance_identifier: Option<String>` The new instance identifier for the instance when renaming an instance. When you change the instance identifier, an instance reboot occurs immediately if you set `Apply Immediately` to `true`. It occurs during the next maintenance window if you set `Apply Immediately` to `false`. This value is stored as a lowercase string.
Constraints:
* Must contain from 1 to 63 letters, numbers, or hyphens.
* The first character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: `mydbinstance`
`preferred_maintenance_window: Option<String>`The weekly time range (in UTC) during which system maintenance can occur, which might result in an outage. Changing this parameter doesn't result in an outage except in the following situation, and the change is asynchronously applied as soon as possible. If there are pending actions that cause a reboot, and the maintenance window is changed to include the current time, changing this parameter causes a reboot of the instance. If you are moving this window to the current time, there must be at least 30 minutes between the current time and end of the window to ensure that pending changes are applied.
Default: Uses existing setting.
Format: `ddd:hh24:mi-ddd:hh24:mi`
Valid days: Mon, Tue, Wed, Thu, Fri, Sat, Sun
Constraints: Must be at least 30 minutes.
`promotion_tier: Option<i64>`A value that specifies the order in which an Amazon DocumentDB replica is promoted to the primary instance after a failure of the existing primary instance.
Default: 1
Valid values: 0-15
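As a rough illustration of how `ApplyImmediately` interacts with the other fields, the sketch below scales an instance to `db.r5.large` and asks for the change to be applied right away rather than during the next maintenance window. It assumes a configured `DocdbClient` and a tokio runtime; the instance identifier is hypothetical, and unset fields keep their existing settings, as the field documentation above indicates.

```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, ModifyDBInstanceMessage};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = DocdbClient::new(Region::UsEast1);

    // Change the instance class and apply it now instead of waiting for
    // the next maintenance window. Fields left at their defaults keep the
    // instance's current settings.
    let request = ModifyDBInstanceMessage {
        db_instance_identifier: "mydbinstance".to_string(), // hypothetical
        db_instance_class: Some("db.r5.large".to_string()),
        apply_immediately: Some(true),
        ..Default::default()
    };

    let result = client.modify_db_instance(request).await?;
    println!("{:?}", result.db_instance);
    Ok(())
}
```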
Trait Implementations
---
### impl Clone for ModifyDBInstanceMessage
#### fn clone(&self) -> ModifyDBInstanceMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ModifyDBInstanceMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ModifyDBInstanceMessage
#### fn default() -> ModifyDBInstanceMessage
Returns the “default value” for a type.
### impl PartialEq<ModifyDBInstanceMessage> for ModifyDBInstanceMessage
#### fn eq(&self, other: &ModifyDBInstanceMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ModifyDBInstanceMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ModifyDBInstanceMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBInstanceMessage
### impl Send for ModifyDBInstanceMessage
### impl Sync for ModifyDBInstanceMessage
### impl Unpin for ModifyDBInstanceMessage
### impl UnwindSafe for ModifyDBInstanceMessage
Struct rusoto_docdb::ModifyDBInstanceResult
===
```
pub struct ModifyDBInstanceResult {
pub db_instance: Option<DBInstance>,
}
```
Fields
---
`db_instance: Option<DBInstance>`
Trait Implementations
---
### impl Clone for ModifyDBInstanceResult
#### fn clone(&self) -> ModifyDBInstanceResult
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ModifyDBInstanceResult
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ModifyDBInstanceResult
#### fn default() -> ModifyDBInstanceResult
Returns the “default value” for a type.
### impl PartialEq<ModifyDBInstanceResult> for ModifyDBInstanceResult
#### fn eq(&self, other: &ModifyDBInstanceResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ModifyDBInstanceResult) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ModifyDBInstanceResult
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBInstanceResult
### impl Send for ModifyDBInstanceResult
### impl Sync for ModifyDBInstanceResult
### impl Unpin for ModifyDBInstanceResult
### impl UnwindSafe for ModifyDBInstanceResult
Struct rusoto_docdb::ModifyDBSubnetGroupMessage
===
```
pub struct ModifyDBSubnetGroupMessage {
pub db_subnet_group_description: Option<String>,
pub db_subnet_group_name: String,
pub subnet_ids: Vec<String>,
}
```
Represents the input to ModifyDBSubnetGroup.
Fields
---
`db_subnet_group_description: Option<String>`The description for the subnet group.
`db_subnet_group_name: String`The name for the subnet group. This value is stored as a lowercase string. You can't modify the default subnet group.
Constraints: Must match the name of an existing `DBSubnetGroup`. Must not be default.
Example: `mySubnetgroup`
`subnet_ids: Vec<String>`The Amazon EC2 subnet IDs for the subnet group.
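A minimal sketch of a subnet group update follows. The group name and subnet IDs are hypothetical, and the helper assumes a configured `DocdbClient`; note that the default subnet group cannot be modified, as stated above.

```
use rusoto_docdb::{Docdb, DocdbClient, ModifyDBSubnetGroupMessage};

// Hypothetical helper: point an existing subnet group at a new set of subnets.
async fn update_subnet_group(client: &DocdbClient) -> Result<(), Box<dyn std::error::Error>> {
    let request = ModifyDBSubnetGroupMessage {
        db_subnet_group_name: "mysubnetgroup".to_string(), // hypothetical, must not be "default"
        db_subnet_group_description: Some("Subnets for the analytics cluster".to_string()),
        subnet_ids: vec![
            "subnet-0a1b2c3d".to_string(), // hypothetical subnet IDs
            "subnet-4e5f6a7b".to_string(),
        ],
    };
    let result = client.modify_db_subnet_group(request).await?;
    println!("{:?}", result.db_subnet_group);
    Ok(())
}
```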
Trait Implementations
---
### impl Clone for ModifyDBSubnetGroupMessage
#### fn clone(&self) -> ModifyDBSubnetGroupMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ModifyDBSubnetGroupMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ModifyDBSubnetGroupMessage
#### fn default() -> ModifyDBSubnetGroupMessage
Returns the “default value” for a type.
### impl PartialEq<ModifyDBSubnetGroupMessage> for ModifyDBSubnetGroupMessage
#### fn eq(&self, other: &ModifyDBSubnetGroupMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ModifyDBSubnetGroupMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ModifyDBSubnetGroupMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBSubnetGroupMessage
### impl Send for ModifyDBSubnetGroupMessage
### impl Sync for ModifyDBSubnetGroupMessage
### impl Unpin for ModifyDBSubnetGroupMessage
### impl UnwindSafe for ModifyDBSubnetGroupMessage
Struct rusoto_docdb::ModifyDBSubnetGroupResult
===
```
pub struct ModifyDBSubnetGroupResult {
pub db_subnet_group: Option<DBSubnetGroup>,
}
```
Fields
---
`db_subnet_group: Option<DBSubnetGroup>`
Trait Implementations
---
### impl Clone for ModifyDBSubnetGroupResult
#### fn clone(&self) -> ModifyDBSubnetGroupResult
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ModifyDBSubnetGroupResult
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ModifyDBSubnetGroupResult
#### fn default() -> ModifyDBSubnetGroupResult
Returns the “default value” for a type.
### impl PartialEq<ModifyDBSubnetGroupResult> for ModifyDBSubnetGroupResult
#### fn eq(&self, other: &ModifyDBSubnetGroupResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ModifyDBSubnetGroupResult) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ModifyDBSubnetGroupResult
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBSubnetGroupResult
### impl Send for ModifyDBSubnetGroupResult
### impl Sync for ModifyDBSubnetGroupResult
### impl Unpin for ModifyDBSubnetGroupResult
### impl UnwindSafe for ModifyDBSubnetGroupResult
Struct rusoto_docdb::ModifyEventSubscriptionMessage
===
```
pub struct ModifyEventSubscriptionMessage {
pub enabled: Option<bool>,
pub event_categories: Option<Vec<String>>,
pub sns_topic_arn: Option<String>,
pub source_type: Option<String>,
pub subscription_name: String,
}
```
Represents the input to ModifyEventSubscription.
Fields
---
`enabled: Option<bool>` A Boolean value; set to `true` to activate the subscription.
`event_categories: Option<Vec<String>>` A list of event categories for a `SourceType` that you want to subscribe to.
`sns_topic_arn: Option<String>`The Amazon Resource Name (ARN) of the SNS topic created for event notification. The ARN is created by Amazon SNS when you create a topic and subscribe to it.
`source_type: Option<String>`The type of source that is generating the events. For example, if you want to be notified of events generated by an instance, set this parameter to `db-instance`. If this value is not specified, all events are returned.
Valid values: `db-instance`, `db-parameter-group`, `db-security-group`
`subscription_name: String`The name of the Amazon DocumentDB event notification subscription.
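The sketch below narrows an existing subscription to instance events only. The subscription name and category values are hypothetical, the helper assumes a configured `DocdbClient`, and passing `None` for `sns_topic_arn` is assumed to leave the existing topic in place.

```
use rusoto_docdb::{Docdb, DocdbClient, ModifyEventSubscriptionMessage};

// Hypothetical helper: re-enable a subscription and limit it to instance events.
async fn update_subscription(client: &DocdbClient) -> Result<(), Box<dyn std::error::Error>> {
    let request = ModifyEventSubscriptionMessage {
        subscription_name: "docdb-events".to_string(), // hypothetical name
        enabled: Some(true),
        source_type: Some("db-instance".to_string()),
        // Hypothetical category values; valid categories depend on the source type.
        event_categories: Some(vec!["failure".to_string(), "maintenance".to_string()]),
        sns_topic_arn: None, // assumed to keep the existing SNS topic
    };
    let result = client.modify_event_subscription(request).await?;
    println!("{:?}", result.event_subscription);
    Ok(())
}
```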
Trait Implementations
---
### impl Clone for ModifyEventSubscriptionMessage
#### fn clone(&self) -> ModifyEventSubscriptionMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ModifyEventSubscriptionMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ModifyEventSubscriptionMessage
#### fn default() -> ModifyEventSubscriptionMessage
Returns the “default value” for a type.
### impl PartialEq<ModifyEventSubscriptionMessage> for ModifyEventSubscriptionMessage
#### fn eq(&self, other: &ModifyEventSubscriptionMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ModifyEventSubscriptionMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ModifyEventSubscriptionMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyEventSubscriptionMessage
### impl Send for ModifyEventSubscriptionMessage
### impl Sync for ModifyEventSubscriptionMessage
### impl Unpin for ModifyEventSubscriptionMessage
### impl UnwindSafe for ModifyEventSubscriptionMessage
Struct rusoto_docdb::ModifyEventSubscriptionResult
===
```
pub struct ModifyEventSubscriptionResult {
pub event_subscription: Option<EventSubscription>,
}
```
Fields
---
`event_subscription: Option<EventSubscription>`
Trait Implementations
---
### impl Clone for ModifyEventSubscriptionResult
#### fn clone(&self) -> ModifyEventSubscriptionResult
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ModifyEventSubscriptionResult
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ModifyEventSubscriptionResult
#### fn default() -> ModifyEventSubscriptionResult
Returns the “default value” for a type.
### impl PartialEq<ModifyEventSubscriptionResult> for ModifyEventSubscriptionResult
#### fn eq(&self, other: &ModifyEventSubscriptionResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ModifyEventSubscriptionResult) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ModifyEventSubscriptionResult
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyEventSubscriptionResult
### impl Send for ModifyEventSubscriptionResult
### impl Sync for ModifyEventSubscriptionResult
### impl Unpin for ModifyEventSubscriptionResult
### impl UnwindSafe for ModifyEventSubscriptionResult
Struct rusoto_docdb::ModifyGlobalClusterMessage
===
```
pub struct ModifyGlobalClusterMessage {
pub deletion_protection: Option<bool>,
pub global_cluster_identifier: String,
pub new_global_cluster_identifier: Option<String>,
}
```
Represents the input to ModifyGlobalCluster.
Fields
---
`deletion_protection: Option<bool>`Indicates if the global cluster has deletion protection enabled. The global cluster can't be deleted when deletion protection is enabled.
`global_cluster_identifier: String`The identifier for the global cluster being modified. This parameter isn't case-sensitive.
Constraints:
* Must match the identifier of an existing global cluster.
`new_global_cluster_identifier: Option<String>`The new identifier for a global cluster when you modify a global cluster. This value is stored as a lowercase string.
* Must contain from 1 to 63 letters, numbers, or hyphens.
* The first character must be a letter.
* Can't end with a hyphen or contain two consecutive hyphens.
Example: `my-cluster2`
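For example, renaming a global cluster and turning on deletion protection in a single call might look like the following sketch. The identifiers are hypothetical and a configured `DocdbClient` is assumed.

```
use rusoto_docdb::{Docdb, DocdbClient, ModifyGlobalClusterMessage};

// Hypothetical helper: rename a global cluster and enable deletion protection.
async fn rename_global_cluster(client: &DocdbClient) -> Result<(), Box<dyn std::error::Error>> {
    let request = ModifyGlobalClusterMessage {
        global_cluster_identifier: "my-global-cluster".to_string(), // hypothetical
        new_global_cluster_identifier: Some("my-cluster2".to_string()),
        deletion_protection: Some(true),
    };
    let result = client.modify_global_cluster(request).await?;
    println!("{:?}", result.global_cluster);
    Ok(())
}
```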
Trait Implementations
---
### impl Clone for ModifyGlobalClusterMessage
#### fn clone(&self) -> ModifyGlobalClusterMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ModifyGlobalClusterMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ModifyGlobalClusterMessage
#### fn default() -> ModifyGlobalClusterMessage
Returns the “default value” for a type.
### impl PartialEq<ModifyGlobalClusterMessage> for ModifyGlobalClusterMessage
#### fn eq(&self, other: &ModifyGlobalClusterMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ModifyGlobalClusterMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ModifyGlobalClusterMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyGlobalClusterMessage
### impl Send for ModifyGlobalClusterMessage
### impl Sync for ModifyGlobalClusterMessage
### impl Unpin for ModifyGlobalClusterMessage
### impl UnwindSafe for ModifyGlobalClusterMessage
Struct rusoto_docdb::ModifyGlobalClusterResult
===
```
pub struct ModifyGlobalClusterResult {
pub global_cluster: Option<GlobalCluster>,
}
```
Fields
---
`global_cluster: Option<GlobalCluster>`
Trait Implementations
---
### impl Clone for ModifyGlobalClusterResult
#### fn clone(&self) -> ModifyGlobalClusterResult
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ModifyGlobalClusterResult
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ModifyGlobalClusterResult
#### fn default() -> ModifyGlobalClusterResult
Returns the “default value” for a type.
### impl PartialEq<ModifyGlobalClusterResult> for ModifyGlobalClusterResult
#### fn eq(&self, other: &ModifyGlobalClusterResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &ModifyGlobalClusterResult) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for ModifyGlobalClusterResult
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyGlobalClusterResult
### impl Send for ModifyGlobalClusterResult
### impl Sync for ModifyGlobalClusterResult
### impl Unpin for ModifyGlobalClusterResult
### impl UnwindSafe for ModifyGlobalClusterResult
Struct rusoto_docdb::OrderableDBInstanceOption
===
```
pub struct OrderableDBInstanceOption {
pub availability_zones: Option<Vec<AvailabilityZone>>,
pub db_instance_class: Option<String>,
pub engine: Option<String>,
pub engine_version: Option<String>,
pub license_model: Option<String>,
pub vpc: Option<bool>,
}
```
The options that are available for an instance.
Fields
---
`availability_zones: Option<Vec<AvailabilityZone>>`A list of Availability Zones for an instance.
`db_instance_class: Option<String>`The instance class for an instance.
`engine: Option<String>`The engine type of an instance.
`engine_version: Option<String>`The engine version of an instance.
`license_model: Option<String>`The license model for an instance.
`vpc: Option<bool>`Indicates whether an instance is in a virtual private cloud (VPC).
Trait Implementations
---
### impl Clone for OrderableDBInstanceOption
#### fn clone(&self) -> OrderableDBInstanceOption
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for OrderableDBInstanceOption
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for OrderableDBInstanceOption
#### fn default() -> OrderableDBInstanceOption
Returns the “default value” for a type.
### impl PartialEq<OrderableDBInstanceOption> for OrderableDBInstanceOption
#### fn eq(&self, other: &OrderableDBInstanceOption) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &OrderableDBInstanceOption) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for OrderableDBInstanceOption
Auto Trait Implementations
---
### impl RefUnwindSafe for OrderableDBInstanceOption
### impl Send for OrderableDBInstanceOption
### impl Sync for OrderableDBInstanceOption
### impl Unpin for OrderableDBInstanceOption
### impl UnwindSafe for OrderableDBInstanceOption
Struct rusoto_docdb::OrderableDBInstanceOptionsMessage
===
```
pub struct OrderableDBInstanceOptionsMessage {
pub marker: Option<String>,
pub orderable_db_instance_options: Option<Vec<OrderableDBInstanceOption>>,
}
```
Represents the output of DescribeOrderableDBInstanceOptions.
Fields
---
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`orderable_db_instance_options: Option<Vec<OrderableDBInstanceOption>>`The options that are available for a particular orderable instance.
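Since `marker` drives pagination, a caller typically loops until the service stops returning one. The sketch below is illustrative only: it assumes the companion `DescribeOrderableDBInstanceOptionsMessage` request type (not shown on this page) with a required `engine` field, and a configured `DocdbClient` with a tokio runtime.

```
use rusoto_core::Region;
use rusoto_docdb::{DescribeOrderableDBInstanceOptionsMessage, Docdb, DocdbClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = DocdbClient::new(Region::UsEast1);
    let mut marker: Option<String> = None;

    // Keep requesting pages until the service stops returning a marker.
    loop {
        let page = client
            .describe_orderable_db_instance_options(DescribeOrderableDBInstanceOptionsMessage {
                engine: "docdb".to_string(),
                marker: marker.clone(),
                ..Default::default()
            })
            .await?;

        for option in page.orderable_db_instance_options.unwrap_or_default() {
            println!("{:?} {:?}", option.db_instance_class, option.engine_version);
        }

        marker = page.marker;
        if marker.is_none() {
            break;
        }
    }
    Ok(())
}
```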
Trait Implementations
---
### impl Clone for OrderableDBInstanceOptionsMessage
#### fn clone(&self) -> OrderableDBInstanceOptionsMessage
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for OrderableDBInstanceOptionsMessage
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for OrderableDBInstanceOptionsMessage
#### fn default() -> OrderableDBInstanceOptionsMessage
Returns the “default value” for a type.
### impl PartialEq<OrderableDBInstanceOptionsMessage> for OrderableDBInstanceOptionsMessage
#### fn eq(&self, other: &OrderableDBInstanceOptionsMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &OrderableDBInstanceOptionsMessage) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for OrderableDBInstanceOptionsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for OrderableDBInstanceOptionsMessage
### impl Send for OrderableDBInstanceOptionsMessage
### impl Sync for OrderableDBInstanceOptionsMessage
### impl Unpin for OrderableDBInstanceOptionsMessage
### impl UnwindSafe for OrderableDBInstanceOptionsMessage
Struct rusoto_docdb::Parameter
===
```
pub struct Parameter {
pub allowed_values: Option<String>,
pub apply_method: Option<String>,
pub apply_type: Option<String>,
pub data_type: Option<String>,
pub description: Option<String>,
pub is_modifiable: Option<bool>,
pub minimum_engine_version: Option<String>,
pub parameter_name: Option<String>,
pub parameter_value: Option<String>,
pub source: Option<String>,
}
```
Detailed information about an individual parameter.
Fields
---
`allowed_values: Option<String>`Specifies the valid range of values for the parameter.
`apply_method: Option<String>`Indicates when to apply parameter updates.
`apply_type: Option<String>`Specifies the engine-specific parameters type.
`data_type: Option<String>`Specifies the valid data type for the parameter.
`description: Option<String>`Provides a description of the parameter.
`is_modifiable: Option<bool>` Indicates whether (`true`) or not (`false`) the parameter can be modified. Some parameters have security or operational implications that prevent them from being changed.
`minimum_engine_version: Option<String>`The earliest engine version to which the parameter can apply.
`parameter_name: Option<String>`Specifies the name of the parameter.
`parameter_value: Option<String>`Specifies the value of the parameter.
`source: Option<String>`Indicates the source of the parameter value.
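Because most of these fields are optional, downstream code usually has to unwrap them defensively. The following is a small, self-contained sketch; the sample parameter values are hypothetical.

```
use rusoto_docdb::Parameter;

/// Print only the parameters that can actually be changed, along with
/// how a change would be applied.
fn print_modifiable(parameters: &[Parameter]) {
    for p in parameters {
        if p.is_modifiable.unwrap_or(false) {
            println!(
                "{:<40} = {:<20} (apply type: {})",
                p.parameter_name.as_deref().unwrap_or("-"),
                p.parameter_value.as_deref().unwrap_or("-"),
                p.apply_type.as_deref().unwrap_or("-"),
            );
        }
    }
}

fn main() {
    // A hypothetical parameter, as it might come back from a
    // parameter-group describe call.
    let example = Parameter {
        parameter_name: Some("tls".to_string()),
        parameter_value: Some("enabled".to_string()),
        is_modifiable: Some(true),
        apply_type: Some("static".to_string()),
        ..Default::default()
    };
    print_modifiable(&[example]);
}
```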
Trait Implementations
---
### impl Clone for Parameter
#### fn clone(&self) -> Parameter
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for Parameter
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for Parameter
#### fn default() -> Parameter
Returns the “default value” for a type.
### impl PartialEq<Parameter> for Parameter
#### fn eq(&self, other: &Parameter) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Parameter) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for Parameter
Auto Trait Implementations
---
### impl RefUnwindSafe for Parameter
### impl Send for Parameter
### impl Sync for Parameter
### impl Unpin for Parameter
### impl UnwindSafe for Parameter
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::PendingCloudwatchLogsExports
===
```
pub struct PendingCloudwatchLogsExports {
pub log_types_to_disable: Option<Vec<String>>,
pub log_types_to_enable: Option<Vec<String>>,
}
```
A list of the log types whose configuration is still pending. These log types are in the process of being activated or deactivated.
Fields
---
`log_types_to_disable: Option<Vec<String>>`Log types that are in the process of being deactivated. After they are deactivated, these log types aren't exported to Amazon CloudWatch Logs.
`log_types_to_enable: Option<Vec<String>>`Log types that are in the process of being activated. After they are activated, these log types are exported to Amazon CloudWatch Logs.
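The struct is read-only output. A small helper along these lines (a sketch using only the fields above) can report what is still in flight:
```
use rusoto_docdb::PendingCloudwatchLogsExports;

// Prints which log exports are still being switched on or off.
fn summarize_pending_exports(pending: &PendingCloudwatchLogsExports) {
    for log_type in pending.log_types_to_enable.iter().flatten() {
        println!("still enabling export: {}", log_type);
    }
    for log_type in pending.log_types_to_disable.iter().flatten() {
        println!("still disabling export: {}", log_type);
    }
}
```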
Trait Implementations
---
source### impl Clone for PendingCloudwatchLogsExports
source#### fn clone(&self) -> PendingCloudwatchLogsExports
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for PendingCloudwatchLogsExports
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for PendingCloudwatchLogsExports
source#### fn default() -> PendingCloudwatchLogsExports
Returns the “default value” for a type. Read more
source### impl PartialEq<PendingCloudwatchLogsExports> for PendingCloudwatchLogsExports
source#### fn eq(&self, other: &PendingCloudwatchLogsExports) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PendingCloudwatchLogsExports) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for PendingCloudwatchLogsExports
Auto Trait Implementations
---
### impl RefUnwindSafe for PendingCloudwatchLogsExports
### impl Send for PendingCloudwatchLogsExports
### impl Sync for PendingCloudwatchLogsExports
### impl Unpin for PendingCloudwatchLogsExports
### impl UnwindSafe for PendingCloudwatchLogsExports
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::PendingMaintenanceAction
===
```
pub struct PendingMaintenanceAction {
pub action: Option<String>,
pub auto_applied_after_date: Option<String>,
pub current_apply_date: Option<String>,
pub description: Option<String>,
pub forced_apply_date: Option<String>,
pub opt_in_status: Option<String>,
}
```
Provides information about a pending maintenance action for a resource.
Fields
---
`action: Option<String>`The type of pending maintenance action that is available for the resource.
`auto_applied_after_date: Option<String>`The date of the maintenance window when the action is applied. The maintenance action is applied to the resource during its first maintenance window after this date. If this date is specified, any `next-maintenance` opt-in requests are ignored.
`current_apply_date: Option<String>`The effective date when the pending maintenance action is applied to the resource.
`description: Option<String>`A description providing more detail about the maintenance action.
`forced_apply_date: Option<String>`The date when the maintenance action is automatically applied. The maintenance action is applied to the resource on this date regardless of the maintenance window for the resource. If this date is specified, any `immediate` opt-in requests are ignored.
`opt_in_status: Option<String>`Indicates the type of opt-in request that has been received for the resource.
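Every field is optional output, so any summary has to tolerate missing data. A minimal sketch built on the fields above:
```
use rusoto_docdb::PendingMaintenanceAction;

// Renders one pending action as a single log line, tolerating absent fields.
fn describe_action(action: &PendingMaintenanceAction) -> String {
    format!(
        "{} (opt-in: {}) {}",
        action.action.as_deref().unwrap_or("unknown action"),
        action.opt_in_status.as_deref().unwrap_or("none"),
        action.description.as_deref().unwrap_or("")
    )
}
```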
Trait Implementations
---
source### impl Clone for PendingMaintenanceAction
source#### fn clone(&self) -> PendingMaintenanceAction
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for PendingMaintenanceAction
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for PendingMaintenanceAction
source#### fn default() -> PendingMaintenanceAction
Returns the “default value” for a type. Read more
source### impl PartialEq<PendingMaintenanceAction> for PendingMaintenanceAction
source#### fn eq(&self, other: &PendingMaintenanceAction) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PendingMaintenanceAction) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for PendingMaintenanceAction
Auto Trait Implementations
---
### impl RefUnwindSafe for PendingMaintenanceAction
### impl Send for PendingMaintenanceAction
### impl Sync for PendingMaintenanceAction
### impl Unpin for PendingMaintenanceAction
### impl UnwindSafe for PendingMaintenanceAction
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::PendingMaintenanceActionsMessage
===
```
pub struct PendingMaintenanceActionsMessage {
pub marker: Option<String>,
pub pending_maintenance_actions: Option<Vec<ResourcePendingMaintenanceActions>>,
}
```
Represents the output of DescribePendingMaintenanceActions.
Fields
---
`marker: Option<String>`An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by `MaxRecords`.
`pending_maintenance_actions: Option<Vec<ResourcePendingMaintenanceActions>>`The maintenance actions to be applied.
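This message is what `describe_pending_maintenance_actions` on the `Docdb` trait returns. A sketch of fetching and walking it, assuming an async executor such as Tokio; the region is a placeholder and pagination via `marker` is not handled:
```
use rusoto_core::Region;
use rusoto_docdb::{DescribePendingMaintenanceActionsMessage, Docdb, DocdbClient};

// Lists every resource that has maintenance actions queued.
async fn list_pending_maintenance() {
    let client = DocdbClient::new(Region::UsEast1);
    let request = DescribePendingMaintenanceActionsMessage::default();
    match client.describe_pending_maintenance_actions(request).await {
        Ok(message) => {
            for resource in message.pending_maintenance_actions.unwrap_or_default() {
                let count = resource
                    .pending_maintenance_action_details
                    .as_ref()
                    .map_or(0, |details| details.len());
                println!(
                    "{}: {} pending action(s)",
                    resource.resource_identifier.as_deref().unwrap_or("unknown"),
                    count
                );
            }
        }
        Err(err) => eprintln!("DescribePendingMaintenanceActions failed: {}", err),
    }
}
```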
Trait Implementations
---
source### impl Clone for PendingMaintenanceActionsMessage
source#### fn clone(&self) -> PendingMaintenanceActionsMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for PendingMaintenanceActionsMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for PendingMaintenanceActionsMessage
source#### fn default() -> PendingMaintenanceActionsMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<PendingMaintenanceActionsMessage> for PendingMaintenanceActionsMessage
source#### fn eq(&self, other: &PendingMaintenanceActionsMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PendingMaintenanceActionsMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for PendingMaintenanceActionsMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for PendingMaintenanceActionsMessage
### impl Send for PendingMaintenanceActionsMessage
### impl Sync for PendingMaintenanceActionsMessage
### impl Unpin for PendingMaintenanceActionsMessage
### impl UnwindSafe for PendingMaintenanceActionsMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::PendingModifiedValues
===
```
pub struct PendingModifiedValues {
pub allocated_storage: Option<i64>,
pub backup_retention_period: Option<i64>,
pub ca_certificate_identifier: Option<String>,
pub db_instance_class: Option<String>,
pub db_instance_identifier: Option<String>,
pub db_subnet_group_name: Option<String>,
pub engine_version: Option<String>,
pub iops: Option<i64>,
pub license_model: Option<String>,
pub master_user_password: Option<String>,
pub multi_az: Option<bool>,
pub pending_cloudwatch_logs_exports: Option<PendingCloudwatchLogsExports>,
pub port: Option<i64>,
pub storage_type: Option<String>,
}
```
One or more modified settings for an instance. These modified settings have been requested, but haven't been applied yet.
Fields
---
`allocated_storage: Option<i64>` Contains the new `AllocatedStorage` size for the instance that will be applied or is currently being applied.
`backup_retention_period: Option<i64>`Specifies the pending number of days for which automated backups are retained.
`ca_certificate_identifier: Option<String>`Specifies the identifier of the certificate authority (CA) certificate for the DB instance.
`db_instance_class: Option<String>` Contains the new `DBInstanceClass` for the instance that will be applied or is currently being applied.
`db_instance_identifier: Option<String>` Contains the new `DBInstanceIdentifier` for the instance that will be applied or is currently being applied.
`db_subnet_group_name: Option<String>`The new subnet group for the instance.
`engine_version: Option<String>`Indicates the database engine version.
`iops: Option<i64>`Specifies the new Provisioned IOPS value for the instance that will be applied or is currently being applied.
`license_model: Option<String>`The license model for the instance.
Valid values: `license-included`, `bring-your-own-license`, `general-public-license`
`master_user_password: Option<String>`Contains the pending or currently in-progress change of the master credentials for the instance.
`multi_az: Option<bool>`Indicates that the Single-AZ instance is to change to a Multi-AZ deployment.
`pending_cloudwatch_logs_exports: Option<PendingCloudwatchLogsExports>`A list of the log types whose configuration is still pending. These log types are in the process of being activated or deactivated.
`port: Option<i64>`Specifies the pending port for the instance.
`storage_type: Option<String>`Specifies the storage type to be associated with the instance.
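These values normally arrive nested inside a `DBInstance` returned by the describe operations. A sketch that checks whether anything is still waiting to be applied; the `pending_modified_values` field on `DBInstance` is assumed from this crate:
```
use rusoto_docdb::DBInstance;

// True when the instance still has modifications waiting to be applied.
fn has_pending_changes(instance: &DBInstance) -> bool {
    match &instance.pending_modified_values {
        Some(pending) => {
            pending.db_instance_class.is_some()
                || pending.engine_version.is_some()
                || pending.pending_cloudwatch_logs_exports.is_some()
        }
        None => false,
    }
}
```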
Trait Implementations
---
source### impl Clone for PendingModifiedValues
source#### fn clone(&self) -> PendingModifiedValues
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for PendingModifiedValues
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for PendingModifiedValues
source#### fn default() -> PendingModifiedValues
Returns the “default value” for a type. Read more
source### impl PartialEq<PendingModifiedValues> for PendingModifiedValues
source#### fn eq(&self, other: &PendingModifiedValues) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &PendingModifiedValues) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for PendingModifiedValues
Auto Trait Implementations
---
### impl RefUnwindSafe for PendingModifiedValues
### impl Send for PendingModifiedValues
### impl Sync for PendingModifiedValues
### impl Unpin for PendingModifiedValues
### impl UnwindSafe for PendingModifiedValues
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::RebootDBInstanceMessage
===
```
pub struct RebootDBInstanceMessage {
pub db_instance_identifier: String,
pub force_failover: Option<bool>,
}
```
Represents the input to RebootDBInstance.
Fields
---
`db_instance_identifier: String`The instance identifier. This parameter is stored as a lowercase string.
Constraints:
* Must match the identifier of an existing `DBInstance`.
`force_failover: Option<bool>` When `true`, the reboot is conducted through a Multi-AZ failover.
Constraint: You can't specify `true` if the instance is not configured for Multi-AZ.
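A sketch of building this message and handing it to `reboot_db_instance` on the `Docdb` trait; the region and instance identifier are placeholders and an async executor is assumed:
```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, RebootDBInstanceMessage};

// Requests a plain (non-failover) reboot of one instance.
async fn reboot_instance() {
    let client = DocdbClient::new(Region::UsEast1);
    let request = RebootDBInstanceMessage {
        db_instance_identifier: "sample-docdb-instance".to_string(),
        force_failover: Some(false),
    };
    match client.reboot_db_instance(request).await {
        Ok(result) => {
            if let Some(instance) = result.db_instance {
                println!("Reboot requested for {:?}", instance.db_instance_identifier);
            }
        }
        Err(err) => eprintln!("RebootDBInstance failed: {}", err),
    }
}
```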
Trait Implementations
---
source### impl Clone for RebootDBInstanceMessage
source#### fn clone(&self) -> RebootDBInstanceMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RebootDBInstanceMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RebootDBInstanceMessage
source#### fn default() -> RebootDBInstanceMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<RebootDBInstanceMessage> for RebootDBInstanceMessage
source#### fn eq(&self, other: &RebootDBInstanceMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RebootDBInstanceMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RebootDBInstanceMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for RebootDBInstanceMessage
### impl Send for RebootDBInstanceMessage
### impl Sync for RebootDBInstanceMessage
### impl Unpin for RebootDBInstanceMessage
### impl UnwindSafe for RebootDBInstanceMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::RebootDBInstanceResult
===
```
pub struct RebootDBInstanceResult {
pub db_instance: Option<DBInstance>,
}
```
Fields
---
`db_instance: Option<DBInstance>`
Trait Implementations
---
source### impl Clone for RebootDBInstanceResult
source#### fn clone(&self) -> RebootDBInstanceResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RebootDBInstanceResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RebootDBInstanceResult
source#### fn default() -> RebootDBInstanceResult
Returns the “default value” for a type. Read more
source### impl PartialEq<RebootDBInstanceResult> for RebootDBInstanceResult
source#### fn eq(&self, other: &RebootDBInstanceResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RebootDBInstanceResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RebootDBInstanceResult
Auto Trait Implementations
---
### impl RefUnwindSafe for RebootDBInstanceResult
### impl Send for RebootDBInstanceResult
### impl Sync for RebootDBInstanceResult
### impl Unpin for RebootDBInstanceResult
### impl UnwindSafe for RebootDBInstanceResult
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::RemoveFromGlobalClusterMessage
===
```
pub struct RemoveFromGlobalClusterMessage {
pub db_cluster_identifier: String,
pub global_cluster_identifier: String,
}
```
Represents the input to RemoveFromGlobalCluster.
Fields
---
`db_cluster_identifier: String`The Amazon Resource Name (ARN) identifying the cluster that was detached from the Amazon DocumentDB global cluster.
`global_cluster_identifier: String`The cluster identifier to detach from the Amazon DocumentDB global cluster.
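A sketch of the matching `remove_from_global_cluster` call on the `Docdb` trait; the ARN, identifiers, and region are placeholders:
```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, RemoveFromGlobalClusterMessage};

// Detaches a regional cluster from its global cluster.
async fn detach_from_global_cluster() {
    let client = DocdbClient::new(Region::UsEast1);
    let request = RemoveFromGlobalClusterMessage {
        db_cluster_identifier: "arn:aws:rds:us-east-1:123456789012:cluster:sample-cluster"
            .to_string(),
        global_cluster_identifier: "sample-global-cluster".to_string(),
    };
    match client.remove_from_global_cluster(request).await {
        Ok(result) => println!("Global cluster after detach: {:?}", result.global_cluster),
        Err(err) => eprintln!("RemoveFromGlobalCluster failed: {}", err),
    }
}
```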
Trait Implementations
---
source### impl Clone for RemoveFromGlobalClusterMessage
source#### fn clone(&self) -> RemoveFromGlobalClusterMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RemoveFromGlobalClusterMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RemoveFromGlobalClusterMessage
source#### fn default() -> RemoveFromGlobalClusterMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<RemoveFromGlobalClusterMessage> for RemoveFromGlobalClusterMessage
source#### fn eq(&self, other: &RemoveFromGlobalClusterMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RemoveFromGlobalClusterMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RemoveFromGlobalClusterMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for RemoveFromGlobalClusterMessage
### impl Send for RemoveFromGlobalClusterMessage
### impl Sync for RemoveFromGlobalClusterMessage
### impl Unpin for RemoveFromGlobalClusterMessage
### impl UnwindSafe for RemoveFromGlobalClusterMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::RemoveFromGlobalClusterResult
===
```
pub struct RemoveFromGlobalClusterResult {
pub global_cluster: Option<GlobalCluster>,
}
```
Fields
---
`global_cluster: Option<GlobalCluster>`
Trait Implementations
---
source### impl Clone for RemoveFromGlobalClusterResult
source#### fn clone(&self) -> RemoveFromGlobalClusterResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RemoveFromGlobalClusterResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RemoveFromGlobalClusterResult
source#### fn default() -> RemoveFromGlobalClusterResult
Returns the “default value” for a type. Read more
source### impl PartialEq<RemoveFromGlobalClusterResult> for RemoveFromGlobalClusterResult
source#### fn eq(&self, other: &RemoveFromGlobalClusterResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RemoveFromGlobalClusterResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RemoveFromGlobalClusterResult
Auto Trait Implementations
---
### impl RefUnwindSafe for RemoveFromGlobalClusterResult
### impl Send for RemoveFromGlobalClusterResult
### impl Sync for RemoveFromGlobalClusterResult
### impl Unpin for RemoveFromGlobalClusterResult
### impl UnwindSafe for RemoveFromGlobalClusterResult
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::RemoveSourceIdentifierFromSubscriptionMessage
===
```
pub struct RemoveSourceIdentifierFromSubscriptionMessage {
pub source_identifier: String,
pub subscription_name: String,
}
```
Represents the input to RemoveSourceIdentifierFromSubscription.
Fields
---
`source_identifier: String` The source identifier to be removed from the subscription, such as the instance identifier for an instance, or the name of a security group.
`subscription_name: String`The name of the Amazon DocumentDB event notification subscription that you want to remove a source identifier from.
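A sketch of invoking `remove_source_identifier_from_subscription` with this message; the subscription and source names are placeholders:
```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, RemoveSourceIdentifierFromSubscriptionMessage};

// Stops one instance from feeding events into a subscription.
async fn remove_source_identifier() {
    let client = DocdbClient::new(Region::UsEast1);
    let request = RemoveSourceIdentifierFromSubscriptionMessage {
        source_identifier: "sample-docdb-instance".to_string(),
        subscription_name: "sample-subscription".to_string(),
    };
    match client.remove_source_identifier_from_subscription(request).await {
        Ok(result) => println!("Updated subscription: {:?}", result.event_subscription),
        Err(err) => eprintln!("RemoveSourceIdentifierFromSubscription failed: {}", err),
    }
}
```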
Trait Implementations
---
source### impl Clone for RemoveSourceIdentifierFromSubscriptionMessage
source#### fn clone(&self) -> RemoveSourceIdentifierFromSubscriptionMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RemoveSourceIdentifierFromSubscriptionMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RemoveSourceIdentifierFromSubscriptionMessage
source#### fn default() -> RemoveSourceIdentifierFromSubscriptionMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<RemoveSourceIdentifierFromSubscriptionMessage> for RemoveSourceIdentifierFromSubscriptionMessage
source#### fn eq(&self, other: &RemoveSourceIdentifierFromSubscriptionMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RemoveSourceIdentifierFromSubscriptionMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RemoveSourceIdentifierFromSubscriptionMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for RemoveSourceIdentifierFromSubscriptionMessage
### impl Send for RemoveSourceIdentifierFromSubscriptionMessage
### impl Sync for RemoveSourceIdentifierFromSubscriptionMessage
### impl Unpin for RemoveSourceIdentifierFromSubscriptionMessage
### impl UnwindSafe for RemoveSourceIdentifierFromSubscriptionMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::RemoveSourceIdentifierFromSubscriptionResult
===
```
pub struct RemoveSourceIdentifierFromSubscriptionResult {
pub event_subscription: Option<EventSubscription>,
}
```
Fields
---
`event_subscription: Option<EventSubscription>`
Trait Implementations
---
source### impl Clone for RemoveSourceIdentifierFromSubscriptionResult
source#### fn clone(&self) -> RemoveSourceIdentifierFromSubscriptionResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RemoveSourceIdentifierFromSubscriptionResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RemoveSourceIdentifierFromSubscriptionResult
source#### fn default() -> RemoveSourceIdentifierFromSubscriptionResult
Returns the “default value” for a type. Read more
source### impl PartialEq<RemoveSourceIdentifierFromSubscriptionResult> for RemoveSourceIdentifierFromSubscriptionResult
source#### fn eq(&self, other: &RemoveSourceIdentifierFromSubscriptionResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RemoveSourceIdentifierFromSubscriptionResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RemoveSourceIdentifierFromSubscriptionResult
Auto Trait Implementations
---
### impl RefUnwindSafe for RemoveSourceIdentifierFromSubscriptionResult
### impl Send for RemoveSourceIdentifierFromSubscriptionResult
### impl Sync for RemoveSourceIdentifierFromSubscriptionResult
### impl Unpin for RemoveSourceIdentifierFromSubscriptionResult
### impl UnwindSafe for RemoveSourceIdentifierFromSubscriptionResult
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::RemoveTagsFromResourceMessage
===
```
pub struct RemoveTagsFromResourceMessage {
pub resource_name: String,
pub tag_keys: Vec<String>,
}
```
Represents the input to RemoveTagsFromResource.
Fields
---
`resource_name: String`The Amazon DocumentDB resource that the tags are removed from. This value is an Amazon Resource Name (ARN).
`tag_keys: Vec<String>`The tag key (name) of the tag to be removed.
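A sketch of calling `remove_tags_from_resource`; the operation returns no payload, so only the error path needs handling. The ARN and tag keys are placeholders:
```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, RemoveTagsFromResourceMessage};

// Removes two tags from a cluster by key name.
async fn remove_tags() {
    let client = DocdbClient::new(Region::UsEast1);
    let request = RemoveTagsFromResourceMessage {
        resource_name: "arn:aws:rds:us-east-1:123456789012:cluster:sample-cluster".to_string(),
        tag_keys: vec!["environment".to_string(), "owner".to_string()],
    };
    if let Err(err) = client.remove_tags_from_resource(request).await {
        eprintln!("RemoveTagsFromResource failed: {}", err);
    }
}
```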
Trait Implementations
---
source### impl Clone for RemoveTagsFromResourceMessage
source#### fn clone(&self) -> RemoveTagsFromResourceMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RemoveTagsFromResourceMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RemoveTagsFromResourceMessage
source#### fn default() -> RemoveTagsFromResourceMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<RemoveTagsFromResourceMessage> for RemoveTagsFromResourceMessage
source#### fn eq(&self, other: &RemoveTagsFromResourceMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RemoveTagsFromResourceMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RemoveTagsFromResourceMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for RemoveTagsFromResourceMessage
### impl Send for RemoveTagsFromResourceMessage
### impl Sync for RemoveTagsFromResourceMessage
### impl Unpin for RemoveTagsFromResourceMessage
### impl UnwindSafe for RemoveTagsFromResourceMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`) Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::ResetDBClusterParameterGroupMessage
===
```
pub struct ResetDBClusterParameterGroupMessage {
pub db_cluster_parameter_group_name: String,
pub parameters: Option<Vec<Parameter>>,
pub reset_all_parameters: Option<bool>,
}
```
Represents the input to ResetDBClusterParameterGroup.
Fields
---
`db_cluster_parameter_group_name: String`The name of the cluster parameter group to reset.
`parameters: Option<Vec<Parameter>>`A list of parameter names in the cluster parameter group to reset to the default values. You can't use this parameter if the `ResetAllParameters` parameter is set to `true`.
`reset_all_parameters: Option<bool>`A value that is set to `true` to reset all parameters in the cluster parameter group to their default values, and `false` otherwise. You can't use this parameter if there is a list of parameter names specified for the `Parameters` parameter.
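A sketch of resetting every parameter in a group via `reset_db_cluster_parameter_group`; the group name and region are placeholders. To reset only selected parameters, leave `reset_all_parameters` unset and supply `parameters` instead:
```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, ResetDBClusterParameterGroupMessage};

// Resets the whole cluster parameter group back to its defaults.
async fn reset_all_parameters() {
    let client = DocdbClient::new(Region::UsEast1);
    let request = ResetDBClusterParameterGroupMessage {
        db_cluster_parameter_group_name: "sample-parameter-group".to_string(),
        reset_all_parameters: Some(true),
        // `parameters` and `reset_all_parameters: Some(true)` are mutually
        // exclusive per the field descriptions above.
        parameters: None,
    };
    match client.reset_db_cluster_parameter_group(request).await {
        Ok(result) => println!("Reset submitted: {:?}", result),
        Err(err) => eprintln!("ResetDBClusterParameterGroup failed: {}", err),
    }
}
```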
Trait Implementations
---
source### impl Clone for ResetDBClusterParameterGroupMessage
source#### fn clone(&self) -> ResetDBClusterParameterGroupMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ResetDBClusterParameterGroupMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ResetDBClusterParameterGroupMessage
source#### fn default() -> ResetDBClusterParameterGroupMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<ResetDBClusterParameterGroupMessage> for ResetDBClusterParameterGroupMessage
source#### fn eq(&self, other: &ResetDBClusterParameterGroupMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ResetDBClusterParameterGroupMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ResetDBClusterParameterGroupMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for ResetDBClusterParameterGroupMessage
### impl Send for ResetDBClusterParameterGroupMessage
### impl Sync for ResetDBClusterParameterGroupMessage
### impl Unpin for ResetDBClusterParameterGroupMessage
### impl UnwindSafe for ResetDBClusterParameterGroupMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::ResourcePendingMaintenanceActions
===
```
pub struct ResourcePendingMaintenanceActions {
pub pending_maintenance_action_details: Option<Vec<PendingMaintenanceAction>>,
pub resource_identifier: Option<String>,
}
```
Represents the output of ApplyPendingMaintenanceAction.
Fields
---
`pending_maintenance_action_details: Option<Vec<PendingMaintenanceAction>>`A list that provides details about the pending maintenance actions for the resource.
`resource_identifier: Option<String>`The Amazon Resource Name (ARN) of the resource that has pending maintenance actions.
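Since both fields are optional, consumers typically unwrap them defensively. A small sketch follows, assuming the `action` and `auto_applied_after_date` fields documented on the `PendingMaintenanceAction` page (not shown here).
```
use rusoto_docdb::ResourcePendingMaintenanceActions;
// Print each pending action for one resource; every field involved is optional.
fn print_pending(resource: &ResourcePendingMaintenanceActions) {
    let arn = resource.resource_identifier.as_deref().unwrap_or("<unknown ARN>");
    for action in resource.pending_maintenance_action_details.iter().flatten() {
        println!(
            "{}: {:?} (auto-applied after {:?})",
            arn, action.action, action.auto_applied_after_date
        );
    }
}
```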
Trait Implementations
---
source### impl Clone for ResourcePendingMaintenanceActions
source#### fn clone(&self) -> ResourcePendingMaintenanceActions
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for ResourcePendingMaintenanceActions
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for ResourcePendingMaintenanceActions
source#### fn default() -> ResourcePendingMaintenanceActions
Returns the “default value” for a type. Read more
source### impl PartialEq<ResourcePendingMaintenanceActions> for ResourcePendingMaintenanceActions
source#### fn eq(&self, other: &ResourcePendingMaintenanceActions) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ResourcePendingMaintenanceActions) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ResourcePendingMaintenanceActions
Auto Trait Implementations
---
### impl RefUnwindSafe for ResourcePendingMaintenanceActions
### impl Send for ResourcePendingMaintenanceActions
### impl Sync for ResourcePendingMaintenanceActions
### impl Unpin for ResourcePendingMaintenanceActions
### impl UnwindSafe for ResourcePendingMaintenanceActions
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::RestoreDBClusterFromSnapshotMessage
===
```
pub struct RestoreDBClusterFromSnapshotMessage {
pub availability_zones: Option<Vec<String>>,
pub db_cluster_identifier: String,
pub db_subnet_group_name: Option<String>,
pub deletion_protection: Option<bool>,
pub enable_cloudwatch_logs_exports: Option<Vec<String>>,
pub engine: String,
pub engine_version: Option<String>,
pub kms_key_id: Option<String>,
pub port: Option<i64>,
pub snapshot_identifier: String,
pub tags: Option<Vec<Tag>>,
pub vpc_security_group_ids: Option<Vec<String>>,
}
```
Represents the input to RestoreDBClusterFromSnapshot.
Fields
---
`availability_zones: Option<Vec<String>>`Provides the list of Amazon EC2 Availability Zones that instances in the restored DB cluster can be created in.
`db_cluster_identifier: String`The name of the cluster to create from the snapshot or cluster snapshot. This parameter isn't case sensitive.
Constraints:
* Must contain from 1 to 63 letters, numbers, or hyphens.
* The first character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
Example: `my-snapshot-id`
`db_subnet_group_name: Option<String>`The name of the subnet group to use for the new cluster.
Constraints: If provided, must match the name of an existing `DBSubnetGroup`.
Example: `mySubnetgroup`
`deletion_protection: Option<bool>`Specifies whether this cluster can be deleted. If `DeletionProtection` is enabled, the cluster cannot be deleted unless it is modified and `DeletionProtection` is disabled. `DeletionProtection` protects clusters from being accidentally deleted.
`enable_cloudwatch_logs_exports: Option<Vec<String>>`A list of log types that must be enabled for exporting to Amazon CloudWatch Logs.
`engine: String`The database engine to use for the new cluster.
Default: The same as source.
Constraint: Must be compatible with the engine of the source.
`engine_version: Option<String>`The version of the database engine to use for the new cluster.
`kms_key_id: Option<String>`The KMS key identifier to use when restoring an encrypted cluster from a DB snapshot or cluster snapshot.
The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are restoring a cluster with the same account that owns the KMS encryption key used to encrypt the new cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.
If you do not specify a value for the `KmsKeyId` parameter, then the following occurs:
* If the snapshot or cluster snapshot in `SnapshotIdentifier` is encrypted, then the restored cluster is encrypted using the KMS key that was used to encrypt the snapshot or the cluster snapshot.
* If the snapshot or the cluster snapshot in `SnapshotIdentifier` is not encrypted, then the restored DB cluster is not encrypted.
`port: Option<i64>`The port number on which the new cluster accepts connections.
Constraints: Must be a value from `1150` to `65535`.
Default: The same port as the original cluster.
`snapshot_identifier: String`The identifier for the snapshot or cluster snapshot to restore from.
You can use either the name or the Amazon Resource Name (ARN) to specify a cluster snapshot. However, you can use only the ARN to specify a snapshot.
Constraints:
* Must match the identifier of an existing snapshot.
`tags: Option<Vec<Tag>>`The tags to be assigned to the restored cluster.
`vpc_security_group_ids: Option<Vec<String>>`A list of virtual private cloud (VPC) security groups that the new cluster will belong to.
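Only `db_cluster_identifier`, `snapshot_identifier`, and `engine` are required; everything else can fall back to `Default` (i.e. `None`). A construction sketch with hypothetical identifiers:
```
use rusoto_docdb::RestoreDBClusterFromSnapshotMessage;
fn main() {
    let request = RestoreDBClusterFromSnapshotMessage {
        // 1-63 letters, numbers, or hyphens; must start with a letter.
        db_cluster_identifier: "my-restored-cluster".to_string(),
        // Name or ARN of an existing snapshot (hypothetical value).
        snapshot_identifier: "my-cluster-snapshot-2021-06-01".to_string(),
        // Must be compatible with the engine of the source snapshot.
        engine: "docdb".to_string(),
        // Remaining optional fields keep their defaults (None): same port as the
        // source, engine-default version, no explicit KMS key, and so on.
        ..Default::default()
    };
    println!("{:?}", request);
}
```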
Trait Implementations
---
source### impl Clone for RestoreDBClusterFromSnapshotMessage
source#### fn clone(&self) -> RestoreDBClusterFromSnapshotMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RestoreDBClusterFromSnapshotMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RestoreDBClusterFromSnapshotMessage
source#### fn default() -> RestoreDBClusterFromSnapshotMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<RestoreDBClusterFromSnapshotMessage> for RestoreDBClusterFromSnapshotMessage
source#### fn eq(&self, other: &RestoreDBClusterFromSnapshotMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RestoreDBClusterFromSnapshotMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RestoreDBClusterFromSnapshotMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for RestoreDBClusterFromSnapshotMessage
### impl Send for RestoreDBClusterFromSnapshotMessage
### impl Sync for RestoreDBClusterFromSnapshotMessage
### impl Unpin for RestoreDBClusterFromSnapshotMessage
### impl UnwindSafe for RestoreDBClusterFromSnapshotMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::RestoreDBClusterFromSnapshotResult
===
```
pub struct RestoreDBClusterFromSnapshotResult {
pub db_cluster: Option<DBCluster>,
}
```
Fields
---
`db_cluster: Option<DBCluster>`
Trait Implementations
---
source### impl Clone for RestoreDBClusterFromSnapshotResult
source#### fn clone(&self) -> RestoreDBClusterFromSnapshotResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RestoreDBClusterFromSnapshotResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RestoreDBClusterFromSnapshotResult
source#### fn default() -> RestoreDBClusterFromSnapshotResult
Returns the “default value” for a type. Read more
source### impl PartialEq<RestoreDBClusterFromSnapshotResult> for RestoreDBClusterFromSnapshotResult
source#### fn eq(&self, other: &RestoreDBClusterFromSnapshotResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RestoreDBClusterFromSnapshotResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RestoreDBClusterFromSnapshotResult
Auto Trait Implementations
---
### impl RefUnwindSafe for RestoreDBClusterFromSnapshotResult
### impl Send for RestoreDBClusterFromSnapshotResult
### impl Sync for RestoreDBClusterFromSnapshotResult
### impl Unpin for RestoreDBClusterFromSnapshotResult
### impl UnwindSafe for RestoreDBClusterFromSnapshotResult
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::RestoreDBClusterToPointInTimeMessage
===
```
pub struct RestoreDBClusterToPointInTimeMessage {
pub db_cluster_identifier: String,
pub db_subnet_group_name: Option<String>,
pub deletion_protection: Option<bool>,
pub enable_cloudwatch_logs_exports: Option<Vec<String>>,
pub kms_key_id: Option<String>,
pub port: Option<i64>,
pub restore_to_time: Option<String>,
pub source_db_cluster_identifier: String,
pub tags: Option<Vec<Tag>>,
pub use_latest_restorable_time: Option<bool>,
pub vpc_security_group_ids: Option<Vec<String>>,
}
```
Represents the input to RestoreDBClusterToPointInTime.
Fields
---
`db_cluster_identifier: String`The name of the new cluster to be created.
Constraints:
* Must contain from 1 to 63 letters, numbers, or hyphens.
* The first character must be a letter.
* Cannot end with a hyphen or contain two consecutive hyphens.
`db_subnet_group_name: Option<String>`The subnet group name to use for the new cluster.
Constraints: If provided, must match the name of an existing `DBSubnetGroup`.
Example: `mySubnetgroup`
`deletion_protection: Option<bool>`Specifies whether this cluster can be deleted. If `DeletionProtection` is enabled, the cluster cannot be deleted unless it is modified and `DeletionProtection` is disabled. `DeletionProtection` protects clusters from being accidentally deleted.
`enable_cloudwatch_logs_exports: Option<Vec<String>>`A list of log types that must be enabled for exporting to Amazon CloudWatch Logs.
`kms_key_id: Option<String>`The KMS key identifier to use when restoring an encrypted cluster from an encrypted cluster.
The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are restoring a cluster with the same account that owns the KMS encryption key used to encrypt the new cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.
You can restore to a new cluster and encrypt the new cluster with an KMS key that is different from the KMS key used to encrypt the source cluster. The new DB cluster is encrypted with the KMS key identified by the `KmsKeyId` parameter.
If you do not specify a value for the `KmsKeyId` parameter, then the following occurs:
* If the cluster is encrypted, then the restored cluster is encrypted using the KMS key that was used to encrypt the source cluster.
* If the cluster is not encrypted, then the restored cluster is not encrypted.
If `DBClusterIdentifier` refers to a cluster that is not encrypted, then the restore request is rejected.
`port: Option<i64>`The port number on which the new cluster accepts connections.
Constraints: Must be a value from `1150` to `65535`.
Default: The default port for the engine.
`restore_to_time: Option<String>`The date and time to restore the cluster to.
Valid values: A time in Universal Coordinated Time (UTC) format.
Constraints:
* Must be before the latest restorable time for the instance.
* Must be specified if the `UseLatestRestorableTime` parameter is not provided.
* Cannot be specified if the `UseLatestRestorableTime` parameter is `true`.
* Cannot be specified if the `RestoreType` parameter is `copy-on-write`.
Example: `2015-03-07T23:45:00Z`
`source_db_cluster_identifier: String`The identifier of the source cluster from which to restore.
Constraints:
* Must match the identifier of an existing `DBCluster`.
`tags: Option<Vec<Tag>>`The tags to be assigned to the restored cluster.
`use_latest_restorable_time: Option<bool>`A value that is set to `true` to restore the cluster to the latest restorable backup time, and `false` otherwise.
Default: `false`
Constraints: Cannot be specified if the `RestoreToTime` parameter is provided.
`vpc_security_group_ids: Option<Vec<String>>`A list of VPC security groups that the new cluster belongs to.
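`restore_to_time` and `use_latest_restorable_time` are mutually exclusive, so a request picks exactly one of the two. A construction sketch with hypothetical cluster identifiers (the timestamp is the example value from the constraints above):
```
use rusoto_docdb::RestoreDBClusterToPointInTimeMessage;
fn main() {
    // Restore to the latest restorable time; restore_to_time must stay None.
    let latest = RestoreDBClusterToPointInTimeMessage {
        db_cluster_identifier: "my-restored-cluster".to_string(),
        source_db_cluster_identifier: "my-source-cluster".to_string(),
        use_latest_restorable_time: Some(true),
        ..Default::default()
    };
    // Or restore to a specific UTC timestamp instead.
    let point_in_time = RestoreDBClusterToPointInTimeMessage {
        db_cluster_identifier: "my-restored-cluster-2".to_string(),
        source_db_cluster_identifier: "my-source-cluster".to_string(),
        restore_to_time: Some("2015-03-07T23:45:00Z".to_string()),
        ..Default::default()
    };
    println!("{:?}\n{:?}", latest, point_in_time);
}
```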
Trait Implementations
---
source### impl Clone for RestoreDBClusterToPointInTimeMessage
source#### fn clone(&self) -> RestoreDBClusterToPointInTimeMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RestoreDBClusterToPointInTimeMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RestoreDBClusterToPointInTimeMessage
source#### fn default() -> RestoreDBClusterToPointInTimeMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<RestoreDBClusterToPointInTimeMessage> for RestoreDBClusterToPointInTimeMessage
source#### fn eq(&self, other: &RestoreDBClusterToPointInTimeMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RestoreDBClusterToPointInTimeMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RestoreDBClusterToPointInTimeMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for RestoreDBClusterToPointInTimeMessage
### impl Send for RestoreDBClusterToPointInTimeMessage
### impl Sync for RestoreDBClusterToPointInTimeMessage
### impl Unpin for RestoreDBClusterToPointInTimeMessage
### impl UnwindSafe for RestoreDBClusterToPointInTimeMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::RestoreDBClusterToPointInTimeResult
===
```
pub struct RestoreDBClusterToPointInTimeResult {
pub db_cluster: Option<DBCluster>,
}
```
Fields
---
`db_cluster: Option<DBCluster>`
Trait Implementations
---
source### impl Clone for RestoreDBClusterToPointInTimeResult
source#### fn clone(&self) -> RestoreDBClusterToPointInTimeResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for RestoreDBClusterToPointInTimeResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for RestoreDBClusterToPointInTimeResult
source#### fn default() -> RestoreDBClusterToPointInTimeResult
Returns the “default value” for a type. Read more
source### impl PartialEq<RestoreDBClusterToPointInTimeResult> for RestoreDBClusterToPointInTimeResult
source#### fn eq(&self, other: &RestoreDBClusterToPointInTimeResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RestoreDBClusterToPointInTimeResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RestoreDBClusterToPointInTimeResult
Auto Trait Implementations
---
### impl RefUnwindSafe for RestoreDBClusterToPointInTimeResult
### impl Send for RestoreDBClusterToPointInTimeResult
### impl Sync for RestoreDBClusterToPointInTimeResult
### impl Unpin for RestoreDBClusterToPointInTimeResult
### impl UnwindSafe for RestoreDBClusterToPointInTimeResult
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::StartDBClusterMessage
===
```
pub struct StartDBClusterMessage {
pub db_cluster_identifier: String,
}
```
Fields
---
`db_cluster_identifier: String`The identifier of the cluster to restart. Example: `docdb-2019-05-28-15-24-52`
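A usage sketch follows. The async `Docdb` trait, `DocdbClient`, and `Region` are the usual rusoto client pieces and are assumptions here, since this page only documents the message struct itself; the region and identifier are example values.
```
use rusoto_core::Region;
use rusoto_docdb::{Docdb, DocdbClient, StartDBClusterMessage};
async fn start_cluster() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed client setup (not documented on this page).
    let client = DocdbClient::new(Region::UsEast1);
    let request = StartDBClusterMessage {
        db_cluster_identifier: "docdb-2019-05-28-15-24-52".to_string(),
    };
    // StartDBClusterResult carries an optional DBCluster describing the cluster.
    let result = client.start_db_cluster(request).await?;
    if let Some(cluster) = result.db_cluster {
        println!("started: {:?}", cluster);
    }
    Ok(())
}
```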
Trait Implementations
---
source### impl Clone for StartDBClusterMessage
source#### fn clone(&self) -> StartDBClusterMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for StartDBClusterMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for StartDBClusterMessage
source#### fn default() -> StartDBClusterMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<StartDBClusterMessage> for StartDBClusterMessage
source#### fn eq(&self, other: &StartDBClusterMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &StartDBClusterMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for StartDBClusterMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for StartDBClusterMessage
### impl Send for StartDBClusterMessage
### impl Sync for StartDBClusterMessage
### impl Unpin for StartDBClusterMessage
### impl UnwindSafe for StartDBClusterMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::StartDBClusterResult
===
```
pub struct StartDBClusterResult {
pub db_cluster: Option<DBCluster>,
}
```
Fields
---
`db_cluster: Option<DBCluster>`
Trait Implementations
---
source### impl Clone for StartDBClusterResult
source#### fn clone(&self) -> StartDBClusterResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for StartDBClusterResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for StartDBClusterResult
source#### fn default() -> StartDBClusterResult
Returns the “default value” for a type. Read more
source### impl PartialEq<StartDBClusterResult> for StartDBClusterResult
source#### fn eq(&self, other: &StartDBClusterResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &StartDBClusterResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for StartDBClusterResult
Auto Trait Implementations
---
### impl RefUnwindSafe for StartDBClusterResult
### impl Send for StartDBClusterResult
### impl Sync for StartDBClusterResult
### impl Unpin for StartDBClusterResult
### impl UnwindSafe for StartDBClusterResult
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::StopDBClusterMessage
===
```
pub struct StopDBClusterMessage {
pub db_cluster_identifier: String,
}
```
Fields
---
`db_cluster_identifier: String`The identifier of the cluster to stop. Example: `docdb-2019-05-28-15-24-52`
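Constructing the request mirrors `StartDBClusterMessage`. A brief sketch using the example identifier above; the `stop_db_cluster` call mentioned in the comment is the assumed companion operation on the crate's client, not something documented on this page.
```
use rusoto_docdb::StopDBClusterMessage;
fn main() {
    let request = StopDBClusterMessage {
        db_cluster_identifier: "docdb-2019-05-28-15-24-52".to_string(),
    };
    // Passed to the (assumed) stop_db_cluster client call, which returns
    // StopDBClusterResult with an optional DBCluster describing the stopped cluster.
    println!("{:?}", request);
}
```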
Trait Implementations
---
source### impl Clone for StopDBClusterMessage
source#### fn clone(&self) -> StopDBClusterMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for StopDBClusterMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for StopDBClusterMessage
source#### fn default() -> StopDBClusterMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<StopDBClusterMessage> for StopDBClusterMessage
source#### fn eq(&self, other: &StopDBClusterMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &StopDBClusterMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for StopDBClusterMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for StopDBClusterMessage
### impl Send for StopDBClusterMessage
### impl Sync for StopDBClusterMessage
### impl Unpin for StopDBClusterMessage
### impl UnwindSafe for StopDBClusterMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::StopDBClusterResult
===
```
pub struct StopDBClusterResult {
pub db_cluster: Option<DBCluster>,
}
```
Fields
---
`db_cluster: Option<DBCluster>`
Trait Implementations
---
source### impl Clone for StopDBClusterResult
source#### fn clone(&self) -> StopDBClusterResult
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for StopDBClusterResult
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for StopDBClusterResult
source#### fn default() -> StopDBClusterResult
Returns the “default value” for a type. Read more
source### impl PartialEq<StopDBClusterResult> for StopDBClusterResult
source#### fn eq(&self, other: &StopDBClusterResult) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &StopDBClusterResult) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for StopDBClusterResult
Auto Trait Implementations
---
### impl RefUnwindSafe for StopDBClusterResult
### impl Send for StopDBClusterResult
### impl Sync for StopDBClusterResult
### impl Unpin for StopDBClusterResult
### impl UnwindSafe for StopDBClusterResult
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::Subnet
===
```
pub struct Subnet {
pub subnet_availability_zone: Option<AvailabilityZone>,
pub subnet_identifier: Option<String>,
pub subnet_status: Option<String>,
}
```
Detailed information about a subnet.
Fields
---
`subnet_availability_zone: Option<AvailabilityZone>`Specifies the Availability Zone for the subnet.
`subnet_identifier: Option<String>`Specifies the identifier of the subnet.
`subnet_status: Option<String>`Specifies the status of the subnet.
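A small helper sketch for filtering subnets by status; the literal `"Active"` status string is an assumed example value, not something this page specifies.
```
use rusoto_docdb::Subnet;
// Collect identifiers of subnets reported as "Active"; every field is optional.
fn active_subnet_ids(subnets: &[Subnet]) -> Vec<&str> {
    subnets
        .iter()
        .filter(|s| s.subnet_status.as_deref() == Some("Active"))
        .filter_map(|s| s.subnet_identifier.as_deref())
        .collect()
}
```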
Trait Implementations
---
source### impl Clone for Subnet
source#### fn clone(&self) -> Subnet
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for Subnet
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for Subnet
source#### fn default() -> Subnet
Returns the “default value” for a type. Read more
source### impl PartialEq<Subnet> for Subnet
source#### fn eq(&self, other: &Subnet) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &Subnet) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for Subnet
Auto Trait Implementations
---
### impl RefUnwindSafe for Subnet
### impl Send for Subnet
### impl Sync for Subnet
### impl Unpin for Subnet
### impl UnwindSafe for Subnet
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::Tag
===
```
pub struct Tag {
pub key: Option<String>,
pub value: Option<String>,
}
```
Metadata assigned to an Amazon DocumentDB resource consisting of a key-value pair.
Fields
---
`key: Option<String>`The required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with "`aws:`" or "`rds:`". The string can contain only the set of Unicode letters, digits, white space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").
`value: Option<String>`The optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and can't be prefixed with "`aws:`" or "`rds:`". The string can contain only the set of Unicode letters, digits, white space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").
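Both fields are optional `String`s. A short construction sketch with hypothetical key/value pairs that respect the length and prefix constraints above:
```
use rusoto_docdb::Tag;
fn main() {
    // Keys up to 128 and values up to 256 Unicode characters; neither may be
    // prefixed with "aws:" or "rds:".
    let tags = vec![
        Tag {
            key: Some("environment".to_string()),
            value: Some("staging".to_string()),
        },
        Tag {
            key: Some("team".to_string()),
            value: Some("data-platform".to_string()),
        },
    ];
    println!("{:?}", tags);
}
```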
Trait Implementations
---
source### impl Clone for Tag
source#### fn clone(&self) -> Tag
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for Tag
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for Tag
source#### fn default() -> Tag
Returns the “default value” for a type. Read more
source### impl PartialEq<Tag> for Tag
source#### fn eq(&self, other: &Tag) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &Tag) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for Tag
Auto Trait Implementations
---
### impl RefUnwindSafe for Tag
### impl Send for Tag
### impl Sync for Tag
### impl Unpin for Tag
### impl UnwindSafe for Tag
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::TagListMessage
===
```
pub struct TagListMessage {
pub tag_list: Option<Vec<Tag>>,
}
```
Represents the output of ListTagsForResource.
Fields
---
`tag_list: Option<Vec<Tag>>`A list of one or more tags.
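A small sketch for consuming the output; `tag_list` and the individual `Tag` fields are all optional, so missing values are rendered as empty strings here.
```
use rusoto_docdb::TagListMessage;
// Print every key/value pair returned by ListTagsForResource.
fn print_tags(output: &TagListMessage) {
    for tag in output.tag_list.iter().flatten() {
        println!(
            "{} = {}",
            tag.key.as_deref().unwrap_or(""),
            tag.value.as_deref().unwrap_or("")
        );
    }
}
```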
Trait Implementations
---
source### impl Clone for TagListMessage
source#### fn clone(&self) -> TagListMessage
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for TagListMessage
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for TagListMessage
source#### fn default() -> TagListMessage
Returns the “default value” for a type. Read more
source### impl PartialEq<TagListMessage> for TagListMessage
source#### fn eq(&self, other: &TagListMessage) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &TagListMessage) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for TagListMessage
Auto Trait Implementations
---
### impl RefUnwindSafe for TagListMessage
### impl Send for TagListMessage
### impl Sync for TagListMessage
### impl Unpin for TagListMessage
### impl UnwindSafe for TagListMessage
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::UpgradeTarget
===
```
pub struct UpgradeTarget {
pub auto_upgrade: Option<bool>,
pub description: Option<String>,
pub engine: Option<String>,
pub engine_version: Option<String>,
pub is_major_version_upgrade: Option<bool>,
}
```
The version of the database engine that an instance can be upgraded to.
Fields
---
`auto_upgrade: Option<bool>`A value that indicates whether the target version is applied to any source DB instances that have `AutoMinorVersionUpgrade` set to `true`.
`description: Option<String>`The version of the database engine that an instance can be upgraded to.
`engine: Option<String>`The name of the upgrade target database engine.
`engine_version: Option<String>`The version number of the upgrade target database engine.
`is_major_version_upgrade: Option<bool>`A value that indicates whether a database engine is upgraded to a major version.
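As a quick illustration (not taken from the crate documentation), the sketch below filters a slice of `UpgradeTarget` values down to the engine versions that would be applied automatically; it relies only on the fields listed above.

```
use rusoto_docdb::UpgradeTarget;

// Hypothetical helper: engine versions that an automatic minor-version
// upgrade would move to, i.e. targets with auto_upgrade == Some(true).
fn auto_upgrade_versions(targets: &[UpgradeTarget]) -> Vec<String> {
    targets
        .iter()
        .filter(|t| t.auto_upgrade == Some(true))
        .filter_map(|t| t.engine_version.clone())
        .collect()
}
```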
Trait Implementations
---
source### impl Clone for UpgradeTarget
source#### fn clone(&self) -> UpgradeTarget
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for UpgradeTarget
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for UpgradeTarget
source#### fn default() -> UpgradeTarget
Returns the “default value” for a type. Read more
source### impl PartialEq<UpgradeTarget> for UpgradeTarget
source#### fn eq(&self, other: &UpgradeTarget) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &UpgradeTarget) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for UpgradeTarget
Auto Trait Implementations
---
### impl RefUnwindSafe for UpgradeTarget
### impl Send for UpgradeTarget
### impl Sync for UpgradeTarget
### impl Unpin for UpgradeTarget
### impl UnwindSafe for UpgradeTarget
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Struct rusoto_docdb::VpcSecurityGroupMembership
===
```
pub struct VpcSecurityGroupMembership {
pub status: Option<String>,
pub vpc_security_group_id: Option<String>,
}
```
Used as a response element for queries on virtual private cloud (VPC) security group membership.
Fields
---
`status: Option<String>`The status of the VPC security group.
`vpc_security_group_id: Option<String>`The name of the VPC security group.
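A minimal sketch (not from the crate docs) of filtering memberships by status follows; the `"active"` status string is only illustrative of what the service reports for an attached group.

```
use rusoto_docdb::VpcSecurityGroupMembership;

// Hypothetical helper: IDs of the VPC security groups whose membership is
// currently reported as "active".
fn active_group_ids(memberships: &[VpcSecurityGroupMembership]) -> Vec<String> {
    memberships
        .iter()
        .filter(|m| m.status.as_deref() == Some("active"))
        .filter_map(|m| m.vpc_security_group_id.clone())
        .collect()
}
```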
Trait Implementations
---
source### impl Clone for VpcSecurityGroupMembership
source#### fn clone(&self) -> VpcSecurityGroupMembership
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
source### impl Debug for VpcSecurityGroupMembership
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Default for VpcSecurityGroupMembership
source#### fn default() -> VpcSecurityGroupMembership
Returns the “default value” for a type. Read more
source### impl PartialEq<VpcSecurityGroupMembership> for VpcSecurityGroupMembership
source#### fn eq(&self, other: &VpcSecurityGroupMembership) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &VpcSecurityGroupMembership) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for VpcSecurityGroupMembership
Auto Trait Implementations
---
### impl RefUnwindSafe for VpcSecurityGroupMembership
### impl Send for VpcSecurityGroupMembership
### impl Sync for VpcSecurityGroupMembership
### impl Unpin for VpcSecurityGroupMembership
### impl UnwindSafe for VpcSecurityGroupMembership
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
source#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
source#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API. (`toowned_clone_into`)
Uses borrowed data to replace owned data, usually by cloning. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::AddSourceIdentifierToSubscriptionError
===
```
pub enum AddSourceIdentifierToSubscriptionError {
SourceNotFoundFault(String),
SubscriptionNotFoundFault(String),
}
```
Errors returned by AddSourceIdentifierToSubscription
Variants
---
### `SourceNotFoundFault(String)`
The requested source could not be found.
### `SubscriptionNotFoundFault(String)`
The subscription name does not exist.
Implementations
---
source### impl AddSourceIdentifierToSubscriptionError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<AddSourceIdentifierToSubscriptionError>
Trait Implementations
---
source### impl Debug for AddSourceIdentifierToSubscriptionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for AddSourceIdentifierToSubscriptionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for AddSourceIdentifierToSubscriptionError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<AddSourceIdentifierToSubscriptionError> for AddSourceIdentifierToSubscriptionError
source#### fn eq(&self, other: &AddSourceIdentifierToSubscriptionError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AddSourceIdentifierToSubscriptionError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for AddSourceIdentifierToSubscriptionError
Auto Trait Implementations
---
### impl RefUnwindSafe for AddSourceIdentifierToSubscriptionError
### impl Send for AddSourceIdentifierToSubscriptionError
### impl Sync for AddSourceIdentifierToSubscriptionError
### impl Unpin for AddSourceIdentifierToSubscriptionError
### impl UnwindSafe for AddSourceIdentifierToSubscriptionError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::AddTagsToResourceError
===
```
pub enum AddTagsToResourceError {
DBClusterNotFoundFault(String),
DBInstanceNotFoundFault(String),
DBSnapshotNotFoundFault(String),
}
```
Errors returned by AddTagsToResource
Variants
---
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `DBInstanceNotFoundFault(String)`
`DBInstanceIdentifier` doesn't refer to an existing instance.
### `DBSnapshotNotFoundFault(String)`
`DBSnapshotIdentifier` doesn't refer to an existing snapshot.
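For context, a hedged sketch of how such a service error is typically consumed: rusoto wraps it in `RusotoError::Service`, so the variants above can be matched after a failed `add_tags_to_resource` call. The helper below is illustrative only and not part of the crate documentation.

```
use rusoto_core::RusotoError;
use rusoto_docdb::AddTagsToResourceError;

// Hypothetical helper: turn a failed AddTagsToResource call into a message,
// treating all three "not found" faults the same way.
fn describe_failure(err: &RusotoError<AddTagsToResourceError>) -> String {
    match err {
        RusotoError::Service(AddTagsToResourceError::DBClusterNotFoundFault(msg))
        | RusotoError::Service(AddTagsToResourceError::DBInstanceNotFoundFault(msg))
        | RusotoError::Service(AddTagsToResourceError::DBSnapshotNotFoundFault(msg)) => {
            format!("resource to tag was not found: {}", msg)
        }
        other => format!("request failed: {:?}", other),
    }
}
```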
Implementations
---
source### impl AddTagsToResourceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<AddTagsToResourceError>
Trait Implementations
---
source### impl Debug for AddTagsToResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for AddTagsToResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for AddTagsToResourceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<AddTagsToResourceError> for AddTagsToResourceError
source#### fn eq(&self, other: &AddTagsToResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &AddTagsToResourceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for AddTagsToResourceError
Auto Trait Implementations
---
### impl RefUnwindSafe for AddTagsToResourceError
### impl Send for AddTagsToResourceError
### impl Sync for AddTagsToResourceError
### impl Unpin for AddTagsToResourceError
### impl UnwindSafe for AddTagsToResourceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::ApplyPendingMaintenanceActionError
===
```
pub enum ApplyPendingMaintenanceActionError {
InvalidDBClusterStateFault(String),
InvalidDBInstanceStateFault(String),
ResourceNotFoundFault(String),
}
```
Errors returned by ApplyPendingMaintenanceAction
Variants
---
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
### `InvalidDBInstanceStateFault(String)`
The specified instance isn't in the *available* state.
### `ResourceNotFoundFault(String)`
The specified resource ID was not found.
Implementations
---
source### impl ApplyPendingMaintenanceActionError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ApplyPendingMaintenanceActionError>
Trait Implementations
---
source### impl Debug for ApplyPendingMaintenanceActionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ApplyPendingMaintenanceActionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ApplyPendingMaintenanceActionError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ApplyPendingMaintenanceActionError> for ApplyPendingMaintenanceActionError
source#### fn eq(&self, other: &ApplyPendingMaintenanceActionError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ApplyPendingMaintenanceActionError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ApplyPendingMaintenanceActionError
Auto Trait Implementations
---
### impl RefUnwindSafe for ApplyPendingMaintenanceActionError
### impl Send for ApplyPendingMaintenanceActionError
### impl Sync for ApplyPendingMaintenanceActionError
### impl Unpin for ApplyPendingMaintenanceActionError
### impl UnwindSafe for ApplyPendingMaintenanceActionError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::CopyDBClusterParameterGroupError
===
```
pub enum CopyDBClusterParameterGroupError {
DBParameterGroupAlreadyExistsFault(String),
DBParameterGroupNotFoundFault(String),
DBParameterGroupQuotaExceededFault(String),
}
```
Errors returned by CopyDBClusterParameterGroup
Variants
---
### `DBParameterGroupAlreadyExistsFault(String)`
A parameter group with the same name already exists.
### `DBParameterGroupNotFoundFault(String)`
`DBParameterGroupName` doesn't refer to an existing parameter group.
### `DBParameterGroupQuotaExceededFault(String)`
This request would cause you to exceed the allowed number of parameter groups.
Implementations
---
source### impl CopyDBClusterParameterGroupError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CopyDBClusterParameterGroupError>
Trait Implementations
---
source### impl Debug for CopyDBClusterParameterGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CopyDBClusterParameterGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CopyDBClusterParameterGroupError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CopyDBClusterParameterGroupError> for CopyDBClusterParameterGroupError
source#### fn eq(&self, other: &CopyDBClusterParameterGroupError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CopyDBClusterParameterGroupError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CopyDBClusterParameterGroupError
Auto Trait Implementations
---
### impl RefUnwindSafe for CopyDBClusterParameterGroupError
### impl Send for CopyDBClusterParameterGroupError
### impl Sync for CopyDBClusterParameterGroupError
### impl Unpin for CopyDBClusterParameterGroupError
### impl UnwindSafe for CopyDBClusterParameterGroupError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::CopyDBClusterSnapshotError
===
```
pub enum CopyDBClusterSnapshotError {
DBClusterSnapshotAlreadyExistsFault(String),
DBClusterSnapshotNotFoundFault(String),
InvalidDBClusterSnapshotStateFault(String),
InvalidDBClusterStateFault(String),
KMSKeyNotAccessibleFault(String),
SnapshotQuotaExceededFault(String),
}
```
Errors returned by CopyDBClusterSnapshot
Variants
---
### `DBClusterSnapshotAlreadyExistsFault(String)`
You already have a cluster snapshot with the given identifier.
### `DBClusterSnapshotNotFoundFault(String)`
`DBClusterSnapshotIdentifier` doesn't refer to an existing cluster snapshot.
### `InvalidDBClusterSnapshotStateFault(String)`
The provided value isn't a valid cluster snapshot state.
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
### `KMSKeyNotAccessibleFault(String)`
An error occurred when accessing a KMS key.
### `SnapshotQuotaExceededFault(String)`
The request would cause you to exceed the allowed number of snapshots.
Implementations
---
source### impl CopyDBClusterSnapshotError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CopyDBClusterSnapshotError>
Trait Implementations
---
source### impl Debug for CopyDBClusterSnapshotError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CopyDBClusterSnapshotError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CopyDBClusterSnapshotError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CopyDBClusterSnapshotError> for CopyDBClusterSnapshotError
source#### fn eq(&self, other: &CopyDBClusterSnapshotError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CopyDBClusterSnapshotError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CopyDBClusterSnapshotError
Auto Trait Implementations
---
### impl RefUnwindSafe for CopyDBClusterSnapshotError
### impl Send for CopyDBClusterSnapshotError
### impl Sync for CopyDBClusterSnapshotError
### impl Unpin for CopyDBClusterSnapshotError
### impl UnwindSafe for CopyDBClusterSnapshotError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::CreateDBClusterError
===
```
pub enum CreateDBClusterError {
DBClusterAlreadyExistsFault(String),
DBClusterNotFoundFault(String),
DBClusterParameterGroupNotFoundFault(String),
DBClusterQuotaExceededFault(String),
DBInstanceNotFoundFault(String),
DBSubnetGroupDoesNotCoverEnoughAZs(String),
DBSubnetGroupNotFoundFault(String),
GlobalClusterNotFoundFault(String),
InsufficientStorageClusterCapacityFault(String),
InvalidDBClusterStateFault(String),
InvalidDBInstanceStateFault(String),
InvalidDBSubnetGroupStateFault(String),
InvalidGlobalClusterStateFault(String),
InvalidSubnet(String),
InvalidVPCNetworkStateFault(String),
KMSKeyNotAccessibleFault(String),
StorageQuotaExceededFault(String),
}
```
Errors returned by CreateDBCluster
Variants
---
### `DBClusterAlreadyExistsFault(String)`
You already have a cluster with the given identifier.
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `DBClusterParameterGroupNotFoundFault(String)`
`DBClusterParameterGroupName` doesn't refer to an existing cluster parameter group.
### `DBClusterQuotaExceededFault(String)`
The cluster can't be created because you have reached the maximum allowed quota of clusters.
### `DBInstanceNotFoundFault(String)`
`DBInstanceIdentifier` doesn't refer to an existing instance.
### `DBSubnetGroupDoesNotCoverEnoughAZs(String)`
Subnets in the subnet group should cover at least two Availability Zones unless there is only one Availability Zone.
### `DBSubnetGroupNotFoundFault(String)`
`DBSubnetGroupName` doesn't refer to an existing subnet group.
### `GlobalClusterNotFoundFault(String)`
The `GlobalClusterIdentifier` doesn't refer to an existing global cluster.
### `InsufficientStorageClusterCapacityFault(String)`
There is not enough storage available for the current action. You might be able to resolve this error by updating your subnet group to use different Availability Zones that have more storage available.
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
### `InvalidDBInstanceStateFault(String)`
The specified instance isn't in the *available* state.
### `InvalidDBSubnetGroupStateFault(String)`
The subnet group can't be deleted because it's in use.
### `InvalidGlobalClusterStateFault(String)`
The requested operation can't be performed while the cluster is in this state.
### `InvalidSubnet(String)`
The requested subnet is not valid, or multiple subnets were requested that are not all in a common virtual private cloud (VPC).
### `InvalidVPCNetworkStateFault(String)`
The subnet group doesn't cover all Availability Zones after it is created because of changes that were made.
### `KMSKeyNotAccessibleFault(String)`
An error occurred when accessing a KMS key.
### `StorageQuotaExceededFault(String)`
The request would cause you to exceed the allowed amount of storage available across all instances.
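Because several of these variants are quota faults rather than transient failures, callers often want to tell them apart. The sketch below is illustrative only (not from the crate docs) and assumes the usual rusoto convention of service errors arriving inside `RusotoError::Service`.

```
use rusoto_core::RusotoError;
use rusoto_docdb::CreateDBClusterError;

// Hypothetical helper: true when CreateDBCluster failed on a quota limit,
// in which case retrying without freeing resources will not help.
fn is_quota_fault(err: &RusotoError<CreateDBClusterError>) -> bool {
    matches!(
        err,
        RusotoError::Service(
            CreateDBClusterError::DBClusterQuotaExceededFault(_)
                | CreateDBClusterError::StorageQuotaExceededFault(_)
        )
    )
}
```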
Implementations
---
source### impl CreateDBClusterError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateDBClusterError>
Trait Implementations
---
source### impl Debug for CreateDBClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CreateDBClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CreateDBClusterError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CreateDBClusterError> for CreateDBClusterError
source#### fn eq(&self, other: &CreateDBClusterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateDBClusterError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateDBClusterError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateDBClusterError
### impl Send for CreateDBClusterError
### impl Sync for CreateDBClusterError
### impl Unpin for CreateDBClusterError
### impl UnwindSafe for CreateDBClusterError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::CreateDBClusterParameterGroupError
===
```
pub enum CreateDBClusterParameterGroupError {
DBParameterGroupAlreadyExistsFault(String),
DBParameterGroupQuotaExceededFault(String),
}
```
Errors returned by CreateDBClusterParameterGroup
Variants
---
### `DBParameterGroupAlreadyExistsFault(String)`
A parameter group with the same name already exists.
### `DBParameterGroupQuotaExceededFault(String)`
This request would cause you to exceed the allowed number of parameter groups.
Implementations
---
source### impl CreateDBClusterParameterGroupError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateDBClusterParameterGroupError>
Trait Implementations
---
source### impl Debug for CreateDBClusterParameterGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CreateDBClusterParameterGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CreateDBClusterParameterGroupError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CreateDBClusterParameterGroupError> for CreateDBClusterParameterGroupError
source#### fn eq(&self, other: &CreateDBClusterParameterGroupError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateDBClusterParameterGroupError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateDBClusterParameterGroupError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateDBClusterParameterGroupError
### impl Send for CreateDBClusterParameterGroupError
### impl Sync for CreateDBClusterParameterGroupError
### impl Unpin for CreateDBClusterParameterGroupError
### impl UnwindSafe for CreateDBClusterParameterGroupError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::CreateDBClusterSnapshotError
===
```
pub enum CreateDBClusterSnapshotError {
DBClusterNotFoundFault(String),
DBClusterSnapshotAlreadyExistsFault(String),
InvalidDBClusterSnapshotStateFault(String),
InvalidDBClusterStateFault(String),
SnapshotQuotaExceededFault(String),
}
```
Errors returned by CreateDBClusterSnapshot
Variants
---
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `DBClusterSnapshotAlreadyExistsFault(String)`
You already have a cluster snapshot with the given identifier.
### `InvalidDBClusterSnapshotStateFault(String)`
The provided value isn't a valid cluster snapshot state.
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
### `SnapshotQuotaExceededFault(String)`
The request would cause you to exceed the allowed number of snapshots.
Implementations
---
source### impl CreateDBClusterSnapshotError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateDBClusterSnapshotError>
Trait Implementations
---
source### impl Debug for CreateDBClusterSnapshotError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CreateDBClusterSnapshotError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CreateDBClusterSnapshotError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CreateDBClusterSnapshotError> for CreateDBClusterSnapshotError
source#### fn eq(&self, other: &CreateDBClusterSnapshotError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateDBClusterSnapshotError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateDBClusterSnapshotError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateDBClusterSnapshotError
### impl Send for CreateDBClusterSnapshotError
### impl Sync for CreateDBClusterSnapshotError
### impl Unpin for CreateDBClusterSnapshotError
### impl UnwindSafe for CreateDBClusterSnapshotError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::CreateDBInstanceError
===
```
pub enum CreateDBInstanceError {
AuthorizationNotFoundFault(String),
DBClusterNotFoundFault(String),
DBInstanceAlreadyExistsFault(String),
DBParameterGroupNotFoundFault(String),
DBSecurityGroupNotFoundFault(String),
DBSubnetGroupDoesNotCoverEnoughAZs(String),
DBSubnetGroupNotFoundFault(String),
InstanceQuotaExceededFault(String),
InsufficientDBInstanceCapacityFault(String),
InvalidDBClusterStateFault(String),
InvalidSubnet(String),
InvalidVPCNetworkStateFault(String),
KMSKeyNotAccessibleFault(String),
StorageQuotaExceededFault(String),
StorageTypeNotSupportedFault(String),
}
```
Errors returned by CreateDBInstance
Variants
---
### `AuthorizationNotFoundFault(String)`
The specified CIDR IP or Amazon EC2 security group isn't authorized for the specified security group.
Amazon DocumentDB also might not be authorized to perform necessary actions on your behalf using IAM.
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `DBInstanceAlreadyExistsFault(String)`
You already have an instance with the given identifier.
### `DBParameterGroupNotFoundFault(String)`
`DBParameterGroupName` doesn't refer to an existing parameter group.
### `DBSecurityGroupNotFoundFault(String)`
`DBSecurityGroupName` doesn't refer to an existing security group.
### `DBSubnetGroupDoesNotCoverEnoughAZs(String)`
Subnets in the subnet group should cover at least two Availability Zones unless there is only one Availability Zone.
### `DBSubnetGroupNotFoundFault(String)`
`DBSubnetGroupName` doesn't refer to an existing subnet group.
### `InstanceQuotaExceededFault(String)`
The request would cause you to exceed the allowed number of instances.
### `InsufficientDBInstanceCapacityFault(String)`
The specified instance class isn't available in the specified Availability Zone.
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
### `InvalidSubnet(String)`
The requested subnet is not valid, or multiple subnets were requested that are not all in a common virtual private cloud (VPC).
### `InvalidVPCNetworkStateFault(String)`
The subnet group doesn't cover all Availability Zones after it is created because of changes that were made.
### `KMSKeyNotAccessibleFault(String)`
An error occurred when accessing a KMS key.
### `StorageQuotaExceededFault(String)`
The request would cause you to exceed the allowed amount of storage available across all instances.
### `StorageTypeNotSupportedFault(String)`
Storage of the specified `StorageType` can't be associated with the DB instance.
Implementations
---
source### impl CreateDBInstanceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateDBInstanceError>
Trait Implementations
---
source### impl Debug for CreateDBInstanceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CreateDBInstanceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CreateDBInstanceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CreateDBInstanceError> for CreateDBInstanceError
source#### fn eq(&self, other: &CreateDBInstanceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateDBInstanceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateDBInstanceError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateDBInstanceError
### impl Send for CreateDBInstanceError
### impl Sync for CreateDBInstanceError
### impl Unpin for CreateDBInstanceError
### impl UnwindSafe for CreateDBInstanceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::CreateDBSubnetGroupError
===
```
pub enum CreateDBSubnetGroupError {
DBSubnetGroupAlreadyExistsFault(String),
DBSubnetGroupDoesNotCoverEnoughAZs(String),
DBSubnetGroupQuotaExceededFault(String),
DBSubnetQuotaExceededFault(String),
InvalidSubnet(String),
}
```
Errors returned by CreateDBSubnetGroup
Variants
---
### `DBSubnetGroupAlreadyExistsFault(String)`
`DBSubnetGroupName` is already being used by an existing subnet group.
### `DBSubnetGroupDoesNotCoverEnoughAZs(String)`
Subnets in the subnet group should cover at least two Availability Zones unless there is only one Availability Zone.
### `DBSubnetGroupQuotaExceededFault(String)`
The request would cause you to exceed the allowed number of subnet groups.
### `DBSubnetQuotaExceededFault(String)`
The request would cause you to exceed the allowed number of subnets in a subnet group.
### `InvalidSubnet(String)`
The requested subnet is not valid, or multiple subnets were requested that are not all in a common virtual private cloud (VPC).
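Handling these faults usually means separating request errors (fix and resubmit) from quota errors (which need a limit increase) and from the already-exists case (often safe to reuse the existing group). Below is a minimal sketch that assumes only the variants listed above plus `RusotoError` from `rusoto_core`; the helper name and the returned labels are illustrative, not part of the crate.
```
use rusoto_core::RusotoError;
use rusoto_docdb::CreateDBSubnetGroupError;
// Hypothetical classifier; the returned labels are illustrative only.
fn classify_create_subnet_group_error(
err: &RusotoError<CreateDBSubnetGroupError>,
) -> &'static str {
match err {
// The name is already taken; often safe to reuse the existing subnet group.
RusotoError::Service(CreateDBSubnetGroupError::DBSubnetGroupAlreadyExistsFault(_)) => {
"already exists"
}
// Request problems the caller must fix before resubmitting; the String
// payload carries the service-provided explanation.
RusotoError::Service(CreateDBSubnetGroupError::DBSubnetGroupDoesNotCoverEnoughAZs(_))
| RusotoError::Service(CreateDBSubnetGroupError::InvalidSubnet(_)) => "invalid request",
// Account-level limits: retrying will not help without a quota increase.
RusotoError::Service(CreateDBSubnetGroupError::DBSubnetGroupQuotaExceededFault(_))
| RusotoError::Service(CreateDBSubnetGroupError::DBSubnetQuotaExceededFault(_)) => {
"quota exceeded"
}
// Transport, credential, or other non-service failures from rusoto_core.
_ => "other",
}
}
```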
Implementations
---
source### impl CreateDBSubnetGroupError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateDBSubnetGroupError>
Trait Implementations
---
source### impl Debug for CreateDBSubnetGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CreateDBSubnetGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CreateDBSubnetGroupError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CreateDBSubnetGroupError> for CreateDBSubnetGroupError
source#### fn eq(&self, other: &CreateDBSubnetGroupError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateDBSubnetGroupError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateDBSubnetGroupError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateDBSubnetGroupError
### impl Send for CreateDBSubnetGroupError
### impl Sync for CreateDBSubnetGroupError
### impl Unpin for CreateDBSubnetGroupError
### impl UnwindSafe for CreateDBSubnetGroupError
Enum rusoto_docdb::CreateEventSubscriptionError
===
```
pub enum CreateEventSubscriptionError {
EventSubscriptionQuotaExceededFault(String),
SNSInvalidTopicFault(String),
SNSNoAuthorizationFault(String),
SNSTopicArnNotFoundFault(String),
SourceNotFoundFault(String),
SubscriptionAlreadyExistFault(String),
SubscriptionCategoryNotFoundFault(String),
}
```
Errors returned by CreateEventSubscription
Variants
---
### `EventSubscriptionQuotaExceededFault(String)`
You have reached the maximum number of event subscriptions.
### `SNSInvalidTopicFault(String)`
Amazon SNS has responded that there is a problem with the specified topic.
### `SNSNoAuthorizationFault(String)`
You do not have permission to publish to the SNS topic Amazon Resource Name (ARN).
### `SNSTopicArnNotFoundFault(String)`
The SNS topic Amazon Resource Name (ARN) does not exist.
### `SourceNotFoundFault(String)`
The requested source could not be found.
### `SubscriptionAlreadyExistFault(String)`
The provided subscription name already exists.
### `SubscriptionCategoryNotFoundFault(String)`
The provided category does not exist.
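Since the enum implements `Display`, unmatched variants can simply be forwarded as plain messages while the SNS-related faults get more specific guidance. A sketch with a hypothetical `advice` helper; the guidance strings are invented for the example.
```
use rusoto_docdb::CreateEventSubscriptionError;
// Hypothetical mapping from service faults to operator guidance. Unmatched
// variants fall back to the enum's own Display output.
fn advice(err: &CreateEventSubscriptionError) -> String {
match err {
CreateEventSubscriptionError::SNSInvalidTopicFault(msg)
| CreateEventSubscriptionError::SNSTopicArnNotFoundFault(msg) => {
format!("check the SNS topic ARN: {}", msg)
}
CreateEventSubscriptionError::SNSNoAuthorizationFault(msg) => {
format!("allow Amazon DocumentDB to publish to the SNS topic: {}", msg)
}
CreateEventSubscriptionError::SubscriptionAlreadyExistFault(_) => {
"a subscription with this name already exists; choose a new name".to_string()
}
other => other.to_string(),
}
}
```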
Implementations
---
source### impl CreateEventSubscriptionError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateEventSubscriptionError>
Trait Implementations
---
source### impl Debug for CreateEventSubscriptionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CreateEventSubscriptionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CreateEventSubscriptionError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CreateEventSubscriptionError> for CreateEventSubscriptionError
source#### fn eq(&self, other: &CreateEventSubscriptionError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateEventSubscriptionError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateEventSubscriptionError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateEventSubscriptionError
### impl Send for CreateEventSubscriptionError
### impl Sync for CreateEventSubscriptionError
### impl Unpin for CreateEventSubscriptionError
### impl UnwindSafe for CreateEventSubscriptionError
Enum rusoto_docdb::CreateGlobalClusterError
===
```
pub enum CreateGlobalClusterError {
DBClusterNotFoundFault(String),
GlobalClusterAlreadyExistsFault(String),
GlobalClusterQuotaExceededFault(String),
InvalidDBClusterStateFault(String),
}
```
Errors returned by CreateGlobalCluster
Variants
---
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `GlobalClusterAlreadyExistsFault(String)`
The `GlobalClusterIdentifier` already exists. Choose a new global cluster identifier (unique name) to create a new global cluster.
### `GlobalClusterQuotaExceededFault(String)`
The number of global clusters for this account is already at the maximum allowed.
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
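Both `Debug` and `Display` are implemented for this type (see the trait implementations below), so the same error can be logged verbosely for operators and tersely for users. A small sketch; the constructed variant and its message are fabricated for illustration.
```
use rusoto_docdb::CreateGlobalClusterError;
fn main() {
// Fabricated error value for the example.
let err = CreateGlobalClusterError::GlobalClusterQuotaExceededFault(
"global cluster quota reached".to_string(),
);
eprintln!("CreateGlobalCluster failed: {:?}", err); // developer-facing, shows the variant
eprintln!("{}", err); // user-facing, just the service message
}
```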
Implementations
---
source### impl CreateGlobalClusterError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateGlobalClusterError>
Trait Implementations
---
source### impl Debug for CreateGlobalClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for CreateGlobalClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for CreateGlobalClusterError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<CreateGlobalClusterError> for CreateGlobalClusterError
source#### fn eq(&self, other: &CreateGlobalClusterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &CreateGlobalClusterError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for CreateGlobalClusterError
Auto Trait Implementations
---
### impl RefUnwindSafe for CreateGlobalClusterError
### impl Send for CreateGlobalClusterError
### impl Sync for CreateGlobalClusterError
### impl Unpin for CreateGlobalClusterError
### impl UnwindSafe for CreateGlobalClusterError
Enum rusoto_docdb::DeleteDBClusterError
===
```
pub enum DeleteDBClusterError {
DBClusterNotFoundFault(String),
DBClusterSnapshotAlreadyExistsFault(String),
InvalidDBClusterSnapshotStateFault(String),
InvalidDBClusterStateFault(String),
SnapshotQuotaExceededFault(String),
}
```
Errors returned by DeleteDBCluster
Variants
---
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `DBClusterSnapshotAlreadyExistsFault(String)`
You already have a cluster snapshot with the given identifier.
### `InvalidDBClusterSnapshotStateFault(String)`
The provided value isn't a valid cluster snapshot state.
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
### `SnapshotQuotaExceededFault(String)`
The request would cause you to exceed the allowed number of snapshots.
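For teardown code it is common to treat a cluster that is already gone as success rather than failure. A minimal sketch assuming the call site produces a `Result<(), RusotoError<DeleteDBClusterError>>`; the wrapper function is illustrative only.
```
use rusoto_core::RusotoError;
use rusoto_docdb::DeleteDBClusterError;
// Hypothetical wrapper for idempotent teardown: a cluster that no longer
// exists is not treated as an error.
fn interpret_delete_cluster_result(
result: Result<(), RusotoError<DeleteDBClusterError>>,
) -> Result<(), RusotoError<DeleteDBClusterError>> {
match result {
// `DBClusterIdentifier` did not refer to an existing cluster: already deleted.
Err(RusotoError::Service(DeleteDBClusterError::DBClusterNotFoundFault(_))) => Ok(()),
other => other,
}
}
```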
Implementations
---
source### impl DeleteDBClusterError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteDBClusterError>
Trait Implementations
---
source### impl Debug for DeleteDBClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteDBClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteDBClusterError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteDBClusterError> for DeleteDBClusterError
source#### fn eq(&self, other: &DeleteDBClusterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBClusterError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBClusterError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBClusterError
### impl Send for DeleteDBClusterError
### impl Sync for DeleteDBClusterError
### impl Unpin for DeleteDBClusterError
### impl UnwindSafe for DeleteDBClusterError
Enum rusoto_docdb::DeleteDBClusterParameterGroupError
===
```
pub enum DeleteDBClusterParameterGroupError {
DBParameterGroupNotFoundFault(String),
InvalidDBParameterGroupStateFault(String),
}
```
Errors returned by DeleteDBClusterParameterGroup
Variants
---
### `DBParameterGroupNotFoundFault(String)`
`DBParameterGroupName` doesn't refer to an existing parameter group.
### `InvalidDBParameterGroupStateFault(String)`
The parameter group is in use, or it is in a state that is not valid. If you are trying to delete the parameter group, you can't delete it when the parameter group is in this state.
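Of the two faults, only the in-use state can clear up on its own (once no cluster references the parameter group), so a retry decision reduces to an exhaustive match over the variants above. A sketch with a hypothetical helper name.
```
use rusoto_docdb::DeleteDBClusterParameterGroupError;
// Hypothetical helper: the in-use fault is worth retrying later; a missing
// parameter group never will succeed on retry.
fn delete_parameter_group_is_retryable(err: &DeleteDBClusterParameterGroupError) -> bool {
match err {
DeleteDBClusterParameterGroupError::InvalidDBParameterGroupStateFault(_) => true,
DeleteDBClusterParameterGroupError::DBParameterGroupNotFoundFault(_) => false,
}
}
```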
Implementations
---
source### impl DeleteDBClusterParameterGroupError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteDBClusterParameterGroupError>
Trait Implementations
---
source### impl Debug for DeleteDBClusterParameterGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteDBClusterParameterGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteDBClusterParameterGroupError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteDBClusterParameterGroupError> for DeleteDBClusterParameterGroupError
source#### fn eq(&self, other: &DeleteDBClusterParameterGroupError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBClusterParameterGroupError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBClusterParameterGroupError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBClusterParameterGroupError
### impl Send for DeleteDBClusterParameterGroupError
### impl Sync for DeleteDBClusterParameterGroupError
### impl Unpin for DeleteDBClusterParameterGroupError
### impl UnwindSafe for DeleteDBClusterParameterGroupError
Enum rusoto_docdb::DeleteDBClusterSnapshotError
===
```
pub enum DeleteDBClusterSnapshotError {
DBClusterSnapshotNotFoundFault(String),
InvalidDBClusterSnapshotStateFault(String),
}
```
Errors returned by DeleteDBClusterSnapshot
Variants
---
### `DBClusterSnapshotNotFoundFault(String)`
`DBClusterSnapshotIdentifier` doesn't refer to an existing cluster snapshot.
### `InvalidDBClusterSnapshotStateFault(String)`
The provided value isn't a valid cluster snapshot state.
Implementations
---
source### impl DeleteDBClusterSnapshotError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteDBClusterSnapshotError>
Trait Implementations
---
source### impl Debug for DeleteDBClusterSnapshotError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteDBClusterSnapshotError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteDBClusterSnapshotError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteDBClusterSnapshotError> for DeleteDBClusterSnapshotError
source#### fn eq(&self, other: &DeleteDBClusterSnapshotError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBClusterSnapshotError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBClusterSnapshotError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBClusterSnapshotError
### impl Send for DeleteDBClusterSnapshotError
### impl Sync for DeleteDBClusterSnapshotError
### impl Unpin for DeleteDBClusterSnapshotError
### impl UnwindSafe for DeleteDBClusterSnapshotError
Enum rusoto_docdb::DeleteDBInstanceError
===
```
pub enum DeleteDBInstanceError {
DBInstanceNotFoundFault(String),
DBSnapshotAlreadyExistsFault(String),
InvalidDBClusterStateFault(String),
InvalidDBInstanceStateFault(String),
SnapshotQuotaExceededFault(String),
}
```
Errors returned by DeleteDBInstance
Variants
---
### `DBInstanceNotFoundFault(String)`
`DBInstanceIdentifier` doesn't refer to an existing instance.
### `DBSnapshotAlreadyExistsFault(String)`
`DBSnapshotIdentifier` is already being used by an existing snapshot.
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
### `InvalidDBInstanceStateFault(String)`
The specified instance isn't in the *available* state.
### `SnapshotQuotaExceededFault(String)`
The request would cause you to exceed the allowed number of snapshots.
Implementations
---
source### impl DeleteDBInstanceError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteDBInstanceError>
Trait Implementations
---
source### impl Debug for DeleteDBInstanceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteDBInstanceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteDBInstanceError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteDBInstanceError> for DeleteDBInstanceError
source#### fn eq(&self, other: &DeleteDBInstanceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBInstanceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBInstanceError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBInstanceError
### impl Send for DeleteDBInstanceError
### impl Sync for DeleteDBInstanceError
### impl Unpin for DeleteDBInstanceError
### impl UnwindSafe for DeleteDBInstanceError
Enum rusoto_docdb::DeleteDBSubnetGroupError
===
```
pub enum DeleteDBSubnetGroupError {
DBSubnetGroupNotFoundFault(String),
InvalidDBSubnetGroupStateFault(String),
InvalidDBSubnetStateFault(String),
}
```
Errors returned by DeleteDBSubnetGroup
Variants
---
### `DBSubnetGroupNotFoundFault(String)`
`DBSubnetGroupName` doesn't refer to an existing subnet group.
### `InvalidDBSubnetGroupStateFault(String)`
The subnet group can't be deleted because it's in use.
### `InvalidDBSubnetStateFault(String)`
The subnet isn't in the *available* state.
Implementations
---
source### impl DeleteDBSubnetGroupError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteDBSubnetGroupError>
Trait Implementations
---
source### impl Debug for DeleteDBSubnetGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteDBSubnetGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteDBSubnetGroupError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteDBSubnetGroupError> for DeleteDBSubnetGroupError
source#### fn eq(&self, other: &DeleteDBSubnetGroupError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteDBSubnetGroupError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteDBSubnetGroupError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteDBSubnetGroupError
### impl Send for DeleteDBSubnetGroupError
### impl Sync for DeleteDBSubnetGroupError
### impl Unpin for DeleteDBSubnetGroupError
### impl UnwindSafe for DeleteDBSubnetGroupError
Enum rusoto_docdb::DeleteEventSubscriptionError
===
```
pub enum DeleteEventSubscriptionError {
InvalidEventSubscriptionStateFault(String),
SubscriptionNotFoundFault(String),
}
```
Errors returned by DeleteEventSubscription
Variants
---
### `InvalidEventSubscriptionStateFault(String)`
Someone else might be modifying a subscription. Wait a few seconds, and try again.
### `SubscriptionNotFoundFault(String)`
The subscription name does not exist.
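The in-progress fault is explicitly transient (wait a few seconds and try again), which suggests a small retry loop. The sketch below stays generic over the closure that performs the actual call, so no client method names are assumed; the attempt count and delay are arbitrary choices for the example.
```
use std::{thread, time::Duration};
use rusoto_core::RusotoError;
use rusoto_docdb::DeleteEventSubscriptionError;
// Hypothetical retry wrapper: `op` stands in for whatever closure performs the
// actual DeleteEventSubscription call.
fn delete_subscription_with_retry<F>(
mut op: F,
) -> Result<(), RusotoError<DeleteEventSubscriptionError>>
where
F: FnMut() -> Result<(), RusotoError<DeleteEventSubscriptionError>>,
{
const ATTEMPTS: usize = 5;
for attempt in 0..ATTEMPTS {
match op() {
Ok(()) => return Ok(()),
Err(err) => {
// Only the "someone else is modifying this subscription" fault is
// worth waiting out; everything else is returned immediately.
let transient = matches!(
&err,
RusotoError::Service(
DeleteEventSubscriptionError::InvalidEventSubscriptionStateFault(_)
)
);
if transient && attempt + 1 < ATTEMPTS {
thread::sleep(Duration::from_secs(3));
} else {
return Err(err);
}
}
}
}
unreachable!("every loop iteration either returns or retries")
}
```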
Implementations
---
source### impl DeleteEventSubscriptionError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteEventSubscriptionError>
Trait Implementations
---
source### impl Debug for DeleteEventSubscriptionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteEventSubscriptionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteEventSubscriptionError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteEventSubscriptionError> for DeleteEventSubscriptionError
source#### fn eq(&self, other: &DeleteEventSubscriptionError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteEventSubscriptionError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteEventSubscriptionError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteEventSubscriptionError
### impl Send for DeleteEventSubscriptionError
### impl Sync for DeleteEventSubscriptionError
### impl Unpin for DeleteEventSubscriptionError
### impl UnwindSafe for DeleteEventSubscriptionError
Enum rusoto_docdb::DeleteGlobalClusterError
===
```
pub enum DeleteGlobalClusterError {
GlobalClusterNotFoundFault(String),
InvalidGlobalClusterStateFault(String),
}
```
Errors returned by DeleteGlobalCluster
Variants
---
### `GlobalClusterNotFoundFault(String)`
The `GlobalClusterIdentifier` doesn't refer to an existing global cluster.
### `InvalidGlobalClusterStateFault(String)`
The requested operation can't be performed while the cluster is in this state.
Implementations
---
source### impl DeleteGlobalClusterError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteGlobalClusterError>
Trait Implementations
---
source### impl Debug for DeleteGlobalClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DeleteGlobalClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DeleteGlobalClusterError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DeleteGlobalClusterError> for DeleteGlobalClusterError
source#### fn eq(&self, other: &DeleteGlobalClusterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DeleteGlobalClusterError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DeleteGlobalClusterError
Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteGlobalClusterError
### impl Send for DeleteGlobalClusterError
### impl Sync for DeleteGlobalClusterError
### impl Unpin for DeleteGlobalClusterError
### impl UnwindSafe for DeleteGlobalClusterError
Enum rusoto_docdb::DescribeCertificatesError
===
```
pub enum DescribeCertificatesError {
CertificateNotFoundFault(String),
}
```
Errors returned by DescribeCertificates
Variants
---
### `CertificateNotFoundFault(String)`
`CertificateIdentifier` doesn't refer to an existing certificate.
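With a single service fault, callers usually only need to detect the variant or compare error values; both work because the enum implements `PartialEq` (and `Debug`, used here by the assertion macros). A short sketch; the message strings are fabricated for the example.
```
use rusoto_docdb::DescribeCertificatesError;
fn main() {
let err =
DescribeCertificatesError::CertificateNotFoundFault("no such certificate".to_string());
// Structural equality: variant and message must both match.
assert_eq!(
err,
DescribeCertificatesError::CertificateNotFoundFault("no such certificate".to_string())
);
// Variant-only check when the message does not matter.
assert!(matches!(
err,
DescribeCertificatesError::CertificateNotFoundFault(_)
));
}
```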
Implementations
---
source### impl DescribeCertificatesError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeCertificatesError>
Trait Implementations
---
source### impl Debug for DescribeCertificatesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeCertificatesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeCertificatesError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeCertificatesError> for DescribeCertificatesError
source#### fn eq(&self, other: &DescribeCertificatesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeCertificatesError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeCertificatesError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeCertificatesError
### impl Send for DescribeCertificatesError
### impl Sync for DescribeCertificatesError
### impl Unpin for DescribeCertificatesError
### impl UnwindSafe for DescribeCertificatesError
Enum rusoto_docdb::DescribeDBClusterParameterGroupsError
===
```
pub enum DescribeDBClusterParameterGroupsError {
DBParameterGroupNotFoundFault(String),
}
```
Errors returned by DescribeDBClusterParameterGroups
Variants
---
### `DBParameterGroupNotFoundFault(String)`
`DBParameterGroupName` doesn't refer to an existing parameter group.
Implementations
---
source### impl DescribeDBClusterParameterGroupsError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeDBClusterParameterGroupsError>
Trait Implementations
---
source### impl Debug for DescribeDBClusterParameterGroupsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeDBClusterParameterGroupsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeDBClusterParameterGroupsError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeDBClusterParameterGroupsError> for DescribeDBClusterParameterGroupsError
source#### fn eq(&self, other: &DescribeDBClusterParameterGroupsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeDBClusterParameterGroupsError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeDBClusterParameterGroupsError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBClusterParameterGroupsError
### impl Send for DescribeDBClusterParameterGroupsError
### impl Sync for DescribeDBClusterParameterGroupsError
### impl Unpin for DescribeDBClusterParameterGroupsError
### impl UnwindSafe for DescribeDBClusterParameterGroupsError
Enum rusoto_docdb::DescribeDBClusterParametersError
===
```
pub enum DescribeDBClusterParametersError {
DBParameterGroupNotFoundFault(String),
}
```
Errors returned by DescribeDBClusterParameters
Variants
---
### `DBParameterGroupNotFoundFault(String)`
`DBParameterGroupName` doesn't refer to an existing parameter group.
Implementations
---
source### impl DescribeDBClusterParametersError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeDBClusterParametersError>
Trait Implementations
---
source### impl Debug for DescribeDBClusterParametersError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeDBClusterParametersError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeDBClusterParametersError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeDBClusterParametersError> for DescribeDBClusterParametersError
source#### fn eq(&self, other: &DescribeDBClusterParametersError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeDBClusterParametersError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeDBClusterParametersError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBClusterParametersError
### impl Send for DescribeDBClusterParametersError
### impl Sync for DescribeDBClusterParametersError
### impl Unpin for DescribeDBClusterParametersError
### impl UnwindSafe for DescribeDBClusterParametersError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribeDBClusterSnapshotAttributesError
===
```
pub enum DescribeDBClusterSnapshotAttributesError {
DBClusterSnapshotNotFoundFault(String),
}
```
Errors returned by DescribeDBClusterSnapshotAttributes
Variants
---
### `DBClusterSnapshotNotFoundFault(String)`
`DBClusterSnapshotIdentifier` doesn't refer to an existing cluster snapshot.
Implementations
---
source### impl DescribeDBClusterSnapshotAttributesError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeDBClusterSnapshotAttributesError>
Trait Implementations
---
source### impl Debug for DescribeDBClusterSnapshotAttributesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeDBClusterSnapshotAttributesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeDBClusterSnapshotAttributesError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeDBClusterSnapshotAttributesError> for DescribeDBClusterSnapshotAttributesError
source#### fn eq(&self, other: &DescribeDBClusterSnapshotAttributesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeDBClusterSnapshotAttributesError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeDBClusterSnapshotAttributesError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBClusterSnapshotAttributesError
### impl Send for DescribeDBClusterSnapshotAttributesError
### impl Sync for DescribeDBClusterSnapshotAttributesError
### impl Unpin for DescribeDBClusterSnapshotAttributesError
### impl UnwindSafe for DescribeDBClusterSnapshotAttributesError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribeDBClusterSnapshotsError
===
```
pub enum DescribeDBClusterSnapshotsError {
DBClusterSnapshotNotFoundFault(String),
}
```
Errors returned by DescribeDBClusterSnapshots
Variants
---
### `DBClusterSnapshotNotFoundFault(String)`
`DBClusterSnapshotIdentifier` doesn't refer to an existing cluster snapshot.
Implementations
---
source### impl DescribeDBClusterSnapshotsError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeDBClusterSnapshotsError>
Trait Implementations
---
source### impl Debug for DescribeDBClusterSnapshotsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeDBClusterSnapshotsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeDBClusterSnapshotsError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeDBClusterSnapshotsError> for DescribeDBClusterSnapshotsError
source#### fn eq(&self, other: &DescribeDBClusterSnapshotsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeDBClusterSnapshotsError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeDBClusterSnapshotsError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBClusterSnapshotsError
### impl Send for DescribeDBClusterSnapshotsError
### impl Sync for DescribeDBClusterSnapshotsError
### impl Unpin for DescribeDBClusterSnapshotsError
### impl UnwindSafe for DescribeDBClusterSnapshotsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribeDBClustersError
===
```
pub enum DescribeDBClustersError {
DBClusterNotFoundFault(String),
}
```
Errors returned by DescribeDBClusters
Variants
---
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
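Service-specific variants like `DBClusterNotFoundFault` reach the caller wrapped in `RusotoError::Service`. The sketch below shows how such a call site might look; it is illustrative only, and the `DocdbClient`/`Docdb` names, the `DescribeDBClustersMessage` fields, the example cluster identifier, and the tokio runtime follow common rusoto conventions rather than anything stated on this page.
```
// Minimal sketch, not a definitive usage: assumes the conventional rusoto
// client/trait names (DocdbClient, Docdb), rusoto_core::Region, and tokio.
use rusoto_core::{Region, RusotoError};
use rusoto_docdb::{DescribeDBClustersError, DescribeDBClustersMessage, Docdb, DocdbClient};

#[tokio::main]
async fn main() {
    let client = DocdbClient::new(Region::UsEast1);
    let request = DescribeDBClustersMessage {
        db_cluster_identifier: Some("my-cluster".to_string()), // hypothetical identifier
        ..Default::default()
    };

    match client.describe_db_clusters(request).await {
        Ok(_output) => println!("cluster found"),
        // The service-level variant documented above:
        Err(RusotoError::Service(DescribeDBClustersError::DBClusterNotFoundFault(msg))) => {
            eprintln!("no such cluster: {}", msg)
        }
        // Transport, credential, and parsing failures arrive via the generic variants.
        Err(other) => eprintln!("request failed: {}", other),
    }
}
```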
Implementations
---
source### impl DescribeDBClustersError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeDBClustersError>
Trait Implementations
---
source### impl Debug for DescribeDBClustersError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeDBClustersError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeDBClustersError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeDBClustersError> for DescribeDBClustersError
source#### fn eq(&self, other: &DescribeDBClustersError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeDBClustersError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeDBClustersError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBClustersError
### impl Send for DescribeDBClustersError
### impl Sync for DescribeDBClustersError
### impl Unpin for DescribeDBClustersError
### impl UnwindSafe for DescribeDBClustersError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribeDBEngineVersionsError
===
```
pub enum DescribeDBEngineVersionsError {}
```
Errors returned by DescribeDBEngineVersions
Implementations
---
source### impl DescribeDBEngineVersionsError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeDBEngineVersionsError>
Trait Implementations
---
source### impl Debug for DescribeDBEngineVersionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeDBEngineVersionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeDBEngineVersionsError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeDBEngineVersionsError> for DescribeDBEngineVersionsError
source#### fn eq(&self, other: &DescribeDBEngineVersionsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeDBEngineVersionsError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBEngineVersionsError
### impl Send for DescribeDBEngineVersionsError
### impl Sync for DescribeDBEngineVersionsError
### impl Unpin for DescribeDBEngineVersionsError
### impl UnwindSafe for DescribeDBEngineVersionsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribeDBInstancesError
===
```
pub enum DescribeDBInstancesError {
DBInstanceNotFoundFault(String),
}
```
Errors returned by DescribeDBInstances
Variants
---
### `DBInstanceNotFoundFault(String)`
`DBInstanceIdentifier` doesn't refer to an existing instance.
Implementations
---
source### impl DescribeDBInstancesError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeDBInstancesError>
Trait Implementations
---
source### impl Debug for DescribeDBInstancesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeDBInstancesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeDBInstancesError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeDBInstancesError> for DescribeDBInstancesError
source#### fn eq(&self, other: &DescribeDBInstancesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeDBInstancesError) -> bool
This method tests for `!=`.
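Since the error derives `PartialEq` together with `Debug`, instances can be compared directly, which is handy in tests. A small sketch follows; the messages are made up for illustration.
```
use rusoto_docdb::DescribeDBInstancesError;

#[test]
fn db_instance_not_found_compares_by_payload() {
    // The derived PartialEq compares the variant and its String payload.
    let a = DescribeDBInstancesError::DBInstanceNotFoundFault("db-1 not found".to_string());
    let b = DescribeDBInstancesError::DBInstanceNotFoundFault("db-1 not found".to_string());
    let c = DescribeDBInstancesError::DBInstanceNotFoundFault("db-2 not found".to_string());

    assert_eq!(a, b); // same variant, same message
    assert_ne!(a, c); // same variant, different message
}
```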
source### impl StructuralPartialEq for DescribeDBInstancesError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBInstancesError
### impl Send for DescribeDBInstancesError
### impl Sync for DescribeDBInstancesError
### impl Unpin for DescribeDBInstancesError
### impl UnwindSafe for DescribeDBInstancesError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribeDBSubnetGroupsError
===
```
pub enum DescribeDBSubnetGroupsError {
DBSubnetGroupNotFoundFault(String),
}
```
Errors returned by DescribeDBSubnetGroups
Variants
---
### `DBSubnetGroupNotFoundFault(String)`
`DBSubnetGroupName` doesn't refer to an existing subnet group.
Implementations
---
source### impl DescribeDBSubnetGroupsError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeDBSubnetGroupsError>
Trait Implementations
---
source### impl Debug for DescribeDBSubnetGroupsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeDBSubnetGroupsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeDBSubnetGroupsError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeDBSubnetGroupsError> for DescribeDBSubnetGroupsError
source#### fn eq(&self, other: &DescribeDBSubnetGroupsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeDBSubnetGroupsError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeDBSubnetGroupsError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeDBSubnetGroupsError
### impl Send for DescribeDBSubnetGroupsError
### impl Sync for DescribeDBSubnetGroupsError
### impl Unpin for DescribeDBSubnetGroupsError
### impl UnwindSafe for DescribeDBSubnetGroupsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribeEngineDefaultClusterParametersError
===
```
pub enum DescribeEngineDefaultClusterParametersError {}
```
Errors returned by DescribeEngineDefaultClusterParameters
Implementations
---
source### impl DescribeEngineDefaultClusterParametersError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeEngineDefaultClusterParametersError>
Trait Implementations
---
source### impl Debug for DescribeEngineDefaultClusterParametersError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeEngineDefaultClusterParametersError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeEngineDefaultClusterParametersError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeEngineDefaultClusterParametersError> for DescribeEngineDefaultClusterParametersError
source#### fn eq(&self, other: &DescribeEngineDefaultClusterParametersError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeEngineDefaultClusterParametersError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeEngineDefaultClusterParametersError
### impl Send for DescribeEngineDefaultClusterParametersError
### impl Sync for DescribeEngineDefaultClusterParametersError
### impl Unpin for DescribeEngineDefaultClusterParametersError
### impl UnwindSafe for DescribeEngineDefaultClusterParametersError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribeEventCategoriesError
===
```
pub enum DescribeEventCategoriesError {}
```
Errors returned by DescribeEventCategories
Implementations
---
source### impl DescribeEventCategoriesError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeEventCategoriesError>
Trait Implementations
---
source### impl Debug for DescribeEventCategoriesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeEventCategoriesError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeEventCategoriesError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeEventCategoriesError> for DescribeEventCategoriesError
source#### fn eq(&self, other: &DescribeEventCategoriesError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeEventCategoriesError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeEventCategoriesError
### impl Send for DescribeEventCategoriesError
### impl Sync for DescribeEventCategoriesError
### impl Unpin for DescribeEventCategoriesError
### impl UnwindSafe for DescribeEventCategoriesError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribeEventSubscriptionsError
===
```
pub enum DescribeEventSubscriptionsError {
SubscriptionNotFoundFault(String),
}
```
Errors returned by DescribeEventSubscriptions
Variants
---
### `SubscriptionNotFoundFault(String)`
The subscription name does not exist.
Implementations
---
source### impl DescribeEventSubscriptionsError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeEventSubscriptionsError>
Trait Implementations
---
source### impl Debug for DescribeEventSubscriptionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeEventSubscriptionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeEventSubscriptionsError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeEventSubscriptionsError> for DescribeEventSubscriptionsError
source#### fn eq(&self, other: &DescribeEventSubscriptionsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeEventSubscriptionsError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeEventSubscriptionsError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeEventSubscriptionsError
### impl Send for DescribeEventSubscriptionsError
### impl Sync for DescribeEventSubscriptionsError
### impl Unpin for DescribeEventSubscriptionsError
### impl UnwindSafe for DescribeEventSubscriptionsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribeEventsError
===
```
pub enum DescribeEventsError {}
```
Errors returned by DescribeEvents
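Because this enum has no variants, any failure from `DescribeEvents` reaches the caller only through the generic `RusotoError` wrapper. A hedged sketch of inspecting such a failure follows; the exact `RusotoError` variants and the `status` field on `BufferedHttpResponse` reflect the usual rusoto_core API and are assumptions, not something documented on this page.
```
use rusoto_core::RusotoError;
use rusoto_docdb::DescribeEventsError;

// Sketch only: `err` would come from a failed `describe_events` call.
fn explain(err: RusotoError<DescribeEventsError>) -> String {
    match err {
        // Unreachable in practice (DescribeEventsError defines no variants),
        // but the arm keeps the match exhaustive.
        RusotoError::Service(service_err) => format!("service error: {}", service_err),
        // Assumed field: BufferedHttpResponse exposes the HTTP status code.
        RusotoError::Unknown(resp) => format!("unmodelled HTTP error, status {}", resp.status),
        RusotoError::HttpDispatch(e) => format!("transport error: {}", e),
        RusotoError::Credentials(e) => format!("credential error: {}", e),
        other => format!("other failure: {}", other),
    }
}
```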
Implementations
---
source### impl DescribeEventsError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeEventsError>
Trait Implementations
---
source### impl Debug for DescribeEventsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeEventsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeEventsError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeEventsError> for DescribeEventsError
source#### fn eq(&self, other: &DescribeEventsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeEventsError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeEventsError
### impl Send for DescribeEventsError
### impl Sync for DescribeEventsError
### impl Unpin for DescribeEventsError
### impl UnwindSafe for DescribeEventsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribeGlobalClustersError
===
```
pub enum DescribeGlobalClustersError {
GlobalClusterNotFoundFault(String),
}
```
Errors returned by DescribeGlobalClusters
Variants
---
### `GlobalClusterNotFoundFault(String)`
The `GlobalClusterIdentifier` doesn't refer to an existing global cluster.
Implementations
---
source### impl DescribeGlobalClustersError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeGlobalClustersError>
Trait Implementations
---
source### impl Debug for DescribeGlobalClustersError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeGlobalClustersError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeGlobalClustersError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeGlobalClustersError> for DescribeGlobalClustersError
source#### fn eq(&self, other: &DescribeGlobalClustersError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribeGlobalClustersError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeGlobalClustersError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeGlobalClustersError
### impl Send for DescribeGlobalClustersError
### impl Sync for DescribeGlobalClustersError
### impl Unpin for DescribeGlobalClustersError
### impl UnwindSafe for DescribeGlobalClustersError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribeOrderableDBInstanceOptionsError
===
```
pub enum DescribeOrderableDBInstanceOptionsError {}
```
Errors returned by DescribeOrderableDBInstanceOptions
Implementations
---
source### impl DescribeOrderableDBInstanceOptionsError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribeOrderableDBInstanceOptionsError>
Trait Implementations
---
source### impl Debug for DescribeOrderableDBInstanceOptionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribeOrderableDBInstanceOptionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribeOrderableDBInstanceOptionsError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribeOrderableDBInstanceOptionsError> for DescribeOrderableDBInstanceOptionsError
source#### fn eq(&self, other: &DescribeOrderableDBInstanceOptionsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribeOrderableDBInstanceOptionsError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribeOrderableDBInstanceOptionsError
### impl Send for DescribeOrderableDBInstanceOptionsError
### impl Sync for DescribeOrderableDBInstanceOptionsError
### impl Unpin for DescribeOrderableDBInstanceOptionsError
### impl UnwindSafe for DescribeOrderableDBInstanceOptionsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an
`Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::DescribePendingMaintenanceActionsError
===
```
pub enum DescribePendingMaintenanceActionsError {
ResourceNotFoundFault(String),
}
```
Errors returned by DescribePendingMaintenanceActions
Variants
---
### `ResourceNotFoundFault(String)`
The specified resource ID was not found.
Implementations
---
source### impl DescribePendingMaintenanceActionsError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DescribePendingMaintenanceActionsError>
Trait Implementations
---
source### impl Debug for DescribePendingMaintenanceActionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for DescribePendingMaintenanceActionsError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for DescribePendingMaintenanceActionsError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<DescribePendingMaintenanceActionsError> for DescribePendingMaintenanceActionsError
source#### fn eq(&self, other: &DescribePendingMaintenanceActionsError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &DescribePendingMaintenanceActionsError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for DescribePendingMaintenanceActionsError
Auto Trait Implementations
---
### impl RefUnwindSafe for DescribePendingMaintenanceActionsError
### impl Send for DescribePendingMaintenanceActionsError
### impl Sync for DescribePendingMaintenanceActionsError
### impl Unpin for DescribePendingMaintenanceActionsError
### impl UnwindSafe for DescribePendingMaintenanceActionsError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::FailoverDBClusterError
===
```
pub enum FailoverDBClusterError {
DBClusterNotFoundFault(String),
InvalidDBClusterStateFault(String),
InvalidDBInstanceStateFault(String),
}
```
Errors returned by FailoverDBCluster
Variants
---
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
### `InvalidDBInstanceStateFault(String)`
The specified instance isn't in the *available* state.
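The variants above map directly onto match arms when a failover request fails. The following is a minimal sketch, not taken from this page: it assumes the usual rusoto client surface (`Docdb` trait, `DocdbClient`, `FailoverDBClusterMessage`) and an async runtime; only `FailoverDBClusterError` and the `RusotoError` wrapper are documented here.
```
// Sketch only: Docdb, DocdbClient, FailoverDBClusterMessage, and the async
// signature are assumptions about the rusoto client API; the error variants
// are the ones documented above.
use rusoto_core::RusotoError;
use rusoto_docdb::{Docdb, DocdbClient, FailoverDBClusterError, FailoverDBClusterMessage};
async fn start_failover(client: &DocdbClient, request: FailoverDBClusterMessage) {
match client.failover_db_cluster(request).await {
Ok(_) => println!("failover initiated"),
// Service-level faults are wrapped in RusotoError::Service and carry the
// message string returned by Amazon DocumentDB.
Err(RusotoError::Service(FailoverDBClusterError::DBClusterNotFoundFault(msg))) => {
eprintln!("no such cluster: {}", msg);
}
Err(RusotoError::Service(FailoverDBClusterError::InvalidDBClusterStateFault(msg)))
| Err(RusotoError::Service(FailoverDBClusterError::InvalidDBInstanceStateFault(msg))) => {
eprintln!("cluster or instance not in a usable state, try again later: {}", msg);
}
// Credential, dispatch, and parsing problems surface as other RusotoError variants.
Err(other) => eprintln!("failover request failed: {}", other),
}
}
```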
Implementations
---
source### impl FailoverDBClusterError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<FailoverDBClusterError>
Trait Implementations
---
source### impl Debug for FailoverDBClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for FailoverDBClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for FailoverDBClusterError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<FailoverDBClusterError> for FailoverDBClusterError
source#### fn eq(&self, other: &FailoverDBClusterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &FailoverDBClusterError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for FailoverDBClusterError
Auto Trait Implementations
---
### impl RefUnwindSafe for FailoverDBClusterError
### impl Send for FailoverDBClusterError
### impl Sync for FailoverDBClusterError
### impl Unpin for FailoverDBClusterError
### impl UnwindSafe for FailoverDBClusterError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::ListTagsForResourceError
===
```
pub enum ListTagsForResourceError {
DBClusterNotFoundFault(String),
DBInstanceNotFoundFault(String),
DBSnapshotNotFoundFault(String),
}
```
Errors returned by ListTagsForResource
Variants
---
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `DBInstanceNotFoundFault(String)`
`DBInstanceIdentifier` doesn't refer to an existing instance.
### `DBSnapshotNotFoundFault(String)`
`DBSnapshotIdentifier` doesn't refer to an existing snapshot.
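Because all three variants signal a missing resource, a small helper can fold them into one user-facing message. This is a sketch that uses only the types documented on this page plus the `RusotoError` wrapper from `rusoto_core`; the helper name and wording are illustrative.
```
// Sketch only: classifies the service faults documented above. No client call
// is made here, so nothing beyond the error types is assumed.
use rusoto_core::RusotoError;
use rusoto_docdb::ListTagsForResourceError;
/// Returns a short description of a failed ListTagsForResource call.
fn describe_tagging_error(err: &RusotoError<ListTagsForResourceError>) -> String {
match err {
RusotoError::Service(ListTagsForResourceError::DBClusterNotFoundFault(msg)) => {
format!("cluster not found: {}", msg)
}
RusotoError::Service(ListTagsForResourceError::DBInstanceNotFoundFault(msg)) => {
format!("instance not found: {}", msg)
}
RusotoError::Service(ListTagsForResourceError::DBSnapshotNotFoundFault(msg)) => {
format!("snapshot not found: {}", msg)
}
// Anything else (credentials, HTTP dispatch, response parsing, ...) is not
// a service fault specific to this operation.
other => format!("request failed: {}", other),
}
}
```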
Implementations
---
source### impl ListTagsForResourceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ListTagsForResourceError>
Trait Implementations
---
source### impl Debug for ListTagsForResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ListTagsForResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ListTagsForResourceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ListTagsForResourceError> for ListTagsForResourceError
source#### fn eq(&self, other: &ListTagsForResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ListTagsForResourceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ListTagsForResourceError
Auto Trait Implementations
---
### impl RefUnwindSafe for ListTagsForResourceError
### impl Send for ListTagsForResourceError
### impl Sync for ListTagsForResourceError
### impl Unpin for ListTagsForResourceError
### impl UnwindSafe for ListTagsForResourceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::ModifyDBClusterError
===
```
pub enum ModifyDBClusterError {
DBClusterAlreadyExistsFault(String),
DBClusterNotFoundFault(String),
DBClusterParameterGroupNotFoundFault(String),
DBSubnetGroupNotFoundFault(String),
InvalidDBClusterStateFault(String),
InvalidDBInstanceStateFault(String),
InvalidDBSecurityGroupStateFault(String),
InvalidDBSubnetGroupStateFault(String),
InvalidSubnet(String),
InvalidVPCNetworkStateFault(String),
StorageQuotaExceededFault(String),
}
```
Errors returned by ModifyDBCluster
Variants
---
### `DBClusterAlreadyExistsFault(String)`
You already have a cluster with the given identifier.
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `DBClusterParameterGroupNotFoundFault(String)`
`DBClusterParameterGroupName` doesn't refer to an existing cluster parameter group.
### `DBSubnetGroupNotFoundFault(String)`
`DBSubnetGroupName` doesn't refer to an existing subnet group.
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
### `InvalidDBInstanceStateFault(String)`
The specified instance isn't in the *available* state.
### `InvalidDBSecurityGroupStateFault(String)`
The state of the security group doesn't allow deletion.
### `InvalidDBSubnetGroupStateFault(String)`
The subnet group can't be deleted because it's in use.
### `InvalidSubnet(String)`
The requested subnet is not valid, or multiple subnets were requested that are not all in a common virtual private cloud (VPC).
### `InvalidVPCNetworkStateFault(String)`
The subnet group doesn't cover all Availability Zones after it is created because of changes that were made.
### `StorageQuotaExceededFault(String)`
The request would cause you to exceed the allowed amount of storage available across all instances.
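A common way to consume a variant list this long is to split it into retryable and non-retryable faults. The sketch below is one possible policy, not something this page prescribes; it uses only the `ModifyDBClusterError` enum defined above.
```
// Sketch only: which faults are worth retrying is an application decision.
use rusoto_docdb::ModifyDBClusterError;
/// True if the fault describes a transient state that may clear on its own,
/// so the ModifyDBCluster call can reasonably be retried after a delay.
fn is_retryable(err: &ModifyDBClusterError) -> bool {
matches!(
err,
ModifyDBClusterError::InvalidDBClusterStateFault(_)
| ModifyDBClusterError::InvalidDBInstanceStateFault(_)
| ModifyDBClusterError::InvalidDBSecurityGroupStateFault(_)
| ModifyDBClusterError::InvalidDBSubnetGroupStateFault(_)
)
// Not-found, already-exists, quota, and network-state faults require a
// configuration change rather than a retry.
}
```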
Implementations
---
source### impl ModifyDBClusterError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ModifyDBClusterError>
Trait Implementations
---
source### impl Debug for ModifyDBClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ModifyDBClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ModifyDBClusterError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ModifyDBClusterError> for ModifyDBClusterError
source#### fn eq(&self, other: &ModifyDBClusterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ModifyDBClusterError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ModifyDBClusterError
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBClusterError
### impl Send for ModifyDBClusterError
### impl Sync for ModifyDBClusterError
### impl Unpin for ModifyDBClusterError
### impl UnwindSafe for ModifyDBClusterError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::ModifyDBClusterParameterGroupError
===
```
pub enum ModifyDBClusterParameterGroupError {
DBParameterGroupNotFoundFault(String),
InvalidDBParameterGroupStateFault(String),
}
```
Errors returned by ModifyDBClusterParameterGroup
Variants
---
### `DBParameterGroupNotFoundFault(String)`
`DBParameterGroupName` doesn't refer to an existing parameter group.
### `InvalidDBParameterGroupStateFault(String)`
The parameter group is in use, or it is in a state that is not valid. If you are trying to delete the parameter group, you can't delete it when the parameter group is in this state.
Implementations
---
source### impl ModifyDBClusterParameterGroupError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ModifyDBClusterParameterGroupError>
Trait Implementations
---
source### impl Debug for ModifyDBClusterParameterGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ModifyDBClusterParameterGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ModifyDBClusterParameterGroupError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ModifyDBClusterParameterGroupError> for ModifyDBClusterParameterGroupError
source#### fn eq(&self, other: &ModifyDBClusterParameterGroupError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ModifyDBClusterParameterGroupError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ModifyDBClusterParameterGroupError
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBClusterParameterGroupError
### impl Send for ModifyDBClusterParameterGroupError
### impl Sync for ModifyDBClusterParameterGroupError
### impl Unpin for ModifyDBClusterParameterGroupError
### impl UnwindSafe for ModifyDBClusterParameterGroupError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::ModifyDBClusterSnapshotAttributeError
===
```
pub enum ModifyDBClusterSnapshotAttributeError {
DBClusterSnapshotNotFoundFault(String),
InvalidDBClusterSnapshotStateFault(String),
SharedSnapshotQuotaExceededFault(String),
}
```
Errors returned by ModifyDBClusterSnapshotAttribute
Variants
---
### `DBClusterSnapshotNotFoundFault(String)`
`DBClusterSnapshotIdentifier` doesn't refer to an existing cluster snapshot.
### `InvalidDBClusterSnapshotStateFault(String)`
The provided value isn't a valid cluster snapshot state.
### `SharedSnapshotQuotaExceededFault(String)`
You have exceeded the maximum number of accounts that you can share a manual DB snapshot with.
Implementations
---
source### impl ModifyDBClusterSnapshotAttributeError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ModifyDBClusterSnapshotAttributeError>
Trait Implementations
---
source### impl Debug for ModifyDBClusterSnapshotAttributeError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ModifyDBClusterSnapshotAttributeError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ModifyDBClusterSnapshotAttributeError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ModifyDBClusterSnapshotAttributeError> for ModifyDBClusterSnapshotAttributeError
source#### fn eq(&self, other: &ModifyDBClusterSnapshotAttributeError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ModifyDBClusterSnapshotAttributeError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ModifyDBClusterSnapshotAttributeError
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBClusterSnapshotAttributeError
### impl Send for ModifyDBClusterSnapshotAttributeError
### impl Sync for ModifyDBClusterSnapshotAttributeError
### impl Unpin for ModifyDBClusterSnapshotAttributeError
### impl UnwindSafe for ModifyDBClusterSnapshotAttributeError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::ModifyDBInstanceError
===
```
pub enum ModifyDBInstanceError {
AuthorizationNotFoundFault(String),
CertificateNotFoundFault(String),
DBInstanceAlreadyExistsFault(String),
DBInstanceNotFoundFault(String),
DBParameterGroupNotFoundFault(String),
DBSecurityGroupNotFoundFault(String),
DBUpgradeDependencyFailureFault(String),
InsufficientDBInstanceCapacityFault(String),
InvalidDBInstanceStateFault(String),
InvalidDBSecurityGroupStateFault(String),
InvalidVPCNetworkStateFault(String),
StorageQuotaExceededFault(String),
StorageTypeNotSupportedFault(String),
}
```
Errors returned by ModifyDBInstance
Variants
---
### `AuthorizationNotFoundFault(String)`
The specified CIDR IP or Amazon EC2 security group isn't authorized for the specified security group.
Amazon DocumentDB also might not be authorized to perform necessary actions on your behalf using IAM.
### `CertificateNotFoundFault(String)`
`CertificateIdentifier` doesn't refer to an existing certificate.
### `DBInstanceAlreadyExistsFault(String)`
You already have an instance with the given identifier.
### `DBInstanceNotFoundFault(String)`
`DBInstanceIdentifier` doesn't refer to an existing instance.
### `DBParameterGroupNotFoundFault(String)`
`DBParameterGroupName` doesn't refer to an existing parameter group.
### `DBSecurityGroupNotFoundFault(String)`
`DBSecurityGroupName` doesn't refer to an existing security group.
### `DBUpgradeDependencyFailureFault(String)`
The upgrade failed because a resource that the upgrade depends on can't be modified.
### `InsufficientDBInstanceCapacityFault(String)`
The specified instance class isn't available in the specified Availability Zone.
### `InvalidDBInstanceStateFault(String)`
The specified instance isn't in the *available* state.
### `InvalidDBSecurityGroupStateFault(String)`
The state of the security group doesn't allow deletion.
### `InvalidVPCNetworkStateFault(String)`
The subnet group doesn't cover all Availability Zones after it is created because of changes that were made.
### `StorageQuotaExceededFault(String)`
The request would cause you to exceed the allowed amount of storage available across all instances.
### `StorageTypeNotSupportedFault(String)`
Storage of the specified `StorageType` can't be associated with the DB instance.
Implementations
---
source### impl ModifyDBInstanceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ModifyDBInstanceError>
Trait Implementations
---
source### impl Debug for ModifyDBInstanceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ModifyDBInstanceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ModifyDBInstanceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ModifyDBInstanceError> for ModifyDBInstanceError
source#### fn eq(&self, other: &ModifyDBInstanceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ModifyDBInstanceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ModifyDBInstanceError
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBInstanceError
### impl Send for ModifyDBInstanceError
### impl Sync for ModifyDBInstanceError
### impl Unpin for ModifyDBInstanceError
### impl UnwindSafe for ModifyDBInstanceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::ModifyDBSubnetGroupError
===
```
pub enum ModifyDBSubnetGroupError {
DBSubnetGroupDoesNotCoverEnoughAZs(String),
DBSubnetGroupNotFoundFault(String),
DBSubnetQuotaExceededFault(String),
InvalidSubnet(String),
SubnetAlreadyInUse(String),
}
```
Errors returned by ModifyDBSubnetGroup
Variants
---
### `DBSubnetGroupDoesNotCoverEnoughAZs(String)`
Subnets in the subnet group should cover at least two Availability Zones unless there is only one Availability Zone.
### `DBSubnetGroupNotFoundFault(String)`
`DBSubnetGroupName` doesn't refer to an existing subnet group.
### `DBSubnetQuotaExceededFault(String)`
The request would cause you to exceed the allowed number of subnets in a subnet group.
### `InvalidSubnet(String)`
The requested subnet is not valid, or multiple subnets were requested that are not all in a common virtual private cloud (VPC).
### `SubnetAlreadyInUse(String)`
The subnet is already in use in the Availability Zone.
Implementations
---
source### impl ModifyDBSubnetGroupError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ModifyDBSubnetGroupError>
Trait Implementations
---
source### impl Debug for ModifyDBSubnetGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ModifyDBSubnetGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ModifyDBSubnetGroupError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ModifyDBSubnetGroupError> for ModifyDBSubnetGroupError
source#### fn eq(&self, other: &ModifyDBSubnetGroupError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ModifyDBSubnetGroupError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ModifyDBSubnetGroupError
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyDBSubnetGroupError
### impl Send for ModifyDBSubnetGroupError
### impl Sync for ModifyDBSubnetGroupError
### impl Unpin for ModifyDBSubnetGroupError
### impl UnwindSafe for ModifyDBSubnetGroupError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::ModifyEventSubscriptionError
===
```
pub enum ModifyEventSubscriptionError {
EventSubscriptionQuotaExceededFault(String),
SNSInvalidTopicFault(String),
SNSNoAuthorizationFault(String),
SNSTopicArnNotFoundFault(String),
SubscriptionCategoryNotFoundFault(String),
SubscriptionNotFoundFault(String),
}
```
Errors returned by ModifyEventSubscription
Variants
---
### `EventSubscriptionQuotaExceededFault(String)`
You have reached the maximum number of event subscriptions.
### `SNSInvalidTopicFault(String)`
Amazon SNS has responded that there is a problem with the specified topic.
### `SNSNoAuthorizationFault(String)`
You do not have permission to publish to the SNS topic Amazon Resource Name (ARN).
### `SNSTopicArnNotFoundFault(String)`
The SNS topic Amazon Resource Name (ARN) does not exist.
### `SubscriptionCategoryNotFoundFault(String)`
The provided category does not exist.
### `SubscriptionNotFoundFault(String)`
The subscription name does not exist.
Implementations
---
source### impl ModifyEventSubscriptionError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ModifyEventSubscriptionError>
Trait Implementations
---
source### impl Debug for ModifyEventSubscriptionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ModifyEventSubscriptionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ModifyEventSubscriptionError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ModifyEventSubscriptionError> for ModifyEventSubscriptionError
source#### fn eq(&self, other: &ModifyEventSubscriptionError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ModifyEventSubscriptionError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ModifyEventSubscriptionError
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyEventSubscriptionError
### impl Send for ModifyEventSubscriptionError
### impl Sync for ModifyEventSubscriptionError
### impl Unpin for ModifyEventSubscriptionError
### impl UnwindSafe for ModifyEventSubscriptionError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::ModifyGlobalClusterError
===
```
pub enum ModifyGlobalClusterError {
GlobalClusterNotFoundFault(String),
InvalidGlobalClusterStateFault(String),
}
```
Errors returned by ModifyGlobalCluster
Variants
---
### `GlobalClusterNotFoundFault(String)`
The `GlobalClusterIdentifier` doesn't refer to an existing global cluster.
### `InvalidGlobalClusterStateFault(String)`
The requested operation can't be performed while the cluster is in this state.
Implementations
---
source### impl ModifyGlobalClusterError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ModifyGlobalClusterError>
Trait Implementations
---
source### impl Debug for ModifyGlobalClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ModifyGlobalClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ModifyGlobalClusterError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ModifyGlobalClusterError> for ModifyGlobalClusterError
source#### fn eq(&self, other: &ModifyGlobalClusterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ModifyGlobalClusterError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ModifyGlobalClusterError
Auto Trait Implementations
---
### impl RefUnwindSafe for ModifyGlobalClusterError
### impl Send for ModifyGlobalClusterError
### impl Sync for ModifyGlobalClusterError
### impl Unpin for ModifyGlobalClusterError
### impl UnwindSafe for ModifyGlobalClusterError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::RebootDBInstanceError
===
```
pub enum RebootDBInstanceError {
DBInstanceNotFoundFault(String),
InvalidDBInstanceStateFault(String),
}
```
Errors returned by RebootDBInstance
Variants
---
### `DBInstanceNotFoundFault(String)`
`DBInstanceIdentifier` doesn't refer to an existing instance.
### `InvalidDBInstanceStateFault(String)`
The specified instance isn't in the *available* state.
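Since `InvalidDBInstanceStateFault` indicates a transient condition while `DBInstanceNotFoundFault` does not, a retry loop can treat the two variants differently. The sketch below assumes the rusoto client API (`Docdb` trait, `DocdbClient`, `RebootDBInstanceMessage`, `Clone` on the message type) and a `tokio` 1.x runtime; only the error enum and `RusotoError` come from this page.
```
// Sketch only: retries a reboot while the instance is still settling.
use std::time::Duration;
use rusoto_core::RusotoError;
use rusoto_docdb::{Docdb, DocdbClient, RebootDBInstanceError, RebootDBInstanceMessage};
async fn reboot_with_retry(client: &DocdbClient, request: RebootDBInstanceMessage) -> bool {
for _ in 0..5 {
match client.reboot_db_instance(request.clone()).await {
Ok(_) => return true,
// The instance exists but is not yet in the *available* state: wait and retry.
Err(RusotoError::Service(RebootDBInstanceError::InvalidDBInstanceStateFault(_))) => {
tokio::time::sleep(Duration::from_secs(30)).await;
}
// A missing instance (or any transport-level error) will not fix itself.
Err(other) => {
eprintln!("reboot failed: {}", other);
return false;
}
}
}
false
}
```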
Implementations
---
source### impl RebootDBInstanceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<RebootDBInstanceError>
Trait Implementations
---
source### impl Debug for RebootDBInstanceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for RebootDBInstanceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for RebootDBInstanceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`)
Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string()
Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<RebootDBInstanceError> for RebootDBInstanceError
source#### fn eq(&self, other: &RebootDBInstanceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RebootDBInstanceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RebootDBInstanceError
Auto Trait Implementations
---
### impl RefUnwindSafe for RebootDBInstanceError
### impl Send for RebootDBInstanceError
### impl Sync for RebootDBInstanceError
### impl Unpin for RebootDBInstanceError
### impl UnwindSafe for RebootDBInstanceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::RemoveFromGlobalClusterError
===
```
pub enum RemoveFromGlobalClusterError {
DBClusterNotFoundFault(String),
GlobalClusterNotFoundFault(String),
InvalidGlobalClusterStateFault(String),
}
```
Errors returned by RemoveFromGlobalCluster
Variants
---
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `GlobalClusterNotFoundFault(String)`
The `GlobalClusterIdentifier` doesn't refer to an existing global cluster.
### `InvalidGlobalClusterStateFault(String)`
The requested operation can't be performed while the cluster is in this state.
Implementations
---
source### impl RemoveFromGlobalClusterError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<RemoveFromGlobalClusterError>
Trait Implementations
---
source### impl Debug for RemoveFromGlobalClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for RemoveFromGlobalClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for RemoveFromGlobalClusterError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<RemoveFromGlobalClusterError> for RemoveFromGlobalClusterError
source#### fn eq(&self, other: &RemoveFromGlobalClusterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RemoveFromGlobalClusterError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RemoveFromGlobalClusterError
Auto Trait Implementations
---
### impl RefUnwindSafe for RemoveFromGlobalClusterError
### impl Send for RemoveFromGlobalClusterError
### impl Sync for RemoveFromGlobalClusterError
### impl Unpin for RemoveFromGlobalClusterError
### impl UnwindSafe for RemoveFromGlobalClusterError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::RemoveSourceIdentifierFromSubscriptionError
===
```
pub enum RemoveSourceIdentifierFromSubscriptionError {
SourceNotFoundFault(String),
SubscriptionNotFoundFault(String),
}
```
Errors returned by RemoveSourceIdentifierFromSubscription
Variants
---
### `SourceNotFoundFault(String)`
The requested source could not be found.
### `SubscriptionNotFoundFault(String)`
The subscription name does not exist.
Implementations
---
source### impl RemoveSourceIdentifierFromSubscriptionError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<RemoveSourceIdentifierFromSubscriptionError>
Trait Implementations
---
source### impl Debug for RemoveSourceIdentifierFromSubscriptionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for RemoveSourceIdentifierFromSubscriptionError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for RemoveSourceIdentifierFromSubscriptionError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<RemoveSourceIdentifierFromSubscriptionError> for RemoveSourceIdentifierFromSubscriptionError
source#### fn eq(&self, other: &RemoveSourceIdentifierFromSubscriptionError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RemoveSourceIdentifierFromSubscriptionError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RemoveSourceIdentifierFromSubscriptionError
Auto Trait Implementations
---
### impl RefUnwindSafe for RemoveSourceIdentifierFromSubscriptionError
### impl Send for RemoveSourceIdentifierFromSubscriptionError
### impl Sync for RemoveSourceIdentifierFromSubscriptionError
### impl Unpin for RemoveSourceIdentifierFromSubscriptionError
### impl UnwindSafe for RemoveSourceIdentifierFromSubscriptionError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::RemoveTagsFromResourceError
===
```
pub enum RemoveTagsFromResourceError {
DBClusterNotFoundFault(String),
DBInstanceNotFoundFault(String),
DBSnapshotNotFoundFault(String),
}
```
Errors returned by RemoveTagsFromResource
Variants
---
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `DBInstanceNotFoundFault(String)`
`DBInstanceIdentifier` doesn't refer to an existing instance.
### `DBSnapshotNotFoundFault(String)`
`DBSnapshotIdentifier` doesn't refer to an existing snapshot.
Implementations
---
source### impl RemoveTagsFromResourceError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<RemoveTagsFromResourceError>
Trait Implementations
---
source### impl Debug for RemoveTagsFromResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for RemoveTagsFromResourceError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for RemoveTagsFromResourceError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<RemoveTagsFromResourceError> for RemoveTagsFromResourceError
source#### fn eq(&self, other: &RemoveTagsFromResourceError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RemoveTagsFromResourceError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RemoveTagsFromResourceError
Auto Trait Implementations
---
### impl RefUnwindSafe for RemoveTagsFromResourceError
### impl Send for RemoveTagsFromResourceError
### impl Sync for RemoveTagsFromResourceError
### impl Unpin for RemoveTagsFromResourceError
### impl UnwindSafe for RemoveTagsFromResourceError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::ResetDBClusterParameterGroupError
===
```
pub enum ResetDBClusterParameterGroupError {
DBParameterGroupNotFoundFault(String),
InvalidDBParameterGroupStateFault(String),
}
```
Errors returned by ResetDBClusterParameterGroup
Variants
---
### `DBParameterGroupNotFoundFault(String)`
`DBParameterGroupName` doesn't refer to an existing parameter group.
### `InvalidDBParameterGroupStateFault(String)`
The parameter group is in use, or it is in a state that is not valid. If you are trying to delete the parameter group, you can't delete it when the parameter group is in this state.
Implementations
---
source### impl ResetDBClusterParameterGroupError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<ResetDBClusterParameterGroupError>
Trait Implementations
---
source### impl Debug for ResetDBClusterParameterGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for ResetDBClusterParameterGroupError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for ResetDBClusterParameterGroupError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<ResetDBClusterParameterGroupError> for ResetDBClusterParameterGroupError
source#### fn eq(&self, other: &ResetDBClusterParameterGroupError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &ResetDBClusterParameterGroupError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for ResetDBClusterParameterGroupError
Auto Trait Implementations
---
### impl RefUnwindSafe for ResetDBClusterParameterGroupError
### impl Send for ResetDBClusterParameterGroupError
### impl Sync for ResetDBClusterParameterGroupError
### impl Unpin for ResetDBClusterParameterGroupError
### impl UnwindSafe for ResetDBClusterParameterGroupError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::RestoreDBClusterFromSnapshotError
===
```
pub enum RestoreDBClusterFromSnapshotError {
DBClusterAlreadyExistsFault(String),
DBClusterQuotaExceededFault(String),
DBClusterSnapshotNotFoundFault(String),
DBSnapshotNotFoundFault(String),
DBSubnetGroupNotFoundFault(String),
InsufficientDBClusterCapacityFault(String),
InsufficientStorageClusterCapacityFault(String),
InvalidDBClusterSnapshotStateFault(String),
InvalidDBSnapshotStateFault(String),
InvalidRestoreFault(String),
InvalidSubnet(String),
InvalidVPCNetworkStateFault(String),
KMSKeyNotAccessibleFault(String),
StorageQuotaExceededFault(String),
}
```
Errors returned by RestoreDBClusterFromSnapshot
Variants
---
### `DBClusterAlreadyExistsFault(String)`
You already have a cluster with the given identifier.
### `DBClusterQuotaExceededFault(String)`
The cluster can't be created because you have reached the maximum allowed quota of clusters.
### `DBClusterSnapshotNotFoundFault(String)`
`DBClusterSnapshotIdentifier` doesn't refer to an existing cluster snapshot.
### `DBSnapshotNotFoundFault(String)`
`DBSnapshotIdentifier` doesn't refer to an existing snapshot.
### `DBSubnetGroupNotFoundFault(String)`
`DBSubnetGroupName` doesn't refer to an existing subnet group.
### `InsufficientDBClusterCapacityFault(String)`
The cluster doesn't have enough capacity for the current operation.
### `InsufficientStorageClusterCapacityFault(String)`
There is not enough storage available for the current action. You might be able to resolve this error by updating your subnet group to use different Availability Zones that have more storage available.
### `InvalidDBClusterSnapshotStateFault(String)`
The provided value isn't a valid cluster snapshot state.
### `InvalidDBSnapshotStateFault(String)`
The state of the snapshot doesn't allow deletion.
### `InvalidRestoreFault(String)`
You cannot restore from a virtual private cloud (VPC) backup to a non-VPC DB instance.
### `InvalidSubnet(String)`
The requested subnet is not valid, or multiple subnets were requested that are not all in a common virtual private cloud (VPC).
### `InvalidVPCNetworkStateFault(String)`
The subnet group doesn't cover all Availability Zones after it is created because of changes that were made.
### `KMSKeyNotAccessibleFault(String)`
An error occurred when accessing a KMS key.
### `StorageQuotaExceededFault(String)`
The request would cause you to exceed the allowed amount of storage available across all instances.
Implementations
---
source### impl RestoreDBClusterFromSnapshotError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<RestoreDBClusterFromSnapshotError>
Trait Implementations
---
source### impl Debug for RestoreDBClusterFromSnapshotError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for RestoreDBClusterFromSnapshotError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for RestoreDBClusterFromSnapshotError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<RestoreDBClusterFromSnapshotError> for RestoreDBClusterFromSnapshotError
source#### fn eq(&self, other: &RestoreDBClusterFromSnapshotError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RestoreDBClusterFromSnapshotError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RestoreDBClusterFromSnapshotError
Auto Trait Implementations
---
### impl RefUnwindSafe for RestoreDBClusterFromSnapshotError
### impl Send for RestoreDBClusterFromSnapshotError
### impl Sync for RestoreDBClusterFromSnapshotError
### impl Unpin for RestoreDBClusterFromSnapshotError
### impl UnwindSafe for RestoreDBClusterFromSnapshotError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::RestoreDBClusterToPointInTimeError
===
```
pub enum RestoreDBClusterToPointInTimeError {
DBClusterAlreadyExistsFault(String),
DBClusterNotFoundFault(String),
DBClusterQuotaExceededFault(String),
DBClusterSnapshotNotFoundFault(String),
DBSubnetGroupNotFoundFault(String),
InsufficientDBClusterCapacityFault(String),
InsufficientStorageClusterCapacityFault(String),
InvalidDBClusterSnapshotStateFault(String),
InvalidDBClusterStateFault(String),
InvalidDBSnapshotStateFault(String),
InvalidRestoreFault(String),
InvalidSubnet(String),
InvalidVPCNetworkStateFault(String),
KMSKeyNotAccessibleFault(String),
StorageQuotaExceededFault(String),
}
```
Errors returned by RestoreDBClusterToPointInTime
Variants
---
### `DBClusterAlreadyExistsFault(String)`
You already have a cluster with the given identifier.
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `DBClusterQuotaExceededFault(String)`
The cluster can't be created because you have reached the maximum allowed quota of clusters.
### `DBClusterSnapshotNotFoundFault(String)`
`DBClusterSnapshotIdentifier` doesn't refer to an existing cluster snapshot.
### `DBSubnetGroupNotFoundFault(String)`
`DBSubnetGroupName` doesn't refer to an existing subnet group.
### `InsufficientDBClusterCapacityFault(String)`
The cluster doesn't have enough capacity for the current operation.
### `InsufficientStorageClusterCapacityFault(String)`
There is not enough storage available for the current action. You might be able to resolve this error by updating your subnet group to use different Availability Zones that have more storage available.
### `InvalidDBClusterSnapshotStateFault(String)`
The provided value isn't a valid cluster snapshot state.
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
### `InvalidDBSnapshotStateFault(String)`
The state of the snapshot doesn't allow deletion.
### `InvalidRestoreFault(String)`
You cannot restore from a virtual private cloud (VPC) backup to a non-VPC DB instance.
### `InvalidSubnet(String)`
The requested subnet is not valid, or multiple subnets were requested that are not all in a common virtual private cloud (VPC).
### `InvalidVPCNetworkStateFault(String)`
The subnet group doesn't cover all Availability Zones after it is created because of changes that were made.
### `KMSKeyNotAccessibleFault(String)`
An error occurred when accessing a KMS key.
### `StorageQuotaExceededFault(String)`
The request would cause you to exceed the allowed amount of storage available across all instances.
Implementations
---
source### impl RestoreDBClusterToPointInTimeError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<RestoreDBClusterToPointInTimeError>
Trait Implementations
---
source### impl Debug for RestoreDBClusterToPointInTimeError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for RestoreDBClusterToPointInTimeError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for RestoreDBClusterToPointInTimeError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<RestoreDBClusterToPointInTimeError> for RestoreDBClusterToPointInTimeError
source#### fn eq(&self, other: &RestoreDBClusterToPointInTimeError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &RestoreDBClusterToPointInTimeError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for RestoreDBClusterToPointInTimeError
Auto Trait Implementations
---
### impl RefUnwindSafe for RestoreDBClusterToPointInTimeError
### impl Send for RestoreDBClusterToPointInTimeError
### impl Sync for RestoreDBClusterToPointInTimeError
### impl Unpin for RestoreDBClusterToPointInTimeError
### impl UnwindSafe for RestoreDBClusterToPointInTimeError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::StartDBClusterError
===
```
pub enum StartDBClusterError {
DBClusterNotFoundFault(String),
InvalidDBClusterStateFault(String),
InvalidDBInstanceStateFault(String),
}
```
Errors returned by StartDBCluster
Variants
---
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
### `InvalidDBInstanceStateFault(String)`
The specified instance isn't in the *available* state.
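As an illustrative sketch only (not part of the generated API), a caller might treat the two state faults above as transient and retry after a delay, while a missing cluster is terminal:
```
use rusoto_docdb::StartDBClusterError;
// Whether a failed StartDBCluster call may succeed if retried later.
fn is_retryable(err: &StartDBClusterError) -> bool {
    matches!(
        err,
        StartDBClusterError::InvalidDBClusterStateFault(_)
            | StartDBClusterError::InvalidDBInstanceStateFault(_)
    )
}
```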
Implementations
---
source### impl StartDBClusterError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<StartDBClusterError>
Trait Implementations
---
source### impl Debug for StartDBClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for StartDBClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for StartDBClusterError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<StartDBClusterError> for StartDBClusterError
source#### fn eq(&self, other: &StartDBClusterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &StartDBClusterError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for StartDBClusterError
Auto Trait Implementations
---
### impl RefUnwindSafe for StartDBClusterError
### impl Send for StartDBClusterError
### impl Sync for StartDBClusterError
### impl Unpin for StartDBClusterError
### impl UnwindSafe for StartDBClusterError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Enum rusoto_docdb::StopDBClusterError
===
```
pub enum StopDBClusterError {
DBClusterNotFoundFault(String),
InvalidDBClusterStateFault(String),
InvalidDBInstanceStateFault(String),
}
```
Errors returned by StopDBCluster
Variants
---
### `DBClusterNotFoundFault(String)`
`DBClusterIdentifier` doesn't refer to an existing cluster.
### `InvalidDBClusterStateFault(String)`
The cluster isn't in a valid state.
### `InvalidDBInstanceStateFault(String)`
The specified instance isn't in the *available* state.
Implementations
---
source### impl StopDBClusterError
source#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<StopDBClusterError>
Trait Implementations
---
source### impl Debug for StopDBClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Display for StopDBClusterError
source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
source### impl Error for StopDBClusterError
1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
source#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more
1.0.0 · source#### fn description(&self) -> &str
👎 Deprecated since 1.42.0: use the Display impl or to_string(). Read more
1.0.0 · source#### fn cause(&self) -> Option<&dyn Error>
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
source### impl PartialEq<StopDBClusterError> for StopDBClusterError
source#### fn eq(&self, other: &StopDBClusterError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
source#### fn ne(&self, other: &StopDBClusterError) -> bool
This method tests for `!=`.
source### impl StructuralPartialEq for StopDBClusterError
Auto Trait Implementations
---
### impl RefUnwindSafe for StopDBClusterError
### impl Send for StopDBClusterError
### impl Sync for StopDBClusterError
### impl Unpin for StopDBClusterError
### impl UnwindSafe for StopDBClusterError
Blanket Implementations
---
source### impl<T> Any for T where T: 'static + ?Sized,
source#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
source### impl<T> Borrow<T> for T where T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
source### impl<T> BorrowMut<T> for T where T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
source### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
source### impl<T> Instrument for T
source#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more
source#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more
source### impl<T, U> Into<U> for T where U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
source### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
source### impl<T> ToString for T where T: Display + ?Sized,
source#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
source### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
source### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
source### impl<T> WithSubscriber for T
source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
source#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more
Crate blockbook
===
Rust Blockbook Library
---
This crate provides REST and WebSocket clients to query various information from a Blockbook server, which is a block explorer backend created and maintained by SatoshiLabs.
Note that this crate currently only exposes a Bitcoin-specific API,
even though Blockbook provides a unified API that supports multiple cryptocurrencies.
The methods exposed in this crate make extensive use of types from the
`bitcoin` crate to provide strongly typed APIs.
An example of how to use the `REST client`:
```
let client = blockbook::Blockbook::new(url);
// query the Genesis block hash
let genesis_hash = client
.block_hash(blockbook::Height::from_consensus(0).unwrap())
.await?;
assert_eq!(
genesis_hash.to_string(),
"000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"
);
// query the full block
let genesis = client.block_by_hash(genesis_hash).await?;
assert_eq!(genesis.previous_block_hash, None);
// inspect the first coinbase transaction
let tx = genesis.txs.get(0).unwrap();
assert!((tx.vout.get(0).unwrap().value.to_btc() - 50.0).abs() < f64::EPSILON);
```
For an example of how to use the WebSocket client, see `its documentation`.
### Supported Blockbook Version
The currently supported Blockbook version is `0.4.0`.
Modules
---
* websocket: The WebSocket client.
This module contains the WebSocket `Client` for interacting with a Blockbook server via a WebSocket connection. The client provides numerous query-response methods, as well as a few subscription methods.
Structs
---
* AddressInfo: Address information that includes a list of involved transaction IDs.
* AddressInfoBasic: Information about the funds moved from or to an address.
* AddressInfoDetailed: Address information that includes a list of involved transactions.
* AddressInfoPaging: Paging information.
* Backend: Information about the full node backing the Blockbook server.
* BalanceHistory: A balance history entry.
* Block: Information about a block.
* BlockTransaction: Information about a transaction.
* BlockVin: Information about a transaction input.
* BlockVout: Information about a transaction output.
* Blockbook: A REST client that can query a Blockbook server.
* OpReturn: An `OP_RETURN` output.
* ScriptPubKey: A script specifying spending conditions.
* ScriptSig: A script fulfilling spending conditions.
* Status: Status and backend information of the Blockbook server.
* StatusBlockbook: Status information of the Blockbook server.
* Ticker: A timestamp and a set of exchange rates for multiple currencies.
* TickersList: Information about the available exchange rates at a given timestamp.
* Token: Information about funds at a Bitcoin address derived from an `extended public key`.
* Transaction: Information about a transaction.
* TransactionSpecific: Detailed information about a transaction input.
* Utxo: Information about an unspent transaction output.
* Version: Version information about the full node.
* Vin: Information about a transaction input.
* VinSpecific: Bitcoin-specific information about a transaction input.
* Vout: Information about a transaction output.
* VoutSpecific: Bitcoin-specific information about a transaction output.
* XPubInfo: Detailed information about funds held in addresses derivable from an `extended public key`,
* XPubInfoBasic: Aggregated information about funds held in addresses derivable from an `extended public key`.
Enums
---
* AddressBlockVout: Either an address or an `OP_RETURN output`.
* AddressFilter: Used to select which addresses to consider when deriving from `extended public keys`.
* Asset: A cryptocurrency asset.
* Chain: The specific chain (mainnet, testnet, …).
* Currency: The supported currencies.
* Error: The errors emitted by the REST client.
* ScriptPubKeyType: The type of spending condition.
* Tx: The variants for the transactions contained in `AddressInfoDetailed::transactions`.
* TxDetail: Used to select the level of detail for `address info` transactions.
Struct blockbook::Blockbook
===
```
pub struct Blockbook { /* private fields */ }
```
A REST client that can query a Blockbook server.
Provides a set of methods that allow strongly typed access to the APIs available from a Blockbook server.
See the `module documentation` for some concrete examples of how to call these APIs.
Implementations
---
### impl Blockbook
#### pub fn new(base_url: Url) -> Self
Constructs a new client for a given server `base_url`.
`base_url` should not contain the `/api/v2/` path fragment.
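A minimal construction sketch, using a placeholder server address and assuming the `Url` argument can be parsed from a string:
```
// Placeholder Blockbook instance; substitute your own server's base URL.
let url = "https://btc1.example.com".parse().unwrap();
let client = blockbook::Blockbook::new(url);
```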
#### pub async fn status(&self) -> Result<Status, Error>
Queries information about the Blockbook server status.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn block_hash(&self, height: Height) -> Result<BlockHash, Error>
Retrieves the `BlockHash` of a block of the given `height`.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn transaction(&self, txid: Txid) -> Result<Transaction, Error>
Retrieves information about a transaction with a given `txid`.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn transaction_specific(
&self,
txid: Txid
) -> Result<TransactionSpecific, Error>
Retrieves information about a transaction with a given `txid` as reported by the Bitcoin Core backend.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn block_by_height(&self, height: Height) -> Result<Block, Error>
Retrieves information about a block of the specified `height`.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn block_by_hash(&self, hash: BlockHash) -> Result<Block, Error>
Retrieves information about a block with the specified `hash`.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn tickers_list(&self, timestamp: Time) -> Result<TickersList, Error>
Retrieves a list of available price tickers close to a given `timestamp`.
The API will return a tickers list that is as close as possible to the specified `timestamp`.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn ticker(
&self,
currency: Currency,
timestamp: Option<Time>
) -> Result<Ticker, Error>
Retrieves the exchange rate for a given `currency`.
The API will return a ticker that is as close as possible to the provided
`timestamp`. If `timestamp` is `None`, the latest available ticker will be returned.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn tickers(&self, timestamp: Option<Time>) -> Result<Ticker, Error>
Retrieves the exchange rates for all available currencies.
The API will return tickers that are as close as possible to the provided
`timestamp`. If `timestamp` is `None`, the latest available tickers will be returned.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn address_info_specific_basic(
&self,
address: &Address,
also_in: Option<&Currency>
) -> Result<AddressInfoBasic, Error>
Retrieves basic aggregated information about a provided `address`.
If an `also_in` `Currency` is specified, the total balance will also be returned in terms of that currency.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn address_info(
&self,
address: &Address
) -> Result<AddressInfo, Error>
Retrieves basic aggregated information as well as a list of `Txid`s for a given `address`.
The `txids` field of the response will be paged if the `address` was involved in many transactions. In this case, use `address_info_specific`
to control the pagination.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn address_info_specific(
&self,
address: &Address,
page: Option<&NonZeroU32>,
pagesize: Option<&NonZeroU16>,
from: Option<&Height>,
to: Option<&Height>,
also_in: Option<&Currency>
) -> Result<AddressInfo, Error>
Retrieves basic aggregated information as well as a paginated list of `Txid`s for a given `address`.
If an `also_in` `Currency` is specified, the total balance will also be returned in terms of that currency.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
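A hedged usage sketch, assuming `client` and a Bitcoin `address` are already in scope inside an async context; the page and page-size values are arbitrary:
```
use std::num::{NonZeroU16, NonZeroU32};
// Fetch the second page of 25 transaction IDs involving `address`.
let info = client
    .address_info_specific(
        &address,
        Some(&NonZeroU32::new(2).unwrap()),  // page
        Some(&NonZeroU16::new(25).unwrap()), // pagesize
        None,                                // from height
        None,                                // to height
        None,                                // also_in currency
    )
    .await?;
// `info.txids` now holds at most 25 transaction IDs for that page.
```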
#### pub async fn address_info_specific_detailed(
&self,
address: &Address,
page: Option<&NonZeroU32>,
pagesize: Option<&NonZeroU16>,
from: Option<&Height>,
to: Option<&Height>,
details: &TxDetail,
also_in: Option<&Currency>
) -> Result<AddressInfoDetailed, Error>
Retrieves basic aggregated information as well as a paginated list of `Tx` objects for a given `address`.
The `details` parameter specifies how much information should be returned for the transactions in question:
* `TxDetail::Light`: A list of `Tx::Light` abbreviated transaction information
* `TxDetail::Full`: A list of `Tx::Ordinary` detailed transaction information
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn utxos_from_address(
&self,
address: Address,
confirmed_only: bool
) -> Result<Vec<Utxo>, Error>
Retrieves information about unspent transaction outputs (UTXOs) that a given address controls.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn utxos_from_xpub(
&self,
xpub: &str,
confirmed_only: bool
) -> Result<Vec<Utxo>, Error>
Retrieves information about unspent transaction outputs (UTXOs) that are controlled by addresses that can be derived from the given `extended public key`.
For details of how Blockbook attempts to derive addresses, see the
`xpub_info_basic` documentation.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn balance_history(
&self,
address: &Address,
from: Option<Time>,
to: Option<Time>,
currency: Option<Currency>,
group_by: Option<u32>
) -> Result<Vec<BalanceHistory>, Error>
Retrieves a paginated list of information about the balance history of a given `address`.
If a `currency` is specified, contemporary exchange rates will be included for each balance history event.
The history can be aggregated into chunks of time of a desired length by specifying a `group_by` interval in seconds.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
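For example, a hedged sketch requesting day-sized buckets (86,400 seconds) over the full history, with no exchange-rate information, assuming `client` and `address` are in scope:
```
let history = client
    .balance_history(&address, None, None, None, Some(86_400))
    .await?;
// Each entry is a `BalanceHistory` bucket covering one day.
```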
#### pub async fn send_transaction(
&self,
tx: &BitcoinTransaction
) -> Result<Txid, Error>
Broadcasts a transaction to the network, returning its `Txid`.
If you already have a serialized transaction, you can use this API as follows:
```
// Assuming you have a hex serialization of a transaction:
// let raw_tx = hex::decode(raw_tx_hex).unwrap();
let tx: bitcoin::Transaction = bitcoin::consensus::deserialize(&raw_tx).unwrap();
client.send_transaction(&tx).await?;
```
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
#### pub async fn xpub_info_basic(
&self,
xpub: &str,
include_token_list: bool,
address_filter: Option<&AddressFilter>,
also_in: Option<&Currency>
) -> Result<XPubInfoBasic, Error>
Retrieves information about the funds held by addresses from public keys derivable from an `extended public key`.
See the above link for more information about how the Blockbook server will try to derive public keys and addresses from the extended public key.
Briefly, the extended key is expected to be derived at `m/purpose'/coin_type'/account'`,
and Blockbook will derive `change` and `index` levels below that, subject to a gap limit of unused indices.
In addition to the aggregated amounts, per-address indicators can also be retrieved (Blockbook calls them `tokens`) by setting
`include_token_list`. The `AddressFilter` enum then allows selecting the addresses holding a balance, addresses having been used, or all addresses.
If an `also_in` `Currency` is specified, the total balance will also be returned in terms of that currency.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
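A sketch, assuming an existing `client`; the extended public key shown is a placeholder, and printing the result assumes `XPubInfoBasic` derives `Debug`:
```
// Hypothetical account-level extended public key (derived at m/purpose'/coin_type'/account').
let xpub = "xpub6CUGRUo...";
let info = client
    .xpub_info_basic(
        xpub,
        false, // no per-address token list
        None,  // no AddressFilter
        None,  // no secondary currency
    )
    .await?;
println!("{info:?}");
```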
#### pub async fn xpub_info(
&self,
xpub: &str,
page: Option<&NonZeroU32>,
pagesize: Option<&NonZeroU16>,
from: Option<&Height>,
to: Option<&Height>,
entire_txs: bool,
address_filter: Option<&AddressFilter>,
also_in: Option<&Currency>
) -> Result<XPubInfo, Error>
Retrieves information about the funds held by addresses from public keys derivable from an `extended public key`,
as well as a paginated list of `Txid`s or `Transaction`s that affect addresses derivable from the extended public key.
`Txid`s or `Transaction`s are included in the returned `XPubInfo` based on whether `entire_txs` is set to `false` or `true` respectively.
For the other arguments, see the documentation of `xpub_info_basic`.
##### Errors
If the underlying network request fails, if the server returns a non-success response, or if the response body is of unexpected format.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Blockbook
### impl Send for Blockbook
### impl Sync for Blockbook
### impl Unpin for Blockbook
### impl !UnwindSafe for Blockbook
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Struct blockbook::websocket::Blockbook
===
```
pub struct Blockbook { /* private fields */ }
```
A WebSocket client for querying and subscribing to information from a Blockbook server.
See the `module documentation` for an example of how to use it.
Implementations
---
### impl Blockbook
#### pub async fn new(url: Url) -> Result<Self, Error>
Constructs a new client for a given server `url`.
`url` must contain the `/websocket` path fragment.
##### Errors
If the WebSocket connection could not be established.
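A sketch of establishing a connection, assuming the `Url` type comes from the `url` crate; the server shown is merely an example of a public Blockbook instance:
```
// Note the required `/websocket` path fragment.
let url = url::Url::parse("wss://btc1.trezor.io/websocket").unwrap();
let mut client = blockbook::websocket::Blockbook::new(url).await?;
```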
#### pub async fn info(&mut self) -> Result<Info, Error>
Retrieves information about the full node backing the Blockbook server and the chain state.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn block_hash(&mut self, height: Height) -> Result<BlockHash, Error>
Retrieves the hash of the block at the given `height`.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn current_fiat_rates(
&mut self,
currencies: Vec<Currency>
) -> Result<Ticker, Error>
Retrieves the current exchange rates for a list of given currencies.
If no `currencies` are specified, all available exchange rates will be returned.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
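A sketch, assuming a connected websocket `client` and that `Ticker` derives `Debug`:
```
// An empty list requests all available exchange rates.
let ticker = client.current_fiat_rates(vec![]).await?;
println!("{ticker:?}");
```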
#### pub async fn available_currencies(
&mut self,
time: Time
) -> Result<TickersList, Error>
Uses the provided timestamp and returns the closest available timestamp and a list of available currencies at that timestamp.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn fiat_rates_for_timestamps(
&mut self,
timestamps: Vec<Time>,
currencies: Option<Vec<Currency>>
) -> Result<Vec<Ticker>, Error>
Retrieves exchange rates at a number of provided `timestamps`.
If no `currencies` are specified, all available exchange rates will be returned at each timestamp.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn address_info_basic(
&mut self,
address: Address,
also_in: Option<Currency>
) -> Result<AddressInfoBasic, Error>
Retrieves basic aggregated information about a provided `address`.
If an `also_in` `Currency` is specified, the total balance will also be returned in terms of that currency.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn address_info_txids(
&mut self,
address: Address,
page: Option<NonZeroU32>,
pagesize: Option<NonZeroU16>,
from: Option<Height>,
to: Option<Height>,
also_in: Option<Currency>
) -> Result<AddressInfo, Error>
Retrieves basic aggregated information as well as a paginated list of `Txid`s for a given `address`.
If an `also_in` `Currency` is specified, the total balance will also be returned in terms of that currency.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn address_info_txs(
&mut self,
address: Address,
page: Option<NonZeroU32>,
pagesize: Option<NonZeroU16>,
from: Option<Height>,
to: Option<Height>,
also_in: Option<Currency>
) -> Result<AddressInfoDetailed, Error>
Retrieves basic aggregated information as well as a paginated list of `Tx` objects for a given `address`.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn transaction(&mut self, txid: Txid) -> Result<Transaction, Error>
Retrieves information about a transaction with the given `txid`.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn transaction_specific(
&mut self,
txid: Txid
) -> Result<TransactionSpecific, Error>
Retrieves information about a transaction with a given `txid`
as reported by the Bitcoin Core backend.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn estimate_fee(
&mut self,
blocks: Vec<u16>
) -> Result<Vec<Amount>, Error>
Returns the estimated fee for a set of target blocks to wait. The returned unit is satoshis per vByte.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn estimate_tx_fee(
&mut self,
blocks: Vec<u16>,
tx_size: u32
) -> Result<Vec<Amount>, Error>
Returns the estimated total fee for a transaction of the given size in bytes for a set of target blocks to wait.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
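A combined sketch of both fee estimation methods, assuming a connected websocket `client`; the 250-byte transaction size is an arbitrary illustration:
```
// Per-vByte estimates for confirmation within 1, 3 and 6 blocks.
let per_vbyte = client.estimate_fee(vec![1, 3, 6]).await?;
// Total-fee estimates for a 250-byte transaction over the same targets.
let totals = client.estimate_tx_fee(vec![1, 3, 6], 250).await?;
for ((target, rate), total) in [1u16, 3, 6].iter().zip(&per_vbyte).zip(&totals) {
    println!("{target} blocks: {} sat/vB, {} sat total", rate.to_sat(), total.to_sat());
}
```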
#### pub async fn send_transaction(
&mut self,
transaction: &BitcoinTransaction
) -> Result<Txid, Error>
Broadcasts a transaction to the network, returning its `Txid`.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn utxos_from_address(
&mut self,
address: Address
) -> Result<Vec<Utxo>, Error>
Retrieves all unspent transaction outputs controlled by an address.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn balance_history(
&mut self,
address: Address,
from: Option<Time>,
to: Option<Time>,
currencies: Option<Vec<Currency>>,
group_by: Option<u32>
) -> Result<Vec<BalanceHistory>, Error>
The `group_by` parameter sets the interval length (in seconds)
over which transactions are consolidated into `BalanceHistory`
entries. Defaults to 3600s.
##### Errors
If the WebSocket connection was closed or emitted an error, or if the response body is of unexpected format.
#### pub async fn subscribe_blocks(
&mut self
) -> impl Stream<Item = Result<Block, Error>>
Subscribe to new blocks being added to the chain.
##### Errors
If the WebSocket connection was closed or emitted an error, if the subscription could not be established, or if the response body is of unexpected format.
#### pub async fn subscribe_fiat_rates(
&mut self,
currency: Option<Currency>
) -> impl Stream<Item = Result<HashMap<Currency, f64>, Error>>
Subscribes to updates on exchange rates.
Blockbook will emit fresh exchange rates whenever a new block is found.
If `None` is passed, all available fiat rates will be returned on each update.
##### Errors
If the WebSocket connection was closed or emitted an error, if the subscription could not be established, or if the response body is of unexpected format.
#### pub async fn subscribe_addresses(
&mut self,
addresses: Vec<Address>
) -> impl Stream<Item = Result<(Address, Transaction), Error>>
Subscribe to transactions that involve at least one of a set of addresses.
The method returns tuples of the address that was involved and the transaction itself.
##### Errors
If the WebSocket connection was closed or emitted an error, if the subscription could not be established, or if the response body is of unexpected format.
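A sketch of consuming the subscription stream, assuming a connected websocket `client` and a parsed `address`:
```
use futures::StreamExt;
let mut txs = client.subscribe_addresses(vec![address]).await;
while let Some(Ok((involved_address, _tx))) = txs.next().await {
    println!("new transaction involving {involved_address}");
}
```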
Trait Implementations
---
### impl Drop for Blockbook
#### fn drop(&mut self)
Executes the destructor for this type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Blockbook
### impl Send for Blockbook
### impl Sync for Blockbook
### impl Unpin for Blockbook
### impl !UnwindSafe for Blockbook
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Module blockbook::websocket
===
The WebSocket client.
This module contains the WebSocket `Client` for interacting with a Blockbook server via a WebSocket connection. The client provides numerous query-response methods, as well as a few subscription methods.
An example of how to use it to make single queries:
```
let mut client = blockbook::websocket::Blockbook::new(url).await?;
// query the Genesis block hash
let genesis_hash = client
.block_hash(blockbook::Height::from_consensus(0).unwrap())
.await?;
assert_eq!(
genesis_hash.to_string(),
"000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"
);
// query the first ever non-coinbase Bitcoin transaction from Satoshi to <NAME>
let txid = "f4184fc596403b9d638783cf57adfe4c75c605f6356fbc91338530e9831e9e16".parse().unwrap();
let tx = client.transaction(txid).await?;
assert!((tx.vout.get(0).unwrap().value.to_btc() - 10.0).abs() < f64::EPSILON);
```
An example of how to use it for subscriptions:
```
use futures::StreamExt;
let mut client = blockbook::websocket::Blockbook::new(url).await?;
let mut blocks = client.subscribe_blocks().await;
while let Some(Ok(block)) = blocks.next().await {
println!("received block {}", block.height);
}
```
Structs
---
* BlockInformation about a block.
* BlockbookA WebSocket client for querying and subscribing to information from a Blockbook server.
* InfoInformation about the full node backing the Blockbook server and the chain state.
Enums
---
* ErrorThe errors emitted by the WebSocket client.
Struct blockbook::AddressInfo
===
```
pub struct AddressInfo {
pub paging: AddressInfoPaging,
pub basic: AddressInfoBasic,
pub txids: Vec<Txid>,
}
```
Address information that includes a list of involved transaction IDs.
Fields
---
`paging: AddressInfoPaging`
`basic: AddressInfoBasic`
`txids: Vec<Txid>`
Trait Implementations
---
### impl Clone for AddressInfo
#### fn clone(&self) -> AddressInfo
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &AddressInfo) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for AddressInfo
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for AddressInfo
### impl Send for AddressInfo
### impl Sync for AddressInfo
### impl Unpin for AddressInfo
### impl UnwindSafe for AddressInfo
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct blockbook::AddressInfoBasic
===
```
pub struct AddressInfoBasic {
pub address: Address,
pub balance: Amount,
pub total_received: Amount,
pub total_sent: Amount,
pub unconfirmed_balance: Amount,
pub unconfirmed_txs: u32,
pub txs: u32,
pub secondary_value: Option<f64>,
}
```
Information about the funds moved from or to an address.
Fields
---
`address: Address`
`balance: Amount`
`total_received: Amount`
`total_sent: Amount`
`unconfirmed_balance: Amount`
`unconfirmed_txs: u32`
`txs: u32`
`secondary_value: Option<f64>`
Trait Implementations
---
### impl Clone for AddressInfoBasic
#### fn clone(&self) -> AddressInfoBasic
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &AddressInfoBasic) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for AddressInfoBasic
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for AddressInfoBasic
### impl Send for AddressInfoBasic
### impl Sync for AddressInfoBasic
### impl Unpin for AddressInfoBasic
### impl UnwindSafe for AddressInfoBasic
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct blockbook::AddressInfoDetailed
===
```
pub struct AddressInfoDetailed {
pub paging: AddressInfoPaging,
pub basic: AddressInfoBasic,
pub transactions: Vec<Tx>,
}
```
Address information that includes a list of involved transactions.
Fields
---
`paging: AddressInfoPaging`
`basic: AddressInfoBasic`
`transactions: Vec<Tx>`
Trait Implementations
---
### impl Clone for AddressInfoDetailed
#### fn clone(&self) -> AddressInfoDetailed
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &AddressInfoDetailed) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for AddressInfoDetailed
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for AddressInfoDetailed
### impl Send for AddressInfoDetailed
### impl Sync for AddressInfoDetailed
### impl Unpin for AddressInfoDetailed
### impl UnwindSafe for AddressInfoDetailed
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct blockbook::AddressInfoPaging
===
```
pub struct AddressInfoPaging {
pub page: u32,
pub total_pages: Option<u32>,
pub items_on_page: u32,
}
```
Paging information.
Fields
---
`page: u32`
`total_pages: Option<u32>`
The `total_pages` is unknown and hence set to `None` when a block height filter is set and the number of transactions is higher than the `pagesize` (default: 1000).
`items_on_page: u32`
Trait Implementations
---
### impl Clone for AddressInfoPaging
#### fn clone(&self) -> AddressInfoPaging
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &AddressInfoPaging) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for AddressInfoPaging
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for AddressInfoPaging
### impl StructuralPartialEq for AddressInfoPaging
Auto Trait Implementations
---
### impl RefUnwindSafe for AddressInfoPaging
### impl Send for AddressInfoPaging
### impl Sync for AddressInfoPaging
### impl Unpin for AddressInfoPaging
### impl UnwindSafe for AddressInfoPaging
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct blockbook::Backend
===
```
pub struct Backend {
pub chain: Chain,
pub blocks: Height,
pub headers: u32,
pub best_block_hash: BlockHash,
pub difficulty: String,
pub size_on_disk: u64,
pub version: Version,
pub protocol_version: String,
}
```
Information about the full node backing the Blockbook server.
Fields
---
`chain: Chain`
`blocks: Height`
`headers: u32`
`best_block_hash: BlockHash`
`difficulty: String`
`size_on_disk: u64`
`version: Version`
`protocol_version: String`
Trait Implementations
---
### impl Clone for Backend
#### fn clone(&self) -> Backend
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Backend) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for Backend
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for Backend
### impl StructuralPartialEq for Backend
Auto Trait Implementations
---
### impl RefUnwindSafe for Backend
### impl Send for Backend
### impl Sync for Backend
### impl Unpin for Backend
### impl UnwindSafe for Backend
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct blockbook::BalanceHistory
===
```
pub struct BalanceHistory {
pub time: Time,
pub txs: u32,
pub received: Amount,
pub sent: Amount,
pub sent_to_self: Amount,
pub rates: HashMap<Currency, f64>,
}
```
A balance history entry.
Fields
---
`time: Time`
`txs: u32`
`received: Amount`
`sent: Amount`
`sent_to_self: Amount`
`rates: HashMap<Currency, f64>`
Trait Implementations
---
### impl Debug for BalanceHistory
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &BalanceHistory) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl StructuralPartialEq for BalanceHistory
Auto Trait Implementations
---
### impl RefUnwindSafe for BalanceHistory
### impl Send for BalanceHistory
### impl Sync for BalanceHistory
### impl Unpin for BalanceHistory
### impl UnwindSafe for BalanceHistory
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct blockbook::Block
===
```
pub struct Block {
pub page: u32,
pub total_pages: u32,
pub items_on_page: u32,
pub hash: BlockHash,
pub previous_block_hash: Option<BlockHash>,
pub next_block_hash: Option<BlockHash>,
pub height: Height,
pub confirmations: u32,
pub size: u32,
pub time: Time,
pub version: u32,
pub merkle_root: TxMerkleNode,
pub nonce: String,
pub bits: String,
pub difficulty: String,
pub tx_count: u32,
pub txs: Vec<BlockTransaction>,
}
```
Information about a block.
Fields
---
`page: u32`
`total_pages: u32`
`items_on_page: u32`
`hash: BlockHash`
`previous_block_hash: Option<BlockHash>`
`next_block_hash: Option<BlockHash>`
`height: Height`
`confirmations: u32`
`size: u32`
`time: Time`
`version: u32`
`merkle_root: TxMerkleNode`
`nonce: String`
`bits: String`
`difficulty: String`
`tx_count: u32`
`txs: Vec<BlockTransaction>`
Trait Implementations
---
### impl Clone for Block
#### fn clone(&self) -> Block
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Block) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for Block
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for Block
### impl StructuralPartialEq for Block
Auto Trait Implementations
---
### impl RefUnwindSafe for Block
### impl Send for Block
### impl Sync for Block
### impl Unpin for Block
### impl UnwindSafe for Block
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct blockbook::BlockTransaction
===
```
pub struct BlockTransaction {
pub txid: Txid,
pub vin: Vec<BlockVin>,
pub vout: Vec<BlockVout>,
pub block_hash: BlockHash,
pub block_height: Height,
pub confirmations: u32,
pub block_time: Time,
pub value: Amount,
pub value_in: Amount,
pub fees: Amount,
}
```
Information about a transaction.
Fields
---
`txid: Txid`
`vin: Vec<BlockVin>`
`vout: Vec<BlockVout>`
`block_hash: BlockHash`
`block_height: Height`
`confirmations: u32`
`block_time: Time`
`value: Amount`
`value_in: Amount`
`fees: Amount`
Trait Implementations
---
### impl Clone for BlockTransaction
#### fn clone(&self) -> BlockTransaction
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &BlockTransaction) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for BlockTransaction
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for BlockTransaction
### impl StructuralPartialEq for BlockTransaction
Auto Trait Implementations
---
### impl RefUnwindSafe for BlockTransaction
### impl Send for BlockTransaction
### impl Sync for BlockTransaction
### impl Unpin for BlockTransaction
### impl UnwindSafe for BlockTransaction
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct blockbook::BlockVin
===
```
pub struct BlockVin {
pub n: u16,
pub addresses: Option<Vec<Address>>,
pub is_address: bool,
pub value: Amount,
}
```
Information about a transaction input.
Fields
---
`n: u16`
`addresses: Option<Vec<Address>>`
Can be `None` or multiple addresses for a non-standard script, where the latter indicates a multisig input
`is_address: bool`
Indicates a standard script
`value: Amount`
Trait Implementations
---
### impl Clone for BlockVin
#### fn clone(&self) -> BlockVin
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &BlockVin) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for BlockVin
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for BlockVin
### impl StructuralPartialEq for BlockVin
Auto Trait Implementations
---
### impl RefUnwindSafe for BlockVin
### impl Send for BlockVin
### impl Sync for BlockVin
### impl Unpin for BlockVin
### impl UnwindSafe for BlockVin
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct blockbook::BlockVout
===
```
pub struct BlockVout {
pub value: Amount,
pub n: u16,
pub spent: Option<bool>,
pub addresses: Vec<AddressBlockVout>,
pub is_address: bool,
}
```
Information about a transaction output.
Fields
---
`value: Amount`
`n: u16`
`spent: Option<bool>`
`addresses: Vec<AddressBlockVout>`
`is_address: bool`
Indicates that the `addresses` vector contains the `Address` variant of `AddressBlockVout`
Trait Implementations
---
### impl Clone for BlockVout
#### fn clone(&self) -> BlockVout
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &BlockVout) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for BlockVout
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for BlockVout
### impl StructuralPartialEq for BlockVout
Auto Trait Implementations
---
### impl RefUnwindSafe for BlockVout
### impl Send for BlockVout
### impl Sync for BlockVout
### impl Unpin for BlockVout
### impl UnwindSafe for BlockVout
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct blockbook::OpReturn
===
```
pub struct OpReturn(pub String);
```
An `OP_RETURN` output.
Tuple Fields
---
`0: String`
Trait Implementations
---
### impl Clone for OpReturn
#### fn clone(&self) -> OpReturn
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &OpReturn) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for OpReturn
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for OpReturn
### impl StructuralPartialEq for OpReturn
Auto Trait Implementations
---
### impl RefUnwindSafe for OpReturn
### impl Send for OpReturn
### impl Sync for OpReturn
### impl Unpin for OpReturn
### impl UnwindSafe for OpReturn
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Struct blockbook::ScriptPubKey
===
```
pub struct ScriptPubKey {
pub address: Address,
pub asm: String,
pub desc: Option<String>,
pub script: ScriptBuf,
pub type: ScriptPubKeyType,
}
```
A script specifying spending conditions.
Fields
---
`address: Address`
`asm: String`
`desc: Option<String>`
`script: ScriptBuf`
`type: ScriptPubKeyType`
Trait Implementations
---
### impl Clone for ScriptPubKey
#### fn clone(&self) -> ScriptPubKey
Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &ScriptPubKey) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Serialize for ScriptPubKey
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for ScriptPubKey
### impl StructuralPartialEq for ScriptPubKey
Auto Trait Implementations
---
### impl RefUnwindSafe for ScriptPubKey
### impl Send for ScriptPubKey
### impl Sync for ScriptPubKey
### impl Unpin for ScriptPubKey
### impl UnwindSafe for ScriptPubKey
Blanket Implementations
---
### impl<T> Any for T where
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<Q, K> Equivalent<K> for Q where
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`
### impl<T> Serialize for T where
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error>
### impl<T> ToOwned for T where
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<V, T> VZip<V> for T where
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
### impl<T> DeserializeOwned for T where
T: for<'de> Deserialize<'de>,
Struct blockbook::ScriptSig
===
```
pub struct ScriptSig {
pub asm: String,
pub script: ScriptBuf,
}
```
A script fulfilling spending conditions.
Fields
---
`asm: String`
`script: ScriptBuf`
Trait Implementations
---
### impl Clone for ScriptSig
#### fn clone(&self) -> ScriptSig
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn default() -> ScriptSig
Returns the “default value” for a type.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &ScriptSig) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for ScriptSig
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for ScriptSig
### impl StructuralPartialEq for ScriptSig
Auto Trait Implementations
---
### impl RefUnwindSafe for ScriptSig
### impl Send for ScriptSig
### impl Sync for ScriptSig
### impl Unpin for ScriptSig
### impl UnwindSafe for ScriptSig
Struct blockbook::Status
===
```
pub struct Status {
pub blockbook: StatusBlockbook,
pub backend: Backend,
}
```
Status and backend information of the Blockbook server.
Fields
---
`blockbook: StatusBlockbook`
`backend: Backend`
Trait Implementations
---
### impl Clone for Status
#### fn clone(&self) -> Status
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Status) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for Status
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for Status
### impl StructuralPartialEq for Status
Auto Trait Implementations
---
### impl RefUnwindSafe for Status
### impl Send for Status
### impl Sync for Status
### impl Unpin for Status
### impl UnwindSafe for Status
Struct blockbook::StatusBlockbook
===
```
pub struct StatusBlockbook {
pub coin: Asset,
pub host: String,
pub version: Version,
pub git_commit: String,
pub build_time: DateTime<Utc>,
pub sync_mode: bool,
pub is_initial_sync: bool,
pub is_in_sync: bool,
pub best_height: Height,
pub last_block_time: DateTime<Utc>,
pub is_in_sync_mempool: bool,
pub last_mempool_time: DateTime<Utc>,
pub mempool_size: u32,
pub decimals: u8,
pub db_size: u64,
pub about: String,
pub has_fiat_rates: bool,
pub current_fiat_rates_time: DateTime<Utc>,
pub historical_fiat_rates_time: DateTime<Utc>,
}
```
Status information of the Blockbook server.
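For example, a caller might only start querying once the server reports itself as synced; the sketch below is not crate API and assumes only that the boolean flags mean what their names suggest:
```
use blockbook::StatusBlockbook;

// Sketch: treat the server as ready once the chain index and the mempool
// are both reported as in sync and the initial sync has finished.
fn is_ready(status: &StatusBlockbook) -> bool {
    status.is_in_sync && !status.is_initial_sync && status.is_in_sync_mempool
}
```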
Fields
---
`coin: Asset`
`host: String`
`version: Version`
`git_commit: String`
`build_time: DateTime<Utc>`
`sync_mode: bool`
`is_initial_sync: bool`
`is_in_sync: bool`
`best_height: Height`
`last_block_time: DateTime<Utc>`
`is_in_sync_mempool: bool`
`last_mempool_time: DateTime<Utc>`
`mempool_size: u32`
`decimals: u8`
`db_size: u64`
`about: String`
`has_fiat_rates: bool`
`current_fiat_rates_time: DateTime<Utc>`
`historical_fiat_rates_time: DateTime<Utc>`
Trait Implementations
---
### impl Clone for StatusBlockbook
#### fn clone(&self) -> StatusBlockbook
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &StatusBlockbook) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for StatusBlockbook
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for StatusBlockbook
### impl StructuralPartialEq for StatusBlockbook
Auto Trait Implementations
---
### impl RefUnwindSafe for StatusBlockbook
### impl Send for StatusBlockbook
### impl Sync for StatusBlockbook
### impl Unpin for StatusBlockbook
### impl UnwindSafe for StatusBlockbook
Struct blockbook::Ticker
===
```
pub struct Ticker {
pub timestamp: Time,
pub rates: HashMap<Currency, f64>,
}
```
A timestamp and a set of exchange rates for multiple currencies.
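A typical lookup reads a single rate out of the map; this is a sketch rather than crate API, and it assumes `Currency` is importable from the crate root as the field type suggests:
```
use blockbook::{Currency, Ticker};

// Sketch: fetch the exchange rate for one currency, if the ticker carries it.
fn rate_for(ticker: &Ticker, currency: &Currency) -> Option<f64> {
    ticker.rates.get(currency).copied()
}
```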
Fields
---
`timestamp: Time`
`rates: HashMap<Currency, f64>`
Trait Implementations
---
### impl Clone for Ticker
#### fn clone(&self) -> Ticker
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Ticker) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for Ticker
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for Ticker
### impl Send for Ticker
### impl Sync for Ticker
### impl Unpin for Ticker
### impl UnwindSafe for Ticker
Struct blockbook::TickersList
===
```
pub struct TickersList {
pub timestamp: Time,
pub available_currencies: Vec<Currency>,
}
```
Information about the available exchange rates at a given timestamp.
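One plausible use, sketched here under the same assumption about the `Currency` import path, is checking whether a rate is offered before requesting a ticker:
```
use blockbook::{Currency, TickersList};

// Sketch: check whether a given currency appears among the available ones.
fn has_currency(list: &TickersList, currency: &Currency) -> bool {
    list.available_currencies.contains(currency)
}
```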
Fields
---
`timestamp: Time`
`available_currencies: Vec<Currency>`
Trait Implementations
---
### impl Clone for TickersList
#### fn clone(&self) -> TickersList
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &TickersList) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for TickersList
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for TickersList
### impl StructuralPartialEq for TickersList
Auto Trait Implementations
---
### impl RefUnwindSafe for TickersList
### impl Send for TickersList
### impl Sync for TickersList
### impl Unpin for TickersList
### impl UnwindSafe for TickersList
Struct blockbook::Token
===
```
pub struct Token {
pub type: String,
pub address: Address,
pub path: DerivationPath,
pub transfers: u32,
pub decimals: u8,
pub balance: Amount,
pub total_received: Amount,
pub total_sent: Amount,
}
```
Information about funds at a Bitcoin address derived from an `extended public key`.
Fields
---
`type: String`
`address: Address`
`path: DerivationPath`
`transfers: u32`
`decimals: u8`
`balance: Amount`
`total_received: Amount`
`total_sent: Amount`
Trait Implementations
---
### impl Clone for Token
#### fn clone(&self) -> Token
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Token) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for Token
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for Token
### impl Send for Token
### impl Sync for Token
### impl Unpin for Token
### impl UnwindSafe for Token
Struct blockbook::Transaction
===
```
pub struct Transaction {
pub txid: Txid,
pub version: u8,
pub lock_time: Option<Height>,
pub vin: Vec<Vin>,
pub vout: Vec<Vout>,
pub size: u32,
pub vsize: u32,
pub block_hash: Option<BlockHash>,
pub block_height: Option<Height>,
pub confirmations: u32,
pub block_time: Time,
pub value: Amount,
pub value_in: Amount,
pub fees: Amount,
pub script: ScriptBuf,
}
```
Information about a transaction.
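Since `block_hash` and `block_height` are `None` for unconfirmed transactions (see the field notes below), a confirmation check reduces to an `Option` test; this is a sketch, not crate API:
```
use blockbook::Transaction;

// Sketch: a transaction counts as confirmed once it has been mined into a block.
fn is_confirmed(tx: &Transaction) -> bool {
    tx.block_height.is_some()
}
```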
Fields
---
`txid: Txid`
`version: u8`
`lock_time: Option<Height>`
`vin: Vec<Vin>`
`vout: Vec<Vout>`
`size: u32`
`vsize: u32`
`block_hash: Option<BlockHash>`: `None` for unconfirmed transactions
`block_height: Option<Height>`: `None` for unconfirmed transactions
`confirmations: u32`
`block_time: Time`
`value: Amount`
`value_in: Amount`
`fees: Amount`
`script: ScriptBuf`
Trait Implementations
---
### impl Clone for Transaction
#### fn clone(&self) -> Transaction
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Transaction) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for Transaction
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for Transaction
### impl StructuralPartialEq for Transaction
Auto Trait Implementations
---
### impl RefUnwindSafe for Transaction
### impl Send for Transaction
### impl Sync for Transaction
### impl Unpin for Transaction
### impl UnwindSafe for Transaction
Struct blockbook::TransactionSpecific
===
```
pub struct TransactionSpecific {
pub txid: Txid,
pub version: u8,
pub vin: Vec<VinSpecific>,
pub vout: Vec<VoutSpecific>,
pub blockhash: BlockHash,
pub blocktime: Time,
pub wtxid: Wtxid,
pub confirmations: u32,
pub locktime: LockTime,
pub script: ScriptBuf,
pub size: u32,
pub time: Time,
pub vsize: u32,
pub weight: u32,
}
```
Detailed information about a transaction.
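One detail this verbose form exposes, unlike `Transaction`, is per-input witness data; the sketch below (not crate API) uses only the `vin` field and `VinSpecific::tx_in_witness`:
```
use blockbook::TransactionSpecific;

// Sketch: true if at least one input of this transaction carries witness data.
fn has_witness(tx: &TransactionSpecific) -> bool {
    tx.vin.iter().any(|input| input.tx_in_witness.is_some())
}
```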
Fields
---
`txid: Txid`
`version: u8`
`vin: Vec<VinSpecific>`
`vout: Vec<VoutSpecific>`
`blockhash: BlockHash`
`blocktime: Time`
`wtxid: Wtxid`
`confirmations: u32`
`locktime: LockTime`
`script: ScriptBuf`
`size: u32`
`time: Time`
`vsize: u32`
`weight: u32`
Trait Implementations
---
### impl Clone for TransactionSpecific
#### fn clone(&self) -> TransactionSpecific
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &TransactionSpecific) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for TransactionSpecific
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for TransactionSpecific
### impl StructuralPartialEq for TransactionSpecific
Auto Trait Implementations
---
### impl RefUnwindSafe for TransactionSpecific
### impl Send for TransactionSpecific
### impl Sync for TransactionSpecific
### impl Unpin for TransactionSpecific
### impl UnwindSafe for TransactionSpecific
Struct blockbook::Utxo
===
```
pub struct Utxo {
pub txid: Txid,
pub vout: u32,
pub value: Amount,
pub height: Option<Height>,
pub confirmations: u32,
pub locktime: Option<Time>,
pub coinbase: Option<bool>,
pub address: Option<Address>,
pub path: Option<DerivationPath>,
}
```
Information about an unspent transaction output.
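A common pattern is to keep only outputs with a minimum number of confirmations; the following sketch (not crate API) relies solely on the `confirmations` field listed below:
```
use blockbook::Utxo;

// Sketch: retain only UTXOs that have reached `min_confirmations`.
fn confirmed(utxos: &[Utxo], min_confirmations: u32) -> Vec<&Utxo> {
    utxos
        .iter()
        .filter(|utxo| utxo.confirmations >= min_confirmations)
        .collect()
}
```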
Fields
---
`txid: Txid`
`vout: u32`
`value: Amount`
`height: Option<Height>`
`confirmations: u32`
`locktime: Option<Time>`
`coinbase: Option<bool>`
`address: Option<Address>`
`path: Option<DerivationPath>`
Trait Implementations
---
### impl Clone for Utxo
#### fn clone(&self) -> Utxo
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Utxo) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for Utxo
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for Utxo
### impl Send for Utxo
### impl Sync for Utxo
### impl Unpin for Utxo
### impl UnwindSafe for Utxo
Struct blockbook::Version
===
```
pub struct Version {
pub version: String,
pub subversion: String,
}
```
Version information about the full node.
Fields
---
`version: String`
`subversion: String`
Trait Implementations
---
### impl Clone for Version
#### fn clone(&self) -> Version
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Version) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for Version
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for Version
### impl StructuralPartialEq for Version
Auto Trait Implementations
---
### impl RefUnwindSafe for Version
### impl Send for Version
### impl Sync for Version
### impl Unpin for Version
### impl UnwindSafe for Version
Struct blockbook::Vin
===
```
pub struct Vin {
pub txid: Txid,
pub vout: Option<u16>,
pub sequence: Option<Sequence>,
pub n: u16,
pub addresses: Vec<Address>,
pub is_address: bool,
pub value: Amount,
}
```
Information about a transaction input.
Fields
---
`txid: Txid`
`vout: Option<u16>`
`sequence: Option<Sequence>`
`n: u16`
`addresses: Vec<Address>`
`is_address: bool`
`value: Amount`
Trait Implementations
---
### impl Clone for Vin
#### fn clone(&self) -> Vin
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Vin) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for Vin
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for Vin
### impl StructuralPartialEq for Vin
Auto Trait Implementations
---
### impl RefUnwindSafe for Vin
### impl Send for Vin
### impl Sync for Vin
### impl Unpin for Vin
### impl UnwindSafe for Vin
Struct blockbook::VinSpecific
===
```
pub struct VinSpecific {
pub sequence: Sequence,
pub txid: Txid,
pub tx_in_witness: Option<Witness>,
pub script_sig: ScriptSig,
pub vout: u32,
}
```
Bitcoin-specific information about a transaction input.
Fields
---
`sequence: Sequence`
`txid: Txid`
`tx_in_witness: Option<Witness>`
`script_sig: ScriptSig`
`vout: u32`
Trait Implementations
---
### impl Clone for VinSpecific
#### fn clone(&self) -> VinSpecific
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &VinSpecific) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for VinSpecific
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for VinSpecific
### impl StructuralPartialEq for VinSpecific
Auto Trait Implementations
---
### impl RefUnwindSafe for VinSpecific
### impl Send for VinSpecific
### impl Sync for VinSpecific
### impl Unpin for VinSpecific
### impl UnwindSafe for VinSpecific
Struct blockbook::Vout
===
```
pub struct Vout {
pub value: Amount,
pub n: u16,
pub spent: Option<bool>,
pub script: ScriptBuf,
pub addresses: Vec<Address>,
pub is_address: bool,
}
```
Information about a transaction output.
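Two small sketches (not crate API) that use only the fields listed below: looking an output up by its index, and counting how many outputs are flagged as paying to an address:
```
use blockbook::Vout;

// Sketch: look up an output by its index within the transaction.
fn output_by_index(outputs: &[Vout], n: u16) -> Option<&Vout> {
    outputs.iter().find(|output| output.n == n)
}

// Sketch: count the outputs whose `addresses` entries are actual addresses.
fn address_output_count(outputs: &[Vout]) -> usize {
    outputs.iter().filter(|output| output.is_address).count()
}
```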
Fields
---
`value: Amount`
`n: u16`
`spent: Option<bool>`
`script: ScriptBuf`
`addresses: Vec<Address>`
`is_address: bool`
Trait Implementations
---
### impl Clone for Vout
#### fn clone(&self) -> Vout
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Vout) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for Vout
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for Vout
### impl StructuralPartialEq for Vout
Auto Trait Implementations
---
### impl RefUnwindSafe for Vout
### impl Send for Vout
### impl Sync for Vout
### impl Unpin for Vout
### impl UnwindSafe for Vout
Struct blockbook::VoutSpecific
===
```
pub struct VoutSpecific {
pub n: u32,
pub script_pub_key: ScriptPubKey,
pub value: Amount,
}
```
Bitcoin-specific information about a transaction output.
Fields
---
`n: u32`
`script_pub_key: ScriptPubKey`
`value: Amount`
Trait Implementations
---
### impl Clone for VoutSpecific
#### fn clone(&self) -> VoutSpecific
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &VoutSpecific) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for VoutSpecific
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for VoutSpecific
### impl StructuralPartialEq for VoutSpecific
Auto Trait Implementations
---
### impl RefUnwindSafe for VoutSpecific
### impl Send for VoutSpecific
### impl Sync for VoutSpecific
### impl Unpin for VoutSpecific
### impl UnwindSafe for VoutSpecific
Struct blockbook::XPubInfo
===
```
pub struct XPubInfo {
pub paging: AddressInfoPaging,
pub basic: XPubInfoBasic,
pub txids: Option<Vec<Txid>>,
pub transactions: Option<Vec<Transaction>>,
}
```
Detailed information about funds held in addresses derivable from an `extended public key`.
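Whether `txids` and `transactions` are populated presumably depends on the level of detail requested; a caller can fall back gracefully, as in this sketch (not crate API):
```
use blockbook::{Transaction, XPubInfo};

// Sketch: use the full transactions when present, otherwise an empty slice.
fn transactions(info: &XPubInfo) -> &[Transaction] {
    info.transactions.as_deref().unwrap_or(&[])
}
```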
Fields
---
`paging: AddressInfoPaging`
`basic: XPubInfoBasic`
`txids: Option<Vec<Txid>>`
`transactions: Option<Vec<Transaction>>`
Trait Implementations
---
### impl Clone for XPubInfo
#### fn clone(&self) -> XPubInfo
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &XPubInfo) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for XPubInfo
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for XPubInfo
### impl Send for XPubInfo
### impl Sync for XPubInfo
### impl Unpin for XPubInfo
### impl UnwindSafe for XPubInfo
Struct blockbook::XPubInfoBasic
===
```
pub struct XPubInfoBasic {
pub address: String,
pub balance: Amount,
pub total_received: Amount,
pub total_sent: Amount,
pub unconfirmed_balance: Amount,
pub unconfirmed_txs: u32,
pub txs: u32,
pub used_tokens: u32,
pub secondary_value: Option<f64>,
pub tokens: Option<Vec<Token>>,
}
```
Aggregated information about funds held in addresses derivable from an `extended public key`.
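As a trivial, hedged example of reading these counters (not crate API):
```
use blockbook::XPubInfoBasic;

// Sketch: true if the xpub has transactions that are not yet confirmed.
fn has_pending_activity(info: &XPubInfoBasic) -> bool {
    info.unconfirmed_txs > 0
}
```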
Fields
---
`address: String`
`balance: Amount`
`total_received: Amount`
`total_sent: Amount`
`unconfirmed_balance: Amount`
`unconfirmed_txs: u32`
`txs: u32`
`used_tokens: u32`
`secondary_value: Option<f64>`
`tokens: Option<Vec<Token>>`
Trait Implementations
---
### impl Clone for XPubInfoBasic
#### fn clone(&self) -> XPubInfoBasic
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &XPubInfoBasic) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for XPubInfoBasic
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
Auto Trait Implementations
---
### impl RefUnwindSafe for XPubInfoBasic
### impl Send for XPubInfoBasic
### impl Sync for XPubInfoBasic
### impl Unpin for XPubInfoBasic
### impl UnwindSafe for XPubInfoBasic
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Enum blockbook::AddressBlockVout
===
```
pub enum AddressBlockVout {
Address(Address),
OpReturn(OpReturn),
}
```
Either an address or an `OP_RETURN output`.
Variants
---
### Address(Address)
### OpReturn(OpReturn)
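A minimal sketch of matching on the two variants; the `is_op_return` helper is illustrative and not part of the crate:
```
use blockbook::AddressBlockVout;

// Illustrative helper: distinguish OP_RETURN data outputs from address outputs.
fn is_op_return(vout: &AddressBlockVout) -> bool {
    matches!(vout, AddressBlockVout::OpReturn(_))
}
```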
Trait Implementations
---
### impl Clone for AddressBlockVout
#### fn clone(&self) -> AddressBlockVout
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &AddressBlockVout) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for AddressBlockVout
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for AddressBlockVout
### impl StructuralPartialEq for AddressBlockVout
Auto Trait Implementations
---
### impl RefUnwindSafe for AddressBlockVout
### impl Send for AddressBlockVout
### impl Sync for AddressBlockVout
### impl Unpin for AddressBlockVout
### impl UnwindSafe for AddressBlockVout
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Enum blockbook::AddressFilter
===
```
pub enum AddressFilter {
NonZero,
Used,
Derived,
}
```
Used to select which addresses to consider when deriving from
`extended public keys`.
Variants
---
### NonZero
### Used
### Derived
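A minimal sketch of mapping each variant to a request parameter; the helper and the query-string values (`nonzero`, `used`, `derived`) are assumptions for illustration, not definitions taken from this crate:
```
use blockbook::AddressFilter;

// Hypothetical helper: the string values are assumed, not defined by the crate.
fn filter_param(filter: &AddressFilter) -> &'static str {
    match filter {
        AddressFilter::NonZero => "nonzero",
        AddressFilter::Used => "used",
        AddressFilter::Derived => "derived",
    }
}
```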
Auto Trait Implementations
---
### impl RefUnwindSafe for AddressFilter
### impl Send for AddressFilter
### impl Sync for AddressFilter
### impl Unpin for AddressFilter
### impl UnwindSafe for AddressFilter
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Enum blockbook::Asset
===
```
#[non_exhaustive]pub enum Asset {
Bitcoin,
}
```
A cryptocurrency asset.
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in the future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
### Bitcoin
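A minimal sketch showing the wildcard arm that the non-exhaustive attribute requires; the `ticker` helper is illustrative only:
```
use blockbook::Asset;

// Because `Asset` is non-exhaustive, the wildcard arm is mandatory even though
// `Bitcoin` is currently the only variant.
fn ticker(asset: &Asset) -> &'static str {
    match asset {
        Asset::Bitcoin => "BTC",
        _ => "unknown",
    }
}
```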
Trait Implementations
---
### impl Clone for Asset
#### fn clone(&self) -> Asset
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Asset) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for Asset
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for Asset
### impl StructuralPartialEq for Asset
Auto Trait Implementations
---
### impl RefUnwindSafe for Asset
### impl Send for Asset
### impl Sync for Asset
### impl Unpin for Asset
### impl UnwindSafe for Asset
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Enum blockbook::Chain
===
```
#[non_exhaustive]pub enum Chain {
Main,
}
```
The specific chain (mainnet, testnet, …).
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in the future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
### Main
Trait Implementations
---
### impl Clone for Chain
#### fn clone(&self) -> Chain
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Chain) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for Chain
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for Chain
### impl StructuralPartialEq for Chain
Auto Trait Implementations
---
### impl RefUnwindSafe for Chain
### impl Send for Chain
### impl Sync for Chain
### impl Unpin for Chain
### impl UnwindSafe for Chain
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Enum blockbook::Currency
===
```
#[non_exhaustive]pub enum Currency {
Aed,
Ars,
Aud,
Bch,
Bdt,
Bhd,
Bits,
Bmd,
Bnb,
Brl,
Btc,
Cad,
Chf,
Clp,
Cny,
Czk,
Dkk,
Dot,
Eos,
Eth,
Eur,
Gbp,
Hkd,
Huf,
Idr,
Ils,
Inr,
Jpy,
Krw,
Kwd,
Link,
Lkr,
Ltc,
Mmk,
Mxn,
Myr,
Ngn,
Nok,
Nzd,
Php,
Pkr,
Pln,
Rub,
Sar,
Sats,
Sek,
Sgd,
Thb,
Try,
Twd,
Uah,
Usd,
Vef,
Vnd,
Xag,
Xau,
Xdr,
Xlm,
Xrp,
Yfi,
Zar,
}
```
The supported currencies.
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in the future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
### Aed
### Ars
### Aud
### Bch
### Bdt
### Bhd
### Bits
### Bmd
### Bnb
### Brl
### Btc
### Cad
### Chf
### Clp
### Cny
### Czk
### Dkk
### Dot
### Eos
### Eth
### Eur
### Gbp
### Hkd
### Huf
### Idr
### Ils
### Inr
### Jpy
### Krw
### Kwd
### Link
### Lkr
### Ltc
### Mmk
### Mxn
### Myr
### Ngn
### Nok
### Nzd
### Php
### Pkr
### Pln
### Rub
### Sar
### Sats
### Sek
### Sgd
### Thb
### Try
### Twd
### Uah
### Usd
### Vef
### Vnd
### Xag
### Xau
### Xdr
### Xlm
### Xrp
### Yfi
### Zar
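A minimal sketch of serializing a variant via the derived `Serialize` impl, assuming `serde_json` is available as a dependency; the exact output string depends on the crate's serde attributes and is therefore only printed, not asserted:
```
fn main() {
    // The printed representation (e.g. its casing) is determined by the
    // crate's serde attributes, so it is not asserted here.
    let json = serde_json::to_string(&blockbook::Currency::Eur).unwrap();
    println!("{json}");
}
```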
Trait Implementations
---
### impl Clone for Currency
#### fn clone(&self) -> Currency
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn hash<__H: Hasher>(&self, state: &mut __H)
Feeds this value into the given `Hasher`.
#### fn hash_slice<H>(data: &[Self], state: &mut H)where
H: Hasher,
Self: Sized,
Feeds a slice of this type into the given `Hasher`.
#### fn eq(&self, other: &Currency) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for Currency
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for Currency
### impl StructuralPartialEq for Currency
Auto Trait Implementations
---
### impl RefUnwindSafe for Currency
### impl Send for Currency
### impl Sync for Currency
### impl Unpin for Currency
### impl UnwindSafe for Currency
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Enum blockbook::Error
===
```
pub enum Error {
RequestError(Error),
UrlError(ParseError),
}
```
The errors emitted by the REST client.
Variants
---
### RequestError(Error)
An error during a network request.
### UrlError(ParseError)
An error while parsing a URL.
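A minimal sketch of matching on the two variants; the `report` helper is illustrative and relies only on the `Debug` impls of the wrapped error types:
```
use blockbook::Error;

// Illustrative helper: report which kind of failure occurred.
fn report(err: &Error) {
    match err {
        Error::RequestError(e) => eprintln!("network request failed: {e:?}"),
        Error::UrlError(e) => eprintln!("invalid URL: {e:?}"),
    }
}
```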
Trait Implementations
---
### impl Debug for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Display for Error
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Error for Error
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any.
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`) Provides type-based access to context intended for error reports.
### impl From<Error> for Error
#### fn from(e: Error) -> Self
Converts to this type from the input type.
### impl From<ParseError> for Error
#### fn from(e: ParseError) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for Error
### impl Send for Error
### impl Sync for Error
### impl Unpin for Error
### impl !UnwindSafe for Error
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<E> Provider for Ewhere
E: Error + ?Sized,
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`provide_any`)Data providers should implement this method to provide *all* values they are able to provide by using `demand`.
#### type Output = T
Should always be `Self`### impl<T> ToString for Twhere
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Enum blockbook::ScriptPubKeyType
===
```
#[non_exhaustive]pub enum ScriptPubKeyType {
NonStandard,
PubKey,
PubKeyHash,
WitnessV0PubKeyHash,
ScriptHash,
WitnessV0ScriptHash,
MultiSig,
NullData,
WitnessV1Taproot,
WitnessUnknown,
}
```
The type of spending condition.
Variants (Non-exhaustive)
---
Non-exhaustive enums could have additional variants added in the future. Therefore, when matching against variants of non-exhaustive enums, an extra wildcard arm must be added to account for any future variants.
### NonStandard
### PubKey
### PubKeyHash
### WitnessV0PubKeyHash
### ScriptHash
### WitnessV0ScriptHash
### MultiSig
### NullData
### WitnessV1Taproot
### WitnessUnknown
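A minimal sketch of classifying spending conditions; the `is_segwit` helper is illustrative only, and the implicit catch-all in `matches!` also covers any variants added in the future:
```
use blockbook::ScriptPubKeyType;

// Illustrative helper: true for SegWit spending conditions.
fn is_segwit(kind: &ScriptPubKeyType) -> bool {
    matches!(
        kind,
        ScriptPubKeyType::WitnessV0PubKeyHash
            | ScriptPubKeyType::WitnessV0ScriptHash
            | ScriptPubKeyType::WitnessV1Taproot
            | ScriptPubKeyType::WitnessUnknown
    )
}
```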
Trait Implementations
---
### impl Clone for ScriptPubKeyType
#### fn clone(&self) -> ScriptPubKeyType
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &ScriptPubKeyType) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for ScriptPubKeyType
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for ScriptPubKeyType
### impl StructuralPartialEq for ScriptPubKeyType
Auto Trait Implementations
---
### impl RefUnwindSafe for ScriptPubKeyType
### impl Send for ScriptPubKeyType
### impl Sync for ScriptPubKeyType
### impl Unpin for ScriptPubKeyType
### impl UnwindSafe for ScriptPubKeyType
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Enum blockbook::Tx
===
```
pub enum Tx {
Ordinary(Transaction),
Light(BlockTransaction),
}
```
The variants for the transactions contained in `AddressInfoDetailed::transactions`.
Variants
---
### Ordinary(Transaction)
### Light(BlockTransaction)
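A minimal sketch of distinguishing the two representations without touching fields of the wrapped types; the `describe` helper is illustrative only:
```
use blockbook::Tx;

// Illustrative helper: name the representation used by an address-info entry.
fn describe(tx: &Tx) -> &'static str {
    match tx {
        Tx::Ordinary(_) => "full transaction",
        Tx::Light(_) => "lightweight block transaction",
    }
}
```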
Trait Implementations
---
### impl Clone for Tx
#### fn clone(&self) -> Tx
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>where
__D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.
#### fn eq(&self, other: &Tx) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Serialize for Tx
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>where
__S: Serializer,
Serialize this value into the given Serde serializer.
### impl StructuralEq for Tx
### impl StructuralPartialEq for Tx
Auto Trait Implementations
---
### impl RefUnwindSafe for Tx
### impl Send for Tx
### impl Sync for Tx
### impl Unpin for Tx
### impl UnwindSafe for Tx
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
Q: Eq + ?Sized,
K: Borrow<Q> + ?Sized,
#### fn equivalent(&self, key: &K) -> bool
Compare self to `key` and return `true` if they are equal.### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T> Serialize for Twhere
T: Serialize + ?Sized,
#### fn erased_serialize(&self, serializer: &mut dyn Serializer) -> Result<Ok, Error### impl<T> ToOwned for Twhere
T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
Uses borrowed data to replace owned data, usually by cloning.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper.
T: for<'de> Deserialize<'de>,
Enum blockbook::TxDetail
===
```
pub enum TxDetail {
Light,
Full,
}
```
Used to select the level of detail for `address info` transactions.
Variants
---
### Light
### Full
Auto Trait Implementations
---
### impl RefUnwindSafe for TxDetail
### impl Send for TxDetail
### impl Sync for TxDetail
### impl Unpin for TxDetail
### impl UnwindSafe for TxDetail
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere
V: MultiLane<T>,
#### fn vzip(self) -> V
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more |
xfcc-parser | rust | Rust | Enum xfcc_parser::error::XfccError
===
```
pub enum XfccError<'a> {
TrailingSequence(&'a [u8]),
DuplicatePairKey(PairKey),
ParsingError(Err<Error<&'a [u8]>>),
}
```
XFCC header parsing error
Variants
---
### TrailingSequence(&'a [u8])
Used by `element_list` when there is unconsumed data at the end of an XFCC header
### DuplicatePairKey(PairKey)
Used by `element_list` when more than one value is given for a key that accepts only one value
### ParsingError(Err<Error<&'a [u8]>>)
Represents an underlying parsing error
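A minimal sketch of turning each variant into a message; the `describe` helper is illustrative and relies only on the `Debug` impls of the wrapped types:
```
use xfcc_parser::error::XfccError;

// Illustrative helper: produce a human-readable description of a parse failure.
fn describe(err: &XfccError<'_>) -> String {
    match err {
        XfccError::TrailingSequence(rest) => {
            format!("{} unparsed byte(s) at the end of the header", rest.len())
        }
        XfccError::DuplicatePairKey(key) => format!("duplicate value for key {key:?}"),
        XfccError::ParsingError(e) => format!("parsing failed: {e:?}"),
    }
}
```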
Trait Implementations
---
### impl<'a> Debug for XfccError<'a>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<'a> Display for XfccError<'a>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<'a> Error for XfccError<'a>
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`) Provides type-based access to context intended for error reports.
Converts to this type from the input type.
### impl<'a> PartialEq<XfccError<'a>> for XfccError<'a>
#### fn eq(&self, other: &XfccError<'a>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl<'a> StructuralPartialEq for XfccError<'a>
Auto Trait Implementations
---
### impl<'a> RefUnwindSafe for XfccError<'a>
### impl<'a> Send for XfccError<'a>
### impl<'a> Sync for XfccError<'a>
### impl<'a> Unpin for XfccError<'a>
### impl<'a> UnwindSafe for XfccError<'a>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<E> Provider for Ewhere
E: Error + ?Sized,
#### fn provide<'a>(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`provide_any`)Data providers should implement this method to provide *all* values they are able to provide by using `demand`.
T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`.
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct xfcc_parser::Element
===
```
pub struct Element<'a> {
pub by: Vec<Cow<'a, str>>,
pub hash: Option<Cow<'a, str>>,
pub cert: Option<Cow<'a, str>>,
pub chain: Option<Cow<'a, str>>,
pub subject: Option<Cow<'a, str>>,
pub uri: Vec<Cow<'a, str>>,
pub dns: Vec<Cow<'a, str>>,
}
```
An XFCC element
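A minimal sketch using the `TryFrom` implementation documented below to build an `Element` from raw key-value pairs; the assertion about the absent `hash` field reflects the expected behavior for a key that was never supplied:
```
use std::borrow::Cow;
use std::convert::TryFrom;
use xfcc_parser::{Element, PairKey};

let raw = vec![
    (PairKey::By, Cow::from("http://frontend.lyft.com")),
    (PairKey::Uri, Cow::from("http://testclient.lyft.com")),
];
// Fails with `XfccError::DuplicatePairKey` if a single-valued key appears twice.
let element = Element::try_from(raw).unwrap();
assert_eq!(element.by, vec![Cow::from("http://frontend.lyft.com")]);
assert!(element.hash.is_none());
```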
Fields
---
`by: Vec<Cow<'a, str>>`
`hash: Option<Cow<'a, str>>`
`cert: Option<Cow<'a, str>>`
`chain: Option<Cow<'a, str>>`
`subject: Option<Cow<'a, str>>`
`uri: Vec<Cow<'a, str>>`
`dns: Vec<Cow<'a, str>>`
Trait Implementations
---
### impl<'a> Debug for Element<'a>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<'a> PartialEq<Element<'a>> for Element<'a>
#### fn eq(&self, other: &Element<'a>) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl<'a> TryFrom<Vec<(PairKey, Cow<'a, str>), Global>> for Element<'a>
#### type Error = XfccError<'a>
The type returned in the event of a conversion error.
#### fn try_from(element_raw: ElementRaw<'a>) -> Result<Self, Self::Error>
Performs the conversion.
### impl<'a> Eq for Element<'a>
### impl<'a> StructuralEq for Element<'a>
### impl<'a> StructuralPartialEq for Element<'a>
Auto Trait Implementations
---
### impl<'a> RefUnwindSafe for Element<'a>
### impl<'a> Send for Element<'a>
### impl<'a> Sync for Element<'a>
### impl<'a> Unpin for Element<'a>
### impl<'a> UnwindSafe for Element<'a>
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Function xfcc_parser::element_raw_list
===
```
pub fn element_raw_list(s: &[u8]) -> IResult<&[u8], Vec<ElementRaw<'_>>>
```
Parses an XFCC header into a list of raw XFCC elements, each consisting of a list of key-value pairs
Arguments
---
* `s` - An XFCC header
Examples
---
```
use std::borrow::Cow;
use xfcc_parser::PairKey;
let input = br#"By=http://frontend.lyft.com;Subject="/C=US/ST=CA/L=San Francisco/OU=Lyft/CN=Test Client";URI=http://testclient.lyft.com"#;
let (trailing, elements) = xfcc_parser::element_raw_list(input).unwrap();
assert!(trailing.is_empty());
assert_eq!(elements[0], vec![
(PairKey::By, Cow::from("http://frontend.lyft.com")),
(PairKey::Subject, Cow::from("/C=US/ST=CA/L=San Francisco/OU=Lyft/CN=Test Client")),
(PairKey::Uri, Cow::from("http://testclient.lyft.com")),
]);
```
Type Definition xfcc_parser::ElementRaw
===
```
pub type ElementRaw<'a> = Vec<(PairKey, Cow<'a, str>)>;
```
A list of key-value pairs representing a raw XFCC element |
gatsby-source-drupal-multilanguage | npm | JavaScript | gatsby-source-drupal-multilanguage - remake for multilanguage
===
Source plugin for pulling data (including images) into Gatsby from Drupal sites - with translation support.
Pulls data from Drupal 8 sites with the
[Drupal JSONAPI module](https://www.drupal.org/project/jsonapi) installed.
An example site built with the headless Drupal distro
[ContentaCMS](https://twitter.com/contentacms) is at
<https://using-drupal.gatsbyjs.org/>.
The `apiBase` option allows changing the API entry point depending on the version of jsonapi used by your Drupal instance. The default value is `jsonapi`, which has been used since jsonapi version `8.x-1.0-alpha4`.
Install
---
`npm install --save gatsby-source-drupal-multilanguage`
How to use
---
```
// In your gatsby-config.js
module.exports = {
plugins: [
{
resolve: `gatsby-source-drupal-multilanguage`,
options: {
baseUrl: `https://live-contentacms.pantheonsite.io/`,
apiBase: `api` // optional, defaults to `jsonapi`
}
}
]
};
```
### Filters
You can use the `filters` option to limit the data that is retrieved from Drupal. Filters are applied per JSON API collection. You can use any [valid JSON API filter query](https://www.drupal.org/docs/8/modules/jsonapi/filtering). For large data sets this can reduce the build time of your application by allowing Gatsby to skip content you'll never use.
As an example, if your JSON API endpoint (<https://live-contentacms.pantheonsite.io/api>) returns the following collections list, then `articles` and `recipes` are both collections that can have filters applied:
```
{
...
links: {
articles: "https://live-contentacms.pantheonsite.io/api/articles",
recipes: "https://live-contentacms.pantheonsite.io/api/recipes",
...
}
}
```
To retrieve only recipes with a specific tag, you could do something like the following, where the key (`recipe`) is the collection from above and the value is the filter you want to apply.
```
// In your gatsby-config.js
module.exports = {
plugins: [
{
resolve: `gatsby-source-drupal-multilanguage`,
options: {
baseUrl: `https://live-contentacms.pantheonsite.io/`,
apiBase: `api`,
filters: {
// collection : filter
recipe: "filter[tags.name][value]=British"
}
}
}
]
};
```
Which would result in Gatsby using the filtered collection [https://live-contentacms.pantheonsite.io/api/recipes?filter[tags.name][value]=British](https://live-contentacms.pantheonsite.io/api/recipes?filter%5Btags.name%5D%5Bvalue%5D=British) to retrieve data.
### Basic Auth
You can use the `basicAuth` option if your site is protected by basic authentication.
First, you need a way to pass environment variables to the build process, so secrets and other secured data aren't committed to source control. We recommend using [`dotenv`](https://github.com/motdotla/dotenv) which will then expose environment variables. [Read more about dotenv and using environment variables here](https://www.gatsbyjs.org/docs/environment-variables). Then we can *use* these environment variables and configure our plugin.
```
// In your gatsby-config.js
module.exports = {
plugins: [
{
resolve: `gatsby-source-drupal-multilanguage`,
options: {
baseUrl: `https://live-contentacms.pantheonsite.io/`,
apiBase: `api`, // optional, defaults to `jsonapi`
basicAuth: {
username: process.env.BASIC_AUTH_USERNAME,
password: process.env.BASIC_AUTH_PASSWORD
}
}
}
]
};
```
How to query
---
You can query nodes created from Drupal like the following:
```
{
allArticle {
edges {
node {
title
internalId
created(formatString: "DD-MMM-YYYY")
}
}
}
}
```
Readme
---
### Keywords
* gatsby
* gatsby-plugin
* gatsby-source-plugin |
github.com/performancecopilot/speed | go | Go | README
[¶](#section-readme)
---
![Speed](https://github.com/performancecopilot/speed/raw/v3.0.0/images/speed.png)
Golang implementation of the Performance Co-Pilot (PCP) instrumentation API
> [A **Google Summer of Code 2016** project!](https://summerofcode.withgoogle.com/archive/2016/projects/5093214348378112/)
[![Build Status](https://travis-ci.org/performancecopilot/speed.svg?branch=master)](https://travis-ci.org/performancecopilot/speed) [![Build status](https://ci.appveyor.com/api/projects/status/gins77i4ej1o5xef?svg=true)](https://ci.appveyor.com/project/suyash/speed) [![Coverage Status](https://coveralls.io/repos/github/performancecopilot/speed/badge.svg?branch=master)](https://coveralls.io/github/performancecopilot/speed?branch=master) [![GoDoc](https://godoc.org/github.com/performancecopilot/speed?status.svg)](https://godoc.org/github.com/performancecopilot/speed) [![Go Report Card](https://goreportcard.com/badge/github.com/performancecopilot/speed)](https://goreportcard.com/report/github.com/performancecopilot/speed)
* [Install](#readme-install)
+ [Prerequisites](#readme-prerequisites)
- [PCP](#readme-pcp)
- [Go](#readme-go)
- [[Optional] [Vector](http://vectoross.io/)](#optional-vectorhttpvectorossio)
+ [Getting the library](#readme-getting-the-library)
+ [Getting the examples](#readme-getting-the-examples)
* [Walkthrough](#readme-walkthrough)
+ [SingletonMetric](#readme-singletonmetric)
+ [InstanceMetric](#readme-instancemetric)
+ [Counter](#readme-counter)
+ [CounterVector](#readme-countervector)
+ [Gauge](#readme-gauge)
+ [GaugeVector](#readme-gaugevector)
+ [Timer](#readme-timer)
+ [Histogram](#readme-histogram)
* [Visualization through Vector](#readme-visualization-through-vector)
* [Go Kit](#readme-go-kit)
### Install
#### Prerequisites
##### [PCP](http://pcp.io)
Install Performance Co-Pilot on your local machine, either using prebuilt archives or by getting and building the source code. For detailed instructions, read the [page from vector documentation](http://vectoross.io/docs/installing-performance-co-pilot). For building from source on ubuntu 14.04, a simplified list of steps is [here](https://gist.github.com/suyash/0def9b33890d4a99ca9dd96724e1ac84)
##### [Go](https://golang.org)
Set up a go environment on your computer. For more information about these steps, please read [how to write go code](https://golang.org/doc/code.html), or [watch the video](https://www.youtube.com/watch?v=XCsL89YtqCs)
* download and install go 1.6 or above from <https://golang.org/dl>
* set up `$GOPATH` to the root folder where you want to keep your go code
* add `$GOPATH/bin` to your `$PATH` by adding `export PATH=$GOPATH/bin:$PATH` to your shell configuration file, such as to your `.bashrc`, if using a Bourne shell variant.
##### [Optional] [Vector](http://vectoross.io/)
Vector is an on premise visualization tool for metrics exposed using Performance Co-Pilot. For more read the [official documentation](http://vectoross.io/docs/) and play around with the [online demo](http://vectoross.io/demo)
#### Getting the library
```
go get github.com/performancecopilot/speed
```
#### Getting the examples
All examples are executable go programs. Simply doing
```
go get github.com/performancecopilot/speed/examples/<example name>
```
will get the example and add an executable to `$GOPATH/bin`. If it is on your path, simply doing
```
<example name>
```
will run the binary, executing the example
### Walkthrough
There are 3 main components defined in the library, a [**Client**](https://godoc.org/github.com/performancecopilot/speed#Client), a [**Registry**](https://godoc.org/github.com/performancecopilot/speed#Registry) and a [**Metric**](https://godoc.org/github.com/performancecopilot/speed#Metric). A client is created using an application name, and the same name is used to create a memory mapped file in `PCP_TMP_DIR`. Each client contains a registry of metrics that it holds, and will publish on being activated. It also has a `SetFlag` method allowing you to set a mmv flag while a mapping is not active, to one of three values, [`NoPrefixFlag`, `ProcessFlag` and `SentinelFlag`](https://godoc.org/github.com/performancecopilot/speed#MMVFlag). The ProcessFlag is the default and reports metrics prefixed with the application name (i.e. like `mmv.app_name.metric.name`). Setting it to `NoPrefixFlag` will report metrics without being prefixed with the application name (i.e. like `mmv.metric.name`) which can lead to namespace collisions, so be sure of what you're doing.
A client can register metrics to report through two interfaces. The first is the `Register` method, which takes a raw metric object. The other is `RegisterString`, which takes a string specifying the metrics and instances to register, similar to the interface in parfait, along with type, semantics and unit, in that order. A client is activated by calling the `Start` method and deactivated by the `Stop` method. While a client is active, no new metrics can be registered, but a client can be stopped, further metrics registered, and then started again.
Each client contains an instance of the `Registry` interface, which can give different information like the number of registered metrics and instance domains. It also exports methods to register metrics and instance domains.
Finally, metrics are defined as implementations of different metric interfaces, but they all implement the `Metric` interface, the different metric types defined are
#### [SingletonMetric](https://godoc.org/github.com/performancecopilot/speed#SingletonMetric)
This type defines a metric with no instance domain and only one value. It **requires** type, semantics and unit for construction, and optionally takes a couple of description strings. A simple construction
```
metric, err := speed.NewPCPSingletonMetric(
42, // initial value
"simple.counter", // name
speed.Int32Type, // type
speed.CounterSemantics, // semantics
speed.OneUnit, // unit
"A Simple Metric", // short description
"This is a simple counter metric to demonstrate the speed API", // long description
)
```
A SingletonMetric supports a `Val` method that returns the metric value and a `Set(interface{})` method that sets the metric value.
#### [InstanceMetric](https://godoc.org/github.com/performancecopilot/speed#InstanceMetric)
An `InstanceMetric` is a single metric object containing multiple values of the same type for multiple instances. It also **requires** an instance domain along with type, semantics and unit for construction, and optionally takes a couple of description strings. A simple construction
```
indom, err := speed.NewPCPInstanceDomain(
"Acme Products", // name
[]string{"Anvils", "Rockets", "Giant_Rubber_Bands"}, // instances
"Acme products", // short description
"Most popular products produced by the Acme Corporation", // long description
)
...
countmetric, err := speed.NewPCPInstanceMetric(
speed.Instances{
"Anvils": 0,
"Rockets": 0,
"Giant_Rubber_Bands": 0,
},
"products.count",
indom,
speed.Uint64Type,
speed.CounterSemantics,
speed.OneUnit,
"Acme factory product throughput",
`Monotonic increasing counter of products produced in the Acme Corporation
factory since starting the Acme production application. Quality guaranteed.`,
)
```
An instance metric supports a `ValInstance(string)` method that returns the value as well as a `SetInstance(interface{}, string)` that sets the value of a particular instance.
#### [Counter](https://godoc.org/github.com/performancecopilot/speed#Counter)
A counter is simply a PCPSingletonMetric with `Int64Type`, `CounterSemantics` and `OneUnit`.
It can optionally take a short and a long description.
A simple example
```
c, err := speed.NewPCPCounter(0, "a.simple.counter")
```
a counter supports `Set(int64)` to set a value, `Inc(int64)` to increment by a custom delta and `Up()` to increment by 1.
#### [CounterVector](https://godoc.org/github.com/performancecopilot/speed#CounterVector)
A CounterVector is a PCPInstanceMetric , with `Int64Type`, `CounterSemantics` and `OneUnit` and an instance domain created and registered on initialization, with the name `metric_name.indom`.
A simple example
```
c, err := speed.NewPCPCounterVector(
map[string]uint64{
"instance1": 0,
"instance2": 1,
}, "another.simple.counter"
)
```
It supports `Val(string)`, `Set(uint64, string)`, `Inc(uint64, string)` and `Up(string)` amongst other things.
#### [Gauge](https://godoc.org/github.com/performancecopilot/speed#Gauge)
A Gauge is a simple SingletonMetric storing float64 values, i.e. a PCP Singleton Metric with `DoubleType`, `InstantSemantics` and `OneUnit`.
A simple example
```
g, err := speed.NewPCPGauge(0, "a.sample.gauge")
```
supports `Val()`, `Set(float64)`, `Inc(float64)` and `Dec(float64)`
#### [GaugeVector](https://godoc.org/github.com/performancecopilot/speed#GaugeVector)
A Gauge Vector is a PCP instance metric with `DoubleType`, `InstantSemantics` and `OneUnit` and an autogenerated instance domain. A simple example
```
g, err := NewPCPGaugeVector(map[string]float64{
"instance1": 1.2,
"instance2": 2.4,
}, "met")
```
supports `Val(string)`, `Set(float64, string)`, `Inc(float64, string)` and `Dec(float64, string)`
#### [Timer](https://godoc.org/github.com/performancecopilot/speed#Timer)
A timer stores the time elapsed for different operations. **It is not compatible with PCP's elapsed type metrics**. It takes a name and a `TimeUnit` for construction.
```
timer, err := speed.NewPCPTimer("test", speed.NanosecondUnit)
```
calling `timer.Start()` signals the start of an operation
calling `timer.Stop()` signals end of an operation and will return the total elapsed time calculated by the metric so far.
#### [Histogram](https://godoc.org/github.com/performancecopilot/speed#Histogram)
A histogram implements a PCP Instance Metric that reports the `mean`, `variance` and `standard_deviation` while using a histogram backed by [codahale's hdrhistogram implementation in golang](https://github.com/codahale/hdrhistogram). Other than these, it also returns a custom percentile and buckets for plotting graphs. It requires a low and a high value and the number of significant figures used at the time of construction.
```
m, err := speed.NewPCPHistogram("hist", 0, 1000, 5)
```
### Visualization through Vector
[Vector supports adding custom widgets for custom metrics](http://vectoross.io/docs/creating-widgets.html). However, that requires you to rebuild Vector from scratch after adding the widget configuration. But if it is a one-time thing, it's worth it. For example, here is the configuration I added to display the metric from the basic_histogram example:
```
{
name: 'mmv.histogram_test.hist',
title: 'speed basic histogram example',
directive: 'line-time-series',
dataAttrName: 'data',
dataModelType: CumulativeMetricDataModel,
dataModelOptions: {
name: 'mmv.histogram_test.hist'
},
size: {
width: '50%',
height: '250px'
},
enableVerticalResize: false,
group: 'speed',
attrs: {
forcey: 100,
percentage: false
}
}
```
and the visualization I got
![screenshot from 2016-08-27 01 05 56](https://cloud.githubusercontent.com/assets/16324837/18172229/45b0442c-7082-11e6-9edd-ab6f91dc9f2e.png)
### [Go Kit](https://gokit.io)
Go kit provides [a wrapper package](https://godoc.org/github.com/go-kit/kit/metrics/pcp) over speed that can be used for building microservices that expose metrics using PCP.
For modified versions of the examples in go-kit that use pcp to report metrics, see [suyash/kit-pcp-examples](https://github.com/suyash/kit-pcp-examples)
### Projects using Speed
* <https://github.com/lzap/pcp-mmvstatsd>
Documentation
[¶](#section-documentation)
---
[Rendered for](https://go.dev/about#build-context)
linux/amd64 windows/amd64 darwin/amd64 js/wasm
### Overview [¶](#pkg-overview)
Package speed implements a golang client for the Performance Co-Pilot instrumentation API.
It is based on the C/Perl/Python API implemented in PCP core as well as the Java API implemented by `parfait`, a separate project.
Some examples on using the API are implemented as executable go programs in the
`examples` subdirectory.
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [Variables](#pkg-variables)
* [type Client](#Client)
* [type CountUnit](#CountUnit)
* + [func (c CountUnit) Count(CountUnit, int8) MetricUnit](#CountUnit.Count)
+ [func (c CountUnit) PMAPI() uint32](#CountUnit.PMAPI)
+ [func (c CountUnit) Space(s SpaceUnit, dimension int8) MetricUnit](#CountUnit.Space)
+ [func (i CountUnit) String() string](#CountUnit.String)
+ [func (c CountUnit) Time(t TimeUnit, dimension int8) MetricUnit](#CountUnit.Time)
* [type Counter](#Counter)
* [type CounterVector](#CounterVector)
* [type Gauge](#Gauge)
* [type GaugeVector](#GaugeVector)
* [type Histogram](#Histogram)
* [type HistogramBucket](#HistogramBucket)
* [type InstanceDomain](#InstanceDomain)
* [type InstanceMetric](#InstanceMetric)
* [type Instances](#Instances)
* + [func (i Instances) Keys() []string](#Instances.Keys)
* [type MMVFlag](#MMVFlag)
* + [func (i MMVFlag) String() string](#MMVFlag.String)
* [type Metric](#Metric)
* [type MetricSemantics](#MetricSemantics)
* + [func (i MetricSemantics) String() string](#MetricSemantics.String)
* [type MetricType](#MetricType)
* + [func (m MetricType) IsCompatible(val interface{}) bool](#MetricType.IsCompatible)
+ [func (i MetricType) String() string](#MetricType.String)
* [type MetricUnit](#MetricUnit)
* + [func NewMetricUnit() MetricUnit](#NewMetricUnit)
* [type PCPClient](#PCPClient)
* + [func NewPCPClient(name string) (*PCPClient, error)](#NewPCPClient)
+ [func NewPCPClientWithRegistry(name string, registry *PCPRegistry) (*PCPClient, error)](#NewPCPClientWithRegistry)
* + [func (c *PCPClient) Length() int](#PCPClient.Length)
+ [func (c *PCPClient) MustRegister(m Metric)](#PCPClient.MustRegister)
+ [func (c *PCPClient) MustRegisterIndom(indom InstanceDomain)](#PCPClient.MustRegisterIndom)
+ [func (c *PCPClient) MustRegisterString(str string, val interface{}, t MetricType, s MetricSemantics, u MetricUnit) Metric](#PCPClient.MustRegisterString)
+ [func (c *PCPClient) MustStart()](#PCPClient.MustStart)
+ [func (c *PCPClient) MustStop()](#PCPClient.MustStop)
+ [func (c *PCPClient) Register(m Metric) error](#PCPClient.Register)
+ [func (c *PCPClient) RegisterIndom(indom InstanceDomain) error](#PCPClient.RegisterIndom)
+ [func (c *PCPClient) RegisterString(str string, val interface{}, t MetricType, s MetricSemantics, u MetricUnit) (Metric, error)](#PCPClient.RegisterString)
+ [func (c *PCPClient) Registry() Registry](#PCPClient.Registry)
+ [func (c *PCPClient) SetFlag(flag MMVFlag) error](#PCPClient.SetFlag)
+ [func (c *PCPClient) Start() error](#PCPClient.Start)
+ [func (c *PCPClient) Stop() error](#PCPClient.Stop)
* [type PCPCounter](#PCPCounter)
* + [func NewPCPCounter(val int64, name string, desc ...string) (*PCPCounter, error)](#NewPCPCounter)
* + [func (c *PCPCounter) Inc(val int64) error](#PCPCounter.Inc)
+ [func (m PCPCounter) Indom() *PCPInstanceDomain](#PCPCounter.Indom)
+ [func (c *PCPCounter) MustInc(val int64)](#PCPCounter.MustInc)
+ [func (c *PCPCounter) Set(val int64) error](#PCPCounter.Set)
+ [func (c *PCPCounter) Up()](#PCPCounter.Up)
+ [func (c *PCPCounter) Val() int64](#PCPCounter.Val)
* [type PCPCounterVector](#PCPCounterVector)
* + [func NewPCPCounterVector(values map[string]int64, name string, desc ...string) (*PCPCounterVector, error)](#NewPCPCounterVector)
* + [func (c *PCPCounterVector) Inc(inc int64, instance string) error](#PCPCounterVector.Inc)
+ [func (c *PCPCounterVector) IncAll(val int64)](#PCPCounterVector.IncAll)
+ [func (m PCPCounterVector) Indom() *PCPInstanceDomain](#PCPCounterVector.Indom)
+ [func (m PCPCounterVector) Instances() []string](#PCPCounterVector.Instances)
+ [func (c *PCPCounterVector) MustInc(inc int64, instance string)](#PCPCounterVector.MustInc)
+ [func (c *PCPCounterVector) MustSet(val int64, instance string)](#PCPCounterVector.MustSet)
+ [func (c *PCPCounterVector) Set(val int64, instance string) error](#PCPCounterVector.Set)
+ [func (c *PCPCounterVector) SetAll(val int64)](#PCPCounterVector.SetAll)
+ [func (c *PCPCounterVector) Up(instance string)](#PCPCounterVector.Up)
+ [func (c *PCPCounterVector) UpAll()](#PCPCounterVector.UpAll)
+ [func (c *PCPCounterVector) Val(instance string) (int64, error)](#PCPCounterVector.Val)
* [type PCPGauge](#PCPGauge)
* + [func NewPCPGauge(val float64, name string, desc ...string) (*PCPGauge, error)](#NewPCPGauge)
* + [func (g *PCPGauge) Dec(val float64) error](#PCPGauge.Dec)
+ [func (g *PCPGauge) Inc(val float64) error](#PCPGauge.Inc)
+ [func (m PCPGauge) Indom() *PCPInstanceDomain](#PCPGauge.Indom)
+ [func (g *PCPGauge) MustDec(val float64)](#PCPGauge.MustDec)
+ [func (g *PCPGauge) MustInc(val float64)](#PCPGauge.MustInc)
+ [func (g *PCPGauge) MustSet(val float64)](#PCPGauge.MustSet)
+ [func (g *PCPGauge) Set(val float64) error](#PCPGauge.Set)
+ [func (g *PCPGauge) Val() float64](#PCPGauge.Val)
* [type PCPGaugeVector](#PCPGaugeVector)
* + [func NewPCPGaugeVector(values map[string]float64, name string, desc ...string) (*PCPGaugeVector, error)](#NewPCPGaugeVector)
* + [func (g *PCPGaugeVector) Dec(inc float64, instance string) error](#PCPGaugeVector.Dec)
+ [func (g *PCPGaugeVector) DecAll(val float64)](#PCPGaugeVector.DecAll)
+ [func (g *PCPGaugeVector) Inc(inc float64, instance string) error](#PCPGaugeVector.Inc)
+ [func (g *PCPGaugeVector) IncAll(val float64)](#PCPGaugeVector.IncAll)
+ [func (m PCPGaugeVector) Indom() *PCPInstanceDomain](#PCPGaugeVector.Indom)
+ [func (m PCPGaugeVector) Instances() []string](#PCPGaugeVector.Instances)
+ [func (g *PCPGaugeVector) MustDec(inc float64, instance string)](#PCPGaugeVector.MustDec)
+ [func (g *PCPGaugeVector) MustInc(inc float64, instance string)](#PCPGaugeVector.MustInc)
+ [func (g *PCPGaugeVector) MustSet(val float64, instance string)](#PCPGaugeVector.MustSet)
+ [func (g *PCPGaugeVector) Set(val float64, instance string) error](#PCPGaugeVector.Set)
+ [func (g *PCPGaugeVector) SetAll(val float64)](#PCPGaugeVector.SetAll)
+ [func (g *PCPGaugeVector) Val(instance string) (float64, error)](#PCPGaugeVector.Val)
* [type PCPHistogram](#PCPHistogram)
* + [func NewPCPHistogram(name string, low, high int64, sigfigures int, unit MetricUnit, desc ...string) (*PCPHistogram, error)](#NewPCPHistogram)
* + [func (h *PCPHistogram) Buckets() []*HistogramBucket](#PCPHistogram.Buckets)
+ [func (h *PCPHistogram) High() int64](#PCPHistogram.High)
+ [func (m PCPHistogram) Indom() *PCPInstanceDomain](#PCPHistogram.Indom)
+ [func (m PCPHistogram) Instances() []string](#PCPHistogram.Instances)
+ [func (h *PCPHistogram) Low() int64](#PCPHistogram.Low)
+ [func (h *PCPHistogram) Max() int64](#PCPHistogram.Max)
+ [func (h *PCPHistogram) Mean() float64](#PCPHistogram.Mean)
+ [func (h *PCPHistogram) Min() int64](#PCPHistogram.Min)
+ [func (h *PCPHistogram) MustRecord(val int64)](#PCPHistogram.MustRecord)
+ [func (h *PCPHistogram) MustRecordN(val, n int64)](#PCPHistogram.MustRecordN)
+ [func (h *PCPHistogram) Percentile(p float64) int64](#PCPHistogram.Percentile)
+ [func (h *PCPHistogram) Record(val int64) error](#PCPHistogram.Record)
+ [func (h *PCPHistogram) RecordN(val, n int64) error](#PCPHistogram.RecordN)
+ [func (h *PCPHistogram) StandardDeviation() float64](#PCPHistogram.StandardDeviation)
+ [func (h *PCPHistogram) Variance() float64](#PCPHistogram.Variance)
* [type PCPInstanceDomain](#PCPInstanceDomain)
* + [func NewPCPInstanceDomain(name string, instances []string, desc ...string) (*PCPInstanceDomain, error)](#NewPCPInstanceDomain)
* + [func (indom *PCPInstanceDomain) Description() string](#PCPInstanceDomain.Description)
+ [func (indom *PCPInstanceDomain) HasInstance(name string) bool](#PCPInstanceDomain.HasInstance)
+ [func (indom *PCPInstanceDomain) ID() uint32](#PCPInstanceDomain.ID)
+ [func (indom *PCPInstanceDomain) InstanceCount() int](#PCPInstanceDomain.InstanceCount)
+ [func (indom *PCPInstanceDomain) Instances() []string](#PCPInstanceDomain.Instances)
+ [func (indom *PCPInstanceDomain) MatchInstances(ins []string) bool](#PCPInstanceDomain.MatchInstances)
+ [func (indom *PCPInstanceDomain) Name() string](#PCPInstanceDomain.Name)
+ [func (indom *PCPInstanceDomain) String() string](#PCPInstanceDomain.String)
* [type PCPInstanceMetric](#PCPInstanceMetric)
* + [func NewPCPInstanceMetric(vals Instances, name string, indom *PCPInstanceDomain, t MetricType, ...) (*PCPInstanceMetric, error)](#NewPCPInstanceMetric)
* + [func (m PCPInstanceMetric) Indom() *PCPInstanceDomain](#PCPInstanceMetric.Indom)
+ [func (m PCPInstanceMetric) Instances() []string](#PCPInstanceMetric.Instances)
+ [func (m *PCPInstanceMetric) MustSetInstance(val interface{}, instance string)](#PCPInstanceMetric.MustSetInstance)
+ [func (m *PCPInstanceMetric) SetInstance(val interface{}, instance string) error](#PCPInstanceMetric.SetInstance)
+ [func (m *PCPInstanceMetric) ValInstance(instance string) (interface{}, error)](#PCPInstanceMetric.ValInstance)
* [type PCPMetric](#PCPMetric)
* [type PCPRegistry](#PCPRegistry)
* + [func NewPCPRegistry() *PCPRegistry](#NewPCPRegistry)
* + [func (r *PCPRegistry) AddInstanceDomain(indom InstanceDomain) error](#PCPRegistry.AddInstanceDomain)
+ [func (r *PCPRegistry) AddInstanceDomainByName(name string, instances []string) (InstanceDomain, error)](#PCPRegistry.AddInstanceDomainByName)
+ [func (r *PCPRegistry) AddMetric(m Metric) error](#PCPRegistry.AddMetric)
+ [func (r *PCPRegistry) AddMetricByString(str string, val interface{}, t MetricType, s MetricSemantics, u MetricUnit) (Metric, error)](#PCPRegistry.AddMetricByString)
+ [func (r *PCPRegistry) HasInstanceDomain(name string) bool](#PCPRegistry.HasInstanceDomain)
+ [func (r *PCPRegistry) HasMetric(name string) bool](#PCPRegistry.HasMetric)
+ [func (r *PCPRegistry) InstanceCount() int](#PCPRegistry.InstanceCount)
+ [func (r *PCPRegistry) InstanceDomainCount() int](#PCPRegistry.InstanceDomainCount)
+ [func (r *PCPRegistry) MetricCount() int](#PCPRegistry.MetricCount)
+ [func (r *PCPRegistry) StringCount() int](#PCPRegistry.StringCount)
+ [func (r *PCPRegistry) ValuesCount() int](#PCPRegistry.ValuesCount)
* [type PCPSingletonMetric](#PCPSingletonMetric)
* + [func NewPCPSingletonMetric(val interface{}, name string, t MetricType, s MetricSemantics, u MetricUnit, ...) (*PCPSingletonMetric, error)](#NewPCPSingletonMetric)
* + [func (m PCPSingletonMetric) Indom() *PCPInstanceDomain](#PCPSingletonMetric.Indom)
+ [func (m *PCPSingletonMetric) MustSet(val interface{})](#PCPSingletonMetric.MustSet)
+ [func (m *PCPSingletonMetric) Set(val interface{}) error](#PCPSingletonMetric.Set)
+ [func (m *PCPSingletonMetric) String() string](#PCPSingletonMetric.String)
+ [func (m *PCPSingletonMetric) Val() interface{}](#PCPSingletonMetric.Val)
* [type PCPTimer](#PCPTimer)
* + [func NewPCPTimer(name string, unit TimeUnit, desc ...string) (*PCPTimer, error)](#NewPCPTimer)
* + [func (m PCPTimer) Indom() *PCPInstanceDomain](#PCPTimer.Indom)
+ [func (t *PCPTimer) Reset() error](#PCPTimer.Reset)
+ [func (t *PCPTimer) Start() error](#PCPTimer.Start)
+ [func (t *PCPTimer) Stop() (float64, error)](#PCPTimer.Stop)
* [type Registry](#Registry)
* [type SingletonMetric](#SingletonMetric)
* [type SpaceUnit](#SpaceUnit)
* + [func (s SpaceUnit) Count(c CountUnit, dimension int8) MetricUnit](#SpaceUnit.Count)
+ [func (s SpaceUnit) PMAPI() uint32](#SpaceUnit.PMAPI)
+ [func (s SpaceUnit) Space(SpaceUnit, int8) MetricUnit](#SpaceUnit.Space)
+ [func (i SpaceUnit) String() string](#SpaceUnit.String)
+ [func (s SpaceUnit) Time(t TimeUnit, dimension int8) MetricUnit](#SpaceUnit.Time)
* [type TimeUnit](#TimeUnit)
* + [func (t TimeUnit) Count(c CountUnit, dimension int8) MetricUnit](#TimeUnit.Count)
+ [func (t TimeUnit) PMAPI() uint32](#TimeUnit.PMAPI)
+ [func (t TimeUnit) Space(s SpaceUnit, dimension int8) MetricUnit](#TimeUnit.Space)
+ [func (i TimeUnit) String() string](#TimeUnit.String)
+ [func (t TimeUnit) Time(TimeUnit, int8) MetricUnit](#TimeUnit.Time)
* [type Timer](#Timer)
### Constants [¶](#pkg-constants)
```
const (
HeaderLength = 40
TocLength = 16
Metric1Length = 104
Metric2Length = 48
ValueLength = 32
Instance1Length = 80
Instance2Length = 24
InstanceDomainLength = 32
StringLength = 256
)
```
byte lengths of different components in an mmv file
```
const (
HistogramMin = 0
HistogramMax = 3600000000
)
```
the maximum and minimum values that can be recorded by a histogram
```
const MaxDataValueSize = 16
```
MaxDataValueSize is the maximum byte length for a stored metric value, unless it is a string
```
const MaxV1NameLength = 63
```
MaxV1NameLength is the maximum length for a metric/instance name under MMV format 1
```
const PCPClusterIDBitLength = 12
```
PCPClusterIDBitLength is the bit length of the cluster id for a set of PCP metrics
```
const PCPInstanceDomainBitLength = 22
```
PCPInstanceDomainBitLength is the maximum bit length of a PCP Instance Domain
see: <https://github.com/performancecopilot/pcp/blob/master/src/include/pcp/impl.h#L102-L121>
```
const PCPMetricItemBitLength = 10
```
PCPMetricItemBitLength is the maximum bit size of a PCP Metric id.
see: <https://github.com/performancecopilot/pcp/blob/master/src/include/pcp/impl.h#L102-L121>
```
const Version = "3.0.0"
```
Version is the last tagged version of the package
### Variables [¶](#pkg-variables)
```
var EraseFileOnStop = [false](/builtin#false)
```
EraseFileOnStop, if set to true, will also delete the memory mapped file
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [Client](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L39) [¶](#Client)
```
type Client interface {
// a client must contain a registry of metrics
Registry() [Registry](#Registry)
// starts monitoring
Start() [error](/builtin#error)
// Start that will panic on failure
MustStart()
// stop monitoring
Stop() [error](/builtin#error)
// Stop that will panic on failure
MustStop()
// adds a metric to be monitored
Register([Metric](#Metric)) [error](/builtin#error)
// tries to add a metric to be written and panics on error
MustRegister([Metric](#Metric))
// adds metric from a string
RegisterString([string](/builtin#string), interface{}, [MetricType](#MetricType), [MetricSemantics](#MetricSemantics), [MetricUnit](#MetricUnit)) ([Metric](#Metric), [error](/builtin#error))
// tries to add a metric from a string and panics on an error
MustRegisterString([string](/builtin#string), interface{}, [MetricType](#MetricType), [MetricSemantics](#MetricSemantics), [MetricUnit](#MetricUnit)) [Metric](#Metric)
}
```
Client defines the interface for a type that can talk to an instrumentation agent
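As a concrete illustration, a minimal sketch of the typical client life cycle using the PCP implementation below (the metric name `example.requests` is made up for this example):
```
c, err := speed.NewPCPClient("example")
if err != nil {
    panic(err)
}

// register a metric directly from a string name
requests := c.MustRegisterString(
    "example.requests", int64(0),
    speed.Int64Type, speed.CounterSemantics, speed.OneUnit,
)

c.MustStart()      // write and memory map the MMV file
defer c.MustStop() // unmap and clean up when done

_ = requests // update the metric from application code while the client is active
```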
####
type [CountUnit](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L338) [¶](#CountUnit)
```
type CountUnit [uint32](/builtin#uint32)
```
CountUnit is a type representing a counted quantity.
```
const OneUnit [CountUnit](#CountUnit) = 1<<20 | [iota](/builtin#iota)<<8
```
OneUnit represents the only CountUnit.
For count units bits 8-11 are 1 and bits 21-24 are scale.
####
func (CountUnit) [Count](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L362) [¶](#CountUnit.Count)
```
func (c [CountUnit](#CountUnit)) Count([CountUnit](#CountUnit), [int8](/builtin#int8)) [MetricUnit](#MetricUnit)
```
Count adds a count unit to the current unit at a specific dimension
####
func (CountUnit) [PMAPI](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L347) [¶](#CountUnit.PMAPI)
```
func (c [CountUnit](#CountUnit)) PMAPI() [uint32](/builtin#uint32)
```
PMAPI returns the PMAPI representation for a CountUnit.
####
func (CountUnit) [Space](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L352) [¶](#CountUnit.Space)
```
func (c [CountUnit](#CountUnit)) Space(s [SpaceUnit](#SpaceUnit), dimension [int8](/builtin#int8)) [MetricUnit](#MetricUnit)
```
Space adds a space unit to the current unit at a specific dimension
####
func (CountUnit) [String](https://github.com/performancecopilot/speed/blob/v3.0.0/countunit_string.go#L11) [¶](#CountUnit.String)
```
func (i [CountUnit](#CountUnit)) String() [string](/builtin#string)
```
####
func (CountUnit) [Time](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L357) [¶](#CountUnit.Time)
```
func (c [CountUnit](#CountUnit)) Time(t [TimeUnit](#TimeUnit), dimension [int8](/builtin#int8)) [MetricUnit](#MetricUnit)
```
Time adds a time unit to the current unit at a specific dimension
####
type [Counter](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L652) [¶](#Counter)
```
type Counter interface {
[Metric](#Metric)
Val() [int64](/builtin#int64)
Set([int64](/builtin#int64)) [error](/builtin#error)
Inc([int64](/builtin#int64)) [error](/builtin#error)
MustInc([int64](/builtin#int64))
Up() // same as MustInc(1)
}
```
Counter defines a metric that holds a single value that can only be incremented.
####
type [CounterVector](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1085) [¶](#CounterVector)
```
type CounterVector interface {
[Metric](#Metric)
Val([string](/builtin#string)) ([int64](/builtin#int64), [error](/builtin#error))
Set([int64](/builtin#int64), [string](/builtin#string)) [error](/builtin#error)
MustSet([int64](/builtin#int64), [string](/builtin#string))
SetAll([int64](/builtin#int64))
Inc([int64](/builtin#int64), [string](/builtin#string)) [error](/builtin#error)
MustInc([int64](/builtin#int64), [string](/builtin#string))
IncAll([int64](/builtin#int64))
Up([string](/builtin#string))
UpAll()
}
```
CounterVector defines a Counter on multiple instances.
####
type [Gauge](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L746) [¶](#Gauge)
```
type Gauge interface {
[Metric](#Metric)
Val() [float64](/builtin#float64)
Set([float64](/builtin#float64)) [error](/builtin#error)
MustSet([float64](/builtin#float64))
Inc([float64](/builtin#float64)) [error](/builtin#error)
Dec([float64](/builtin#float64)) [error](/builtin#error)
MustInc([float64](/builtin#float64))
MustDec([float64](/builtin#float64))
}
```
Gauge defines a metric that holds a single double value that can be incremented or decremented.
####
type [GaugeVector](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1232) [¶](#GaugeVector)
```
type GaugeVector interface {
[Metric](#Metric)
Val([string](/builtin#string)) ([float64](/builtin#float64), [error](/builtin#error))
Set([float64](/builtin#float64), [string](/builtin#string)) [error](/builtin#error)
MustSet([float64](/builtin#float64), [string](/builtin#string))
SetAll([float64](/builtin#float64))
Inc([float64](/builtin#float64), [string](/builtin#string)) [error](/builtin#error)
MustInc([float64](/builtin#float64), [string](/builtin#string))
IncAll([float64](/builtin#float64))
Dec([float64](/builtin#float64), [string](/builtin#string)) [error](/builtin#error)
MustDec([float64](/builtin#float64), [string](/builtin#string))
DecAll([float64](/builtin#float64))
}
```
GaugeVector defines a Gauge on multiple instances
####
type [Histogram](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1349) [¶](#Histogram)
```
type Histogram interface {
Max() [int64](/builtin#int64) // Maximum value recorded so far
Min() [int64](/builtin#int64) // Minimum value recorded so far
High() [int64](/builtin#int64) // Highest allowed value
Low() [int64](/builtin#int64) // Lowest allowed value
Record([int64](/builtin#int64)) [error](/builtin#error) // Records a new value
RecordN([int64](/builtin#int64), [int64](/builtin#int64)) [error](/builtin#error) // Records multiple instances of the same value
MustRecord([int64](/builtin#int64))
MustRecordN([int64](/builtin#int64), [int64](/builtin#int64))
Mean() [float64](/builtin#float64) // Mean of all recorded data
Variance() [float64](/builtin#float64) // Variance of all recorded data
StandardDeviation() [float64](/builtin#float64) // StandardDeviation of all recorded data
Percentile([float64](/builtin#float64)) [int64](/builtin#int64) // Percentile returns the value at the passed percentile
}
```
Histogram defines a metric that records a distribution of data
####
type [HistogramBucket](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1561) [¶](#HistogramBucket)
```
type HistogramBucket struct {
From, To, Count [int64](/builtin#int64)
}
```
HistogramBucket is a single histogram bucket within a fixed range.
####
type [InstanceDomain](https://github.com/performancecopilot/speed/blob/v3.0.0/instance_domain.go#L10) [¶](#InstanceDomain)
```
type InstanceDomain interface {
ID() [uint32](/builtin#uint32) // unique identifier for the instance domain
Name() [string](/builtin#string) // name of the instance domain
Description() [string](/builtin#string) // description for the instance domain
HasInstance(name [string](/builtin#string)) [bool](/builtin#bool) // checks if an instance is in the indom
InstanceCount() [int](/builtin#int) // returns the number of instances in the indom
Instances() [][string](/builtin#string) // returns a slice of instances in the instance domain
}
```
InstanceDomain defines the interface for an instance domain
####
type [InstanceMetric](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L427) [¶](#InstanceMetric)
```
type InstanceMetric interface {
[Metric](#Metric)
// gets the value of a particular instance
ValInstance([string](/builtin#string)) (interface{}, [error](/builtin#error))
// sets the value of a particular instance
SetInstance(interface{}, [string](/builtin#string)) [error](/builtin#error)
// tries to set the value of a particular instance and panics on error
MustSetInstance(interface{}, [string](/builtin#string))
// returns a slice containing all instances in the metric
Instances() [][string](/builtin#string)
}
```
InstanceMetric defines the interface for a metric that stores multiple values in instances and instance domains.
####
type [Instances](https://github.com/performancecopilot/speed/blob/v3.0.0/instance.go#L4) [¶](#Instances)
```
type Instances map[[string](/builtin#string)]interface{}
```
Instances defines a valid collection of instance names and values
####
func (Instances) [Keys](https://github.com/performancecopilot/speed/blob/v3.0.0/instance.go#L7) [¶](#Instances.Keys)
```
func (i [Instances](#Instances)) Keys() [][string](/builtin#string)
```
Keys collects and returns all the keys in all instance values
####
type [MMVFlag](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L91) [¶](#MMVFlag)
```
type MMVFlag [int](/builtin#int)
```
MMVFlag represents an enumerated type to represent mmv flag values
```
const (
NoPrefixFlag [MMVFlag](#MMVFlag) = 1 << [iota](/builtin#iota)
ProcessFlag
SentinelFlag
)
```
values for MMVFlag
####
func (MMVFlag) [String](https://github.com/performancecopilot/speed/blob/v3.0.0/mmvflag_string.go#L17) [¶](#MMVFlag.String)
```
func (i [MMVFlag](#MMVFlag)) String() [string](/builtin#string)
```
####
type [Metric](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L387) [¶](#Metric)
```
type Metric interface {
// gets the unique id generated for this metric
ID() [uint32](/builtin#uint32)
// gets the name for the metric
Name() [string](/builtin#string)
// gets the type of a metric
Type() [MetricType](#MetricType)
// gets the unit of a metric
Unit() [MetricUnit](#MetricUnit)
// gets the semantics for a metric
Semantics() [MetricSemantics](#MetricSemantics)
// gets the description of a metric
Description() [string](/builtin#string)
}
```
Metric defines the general interface a type needs to implement to qualify as a valid PCP metric.
####
type [MetricSemantics](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L370) [¶](#MetricSemantics)
```
type MetricSemantics [int32](/builtin#int32)
```
MetricSemantics represents an enumerated type representing the possible values for the semantics of a metric.
```
const (
NoSemantics [MetricSemantics](#MetricSemantics) = [iota](/builtin#iota)
CounterSemantics
InstantSemantics
DiscreteSemantics
)
```
Possible values for MetricSemantics.
####
func (MetricSemantics) [String](https://github.com/performancecopilot/speed/blob/v3.0.0/metricsemantics_string.go#L17) [¶](#MetricSemantics.String)
```
func (i [MetricSemantics](#MetricSemantics)) String() [string](/builtin#string)
```
####
type [MetricType](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L17) [¶](#MetricType)
```
type MetricType [int32](/builtin#int32)
```
MetricType is an enumerated type representing all valid types for a metric.
```
const (
Int32Type [MetricType](#MetricType) = [iota](/builtin#iota)
Uint32Type
Int64Type
Uint64Type
FloatType
DoubleType
StringType
)
```
Possible values for a MetricType.
####
func (MetricType) [IsCompatible](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L66) [¶](#MetricType.IsCompatible)
```
func (m [MetricType](#MetricType)) IsCompatible(val interface{}) [bool](/builtin#bool)
```
IsCompatible checks if the passed value is compatible with the current MetricType.
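For example, a small sketch:
```
ok := speed.Int64Type.IsCompatible(int64(42)) // true: an int64 matches Int64Type
bad := speed.Int64Type.IsCompatible("42")     // false: a string does not
_, _ = ok, bad
```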
####
func (MetricType) [String](https://github.com/performancecopilot/speed/blob/v3.0.0/metrictype_string.go#L11) [¶](#MetricType.String)
```
func (i [MetricType](#MetricType)) String() [string](/builtin#string)
```
####
type [MetricUnit](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L134) [¶](#MetricUnit)
```
type MetricUnit interface {
[fmt](/fmt).[Stringer](/fmt#Stringer)
// return 32 bit PMAPI representation for the unit
// see: <https://github.com/performancecopilot/pcp/blob/master/src/include/pcp/pmapi.h#L61-L101>
PMAPI() [uint32](/builtin#uint32)
// add a space unit to the current unit at a specific dimension
Space([SpaceUnit](#SpaceUnit), [int8](/builtin#int8)) [MetricUnit](#MetricUnit)
// add a time unit to the current unit at a specific dimension
Time([TimeUnit](#TimeUnit), [int8](/builtin#int8)) [MetricUnit](#MetricUnit)
// add a count unit to the current unit at a specific dimension
Count([CountUnit](#CountUnit), [int8](/builtin#int8)) [MetricUnit](#MetricUnit)
}
```
MetricUnit defines the interface for a unit type for speed.
####
func [NewMetricUnit](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L160) [¶](#NewMetricUnit)
```
func NewMetricUnit() [MetricUnit](#MetricUnit)
```
NewMetricUnit returns a new object for initialization
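As an illustration, a composite unit such as "bytes per second" could be built through the interface above (a sketch, assuming `ByteUnit` and `SecondUnit` are available `SpaceUnit` and `TimeUnit` constants, which are not listed in this excerpt):
```
// sketch: compose a "bytes per second" unit
u := speed.NewMetricUnit().
    Space(speed.ByteUnit, 1).  // bytes, dimension +1
    Time(speed.SecondUnit, -1) // per second, dimension -1
_ = u.PMAPI() // 32 bit PMAPI encoding of the composed unit
```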
####
type [PCPClient](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L103) [¶](#PCPClient)
```
type PCPClient struct {
// contains filtered or unexported fields
}
```
PCPClient implements a client that can generate instrumentation for PCP
####
func [NewPCPClient](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L122) [¶](#NewPCPClient)
```
func NewPCPClient(name [string](/builtin#string)) (*[PCPClient](#PCPClient), [error](/builtin#error))
```
NewPCPClient initializes a new PCPClient object
####
func [NewPCPClientWithRegistry](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L127) [¶](#NewPCPClientWithRegistry)
```
func NewPCPClientWithRegistry(name [string](/builtin#string), registry *[PCPRegistry](#PCPRegistry)) (*[PCPClient](#PCPClient), [error](/builtin#error))
```
NewPCPClientWithRegistry initializes a new PCPClient object with the given registry
####
func (*PCPClient) [Length](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L174) [¶](#PCPClient.Length)
```
func (c *[PCPClient](#PCPClient)) Length() [int](/builtin#int)
```
Length returns the byte length of data in the mmv file written by the current writer
####
func (*PCPClient) [MustRegister](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L667) [¶](#PCPClient.MustRegister)
```
func (c *[PCPClient](#PCPClient)) MustRegister(m [Metric](#Metric))
```
MustRegister is simply a Register that can panic
####
func (*PCPClient) [MustRegisterIndom](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L679) [¶](#PCPClient.MustRegisterIndom)
```
func (c *[PCPClient](#PCPClient)) MustRegisterIndom(indom [InstanceDomain](#InstanceDomain))
```
MustRegisterIndom is simply a RegisterIndom that can panic
####
func (*PCPClient) [MustRegisterString](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L691) [¶](#PCPClient.MustRegisterString)
```
func (c *[PCPClient](#PCPClient)) MustRegisterString(str [string](/builtin#string), val interface{}, t [MetricType](#MetricType), s [MetricSemantics](#MetricSemantics), u [MetricUnit](#MetricUnit)) [Metric](#Metric)
```
MustRegisterString is simply a RegisterString that panics
####
func (*PCPClient) [MustStart](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L622) [¶](#PCPClient.MustStart)
```
func (c *[PCPClient](#PCPClient)) MustStart()
```
MustStart is a start that panics
####
func (*PCPClient) [MustStop](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L657) [¶](#PCPClient.MustStop)
```
func (c *[PCPClient](#PCPClient)) MustStop()
```
MustStop is a stop that panics
####
func (*PCPClient) [Register](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L664) [¶](#PCPClient.Register)
```
func (c *[PCPClient](#PCPClient)) Register(m [Metric](#Metric)) [error](/builtin#error)
```
Register is simply a shorthand for Registry().AddMetric
####
func (*PCPClient) [RegisterIndom](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L674) [¶](#PCPClient.RegisterIndom)
```
func (c *[PCPClient](#PCPClient)) RegisterIndom(indom [InstanceDomain](#InstanceDomain)) [error](/builtin#error)
```
RegisterIndom is simply a shorthand for Registry().AddInstanceDomain
####
func (*PCPClient) [RegisterString](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L686) [¶](#PCPClient.RegisterString)
```
func (c *[PCPClient](#PCPClient)) RegisterString(str [string](/builtin#string), val interface{}, t [MetricType](#MetricType), s [MetricSemantics](#MetricSemantics), u [MetricUnit](#MetricUnit)) ([Metric](#Metric), [error](/builtin#error))
```
RegisterString is simply a shorthand for Registry().AddMetricByString
####
func (*PCPClient) [Registry](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L142) [¶](#PCPClient.Registry)
```
func (c *[PCPClient](#PCPClient)) Registry() [Registry](#Registry)
```
Registry returns a writer's registry
####
func (*PCPClient) [SetFlag](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L147) [¶](#PCPClient.SetFlag)
```
func (c *[PCPClient](#PCPClient)) SetFlag(flag [MMVFlag](#MMVFlag)) [error](/builtin#error)
```
SetFlag sets the MMVFlag for the client
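A small sketch (flags are generally set before `Start` maps the MMV file):
```
c, err := speed.NewPCPClient("example")
if err != nil {
    panic(err)
}
if err := c.SetFlag(speed.ProcessFlag); err != nil {
    panic(err)
}
c.MustStart()
```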
####
func (*PCPClient) [Start](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L195) [¶](#PCPClient.Start)
```
func (c *[PCPClient](#PCPClient)) Start() [error](/builtin#error)
```
Start dumps existing registry data
####
func (*PCPClient) [Stop](https://github.com/performancecopilot/speed/blob/v3.0.0/client.go#L629) [¶](#PCPClient.Stop)
```
func (c *[PCPClient](#PCPClient)) Stop() [error](/builtin#error)
```
Stop removes existing mapping and cleans up
####
type [PCPCounter](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L667) [¶](#PCPCounter)
```
type PCPCounter struct {
// contains filtered or unexported fields
}
```
PCPCounter implements a PCP compatible Counter Metric.
####
func [NewPCPCounter](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L678) [¶](#NewPCPCounter)
```
func NewPCPCounter(val [int64](/builtin#int64), name [string](/builtin#string), desc ...[string](/builtin#string)) (*[PCPCounter](#PCPCounter), [error](/builtin#error))
```
NewPCPCounter creates a new PCPCounter instance.
It requires an initial int64 value and a metric name for construction.
Optionally, it can also take a couple of description strings that are used as short and long descriptions respectively.
Internally it creates a PCP SingletonMetric with Int64Type, CounterSemantics and CountUnit.
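For illustration, a short sketch (register the counter with a client to actually export it):
```
c, err := speed.NewPCPCounter(0, "example.jobs_done")
if err != nil {
    panic(err)
}
c.Up()       // counter is now 1
c.MustInc(5) // counter is now 6
total := c.Val()
_ = total
```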
####
func (*PCPCounter) [Inc](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L715) [¶](#PCPCounter.Inc)
```
func (c *[PCPCounter](#PCPCounter)) Inc(val [int64](/builtin#int64)) [error](/builtin#error)
```
Inc increases the stored counter's value by the passed increment.
####
func (PCPCounter) [Indom](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L594) [¶](#PCPCounter.Indom)
```
func (m PCPCounter) Indom() *[PCPInstanceDomain](#PCPInstanceDomain)
```
####
func (*PCPCounter) [MustInc](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L733) [¶](#PCPCounter.MustInc)
```
func (c *[PCPCounter](#PCPCounter)) MustInc(val [int64](/builtin#int64))
```
MustInc is Inc that panics on failure.
####
func (*PCPCounter) [Set](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L701) [¶](#PCPCounter.Set)
```
func (c *[PCPCounter](#PCPCounter)) Set(val [int64](/builtin#int64)) [error](/builtin#error)
```
Set sets the value of the counter.
####
func (*PCPCounter) [Up](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L740) [¶](#PCPCounter.Up)
```
func (c *[PCPCounter](#PCPCounter)) Up()
```
Up increases the counter by 1.
####
func (*PCPCounter) [Val](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L693) [¶](#PCPCounter.Val)
```
func (c *[PCPCounter](#PCPCounter)) Val() [int64](/builtin#int64)
```
Val returns the current value of the counter.
####
type [PCPCounterVector](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1105) [¶](#PCPCounterVector)
```
type PCPCounterVector struct {
// contains filtered or unexported fields
}
```
PCPCounterVector implements a CounterVector
####
func [NewPCPCounterVector](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1130) [¶](#NewPCPCounterVector)
```
func NewPCPCounterVector(values map[[string](/builtin#string)][int64](/builtin#int64), name [string](/builtin#string), desc ...[string](/builtin#string)) (*[PCPCounterVector](#PCPCounterVector), [error](/builtin#error))
```
NewPCPCounterVector creates a new instance of a PCPCounterVector.
It requires a metric name and a set of instance names and values as a map.
It can optionally accept a couple of strings as short and long descriptions of the metric.
Internally it uses a PCP InstanceMetric with Int64Type, CounterSemantics and CountUnit.
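A short sketch of how it might be used (the instance names here are made up):
```
cv, err := speed.NewPCPCounterVector(
    map[string]int64{"get": 0, "post": 0},
    "example.requests_total",
)
if err != nil {
    panic(err)
}
cv.MustInc(1, "get") // "get" instance is now 1
cv.UpAll()           // every instance incremented by 1
v, _ := cv.Val("get")
_ = v // 2
```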
####
func (*PCPCounterVector) [Inc](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1189) [¶](#PCPCounterVector.Inc)
```
func (c *[PCPCounterVector](#PCPCounterVector)) Inc(inc [int64](/builtin#int64), instance [string](/builtin#string)) [error](/builtin#error)
```
Inc increments the value of a particular instance of PCPCounterVector.
####
func (*PCPCounterVector) [IncAll](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1217) [¶](#PCPCounterVector.IncAll)
```
func (c *[PCPCounterVector](#PCPCounterVector)) IncAll(val [int64](/builtin#int64))
```
IncAll increments all instances by the same value and panics on an error.
####
func (PCPCounterVector) [Indom](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1027) [¶](#PCPCounterVector.Indom)
```
func (m PCPCounterVector) Indom() *[PCPInstanceDomain](#PCPInstanceDomain)
```
Indom returns the instance domain for the metric.
####
func (PCPCounterVector) [Instances](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1031) [¶](#PCPCounterVector.Instances)
```
func (m PCPCounterVector) Instances() [][string](/builtin#string)
```
Instances returns a slice containing all instances in the InstanceMetric.
Basically a shorthand for metric.Indom().Instances().
####
func (*PCPCounterVector) [MustInc](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1210) [¶](#PCPCounterVector.MustInc)
```
func (c *[PCPCounterVector](#PCPCounterVector)) MustInc(inc [int64](/builtin#int64), instance [string](/builtin#string))
```
MustInc panics if Inc fails.
####
func (*PCPCounterVector) [MustSet](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1175) [¶](#PCPCounterVector.MustSet)
```
func (c *[PCPCounterVector](#PCPCounterVector)) MustSet(val [int64](/builtin#int64), instance [string](/builtin#string))
```
MustSet panics if Set fails.
####
func (*PCPCounterVector) [Set](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1158) [¶](#PCPCounterVector.Set)
```
func (c *[PCPCounterVector](#PCPCounterVector)) Set(val [int64](/builtin#int64), instance [string](/builtin#string)) [error](/builtin#error)
```
Set sets the value of a particular instance of PCPCounterVector.
####
func (*PCPCounterVector) [SetAll](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1182) [¶](#PCPCounterVector.SetAll)
```
func (c *[PCPCounterVector](#PCPCounterVector)) SetAll(val [int64](/builtin#int64))
```
SetAll sets all instances to the same value and panics on an error.
####
func (*PCPCounterVector) [Up](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1224) [¶](#PCPCounterVector.Up)
```
func (c *[PCPCounterVector](#PCPCounterVector)) Up(instance [string](/builtin#string))
```
Up increments the value of a particular instance by 1.
####
func (*PCPCounterVector) [UpAll](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1227) [¶](#PCPCounterVector.UpAll)
```
func (c *[PCPCounterVector](#PCPCounterVector)) UpAll()
```
UpAll ups all instances and panics on an error.
####
func (*PCPCounterVector) [Val](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1145) [¶](#PCPCounterVector.Val)
```
func (c *[PCPCounterVector](#PCPCounterVector)) Val(instance [string](/builtin#string)) ([int64](/builtin#int64), [error](/builtin#error))
```
Val returns the value of a particular instance of PCPCounterVector.
####
type [PCPGauge](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L764) [¶](#PCPGauge)
```
type PCPGauge struct {
// contains filtered or unexported fields
}
```
PCPGauge defines a PCP compatible Gauge metric
####
func [NewPCPGauge](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L775) [¶](#NewPCPGauge)
```
func NewPCPGauge(val [float64](/builtin#float64), name [string](/builtin#string), desc ...[string](/builtin#string)) (*[PCPGauge](#PCPGauge), [error](/builtin#error))
```
NewPCPGauge creates a new PCPGauge instance.
It requires an initial float64 value and a metric name for construction.
Optionally it can also take a couple of description strings that are used as short and long descriptions respectively.
Internally it creates a PCP SingletonMetric with DoubleType, InstantSemantics and CountUnit.
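A short sketch:
```
g, err := speed.NewPCPGauge(0, "example.queue_depth")
if err != nil {
    panic(err)
}
g.MustSet(10)
g.MustInc(2.5) // gauge is now 12.5
g.MustDec(1.5) // gauge is now 11
_ = g.Val()
```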
####
func (*PCPGauge) [Dec](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L831) [¶](#PCPGauge.Dec)
```
func (g *[PCPGauge](#PCPGauge)) Dec(val [float64](/builtin#float64)) [error](/builtin#error)
```
Dec subtracts a value from the existing Gauge value.
####
func (*PCPGauge) [Inc](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L811) [¶](#PCPGauge.Inc)
```
func (g *[PCPGauge](#PCPGauge)) Inc(val [float64](/builtin#float64)) [error](/builtin#error)
```
Inc adds a value to the existing Gauge value.
####
func (PCPGauge) [Indom](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L594) [¶](#PCPGauge.Indom)
```
func (m PCPGauge) Indom() *[PCPInstanceDomain](#PCPInstanceDomain)
```
####
func (*PCPGauge) [MustDec](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L836) [¶](#PCPGauge.MustDec)
```
func (g *[PCPGauge](#PCPGauge)) MustDec(val [float64](/builtin#float64))
```
MustDec will panic if Dec fails.
####
func (*PCPGauge) [MustInc](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L824) [¶](#PCPGauge.MustInc)
```
func (g *[PCPGauge](#PCPGauge)) MustInc(val [float64](/builtin#float64))
```
MustInc will panic if Inc fails.
####
func (*PCPGauge) [MustSet](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L804) [¶](#PCPGauge.MustSet)
```
func (g *[PCPGauge](#PCPGauge)) MustSet(val [float64](/builtin#float64))
```
MustSet will panic if Set fails.
####
func (*PCPGauge) [Set](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L797) [¶](#PCPGauge.Set)
```
func (g *[PCPGauge](#PCPGauge)) Set(val [float64](/builtin#float64)) [error](/builtin#error)
```
Set sets the current value of the Gauge.
####
func (*PCPGauge) [Val](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L790) [¶](#PCPGauge.Val)
```
func (g *[PCPGauge](#PCPGauge)) Val() [float64](/builtin#float64)
```
Val returns the current value of the Gauge.
####
type [PCPGaugeVector](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1253) [¶](#PCPGaugeVector)
```
type PCPGaugeVector struct {
// contains filtered or unexported fields
}
```
PCPGaugeVector implements a GaugeVector
####
func [NewPCPGaugeVector](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1262) [¶](#NewPCPGaugeVector)
```
func NewPCPGaugeVector(values map[[string](/builtin#string)][float64](/builtin#float64), name [string](/builtin#string), desc ...[string](/builtin#string)) (*[PCPGaugeVector](#PCPGaugeVector), [error](/builtin#error))
```
NewPCPGaugeVector creates a new instance of a PCPGaugeVector.
It requires a name and map of instance names to their values.
Optionally, it can also accept a couple of strings providing more details about the metric.
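A short sketch with made-up instance names:
```
gv, err := speed.NewPCPGaugeVector(
    map[string]float64{"cpu0": 0, "cpu1": 0},
    "example.load",
)
if err != nil {
    panic(err)
}
gv.MustSet(0.75, "cpu0")
gv.IncAll(0.1) // cpu0 is now 0.85, cpu1 is now 0.1
v, _ := gv.Val("cpu1")
_ = v
```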
####
func (*PCPGaugeVector) [Dec](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1338) [¶](#PCPGaugeVector.Dec)
```
func (g *[PCPGaugeVector](#PCPGaugeVector)) Dec(inc [float64](/builtin#float64), instance [string](/builtin#string)) [error](/builtin#error)
```
Dec decrements the value of a particular instance of PCPGaugeVector
####
func (*PCPGaugeVector) [DecAll](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1344) [¶](#PCPGaugeVector.DecAll)
```
func (g *[PCPGaugeVector](#PCPGaugeVector)) DecAll(val [float64](/builtin#float64))
```
DecAll decrements all instances by the same value and panics on an error
####
func (*PCPGaugeVector) [Inc](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1311) [¶](#PCPGaugeVector.Inc)
```
func (g *[PCPGaugeVector](#PCPGaugeVector)) Inc(inc [float64](/builtin#float64), instance [string](/builtin#string)) [error](/builtin#error)
```
Inc increments the value of a particular instance of PCPGaugeVector
####
func (*PCPGaugeVector) [IncAll](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1331) [¶](#PCPGaugeVector.IncAll)
```
func (g *[PCPGaugeVector](#PCPGaugeVector)) IncAll(val [float64](/builtin#float64))
```
IncAll increments all instances by the same value and panics on an error
####
func (PCPGaugeVector) [Indom](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1027) [¶](#PCPGaugeVector.Indom)
```
func (m PCPGaugeVector) Indom() *[PCPInstanceDomain](#PCPInstanceDomain)
```
Indom returns the instance domain for the metric.
####
func (PCPGaugeVector) [Instances](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1031) [¶](#PCPGaugeVector.Instances)
```
func (m PCPGaugeVector) Instances() [][string](/builtin#string)
```
Instances returns a slice containing all instances in the InstanceMetric.
Basically a shorthand for metric.Indom().Instances().
####
func (*PCPGaugeVector) [MustDec](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1341) [¶](#PCPGaugeVector.MustDec)
```
func (g *[PCPGaugeVector](#PCPGaugeVector)) MustDec(inc [float64](/builtin#float64), instance [string](/builtin#string))
```
MustDec panics if Dec fails
####
func (*PCPGaugeVector) [MustInc](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1324) [¶](#PCPGaugeVector.MustInc)
```
func (g *[PCPGaugeVector](#PCPGaugeVector)) MustInc(inc [float64](/builtin#float64), instance [string](/builtin#string))
```
MustInc panics if Inc fails
####
func (*PCPGaugeVector) [MustSet](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1297) [¶](#PCPGaugeVector.MustSet)
```
func (g *[PCPGaugeVector](#PCPGaugeVector)) MustSet(val [float64](/builtin#float64), instance [string](/builtin#string))
```
MustSet panics if Set fails
####
func (*PCPGaugeVector) [Set](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1290) [¶](#PCPGaugeVector.Set)
```
func (g *[PCPGaugeVector](#PCPGaugeVector)) Set(val [float64](/builtin#float64), instance [string](/builtin#string)) [error](/builtin#error)
```
Set sets the value of a particular instance of PCPGaugeVector
####
func (*PCPGaugeVector) [SetAll](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1304) [¶](#PCPGaugeVector.SetAll)
```
func (g *[PCPGaugeVector](#PCPGaugeVector)) SetAll(val [float64](/builtin#float64))
```
SetAll sets all instances to the same value and panics on an error
####
func (*PCPGaugeVector) [Val](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1277) [¶](#PCPGaugeVector.Val)
```
func (g *[PCPGaugeVector](#PCPGaugeVector)) Val(instance [string](/builtin#string)) ([float64](/builtin#float64), [error](/builtin#error))
```
Val returns the value of a particular instance of PCPGaugeVector
####
type [PCPHistogram](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1372) [¶](#PCPHistogram)
```
type PCPHistogram struct {
// contains filtered or unexported fields
}
```
PCPHistogram implements a histogram for PCP backed by the coda hale hdrhistogram
<https://github.com/codahale/hdrhistogram>
####
func [NewPCPHistogram](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1420) [¶](#NewPCPHistogram)
```
func NewPCPHistogram(name [string](/builtin#string), low, high [int64](/builtin#int64), sigfigures [int](/builtin#int), unit [MetricUnit](#MetricUnit), desc ...[string](/builtin#string)) (*[PCPHistogram](#PCPHistogram), [error](/builtin#error))
```
NewPCPHistogram returns a new instance of PCPHistogram.
The lowest value for `low` is 0.
The highest value for `high` is 3,600,000,000.
`low` **must** be less than `high`.
The value of `sigfigures` can be between 1 and 5.
It also requires a unit to be explicitly passed for construction.
Optionally, a couple of description strings may be passed as the short and long descriptions of the metric.
####
func (*PCPHistogram) [Buckets](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1566) [¶](#PCPHistogram.Buckets)
```
func (h *[PCPHistogram](#PCPHistogram)) Buckets() []*[HistogramBucket](#HistogramBucket)
```
Buckets returns a list of histogram buckets.
####
func (*PCPHistogram) [High](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1444) [¶](#PCPHistogram.High)
```
func (h *[PCPHistogram](#PCPHistogram)) High() [int64](/builtin#int64)
```
High returns the maximum recordable value.
####
func (PCPHistogram) [Indom](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1027) [¶](#PCPHistogram.Indom)
```
func (m PCPHistogram) Indom() *[PCPInstanceDomain](#PCPInstanceDomain)
```
Indom returns the instance domain for the metric.
####
func (PCPHistogram) [Instances](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1031) [¶](#PCPHistogram.Instances)
```
func (m PCPHistogram) Instances() [][string](/builtin#string)
```
Instances returns a slice containing all instances in the InstanceMetric.
Basically a shorthand for metric.Indom().Instances().
####
func (*PCPHistogram) [Low](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1447) [¶](#PCPHistogram.Low)
```
func (h *[PCPHistogram](#PCPHistogram)) Low() [int64](/builtin#int64)
```
Low returns the minimum recordable value.
####
func (*PCPHistogram) [Max](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1450) [¶](#PCPHistogram.Max)
```
func (h *[PCPHistogram](#PCPHistogram)) Max() [int64](/builtin#int64)
```
Max returns the maximum recorded value so far.
####
func (*PCPHistogram) [Mean](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1537) [¶](#PCPHistogram.Mean)
```
func (h *[PCPHistogram](#PCPHistogram)) Mean() [float64](/builtin#float64)
```
Mean returns the mean of all values recorded so far.
####
func (*PCPHistogram) [Min](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1457) [¶](#PCPHistogram.Min)
```
func (h *[PCPHistogram](#PCPHistogram)) Min() [int64](/builtin#int64)
```
Min returns the minimum recorded value so far.
####
func (*PCPHistogram) [MustRecord](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1510) [¶](#PCPHistogram.MustRecord)
```
func (h *[PCPHistogram](#PCPHistogram)) MustRecord(val [int64](/builtin#int64))
```
MustRecord panics if Record fails.
####
func (*PCPHistogram) [MustRecordN](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1530) [¶](#PCPHistogram.MustRecordN)
```
func (h *[PCPHistogram](#PCPHistogram)) MustRecordN(val, n [int64](/builtin#int64))
```
MustRecordN panics if RecordN fails.
####
func (*PCPHistogram) [Percentile](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1558) [¶](#PCPHistogram.Percentile)
```
func (h *[PCPHistogram](#PCPHistogram)) Percentile(p [float64](/builtin#float64)) [int64](/builtin#int64)
```
Percentile returns the value at the passed percentile.
####
func (*PCPHistogram) [Record](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1497) [¶](#PCPHistogram.Record)
```
func (h *[PCPHistogram](#PCPHistogram)) Record(val [int64](/builtin#int64)) [error](/builtin#error)
```
Record records a new value.
####
func (*PCPHistogram) [RecordN](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1517) [¶](#PCPHistogram.RecordN)
```
func (h *[PCPHistogram](#PCPHistogram)) RecordN(val, n [int64](/builtin#int64)) [error](/builtin#error)
```
RecordN records multiple instances of the same value.
####
func (*PCPHistogram) [StandardDeviation](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1544) [¶](#PCPHistogram.StandardDeviation)
```
func (h *[PCPHistogram](#PCPHistogram)) StandardDeviation() [float64](/builtin#float64)
```
StandardDeviation returns the standard deviation of all values recorded so far.
####
func (*PCPHistogram) [Variance](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1551) [¶](#PCPHistogram.Variance)
```
func (h *[PCPHistogram](#PCPHistogram)) Variance() [float64](/builtin#float64)
```
Variance returns the variance of all values recorded so far.
####
type [PCPInstanceDomain](https://github.com/performancecopilot/speed/blob/v3.0.0/instance_domain.go#L25) [¶](#PCPInstanceDomain)
```
type PCPInstanceDomain struct {
// contains filtered or unexported fields
}
```
PCPInstanceDomain wraps a PCP compatible instance domain
####
func [NewPCPInstanceDomain](https://github.com/performancecopilot/speed/blob/v3.0.0/instance_domain.go#L36) [¶](#NewPCPInstanceDomain)
```
func NewPCPInstanceDomain(name [string](/builtin#string), instances [][string](/builtin#string), desc ...[string](/builtin#string)) (*[PCPInstanceDomain](#PCPInstanceDomain), [error](/builtin#error))
```
NewPCPInstanceDomain creates a new instance domain or returns an already created one for the passed name. NOTE: this is different from parfait's idea of generating ids for InstanceDomains. We simply generate a unique 32 bit hash for an instance domain name; if it has not already been created, we create it, otherwise we return the already created version.
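For illustration (the instance names are made up):
```
indom, err := speed.NewPCPInstanceDomain(
    "filesystem",
    []string{"/", "/home", "/var"},
)
if err != nil {
    panic(err)
}
_ = indom.InstanceCount()                                // 3
_ = indom.HasInstance("/home")                           // true
_ = indom.MatchInstances([]string{"/", "/home", "/var"}) // true
```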
####
func (*PCPInstanceDomain) [Description](https://github.com/performancecopilot/speed/blob/v3.0.0/instance_domain.go#L118) [¶](#PCPInstanceDomain.Description)
```
func (indom *[PCPInstanceDomain](#PCPInstanceDomain)) Description() [string](/builtin#string)
```
Description returns the description for PCPInstanceDomain
####
func (*PCPInstanceDomain) [HasInstance](https://github.com/performancecopilot/speed/blob/v3.0.0/instance_domain.go#L75) [¶](#PCPInstanceDomain.HasInstance)
```
func (indom *[PCPInstanceDomain](#PCPInstanceDomain)) HasInstance(name [string](/builtin#string)) [bool](/builtin#bool)
```
HasInstance returns true if an instance of the specified name is in the Indom
####
func (*PCPInstanceDomain) [ID](https://github.com/performancecopilot/speed/blob/v3.0.0/instance_domain.go#L81) [¶](#PCPInstanceDomain.ID)
```
func (indom *[PCPInstanceDomain](#PCPInstanceDomain)) ID() [uint32](/builtin#uint32)
```
ID returns the id for PCPInstanceDomain
####
func (*PCPInstanceDomain) [InstanceCount](https://github.com/performancecopilot/speed/blob/v3.0.0/instance_domain.go#L87) [¶](#PCPInstanceDomain.InstanceCount)
```
func (indom *[PCPInstanceDomain](#PCPInstanceDomain)) InstanceCount() [int](/builtin#int)
```
InstanceCount returns the number of instances in the current instance domain
####
func (*PCPInstanceDomain) [Instances](https://github.com/performancecopilot/speed/blob/v3.0.0/instance_domain.go#L92) [¶](#PCPInstanceDomain.Instances)
```
func (indom *[PCPInstanceDomain](#PCPInstanceDomain)) Instances() [][string](/builtin#string)
```
Instances returns a slice of defined instances for the instance domain
####
func (*PCPInstanceDomain) [MatchInstances](https://github.com/performancecopilot/speed/blob/v3.0.0/instance_domain.go#L103) [¶](#PCPInstanceDomain.MatchInstances)
```
func (indom *[PCPInstanceDomain](#PCPInstanceDomain)) MatchInstances(ins [][string](/builtin#string)) [bool](/builtin#bool)
```
MatchInstances returns true if the instance domain contains exactly the same instances as the passed slice
####
func (*PCPInstanceDomain) [Name](https://github.com/performancecopilot/speed/blob/v3.0.0/instance_domain.go#L84) [¶](#PCPInstanceDomain.Name)
```
func (indom *[PCPInstanceDomain](#PCPInstanceDomain)) Name() [string](/builtin#string)
```
Name returns the name for PCPInstanceDomain
####
func (*PCPInstanceDomain) [String](https://github.com/performancecopilot/speed/blob/v3.0.0/instance_domain.go#L122) [¶](#PCPInstanceDomain.String)
```
func (indom *[PCPInstanceDomain](#PCPInstanceDomain)) String() [string](/builtin#string)
```
####
type [PCPInstanceMetric](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1037) [¶](#PCPInstanceMetric)
```
type PCPInstanceMetric struct {
// contains filtered or unexported fields
}
```
PCPInstanceMetric represents a PCPMetric that can have multiple values over multiple instances in an instance domain.
####
func [NewPCPInstanceMetric](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1045) [¶](#NewPCPInstanceMetric)
```
func NewPCPInstanceMetric(vals [Instances](#Instances), name [string](/builtin#string), indom *[PCPInstanceDomain](#PCPInstanceDomain), t [MetricType](#MetricType), s [MetricSemantics](#MetricSemantics), u [MetricUnit](#MetricUnit), desc ...[string](/builtin#string)) (*[PCPInstanceMetric](#PCPInstanceMetric), [error](/builtin#error))
```
NewPCPInstanceMetric creates a new instance of PCPInstanceMetric.
It takes two extra optional strings as short and long description parameters,
which default to empty strings when not provided.
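A short sketch combining an instance domain with an instance metric (names and values are made up):
```
indom, _ := speed.NewPCPInstanceDomain("filesystem", []string{"/", "/home"})
m, err := speed.NewPCPInstanceMetric(
    speed.Instances{"/": int64(0), "/home": int64(0)},
    "example.fs_usage",
    indom,
    speed.Int64Type, speed.InstantSemantics, speed.OneUnit,
)
if err != nil {
    panic(err)
}
m.MustSetInstance(int64(42), "/home")
v, _ := m.ValInstance("/home")
_ = v // int64(42)
```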
####
func (PCPInstanceMetric) [Indom](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1027) [¶](#PCPInstanceMetric.Indom)
```
func (m PCPInstanceMetric) Indom() *[PCPInstanceDomain](#PCPInstanceDomain)
```
Indom returns the instance domain for the metric.
####
func (PCPInstanceMetric) [Instances](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1031) [¶](#PCPInstanceMetric.Instances)
```
func (m PCPInstanceMetric) Instances() [][string](/builtin#string)
```
Instances returns a slice containing all instances in the InstanceMetric.
Basically a shorthand for metric.Indom().Instances().
####
func (*PCPInstanceMetric) [MustSetInstance](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1076) [¶](#PCPInstanceMetric.MustSetInstance)
```
func (m *[PCPInstanceMetric](#PCPInstanceMetric)) MustSetInstance(val interface{}, instance [string](/builtin#string))
```
MustSetInstance is a SetInstance that panics.
####
func (*PCPInstanceMetric) [SetInstance](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1068) [¶](#PCPInstanceMetric.SetInstance)
```
func (m *[PCPInstanceMetric](#PCPInstanceMetric)) SetInstance(val interface{}, instance [string](/builtin#string)) [error](/builtin#error)
```
SetInstance sets the value for a particular instance of the metric.
####
func (*PCPInstanceMetric) [ValInstance](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L1060) [¶](#PCPInstanceMetric.ValInstance)
```
func (m *[PCPInstanceMetric](#PCPInstanceMetric)) ValInstance(instance [string](/builtin#string)) (interface{}, [error](/builtin#error))
```
ValInstance returns the value for a particular instance of the metric.
####
type [PCPMetric](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L446) [¶](#PCPMetric)
```
type PCPMetric interface {
[Metric](#Metric)
// a PCPMetric will always have an instance domain, even if it is nil
Indom() *[PCPInstanceDomain](#PCPInstanceDomain)
ShortDescription() [string](/builtin#string)
LongDescription() [string](/builtin#string)
}
```
PCPMetric defines the interface for a metric that is compatible with PCP.
####
type [PCPRegistry](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L48) [¶](#PCPRegistry)
```
type PCPRegistry struct {
// contains filtered or unexported fields
}
```
PCPRegistry implements a registry for PCP as the client
####
func [NewPCPRegistry](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L73) [¶](#NewPCPRegistry)
```
func NewPCPRegistry() *[PCPRegistry](#PCPRegistry)
```
NewPCPRegistry creates a new PCPRegistry object
####
func (*PCPRegistry) [AddInstanceDomain](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L135) [¶](#PCPRegistry.AddInstanceDomain)
```
func (r *[PCPRegistry](#PCPRegistry)) AddInstanceDomain(indom [InstanceDomain](#InstanceDomain)) [error](/builtin#error)
```
AddInstanceDomain will add a new instance domain to the current registry
####
func (*PCPRegistry) [AddInstanceDomainByName](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L223) [¶](#PCPRegistry.AddInstanceDomainByName)
```
func (r *[PCPRegistry](#PCPRegistry)) AddInstanceDomainByName(name [string](/builtin#string), instances [][string](/builtin#string)) ([InstanceDomain](#InstanceDomain), [error](/builtin#error))
```
AddInstanceDomainByName adds an instance domain using passed parameters
####
func (*PCPRegistry) [AddMetric](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L196) [¶](#PCPRegistry.AddMetric)
```
func (r *[PCPRegistry](#PCPRegistry)) AddMetric(m [Metric](#Metric)) [error](/builtin#error)
```
AddMetric will add a new metric to the current registry
####
func (*PCPRegistry) [AddMetricByString](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L322) [¶](#PCPRegistry.AddMetricByString)
```
func (r *[PCPRegistry](#PCPRegistry)) AddMetricByString(str [string](/builtin#string), val interface{}, t [MetricType](#MetricType), s [MetricSemantics](#MetricSemantics), u [MetricUnit](#MetricUnit)) ([Metric](#Metric), [error](/builtin#error))
```
AddMetricByString dynamically creates a PCPMetric
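A minimal sketch of driving a registry with the calls documented here; the import path and the Int32Type / InstantSemantics / OneUnit constants are assumptions taken from elsewhere in the package documentation.
```
package main

import (
	"fmt"
	"log"

	"github.com/performancecopilot/speed" // import path assumed
)

func main() {
	r := speed.NewPCPRegistry()

	// Register an instance domain from a name and its instances.
	if _, err := r.AddInstanceDomainByName("workers", []string{"alpha", "beta"}); err != nil {
		log.Fatal(err)
	}

	// Dynamically create a metric from a name string and an initial value.
	// Int32Type, InstantSemantics and OneUnit are assumed constants.
	if _, err := r.AddMetricByString("app.requests", int32(0),
		speed.Int32Type, speed.InstantSemantics, speed.OneUnit); err != nil {
		log.Fatal(err)
	}

	fmt.Println(r.MetricCount(), r.InstanceDomainCount(), r.InstanceCount())
}
```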
####
func (*PCPRegistry) [HasInstanceDomain](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L117) [¶](#PCPRegistry.HasInstanceDomain)
```
func (r *[PCPRegistry](#PCPRegistry)) HasInstanceDomain(name [string](/builtin#string)) [bool](/builtin#bool)
```
HasInstanceDomain returns true if the registry already has an indom of the specified name
####
func (*PCPRegistry) [HasMetric](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L126) [¶](#PCPRegistry.HasMetric)
```
func (r *[PCPRegistry](#PCPRegistry)) HasMetric(name [string](/builtin#string)) [bool](/builtin#bool)
```
HasMetric returns true if the registry already has a metric of the specified name
####
func (*PCPRegistry) [InstanceCount](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L81) [¶](#PCPRegistry.InstanceCount)
```
func (r *[PCPRegistry](#PCPRegistry)) InstanceCount() [int](/builtin#int)
```
InstanceCount returns the number of instances across all indoms in the registry
####
func (*PCPRegistry) [InstanceDomainCount](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L89) [¶](#PCPRegistry.InstanceDomainCount)
```
func (r *[PCPRegistry](#PCPRegistry)) InstanceDomainCount() [int](/builtin#int)
```
InstanceDomainCount returns the number of instance domains in the registry
####
func (*PCPRegistry) [MetricCount](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L97) [¶](#PCPRegistry.MetricCount)
```
func (r *[PCPRegistry](#PCPRegistry)) MetricCount() [int](/builtin#int)
```
MetricCount returns the number of metrics in the registry
####
func (*PCPRegistry) [StringCount](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L108) [¶](#PCPRegistry.StringCount)
```
func (r *[PCPRegistry](#PCPRegistry)) StringCount() [int](/builtin#int)
```
StringCount returns the number of strings in the registry
####
func (*PCPRegistry) [ValuesCount](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L105) [¶](#PCPRegistry.ValuesCount)
```
func (r *[PCPRegistry](#PCPRegistry)) ValuesCount() [int](/builtin#int)
```
ValuesCount returns the number of values in the registry
####
type [PCPSingletonMetric](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L600) [¶](#PCPSingletonMetric)
```
type PCPSingletonMetric struct {
// contains filtered or unexported fields
}
```
PCPSingletonMetric defines a singleton metric with no instance domain, only a value and a value offset.
####
func [NewPCPSingletonMetric](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L608) [¶](#NewPCPSingletonMetric)
```
func NewPCPSingletonMetric(val interface{}, name [string](/builtin#string), t [MetricType](#MetricType), s [MetricSemantics](#MetricSemantics), u [MetricUnit](#MetricUnit), desc ...[string](/builtin#string)) (*[PCPSingletonMetric](#PCPSingletonMetric), [error](/builtin#error))
```
NewPCPSingletonMetric creates a new instance of PCPSingletonMetric. It takes 2 extra optional strings as short and long description parameters,
which are set to blank strings when not present.
####
func (PCPSingletonMetric) [Indom](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L594) [¶](#PCPSingletonMetric.Indom)
```
func (m PCPSingletonMetric) Indom() *[PCPInstanceDomain](#PCPInstanceDomain)
```
####
func (*PCPSingletonMetric) [MustSet](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L639) [¶](#PCPSingletonMetric.MustSet)
```
func (m *[PCPSingletonMetric](#PCPSingletonMetric)) MustSet(val interface{})
```
MustSet is a Set that panics on failure.
####
func (*PCPSingletonMetric) [Set](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L631) [¶](#PCPSingletonMetric.Set)
```
func (m *[PCPSingletonMetric](#PCPSingletonMetric)) Set(val interface{}) [error](/builtin#error)
```
Set sets the current value of PCPSingletonMetric.
####
func (*PCPSingletonMetric) [String](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L645) [¶](#PCPSingletonMetric.String)
```
func (m *[PCPSingletonMetric](#PCPSingletonMetric)) String() [string](/builtin#string)
```
####
func (*PCPSingletonMetric) [Val](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L623) [¶](#PCPSingletonMetric.Val)
```
func (m *[PCPSingletonMetric](#PCPSingletonMetric)) Val() interface{}
```
Val returns the current Set value of PCPSingletonMetric.
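Putting the PCPSingletonMetric methods above together, a minimal sketch might look like the following; the import path and the Int32Type / CounterSemantics / OneUnit constants are assumptions from other parts of the documentation.
```
package main

import (
	"fmt"
	"log"

	"github.com/performancecopilot/speed" // import path assumed
)

func main() {
	// Optional short and long descriptions follow the required arguments.
	counter, err := speed.NewPCPSingletonMetric(
		int32(0), "example.counter",
		speed.Int32Type, speed.CounterSemantics, speed.OneUnit, // assumed constants
		"short description", "a longer description",
	)
	if err != nil {
		log.Fatal(err)
	}

	counter.MustSet(int32(7)) // panics instead of returning an error
	fmt.Println(counter.Val())
	fmt.Println(counter.String())
}
```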
####
type [PCPTimer](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L859) [¶](#PCPTimer)
```
type PCPTimer struct {
// contains filtered or unexported fields
}
```
PCPTimer implements a PCP-compatible Timer. It also functionally implements a metric with the elapsed type from PCP.
####
func [NewPCPTimer](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L870) [¶](#NewPCPTimer)
```
func NewPCPTimer(name [string](/builtin#string), unit [TimeUnit](#TimeUnit), desc ...[string](/builtin#string)) (*[PCPTimer](#PCPTimer), [error](/builtin#error))
```
NewPCPTimer creates a new PCPTimer instance of the specified unit.
It requires a metric name and a TimeUnit for construction.
It can optionally take a couple of description strings.
Internally it uses a PCP SingletonMetric.
####
func (PCPTimer) [Indom](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L594) [¶](#PCPTimer.Indom)
```
func (m PCPTimer) Indom() *[PCPInstanceDomain](#PCPInstanceDomain)
```
####
func (*PCPTimer) [Reset](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L885) [¶](#PCPTimer.Reset)
```
func (t *[PCPTimer](#PCPTimer)) Reset() [error](/builtin#error)
```
Reset resets the timer to 0
####
func (*PCPTimer) [Start](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L897) [¶](#PCPTimer.Start)
```
func (t *[PCPTimer](#PCPTimer)) Start() [error](/builtin#error)
```
Start signals the timer to start monitoring.
####
func (*PCPTimer) [Stop](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L911) [¶](#PCPTimer.Stop)
```
func (t *[PCPTimer](#PCPTimer)) Stop() ([float64](/builtin#float64), [error](/builtin#error))
```
Stop signals the timer to end monitoring and returns the elapsed time so far.
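A small usage sketch for PCPTimer; the import path below is an assumption, while MillisecondUnit is one of the TimeUnit constants listed later in this document.
```
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/performancecopilot/speed" // import path assumed
)

func main() {
	t, err := speed.NewPCPTimer("example.elapsed", speed.MillisecondUnit)
	if err != nil {
		log.Fatal(err)
	}

	if err := t.Start(); err != nil {
		log.Fatal(err)
	}
	time.Sleep(50 * time.Millisecond) // the work being timed

	elapsed, err := t.Stop() // elapsed time accumulated so far
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(elapsed)
}
```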
####
type [Registry](https://github.com/performancecopilot/speed/blob/v3.0.0/registry.go#L12) [¶](#Registry)
```
type Registry interface {
// checks if an instance domain of the passed name is already present or not
HasInstanceDomain(name [string](/builtin#string)) [bool](/builtin#bool)
// checks if a metric of the passed name is already present or not
HasMetric(name [string](/builtin#string)) [bool](/builtin#bool)
// returns the number of Metrics in the current registry
MetricCount() [int](/builtin#int)
// returns the number of Values in the current registry
ValuesCount() [int](/builtin#int)
// returns the number of Instance Domains in the current registry
InstanceDomainCount() [int](/builtin#int)
// returns the number of instances across all instance domains in the current registry
InstanceCount() [int](/builtin#int)
// returns the number of non null strings initialized in the current registry
StringCount() [int](/builtin#int)
// adds a InstanceDomain object to the writer
AddInstanceDomain([InstanceDomain](#InstanceDomain)) [error](/builtin#error)
// adds a InstanceDomain object after constructing it using passed name and instances
AddInstanceDomainByName(name [string](/builtin#string), instances [][string](/builtin#string)) ([InstanceDomain](#InstanceDomain), [error](/builtin#error))
// adds a Metric object to the writer
AddMetric([Metric](#Metric)) [error](/builtin#error)
// adds a Metric object after parsing the passed string for Instances and InstanceDomains
AddMetricByString(name [string](/builtin#string), val interface{}, t [MetricType](#MetricType), s [MetricSemantics](#MetricSemantics), u [MetricUnit](#MetricUnit)) ([Metric](#Metric), [error](/builtin#error))
}
```
Registry defines a valid set of instance domains and metrics
####
type [SingletonMetric](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L410) [¶](#SingletonMetric)
```
type SingletonMetric interface {
[Metric](#Metric)
// gets the value of the metric
Val() interface{}
// sets the value of the metric to a value, optionally returns an error on failure
Set(interface{}) [error](/builtin#error)
// tries to set and panics on error
MustSet(interface{})
}
```
SingletonMetric defines the interface for a metric that stores only one value.
####
type [SpaceUnit](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L265) [¶](#SpaceUnit)
```
type SpaceUnit [uint32](/builtin#uint32)
```
SpaceUnit is an enumerated type representing all units for space.
```
const (
ByteUnit [SpaceUnit](#SpaceUnit) = 1<<28 | [iota](/builtin#iota)<<16
KilobyteUnit
MegabyteUnit
GigabyteUnit
TerabyteUnit
PetabyteUnit
ExabyteUnit
)
```
Possible values for SpaceUnit.
####
func (SpaceUnit) [Count](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L297) [¶](#SpaceUnit.Count)
```
func (s [SpaceUnit](#SpaceUnit)) Count(c [CountUnit](#CountUnit), dimension [int8](/builtin#int8)) [MetricUnit](#MetricUnit)
```
Count adds a count unit to the current unit at a specific dimension
####
func (SpaceUnit) [PMAPI](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L282) [¶](#SpaceUnit.PMAPI)
```
func (s [SpaceUnit](#SpaceUnit)) PMAPI() [uint32](/builtin#uint32)
```
PMAPI returns the PMAPI representation for a SpaceUnit. For space units, bits 0-3 are 1 and bits 13-16 are the scale.
####
func (SpaceUnit) [Space](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L287) [¶](#SpaceUnit.Space)
```
func (s [SpaceUnit](#SpaceUnit)) Space([SpaceUnit](#SpaceUnit), [int8](/builtin#int8)) [MetricUnit](#MetricUnit)
```
Space adds a space unit to the current unit at a specific dimension
####
func (SpaceUnit) [String](https://github.com/performancecopilot/speed/blob/v3.0.0/spaceunit_string.go#L27) [¶](#SpaceUnit.String)
```
func (i [SpaceUnit](#SpaceUnit)) String() [string](/builtin#string)
```
####
func (SpaceUnit) [Time](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L292) [¶](#SpaceUnit.Time)
```
func (s [SpaceUnit](#SpaceUnit)) Time(t [TimeUnit](#TimeUnit), dimension [int8](/builtin#int8)) [MetricUnit](#MetricUnit)
```
Time adds a time unit to the current unit at a specific dimension
####
type [TimeUnit](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L302) [¶](#TimeUnit)
```
type TimeUnit [uint32](/builtin#uint32)
```
TimeUnit is an enumerated type representing all possible units for representing time.
```
const (
NanosecondUnit [TimeUnit](#TimeUnit) = 1<<24 | [iota](/builtin#iota)<<12
MicrosecondUnit
MillisecondUnit
SecondUnit
MinuteUnit
HourUnit
)
```
Possible values for TimeUnit.
For time units, bits 4-7 are 1 and bits 17-20 are the scale.
####
func (TimeUnit) [Count](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L333) [¶](#TimeUnit.Count)
```
func (t [TimeUnit](#TimeUnit)) Count(c [CountUnit](#CountUnit), dimension [int8](/builtin#int8)) [MetricUnit](#MetricUnit)
```
Count adds a count unit to the current unit at a specific dimension
####
func (TimeUnit) [PMAPI](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L318) [¶](#TimeUnit.PMAPI)
```
func (t [TimeUnit](#TimeUnit)) PMAPI() [uint32](/builtin#uint32)
```
PMAPI returns the PMAPI representation for a TimeUnit.
####
func (TimeUnit) [Space](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L323) [¶](#TimeUnit.Space)
```
func (t [TimeUnit](#TimeUnit)) Space(s [SpaceUnit](#SpaceUnit), dimension [int8](/builtin#int8)) [MetricUnit](#MetricUnit)
```
Space adds a space unit to the current unit at a specific dimension
####
func (TimeUnit) [String](https://github.com/performancecopilot/speed/blob/v3.0.0/timeunit_string.go#L25) [¶](#TimeUnit.String)
```
func (i [TimeUnit](#TimeUnit)) String() [string](/builtin#string)
```
####
func (TimeUnit) [Time](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L328) [¶](#TimeUnit.Time)
```
func (t [TimeUnit](#TimeUnit)) Time([TimeUnit](#TimeUnit), [int8](/builtin#int8)) [MetricUnit](#MetricUnit)
```
Time adds a time unit to the current unit at a specific dimension
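Combining the SpaceUnit and TimeUnit methods above yields composite units. A hedged sketch of a throughput unit follows; interpreting the dimension argument as an exponent ("per second" being -1) is an assumption based on PCP's unit model, and the import path is likewise assumed.
```
package main

import (
	"fmt"

	"github.com/performancecopilot/speed" // import path assumed
)

func main() {
	// Megabytes per second: a space unit combined with a time unit at dimension -1.
	throughput := speed.MegabyteUnit.Time(speed.SecondUnit, -1)
	// The result is a MetricUnit and can be passed wherever one is expected,
	// e.g. to NewPCPSingletonMetric.
	fmt.Println(throughput)
}
```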
####
type [Timer](https://github.com/performancecopilot/speed/blob/v3.0.0/metrics.go#L848) [¶](#Timer)
```
type Timer interface {
[Metric](#Metric)
Start() [error](/builtin#error)
Stop() ([float64](/builtin#float64), [error](/builtin#error))
}
```
Timer defines a metric that accumulates time periods. Start signals the beginning of monitoring.
Stop signals the end of monitoring, adds the elapsed time to the accumulated time, and returns it. |
lowpassFilter | cran | R | Package ‘lowpassFilter’
October 13, 2022
Title Lowpass Filtering
Version 1.0-2
Depends R (>= 3.0.0)
Imports Rcpp (>= 0.12.3), stats, methods
LinkingTo Rcpp
Suggests testthat
Description Creates lowpass filters which are commonly used in ion channel recordings. It supports
generation of random numbers that are filtered, i.e. follow a model for ion channel recordings,
see <doi:10.1109/TNB.2018.2845126>. Furthermore, time continuous convolutions of piecewise
constant signals with the kernel of lowpass filters can be computed.
License GPL-3
Encoding UTF-8
NeedsCompilation yes
Author <NAME> [aut, cre],
<NAME> [ctb],
<NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-04-29 18:40:02 UTC
R topics documented:
lowpassFilter-packag... 2
convolv... 3
deconvolv... 4
helpFunctionsFilte... 5
lowpassFilte... 7
randomGeneratio... 9
lowpassFilter-package Lowpass Filtering
Description
Creates lowpass filters and offers further functionalities around them. Lowpass filters are commonly
used in ion channel recordings.
Details
The main function of this package is lowpassFilter which creates lowpass filters, currently only
Bessel filters are supported. randomGeneration and randomGenerationMA allow to generate ran-
dom numbers that are filtered, i.e. follow a model for ion channel recordings, see (Pein et al., 2018,
2020). getConvolution, getConvolutionJump, and getConvolutionPeak allow to compute the
convolution of a signal with the kernel of a lowpass filter.
References
<NAME>., <NAME>., <NAME>., and <NAME>. (2020) Heterogeneous idealization of ion channel
recordings - Open channel noise. Submitted.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2018) Fully-automatic mul-
tiresolution idealization for filtered ion channel recordings: flickering event detection. IEEE Trans.
Nanobioscience, 17(3):300-320.
<NAME>. (2017) Heterogeneous Multiscale Change-Point Inference and its Application to Ion Chan-
nel Recordings. PhD thesis, Georg-August-Universität Göttingen. http://hdl.handle.net/11858/00-
1735-0000-002E-E34A-7.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>.
(2013) Idealizing ion channel recordings by a jump segmentation multiresolution filter. IEEE Trans.
Nanobioscience, 12(4):376-386.
See Also
lowpassFilter, randomGeneration, randomGenerationMA, getConvolution, getConvolutionJump,
getConvolutionPeak
Examples
# creates a lowpass filter
filter <- lowpassFilter(type = "bessel", param = list(pole = 4, cutoff = 0.1), sr = 1e4)
time <- 1:4000 / filter$sr
# creates a piecewise constant signal with a single peak
stepfun <- getSignalPeak(time, cp1 = 0.2, cp2 = 0.2 + 3 / filter$sr,
value = 20, leftValue = 40, rightValue = 40)
# computes the convolution of the signal with the kernel of the lowpass filter
signal <- getConvolutionPeak(time, cp1 = 0.2, cp2 = 0.2 + 3 / filter$sr,
value = 20, leftValue = 40, rightValue = 40,
filter = filter)
# generates random numbers that are filtered
data <- randomGenerationMA(n = 4000, filter = filter, signal = signal, noise = 1.4)
# generated data
plot(time, data, pch = 16)
# zoom into the single peak
plot(time, data, pch = 16, xlim = c(0.199, 0.202), ylim = c(19, 45))
lines(time, stepfun, col = "blue", type = "s", lwd = 2)
lines(time, signal, col = "red", lwd = 2)
# use of data randomGeneration instead
data <- randomGeneration(n = 4000, filter = filter, signal = signal, noise = 1.4)
# similar result
plot(time, data, pch = 16, xlim = c(0.199, 0.202), ylim = c(19, 45))
lines(time, stepfun, col = "blue", type = "s", lwd = 2)
lines(time, signal, col = "red", lwd = 2)
convolve Time discrete convolution
Description
For developers only; computes a time discrete convolution.
Usage
.convolve(val, kern)
Arguments
val a numeric vector giving the values
kern a numeric vector giving the time discrete kernel
Value
A numeric vector giving the convolution.
See Also
lowpassFilter
deconvolve Deconvolution of a single jump / isolated peak
Description
For developers only; computes the deconvolution of a single jump or an isolated peak assuming that
the observations are lowpass filtered. More details are given in (Pein et al., 2018).
Usage
.deconvolveJump(grid, observations, time, leftValue, rightValue,
typeFilter, inputFilter, covariances)
.deconvolvePeak(gridLeft, gridRight, observations, time, leftValue, rightValue,
typeFilter, inputFilter, covariances, tolerance)
Arguments
grid, gridLeft, gridRight
numeric vectors giving the potential time points of the single jump, of the left
and right jump points of the peak, respectively
observations a numeric vector giving the observed data
time a numeric vector of length length(observations) giving the time points at
which the observations are observed
leftValue, rightValue
single numerics giving the value (conductance level) before and after the jump /
peak, respectively
typeFilter, inputFilter
a description of the assumed lowpass filter, usually computed by lowpassFilter
covariances a numeric vector giving the (regularized) covariances of the observations
tolerance a single numeric giving a tolerance for the decision whether the left jump point
is smaller than the right jump point
Value
For .deconvolveJump a single numeric giving the jump point. For .deconvolvePeak a list con-
taining the entries left, right and value giving the left and right jump point and the value of the
peak, respectively.
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2018) Fully-automatic mul-
tiresolution idealization for filtered ion channel recordings: flickering event detection. IEEE Trans.
Nanobioscience, 17(3):300-320.
See Also
lowpassFilter
helpFunctionsFilter Convolved piecewise constant signals
Description
Creates piecewise constant signals with a single jump / peak. Computes the convolution of piece-
wise constant signals with the kernel of a lowpass filter.
Usage
getConvolution(t, stepfun, filter, truncated = TRUE)
getSignalJump(t, cp, leftValue, rightValue)
getConvolutionJump(t, cp, leftValue, rightValue, filter, truncated = TRUE)
getSignalPeak(t, cp1, cp2, value, leftValue, rightValue)
getConvolutionPeak(t, cp1, cp2, value, leftValue, rightValue, filter, truncated = TRUE)
Arguments
t a numeric vector giving the time points at which the signal / convolution should
be computed
stepfun specification of the piecewise constant signal, i.e. a data.frame with named
arguments leftEnd, rightEnd and value giving the start and end points of the
constant segments and the values on the segments, for instance an object of class
stepblock as available by the package ’stepR’
cp, cp1, cp2 a single numeric giving the location of the single, first and second jump point,
respectively
value, leftValue, rightValue
a single numeric giving the function value at, before and after the peak / jump,
respectively
filter an object of class lowpassFilter giving the analogue lowpass filter
truncated a single logical (not NA) indicating whether the signal should be convolved with
the truncated or the untruncated filter kernel
Value
a numeric of length length(t) giving the signal / convolution at time points t
References
<NAME>., <NAME>., <NAME>., and <NAME>. (2020) Heterogeneous idealization of ion channel
recordings - Open channel noise. Submitted.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2018) Fully-automatic mul-
tiresolution idealization for filtered ion channel recordings: flickering event detection. IEEE Trans.
Nanobioscience, 17(3):300-320.
<NAME>. (2017) Heterogeneous Multiscale Change-Point Inference and its Application to Ion Chan-
nel Recordings. PhD thesis, Georg-August-Universität Göttingen. http://hdl.handle.net/11858/00-
1735-0000-002E-E34A-7.
See Also
lowpassFilter
Examples
# creating and plotting a signal with a single jump at 0 from 0 to 1
time <- seq(-2, 13, 0.01)
signal <- getSignalJump(time, 0, 0, 1)
plot(time, signal, type = "l")
# setting up the filter
filter <- lowpassFilter(param = list(pole = 4, cutoff = 0.1))
# convolution with the truncated filter
convolution <- getConvolutionJump(time, 0, 0, 1, filter)
lines(time, convolution, col = "red")
# without truncating the filter, looks almost equal
convolution <- getConvolutionJump(time, 0, 0, 1, filter, truncated = FALSE)
lines(time, convolution, col = "blue")
# creating and plotting a signal with a single peak with jumps
# at 0 and at 3 from 0 to 1 to 0
time <- seq(-2, 16, 0.01)
signal <- getSignalPeak(time, 0, 3, 1, 0, 0)
plot(time, signal, type = "l")
# convolution with the truncated filter
convolution <- getConvolutionPeak(time, 0, 3, 1, 0, 0, filter)
lines(time, convolution, col = "red")
# without truncating the filter, looks almost equal
convolution <- getConvolutionPeak(time, 0, 3, 1, 0, 0, filter, truncated = FALSE)
lines(time, convolution, col = "blue")
# doing the same with getConvolution
# signal can also be an object of class stepblock instead,
# e.g. constructed by stepR::stepblock
signal <- data.frame(value = c(0, 1, 0), leftEnd = c(-2, 0, 3), rightEnd = c(0, 3, 16))
convolution <- getConvolution(time, signal, filter)
lines(time, convolution, col = "red")
convolution <- getConvolution(time, signal, filter, truncated = FALSE)
lines(time, convolution, col = "blue")
# more complicated signal
time <- seq(-2, 21, 0.01)
signal <- data.frame(value = c(0, 10, 0, 50, 0), leftEnd = c(-2, 0, 3, 6, 8),
rightEnd = c(0, 3, 6, 8, 21))
convolution <- getConvolution(time, signal, filter)
plot(time, convolution, col = "red", type = "l")
convolution <- getConvolution(time, signal, filter, truncated = FALSE)
lines(time, convolution, col = "blue")
lowpassFilter Lowpass filtering
Description
Creates a lowpass filter.
Usage
lowpassFilter(type = c("bessel"), param, sr = 1, len = NULL, shift = 0.5)
## S3 method for class 'lowpassFilter'
print(x, ...)
Arguments
type a string specifying the type of the filter, currently only Bessel filters are sup-
ported
param a list specifying the parameters of the filter depending on type. For "bessel"
the entries pole and cutoff have to be specified and no other named entries
are allowed. pole has to be a single integer giving the number of poles (order).
cutoff has to be a single positive numeric not larger than 1 giving the normal-
ized cutoff frequency, i.e. the cutoff frequency (in the temporal domain) of the
filter divided by the sampling rate
sr a single numeric giving the sampling rate
len a single integer giving the filter length of the truncated and digitised filter, see
Value for more details. By default (NULL) it is chosen such that the autocorre-
lation function is below 1e-3 at len / sr and at all larger lags (len + i) / sr,
with i a positive integer
shift a single numeric between 0 and 1 giving a shift for the digitised filter, i.e. kernel
and step are obtained by evaluating the corresponding functions at (0:len +
shift) / sr
x the object
... for generic methods only
Value
An object of class lowpassFilter, i.e. a list that contains
"type", "param", "sr", "len" the corresponding arguments
"kernfun" the kernel function of the filter, obtained as the Laplace transform of the corresponding
transfer function
"stepfun" the step-response of the filter, i.e. the antiderivative of the filter kernel
"acfun" the autocorrelation function, i.e. the convolution of the filter kernel with itself
"acAntiderivative" the antiderivative of the autocorrelation function
"truncatedKernfun" the kernel function of the at len / sr truncated filter, i.e. kernfun truncated
and rescaled such that the new kernel still integrates to 1
"truncatedStepfun" the step-response of the at len / sr truncated filter, i.e. the antiderivative of
the kernel of the truncated filter
"truncatedAcfun" the autocorrelation function of the at len / sr truncated filter, i.e. the convo-
lution of the kernel of the truncated filter with itself
"truncatedAcAntiderivative" the antiderivative of the autocorrelation function of the at len /
sr truncated filter
"kern" the digitised filter kernel normalised to one, i.e. kernfun((0:len + shift) / sr) / sum(kernfun((0:len
+ shift) / sr))
"step" the digitised step-response of the filter, i.e. stepfun((0:len + shift) / sr)
"acf" the discrete autocorrelation, i.e. acfun(0:len / sr)
"jump" the last index of the left half of the filter, i.e. min(which(ret$step >= 0.5)) - 1L, it
indicates how much a jump is shifted in time by a convolution of the signal with the digitised
kernel of the lowpassfilter; if all values are below 0.5, len is returned with a warning
"number" for developers; an integer indicating the type of the filter
"list" for developers; a list containing precomputed quantities to recreate the filter in C++
References
<NAME>., <NAME>., <NAME>., and <NAME>. (2020) Heterogeneous idealization of ion channel
recordings - Open channel noise. Submitted.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2018) Fully-automatic mul-
tiresolution idealization for filtered ion channel recordings: flickering event detection. IEEE Trans.
Nanobioscience, 17(3):300-320.
<NAME>. (2017) Heterogeneous Multiscale Change-Point Inference and its Application to Ion Chan-
nel Recordings. PhD thesis, Georg-August-Universität Göttingen. http://hdl.handle.net/11858/00-
1735-0000-002E-E34A-7.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>.
(2013) Idealizing ion channel recordings by a jump segmentation multiresolution filter. IEEE Trans.
Nanobioscience, 12(4):376-386.
See Also
filter
Examples
filter <- lowpassFilter(type = "bessel", param = list(pole = 4L, cutoff = 1e3 / 1e4),
sr = 1e4)
# filter kernel, truncated version
plot(filter$kernfun, xlim = c(0, 20 / filter$sr))
t <- seq(0, 20 / filter$sr, 0.01 / filter$sr)
# truncated version looks very similar
lines(t, filter$truncatedKernfun(t), col = "red")
# filter$len (== 11) is chosen automatically
# this ensures that filter$acf < 1e-3 for this lag and at all larger lags
plot(filter$acfun, xlim = c(0, 20 / filter$sr), ylim = c(-0.003, 0.003))
abline(h = 0.001, lty = "22")
abline(h = -0.001, lty = "22")
abline(v = (filter$len - 1L) / filter$sr, col = "grey")
abline(v = filter$len / filter$sr, col = "red")
# filter with sr == 1
filter <- lowpassFilter(type = "bessel", param = list(pole = 4L, cutoff = 1e3 / 1e4))
# filter kernel and its truncated version
plot(filter$kernfun, xlim = c(0, 20 / filter$sr))
t <- seq(0, 20 / filter$sr, 0.01 / filter$sr)
# truncated version looks very similar
lines(t, filter$truncatedKernfun(t), col = "red")
# digitised filter
points((0:filter$len + 0.5) / filter$sr, filter$kern, col = "red", pch = 16)
# without a shift
filter <- lowpassFilter(type = "bessel", param = list(pole = 4L, cutoff = 1e3 / 1e4),
shift = 0)
# filter$kern starts with zero
points(0:filter$len / filter$sr, filter$kern, col = "blue", pch = 16)
# much shorter filter
filter <- lowpassFilter(type = "bessel", param = list(pole = 4L, cutoff = 1e3 / 1e4),
len = 4L)
points((0:filter$len + 0.5) / filter$sr, filter$kern, col = "darkgreen", pch = 16)
randomGeneration Random number generation
Description
Generate random numbers that are filtered. Both, signal and noise, are convolved with the given
lowpass filter, see details. Can be used to generate synthetic data resembling ion channel recordings,
please see (Pein et al., 2018, 2020) for the exact models.
Usage
randomGeneration(n, filter, signal = 0, noise = 1, oversampling = 100L, seed = n,
startTime = 0, truncated = TRUE)
randomGenerationMA(n, filter, signal = 0, noise = 1, seed = n,
startTime = 0, truncated = TRUE)
Arguments
n a single positive integer giving the number of observations that should be gen-
erated
filter an object of class lowpassFilter giving the analogue lowpass filter
signal either a numeric of length 1 or of length n giving the convolved signal, i.e. the
mean of the random numbers, or an object that can be passed to getConvolution,
i.e. an object of class stepblock, see Examples, giving the signal that will be
convolved with the kernel of the lowpass filter filter
noise for randomGenerationMA a single positive finite numeric giving the constant
noise level, for randomGeneration either a numeric of length 1 or of length
(n + filter$len - 1L) * oversampling or an object of class stepblock, see
Examples, giving the noise of the random errors, see Details
oversampling a single positive integer giving the factor by which the errors should be over-
sampled, see Details
seed will be passed to set.seed to set a seed, set.seed will not be called if this
argument is set to "no", i.e. a single value, interpreted as an integer, NULL or
"no"
startTime a single finite numeric giving the time at which sampling should start
truncated a single logical (not NA) indicating whether the signal should be convolved with
the truncated or the untruncated filter kernel
Details
As discussed in (Pein et al., 2018) and (Pein et al., 2020), in ion channel recordings the recorded
data points can be modelled as equidistant sampled at rate filter$sr from the convolution of a
piecewise constant signal perturbed by Gaussian white noise scaled by the noise level with the ker-
nel of an analogue lowpass filter. The noise level is either constant (homogeneous noise, see (Pein et
al., 2018) ) or itself varying (heterogeneous noise, see (Pein et al., 2020) ). randomGeneration and
randomGenerationMA generate synthetic data from such models. randomGeneration allows ho-
mogeneous and heterogeneous noise, while randomGenerationMA only allows homogeneous noise,
i.e. noise has to be a single numeric giving the constant noise level. The resulting observations
represent the conductance at time points startTime + 1:n / filter$sr.
The generated observations are the sum of a convolved signal evaluated at those time points plus
centred Gaussian errors that are correlated (coloured noise), because of the filtering, and scaled by
the noise level. The convolved signal evaluated at those time points can either be specified in signal
directly or signal can specify a piecewise constant signal that will be convolved with the filter
using getConvolution and evaluated at those time points. randomGenerationMA computes a mov-
ing average process with the desired autocorrelation to generate random errors. randomGeneration
oversamples the error, i.e. generates errors at time points startTime + (seq(1 - filter$len + 1
/ oversampling, n, 1 / oversampling) - 1 / 2 / oversampling) / filter$sr, which will then
be convolved with the filter. For this function noise can either give the noise levels at those over-
sampled time points or specify a piecewise constant function that will be automatically evaluated at
those time points.
Value
a numeric vector of length n giving the generated random numbers
References
<NAME>., <NAME>., <NAME>., and <NAME>. (2020) Heterogeneous idealization of ion channel
recordings - Open channel noise. Submitted.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2018) Fully-automatic mul-
tiresolution idealization for filtered ion channel recordings: flickering event detection. IEEE Trans.
Nanobioscience, 17(3):300-320.
<NAME>. (2017) Heterogeneous Multiscale Change-Point Inference and its Application to Ion Chan-
nel Recordings. PhD thesis, Georg-August-Universität Göttingen. http://hdl.handle.net/11858/00-
1735-0000-002E-E34A-7.
See Also
lowpassFilter, getConvolution
Examples
filter <- lowpassFilter(type = "bessel", param = list(pole = 4, cutoff = 0.1), sr = 1e4)
time <- 1:4000 / filter$sr
stepfun <- getSignalPeak(time, cp1 = 0.2, cp2 = 0.2 + 3 / filter$sr,
value = 20, leftValue = 40, rightValue = 40)
signal <- getConvolutionPeak(time, cp1 = 0.2, cp2 = 0.2 + 3 / filter$sr,
value = 20, leftValue = 40, rightValue = 40, filter = filter)
data <- randomGenerationMA(n = 4000, filter = filter, signal = signal, noise = 1.4)
# generated data
plot(time, data, pch = 16)
# zoom into the single peak
plot(time, data, pch = 16, xlim = c(0.199, 0.202), ylim = c(19, 45))
lines(time, stepfun, col = "blue", type = "s", lwd = 2)
lines(time, signal, col = "red", lwd = 2)
# use of randomGeneration instead
data <- randomGeneration(n = 4000, filter = filter, signal = signal, noise = 1.4)
# similar result
plot(time, data, pch = 16, xlim = c(0.199, 0.202), ylim = c(19, 45))
lines(time, stepfun, col = "blue", type = "s", lwd = 2)
lines(time, signal, col = "red", lwd = 2)
## heterogeneous noise
# manual creation of an object of class 'stepblock'
# instead the function stepblock in the package stepR can be used
noise <- data.frame(leftEnd = c(0, 0.2, 0.2 + 3 / filter$sr),
rightEnd = c(0.2, 0.2 + 3 / filter$sr, 0.4),
value = c(1, 30, 1))
attr(noise, "x0") <- 0
class(noise) <- c("stepblock", class(noise))
data <- randomGeneration(n = 4000, filter = filter, signal = signal, noise = noise)
plot(time, data, pch = 16, xlim = c(0.199, 0.202), ylim = c(19, 45))
lines(time, stepfun, col = "blue", type = "s", lwd = 2)
lines(time, signal, col = "red", lwd = 2) |
azurex | hex | Erlang |
API Reference
===
[Modules](#modules)
---
[Azurex](Azurex.html)
Azure connection library.
Currently only implements Blob Storage.
[Azurex.Authorization.SharedKey](Azurex.Authorization.SharedKey.html)
Implements Azure Rest Api Authorization method.
[Azurex.Blob](Azurex.Blob.html)
Implementation of Azure Blob Storage.
[Azurex.Blob.Block](Azurex.Blob.Block.html)
Implementation of Azure Blob Storage.
[Azurex.Blob.Config](Azurex.Blob.Config.html)
Azurex Blob Config
[Azurex.Blob.Container](Azurex.Blob.Container.html)
Implementation of Azure Blob Storage
[Azurex.Blob.SharedAccessSignature](Azurex.Blob.SharedAccessSignature.html)
Implements shared access signatures (SAS) on Blob Storage resources.
Azurex
===
Azure connection library.
Currently only implements Blob Storage.
Azurex.Authorization.SharedKey
===
Implements Azure Rest Api Authorization method.
It is based on: <https://docs.microsoft.com/en-us/rest/api/storageservices/authorize-with-shared-key>
As defined on 26 November 2019
[Summary](#summary)
===
[Functions](#functions)
---
[format\_date(date\_time)](#format_date/1)
[sign(request, opts \\ [])](#sign/2)
[Functions](#functions)
===
Azurex.Blob
===
Implementation of Azure Blob Storage.
In the functions below set container as nil to use the one configured in [`Azurex.Blob.Config`](Azurex.Blob.Config.html).
[Summary](#summary)
===
[Functions](#functions)
---
[copy\_blob(source\_name, destination\_name, container \\ nil)](#copy_blob/3)
Copies a blob to a destination.
[delete\_blob(name, container \\ nil, params \\ [])](#delete_blob/3)
[get\_blob(name, container \\ nil, params \\ [])](#get_blob/3)
Download a blob
[get\_url(container)](#get_url/1)
Returns the url for a container (defaults to the one in [`Azurex.Blob.Config`](Azurex.Blob.Config.html))
[get\_url(container, blob\_name)](#get_url/2)
Returns the url for a file in a container (defaults to the one in [`Azurex.Blob.Config`](Azurex.Blob.Config.html))
[head\_blob(name, container \\ nil, params \\ [])](#head_blob/3)
Checks if a blob exists, and returns metadata for the blob if it does
[list\_blobs(container \\ nil, params \\ [])](#list_blobs/2)
Lists all blobs in a container
[list\_containers()](#list_containers/0)
[put\_blob(name, blob, content\_type, container \\ nil, params \\ [])](#put_blob/5)
Upload a blob.
[Functions](#functions)
===
Azurex.Blob.Block
===
Implementation of Azure Blob Storage.
You can:
* [upload a block as part of a blob](https://learn.microsoft.com/en-us/rest/api/storageservices/put-block)
* [commit a list of blocks as part of a blob](https://learn.microsoft.com/en-us/rest/api/storageservices/put-block-list)
[Summary](#summary)
===
[Functions](#functions)
---
[put\_block(container, chunk, name, params)](#put_block/4)
Creates a block to be committed to a blob.
[put\_block\_list(block\_ids, container, name, blob\_content\_type, params)](#put_block_list/5)
Commits the given list of block\_ids to a blob.
[Functions](#functions)
===
Azurex.Blob.Config
===
Azurex Blob Config
[Summary](#summary)
===
[Functions](#functions)
---
[api\_url()](#api_url/0)
Azure endpoint url, optional. Defaults to `https://{name}.blob.core.windows.net` where `name` is the `storage_account_name`
[default\_container()](#default_container/0)
Azure container name, optional.
[get\_connection\_string\_value(key)](#get_connection_string_value/1)
Returns the value in the connection string given the string key.
[parse\_connection\_string(connection\_string)](#parse_connection_string/1)
Parses a connection string to a key value map.
[storage\_account\_connection\_string()](#storage_account_connection_string/0)
Azure storage account connection string.
Required if `storage_account_name` or `storage_account_key` not set.
[storage\_account\_key()](#storage_account_key/0)
Azure storage account access key. Base64 encoded, as provided by azure UI.
Required if `storage_account_connection_string` not set.
[storage\_account\_name()](#storage_account_name/0)
Azure storage account name.
Required if `storage_account_connection_string` not set.
[Functions](#functions)
===
Azurex.Blob.Container
===
Implementation of Azure Blob Storage
[Summary](#summary)
===
[Functions](#functions)
---
[create(container)](#create/1)
[head\_container(container)](#head_container/1)
[Functions](#functions)
===
Azurex.Blob.SharedAccessSignature
===
Implements shared access signatures (SAS) on Blob Storage resources.
Based on:
<https://learn.microsoft.com/en-us/rest/api/storageservices/create-service-sas>
[Summary](#summary)
===
[Functions](#functions)
---
[sas\_url(container, resource, opts \\ [])](#sas_url/3)
Generates a SAS url on a resource in a given container.
[Functions](#functions)
=== |
npi | cran | R | Package ‘npi’
November 14, 2022
Title Access the U.S. National Provider Identifier Registry API
Version 0.2.0
Description Access the United States National Provider Identifier
Registry API <https://npiregistry.cms.hhs.gov/api/>. Obtain and transform
administrative data linked to a specific individual or organizational
healthcare provider, or perform advanced searches based on provider name,
location, type of service, credentials, and other attributes exposed by
the API.
License MIT + file LICENSE
URL https://github.com/ropensci/npi/, https://docs.ropensci.org/npi/,
https://npiregistry.cms.hhs.gov/api/
BugReports https://github.com/ropensci/npi/issues/
Depends R (>= 3.1)
Imports checkLuhn, checkmate, curl, dplyr, glue, httr, magrittr,
purrr, rlang, stringr, tibble, tidyr, utils
Suggests covr, httptest, knitr, mockery, rmarkdown, spelling, testthat
(>= 2.1.0)
VignetteBuilder knitr
Encoding UTF-8
Language en-US
LazyData true
RoxygenNote 7.2.1
NeedsCompilation no
Author <NAME> [cre, aut, cph] (<https://orcid.org/0000-0002-2145-0145>),
<NAME> [ctb],
<NAME> [rev] (<https://orcid.org/0000-0002-4659-7522>),
<NAME> [rev] (<https://orcid.org/0000-0002-1402-4498>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-11-14 11:30:02 UTC
R topics documented:
npi... 2
npi_flatte... 3
npi_flatten.npi_result... 3
npi_is_vali... 4
npi_searc... 5
npi_summariz... 7
npi_summarize.npi_result... 8
npis Sample results from the NPI Registry
Description
A dataset containing 10 records returned from an NPI Registry search for providers with a primary
address in New York City.
Usage
npis
Format
A tibble with 10 rows and 11 columns, organized as follows:
npi [integer] 10-digit National Provider Identifier number
enumeration_type [character] Type of provider NPI, either "Individual" or "Organizational".
basic [list of 1 tibble] Basic information about the provider.
other_names [list of tibbles] Other names the provider goes by.
identifiers [list of tibbles] Other identifiers linked to the NPI.
taxonomies [list of tibbles] Healthcare Provider Taxonomy classification.
addresses [list of tibbles] Addresses for the provider’s primary practice location and primary mail-
ing address.
practice_locations [list of tibbles] Addresses for the provider’s other practice locations.
endpoints [list of tibbles] Details about provider’s endpoints for health information exchange.
created_date [datetime] Date NPI record was first created (UTC).
last_updated_date [datetime] UTC timestamp of the last time the NPI record was updated.
Details
npi_search(city = "New York City", limit = 10)
Source
https://npiregistry.cms.hhs.gov/registry/help-api
npi_flatten S3 method to flatten an npi_results object
Description
S3 method to flatten an npi_results object
Usage
npi_flatten(df, cols, key)
Arguments
df A data frame containing the results of a call to npi_search.
cols If non-NULL, only the named columns specified here will be flattened and
returned along with npi.
key A quoted column name from df to use as a matching key. The default value is
"npi".
Value
A data frame (tibble) with flattened list columns.
Examples
# Flatten all list columns
data(npis)
npi_flatten(npis)
# Only flatten specified columns
npi_flatten(npis, cols = c("basic", "identifiers"))
npi_flatten.npi_results
Flatten NPI search results
Description
This function takes an npi_results S3 object returned by npi_search and flattens its list columns.
It unnests the lists columns and left joins them by npi. You can optionally specify which columns
from df to include.
Usage
## S3 method for class 'npi_results'
npi_flatten(df, cols = NULL, key = "npi")
Arguments
df A data frame containing the results of a call to npi_search.
cols If non-NULL, only the named columns specified here will be flattened and
returned along with npi.
key A quoted column name from df to use as a matching key. The default value is
"npi".
Details
The names of unnested columns are prefixed by the name of their originating list column to avoid
name clashes and show their lineage. List columns containing all NULL data will be absent from
the result because there are no columns to unnest.
Value
A data frame (tibble) with flattened list columns.
Examples
# Flatten all list columns
data(npis)
npi_flatten(npis)
# Only flatten specified columns
npi_flatten(npis, cols = c("basic", "identifiers"))
npi_is_valid Check if candidate NPI number is valid
Description
Check whether a number is a valid NPI number per the specifications detailed in the Final Rule for
the Standard Unique Health Identifier for Health Care Providers (69 FR 3434).
Usage
npi_is_valid(x)
Arguments
x 10-digit candidate NPI number
Value
Boolean indicating whether npi is valid
Examples
npi_is_valid(1234567893) # TRUE
npi_is_valid(1234567898) # FALSE
npi_search Search the NPI Registry
Description
Search the U.S. National Provider Identifier (NPI) Registry using parameters exposed by the reg-
istry’s API (Version 2.1). Results are combined and returned as a tibble with an S3 class of
npi_results. See Value below for a description of the returned object.
Usage
npi_search(
number = NULL,
enumeration_type = NULL,
taxonomy_description = NULL,
first_name = NULL,
last_name = NULL,
use_first_name_alias = NULL,
organization_name = NULL,
address_purpose = NULL,
city = NULL,
state = NULL,
postal_code = NULL,
country_code = NULL,
limit = 10L
)
Arguments
number (Optional) 10-digit NPI number assigned to the provider.
enumeration_type
(Optional) Type of provider associated with the NPI, one of:
"ind" Individual provider (NPI-1)
"org" Organizational provider (NPI-2)
taxonomy_description
(Optional) Scalar character vector with a taxonomy description or code from the
NUCC Healthcare Provider Taxonomy.
first_name (Optional) This field only applies to Individual Providers. Trailing wildcard
entries are permitted requiring at least two characters to be entered (e.g. "jo*"
). This field allows the following special characters: ampersand, apostrophe,
colon, comma, forward slash, hyphen, left and right parentheses, period, pound
sign, quotation mark, and semi-colon.
last_name (Optional) This field only applies to Individual Providers. Trailing wildcard
entries are permitted requiring at least two characters to be entered. This field
allows the following special characters: ampersand, apostrophe, colon, comma,
forward slash, hyphen, left and right parentheses, period, pound sign, quotation
mark, and semi-colon.
use_first_name_alias
(Optional) This field only applies to Individual Providers when not doing a wild-
card search. When set to "True", the search results will include Providers with
similar First Names. E.g., first_name=Robert, will also return Providers with
the first name of Rob, Bob, Robbie, Bobby, etc. Valid Values are: TRUE: Will
include alias/similar names; FALSE: Will only look for exact matches.
organization_name
(Optional) This field only applies to Organizational Providers. Trailing wildcard
entries are permitted requiring at least two characters to be entered. This field al-
lows the following special characters: ampersand, apostrophe, "at" sign, colon,
comma, forward slash, hyphen, left and right parentheses, period, pound sign,
quotation mark, and semi-colon. Both the Organization Name and Other Organi-
zation Name fields associated with an NPI are examined for matching contents,
therefore, the results might contain an organization name different from the one
entered in the Organization Name criterion.
address_purpose
Refers to whether the address information entered pertains to the provider’s
Mailing Address or the provider’s Practice Location Address. When not speci-
fied, the results will contain the providers where either the Mailing Address or
any of Practice Location Addresses match the entered address information. Pri-
mary will only search against Primary Location Address. While Secondary will
only search against Secondary Location Addresses. Valid values are: "location",
"mailing", "primary", "secondary".
city The City associated with the provider’s address identified in Address Purpose.
To search for a Military Address enter either APO or FPO into the City field.
This field allows the following special characters: ampersand, apostrophe, colon,
comma, forward slash, hyphen, left and right parentheses, period, pound sign,
quotation mark, and semi-colon.
state The State abbreviation associated with the provider’s address identified in Ad-
dress Purpose. This field cannot be used as the only input criterion. If this field is
used, at least one other field, besides the Enumeration Type and Country, must
be populated. Valid values for states: https://npiregistry.cms.hhs.gov/
registry/API-State-Abbr
postal_code The Postal Code associated with the provider’s address identified in Address
Purpose. If you enter a 5 digit postal code, it will match any appropriate 9 digit
(zip+4) codes in the data. Trailing wildcard entries are permitted requiring at
least two characters to be entered (e.g., "21*").
country_code The Country associated with the provider’s address identified in Address Pur-
pose. This field can be used as the only input criterion as long as the value
selected is not US (United States). Valid values for country codes: https:
//npiregistry.cms.hhs.gov/registry/API-Country-Abbr
limit Maximum number of records to return, from 1 to 1200 inclusive. The default
is 10. Because the API returns up to 200 records per request, values of limit
greater than 200 will result in multiple API calls.
Details
By default, the function requests up to 10 records, but the limit argument accepts values from 1 to
the API’s limit of 1200.
Value
Data frame (tibble) containing the results of the search.
References
https://npiregistry.cms.hhs.gov/registry/help-api Data dictionary for fields returned
NUCC Healthcare Provider Taxonomy
Examples
## Not run:
# 10 NPI records for New York City
npi_search(city = "New York City")
# 10 NPI records for New York City, organizations only
npi_search(city = "New York City", enumeration_type = "org")
# 10 NPI records for New York City, individuals only
npi_search(city = "New York City", enumeration_type = "ind")
# 1200 NPI records for New York City
npi_search(city = "New York City", limit = 1200)
# Nutritionists in Maine
npi_search(state = "ME", taxonomy_description = "Nutritionist")
# Record associated with NPI 1245251222
npi_search(number = 1245251222)
## End(Not run)
npi_summarize S3 method to summarize an npi_results object
Description
S3 method to summarize an npi_results object
Usage
npi_summarize(object, ...)
Arguments
object An npi_results S3 object
... Additional optional arguments
Value
Tibble containing the following columns:
npi National Provider Identifier (NPI) number
name Provider’s first and last name for individual providers, organization name for organizational
providers.
enumeration_type Type of provider associated with the NPI, either "Individual" or "Organiza-
tional"
primary_practice_address Full address of the provider’s primary practice location
phone Provider’s telephone number
primary_taxonomy Primary taxonomy description
Examples
data(npis)
npi_summarize(npis)
npi_summarize.npi_results
Summary method for npi_results S3 object
Description
Print a human-readable overview of each record return in the results from a call to npi_search.
The format of the summary is modeled after the one offered on the NPI registry website.
Usage
## S3 method for class 'npi_results'
npi_summarize(object, ...)
Arguments
object An npi_results S3 object
... Additional optional arguments
Value
Tibble containing the following columns:
npi National Provider Identifier (NPI) number
name Provider’s first and last name for individual providers, organization name for organizational
providers.
enumeration_type Type of provider associated with the NPI, either "Individual" or "Organiza-
tional"
primary_practice_address Full address of the provider’s primary practice location
phone Provider’s telephone number
primary_taxonomy Primary taxonomy description
Examples
data(npis)
npi_summarize(npis) |
react-server-gulp-module-tagger | npm | JavaScript | react-server-gulp-module-tagger
===
A [gulp](http://gulpjs.com) plugin for tagging [react-server](https://www.npmjs.com/package/react-server) logger instances with information about the module they're being used in.
To transpile your source for use with [React Server](https://www.npmjs.com/package/react-server), install gulp and the plugin
```
npm i -D gulp react-server-gulp-module-tagger
```
Then add the task to your gulpfile
```
const gulp = require('gulp');
const tagger = require('react-server-gulp-module-tagger');

gulp.task('compile', () => {
    gulp.src('src')
        .pipe(tagger())
        .pipe(gulp.dest('dist'));
});
```
A compile task might also use [Babel](https://babeljs.io) with the [React Server Babel preset](https://www.npmjs.com/package/babel-preset-react-server) to transpile jsx and es 7 for the browser and the server
```
const gulp = require('gulp');
const babel = require('gulp-babel');
const tagger = require('react-server-gulp-module-tagger');

gulp.task('compile', () => {
    gulp.src('src')
        .pipe(tagger({ trim: 'src.' }))
        .pipe(babel({ presets: ['react-server'] }))
        .pipe(gulp.dest('dist'));
});
```
Given a [`getLogger`](http://redfin.github.io/react-server/annotated-src/logging) call,
the plugin adds the correct arguments to keep the server and the browser in sync.
For example, given a module in `src/components/my-feature/foo.js`, and using the options
`{ trim: 'src.', prefix: 'react-server.' }`
```
let logger = require("react-server").logging.getLogger(__LOGGER__);
```
returns a logger instance that will have consistent coloring on the server and the client, and that has a human-friendly, readable name that easily maps to the file tree (in this example `react-server.components.my-feature.foo`).
If you need more than one logger in your module, you can distinguish them with labels
```
var fooLogger = logging.getLogger(__LOGGER__({ label: "foo" }));
var barLogger = logging.getLogger(__LOGGER__({ label: "bar" }));
```
Two other tokens, `__CHANNEL__` and `__CACHE__`, are reserved for future use,
and will also be replaced with a module context.
km | cran | R | Package ‘km.ci’
October 13, 2022
Type Package
Title Confidence Intervals for the Kaplan-Meier Estimator
Version 0.5-6
Date 2022-04-04
Author <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Depends R (>= 3.5.0)
Imports stats, survival
Description Computes various confidence intervals for the Kaplan-Meier
estimator, namely: Peto's CI, Rothman CI, CI's based on
Greenwood's variance, Thomas and Grunkemeier CI and the
simultaneous confidence bands by Nair and Hall and Wellner.
License GPL (>= 2)
Encoding UTF-8
Repository CRAN
RoxygenNote 7.1.2
NeedsCompilation no
Date/Publication 2022-04-06 11:52:42 UTC
R topics documented:
critical.value.hall.9... 2
critical.value.hall.9... 2
critical.value.hall.9... 2
critical.value.nair.9... 3
critical.value.nair.9... 3
critical.value.nair.9... 3
km.c... 4
rectum.da... 5
critical.value.hall.90
Critical Values
Description
Critical values for the 90 % Hall-Wellner band.
Details
These values are taken from the book by Klein & Moeschberger.
Source
Klein, Moeschberger (2002): Survival Analysis, Springer.
critical.value.hall.95
Critical Values
Description
Critical values for the 95 % Hall-Wellner band.
Details
These values are taken from the book by Klein & Moeschberger.
Source
Klein, Moeschberger (2002): Survival Analysis, Springer
critical.value.hall.99
Critical Values
Description
Critical values for the 99 % Hall-Wellner band.
Details
These values are taken from the book by Klein & Moeschberger.
Source
Klein, Moeschberger (2002): Survival Analysis, Springer.
critical.value.nair.90
Critical Values
Description
Critical values for the 90 % equal precision band by Nair.
Details
These values are taken from the book by Klein & Moeschberger.
Source
Klein, Moeschberger (2002): Survival Analysis, Springer.
critical.value.nair.95
Critical Values
Description
Critical values for the 95 % equal precision band by Nair.
Details
These values are taken from the book by Klein & Moeschberger.
Source
Klein, Moeschberger (2002): Survival Analysis, Springer.
critical.value.nair.99
Critical Values
Description
Critical values for the 99 % equal precision band by Nair.
Details
These values are taken from the book by Klein & Moeschberger.
Source
Klein, Moeschberger (2002): Survival Analysis, Springer.
km.ci Confidence Intervals for the Kaplan-Meier Estimator.
Description
Computes pointwise and simultaneous confidence intervals for the Kaplan-Meier estimator.
Usage
km.ci(survi, conf.level = 0.95, tl = NA, tu = NA, method = "rothman")
Arguments
survi A survival object for which the new confidence limits should be computed. This
can be built using the "Surv" and the "survfit" function in the R package "sur-
vival". "km.ci" modifies the confidence limits in this object.
conf.level The level for a two-sided confidence interval on the survival curve. Default is
0.95.
tl The lower time boundary for the simultaneous confidence limits. If it is missing
the smallest event time is used.
tu The upper time boundary for the simultaneous confidence limits. If it is missing
the largest event time is used.
method One of ’"peto"’, ’"linear"’, ’"log"’, "loglog"’, ’"rothman"’, "grunkemeier"’, ’"hall-
wellner"’, ’"loghall"’, "epband"’, "logep"
Details
A simulation study showed that three confidence intervals produce satisfying confidence limits.
One is the "loglog" confidence interval, an interval which is based on the log of the hazard. The other
competitive confidence concept was introduced by Rothman (1978) and uses the assumption
that the survival estimator follows a binomial distribution. Another good confidence concept was
invented by Thomas and Grunkemeier (1975) and is derived by minimizing the likelihood function
under certain constraints. Special thanks go to <NAME> for providing code for the
confidence interval by Thomas and Grunkemeier.
The confidence interval using Peto's variance cannot be recommended since it yields confidence
limits outside the admissible range [0;1]; the same holds for the "linear" and the "log" intervals (the
latter being based on the logarithm of S(t)).
The function can also produce simultaneous confidence bands: the Hall-Wellner band (1980) and
the Equal Precision band by Nair (1984), together with their log-transformed counterparts. Of all
simultaneous confidence intervals, only the log-transformed Equal Precision "logep" band can be
recommended. The limits are computed according to the statistical tables in Klein and Moeschberger
(2002).
Value
a ’survfit’ object;
see the help on ’survfit.object’ for details.
Author(s)
<NAME>.
References
<NAME>., <NAME>. and Mansmann, U. Comparison of simultaneous and pointwise confidence
intervals for survival functions. (2005, submitted to Biom. J.).
See Also
survfit, print.survfit, plot.survfit, lines.survfit, summary.survfit, survfit.object,
coxph, Surv, strata.
Examples
require(survival)
data(rectum.dat)
# fit a Kaplan-Meier and plot it
fit <- survfit(Surv(time, status) ~ 1, data=rectum.dat)
plot(fit)
fit2 <- km.ci(fit)
plot(fit2)
rectum.dat Rectum carcinoma data set.
Description
The rectum data contain 205 persons from a study on the survival of patients with rectum
carcinoma. Due to the severe course of the disease, follow-up was almost complete in these data,
with hardly any censoring or survivors. The data were used to analyze the behavior of the
confidence intervals in data sets with a low censoring rate.
Format
A data frame with 205 observations on the following 2 variables.
time Time in months
status Status at dropout
Source
Merkel, <NAME>, et al. (2001). The prognostic inhomogeneity in pT3 rectal carcinomas. Int J
Colorectal Dis. 16, 305–306. |
pwc.pdf | free_programming_book | Unknown | Programowanie w jzyku C
Dla pocztkujcych oraz rednio zaawansowanych programistw wersja: 1.0 (27.10.2010)
Spis treci 1 Wprowadzenie... 5 1.1 Informacje od autora... 5 1.2 Jak napisana jest ta ksika?... 5 1.3 Dla kogo jest ta ksika?... 6 2 Podstawy jzyka C... 6 2.1 Pierwszy program... 6 2.1.1 Struktura oraz opis kodu rdowego jzyka C... 7 2.1.2 Komentarze... 8 2.2 Zmienne i stae... 9 2.2.1 Typy zmiennych... 9 2.2.2 Zakres typw zmiennych... 10 2.2.3 Nazwy zmiennych i deklaracja zmiennych... 11 2.2.4 Stae... 12 2.2.5 Wyraenia stae i stae symboliczne... 13 2.2.6 Staa wyliczenia... 14 2.2.7 Zasig zmiennych... 15 2.3 Matematyka... 17 2.3.1 Operatory arytmetyczne... 17 2.3.2 Operatory logiczne i relacje... 19 2.3.3 Operatory zwikszania, zmniejszania oraz przypisywania... 21 2.3.4 Operatory bitowe... 23 2.3.5 Priorytety... 35 2.3.6 Funkcje matematyczne... 36 3 Sterowanie programem... 40 3.1 Instrukcja if else... 40 3.2 Instrukcja switch... 44 3.3 Ptle... 47 3.3.1 for... 47 3.3.2 while... 52 3.3.3 do while... 53 3.4 Instrukcja break... 54 3.5 Instrukcja continue... 55 3.6 Instrukcja goto, etykiety... 56 4 Funkcje... 57 4.1 Oglna posta funkcji oraz funkcja zwracajca wartoci cakowite... 57 4.2 Funkcje zwracajce wartoci rzeczywiste... 59 4.3 Funkcje nie zwracajce wartoci oraz brak argumentw... 62 4.4 Pliki nagwkowe... 63 4.4.1 Kompilacja warunkowa... 66 4.5 extern, static, register... 68 4.5.1 extern... 68 4.5.2 static... 71 4.5.3 register... 75 4.6 Funkcje rekurencyjne... 77 2
5 Tablice i wskaniki... 78 5.1 Tablice... 78 5.1.1 Tablice jednowymiarowe... 78 5.1.2 Tablice wielowymiarowe... 79 5.2 Wskaniki... 82 5.3 Przekazywanie adresu do funkcji... 84 5.4 Zalenoci midzy tablicami, a wskanikami... 85 5.5 Operacje na wskanikach... 89 5.6 Wskanik typu void... 91 5.7 Tablice znakowe... 92 5.8 Przekazywanie tablicy do funkcji... 95 5.9 Wskaniki do wskanikw... 97 5.10 Tablica wskanikw... 98 6 Argumenty funkcji main... 104 7 Struktury... 110 7.1 Podstawowe informacje o strukturach... 110 7.2 Operacje na elementach struktury... 113 7.3 Przekazywanie struktur do funkcji... 113 7.4 Zagniedone struktury... 117 7.5 Tablice struktur... 119 7.6 Sowo kluczowe typedef... 121 7.7 Unie... 122 7.8 Pola bitowe... 125 8 Operacje wejcia i wyjcia... 130 8.1 Funkcja getchar i putchar... 130 8.2 Funkcja printf i sprintf... 131 8.3 Funkcja scanf i sscanf... 135 8.4 Zmienna ilo argumentw... 139 8.5 Obsuga plikw... 141 8.6 Pobieranie i wywietlanie caych wierszy tekstw funkcje: fgets, fputs... 146 9 Dynamicznie przydzielana pami... 150 10 Biblioteka standardowa... 153 10.1 assert.h... 153 10.2 complex.h... 155 10.3 ctype.h... 158 10.4 errno.h... 160 10.5 iso646.h... 161 10.6 limits.h... 162 10.7 locale.h... 164 10.8 math.h... 167 10.9 setjmp.h... 168 10.10 signal.h... 170 10.11 stdarg.h... 173 10.12 stdbool.h... 173 10.13 stdio.h... 174 10.13.1 Operacje na plikach... 176 10.13.2 Formatowane wyjcie... 186 3
10.13.3 Formatowane wejcie... 189 10.13.4 Wejcie i wyjcie znakowe... 191 10.13.5 Pozycja w pliku... 194 10.13.6 Obsuga bdw... 198 10.14 stdlib.h... 202 10.14.1 Konwersja cigu znakw na liczby... 203 10.14.2 Pseudo-losowe liczby ... 209 10.14.3 Dynamicznie przydzielana pami... 210 10.14.4 Funkcje oddziaywujce ze rodowiskiem uruchomienia... 212 10.14.5 Wyszukiwanie i sortowanie... 218 10.14.6 Arytmetyka liczb cakowitych... 222 10.15 string.h... 223 10.15.1 Kopiowanie... 224 10.15.2 Doczanie... 229 10.15.3 Porwnywanie... 230 10.15.4 Wyszukiwanie... 235 10.15.5 Inne... 242 10.16 time.h... 244 10.16.1 Manipulacja czasem... 245 10.16.2 Konwersje... 250 10.16.3 Makra... 254 10.16.4 Typy danych... 255 11 MySQL Integracja programu z baz danych... 257 Dodatek A... 260 A.1 Zmiana katalogu... 261 A.2 Tworzenie katalogu... 261 A.3 Usuwanie plikw i katalogw... 262 A.4 Wywietlanie zawartoci katalogu... 262 A.5 Kompilacja programw... 263 A.6 Ustawianie uprawnie... 264 A.7 Ustawianie waciciela... 264 Dodatek B... 265 B.1 Powoka systemowa... 265 B.2 Polecenie time, formatowanie wynikw... 265 Dodatek C... 268 C.1 Instalacja MySQL... 268 C.2 Podstawowe polecenia MySQL... 269 4
1 Wprowadzenie 1.1 Informacje od autora Witajcie. Informacje zawarte w tej ksice nie stanowi kompletnego kompendium wiedzy z zakresu jzyka C, natomiast podstawowe oraz rednio zaawansowane operacje jakie mona wykonywa z uyciem tego jzyka. Ksika ta zostaa napisana cakowicie przypadkiem, zaczo si to bardzo niewinnie od pisania maego poradnika, ktry wraz z upywem wakacji rozrasta si, by wreszcie osign obecn posta. Ksika w gruncie rzeczy skada si z bardzu wielu, bo a z ponad 180 przykadw. Wszystkie przykady zostay skompilowane z uyciem kompilatora gcc w wersji 4.4.1 na Linuksie. Staraem si tumaczy wszystkie zagadanienia najlepiej jak tylko potrafiem, eby zrozumiay to osoby nie majce bladego pojcia na temat programowania, przez co bardziej dowiadczeni programici mog odczu lekki dyskomfort. Jak mi wyszo? Mam nadzieje, e ocenisz sam. W wielu programach pokazany zosta jedynie sposb uycia pewnych mechanizmw.
W programach z prawdziwego zdarzenia wykorzystanie ich wizaoby si z konkretnym zadaniem.
Jeli zauwaysz jakiekolwiek bdy moesz wysa mi informacje wraz z opisem, gdzie wdar si bd na adres <EMAIL>. Ksika ta udostpniana jest na licencji Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported lub nowszej. ycz miej lektury.
<NAME> 1.2 Jak napisana jest ta ksika?
Opis parametru                                                     Przykad
Kody rdowe przedstawione zostay w ramce z tem                      #include
Polecenia systemowe w Linuksie wyrnione zostay w ramce z tem       $ cat plik.c
Nazwy funkcji i plikw nagwkowych zostay zapisane tak czcionk       main
Nazwy typw zmiennych, deklaracje zmiennych, sowa kluczowe           int
Makra oraz pliki z kodem zostay wytuszczone                        NULL, main.c
Prototypy funkcji zostay zapisane tak czcionk                      int printf (...)
Tabela 1.2.1 Informacje nawigacyjne
1.3 Dla kogo jest ta ksika?
Materia ten jest dla wszystkich tych, ktrzy chc nauczy si programowa w jzyku C nie znajc go w ogle, bd maj jakie pojcie, lecz nie wiedz co z czym si je. Materia ten moe by pomocny rwnie dla ludzi, ktrzy mieli ju styczno z programowaniem w C, natomiast dugo nie programowali i pragn odwiey swoj wiedz.
2 Podstawy jzyka C 2.1 Pierwszy program Pierwszy program, ktry napiszesz w jzyku C bdzie mia za zadanie wywietlenie tekstu "Hello World!". Nie tylko w C, lecz w innych jzykach programowania rwnie stosuje si tego typu praktyk, by pokaza w jaki sposb informacje wywietlane s na ekranie monitora. Tak wic w dowolnym edytorze tekstu wpisz poniszy kod (listing 2.1.1) oraz zapisz go pod nazw helloWorld.c (np. w katalogu domowym).
#include <stdio.h>
main ()
{
printf("Hello World!\n");
}
Listing 2.1.1 Pierwszy program Hello World.
Aby skompilowa nowo utworzony kod naley w konsoli systemowej przej do katalogu, w ktrym znajduje si plik helloWorld.c oraz wyda polecenie kompilacji kompilatora gcc. Podstawowe polecenia systemu Linux znajduj si w dodatku A.
$ gcc helloWorld.c -o helloWorld Jeli nie zrobie adnej literwki, kompilator nie powinien wywietli adnego bdu tudzie ostrzeenia, co za tym idzie kod rdowy powinien zosta skompilowany, a wic plik wykonywalny o nazwie helloWorld powinien zosta utworzony.
Aby uruchomi plik wykonywalny i zobaczy, czy faktycznie na ekranie monitora pojawi si napis
"Hello World!" w konsoli wpisz ponisze polecenie.
$ ./helloWorld Gratulacje! Wanie napisae, skompilowae oraz uruchomie swj pierwszy program w jzyku C.
Jeli chcesz dowiedzie si wicej na temat jzyka C, czytaj dalej ten materia.
2.1.1 Struktura oraz opis kodu rdowego jzyka C Programy pisane w jzyku C oprcz ciaa gwnej funkcji wykorzystuj jeszcze szereg innych funkcji:
definiowanych przez programist wasnorcznie (na listingu 2.1.1 nie wystpuje)
zdefiniowanych w bibliotece standardowej (funkcja printf z listingu 2.1.1)
Kady program moe posiada dowoln ilo funkcji, lecz warunkiem poprawnej kompilacji jest uycie funkcji main w kodzie programu. Jest to najwaniejsza funkcja w programie, poniewa skompilowany program wykonuje si od pocztku funkcji main a do jej zakoczenia.
Przykadowy, a zarazem bardzo prosty program pokazany na listingu 2.1.1 w pierwszej linii zawiera informacj dla kompilatora, aby doczy plik nagwkowy stdio.h, ktry zawiera informacje na temat standardowego wejcia i wyjcia (standard input / output). Dziki temu moglimy uy funkcji printf,
ktra jak ju wiesz drukuje tekst na ekranie monitora (standardowe wyjcie).
Kolejna linia to definicja gwnej funkcji programu funkcji main. Funkcja main oraz inne funkcje mog przyjmowa argumenty, jeli funkcja przyjmuje jakie argumenty, to zapisuje si je pomidzy nawiasami. Jak wida w naszym przypadku, funkcja main nie przyjmuje adnych argumentw
(o argumentach przyjmowanych przez funkcj main oraz inne funkcje dowiesz si pniej).
W nastpnej linii wystpuje nawiast klamrowy otwierajcy ({). Pomidzy nawiasami klamrowymi znajduje si ciao funkcji. W ciele funkcji wystpuj definicje zmiennych lokalnych, wywoania funkcji bibliotecznych, wywoania funkcji napisanych przez programistw, generalnie rzecz biorc, jeli jaka instrukcja ma zosta wykonana, to musi ona by wywoana w funkcji main. Ewentualnie, jeli jaka czynno, ktr chcemy wykona znajduje si w innej funkcji (w ciele innej funkcji), to dana funkcja musi zosta wywoana w ciele funkcji main.
Nastpna linia programu to wywoanie funkcji printf. Jak ju wiesz, funkcja ta drukuje informacje zawarte w cudzysowie (wyraenie "Hello World!" jest argumentem funkcji). Nie mniej jednak moe 7
by nie jasne dlaczego po znaku wykrzyknika wystpuje kombinacja znakw \n. Ot za pomoc tej kombinacji znakw moemy przej kursorem do nowej linii, czyli jeli bymy dodali w kolejnej linii nastpn instrukcj printf z dowolnym argumentem (dowolny napis zawarty w cudzysowie) to uzyskalibymy go w nowej linii, pod napisem "Hello World!". Jeli by nie byo znaku nowej linii (\n
new line character) to acuch znakw przekazany jako parametr drugiej funkcji zostaby wywietlony tu za znakiem wykrzyknika, nawet bez spacji. Kada pojedycza instrukcja w jzyku C koczy si rednikiem. Nawias klamrowy zamykajcy (}) bdcy w ostatniej linii programu zamyka ciao funkcji main.
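A small illustration of the newline behaviour described above (this is my own sketch, not one of the book's numbered listings; the texts printed are arbitrary):

#include <stdio.h>

int main(void)
{
    /* "\n" moves the cursor to a new line after the first message. */
    printf("Hello World!\n");
    /* Without that "\n", this text would appear directly after the
       exclamation mark, on the same line and without a space.      */
    printf("Second line\n");
    return 0;
}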
Tak wic wiesz ju jak napisa prosty program w jzyku C, wiesz co oznaczaj poszczeglne czci programu, oraz wiesz gdzie umieszcza si wywoania funkcji. W kolejnym podpunkcie znajduj si informacje o komentarzach.
2.1.2 Komentarze Tworzenie komentarzy podczas pisania programw jest bardzo istotne. Oczywicie w programach,
ktre maj stosunkowo mao linii kodu nie jest to, a tak potrzebne. Nie mniej jednak w bardziej rozbudowanych programach (wieloplikowych) jest to wane. Czasem napiszemy co, zajrzymy do pliku po kilkunastu dniach i na nowo musimy czyta od pocztku co napisana przez nas samych funkcja robi. Komentarze s cakowicie ignorowane przez kompilator, tak wic mona wpisywa tam dowolne zdania, skrty mylowe, itp. Komentarze dzieli si na dwa rodzaje:
Komentarz liniowy
Komentarz blokowy Komentarz liniowy zaczyna si od dwch znakw ukonika (//) i koczy si wraz ze znakiem nowej linii. Jak sama nazwa wskazuje, komentarze te zajmuj jedn lini, poniewa przejcie do nowej linii koczy komentarz. Komentarz blokowy zaczyna si od znakw /*, a koczy si na znakach */.
Komentarze te mog obejmowa wiksz ilo wierszy ni jeden. Na listingu 2.1.2 pokazane zostay dwa sposoby wstawiania komentarzy.
#include <stdio.h> // Docz plik nagwkowy stdio.h Komentarz liniowy
/* Tutaj mog znajdowa si prototypy funkcji
Ale o tym troch pniej....
Komentarz wieloliniowy blokowy
*/
main ()
{
printf("Hello World!\n");
}
Listing 2.1.2 Komentarze w kodzie 2.2 Zmienne i stae Zmienne i stae to obiekty, ktre zajmuj pewien obszar w pamici komputera, do ktrego moemy si odwoa podajc ich nazw lub adres (wskanik). Do zmiennej mona wpisywa oraz zmienia
(w trakcie dziaania programu) wartoci zalene od jej typu. Do staych przypisujemy warto raz
(w kodzie rdowym programu) i jej ju zmieni nie moemy.
2.2.1 Typy zmiennych Kada zmienna lub staa ma swj typ, co oznacza tyle, e moe przyjmowa wartoci z zakresu danego typu. W poniszej tabeli przedstawione zostay typy zmiennych oraz staych wraz z opisem jakie wartoci przyjmuj. Zakresy zmiennych zostay przedstawione w punkcie 2.2.2.
Typ zmiennej (staej)   Przyjmowane wartoci
int                     Liczby cakowite
float                   Liczby rzeczywiste pojedynczej precyzji
double                  Liczby rzeczywiste podwjnej precyzji
char                    Zazwyczaj pojedyncza litera (pojedynczy bajt)
short int               Krtsze liczby cakowite, ni int
long int                Dusze liczby cakowite, ni int
long long int           Bardzo due liczby cakowite
long double             Dusze liczby rzeczywiste, ni double
Tabela 2.2.1 Typy zmiennych i staych
Istniej jeszcze dodatkowe przedrostki (kwalifikatory), ktre mona doda przed typem zmiennej, tymi sowami s:
signed Przedrostek umoliwiajcy definicj liczb dodatnich oraz ujemnych (standardowo)
unsigned Przedrostek umoliwiajcy definicj liczb tylko dodatnich oraz zera.
2.2.2 Zakres typw zmiennych Zakresy typw zmiennych s istotnym zagadnieniem podczas pisania programu, nie mona przekroczy zakresu danego typu, poniewa program moe zachowa si nie tak jakbymy tego chcieli.
Przekroczenie zakresu w przypadku zmiennej typu int prowadzi do ustawienia w zmiennej wartoci ujemnej, tzn najmniejszej wartoci jaki typ int obsuguje. W poniszej tabeli znajduje si zestawienie zakresw poszczeglnych typw.
Typ zmiennej    Rozmiar (bajty)1   Zakres od                    Zakres do
int             4                  -2 147 483 648               2 147 483 647
float           4                  1.5 * 10^-45                 3.4 * 10^38
double          8                  5.0 * 10^-324                3.4 * 10^308
char            1                  -128                         127
short int       2                  -32 768                      32 767
long int        4                  -2 147 483 648               2 147 483 647
long long int   8                  -9 223 372 036 854 775 808   9 223 372 036 854 775 807
long double     12                 1.9 * 10^-4951               1.1 * 10^4932
Tabela 2.2.2 Zakresy zmiennych oraz rozmiary dla liczb ze znakiem (signed)

Typ zmiennej             Rozmiar (bajty)   Zakres od   Zakres do
unsigned int             4                 0           4 294 967 295
unsigned char            1                 0           255
unsigned short int       2                 0           65535
unsigned long int        4                 0           4 294 967 295
unsigned long long int   8                 0           18 446 744 073 709 551 615
Tabela 2.2.3 Zakres zmiennych oraz rozmiary dla liczb bez znaku (unsigned)

1 Sprawdzone na 32-bitowym procesorze, na innych moe si rni.
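Because the sizes and ranges above were checked on one particular 32-bit machine and may differ elsewhere, the limits on a given system can simply be printed. The following is a minimal sketch of mine (not a listing from the book), using only the standard headers stdio.h and limits.h; note also that overflowing a signed int is formally undefined behaviour in C, so the wrap-around to the minimum value described above is common in practice but not guaranteed by the language.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* sizeof yields the size in bytes on the current machine */
    printf("sizeof(short)     = %zu\n", sizeof(short));
    printf("sizeof(int)       = %zu\n", sizeof(int));
    printf("sizeof(long)      = %zu\n", sizeof(long));
    printf("sizeof(long long) = %zu\n", sizeof(long long));

    /* limits.h defines the actual ranges for this implementation */
    printf("INT_MIN  = %d\n", INT_MIN);
    printf("INT_MAX  = %d\n", INT_MAX);
    printf("UINT_MAX = %u\n", UINT_MAX);
    printf("CHAR_MIN = %d, CHAR_MAX = %d\n", CHAR_MIN, CHAR_MAX);
    return 0;
}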
2.2.3 Nazwy zmiennych i deklaracja zmiennych Deklaracja zmiennych w jzyku C jest bardzo prosta. Po pierwsze podajemy jej typ, po drugie podajemy jej nazw, na kocu definicji stawiamy rednik. Jeli tworzymi kilka zmiennych danego typu, to moemy wypisywa ich nazwy po przecinku. Przykadowe definicje zmiennych lokalnych,
oraz globalnych pokazane zostay na listingu 2.2.1
#include <stdio.h>
unsigned short numer;
unsigned id = 10;
main()
{
const float podatek = 0.22;
int i, k = 2, z;
unsigned int iloscLudzi;
int dolna_granica = -10;
float cenaKawy = 5.4;
}
Listing 2.2.1 Definicja zmiennych Nazwy zmiennych mog by dowolnymi cigami znakw, mog zawiera cyfry przy czym cyfra nie moe by pierwszym znakiem. Znak podkrelenia rwnie jest dozwolony. Trzeba mie na uwadze fakt, i wielko liter jest rozrniana! Staa podatek jest zdefiniowana, natomiast wyraz Podatek nie jest sta typu float!
Zaleca si, aby nazwy zmiennych byy zwizane z ich docelowym przeznaczeniem. Jak wida na listingu 2.2.1 czytajc nazwy uytych zmiennych mamy poniekd informacj do czego bd suy i dlaczego takie, a nie inne typy zmiennych zostay uyte. Nazwy zmiennych takie jak i, k, z su zazwyczaj do sterownia ptlami.
Zmienne globalne deklarowane s poza ciaem funkcji. Dodawanie przedrostka okrelajcego jego dugo bd znak, lub jednoczenie oba mona zapisa w postaci penej, czyli np. unsigned short int 11
nazwaZmiennej; bd w skrconej formie (unsigned short nazwaZmiennej), jak pokazano na listingu 2.2.1, czyli nie piszc sowa kluczowego int. Pisanie skrconej formy oznacza, e typem zmiennej bdzie int!
Wartoci zmiennych mog zosta zainicjonowane podczas tworzenia zmiennych. Po nazwie zmiennej wstawiamy znak rwnoci i wpisujemy odpowiedni (do typu zmiennej) warto (wartoci jakie mona wpisywa przedstawione zostay w tabelach: 2.2.4 oraz 2.2.5 w podpunkcie 2.2.4 Stae).
Jeli tworzmy zmienn globaln i nie zainicjujemy jej adnej wartoci, to kompilator przypisze jej warto zero, co jest odrnieniem od zmiennych lokalnych, ktre nie zainicjonowane przez programist posiadaj mieci (losowe wartoci, ktre byy w danym obszarze pamici przed jej zajciem).
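A minimal sketch of mine (not from the book) illustrating the difference just described: a global variable without an initialiser is guaranteed to start at zero, while an uninitialised local variable holds an indeterminate value and must not be read before something is assigned to it.

#include <stdio.h>

int globalna;               /* no initialiser: guaranteed to start as 0 */

int main(void)
{
    int lokalna = 0;        /* a local has to be initialised explicitly;
                               reading it before assignment is undefined */

    printf("globalna = %d\n", globalna);   /* always prints 0 */
    printf("lokalna  = %d\n", lokalna);
    return 0;
}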
2.2.4 Stae Mona powiedzie, e stae to zmienne tylko do odczytu. Raz przypisana warto do staej podczas pisania kodu nie moe zosta zmieniona przez uytkownika podczas uywania programu. Sta definiuje si poprzez uycie sowa kluczowego const przed typem i nazw zmiennej. A wic deklaracje staych typu float oraz int wygldaj nastpujco:
const float nazwaStalejFloat = yyy;
// (1)
const int nazwaStalejInt = xxx;
// (2)
Gdzie jako yyy, xxx moemy wpisa jedn z wartoci przedstawionych w poniszych tabelach.
W deklaracji (1) zamiast float mona wpisa double w celu uzyskania wikszej dokadnoci
(podwjna precyzja). Analogicznie w deklaracji (2) int moe zosta zamieniony na long int, lub inny typ w celu podwyszenia bd zmniejszenia zakresu.
yyy Opis przykadowej przypisywanej wartoci 10.4 Staa zmiennopozycyjna zawierajca kropk dziesitn 104E-1 Staa zmiennopozycyjna zawierajca wykadnik1 1 104E-1 = 10410-1 = 10.4 12
Staa zmiennopozycyjna zawierajca kropk dziesitn oraz wykadnik
1.24E-3 Tabela 2.2.4 Rne sposoby wpisywania wartoci liczb rzeczywistych xxx
Opis przykadowej przypisywanej wartoci 145
Staa cakowita dziesitna 10E+5 Staa cakowita dziesitna z wykadnikiem2 0230
Staa cakowita zapisana w systemie semkowym (zero na pocztku)
0x143 Staa cakowita zapisana w systemie szesnastkowym (0x, lub OX na pocztku)3 Tabela 2.2.5 Rne sposoby wpisywania wartoci liczb cakowitych Przykadowe definicje staych:
const int dwaMiliony = 2E6;          // 2000000
const int liczbaHex = 0x3E8;         // 3E8 to dziesitnie 1000
const double malaLiczba = 23E-10;    // 0.0000000023
Aby sprawdzi, czy faktycznie tak jest, w ciele funkcji main wpisz ponisz linijk kodu, ktra zawiera funkcj printf. O funkcji printf troch wicej informacji zostanie podane pniej.
printf("%d\n", liczbaHex);
2.2.5 Wyraenia stae i stae symboliczne Wyraenia stae s to wyraenia, ktre nie zale od zmiennych. Czyli mog zawiera stae (const),
stae wyliczenia (enum), zwyke wartoci na sztywno wpisane w kod programu bd stae symboliczne.
2 Rwnowany zapis: 10E5 3 Rwnowany zapis: OX143 13
Sta symboliczn definiuje si poza funkcjami, czyli jest globalna (dostpna dla wszystkich funkcji).
Stae symboliczne tworzy si za pomoc dyrektywy preprocesora (czci kompilatora) #define. Na listingu 2.2.2 pokazane zostay cztery przykady uycia staej symbolicznej.
#include <stdio.h>
#define MAX 10
#define ILOCZYN(x,y) (x)*(y)
#define DRUKUJ(wyrazenie) printf(#wyrazenie " = %g\n", wyrazenie)
#define POLACZ_TEKST(arg1, arg2) arg1 ## arg2 main ()
{
double POLACZ_TEKST(po, datek) = 0.22;
printf("MAX: %d\n", MAX);
printf("ILOCZYN: %d\n", ILOCZYN(MAX,MAX));
DRUKUJ(10.0/5.5);
printf("%.2f\n", podatek);
}
Listing 2.2.2 Uycie #define.
Druga linia powyszego kodu definiuje sta symboliczn MAX o wartoci 10. Kompilator podczas tumaczenia kodu zamieni wszystkie wystpienia staej MAX na odpowiadajc jej warto. Linia trzecia definiuje tak jakby funkcj1 ILOCZYN, ktra przyjmuje dwa argumenty i je wymnaa. Czwarta linia programu tworzy makro, ktre po wywoaniu wstawi wyrazenie w miejsce #wyrazenie (znak #
jest obowizkowy) oraz obliczy je i wstawi w miejsce deskryptora formatu (%g), a wszystko to zostanie wydrukowane za pomoc funkcji printf. Za pomoc operatora ## skleja si argumenty.
Skadnia dla tego operatora jest taka jak pokazano w pitej linii listingu 2.2.2. W funkcji main uywamy tego makra do poczenia sw po i datek, co w efekcie daje podatek. Jako i przed nazw stoi sowo double, a po nazwie inicjacja wartoci, to sowo podatek staje si zmienn typu double.
Pierwsza instrukcja printf drukuje liczb 10, poniewa staa MAX posiada tak warto, natomiast druga wydrukuje wynik mnoenia liczby, ktra kryje si pod nazw MAX. Oczywicie wynikiem bdzie liczba 100.
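One detail worth stressing here (my own aside, not part of the book's example): the parentheses around x and y in the ILOCZYN macro matter, because macros are expanded textually. A hypothetical variant without them gives a wrong result as soon as an argument contains an operator:

#include <stdio.h>

#define ILOCZYN(x,y)     (x)*(y)   /* as in Listing 2.2.2 */
#define ILOCZYN_ZLE(x,y) x*y       /* hypothetical variant without parentheses */

int main(void)
{
    /* ILOCZYN(2+1, 4)     expands to (2+1)*(4) and prints 12           */
    /* ILOCZYN_ZLE(2+1, 4) expands to 2+1*4     and prints 6 (surprise) */
    printf("%d\n", ILOCZYN(2 + 1, 4));
    printf("%d\n", ILOCZYN_ZLE(2 + 1, 4));
    return 0;
}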
2.2.6 Staa wyliczenia Staa wyliczenia jest tworzona przy pomocy sowa kluczowego enum. Idea polega na tym, e nazwom 1 Wicej informacji o funkcjach znajduje si w rozdziale Funkcje.
14
zawartym w nawiasach klamrowych przyporzdkowywane s liczby cakowite, poczwszy od 0.
Poniszy przykad ilustruje omwione zachowanie.
#include <stdio.h>
main ()
{
enum wyliczenia {NIE, TAK};
printf("%d\n", NIE);
// 0 printf("%d\n", TAK);
// 1
}
Listing 2.2.3 Uycie staej wyliczenia.
Mona zdefiniowa sta wyliczenia, w ktrej sami zadecydujemy jakie wartoci bd przyporzdkowane do kolejnych sw pomidzy nawiasami klamrowymi. Jeli nie zadeklarujemy wszystkich, to kompilator uzupeni je w sposb nastpujcy: Znajduje ostatni zdefiniowan warto wyrazu i przypisuje do kolejnego wyrazu zwikszon warto o jeden. Listing 2.2.4 pokazuje jak si definiuje nowe wartoci i jak kompilator dopenia te, ktre nie zostay uzupenione.
#include <stdio.h>
main ()
{
enum tydzien {PON = 1, WTO, SRO, CZW, PT, SOB, ND};
printf("%d\n", PON); // 1 - Zdefiniowane printf("%d\n", WTO); // 2 - Dopenione: Do wartoci PON dodane 1 printf("%d\n", SRO); // 3 - WTO + 1 printf("%d\n", CZW); // 4 - Analogicznie pozostae printf("%d\n", PT); // 5 printf("%d\n", SOB); // 6 printf("%d\n", ND); // 7
}
Listing 2.2.4 Uycie staej wyliczenia wraz z definicj wartoci Zastosowanie staych wyliczenia zostanie pokazane w rozdziale dotyczcym sterowania programem
instrukcja switch.
2.2.7 Zasig zmiennych Zasig zmiennych jest bardzo istotnym zagadnieniem, poniewa moemy czasem prbowa odwoa si do zmiennej, ktra w rzeczywistoci w danym miejscu nie istnieje. Listing 2.2.5 pokazuje troch 15
duszy kawaek kodu, natomiast uwiadamia istotne aspekty zwizane z zasigiem zmiennych.
#include <stdio.h>
int iloscCali = 10;
void drukujZmienna (void);
main ()
{
int index = 5;
printf("%d\n", index);
printf("%d\n", iloscCali);
drukujZmienna();
{
int numer = 50;
printf("%d\n", numer);
}
printf("%d\n", numer);   /* Nie zadziala */
}
void drukujZmienna (void)
{
int id = 4;
printf("%d\n", id);
printf("%d\n", iloscCali);
}
Listing 2.2.5 Zakresy zmiennych Wida nowo na powyszym listingu, s dwie funkcj: main oraz drukujZmienna. Zakres zmiennej lokalnej, czyli takiej, ktra utworzona jest w dowolnej funkcji obejmuje tylko t funkcj. To znaczy zmienna id, ktra jest zadeklarowana w funkcji drukujZmienna dostpna jest tylko w tej funkcji.
Z funkcji main nie mona si do niej odwoa i odwrotnie, zmienna index dostpna jest tylko w gwnej funkcji programu. Ciekawostk moe by fakt, i zmienna numer pomimo tego,
e zadeklarowana jest w funkcji gwnej, nie jest dostpna w kadym miejscu funkcji main. Nawiasy klamrowe tworz wydzielony blok, w ktrym utworzone zmienne dostpne s tylko pomidzy klamrami (czyli w tym bloku). Dlatego te ostatnia instrukcja nie zadziaa kompilator zgosi bd,
ktry mwi, e nie mona uywa zmiennej, jeli si jej wczeniej nie zadeklaruje.
Zmienna globalna iloscCali widziana jest w kadym miejscu, tzn mona jej uywa w kadej funkcji oraz musi by zdefiniowana dokadnie jeden raz.
2.3 Matematyka 2.3.1 Operatory arytmetyczne Lista operatorw arytmetycznych zostaa podana w poniszej tabeli. Operatory te s operatorami dwuargumentowymi, czyli jak sama nazwa mwi, potrzebuj dwch argumentw.
Operator Funkcja operatora
+
Dodawanie
-
Odejmowanie
*
Mnoenie
/
Dzielenie
%
Dzielenie modulo (reszta z dzielenia)
Tabela 2.3.1 Operatory arytmetyczne Poniej poka deklaracj zmiennych, uycie operatorw, oraz wywietlenie wyniku. Wszystkie ponisze instrukcj prosz wpisa w ciele funkcji main.
int a, b, wynik;
a = 10;
b = 7;
wynik = a + b;
// wynik = a - b; lub wynik = a * b;
printf("%d\n", wynik);
A co z dzieleniem? Z dzieleniem jest w zasadzie tak samo, natomiast trzeba wspomnie o bardzo wanej rzeczy. Dzielenie liczb cakowitych (typ int) w wyniku da liczb cakowit, czyli cyfry po przecinku zostan obcite. Aby tego unikn, tzn aby w wyniku dosta liczb rzeczywist przynajmniej jeden z argumentw musi by liczb rzeczywist (float, double) oraz zmienna przetrzymujca wynik te musi by typu rzeczywistego.
Aby dokona tego o czym wspomniaem (odnonie typu rzeczywistego jednego z argumentw) mona postpi na kilka sposobw. Pierwszy z nich, chyba najprostszy zadeklarowa argument jako zmienn typu rzeczywistego.
float a, wynik;
int b;
a = 10.0;
b = 7;
wynik = a / b;
// wynik = 1.428571
printf("%f\n", wynik);
Drugim sposobem jest pomnoenie jednego z argumentw przez liczb rzeczywist, czyli eby nie zmieni tej liczby, a zmieni tylko jej typ, moemy pomnoy j przez 1.0.
int a, b;
float wynik;
a = 10;
b = 7;
wynik = a*1.0 / b;
// wynik = 1.428571
printf("%f\n", wynik);
Trzeci sposb korzysta z operatora rzutowania (o nim jeszcze nie byo wspomniane). Operator rzutowania ma posta:
(typ_rzutowania) wyrazenie;
Co oznacza tyle, e rozszerza, bd zawa dany typ zmiennej ustawiajc nowy. Jeli wyrazenie jest zmienn typu int to po zrzutowaniu na float bdzie t sam liczb tylko dodatkowo warto jej bdzie skada si z kropki, po ktrej nastpi zero (np. 10.0). W drug stron te mona, jeli zmienna jest typu float, a zrzutujemy j na int to kocwka, czyli kropka i liczby po kropce zostan obcite i zostanie sama liczba cakowita. A wic w naszym przypadku mona by byo zrobi to w nastpujcy sposb:
int a, b;
float wynik;
a = 10;
b = 7;
wynik = (float)a / b;
printf("%f\n", wynik);
Dzielenie modulo powoduje wywietlenie reszty z dzielenia dwch argumentw. Operator reszty 18
z dzielenia uywany jest tylko dla liczb (typw) cakowitych. Na poniszym przykadzie zostao to pokazane.
int a = 10, b = 7, c = 5, wynik;
wynik = a % b;    // 3
printf("%d\n", wynik);
wynik = a % c;    // 0
printf("%d\n", wynik);
2.3.2 Operatory logiczne i relacje W jzyku C istniej operatory logiczne, dziki ktrym moemy sprawdza warunki, ktre z kolei mog sterowa programem. Operatory logiczne zostay przedstawione w niniejszej tabeli.
Operator Funkcja operatora Relacje
>
Wikszy ni
>=
Wikszy lub rwny ni
<
Mniejszy ni
<=
Mniejszy lub rwny ni Operatory przyrwnania
==
Rwny
!=
Rny Operatory logiczne
&&
Logiczne i
||
Logiczne lub Tabela 2.3.2 Relacje, operatory przyrwnania oraz operatory logiczne Kady z wyej wymienionych operatorw jest operatorem dwu argumentowym, wic jego sposb uycia i zastosowanie mona przedstawi na poniszym przykadowym kodzie:
#include <stdio.h>
main ()
{
const int gornaGranica = 10;
const int dolnaGranica = -10;
int a = 4;
if (a >= dolnaGranica && a <= gornaGranica)
if (a % 2 == 0)
{
printf("Liczba a (%d) zawiera sie w przedziale: ", a);
printf("<%d;%d> i jest parzysta\n", dolnaGranica, gornaGranica);
}
else
{
printf("Liczba a (%d) zawiera sie w przedziale: ", a);
printf("<%d;%d> i jest nie parzysta\n", dolnaGranica,
gornaGranica);
}
else printf("Liczba nie zawiera sie w podanym zakresie\n");
}
Listing 2.3.1 Uycie operatorw W tym miejscu skupimy si bardziej tylko na uyciu operatorw, w jaki sposb si ich uywa, jak to dziaa itp. Dziaanie instrukcji ifelse zostanie omwione w podpunkcie 3.1.
Spjrzmy na ponisze wyraenie, w ktrym uyto dwch operatorw relacji i jednego operatora logicznego.
a >= dolnaGranica && a <= gornaGranica Priorytety operatorw zostay opisane w podpunkcie 2.3.5, ale tak krtko: Operatory >= i <= maj priorytet wikszy ni operator &&, dziki tej informacji nie musimy stosowa nawiasw, poniewa najpierw wykona si warunek sprawdzajcy czy a jest wiksze lub rwne od dolnej granicy
(dolnaGranica). W tym momencie trzeba troszk nawiza do tego co to znaczy logiczne i.
W tabeli poniej zostay podane kombinacje bramki logicznej i, z ktrej korzysta operator &&.
X1  X2  Y
0   0   0
0   1   0
1   0   0
1   1   1
Tabela 2.3.3 Bramka logiczna i
Potraktujmy wyraenie a >= dolnaGranica jako zmienn X1, wyraenie a <= gornaGranica jako zmienn X21. A cao a >= dolnaGranica && a <= gornaGranica jako zmienn Y.
Jeli pierwszy warunek jest speniony, czyli a jest wiksze bd rwne wartoci zmiennej dolnaGranica, to to wyraenie przyjmuje warto 1, mona sobie to wyobrazi, e do zmiennej X1 przypisujemy warto 1. Teraz spjrz na tabel 2.3.3, Y rwna si 1 wtedy i tylko wtedy, gdy X1 i X2 rwnaj si jednoczenie 1. Skoro nasze X1 przyjo warto 1, to jest sens sprawdzenia drugiego warunku, ktre oznaczylimy jako X2. Jeli X2 rwna si 1, czyli warunek zosta speniony
(a mniejsze lub rwne wartoci zmiennej gornaGranica) to Y rwna si 1. Jeli Y rwna si 1 to zostanie sprawdzony kolejny warunek, na parzysto liczby a (poniewa, jeli Y rwna si 1, to if(1)
jest wartoci prawdziw i zostan wykonywane polecenia po if, jeli Y rwnaby si 0, to if(0) jest faszywe, wic wykonay by si instrukcje znajdujce si po else2). Jeli liczba a dzieli si przez 2 bez reszty (czyli reszta z dzielenia liczby a przez 2 rwna si 0) to liczba jest parzysta, w przeciwnym wypadku liczba jest nie parzysta.
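The evaluation order described above is exactly C's short-circuit rule for &&: the right-hand operand is evaluated only when the left-hand one is true (non-zero). A minimal sketch of mine showing this (the helper function and its name are invented for the illustration):

#include <stdio.h>

/* helper with a visible side effect, so we can tell whether it was called */
int sprawdz(int x)
{
    printf("sprawdz(%d) was called\n", x);
    return x > 0;
}

int main(void)
{
    int a = 0;

    /* the left operand is 0 (false), so sprawdz() is never called */
    if (a != 0 && sprawdz(a))
        printf("both conditions hold\n");
    else
        printf("first condition false - the second one was not evaluated\n");
    return 0;
}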
W gruncie rzeczy to by byo tyle jeli chodzi o operatory. Warto zaznajomi si z priorytetami, ktre operatory maj wysze, ktre nisze. W razie nie pewnoci mona uy nawiasw, ktre zapewniaj wykonanie instrukcji w nawiasie przed tymi spoza nawiasw.
2.3.3 Operatory zwikszania, zmniejszania oraz przypisywania W poniszej tabeli znajduje si zestawienie operatorw zwikszania oraz zmniejszania, a pod tabel sposb uycia.
Funkcja operatora Zwikszanie
Zmniejszanie Deklaracja operatora Nazwa operatora n++
Post inkrementacja
++n Pre inkrementacja n--
Post dekrementacja
--n Pre dekrementacja Tabela 2.3.4 Operatory zwikszania, zmniejszania 1 Moemy to traktowa jako zmienn, poniewa rwnie dobrze w kodzie z listingu 2.3.1 moglibymy zdefiniowa zmienn int X1 = a >= dolnaGranica; oraz zmienn int X2 = a <= gornaGranica; i w warunku wstawi if (X1 && X2). Sposb pisania jest dowolny i zaley od przyzwyczaje programisty.
2 Omwienie sposobu dziaania instrukcji if-else znajduje si w podpunkcie 3.1 21
Poniej pokae sposb uycia tych operatorw w kodzie, oraz wytumacz zasad ich dziaania.
Przykad bdzie z operatorem inkrementacji (zwikszania). Zasada dziaania operatora dekrementacji jest analogiczna.
#include <stdio.h>
main ()
{
    int a = 0, n = 0;

    a = n++;
    printf("a = %d\n", a);    // 0
    printf("n = %d\n", n);    // 1

    a = 0;
    n = 0;
    a = ++n;
    printf("a = %d\n", a);    // 1
    printf("n = %d\n", n);    // 1
}
Listing 2.3.2 Uycie operatora przypisania oraz zwikszania
Operator pre inkrementacji zwiksza warto zmiennej jeszcze przed jej uyciem, dlatego w tym przypadku zmienna a i n bd miay warto 1.
Operatory pre i post dekrementacji dziaaj w analogiczny sposb, tylko, e zmniejszaj warto swojego argumentu.
Wyraenie, ktre ma posta n = n + 5 mona i z reguy zapisuje si w innej, krtszej postaci za pomoc operatora przypisania, definiowanego jako n += 5. Oglna posta operatora przypisania to: X=
Gdzie X moe by jednym z nastpujcych znakw:  +  -  *  /  %  <<  >>  &  ^  |
Poniej wystpuj przykadowe deklaracje operatora przypisania.
22
int i = 2, k = 3;
i += k;
// i = i + k; i = 5 i *= k + 2
// i = i * (k + 2); i = 10 2.3.4 Operatory bitowe Aby dobrze zrozumie operacj na bitach, trzeba zrobi pewne wprowadzenie o liczbach binarnych
(bin). W tym miejscu nie bd si skupia na sposobie w jaki si przelicza liczby z jednego systemu na drugi, bo nie to jest celem naszych rozwaa. Do wszystkich tych czynnoci naley uy kalkulatora,
ktry potrafi wywietla wartoci w rnych systemach liczbowych.
A wic tak, najpierw opisz operatory bitowe, a pniej na przykadach pokae zasad ich dziaania wraz z opisem. W tabeli poniej znajduj si operatory bitowe oferowane przez jzyk C. Operatory te mona stosowa do manipulowania bitami jedynie argumentw cakowitych!
Operator Nazwa operatora
&
Bitowa koniunkcja (AND)
|
Bitowa alternatywa (OR)
^
Bitowa rnica symetryczna (XOR)
<<
Przesunicie w lewo
>>
Przesunicie w prawo
~
Dopenienie jedynkowe Tabela 2.3.5 Operatory bitowe Zacznijmy wic od bitowej koniunkcji (&). Bitowa koniunkcja uywana jest do zasaniania (zerowania)
pewnych bitw z danej liczby. Przydatn rzecz moe okaza si tabela 2.3.3 z punktu 2.3.2, ktra to jest tablic prawdy dla logicznej koniunkcji (logiczne i).
Dajmy na przykad liczb dziesitn 1435, ktrej reprezentacj binarn jest liczba 10110011011
(wiersz pierwszy). Oglnie uycie operatora koniunkcji bitowej mona rozumie jako porwnanie parami bitw liczby, na ktrej operacj wykonujemy oraz liczby, o ktrej zaraz powiem.
Jeli chcemy zasoni pewn ilo bitw, np. pi bitw liczc od lewej strony. Musimy w takim wypadku do drugiego wiersza wstawi zera pod bitami, ktre chcemy zasoni, a na reszt bitw wstawi jedynki (wiersz drugi).
23
1 0
1 1
0 0
1 1
0 1
1 0
0 0
0 0
1 1
1 1
1 1
Dzieje si tak poniewa taka jest zasada dziaania bramki logicznej AND (logiczne i). Wszdzie tam gdzie wystpuje cho jedno zero, wynikiem oglnie bdzie zero (wstawiamy zero, wic zerujemy wynik danego bitu). A tam gdzie wstawimy jedynk, wartoci nie zmienimy (moe by zero, a moe by jeden).
To by taki wstp teoretyczny, eby wiedzie o co tam w ogle chodzi. Teraz czas przej do tego, jak trzeba liczb uy, by zasoni tak, a nie inn ilo bitw. Bierzemy liczb z drugiego wiersza,
zamieniamy j na warto semkow (liczb semkow w C wstawiamy poprzedzajc j zerem!, np.
077), bd szesnastkow (liczb szesnastkow wstawiamy poprzedzajc j 0x, lub 0X) i wstawiamy do polecenia. Listing 2.3.3 pokazuje jak to zrobi.
#include <stdio.h>
main ()
{
int liczba;
liczba = 1435;
liczba = liczba & 0x3F;
// 0x3F = 00000111111 printf("%d\n", liczba);
// 27
}
Listing 2.3.3 Uycie operatora koniunkcji bitowej Wynikiem jest liczba 27, poniewa pi bitw liczc od lewej zostao wyzerowanych i z naszej binarnej liczby 10110011011 zostaa liczba 011011, co po zamienieniu na dziesitn warto daje 27.
Alternatywny sposb zasaniania pewnej iloci bitw (sposb bardziej praktyczny) znajduje si w zadaniach: 2.6 oraz 2.7.
Bitowa alternatywa (|) dziaa w troch inny sposb, ni bitowa koniunkcja, ktra czycia pewn ilo bitw, a mianowicie ustawia bity na tych pozycjach, na ktrych chcemy.
Dla przykadu wemy liczb dziesitn 1342, ktrej reprezentacj binarn jest liczba 10100111110.
Tworzymy tabelk tak jak w poprzednim przykadzie i z pomoc operatora bitowej alternatywy moemy ustawi bity, na dowolnej pozycji. Waciwie to moemy zmieni z zera na jeden konkretny bit. Operacja zamiany z jedynki na zero, to bitowa koniunkcja. Tabela 2.3.6 przedstawia tablic prawdy 24
dla logicznego lub.
X1  X2  Y
0   0   0
0   1   1
1   0   1
1   1   1
Tabela 2.3.6 Tablica prawdy logicznego lub (OR)
Powysza tabela moe okaza si przydatna w zrozumieniu sposobu dziaania operatora bitowej alternatywy. Czyli mwic w skrcie dziaa to na zasadzie takiej, jeli porwnamy dwa bity logicznym lub, to nie zmieni ostatecznej wartoci bitu warto zero. Natomiast jeli chcemy zmieni warto to musimy wstawi jedynk. Tabela 2.3.6 pokazuje to dokadnie. Jeli gdziekolwiek wystpuje jedynka,
to wartoci kocow jest jedynka.
Powracajc do naszego przykadu, zaomy, e chcemy zrobi liczb dziesitn 2047, ktrej reprezentacj binarn jest liczba 11111111111. Wpisujemy do tabeli liczb, ktra pod zerami bdzie miaa jedynki. T liczb jest 01011000001, jej reprezentacj w systemie szesnastkowym jest 2C1.
1 0
1 0
0 1
1 1
1 1
0 0
1 0
1 1
0 0
0 0
0 1
Bardzo podobnie do poprzedniego wyglda niniejszy listing. Rnic oczywicie jest operator bitowy.
#include <stdio.h>
main ()
{
int liczba;
liczba = 1342;
liczba |= 0x2C1;
// liczba = liczba | 0x2C1;
printf("%d\n", liczba);
// 2047
}
Listing 2.3.4 Uycie operatora bitowej alternatywy Operator bitowej rnicy symetrycznej (^) ustawia warto jeden, jeli dwa porwnywane ze sob bity maj rne wartoci. Tabela 2.3.7 pokazuje tablic prawdy dla logicznej rnicy symetrycznej.
X1  X2  Y
0   0   0
0   1   1
1   0   1
1   1   0
Tabela 2.3.7 Tabela prawdy logicznej rnicy symetrycznej
Wemy na przykad liczb dziesitn 1735, ktrej reprezentacj binarn jest liczba 11011000111.
Tworzymy po raz kolejny tabelk i wpisujemy do pierwszego wiersza reprezentacj binarn naszej liczby. Spjrzmy na tabel 2.3.7 eby zmieni ostatecznie warto bitu na zero, to pod jedynkami musimy wpisa jedynki, a pod zerami zera. eby warto bitu zostaa zmieniona na jeden, to pod jedynkami musimy wpisa zero, a pod zerami jeden.
Aby przerobi nasz liczb na liczb 1039, ktrej reprezentacj binarn jest liczba 10000001111 musimy wpisa takie liczby w wierszu drugim, by po sprawdzeniu ich z tablic prawdy rnicy symetrycznej uzyska warto binarn liczby 1039.
1 1
0 1
1 0
0 0
1 1
1 0
1 0
1 1
0 0
1 0
0 0
Wartoci w drugim wierszu tabeli jest cig cyfr 01011001000, ktrego reprezentacj szesnastkow jest 2C8. Listing 2.3.5 pokazuje ju znan metod operacji na bitach.
#include <stdio.h>
main ()
{
int liczba;
liczba = 1735;
liczba ^= 0x2C8;
// liczba = liczba ^ 0x2C8;
printf("%d\n", liczba);
// 1039
}
Listing 2.3.5 Uycie operatora bitowej rnicy symetrycznej (XOR)
Operator przesunicia suy jak sama nazwa wskazuje do przesunicia bitw. Jeli mamy liczb dziesitn 28, ktrej reprezentacj binarn jest liczba 11100, to uycie operatora przesunicia w lewo spowoduje przesuniecie wszystkich bitw w lewo o konkrern ilo pozycji. Ponisza tabela prezentuje 26
to zachowanie. W pierwszym wierszu wpisana jest binarna warto dziesitnej liczby 28. W wierszu drugim po wykonaniu przesunicia w lewo o 2. Jak wida, po przesuniciu z prawej strony zostay dopisane zera. Listing 2.3.6 pokazuje jak uywa si przesunicia w lewo w jzyku C.
0 0
1 1
1 0
0 1
1 1
0 0
0 0
#include <stdio.h>
main (void)
{
int liczba = 28;
liczba <<= 2;
// liczba = liczba << 2;
printf("%d\n", liczba);
// 112
}
Listing 2.3.6 Uycie operatora przesunicia w lewo.
Operator przesunicia w prawo dziaa analogicznie do tego opisanego przed chwil, z t rnic, e przesuwa bity w prawo. Zakadajc wic, e przesuwamy liczb 28, ponisza tabela pokazuje, e po przesuniciu wartoci naszej liczby bdzie 7.
0 0
1 1
1 0
0 0
0 0
0 1
1 1
#include <stdio.h>
main (void)
{
int liczba = 28;
liczba >>= 2;
// liczba = liczba >> 2;
printf("%d\n", liczba);
// 7
}
Listing 2.3.7 Uycie operatora przesunicia w prawo Ostatni operator bitowy to operator dopenienia jedynkowego. Operator ten przyjmuje jeden argument i neguje wszystkie bity (czyli zamienia ich wartoci z zera na jeden i odwrotnie). Niech przykadem bdzie liczba 77, ktrej reprezentacj binarn jest liczba 1001101. Tworzymy wic tabel tak jak w poprzednich przykadach i wpisujemy t warto do pierwszego wiersza.
27
0
0 1
0 0
1 1
0 1
1
1 0
1 1
0 0
1 0
W drugim natomiast wierszu zostay pokazane bity po negacji. Nasza liczba (77) skadaa si z wikszej iloci bitw ni 7. W przykadzie poniszym uyto typu unsigned, ktry zajmuje 32 bity.
Reszta bitw przed negacj miaa warto zero, ktra nie zmieniaa wyniku (lewa strona tabeli). Po negacji wszystkie te zera zostay zamienione na jedynki i wynik jest bardzo du liczb. Poniszy listing pokazuje omwione zachowanie.
#include <stdio.h>
main ()
{
unsigned x;
x = 77;
/* x: 00000000000000000000000001001101
* ~x: 11111111111111111111111110110010
*/
}
printf("%u\n", ~x);
// 4294967218 Listing 2.3.8 Uycie operatora bitowej negacji Aby nasz wynik by tym, ktrego oczekujemy (tabela drugi wiersz; to zaznaczone kolorem) musimy zasoni pozostae bity. Do tego zadania uyjemy operatora bitowej koniunkcji. Skoro chcemy, aby tylko 7 bitw byo widocznych, to musimy ustawi na nich jedynk. Wartoci, ktra bdzie nam potrzebna to 1111111, a szesnastkowo 7F. Tak wic podany niej listing pokazuje, e po negacji oraz zasoniciu otrzymujemy liczb zgodn z zaoeniami, czyli 50.
#include <stdio.h>
main ()
{
unsigned x;
x = 77;
x = ~x;
x &= 0x7F;
}
// 00000000000000000000000001001101
// 11111111111111111111111110110010
// 00000000000000000000000000110010 printf("%u\n", x);
// 50 Listing 2.3.9 Otrzymana liczba zgodna z zaoeniem 28
Aby pokaza jakie zastosowanie tych operatorw poka rozwizania zada 2.6 oraz 2.7 z ksiki:
Jzyk ANSI C autorstwa <NAME> oraz <NAME>. W zadaniach tych naleao napisa funkcje i tak wanie je tutaj zaprezentuj. W razie, gdyby co byo nie jasne, to w rozdziale 4. zostay opisane funkcje.
Zadanie 2.6 Napisz funkcj setbits(x, p, n, y) zwracajc warto x, w ktrej n bitw poczynajc od pozycji p
zastpiono przez n skrajnych bitw z prawej strony y. Pozostae bity x nie powinny ulec zmianie.
Odpowied:
Przede wszystkim, aby zrozumie sens tego zadania naley narysowa sobie tabelki tak jak te poniej.
Obrazuje to problem, dziki czemu atwiej moemy sobie wyobrazi jakie bity maj by zamienione na jakie. Tak wic zadanie rozwiemy w kilku poniszych krokach.
Krok 1 Przyjmujemy dwie dowolne liczby za x, oraz y i wpisujemy je do tabeli. Niech wartoci x bdzie 1023, ktrej reprezentacj binarn jest 1111111111, a jako y przyjmijmy warto 774, binarnie 1100000110. Jako n rozumiemy ilo bitw do zamiany, a jako p pozycj od ktrej te bity zamieniamy.
Przyjmijmy za n warto 5, a za p warto 7. Kolorem tym zosta zaznaczony obszar, w ktry mamy wstawi bity z obszaru szarego.
x 1
1 1
1 1
1 1
1 1
1 y
1 1
0 0
0 0
0 1
1 0
Nr bitu 9
8 7
6 5
4 3
2 1
0 Krok 2 W kroku drugim negujemy warto y za pomoc operatora dopenienia jedynkowego. Mona by si zastanawia dlaczego to robimy. Jest to wyjanione w kroku czwartym. Pki co tworzymy tabelk i zapisujemy zanegowan warto y. W programie przypiszemy zanegowan warto y do zmiennej y,
tak wic w kolejnym punkcie bd operowa zmienn y jako wartoci zanegowan.
29
y 1
1 0
0 0
0 0
1 1
0 y = ~y 0
0 1
1 1
1 1
0 0
1 Nr bitu 9
8 7
6 5
4 3
2 1
0 Krok 3 W tym kroku zasaniamy wartoci, ktre nie s nam potrzebne (potrzebne jest tylko pi bitw liczc od prawej strony. Bity: 0, 1, 2, 3, 4). Do tego celu uywamy bitowej koniunkcji.
0 0
0 0
0 0
0 0
0 0
0
~0 1
1 1
1 1
1 1
1 1
1
~0<<n 1
1 1
1 1
0 0
0 0
0
~(~0<<n)
0 0
0 0
0 1
1 1
1 1
y 0
0 1
1 1
1 1
0 0
1 y &= ~(~0<<n)
0 0
0 0
0 1
1 0
0 1
W pierwszej kolumnie wpisalimy same zera, w drugiej je zanegowalimy. Otrzymane jedynki przesunelimy o n (w naszym przypadku 5) pozycji (wiersz trzeci). W wierszu czwartym negujemy to co otrzymalimy w wierszu poprzednim. W tym miejscu otrzymalimy mask, ktra zasoni nam nie potrzebne bity, a zostawi tylko pic bitw z prawej strony. Przed ostatni wiersz to y (zanegowana warto z poprzedniego kroku). W ostatnim wierszu wpisujemy warto porwnania wiersza przed ostatniego z mask (za pomoc bramki i). Cao zapisujemy w y.
Krok 4 W tym kroku przesuwamy warto y o pewn ilo miejsc. O tym ile tych miejsc jest decyduje wzr:
p + 1 n. Czyli w naszym przypadku y przesuwamy o 3 miejsca.
y 0
0 0
0 0
1 1
0 0
1 y << p + 1 - n 0
0 1
1 0
0 1
0 0
0 W tym miejscu chce powiedzie dlaczego na pocztku (krok 2) zanegowalimy nasz warto y. Jest to zwizane bezporednio z tym krokiem, a mianowicie z przesuniciem wartoci y o wyliczon na podstawie wyej wymienionego wzoru ilo pozycji. Po lewej stronie mamy same zera (wzgldem kolorowego ta), po prawej trzy zera a nasza warto w kolorowym tle jest negacj wartoci, ktr 30
musimy wstawi w wyznaczone miejsce. Kolejn i przedostatni rzecz jak musimy zrobi to zanegowa warto y, by otrzyma te wartoci, ktre chcemy i zamiast zer jedynki, by przy ostatniej czynnoci porwnaniu parami bitw (za pomoc bramki i) nie usun adnego innego bitu.
Krok 5 Negacja wartoci znajdujcej si pod y oraz przypisanej tej zanegowanej wartoci do zmiennej y.
y 0
0 1
1 0
0 1
0 0
0 y = ~y 1
1 0
0 1
1 0
1 1
1 Krok 6 Porwnanie wartoci zanegowanej z liczb kryjc si pod x.
x 1
1 1
1 1
1 1
1 1
1 y
1 1
0 0
1 1
0 1
1 1
x&y 1
1 0
0 1
1 0
1 1
1 Nr bitu 9
8 7
6 5
4 3
2 1
0 A wic w tych paru krokach wstawilimy pic pocztkowych bitw zmiennej y do zmiennej x na pozycjach bitw od 3 do 7. Listing tego programu prezentuje si nastpujco (duo krtszy ni mogo by si wydawa). Zostaa napisana funkcja, ktra jest wywoywana z funkcji main.
#include <stdio.h>
unsigned setbits (unsigned x, int p, int n, unsigned y);
main()
{
unsigned x = 1023, y = 774;
int p = 7, n = 5;
}
printf("%u\n", setbits(x, p, n, y)); // 823 unsigned setbits (unsigned x, int p, int n, unsigned y)
{
y = ~y;
// Krok 2 y &= ~(~0 << n);
// Krok 3 y <<= p + 1 - n;
// Krok 4 y = ~y;
// Krok 5 31
return x & y;
}
// Krok 6 Listing 2.3.10 Funkcja setbits Zadanie 2.7 Napisz funkcj invert(x, p, n) zwracajc warto x, w ktrej n bitw poczynajc od pozycji p
zamieniono z 1 na 0 i odwrotnie. Pozostae bity x nie powinny ulec zmianie.
Odpowied:
Tak jak w poprzednim przykadzie, podzielimy wykonanie naszego zadania na kilka krokw, w ktrych bdziemy rysowa tabelki dla penego zrozumienia postawionego nam problemu. Rozwizanie tego zadania moe by troch dusze ni poprzedniego, nie mniej jednak zasady postpowania s podobne.
Krok 1 W tym kroku przyjmujemy jak liczb za x, niech t liczb bdzie 621, ktrej reprezentacj binarn jest 1001101101. Wybieramy ilo bitw, ktre chcemy zanegowa, oraz pozycj od ktrej t ilo bitw bdziemy liczy. Odpowiednio zmienna n, oraz p. Zamy niech n bdzie 3, a p rwna si 6.
Czyli musimy zanegowa bity: 6, 5, 4.
Przed 1
0 0
1 1
0 1
1 0
1 Po
1 0
0 0
0 1
1 1
0 1
Nr bitu 9
8 7
6 5
4 3
2 1
0 Warto Po dziesitnie to 541 i wanie takiego wyniku si spodziewamy.
Krok 2 W tym kroku przypisujemy do zmiennej pomocniczej x1 przesunit warto x o p + 1 n pozycji w prawo.
x 1
0 0
1 1
0 1
1 0
1 x1 = x >> p+1-n 0
0 0
0 1
0 0
1 1
0 Nr bitu 9
8 7
6 5
4 3
2 1
0 32
Krok 3 W kroku trzecim negujemy warto zmiennej x1, i przypisujemy j do x1.
x1 0
0 0
0 1
0 0
1 1
0 x1 = ~x1 1
1 1
1 0
1 1
0 0
1 Nr bitu 9
8 7
6 5
4 3
2 1
0 Krok 4 W kroku czwartym tworzymy mask (tak jak w poprzednim zadaniu), dziki ktrej zasonimy nie potrzebne bity, a zostawimy tylko te na ktrych nam zaley, czyli bity: 0, 1, 2. Wynik caej operacji przypisujemy do zmiennej pomocniczej x1.
0 0
0 0
0 0
0 0
0 0
0
~0 1
1 1
1 1
1 1
1 1
1
~0 << n 1
1 1
1 1
1 1
0 0
0
~(~0 << n)
0 0
0 0
0 0
0 1
1 1
x1 1
1 1
1 0
1 1
0 0
1 x1 & ~(~0 << n)
0 0
0 0
0 0
0 0
0 1
Nr bitu 9
8 7
6 5
4 3
2 1
0 Krok 5 W kroku pitym przesuwamy bity zmiennej x1 o p + 1 n pozycji w lewo.
x1 0
0 0
0 0
0 0
0 0
1 x1 << p+1n 0
0 0
0 0
1 0
0 0
0 Nr bitu 9
8 7
6 5
4 3
2 1
0 Krok 6 W kroku tym tworzymy pomocnicz zmienn z, ktra bdzie suya do wyzerowania pewnej iloci bitw zmiennej x, ktr nastpnie uzupenimy bitami przygotowanymi w kroku 5. Warto przed ostatniego wiersza przypisujemy do zmiennej z.
~0 1
1 1
1 1
1 1
1 1
1
~0 << n 1
1 1
1 1
1 1
0 0
0 z = ~(~0 << n)
0 0
0 0
0 0
0 1
1 1
33
Nr bitu 9
8 7
6 5
4 3
2 1
0 Krok 7 W kroku tym przesuwamy warto z o p + 1 n pozycji w lewo.
z 0
0 0
0 0
0 0
1 1
1 z << p+1-n 0
0 0
1 1
1 0
0 0
0 Nr bitu 9
8 7
6 5
4 3
2 1
0 Krok 8 W kroku tym negujemy warto zmiennej z.
z 0
0 0
1 1
1 0
0 0
0 z = ~z 1
1 1
0 0
0 1
1 1
1 Nr bitu 9
8 7
6 5
4 3
2 1
0 Krok 9 W tym kroku zerujemy warto bitw, na ktre mamy ustawi pewne bity (zaoenie zadania) za pomoc zmiennej z. Wida w tabelce z kroku 8, e pewne bity s zerami, wic jak porwnamy parami
(bitowa koniunkcja) zmienn z wraz ze zmienn x, to wyzerujemy pewne bity. Warto t przypisujemy do zmiennej pomocniczej g (wiersz trzeci).
x 1
0 0
1 1
0 1
1 0
1 z
1 1
1 0
0 0
1 1
1 1
g=x&z 1
0 0
0 0
0 1
1 0
1 Nr bitu 9
8 7
6 5
4 3
2 1
0 Krok 10 Teraz zostao ju tylko ustawi bity w miejsce, gdzie zostay one wyzerowane. W kroku pitym przygotowalimy specjalnie do tego celu t liczb. Teraz za pomoc bitowej alternatywy ustawimy te bity.
x1 0
0 0
0 0
1 0
0 0
0 g
1 0
0 0
0 0
1 1
0 1
x1 | g 1
0 0
0 0
1 1
1 0
1 34
Nr bitu 9
8 7
6 5
4 3
2 1
0 Tak o to otrzymalismy liczb 1000011101, ktrej reprezentacj dziesitn jest liczba 541. Poniszy listing pokazuje jak wygldaj te operacj w jzyku C.
#include <stdio.h>
unsigned invert (unsigned x, int p, int n);
main ()
{
unsigned x = 621;
int p = 6;
int n = 3;
printf("%u\n", invert(x, p, n));
}
unsigned invert (unsigned x, int p, int n)
{
unsigned x1, z, g;
}
x1 = x >> p + 1 - n;
x1 = ~x1;
x1 &= ~(~0 << n);
x1 <<= p + 1 - n;
z = ~(~0 << n);
z <<= p + 1 - n;
z = ~z;
g = x & z;
return x1 | g;
// Krok 2
// Krok 3
// Krok 4
// Krok 5
// Krok 6
// Krok 7
// Krok 8
// Krok 9
// Krok 10 Listing 2.3.11 Funkcja invert.
2.3.5 Priorytety Poniej znajduje si tabela z priorytetami operatorw. Nie wszystkie jeszcze zostay omwione, lecz zostan w kolejnych rozdziaach. Najwyej w tabeli znajduj si operatory z najwyszym priorytetem.
Kolejne wiersze maj coraz nisze priorytety. Operatory w jednym wierszu maj jednakowy priorytet.
Operatory                              czno
() [] -> .                             Lewostronna
! ~ ++ -- + - * & (typ) sizeof         Prawostronna
* / %                                  Lewostronna
+ -                                    Lewostronna
<< >>                                  Lewostronna
< <= > >=                              Lewostronna
== !=                                  Lewostronna
&                                      Lewostronna
^                                      Lewostronna
|                                      Lewostronna
&&                                     Lewostronna
||                                     Lewostronna
?:                                     Prawostronna
= += -= *= /= %= ^= |= <<= >>=         Prawostronna
,                                      Lewostronna
Tabela 2.3.8 Tabela priorytetw
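A classic consequence of this table (an illustration of mine, not an example from the book): == binds more tightly than &, so a bit test written without parentheses does not do what it appears to do.

#include <stdio.h>

int main(void)
{
    int x = 6;

    /* "x & 1 == 0" is parsed as "x & (1 == 0)", i.e. "x & 0", which is always 0 */
    if (x & 1 == 0)
        printf("this line is never printed\n");

    /* the intended test needs explicit parentheses */
    if ((x & 1) == 0)
        printf("x is even\n");
    return 0;
}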
2.3.6 Funkcje matematyczne
Wszystkie opisane w tym rozdziale funkcje korzystaj z pliku nagwkowego math.h, ktry musi zosta doczony, aby funkcje te byy rozpoznawane. Poniszy zapis:
double sin (double);
Oznacza po kolei:
double Typ zwracanej wartoci double
sin nazwa funkcji
(double) Ilo (jeli wicej ni jeden, to typy wypisane s po przecinku) i typ argumentw.
Wszystkie funkcje opisane w tym rozdziale zwracaj warto typu double. Argumenty x, y s typu double, a argument n typu int. Kompilacja odbywa si prawie tak samo jak w poprzednich przykadach, nie mniej jednak doczamy przecznik (opcj) -lm. A wic polecenie bdzie wyglda nastpujco:
36
$ gcc plik.c -o plik -lm Jeszcze jedna uwaga. Wartoci ktw funkcji trygonometryczny wyraa si w radianach. W poniszej tabeli znajduje si zestawienie funkcji matematycznych.
Nazwa funkcji                  Deklaracja            Dodatkowe informacje
Sinus                          sin(x)                -
Cosinus                        cos(x)                -
Tangens                        tan(x)                -
Arcus sinus                    asin(x)               y(1) w przedziale <-pi/2; pi/2>, x w przedziale <-1; 1>
Arcus cosinus                  acos(x)               y w przedziale <0; pi>, x w przedziale <-1; 1>
Arcus tangens                  atan(x)               y w przedziale (-pi/2; pi/2)
Arcus tangens                  atan2(y,x)            tg^-1(y/x), wynik w przedziale (-pi; pi>
Sinus hiperboliczny            sinh(x)               -
Cosinus hiperboliczny          cosh(x)               -
Tangens hiperboliczny          tanh(x)               -
Funkcja wykadnicza             exp(x)                e^x
Logarytm naturalny             log(x)                ln x, x > 0
Logarytm o podstawie 10        log10(x)              log10 x, x > 0
Potgowanie(2)                  pow(x,y)              x^y
Pierwiastkowanie               sqrt(x)               pierwiastek z x, x >= 0
Najmniejsza liczba cakowita    ceil(x)               nie mniejsza ni x, wynik typu double
Najwiksza liczba cakowita     floor(x)              nie wiksza ni x, wynik typu double
Warto bezwzgldna              fabs(x)               |x|
-                              ldexp(x,n)            x * 2^n
-                              frexp(x, int *exp)    Rozdziela x na znormalizowan cz uamkow z przedziau <0.5; 1) i wykadnik potgi 2. Funkcja zwraca cz uamkow, a wykadnik potgi wstawia do *exp. Jeli x = 0, to obie czci wyniku rwnaj si 0.
-                              modf(x, double *u)    Rozdziela x na cz cakowit i uamkow, obie z takim samym znakiem co x. Cz cakowit wstawia do *u i zwraca cz uamkow.
-                              fmod(x,y)             Zmiennopozycyjna reszta z dzielenia x/y, z tym samym znakiem co x.
Tabela 2.3.9 Funkcje z biblioteki math.h

1 Mam na myli y jako wartoci funkcji, nie argument, ktry wystpuje w innych funkcjach
2 Bd zakresu wystpi gdy: x = 0 i y <= 0 lub x < 0 i y nie jest cakowite

Przykad
Oblicz wyraenie podane poniej oraz wywietl dodatkowe dwa wyniki: zaokrglone w dl, oraz w gr do liczb cakowitych.
y = ( (1/2) * sin^2(0.45) + 2 * tg(sqrt(2)) ) / ( log10(14) + 2 * e^4 )

Odpowied
#include <stdio.h>
#include <math.h>
main ()
{
double y, licznik, mianownik;
double yGora, yDol;
double p1, p2, p3, p4;
// Zmienne pomocnicze p1 = pow(sin(0.45), 2);
p2 = tan(sqrt(2));
p3 = log10(14);
p4 = exp(4);
licznik = (1.0/2)*p1 + 2*p2;
mianownik = p3 + 2*p4;
y = licznik / mianownik;
yGora = ceil(y);
yDol = floor(y);
}
printf("y = \t\t\t%f\n", y);
printf("Zaokraglone w gore: \t%f\n", yGora);
printf("Zaokraglone w dol: \t%f\n", yDol);
Listing 2.3.12 Odpowied do zadania 38
Zdecydowanie atwiej utworzy zmienne pomocnicze, w ktrych wpiszemy pojedyncze operacje,
anieli wpisywa ca formu do zmiennej y. Jestem prawie pewien, e w pewnym momencie pogubiby si z nawiasami. Oczywicie sposb ten nie jest z gry narzucony, ma za zadanie uatwi napisanie tego kodu.
39
3 Sterowanie programem Przez sterowanie programem rozumie si zbir instrukcji, ktre potrafi wybiera w zalenoci od danego im parametru czy maj zrobi to, czy co innego. W przykadzie z listingu 2.3.1 pokazana zostaa instrukcja warunkowa ifelse, ktra w tym rozdziale zostanie omwiona bardziej szczegowo.
3.1 Instrukcja if else Pomimo i wyraenia warunkowe byy uywane w poprzednim rozdziale, to pozwol sobie opisa tutaj dokadniej zasad ich dziaania, oraz pokaza drugi sposb sprawdzania warunku. A wic zacznijmy od skadni instrukcji ifelse ktra wyglda nastpujco:
if (wyrazenie)
akcje1 else
akcje2 Gdzie jako wyrazenie mona wstawi dowolny warunek z uyciem operatorw logicznych,
przyrwnania oraz relacji. A jako akcje dowolne instrukcje wykonywane przez program (przypisanie wartoci zmiennym, wywietlenie komunikatu, itp). Dla przykadu wemy trywialny przykad, ktry sprawdza czy liczba a jest wiksza lub rwna liczbie b i drukuje odpowiedni komunikat. Listing 3.1.1 prezentuje kod programu.
#include <stdio.h>
main ()
{
    int a = 4;
    int b = 7;

    if (a >= b)
        printf("a wieksze lub rowne b\n");
    else
        printf("a mniejsze niz b\n");
}
Listing 3.1.1 Uycie instrukcji if else.
Naszym wyraeniem warunkowym na listingu 3.1.1 jest: a >= b. Jeli jest to prawda, czyli warto zmiennej a jest wiksza lub rwna wartoci zmiennej b, to to wyraenie przyjmuje warto jeden,
w przeciwnym wypadku zero. Co to znaczy? Znaczy to tyle, i gdybymy przypisali to wyraenie do zmiennej pomocniczej, np. w ten sposb:
int z = a >= b;
to zmienna z posiadaaby warto jeden (jeli warunek speniony) lub zero (jeli warunek nie speniony). W ten sposb moemy wstawi zmienn pomocnicz do instrukcji ifelse w ten sposb:
if (z)
if (1) jest prawdziwe1, czyli wykonaja si akcje1, czyli napis: a wiksze lub rwne b zostanie wydrukowany. If (0) jest faszywe, czyli nie wykonaj si akcje1, tylko te akcje, ktre s po sowie kluczowym else (jeli istnieje o tym za chwil), czyli akcje2.
Jeli chcemy aby wykonao si wicej instrukcji ni jedna (np. wywietlenie tekstu i przypisanie zmiennej pewnej wartoci kod poniej), jestemy zmuszeni uy nawiasw klamrowych po sowie kluczowym if i/lub else.
if (wyrazenie)
{
printf("Dowolny napis\n");
i++;
}
else
{
printf("Dowolny napis\n");
i--;
}
Mona w ogle nie uywa sowa kluczowego else, w razie nie spenionego warunku, nic si nie stanie, program przechodzi do wykonywania kolejnych instrukcji.
Jeli zagniedamy instrukcje ifelse, to musimy pamita o bardzo istotnej rzeczy, a mianowicie instrukcja else zawsze naley do ostatniej instrukcji if. Czyli w podanym niej przykadzie instrukcja else wykona si wtedy, kiedy zmienna n bdzie wiksza od 2 oraz nie parzysta.
1 if (x) jest prawdziwe dla wszystkich wartoci x, z wyjtkiem zera.
#include <stdio.h>
main ()
{
    int n = 3;

    if (n > 2)
        if (n % 2 == 0)
            printf("%d jest parzysta\n", n);
        else
            printf("%d jest nie parzysta\n", n);
}
Listing 3.1.2 Zagniedone instrukcje if else.
Jeli chcielibymy, aby zagniedzona instrukcja if nie posiadaa czci else, a za to gwny if takow posiada musimy uy nawiasw klamrowych. Listing poniej pokazuje jak to zrobi.
#include <stdio.h>
main ()
{
    int n = 2;

    if (n > 2)
    {
        if (n % 2 == 0)
            printf("%d jest parzysta\n", n);
    }
    else
        printf("Liczba: %d jest mniejsza lub rowna 2\n", n);
}
Listing 3.1.3 Zagniedone instrukcje if else kontynuacja Istnieje trzy argumentowy operator ?:, ktrego posta wyglda nastpujco:
wyrazenie1 ? wyrazenie2 : wyrazenie3 Operator ten dziaa w sposb nastpujcy, obliczane jest wyrazenie1 i jeli jest prawdziwe (rne od zera), to obliczane jest wyrazenie2 i ono staje si wartoci caego wyraenia. W przeciwnym wypadku obliczane jest wyrazenie3 i ono jest wartoci caego wyraenia. Poniszy listing demonstruje dziaanie operatora ?:.
#include <stdio.h>
main ()
{
int a = 3, z;
z = (a > 5) ? 5 : (a + 6);
printf("%d\n", z);
// 9
}
Listing 3.1.4 The three-argument operator ?:
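As a small additional illustration (my own sketch, not part of the original listings), the ?: operator often replaces a short if-else, for example when picking the larger of two numbers:

#include <stdio.h>

int main (void)
{
    int a = 3, b = 8, max;

    /* equivalent to: if (a > b) max = a; else max = b; */
    max = (a > b) ? a : b;
    printf("%d\n", max);    // 8
    return 0;
}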
Program control also offers one more construction related to if-else, namely else-if. Its general form is shown below, followed by a description and an example.

if (wyrazenie1)
    akcje1
else if (wyrazenie2)
    akcje2
else
    akcje3

As you can see, this is simply an extended if-else statement and it works almost identically. First wyrazenie1 is checked; if it is false, the program has an additional expression to check, wyrazenie2. If that one is true, akcje2 are executed, otherwise akcje3 are executed. The point is that a single if-else statement may contain several conditions to check, so you can chain as many else-if parts as you need. As soon as the program finds any of the expressions to be true, the statements associated with that expression, and only those, are executed; the rest are skipped. Below is an example using the else-if construction.
#include <stdio.h>
main ()
{
int n = 24;
if (n >= 0 && n <= 10)
printf("%d zawiera sie w przedziale: <%d; %d>\n", n, 0, 10);
else if (n >= 11 && n <= 20)
printf("%d zawiera sie w przedziale: <%d; %d>\n", n, 11, 20);
else if (n >= 21 && n <= 30)
printf("%d zawiera sie w przedziale: <%d; %d>\n", n, 21, 30);
else if (n >= 31)
printf("Liczba %d jest wieksza lub rowna 31\n", n);
else printf("Liczba jest ujemna\n");
}
Listing 3.1.5 Using the else-if construction

As you can see, the first condition is false and so is the second. The third condition is true, so the statement printing the message that the number lies in the interval <21; 30> is executed, and only that statement; the remaining conditions are not checked at all.
3.2 The switch statement

The switch statement is also used for making decisions. One could say it resembles the else-if construction, although it is written somewhat differently. Its general form is shown below.
switch (wyrazenie)
{
case wyrazenie_stale1: akcje1;
case wyrazenie_stale2: akcje2;
default: akcje3;
}
The following example shows how the statement is used and how it works.
#include <stdio.h>

main ()
{
    int n = 3;

    switch (n)
    {
        case 1:
            printf("Poniedzialek\n");
            break;
        case 2:
            printf("Wtorek\n");
            break;
        case 3:
            printf("Sroda\n");
            break;
        case 4:
            printf("Czwartek\n");
            break;
        case 5:
            printf("Piatek\n");
            break;
        case 6:
            printf("Sobota\n");
            break;
        case 7:
            printf("Niedziela\n");
            break;
        default:
            printf("Nie ma takiego dnia\n");
            break;
    }
}
Listing 3.2.1 Using the switch statement

The switch statement works as follows. After the keyword switch comes an argument in parentheses, which may take any value. Between the braces there are a number of cases (case labels, which are constant expressions and must not depend on variables). Given the argument, the statement checks whether the passed value equals the value of one of the listed cases and, if so, executes the actions defined after the colon. The break statement after each set of actions is very important, because it terminates the switch once the planned actions have been carried out.

If we did not use break, the statement below would be executed too, and then the next one, and so on, all the way down to the closing brace of the switch.

The default case means that none of the cases could be matched (the argument had a value other than those defined by the case labels). The default case may be omitted; then, if the argument differs from all the listed cases, no action is taken.

Section 2.2.6 (enumeration constants) promised that the use of enumeration constants would be shown in this chapter, so listing 3.2.2 combines the switch statement with enum.
#include <stdio.h>

main()
{
    enum tydzien {PON = 1, WT, SR, CZ, PT, SOB, ND};
    int n = 10;

    switch (n)
    {
        case PON:
            printf("Poniedzialek\n");
            break;
        case WT:
            printf("Wtorek\n");
            break;
        case SR:
            printf("Sroda\n");
            break;
        case CZ:
            printf("Czwartek\n");
            break;
        default:
            printf("Nie ma takiego dnia\n");
            break;
        case PT:
            printf("Piatek\n");
            break;
        case SOB:
            printf("Sobota\n");
            break;
        case ND:
            printf("Niedziela\n");
            break;
    }
}
Listing 3.2.2 Using switch together with enum

It works the same way, except that instead of digits we used the enumeration constants defined in the fifth line with enum, which was described in section 2.2.6. You can also see that the default case does not have to come last; the order of the labels is arbitrary. Even though a break after the last case is not required, it is better to write it: after adding another case you may forget about it, and finding that bug later can take some time.
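To see what forgetting a break actually does, here is a small sketch of my own (not from the original text): with n equal to 2 and no break statements, execution enters case 2 and then falls through all the remaining labels.

#include <stdio.h>

int main (void)
{
    int n = 2;

    switch (n)
    {
        case 1:
            printf("one\n");
        case 2:
            printf("two\n");       /* execution starts here ... */
        case 3:
            printf("three\n");     /* ... falls through here ... */
        default:
            printf("default\n");   /* ... and ends up here */
    }
    return 0;
}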
Another useful feature is the possibility of specifying a range of values when defining a case. Listing 3.2.3 shows this behaviour.
#include <stdio.h>
main()
{
int n = 31;
switch (n)
{
case 0 ... 10 :
printf("%d zawiera sie w przedziale: ", n);
printf("<%d; %d>\n", 0, 10);
break;
case 11 ... 20 : printf("%d zawiera sie w przedziale: ", n);
printf("<%d; %d>\n", 11, 20);
break;
case 21 ... 30 : printf("%d zawiera sie w przedziale: ", n);
printf("<%d; %d>\n", 21, 30);
break;
default : if (n >= 31)
printf("Liczba %d jest wieksza lub rowna 31\n", n);
else printf("Liczba jest ujemna\n");
break;
}
}
Listing 3.2.3 Using switch with a range in a case label

As you can see, the program works just like the one in listing 3.1.5. We still had to use an if statement, because the ranges must be fixed in advance; so in the default case we check whether the number is greater than or equal to 31, and if that is false the number is negative.

Note that the three dots must be separated from the range bounds by spaces. If they are not, the program will not compile. (Case ranges are a GCC extension, not part of standard C.)
3.3 Loops

Notice that if we need to perform a number of operations which really differ only in their argument, we do not have to write all of those statements out in the program. It is enough to use one of the available ways of looping over a fragment of code and put the statement there.

Examples of loop usage are shown in the following subsections, one for each kind of loop. Every loop can accomplish exactly the same things, so why are there several of them? Because some tasks are simply easier to express with one construction than with another.

These examples use the scanf function, which has not been introduced yet; it reads values from the user and is discussed in more detail in chapter 8, Input and output operations. In each of these loop statements curly braces can be used to group a larger number of actions.

3.3.1 for

The for loop has the following form:

for (wyrazenie1; wyrazenie2; wyrazenie3)
    akcje

The code below shows the for loop in use. The user enters a lower and an upper bound, and the program counts how many even numbers lie between them and prints those numbers.
#include <stdio.h>
main ()
{
int dolnaGranica, gornaGranica;
int i, ilParz = 0;
printf("Dolna granica: ");
scanf("%d", &dolnaGranica);
printf("Gorna granica: ");
scanf("%d", &gornaGranica);
printf("Pomiedzy zakresem: <%d; %d> ", dolnaGranica, gornaGranica);
printf("liczbami parzystymi sa: ");
for (i = dolnaGranica; i <= gornaGranica; i++)
if (i % 2 == 0)
{
ilParz++;
printf("%d ", i);
}
printf("\nOgolna ilosc liczb parzystych z tego zakresy to: %d\n",
ilParz);
}
Listing 3.3.1.1 Using a for loop to count even numbers

We define the variables that will hold the lower and upper bound; their names make it obvious which is which. The variable i controls the loop. The variable ilParz holds the count of even numbers, and we must set it to zero before use; otherwise we would most likely get a different result than expected, because variables may contain garbage (discussed in section 2.2.3).

The scanf function is covered in detail in the chapter on input and output operations. For now we only need to know that it reads data from the user (without any safeguards it is not very practical, because entering a letter instead of a number makes the program misbehave).

The for loop should be read as follows:

i = dolnaGranica        assign to the control variable the value entered by the user
i <= gornaGranica       the condition checked before every iteration
i++                     increment the control variable

When the program enters the loop, the first thing it does is assign the value entered by the user as the lower bound to the control variable. Then the condition is checked: is i less than or equal to the upper bound? If so, the actions (here the if statement) are executed. If there is more than one action, the actions are enclosed in curly braces, just as with if.

After the actions are executed the counter is incremented with i++ and the condition is checked again; as soon as the condition is false, the loop ends.

Inside the for loop there is an if statement which checks whether the current value of the control variable i is divisible by two without remainder; if it is, the number is even, ilParz is incremented by one and the even number is printed on the screen.

Each of the three expressions of a for statement may be omitted. If the condition is omitted, it is treated as always true. To obtain an infinite loop we can therefore write:
for (;;)
{
...
}
Such a loop can be terminated with the break statement, or with return.
If we omit the first component, the initialisation of the control variable, the variable should be set by hand just before the loop, or should already hold the expected value (in our case the one entered by the user). Incrementing the counter can be part of the loop body, so the for statement sketched below behaves exactly like the one in listing 3.3.1.1.
i = dolnaGranica;
for (; i <= gornaGranica;)
{
// w tym miejscu instrukcja if {...}
i++;
}
Loops can be nested. Nested loops are used for operations on multidimensional arrays (arrays are covered in chapter 5). An example of nested for loops, a multiplication table, is shown below.
#include <stdio.h>
main ()
{
int i, j;
for (i = 0; i <= 10; i++)
for (j = 0; j <= 10; j++)
if (!i && !j)
printf("*\t");
else if (!i && j)
printf("%d%c", j, (j == 10) ? '\n' : '\t');
else if (i && !j)
printf("%d%c", i, (j == 10) ? '\n' : '\t');
else printf("%d%c", i * j, (j == 10) ? '\n' : '\t');
}
Listing 3.3.1.2 Nested loops

The example may look scary, but I will try to discuss it in as much detail as I can. Below is table 3.1.1, which is in fact the multiplication table, but with an extra first row and an extra first column holding the iteration numbers, i.e. the values taken by the control variables i and j. For readability the table is shown in full below.

So let us begin. The first (outer) loop, whose control variable is i, is responsible for the rows; the inner loop (control variable j) manipulates the columns. First i takes the value zero (row zero1), then the inner loop runs 11 times (columns j = 0, ..., j = 10). When j reaches eleven the inner loop terminates, the counter i is incremented, and the inner loop runs again. These operations repeat as long as the condition of the outer loop is true.

1 By row zero we mean the row in which i = 0.
       j=0  j=1  j=2  j=3  j=4  j=5  j=6  j=7  j=8  j=9  j=10
i=0     *    1    2    3    4    5    6    7    8    9   10
i=1     1    1    2    3    4    5    6    7    8    9   10
i=2     2    2    4    6    8   10   12   14   16   18   20
i=3     3    3    6    9   12   15   18   21   24   27   30
i=4     4    4    8   12   16   20   24   28   32   36   40
i=5     5    5   10   15   20   25   30   35   40   45   50
i=6     6    6   12   18   24   30   36   42   48   54   60
i=7     7    7   14   21   28   35   42   49   56   63   70
i=8     8    8   16   24   32   40   48   56   64   72   80
i=9     9    9   18   27   36   45   54   63   72   81   90
i=10   10   10   20   30   40   50   60   70   80   90  100
Table 3.1.1 The multiplication table together with helper information

The conditions in the inner loop control how the table is filled in. The first condition, if (!i && !j), is satisfied when i is zero and j is zero at the same time. It is satisfied exactly once, and then an asterisk is printed (the top-left cell). The else-if construction lets us check further conditions: else if (!i && j) reads "i is zero and j is different from zero", which corresponds exactly to row zero (the header row). When this condition holds, the row is filled with the values of j. Here, as in the next two printf calls, the three-argument operator is used to start a new line after the tenth column and to put a tab between columns.

The next condition, else if (i && !j), reads "i is different from zero and j is zero", which corresponds to column zero (the header column); when it holds, the column is filled with the values of i. If none of the above conditions is satisfied, the multiplication table itself is filled in, row by row, with the values i * j.
3.3.2 while

In some situations the while loop is a better choice than for, for example when we do not know in advance how many times the loop will run. The while loop has the following form:

while (warunek)
    akcje

It works as follows: as long as the condition is satisfied, the actions are executed; otherwise they are not. As an example, the listing below reads characters typed by the user and then prints them one by one, each on a new line. The example may not be very ambitious, but it shows that for this kind of task while is a better fit than for.
#include <stdio.h>
main ()
{
int ch;
while ((ch = getchar()) != '\n')
printf("Znak: %c\n", ch);
}
Listing 3.3.2.1 Using the while loop

A word of explanation about the getchar function, which has not been used so far. It reads a character from the user; if there are more characters, and usually there are, because Enter (the newline character) is a character too, they are stored in an area of memory called the buffer. To print all characters from the buffer we can use a while loop which, declared as in listing 3.3.2.1, finishes only when it reaches the newline character. The picture below may make this easier to understand; suppose the characters entered are "Ala ma kota".
[Figure: the input buffer holding the characters A l a   m a   k o t a \n; each call of while ((ch = getchar()) != '\n') takes the next character from the buffer; every character is different from '\n' until the final '\n' is reached, which stops the loop.]
Fig. 3.3.2.1 Applying the while loop

The loop works as follows: it goes through the characters in the buffer one by one and prints them, as long as they are different from the newline character (\n).
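As a small variation of my own on the same idea (not from the original text), the loop condition itself can do useful work, for example counting how many characters were typed before Enter:

#include <stdio.h>

int main (void)
{
    int count = 0;

    /* take characters from the buffer until the newline character is reached */
    while (getchar() != '\n')
        count++;
    printf("Characters before Enter: %d\n", count);
    return 0;
}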
3.3.3 do while

The do-while loop works like the while loop, with the difference that the condition is checked after the actions have been performed; consequently the loop body executes at least once. The do-while loop has the following form:

do
    akcje
while (warunek);

As an example, in the listing below the user enters a number and, as long as it differs from 19, is told whether it is too small, too large, or outside the allowed range.
#include <stdio.h>
main ()
{
int liczba;
do
{
printf(": ");
scanf("%d", &liczba);
switch (liczba)
{
case 1 ... 18 : printf("Za mala\n");
break;
case 20 ... 40 : printf("Za duza\n");
break;
default:
if (liczba == 19)
printf("Trafiles!\n");
else printf("Nie z tego zakresu!\n");
break;
}
} while (liczba != 19);
}
Listing 3.3.3.1 Using the do-while loop

The lack of input validation means the program cannot work flawlessly; the point here was only to show the do-while loop itself, which, as you can see, first performs the actions (reading the data, checking the conditions, printing the message) and only then checks whether the number read differs from 19. If it does, the actions are performed again. The loop stops once the number 19 is entered.

3.4 The break statement

We have already met the break statement in switch, where it terminated that statement. Using break in the for, while and do-while loops is analogous: it immediately terminates the loop and the program moves on to the statements (if any) following the loop. The listing below is an example of using break.
#include <stdio.h>
main ()
{
int i, granica = 30;
for (i = 0; ;i++)
if (i == 30)
break;
else printf("%d%c", i, (i == granica - 1) ? '\n' : ' ');
}
Listing 3.4.1 Using the break statement

As you can see, the for loop has no second expression (no termination condition). If we had not used break, the loop would run forever.

One thing must be remembered: break terminates only the loop in which it is written. If break appears in a nested loop, it terminates only that inner loop; the outer loop keeps running as before.
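A minimal sketch of that last remark (my own example, not from the original text): the break below leaves only the inner loop, so the outer loop still runs all three times.

#include <stdio.h>

int main (void)
{
    int i, j;

    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)
        {
            if (j == 1)
                break;      /* terminates only the inner loop */
            printf("i = %d, j = %d\n", i, j);   /* prints j = 0 for every i */
        }
    return 0;
}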
3.5 The continue statement

The continue statement differs a little from break. It also interrupts something, but not the whole loop, only the currently executed iteration: the actions below the continue statement are skipped. After continue executes, in a for loop the control variable is incremented, while in while and do-while loops the condition is checked.

As an example, the listing below shows that continue really does abandon the rest of the current iteration.
#include <stdio.h>
main ()
{
int i;
int przed = 0, po = 0;
for (i = 0; i < 10; i++)
{
przed++;
continue;
po++;
}
printf("Przed: %d\n", przed);
printf("Po: %d\n", po);
// 10
// 0
}
Listing 3.5.1 Using the continue statement
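A more typical use of continue (my own sketch, not from the original text) is to skip the iterations we are not interested in, for example printing only the odd numbers:

#include <stdio.h>

int main (void)
{
    int i;

    for (i = 0; i < 10; i++)
    {
        if (i % 2 == 0)
            continue;       /* skip even numbers, jump straight to i++ */
        printf("%d\n", i);  /* prints 1 3 5 7 9 */
    }
    return 0;
}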
3.6 The goto statement and labels

The C language has a goto statement which, at the point where it is used, performs an unconditional jump to the place marked with a given label. Its syntax is:

goto nazwa_etykiety;

nazwa_etykiety: akcje

The example below uses goto to print the digits from 0 to 9, and the listing after it shows a for loop doing exactly the same thing. The lengths of the two listings speak for themselves as to which version is preferable. Nevertheless, goto is sometimes used to escape from deeply nested loops, although in most cases one can do without it. With goto we can only jump to labels defined in the same function from which the jump is made.
#include <stdio.h>

main ()
{
    int i = 0;

start:
    if (i >= 10)
        goto stop;
    else
    {
        printf("%d\n", i);
        goto zwieksz;
    }

zwieksz:
    i++;
    goto start;

stop:
    printf("Koniec\n");
}

Listing 3.6.1 A for loop simulated with goto
#include <stdio.h>
main ()
{
int i;
for (i = 0; i < 10; i++)
printf("%d\n", i);
printf("Koniec\n");
}
Listing 3.6.2 Printing the digits from 0 to 9
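Since the text mentions that goto is occasionally used to escape deeply nested loops, here is a minimal sketch of that idiom (my own example, with a made-up search condition):

#include <stdio.h>

int main (void)
{
    int i, j;

    for (i = 0; i < 10; i++)
        for (j = 0; j < 10; j++)
            if (i * j == 12)
                goto found;     /* leaves both loops at once */
found:
    printf("i = %d, j = %d\n", i, j);   /* first pair found: i = 2, j = 6 */
    return 0;
}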
4 Functions

Every program consists of functions. So far we have used functions defined in the standard library. In this chapter I will try to explain how to write your own functions, how to split functions across files, how to call them, and so on.

Functions are extremely useful: if we have to perform some operation twice with different values, there is no point writing essentially the same code twice when we can write it once and call the function twice with different arguments. This saves time and space and, above all, makes the code more readable. The following subsections contain everything we need to start using functions.

4.1 The general form of a function; functions returning integers

The general form of a function is:

typ_zwracanej_wartosci nazwa_funkcji (parametry funkcji, jesli wystepuja)
{
    cialo funkcji
}

The return type is one of the types available in C (int, float, etc., or void, about which more later).

The function name is the name we will use to refer to our function. The rules for naming functions are the same as the rules for naming variables.

Function parameters: a function may take parameters, and if so, we write the type names together with the variable names, separated by commas, between the round brackets.

The body of a newly written function, just like the body of main, consists of statements of various kinds.

We call a function by giving its name. If it takes arguments, we list them, separated by commas, between round brackets; if it takes no arguments, we leave the brackets empty. To make all of this clear, listing 4.1.1 contains two functions: main and nwd1.
#include <stdio.h>

int nwd (int liczba1, int liczba2);

main ()
{
    printf("NWD(%d, %d) = %d\n", 10, 14, nwd(10, 14));
    printf("NWD(%d, %d) = %d\n", 28, 14, nwd(28, 14));
    printf("NWD(%d, %d) = %d\n", 100, 30, nwd(100, 30));
    printf("NWD(%d, %d) = %d\n", 1024, 64, nwd(1024, 64));
    return 0;
}

int nwd (int liczba1, int liczba2)
{
    int c;
    while (liczba2 != 0)
    {
        c = liczba1 % liczba2;
        liczba1 = liczba2;
        liczba2 = c;
    }
    return liczba1;
}
Listing 4.1.1 The greatest common divisor function

Let us start from the beginning. The declaration int nwd (int liczba1, int liczba2); is called the function prototype. The prototype2 tells the compiler what type the function returns and what types of parameters it takes. As you can see, nwd takes two integer parameters. In a prototype the parameter names are not actually required, the types alone are enough, so int nwd (int, int); is equivalent. If the return type is omitted, the function is treated as returning int.

I will come back to the function call in a moment; first the new function nwd itself. The first line of the definition of nwd must match the prototype, except that here the parameter names must be given, because the function creates local variables with exactly these names and operates on them. Arguments are passed to the function by value, which means that the values of the arguments are copied and assigned to the local variables (the function parameters). The values passed in from the calling function therefore do not change. Another way of passing arguments is described in chapter 5.

The prototype ended with a semicolon; the definition has none. The function body is enclosed in curly braces.

The body of nwd contains the actual computation (Euclid's algorithm). The important part is the statement return liczba1;, which returns an integer value (integer, because the declared return type of this function is int) to the place where the function was called.

In main we print the greatest common divisor of four pairs of numbers. I hope you can already see the advantage of using functions: a function written once has been called four times with different arguments.

As you have probably noticed, main also contains the statement return 0;. This is no accident: main should also return a value to the place it was called from (the operating system).

A function may return a value that is never used. That is of course pointless, but nothing prevents us from calling nwd without assigning the result to any variable. If we do assign the result of a call to a variable, we have to pay attention to the types involved.

Our nwd function was called as an argument of printf; after the value is computed and returned, the result is printed on the screen.

1 Największy wspólny dzielnik (greatest common divisor), a function based on Euclid's algorithm.
2 The usefulness of function prototypes is explained in section 4.2.
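To illustrate the remark about matching types when assigning the result of a call, here is a short sketch of my own that stores the value returned by nwd in an int variable before printing it:

#include <stdio.h>

int nwd (int liczba1, int liczba2);

int main (void)
{
    int wynik;

    wynik = nwd(10, 14);    /* nwd returns int, so an int variable matches */
    printf("NWD(10, 14) = %d\n", wynik);
    return 0;
}

int nwd (int liczba1, int liczba2)
{
    int c;
    while (liczba2 != 0)
    {
        c = liczba1 % liczba2;
        liczba1 = liczba2;
        liczba2 = c;
    }
    return liczba1;
}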
4.2 Functions returning real values

There are also functions that return values other than integers. Many functions from the standard library (e.g. math.h), such as sin or cos, return values of type double. The definition is analogous, so I will not repeat it; the difference is simply that the return type is one of the real types used in C. As an example, listing 4.2.1 defines the function sum_geom, which computes the sum of the first n terms of a geometric sequence. When calling it, the arguments must be given in this order: a1 (the first term), q (the common ratio), n (the number of terms).
#include <stdio.h>
#include <math.h>
double sum_geom(double a1, double q, unsigned n);
main()
{
unsigned n = 200;
double a1 = -2.0;
double q = 0.25;
printf("Dla ciagu o parametrach: a1 = %.2f, q = %.2f\n", a1, q);
printf("Suma %u poczatkowych wyrazow = %f\n", n, sum_geom(a1, q, n));
return 0;
}
double sum_geom(double a1, double q, unsigned n)
{
double sum;
if (q == 1)
return n*a1;
else sum = a1 * (1 - pow(q, n))/(1 - q);
return sum;
}
Listing 4.2.1 Sum of a geometric sequence

In the example above, main calls sum_geom, which uses the formulas for the sum of a geometric sequence to compute the sum of the first n terms. The computed value is substituted at the point of the call and printed on the screen.

If you have been wondering what function prototypes are for, let me try to explain it here. In fact a function would not need a prototype at all if it is in the same file as the function that calls it and its definition appears above the calling function. The following example shows that such a program compiles without errors and prints the correct result.
#include <stdio.h>

double podziel(double a, double b)
{
    return a/b;
}

main ()
{
    printf("4/9 = %f\n", podziel(4.0,9.0));     // 0.444444
    return 0;
}

Listing 4.2.2 Without a function prototype
If, however, the function podziel is placed below main and there is no prototype, the compiler reports the following errors and warnings:

bez_prototypu.c: In function main:
bez_prototypu.c:6: warning: format %f expects type double, but argument 2 has type int
bez_prototypu.c: At top level:
bez_prototypu.c:10: error: conflicting types for podziel
bez_prototypu.c:6: note: previous implicit declaration of podziel was here

The warning that the %f format expects a real type while the argument is an integer may seem odd. Why does the compiler say that, when the function is declared as returning double? The answer is as follows.

If there is no prototype (i.e. no explicit information about what type the function returns), the function is implicitly declared at the first place it appears, which here is printf("4/9 = %f\n", podziel(4.0,9.0));. As mentioned earlier, when the return type is not declared, the function is treated as if it returned int. Because podziel has not been declared anywhere earlier, it is declared exactly in this line. And since we want to print a real number, we used the %f conversion, which expects a real value but, as far as the compiler is concerned, "receives" an integer.

The word "receives" is in quotation marks because the program did not compile, so in reality it received nothing. The answer to the previous question is closely related to the error in line ten about conflicting types for podziel. As you can probably guess, the declaration and the definition of a function (declaration plus body) must have the same type. Our function was treated as returning int, while its definition says something else, namely that it returns double; hence the compiler reports this error.

One way to make this compile correctly is to declare the prototype globally; another is to declare the prototype inside the function from which the other function is called. The listings below compile without the errors and warnings discussed above.
#include <stdio.h>

double podziel (double a, double b);

main ()
{
    printf("4/9 = %f\n", podziel(4.0,9.0));     // 0.444444
    return 0;
}

double podziel(double a, double b)
{
    return a/b;
}

Listing 4.2.3 Declaring the prototype globally
#include <stdio.h>

main ()
{
    double podziel (double a, double b);

    printf("4/9 = %f\n", podziel(4.0,9.0));     // 0.444444
    return 0;
}

double podziel(double a, double b)
{
    return a/b;
}

Listing 4.2.4 Declaring the prototype inside main

4.3 Functions that return no value, and the absence of arguments

C also has functions that do not return any value. The chapter on arrays and pointers shows a wider use of such functions than what is presented here. The difference in the definition between functions that return nothing and those that return a value is this: as the return type we write the keyword void. As an example, listing 4.3.1 prints numbers with a time delay between them. The example is admittedly not very practical, but the point is to show how such a function is declared, and the absence of arguments, which I will discuss in a moment.
#include <stdio.h>
void poczekaj(void);
main (void)
{
int i;
printf("Drukuj liczby z odstepem czasowym\n");
for (i = 1; i < 50; i++)
{
printf("%d\n", i);
poczekaj();
}
return 0;
}
void poczekaj(void)
{
int i;
for (i = 0; i < 25E6; i++)
;
}
Listing 4.3.1 Using a function that returns no value

The first thing that differs from the previous programs is the keyword void between the parentheses of both poczekaj and main. It is used to tell the compiler that the function takes no arguments, which gives us better type checking. From now on, in programs containing functions that take no arguments we will write the keyword void in the parameter list instead of leaving the parentheses empty.
4.4 Header files

If the program we are writing contains many functions of our own, it is worth considering whether it would not be better to split it into several files, each responsible for a specific task. The example I am about to show is of course artificial, because the program is very small and everything could easily live in one file. Nevertheless I want to demonstrate the things one has to remember when splitting a program's code across several files.

File name         Contents                                                      Listing
main.c            Function calls                                                4.4.1
prototypes.h      Function prototypes, global variable, symbolic constant       4.4.2
pobierztekst.c    Function definition                                           4.4.3
wyswietltekst.c   Function definition, global variable                          4.4.4
skopiujtekst.c    Function definition                                           4.4.5

Table 4.4.1 Splitting the program into files

So let us start from the beginning: the files main.c and prototypes.h look like this.
#include <stdio.h>
#include "prototypes.h"

main (void)
{
    char line[MAXLENGHT];
    extern int id;

    pobierzTekst(line);      /* Read text into line[] */
    wyswietlTekst(line);     /* Print the text from line[] */
    skopiujTekst(line);      /* Copy the text from line[] into bufor[] */
    wyswietlTekst(bufor);    /* Print the text from bufor */
    printf("%d\n", id);
    return 0;
}

Listing 4.4.1 main.c
#define MAXLENGHT 1000

char bufor[MAXLENGHT];

void pobierzTekst(char line[]);
void wyswietlTekst(char line[]);
void skopiujTekst(char line[]);
Listing 4.4.2 prototypes.h

A few details in these two listings deserve an explanation. First, the second line of listing 4.4.1, #include "prototypes.h", tells the preprocessor to include the file prototypes.h located in the same directory as main.c.

In main we declare an array of characters line of length MAXLENGHT, a constant defined in prototypes.h. Because we included the file with the prototypes, which contains this symbolic constant, we can use it when creating the array. The next line declares an external variable whose definition is in another file (wyswietltekst.c). The keyword extern is needed here; it would be unnecessary if the whole program were in a single file, as in listing 2.2.5. The following lines are calls to functions located in other files. The line wyswietlTekst(bufor); takes the global variable as an argument; in this case we do not have to declare it beforehand with extern, because the definition of bufor was pulled in together with prototypes.h. More about the keyword extern can be found in section 4.5. The next three listings are the definitions of the functions used in main.
#include <stdio.h>
#include "prototypes.h"

void pobierzTekst(char line[])
{
    int c;
    int i = 0;

    while ((c = getchar()) != EOF && c != '\n' && i < MAXLENGHT - 1)
        line[i++] = c;
    line[i] = '\0';
}

Listing 4.4.3 pobierztekst.c
#include <stdio.h>
int id = 10;
void wyswietlTekst(char line[])
{
printf("%s\n", line);
}
Listing 4.4.4 wyswietltekst.c
#include "prototypes.h"
void skopiujTekst(char line[])
{
int i = 0;
int c;
while ((c = line[i]) != '\0')
bufor[i++] = c;
bufor[i] = '\0';
}
Listing 4.4.5 skopiujtekst.c
Now one function at a time. pobierzTekst takes an array of characters as its argument (arrays are discussed in chapter 5). We create a helper variable c to which we assign the result of getchar, which reads characters from the user. If c (the character read) is different from EOF (End Of File, a symbolic constant marking the end of a file, which is why we had to include stdio.h), different from the newline character, and i is less than MAXLENGHT - 1, the character is stored in the array at position i, and after the assignment i is incremented by 1. When the condition fails, the end-of-string character is written into the array at position i (whose value was incremented in the previous step).

wyswietlTekst simply prints text; it takes an array of characters as its argument.

skopiujTekst also takes an array of characters as its argument. We create a helper variable i with initial value zero and a variable c, to which we assign the character of the array at position i, checking whether it differs from the end-of-string character. If it does, that character is written into the array bufor at position i and i is incremented by 1. When c equals '\0' the loop stops and the end-of-string character is written into bufor at position i.
To compile all these files it is enough to type:

$ gcc main.c pobierztekst.c skopiujtekst.c wyswietltekst.c -o main

or the shorter version, provided the directory contains only the .c files discussed in this section:

$ gcc *.c -o main

4.4.1 Conditional compilation

Conditional compilation amounts to a simple check performed while the code is being translated: if a certain control parameter has one value, compile this piece of code; if it has another value, do not compile it but compile something else instead. The example below shows what I mean by a control parameter.
#include <stdio.h>

#if KONTROLA == 1
double z2 = 10.0;
#elif KONTROLA == 2
double z2 = 15.0;
#elif KONTROLA == 3
double z2 = 20.0;
#else
#define X
#endif

main (void)
{
#if !defined(X)
    printf("%.1f\n", z2);
#endif
    return 0;
}
Listing 4.4.1.1 Conditional compilation

The #elif construction follows the same idea as else-if. The condition checks whether KONTROLA equals 1, 2 or 3; if so, the real variable z2 is defined with the corresponding value. If the control constant has a different value, or is not defined at all, X is defined with #define.

The expression defined(X) has the value one if X has been defined and zero otherwise. We used negation, so the printf statement is compiled only if X has not been defined (i.e. KONTROLA has one of the listed values).

You are probably wondering why KONTROLA is not defined anywhere. It can be done in two ways. The first is to put the definition #define KONTROLA WARTOSC at the top of the file. The second is to use a special option during compilation, which I will describe now.

With the -D option of gcc we can set the value of a macro at compile time. In our case, to make the program print 15.0 we compile it as follows:

$ gcc nazwa_pliku.c -o nazwa_pliku -D KONTROLA=2

The main function from listing 4.4.1.1 could also be written slightly differently. The preprocessor has the built-in directives #ifdef and #ifndef, meaning "if defined" and "if not defined" respectively, so the body of our main could look like this:
#ifndef X
    printf("%.1f\n", z2);
#endif
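A very common practical use of #ifndef, #define and #endif, not shown in the original text but worth knowing, is an include guard that protects a header file from being included twice; sketched here by me for the book's prototypes.h:

/* prototypes.h protected by a hypothetical include guard */
#ifndef PROTOTYPES_H
#define PROTOTYPES_H

#define MAXLENGHT 1000

char bufor[MAXLENGHT];

void pobierzTekst(char line[]);
void wyswietlTekst(char line[]);
void skopiujTekst(char line[]);

#endif /* PROTOTYPES_H */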
4.5 extern, static, register

All of these keywords apply to variables, and extern and static also apply to functions. The previous section already touched on extern, but here I will give a few more details; I hope the repeated information helps more than it hurts.

4.5.1 extern

The keyword extern is used to tell the compiler that we are going to use a global variable that is not declared in the file from which we refer to it, and whose definition is not pulled in through a header file. extern is also used to tell the compiler that a variable is defined in the given file but is used before its definition. The examples below should clarify this theoretical introduction.
Example 4.5.1.1

We have two files, main.c and glob.c. The second file defines a global variable named id together with an assigned value. From main.c we try to refer to this variable. The listings and an additional description follow.
#include <stdio.h>
main (void)
{
printf("%d\n", id);
return 0;
}
Listing 4.5.1.1 main.c

int id = 150;

Listing 4.5.1.2 glob.c

In this case, during compilation we get an error telling us that the compiler does not know this variable, i.e. that the variable is undeclared:
main.c: In function main:
main.c:5: error: id undeclared (first use in this function)
main.c:5: error: (Each undeclared identifier is reported only once
main.c:5: error: for each function it appears in.)

The problem can be solved by declaring the variable id in main using the keyword extern. If we correct main.c so that it looks like the listing below, the compilation goes through without problems.
#include <stdio.h>
main (void)
{
extern int id;
printf("%d\n", id);
return 0;
}
Listing 4.5.1.3 The corrected main.c

A few words of clarification about definition versus declaration. In our case the definition of id is in glob.c; that is where the value is assigned to it. In main there is a declaration of id using extern, and no value is needed there. At this point we merely inform the compiler that such a variable exists, that it lives in another file (or further down the same file, about which in a moment), and that we want to use it.

Example 4.5.1.2

We have a single file, main.c, in which the definition of the global variable appears after its use. The listing below shows this situation.
#include <stdio.h>
main (void)
{
printf("%d\n", id);
return 0;
}
int id = 150;
Listing 4.5.1.4 main.c
If we try to compile this file as it stands, we get the same error as in example 4.5.1.1: the compiler does not know the variable. There are two solutions: either declare the variable id before main, or leave it where it is and use the keyword extern when declaring the variable inside main. The second approach is shown in the listing below.
#include <stdio.h>
main (void)
{
extern int id;
printf("%d\n", id);
return 0;
}
int id = 150;
Listing 4.5.1.5 The corrected main.c

Example 4.5.1.3

We have two files, main.c and naglowki.h. In main we refer to a variable that is defined in the header file, without using the keyword extern. The compilation succeeds and the variable is used. The header file must be included with the #include directive. The listings below show this approach.
#include <stdio.h>
#include "naglowki.h"
main (void)
{
printf("%d\n", id);
return 0;
}
Listing 4.5.1.6 main.c

int id = 15;

Listing 4.5.1.7 naglowki.h

In this case we do not have to declare the variable with extern inside the function that uses it. We included the header file containing the variable's definition, so the compiler already knows about it and can use it.

The keyword extern can also be applied to functions. Section 4.2 showed what happens (the error and its explanation) when the definition of a function appears after its call and there is no prototype. It is always better to use a prototype, but it is also possible to skip it and instead use the keyword extern in the function that wants to call the chosen function, whether it lives in the same file or in another one. The listing below presents this situation.
#include <stdio.h>
main (void)
{
extern double podziel();
printf("%f\n", podziel(1.0, 2.0));
return 0;
}
double podziel (double a, double b)
{
return a/b;
}
Listing 4.5.1.8 Another use of extern

In this example the function podziel is in the same file, but as far as the use of extern is concerned it could just as well be in a different one. In main we declare the function podziel as returning double. There is no need to declare the parameters; it is enough to tell the compiler that we want to use a function defined further down, or in another file.

4.5.2 static

Static variables can be defined both globally and locally. The difference between a static global variable and an ordinary one is that a static global variable is visible only from the point of its definition to the end of the file being compiled. So if we put such a variable in another file, it will be available only to the functions in that file; using extern will not help, the variable simply cannot be reached. This is a way of hiding variables from functions that have no business accessing them. A static variable may have the same name as another global variable without any conflict, provided the two definitions are not in the same file. The examples below illustrate this.
Example 4.5.2.1

We have two files, main.c and wynik.c. The latter defines a static global variable with an initial value, used in the function wynik. In main we call wynik and get the result of dividing the sum of the arguments by c. To check whether a static variable can be reached from another file, remove the comments around the extern declaration and around the printf that prints c; the compiler messages should be instructive.
#include <stdio.h>
double wynik (double, double);
main (void)
{
/* extern double c; */
printf("%f\n", wynik(1.0, 10.0));
/* printf("%f\n", c); */
return 0;
}
Listing 4.5.2.1 main.c

static double c = 100.0;
double wynik (double a, double b)
{
return (a + b)/c;
}
Listing 4.5.2.2 wynik.c

Example 4.5.2.2

In two files we define two global variables with the same name, one of which is a static global variable.
#include <stdio.h>
int indeks = 11923;
int nr_indeksu (void);
main (void)
{
printf("%d\n", indeks);
printf("%d\n", nr_indeksu());
return 0;
}
Listing 4.5.2.3 main.c
static int indeks = 22943;
int nr_indeksu (void)
{
return indeks;
}
Listing 4.5.2.4 ind.c

As you can see, the global variable in ind.c has the same name; this is not a problem, because it is a static variable. Without the word static the compiler would report an error about a multiple definition of the variable, which is of course forbidden. If an ordinary global variable and a static one with the same name were defined in a single file, the compiler would also report an error.
Example 4.5.2.3

Functions are by nature global and available to all parts of the program. However, static can be used to make a function visible only in the file containing its definition. The listings below show that the function ilosc_cali can be used only inside il_cal.c, while an attempt to call it from another file ends in an error.
#include <stdio.h>
double zamien (double);
main (void)
{
/*
extern double ilosc_cali ();
printf("5 cm to %f cali\n", ilosc_cali(5.0));
*/
printf("5 cm to %f cali\n", zamien(5.0));
return 0;
}
Listing 4.5.2.5 main.c

static double ilosc_cali (double centymetr)
{
const double cal = 0.3937007874;
return centymetr * cal;
}
double zamien (double centymetr)
{
return ilosc_cali (centymetr);
}
Listing 4.5.2.6 il_cal.c

Of course ilosc_cali could have been defined normally and used to obtain the result directly; the point was to show that with static we can also hide a function.

There are also internal static variables, i.e. static variables local to a function. Unlike ordinary local variables they do not disappear when the function finishes: at the next call they remember their previous value. The best illustration is the listing below, in which main calls another function three times, and each time that function prints how many times it has been called. If the variable i in ilosc_wywolan were not declared static, we would get results at odds with our expectations: every time it is called, the function would claim it is being called for the first time.
#include <stdio.h>
void ilosc_wywolan (void);
main (void)
{
ilosc_wywolan();
ilosc_wywolan();
ilosc_wywolan();
return 0;
}
void ilosc_wywolan (void)
{
    static int i = 1;

    printf("Funkcja wywolana po raz: %d\n", i);
    i++;
}
Listing 4.5.2.7 A static local variable
4.5.3 register

The idea behind register variables is that they give the compiler a hint that the variable will be used very often. Such variables are placed in the machine's registers, which can make the program run faster. Everything depends on the compiler, though: it may completely ignore the register keyword and treat the variable normally. To check this mechanism I ran a small test: a program consisting essentially of a single loop executed 1E9 times (a one followed by nine zeros), once with an ordinary control variable i and once with a register variable i. As an extra twist I ran the test on two computers; their specifications are given in table 4.5.3.1 and the results in tables 4.5.3.2 and 4.5.3.3. The differences are striking.
Computer 1: Intel Pentium 4 M-740 (Centrino Sonoma), 1.7 GHz, 2 MB cache, 533 MHz FSB, 1536 MB DDR II RAM
Computer 2: Intel Core 2 Duo P7450, 2.13 GHz, 3 MB cache, 1066 MHz FSB, 4096 MB DDR II RAM

Table 4.5.3.1 Hardware used for the test

Computer 1 (times in seconds):
Run               1      2      3      4      5      6      7      8      9     10   average
int i           6.612  6.695  6.700  6.649  6.735  6.721  6.784  6.725  6.740  6.788   6.715
register int i  6.213  6.244  6.226  6.260  6.247  6.384  6.293  6.276  6.222  6.291   6.266
Average difference: 0.449

Table 4.5.3.2 register variable compared with an ordinary variable, computer 1

Computer 2 (times in seconds):
Run               1      2      3      4      5      6      7      8      9     10   average
int i           3.872  3.869  3.870  3.886  3.877  3.878  3.870  3.873  3.877  3.874   3.875
register int i  2.369  2.363  2.368  2.361  2.370  2.379  2.367  2.366  2.372  2.373   2.369
Average difference: 1.506

Table 4.5.3.3 register variable compared with an ordinary variable, computer 2
The code of the program is given in the listing below.
#include <stdio.h>
main (void)
{
register int i;
for (i = 0; i < 1E9; i++)
;
return 0;
}
Listing 4.5.3.1 A register variable

More information about measuring program execution time can be found in appendix B.
4.6 Recursive functions

Recursive functions are functions that at some point call themselves. A well-known example is the factorial function, which computes the product of the first natural numbers1. The listing below shows a recursive function.
#include <stdio.h>
int silnia (int arg);
main(void)
{
int n = 5;
printf("%d! = %d\n", n, silnia(n));
return 0;
}
int silnia (int arg)
{
if (!arg)
return 1;
else return arg * silnia (arg-1);
}
Listing 4.6.1 Factorial as a recursive function

The whole recursion happens in the next-to-last line, where the same function is called with the argument decreased by 1. To understand how this really works, a few sentences of explanation. From main we call silnia with the argument 5. Control passes to silnia, which checks whether the argument (5) equals zero. Since it does not (not yet), the return statement returns the value arg * silnia(arg - 1), in our case 5 * silnia(4). silnia(4) does the analogous job and returns 4 * silnia(3), and so on, until the argument reaches 1, at which point 1 * silnia(0) is returned. silnia(0) simply returns 1 (the condition in the function). So the value returned to printf is 5 * 4 * 3 * 2 * 1 * 1.

1 The factorial is written with an exclamation mark: 5! = 1 * 2 * 3 * 4 * 5.
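For comparison, the same factorial can be computed without recursion; this iterative variant is my own sketch, not part of the original text, and it avoids the chain of pending calls described above:

#include <stdio.h>

int silnia_iter (int arg);

int main (void)
{
    int n = 5;

    printf("%d! = %d\n", n, silnia_iter(n));    // 5! = 120
    return 0;
}

int silnia_iter (int arg)
{
    int wynik = 1;

    while (arg > 1)
        wynik *= arg--;     /* multiply by arg, then decrease arg by 1 */
    return wynik;
}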
5 Arrays and pointers

Although arrays have appeared in previous chapters, they have not been explained thoroughly enough to be skipped here. Pointers have not been mentioned at all, so in this chapter both are discussed in some detail. Arrays and pointers have a lot in common; first arrays are discussed on their own, then pointers, and in section 5.3 the relationship between arrays and pointers is shown.

5.1 Arrays

An array can be imagined as a bag into which we throw items of the same kind, only in different colours. If we create an array of integers, it can hold only integers, but their values may of course differ. Arrays make it much easier to work with data of a single type: it makes no sense to create 100 variables of type int, give them odd names (they could of course be named schematically, a1, ..., a100, but that is still pointless) and then refer to them in some convoluted way. The following subsections discuss one-dimensional and multidimensional arrays.

5.1.1 One-dimensional arrays

An array is defined once, by giving its name and, in square brackets, its size. If we want to initialise the array's values when it is created, we can do so by putting an equals sign after the closing square bracket and then listing values of the appropriate type, separated by commas, between curly braces. Some example array declarations are shown below.
int tab[5];
double axp[7];
int mpx[] = {3, 7, 9, 11};
double kxx[10] = {13.3, 19.0, 19.9, 192.4};
A few explanations are in order. Arrays with uninitialised values hold garbage. If we initialise the values, we do not have to give the array size, the compiler will work it out by itself. If, on the other hand, we give the size and initialise only some of the elements, the remaining elements are set to zero (the array kxx).
Let the following table represent the array of integers int tab[10];

tab[0]  tab[1]  tab[2]  tab[3]  tab[4]  tab[5]  tab[6]  tab[7]  tab[8]  tab[9]
  10      15      20     145      43     234      14       0       0      11
Table 5.1.1.1 An array of integers

We refer to the elements of an array using the array name and an index. A very important point is that in C arrays are indexed starting from 0. So in a ten-element array the last element is available under index 9, as shown in table 5.1.1.1. To print all the values of the array tab it is therefore enough to put it in a loop, as the listing below shows.
#include <stdio.h>
main (void)
{
int tab[10] = {10, 15, 20, 145, 43, 234, 14, 0, 0, 11};
int i;
for (i = 0; i < 10; i++)
printf("tab[%d] = %d\n", i, tab[i]);
return 0;
}
Listing 5.1.1.1 Printing all elements of an array

If we had created 10 variables of type int, we would have needed 10 printf statements to print them all. Let this example be proof that it is easier to operate on an array, when the data share one type, than on a huge number of separate variables.
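As a further sketch of my own (not from the original text), a loop can also process the whole array, and the number of elements can be computed with the sizeof operator, which is described at the end of this chapter, instead of being written by hand:

#include <stdio.h>

int main (void)
{
    int tab[] = {10, 15, 20, 145, 43};
    int i, sum = 0;
    int n = sizeof(tab) / sizeof(tab[0]);   /* number of elements: 5 */

    for (i = 0; i < n; i++)
        sum += tab[i];
    printf("Sum of the elements: %d\n", sum);   // 233
    return 0;
}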
5.1.2 Multidimensional arrays

Multidimensional arrays are defined very similarly, except that instead of one size we give two. What is this about? Think of a matrix known from mathematics (or a bar of chocolate: the position of each piece can be described by its position in a row and its position in a column); that is an example of a two-dimensional array. For illustration and better understanding, the table below shows the names by which we refer to the individual fields.
a[0][0]  a[0][1]  a[0][2]  a[0][3]  a[0][4]
a[1][0]  a[1][1]  a[1][2]  a[1][3]  a[1][4]
a[2][0]  a[2][1]  a[2][2]  a[2][3]  a[2][4]
a[3][0]  a[3][1]  a[3][2]  a[3][3]  a[3][4]
a[4][0]  a[4][1]  a[4][2]  a[4][3]  a[4][4]
Table 5.1.2.1 A two-dimensional array

In the listing below we create a 5x5 array with initialised values (more on that in a moment). We refer to individual fields just as with one-dimensional arrays, except that we add a second index. Looking at the table above, to reach the middle element we give two indices equal to 2, i.e. a[2][2].
#include <stdio.h>
#define MAX 5

main (void)
{
double tab[MAX][MAX] = {
{0.0, 0.1, 0.2, 0.3, 0.4},
{1.0, 1.1, 1.2, 1.3, 1.4},
{2.0, 2.1, 2.2, 2.3, 2.4},
{3.0, 3.1, 3.2, 3.3, 3.4},
{4.0, 4.1, 4.2, 4.3, 4.4}
};
int i, j;
for (i = 0; i < MAX; i++)
for (j = 0; j < MAX; j++)
printf("%.1f%c", tab[i][j], (j == MAX - 1) ? '\n' : '\t');
return 0;
}
Listing 5.1.2.1 A two-dimensional array

With two loops we print all the values of the two-dimensional array. The first, outer loop handles the rows, the inner one the columns. The third argument of printf is the three-argument operator, which checks whether we have reached the fifth column: if so, it emits a newline character, otherwise a tab. All this just to present the data more clearly.

Initialising multidimensional arrays is similar to initialising one-dimensional ones. The difference is that where a one-dimensional array takes comma-separated values, a two-dimensional array takes comma-separated "rows", i.e. values separated by commas inside curly braces. A comma may, but does not have to, follow the last value. An example of initialising a three-dimensional array and printing all of its values is shown below.
#include <stdio.h>
main (void)
{
double tab[2][2][2] = {
{ {1.0, 2.0}, {3.0, 4.0} },
{ {5.0, 6.0}, {7.0, 8.0} }
};
int i, j, k;
for (i = 0; i < 2; i++)
for (j = 0; j < 2; j++)
for (k = 0; k < 2; k++)
printf("tab[%d][%d][%d] = %.1f\n", i, j, k, tab[i][j][k]);
return 0;
}
Listing 5.1.2.2 A three-dimensional array

Arrays of more than two dimensions are used rather less often than one- and two-dimensional ones, but the way to create them has now been shown. One more remark: in multidimensional arrays the first size may be omitted, the compiler will determine it itself, so the first dimension can be left out as in the declarations below.

double tab[][MAX] = { ... };
double tab[][2][2] = { ... };
5.2 Pointers

Pointers are variables that do not hold a value of a given type but hold the address of such a variable. Pointers make possible a whole range of operations that cannot be done with ordinary variables, but more on that later.

A pointer is declared by adding an asterisk between the type and the variable name. Below are three equivalent ways of writing pointer declarations; it makes no difference whether the asterisk sits right before the name, right after the type, or in the middle. It is a matter of habit; I personally prefer the first form. When several variables and pointers are declared on one line, the first form seems the most intuitive.

int *wsk_i;      // pointer to int
double* wsk_d;   // pointer to double
char * wsk_c;    // pointer to char

Once we have created a pointer to a specific type, we should assign the address of some variable to it. The listing below demonstrates some useful properties of pointers; they are discussed beneath it.
#include <stdio.h>
main (void)
{
int x = 10, y, *wsk_x, *w2;
wsk_x = &x;
w2 = wsk_x;
printf("wsk:\t%p\t%d\n", wsk_x, *wsk_x);
printf("x: \t%p\t%d\n", &x, x);
*wsk_x = 15;
y = x;
printf("x: \t%p\t%d\n", &x, x);
printf("y: \t%p\t%d\n", &y, y);
*wsk_x += 10;
printf("x: \t%p\t%d\n", &x, x);
++*wsk_x;
printf("wsk: \t%p\t%d\n", wsk_x, *wsk_x);
(*wsk_x)++;
printf("x: \t%p\t%d\n", wsk_x, *wsk_x);
*wsk_x++;
printf("x: \t%p\t%d\n", wsk_x, *wsk_x);
printf("w2: \t%p\t%d\n", w2, *w2);
return 0;
}
Listing 5.2.1 Using a pointer; some operations

We created an integer variable with an initialised value, another without one, and two pointers to int. A pointer variable must be assigned an address, so with the address operator & we take the address of the variable standing to its right1 and assign it to wsk_x. If two variables are pointer variables, one can be assigned to the other; w2 will then point at whatever wsk_x points at, i.e. at x. As listing 5.2.1 shows, the first two printf calls print exactly the same thing, because wsk_x points at x: both refer to the same place in memory, which can be seen from the printed address. The difference lies in how the pointer variable is accessed. The second and third arguments of the second printf are already familiar: as was just said, an & standing to the left of a variable gives its address rather than its value. In the first printf it is slightly different: since wsk_x is a pointer variable, the value it holds is an address, so where the address is to be printed we simply give its name. If we want to change the value stored in x, or just reach it through the pointer variable, we have to pull the value out from under that address. For this we use the dereference operator (also called the indirection operator), the asterisk placed to the left of the pointer variable. After the value is changed through wsk_x, x has a new value, and therefore y receives that new value of x. With the dereference operator we have access to the variable's value, so we can perform all the operations that can be performed on ordinary variables.

The operation ++*wsk_x increases by 1 the value stored at the address held by this variable (in our case x). The post-increment operator can also be used, but parentheses are needed, because otherwise we would increment the pointer's address. That is exactly what happens in the next statement: the address is increased by one size of the type, and the printed values are garbage.

1 Do not confuse the one-argument address operator & with the two-argument bitwise AND operator &.
5.3 Passing an address to a function

As we know, parameters passed to a function carry only their values. The function creates local variables, operates on them, and at the end returns the result to the place of the call. Using pointers we can manipulate the passed values themselves; so far the variables in the calling function could not be changed. With pointers it becomes possible, and I will show a use of this mechanism with an example. Take the function fabs from the standard library, which returns the absolute value of the number given as its argument. It returns the absolute value but does not change the value of its argument, as the listing below shows.
#include <stdio.h>
#include <math.h>
main (void)
{
int x = -5;
printf("%.0f\n", fabs(x));
printf("%d\n", x);
return 0;
}
Listing 5.3.1 Using fabs

We will now write a function that takes the address of an object (a variable). It will operate directly on the variable that lives in the calling function; consequently, the function abs2 will change the value of its argument.
#include <stdio.h>
void abs2 (int *);
main (void)
{
int x = -5;
printf("%d\n", x);
abs2(&x);
printf("%d\n", x);
return 0;
}
void abs2 (int *arg)
{
if (*arg < 0)
*arg = -*arg;
}
Listing 5.3.2 Passing an address to a function
Inside abs2 we use the dereference operator to check whether the value stored at the passed address is less than zero, and if so we flip its sign. After the call, the variable x in main becomes positive if it was previously negative. In listing 5.3.1 we only printed the positive value; the variable itself did not change, because the function had no direct access to its address. So remember one thing: if you use pointers, use them wisely, because you can change values you did not intend to change. Thanks to this example we now know that a single function call can modify the values of several variables, whereas with return we could return only one value.
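As a minimal sketch (not taken from the book), a hypothetical swap function shows the same idea: by receiving two addresses, one call changes two variables in the caller, something a single return value could not do.

#include <stdio.h>

/* exchanges the values of the two int variables whose addresses it receives */
void swap (int *a, int *b)
{
    int tmp = *a;   /* remember the value stored at the first address */
    *a = *b;        /* copy the second value into the first variable  */
    *b = tmp;       /* store the remembered value in the second one   */
}

int main (void)
{
    int x = 1, y = 2;
    swap(&x, &y);               /* we pass the addresses, not the values */
    printf("%d %d\n", x, y);    /* prints: 2 1 */
    return 0;
}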
In chapter 2.2.4 on constants it was said that a constant cannot be changed, and yet with a pointer it can be done; treat this example as a curiosity.
#include <stdio.h>
void cConst (int *);
int main (void)
{
const int x = 10;
printf("%d\n", x);
cConst((int *)&x);
printf("%d\n", x);
return 0;
}
void cConst (int *arg)
{
*arg = 4;
}
Listing 5.3.3 Changing a const value through a pointer

As the argument of the function call we pass (int *)&x, which means precisely that the address of the variable x, which is of type const int, is cast to int *; thanks to this we can change the value. Without the cast the compiler would report that we are passing the wrong data type. If, in turn, the function parameter were const int *, the compiler would issue an error saying that read-only variables cannot be modified.
5.4 The relationship between arrays and pointers

Arrays and pointers are quite strongly related. It may not be visible at first glance, but if we look more closely at how an array is laid out in memory and note certain properties of pointers, the similarity becomes clear. The memory diagram below may prove helpful. The array tab is of type int, so every element occupies 4 bytes, which is easy to verify by checking the difference between the addresses.
tab[0]  0xbfeb0390
tab[1]  0xbfeb0394
tab[2]  0xbfeb0398
tab[3]  0xbfeb039c
tab[4]  0xbfeb03a0
We can see that the array tab occupies 5 memory cells: tab[0] starts at address 0xbfeb0390 and ends where tab[1] begins, i.e. at 0xbfeb0394, while tab[4] starts at 0xbfeb03a0 and ends at 0xbfeb03a4 (not shown). The addresses are hexadecimal numbers, so we can compute how many bytes the array occupies: 0xbfeb03a4 - 0xbfeb0390 = 0x14. The result is in hexadecimal; 0x14 is 20 in decimal, so our array occupies 20 bytes.
At this point it is worth mentioning the sizeof operator, which returns the number of bytes occupied by its argument. So if we want to check how much an int takes on our machine, it is enough to put the following statement in the body of main.
printf("%d\n", sizeof (int));
The sizeof operator returns the total amount of space occupied by a given type, variable, array, and so on.
Since we can check how much an array occupies, why not take advantage of it? Type the statement above, replacing the argument of sizeof with tab (the name of our array), to convince yourself that the array really does occupy 20 bytes. We can go further: in section 5.1 it was said that the array size does not have to be written out; it is enough to initialise the array with some values and the compiler will determine the size itself. That is true, but if we do not know how many elements the array has, how are we supposed to, say, display them? What value do we put into the for loop as the bound so as not to run past the end of the array? The answer is actually simple: the sizeof operator. The listing below is the proof.
#include <stdio.h>
main (void)
{
int tab[] = {8, 4, 3, 9, 1, 1, 23, 2, 11, 23};
int i;
for (i = 0; i < sizeof (tab) / sizeof (int); i++)
printf("%d ", tab[i]);
printf("\n");
return 0;
}
Listing 5.4.1 Using sizeof

The call sizeof(tab) returns the number of bytes occupied by the array tab, and sizeof (int) returns the number of bytes occupied by the type int, i.e. by a single element of the array. The quotient of the two is the number of elements. A safer variant is sizeof (tab) / sizeof (tab[0]), because if we changed the array's type to, say, double, the quotient would be the size of a double array divided by the size of int, which would no longer be the number of elements. sizeof(tab[0]) gives the size of a single element, which is of the same type as the array.
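As a small sketch (an assumption, not something from the book), the idiom can be wrapped in a hypothetical macro named ARRAY_LEN, so the element count stays correct no matter what type the array holds.

#include <stdio.h>

#define ARRAY_LEN(a) (sizeof (a) / sizeof ((a)[0]))   /* number of elements of an array */

int main (void)
{
    double tab[] = {1.5, 2.5, 3.5};
    int i;
    for (i = 0; i < ARRAY_LEN(tab); i++)   /* works regardless of the element type */
        printf("%.1f ", tab[i]);
    printf("\n");
    return 0;
}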
When we create an array we reserve a certain amount of memory starting from some address, as the auxiliary diagram at the beginning of this section shows. It was said in section 5.2 that with the dereference operator we can add to, or otherwise operate on, the value stored at a given address. Operations can also be performed on the addresses themselves: we can add one to or subtract one from an address. "One" here does not mean one byte but one size of the type. So if our array is of type int, adding one to the pointer moves it by 1 * sizeof (int), which is exactly the difference between the cells.
An array can be treated as a pointer to the reserved memory, which begins at a certain place, tab[0]. An equivalent of tab[0] is the array name alone, in our case tab, because by default the name of an array points to its first element. If we now add one to tab (the address of the first element), we move the pointer to the next element, i.e. tab[1]. To obtain the value of the element stored at address tab+1 we must use the dereference operator introduced earlier and write *(tab+1). This form is an alternative to tab[1]. The listing below shows this alternative way of displaying array elements.
#include <stdio.h>
main (void)
{
int tab[] = {8, 4, 3, 9, 1, 1, 23, 2, 11, 23};
int i;
for (i = 0; i < sizeof (tab) / sizeof (tab[0]); i++)
printf("%d ", *(tab+i));
printf("\n");
return 0;
}
Listing 5.4.2 An alternative way of displaying array elements

Now we will do something that may seem rather odd at first: we will declare a pointer that points to the first element of the array, and through that pointer we will access the array elements.
#include <stdio.h>
main (void)
{
int tab[] = {8, 4, 3, 9, 1, 1, 23, 2, 11, 23};
int i;
int *wsk;
wsk = tab; /* wsk = &tab[0] */
for (i = 0; i < sizeof (tab) / sizeof (tab[0]); i++)
printf("%d ", *(wsk+i));
printf("\n");
return 0;
}
Listing 5.4.3 Displaying array elements through a pointer

We assign the address of the first element of the array to the pointer. The pointer variable wsk now holds the address of the beginning of the array, and, as already said, adding one to an address moves the pointer by the size of one element, so through wsk we can move over the addresses of the individual elements and print their values. One more remark: the expression tab+i (or wsk+i) shifts the pointer by i * sizeof (int) relative to the starting address, but does not store the new position in the variable tab (or wsk).
One very important thing must be remembered: a pointer is a variable, so it can point to anything, including the start of an array. The name of an array, however, is not a variable, so certain operations on it are not allowed. The listing below presents these differences.
#include <stdio.h>
#define SIZE 5
main (void)
{
int tab[SIZE] = {3, 4, 5, 1, 2};
int i, *wsk;
wsk = tab;
for (i = 0; i < SIZE; i++)
printf("%d ", *(wsk+i));
printf("Wskaznik jest pod adresem: %p\n", wsk);
for (i = 0; i < SIZE; i++)
printf("%d ", *(tab+i));
printf("Wskaznik jest pod adresem: %p\n", tab);
for (i = 0; i < SIZE; i++)
printf("%d ", *wsk++);
printf("Wskaznik jest pod adresem: %p\n", wsk);
/*
for (i = 0; i < SIZE; i++)
printf("%d ", *tab++);                          // error
printf("Wskaznik jest pod adresem: %p\n", tab);
*/
return 0;
}
Listing 5.4.4 Differences between a pointer variable and an array name

All of the loops print the contents of the array; what differs is the position of the pointer. After the first for loop finishes, the pointer is still at the first element of the array, because while displaying we did not change its value - we only displayed the value offset by i from the base point.
The second for loop works analogously: we use the array name instead of the pointer, so we can likewise display the element offset by i from the first element. The third loop, besides displaying the elements, moves the pointer by 1 after every iteration (as a reminder, wsk++ is equivalent to wsk = wsk + 1). So after the loop finishes, the pointer ends up at a position that no longer belongs to the array tab. In the fourth loop the construction *tab++ is not allowed, so if we remove the comment markers and try to compile the program, we get an error saying that the L-value¹ requires an operator that can be incremented. Likewise, assignments to the array such as tab = i or tab = &i are not allowed.
¹ The entity standing on the left-hand side of the assignment operator.

5.5 Operations on pointers

There are pointer operations that are allowed and ones that must not be used, so here I will describe the ones we may use. The allowed pointer operations are:
- assigning pointers to objects of the same type,
- adding and subtracting a pointer and an integer,
- subtracting and comparing two pointers to elements of the same array,
- assigning the value zero to a pointer,
- comparing a pointer with zero.
The remaining operations on pointers, including adding, multiplying and dividing two pointers, as well as adding real numbers to pointers, are not allowed. It is also forbidden to assign the address of a variable of one type to a pointer of another type (the exception is void *, described in the next section). Examples of the operations listed above follow. The second item was already demonstrated, for instance in listing 5.4.4; the remaining ones are shown below.
#include <stdio.h>
#define MAX 100
main (void)
{
int tab[MAX];
int *w1, *w2, *w3;
w1 = &tab[MAX-1];
w2 = tab;
if (w2 > w1)
printf("w2 jest wskaznikiem do dalszego elementu\n");
else
{
printf("w2 jest wskaznikiem do blizszego elementu\n");
w3 = NULL;
}
if (w3 == NULL)
printf("Roznica wskaznikow: %d\n", (w1-w2));
return 0;
}
Listing 5.5.1 Using pointer operations
We could have assigned zero to the pointer w3 and checked in the condition whether w3 equals zero. Nevertheless, a special symbolic constant NULL (whose value is 0) is defined in the stdio.h header to emphasise that a special pointer value is meant. 0 and NULL can be used interchangeably. Some functions return the value NULL.
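A minimal sketch (my own example, not from the book) of the usual idiom: a pointer is kept at NULL while it points at nothing, and it is compared with NULL before being dereferenced.

#include <stdio.h>

int main (void)
{
    int x = 5;
    int *wsk = NULL;            /* "points at nothing" for now        */

    if (wsk == NULL)
        wsk = &x;               /* only now does it point at an object */

    if (wsk != NULL)
        printf("%d\n", *wsk);   /* safe to dereference: prints 5       */
    return 0;
}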
5.6 The void pointer

A pointer of type void may point to any object (any type), but the dereference operator cannot be applied to it. Below is an example showing what the void type makes possible.
#include <stdio.h>
main (void)
{
int i = 10, *wsk_i;
double k = 15.5, *wsk_d;
void *x;
x = &i;
printf("x: \t%p\n", x);
// printf("x: %d\n", *x);
wsk_i = x;
printf("wsk_i: \t%p\t%d\n", wsk_i, *wsk_i);
x = &k;
printf("x: \t%p\n", x);
// printf("x: %f\n", *x);
wsk_d = x;
printf("wsk_d: \t%p\t%f\n", wsk_d, *wsk_d);
return 0;
}
Listing 5.6.1 A pointer of type void

As can be seen, a void pointer can be assigned the address of an int variable as well as of a double variable. We cannot, however, access the value - the compiler will protest. If a void pointer points to an int, we may assign that address to an int pointer and display the value stored at the pointed-to address; the same applies to double and the other types.
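The following is only a sketch under my own assumptions (the function swap_any and its fixed-size buffer are hypothetical, not from the book): because void * can carry the address of any object, a single function can exchange two objects of any type as long as it is told their size in bytes.

#include <stdio.h>
#include <string.h>

/* swaps the contents of two objects of 'size' bytes through void pointers */
void swap_any (void *a, void *b, size_t size)
{
    unsigned char tmp[64];          /* large enough for the small types used here */
    if (size > sizeof tmp)
        return;                     /* refuse objects bigger than the buffer      */
    memcpy(tmp, a, size);
    memcpy(a, b, size);
    memcpy(b, tmp, size);
}

int main (void)
{
    int i = 1, j = 2;
    double x = 1.5, y = 2.5;
    swap_any(&i, &j, sizeof (int));
    swap_any(&x, &y, sizeof (double));
    printf("%d %d  %.1f %.1f\n", i, j, x, y);   /* prints: 2 1  2.5 1.5 */
    return 0;
}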
5.7 Character arrays

Character arrays have the type char. They are used to store string constants. A string constant can be stored in two ways: in an array of characters or through a pointer. There is a difference between them, which I will try to explain with the figure below.
char *wk     ->  | M | u | z | y | k | a | \0 |
char tab[7]   =  | S | t | r | e | f | a | \0 |
Figure 5.7.1 Differences between a pointer and an array

And now, for full understanding, the two variable declarations from figure 5.7.1:
char *wk = "Muzyka";
char tab[7] = "Strefa";
Both the pointer wk and the array tab refer to a string consisting of the same number of characters. Picture it as follows: during compilation the string constant "Muzyka" is stored somewhere in memory, and the pointer wk is assigned the address of its first character, i.e. the address of the letter M. The text is read (for example by printf) up to the character terminating the string constant, \0. The array tab is filled during compilation with the characters of the word "Strefa". The differences between these two forms are:
- the characters of the word "Muzyka" must not be modified!
- the pointer wk may later point to something else,
- the elements of the array may be changed freely,
- the name tab will always refer to the array.
Before the listing presenting the differences between a pointer and a character array, I would like to say a little more about character arrays themselves. A character array, like an array of integers or reals, has a size and elements. The elements can be written out one by one, just as we wrote numbers, except that letters are enclosed in apostrophes; or we can write a string of characters in double quotes, which is by far the easier and quicker way. One important thing must be remembered: if we write the characters individually, separated by commas, we have to put the \0 character in the last position; this is not necessary when we write a string in double quotes (the character is appended automatically during compilation).
If we initialise the array with some characters, the size does not have to be given, because it is computed at compile time. The most important thing to remember is that the size of the array is the number of characters + 1 (for \0). So, as in figure 5.7.1, the array tab occupies seven bytes: six characters of the string constant plus one byte for \0. The listing below shows the differences mentioned above.
#include <stdio.h>
main (void)
{
int i;
char *wk = "Muzyka";
char tab[7] = "Strefa";
char tab2[7] = {'S', 't', 'r', 'e', 'f', 'a', '\0'};
printf("wk: \t\t%s\n", wk);
printf("tab: \t\t%s\n", tab);
printf("tab2: \t\t%s\n", tab2);
printf("wk[1]: \t\t%c\n", wk[1]);
// wk[1] = 'U';              // Segmentation fault
wk = tab;
printf("wk: \t\t%s\n", wk);
for (i = 0; i < sizeof (tab)/sizeof (tab[0]); i++)
if (tab[i] == 'e')
tab[i] = 'E';
printf("tab: \t\t%s\n", tab);
// tab = wk;                 // error
return 0;
}
Listing 5.7.1 Differences between a pointer and an array

In the listing above, tab and tab2 contain the same string of characters, although they are written differently. We can display individual characters of the string constant "Muzyka" pointed to by wk, but we cannot change them. The pointer wk, once it is no longer needed for displaying the word "Muzyka", may point to something else (wk = tab; makes wk point to the first element of the array tab). As was said, the elements of an array can be changed; our loop does exactly that, searching the array for the letter 'e' and replacing it with its upper-case counterpart. tab will always refer to the array; this cannot be changed.
As a curiosity, the characters in a character array could be displayed with a loop, but there is the %s format descriptor, which prints the characters of the array and stops at the \0 character. One more listing is shown as an example.
#include <stdio.h>
main (void)
{
char *wsk = "Ala ma kota";
char tab[] = "Lorem ipsum";
int i, k, z;
k = sizeof (tab) / sizeof (char);
printf("Rozmiar tablicy: \t\t%d\n", sizeof (tab));
printf("Rozmiar typu char: \t\t%d\n", sizeof (char));
printf("Ilosc elementow: \t\t%d\n", k);
printf("Zawartosc tablicy: \t\t");
for (i = 0; i < k-1; i++)
printf("%c", tab[i]);
printf("\n\n");
z = sizeof (wsk)/sizeof (char);
printf("Rozmiar wskaznika: \t\t%d\n", sizeof (wsk));
printf("Rozmiar typu char: \t\t%d\n", sizeof (char));
printf("Ilosc elementow: \t\t%d\n", z);
printf("Tekst wskazywany: \t\t");
for (i = 0; wsk[i] != '\0'; i++)
printf("%c", wsk[i]);
printf("\n");
return 0;
}
Listing 5.7.2 The size of a pointer

There is perhaps nothing new in the example above, but there is one very important thing that will be useful in the next section, namely the size of a pointer. The size of an array is computed as the number of elements * the size of the array's element type. So we printed the size of the array tab and the size of the type char; their quotient is the number of elements in the array. Having the size, we know how many characters the array holds, so we can print them all. Since the array has 12 elements (11 characters plus \0) and arrays are indexed from zero, the printable characters sit at positions 0 to 10; the loop condition is therefore i < k-1, that is i < 11, so the characters from the beginning up to and including the tenth will be printed. With the size of a pointer it is different, and this is where trouble can appear: we cannot tell how many elements the pointed-to text has, because sizeof (wsk) shows only the size of the pointer itself. But thanks to the character terminating the string constant we are still able to print all of the characters, which the second for loop shows: if the character differs from \0, print it.
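A minimal sketch (my own example, assuming a hypothetical helper named dlugosc) of the same idea: since every string constant ends in \0, its length can always be counted by walking until that character, no matter whether we hold the text in an array or behind a pointer.

#include <stdio.h>

/* counts the characters of a string up to, but not including, '\0' */
int dlugosc (const char *s)
{
    int n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

int main (void)
{
    char *wsk = "Ala ma kota";
    printf("%d\n", dlugosc(wsk));   /* prints 11 */
    return 0;
}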
5.8 Passing an array to a function

From the program's point of view it makes no difference whether we pass a pointer, an array, or part of an array to a function. Let our function be duze, which converts the letters of its argument from lower case to upper case.
#include <stdio.h>
#include <ctype.h>
void duze (char *tab);
main (void)
{
char tab1[] = "ala ma kota";
char tab2[] = "lorem ipsum";
char *napis = "ala nie ma kota";
char *wsk;
wsk = tab2;
printf("%s\n", tab1);
duze(tab1);
printf("%s\n", tab1);
printf("%s\n", wsk);
duze(&wsk[5]);
printf("%s\n", wsk);
//duze(napis);
return 0;
}
void duze (char *tab)
{
while (*tab = toupper(*tab))
tab++;
}
Listing 5.8.1 Passing an array to a function

As said in the previous chapters, arrays and pointers have a great deal in common, so it makes no difference whatsoever whether the function duze takes an array or a pointer as its argument. If we wrote it as taking an array, the function prototype would look like this:
void duze (char tab[]);
And in the function we could keep what is there, or write it as follows:
void duze (char tab[])
{
int i = 0;
while (tab[i] = toupper(tab[i]))
i++;
}
To the function duze we pass the address of a character array; it makes no difference whether we pass the whole array or only a fragment of it. In the while loop we change the case of the character stored at the passed address with the toupper function, provided the character is not \0, then we increase the address and change the next character. If the character is the end-of-string character, the expression becomes while (0), which is false, so duze finishes. The \0 character is treated as zero in conditional expressions. We assigned the address of the first element of tab2 to the pointer wsk, so using wsk really refers to the array tab2. We passed part of tab2 through the pointer with the expression &wsk[5]; that is, we pass the address of the sixth element and the case conversion starts from that element, so the final effect is the upper-case word IPSUM while the first word stays unchanged. The call of duze with the argument napis is commented out because, as already mentioned, the memory this pointer points to cannot be modified. Otherwise we get a segmentation fault, which means the program is trying to access memory it has no right to. Some may wonder why, after passing an array to duze, we can perform the operation tab++; when section 5.4 said this is not allowed. Here a nod toward functions and local variables: if a variable is declared as a function parameter in the form char *tab, that pointer to char is local. So when we pass an array to the function - really the address of its first cell - we store that address in the local variable tab, on which we may then perform all the allowed pointer operations.
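A small sketch (my own example, assuming a hypothetical function pokaz) demonstrating this point: the parameter is a local copy of the passed address, so advancing it inside the function moves nothing in the caller.

#include <stdio.h>

void pokaz (char *tab)
{
    while (*tab != '\0')
        putchar(*tab++);    /* only the local copy of the address is advanced */
    putchar('\n');
}

int main (void)
{
    char napis[] = "lorem";
    pokaz(napis);
    printf("%s\n", napis);  /* napis in main still begins at 'l' */
    return 0;
}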
5.9 Pointers to pointers

As already said, pointers are variables holding the address of another variable; but like every variable, a pointer itself must be stored somewhere in memory, so it has its own address. We can create another pointer that refers to the address of that pointer. The figure below may prove helpful.
Variable:  wsk2_x        wsk1_x        x
Address:   0xbfd01d84    0xbfd01d88    0xbfd01d8c
Value:     0xbfd01d88    0xbfd01d8c    10
Figure 5.9.1 An illustration of memory and the contents of the pointers

The figure shows that the variable x has the value 10 and its address is 0xbfd01d8c. The pointer wsk1_x points to the address of x, i.e. it stores the address of x as its value, and it has an address of its own. The pointer wsk2_x also has its own address, and its value is the address of the pointer wsk1_x. The listing below shows how to declare pointers to pointers.
#include <stdio.h>
main (void)
{
int x = 10;
int *wsk1_x = &x;
//wsk1_x = &x;
int **wsk2_x = &wsk1_x;
//wsk2_x = &wsk1_x;
printf("x: \t\t%p\t%d\n", &x, x);
printf("wsk1_x: \t%p\t%p\t%d\n", &wsk1_x, wsk1_x, *wsk1_x);
printf("wsk2_x: \t%p\t%p\t%p\t%d\n", &wsk2_x, wsk2_x,
*wsk2_x,**wsk2_x);
return 0;
}
Listing 5.9.1 A pointer to a pointer

The most interesting line may be the last one, in which printf takes five arguments. Let us start with what is passed for display:
&wsk2_x   - the address of the pointer wsk2_x
wsk2_x    - the address of the pointer wsk1_x
*wsk2_x   - the value stored at wsk1_x, i.e. the address of the variable x
**wsk2_x  - from the address obtained above we extract the value, 10

5.10 Arrays of pointers

As already mentioned, pointers are variables too, so we can build an array of them to store a larger number of addresses of the same type. The way of defining it is very similar to the definition of a single pointer, except that we add a size after the name. A ten-element array of pointers to char looks as follows.
char *tabWsk[10];
Arrays of pointers can be initialised, which I would like to show in the example below. The accompanying figure may clear up any uncertainties.
#include <stdio.h>
char *nazwa_dnia (int n);
int main (void)
{
int nrDnia;
char *wskDzien;
printf("Podaj nr dnia: ");
scanf("%d", &nrDnia);
wskDzien = nazwa_dnia(nrDnia);
printf("%s\n", wskDzien);
return 0;
}
char *nazwa_dnia (int n)
{
char *dzien[] = {
"Blad: Nie ma takiego dnia",
"Poniedzialek",
"Wtorek",
"Sroda",
"Czwartek",
"Piatek",
"Sobota",
"Niedziela",
};
return (n < 1 || n > 7) ? dzien[0] : dzien[n];
}
Listing 5.10.1 An array of pointers

[Figure 5.10.1 The locations the pointers point to: each element dzien[0]..dzien[7] holds the address of the first character of its string ("Blad: Nie ma takiego dnia", "Poniedzialek", ..., "Niedziela"), every string ends with the \0 character, and the characters within each string are indexed from 0.]
And now some explanation of this program. The function nazwa_dnia returns a pointer to char and takes an integer as its argument, as the declaration shows. In the body of this function an array of pointers to char is initialised. One could imagine it as a two-dimensional array in which every element is a string; it is easier to write it as an array of pointers, because the strings have different lengths. In main we declare an integer variable to hold the number read from the user and a pointer to char - our function returns such a pointer, so it can be assigned to this variable. After the user responds, the number is passed to the function and, based on the condition in the return statement, the appropriate pointer to the zeroth index (first character) of some string is returned. If the number n is out of range, the pointer to the letter B is returned, which sits at position zero of the zeroth element of the array dzien (so dzien[0][0] is the letter B). If n is within the allowed range, the address of the first letter (first element) of the n-th element is returned. As said in the previous chapters, printf prints characters until it reaches the \0 character.
The function call can be placed directly inside printf, i.e. the line
printf("%s\n", wskDzien);
could be replaced by
printf("%s\n", nazwa_dnia(nrDnia));
One more matter needs mentioning. The array dzien in nazwa_dnia is declared as an array of pointers, and each pointer may point to a different string. The size of this array of pointers is the size of the pointer type * the number of elements, i.e. sizeof (char *) * 8, which is 32 bytes. If, on the other hand, we wanted to declare a two-dimensional array, the first size could be omitted (it will be eight), and the second size must be the minimum number of characters needed to hold the longest element, in our case 26. That gives 26 * 8 = 208 bytes.
#include <stdio.h>
int main (void)
{
char dzien[][26] = {
"Blad: Nie ma takiego dnia", "Poniedzialek", "Wtorek",
"Sroda", "Czwartek", "Piatek", "Sobota", "Niedziela",
};
char *wskDzien[] = {
"Blad: Nie ma takiego dnia", "Poniedzialek", "Wtorek",
"Sroda", "Czwartek", "Piatek", "Sobota", "Niedziela",
};
printf("%s\n", dzien[0]);
printf("%d\n", sizeof (dzien));
printf("%s\n", wskDzien[0]);
printf("%d\n", sizeof (wskDzien));
return 0;
}
Listing 5.10.2 The difference in size between a two-dimensional array and an array of pointers

Creating a two-dimensional array wastes a little memory, because the strings are not all of the same length (unless they are, but that is a special case). Since every element is guaranteed 26 characters but uses fewer, the remaining space is unused yet still reserved. The figure and listing below may help in understanding this.
[Figure 5.10.2 Memory occupied by the two-dimensional array: every row, from "Blad: Nie ma takiego dnia\0" through "Niedziela\0", reserves a full 26 bytes, 208 bytes in total.]
The code below shows how many positions are left unused, i.e. how much memory is wasted.
#include <stdio.h>
int main (void)
{
char dzien[][26] = {
"Blad: Nie ma takiego dnia", "Poniedzialek", "Wtorek",
"Sroda", "Czwartek", "Piatek", "Sobota", "Niedziela",
};
int i, j;
for (i = 0; i < 8; i++)
for (j = 0; j < 26; j++)
printf("dzien[%d][%d]\t= %c\t%p\n", i, j, dzien[i][j],
&dzien[i][j]);
printf("Rozmiar: %d\n", sizeof(dzien));
return 0;
}
Listing 5.10.3 The space occupied by the two-dimensional array

If we use an array of pointers to refer to texts stored somewhere in memory, the text is written byte after byte in some region of memory, and the successive pointers are assigned the addresses at which a new string begins - new in the sense of the one following the \0 character. The figure and listing below may help in understanding this.
[Figure 5.10.3 Memory occupied by the strings the pointers refer to: the strings "Blad: Nie ma takiego dnia\0", "Poniedzialek\0", ..., "Niedziela\0" are stored one after another (85 bytes in total), and each pointer holds the address of the character directly after the previous string's \0.]
#include <stdio.h>
int main (void)
{
char *wskDzien[] = {
"Blad: Nie ma takiego dnia", "Poniedzialek", "Wtorek",
"Sroda", "Czwartek", "Piatek", "Sobota", "Niedziela",
};
int i, j;
for (i = 0; i < 8; i++)
{
for (j = 0; wskDzien[i][j] != '\0'; j++)
printf("dzien[%d][%d]\t= %c\t%p\n", i, j, wskDzien[i][j],
&wskDzien[i][j]);
printf("dzien[%d][%d]\t= %c\t%p\n", i, j, '\0', &wskDzien[i]
[j]);
}
}
return 0;
Listing 5.10.4 The space occupied by the strings the pointers refer to

Since we are talking about arrays of pointers and so far only arrays of pointers to char have been presented, it makes sense to show what an array of pointers to, say, int looks like. Perhaps this example will dispel any doubts, if any have arisen. An array of pointers to int is defined analogously; only the variable's type changes. The listing below shows it.
#include <stdio.h>
int main (void)
{
int x = 10, y = 100, z = 1000;
int *tab[] = { &x, &y, &z };
int i;
int **wskTAB;
wskTAB = tab;
for (i = 0; i < 3; i++)
printf("%p %p %d\n", (tab+i), *(tab+i), **(tab+i));
for (i = 0; i < 3; i++)
printf("%p %p %d\n", &tab[i], tab[i], *tab[i]);
for (i = 0; i < 3; i++)
{
printf("%p %p %d\n", wskTAB, *wskTAB, **wskTAB);
wskTAB++;
}
}
Listing 5.10.5 An array of pointers to int

This example may show arrays of pointers more clearly. We can see that every element of the array tab is a pointer (an address) to one of the variables declared above it. The declaration **wskTAB says it is a double pointer, and a line below we assign to it the address of the zeroth element of tab. The second for loop is probably the easiest to read: the first column prints the address of the i-th element of the array, the second column the value stored in the i-th element (i.e. the address of x, y or z), and the third column the value stored at the address taken from the second column. The third loop differs from the first in that we move the pointer's position in it, which we cannot do in the first one (discussed in section 5.4).
6 Arguments of the main function

Every function may take some arguments or none at all. We have already seen functions that take arguments, but until now main took none. Here I will describe the tool with which values can be passed to main. It is discussed only now because that tool is an array of pointers.
Inside main's parentheses we write two parameters. The first is an integer, while the second is an array of pointers to char. They are conventionally named argc (argument count), i.e. the number of supplied arguments, and argv (argument vector), which is an array of pointers indexed from zero to argc-1. Of course the parameters may be named however you like, but these names are the accepted standard. The figure below shows this mechanism.
*argv[]
argv[0]  ->  p o w i t a n i e \0   (program name)
argv[1]  ->  D z i e n \0           (first parameter)
argv[2]  ->  d o b r y \0           (second parameter)
Figure 6.1 The array of pointers to the arguments of main

So: when a program is run with arguments (more on that in a moment), memory is reserved for the program name and for the arguments that follow. The number of bytes reserved depends on the length of each argument, plus of course one byte for the \0 character. The array of pointers is filled with the addresses of the beginnings of these strings: at position zero of argv is the address of the first character of the program name, at position one the address of the first character of the first argument, and so on. Figure 6.1 shows two arguments, yet argc (the argument count) equals three, because the program name is also treated as an argument. Our program printing the text passed as arguments can be written as follows.
#include <stdio.h>
int main (int argc, char *argv[])
{
int i;
for (i = 1; i < argc; i++)
printf("%s ", argv[i]);
printf("\n");
return 0;
}
Listing 6.1 Arguments of main

For our program to match the scheme adopted in figure 6.1, we have to compile it under the appropriate name and then run it with two arguments.
$ gcc nazwa_programu.c -o powitanie
$ ./powitanie Dzien dobry
The program should be run as shown above. We passed two arguments to it and both were printed. It was said that the program name is also an argument, and yet it was not printed - indeed, because the for loop starts from the first element, not the zeroth one, which points to the program name.
The next example may be a little more interesting. We will write a calculator that takes its arguments in the following order: first operand, operator, second operand, and displays the result. By operator I mean + (addition), - (subtraction), x (multiplication), and / (division). A description of the individual parts of the program follows the listing.
#include <stdio.h>
#include <stdlib.h>
int main (int argc, char *argv[])
{
double tab[2];
if (argc != 4)
{
printf("Uzycie: %s arg1 op arg2\n", argv[0]);
return -1;
}
else
{
tab[0] = atof(argv[1]);
tab[1] = atof(argv[3]);
switch(argv[2][0])
{
case '+' : printf("%.2f\n", tab[0] + tab[1]);
break;
case '-' :
printf("%.2f\n", tab[0] - tab[1]);
break;
case 'x' :
printf("%.2f\n", tab[0] * tab[1]);
break;
case '/' : if (!tab[1])
{
printf("Blad: Dzielenie przez zero\n");
return -1;
}
else printf("%.2f\n", tab[0] / tab[1]);
break;
}
}
return 0;
}
Listing 6.2 A simple calculator

At the very beginning we create a two-element array of doubles and then check whether argc differs from 4 (remember, the program name is also an argument); if so, we print information on how to use the program and terminate it. We pass the program name to printf for display - as shown in the figure, argv[0] is the pointer to the program name - followed by the order in which the arguments are expected when the program is run. If the user supplies too few or too many arguments when starting the program, they will see the following message.
Uzycie: ./kalkulator arg1 op arg2
The function atof takes a pointer to text and converts the characters in that text into a number of type double. atof accepts a minus sign at the beginning of the string. If we pass the string -3aa3 to atof, only the first two characters are converted to double, giving -3; the rest are discarded. The function stops at the first character that is not part of a number, the exception being the letters e and E, which form exponential notation (described in section 2.2.4). As arguments of atof we pass the second argument of the main call (arg1) and the fourth argument (arg2).
These values are assigned to the elements of the array. Now perhaps the most interesting part: if you do not know why the switch argument is written as argv[2][0] and not as argv[2], the picture below may explain it.
argv[2]  ->  | + | \0 |
Figure 6.2 The string pointed to by argv[2]

The switch statement wants a single character as its argument, not a string. Even though we typed only the addition (or other operation) sign as the operator, the \0 character was appended anyway. To take just one character we have to refer to the zeroth character pointed to by this pointer, i.e. add a second index: argv[2][0] refers to the plus sign from figure 6.2. The remaining parts of the switch statement were covered when that statement was discussed.
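A minimal sketch (my own example) illustrating the atof behaviour described above: conversion stops at the first character that cannot be part of a number, while exponent notation is still accepted.

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
    printf("%.2f\n", atof("-3aa3"));    /* -3.00  - conversion stops at 'a'       */
    printf("%.2f\n", atof("2.5e2"));    /* 250.00 - exponent notation is accepted */
    return 0;
}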
Most programs in the Linux environment have a --help option that displays help for the program. Here I will limit myself to a -h option, which will also display a pseudo-help, while the option -h -n, or -hn, will display something else. First the listing is shown, and below it a description of its parts.
#include <stdio.h>
int main (int argc, char *argv[])
{
int h = 0, n = 0;
int inny = 0;
int znak;
while (--argc > 0 && (*++argv)[0] == '-')
while (znak = *++argv[0])
switch (znak)
{
case 'h' : ++h;
break;
case 'n' : ++n;
break;
default : ++inny;
break;
}
if (h == 1 && !inny && !n)
printf("Pomoc - sama opcja -h\n");
else if (h == 1 && n == 1 && !inny)
printf("Inna pomoc - opcja -hn / -h -n\n");
else if (!h && !inny && n == 1)
printf("Inna opcja - opcja -n\n");
else printf("Zla opcja\n");
return 0;
}
Listing 6.3 Help via options

The picture below, in which the important parts of the program are marked, may come to the rescue.
[Figure 6.3 Pointers to the program name and to the parameters: argv initially points to the program name; (*++argv)[0] moves argv to the next argument string (e.g. "-h", then "-n") and looks at its first character, while *++argv[0] moves along the characters of the current string up to its \0.]
Figure 6.3 Pointers to the program name and to the parameter

The variables h and n are initialised to zero; if one of these letters appears in an argument, its counter is increased. The variable inny counts occurrences of other characters. The while loop runs as long as the number of arguments is greater than zero. The next part of the condition deserves a closer look. Look at figure 6.3, which shows that argv points to the program name. The [] operator has the highest priority, so executing *++argv[0] moves us along the current string of characters. If we use (*++argv)[0], the pointer is incremented first, i.e. moved to the next string - our next argument - and since the index in brackets is zero, we check whether its first character is a minus, which is written in the condition. If it is, the inner loop assigns to the variable znak the next character, the one right after the minus. The switch statement is then executed, and the next character is again assigned to znak; if that character is zero, the inner loop stops and the outer while loop runs: the pointer is increased (moved to the next string), and if a minus stands at position zero we examine the following characters; if not, we move on to checking the conditions. If the variable h equals one (the letter h occurred exactly once in the arguments) and neither n nor any other characters occurred, the first text is printed. If h equals one, n equals one, and there were no other characters, the second text is printed. If the only parameter was the letter n, the corresponding message is printed. Otherwise the program reports a bad option. The pre-increment operator was used so as not to examine the program name but to go straight to the first argument.
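As a sketch under my own assumptions (not the book's version), the same -h / -n counting can be written with plain indexing instead of the ++argv pointer tricks, which may make the logic easier to follow.

#include <stdio.h>

int main (int argc, char *argv[])
{
    int i, j, h = 0, n = 0, inny = 0;

    for (i = 1; i < argc && argv[i][0] == '-'; i++)   /* every argument starting with '-' */
        for (j = 1; argv[i][j] != '\0'; j++)          /* every character after the '-'    */
            switch (argv[i][j])
            {
                case 'h': ++h;    break;
                case 'n': ++n;    break;
                default : ++inny; break;
            }

    printf("h=%d n=%d other=%d\n", h, n, inny);
    return 0;
}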
7 Structures

Structures in C, as in other programming languages, are used to store data of different types that are closely related to each other (e.g. information about an employee/student, the coordinates of a point, the side lengths of a rectangle, etc.), in order to group them under a common name and keep them in one designated area of memory.
7.1 Basic information about structures

To declare a structure we use the struct keyword, optionally followed by a structure tag (the name of the structure type); then, between braces, come the structure's members (variable definitions, references to other structures, etc.). After the closing brace a list of variables (variables of the structure type) may, but does not have to, appear. In essence, if no variables are present we have merely defined a structure type. Example declarations of structure types and of variables of those types are shown in the listing below, followed by a description of the individual mechanisms.
#include <stdio.h>
struct punkty {
int x, y;
};
struct prostokat {
int a;
int b;
} pr1 = {3, 12}, pr2;
struct {
float r;
} kolo;
int main (void)
{
struct punkty pk1;
struct prostokat pr3;
pk1.x = 10;
pk1.y = 15;
printf("x: %d\ty: %d\n", pk1.x, pk1.y);
printf("a: %d\tb: %d\n", pr1.a, pr1.b);
pr2.a = 8;
pr2.b = 4;
printf("a: %d\tb: %d\n", pr2.a, pr2.b);
pr3.a = 10;
pr3.b = 12;
printf("a: %d\tb: %d\n", pr3.a, pr3.b);
kolo.r = 3.45;
printf("r: %.2f\n", kolo.r);
return 0;
}
Listing 7.1.1 Examples of structures

The first structure, whose tag is punkty, contains, as can be seen, two members - variables of type int. No variables of this structure type are declared, so to use the structure such a variable has to be created. The definition of a variable of the structure type punkty is the first statement in main: we give the struct keyword, the tag name and the variable name. We created the variable pk1 of type punkty and through it we can change the values of the variables x and y. We refer to the elements of a structure (its members) with the "." (dot) operator, which stands between the variable of the structure type (pk1) and the structure member (x, y). The second structure we declared has the tag prostokat; it also has two int members, but the more important thing is that the structure variables pr1 and pr2 are declared with it, through which the variables a and b can be accessed (there is no need to create a new variable as in the previous case); these variables are global. As can be seen - and this is another property - we may initialise the structure members the way it was written for the variable pr1 (why this way and not the standard way, e.g. int a = 3, will be explained later). The next structure, which has no tag, shows yet another property: the tag name does not have to be given, it is enough to give a variable name after the brace that closes the members. In this case, however, we will not declare any more variables of this structure type, because there is no tag with which such a variable could be created.
Now some information about the space occupied by structures. As long as no variable of type punkty is created, the structure punkty occupies no memory at all. After the declaration of pk1 the structure occupies the total number of bytes occupied by its members, in our case 8 bytes (two four-byte integer variables). The figure may help in understanding this.
Name      Address       Size
pk1.x     0xbff84718    4
pk1.y     0xbff8471c    4
(the structure pk1 starts at 0xbff84718 and ends at 0xbff84720)
Figure 7.1.1 The size of a structure

As can be seen, the address of the beginning of the structure is the address of its first variable, i.e. x. That variable occupies four bytes, so the next member sits at an address greater by four, as the figure shows. The variable y ends at address 0xbff84720, and therefore the whole structure ends at the same address. Altogether the structure occupies 8 bytes. Perhaps this figure sheds some light on why values cannot be assigned to structure members in the familiar way. If it is still unclear, here is the explanation: a structure declaration that no variable refers to describes only a template of the structure, i.e. how it will later be represented in memory. For the type prostokat we declared two variables (pr1, pr2), and both have access to the variables a and b, except that pr1.a lives at a completely different address than pr2.a. Generally speaking, a structure without a variable of its type is useless; it becomes useful the moment we create such a variable and thereby reserve memory for all of its members. Nothing has been said yet about the names of structures, variables and members, so let me do it now: a structure may be named arbitrarily, and a variable of a structure type may have the same name as the structure tag and as one of its members. The listing below is correct.
#include <stdio.h>
struct info {
int info;
} info;
int main (void)
{
info.info = 10;
printf("%d\n", info.info);
return 0;
}
Listing 7.1.2 Three identical names

7.2 Operations on structure members

Operations on structure members are the same as on ordinary variables; after all, they essentially are ordinary variables, grouped in one place and accessed through a special operator.
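A minimal sketch (my own example, not from the book) of that statement: once a member is reached with the dot operator, it takes part in expressions exactly like any other variable.

#include <stdio.h>

struct prostokat {
    int a, b;
};

int main (void)
{
    struct prostokat pr = {3, 4};
    pr.a += 2;                                      /* arithmetic works as usual    */
    pr.b = pr.a * 2;                                /* assignment from expressions  */
    printf("%d %d %d\n", pr.a, pr.b, pr.a * pr.b);  /* prints: 5 10 50              */
    return 0;
}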
7.3 Passing structures to functions

Operations on structure members have been covered, but nothing has been said about operations on whole structures - what may and may not be done. The allowed operations on a structure are:
- assigning another structure to it as a whole,
- taking its address with the & operator,
- referring to its members.
The operation that is forbidden is comparing structures. As for manipulating the data held in a structure through a function, it can be done in several ways. First, values can be passed to the function and assigned to the structure's members. Second, the whole structure can be passed and the whole structure returned with return (see the first allowed operation). Third, a pointer to the structure can be passed. These operations are presented in the listing below.
#include <stdio.h>
struct informacja {
int x, y, z;
};
struct informacja przypiszWartosci (int x, int y, int z);
struct informacja dodajWartosci (struct informacja info, int px, int py,
int pz);
void odejmij (struct informacja *info, int px, int py, int pz);
int main (void)
{
struct informacja info1, info2;
info1 = przypiszWartosci(10, 100, 1000);
printf("%d %d %d\n", info1.x, info1.y, info1.z);
info2 = dodajWartosci(info1, 40, 50, 60);
printf("%d %d %d\n", info2.x, info2.y, info2.z);
odejmij(&info2, 100, 300, 500);
printf("%d %d %d\n", info2.x, info2.y, info2.z);
printf("%p %p %p %p\n", &info1, &info1.x, &info1.y, &info1.z);
printf("%p %p %p %p\n", &info2, &info2.x, &info2.y, &info2.z);
return 0;
}
struct informacja przypiszWartosci (int x, int y, int z)
{
struct informacja tmp;
tmp.x = x;
tmp.y = y;
tmp.z = z;
return tmp;
}
struct informacja dodajWartosci (struct informacja info, int px, int py,
int pz)
{
info.x += px;
info.y += py;
info.z += pz;
return info;
}
void odejmij (struct informacja *info, int px, int py, int pz)
{
info->x -= px;
info->y -= py;
info->z -= pz;
}
Listing 7.3.1 Some operations using structures

Recall that functions have the type of the returned value before their name; that is why in the prototypes of these two functions the return type is struct informacja, meaning the function returns a structure of type informacja, i.e. as defined above. The function przypiszWartosci takes three integer arguments. In its body we must declare a variable of type informacja (tmp) in which the assigned data are held temporarily until the whole structure is returned and assigned to a concrete variable (info1). Since the function returns a structure of type informacja, we may call it and assign the result to a variable referring to that structure, and in this way we have assigned one structure to another. The function dodajWartosci also returns a structure of type informacja, but its first argument is the whole structure and the remaining arguments are the values to be added. In its body we do not have to declare an auxiliary variable for the structure: since the structure was passed in, it is enough to refer to its fields. Again we return the whole structure, this time assigning it to info2. The function odejmij takes a pointer to the structure; as is easy to guess, no value needs to be returned for the changes to be visible.
Here is perhaps the most interesting part, because a previously unused operator appears: -> (a minus sign followed by a closing angle bracket), which has the highest priority. Interestingly, the form (*info).x -= px is equivalent; the parentheses are necessary because the "." (dot) operator has a higher priority than the * operator (the dereference operator). If we omitted the parentheses, we would get this message:
error: request for member x in something not a structure or union
The two lines of listing 7.3.1 not yet discussed are the ones printing addresses. We can check what address the structure and its individual members have; the address of the structure and of its first member will be the same, as was shown in figure 7.1.1.
Although operations involving pointers to structures have been shown, the subject deserves a more thorough treatment; the code below serves as an example.
#include <stdio.h>
struct info {
char imie[30];
int wiek;
float waga;
} dane = {"Kasia", 19, 53.3};
void x (void);
int main (void)
{
struct info dk, *wsk, *wsk_dk;
dk = dane;
wsk = &dane;
wsk_dk = &dk;
printf("%s\t%d\t%.2fkg\n", wsk_dk->imie, wsk_dk->wiek, wsk_dk>waga);
printf("%s\t%d\t%.2fkg\n", (*wsk_dk).imie, (*wsk_dk).wiek,
(*wsk_dk).waga);
printf("%s\t%d\t%.2fkg\n", wsk->imie, wsk->wiek, wsk->waga);
printf("%s\t%d\t%.2fkg\n", (*wsk).imie, (*wsk).wiek, (*wsk).waga);
x();
printf("%s\t%d\t%.2fkg\n", wsk_dk->imie, wsk_dk->wiek, wsk_dk>waga);
printf("%s\t%d\t%.2fkg\n", (*wsk_dk).imie, (*wsk_dk).wiek,
(*wsk_dk).waga);
printf("%s\t%d\t%.2fkg\n", wsk->imie, wsk->wiek, wsk->waga);
printf("%s\t%d\t%.2fkg\n", (*wsk).imie, (*wsk).wiek, (*wsk).waga);
return 0;
}
void x (void)
{
dane.wiek = 0;
dane.waga = 0.0;
}
Listing 7.3.2 A pointer to a structure

So we created the structure info, referred to by the variable dane with initialised values (incidentally, this is exactly how character arrays are initialised). In main we define the variable dk and two pointers to the structure type. These pointers must be assigned the addresses of variables: to wsk we assign the address of dane (which, as will turn out, is a global variable), and to wsk_dk the address of dk, into which two lines earlier we copied the contents of dane. As can be seen, the operations wsk_dk->imie and (*wsk_dk).imie are equivalent; the same holds for the pointer wsk. Calling the function x zeroes the two numeric members of the structure (dane is global, which the last two printf statements show). The two printf statements right after the call of x still display the same values, because that pointer points to the local variable into which we merely copied the contents of dane.
The example below may be somewhat contrived, but it shows some pointer properties not discussed so far.
#include <stdio.h>
struct dane {
int wiek;
int *wsk_wiek;
};
struct info {
struct dane *z_dan;
};
int main (void)
{
struct info inf1, *wsk_inf;
wsk_inf = &inf1;
struct dane dan1;
dan1.wsk_wiek = &dan1.wiek;
wsk_inf->z_dan = &dan1;
wsk_inf->z_dan->wiek = 80;
printf("%d\n", wsk_inf->z_dan->wiek);
printf("%d\n", *wsk_inf->z_dan->wsk_wiek);
return 0;
}
Listing 7.3.3 Another use of pointers

We create two structure types, dane and info. In the first the members are an int and a pointer to an int, in the second a pointer to the type dane. In main we declare a variable of type info and a pointer of the same type. We assign the variable's address to the pointer so that the structure's members can be accessed through it. The variable dan1 of type dane must also be created, because its address will be needed. To the pointer wsk_wiek (in the structure dane) we assign the address of the member wiek of that structure. Now, through the pointer to the structure info (wsk_inf), we assign the address of the structure variable dan1 to the pointer to such a structure (z_dan). After that we can change the value of wiek through the pointers. The second printf prints the value behind the pointer variable wsk_wiek, because the expression *wsk_inf->z_dan->wsk_wiek is equivalent to *(wsk_inf->z_dan->wsk_wiek), since the -> operator has a higher priority than *. So first we get to the address, and only then do we pull out what sits underneath it.
7.4 Nested structures

Structures can be nested. It works like this: we define one structure (we do not have to create a variable for it), and then we define a second structure in which, as a member, we declare a variable of the first one. I hope the example below makes this clear. The table shows the data that will be stored in the structure.

First name   Surname    Height   Weight   Date of birth (day, month, year)
Jan          Kowalski   185      89       12 9 1979
Tabela 7.4.1 The athlete's data
#include <stdio.h>
#include <string.h>
struct dataUrodzenia {
int dzien;
int miesiac;
int rok;
};
struct daneZawodnika {
char imie[20];
char nazwisko[30];
int wzrost, waga;
struct dataUrodzenia urodziny;
};
int main (void)
{
struct daneZawodnika dane;
strncpy(dane.imie, "Jan", sizeof (dane.imie));
strncpy(dane.nazwisko, "Kowalski", sizeof (dane.nazwisko));
dane.wzrost = 185;
dane.waga = 89;
dane.urodziny.dzien = 12;
dane.urodziny.miesiac = 9;
dane.urodziny.rok = 1979;
printf("%s\t%s\t%d\t%d\t%d\t%d\t%d\n", dane.imie, dane.nazwisko,
dane.wzrost, dane.waga, dane.urodziny.dzien, dane.urodziny.miesiac,
dane.urodziny.rok);
return 0;
}
Listing 7.4.1 Nested structures

First we declare the structure type dataUrodzenia, which will store the day, month and year of the athlete's birth. The table shows that this lends itself to being separated out, so that the athlete's data do not contain individual variables for the day, month and year, but an appropriate structure instead. The structure daneZawodnika holds the general information about the athlete, and the variable urodziny, which refers to the first structure, is created inside it. In main we create the variable dane referring to the type daneZawodnika, and then copy the string "Jan" with the strncpy function (from the standard library string.h) into the character array imie, giving the maximum number of characters to copy as the third argument. The same happens with the surname. The remaining assignments are known from the previous section. What may be interesting is accessing the fields of the structure dataUrodzenia. It is done like this: first we give the name of the structure variable, put a dot and give the field name (just as in the previous section), except that now our field is another structure, so after the dot we put the name of the variable referring to that structure, put another dot, and finally give the field we want to access. It would work analogously if we created deeper nesting.
7.5 Arrays of structures

Arrays of structures are declared analogously to ordinary arrays. In fact they are declared just like an ordinary variable of a structure type, only with a size added. So the declaration of an array for the structure daneZawodnika looks as follows.
struct daneZawodnika tab[10];
#include <math.h>
#include <stdio.h>
#define MAX 10
struct dane {
int k;
float x;
};
int main (void)
{
struct dane tab[MAX];
int i;
for (i = 0; i < MAX; i++)
{
tab[i].k = pow(2, i);
tab[i].x = i + 3.5 * (i + 2);
}
for (i = 0; i < MAX; i++)
{
printf("tab[%d].k = %d\t", i, tab[i].k);
printf("tab[%d].x = %.2f\n", i, tab[i].x);
}
return 0;
}
Listing 7.5.1 An array of structures

The array declaration is the first statement of main; its size is declared as the symbolic constant MAX. In the for loop we fill in the elements tab[i].k and tab[i].x and then print all the values. The program's only purpose was to show how the individual elements of an array of structures are accessed.
The next example is an array of structures together with initialisation. The program will print all the rows of the structure unless it receives a row number as an argument at start-up; if that number is valid, it prints only that row. The description is below the listing.
#include <stdio.h>
#include <stdlib.h>
struct ludzie {
char *imie;
char *nazwisko;
int wiek;
} id[] = {
{"Jan", "Kowalski", 18},
{"Tadeusz", "Nowak", 55},
{"Marcin", "Maly", 23},
{"Karol", "Biegacz", 45},
{"Tomasz", "Smialy", 20},
{"Kamil", "Mlody", 22},
{"Tymon", "Kowalewski", 28},
};
int main (int argc, char *argv[])
{
int i, k, kontrola = 0;
int ilElem = sizeof (id) / sizeof (struct ludzie);
if (argc == 2)
{
k = atoi(argv[1]);
if (k > 0 && k <= ilElem)
kontrola = 1;
}
if (kontrola)
printf("%s %s %d\n", id[k-1].imie, id[k-1].nazwisko, id[k-1].wiek);
else for (i = 0; i < ilElem; i++)
printf("%s %s %d\n", id[i].imie, id[i].nazwisko, id[i].wiek);
return 0;
}
Listing 7.5.2 Arrays of structures and program arguments

As the listing shows, this is how an array of structures is initialised. For better readability every row of the structure was enclosed in braces (which could have been omitted) and written on a separate line, which makes it much easier to say what surname hides behind id[3].nazwisko. In main we declare the control variable kontrola initialised to zero; the variable k is used to hold the program's command-line argument. The variable ilElem holds the number of elements, computed with the already familiar method (presented in section 5.4): as a reminder, array size / element type size = number of elements. If the program was called with a parameter, we convert it to int and assign it to k. The following condition checks whether the number is greater than zero and less than or equal to the number of elements; if so, kontrola takes the value one. Having established whether the program was called with an argument and whether the argument was in the proper range, we check the state of the control variable. If it differs from zero, we print only the row given as the argument. The notation id[k-1], specifically the k-1, is necessary because, as we know, arrays are indexed from zero; since the range condition was formulated the way it was, we must subtract one for the index to match. People usually call the first row "row one" rather than "row zero", so 1 can be given as the program argument to reach the zeroth row. This has another advantage: if we typed, say, a letter as the argument, atoi would assign the value zero to k; with direct zero-based indexing a letter would thus select the zeroth row, which would hardly be the intended effect, whereas here it simply falls outside the accepted range.
7.6 The typedef keyword

The typedef keyword creates new names for data types we already know well. What does that mean and what can it be useful for? The listing below uses this mechanism; the description follows it.
#include <stdio.h>
typedef struct pole_ab {
int a, b;
} Bok;
int f_pole (Bok pole);
int main (void)
{
typedef int Licznik;
typedef int Dlugosc;
Bok pole;
// struct pole_ab pole;
Licznik i;
// int i;
Dlugosc max = 15;
// int max = 15;
for (i = 0; i < max; i++)
{
pole.a = i + 3;
pole.b = pole.a + 7;
printf("Pole (%d, %d) = %d\n", pole.a, pole.b, f_pole(pole));
}
return 0;
}
int f_pole (Bok pole)
{
return pole.a * pole.b;
}
Listing 7.6.1 Using typedef

Let us start with the place where the structure is defined. With the typedef keyword we created the name Bok, which may be written in place of struct pole_ab. This means that Bok pole; means exactly the same as struct pole_ab pole; - it is the definition of a new name referring to an old data type. The same goes for the two integer typedefs: even though they do not shorten the notation, they can prove helpful. If we declared several variables as Licznik, we know (at least that is the assumption) that those variables are used for counting something, e.g. the number of loop iterations. As was said in one of the previous sections, to pass a structure to a function we have to do it like this:
int nasza_funkcja (struct nazwa_struktury nazwa_zmiennej)
And we did exactly the same, only using the previously defined name Bok. Using typedef has its benefits, because it usually shortens the notation, e.g. when passing arguments to functions. One important thing should be noted: the syntax of typedef is as follows:
typedef nazwa_typu_danych nowa_nazwa Gdyby nie byo sowa typedef to nowa_nazwa byaby nazw zmiennej podanego typu, tak samo jest w przypadku poniszym typedef struct pole_ab {
int a, b;
} Bok;
Gdyby nie typedef to Bok byby zmienn odnoszc si do typu strukturowego pole_ab. Dla wyrnienia nazwy odnoszcej si do jakiego typu zapisujemy j rozpoczynajc wielk liter,
aczkolwiek nie jest to aden wymg.
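As a small illustrative sketch (not one of the book's listings), typedef can also shorten pointer declarations:

#include <stdio.h>

typedef char *String;        /* String now means "pointer to char" */
typedef unsigned char Byte;  /* Byte now means "unsigned char"     */

int main (void)
{
    String napis = "Ala ma kota";
    Byte b = 200;

    printf("%s %d\n", napis, b);
    return 0;
}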
7.7 Unions

In terms of notation unions are similar to structures, yet they are somewhat different. What does that mean? Recall how much space a structure occupies in the computer's memory: a structure occupies the total number of bytes occupied by its members. A union is different, because a union occupies as much space as its largest member. At any one time it can hold a value of only one type. The figure below may help to understand this, and the listing that follows shows a union in action.
[Figure 7.7.1: Layout of memory occupied by a union - the int, double and int[4] members all start at the same address (0xbfebf100); the union is as large as its largest member (16 bytes)]

The figure shows that the int type occupies 4 bytes, the double type 8 bytes, and a four-element array of int occupies 16 bytes. The whole union occupies just enough to hold the largest of these types, in this case 16 bytes. At any given moment a union can hold only one value of a specific type, which is demonstrated in the listing below.
#include <stdio.h>
union dane {
int i;
double x;
int tab[4];
};
int main (void)
{
int i;
union dane u_data = {10};
printf("Addr: %p Rozm: %d\n", &u_data, sizeof (u_data));
printf("Addr: %p Rozm: %d\n", &u_data.i, sizeof (u_data.i));
printf("Addr: %p Rozm: %d\n", &u_data.x, sizeof (u_data.x));
printf("Addr: %p Rozm: %d\n", &u_data.tab, sizeof (u_data.tab));
printf("u_data.i = %d\n", u_data.i);
u_data.x = 19.9;
printf("u_data.x = %.2f\n", u_data.x);
printf("u_data.i = %d\n", u_data.i);
u_data.tab[0] = 4;
u_data.tab[1] = 9;
u_data.tab[2] = 13;
u_data.tab[3] = 159;
for (i = 0; i < sizeof (u_data.tab)/sizeof (u_data.tab[0]); i++)
printf("u_data.tab[%d] = %d\n", i, u_data.tab[i]);
printf("u_data.x = %.2f\n", u_data.x);
u_data.x = 19.9;
printf("u_data.x = %.2f\n", u_data.x);
printf("u_data.i = %d\n", u_data.i);
printf("u_data.i = %d\n", *(&u_data.i+1));
printf("u_data.i = %d\n", *(&u_data.i+2));
printf("u_data.i = %d\n", *(&u_data.i+3));
return 0;
}
Listing 7.7.1 Using a union

The listing shows that a union is defined and used just like a structure. In main we create a variable of the union type; we can initialize it, but only the first member of the union (here the variable i). The first printf prints the address where the union begins and its size. The next three printf calls print the address at which each member starts (i.e. the beginning of the union) and the size that each type occupies. Since we assigned a value to i during initialization, we can print that value. In the next line we assign some value to the double member x. The value that was stored in i is now overwritten, so we get garbage when we try to access i in the following line. The next instructions assign values to the array, and the loop prints them; its condition uses the construct discussed in section 5.4. The attempt to print x fails, because its value has already been overwritten. After assigning a value to x we overwrite the first two elements of the array and, importantly, we do not overwrite the remaining ones, so we can still access them. To do that we have to use pointers and move the pointer to the right position. The pointer is moved inside the printf call, in its second argument. First we try to print the value stored in i and we get garbage, which is understandable. In the next line the expression &u_data.i gives the address of the beginning of the union (and of the variable i); we add 1, i.e. move the pointer by one size of int (4 bytes), and with the asterisk we fetch the value at that address. The figure helps here: a double occupies 8 bytes, so the address the pointer was moved to also contains garbage, because assigning to x overwrote the first 8 bytes. Moving the pointer by two and by three positions, as done in the next instructions, prints correct data, i.e. the values stored in the array, because those bytes were not overwritten. One important point about moving the pointer: in these instructions we only printed what lies at successive addresses; the pointer itself was not modified and still points to the beginning of the union.
7.8 Bit fields

Even though today's computers rarely have a problem with the amount of memory, some memory can still be saved with bit fields. The idea is this: if certain pieces of information only ever hold the value 1 or 0 (e.g. a switch that is on or off, the state of an LED, etc.), why reserve far more memory for each such variable than is needed? As an example take a situation (a real solution would consist of far more operations than those presented here) in which we have a rudimentary on-board computer in a car. Sensors send information about the car's power supply, the fuel and water levels in the tanks, the state of the electrical system and the state of the brakes. Listing 7.8.1 contains a program using bit fields, followed by a helpful figure and its description.
#include <stdio.h>
struct {
unsigned int zasilanie : 1;
unsigned int paliwo    : 3;
unsigned int woda      : 2;
unsigned int elektr    : 1;
unsigned int hamulce   : 1;
} samochod;
void init (void);
int main (void)
{
printf("Rozmiar: %d\n", sizeof (samochod));
init();
if (samochod.zasilanie == 1)
printf("Zasilanie poprawne\n");
else printf("Zasilanie nie dziala poprawnie\n");
printf("Ilosc paliwa: %.2f%%\n", samochod.paliwo / 4.0 * 100);
printf("Ilosc wody:
%.2f%%\n", samochod.woda / 2.0 * 100);
if (samochod.elektr == 1)
printf("Elektryka dziala poprawnie\n");
else printf("Elektryka nie dziala poprawnie\n");
if (samochod.hamulce == 1)
printf("Hamulce dzialaja poprawnie\n");
else printf("Hamulce nie dzialaja poprawnie\n");
return 0;
}
void init (void)
{
samochod.zasilanie = 1;
samochod.paliwo = 3;
samochod.woda = 1;
samochod.elektr = 1;
samochod.hamulce = 1;
}
Listing 7.8.1 Bit fields

[Figure 7.8.1: Bit fields - bit 0 holds samochod.zasilanie (z), the next three bits samochod.paliwo (p), the next two samochod.woda (w), then samochod.elektr (e) and samochod.hamulce (h); the remaining bits up to bit 31 are unused]

Now I have to mention one more thing about binary numbers. A single bit can store the value 0 or 1. The possible values that can be stored on two and on three bits are listed in table 7.8.1.
3 bits    2 bits    Value
0 0 0     0 0       0
0 0 1     0 1       1
0 1 0     1 0       2
0 1 1     1 1       3
1 0 0     -         4
1 0 1     -         5
1 1 0     -         6
1 1 1     -         7
Table 7.8.1 Possible values stored on two and three bits

The structure variable samochod has a size of four bytes, yet we can store the 5 different pieces of information needed for our task; with ordinary variables this would take size of the type x number of pieces of information, i.e. 20 bytes. To declare a member that occupies a specific number of bits, we put a colon after the member's name in the structure and give the number of bits, as shown in the listing. The information about the power supply (zasilanie) needs one bit: if there is power we set the value to 1, otherwise 0; the same holds for the members describing the electrical system (elektr) and the state of the brakes (hamulce). Information about the liquid levels in the tanks needs more bits. The fuel level is to be displayed on a five-step scale (i.e. 0, 25, 50, 75, 100 [%]). Look again at table 7.8.1 and note that two bits can store only four values (0, 1, 2, 3), so we need three bits for our scale to show the right level. For the water level two bits are enough, because its three-step scale shows 0, 50, 100%. Bit fields are accessed just like ordinary structure members. In the init function we assign values to exactly these members. One very important thing to remember: we cannot assign, for example, the value four to a member that occupies two bits. If we try something like the listing below, we get a warning from the compiler; the program compiles, but the result will not be what we expected.
warning: large integer implicitly truncated to unsigned type
#include <stdio.h>
struct {
unsigned int bl : 2;
} x;
int main (void)
{
x.bl = 4;
printf("%d\n", x.bl);
}
Listing 7.8.2 Assigning a larger value than allowed

Another thing to remember is that a variable declared as a bit field must be an integer, either signed or unsigned. These fields are treated as tiny integer variables, so all arithmetic operations can be performed on them, as can be seen in the printf calls that display the liquid levels in the tanks. If we want to fill the holes (unused bits between the fields), we can do so by declaring a field without a name, so the structure would look like this:
struct {
unsigned int zasilanie : 1;
unsigned int paliwo    : 3;
unsigned int woda      : 2;
unsigned int elektr    : 1;
unsigned int hamulce   : 1;
unsigned int           : 24;
} samochod;
With this we filled the hole between the used bits and the rest of the word, i.e. up to the size of an int. As long as the size of this structure equals four bytes, it is as if we had one ordinary variable declared as a member of the structure; if the size were larger, e.g. eight bytes, then analogously two int variables. Using the special width 0 we can push the following bit fields to the boundary of the next word; the example below shows this.
#include <stdio.h>
struct {
unsigned int a : 1;
unsigned int b : 2;
unsigned int c : 1;
unsigned int   : 0;
unsigned int d : 10;
unsigned int   : 22;
} x;
int main (void)
{
printf("%d\n", sizeof (x));
return 0;
}
Listing 7.8.3 Using width 0

Using the width 0 causes the bit field d to start in a new word (as if it were a second ordinary variable), which is why the size of the structure will be eight bytes. One caveat: if we write programs on a machine in which the bits of a field are placed from right to left, as shown in figure 7.8.1, the program will not work correctly on machines in which the bits are placed in the opposite order (from left to right).
8 Input and output operations

Input and output operations have already been used in previous chapters; input operations perhaps less so, but the printf function we know quite well by now. The following subsections contain information about the functions that display data (in more detail) and about those that read data from the user, together with error handling. There is also information about file handling and about a variable number of arguments.

8.1 The getchar and putchar functions

The getchar function was already used, for instance in section 3.3.2 when discussing the while loop. The loop condition written there was fine for reading a single line. Here I will show how practically the same code can print the entire contents of a file (printing the contents of a file has not been covered yet either). The function itself was described in section 3.3.2, and the EOF constant in section 4.4. So let us combine these two things and write a program that prints the text it reads. The function stops working when it reaches the end of the file (or when reading from the keyboard is interrupted).
#include <stdio.h>
int main (void)
{
int c;
while ((c = getchar()) != EOF)
putchar(c);
return 0;
}
Listing 8.1.1 Printing the text that was read

After compiling and running the program we type some text; after pressing Enter we move to a new line, but the text that was read gets printed. To finish, press Ctrl-C or Ctrl-D.

To display the contents of a whole file with our program, the file has to be passed to the program; getchar then reads characters from the file instead of waiting for characters typed on the keyboard. To pass a text file to our program we type the following command:

$ ./main < main.c

assuming the executable is called main and the source file main.c. If everything went well, the source code will be printed on the screen. The same can be done with a pipe. A pipe is an inter-process communication mechanism, but we will not go into that here. It is used like this:

$ cat main.c | ./main

The point is that whatever appears on the output of the first command (before the | character) is passed to the input of the second command. In this case it is a roundabout way of doing things, because the cat command itself displays the file's contents, but the point was to show an alternative; pipes are, by the way, very useful. Since a text file can be passed to the input, perhaps we can also write to a file (pass to the output) whatever we type, or the contents of another file. With the command below everything we type is written to the file plik.c and nothing is displayed on the screen:

$ ./main > plik.c

And with the command below the whole file main.c is written to the file plik.c:

$ ./main < main.c > plik.c

The putchar function, similarly to printf, writes to standard output (the monitor screen), with the difference that putchar cannot format its output.
8.2 The printf and sprintf functions

We have dealt with printf in practically every program; the basic format descriptors for integers and floating-point numbers are already known, but all of them, together with various combinations, are presented in this subsection. The general declaration of printf looks like this:
int printf (char *format, arg1, arg2, ...)
The ellipsis means that the function does not have a fixed number of parameters but a variable one (more information about a variable number of parameters is in section 8.4). printf returns an int with the number of characters printed, and prints the text pointed to by format, which is usually enclosed in quotation marks; it may contain any printable characters as well as special sequences starting with the % character, called format descriptors. While printing, the successive arguments of printf are substituted in place of these descriptors. A format descriptor consists of the % character and one of the conversion characters listed in table 8.2.1; between the percent sign and the conversion character the following may (but do not have to) appear, in this order:

- (minus) left-adjust the argument within its field
- a number specifying the minimum field width
- . (dot) separating the field width from the precision
- the precision: the maximum number of characters for a string, the number of digits after the decimal point for floating-point numbers, or the minimum number of digits for an integer value
- one of the letters: h if the integer argument is to be printed as short, or l if as long

Below is a table containing the characters that can be used to format text.
Conversion character   Argument type   Output
d, i                   int             Decimal number
o                      int             Unsigned octal number, without a leading zero
x, X                   int             Unsigned hexadecimal number, without leading 0x, 0X
u                      int             Unsigned decimal number
c                      int             Single character
s                      char *          String printed up to the terminating \0 or until the number of characters given by the precision is reached
f                      double          [-]m.dddddd, the number of d's is given by the precision, 6 by default
e, E                   double          [-]m.ddddddexx, [-]m.ddddddExx
g, G                   double          Printed in %e or %E format if the exponent is less than -4 or greater than or equal to the precision, otherwise printed in %f format
p                      void *          Pointer
%                      -               Prints the % character itself
Table 8.2.1 Format descriptors of the printf function

Example uses of the format descriptors are shown in the listing below. Each descriptor in the first four printf calls was placed between | characters so that the alignment, field widths, etc. are easier to see.
#include <stdio.h>
int main (void)
{
int kk1 = 145E5;
double pi = 3.14159265;
double xx = 3.1E-7;
char *tab = "Ala ma kota";
long int el = 10L;
unsigned int ui = 1E4;
unsigned long ul = 10E5L;
int max = 6;
printf("|%d| \t|%f| \t|%s|\n\n", kk1, pi, tab);
printf("|%15d| \t|%15f| \t|%15s|\n\n", kk1, pi, tab);
printf("|%-15d| \t|%-15.8f| \t|%-15.8s|\n\n", kk1, pi, tab);
printf("|%-15.10d| \t|%-.2f| \t|%.10s|\n\n", kk1, pi, tab);
printf("%d %i %o %x\n\n", kk1, kk1, kk1, kk1);
printf("%ld %u %lu %c\n\n", el, ui, ul, tab[0]);
printf("%.2e %.10g %.7f %p %%\n", xx, xx, xx, &pi);
printf("|%15.*s|\n", max, tab);
printf("|%*.*s|\n", max, max, tab);
return 0;
}
Listing 8.2.1 Using format descriptors

The last two printf calls may be of interest. In the first, the precision is given as an asterisk: the variable passed as the argument at the corresponding position determines the precision, in this case the value stored in max, i.e. 6. In the second, both the field width and the precision are set from the variable max. For floating-point numbers the precision specifies the number of digits after the decimal point; for integers, the minimum number of displayed digits, which is why some of the printf calls print leading zeros; and for text, the number of displayed characters; if the precision is smaller than the number of letters, the string will not be displayed in full. Note how long values are printed: we add the letter l before the descriptor of the given type, e.g. %ld for long int, %lu for unsigned long, %llu for unsigned long long, and so on.

At this point I want to mention something called escape sequences. We have actually come across them more than once, but not all of them have been shown in an example. An escape sequence is a character (or characters) preceded by a backslash which is not printed (e.g. on the screen) in the normal way, but performs a specific function (e.g. the newline character). The table below is a collection of these characters, and an example of their use follows the table.
Escape sequence   Meaning
\a                Bell (alert)
\b                Backspace
\f                Form feed
\n                New line
\r                Carriage return
\t                Horizontal tab
\v                Vertical tab
\'                Apostrophe
\"                Quotation mark
\\                Backslash
\?                Question mark
\ooo              ASCII character given as an octal number
\xhh              ASCII character given as a hexadecimal number
Table 8.2.2 Escape sequences
#include <stdio.h>
int main (void)
{
char *napis = "\x50\x72\x7a\x79\x67\x6f\x64\x79 \x6b\x6f\x74\x61
\x46\x69\x6c\x65\x6d\x6f\x6e\x61";
printf("%s\n", napis);
printf(" \b\? \' \" \\\n");
return 0;
}
Listing 8.2.2 Using some of the escape sequences

The sprintf function is almost identical to printf, except that instead of printing the formatted text to standard output (the monitor) it writes that string into the array given as its first argument. The array must be sufficiently large. The function's prototype looks as follows:
int sprintf(char *string, char *format, arg1, arg2, ...);
An example use of sprintf is shown in the listing below. Both functions are declared in the stdio.h header.
#include <stdio.h>
int main (void)
{
char tab[50];
int wiek = 20;
sprintf(tab, "Czesc jestem Tomek i mam %d lat", wiek);
printf("%s\n", tab);
return 0;
}
Listing 8.2.3 Using the sprintf function

8.3 The scanf and sscanf functions

The scanf function is used to read data from the keyboard. It was used in one of the earlier chapters, but it was not discussed very thoroughly. Here it will be covered in more detail, with examples of its use. The function's prototype looks as follows:
int scanf (char *format, arg1, arg2, ...);
The function returns an integer value corresponding to the number of input items successfully read and assigned. The arguments must be pointers, i.e. the address of a variable has to be passed as the argument. scanf is used very much like printf: the format descriptors are built the same way, and the list of conversion characters is shown in the table below.
Conversion character   Argument type    Input
d                      int *            Decimal integer
i                      int *            Integer; may also appear in octal form (leading 0) or hexadecimal form (leading 0x, 0X)
o                      int *            Octal integer, with or without a leading zero
u                      unsigned int *   Unsigned decimal integer
x                      int *            Hexadecimal integer, with or without a leading 0x, 0X
c                      char *           Character
s                      char *           Text (a word)
e, f, g                float *          Floating-point number with an optional sign, optional decimal point and optional exponent
%                      -                The % character; no assignment is made
Table 8.3.1 Format descriptors of the scanf function

The letter h can be placed before the conversion characters d, i, o, u, x if we want to read a short, or l if the number is to be of type long. For the characters e, f, g the letter l means that the type of the data being read is double rather than float.
The listing below shows scanf used in several variants.
#include <stdio.h>
int main (void)
{
int x;
double y;
printf("Podaj liczbe calkowita: ");
scanf("%d", &x);
printf("Podaj liczbe rzeczywista: ");
scanf("%lf", &y);
printf("%d %.2f\n", x, y);
return 0;
}
Listing 8.3.1 Using the scanf function

Written this way, the use of scanf is not safe: if we type a letter while entering the integer, quite interesting things happen (worth testing yourself), and if we type a floating-point number, only its integer part is taken and the dot and the following digits are passed to the next call, i.e. the one that reads the floating-point number. This happens because scanf stops as soon as it encounters the first character that does not belong to the set of the given format descriptor; so while reading integers, if we type 3.4, the first call takes only the integer part. The three is consumed, but .4 stays in the buffer, so the next characters read by scanf are .4, and since we are now reading a floating-point number this value is treated as correct. An important thing to remember is that scanf takes the address of a variable as its argument, not its name!

scanf can also be used to read text, but it is not the best method, because the function copies only non-white characters into the array; that is, if we type a space between words, it stops reading at the space, so only the first word is read. Nevertheless, single words can be read with scanf. The example below shows this and one other thing.
#include <stdio.h>
int main (void)
{
int dzien, rok;
char miesiac[20];
printf("Podaj date w formacie: Dzien Miesiac Rok\n");
scanf("%d %s %d", &dzien, miesiac, &rok);
printf("Dzien: %d\n", dzien);
printf("Miesiac: %s\n", miesiac);
printf("Rok: %d\n", rok);
return 0;
}
Listing 8.3.2 Using scanf to read a word

In this scanf call the first parameter is "%d %s %d", i.e. the function expects an integer, a white-space character (space, tab), a word, a white-space character, and an integer. (If you are wondering why we passed the addresses of the variables dzien and rok but miesiac simply by name, remember that the name of an array is a pointer.) Instead of that space we can put another character, e.g. a slash, which lets us type the date in the format RRRR/MM/DD; the example below shows this.
#include <stdio.h>
int main (void)
{
int dzien, miesiac, rok;
printf("Podaj date w formacie: RRRR/MM/DD\n");
scanf("%d/%d/%d", &rok, &miesiac, &dzien);
printf("Dzien: %d\n", dzien);
printf("Miesiac: %d\n", miesiac);
printf("Rok: %d\n", rok);
return 0;
}
Listing 8.3.3 Another use of scanf, a different date format

An example of reading numbers in other numeral systems is shown below.
#include <stdio.h>
int main (void)
{
int ld;
printf("Podaj liczbe dziesietna: ");
scanf("%d", &ld);
printf("Podales liczbe: %d\n", ld);
printf("Podaj liczbe osemkowa: ");
scanf("%o", &ld);
printf("Osemkowo %o to dziesietnie %d\n", ld, ld);
printf("Podaj liczbe szesnatkowa: ");
scanf("%x", &ld);
printf("Hex: %x, Oct: %o, Dec: %d\n", ld, ld, ld);
return 0;
}
Listing 8.3.4 Reading numbers in other numeral systems

scanf is suitable for reading numbers, but it does not cope well with text. Considering that entering a bad number without any safeguards can crash the whole program, a better solution is to read the number into a character array and then convert that string to a number with an appropriate function. This will be shown in section 8.6.
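As a preview of that idea, here is a minimal illustrative sketch (not one of the book's listings) that reads a line with fgets, described in section 8.6, and converts it with strtol from stdlib.h:

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
    char bufor[100];
    char *koniec;
    long liczba;

    printf("Podaj liczbe: ");
    if (fgets(bufor, sizeof (bufor), stdin) != NULL)
    {
        liczba = strtol(bufor, &koniec, 10);   /* convert the text to a number          */
        if (koniec == bufor)                   /* not a single digit could be converted */
            printf("To nie jest liczba\n");
        else
            printf("Wczytano: %ld\n", liczba);
    }
    return 0;
}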
The sscanf function is similar to scanf; the only difference is that instead of reading from standard input it reads the characters pointed to by its first argument. The function's declaration looks as follows.
int sscanf (char *string, char *format, arg1, arg2, ...);
As an example, consider the following program.
#include <stdio.h>
int main (void)
{
char *napis = "02 Sierpien 2010";
int dzien, rok;
char miesiac[20];
sscanf(napis, "%d %s %d", &dzien, miesiac, &rok);
printf("Dzien: %d\n", dzien);
printf("Miesiac: %s\n", miesiac);
printf("Rok: %d\n", rok);
return 0;
}
Listing 8.3.5 Using the sscanf function

As you can see, sscanf reads and converts text not from standard input but from the string pointed to by napis; instead of a pointer to text it could just as well be a character array with initialized values.

8.4 Variable number of arguments

We have dealt with a variable number of arguments all along: the printf and scanf functions take a varying number of arguments, and with their help we can print or read any number of values. In general, the declaration of a function with a variable number of parameters has three dots (...) as its last parameter. When the function is called with different numbers of arguments it has no names for them; if there are no names, how do we refer to them? Special macros defined in the stdarg.h header make it possible to move along such an argument list, and I will show in a moment how to go about it. The figure below may illustrate the matter.
[Figure 8.4.1: Variable number of arguments - the call srednia(5, 1, 2, 3, 4, 5) and the elements involved: va_list lista_param; va_start(lista_param, ilosc_arg); va_arg(lista_param, double); va_end(lista_param)]

To refer to the unnamed arguments we have to create a variable of type va_list; in our case it is called lista_param. The va_start macro is responsible for initializing lista_param to the first unnamed argument; that is why its first argument is the va_list variable and its second is the last named parameter, which in our case is the only named parameter, ilosc_arg. A call of the va_arg macro extracts the value stored under the currently pointed-to argument and advances lista_param to the next argument. So that it knows by how much to advance lista_param and what the type of the fetched value is, the second argument of va_arg is the type of the variable. When we are done we have to use the va_end macro, which cleans up everything related to the previous calls. The code for this task is as follows.
#include <stdio.h>
#include <stdarg.h>
double srednia (int ilosc_arg, ...);
int main (void)
{
printf("%.2f\n", srednia(3, 1.0, 4.0, 5.0));
printf("%.2f\n", srednia(4, 2.0, 4.5, 5.5, 3.5));
printf("%.2f\n", srednia(3, 1.5, 4.0, 5.5));
printf("%.2f\n", srednia(4, 1.78, 4.34, 5.11));
return 0;
}
double srednia (int ilosc_arg, ...)
{
va_list lista_param;
va_start (lista_param, ilosc_arg);
int i;
double suma = 0;
for (i = 0; i < ilosc_arg; i++)
    suma += va_arg(lista_param, double);
va_end(lista_param);
return suma / (double) ilosc_arg;
}
Listing 8.4.1 Variable number of parameters

The first argument of srednia is the number of arguments; it controls the loop so that all the arguments can be summed and, of course, divided by their count to obtain the arithmetic mean. Each call of the va_arg macro extracts the value of the argument currently pointed to and advances lista_param to the next argument in order.

8.5 File handling

So far the only thing related to files was redirecting standard input or output on the command line, which let us read everything that is in a file or write something to it. Here we will see how to use functions to open a file, read its contents, write something to it, and so on.
Before any operation on a file, the file has to be opened; the fopen function serves this purpose, and its declaration looks as follows.
FILE *fopen(const char *filename, const char *mode);
We can see that the function returns a pointer, but to what? Of what type? FILE is a structure that contains information about the buffer, the character position, the kind of access, errors, and so on. In fact we do not need to concern ourselves with that; what matters is how to use it. Suppose we declare something like this.
FILE *wsk_plik;
We have created a pointer to a file; now it is enough to assign to it the result of fopen with the parameters filled in, i.e. the path and the access mode of the file. Before the complete statement is shown, a few words of explanation about what an access mode is. The access mode says what will be done with the file, i.e. whether the file's contents will only be read, or read and written; in any case one of the modes has to be given, and they are listed in the table below.
Access mode   Function                 Additional information
"r"           Reading                  Opens the file for reading only
"r+"          Reading and writing      Opens the file for reading and writing
"w"           Writing                  Opens the file for writing (erases the previous contents)
"w+"          Writing and reading      Opens the file for writing and reading
"a"           Appending                Opens the file for appending (if the file does not exist, it is created if possible)
"a+"          Appending and reading    Opens the file for appending (created if possible when it does not exist) and reading
"t"           Text mode                Opens the file in text mode
"b"           Binary mode              Opens the file in binary mode
Table 8.5.1 Access modes

A small note: the modes can be combined; for instance, to open a file for both reading and writing in binary mode, the mode string "r+b" (or "rb+") should be given.
If the file sport.txt is in the directory containing the executable, then to open it in write mode we write this statement:
wsk_plik = fopen("sport.txt", "w");
Of course, hard-coding the file name in the source code is not a good idea, because we would have to recompile the program every time the file name changed. A good and convenient way is to pass the file name as a program argument. The listing below presents basic use of the statements discussed.
#include <stdio.h>
int main (int argc, char *argv[])
{
FILE *wsk_plik;
if (argc != 2)
{
printf("Uzycie: %s nazwa_pliku\n", argv[0]);
return -1;
}
if ((wsk_plik = fopen(argv[1], "w")) != NULL)
{
printf("Plik otwarto.\n");
fprintf(wsk_plik, "Ala ma kota, a kot ma %d lat\n", 5);
fclose(wsk_plik);
}
return 0;
}
Listing 8.5.1 Writing information to a file

On failure fopen returns NULL, which is why we check the condition; if the file was opened correctly, a message saying so is displayed and the text is written to the file with the so far undiscussed fprintf function. After finishing operations on a file it is mandatory to close it with the fclose function, which takes the file pointer as its argument. As you can probably see by now, fprintf is analogous to printf, with one small difference: the first argument is a pointer to a file. Its declaration looks as follows.
int fprintf (FILE *wsk_file, char *format, ...);
Calling fprintf with stdout (standard output) as its first argument, i.e.
fprintf(stdout, "%s %s\n", tab[0], tab[1]);
gives exactly the same effect as printf, i.e. the information will be printed on the screen. There is also an fscanf function, analogous to scanf except that it reads text from a file; just as with the previous function, the first argument is a pointer to the file. Its declaration looks like this.
int fscanf(FILE *wsk_file, char *format, ...);
And it can be used as follows.
#include <stdio.h>
int main (int argc, char *argv[])
{
FILE *wsk_plik;
char dane[3][10];
if (argc != 2)
{
printf("Uzycie: %s nazwa_pliku\n", argv[0]);
return -1;
}
if ((wsk_plik = fopen(argv[1], "r")) != NULL)
{
fscanf(wsk_plik, "%s %s %s", dane[0], dane[1], dane[2]);
fclose (wsk_plik);
printf("%s %s %s\n", dane[0], dane[1], dane[2]);
}
return 0;
}
Listing 8.5.2 Using fscanf

Analogously, this function too can be made to read characters from the keyboard; the difference is that instead of the file pointer we pass stdin (standard input). So the following statement reads an integer from the keyboard.
fscanf(stdin, "%d", &d);
stdin and stdout are called streams; the first refers to input, the second to output. There is a third one, called stderr, which is the error output. Both stdout and stderr print information to standard output, but there is a subtle difference between them, which I will show in the examples below.
#include <stdio.h>
int main (int argc, char *argv[])
{
FILE *wsk_plik;
if (argc != 2)
{
printf("Uzycie: %s nazwa_pliku\n", argv[0]);
return -1;
}
if ((wsk_plik = fopen(argv[1], "r")) != NULL)
{
printf("Plik zostal pomyslnie otwarty\n");
fclose(wsk_plik);
}
return 0;
}
Listing 8.5.3 Displaying errors on stdout
#include <stdio.h>
int main (int argc, char *argv[])
{
FILE *wsk_plik;
if (argc != 2)
{
fprintf(stderr, "Uzycie: %s nazwa_pliku\n", argv[0]);
return -1;
}
if ((wsk_plik = fopen(argv[1], "r")) != NULL)
{
fprintf(stderr, "Plik zostal pomyslnie otwarty\n");
fclose(wsk_plik);
}
return 0;
}
Listing 8.5.4 Displaying errors on stderr

As you can see, these two listings differ only in the statements that print the error and success messages. Since both functions print information on the screen, what difference does it make which one we use? There is a difference, and we will simulate it now. The program takes arguments when launched, specifically one, which should be a file name. So after compiling, run the programs as follows:

$ ./pierwszy plik1.txt > info.txt

and then type

$ cat info.txt

If plik1.txt existed, you will find the message that the file was opened successfully; if it did not exist, info.txt will be empty. With the second program, if the opened file did not exist, info.txt will also be empty; if the file existed, the message is displayed on the screen, not in the file, and this is exactly the difference. Let the next case show the difference between these streams even more clearly and make it obvious which one is better for signalling errors. The program checks whether exactly two parameters were given (the program name and one file name); if we give three, it signals an error, so type:

$ ./pierwszy plik1.txt plik2.txt > info.txt

and display the contents of info.txt, just as in the previous example. You will see the message Uzycie: ./pierwszy nazwa_pliku. This is an error, not information that might be useful to us, so it should not end up in the file. Running the second program the same way we get the same message, but displayed on the screen, while info.txt stays empty. This is precisely how the stderr stream differs from stdout.
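A side note, purely about the shell rather than about C: on most Unix shells the error stream can itself be redirected with 2>, which makes it easy to verify where each message really went (here assuming the second program was compiled to an executable named drugi):

$ ./drugi plik1.txt plik2.txt 2> info.txt
$ cat info.txt
Uzycie: ./drugi nazwa_pliku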
8.6 Reading and printing whole lines of text: the fgets and fputs functions

To read and print whole lines we can use the fgets and fputs functions defined in the standard library. The declaration of fgets looks as follows:
char *fgets (char *line, int maxline, FILE *wsk_plik);
The third argument is a pointer to the file from which the function reads successive lines. After reading one line it puts that string into the character array line, including the newline character \n. fgets reads at most maxline-1 characters, because each line read and stored in the array is terminated with the \0 character. The file pointer can also be stdin, in which case the function reads a line from standard input (i.e. the keyboard). The use of fgets can be seen in the program below, which takes a file name as its command-line argument and prints the file's contents to standard output, numbering the lines.
#include <stdio.h>
#define MAX 1000

int main (int argc, char *argv[])
{
FILE *wsk_plik;
char *progName, *fileName;
unsigned int counter = 1;
char line[MAX];
progName = argv[0];
fileName = argv[1];
if (argc != 2)
{
fprintf(stderr, "Uzycie: %s nazwa_pliku\n", progName);
return -1;
}
if ((wsk_plik = fopen(fileName, "r")) != NULL)
{
while (fgets(line, MAX, wsk_plik) != NULL)
{
printf("%2d: %s", counter, line);
counter++;
}
fclose(wsk_plik);
}
else
{
fprintf(stderr, "%s: Nie moge otworzyc pliku: %s\n", progName,
fileName);
return -1;
}
return 0;
}
Listing 8.6.1 Using fgets

We declare the file pointer in the first statement of main; then, which was not necessary but may make the code easier to read, we declare two pointers to text that will hold the program name and the file name respectively. We need a counter that will hold the line number; we declare it as an unsigned variable, since there are no negative lines, and unsigned, as we know, doubles the variable's range. We need a character array into which a line will be copied, so we create the array line of size MAX, which is a symbolic constant. fgets returns NULL when it detects the end of the file, so the while loop runs until the end of the file is detected. On success the function returns a pointer to the array line. Once the end of the file is detected, the file is closed and the program ends. Note the line:
printf("%2d: %s", counter, line);
As you can see, there is no need to write a newline character, because, as mentioned, fgets reads the newline and stores it in the array. The prototype of fputs looks like this:
int fputs (char *line, FILE *wsk_plik);
The function returns a non-negative value if everything went well and EOF on failure. As its first argument it takes a character array, which it writes to the file pointed to by the second argument. As with the previous function, the second argument can be the stdout stream, which prints the given array to the monitor screen. An example of its use is shown in the listing below.
#include <stdio.h>
#define MAX 1000

int main (int argc, char *argv[])
{
FILE *wsk_plik;
char *progName = argv[0], *fileName = argv[1];
char line[MAX];
if (argc != 2)
{
fprintf(stderr, "Uzycie: %s nazwa pliku\n", progName);
return -1;
}
if ((wsk_plik = fopen(fileName, "a")) != NULL)
{
printf("Wpisz wiersz: ");
fgets(line, MAX, stdin);
if (fputs(line, wsk_plik) != EOF)
fprintf(stderr, "Wiersz pomyslnie zapisany w pliku: %s\n",
fileName);
else fprintf(stderr, "Nie udalo mi sie zapisac wiersza w pliku:
%s\n", fileName);
fclose(wsk_plik);
}
else
{
fprintf(stderr, "%s: Nie moge otworzyc pliku: %s\n", progName,
fileName);
return -1;
}
return 0;
}
Listing 8.6.2 Using fputs
The beginning is essentially the same, so I will start the description from the place where we type text into the array. As was said, if the third argument of fgets is stdin, then instead of reading text from a file we read it from standard input. After reading the line we check whether fputs returned EOF, which signals an error. If there was no error, we send a message about the successful write to stderr. If some error did occur, we display a message appropriate for that situation. Now the question: how to provoke a write error, so that we can see the message about the failed write, and an error opening the file (since mode "a" creates the file when it does not exist)? First we run the program giving as the parameter the name of a file that either exists or does not exist, in which case it will be created, for example:

$ ./nazwa_programu plik1.txt

The file was created and the typed line was appended, so the program works as it should. Now a small remark: on Linux systems files have access permissions, described at a very basic level in appendix A, sections 4 and 6. For opening the file to fail we have to remove the access permissions; type the following command in the console:

$ chmod 0 plik1.txt

Now, after running the program with the argument plik1.txt, we get the message below, because the operating system will not allow the file to be opened:

./nazwa_programu: Nie moge otworzyc pliku: plik1.txt

If we want to restore the access permissions to the file, type for example:

$ chmod 750 plik1.txt

To get the message about a failure while writing the line to the file, we can for example try to open the file in mode "r"; an attempt to write to a file opened for reading ends with an error.
9 Dynamically allocated memory

For dynamic memory allocation two functions defined in the stdlib.h header are used, namely malloc and calloc; in essence both return a pointer to some region of memory, although they differ in a small detail which I will mention in a moment. The declaration of malloc looks as follows:
void *malloc (size_t n);
On success this function returns a pointer to n bytes of uninitialized memory. If the space could not be reserved, the function returns NULL. The declaration of calloc looks as follows:
void *calloc (size_t n, size_t size);
calloc returns a pointer to n x size bytes, i.e. enough space to hold an array of n elements of size size each. On failure it returns NULL, just like the previous function. size_t is a data type (an unsigned integer type) used for lengths, sizes and the like; nothing stops us from using a basic type such as unsigned int instead. Memory allocated by calloc is initialized with zeros. One more remark about both functions: since they return void *, it is common (though not strictly required in C) to cast the returned pointer to the specific data type, as done below. The example shows how to reserve space for n pseudo-random int numbers using malloc.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main (int argc, char *argv[])
{
size_t iloscElementow;
int *wsk, i;
// unsigned int iloscElementow;
if (argc != 2)
{
fprintf(stderr, "Uzycie: %s ilosc_elementow_tablicy\n",
argv[0]);
return -1;
}
srand(time(0));
iloscElementow = atoi(argv[1]);
if ((wsk = (int *)malloc (iloscElementow * sizeof (int))) == NULL)
{
fprintf(stderr, "Pamieci nie udalo sie zarezerwowac\n");
return -1;
}
for (i = 0; i < iloscElementow; i++)
wsk[i] = rand() % 100;
for (i = 0; i < iloscElementow; i++)
    printf("%d: %d\n", i, wsk[i]);
free(wsk);
return 0;
}
Listing 9.1 Using the malloc function

I will say more about srand, rand and time in a moment; first the part that interests us most. We create a pointer to the integer type and then assign to it the result of malloc, which takes as its argument the number of bytes to reserve. That number is the number of elements given as the program's argument times the size of int. We cast the whole thing to the appropriate type, in our case a pointer to int (int *). Then we check whether the pointer equals NULL; if so, the program has to stop, because memory that was not allocated must not be accessed. It is an absolute duty to free the memory once it is no longer used. In a program as small as the one in listing 9.1 nothing bad happens if we do not call free, because the program ends right after printing the values, and when the program ends the memory is released anyway. Nevertheless, allocation sometimes takes place in other functions, and not freeing the memory means that as long as the program runs that memory cannot be allocated a second time. Memory is freed with the free function, whose argument is a pointer to memory previously allocated with malloc or calloc. You must not free memory that was not allocated before, nor access memory that has already been freed. As you can see, in the loop we access the elements just like array elements, and this is no coincidence: the amount of allocated memory is the number of elements x the size of the type, and since all the elements lie next to each other in memory, we can move the pointer by a specific number of positions, i.e. by the size of an element; arrays work on exactly the same principle, which is why we can index a pointer variable. An alternative (perhaps clearer in this case) way would be to move the pointer explicitly, e.g. like this: *(wsk+i). Both operations are equivalent. We used malloc to allocate space for the array; we could just as well have called calloc as follows:
wsk = (int *)calloc (iloscElementow, sizeof (int));
The figure below may illustrate how the alloc family of functions allocates memory.
[Figure 9.1: An illustration of memory allocation - wsk = malloc(n * sizeof (int)) returns the address of the allocated block (e.g. 0x950a008), and *wsk = value stores a value there]

The rand function returns a pseudo-random number. If we used only rand, then once compiled the program would always print the same numbers, which does not satisfy us. To obtain different numbers a mechanism called a seed is used, provided by the srand function; but again, if we pass a constant number as its argument, rand will generate a fixed sequence and show the same values on every run, which does not satisfy us either. Here the time function comes to the rescue. time returns information about the time; called with the argument 0 it gives the number of seconds that have elapsed since 1 January 1970, so on every run this count is different and we get pseudo-random numbers. The expression rand() % 100 produces random numbers in the range 0 to 99.
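As a small illustrative aside (not from listing 9.1), the same idea generalizes to an arbitrary range [a, b] with the expression a + rand() % (b - a + 1):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main (void)
{
    int a = 10, b = 20, i;

    srand(time(0));                                 /* seed the generator with the current time */
    for (i = 0; i < 5; i++)
        printf("%d\n", a + rand() % (b - a + 1));   /* pseudo-random number from a to b         */
    return 0;
}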
10 The standard library

This chapter presents functions from the standard library. The standard library is a set of header files containing the declarations of particular functions, types and macros. We have already happened to use, for instance, the printf, scanf and malloc functions. The declarations of these functions (and many others) are located in header files, which have to be included with #include. The header files described in this book that belong to the standard library are:

assert.h, complex.h, ctype.h, errno.h, iso646.h, limits.h, locale.h, math.h, setjmp.h, signal.h, stdarg.h, stdbool.h, stdio.h, stdlib.h, string.h, time.h

The following subsections of this chapter discuss the header files listed above, which contain very useful functions. Despite best efforts, the material presented here is not a complete compendium of knowledge about the standard library; for that you can visit http://www.gnu.org/software/libc/manual/, where the C library (The GNU C Library) is presented in great detail in several formats (the pdf version has over 1000 pages).
10.1 assert.h

The assert.h header defines the assert macro, which is used for testing a program in order to detect errors (if there are any). assert is defined as follows:
void assert (int expression);
The parameter is an expression which, when evaluated, takes either a non-zero value or zero. If expression evaluates to zero, assert aborts the program and a message following the scheme below is printed to the standard error output (stderr). If the argument has a non-zero value, nothing happens.
file: source.c:linenum: function: Assertion `expression' failed.
Aborted
Where:
file - the name of the compiled program
source.c - the name of the source file
linenum - the number of the line in which assert occurs
function - the name of the function
expression - the argument of assert

An example of its use is shown in the listing below.
#include <stdio.h>
#include <assert.h>
int main (void)
{
int x = 1, y = 0;
int w1, w2;
w1 = x || y;
w2 = x && y;
assert(w1);
assert(w2);
return 0;
}
We assign to the variables w1 and w2 the results of logical operations on x and y. w1 will have the value one (according to table 2.3.6) and w2 the value zero (according to table 2.3.3). The first assert call triggers no action, because w1 has the value one, whereas the second call, whose argument has the value zero, terminates the program with the appropriate message. assert should be used rather for checking the program itself than for informing the user that something went wrong. Take this example: the user enters two numbers whose quotient is to be displayed. If zero is given as the second number, it is better for the program to be written in such a way that the user can enter a different number again, rather than for the program to terminate completely with a message that will most likely be incomprehensible to an ordinary user.
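A minimal sketch of that friendlier approach (an illustrative example, not one of the book's listings) might look like this:

#include <stdio.h>

int main (void)
{
    double a, b;

    printf("Podaj pierwsza liczbe: ");
    scanf("%lf", &a);
    printf("Podaj druga liczbe: ");
    scanf("%lf", &b);
    while (b == 0)                       /* ask again instead of aborting the whole program */
    {
        printf("Dzielenie przez zero, podaj inna liczbe: ");
        scanf("%lf", &b);
    }
    printf("%.2f / %.2f = %.2f\n", a, b, a / b);
    return 0;
}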
10.2 complex.h

The complex.h header contains declarations of functions used for complex-number arithmetic. There are quite a few of them and they are listed in the table below. An example with operations on complex numbers is shown after the table. A small note: each function comes in three versions, which differ in the type of the returned value. The table lists the versions whose return type is double or double complex. In addition there are functions returning float / float complex and long double / long double complex values; to obtain them, the letter f or l (el) respectively is appended to the function name. This is shown in the listing.
Function prototype                                     Description
double cabs (double complex)                           Modulus of a complex number
double carg (double complex)                           Argument of a complex number
double cimag (double complex)                          Imaginary part of a complex number
double creal (double complex)                          Real part of a complex number
double complex csqrt (double complex)                  Square root of a complex number
double complex cacos (double complex)                  Inverse cosine
double complex cacosh (double complex)                 Inverse hyperbolic cosine
double complex casin (double complex)                  Inverse sine
double complex casinh (double complex)                 Inverse hyperbolic sine
double complex catan (double complex)                  Inverse tangent
double complex catanh (double complex)                 Inverse hyperbolic tangent
double complex ccos (double complex)                   Cosine
double complex ccosh (double complex)                  Hyperbolic cosine
double complex cexp (double complex)                   Exponential function
double complex clog (double complex)                   Logarithm
double complex conj (double complex)                   Complex conjugate
double complex cpow (double complex, double complex)   Exponentiation
double complex csin (double complex)                   Sine
double complex csinh (double complex)                  Hyperbolic sine
double complex ctan (double complex)                   Tangent
double complex ctanh (double complex)                  Hyperbolic tangent
Take the following example: the number z is given in algebraic form as

z = 4 + 5i

Display the modulus, the argument, and the real and imaginary parts of z and of its complex conjugate, together with the angles both numbers form with the positive direction of the Re axis.
#include <stdio.h>
#include <complex.h>
#include <math.h>
int main (void)
{
double complex z = 4+5i;
double complex z_ = conj(z);
double mod = cabs(z);
double re = creal(z);
double im = cimag(z);
double arg1, arg2;
arg1 = atan(im/re);
arg1 *= (180/M_PI);
printf("Modul: %lf\n", mod);
printf("Re{z}: %lf\n", re);
printf("Im{z}: %lf\n", im);
printf("Arg{z}: %lf\n", arg1);
mod = cabs(z_);
re = creal(z_);
im = cimag(z_);
arg2 = atan(im/re);
arg2 *= (180/M_PI);
printf("Modul: %lf\n", mod);
printf("Re{z_}: %lf\n", re);
printf("Im{z_}: %lf\n", im);
printf("Arg{z_}: %lf\n", arg2);
return 0;
}
What may be new here is the way a complex number is declared: after the data type we add the complex keyword. A complex number can also be written in these ways:
double complex z = 4 + 5I;
double complex z = 4 + 5 * I;
Since the functions cabs, carg, cimag and creal return values of type double, we assign their results to double variables. The conj function returns a value of type double complex, so its result must be assigned to a variable of that type. The argument (i.e. the angle between the modulus and the positive direction of the Re axis) is computed with the inverse tangent function atan. The result is in radians, so to convert it to degrees the formula

degrees = radians * 180 / π

was used (in the code: arg1 *= (180/M_PI)). The conjugate of z is handled analogously: to the variable z_ of type double complex we assign the call of conj, which returns the conjugate of z, and then we extract its real and imaginary parts with the same functions as before. An important point is that if the variable z_ were not of type double complex, the compiler would not tell us about it, and the assigned value would not be a complex number but only the real part of the conjugate, in our case 4. So if we then tried to display the imaginary part and the argument of such a number, we would get zero in both cases.
It was said at the beginning that each of these functions has its counterparts for the float and long double types, so the example below explains how those functions are used.
#include <stdio.h>
#include <complex.h>
#include <math.h>
int main (void)
{
double complex z = 5 + 8i;
float arg = cargf(z);
float abs = cabsf(z);
long double complex asin = casinl(z);
arg *= (180/M_PI);
printf("cargf(z): %f\n", arg);
printf("cabsf(z): %f\n", abs);
printf("%.20Lf\n%.20Lfi\n", creall(asin), cimagl(asin));
return 0;
}
As you can see, the only difference is that we append the letter f to the function name for float and l for long double. One more thing to remember: if we want to display the real and imaginary parts of the variable asin, we have to use the long double counterparts (creall, cimagl).
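As another illustrative sketch (not from the book), csqrt makes it easy to handle a quadratic equation with a negative discriminant:

#include <stdio.h>
#include <complex.h>

int main (void)
{
    double a = 1, b = 2, c = 5;                        /* x^2 + 2x + 5 = 0            */
    double complex delta = b*b - 4*a*c;                /* discriminant, negative here */
    double complex x1 = (-b - csqrt(delta)) / (2*a);
    double complex x2 = (-b + csqrt(delta)) / (2*a);

    printf("x1 = %.2f%+.2fi\n", creal(x1), cimag(x1));
    printf("x2 = %.2f%+.2fi\n", creal(x2), cimag(x2));
    return 0;
}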
10.3 ctype.h

The ctype.h header contains declarations of functions that test characters and provide information about them. The table below lists these functions together with a general description.
Function name   Description
isalnum         Checks whether the character is alphanumeric
isalpha         Checks whether the character is a letter of the alphabet
isblank         Checks whether the character is a blank character (space, tab)
iscntrl         Checks whether the character is a control character
isdigit         Checks whether the character is a decimal digit
isgraph         Checks whether the character is a printable character other than space
islower         Checks whether the character is a lowercase letter
isprint         Checks whether the character is a printable character (including space)
ispunct         Checks whether the character is a punctuation character (1)
isspace         Checks whether the character is a white-space character (2)
isupper         Checks whether the character is an uppercase letter
isxdigit        Checks whether the character is a hexadecimal digit (3)

Each of the functions listed above returns a non-zero value if the condition is true and zero if it is false. The prototypes are the same for all of them, so I will show only one, namely:

int isalnum (int c);

There are also two functions that change the case of letters, namely tolower and toupper. The first converts an uppercase character to lowercase (if it was lowercase, nothing happens), the second converts a lowercase character to uppercase (if it was uppercase, nothing happens). Both take the character as their argument. An example of the use of these functions is shown in the listing below.
1 All printable characters for which the isspace and isalnum functions return zero.
2 White-space characters: space, horizontal tab (\t), vertical tab (\v), newline (\n), carriage return (\r), form feed (\f).
3 Characters: the digits 0 to 9 and the letters a to f, regardless of case.
#include <stdio.h>
#include <ctype.h>
int main (void)
{
char msg[200];
int i;
int count[12];
char *info[] = {"alnum: ", "alpha: ", "blank: ", "cntrl: ",
"digit: ", "graph: ", "lower: ", "print: ",
"punct: ", "space: ", "upper: ", "xdigit: "};
for (i = 0; i < sizeof (count) / sizeof (int); i++)
count[i] = 0;
printf("> ");
fgets(msg, sizeof (msg), stdin);
for (i = 0; msg[i] != '\0'; i++)
{
if (isalnum(msg[i])) count[0]++;
if (isalpha(msg[i])) count[1]++;
if (isblank(msg[i])) count[2]++;
if (iscntrl(msg[i])) count[3]++;
if (isdigit(msg[i])) count[4]++;
if (isgraph(msg[i])) count[5]++;
if (islower(msg[i])) count[6]++;
if (isprint(msg[i])) count[7]++;
if (ispunct(msg[i])) count[8]++;
if (isspace(msg[i])) count[9]++;
if (isupper(msg[i])) count[10]++;
if (isxdigit(msg[i])) count[11]++;
}
for (i = 0; i < sizeof (count) / sizeof (int); i++)
printf("%s%d\n", info[i], count[i]);
for (i = 0; msg[i] != '\0'; i++)
msg[i] = tolower(msg[i]);
printf("%s", msg);
for (i = 0; msg[i] != '\0'; i++)
msg[i] = toupper(msg[i]);
printf("%s", msg);
return 0;
}
The for statement's condition is an expression that checks whether the given character differs from the character that ends the array. As we remember from previous chapters, fgets stores the text in the array together with the terminating \0 character, which makes it easy to tell when the read text ends and to stop checking the conditions. The conditions placed in the loop check in turn whether the character from the i-th iteration is alphanumeric, a letter of the alphabet, and so on. If so, the occurrence counter, which is a twelve-element array, is incremented. To save space the increment statement was written on the same line as the condition. At the top of the program an array of pointers to char was created so that, when printing, it is easier to say what each counter is responsible for.
10.4 errno.h

The errno.h header defines a mechanism for reporting errors. There are quite a lot of error codes and I will not describe all of them here; instead I will show how to use this mechanism and how to display all possible errors. The special variable errno is used to hold the error number, i.e. the code of the last error. The description of the error behind that number can be obtained with the strerror function, whose prototype looks as follows.
char *strerror (int no_err);
As you can see, the function returns a pointer to text and takes the error number as its argument, i.e. whatever is stored in errno. The example below shows how to display the list of all errors (the error number together with its description).
#include <stdio.h>
#include <errno.h>
#include <string.h>
int main (void)
{
int i;
char *opis;
for (i = 0; i < 132; i++)
{
errno = i;
opis = (char *)strerror(errno);
printf("Errno: %d - %s\n", errno, opis);
}
return 0;
}
We assign to the pointer opis the pointer returned by strerror. In the next line we print the error number (the errno variable) and the error description (the opis variable). To show how this works in practice, consider the following example.
#include <stdio.h>
#include <errno.h>
#include <string.h>
int main (int argc, char *argv[])
{
char *progname = argv[0];
char *filename = argv[1];
FILE *wp;
if (argc != 2)
{
fprintf(stderr, "Uzycie: %s filename\n", progname);
return -1;
}
if ((wp = fopen(filename, "r")) != NULL)
{
fprintf(stderr, "Plik otwarty pomyslnie\n");
fclose(wp);
}
else fprintf(stderr, "No:%d - %s\n",errno,(char *)strerror(errno));
return 0;
}
If the pointer wp has the value NULL, which (in this context) is equivalent to the file failing to open, the else branch executes and the error number together with its description is printed to the standard error output.
10.5 iso646.h

The iso646.h header contains macros corresponding to the operators described in table 2.3.2 and the bitwise operators from table 2.3.5. They were introduced to make typing these operators easier for people using non-QWERTY keyboard layouts. The header defines 11 macros, listed below. Example 2.3.10 rewritten with the macros from iso646.h is shown in the listing.
Macro     Operator
and       &&
and_eq    &=
bitand    &
bitor     |
compl     ~
not       !
not_eq    !=
or        ||
or_eq     |=
xor       ^
xor_eq    ^=

Table 10.5.1 Macros from the iso646.h header
#include <stdio.h>
#include <iso646.h>

unsigned setbits (unsigned x, int p, int n, unsigned y);

int main (void)
{
	unsigned x = 1023, y = 774;
	int p = 7, n = 5;
	printf("%u\n", setbits(x, p, n, y));    // prints 823
}

unsigned setbits (unsigned x, int p, int n, unsigned y)
{
	y = compl(y);                           // Step 2
	y and_eq compl(compl(0) << n);          // Step 3
	y <<= p + 1 - n;                        // Step 4
	y = compl(y);                           // Step 5
	return x bitand y;                      // Step 6
}
10.6 limits.h

The limits.h header contains macros that expand to specific values, namely the ranges of the data types. In essence, the range of a type depends on the implementation, but with these macros we can check what range a given data type has on our machine. The macros are collected in table 10.6.1; an example of their use follows.
Name         Description
CHAR_BIT     Number of bits in the char type
SCHAR_MIN    Minimum value of signed char
SCHAR_MAX    Maximum value of signed char
UCHAR_MAX    Maximum value of unsigned char
CHAR_MIN     Minimum value of char
CHAR_MAX     Maximum value of char
SHRT_MIN     Minimum value of short int
SHRT_MAX     Maximum value of short int
USHRT_MAX    Maximum value of unsigned short int
INT_MIN      Minimum value of int
INT_MAX      Maximum value of int
UINT_MAX     Maximum value of unsigned int
LONG_MIN     Minimum value of long int
LONG_MAX     Maximum value of long int
ULONG_MAX    Maximum value of unsigned long int
LLONG_MIN    Minimum value of long long int
LLONG_MAX    Maximum value of long long int
ULLONG_MAX   Maximum value of unsigned long long int

Table 10.6.1 Macros from the limits.h header
#include <stdio.h>
#include <limits.h>
int main (void)
{
printf("CHAR_BIT: %d\n", CHAR_BIT);
printf("SCHAR_MIN: %d\n", SCHAR_MIN);
printf("SCHAR_MAX: %d\n", SCHAR_MAX);
printf("UCHAR_MAX: %d\n", UCHAR_MAX);
printf("CHAR_MIN: %d\n", CHAR_MIN);
printf("CHAR_MAX: %d\n", CHAR_MAX);
printf("SHRT_MIN: %d\n", SHRT_MIN);
printf("SHRT_MAX: %d\n", SHRT_MAX);
printf("USHRT_MAX: %d\n", USHRT_MAX);
printf("INT_MIN: %d\n", INT_MIN);
printf("INT_MAX: %d\n", INT_MAX);
printf("UINT_MAX: %u\n", UINT_MAX);
printf("LONG_MIN: %ld\n", LONG_MIN);
printf("LONG_MAX: %ld\n", LONG_MAX);
printf("ULONG_MAX: %lu\n", ULONG_MAX);
printf("LLONG_MIN: %lld\n", LLONG_MIN);
printf("LLONG_MAX: %lld\n", LLONG_MAX);
printf("ULLONG_MAX: %llu\n", ULLONG_MAX);
return 0;
}

10.7 locale.h

The locale.h header contains two functions (setlocale, localeconv) and one data type (lconv) used for locale settings. Depending on the compiler, various locale settings can be selected, but at least two of them are standard and available in all implementations. A locale is selected with the setlocale function, so the standard settings are discussed in the description of that function.
Item         Description
setlocale    Function that sets or retrieves information about the locale settings
localeconv   Function that returns a structure of type lconv
lconv        Data structure holding various information about the locale settings

Table 10.7.1 Functions and data type from the locale.h header

char *setlocale (int category, const char *locale)
Description
The setlocale function is used to set the locale, or to change it entirely or partially, in the currently running program. If we pass NULL as locale, we get the name of the current locale. At startup every program has "C" set as its locale. To use the default settings of the given environment, we must pass "" as the second argument.

Parameters
The first parameter indicates what is to be changed (part of the settings or all of them), and the second to what value. The second parameter can take various values, but the two listed in the table should be available in all implementations.
category

Name          Affects
LC_ALL        The entire locale
LC_COLLATE    The behaviour of the strcoll and strxfrm functions
LC_CTYPE      The behaviour of the functions from ctype.h (except isdigit and isxdigit)
LC_MONETARY   The formatting of monetary values
LC_NUMERIC    The decimal point in formatted input/output
LC_TIME       The behaviour of the strftime function

locale

Locale name   Description
"C"           Minimal "C" locale
""            Default environment settings

Return value
On success setlocale returns a pointer to a string (the name of the settings) identifying the current locale for the given category. On failure the function returns NULL.
struct lconv *localeconv (void)
Description
The localeconv function is used to retrieve information about the locale settings. It returns this information as a pointer to a structure of type lconv. Keep in mind that subsequent calls to the function overwrite the previously returned values.

Parameters
None.

Return value
The function returns a pointer to a structure of type lconv.

The lconv structure holds information about how monetary and non-monetary values should be written. It has the following fields.
Field                      Description
char *decimal_point        Decimal point used in non-monetary numbers
char *thousands_sep        Separator used to delimit groups of digits in non-monetary numbers
char *grouping             Number of digits forming each group in non-monetary numbers:
                           "\3"     groups like: 1,000,000
                           "\1\2\3" groups like: 1,000,00,0
                           "\3\1"   groups like: 1,0,0,0,000
char *int_curr_symbol      International currency symbol, three letters, e.g. USD
char *currency_symbol      Local currency symbol, e.g. $
char *mon_decimal_point    Decimal point used in monetary numbers
char *mon_thousands_sep    Separator used to delimit groups of digits in monetary numbers
char *mon_grouping         Number of digits in each group separated by mon_thousands_sep
char *positive_sign        Sign used for positive (or zero) monetary values
char *negative_sign        Sign used for negative monetary values
char int_frac_digits       Number of digits after the decimal point for monetary values in the international format
char frac_digits           Number of digits after the decimal point for monetary values in the local format
char p_cs_precedes         If this field equals 1, the currency symbol precedes positive (or zero) values; if 0, it follows the numeric value
char n_cs_precedes         If this field equals 1, the currency symbol precedes negative values; if 0, it follows the numeric value
char p_sep_by_space        If this field equals 1, a space appears between the currency symbol and a positive (or zero) value; if 0, there is no space
char n_sep_by_space        If this field equals 1, a space appears between the currency symbol and a negative value; if 0, there is no space
char p_sign_posn           Position of the sign for positive (or zero) monetary values:
                           0  currency symbol and value enclosed in parentheses
                           1  sign before the currency symbol and value
                           2  sign after the currency symbol and value
                           3  sign immediately before the currency symbol
                           4  sign immediately after the currency symbol
char n_sign_posn           Position of the sign for negative monetary values (see above)

Example
#include <stdio.h>
#include <locale.h>
int main (void)
{
struct lconv *lv;
printf("Nazwa ustawien: %s\n", setlocale(LC_ALL, NULL));
lv = localeconv();
printf("int_curr_symbol: %s\n", lv->int_curr_symbol);
printf("Nazwa ustawien: %s\n", setlocale(LC_ALL, ""));
lv = localeconv();
printf("int_curr_symbol: %s\n", lv->int_curr_symbol);
return 0;
}
10.8 math.h

The mathematical functions were presented in section 2.3.6, but not everything from the math.h header was covered there. One thing not mentioned there is the list of mathematical constants, so they are collected in the table below. Note that these M_ constants are not part of the C standard itself; they are a common extension (e.g. in POSIX environments), so on some compilers they may require an additional feature-test macro.
Constant     Description
M_E          Base of the natural logarithm, the number e
M_LOG2E      Base-2 logarithm of e
M_LOG10E     Base-10 logarithm of e
M_LN2        Natural logarithm of 2
M_LN10       Natural logarithm of 10
M_PI         The number pi
M_PI_2       Pi divided by 2
M_PI_4       Pi divided by 4
M_1_PI       Reciprocal of pi, i.e. 1/pi
M_2_PI       Reciprocal of pi multiplied by 2, i.e. 2/pi
M_2_SQRTPI   Reciprocal of the square root of pi multiplied by 2, i.e. 2/sqrt(pi)
M_SQRT2      Square root of 2
M_SQRT1_2    Square root of 1/2

Table 10.8.1 Mathematical constants
#include <stdio.h>
#include <math.h>
int main (void)
{
printf("%f\n", M_E);
printf("%f\n", M_LOG2E);
printf("%f\n", M_LOG10E);
printf("%f\n", M_LN2);
printf("%f\n", M_LN10);
printf("%f\n", M_PI);
printf("%f\n", M_PI_2);
printf("%f\n", M_PI_4);
printf("%f\n", M_1_PI);
printf("%f\n", M_2_PI);
printf("%f\n", M_2_SQRTPI);
printf("%f\n", M_SQRT2);
printf("%f\n", M_SQRT1_2);
return 0;
}
10.9 setjmp.h

The setjmp.h header provides the means to bypass the normal function call and return discipline, i.e. non-local jumps (non-local goto). The header contains the definitions of the functions longjmp and setjmp and of the type jmp_buf. The prototype of longjmp looks as follows.
void longjmp (jmp_buf env, int val)
Description
The function is used to return to the point of the most recent call to setjmp. All the information needed is stored in the variable env. The function refers to the last call to setjmp and makes that setjmp return the value val.

Parameters
The first parameter is the variable holding the information needed to restore the environment saved at the point where setjmp was called. The second parameter is an integer that helps control the jumps.

Return value
None.
int setjmp (jmp_buf env)
Description
The function takes as its argument a variable of type jmp_buf, in which it stores information about the environment that can later be used in a call to longjmp. The function creates a reference point to which the program can later return.

Parameters
A variable of type jmp_buf.

Return value
The first time it is called the function always returns zero. If longjmp is later called with the same env variable, setjmp returns the value passed as the second argument of longjmp.

The jmp_buf type is a data type capable of holding the environment information that can later be used by the functions from the setjmp.h header.

Example
#include <stdio.h>
#include <setjmp.h>
int main (void)
{
jmp_buf env;
int kontrola;
kontrola = setjmp(env);
if (!kontrola)
longjmp (env, 1);
else printf("longjmp sie wykonalo\n");
return 0;
}
Program walkthrough
We create a variable env of type jmp_buf to hold the information about the state of the environment. Next we assign the result of setjmp to the variable kontrola; at this point it returns 0, because this is the first call of the function. The if condition is therefore satisfied, so longjmp executes, which hands control back to the place where setjmp was called and passes the value that setjmp is to return, namely 1. For that reason, when the if statement is evaluated again, the condition is not satisfied and the message in the else branch is printed.
10.10 signal.h

The signal.h header defines two functions responsible for working with signals. A signal is a form of inter-process communication used on Linux (Unix). One such signal is, for example, the one used to terminate a program that loops forever when no break statement was used.
int raise (int sig);
Description
The function sends the signal given as its argument to the currently executing process.

Parameters
The parameter is the signal to be delivered to the program. The possible signals (defined as macros) are listed in the table below.
sig       Full name                     Description
SIGABRT   Signal Abort                  Abnormal program termination (analogous to calling the abort function)
SIGFPE    Signal Floating-Point         Erroneous arithmetic operation, e.g. division by zero or an operation
          Exception                     resulting in overflow (not necessarily of floating-point types)
SIGILL    Signal Illegal Instructions   Invalid data in a function, such as illegal instructions
SIGINT    Signal Interrupt              Interactive attention signal, usually generated by the user
SIGSEGV   Signal Segmentation           Invalid access to storage, e.g. when the program tries to read from or
          Violation                     write to memory that cannot be used for that purpose
SIGTERM   Signal Terminate              Termination request sent to the program

Return value
Zero is returned on success, a non-zero value otherwise.

Example
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
void dowidzenia (void);
int main (void)
{
int i;
printf(": ");
scanf("%d", &i);
if (i)
raise(SIGABRT);
atexit(dowidzenia);
return 0;
}
void dowidzenia (void)
{
printf("Dowidzenia\n");
}
Program walkthrough
If we enter a value other than zero, the program executes raise with an argument that terminates the program abnormally, and as a result the dowidzenia function is never registered and does not execute.
void (*signal(int sig, void (*func)(int)))(int)
Description
The function sets the action to be taken when the program receives the signal sig. If func is SIG_DFL, the default handling of the signal takes place; if it is SIG_IGN, the signal is ignored. If neither of those two values is passed, the parameter is a pointer to a function that will be executed when the signal occurs. If signal is not used at all, signals behave as if SIG_DFL had been passed.

Parameters
sig is one of the signals listed in the table above (see the raise function). As func we can pass SIG_DFL, SIG_IGN, or a pointer to a function we have written ourselves. If func is to refer to our own function, its prototype should look as follows:
void funkcja (int parametr)
Return value
On success signal returns the handler that was previously in effect for that signal (from now on the signal will be handled by default, ignored, or handled by the given function). On failure the function returns SIG_ERR (a macro defined in signal.h) and errno is set to an appropriate value.

Example
#include <stdio.h>
#include <signal.h>
#include <errno.h>
#include <string.h>
void sgtm (int x);
int main (void)
{
if (signal(SIGABRT, SIG_IGN) == SIG_ERR)
fprintf(stderr, "%s\n", (char *)strerror(errno));
raise(SIGABRT);
printf("Funkcja jednak dalej dziala\n");
signal(SIGTERM, sgtm);
raise(SIGTERM);
raise(SIGABRT);
signal(SIGABRT, SIG_DFL);
raise(SIGABRT);
return 0;
}
void sgtm (int x)
{
printf("Pojawil sie sygnal SIGTERM\n");
return;
}
Program walkthrough
We check the condition comparing the result of signal with SIG_ERR; if they are equal, the error information is printed. If the returned value is different, the SIGABRT signal will from now on be ignored every time it is raised. The next call to signal says that when SIGTERM occurs, the sgtm function is to be run, which the following line demonstrates. The next raise of SIGABRT shows that until we change the action, the last action set remains in effect, so the signal is ignored once again. In the next line we change the action back to the default, which is why the last call to raise terminates the program.
10.11 stdarg.h

The stdarg.h header contains macros that make it possible to use a variable argument list, i.e. to call a function with a varying number of arguments. This mechanism was described in detail in section 8.4. The header defines one data type, va_list, and three functions (macros): va_start, va_arg and va_end. Since their usage together with an example can be found in section 8.4, only the prototypes are given here.

void va_start (va_list ap, ilosc_parametrow)

type va_arg (va_list ap, type)

where type stands for a concrete data type; if the arguments are of type int, both occurrences of the word type are replaced with int.

void va_end (va_list ap)
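As a quick reminder of how the three macros fit together, here is a minimal sketch; the helper name suma and its count-first calling convention are illustrative and are not part of the header.

#include <stdio.h>
#include <stdarg.h>

/* Illustrative helper: sums 'count' int arguments passed after 'count'. */
int suma (int count, ...)
{
	va_list ap;
	int i, wynik = 0;

	va_start (ap, count);           /* initialize the list after the last named parameter */
	for (i = 0; i < count; i++)
		wynik += va_arg (ap, int);  /* fetch the next argument as an int */
	va_end (ap);                    /* clean up */
	return wynik;
}

int main (void)
{
	printf("%d\n", suma(3, 10, 20, 30));   /* prints 60 */
	return 0;
}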
10.12 stdbool.h

The stdbool.h header contains four macros that let us control a program in a more explicit way (in case the values 1 and 0 were not obvious enough in conditions). The macros are:

bool                            data type
true                            corresponds to the value 1
false                           corresponds to the value 0
__bool_true_false_are_defined   corresponds to the value 1

If there is any doubt about how they are used, the example below should clear it up.
#include <stdio.h>
#include <stdbool.h>
int main (void)
{
bool condition = true;
if (condition)
{
printf("To jest prawda\n");
condition = false;
}
if (!condition)
printf("To jest falsz\n");
return 0;
}
10.13 stdio.h

The stdio.h header contains the input and output functions. Some of the functions belonging to this header were described in chapter 8. Here all of the functions are listed, but only those not covered in chapter 8 are described. Since these functions are usually used together with others, the usage example may (but does not have to) be limited to one per group.
Function    Description

File operations
fopen       Open a file
freopen     Open a file with an associated stream
fflush      Flush buffered data
fclose      Close a file
remove      Delete a file
rename      Rename a file
tmpfile     Create a temporary file
tmpnam      Create a unique file name
setvbuf     Control data buffering
setbuf      Control data buffering

Formatted output
fprintf     Write formatted data to a stream
printf      Write formatted data to stdout
sprintf     Write formatted data to a character array
vprintf
vfprintf
vsprintf    Counterparts of the printf functions, except that they take an arg initialized with va_start

Formatted input
fscanf      Read formatted data from a stream
scanf       Read formatted data from stdin
sscanf      Read formatted data from a character array

Character input and output
fgetc       Read a character from a stream
fgets       Read a larger number of characters from a stream
fputc       Write a character to a stream
fputs       Write a larger number of characters to a stream
getc        Read a character from a stream
getchar     Read a character from stdin
gets        Read a line from stdin and store it in an array
putc        Write a character to a stream
putchar     Write a character to stdout
puts        Write text from an array to stdout

File positioning
fseek       Set the position in a stream
ftell       Current position in a stream
rewind      Move the position back to the beginning of the file
fgetpos     Store the file position
fsetpos     Set the file position using the value obtained from fgetpos

Error handling
clearerr    Clear the error and end-of-file indicators for a stream
feof        Check the end-of-file indicator
ferror      Check the error indicator for a stream
perror      Print an error message
10.13.1 File operations

FILE *fopen (const char *filename, const char *mode)

Description
The function is described in detail in section 8.5.

FILE *freopen (const char *filename, const char *mode, FILE *stream)

Description
The freopen function first tries to close any file associated with the stream stream. Then, once that file has been closed (or if there was none), it opens the file named filename in the given mode and associates it with the stream stream in the same way fopen does.

Parameters
The first parameter is the file name, the second is the open mode (see the fopen function), the third is a stream (FILE *, or one of the standard streams such as stdin, stdout).

Return value
On success the function returns a pointer to the file; if the operation failed, NULL is returned.

Example
#include <stdio.h>
int main (void)
{
FILE *wp;
char *napis = "Napis do pliku 1";
char *info = "Napis do pliku 2";
wp = fopen ("my_first_file.txt", "w");
fprintf(wp, "%s\n", napis);
freopen ("my_second_file.txt", "w", wp);
fprintf(wp, "%s\n", info);
fclose(wp);
return 0;
}
Program walkthrough
First we assign to the pointer wp the result of opening the file my_first_file.txt in write mode and print to that file the characters pointed to by napis. Then, with freopen, we open the second file, assigning it to the same pointer. freopen closes the connection with the first file before opening the second one.
int fflush (FILE *stream)
Description
Colloquially speaking, this function refreshes the contents of the file. The point is that if we open a file for writing, the data written to it will not actually be stored in the file until the file is closed properly or fflush is called. If the argument is NULL, all open files are flushed, and even though the files have not been closed their contents are written out.

Parameters
The parameter is a pointer to a file, or NULL.

Return value
Zero is returned on success. If a problem occurred, EOF is returned.

Example
Examples of using this function appear with several of the following functions. Here, however, I will show what happens when the argument is NULL.
#include <stdio.h>
int main (void)
{
FILE *wp1, *wp2, *wp3;
wp1 = fopen("plik1.txt", "w+");
wp2 = fopen("plik2.txt", "w+");
wp3 = fopen("plik3.txt", "w+");
fprintf(wp1, "Ciag znakow do plik1.txt\n");
fprintf(wp2, "Ciag znakow do plik2.txt\n");
fprintf(wp3, "Ciag znakow do plik3.txt\n");
getchar();
fflush(NULL);
getchar();
fclose(wp1);
fclose(wp2);
fclose(wp3);
return 0;
}
Program walkthrough
We open three files for writing and reading, then print three strings to the three different files. When the program asks us for a character, we open a new terminal and check the contents of the three files with the command:

$ cat *.txt

The files are empty, because buffering was used (it is described with the setbuf and setvbuf functions; after reading about them, the mechanism used in this listing will be much easier to understand). The moment we flush the buffer, or rather all the buffers (since three files are open), the contents are transferred to the files. At that point the program asks for another character; if you display the contents of the files again before pressing Enter, you will see the text that was written to them.
int fclose (FILE *stream)
Description
The function is described in section 8.5.

int remove (const char *filename)

Description
The function deletes the file named filename. If the file is located in a directory other than the current one, we can give the path to the file.

Parameters
The parameter is the file name if the file is in the current directory, or the path to the file if it is elsewhere.

Return value
The function returns zero if the file was deleted successfully. Otherwise it returns a non-zero value and sets errno to an appropriate value.

Example
#include <stdio.h>
#include <errno.h>
#include <string.h>
int main (int argc, char *argv[])
{
	if (argc != 2)
	{
		fprintf(stderr, "Uzycie: %s nazwa_pliku\n", argv[0]);
		return -1;
	}
	if (!remove(argv[1]))
		fprintf(stderr, "Usuniecie pliku: %s powiodlo sie\n", argv[1]);
	else fprintf(stderr, "%s : %s\n", argv[1], (char *)strerror(errno));
	return 0;
}
int rename (const char *oldname, const char *newname)
Description
The rename function changes the name of the file or directory given as oldname to the new name given as newname.

Parameters
The function takes two arguments. The first is the current name of the file or directory, the second is the new name.

Return value
The function returns zero if the name was changed successfully, or a non-zero value if the operation failed. If the operation fails, errno is set to an appropriate value.

Example
#include <stdio.h>
#include <errno.h>
#include <string.h>
int main (int argc, char *argv[])
{
if (argc != 3)
{
fprintf(stderr, "Uzycie: %s oldName newName\n", argv[0]);
return -1;
}
if (!rename(argv[1], argv[2]))
fprintf(stderr, "Plik o nazwie: %s teraz nazywa sie: %s\n",
argv[1], argv[2]);
else
{
fprintf(stderr, "%s : %s\n", argv[1], (char *)strerror(errno));
return -1;
}
return 0;
}
FILE *tmpfile (void)
Description
The function creates a temporary binary file, opened in wb+ mode, and guarantees that its name is different from the name of any existing file. The temporary file is automatically deleted when the stream (file) is closed or when the program terminates normally.

Parameters
None.

Return value
The function returns a pointer to the file on success and NULL if the file could not be created.

Example
#include <stdio.h>
int main (void)
{
FILE *wp;
int c;
wp = tmpfile();
fprintf(wp, "Ala ma kota\n");
rewind(wp);
while ((c = fgetc(wp)) != EOF)
putchar(c);
return 0;
}
Program walkthrough
With tmpfile we create a temporary file and print a string to it. Then, with rewind, we move the cursor to the beginning of the file and print all characters until EOF. When the program ends normally, the file is closed and deleted.
char *tmpnam (char s[L_tmpnam])
Description
The function creates a random name different from the name of any existing file. This string can be used to create a temporary file with the assurance that we will not overwrite any other one. If we create a file with fopen using the name obtained from tmpnam, remember that the file is not deleted when the program ends; it can be removed with remove.

Parameters
The parameter is a character array of L_tmpnam elements; L_tmpnam is a macro defined in the stdio.h header. The parameter can also be NULL.

Return value
If the parameter is the array s, that array is returned. If the parameter is NULL, the result should be assigned to a pointer to the internal buffer (shown in the example). If the function fails to create a file name, NULL is returned.

Example
#include <stdio.h>
int main (void)
{
FILE *wp1, *wp2;
char bufor[L_tmpnam];
char *buf;
tmpnam(bufor);
printf("Wygenerowana nazwa to: %s\n", bufor);
buf = tmpnam(NULL);
printf("Wygenerowana nazwa to: %s\n", buf);
wp1 = fopen(bufor, "w");
wp2 = fopen(buf, "w");
fprintf(wp1, "Pewien ciag znakow 1\n");
fprintf(wp2, "Pewien ciag znakow 2\n");
fflush(wp1);
fflush(wp2);
getchar();
remove(bufor);
remove(buf);
return 0;
}
Program walkthrough
At the beginning of this program we define a character array with L_tmpnam elements and a pointer to char. Then we call tmpnam with bufor as the argument; the generated name is stored in that array. If we assign a call with a NULL argument to a pointer, the pointer will point to the generated name. With fopen we open (or rather create) files with those names and print strings into them. getchar was added so that we can verify that the files really were created and that some text was written to them. So open another terminal and type the following command:

$ ls /tmp

There should be some files in this directory; what interests us are the two files we have just created. On my system they start with the letters file followed by random characters. To display the contents of these files type:

$ cat /tmp/file*

This is the best place to demonstrate fflush. If we had not called it, we would not see the contents of the files, or rather we would see the files but without the text we wanted to put there. You can think of fflush as refreshing the contents of the file. After pressing Enter the program ends and the files are deleted.
void setbuf (FILE *stream, char *bufor)
Description
Buffering was touched upon to some extent when discussing the while loop. Here I will say more about it and show how a buffer is used and what its properties are. The setbuf function enables or disables full buffering for the given stream. The function should be called once, after the file has been opened and before any input or output operation. The array that will serve as the buffer should have at least BUFSIZ elements (a constant defined in the stdio.h header). With a buffered stream, the data are written to the file only after it has been closed or after the buffer-flushing function (fflush) has been called. When the program ends, the buffer is flushed automatically. If we pass NULL as the buffer, buffering for the stream stream is disabled; in that case writing to the file happens immediately. All opened files are buffered by default. With this function we can define our own buffer.

Parameters
The first parameter is a pointer to a file, the second is a character array with no fewer than BUFSIZ elements.

Return value
None.

Example 1
#include <stdio.h>
int main (void)
{
FILE *wp;
char buff[BUFSIZ];
wp = fopen ("moj_plik.txt", "w+");
setbuf(wp, buff);
fprintf(wp, "Pewien ciag znakow 1\n");
fprintf(wp, "Pewien ciag znakow 2\n");
fprintf(wp, "Pewien ciag znakow 3\n");
fprintf(wp, "Pewien ciag znakow 4\n");
printf("%s", buff);
//fflush(wp);
getchar();
fclose(wp);
return 0;
}
Program walkthrough 1
We define the buffer buff of size BUFSIZ, open the file in w+ mode and set buffering for the stream wp. We print some strings to the file; because we are using buffered input/output, the data are first copied into the buff array, which is demonstrated by printing the contents of the array. getchar was used to check whether the data really have not been written to the file yet. While the program waits for a character we can check the contents of moj_plik.txt in another terminal. Until the file is closed, or the function flushing the file's contents is called, nothing appears in the file. Remove the comment from the fflush call to see that the contents show up in the file even while the program is still waiting for our input.

Example 2
#include <stdio.h>
int main (void)
{
FILE *wp;
wp = fopen ("moj_plik_2.txt", "w+");
setbuf(wp, NULL);
fprintf(wp, "Pewien ciag znakow 1\n");
fprintf(wp, "Pewien ciag znakow 2\n");
fprintf(wp, "Pewien ciag znakow 3\n");
getchar();
fclose(wp);
return 0;
}
Program walkthrough 2
In this program we disable buffering for the stream wp. The string is written to the file before the file is closed. This can be checked by opening a new terminal window and looking at the contents of moj_plik_2.txt.
int setvbuf (FILE *stream, char *bufor, int mode, size_t size)
Description
This function has a lot in common with the setbuf function discussed above, but differs in a few details. It sets the buffering mode to full, line, or no buffering. Like setbuf, setvbuf should be called once, after the connection with the file has been established and before any input/output operations. The size of the buffer is given in the last parameter as a number of bytes. If we pass NULL as the buffer, the system itself dynamically allocates memory (size bytes) and uses it as the buffer for the stream stream. The third parameter selects the buffering mode; the table of its possible values is shown below. Full buffering and no buffering were described with setbuf; line buffering means, as the name suggests, that the data are written to the file once a newline character is encountered.

Parameters
The first parameter is a pointer to a file (stream), the second is an array that will serve as the buffer, of size no smaller than size. The third parameter is the buffering mode (see the table), the fourth is the size of the buffer in bytes. If the second parameter is NULL, the system automatically allocates size bytes of memory to serve as the buffer.

Mode     Description
_IOFBF   Full buffering
_IOLBF   Line buffering
_IONBF   No buffering

Return value
If the buffer is successfully associated with the file, zero is returned. Otherwise a non-zero value is returned.

Example
#include <stdio.h>
int main (void)
{
int size = 1024;
FILE *wp;
char buff[size];
wp = fopen("file1.txt", "w+");
setvbuf(wp, buff, _IOLBF, size);
fprintf(wp, "Linia bez znaku konca linii. ");
getchar();
fprintf(wp, "Druga linia bez znaku konca linii. ");
getchar();
fprintf(wp, "Linia ze znakiem konca linii\n");
getchar();
fprintf(wp, "Linia nr. 1\n");
getchar();
fprintf(wp, "Linia nr. 2\n");
getchar();
fprintf(wp, "Linia nr. 3\n");
fclose(wp);
return 0;
}
Program walkthrough
First I will describe what happens with the buffering mode that was set (line buffering), and then with the other two modes, so it is worth changing the mode and checking how the program behaves. Let us do something that proves it works as intended: at every getchar call check the contents of file1.txt in another terminal and then press Enter. Since we are dealing with line buffering, which, as mentioned, requires a newline character for a string to be written to the file, we can expect to have to press Enter twice before our content appears in the file, because only after the second getchar is a string ending with a newline written out. If we changed the mode to _IONBF, every string, whether or not it ends with a newline, would be written to the file immediately. With _IOFBF the contents of the file would be filled in only after the file is closed or fflush is called.
10.13.2 Formatted output

int fprintf (FILE *stream, const char *format, ...)

Description
The fprintf function is very similar to printf, which was described in detail in section 8.2. The difference is that it can also send the text to a stream (stream) other than stdout.

Parameters
As its first parameter the function takes a pointer to a stream. If the stream is the standard output, i.e. the call looks like fprintf(stdout, ...);, it is equivalent to calling printf.

Return value
On success the function returns the number of characters written. On failure a negative value is returned.

Example
One per group, at the end of this subsection.
int printf (const char *format, ...)
Description
The function is described in detail in section 8.2.

int sprintf (char *s, const char *format, ...)

Description
The function is described in detail in section 8.2.

int vprintf (const char *format, va_list arg)

Description
The function is essentially the same as printf, except that instead of the parameters listed in the call it takes an argument list arg (variable-length argument lists are described in section 8.4). This function does not call the va_end macro automatically.

Parameters
The first argument is exactly the same as for its counterpart without the letter v; the second argument is the parameter list.

Return value
On success the function returns the number of characters written. On failure a negative value is returned.

Example
One per group, at the end of this subsection.

int vfprintf (FILE *stream, const char *format, va_list arg)

Description
The function differs from its counterpart without the leading letter v only in the last argument; the first two parameters are identical. This function does not call the va_end macro automatically.

Parameters
The last argument is a variable argument list.

Return value
On success the function returns the number of characters written. On failure a negative value is returned.

Example
One per group, at the end of this subsection.

int vsprintf (char *s, const char *format, va_list arg)

Description
The function differs from its counterpart without the leading letter v only in the last argument; the first two parameters are identical. This function does not call the va_end macro automatically.

Parameters
The last argument is a variable argument list.

Return value
On success the function returns the number of characters written. On failure a negative value is returned.

Example
The example covers all of the functions listed above.
#include <stdio.h>
#include <stdarg.h>
void newPrint (const char *format, ...);
void newPrint_s (char *tab, const char *format, ...);
void newPrint_v (FILE *stream, const char *format, ...);
int main (void)
{
char tab[200];
int ilosc;
ilosc = sprintf(tab, "Lorem ipsum dolor sit amet");
fprintf(stdout, "%s\n", tab);
printf("Ilosc wyswietlonych znakow: %d\n", ilosc);
newPrint("Ala ma kota, a kot ma %d lat\n", 5);
newPrint_s (tab, "Ja mam %d lat. Ty masz %d lat\n", 20, 20);
fprintf(stdout, "%s", tab);
newPrint_v (stdout, "Ja mam %d lat. Ty masz %d lat\n", 20, 20);
return 0;
}
void newPrint (const char *format, ...)
{
va_list argument;
va_start (argument, format);
vprintf (format, argument);
va_end (argument);
}
void newPrint_s (char *tab, const char *format, ...)
{
va_list argument;
va_start (argument, format);
vsprintf (tab, format, argument);
va_end (argument);
}
void newPrint_v (FILE *stream, const char *format, ...)
{
va_list argument;
va_start (argument, format);
vfprintf (stream, format, argument);
va_end (argument);
}
10.13.3 Formatted input

int fscanf (FILE *stream, const char *format, ...)

Description
The fscanf function works on the same principle as the functions described in section 8.3, except that its first argument is a pointer to a file, or one of the standard streams.

Parameters
The first parameter is a pointer to a file or one of the standard streams; the remaining parameters are the same as for scanf.

Return value
On success the function returns the number of items read. If an input error occurs, the function returns EOF.

Example
#include <stdio.h>
int main (int argc, char *argv[])
{
FILE *wp;
int tab[4];
int i, k;
if (argc != 2)
{
fprintf(stderr, "Uzycie: %s nazwa_pliku\n", argv[0]);
return -1;
}
if ((wp = fopen(argv[1], "r")) != NULL)
{
k = fscanf(wp, "%d-%d-%d-%d", &tab[0], &tab[1], &tab[2],
&tab[3]);
for (i = 0; i < k; i++)
printf("tab[%d] = %d\n", i, tab[i]);
fclose (wp);
}
else
{
fprintf(stderr, "Pliku: %s nie udalo sie otworzyc\n", argv[1]);
return -1;
}
return 0;
}
A small explanation regarding the listing above: the program has to be run with an argument, i.e. with the name of a file whose contents have the following form:

X-Y-Z-K

where X, Y, Z and K are arbitrary integers.
int scanf (const char *format, ...)
Description
The function is described in section 8.3.

int sscanf (char *s, const char *format, ...)

Description
The function is described in section 8.3.

10.13.4 Character input and output

int fgetc (FILE *stream)

Description
The function reads one character from a stream. The stream can be stdin or, for example, a file.

Parameters
The parameter is a pointer to a file, or stdin.

Return value
The function returns the character read from the stream, or EOF on end-of-file or error.

Example
#include <stdio.h>
int main (void)
{
int i = 0;
do
{
if (i)
while (fgetc(stdin) != '\n')
;
printf("Zakonczyc? [y/n] ");
i = 1;
} while (fgetc(stdin) != 'y');
return 0;
}
char *fgets (char *s, int n, FILE *stream)
Description
The function is described in section 8.6.

int fputc (int c, FILE *stream)

Description
The function writes a character to the specified stream.

Parameters
The first parameter is the character, the second is the stream. The stream can be stdout, stderr or, for example, a file.

Return value
On success the character passed as the first argument is written to the specified stream and returned. If an error occurred, EOF is returned.

Example
#include <stdio.h>
int main (void)
{
	char info[] = "Bardzo wazna informacja\n";
	int i;
	/* -1: do not write the terminating '\0' to the stream */
	for (i = 0; i < sizeof (info) / sizeof (info[0]) - 1; i++)
		fputc(info[i], stdout);
	return 0;
}
int fputs (char *s, FILE *stream)
Description
The function is described in section 8.6.

int getc (FILE *stream)

Description
The function works analogously to fgetc.

int getchar (void)

Description
The function is described in section 8.1.

char *gets (char *s)

Description
The gets function reads characters from the user and stores them in the array s. It is similar to fgets, but differs in a few important details. First of all, the stream the characters come from cannot be specified, so they can only be entered from the standard input (stdin), and there is no way to tell the function how many characters may be stored, which can potentially cause errors if we enter more characters than the array has room for. The function reads characters until it encounters a newline, which it does not store in the array. You should rather use fgets, which guarantees that only as many characters as needed are copied, so no out-of-bounds memory gets written.

Parameters
The parameter is the character array in which the entered text is stored.

Return value
On success the characters are stored in the array and the function returns a pointer to the array s. If an error occurs, the function returns NULL.

Example
#include <stdio.h>
int main (void)
{
char s[40];
printf("> ");
gets(s);
printf("%s\n", s);
return 0;
}
int putc (int c, FILE *stream)
Description
The function works analogously to fputc.

int putchar (int c)

Description
The function is described in section 8.1.

int puts (char *s)

Description
The function prints to the standard output (stdout) the characters pointed to by s. puts stops printing when it reaches the terminating character ('\0'). A newline character is appended to the printed text.

Parameters
The parameter is a pointer to the text s.

Return value
On success the function returns a non-negative value (the standard does not require it to be the number of characters printed); on error EOF is returned.

Example
#include <stdio.h>
int main (void)
{
int k;
char *w = "Aloha";
k = puts (w);
printf("%d\n", k);
return 0;
}
10.13.5 File positioning

int fseek (FILE *stream, long offset, int origin)

Description
The function moves the cursor (position indicator) associated with the stream to another specified place. It does so by adding a given value (offset) to the reference position (origin).

Parameters
The first parameter is a pointer to the stream, most often a pointer to a file. The second parameter is offset, the number of bytes added to the reference position origin, which is the third parameter. origin can take the following values:

SEEK_SET   Beginning of the file
SEEK_CUR   Current position in the file
SEEK_END   End of the file

Return value
On success the function returns zero. If errors occurred, the function returns a non-zero value.

Example
One per group, at the end of this subsection.
long int ftell (FILE *stream)
Description
The function returns the current position of the cursor associated with the given stream, usually a file.

Parameters
The parameter is a pointer to the stream, most often a file.

Return value
On success the function returns the current cursor position. On failure the value -1L is returned.

Example
One per group, at the end of this subsection.
void rewind (FILE *stream)
Description
The function moves the cursor back to the beginning of the file.

Parameters
The parameter is a pointer to the file.

Return value
None.

Example
One per group, at the end of this subsection.
int fgetpos (FILE *stream, fpos_t *ptr)
Description
The function retrieves information about the cursor position and stores it in a special variable of type fpos_t. This variable can later be used with the fsetpos function.

Parameters
The first parameter is a pointer to the file, the second is the object in which the information about the cursor position in the file is stored.

Return value
On success the function returns zero; if errors occurred, a non-zero value is returned.
int fsetpos (FILE *stream, const fpos_t *ptr)
Description
The function sets the cursor position in the file to the value given as the second parameter, i.e. ptr. The value of ptr is obtained with the fgetpos function.

Parameters
The first parameter is a pointer to the file, the second is a pointer to an object of type fpos_t obtained with fgetpos.

Return value
On success the function returns zero; on failure a non-zero value is returned and errno is set to a specific value.

Example
#include <stdio.h>
int main (int argc, char *argv[])
{
FILE *wp;
long int rozmiar;
fpos_t pp;
if (argc != 2)
{
fprintf(stderr, "Uzycie: %s filename\n", argv[0]);
return -1;
}
if ((wp = fopen(argv[1], "w")) != NULL)
{
fprintf(wp, "Ala ma kota, a kot ma %d lat\n", 5);
fseek (wp, 22L, SEEK_SET);
fprintf(wp, "%d", 9);
fseek (wp, 0L, SEEK_END);
rozmiar = ftell (wp);
printf("Rozmiar pliku: %ld bytes\n", rozmiar);
rewind (wp);
rozmiar = ftell (wp);
printf("Kursor znajduje sie na poczatku, dlatego rozmiar pliku rowna sie: %ld bytes\n", rozmiar);
fseek (wp, 0, SEEK_END);
fgetpos(wp, &pp);
printf("Koniec pliku znajduje sie na pozycji: %d\n", pp);
fsetpos (wp, &pp);
fprintf(wp, "Tekst zostal dodany pozniej, wiec teraz koniec pliku jest gdzie indziej\n");
fseek (wp, 0, SEEK_END);
fgetpos(wp, &pp);
printf("Nowy rozmiar pliku to: %d bytes\n", pp);
fclose(wp);
}
return 0;
}
Program walkthrough
After opening the file we write into it a sentence in which the number of years equals 5. Then we move the cursor (i.e. the position from which we start writing) to the 22nd place counting from the beginning of the file (origin = SEEK_SET) and write the digit 9 at the next position. Therefore, when the contents of the file are displayed, the cat's age will be 9, not 5. Next we move the cursor to the end of the file and read the current cursor position (the number of characters) with ftell. Using rewind we move the cursor back to the beginning, so a call to ftell gives zero. In the next line we move the cursor to the end and store that position with fgetpos in the variable pp. We then set the position in the file to the value we have just obtained and write another string. The previous calls to the size-checking functions were made for the file without the new string, so the size really does differ from the previous one, which the last function calls prove.
10.13.6 Error handling

void clearerr (FILE *stream)

Description
When a function operating on a stream (file) fails because end-of-file was detected or because of some other error, the error indicator may be set. This function clears that information.

Parameters
The parameter is a pointer to the file.

Return value
None.

Example
#include <stdio.h>
int main (void)
{
FILE *wp;
int znak;
if ((wp = fopen("plik1", "r")) == NULL)
{
perror("Blad otwarcia");
return -1;
}
fputc('A', wp);
if (ferror(wp))
{
perror("Blad zapisu");
clearerr(wp);
}
znak = fgetc(wp);
if (!ferror(wp))
printf("Pobrano znak: %c\n", znak);
return 0;
}
Program walkthrough
Since ferror returns a non-zero value if an error occurred during the last operation on the file, the error message is displayed. If clearerr were not called, the next call to ferror would again return a non-zero value, so the information about the character read would not be displayed. clearerr clears this state, so after the character is read there is no error and the message is displayed.
int feof (FILE *stream)
Description
The feof function checks whether the end-of-file indicator associated with the given stream is set, i.e. whether End-Of-File has been reached.

Parameters
The parameter is a pointer to a file, or the stdin stream.

Return value
The function returns a non-zero value if the end-of-file indicator is set. Otherwise it returns zero.
int ferror (FILE *stream)
Description
The function checks whether the last operation performed on the file failed (whether the error indicator is set). The error indicator is set at the moment the last operation on the file fails.

Parameters
The parameter is a pointer to the file.

Return value
If the previous operation on the file failed, the function returns a non-zero value. If everything went as intended, zero is returned.
void perror (const char *s)
Description
The function is used to display information about an error. It interprets the value of errno and prints the message associated with that value. The function prints a message of the following form:

Nie mozna otworzyc pliku: No such file or directory

The first part (up to the colon) is the string pointed to by s, the second part is the text associated with the current value of errno.

Parameters
The function takes a pointer to text.

Return value
None.

Example
#include <stdio.h>
#include <stdlib.h>
void wpisz (char *filename);
void odczytaj (char *filename);
int main (int argc, char *argv[])
{
if (argc != 2)
{
fprintf(stderr, "Uzycie: %s filename\n", argv[0]);
return -1;
}
//wpisz (argv[1]);
odczytaj (argv[1]);
return 0;
}
void wpisz (char *filename)
{
FILE *wp;
if ((wp = fopen(filename, "r")) != NULL)
{
fputc('A', wp);
if (ferror(wp))
perror("Blad zapisu");
fclose(wp);
}
else
{
perror ("Nie mozna otworzyc pliku");
exit (-1);
}
}
void odczytaj (char *filename)
{
FILE *wp;
int znak;
if ((wp = fopen(filename, "r")) != NULL)
{
znak = fgetc(wp);
while (znak != EOF)
{
putchar(znak);
znak = fgetc(wp);
}
if (feof(wp))
fprintf(stderr, "-- KONIEC PLIKU --\n");
}
else
{
/* fopen failed, so wp is NULL and there is nothing to close here */
perror ("Nie mozna otworzyc pliku");
exit (-1);
}
}
Program walkthrough
In the wpisz function the open mode was deliberately set to r, to show what error message is displayed when we try to write anything to a file opened that way. The odczytaj function is supposed to display on the screen all the characters from the given file and, at the end, inform the user that the end of the file has been reached. We assign to the variable znak the result of the function that reads a character from the file and then check whether that character differs from EOF (end of file); if so, we print it and repeat the assignment, so the loop runs until EOF is reached. Then we check whether feof returned a non-zero value (it will, if the indicator is set to end-of-file) and, if so, we print the end-of-file message to stderr.
10.14 stdlib.h

The stdlib.h header contains various useful functions, such as communication with the environment (the operating system), dynamic memory allocation, number conversion, sorting, and so on. The table below lists these functions; below the table each is described, together with examples.

Name      Description

Conversion of strings to numbers
atof      Convert characters to a number of type double
strtod    Convert characters to a number of type double
atoi      Convert characters to a number of type int
atol      Convert characters to a number of type long int
strtol    Convert characters to a number of type long int
strtoul   Convert characters to a number of type unsigned long int

Pseudo-random numbers
rand      Generate pseudo-random numbers
srand     Initialize the pseudo-random number generator

Dynamic memory allocation
calloc    Allocate memory for an array
malloc    Allocate memory (a block)
realloc   Reallocate memory
free      Release allocated memory

Functions interacting with the execution environment
abort     Abort the process
atexit    Run a function when the program terminates
exit      Terminate the calling process
getenv    Retrieve the value of an environment variable
system    Execute a system command

Searching and sorting
bsearch   Search an array
qsort     Sort the elements of an array

Integer arithmetic
abs       Absolute value
labs      Absolute value
div       Integer division
ldiv      Integer division
10.14.1 Conversion of strings to numbers

double atof (const char *str)

Description
The function converts text to a number of type double. First of all, leading whitespace characters are skipped; then, upon reaching the first non-whitespace character, the function checks whether it is a valid character for conversion, and if so moves on to the next one. When the first invalid character is encountered, the function stops converting and the remaining characters are ignored.
Valid characters are:
an optional plus (+) or minus (-) sign
digits, including the decimal point
an optional exponent: the letter e or E with a sign and digits (e.g. 1E+2)
If the first character in the string is not a valid character, no conversion is performed.

Parameters
The parameter is a string of characters; it can be a character array or a pointer to any string.

Return value
On success the function returns a real number of type double. If no conversion was performed, the value 0.0 is returned.

Example
#include <stdio.h>
#include <stdlib.h>
int main (int argc, char *argv[])
{
	double *w;
	int i;
	if (argc < 2)
	{
		fprintf(stderr, "Uzycie: %s parametr(y)\n", argv[0]);
		return -1;
	}
	w = (double *) calloc(argc-1, sizeof (double));
	for (i = 1; i <= argc - 1; i++)
		w[i-1] = atof(argv[i]);
	for (i = 0; i < argc - 1; i++)
		printf("w[%d] = %lf\n", i, w[i]);
	free(w);
	return 0;
}
Program walkthrough
To be able to enter an unlimited number of arguments (numbers to convert) we have to use dynamically allocated memory (described in chapter 9). We allocate space for (argc-1) * sizeof (double) bytes, because the first argument is the program name. Since the allocated memory is indexed from zero, the index into w has to be lowered by one.
double strtod (const char *str, char **ptr)
Description
The strtod function is very similar to atof but differs in one thing, namely the second parameter. Just like atof, strtod checks the characters one by one; when it hits an invalid character it stops and converts the characters read so far into a number of type double, and if ptr is not NULL, ptr is set to point at the rest of the characters. The valid characters are the same as for the function above.

Parameters
The first parameter is a character array or a pointer to a string. The second parameter is a pointer to a pointer to a string.

Return value
On success the function returns the converted number as a value of type double. If there were no characters to convert, the function returns 0.0.

Example
#include <stdio.h>
#include <stdlib.h>
int main (void)
{
char *str = "10.3 1000.43232";
char *ptr;
double dl;
dl = strtod(str, &ptr);
printf("%f %s\n", dl, ptr);
dl = strtod(ptr, NULL);
printf("%f\n", dl);
return 0;
}
Omwienie programu Wskanik str wskazuje na dwie liczby oddzielone spacj (tak naprawd jest to cig znakw, pki co,
nie s to liczby typu double). Do zmiennej dl przypisujemy wywoanie strtod z argumentami str, oraz
&ptr. Ampersand jest istotny poniewa prototyp funkcji informuje nas, e funkcja przyjmuje wskanik do wskanika, czyli adres wskanika. Po wykonaniu tej funkcji w zmiennej dl posiadamy ju pierwsz cz (cz do pierwszego nie poprawnego znaku, tutaj spacja) cigu znakw, a wskanik ptr wskazuje na pozosta cz. Kolejne wywoanie przypisuje do dl skonwertowany pozostay cig znakw. Jako drugi argument podajemy NULL.
int atoi (const char *str)

Description
The function is very similar to the ones above, except that it converts characters to an integer. Just as with atof, the function first discards all leading whitespace characters and then checks whether the first non-whitespace character is valid. If it is, the following characters are checked until the first invalid character is encountered. If the first non-whitespace character is not valid, or the string contains only whitespace, no conversion is performed. Valid characters are digits and an optional plus or minus sign preceding them.

Parameters
The parameter is a character array or a pointer to a string.

Return value
On success the function returns the converted number as an int. If there was no valid character in the array, zero is returned. If the number is out of range, the constant INT_MAX or INT_MIN is returned.

Example
#include <stdio.h>
#include <stdlib.h>
int main (void)
{
	char *wskTab[] = {"   100", " 1E5", "-4.3", " 2.3", "   ",
	                  "98293489238492384923"};
	int rozm = sizeof (wskTab) / sizeof (wskTab[0]);
	int i;
	int tab[rozm];
	for (i = 0; i < rozm; i++)
		tab[i] = atoi(wskTab[i]);
	for (i = 0; i < rozm; i++)
		printf("%d\n", tab[i]);
	return 0;
}
Program walkthrough
To show different examples without creating many character arrays, an array of pointers was created (described in section 5.10). The results are predictable, but the last item (a very large number) may turn out to be interesting: the value INT_MAX, described in section 10.6, is returned; it would be analogous for a very large negative number.
long int atol (const char *str)
Description
The function is analogous to atoi, except that it returns the value as a number of type long int. If the range is exceeded, the value LONG_MAX or LONG_MIN, described in section 10.6, is returned.

Parameters
The parameter is a character array or a pointer to a string.

Return value
On success the function returns the number as a long int. If there were no valid characters, zero is returned.

Example
#include <stdio.h>
#include <stdlib.h>
int main (void)
{
char *tab = "9832328";
long lg;
lg = atol(tab);
printf("%ld\n", lg);
return 0;
}
long int strtol (const char *str, char **ptr, int base)
Description
The behaviour of strtol is very similar to the functions already discussed. The difference is that we can state in which numeral system the number is written (the base parameter, the radix of the numeral system). The rules concerning valid characters and the second parameter are analogous to those presented for the previous functions, except that the valid characters are determined by the base parameter (for hex, for instance, Z is not an allowed character, and for binary everything except 0 and 1 is disallowed). The allowed values of base are from the range <2; 36>.

Parameters
The first parameter is a character array or a pointer to a string. The second parameter is the address of a pointer to a string. The third parameter is the base of the numeral system.

Return value
On success the function returns the converted number as a long int. If the conversion could not be performed because the characters do not belong to the given numeral system, zero is returned. If the converted number is outside the allowed range of the type, LONG_MAX or LONG_MIN is returned.

Example
#include <stdio.h>
#include <stdlib.h>
int main (void)
{
char *wskTab[] = {" 100", "0xAAA", "001011010101", "cacaca", "0779"};
int rozmiar = sizeof (wskTab) / sizeof (wskTab[0]);
long int tab[rozmiar];
tab[0] = strtol(wskTab[0], NULL, 10);
tab[1] = strtol(wskTab[1], NULL, 16);
tab[2] = strtol(wskTab[2], NULL, 2);
tab[3] = strtol(wskTab[3], NULL, 16);
tab[4] = strtol(wskTab[4], NULL, 8);
printf("%ld\n%ld\n%ld\n%ld\n%ld\n", tab[0], tab[1], tab[2], tab[3],
tab[4]);
return 0;
}
Omwienie programu Weryfikacja czy znak naley do danego systemu liczbowego odbywa si za porednictwem trzeciego parametru funkcji strtol. Znaki 0x poprzedzajce liczb szesnastkow mog, lecz nie musz wystpowa. Jak wida ostatnia liczba poprzedzona znakiem 0 jest liczb semkow, konwertowanie jej koczy si na cyfrze 9, poniewa ta cyfra nie wchodzi w skad semkowego systemu liczbowego.
Podobnie jak dla systemu szestnastkowego w systemie semkowym nie musi wystpowa na pocztku zero.
unsigned long int strtoul (const char *str, char **ptr, int base)
Opis Funkcja jest analogiczna do funkcji strtol, z t rnic, e zwraca warto typu unsigned long int.
Wszystkie zasady s analogiczne.
Parametry Takie same jak dla strtol.
Zwracana warto W przypadku sukcesu funkcja zwraca skonwertowan liczb w postaci unsigned long int. Jeli konwersji nie udao si wykona warto zero jest zwracana. Jeli skonwertowana liczba jest spoza dopuszczalnego zakresu warto ULONG_MAX jest zwracana.
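For instance, a minimal sketch of a typical strtoul call (decimal base; the variable names are only illustrative) might look like this:

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
    char *napis = "4000000000 i dalszy tekst";
    char *reszta;
    unsigned long int wartosc;

    /* base 10; 'reszta' will point at the first unconverted character */
    wartosc = strtoul(napis, &reszta, 10);
    printf("%lu\n", wartosc);
    printf("%s\n", reszta);
    return 0;
}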
10.14.2 Pseudo-losowe liczby int rand (void)
Opis Funkcja zwracajca liczb pseudo-losow. Opis tej funkcji i caego mechanizmu z jakiego si korzysta zosta przedstawiony w rozdziale 9. Aby liczba pseudo-losowa bya z zawonego przedziau naley uy operacji modulo (rand() % wartosc).
Parametry Brak.
Zwracana warto Funkcja zwraca psuedo losow liczb z zakresu <0; RAND_MAX>.
void srand (unsigned int seed)
Opis Funkcja jest generatorem liczb pseudo-losowych. Jako parametr przyjmuje liczb na podstawie ktrej generuje wedug okrelonego algorytmu liczb pseudo-losow wywietlan za pomoc funkcji rand.
Aby funkcja rand losowaa za kadym razem inne wartoci, argument funkcji srand musi by rny,
mona to osign za pomoc funkcji time.
Parametry Dodatnia liczba cakowita.
Zwracana warto Brak.
Przykad
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main (void)
{
int random;
printf("Zarodek staly: ");
srand(1);
random = rand();
printf("%d\n", random % 1000);
printf("Zarodek zmienny: ");
srand(time(0));
random = rand();
printf("%d\n", random % 1000);
return 0;
}
Omwienie programu Warto random % 1000 losuje liczb z zakresu <0; 999>.
10.14.3 Dynamicznie przydzielana pami void *calloc (size_t n, size_t size)
Opis Funkcja szczegowo opisana w rozdziale 9.
Parametry Pierwszym parametrym jest ilo n (elementw tablicy), drugim jest rozmiar pojedynczego elementu.
size_t jest typem danych liczb cakowitych dodatnich, tak wic mona z powodzeniem uywa unsigned int.
Zwracana warto Funkcja zwraca wskanik do zaalokowanego miejsca. Przed uyciem naley zrzutowa wskanik na konkretny typ.
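As a quick reminder of the call described in chapter 9, a minimal usage sketch might look like this:

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
    int i, n = 5;
    int *tab = (int *) calloc(n, sizeof (int));  /* n ints, zero-initialized */
    if (tab == NULL)
        return -1;
    for (i = 0; i < n; i++)
        printf("tab[%d] = %d\n", i, tab[i]);
    free(tab);
    return 0;
}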
void *malloc (size_t size)
Opis Funkcja szczegowo opisana w rozdziale 9.
Parametry Parametrem jest dodatnia liczba cakowita, ktra definiuje jak wielki blok (w bajtach) ma zosta zarezerwowany.
Zwracana warto Funkcja zwraca wskanik do pocztku zarezerwowanego bloku o rozmiarze size bajtw.
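A minimal sketch of a typical malloc call (the buffer size is chosen only for illustration) might look like this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main (void)
{
    char *bufor = (char *) malloc(20);  /* reserve a 20-byte block */
    if (bufor == NULL)
        return -1;
    strcpy(bufor, "Ala ma kota");
    printf("%s\n", bufor);
    free(bufor);
    return 0;
}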
void *realloc (void *ptr, size_t size)
Opis Poniewa funkcja realloc nie zostaa opisana w rozdziale 9. pozwol sobie tutaj j omwi, a wraz z opisem pokaza przykad zastosowania. Funkcja realloc zmienia rozmiar zarezerwowanej pamici wskazywanej przez ptr. Zwiksza lub zmniejsza, w razie koniecznoci przenosi cay blok w inne miejsce i zwraca wskanik do pocztku bloku pamici. Przy pewnych warunkach funkcja realloc zachowuje si jak funkcja free, a przy innych jak funkcja malloc. Jeli jako pierwszy parametr podamy NULL to funkcja rezerwuje pami dokadnie tak samo jak malloc. Natomiast jeli jako size podamy zero, to funkcja zwalnia pami w taki sam sposb jak free.
Parametry Pierwszym parametrem jest wskanik do zaalokowanej wczeniej pamici za pomoc funkcji malloc,
calloc lub realloc. Drugim parametrem jest rozmiar bloku w bajtach.
Zwracana warto Funkcja zwraca wskanik do pocztku bloku zarezerwowanej na nowo pamici. Jeli podczas realokacji blok pamici zosta przeniesiony w inne miejsce to funkcja zwraca wskanik do tego miejsca. Jeli nie powiedzie si realokacja pamici to funkcja zwraca NULL, natomiast blok pamici wskazywany przez ptr zostaje nie ruszony. Jeli realloc uyjemy jako free to NULL jest zwracane.
Przykad
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main (void)
{
int i, ilosc = 15;
int *w = (int *) calloc(10, sizeof (int));
w = realloc (w, ilosc*sizeof(int));
srand(time(0));
for (i = 0; i < ilosc; i++)
w[i] = rand() % 10;
for (i = 0; i < ilosc; i++)
printf("w[%d] = %d\n", i, w[i]);
if (realloc (w, 0) == NULL)
printf("Pamiec wyczyszczona\n");
return 0;
}
Omwienie programu Na pocztku za pomoc funkcji calloc rezerwujemy miejsce dla 10 elementw tablicy o rozmiarze pojedynczego elementu typu int. W kolejnej linijce realokujemy pamic, jako pierwszy argument podajemy w, a jako drugi argument podajemy wyraenie takie jakie wida, poniewa musimy poda ilo miejsca w bajtach, wic 15 sizeof (int) pozwoli na zapisanie 15 elementw tablicy liczb cakowitych. Naley zwrci uwag na to, e jeli nie byo by wywoania tej funkcji program si wysypie, poniewa w instrukcjach for bdziemy odwoywa si do nie dozwolonego miejsca.
Instrukcja if pokazuje, e wywoanie funkcji realloc z drugim parametrem jako zero zwraca NULL jeli pami zostanie wyczyszczona.
void free (void *ptr)
Opis Funkcja opisana w rozdziale 9.
Parametry Wskanik do zaalokowanej wczeniej pamici.
Zwracana warto Brak
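A minimal usage sketch might look like this:

#include <stdlib.h>

int main (void)
{
    double *dane = (double *) malloc(100 * sizeof (double));
    if (dane == NULL)
        return -1;
    /* ... use the block ... */
    free(dane);    /* release the block exactly once */
    dane = NULL;   /* common habit to avoid a dangling pointer */
    return 0;
}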
10.14.4 Funkcje oddziaywujce ze rodowiskiem uruchomienia void abort (void)
Opis Funkcja koczy dziaanie programu w nie prawidowy sposb. abort generuje sygna SIGABRT, ktry koczy program wraz ze zwrceniem kodu bdu odpowiadajcemu nie prawidowemu zakoczeniu programu. Kod ten zwracany jest do rodowiska wywoania programu, a nie w miejscu wywoania funkcji.
Parametr Brak
Zwracana warto Brak
Przykad
#include <stdio.h>
#include <stdlib.h>
int main (void)
{
double n = 0.0, k = 10.0;
if (n != 0)
printf("%f\n", k/n);
else abort();
return 0;
}
Aby sprawdzi jaki kod zosta zwrcony do rodowiska, wystarczy, e wpiszesz nastpujce polecenie,
po zakoczeniu programu.
$ echo $?
int atexit (void (* function) (void))
Opis Funkcja atexit uywana jest wtedy, gdy chcemy wykona jak funkcj na zakonczenie dziaania programu. Funkcja podana jako parametr atexit wykona si tylko wtedy, gdy program zakoczy si w prawidowy (poprawny) sposb. Jeli wystpuje wicej wywoa funkcji atexit w jednym programie, to kolejno wykonywania wskazywanych funkcji jest odwrotna (jeli w programie wystpuj dwa wywoania atexit, to najpierw wykona si funkcja z drugiego wywoania, a pniej dopiero z pierwszego). Podajc wskanik do funkcji mamy na myli, e rejestrujemy j do wykonania na zakoczenie.
Parametry Parametrem jest wskanik do funkcji (nazwa funkcji bez nawiasw).
Zwracana warto Jeli funkcja zostaa pomylnie zarejestrowana to zero jest zwracane, w przeciwnym wypadku warto nie zerowa jest zwracana.
Przykad 1
#include <stdio.h>
#include <stdlib.h>
void goodbye (void);
void goodmorning (void);
int main (void)
{
atexit(goodbye);
atexit(goodmorning);
printf("Zaraz program sie zakonczy\n");
return 0;
}
void goodbye (void)
{
printf("Program skonczyl swoje dzialanie, dowidzenia!\n");
}
void goodmorning (void)
{
printf("Program jednak skonczyl swoje dzialanie\n");
}
Omwienie programu 1 Najpierw pojawi si napis z instrukcji printf (funkcja main), dlatego, e program jeszcze si nie skoczy. Zaraz po tym wystpuje return, tak wic wykonywane s funkcje zarejestrowane przez atexit. Jak powiedziane byo, jeli jest wicej wywoa atexit, to wykonywane s w odwrotnej kolejnoci, dlatego te najpierw wywietlony zostanie napis z funkcji goodmorning, a nastpnie z funkcji goodbye.
Przykad 2
#include <stdio.h>
#include <stdlib.h>
void napis (void);
int main (void)
{
atexit(napis);
abort();
return 0;
}
void napis (void)
{
printf("Ten napis nie zostanie wyswietlony\n");
}
Omwienie programu 2 Jak wspomniano, funkcje zarejestrowane przez atexit wykonuj si wtedy i tylko wtedy, gdy program koczy swoje dziaanie normalnie. Dlatego te w tym przypadku nie wykona si funkcja napis,
poniewa program jest przerywany przez funkcj abort.
void exit (int status)
Opis Funkcja ta wywoana w funkcji main dziaa tak samo jak return, tzn przerwywa wykonywanie programu w sposb poprawny i zwraca warto liczbow podan jako argument do miejsca wywoania
(konsola systemu operacyjnego). Jak wiemy funkcja return uyta w innej funkcji przerywa dziaanie tej funkcji i zwraca sterowanie do miejsca wywoania. Funkcja exit wywoana w innej funkcji przerywa dziaanie caego programu. Funkcja exit dodatkowo wywouje funkcj fclose dla wszystkich otwartych plikw.
Parametry Parametrem jest liczba typu cakowitego, ktra zostanie zwrcona do miejsca wywoania programu.
Zwracana warto Brak
Przykad
#include <stdio.h>
#include <stdlib.h>
void goodbye (void);
void terminate (void);
int main (void)
{
atexit(goodbye);
terminate();
}
void goodbye (void)
{
printf("Dowidzenia\n");
}
void terminate (void)
{
exit (0);
}
Opis programu Jak wida program jest zakoczony za pomoc funkcji terminate. Dziki temu, e funkcja exit koczy program w poprawny sposb funkcja goodbye wykona si na zakoczenie.
char *getenv (const char *name)
Opis Funkcja pobiera nazw zmiennej rodowiskowej (ang. environment variable), ktra posiada pewn warto. Wskanik odnoszcy si do cigu znakw okrelajcych warto owej zmiennej zostaje zwrcony. Jeli parametrem funkcji getenv nie jest zmienna rodowiskowa to NULL jest zwracane.
Parametry Parametrem jest nazwa zmiennej rodowiskowej.
Zwracana warto W przypadku, gdy podanym parametrem jest istniejca nazwa zmiennej rodowiskowej, wskanik do tej wartoci jest zwracany. Jeli dana zmienna nie istnieje, to NULL jest zwracane.
Przykad
#include <stdio.h>
#include <stdlib.h>
int main (int argc, char *argv[])
{
char *w;
int i;
if (argc < 2)
{
fprintf(stderr, "Uzycie: %s env_variable(s)\n", argv[0]);
return -1;
}
for (i = 1; i < argc; i++)
{
w = getenv(argv[i]);
if (w != NULL)
printf("%s\n", w);
else fprintf(stderr, "%s - nie jest zmienna srodowiskowa\n",
argv[i]);
}
return 0;
}
Zmiennymi rodowiskowymi s np. PATH, HOME, USER, SHELL. Rwnie dobrze mona stworzy swoj zmienn rodowiskow i sprawdzi, czy faktycznie dana warto, ktr przypisalimy kryje si pod ni za pomoc powyszego programu. Aby stworzy zmienn rodowiskow wpisz ponisze polecenie.
$ export VARNAME=154 Uruchomienie tego programu moe wyglda nastpujco.
./main PATH HOME USER SHELL VARNAME DDD Gdzie w przypadku DDD zostanie wywietlony komunikat, e nie jest to zmienna rodowiskowa.
int system (const char *command)
Opis Funkcja wykonuje polecenie systemowe. Po wykonaniu polecenia kontrola powraca do programu z wartoci typu int.
Parametry Parametrem jest cig znakw bdcy poleceniem systemowym, np. ls -l.
Zwracana warto Jeli udao si wykona polecenie to zero jest zwracane, w przeciwnym przypadku zwracana jest warto nie zerowa.
Przykad
#include <stdio.h>
#include <stdlib.h>
int main (void)
{
char *wsk[3] = {
"ls -l", "ps -Al", "ls -1"
};
char num[2];
int n;
printf("1 - Wyswietl ze szczegolami zawartosc\n");
printf("2 - Wyswietl liste procesow\n");
printf("3 - Wyswietl zawartosc, kazdy plik w nowej linii\n");
do
{
printf(": ");
fgets(num, sizeof (num), stdin);
n = atoi(num);
} while (n < 1 || n > 3);
system (wsk[n-1]);
return 0;
}
Naley wzi pod uwag fakt, i wywietlanie menu w razie pomylenia si uytkownika podczas wprowadzania nie jest wolne od bdw. Naleao by zastosowa pewne sztuczki, aby wywietlanie byo bardziej efektywne.
10.14.5 Wyszukiwanie i sortowanie void qsort (void *base, size_t n, size_t size, int (* comparator)
(const void *, const void *))
Opis Funkcja sortujca n elementw tablicy wskazywanej przez base, w ktrej rozmiar kadego elementu wynosi size. Funkcja comparator uywana jest do okrelenia kolejnoci elementw (rosnco, malejco).
Parametry Pierwszym parametrem jest wskanik do tablicy, drugim jest ilo elementw tablicy, trzecim jest rozmiar pojedynczego elementu tablicy, czwartym jest wskanik na funkcj porwnujc dwa elementy. Co do ostatniego parametru, to naley si wicej szczegw, a mianowicie:
funkcja musi pobiera dwa argumenty zdefiniowane jako void *.
funkcja musi zwraca warto identyfikujc, ktry z elementw jest wikszy (dla kolejnoci rosncej):
jeli elem1 < elem2, zwrc warto ujemn (dodatni dla kolejnoci malejcej)
jeli elem1 = elem2, zwrc zero
jeli elem1 > elem2, zwr warto dodatni (ujemn dla kolejnoci malejcej)
Zwracana warto Brak.
Przykad
#include <stdio.h>
#include <stdlib.h>
int porownaj (const void *elem1, const void *elem2);
void wyswietl (int tab[], int size);
int main (void)
{
int tab[] = {12, 22, 10, 3, 20, 98, 32, 45};
int size = sizeof (tab[0]);
int num = sizeof (tab)/size;
wyswietl(tab, num);
qsort (tab, num, size, porownaj);
wyswietl(tab, num);
return 0;
}
int porownaj (const void *elem1, const void *elem2)
{
return (*(int *)elem1 - *(int *)elem2);
}
void wyswietl (int tab[], int size)
{
int i;
for (i = 0; i < size; i++)
printf("%d ", tab[i]);
printf("\n");
}
Omwienie programu Wyjanienie oczywicie si naley. Tworzymy tablic z zainicjowanymi wartociami. Zmienne pomocnicze bd przechowyway pewne wartoci, a mianowicie: size posiada warto pojedynczego elementu, w tym przypadku rozmiar typu int, num ilo elementw tablicy. Funkcja wyswietl wywietla po prostu wszystkie elementy tablicy. Ciekawa sytuacja jest w funkcji porownaj przypomnij sobie sytuacj, ktra zaistniaa w punkcie 5.3, wtedy gdy zmienialimy warto staej. Musielimy przekaza jako argument adres, lecz zrzutowany na typ int *. Poniewa nie moemy uy operatora dereferencji do typu void *, to musimy go zrzutowa na typ int *. Ujelimy to w nawias, dlatego, e po zrzutowaniu musimy wycign kryjc si pod tym adresem warto, dlatego te jest jedna gwiazdka z przodu. Na pierwszy rzut oka moe wydawa si to dziwne, moe trudne, lecz w gruncie rzeczy jest to odejmowanie dwch wartoci, kryjcych si pod wskazanymi adresami. Jeli elem1 jest wiksze to wynikiem bdzie liczba dodatnia, jeli s rwne to zero, w przeciwnym wypadku warto ujemna zostanie zwrcona.
void *bsearch (const void *key, const void *base, size_t n,
size_t size, int (* comparator) (const void *, const void *))
Opis Funkcja szuka podanej wartoci w tablicy o iloci elementw n wskazywanej przez base, gdzie kady element zajmuje size bajtw. Funkcja zwraca wskanik do wystpienia elementu.
Parametry Pierwszym parametrem jest wskanik do elementu szukanego, drugim jest wskanik do tablicy, ktr przeszukujemy, trzecim jest ilo elementw tablicy, czwartym rozmiar pojedynczego elementu,
a ostatnim parametrem jest funkcja porwnujca analogiczna jak w funkcji qsort. Poniewa przeszukiwanie jest binarne, to funkcja bsearch wymaga, aby przeszukiwana tablica bya posortowana w porzdku rosncym.
Zwracana warto W przypadku znalezienia szukanego elementu funkcja zwraca wskanik do niego. Jeli element nie wystpowa w tablicy to funkcja zwraca NULL.
Przykad
#include <stdio.h>
#include <stdlib.h>
int porownaj (const void *elem1, const void *elem2);
void wyswietl (int tab[], int size);
int main (void)
{
int tab[] = {12, 22, 10, 3, 20, 98, 32, 45};
int size = sizeof (tab[0]);
int num = sizeof (tab)/size;
int l = 22;
int *w;
wyswietl(tab, num);
qsort (tab, num, size, porownaj);
wyswietl(tab, num);
w = (int *)bsearch((void *)&l, tab, num, size, porownaj);
if (w != NULL)
printf("Wartosc: %d znajduje sie pod adresem: %p\n", *w, w);
else printf("Wartosc: %d nie znajduje sie w tablicy\n", l);
return 0;
}
int porownaj (const void *elem1, const void *elem2)
{
return (*(int *)elem1 - *(int *)elem2);
}
void wyswietl (int tab[], int size)
{
int i;
for (i = 0; i < size; i++)
printf("%d ", tab[i]);
printf("\n");
}
Omwienie programu Wikszo jest taka sama jak w przypadku qsort, poniewa aby wyszukiwa elementu tablicy za pomoc bsearch tablica musi by posortowana w kolejnoci rosncej. Funkcja zwraca wskanik void *, dlatego te musimy zrzutowa go na typ int *, dziki czemu bdziemy mogli wywietli jego warto, jeli zostanie ona znaleziona, oraz adres. W pierwszym argumencie pobieramy adres zmiennej l, a nastpnie rzutujemy j na void *, poniewa wymaga tego deklaracja pierwszego parametru.
10.14.6 Arytmetyka liczb cakowitych int abs (int n)
Opis Funkcja zwracajca warto bezwzgldn liczby cakowitej.
Parametry Parametrem jest liczba cakowita.
Zwracana warto Zwracan wartoci jest warto bezwzgldna liczby n.
Przykad
#include <stdio.h>
#include <stdlib.h>
int main (void)
{
int n = -10;
printf("%d\n", n);
n = abs(n);
printf("%d\n", n);
return 0;
}
long int labs (long int n)
Opis Funkcja analogiczna do abs.
Parametry Liczba typu long int.
Zwracana warto Warto bezwzgldna liczby n zwrcona jako typ long int.
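A minimal sketch, analogous to the abs example above, might look like this:

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
    long int n = -2000000000L;
    printf("%ld\n", labs(n));   /* prints 2000000000 */
    return 0;
}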
div_t div (int n, int d)
Opis Funkcja wykonujca operacj dzielenia na liczbach cakowitych. Warto umieszczana jest w strukturze typu div_t, ktra posiada dwa pola typu cakowitego quot oraz rem, ktre odpowiednio przechowuj wynik dzielenia cakowitego oraz reszt z dzielenia.
Parametry Pierwszym parametrem jest dzielna, drugim dzielnik.
Zwracana warto Warto zwracana jest do struktury. Do pola quot zostaje przypisana warto dzielenia cakowitego,
a do rem reszta z dzielenia, jeli wystpuje.
Przykad
#include <stdio.h>
#include <stdlib.h>
int main (void)
{
div_t str;
str = div (23, 4);
printf("%d %d\n", str.quot, str.rem);
return 0;
}
ldiv_t ldiv (long n, long d)
Opis Funkcja analogiczna do funkcji div. Struktura jest analogiczna, tylko pola s typu long int.
Parametry Dzielna i dzielnik typu long int.
Zwracana warto Warto zwracana jest do struktury. Do pola quot zostaje przypisana warto dzielenia cakowitego,
a do rem reszta z dzielenia, jeli wystpuje.
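A minimal sketch, analogous to the div example above, might look like this:

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
    ldiv_t wynik = ldiv(1000000L, 7L);
    printf("%ld %ld\n", wynik.quot, wynik.rem);   /* 142857 1 */
    return 0;
}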
10.15 string.h W nagwku string.h znajduj si funkcje wykonujce pewne czynnoci na tekstach. W poniszej tabeli zestawiono wszystkie te funkcje, a pod tabel opis wraz z przykadem.
Nazwa      Opis
Kopiowanie
strcpy     Funkcja kopiujca znaki
strncpy    Funkcja kopiujca konkretn ilo znakw
memcpy     Kopiowanie z jednego do drugiego obiektu
memmove    j/w oraz dziaa gdy obiekty nachodz na siebie
Doczanie
strcat     Funkcja dopisujca znaki
strncat    Funkcja dopisujca konkretn ilo znakw
Porwnywanie
strcmp     Funkcja porwnujca znaki
strncmp    Funkcja porwnujca konkretn ilo znakw
memcmp     Porwnanie okrelonej liczby znakw zawartych w dwch obiektach
Wyszukiwanie
strpbrk    Wyszukiwanie wielu znakw w cigu znakw
strstr     Wyszukiwanie sowa w cigu znakw
strtok     Wyszukiwanie w tekcie cigu znakw przedzielone konkretnymi znakami
memchr     Wyszukiwanie znaku w obiekcie
strchr     Wyszukiwanie znaku w cigu znakw
strrchr    Wyszukiwanie znaku w cigu znakw od tyu
strspn     Obliczanie dugoci przedrostka
strcspn    Obliczanie dugoci przedrostka
Inne
strlen     Dugo tekstu
strerror   Wskanik do tekstu komunikatu o bdzie
memset     Wstawianie znaku do konkretnej pocztkowej iloci znakw obiektu
10.15.1 Kopiowanie
char *strcpy (char *destination, const char *source)
Opis Funkcja strcpy kopiuje cig znakw na ktry wskazuje source do tablicy znakw wskazywanej przez destination wraz ze znakiem koca. Aby nie pisa po pamici naley dobra odpowiedni rozmiar tablicy destination. Lepszym rozwizaniem jest funkcja strncpy.
Parametry Pierwszym parametrem jest tablica do ktrej znaki zostan skopiowane. Drugim parametrem jest cig znakw, ktry ma zosta skopiowany.
Zwracana warto Funkcja zwraca wskanik do destination.
Przykad
#include <stdio.h>
#include <string.h>
int main (void)
{
char tab[50];
char *napis = "Ala ma kota";
char info[] = "Czesc";
strcpy(tab, napis);
printf("%s\n", tab);
strcpy(tab, info);
printf("%s\n", tab);
strcpy(tab, "Hello World");
printf("%s\n", tab);
return 0;
}
char *strncpy (char *destination, const char *source, size_t n)
Opis Funkcja strncpy tak samo jak wspomniana powyej strcpy kopiuje znaki z source do destination,
natomiast jako trzeci parametr przyjmuje maksymaln ilo znakw, ktre s kopiowane. Dziki temu funkcja ta jest bezpieczniejsza, poniewa jeli rozmiar tablicy destination jest za may by przechowa tekst z source to tekst zostanie obcity pod warunkiem, e ustawimy odpowiedni rozmiar jako n.
W przypadku gdy znak koca kopiowanego tekstu zostanie wykryty przed maksymaln iloci kopiowanych znakw, reszta pozycji w tablicy destination zostaje wypeniona zerami. Znak koca nie zostaje dodany automatycznie (jeli ilo znakw kopiowanych jest wiksza ni n) do tablicy destination, wic o tym naley pamita.
Parametry Pierwszym parametrem jest tablica do ktrej znaki zostan skopiowane. Drugim parametrem jest cig znakw, ktry ma zosta skopiowany, trzecim parametrem jest ilo znakw ktre maj zosta skopiowane.
Zwracana warto Funkcja zwraca wskanik do destination.
Przykad
#include <string.h>
#include <stdio.h>
int main (void)
{
char tab[20];
char *napis = "Ala ma kota, a kot ma 5 lat";
char info[] = "Czesc";
int rozmiar = sizeof (tab);
strncpy(tab, napis, rozmiar-1);
tab[rozmiar-1] = '\0';
printf("%s\n", tab);
strncpy(tab, info, rozmiar);
printf("%s\n", tab);
strncpy(tab, "Hello World", rozmiar);
printf("%s\n", tab);
return 0;
}
void *memcpy (void *destination, const void *source, size_t n)
Opis Funkcja memcpy kopiuje pewien blok pamici w inne miejsce, a mianowicie kopiuje blok wskazywany przez source, do miejsca wskazywanego przez destination. Kopiowana jest n ilo bajtw. Bezpieczniejsz funkcj, ktra ma analogiczne dziaanie jest memmove.
Parametry Pierwszym parametrem jest miejsce docelowe kopiowanego bloku, drugim jest pocztek bloku kopiowanego, a trzecim ilo kopiowanych bajtw.
Zwracana warto Funkcja zwraca wskanik do destination.
Przykad
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
void wydrukuj (int *t, int size);
int main (void)
{
int tab[] = {10, 21, 11, 24, 22, 32, 11, 222, 19};
int *w;
w = (int *)malloc (sizeof (tab));
memcpy (w, tab, sizeof (tab));
*w = 100;
wydrukuj (tab, sizeof (tab)/sizeof (tab[0]));
wydrukuj (w, sizeof (tab)/sizeof (tab[0]));
printf("%p\n", tab);
printf("%p\n", w);
return 0;
}
void wydrukuj (int *t, int size)
{
int i;
for (i = 0; i < size; i++)
printf("%d ", *(t+i));
printf("\n");
}
Omwienie programu Stworzylimy tablic z zainicjowanymi wartociami cakowitymi. Nastpnie tworzymy wskanik na typ int, aby zarezerwowa tak sam ilo miejsca jak zajmuje tablica tab. Po zarezerwowaniu miejsca, kopiujemy zawarto bloku pamici zarezerwowanego przez tab (9 elementw cztero bajtowych) do miejsca, ktre przed chwil zarezerwowalimy. Aby uwidoczni, e s to dwa rne obszary w pamici przed wydrukowaniem zawartoci, zmieniona zostaa warto zerowego elementu wskazywanego przez w, a nastpnie wydrukowane zostay wartoci z obu obszarw pamici.
void *memmove (void *destination, const void *source, size_t n)
Opis Funkcje jest podobna do memcpy, z t rnic, e kopiowanie bloku pamici odbywa si za porednictwem tymczasowego bloku pamici. Dokadnie odbywa si to w nastpujcej kolejnoci,
kopiowany jest blok wskazywany przez source o rozmiarze n do tymczasowej tablicy o rozmiarze n bajtw, a nastpnie n bajtw z tymczasowej tablicy kopiowanych jest do miejsca wskazywanego przez destination. Zapobiega to kopiowania nadmiernej iloci danych do miejsca, ktrego rozmiar jest mniejszy (ang. overflow).
Parametry Pierwszym parametrem jest miejsce docelowe kopiowanego bloku, drugim jest pocztek bloku kopiowanego, a trzecim ilo kopiowanych bajtw.
Zwracana warto Funkcja zwraca wskanik do destination.
Przykad
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
int main (void)
{
char tab[50];
char *w;
fgets(tab, sizeof (tab)-1, stdin);
tab[sizeof (tab)-1] = '\0';
w =(char *)malloc (sizeof (tab));
memmove (w, tab, sizeof (tab));
tab[0] = 'X';
printf("%s", tab);
printf("%s", w);
return 0;
}
Omwienie programu Program jest bardzo podobny do tego z funkcji memcpy. W tym programie wprowadzamy znaki z klawiatury, ktre nastpnie kopiujemy do miejsca, ktre zostao zarezerwowane. Aby pokaza, e s to dwa rne obszary pamici, w tablicy tab na zerowej pozycji zosta zmieniony znak na X, co przy wydruku dowodzi, e blok pamici wskazywany przez w ma zawarto wpisan przez uytkownika.
10.15.2 Doczanie
char *strcat (char *destination, const char *source)
Opis Funkcja kopiuje znaki z source do destination, lecz nie wymazuje poprzedniej zawartoci tylko docza do istniejcego cigu znakw nowo skopiowane znaki. Podobnie jak w przypadku funkcji strcpy funkcja ta nie sprawdza, czy doczane znaki zmieszcz si w tablicy destination, co jest nie bezpieczne. Dlatego lepiej stosowa funkcj strncat. Znak koca zostaje ustawiony na kocu nowego cigu znakw.
Parametry Pierwszym parametrem jest tablica do ktrej znaki zostan doczone. Drugim parametrem jest cig znakw.
Zwracana warto Funkcja zwraca wskanik do destination.
Przykad
#include <stdio.h>
#include <string.h>
int main (int argc, char *argv[])
{
char tab[100];
if (argc != 3)
{
fprintf(stderr, "Uzycie: %s arg1 arg2\n", argv[0]);
return -1;
}
tab[0] = '\0';
strcat (tab, argv[1]);
strcat (tab, argv[2]);
printf("%s\n", tab);
return 0;
}
char *strncat (char *destination, const char *source, size_t n)
Opis Funkcja robica dokadnie to samo co strcat z moliwoci ograniczenia kopiowanych znakw. Jeli ilo znakw z source jest mniejsza ni rozmiar n, to kopiowane s tylko te znaki wliczajc w to znak koca.
Parametry Pierwszym parametrem jest tablica do ktrej znaki zostan doczone. Drugim parametrem jest cig znakw. Trzecim parametrem jest ilo doczanych znakw.
Zwracana warto Funkcja zwraca wskanik do destination.
Przykad
#include <stdio.h>
#include <string.h>
int main (void)
{
char tab[30] = "Ala ma kota, a ";
char tab2[] = "kot ma 5 lat xxxxxxxxxxxxxxxx";
strncat (tab, tab2, 12);
printf("%s\n", tab);
return 0;
}
10.15.3 Porwnywanie
int strcmp (const char *str1, const char *str2)
Opis Funkcja porwnuje dwa cigi znakw (str1, str2). Zaczyna od pierwszego znaku w kadym z cigw znakw i jeli s one takie same to kontynuuje porwnywanie. Funkcja przerywa dziaanie jeli porwnywane znaki bd si rni, lub jeden z cigw si skoczy.
Parametry Pierwszym oraz drugim parametrem jest wskanik do cigu znakw.
Zwracana warto Funkcja zwraca warto cakowit. Jeli oba cigi znakw s takie same warto zero jest zwracane.
Jeli pierwszy nie pasujcy znak ma wiksz warto w str1 ni w str2 to warto dodatnia jest zwracana, w przeciwnym przypadku zwracana jest warto ujemna.
Przykad 1
#include <stdio.h>
#include <string.h>
int main (void)
{
char *str1 = "aaab";
char *str2 = "aaaa";
int w;
w = strcmp (str1, str2);
if (w > 0)
printf("Wyraz str1 wiekszy\n");
else if (w < 0)
printf("Wyraz str1 mniejszy\n");
else printf("Wyrazy rowne\n");
return 0;
}
Przykad 2
#include <stdio.h>
#include <string.h>
void create (char *filename);
int main (int argc, char *argv[])
{
if (argc != 3)
{
fprintf(stderr, "Uzycie: %s option filename\n", argv[0]);
fprintf(stderr, "Options: \n-r - remove file\n");
fprintf(stderr, "-c - create file\n");
return -1;
}
if (!strcmp (argv[1], "-r"))
if (!remove(argv[2]))
fprintf(stderr, "Plik: %s usunieto\n", argv[2]);
else fprintf(stderr, "Pliku: %s nie udalo sie usunac\n",
argv[2]);
if (!strcmp (argv[1], "-c"))
create(argv[2]);
return 0;
}
void create (char *filename)
{
FILE *wp;
if ((wp = fopen(filename, "w")) != NULL)
{
fprintf(stderr, "Plik: %s utworzono\n", filename);
fclose(wp);
}
else fprintf(stderr, "Pliku: %s nie udalo sie utworzyc\n",
filename);
}
Omwienie programu 1 Jak wida str1 bdzie wiksze, dlatego, e pierwszym rnicym si znakiem jest b, ktrego warto numeryczna jest wiksza od a.
Omwienie programu 2 Ten program z kolei wykorzystuje pewien mechanizm, ktry w programach Linuksowych jest bardzo czsto wykorzystywany, a mianowicie opcje podczas uruchamiania programu. Temat ten by poruszony w rozdziale 6 i tam bya przedstawiona bardziej uniwersalna metoda, lecz dla prostych programw mona zastosowa ten sposb. Sprawdzamy czy drugi argument (pamitamy pierwszy to nazwa programu) to znaki "-r" i jeli tak jest to podany plik (trzeci argument) zostaje usunity (oczywicie,
jeli istnieje). Jeli uylibymy opcji "-c" to plik zostanie utworzony.
int strncmp (const char *str1, const char *str2, size_t n)
Opis Dziaanie funkcji strncmp jest identyczne z funkcj strcmp z t rnic, e mamy moliwo sprawdzenia pewnej konkretnej iloci pocztkowych znakw. Funkcja porwnuje znaki od pierwszego w kadym z cigw i koczy dziaanie w momencie, gdy znaki si rni, jeden z cigw si skoczy,
lub porwna n znakw.
Parametry Pierwszym oraz drugim parametrem jest wskanik do cigu znakw, trzecim parametrem jest ilo porwnywanych znakw.
Zwracana warto Funkcja zwraca warto cakowit. Jeli oba cigi znakw s takie same warto zero jest zwracana.
Jeli pierwszy nie pasujcy znak ma wiksz warto w str1 ni w str2 to warto dodatnia jest zwracana, w przeciwnym przypadku zwracana jest warto ujemna.
Przykad
#include <stdio.h>
#include <string.h>
int main (void)
{
char *op1 = "aaAaaab";
char *op2 = "aaAaaab fdfsd";
int w;
w = strncmp (op1, op2, 7);
if (!w)
printf("7 poczatkowych znakow - takie same\n");
return 0;
}
int memcmp (const void *ptr1, const void *ptr2, size_t n)
Opis Funkcja memcmp porwnuje pocztkowych n bajtw blokw pamici wskazywanych przez ptr1 oraz ptr2. Jeli blokiem wskazywanym przez oba wskaniki bdzie cig znakw, to funkcja zachowuje si analogicznie do funcji strcmp.
Parametry Paramert pierwszy i drugi to wskaniki do pewnego bloku pamici, ktry ma zosta porwnany.
Ostatnim parametrem jest ilo porwnywanych bajtw.
Zwracana warto Funkcja zwraca warto cakowit. Jeli porwnywane bloki s takie same, to warto zero jest zwracana. Jeli pierwszy rnicy si bajt ma wiksz warto w ptr1 ni w ptr2 to warto dodatnia jest zwracana, warto ujemna zwracana jest w przeciwnym wypadku. Wartoci bajtw porwnywane s jako unsigned char.
Przykad 1
#include <stdio.h>
#include <string.h>
int main (void)
{
char *nap1 = "Ala";
char *nap2 = "Ola";
int w;
w = memcmp (nap1, nap2, 3);
if (!w)
printf("Takie same\n");
else if (w < 0)
printf("nap1 < nap2\n");
else printf("nap1 > nap2\n");
return 0;
}
Przykad 2
#include <stdio.h>
#include <string.h>
int main (void)
{
int tab1[] = {9, 1, 2, 3, 7, 1};
int tab2[] = {9, 1, 2, 5, 5, 3};
int w;
w = memcmp(tab1, tab2, 12);
if (!w)
printf("Takie same\n");
else if (w < 0)
printf("tab1 < tab2\n");
else printf("tab1 > tab2\n");
return 0;
}
Omwienie programu 1 Uyta w programie tym funkcja memcmp daje analogiczny skutek jak funkcja strcmp dlatego, e porwnywan iloci bajtw jest liczba 3, co bezporednio wie si z iloci znakw obu napisw.
Kady bajt jest porwnywany i jeli na ktrej pozycji znaki si rni to odpowiednia warto jest zwracana.
Omwienie programu 2 Tutaj sprawa wyglda troch ciekawiej, bowiem porwnujemy pocztkowe trzy elementy tablicy, lecz iloci bajtw nie jest 3 jakby mona byo pomyle tylko 12, bo przecie kady element tablicy typu int zajmuje cztery bajty.
10.15.4 Wyszukiwanie
char *strchr (const char *source, int character)
Opis Funkacja przeszukuje rdo (source) w poszukiwaniu znaku (character). Znak koca \0 rwnie wliczany jest jako cz tekstu, tak wic mona go wyszuka.
Parametry Pierwszym parametrem jest wskanik do tekstu, drugim parametrem jest wyszukiwany znak.
Zwracana warto W przypadku znalezienia znaku wskanik do miejsca w ktrym on wystpuje jest zwracany. Funkcja zwraca wskanik do pierwszego wystpienia znaku. Jeli znaku nie udao si znale NULL jest zwracane.
Przykad
#include <stdio.h>
#include <string.h>
int main (void)
{
char str[] = "100,00";
char *w;
w = strchr (str, ',');
if (w != NULL)
*w = '.';
printf("%s\n", str);
return 0;
}
char *strrchr (const char *source, int character)
Opis Funkcja robica dokadnie to samo co strchr z t rnic, e wyszukuje znak od koca.
Parametry Pierwszym parametrem jest wskanik do tekstu, drugim parametrem jest wyszukiwany znak.
Zwracana warto W przypadku znalezienia znaku wskanik do miejsca w ktrym on wystpuje jest zwracany. Funkcja zwraca wskanik do pierwszego od koca wystpienia znaku. Jeli znaku nie udao si znale NULL jest zwracane.
Przykad
#include <stdio.h>
#include <string.h>
int main (void)
{
char tab[] = "100,000,00";
char *w;
w = strrchr(tab, ',');
if (w != NULL)
*w = '.';
printf("%s\n", tab);
return 0;
}
size_t strspn (const char *str1, const char *str2)
Opis Funkcja zwraca dugo pocztkowego tekstu z str1, ktry skada si jedynie ze znakw str2.
Parametry Pierwszym i drugim parametrem s wskaniki do tekstu.
Zwracana warto Dugo pocztkowego tekstu z str1, ktry zawiera jedynie znaki z str2. Jeli w str2 s identyczne znaki jak w str1 to zwrcona warto jest dugoci tekstu str1. Jeli pierwszy znak z str1 nie wystpuje w str2 to zwracane jest zero.
Przykad
#include <stdio.h>
#include <string.h>
int main (void)
{
size_t no;
char *str1 = "Ala ma kota";
char *str2 = str1;
no = strspn (str1, "lamA ");
printf("%d\n", no);
no = strspn (str1, str2);
printf("%d\n", no);
no = strspn (str1, "la ma kota");
printf("%d\n", no);
return 0;
}
size_t strcspn (const char *str1, const char *str2)
Opis Funkcja przesukuje str1 w poszukiwaniu pierwszego wystpienia jakiegokolwiek znaku z str2.
Parametry Pierwszym i drugim parametrem s wskaniki do tekstu.
Zwracana warto W przypadku znalezienia w str1 jakiegokolwiek znaku z str2 funkcja zwraca pozycj na ktrej ten znak si znajduje. Jeli takiego znaku nie znaleziono, to dugo str1 jest zwracana.
Przykad
#include <stdio.h>
#include <string.h>
int main (void)
{
char *info = "Dzien dobry";
size_t no;
no = strcspn (info, "QXZ");
printf("%d\n", no);
no = strcspn (info, "wie");
printf("%d\n", no);
return 0;
}
Omwienie programu Pierwsze wywoanie funkcji strcspn zwrci dugo cigu znakw wskazywanego przez info dlatego,
e adnen ze znakw Q, X, Z nie wystpuje w tym tekcie. Drugie wywoanie funkcji zwrci warto dwa, poniewa pierwszym wystpieniem jakiegokolwiek znaku z str2 (wie) jest litera i, ktra jest na pozycji drugiej (liczc od zera).
char *strpbrk (const char *str1, const char *str2)
Opis Funkcja zwracajca wskanik do pierwszego wystpienia w str1 jakiegokolwiek znaku z str2.
Parametry Pierwszym i drugim parametrem jest wskanik do tekstu.
Zwracana warto Funkcja zwraca wskanik do pierwszego wystpienia jakiegokolwiek znaku z str2 w cigu znakw wskazywanym przez str1. Jeli aden znak nie zosta znaleziony funkcja zwraca NULL.
Przykad
#include <stdio.h>
#include <string.h>
int main (void)
{
char *napis = "Ala ma kota, a kot ma 5 lat";
char *w;
char *smg = "euioaEUIOA";
w = strpbrk(napis, smg);
printf("W zdaniu: %s. Wystepuja nastepujace samogloski: \n", napis);
while (w != NULL)
{
printf("%c ", *w);
w = strpbrk(w+1, smg);
}
printf("\n");
return 0;
}
Omwienie programu Wskanik smg wskazuje na samogoski. Do wskanika w przypisujemy wywoanie funkcji strpbrk.
Nastpnie sprawdzamy czy jakakolwiek samogoska zostaa znaleziona (rne od NULL). Jeli tak, to drukujemy j, a nastpnie przypisujemy do w wywoanie funkcji strpbrk, ktra w tym miejscu przyjmuje jako pierwszy parametr nie wskanik do napis, a do w+1. Chodzi o to, e jeli w wskazuje na znak z tego cigu, to mona te odwoa si do dalszej czci cigu, a do znaku \0. Dlatego te przesuwamy wskanik na nastpn pozycj i od niej szukamy kolejnych samogosek.
char *strstr (const char *str1, const char *str2)
Opis Funkcja strstr wyszukuje w cigu wskazywanym przez str1 cigu (caego) str2. Funkcja jest uyteczna, jeli wyszukujemy np. wyraz, zdanie.
Parametry Pierwszym i drugim parametrem s wskaniki do tekstu.
Zwracana warto Funkcja zwraca wskanik do pierwszego wystpienia caego cigu znakw str2 w str1. Jeli wyraz
(zdanie) z str2 nie wystpuje w str1 zwracan wartoci jest NULL.
Przykad
#include <stdio.h>
#include <string.h>
int main (void)
{
char *napis = "Siala baba mak, nie wiedziala jak,\na dziad wiedzial,
nie powiedzial,\ndostal 10 lat\n";
char *w;
w = strstr (napis, "nie wiedziala jak");
if (w != NULL)
printf("%s", w);
return 0;
}
Omwienie programu Do w przypisujemy wywoanie funkcji strstr, ktra szuka fragmentu nie widziala jak w tekcie wskazywanym przez napis. Jeli taki cig znakw si znajduje, to przypisany zostaje wskanik do pierwszego znaku znalezionej frazy. Nastpnie drukowany jest ten fragment wraz zreszt tekstu.
void *memchr (const void *ptr1, int value, size_t n)
Opis Funkcja przeszukuje n pocztkowych bajtw bloku pamici wskazywanego przez ptr w poszukiwaniu wartoci value (interpretowanej jako unsigned char).
Parametry Pierwszym parametrem jest wskanik do bloku pamici, drugim jest szukana warto, a trzecim jest ilo przeszukiwanych bajtw.
Zawracana warto Funkcja zwraca wskanik do miejsca wystpienia szukanej wartoci w przypadku znalezienia jej, lub NULL jeli takiej wartoci nie znalaza.
Przykad
#include <stdio.h>
#include <string.h>
void info (int *w, int size, int value);
int main (void)
{
int tab[] = {0, 1, 2, 4, 8, 16, 32, 64};
int *w;
int size, value;
size = 16;
value = 8;
w = (int *)memchr (tab, value, size);
info (w, size, value);
size = 20;
w = (int *)memchr (tab, value, size);
info (w, size, value);
printf("%p\n", tab+4);
return 0;
}
void info (int *w, int size, int value)
{
if (w == NULL)
printf("W pierwszych %d bajtach nie ma wartoci: %d\n", size, value);
else
{
printf("W pierwszych %d bajtach jest wartosc: %d\n", size, value);
printf("Pod adresem: %p\n", w);
}
}
Omwienie programu Na pocztku tworzona jest tablica liczb cakowitych z zainicjowanymi wartociami. Do zmiennych pomocniczych przypisujemy wartoci, a konkretnie ilo bajtw oraz szukan warto, ktre potrzebne s do funkcji memchr. Wywoanie funkcji przypisujemy do wskanika w, oraz rzutujemy wynik na
(int *). Nastpnie wywoujemy funkcj info, ktra przyjmuje jako argumenty wskanik w, rozmiar oraz warto w celu ustalenia i powiadomienia uytkownika, czy dana warto w przeszukiwanej iloci bajtw zostaa znaleziona. Jeli tak to dodatkowo wywielany jest adres komrki pamici, pod ktrym szukana warto si znajduje. Jak wida w obszarze 20 bajtw liczba 8 wystpuje.
char *strtok (char *str, const char *delimiters)
Opis Funkcja rozdziela pewne czci napisu midzy ktrymi wystpuj konkretne znaki (delimiters). Znaki te podaje si jako drugi parametr. Pierwsze wywoanie funkcji wymaga, aby pierwszym parametrem by wskanik do tekstu. W kolejnych wywoaniach funkcja wymaga jako pierwszego parametru wartoci NULL. Przeszukiwanie cigu rozpoczyna si od pierwszego znaku wskazywanego przez str,
ktry nie wystpuje w delimiters, a koczy si w momencie napotkanie ktregokolwiek znaku zawartego w delimiters. Znak ten zostaje zamieniony na \0 i przesukiwanie tekstu rozpoczyna si od kolejnego znaku. Przeszukiwanie koczy si w momencie gdy strtok natrafi na znak \0 koniec napisu.
Parametry Pierwsze wywoanie funkcji wymaga, aby pierwszym parametrem bya tablica znakw. Kolejne wywoania funkcji wymagaj NULL. Drugim parametrem jest wskanik do cigu znakw zawierajcy konkretne znaki.
Zwracana warto Po kadym wywoaniu funkcja zwraca wskanik do pocztku tekstu (od pierwszego znaku nie nalecego do delimiters, a do \0). Warto NULL jest zwracana po wykryciu koca tekstu.
Przykad
#include <stdio.h>
#include <string.h>
int main (void)
{
char str[] = "Czesc - To - Ja; Artur; Co, slychac?";
char *delimiters = "-;, ";
char *w;
w = strtok (str, delimiters);
while (w != NULL)
{
printf("%s\n", w);
w = strtok (NULL, delimiters);
}
return 0;
}
Omwienie programu Naszym zadaniem jest wywietli kady wyraz cigu znakw z str w nowej linii. Wyrazy oddzielone s pewnymi znakami (delimiters). Tak wic do rozwizania tego zadania uywamy funkcji strtok.
W pierwszym wywoaniu podajemy tablic znakw, a nastpnie w ptli drukujemy cig znakw od jednego delimitera do drugiego. Nastpne wywoanie funkcji jak wida jako pierwszy parametr pobiera NULL. Zakoczenie ptli nastpuje w momencie, gdy cig znakw zakoczy si.
10.15.5 Inne
size_t strlen (const char *str)
Opis Funkcja obliczajca dugo napisw. Rozpoczyna liczenie od pierwszego znaku, a koczy na znaku koczcym napis \0, ktrego nie wlicza.
Parametry Parametrem jest wskanik do cigu znakw.
Zwracana warto Zwracan wartoci jest ilo znakw.
Przykad
#include <stdio.h>
#include <string.h>
int main (void)
{
char *wsk = "Napis skladajacy sie z kilkunastu znakow";
char tab[] = "Ciekawe ile ten napis ma znakow";
char tab2[40] = "Ile znakow, a jaki rozmiar?";
printf("strlen (wsk): %d\n", strlen(wsk));
printf("strlen (tab): %d\n", strlen(tab));
printf("strlen (tab2): %d\n\n", strlen(tab2));
printf("sizeof (wsk): %d\n", sizeof (wsk));
printf("sizeof (tab): %d\n", sizeof (tab));
printf("sizeof (tab2): %d\n", sizeof (tab2));
return 0;
}
Pewna uwaga Dlaczego wartoci sizeof (wsk) i strlen (wsk) si rni? Z bardzo prostej przyczyny. Wskanik to zmienna, ktra wskazuje na inny obszar pamici, tak wic jej rozmiar to rozmiar zmiennej. Dlaczego strlen (tab) to 31, a sizeof (tab) to 32? Poniewa funkcja strlen nie zlicza znaku \0. Jeli mwimy o rozmiarze, to ten znak jest czci tablicy, dlatego te jest uwzgldniany. Dlaczego strlen (tab2)
i sizeof (tab2) si rni? Rozmiar jest z gry okrelony, a znakw jest po prostu mniej.
void *memset (void *ptr, int value, size_t n)
Opis Funkcja ustawia na pierwszy n bajtach wskazywanych przez ptr warto podan jako drugi parametr.
Parametry Pierwszym parametrem jest wskanik do bloku pamici, drugim parametrem jest ustawiana warto,
a ostatnim ilo bajtw do zamiany.
Zwracana warto Funkcja zwraca ptr.
Przykad 1
#include <stdio.h>
#include <string.h>
int main (void)
{
char tablica[] = "Pewien ciag znakow, ktory zostanie zakryty";
printf("%s\n", tablica);
memset(tablica, '.', 11);
printf("%s\n", tablica);
return 0;
}
Przykad 2
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
int main (void)
{
int rozmiar = 200;
char *x;
x = (char *)malloc(rozmiar);
x = memset(x, '-', rozmiar);
printf("%s\n", x);
printf("%c\n", *(x+10));
return 0;
}
Omwienie programu 1 Jedenacie pocztkowych bajtw tablicy znakw zostanie zmienionych za pomoc funkcji memset.
Na tych pozycjach funkcja ta ustawia znak podany jako drugi argument, czyli po wydrukowaniu zobaczymy na pocztku jedenacie kropek.
Omwienie programu 2 W tym programie alokujemy pami, a konkretnie 200 bajtw, na ktrych potem ustawiamy znaki '-'.
Aby odwoa si do pojedynczego znaku, mona zrobi to tak jak w ostatniej instrukcji printf, gdzie wycigamy warto, ktra kryje si na 10 pozycji wzgldem punku bazowego zarezerwowanej pamici.
10.16 time.h Nagwek time.h zawiera wszystkie informacje potrzebne do pobierania daty oraz wywietlania informacji o czasie. Zestawienie zawartoci omawianego nagwka znajduje si w poniszej tabeli.
Nazwa             Opis
Manipulacja czasem
clock             Funkcja clock
difftime          Zwracanie rnicy pomidzy czasem
mktime            Konwersja struktury tm do time_t
time              Pobieranie aktualnej godziny
Konwersje
asctime           Konwertowanie struktury tm do cigu znakw
ctime             Konwertowanie wartoci time_t do cigu znakw
gmtime            Konwertowanie time_t do tm jako czas UTC
localtime         Konwertowanie time_t do tm jako czas lokalny
strftime          Formatowanie czasu do cigu znakw
MAKRA
CLOCKS_PER_SEC    Taktowanie zegara na sekundy
NULL              Wskanik NULL
TYPY DANYCH
clock_t           Typ clock
size_t            Dodatnia liczba cakowita
time_t            Typ time
struct tm         Struktura tm
Tabela 10.16.1 Zestawienie nagwka time.h
10.16.1 Manipulacja czasem
clock_t clock (void)
Opis Funkcja clock pobiera czas procesora, ktry by potrzebny do wykonania okrelonego zadania. Aby sprawdzi ile wykonywa si np. jaki algorytm sortowania moemy uy tej wanie funkcji. Funkcj clock umieszczamy przed oraz po operacji, ktr chcemy zmierzy. Rnica tych dwch wartoci podzielona przez sta CLOCKS_PER_SEC daje ilo sekund.
Parametry Brak
Zwracana warto W przypadku nie powodzenia funkcja zwraca -1. Jeli bdy nie wystpiy to zostaje zwrcona liczba taktw zegara.
Przykad
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
void bubblesort (int tab[], int rozmiar);
void losowo (int tab[], int rozmiar);
void wyswietl (int tab[], int rozmiar);
int main (void)
{
clock_t poczatek, koniec;
double sekundy;
int i;
int tab[10000];
int rozmiar = sizeof (tab) / sizeof (tab[0]);
losowo(tab, rozmiar);
wyswietl(tab, rozmiar);
poczatek = clock();
bubblesort(tab, rozmiar);
koniec = clock();
wyswietl(tab, rozmiar);
sekundy = ((double) (koniec - poczatek)) / CLOCKS_PER_SEC;
printf("Sortowanie tablicy o ilosci elementow: %d zajelo: %lf\n",
rozmiar, sekundy);
return 0;
}
void bubblesort (int tab[], int rozmiar)
{
int i, j, tmp;
for (i = 0; i < rozmiar-1; i++)
for (j = 0; j < rozmiar-1-i; j++)
if (tab[j+1] < tab[j])
{
tmp = tab[j];
tab[j] = tab[j+1];
tab[j+1] = tmp;
}
}
void losowo (int tab[], int rozmiar)
{
int i;
srand(time(NULL));
for (i = 0; i < rozmiar; i++)
tab[i] = rand() % 1000;
}
void wyswietl (int tab[], int rozmiar)
{
int i;
for (i = 0; i < rozmiar; i++)
printf("tab[%d] = %d\n", i, tab[i]);
}
Omwienie programu W tym miejscu nie bd tumaczy jak dziaa algorytm bbelkowy (bo taki zosta uyty) powiem jedynie o sposobie wyliczania czasu. A mianowicie przed wywoaniem funkcji bubblesort do zmiennej typu clock_t (poczatek) przypisujemy wywoanie funkcji clock, tu po bubblesort wywoujemy t funkcj po raz drugi przypisujc j do innej zmiennej. Rnica pomidzy kocow,
a pocztkow wartoci podzielona przez CLOCKS_PER_SEC daje ilo sekund. Aby wynik by liczb rzeczywist naley zrzutowa j na typ rzeczywisty, co zostao uczynione.
double difftime (time_t time2, time_t time1)
Opis Funkcja obliczajca rnic w sekundach pomidzy time1, a time2.
Parametry Pierwszym parametrem jest zmienna przechowywujca pniejszy z dwch czasw, drugim parametrem jest czas wczeniejszy.
Zwracana warto Funkcja zwraca rnic czasw (time2 time1) jako liczb rzeczywist.
Przykad
#include <stdio.h>
#include <time.h>
int main (void)
{
time_t time1, time2;
double wynik;
char name[50];
time (&time1);
printf("Wpisz swoje imie: ");
fgets(name, sizeof (name), stdin);
time (&time2);
wynik = difftime (time2, time1);
printf("\nWitaj, %sWpisanie imienia zajelo Ci zaledwie: %.2lfs\n",
name, wynik);
return 0;
}
time_t mktime (struct tm *timeptr)
Opis Funkcja konwertujca struktur tm do obiektu typu time_t.
Parametry Parametrem jest wskanik do struktury tm.
Zwracana warto Funkcja zwraca ilo sekund, ktre upyny od pewnego dnia roku jako obiekt typu time_t. Ten dzie jest sprecyzowany w strukturze wskazywanej przez timeptr. Jeli wystpi bd i data nie moe zosta przedstawiona warto -1 jest zwracana.
Przykad
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
void uzupelnijReszte (struct tm *tim);
void pobierzLiczbe (char *opis, char *buf, int *l);
int main (void)
{
struct tm tim;
char buffor[25];
pobierzLiczbe("Dzien", buffor, &tim.tm_mday);
pobierzLiczbe("Miesiac", buffor, &tim.tm_mon);
pobierzLiczbe("Rok", buffor, &tim.tm_year);
tim.tm_year -= 1900;
tim.tm_mon -= 1;
uzupelnijReszte(&tim);
if (mktime(&tim) == -1)
fprintf(stderr, "Bledna data\n");
else
{
strftime(buffor, sizeof (buffor), "%A", &tim);
printf("%s\n", buffor);
}
return 0;
}
void pobierzLiczbe (char *opis, char *buf, int *l)
{
printf("%s: ", opis);
fgets(buf, 5, stdin);
*l = atoi(buf);
}
void uzupelnijReszte (struct tm *tim)
{
tim->tm_hour = 0;
tim->tm_min = 0;
tim->tm_sec = 1;
tim->tm_isdst = -1;
}
Omwienie programu W programie posiadamy trzy funkcj. Funkcja pobierzLiczbe odpowiada za pobranie dnia, miesica oraz roku. Pierwszym parametrem jest cig znakw, ktry zostaje wywietlony podczas wprowadzania danych. Drugim parametrem jest tablica znakw, do ktrej bdziemy zapisywa te liczby, natomiast trzecim jest wskanik do struktury tm. Po trzykrotnym wywoaniu funkcji, od liczby lat wprowadzonej przez uytkownika odejmujemy 1900, a od liczby miesica odejmujemy 1 (aby dowiedzie si czemu tak zobacz struktur tm w punkcie 10.16.4 liczba lat liczona jest od 1900 roku, liczba miesicy jest liczona od 0). Funkcja uzupelnijReszte uzupenia pozostae pola struktury tm, jest to istotne,
poniewa mktime oblicza ilo sekund od tej daty, dlatego wszystkie pola musz by wypenione.
mktime zwrci -1 jeli wpisane przez uytkownika wartoci nie bd mogy zosta zamienione na odpowiedni dat. Jeli wszystko poszo zgodnie z oczekiwaniami, to zmienna typu strukturowego tim zostaa uzupeniona o konkretn dat, a nastpnie za pomoc funkcji strftime przeksztacona do czytelnej wersji (tutaj po prostu nazwa dnia zobacz tabel w funkcji strftime).
time_t time (time_t *timer)
Opis Funkcja pobiera informacje o czasie, a konkretnie o iloci sekund, ktra upyna od 1 stycznia 1970 roku. Jeli jako parametr podamy NULL, to po prostu zostanie zwrcona warto sekund. Jeli jako argument podamy adres do zmiennej, to rwnie warto sekund zostanie zwrcona i jednoczenie przypisana do tej zmiennej, dziki czemu w difftime, moglimy obliczy rnic.
Parametry NULL, lub adres zmiennej typu time_t.
Zwracana warto Ilo sekund, ktra upyna od 1 stycznia 1970 roku od godziny 00:00.
Przykad
#include <stdio.h>
#include <time.h>
int main (void)
{
time_t sekundy = time(NULL);
double minuty = sekundy / 60.0;
double godziny = minuty / 60;
double doby = godziny / 24;
double lata = doby / 365;
printf("%ld tyle sekund uplynelo od 01/01/1970\n", sekundy);
printf("%.2lf tyle minut\n", minuty);
printf("%.2lf tyle godzin\n", godziny);
printf("%.2lf tyle dob\n", doby);
printf("%.2lf tyle lat\n", lata);
return 0;
}
10.16.2 Konwersje
char *asctime (const struct tm *tmptr)
Opis Funkcja konwertujca zawarto struktury tm, wskazywanej przez tmptr do wersji dla ludzi, czyli takiej bardziej zrozumiaej.
Parametry Parametrem jest wskanik do struktury tm.
Zwracana warto Funkcja zwraca w postaci cigu znakw dat wraz z godzin w formacie zrozumiaym dla ludzi. A mianowicie:
DTG MSC DD GG:MM:SS RRRR DTG dzie tygodnia, MSC miesic, DD dzie, GG godzina, MM minuta, SS sekunda,
RRRR rok.
Przykad Przykad znajduje si przy opisie funkcji localtime oraz gmtime.
char *ctime (const time_t *timer)
Opis Funkcja jest bardzo podobna do funkcji asctime. Konwertuje obiekt typu time_t wskazywany przez timer do czytelnej postacji. Zwracana warto jest w takim samym formacie co asctime.
Parametry Parametrem jest wskanik do obiektu typu time_t.
Zwracana warto Funkcja zwraca dat jako cig znakw w postaci identycznej jak asctime.
Przykad
#include <stdio.h>
#include <time.h>
int main (void)
{
time_t tNow;
time (&tNow);
char *w;
w = ctime (&tNow);
printf("%s", w);
return 0;
}
struct tm *gmtime (const time_t *timer)
Opis Funkcja konwertujca obiekt typu time_t do struktury tm jako czas UTC (GMT timezone).
Parametry Wskanik do obiektu typu time_t.
Zwracana warto Funkcja zwraca wskanik do struktury tm.
Przykad
#include <stdio.h>
#include <time.h>
int main (void)
{
time_t tNow;
struct tm *tInfo;
time (&tNow);
tInfo = gmtime (&tNow);
printf("%s", asctime(tInfo));
return 0;
}
struct tm *localtime (const time_t *timer)
Opis Funkcja konwertujca obiekt typu time_t do struktury tm jako czas lokalny.
Parametry Wskanik do obiektu typu time_t.
Zwracana warto Funkcja zwraca wskanik do struktury tm.
Przykad
#include <stdio.h>
#include <time.h>
int main (void)
{
time_t tNow;
struct tm *tInfo;
time (&tNow);
tInfo = localtime(&tNow);
printf("%s", asctime(tInfo));
return 0;
}
size_t strftime (char *ptr, size_t max, const char *format, const struct tm *tmptr)
Opis Funkcja kopiuje do miejsca wskazywanego przez ptr zawarto wskazywan przez format, w ktrej mog wystpi przeksztacenia, ktre uzupeniane s za pomoc tmptr. max ogranicza ilo kopiowanych znakw.
Parametry Pierwszy parametr to wskanik do tablicy, drugim parametrem jest ilo kopiowanych znakw, trzeci to format w jakim zostan zapisane dane w tablicy, a czwarty to wskanik do struktury tm. Poniej znajduje si tabela, w ktrej znajduj si przeksztatniki uywane w tekcie wskazywanym przez format.
Przeksztatnik   Oznaczenie                                                        Przykad
%a              Skrcona nazwa dnia tygodnia *                                    Sat
%A              Pena nazwa dnia tygodnia *                                       Saturday
%b              Skrcona nazwa miesica *                                         Oct
%B              Pena nazwa miesica *                                            October
%c              Reprezentacja daty i czasu *                                      Sat Oct 23 23:02:29 2010
%d              Dzie miesica (01 - 31)                                           23
%H              Godzina format 24h                                                18
%I              Godzina format 12h                                                8
%j              Dzie roku (001 - 366)                                             231
%m              Numer miesica (01 - 12)                                           4
%M              Minuta (00 - 59)                                                  45
%p              AM lub PM                                                         AM
%S              Sekundy (00 - 59)                                                 32
%U              Numer tygodnia - Niedziela pierwszym dniem tygodnia (00 - 53)     52
%w              Numer dnia tygodnia (0 - 6), 0 - Niedziela                        3
%W              Numer tygodnia - Poniedziaek pierwszym dniem tygodnia (00 - 53)  32
%x              Reprezentacja daty *                                              10/23/10
%X              Reprezentacja czasu *                                             23:21:59
%y              Rok - ostatnie dwie cyfry                                         99
%Y              Rok - pena reprezentacja roku                                    1999
%Z              Nazwa strefy czasowej, lub skrt                                  CEST
%%              Znak %                                                            %
Pozycje oznaczone gwiazdk (*) zale od lokalnych ustawie.
Zwracana warto Jeli wszystkie znaki wliczajc w to \0 z format zostay skopiowane do ptr, to ilo skopiowanych znakw (bez znaku \0) zostaje zwrcona. W przeciwnym wypadku zero jest zwracane.
Przykad
#include <stdio.h>
#include <time.h>
#define MAX 1000

int main (void)
{
char buffor[MAX];
char *form = " %a\n %A\n %b\n %B\n %c\n %d\n %Z\n %W\n %X\n";
time_t timeNow;
struct tm *timeInfo;
time (&timeNow);
timeInfo = localtime (&timeNow);
strftime (buffor, sizeof (buffor), form, timeInfo);
printf("%s\n", buffor);
return 0;
}
10.16.3 Makra
CLOCKS_PER_SEC Makro CLOCKS_PER_SEC odpowiada za reprezentacj iloci taktw zegara na sekund. Dzielc warto uzyskan za pomoc funkcji clock, przez to makro uzyskamy ilo sekund. Przykad znajduje si w punkcie 10.16.1.
NULL Makro NULL jest z reguy uywane w celu oznaczenia, e wskanik nie wskazuje na aden obiekt.
10.16.4 Typy danych clock_t
Typ danych zdolny przechowywa ilo taktw zegara oraz wspiera operacje arytmetyczne. Taki typ danych zwracany jest przez funkcj clock.
size_t Typ danych odpowiadajcy dodatniemu cakowitemu typowi danych. Operator sizeof zwraca dane tego typu.
time_t Typ danych zdolny przechowywa czas, oraz umoliwia wykonywanie operacji arytmetycznych. Ten typ danych zwracany jest przez funkcj time oraz uywany jako parametr nie ktrych funkcji z nagwka time.h.
struct tm Struktura zawierajca kalendarz oraz dat podzielon na czci na poszczeglne pola. W strukturze tej znajduje si dziewi pl typu int.
Nazwa      Znaczenie                                   Zasig
tm_sec     Sekundy (po minucie)                        0 - 59
tm_min     Minuty (po godzinie)                        0 - 59
tm_hour    Godziny liczone od pnocy                   0 - 23
tm_mday    Dzie miesica                               1 - 31
tm_mon     Miesic (liczony od stycznia)                0 - 11
tm_year    Rok (liczony od 1900)                       -
tm_wday    Dzien tygodnia (liczony od niedzieli)       0 - 6
tm_yday    Dzie roku (liczony od 1 stycznia)           0 - 365
tm_isdst   Czy obowizuje czas letni (DST)?             1 lub 0
Tabela 10.16.4.1 Struktura tm
#include <stdio.h>
#include <time.h>
int main (void)
{
time_t tNow;
struct tm *tInfo;
time (&tNow);
tInfo = localtime (&tNow);
printf("%d:%d:%d\n", tInfo->tm_hour, tInfo->tm_min, tInfo->tm_sec);
printf("%d/%d/%d\n", tInfo->tm_mday, tInfo->tm_mon, tInfo->tm_year
+1900);
printf("%d %d %d\n", tInfo->tm_wday, tInfo->tm_yday,tInfo->tm_isdst);
return 0;
}
Omwienie programu Najpierw definiujemy zmienn tNow, w ktrej bdziemy przechowywa ilo sekund, ktre upyney od 01/01/1970. Nastpnie tworzymy wskanik do struktury tm tInfo. Za pomoc funkcji time uzupeniamy zmienn tNow o te sekundy oraz do tInfo przypisujemy wywoanie localtime z konkretn iloci sekund. Od teraz moemy odwoywa si do wszystkich pl struktury. Ciekawostka moe by przy wywietlaniu roku, poniewa liczba wywietlona liczona jest od 1900 roku, tak wic dostalibymy warto 110, co aktualnym rokiem nie jest.
11 MySQL Integracja programu z baz danych Aby mie moliwo poczenia programu z baz danych MySQL musimy uy specjalnej biblioteki mysql.h. Problem polega na tym, e trzeba mie jeszcze zainstalowan baz danych MySQL.
W dodatku C opisane jest jak si z tym upora, przedstawiony jest zestaw podstawowych polece SQL oraz przykad bazy danych, ktry moe pomc w zrozumieniu niniejszego listingu. W tym miejscu przedstawiony jest przykad wraz z opisem za co odpowiadaj poszczeglne czci programu.
#include <stdio.h>
#include <mysql.h>
#include <stdlib.h>
void usage (char *filename, int n);
void initDatabase (void);
void showData (void);
char *servername, *username, *password, *database;
MYSQL *connection;
MYSQL_RES *result;
MYSQL_ROW row;
int main (int argc, char *argv[])
{
char *filename = argv[0];
usage(filename, argc);
servername = argv[1];
username = argv[2];
password = argv[3];
database = argv[4];
initDatabase(); // Nawiazanie polaczenia
showData();
return 0;
}
void usage (char *filename, int n)
{
if (n != 5)
{
fprintf(stderr, "Uzycie: %s servername username password database\n", filename);
exit (-1);
}
}
void initDatabase (void)
{
connection = mysql_init(NULL);
if (!mysql_real_connect(connection, servername, username, password,
database, 0, NULL, 0))
{
fprintf(stderr, "%s\n", mysql_error(connection));
exit (-1);
}
}
void showData (void)
{
char command[250];
int i;
snprintf(command, sizeof (command), "SELECT * FROM `%s`.`tallest`;",
database);
if (mysql_query(connection, command))
{
fprintf(stderr, "%s\n", mysql_error(connection));
exit (-1);
}
result = mysql_use_result(connection);
while ((row = mysql_fetch_row(result)) != NULL)
{
for (i = 0; i < 7; i++)
printf("%s ", row[i]);
printf("\n");
}
mysql_free_result(result);
mysql_close(connection);
}
Listing 11.1 Uycie C i MySQL Na pierwszy rzut oka program moe i zajmuje wicej miejsca ni mona byo si spodziewa, lecz w gruncie rzeczy nie jest to nic nowego. Jedyn now rzecz s funkcje MySQL i sposb ich uycia.
Reszta zostaa przedstawiona w poprzednich rozdziaach, nie mniej jednak opisz dziaanie programu krok po kroku, lecz najpierw sposb kompilacji i uruchomienia naszego programu:
gcc main.c -o main $(mysql_config --cflags) $(mysql_config --libs)
./main localhost root TUTAJ_WPISZ_SWOJE_HASLO building
We create global variables, or more precisely global pointers to character types, so that we do not have to pass all the required arguments to every function that operates on the database. At the beginning of main we create the pointer filename, which will point to the program name. To the usage function we pass the program name as the first parameter and the number of arguments as the second; if it differs from five, we terminate the program with the exit function (a small reminder: if we used a return statement in usage, control would go back to main, and that would be a mistake, which is why exit, which terminates the whole program, is used instead). In the next four lines we assign to the global pointers the arguments that were received when the program was invoked. The initDatabase function establishes the connection with the database. The mysql functions used inside it are new, so they are worth discussing. To the pointer of type MYSQL named connection, which was created globally, we assign the result of calling mysql_init, which initializes the connection handle. If mysql_real_connect returns zero, the connection failed and the program stops with an appropriate message (mysql_error). Displaying the information is the most interesting part.
First of all, the mysql_query function takes a pointer to the connection (connection) as its first argument and an SQL statement as its second. A small trick with snprintf is used here: with it we write into the array a string of characters together with variables (here, the database name). The second argument of mysql_query is therefore this array, which contains the SQL statement. If the function returns a non-zero value, the program terminates with an appropriate message. If everything went as expected, the result of the last executed statement is assigned to the variable result. The variable row can be treated as an array holding a whole record, and we refer to the individual columns like array elements, i.e. indexing them from zero. Since our example database contains seven columns, we print all seven elements of the row array. Finally we free result and close the connection to the database.
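Hard-coding the seven columns works for the tallest table, but the column count can also be read from the result set itself with mysql_num_fields. The sketch below is a hypothetical variant of showData (it reuses the global connection, result and row variables from the listing) that works for a table of any width and also guards against SQL NULL values.
void showDataAnyWidth (void)
{
    unsigned int i, num_fields;
    if (mysql_query(connection, "SELECT * FROM `tallest`;"))
    {
        fprintf(stderr, "%s\n", mysql_error(connection));
        exit (-1);
    }
    result = mysql_use_result(connection);
    num_fields = mysql_num_fields(result);        /* number of columns in the result */
    while ((row = mysql_fetch_row(result)) != NULL)
    {
        for (i = 0; i < num_fields; i++)
            printf("%s ", row[i] ? row[i] : "NULL");   /* row[i] is NULL for SQL NULL */
        printf("\n");
    }
    mysql_free_result(result);
}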
Appendix A. Here I would like to present the basic Linux commands needed to get to the directory that holds our source code (because that is what we care about), and to compile and run a program. So let us start by opening a terminal, which looks as follows (Fig. A.1).
Fig. A.1 The terminal prompt. Briefly, the parts of the prompt can be described as follows:
gruby - the user name
earth - the host name
~/Desktop/test/codes/stat - the directory we are currently in
$ - the prompt character (for entering commands)
The table below lists the basic Linux commands we will need.
Command Description
cd Change directory
mkdir Create a directory
rm Remove files / directories
ls List the contents of a directory
gcc Compile programs
chmod Set permissions
chown Set the file owner
Table A.1 Basic Linux commands needed to compile and run a program.
Although the compilation process itself requires just one command, it is worth knowing the other basic commands too, because they can make working in the console easier. The programs written in this book are meant to be run from the terminal.
A.1 Changing the directory. To change the directory in the terminal, use the cd command. It takes as its argument either an absolute path or a relative path (counted from the directory we are currently in). The example below demonstrates how to get to the desktop. We do not type the $ character - it stands for the prompt; we type the command that follows the dollar sign.
$ cd /home/gruby/Desktop/
Where instead of gruby you type your own user name. If there is no Desktop directory, it may exist under the Polish name Pulpit. A second, alternative method is to type the command in the following form.
$ cd ~/Desktop/
Let us make the following assumptions: we are on the desktop, and on the desktop there is a directory named codes. To enter it we can type the absolute path, as shown below.
$ cd /home/gruby/Desktop/codes
Or type the relative path, as shown below.
$ cd codes
Moving to the parent directory is done as follows.
$ cd ..
A.2 Creating a directory. To create a directory we use the mkdir command. The example below creates a directory named c_codes inside the directory we are currently in.
$ mkdir c_codes
A.3 Removing files and directories. Removing a file is simply a matter of issuing the rm command. An empty directory can be removed with rmdir, while a directory with contents requires rm -r. Removing the file main.c (if the file exists in the directory we are in) looks as follows.
$ rm main.c
Removing a directory together with its contents:
$ rm -r c_codes
A.4 Listing the contents of a directory. Listing the contents of a directory is used very often, so I will show how it is done and describe some of the information the terminal prints. With the ls command, without any extra options, we get a listing of the contents with no additional information about the files / directories. Suppose we list the contents of the c_codes directory, which contains files with the .c extension.
$ ls c_codes
The terminal will print
plik1.c plik2.c plik3.c plik4.c plik5.c plik6.c
A more important option of this command is -l, with which we can learn more about a file.
Below, the information about the file plik1.c is shown. The command looks as follows.
$ ls -l plik1.c
The terminal will print
-rw-r--r-- 1 gruby gruby 0 2010-07-13 22:56 plik1.c
I will not describe all the columns, only the ones we need, namely the first column, which contains the characters -rw-r--r--. They mean that the owner of the file may read and write it, while the group and everyone else may only read it. This matters because if we want to run a file that does not have the execute permission (x), the operation is not possible.
Section A.6 shows how to change permissions. The third and fourth columns denote the owner and the group of the file, respectively.
A.5 Compiling programs. Compiling source-code files is done as follows: in the terminal we type one of the commands below.
$ cc file_name.c
If the compiler does not detect any errors, the compilation goes through without problems and the resulting file is named a.out. We run such a file with the command:
$ ./a.out
The program can also be compiled differently, i.e. the output file can be given a specific name, with the command:
$ gcc file_name.c -o output_file_name
If our program consists of several source files, we list them as follows:
$ gcc plik1.c plik2.c plik3.c -o output_file_name
Files compiled with the gcc command are run just as shown earlier, with the difference that we type the name we chose ourselves, i.e.:
$ ./output_file_name
A.6 Setting permissions. To set permissions on a file, use the chmod command. To add execute permission (for everyone) use the +x option; to take it away, use -x.
It is analogous for reading a file (+r / -r) and writing a file (+w / -w). I will not go into the details of chmod here; the point is to have a rough idea of why a file cannot be run and how to fix it. To make a file executable we type:
$ chmod +x executable_file_name
We can run this command if we are the owner of the file or have administrator privileges.
A.7 Setting the owner. Changing the owner of a file is done with the chown command. To change the owner of the file statystyki, type the command like this.
$ chown gruby statystyki
Where instead of gruby you type the name of the user who is to become the owner of the file. For this command, too, you need the appropriate rights to change the file owner (either the file belongs to us, or we have administrator privileges).
Appendix B. In this appendix I want to discuss the Linux time command, which checks how long a program took to run and reports information about the resources it used.
B.1 The system shell. In principle, typing a program's name in the console should run that program, and usually that is the case. The catch is that if we use the bash shell, which is currently the most popular shell, typing the command time --version displays:
--version: command not found
real 0m0.246s
user 0m0.168s
sys 0m0.064s
This happens because the bash shell has a built-in facility for measuring program execution time, but it is limited: it prints only the three lines visible above.
The problem does not occur in the sh and ksh shells. In csh the problem also occurs, only a slightly different message is displayed.
In all shells, however, calling the program directly works. Typing the command below in the console gives us the version of the GNU time program.
$ /usr/bin/time --version
B.2 The time command, formatting the output. With the time command we can obtain a great deal of useful information about the resources used once a program has finished. The simplest way to use the time program built into the bash shell is to run the command:
$ time ./program_name
The result should look like this:
real 0m2.539s
user 0m2.448s
sys 0m0.012s
Since this is the program built into the shell, its capabilities are limited to what is shown above. To get access to all the options and to the ability to format the printed text, we have to call the time program in the following way.
$ /usr/bin/time ./program_name
The system manual page (in the console, type: man time) describes fairly precisely what each format specifier does, so rather than go on at length, I will show how to use this command with options and how to format the printed text.
To get most of the information the time program is able to display, use the --verbose (-v) option. Typing in the console:
$ /usr/bin/time --verbose ./shell
displays the following information:
Command being timed: "./shell"
User time (seconds): 2.45
System time (seconds): 0.01
Percent of CPU this job got: 97%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:02.53
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 0
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 2559
Voluntary context switches: 1
Involuntary context switches: 249
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
The printed text is formatted with the --format (-f) option. In the system manual, in the section FORMATTING THE OUTPUT, you will find the resource specifiers: letters preceded by a percent sign (%). As an example, let us print the total number of seconds the program spent directly in user mode and in kernel mode. The first uses the %U specifier, the second %S. Both commands below print the same thing (the execution times may of course differ).
$ /usr/bin/time --format="%U\t%S" ./program_name
$ /usr/bin/time -f "%U\t%S" ./program_name
\t, \n and \\ are allowed escape sequences, which stand for a tab, a newline, and printing a backslash (\), respectively.
Appendix C. In this appendix I discuss the installation of the MySQL server and a few steps needed so that the mysql.h library is visible and can be linked with our program. The basic SQL commands with which we will create the database needed for Chapter 11 can be found later in this appendix.
C.1 Installing MySQL. Depending on which distribution you have, the installation procedure may differ. For Debian and Ubuntu, installing MySQL comes down to running the command below with administrator privileges.
# apt-get install mysql-server
During installation the wizard will ask you to enter a password. This is important, because you will need that password later. After installation three steps are required. The libmysqlclient-dev library comes with a special script named mysql_config, thanks to which we will be able to link our program with the database. The first command we have to run installs that library:
# apt-get install libmysqlclient-dev
Then we run the commands below.
$ mysql_config --libs
The terminal should print more or less the following response
-Wl,-Bsymbolic-functions -rdynamic -L/usr/lib/mysql -lmysqlclient
The last command is
$ mysql_config --cflags
To which the terminal responds more or less like this
-I/usr/include/mysql
-DBIG_JOINS=1
-fno-strict-aliasing
-DUNIV_LINUX
After these steps we will be able to compile the program as shown in Chapter 11.
C.2 Basic MySQL commands. To be able to use MySQL, we type in the terminal
$ mysql -u root -p
Then we enter the password and are logged in to the database as the administrator. As shown in the figure below, we cannot get much out of it unless we have some idea of what we are doing.
Figure C.1 The MySQL console.
First of all, if we want to start working on some database, we have to select it. But how do we do that if we do not know which databases exist? The following command comes to the rescue:
mysql> show databases;
By default there should be two databases, namely information_schema and mysql. We pick mysql - but how exactly do we select it? This is where the use command comes in:
mysql> use mysql;
The terminal will inform us that the database has been changed. Now, to see which tables are available in this database, we use the command:
mysql> show tables;
Now, to see what data is in, for example, the help_category table, we type the command:
mysql> select * from help_category;
This could be read as: select something from something. In our case: select everything from help_category. The asterisk is a wildcard, so it means everything, i.e. all the records in the given table.
To narrow our query down to the first ten records, we have to add an extra clause to the select statement, namely:
mysql> select * from help_category where help_category_id <= 10;
Where help_category_id is the first column of our table. To check which columns our table has, we type:
mysql> show columns from help_category;
To make the result of our query even narrower, specifically to fall within a given range, we have to use a trick that you already know from the chapter on logical operators. To display all the records whose names begin with letters in the range <A;G>, we type the following command:
mysql> select * from help_category where name >= 'A' and name <= 'G';
To sort the data in ascending or descending order, we use the following command:
mysql> select * from help_category where help_category_id <= 10 order by help_category_id desc;
We give the condition for which the information should be displayed, then type order by followed by the column name and the keyword desc or asc, which sort in descending or ascending order respectively. So, to display all the records whose help_category_id is at least 5 and at most 15, sorted by name in descending order, we type:
mysql> select * from help_category where help_category_id >= 5 and help_category_id <= 15 order by name desc;
So as not to interfere with this database, we will create a new one, to which we can add new tables, fill them with some number of records, delete them, and perform various operations on the database. Let us start, then, by creating a new database, which we will call building.
mysql> create database `building`;
After creating the database we have to create a table in which the records will be stored. When creating the table we give a list of columns along with the appropriate parameters; a description follows the command.
mysql> CREATE TABLE `building`.`tallest` (
-> `id` INT NOT NULL AUTO_INCREMENT ,
-> `category` VARCHAR( 40 ) NOT NULL ,
-> `structure` VARCHAR( 40 ) NOT NULL ,
-> `country` VARCHAR( 40 ) NOT NULL ,
-> `city` VARCHAR( 40 ) NOT NULL ,
-> `height` FLOAT NOT NULL ,
-> `year` SMALLINT NOT NULL ,
-> PRIMARY KEY ( `id` ));
If we do not put a semicolon at the end, we move to a new line that starts with an arrow and type the rest of the command there. This is useful when the command is quite long, because it becomes much more readable. All SQL commands can be written in upper or lower case; it is a matter of taste. We create the table by giving, in the first line, the database name and, after the dot, the table name. After the opening parenthesis we list the column names and their attributes. This is not the place to dwell on what exactly they mean and why, but briefly: the first column is id, which takes integer values and has AUTO_INCREMENT, i.e. the counter is increased automatically. The next four columns are of type VARCHAR, which means the text can be at most as long as the value given in parentheses, but if the full length is not used, the stored size is reduced.
The height column is of a floating-point type, and the last column, year, is of type SMALLINT.
Adding a record is done as follows:
mysql> INSERT INTO `building`.`tallest` (`id`, `category`, `structure`,
`country`, `city`, `height`, `year`) VALUES (NULL, 'Concrete tower',
'Guangzhou TV & Sightseeing Tower', 'China', 'Guangzhou', '610', '2009');
There is not much to explain here either; it is simply the pattern we follow when adding records to the database. The only thing worth mentioning is that in place of the value of the id column we put NULL, because the column has AUTO_INCREMENT, so the value is increased automatically.
Deleting records is very simple if we know a record's id, and since we do, we can use the following command:
mysql> DELETE FROM `building`.`tallest` WHERE id = X;
Where X is of course some integer value identifying the record. Editing the values of a given record is done with the following command:
mysql> UPDATE `building`.`tallest` SET `year` = '2010' WHERE
`tallest`.`id` = X;
This can be read as follows: in the building database, in the tallest table, update the year by assigning it the value '2010' where the id of the record equals X.
To delete a table, type the command below. Note, however, that the deletion cannot be undone, so think twice before making that decision.
mysql> DROP TABLE `building`.`tallest`;
It is analogous for deleting the whole database.
mysql> DROP DATABASE building;
Package 'ordbetareg'
August 10, 2023
Type Package
Title Ordered Beta Regression Models with 'brms'
Version 0.7.2
Description Implements ordered beta regression models, which are for modeling continuous vari-
ables with upper and lower bounds, such as
survey sliders, dose-response relationships and indexes. For more information, see
Kubinec (2022) <doi:10.31235/osf.io/2sx6y>. The package is a front-
end to the R package 'brms', which
facilitates a range of regression specifications, including hierarchical, dynamic and
multivariate modeling.
BugReports https://github.com/saudiwin/ordbetareg_pack/issues
License MIT + file LICENSE
Encoding UTF-8
LazyData true
LazyDataCompression xz
RoxygenNote 7.2.3
Depends R (>= 3.5), brms (>= 2.18.0), stats
Imports transformr, dplyr, ggplot2 (>= 3.4.0), gganimate, tidyr
Suggests rmarkdown, knitr, gt, modelsummary (>= 1.4.1),
marginaleffects (>= 0.10.0), haven, stringr, Hmisc, collapse,
ggthemes, glmmTMB, mice, bayestestR
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0001-6655-4119>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-08-10 07:30:02 UTC
R topics documented:
dordbeta
fit_imputed
fit_multivariate
normalize
ordbetareg
ord_fit_mean
ord_fit_phi
pew
pp_check_ordbeta
rordbeta
sim_data
sim_ordbeta
dordbeta Probability Density Function for the Ordered Beta Distribution
Description
This function will return the density of given variates of the ordered beta distribution conditional on
values for the mean (mu), dispersion (phi) and cutpoints governing the ratio of degenerate (discrete)
to continuous responses.
Usage
dordbeta(x = 0.9, mu = 0.5, phi = 1, cutpoints = c(-1, 1), log = FALSE)
Arguments
x Variates of the ordered beta distribution (should be in the [0,1] interval).
mu Value of the mean of the distribution. Should be in the (0,1) interval (cannot
be strictly equal to 0 or 1). If length is greater than 1, should be of length x.
phi Value of the dispersion parameter. Should be strictly greater than 0. If length is
greater than 1, should be of length x.
cutpoints A vector of two numeric values for the cutpoints. The second value should be strictly greater than the first value.
log Whether to return the log density (default: FALSE).
Value
Returns a vector of length x of the density of the ordered beta distribution conditional on mu and
phi.
Examples
# examine density (likelihood) of different possible values
# given fixed values for ordered beta parameters
x <- seq(0, 1, by=0.01)
x_dens <- dordbeta(x, mu = 0.3, phi=2, cutpoints=c(-2, 2))
# Most likely value for x is approx 1
# Note discontinuity in density function between continuous/discrete values
# density function is a combined PMF/PDF, so not a real PDF
# can though be used for MLE
plot(x_dens, x)
# discrete values should be compared to each other:
# prob of discrete 0 > prob of discrete 1
x_dens[x==0] > x_dens[x==1]
fit_imputed Fitted Ordered Beta Regression Model (Imputed Datasets)
Description
A fitted ordered beta regression model on multiple imputed datasets generated by the package mice.
Usage
fit_imputed
Format
an ordbetareg object
fit_multivariate Fitted Ordered Beta Regression Model (Multivariate regression)
Description
A fitted ordered beta regression model with two responses, one an ordered beta regression and the
other a Gaussian/Normal outcome. Useful for examining mediation analysis.
Usage
fit_multivariate
Format
an ordbetareg object
normalize Normalize Outcome/Response to [0,1] Interval
Description
This function takes a continuous (double) column of data and converts it to have 0 as the lower
bound and 1 as the upper bound.
Usage
normalize(outcome, true_bounds = NULL)
Arguments
outcome Any non-character vector. Factors will be converted to numeric via coercion.
true_bounds Specify this parameter with the lower and upper bound if the observed min/max
of the outcome should not be used. Useful when an upper or lower bound exists
but the observed data is less than/more than that bound. The normalization
function will respect these bounds.
Details
Beta regression can only be done with a response that is continuous with a lower bound of 0 and
an upper bound of 1. However, it is straightforward to transform any lower and upper-bounded
continuous variable to the [0,1] interval. This function does the transformation and saves the
original bounds as attributes so that the bounds can be reverse-transformed.
Value
A numeric vector with an upper bound of 1 and a lower bound of 0. The original bounds are saved
in the attributes "lower_bound" and "upper_bound".
Examples
# set up arbitrary upper and lower-bounded vector
outcome <- runif(1000, min=-33, max=445)
# normalize to [0,1]
trans_outcome <- normalize(outcome=outcome)
summary(trans_outcome)
# only works with numeric vectors and factors
try(normalize(outcome=c('a','b')))
ordbetareg Fit Ordered Beta Regression Model
Description
This function allows you to estimate an ordered beta regression model via a formula syntax.
The ordbetareg package is essentially a wrapper around brms that enables the ordered beta re-
gression model to be fit. This model has advantages over other alternatives for continous data with
upper and lower bounds, such as survey sliders, indexes, dose-response relationships, and visual
analog scales (among others). The package allows for all of the many brms regression modeling
functions to be used with the ordered beta regression distribution.
Usage
ordbetareg(
formula = NULL,
data = NULL,
true_bounds = NULL,
phi_reg = "none",
use_brm_multiple = FALSE,
coef_prior_mean = 0,
coef_prior_SD = 5,
intercept_prior_mean = NULL,
intercept_prior_SD = NULL,
phi_prior = 0.1,
dirichlet_prior = c(1, 1, 1),
phi_coef_prior_mean = 0,
phi_coef_prior_SD = 5,
phi_intercept_prior_mean = NULL,
phi_intercept_prior_SD = NULL,
extra_prior = NULL,
init = "0",
make_stancode = FALSE,
...
)
Arguments
formula Either an R formula in the form response/DV ~ var1 + var2 etc. or formula
object as created/called by the brms brms::bf function. Please avoid using 0 or
Intercept in the formula definition.
data An R data frame or tibble containing the variables in the formula
true_bounds If the true bounds of the outcome/response don’t exist in the data, pass a length
2 numeric vector of the minimum and maximum bounds to properly normalize
the outcome/response
phi_reg Whether you are including a linear model predicting the dispersion parameter,
phi, and/or for the response. If you are including models for both, pass option
’both’. If you only have an intercept for the outcome (i.e. a 1 in place of co-
variates), pass ’only’. If you only have intercepts for phi (such as a varying
intercepts/random effects model), pass the value "intercepts". To set priors on
these intercepts, use the extra-prior option with the brms::set_prior function
(class="sd"). If no model of any kind for phi, the default, pass ’none’.
use_brm_multiple
(T/F) Whether the model should use brms::brm_multiple for multiple imputa-
tion over multiple dataframes passed as a list to the data argument
coef_prior_mean
The mean of the Normal distribution prior on the regression coefficients (for
predicting the mean of the response). Default is 0.
coef_prior_SD The SD of the Normal distribution prior on the regression coefficients (for pre-
dicting the mean of the response). Default is 5, which makes the prior weakly
informative on the logit scale.
intercept_prior_mean
The mean of the Normal distribution prior for the intercept. By default is NULL,
which means the intercept receives the same prior as coef_prior_mean. To
zero out the intercept, set this parameter to 0 and coef_prior_SD to a very
small number (0.01 or smaller). NOTE: the default intercept in brms is cen-
tered (mean-subtracted) by default. To use a traditional intercept, either add 0 +
Intercept to the formula or specify center=FALSE in the bf formula function
for brms. See brms::brmsformula() for more info.
intercept_prior_SD
The SD of the Normal distribution prior for the intercept. By default is NULL,
which means the intercept receives the same prior SD as coef_prior_SD.
phi_prior The mean parameter of the exponential prior on phi, which determines the dis-
persion of the beta distribution. The default is .1, which equals a mean of 10 and
is thus weakly informative on the interval (0.4, 30). If the response has very low
variance (i.e. tightly clusters around a specific value), then decreasing this prior
(and increasing the expected value) may be helpful. Checking the value of phi
in the output of the model command will reveal if a value of 0.1 (mean of 10) is
too small.
dirichlet_prior
A vector of three integers corresponding to the prior parameters for the dirichlet
distribution (alpha parameter) governing the location of the cutpoints between
the components of the response (continuous vs. degenerate). The default is 1
which puts equal probability on degenerate versus continuous responses. Likely
only needs to be changed in a repeated sampling situation to stabilize the cut-
point locations across samples.
phi_coef_prior_mean
The mean of the Normal distribution prior on the regression coefficients for
predicting phi, the dispersion parameter. Only useful if a linear model is being
fit to phi. Default is 0.
phi_coef_prior_SD
The SD of the Normal distribution prior on the regression coefficients for pre-
dicting phi, the dispersion parameter. Only useful if a linear model is being fit to
phi. Default is 5, which makes the prior weakly informative on the exponential
scale.
phi_intercept_prior_mean
The mean of the Normal distribution prior for the phi (dispersion) regression
intercept. By default is NULL, which means the intercept receives the same
prior as phi_coef_prior_mean. To zero out the intercept, set this parameter to
0 and phi_coef_prior_SD to a very small number (0.01 or smaller).
phi_intercept_prior_SD
The SD of the Normal distribution prior for the phi (dispersion) regression in-
tercept. By default is NULL, which means the intercept receives the same prior
SD as phi_coef_prior_SD.
extra_prior An additional prior, such as a prior for a specific regression coefficient, added to
the outcome regression by passing one of the brms functions brms::set_prior or
brms::prior_string with appropriate values.
init This parameter is used to determine starting values for the Stan sampler to be-
gin Markov Chain Monte Carlo sampling. It is set by default at 0 because the
non-linear nature of beta regression means that it is possible to begin with ex-
treme values depending on the scale of the covariates. Setting this to 0 helps
the sampler find starting values. It does, on the other hand, limit the ability to
detect convergence issues with Rhat statistics. If that is a concern, such as with
an experimental feature of brms, set this to "random" to get more robust starting
values (just be sure to scale the covariates so they are not too large in absolute
size).
make_stancode If TRUE, will pass back the Stan code for the model as a character vector rather
than fitting the model.
... All other arguments passed on to the brm function
Details
This function is a wrapper around the brms::brm function, which is a powerful Bayesian regression
modeling engine using Stan. To fully explore the options available, including dynamic and hier-
archical modeling, please see the documentation for the brm function above. As the ordered beta
regression model is currently not available in brms natively, this modeling function allows a brms
model to be fit with the ordered beta regression distribution.
For more information about the model, see the paper here: https://osf.io/preprints/socarxiv/2sx6y/.
This function allows you to set priors on the dispersion parameter, the cutpoints, and the regression
coefficients (see below for options). However, to add specific priors on individual covariates, you
would need to use the brms::set_prior function by specifying an individual covariate (see function
documentation) and passing the result of the function call to the extra_prior argument.
This function will also automatically normalize the outcome so that it lies in the [0,1] interval,
as required by beta regression. For further information, see the documentation for the normalize
function.
Priors can be set on a variety of coefficients in the model, see the description of parameters coef_prior_mean
and intercept_prior_mean, in addition to setting a custom prior with the extra_prior option.
When setting priors on intercepts, it is important to note that by default, all intercepts in brms are
centered (the means are subtracted from the data). As a result, a prior set on the default intercept
will have a different interpretation than a traditional intercept (i.e. the value of the outcome when
the covariates are all zero). To change this setting, use the brms::bf() function as a wrapper around
the formula with the option center=FALSE to set priors on a traditional non-centered intercept.
Note that while brms also supports adding 0 + Intercept to the formula to address this issue,
ordbetareg does not support this syntax. Instead, use center=FALSE as an option to brms::bf().
To learn more about how the package works, see the vignette by using the command browseVignettes(package='ordbetareg').
For more info about the distribution, see this paper: https://osf.io/preprints/socarxiv/2sx6y/
To cite the package, please cite the following paper:
<NAME>. "Ordered Beta Regression: A Parsimonious, Well-Fitting Model for Continuous
Data with Lower and Upper Bounds." Political Analysis. 2022.
Value
A brms object fitted with the ordered beta regression distribution.
Examples
# load survey data that comes with the package
library(dplyr)
data("pew")
# prepare data
model_data <- select(pew,therm,
education="F_EDUCCAT2_FINAL",
region="F_CREGION_FINAL",
income="F_INCOME_FINAL")
# It takes a while to fit the models. Run the code
# below if you want to load a saved fitted model from the
# package, otherwise use the model-fitting code
data("ord_fit_mean")
# fit the actual model
if(.Platform$OS.type!="windows") {
ord_fit_mean <- ordbetareg(formula=therm ~ education + income +
(1|region),
data=model_data,
cores=2,chains=2)
}
# access values of the coefficients
summary(ord_fit_mean)
ord_fit_mean Fitted Ordered Beta Regression Model
Description
A fitted ordered beta regression model to the mean of the thermometer column from the pew data.
Usage
ord_fit_mean
Format
an ordbetareg object
ord_fit_phi Fitted Ordered Beta Regression Model (Phi Regression)
Description
A fitted ordered beta regression model to the dispersion parameter of the thermometer column from
the pew data.
Usage
ord_fit_phi
Format
an ordbetareg object
pew Pew American Trends Panel Wave 28
Description
A dataset with the non-missing responses for the 28th wave of the Pew American Trends Panel
survey.
Usage
pew
Format
A data frame with 140 variables and 2,538 observations.
Source
https://www.pewresearch.org/social-trends/dataset/american-trends-panel-wave-28/
pp_check_ordbeta Accurate Posterior Predictive Plots for Ordbetareg Models
Description
The standard brms::pp_check plot available via brms is not accurate for ordbetareg models because
an ordered beta regression has both continuous and discrete components. This function implements
a bar plot and a density plot for the continuous and discrete elements separately, and will return
accurate posterior predictive plots relative to the data.
Usage
pp_check_ordbeta(
model = NULL,
type = "both",
ndraws = 10,
cores = NULL,
group = NULL,
new_theme = NULL,
outcome_label = NULL,
animate = FALSE,
reverse_bounds = TRUE,
facet_scales = "fixed"
)
Arguments
model A fitted ordbetareg model.
type Default is "both" for creating both a discrete (bar) and continuous (density) plot.
Can also be "discrete" for only the bar plot for discrete values (0/1) or "continu-
ous" for continuous values (density plot).
ndraws Number of posterior draws to use to calculate estimates and show in plot. De-
faults to 10.
cores Number of cores to use to produce posterior predictive distribution. Defaults to
NULL or 1 core.
group A factor variable of the same number of rows as the data that is used to produce
grouped (faceted) plots of the posterior distribution.
new_theme Any additional themes to be added to ggplot2 (default is NULL).
outcome_label A character value that will replace the name of the outcome in the plot (default
is the name of the response variable in the data frame).
animate Whether to animate each posterior draw for continuous distributions (defaults to
FALSE).
reverse_bounds Whether to plot data using the original bounds in the data (i.e. not 0 and 1).
facet_scales The option passed on to the facet_wrap function in ggplot2 for the type of
scale for facetting if passing a variable for group. Defaults to "fixed" scales
but can be set to "free_y" to allow probability density/bar count scales to vary
or "free" to allow both x and y axes to vary (i.e., also outcome axis ticks).
Value
If "both", prints both plots and returns a list of both plots as ggplot2 objects. Otherwise, prints and
returns the specific plot as a ggplot2 object.
Examples
# need a fitted ordbetareg model
data("ord_fit_mean")
out_plots <- pp_check_ordbeta(ord_fit_mean)
# view discrete bar plot
out_plots$discrete
# view continuous density plot
out_plots$continuous
# change title using ggplot2 ggtitle function
out_plots$discrete + ggplot2::ggtitle("New title")
rordbeta Generate Ordered Beta Variates
Description
This function will generate ordered beta random variates given values for the mean (mu), dispersion
(phi) and cutpoints governing the ratio of degenerate (discrete) to continuous responses.
Usage
rordbeta(n = 100, mu = 0.5, phi = 1, cutpoints = c(-1, 1))
Arguments
n Number of variates to generate.
mu Value of the mean of the distribution. Should be in the (0,1) interval (cannot
be strictly equal to 0 or 1). If length is greater than 1, should be of length n.
phi Value of the dispersion parameter. Should be strictly greater than 0. If length is
greater than 1, should be of length n.
cutpoints A vector of two numeric values for the cutpoints. Second value should be strictly
greater than the first value.
Value
A vector of length n of variates from the ordered beta distribution.
Examples
# generate 100 random variates with an average of 0.7
# all will be in the closed interval [0,1]
ordbeta_var <- rordbeta(n=100, mu=0.7, phi=2)
# Will be approx mean = 0.7 with high positive skew
summary(ordbeta_var)
sim_data Simulated Ordered Beta Regression Values
Description
The simulated draws used in the vignette for calculating statistical power.
Usage
sim_data
Format
A dataframe
sim_ordbeta Power Calculation via Simulation of the Ordered Beta Regression
Model
Description
This function allows you to calculate power curves (or anything else) via simulating the ordered
beta regression model.
Usage
sim_ordbeta(
N = 1000,
k = 5,
iter = 1000,
cores = 1,
phi = 1,
cutpoints = c(-1, 1),
beta_coef = NULL,
beta_type = "continuous",
treat_assign = 0.5,
return_data = FALSE,
seed = as.numeric(Sys.time()),
...
)
Arguments
N The sample size for the simulation. Include a vector of integers to examine
power/results for multiple sample sizes.
k The number of covariates/predictors.
iter The number of simulations to run. For power calculation, should be at least 500
(yes, this will take some time).
cores The number of cores to use to parallelize the simulation.
phi Value of the dispersion parameter in the beta distribution.
cutpoints Value of the two cutpoints for the ordered model. By default are the values -1
and +1 (these are interpreted in the logit scale and so should not be too large).
The farther apart, the fewer degenerate (0 or 1) responses there will be in the
distribution.
beta_coef If not null, a vector of length k of the true predictor coefficients/treatment values
to use for the simulation. Otherwise, coefficients are drawn from a random
uniform distribution from -1 to 1 for each predictor.
beta_type Can be either continuous or binary. Use the latter for conventional treatments
with two values.
treat_assign If beta_type is set to binary, you can use this parameter to set the proportion of
N assigned to treatment. By default, the parameter is set to 0.5 for equal/balanced
treatment control groups.
return_data Whether to return the simulated data as a list in the data column of the returned
data frame.
seed The seed to use to make the results reproducible. Set automatically to a date-
time stamp.
... Any other arguments are passed on to the brms::brm function to control model-
ing options.
Details
This function implements the simulation found in Kubinec (2022). This simulation allows you to
vary the sample size, number & type of predictors, values of the predictors (or treatment values),
and the power to target. The function returns a data frame with one row per simulation draw and
covariate k.
Value
a tibble data frame with columns of simulated and estimated values and rows for each simulation
iteration X coefficient combination. I.e., if there are five predictors and 1,000 iterations, the resulting data frame will have 5,000 rows. If there are multiple values for N, then each value of N will
have its own set of iterations, making the final size of the data a multiple of the number of sample
sizes to iterate over. The data frame will have the following columns: 1.
Examples
# This function takes a while to run as it has
# to fit an ordered beta regression to each
# draw. The package comes with a saved
# simulation dataset you can inspect to see what the
# result looks like
data("sim_data")
library(dplyr)
# will take a while to run this
if(.Platform$OS.type!="windows") {
sim_data <- sim_ordbeta(N=c(250,750),
k=1,
beta_coef = .5,
iter=5,cores=2,
beta_type="binary",
treat_assign=0.3)
}
# to get the power values by N, simply summarize/group
# by N with functions from the R package dplyr
sim_data %>%
group_by(N) %>%
summarize(mean_power=mean(power))
Package 'fclust'
November 16, 2022
Type Package
Title Fuzzy Clustering
Version 2.1.1.1
Date 2019-09-16
Author <NAME>, <NAME>, <NAME>
Maintainer <NAME> <<EMAIL>>
Description Algorithms for fuzzy clustering, cluster validity indices and plots for cluster valid-
ity and visualizing fuzzy clustering results.
Depends R (>= 3.3), base, stats, graphics, grDevices, utils
Imports Rcpp (>= 0.12.5), MASS (>= 7.3)
LinkingTo Rcpp, RcppArmadillo (>= 0.7)
License GPL (>= 2)
ByteCompile true
Repository CRAN
NeedsCompilation yes
LazyLoad yes
Encoding UTF-8
Date/Publication 2022-11-16 16:34:44 UTC
R topics documented:
ARI.F
butterfly
cl.memb
cl.memb.H
cl.memb.t
cl.size
cl.size.H
Fclust
Fclust.compare
Fclust.index
FKM
FKM.ent
FKM.ent.noise
FKM.gk
FKM.gk.ent
FKM.gk.ent.noise
FKM.gk.noise
FKM.gkb
FKM.gkb.ent
FKM.gkb.ent.noise
FKM.gkb.noise
FKM.med
FKM.med.noise
FKM.noise
FKM.pf
FKM.pf.noise
houseVotes
Hraw
JACCARD.F
Mc
MPC
NBA
NEFRC
NEFRC.noise
PC
PE
plot.fclust
print.fclust
RI.F
SIL
SIL.F
summary.fclust
synt.data
synt.data2
unemployment
VAT
VCV
VCV2
VIFCR
XB
ARI.F Fuzzy adjusted Rand index
Description
Produces the fuzzy version of the adjusted Rand index between a hard (reference) partition and a
fuzzy partition.
Usage
ARI.F(VC, U, t_norm)
Arguments
VC Vector of class labels
U Fuzzy membership degree matrix or data.frame
t_norm Type of the triangular norm: "minimum" (minimum triangular norm), "triangu-
lar product" (product norm) (default: "minimum")
Value
ari.f Value of the fuzzy adjusted Rand index
Author(s)
<NAME>, <NAME>, <NAME>
References
Campello, R.J., 2007. A fuzzy extension of the Rand index and other related indexes for clustering
and classification assessment. Pattern Recognition Letters, 28, 833-841.
Hubert, L., Arabie, P., 1985. Comparing partitions. Journal of Classification, 2, 193-218.
See Also
RI.F, JACCARD.F, Fclust.compare
Examples
## Not run:
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## fuzzy adjusted Rand index
ari.f=ARI.F(VC=Mc$Type,U=clust$U)
## End(Not run)
butterfly Butterfly data
Description
Synthetic dataset with 2 clusters and some outliers.
Usage
data(butterfly)
Format
A matrix with 17 rows and 2 columns.
Details
The butterfly data motivate the need for the fuzzy approach to clustering.
The presence of outliers can be handled using fuzzy k-means with noise cluster. In fact, differently
from fuzzy k-means, the membership degrees of the outliers are low for all the clusters.
Author(s)
<NAME>, <NAME>, <NAME>
See Also
Fclust, FKM, FKM.noise
Examples
## butterfly data
data(butterfly)
plot(butterfly,type='n')
text(butterfly[,1],butterfly[,2],labels=rownames(butterfly),cex=0.7,lwd=2)
## membership degree matrix using fuzzy k-means (rounded)
round(FKM(butterfly)$U,2)
## membership degree matrix using fuzzy k-means with noise cluster (rounded)
round(FKM.noise(butterfly,delta=3)$U,2)
cl.memb Cluster membership
Description
Produces a summary of the membership degree information.
Usage
cl.memb (U)
Arguments
U Membership degree matrix
Details
An object is assigned to a cluster according to the maximal membership degree. Therefore, it
produces the closest hard clustering partition
Value
info.U Matrix containing the indexes of the clusters where the objects are assigned (row
1) and the associated membership degrees (row 2)
Author(s)
<NAME>, <NAME>, <NAME>
See Also
cl.memb.H, cl.memb.t
Examples
n=20
k=3
## randomly generated membership degree matrix
U=matrix(runif(n*k,0,1), nrow=n, ncol=k)
U=U/apply(U,1,sum)
info.U=cl.memb(U)
## objects assigned to cluster 2
rownames(info.U[info.U[,1]==2,])
cl.memb.H Cluster membership
Description
Produces a summary of the membership degree information in the hard clustering sense (objects
are considered to be assigned to clusters only if the corresponding membership degree are >=0.5).
Usage
cl.memb.H (U)
Arguments
U Membership degree matrix
Details
An object is assigned to a cluster according to the maximal membership degree provided that such
a maximal membership degree is >=0.5, otherwise it is assumed that an object is not assigned to
any cluster (denoted by cluster index = 0 in row 1).
Value
info.U Matrix containing the indexes of the clusters where the objects are assigned (row
1) and the associated membership degrees (row 2)
Author(s)
<NAME>, <NAME>, <NAME>
See Also
cl.memb, cl.memb.t
Examples
n=20
k=3
## randomly generated membership degree matrix
U=matrix(runif(n*k,0,1), nrow=n, ncol=k)
U=U/apply(U,1,sum)
info.U=cl.memb.H(U)
## objects assigned to clusters in the hard clustering sense
rownames(info.U[info.U[,1]!=0,])
cl.memb.t Cluster membership
Description
Produces a summary of the membership degree information according to a threshold.
Usage
cl.memb.t (U, t)
Arguments
U Membership degree matrix
t Threshold in [0,1] (default: 0)
Details
An object is assigned to a cluster according to the maximal membership degree provided that such
a maximal membership degree is >= t, otherwise it is assumed that an object is not assigned to any
cluster (denoted by cluster index = 0 in row 1). The function can be useful to select the subset of
objects clearly assigned to clusters (objects with maximal membership degrees >= t).
Value
info.U Matrix containing the indexes of the clusters where the objects are assigned (row
1) and the associated membership degrees (row 2)
Author(s)
<NAME>, <NAME>, <NAME>
See Also
cl.memb, cl.memb.H
Examples
n=20
k=3
## randomly generated membership degree matrix
U=matrix(runif(n*k,0,1), nrow=n, ncol=k)
U=U/apply(U,1,sum)
## threshold t=0.6
info.U=cl.memb.t(U,0.6)
## objects clearly assigned to clusters
rownames(info.U[info.U[,1]!=0,])
cl.size Cluster size
Description
Produces the sizes of the clusters.
Usage
cl.size (U)
Arguments
U Membership degree matrix
Details
An object is assigned to a cluster according to the maximal membership degree.
Value
clus.size Vector containing the sizes of the clusters
Author(s)
<NAME>, <NAME>, <NAME>
See Also
cl.size.H
Examples
n=20
k=3
## randomly generated membership degree matrix
U=matrix(runif(n*k,0,1), nrow=n, ncol=k)
U=U/apply(U,1,sum)
clus.size=cl.size(U)
cl.size.H Cluster size
Description
Produces the sizes of the clusters in the hard clustering sense (objects are considered to be assigned
to clusters only if the corresponding membership degree are >=0.5).
Usage
cl.size.H (U)
Arguments
U Membership degree matrix
Details
An object is assigned to a cluster according to the maximal membership degree provided that such
a maximal membership degree is >=0.5, otherwise it is assumed that an object is not assigned to
any cluster.
Value
clus.size Vector containing the sizes of the clusters
Author(s)
<NAME>, <NAME>, <NAME>
See Also
cl.size
Examples
n=20
k=3
## randomly generated membership degree matrix
U=matrix(runif(n*k,0,1), nrow=n, ncol=k)
U=U/apply(U,1,sum)
## cluster size in the hard clustering sense
clus.size=cl.size.H(U)
Fclust Fuzzy clustering
Description
Performs fuzzy clustering by using the algorithms available in the package.
Usage
Fclust (X, k, type, ent, noise, stand, distance)
Arguments
X Matrix or data.frame
k An integer value specifying the number of clusters (default: 2)
type Fuzzy clustering algorithm: "standard" (standard algorithms: FKM - type if
distance=TRUE, NEFRC - type if distance=FALSE), "polynomial" (algo-
rithms with the polynomial fuzzifier), "gk" (Gustafson and Kessel - like algo-
rithms), "gkb" (Gustafson, Kessel and Babuska - like algorithms), "medoids"
(Medoid - based algorithms) (default: "standard")
ent If ent=TRUE, the entropy regularization variant of the algorithm is run (default:
FALSE)
noise If noise=TRUE, the noise cluster variant of the algorithm is run (default: FALSE)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
distance If distance=TRUE, X is assumed to be a distance/dissimilarity matrix (default:
FALSE)
Details
The clustering algorithms are run by using default options.
To specify different options, use the corresponding function.
Value
clust Object of class fclust
Author(s)
<NAME>, <NAME>, <NAME>
See Also
print.fclust, summary.fclust, plot.fclust, FKM, FKM.ent, FKM.gk, FKM.gk.ent, FKM.gkb,
FKM.gkb.ent, FKM.med, FKM.pf, FKM.noise, FKM.ent.noise, FKM.gk.noise, FKM.gkb.ent.noise,
FKM.gkb.noise, FKM.gk.ent.noise,FKM.med.noise, FKM.pf.noise, NEFRC, NEFRC.noise, Fclust.index,
Fclust.compare
Examples
## Not run:
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=Fclust(Mc[,1:(ncol(Mc)-1)],k=6,type="standard",ent=FALSE,noise=FALSE,stand=1,distance=FALSE)
## fuzzy k-means with polynomial fuzzifier
## (excluded the factor column Type (last column))
clust=Fclust(Mc[,1:(ncol(Mc)-1)],k=6,type="polynomial",ent=FALSE,noise=FALSE,stand=1,distance=FALSE)
## fuzzy k-means with entropy regularization
## (excluded the factor column Type (last column))
clust=Fclust(Mc[,1:(ncol(Mc)-1)],k=6,type="standard",ent=TRUE,noise=FALSE,stand=1,distance=FALSE)
## fuzzy k-means with noise cluster
## (excluded the factor column Type (last column))
clust=Fclust(Mc[,1:(ncol(Mc)-1)],k=6,type="standard",ent=FALSE,noise=TRUE,stand=1,distance=FALSE)
## End(Not run)
Fclust.compare Similarity between partitions
Description
Performs some measures of similarity between a hard (reference) partition and a fuzzy partition.
Usage
Fclust.compare(VC, U, index, tnorm)
Arguments
VC Vector of class labels
U Fuzzy membership degree matrix or data.frame
index Measures of similarity: "ARI.F" (fuzzy version of the adjusted Rand index),
"RI.F" (fuzzy version of the Rand index), "JACCARD.F" (fuzzy version of the
Jaccard index), "ALL" for all the indexes (default: "ALL")
tnorm Type of the triangular norm: "minimum" (minimum triangular norm), "triangu-
lar product" (product norm) (default: "minimum")
Details
index is not case-sensitive. All the measures of similarity share the same properties of their non-
fuzzy counterpart.
Value
out.index Vector containing the similarity measures
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., 2007. A fuzzy extension of the Rand index and other related indexes for clustering
and classification assessment. Pattern Recognition Letters, 28, 833-841.
<NAME>., <NAME>., 1985. Comparing partitions. Journal of Classification, 2, 193-218.
<NAME>., 1901. Étude comparative de la distribution florale dans une portion des Alpes et des
Jura. Bulletin de la Société Vaudoise des Sciences Naturelles, 37, 547-579.
<NAME>., 1971. Objective criteria for the evaluation of clustering methods. Journal of the
American Statistical Association, 66, 846-850.
See Also
RI.F, ARI.F, JACCARD.F
Examples
## Not run:
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## all measures of similarity
all.indexes=Fclust.compare(VC=Mc$Type,U=clust$U)
## fuzzy adjusted Rand index
Fari.index=Fclust.compare(VC=Mc$Type,U=clust$U,index="ARI.F")
## End(Not run)
Fclust.index Cluster validity indexes
Description
Performs some cluster validity indexes for choosing the optimal number of clusters k.
Usage
Fclust.index (fclust.obj, index, alpha)
Arguments
fclust.obj Object of class fclust
index Cluster validity indexes to select the number of clusters: PC (partition coeffi-
cient), PE (partition entropy), MPC (modified partition coefficient), SIL (silhou-
ette), SIL.F (fuzzy silhouette), XB (Xie and Beni), ALL for all the indexes (de-
fault: "ALL")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
Details
index is not case-sensitive.
Value
out.index Vector containing the index values
Author(s)
<NAME>, <NAME>, <NAME>
See Also
PC, PE, MPC, SIL, SIL.F, XB, Fclust, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## cluster validity indexes
all.indexes=Fclust.index(clust)
## Xie and Beni cluster validity index
XB.index=Fclust.index(clust,'XB')
FKM Fuzzy k-means
Description
Performs the fuzzy k-means clustering algorithm.
Usage
FKM (X, k, m, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
m Parameter of fuzziness (default: 2)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette), "XB" (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters (NULL for FKM)
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness
ent Degree of fuzzy entropy (NULL for FKM)
b Parameter of the polynomial fuzzifier (NULL for FKM)
vp Volume parameter (NULL for FKM)
delta Noise distance (NULL for FKM)
gam Weighting parameter for the fuzzy covariance matrices (NULL for FKM)
mcn Maximum condition number for the fuzzy covariance matrices (NULL for FKM)
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., 1981. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press,
New York.
See Also
FKM.noise, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means (excluded the factor column Type (last column)), fixing the number of clusters
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## fuzzy k-means (excluded the factor column Type (last column)), selecting the number of clusters
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=2:6,m=1.5,stand=1)
FKM.ent Fuzzy k-means with entropy regularization
Description
Performs the fuzzy k-means clustering algorithm with entropy regularization.
The entropy regularization allows us to avoid using the artificial fuzziness parameter m. This is
replaced by the degree of fuzzy entropy ent, related to the concept of temperature in statistical
physics. An interesting property of the fuzzy k-means with entropy regularization is that the proto-
types are obtained as weighted means with weights equal to the membership degrees (rather than to
the membership degrees raised to the power of m, as is the case for the fuzzy k-means).
Usage
FKM.ent (X, k, ent, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
ent Degree of fuzzy entropy (default: 1)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette), "XB" (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
The default value for ent is in general not reasonable if FKM.ent is run using raw data.
The update of the membership degrees requires the computation of exponential functions. In some
cases, this may produce NaN values and the algorithm stops. Such a problem is usually solved by
running FKM.ent using standardized data (stand=1).
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters (NULL for FKM.ent)
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.ent)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness (NULL for FKM.ent)
ent Degree of fuzzy entropy
b Parameter of the polynomial fuzzifier (NULL for FKM.ent)
vp Volume parameter (NULL for FKM.ent)
delta Noise distance (NULL for FKM.ent)
gam Weighting parameter for the fuzzy covariance matrices (NULL for FKM.ent)
mcn Maximum condition number for the fuzzy covariance matrices (NULL for FKM.ent)
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.ent)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., 1995. A maximum entropy approach to fuzzy clustering. Proceedings of the
Fourth IEEE Conference on Fuzzy Systems (FUZZ-IEEE/IFES ’95), pp. 2227-2232.
<NAME>., <NAME>., 1999. Gaussian clustering method based on maximum-fuzzy-entropy inter-
pretation. Fuzzy Sets and Systems, 102, 253-258.
See Also
FKM.ent.noise, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means with entropy regularization, fixing the number of clusters
## (excluded the factor column Type (last column))
clust=FKM.ent(Mc[,1:(ncol(Mc)-1)],k=6,ent=3,RS=10,stand=1)
## fuzzy k-means with entropy regularization, selecting the number of clusters
## (excluded the factor column Type (last column))
clust=FKM.ent(Mc[,1:(ncol(Mc)-1)],k=2:6,ent=3,RS=10,stand=1)
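The weighted-mean property of the prototypes described above can be checked directly. As a sketch (assuming clust is the object returned by the last call above), the first prototype is recomputed from the membership degrees and the data actually used by the algorithm; the two vectors should agree up to the convergence tolerance:
## membership degrees of the first cluster
u1=clust$U[,1]
## prototype recomputed as a membership-weighted mean of the (standardized) data
h1=colSums(u1*clust$Xca)/sum(u1)
## comparison with the stored prototype
rbind(recomputed=h1,stored=clust$H[1,])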
FKM.ent.noise Fuzzy k-means with entropy regularization and noise cluster
Description
Performs the fuzzy k-means clustering algorithm with entropy regularization and noise cluster.
The entropy regularization makes it possible to avoid the artificial fuzziness parameter m, which is
replaced by the degree of fuzzy entropy ent, related to the concept of temperature in statistical
physics. An interesting property of the fuzzy k-means with entropy regularization is that the
prototypes are obtained as weighted means with weights equal to the membership degrees (rather
than the membership degrees raised to the power of m, as in the standard fuzzy k-means).
The noise cluster is an additional cluster (with respect to the k standard clusters) such that objects
recognized to be outliers are assigned to it with high membership degrees.
Usage
FKM.ent.noise (X, k, ent, delta, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
ent Degree of fuzzy entropy (default: 1)
delta Noise distance (default: average Euclidean distance between objects and proto-
types from FKM.ent using the same values of k and ent)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette), "XB" (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
The default value for ent is in general not reasonable if FKM.ent is run using raw data.
The update of the membership degrees requires the computation of exponential functions. In some
cases, this may produce NaN values and the algorithm stops. Such a problem is usually solved by
running FKM.ent using standardized data (stand=1).
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters (NULL for FKM.ent.noise)
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.ent.noise)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness (NULL for FKM.ent.noise)
ent Degree of fuzzy entropy
b Parameter of the polynomial fuzzifier (NULL for FKM.ent.noise)
vp Volume parameter (NULL for FKM.ent.noise)
delta Noise distance
gam Weighting parameter for the fuzzy covariance matrices (NULL for FKM.ent.noise)
mcn Maximum condition number for the fuzzy covariance matrices (NULL for FKM.ent.noise)
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.ent.noise)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., 1991. Characterization and detection of noise in clustering. Pattern Recognition Let-
ters, 12, 657-664.
<NAME>., <NAME>., 1995. A maximum entropy approach to fuzzy clustering. Proceedings of the
Fourth IEEE Conference on Fuzzy Systems (FUZZ-IEEE/IFES ’95), pp. 2227-2232.
<NAME>., <NAME>., 1999. Gaussian clustering method based on maximum-fuzzy-entropy inter-
pretation. Fuzzy Sets and Systems, 102, 253-258.
See Also
FKM.ent, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, butterfly
Examples
## butterfly data
data(butterfly)
## fuzzy k-means with entropy regularization and noise cluster, fixing the number of clusters
clust=FKM.ent.noise(butterfly,k=2,RS=5,delta=3)
## fuzzy k-means with entropy regularization and noise cluster, selecting the number of clusters
clust=FKM.ent.noise(butterfly,RS=5,delta=3)
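Assuming, as in the noise approach of Dave' (1991), that the membership degree to the noise cluster is the complement of the total membership to the k standard clusters, the objects captured by the noise cluster can be inspected as in the following sketch (the 0.5 cutoff is chosen purely for illustration):
## implicit membership to the noise cluster (complement of the standard memberships)
u.noise=1-rowSums(clust$U)
## objects mostly assigned to the noise cluster (0.5 is an illustrative cutoff)
which(u.noise>0.5)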
FKM.gk Gustafson and Kessel - like fuzzy k-means
Description
Performs the Gustafson and Kessel - like fuzzy k-means clustering algorithm.
Unlike fuzzy k-means, it is able to discover non-spherical clusters.
Usage
FKM.gk (X, k, m, vp, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
m Parameter of fuzziness (default: 2)
vp Volume parameter (default: rep(1,k))
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette), "XB" (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
If a cluster covariance matrix becomes singular, then the algorithm stops and the element of value
is NaN.
The Babuska et al. variant in FKM.gkb is recommended.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.gk)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness
ent Degree of fuzzy entropy (NULL for FKM.gk)
b Parameter of the polynomial fuzzifier (NULL for FKM.gk)
vp Volume parameter (default: rep(1,max(k))). If k is a vector, for each candidate
number of clusters only the first k elements of vp are considered.
delta Noise distance (NULL for FKM.gk)
gam Weighting parameter for the fuzzy covariance matrices (NULL for FKM.gk)
mcn Maximum condition number for the fuzzy covariance matrices (NULL for FKM.gk)
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.gk)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., 1978. Fuzzy clustering with a fuzzy covariance matrix. Proceedings
of the IEEE Conference on Decision and Control, pp. 761-766.
See Also
FKM.gkb, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, unemployment
Examples
## Not run:
## unemployment data
data(unemployment)
## Gustafson and Kessel-like fuzzy k-means, fixing the number of clusters
clust=FKM.gk(unemployment,k=3,RS=10)
## Gustafson and Kessel-like fuzzy k-means, selecting the number of clusters
clust=FKM.gk(unemployment,k=2:6,RS=10)
## End(Not run)
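Assuming the Gustafson and Kessel - like fit above has been run, the cluster shapes can be inspected through the fuzzy covariance matrices in F. As a sketch, the ratio between the largest and smallest eigenvalue of each matrix gives a rough idea of how far each cluster is from being spherical:
## covariance matrix of the first cluster
clust$F[,,1]
## eigenvalue ratios: values far from 1 indicate elongated (non-spherical) clusters
apply(clust$F,3,function(S) {e=eigen(S,symmetric=TRUE)$values; max(e)/min(e)})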
FKM.gk.ent Gustafson and Kessel - like fuzzy k-means with entropy regularization
Description
Performs the Gustafson and Kessel - like fuzzy k-means clustering algorithm with entropy regular-
ization.
Unlike fuzzy k-means, it is able to discover non-spherical clusters.
The entropy regularization makes it possible to avoid the artificial fuzziness parameter m, which is
replaced by the degree of fuzzy entropy ent, related to the concept of temperature in statistical
physics. An interesting property of the fuzzy k-means with entropy regularization is that the
prototypes are obtained as weighted means with weights equal to the membership degrees (rather
than the membership degrees raised to the power of m, as in the standard fuzzy k-means).
Usage
FKM.gk.ent (X, k, ent, vp, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
ent Degree of fuzzy entropy (default: 1)
vp Volume parameter (default: rep(1,k))
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette), "XB" (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
If a cluster covariance matrix becomes singular, the algorithm stops and the element of value is
NaN.
The default value for ent is in general not reasonable if FKM.gk.ent is run using raw data.
The update of the membership degrees requires the computation of exponential functions. In some
cases, this may produce NaN values and the algorithm stops. Such a problem is usually solved by
running FKM.gk.ent using standardized data (stand=1).
The Babuska et al. variant in FKM.gkb.ent is recommended.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.gk.ent)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness (NULL for FKM.gk.ent)
ent Degree of fuzzy entropy
b Parameter of the polynomial fuzzifier (NULL for FKM.gk.ent)
vp Volume parameter (default: rep(1,max(k))). If k is a vector, for each candidate
number of clusters only the first k elements of vp are considered.
delta Noise distance (NULL for FKM.gk.ent)
gam Weighting parameter for the fuzzy covariance matrices (NULL for FKM.gk.ent)
mcn Maximum condition number for the fuzzy covariance matrices (NULL for FKM.gk.ent)
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.gk.ent)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., 2013. A new fuzzy clustering algorithm with entropy regularization.
Proceedings of the meeting on Classification and Data Analysis (CLADAG).
See Also
FKM.gkb.ent, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, unemployment
Examples
## unemployment data
data(unemployment)
## Gustafson and Kessel-like fuzzy k-means with entropy regularization,
##fixing the number of clusters
clust=FKM.gk.ent(unemployment,k=3,ent=0.2,RS=10,stand=1)
## Not run:
## Gustafson and Kessel-like fuzzy k-means with entropy regularization,
##selecting the number of clusters
clust=FKM.gk.ent(unemployment,k=2:6,ent=0.2,RS=10,stand=1)
## End(Not run)
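A sketch of a non-default choice of the volume parameters (assumption: vp supplies one value per cluster, as suggested by the default rep(1,k), and a larger value allows a larger cluster volume); here the third cluster is allowed a larger volume than the other two:
## Gustafson and Kessel-like fuzzy k-means with entropy regularization and
## unequal volume parameters (illustrative values)
clust.vp=FKM.gk.ent(unemployment,k=3,ent=0.2,vp=c(1,1,2),RS=10,stand=1)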
FKM.gk.ent.noise Gustafson and Kessel - like fuzzy k-means with entropy regularization
and noise cluster
Description
Performs the Gustafson and Kessel - like fuzzy k-means clustering algorithm with entropy regular-
ization and noise cluster.
Unlike fuzzy k-means, it is able to discover non-spherical clusters.
The entropy regularization makes it possible to avoid the artificial fuzziness parameter m, which is
replaced by the degree of fuzzy entropy ent, related to the concept of temperature in statistical
physics. An interesting property of the fuzzy k-means with entropy regularization is that the
prototypes are obtained as weighted means with weights equal to the membership degrees (rather
than the membership degrees raised to the power of m, as in the standard fuzzy k-means).
The noise cluster is an additional cluster (with respect to the k standard clusters) such that objects
recognized to be outliers are assigned to it with high membership degrees.
Usage
FKM.gk.ent.noise (X,k,ent,vp,delta,RS,stand,startU,index,alpha,conv,maxit,seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
ent Degree of fuzzy entropy (default: 1)
vp Volume parameter (default: rep(1,k))
delta Noise distance (default: average Euclidean distance between objects and proto-
types from FKM.gk.ent using the same values of k and ent)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette), "XB" (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
If a cluster covariance matrix becomes singular, the algorithm stops and the element of value is
NaN.
The default value for ent is in general not reasonable if FKM.gk.ent is run using raw data.
The update of the membership degrees requires the computation of exponential functions. In some
cases, this may produce NaN values and the algorithm stops. Such a problem is usually solved by
running FKM.gk.ent.noise using standardized data (stand=1).
The Babuska et al. variant in FKM.gkb.ent.noise is recommended.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.gk.ent.noise)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness (NULL for FKM.gk.ent.noise)
ent Degree of fuzzy entropy
b Parameter of the polynomial fuzzifier (NULL for FKM.gk.ent.noise)
vp Volume parameter (default: rep(1,max(k))). If k is a vector, for each candidate
number of clusters only the first k elements of vp are considered.
delta Noise distance
gam Weighting parameter for the fuzzy covariance matrices (NULL for FKM.gk.ent.noise)
mcn Maximum condition number for the fuzzy covariance matrices (NULL for FKM.gk.ent.noise)
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.gk.ent.noise)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
Dave’ R.N., 1991. Characterization and detection of noise in clustering. Pattern Recognition Let-
ters, 12, 657-664.
<NAME>., Giordani P., 2013. A new fuzzy clustering algorithm with entropy regularization.
Proceedings of the meeting on Classification and Data Analysis (CLADAG).
See Also
FKM.gkb.ent.noise, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust,
unemployment
Examples
## Not run:
## unemployment data
data(unemployment)
## Gustafson and Kessel-like fuzzy k-means with entropy regularization and noise cluster,
##fixing the number of clusters
clust=FKM.gk.ent.noise(unemployment,k=3,ent=0.2,delta=1,RS=10,stand=1)
## Gustafson and Kessel-like fuzzy k-means with entropy regularization and noise cluster,
##selecting the number of clusters
clust=FKM.gk.ent.noise(unemployment,k=2:6,ent=0.2,delta=1,RS=10,stand=1)
## End(Not run)
FKM.gk.noise Gustafson and Kessel - like fuzzy k-means with noise cluster
Description
Performs the Gustafson and Kessel - like fuzzy k-means clustering algorithm with noise cluster.
Unlike fuzzy k-means, it is able to discover non-spherical clusters.
The noise cluster is an additional cluster (with respect to the k standard clusters) such that objects
recognized to be outliers are assigned to it with high membership degrees.
Usage
FKM.gk.noise (X, k, m, vp, delta, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
m Parameter of fuzziness (default: 2)
vp Volume parameter (default: rep(1,max(k))). If k is a vector, for each candidate
number of clusters only the first k elements of vp are considered.
delta Noise distance (default: average Euclidean distance between objects and proto-
types from FKM.gk using the same values of k and m)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette), "XB" (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
If a cluster covariance matrix becomes singular, then the algorithm stops and the element of value
is NaN.
The Babuska et al. variant in FKM.gkb.noise is recommended.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.gk.noise)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness
ent Degree of fuzzy entropy (NULL for FKM.gk.noise)
b Parameter of the polynomial fuzzifier (NULL for FKM.gk.noise)
vp Volume parameter
delta Noise distance
gam Weighting parameter for the fuzzy covariance matrices (NULL for FKM.gk.noise)
mcn Maximum condition number for the fuzzy covariance matrices (NULL for FKM.gk.noise)
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.gk.noise)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
Dave’ R.N., 1991. Characterization and detection of noise in clustering. Pattern Recognition Let-
ters, 12, 657-664.
<NAME>.E., Kessel W.C., 1978. Fuzzy clustering with a fuzzy covariance matrix. Proceedings
of the IEEE Conference on Decision and Control, pp. 761-766.
See Also
FKM.gkb.noise, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, unemployment
Examples
## Not run:
## unemployment data
data(unemployment)
## Gustafson and Kessel-like fuzzy k-means with noise cluster, fixing the number of clusters
clust=FKM.gk.noise(unemployment,k=3,delta=20,RS=10)
## Gustafson and Kessel-like fuzzy k-means with noise cluster, selecting the number of clusters
clust=FKM.gk.noise(unemployment,k=2:6,delta=20,RS=10)
## End(Not run)
FKM.gkb Gustafson, Kessel and Babuska - like fuzzy k-means
Description
Performs the Gustafson, Kessel and Babuska - like fuzzy k-means clustering algorithm.
Unlike fuzzy k-means, it is able to discover non-spherical clusters.
The Babuska et al. variant improves the computation of the fuzzy covariance matrices in the
standard Gustafson and Kessel clustering algorithm.
Usage
FKM.gkb (X, k, m, vp, gam, mcn, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
m Parameter of fuzziness (default: 2)
vp Volume parameter (default: rep(1,k))
gam Weighting parameter for the fuzzy covariance matrices (default: 0)
mcn Maximum condition number for the fuzzy covariance matrices (default: 1e+15)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: PC (partition coefficient),
PE (partition entropy), MPC (modified partition coefficient), SIL (silhouette),
SIL.F (fuzzy silhouette), XB (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+2)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
If a cluster covariance matrix becomes singular, then the algorithm stops and the element of value
is NaN.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.gkb)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness
ent Degree of fuzzy entropy (NULL for FKM.gkb)
b Parameter of the polynomial fuzzifier (NULL for FKM.gkb)
vp Volume parameter
delta Noise distance (NULL for FKM.gkb)
gam Weighting parameter for the fuzzy covariance matrices
mcn Maximum condition number for the fuzzy covariance matrices
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.gkb)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., <NAME>., 2002. Improved covariance estimation for Gustafson-
Kessel clustering. Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-
IEEE), 1081-1085.
<NAME>., <NAME>., 1978. Fuzzy clustering with a fuzzy covariance matrix. Proceedings
of the IEEE Conference on Decision and Control, pp. 761-766.
See Also
FKM.gk, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, unemployment
Examples
## Not run:
## unemployment data
data(unemployment)
## Gustafson, Kessel and Babuska-like fuzzy k-means, fixing the number of clusters
clust=FKM.gkb(unemployment,k=3,RS=10)
## Gustafson, Kessel and Babuska-like fuzzy k-means, selecting the number of clusters
clust=FKM.gkb(unemployment,k=2:6,RS=10)
## End(Not run)
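As a sketch (assuming the fit above has been run), kappa() can be used to estimate the condition numbers of the returned covariance matrices, which the Babuska et al. variant is meant to keep below the mcn bound (1e+15 by default):
## estimated condition numbers of the cluster covariance matrices (one per cluster)
apply(clust$F,3,kappa)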
FKM.gkb.ent Gustafson, Kessel and Babuska - like fuzzy k-means with entropy reg-
ularization
Description
Performs the Gustafson, Kessel and Babuska - like fuzzy k-means clustering algorithm with entropy
regularization.
Unlike fuzzy k-means, it is able to discover non-spherical clusters.
The Babuska et al. variant improves the computation of the fuzzy covariance matrices in the
standard Gustafson and Kessel clustering algorithm.
The entropy regularization makes it possible to avoid the artificial fuzziness parameter m, which is
replaced by the degree of fuzzy entropy ent, related to the concept of temperature in statistical
physics. An interesting property of the fuzzy k-means with entropy regularization is that the
prototypes are obtained as weighted means with weights equal to the membership degrees (rather
than the membership degrees raised to the power of m, as in the standard fuzzy k-means).
Usage
FKM.gkb.ent (X, k, ent, vp, gam, mcn, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
ent Degree of fuzzy entropy (default: 1)
vp Volume parameter (default: rep(1,k))
gam Weighting parameter for the fuzzy covariance matrices (default: 0)
mcn Maximum condition number for the fuzzy covariance matrices (default: 1e+15)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette), "XB" (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+2)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
If a cluster covariance matrix becomes singular, the algorithm stops and the element of value is
NaN.
The default value for ent is in general not reasonable if FKM.gk.ent is run using raw data.
The update of the membership degrees requires the computation of exponential functions. In some
cases, this may produce NaN values and the algorithm stops. Such a problem is usually solved by
running FKM.gk.ent using standardized data (stand=1).
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.gkb.ent)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness (NULL for FKM.gkb.ent)
ent Degree of fuzzy entropy
b Parameter of the polynomial fuzzifier (NULL for FKM.gkb.ent)
vp Volume parameter (default: rep(1,max(k))). If k is a vector, for each candidate
number of clusters only the first k elements of vp are considered.
delta Noise distance (NULL for FKM.gkb.ent)
gam Weighting parameter for the fuzzy covariance matrices
mcn Maximum condition number for the fuzzy covariance matrices
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.gkb.ent)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
Bab<NAME>., <NAME>., <NAME>., 2002. Improved covariance estimation for Gustafson-
Kessel clustering. Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-
IEEE), 1081-1085.
<NAME>., <NAME>., 2013. A new fuzzy clustering algorithm with entropy regularization.
Proceedings of the meeting on Classification and Data Analysis (CLADAG).
See Also
FKM.gk.ent, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, unemployment
Examples
## Not run:
## unemployment data
data(unemployment)
## Gustafson, Kessel and Babuska-like fuzzy k-means with entropy regularization,
##fixing the number of clusters
clust=FKM.gkb.ent(unemployment,k=3,ent=0.2,RS=10,stand=1)
## Gustafson, Kessel and Babuska-like fuzzy k-means with entropy regularization,
##selecting the number of clusters
clust=FKM.gkb.ent(unemployment,k=2:6,ent=0.2,RS=10,stand=1)
## End(Not run)
FKM.gkb.ent.noise Gustafson, Kessel and Babuska - like fuzzy k-means with entropy reg-
ularization and noise cluster
Description
Performs the Gustafson, Kessel and Babuska - like fuzzy k-means clustering algorithm with entropy
regularization and noise cluster.
Unlike fuzzy k-means, it is able to discover non-spherical clusters.
The Babuska et al. variant improves the computation of the fuzzy covariance matrices in the
standard Gustafson and Kessel clustering algorithm.
The entropy regularization makes it possible to avoid the artificial fuzziness parameter m, which is
replaced by the degree of fuzzy entropy ent, related to the concept of temperature in statistical
physics. An interesting property of the fuzzy k-means with entropy regularization is that the
prototypes are obtained as weighted means with weights equal to the membership degrees (rather
than the membership degrees raised to the power of m, as in the standard fuzzy k-means).
The noise cluster is an additional cluster (with respect to the k standard clusters) such that objects
recognized to be outliers are assigned to it with high membership degrees.
Usage
FKM.gkb.ent.noise (X,k,ent,vp,delta,gam,mcn,RS,stand,startU,index,alpha,conv,maxit,seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
ent Degree of fuzzy entropy (default: 1)
vp Volume parameter (default: rep(1,max(k))). If k is a vector, for each candidate
number of clusters only the first k elements of vp are considered.
delta Noise distance (default: average Euclidean distance between objects and proto-
types from FKM.gk.ent using the same values of k and ent)
gam Weighting parameter for the fuzzy covariance matrices (default: 0)
mcn Maximum condition number for the fuzzy covariance matrices (default: 1e+15)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette), "XB" (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+2)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
If a cluster covariance matrix becomes singular, the algorithm stops and the element of value is
NaN.
The default value for ent is in general not reasonable if FKM.gk.ent is run using raw data.
The update of the membership degrees requires the computation of exponential functions. In some
cases, this may produce NaN values and the algorithm stops. Such a problem is usually solved by
running FKM.gk.ent.noise using standardized data (stand=1).
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.gkb.ent.noise)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness (NULL for FKM.gkb.ent.noise)
ent Degree of fuzzy entropy
b Parameter of the polynomial fuzzifier (NULL for FKM.gkb.ent.noise)
vp Volume parameter
delta Noise distance
gam Weighting parameter for the fuzzy covariance matrices
mcn Maximum condition number for the fuzzy covariance matrices
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.gkb.ent.noise)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., <NAME>., 2002. Improved covariance estimation for Gustafson-
Kessel clustering. Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-
IEEE), 1081-1085.
<NAME>., 1991. Characterization and detection of noise in clustering. Pattern Recognition Let-
ters, 12, 657-664.
<NAME>., <NAME>., 2013. A new fuzzy clustering algorithm with entropy regularization.
Proceedings of the meeting on Classification and Data Analysis (CLADAG).
See Also
FKM.gk.ent.noise, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, unemployment
Examples
## Not run:
## unemployment data
data(unemployment)
## Gustafson, Kessel and Babuska-like fuzzy k-means with entropy regularization and noise cluster,
##fixing the number of clusters
clust=FKM.gkb.ent.noise(unemployment,k=3,ent=0.2,delta=1,RS=10,stand=1)
## Gustafson, Kessel and Babuska-like fuzzy k-means with entropy regularization and noise cluster,
##selecting the number of clusters
clust=FKM.gkb.ent.noise(unemployment,k=2:6,ent=0.2,delta=1,RS=10,stand=1)
## End(Not run)
FKM.gkb.noise Gustafson, Kessel and Babuska - like fuzzy k-means with noise cluster
Description
Performs the Gustafson, Kessel and Babuska - like fuzzy k-means clustering algorithm with noise
cluster.
Unlike fuzzy k-means, it is able to discover non-spherical clusters.
The Babuska et al. variant improves the computation of the fuzzy covariance matrices in the
standard Gustafson and Kessel clustering algorithm.
The noise cluster is an additional cluster (with respect to the k standard clusters) such that objects
recognized to be outliers are assigned to it with high membership degrees.
Usage
FKM.gkb.noise (X,k,m,vp,delta,gam,mcn,RS,stand,startU,index,alpha,conv,maxit,seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
m Parameter of fuzziness (default: 2)
vp Volume parameter (default: rep(1,k))
delta Noise distance (default: average Euclidean distance between objects and proto-
types from FKM.gk using the same values of k and m)
gam Weighting parameter for the fuzzy covariance matrices (default: 0)
mcn Maximum condition number for the fuzzy covariance matrices (default: 1e+15)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette), "XB" (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+2)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
If a cluster covariance matrix becomes singular, then the algorithm stops and the element of value
is NaN.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.gkb.noise)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness
ent Degree of fuzzy entropy (NULL for FKM.gkb.noise)
b Parameter of the polynomial fuzzifier (NULL for FKM.gkb.noise)
vp Volume parameter (default: rep(1,max(k))). If k is a vector, for each candidate
number of clusters only the first k elements of vp are considered.
delta Noise distance
gam Weighting parameter for the fuzzy covariance matrices
mcn Maximum condition number for the fuzzy covariance matrices
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.gkb.noise)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., <NAME>., 2002. Improved covariance estimation for Gustafson-
Kessel clustering. Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-
IEEE), 1081-1085.
<NAME>., 1991. Characterization and detection of noise in clustering. Pattern Recognition Let-
ters, 12, 657-664.
<NAME>., <NAME>., 1978. Fuzzy clustering with a fuzzy covariance matrix. Proceedings
of the IEEE Conference on Decision and Control, pp. 761-766.
See Also
FKM.gk.noise, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, unemployment
Examples
## Not run:
## unemployment data
data(unemployment)
## Gustafson, Kessel and Babuska-like fuzzy k-means with noise cluster,
##fixing the number of clusters
clust=FKM.gkb.noise(unemployment,k=3,delta=20,RS=10)
## Gustafson, Kessel and Babuska-like fuzzy k-means with noise cluster,
##selecting the number of clusters
clust=FKM.gkb.noise(unemployment,k=2:6,delta=20,RS=10)
## End(Not run)
FKM.med Fuzzy k-medoids
Description
Performs the fuzzy k-medoids clustering algorithm.
Unlike fuzzy k-means, where the cluster prototypes (centroids) are artificial objects computed as
weighted means, in fuzzy k-medoids the cluster prototypes (medoids) are a subset of the observed
objects.
Usage
FKM.med (X, k, m, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector indicating the number of clusters (default: 2:6)
m Parameter of fuzziness (default: 1.5)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: PC (partition coefficient),
PE (partition entropy), MPC (modified partition coefficient), SIL (silhouette),
SIL.F (fuzzy silhouette), XB (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
In FKM.med the parameter of fuzziness is usually lower than the one used in FKM.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters (NULL for FKM.med)
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness
ent Degree of fuzzy entropy (NULL for FKM.med)
b Parameter of the polynomial fuzzifier (NULL for FKM.med)
vp Volume parameter (NULL for FKM.med)
delta Noise distance (NULL for FKM.med)
gam Weighting parameter for the fuzzy covariance matrices (NULL for FKM.med)
mcn Maximum condition number for the fuzzy covariance matrices (NULL for FKM.med)
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.med)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., <NAME>., <NAME>., 2001. Low-complexity fuzzy relational clustering
algorithms for web mining. IEEE Transactions on Fuzzy Systems, 9, 595-607.
See Also
FKM.med.noise, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, Mc
Examples
## Not run:
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-medoids, fixing the number of clusters
## (excluded the factor column Type (last column))
clust=FKM.med(Mc[,1:(ncol(Mc)-1)],k=6,m=1.1,RS=10,stand=1)
## fuzzy k-medoids, selecting the number of clusters
## (excluded the factor column Type (last column))
clust=FKM.med(Mc[,1:(ncol(Mc)-1)],k=2:6,m=1.1,RS=10,stand=1)
## End(Not run)
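As a sketch (assuming the fit above has been run), the medoid indexes and the corresponding rows of the data used by the algorithm can be compared with the stored prototypes, which for FKM.med are observed objects:
## indexes of the medoid objects
clust$medoid
## corresponding rows of the data used by the algorithm
clust$Xca[clust$medoid,]
## stored prototypes (the same objects)
clust$H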
FKM.med.noise Fuzzy k-medoids with noise cluster
Description
Performs the fuzzy k-medoids clustering algorithm with noise cluster.
Unlike fuzzy k-means, where the cluster prototypes (centroids) are artificial objects computed as
weighted means, in fuzzy k-medoids the cluster prototypes (medoids) are a subset of the observed
objects.
The noise cluster is an additional cluster (with respect to the k standard clusters) such that objects
recognized to be outliers are assigned to it with high membership degrees.
Usage
FKM.med.noise (X, k, m, delta, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
m Parameter of fuzziness (default: 1.5)
delta Noise distance (default: average Euclidean distance between objects and proto-
types from FKM.med using the same values of k and m)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: PC (partition coefficient),
PE (partition entropy), MPC (modified partition coefficient), SIL (silhouette),
SIL.F (fuzzy silhouette), XB (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
As for FKM.med, in FKM.med.noise the parameter of fuzziness is usually lower than the one used
in FKM.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters (NULL for FKM.med.noise)
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness
ent Degree of fuzzy entropy (NULL for FKM.med.noise)
b Parameter of the polynomial fuzzifier (NULL for FKM.med.noise)
vp Volume parameter (NULL for FKM.med.noise)
delta Noise distance
gam Weighting parameter for the fuzzy covariance matrices (NULL for FKM.med.noise)
mcn Maximum condition number for the fuzzy covariance matrices (NULL for FKM.med.noise)
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.med.noise)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
Dave’ R.N., 1991. Characterization and detection of noise in clustering. Pattern Recognition Let-
ters, 12, 657-664.
<NAME>., <NAME>., <NAME>., <NAME>., 2001. Low-complexity fuzzy relational clustering
algorithms for web mining. IEEE Transactions on Fuzzy Systems, 9, 595-607.
See Also
FKM.med, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, butterfly
Examples
## butterfly data
data(butterfly)
## fuzzy k-medoids with noise cluster, fixing the number of clusters
clust=FKM.med.noise(butterfly,k=2,RS=5,delta=3)
## fuzzy k-medoids with noise cluster, selecting the number of clusters
clust=FKM.med.noise(butterfly,RS=5,delta=3)
FKM.noise Fuzzy k-means with noise cluster
Description
Performs the fuzzy k-means clustering algorithm with noise cluster.
The noise cluster is an additional cluster (with respect to the k standard clusters) such that objects
recognized to be outliers are assigned to it with high membership degrees.
Usage
FKM.noise (X, k, m, delta, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
m Parameter of fuzziness (default: 2)
delta Noise distance (default: average Euclidean distance between objects and proto-
types from FKM using the same values of k and m)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: PC (partition coefficient),
PE (partition entropy), MPC (modified partition coefficient), SIL (silhouette),
SIL.F (fuzzy silhouette), XB (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters (NULL for FKM.noise)
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.noise)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness
ent Degree of fuzzy entropy (NULL for FKM.noise)
b Parameter of the polynomial fuzzifier (NULL for FKM.noise)
vp Volume parameter (NULL for FKM.noise)
delta Noise distance
gam Weighting parameter for the fuzzy covariance matrices (NULL for FKM.noise)
mcn Maximum condition number for the fuzzy covariance matrices (NULL for FKM.noise)
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.noise)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
Dave’ R.N., 1991. Characterization and detection of noise in clustering. Pattern Recognition Let-
ters, 12, 657-664.
See Also
FKM, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, butterfly
Examples
## butterfly data
data(butterfly)
## fuzzy k-means with noise cluster, fixing the number of clusters
clust=FKM.noise(butterfly,k=2,RS=5,delta=3)
## fuzzy k-means with noise cluster, selecting the number of clusters
clust=FKM.noise(butterfly,RS=5,delta=3)
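The default noise distance described above can also be mimicked by hand. The following sketch uses one plausible reading of that default (the average of all object-prototype Euclidean distances from a plain FKM fit with the same k and m), which may differ from the value computed internally:
## plain fuzzy k-means fit with the same k and m
fit=FKM(butterfly,k=2,m=2)
Xca=as.matrix(fit$Xca)
n=nrow(Xca)
## Euclidean distances between objects and prototypes
D=as.matrix(dist(rbind(Xca,fit$H)))
d=D[1:n,(n+1):(n+nrow(fit$H))]
## illustrative noise distance and the corresponding noise-cluster fit
delta.hat=mean(d)
clust=FKM.noise(butterfly,k=2,RS=5,delta=delta.hat)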
FKM.pf Fuzzy k-means with polynomial fuzzifier
Description
Performs the fuzzy k-means clustering algorithm with polynomial fuzzifier function.
The polynomial fuzzifier creates areas of crisp membership degrees around the prototypes, while
fuzzy membership degrees are given outside these areas. The polynomial fuzzifier therefore
produces membership degrees equal to one for objects clearly assigned to clusters, that is, objects
very close to the cluster prototypes.
Usage
FKM.pf (X, k, b, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
b Parameter of the polynomial fuzzifier (default: 0.5)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette), "XB" (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters (NULL for FKM.pf)
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.pf)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness (NULL for FKM.pf)
ent Degree of fuzzy entropy (NULL for FKM.pf)
b Parameter of the polynomial fuzzifier
vp Volume parameter (NULL for FKM.pf)
delta Noise distance (NULL for FKM.pf)
gam Weighting parameter for the fuzzy covariance matrices (NULL for FKM.pf)
mcn Maximum condition number for the fuzzy covariance matrices (NULL for FKM.pf)
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.pf)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., <NAME>., <NAME>., 2010. Fuzzy Cluster Analysis of Larger Data Sets. In:
Scalable Fuzzy Algorithms for Data Management and Analysis: Methods and Design, pp. 302-331.
IGI Global, Hershey.
<NAME>., <NAME>., <NAME>., 2011. Fuzzy clustering with polynomial fuzzifier function in
connection with M-estimators. Applied and Computational Mathematics, 10, 146-163.
See Also
FKM.pf.noise, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means with polynomial fuzzifier, fixing the number of clusters
## (excluded the factor column Type (last column))
clust=FKM.pf(Mc[,1:(ncol(Mc)-1)],k=6,stand=1)
## fuzzy k-means with polynomial fuzzifier, selecting the number of clusters
## (excluded the factor column Type (last column))
clust=FKM.pf(Mc[,1:(ncol(Mc)-1)],k=2:6,stand=1)
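## a hedged sketch (not part of the original manual): re-using the membership
## degree matrix of a previous solution as a rational start via startU;
## when startU is given, k is taken from ncol(startU)
clust.start=FKM.pf(Mc[,1:(ncol(Mc)-1)],k=6,stand=1)
clust.restart=FKM.pf(Mc[,1:(ncol(Mc)-1)],startU=clust.start$U,stand=1)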
FKM.pf.noise Fuzzy k-means with polynomial fuzzifier and noise cluster
Description
Performs the fuzzy k-means clustering algorithm with polynomial fuzzifier function and noise cluster.
The polynomial fuzzifier creates areas of crisp membership degrees around the prototypes, while fuzzy
membership degrees are given outside of these areas. The polynomial fuzzifier therefore produces
membership degrees equal to one for objects clearly assigned to clusters, that is, for objects very
close to the cluster prototypes.
The noise cluster is an additional cluster (with respect to the k standard clusters) such that objects
recognized to be outliers are assigned to it with high membership degrees.
Usage
FKM.pf.noise (X, k, b, delta, RS, stand, startU, index, alpha, conv, maxit, seed)
Arguments
X Matrix or data.frame
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
b Parameter of the polynomial fuzzifier (default: 0.5)
delta Noise distance (default: average Euclidean distance between objects and proto-
types from FKM.pf using the same values of k and m)
RS Number of (random) starts (default: 1)
stand Standardization: if stand=1, the clustering algorithm is run using standardized
data (default: no standardization)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: PC (partition coefficient),
PE (partition entropy), MPC (modified partition coefficient), SIL (silhouette),
SIL.F (fuzzy silhouette), XB (Xie and Beni) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix
F Array containing the covariance matrices of all the clusters (NULL for FKM.pf.noise)
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for FKM.pf.noise)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness (NULL for FKM.pf.noise)
ent Degree of fuzzy entropy (NULL for FKM.pf.noise)
b Parameter of the polynomial fuzzifier
vp Volume parameter (NULL for FKM.pf.noise)
delta Noise distance
gam Weighting parameter for the fuzzy covariance matrices (NULL for FKM.pf.noise)
mcn Maximum condition number for the fuzzy covariance matrices (NULL for FKM.pf.noise)
stand Standardization (Yes if stand=1, No if stand=0)
Xca Data used in the clustering algorithm (standardized data if stand=1)
X Raw data
D Dissimilarity matrix (NULL for FKM.pf.noise)
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
Dave’ R.N., 1991. Characterization and detection of noise in clustering. Pattern Recognition Let-
ters, 12, 657-664.
<NAME>., <NAME>., <NAME>., <NAME>., 2010. Fuzzy cluster analysis of larger data sets. In:
Scalable Fuzzy Algorithms for Data Management and Analysis: Methods and Design, pp. 302-331.
IGI Global, Hershey.
<NAME>., <NAME>., <NAME>., 2011. Fuzzy clustering with polynomial fuzzifier function in
connection with M-estimators. Applied and Computational Mathematics, 10, 146-163.
See Also
FKM.pf, Fclust, Fclust.index, print.fclust, summary.fclust, plot.fclust, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means with polynomial fuzzifier and noise cluster, fixing the number of clusters
## (excluded the factor column Type (last column))
clust=FKM.pf.noise(Mc[,1:(ncol(Mc)-1)],k=6,stand=1)
## fuzzy k-means with polynomial fuzzifier and noise cluster, selecting the number of clusters
## (excluded the factor column Type (last column))
clust=FKM.pf.noise(Mc[,1:(ncol(Mc)-1)],k=2:6,stand=1)
houseVotes Congressional Voting Records Data
Description
1984 United States Congressional Voting Records for each of the U.S. House of Representatives
Congressmen on the 16 key votes identified by the Congressional Quarterly Almanac.
Usage
data(houseVotes)
Format
A data.frame with 435 rows and 17 columns (16 qualitative variables and 1 classification variable).
Details
The data collect the 1984 United States Congressional Voting Records for each of the 435 U.S. House
of Representatives Congressmen on the 16 key votes identified by the Congressional Quarterly
Almanac (CQA). The variable class splits the observations into democrat and republican. The
qualitative variables refer to the votes on handicapped-infants, water-project-cost-sharing,
adoption-of-the-budget-resolution, physician-fee-freeze, el-salvador-aid, religious-groups-in-schools,
anti-satellite-test-ban, aid-to-nicaraguan-contras, mx-missile, immigration, synfuels-corporation-cutback,
education-spending, superfund-right-to-sue, crime, duty-free-exports, and export-administration-act-south-africa.
All these 16 variables are objects of class factor with three levels according to the CQA scheme:
y refers to the types of votes "voted for", "paired for" and "announced for"; n to "voted against",
"paired against" and "announced against"; yn to "voted present", "voted present to avoid conflict of
interest" and "did not vote or otherwise make a position known".
Author(s)
<NAME>, <NAME>, <NAME>
Source
https://archive.ics.uci.edu/ml/datasets/congressional+voting+records
References
Schlimmer, J.C., 1987. Concept acquisition through representational adjustment. Doctoral disser-
tation, Department of Information and Computer Science, University of California, Irvine, CA.
See Also
NEFRC, NEFRC.noise
Examples
data(houseVotes)
X=houseVotes[,-1]
class=houseVotes[,1]
Hraw Raw prototypes
Description
Produces prototypes using the original units of measurement of X (useful if the clustering algorithm
is run using standardized data).
Usage
Hraw (X, H)
Arguments
X Matrix or data.frame
H Prototype matrix
Value
Hraw Prototypes matrix using the original units of measurement of X
Author(s)
<NAME>, <NAME>, <NAME>
See Also
Fclust, unemployment
Examples
## example n.1 (k-means case)
## unemployment data
data(unemployment)
## fuzzy k-means
unempFKM=FKM(unemployment,k=3,stand=1)
## standardized prototypes
unempFKM$H
## prototypes using the original units of measurement
unempFKM$Hraw=Hraw(unempFKM$X,unempFKM$H)
## example n.2 (k-medoids case)
## unemployment data
data(unemployment)
## fuzzy k-medoids
## Not run:
## It may take more than a few seconds
unempFKM.med=FKM.med(unemployment,k=3,RS=10,stand=1)
## prototypes using the original units of measurement:
## in fuzzy k-medoids one can equivalently use
unempFKM.med$Hraw1=Hraw(unempFKM.med$X,unempFKM.med$H)
unempFKM.med$Hraw2=unempFKM.med$X[unempFKM.med$medoid,]
## End(Not run)
JACCARD.F Fuzzy Jaccard index
Description
Produces the fuzzy version of the Jaccard index between a hard (reference) partition and a fuzzy
partition.
Usage
JACCARD.F(VC, U, t_norm)
Arguments
VC Vector of class labels
U Fuzzy membership degree matrix or data.frame
t_norm Type of the triangular norm: "minimum" (minimum triangular norm), "triangu-
lar product" (product norm) (default: "minimum")
Value
jaccard.f Value of the fuzzy Jaccard index
Author(s)
<NAME>, <NAME>, <NAME>
References
Campello, R.J., 2007. A fuzzy extension of the Rand index and other related indexes for clustering
and classification assessment. Pattern Recognition Letters, 28, 833-841.
Jaccard, P., 1901. Étude comparative de la distribution florale dans une portion des Alpes et des
Jura. Bulletin de la Société Vaudoise des Sciences Naturelles, 37, 547-579.
See Also
ARI.F, RI.F, Fclust.compare
Examples
## Not run:
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## fuzzy Jaccard index
jaccard.f=JACCARD.F(VC=Mc$Type,U=clust$U)
## End(Not run)
Mc McDonald’s data
Description
Nutrition analysis of McDonald’s menu items.
Usage
data(Mc)
Format
A data.frame with 81 rows and 16 columns.
Details
Data are from McDonald’s USA Nutrition Facts for Popular Menu Items. A subset of menu items is
reported. Beverages are excluded. In case of duplications, regular size or medium size information
is reported. The variable Type is a factor the levels of which specify the kind of the menu items.
Although some menu items could be well described by more than one level, only one level of the
variable Type specifies each menu item. Percent Daily Values (%DV) are based on a 2,000 calorie
diet. Some menu items are registered trademarks.
Author(s)
<NAME>, <NAME>, <NAME>
See Also
Fclust, FKM, FKM.ent, FKM.med
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
p=(ncol(Mc)-1)
## fuzzy k-means (excluded the factor column Type (last column))
clust.FKM=FKM(Mc[,1:p],k=6,m=1.5,stand=1)
## new factor column Cluster.FKM containing the cluster assignment information
## using fuzzy k-means
Mc[,ncol(Mc)+1]=factor(clust.FKM$clus[,1])
colnames(Mc)[ncol(Mc)]=("Cluster.FKM")
levels(Mc$Cluster.FKM)=paste("Clus FKM",1:clust.FKM$k,sep=" ")
## contingency table (Cluster.FKM vs Type)
## to assess whether clusters can be interpreted in terms of the levels of Type
table(Mc$Type,Mc$Cluster.FKM)
## prototypes using the original units of measurement
clust.FKM$Hraw=Hraw(clust.FKM$X,clust.FKM$H)
clust.FKM$Hraw
## fuzzy k-means with entropy regularization
## (excluded the factor column Type (last column))
## Not run:
## It may take more than a few seconds
clust.FKM.ent=FKM.ent(Mc[,1:p],k=6,ent=3,RS=10,stand=1)
## new factor column Cluster.FKM.ent containing the cluster assignment information
## using fuzzy k-means with entropy regularization
Mc[,ncol(Mc)+1]=factor(clust.FKM.ent$clus[,1])
colnames(Mc)[ncol(Mc)]=("Cluster.FKM.ent")
levels(Mc$Cluster.FKM.ent)=paste("Clus FKM.ent",1:clust.FKM.ent$k,sep=" ")
## contingency table (Cluster.FKM.ent vs Type)
## to assess whether clusters can be interpreted in terms of the levels of Type
table(Mc$Type,Mc$Cluster.FKM.ent)
## prototypes using the original units of measurement
clust.FKM.ent$Hraw=Hraw(clust.FKM.ent$X,clust.FKM.ent$H)
clust.FKM.ent$Hraw
## End(Not run)
## fuzzy k-medoids
## (excluded the factor column Type (last column))
clust.FKM.med=FKM.med(Mc[,1:p],k=6,m=1.1,RS=10,stand=1)
## new factor column Cluster.FKM.med containing the cluster assignment information
## using fuzzy k-medoids
Mc[,ncol(Mc)+1]=factor(clust.FKM.med$clus[,1])
colnames(Mc)[ncol(Mc)]=("Cluster.FKM.med")
levels(Mc$Cluster.FKM.med)=paste("Clus FKM.med",1:clust.FKM.med$k,sep=" ")
## contingency table (Cluster.FKM.med vs Type)
## to assess whether clusters can be interpreted in terms of the levels of Type
table(Mc$Type,Mc$Cluster.FKM.med)
## prototypes using the original units of measurement
clust.FKM.med$Hraw=Hraw(clust.FKM.med$X,clust.FKM.med$H)
clust.FKM.med$Hraw
## or, equivalently,
Mc[clust.FKM.med$medoid,1:p]
MPC Modified partition coefficient
Description
Produces the modified partition coefficient index. The optimal number of clusters k is such that the
index takes the maximum value.
Usage
MPC (U)
Arguments
U Membership degree matrix
Value
mpc Value of the modified partition coefficient index
Author(s)
<NAME>, <NAME>, <NAME>
References
Dave’ R.N., 1996. Validating fuzzy partitions obtained through c-shells clustering. Pattern Recog-
nition Letters, 17, 613-623.
See Also
PC, PE, SIL, SIL.F, XB, Fclust, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## modified partition coefficient
mpc=MPC(clust$U)
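## a hedged sketch (not part of the original manual): comparing MPC across
## candidate numbers of clusters; the k with the largest MPC is preferred
mpc.values=sapply(2:6,function(k) MPC(FKM(Mc[,1:(ncol(Mc)-1)],k=k,m=1.5,stand=1)$U))
names(mpc.values)=2:6
mpc.values
which.max(mpc.values)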
NBA NBA teams data
Description
NBA team statistics from the 2017-2018 regular season.
Usage
data(NBA)
Format
A data.frame with 30 rows and 22 columns.
Details
Data refer to some statistics of the NBA teams for the regular season 2017-2018. The teams are
distinguished according to two classification variables.
The statistics are: number of wins (W), field goals made (FGM), field goals attempted (FGA), field
goals percentage (FGP), 3 point field goals made (3PM), 3 point field goals attempted (3PA), 3 point
field goals percentage (3PP), free throws made (FTM), free throws attempted (FTA), free throws
percentage (FTP), offensive rebounds (OREB), defensive rebounds (DREB), assists (AST), turnovers
(TOV), steals (STL), blocks (BLK), blocked field goal attempts (BLKA), personal fouls (PF), personal
fouls drawn (PFD) and points (PTS). Moreover, reported are the conference (Conference) and the
playoff appearance (Playoff).
Author(s)
<NAME>, <NAME>, <NAME>
Source
https://stats.nba.com/teams/traditional/
See Also
FKM
Examples
## Not run:
data(NBA)
## A subset of variables is considered
X <- NBA[,c(4,7,10,11,12,13,14,15,16,17,20)]
clust.FKM=FKM(X=X,k=2:6,m=1.5,RS=50,stand=1,index="SIL.F",alpha=1)
summary(clust.FKM)
## End(Not run)
NEFRC Non-Euclidean Fuzzy Relational Clustering
Description
Performs the Non-Euclidean Fuzzy Relational data Clustering algorithm.
Usage
NEFRC(D, k, m, RS, startU, index, alpha, conv, maxit, seed)
Arguments
D Matrix or data.frame containing distances/dissimilarities
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
m Parameter of fuzziness (default: 2)
RS Number of (random) starts (default: 1)
startU Rational start for the membership degree matrix U (default: no rational start)
conv Convergence criterion (default: 1e-9)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix (NULL for NEFRC)
F Array containing the covariance matrices of all the clusters (NULL for NEFRC).
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for NEFRC).
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness
ent Degree of fuzzy entropy (NULL for NEFRC)
b Parameter of the polynomial fuzzifier (NULL for NEFRC)
vp Volume parameter (NULL for NEFRC)
delta Noise distance (NULL for NEFRC)
stand Standardization (Yes if stand=1, No if stand=0) (NULL for NEFRC)
Xca Data used in the clustering algorithm (NULL for NEFRC, D is used)
X Raw data (NULL for NEFRC)
D Dissimilarity matrix
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., & <NAME>. 2002. Robust fuzzy clustering of relational data. IEEE Transactions on
Fuzzy Systems, 10(6), 713-727.
See Also
NEFRC.noise, print.fclust, summary.fclust, plot.fclust
Examples
## Not run:
require(cluster)
data("houseVotes")
X <- houseVotes[,-1]
D <- daisy(x = X, metric = "gower")
clust.NEFRC <- NEFRC(D = D, k = 2:6, m = 2, index = "SIL.F")
summary(clust.NEFRC)
plot(clust.NEFRC)
## End(Not run)
NEFRC.noise Non-Euclidean Fuzzy Relational Clustering with noise cluster
Description
Performs the Non-Euclidean Fuzzy Relational data Clustering algorithm.
The noise cluster is an additional cluster (with respect to the k standard clusters) such that objects
recognized to be outliers are assigned to it with high membership degrees.
Usage
NEFRC.noise(D, k, m, delta, RS, startU, index, alpha, conv, maxit, seed)
Arguments
D Matrix or data.frame containing distances/dissimilarities
k An integer value or vector specifying the number of clusters for which the index
is to be calculated (default: 2:6)
m Parameter of fuzziness (default: 2)
delta Noise distance (default: average observed distance)
RS Number of (random) starts (default: 1)
startU Rational start for the membership degree matrix U (default: no rational start)
index Cluster validity index to select the number of clusters: "PC" (partition coeffi-
cient), "PE" (partition entropy), "MPC" (modified partition coefficient), "SIL"
(silhouette), "SIL.F" (fuzzy silhouette) (default: "SIL.F")
alpha Weighting coefficient for the fuzzy silhouette index SIL.F (default: 1)
conv Convergence criterion (default: 1e-9)
maxit Maximum number of iterations (default: 1e+6)
seed Seed value for random number generation (default: NULL)
Details
If startU is given, the argument k is ignored (the number of clusters is ncol(startU)).
If startU is given, the first element of value, cput and iter refer to the rational start.
Value
Object of class fclust, which is a list with the following components:
U Membership degree matrix
H Prototype matrix (NULL for NEFRC.noise)
F Array containing the covariance matrices of all the clusters (NULL for NEFRC.noise)
clus Matrix containing the indexes of the clusters where the objects are assigned
(column 1) and the associated membership degrees (column 2)
medoid Vector containing the indexes of the medoid objects (NULL for NEFRC.noise)
value Vector containing the loss function values for the RS starts
criterion Vector containing the values of the cluster validity index
iter Vector containing the numbers of iterations for the RS starts
k Number of clusters
m Parameter of fuzziness
ent Degree of fuzzy entropy (NULL for NEFRC.noise)
b Parameter of the polynomial fuzzifier (NULL for NEFRC.noise)
vp Volume parameter (NULL for NEFRC.noise)
delta Noise distance (NULL for NEFRC.noise).
stand Standardization (Yes if stand=1, No if stand=0) (NULL for NEFRC.noise).
Xca Data used in the clustering algorithm (NULL for NEFRC.noise, D is used)
X Raw data (NULL for NEFRC.noise)
D Dissimilarity matrix
call Matched call
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., & Sen, S. 2002. Robust fuzzy clustering of relational data. IEEE Transactions on
Fuzzy Systems, 10(6), 713-727.
See Also
NEFRC, print.fclust, summary.fclust, plot.fclust
Examples
## Not run:
require(cluster)
data("houseVotes")
X <- houseVotes[,-1]
D <- daisy(x = X, metric = "gower")
clust.NEFRC.noise <- NEFRC.noise(D = D, k = 2:6, m = 2, index = "SIL.F")
summary(clust.NEFRC.noise)
plot(clust.NEFRC.noise)
## End(Not run)
PC Partition coefficient
Description
Produces the partition coefficient index. The optimal number of clusters k is such that the index
takes the maximum value.
Usage
PC (U)
Arguments
U Membership degree matrix
Value
pc Value of the partition coefficient index
Author(s)
<NAME>, <NAME>, <NAME>
References
Bezdek J.C., 1974. Cluster validity with fuzzy sets. Journal of Cybernetics, 3, 58-73.
See Also
PE, MPC, SIL, SIL.F, XB, Fclust, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## partition coefficient
pc=PC(clust$U)
PE Partition entropy
Description
Produces the partition entropy index. The optimal number of clusters k is such that the index
takes the minimum value.
Usage
PE (U, b)
Arguments
U Membership degree matrix
b Logarithmic base (default: exp(1))
Value
pe Value of the partition entropy index
Author(s)
<NAME>, <NAME>, <NAME>
References
Bezdek J.C., 1981. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press,
New York.
See Also
PC, MPC, SIL, SIL.F, XB, Fclust, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## partition entropy index
pe=PE(clust$U)
plot.fclust Plotting fuzzy clustering output
Description
Plot method for class fclust. The function creates a scatter plot visualizing the cluster structure.
The objects are represented by points in the plot using observed variables or principal components.
Usage
## S3 method for class 'fclust'
plot(x, v1v2, colclus, umin, ucex, pca, ...)
Arguments
x Object of class fclust
v1v2 Vector with two elements specifying the numbers of the variables (or of the
principal components) to be plotted (default: 1:2); in case of relational data, the
argument is ignored
colclus Vector specifying the color palette for the clusters (default: palette(rainbow(k)))
umin Lowest maximal membership degree such that an object is assigned to a cluster
(default: 0)
ucex Logical value specifying if the points are magnified according to the maximal
membership degree (if ucex=TRUE) (default: ucex=FALSE)
pca Logical value specifying if the objects are represented using principal compo-
nents (if pca=TRUE) (default: pca=FALSE); in case of relational data, the argu-
ment is ignored
... Additional arguments for plot
Details
In the scatter plot the objects are represented by circles (pch=16) and the prototypes by stars (pch=8)
using observed variables (if pca=FALSE) or principal components (if pca=TRUE), the numbers of
which are specified in v1v2. Their colors differ for every cluster according to colclus. Objects
such that their maximal membership degrees are lower than umin are in black. The sizes of the
circles depend on the maximal membership degrees of the corresponding objects if ucex=TRUE.
Also note that principal components are extracted using standardized data.
In case of relational data, the first two components resulting from Non-metric Multidimensional
Scaling performed using the package MASS are used.
Author(s)
<NAME>, <NAME>, <NAME>
See Also
VIFCR, VAT, VCV, VCV2, Fclust, print.fclust, summary.fclust, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## Scatter plot of Calories vs Cholesterol (mg)
names(Mc)
plot(clust,v1v2=c(1,5))
## Scatter plot of Calories vs Cholesterol (mg) using gray levels for the clusters
plot(clust,v1v2=c(1,5),colclus=gray.colors(6))
## Scatter plot of Calories vs Cholesterol (mg)
## coloring in black objects with maximal membership degree lower than 0.5
plot(clust,v1v2=c(1,5),umin=0.5)
## Scatter plot of Calories vs Cholesterol (mg)
## coloring in black objects with maximal membership degree lower than 0.5
## and magnifying the points according to the maximal membership degree
plot(clust,v1v2=c(1,5),umin=0.5,ucex=TRUE)
## Scatter plot using the first two principal components and
## coloring in black objects with maximal membership degree lower than 0.3
plot(clust,v1v2=1:2,umin=0.3,pca=TRUE)
print.fclust Printing fuzzy clustering output
Description
Print method for class fclust.
Usage
## S3 method for class 'fclust'
print(x, ...)
Arguments
x Object of class fclust
... Additional arguments for print
Details
The function displays the number of objects, the number of clusters, the closest hard clustering
partition (objects assigned to the clusters with the highest membership degree) and the membership
degree matrix (rounded).
Author(s)
<NAME>, <NAME>, <NAME>
See Also
Fclust, summary.fclust, plot.fclust, unemployment
Examples
## unemployment data
data(unemployment)
## fuzzy k-means
unempFKM=FKM(unemployment,k=3,stand=1)
unempFKM
RI.F Fuzzy Rand index
Description
Produces the fuzzy version of the Rand index between a hard (reference) partition and a fuzzy
partition.
Usage
RI.F(VC, U, t_norm)
Arguments
VC Vector of class labels
U Fuzzy membership degree matrix or data.frame
t_norm Type of the triangular norm: "minimum" (minimum triangular norm), "triangu-
lar product" (product norm) (default: "minimum")
Value
ri.f Value of the fuzzy Rand index
Author(s)
<NAME>, <NAME>, <NAME>
References
Campello, R.J., 2007. A fuzzy extension of the Rand index and other related indexes for clustering
and classification assessment. Pattern Recognition Letters, 28, 833-841.
Rand, W.M., 1971. Objective criteria for the evaluation of clustering methods. Journal of the
American Statistical Association, 66, 846-850.
See Also
ARI.F, JACCARD.F, Fclust.compare
Examples
## Not run:
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## fuzzy Rand index
ri.f=RI.F(VC=Mc$Type,U=clust$U)
## End(Not run)
SIL Silhouette index
Description
Produces the silhouette index. The optimal number of clusters k is such that the index takes the
maximum value.
Usage
SIL (Xca, U, distance)
Arguments
Xca Matrix or data.frame
U Membership degree matrix
distance If distance=TRUE, Xca is assumed to contain distances/dissimilarities (default:
FALSE)
Details
Xca should contain the same dataset used in the clustering algorithm, i.e., if the clustering algorithm
is run using standardized data, then SIL should be computed using the same standardized data.
Set distance=TRUE if Xca is a distance/dissimilarity matrix.
Value
sil.obj Vector containing the silhouette indexes for all the objects
sil Value of the silhouette index (mean of sil.obj)
Author(s)
<NAME>, <NAME>, <NAME>
References
Kaufman L., <NAME>., 1990. Finding Groups in Data: An Introduction to Cluster Analysis.
Wiley, New York.
See Also
PC, PE, MPC, SIL.F, XB, Fclust, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## silhouette index
sil=SIL(clust$Xca,clust$U)
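## a hedged sketch (not part of the original manual): computing the silhouette
## index from a dissimilarity matrix via distance=TRUE, assuming the matrix
## form returned by as.matrix(dist(...)) is accepted
D=as.matrix(dist(clust$Xca))
sil.dist=SIL(D,clust$U,distance=TRUE)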
SIL.F Fuzzy silhouette index
Description
Produces the fuzzy silhouette index. The optimal number of clusters k is such that the index takes
the maximum value.
Usage
SIL.F (Xca, U, alpha, distance)
Arguments
Xca Matrix or data.frame
U Membership degree matrix
alpha Weighting coefficient (default: 1)
distance If distance=TRUE, Xca is assumed to contain distances/dissimilarities (default:
FALSE)
Details
Xca should contain the same dataset used in the clustering algorithm, i.e., if the clustering algorithm
is run using standardized data, then SIL.F should be computed using the same standardized data.
Set distance=TRUE if Xca is a distance/dissimilarity matrix.
Value
sil.f Value of the fuzzy silhouette index
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., 2006. A fuzzy extension of the silhouette width criterion for
cluster analysis. Fuzzy Sets and Systems, 157, 2858-2875.
See Also
PC, PE, MPC, SIL, XB, Fclust, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## fuzzy silhouette index
sil.f=SIL.F(clust$Xca,clust$U)
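## a hedged sketch (not part of the original manual): a larger weighting
## coefficient alpha gives more weight to objects with clear cluster assignment
sil.f2=SIL.F(clust$Xca,clust$U,alpha=2)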
summary.fclust Summarizing fuzzy clustering output
Description
Summary method for class fclust.
Usage
## S3 method for class 'fclust'
summary(object, ...)
Arguments
object Object of class fclust
... Additional arguments for summary
Details
The function displays the number of objects, the number of clusters, the cluster sizes, the closest
hard clustering partition (objects assigned to the clusters with the highest membership degree), the
cluster memberships (using the closest hard clustering partition), the number of objects with unclear
assignment (when the maximal membership degree is lower than 0.5), the objects with unclear as-
signment and the cluster sizes without unclear assignments (only if objects with unclear assignment
are present), the cluster summary (for every cluster: size, minimal membership degree, maximal
membership degree, average membership degree, number of objects with unclear assignment) and
the Euclidean distance matrix for the cluster prototypes.
Author(s)
<NAME>, <NAME>, <NAME>
See Also
Fclust, print.fclust, plot.fclust, unemployment
Examples
## unemployment data
data(unemployment)
## fuzzy k-means
unempFKM=FKM(unemployment,k=3,stand=1)
summary(unempFKM)
synt.data Synthetic data
Description
Synthetic dataset with 2 non-spherical clusters.
Usage
data(synt.data)
Format
A matrix with 302 rows and 2 columns.
Details
Although two clusters are clearly visible, fuzzy k-means fails to discover them. The Gustafson and
Kessel-like fuzzy k-means should be used for finding the known-in-advance clusters.
Author(s)
<NAME>, <NAME>, <NAME>
See Also
Fclust, FKM, FKM.gk, plot.fclust
Examples
## Not run:
## synthetic data
data(synt.data)
plot(synt.data)
## fuzzy k-means
syntFKM=FKM(synt.data)
## Gustafson and Kessel-like fuzzy k-means
syntFKM.gk=FKM.gk(synt.data)
## plot of cluster structures from fuzzy k-means and Gustafson and Kessel-like fuzzy k-means
par(mfcol = c(2,1))
plot(syntFKM)
plot(syntFKM.gk)
## End(Not run)
synt.data2 Synthetic data
Description
Synthetic dataset with 3 non-spherical clusters.
Usage
data(synt.data2)
Format
A matrix with 240 rows and 2 columns.
Details
Although three clusters are clearly visible, the Gustafson and Kessel-like fuzzy k-means clustering
algorithm FKM.gk fails due to singularity of some covariance matrices. The Gustafson, Kessel and
Babuska-like fuzzy k-means clustering algorithm FKM.gkb should be used to avoid the singularity
problem.
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., 1978. Fuzzy clustering with a fuzzy covariance matrix. Proceedings
of the IEEE Conference on Decision and Control, pp. 761-766.
See Also
Fclust, FKM.gk, FKM.gkb, plot.fclust
Examples
data(synt.data2)
plot(synt.data2)
## Gustafson and Kessel-like fuzzy k-means
syntFKM.gk=FKM.gk(synt.data2, k = 3, RS = 1, seed = 123)
## Gustafson, Kessel and Babuska-like fuzzy k-means
syntFKM.gkb=FKM.gkb(synt.data2, k = 3, RS = 1, seed = 123)
unemployment Unemployment data
Description
Unemployment data about some European countries in 2011.
Usage
data(unemployment)
Format
A data.frame with 32 rows and 3 columns.
Details
The source is Eurostat news-release 104/2012 - 4 July 2012. The 32 observations are European
countries: BELGIUM, BULGARIA, CZECHREPUBLIC, DENMARK, GERMANY, ESTONIA,
IRELAND, GREECE, SPAIN, FRANCE, ITALY, CYPRUS, LATVIA, LITHUANIA, LUXEM-
BOURG, HUNGARY, MALTA, NETHERLANDS, AUSTRIA, POLAND, PORTUGAL, ROMA-
NIA, SLOVENIA, SLOVAKIA, FINLAND, SWEDEN, UNITEDKINGDOM, ICELAND, NOR-
WAY, SWITZERLAND, CROATIA, TURKEY. The 3 variables are: the total unemployment rate,
defined as the percentage of unemployed persons aged 15-74 in the economically active population
(Variable 1); the youth unemployment rate, defined as the unemployment rate for young people
aged between 15 and 24 (Variable 2); the long-term unemployment share, defined as the Percentage
of unemployed persons who have been unemployed for 12 months or more (Variable 3). Non-
spherical clusters seem to be present in the data. The Gustafson and Kessel-like fuzzy k-means
should be used for finding them.
Author(s)
<NAME>, <NAME>, <NAME>
See Also
Fclust, FKM, FKM.gk
Examples
## unemployment data
data(unemployment)
## fuzzy k-means (only spherical clusters)
unempFKM=FKM(unemployment,k=3)
## Gustafson and Kessel-like fuzzy k-means (non-spherical clusters)
unempFKM.gk=FKM.gk(unemployment,k=3,RS=10)
VAT Visual Assessment of (Cluster) Tendency
Description
Digital intensity image to inspect the number of clusters
Usage
VAT (Xca)
Arguments
Xca Matrix or data.frame (usually data to be used in the clustering algorithm)
Details
Each cell refers to a dissimilarity between a pair of objects. Small dissimilarities are represented by
dark shades and large dissimilarities are represented by light shades. In the plot the dissimilarities
are reorganized in such a way that, roughly speaking, (darkly shaded) diagonal blocks correspond
to clusters in the data. Therefore, k dark blocks along its main diagonal suggest that the data contain
k (as yet unfound) clusters and the size of each block represents the approximate size of the cluster.
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., 2002. VAT: a tool for visual assessment of (cluster) tendency. Pro-
ceedings of the IEEE International Joint Conference on Neural Networks, pp. 2225-2230.
<NAME>., <NAME>., 2003. Visual cluster validity for prototype generator clustering mod-
els. Pattern Recognition Letters, 24, 1563-1569.
<NAME>., <NAME>., 2008. VCV2 - Visual Cluster Validity. In Zurada J.M., <NAME>.,
<NAME>. (Eds.): Lecture Notes in Computer Science, 5050, pp. 293-308. Springer-Verlag, Berlin
Heidelberg.
See Also
plot.fclust, VIFCR, VCV, VCV2, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## data standardization (after removing the column Serving Size)
Mc=scale(Mc[,1:(ncol(Mc)-1)],center=TRUE,scale=TRUE)[,]
## plot of VAT
VAT(Mc)
VCV Visual Cluster Validity
Description
Digital intensity image generated using the prototype matrix (and the membership degree matrix)
to do cluster validation. The function also plots the VAT image.
Usage
VCV (Xca, U, H, which)
Arguments
Xca Matrix or data.frame (usually data used in the clustering algorithm)
U Membership degree matrix
H Prototype matrix
which If a subset of the plots is required, specify a subset of the numbers 1:2 (default:
1:2).
Details
Plot 1 (which=1): VAT. Each cell refers to a dissimilarity between a pair of objects. Small dissim-
ilarities are represented by dark shades and large dissimilarities are represented by light shades. In
the plot the dissimilarities are reorganized in such a way that, roughly speaking, (darkly shaded)
diagonal blocks correspond to clusters in the data. Therefore, k dark blocks along its main diagonal
suggest that the data contain k (as yet unfound) clusters and the size of each block represents the
approximate size of the cluster.
Plot 2 (which=2): VCV. Each cell refers to a dissimilarity between a pair of objects computed with
respect to the cluster prototypes. Small dissimilarities are represented by dark shades and large
dissimilarities are represented by light shades. In the plot the dissimilarities are organized by re-
ordering the clusters (the original first cluster is the first reordered cluster and the remaining clusters
are reordered so that (new) cluster c+1 is the nearest of the remaining clusters to (newly indexed)
cluster c) and the objects (in accordance with decreasing membership degrees). If k dark blocks
along its main diagonal are visible, then a k-cluster structure is revealed. Note that the actual num-
ber of clusters can be revealed even when a larger number of clusters is used. This suggests that
the correct value of k can sometimes be found by running the algorithm with a large value of k, and
then ascertaining its correct value from the visual evidence in the VCV image.
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., 2002. VAT: a tool for visual assessment of (cluster) tendency. Pro-
ceedings of the IEEE International Joint Conference on Neural Networks, pp. 2225-2230.
<NAME>., <NAME>., 2003. Visual cluster validity for prototype generator clustering models.
Pattern Recognition Letters, 24, 1563-1569.
See Also
plot.fclust, VIFCR, VAT, VCV2, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## plots of VAT and VCV
VCV(clust$Xca,clust$U,clust$H)
## plot of VCV
VCV(clust$Xca,clust$U,clust$H, 2)
VCV2 (New) Visual Cluster Validity
Description
Digital intensity image generated using the membership degree matrix to do cluster validation. The
function also plots the VAT image.
Usage
VCV2 (Xca, U, which)
Arguments
Xca Matrix or data.frame (usually data used in the clustering algorithm)
U Membership degree matrix
which If a subset of the plots is required, specify a subset of the numbers 1:2 (default:
1:2).
Details
Plot 1 (which=1): VAT. Each cell refers to a dissimilarity between a pair of objects. Small dissim-
ilarities are represented by dark shades and large dissimilarities are represented by light shades. In
the plot the dissimilarities are reorganized in such a way that, roughly speaking, (darkly shaded)
diagonal blocks correspond to clusters in the data. Therefore, k dark blocks along its main diagonal
suggest that the data contain k (as yet unfound) clusters and the size of each block represents the
approximate size of the cluster.
Plot 2 (which=2): VCV2. Each cell refers to a dissimilarity between a pair of objects computed with
respect to the cluster membership degrees. Small dissimilarities are represented by dark shades and
large dissimilarities are represented by light shades. In the plot the dissimilarities are reorganized
by using the VAT reordering. If k dark blocks along its main diagonal are visible, then a k-cluster
structure is revealed. Note that the actual number of clusters can be revealed even when a larger
number of clusters is used. This suggests that the correct value of k can sometimes be found by
running the algorithm with a large value of k, and then ascertaining its correct value from the visual
evidence in the VCV2 image.
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., 2002. VAT: a tool for visual assessment of (cluster) tendency. Pro-
ceedings of the IEEE International Joint Conference on Neural Networks, pp. 2225-2230.
<NAME>., <NAME>., 2008. VCV2 - Visual Cluster Validity. In Zurada J.M., <NAME>.,
<NAME>. (Eds.): Lecture Notes in Computer Science, 5050, pp. 293-308. Springer-Verlag, Berlin
Heidelberg.
See Also
plot.fclust, VIFCR, VAT, VCV, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## plots of VAT and VCV2
VCV2(clust$Xca,clust$U)
## plot of VCV2
VCV2(clust$Xca,clust$U, 2)
VIFCR Visual inspection of fuzzy clustering results
Description
Plots for validation of fuzzy clustering results. Three plots (selected by which) are available.
Usage
VIFCR (fclust.obj, which)
Arguments
fclust.obj Object of class fclust
which If a subset of the plots is required, specify a subset of the numbers 1:3 (default:
1:3).
Details
Plot 1 (which=1). Histogram of the membership degrees setting breaks=seq(from=0,to=1,by=0.1).
The frequencies are scaled so that the heights of the first and the latter rectangles are the same in
the ideal case of crisp (non-fuzzy) memberships. The fuzzy clustering solution should be such that
the heights of the first and the latter rectangles are high and those of the rectangles in the middle
are low. High heights of rectangles in the middle denote the presence of ambiguous membership
degrees. This is an indicator for a non-optimal clustering result.
Plot 2 (which=2). Scatter plot of the objects at the co-ordinates (u1,u2). For each object, u1 and u2
denote, respectively, the highest and the second highest membership degrees. All points lie within
the triangle with vertices (0,0), (0.5,0.5) and (1,0). In the ideal case of (almost) crisp membership
degrees all points are near the vertex (1,0). Points near the vertex (0.5,0.5) highlight ambiguous
objects shared by two clusters. Points near the vertex (0,0) are usually outliers characterized by low
membership degrees to all clusters (provided that the noise approach is considered).
Plot 3 (which=3). For each cluster, scatter plot of the objects at the co-ordinates (dc,uc). For
each object, dc is the squared Euclidean distance between the object and the cluster prototype and
uc is the membership degree of the object to the cluster. The ideal case is such that points are in the
upper left area or in the lower right area. In fact, this highlights high membership degrees for small
distances and low membership degrees for large distances.
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>., <NAME>., 2003. Visual inspection of fuzzy clustering results. In Benitez
J.M., <NAME>., <NAME>., <NAME>. (Eds.): Advances in Soft Computing - Engineering Design
and Manufacturing, pp. 65-76. Springer, London.
See Also
plot.fclust, VAT, VCV, VCV2, unemployment
Examples
## unemployment data
data(unemployment)
## fuzzy k-means
unempFKM=FKM(unemployment,k=3,stand=1)
## all plots
VIFCR(unempFKM)
## plots 1 and 3
VIFCR(unempFKM,c(1,3))
XB Xie and Beni index
Description
Produces the Xie and Beni index. The optimal number of clusters k is such that the index takes
the minimum value.
Usage
XB (Xca, U, H, m)
Arguments
Xca Matrix or data.frame
U Membership degree matrix
H Prototype matrix
m Parameter of fuzziness (default: 2)
Details
Xca should contain the same dataset used in the clustering algorithm, i.e., if the clustering algorithm
is run using standardized data, then XB should be computed using the same standardized data.
m should be the same parameter of fuzziness used in the clustering algorithm.
Value
xb Value of the Xie and Beni index
Author(s)
<NAME>, <NAME>, <NAME>
References
<NAME>., <NAME>. (1991). A validity measure for fuzzy clustering, IEEE Transactions on Pattern
Analysis and Machine Intelligence, 13, 841-847.
See Also
PC, PE, MPC, SIL, SIL.F, Fclust, Mc
Examples
## McDonald's data
data(Mc)
names(Mc)
## data normalization by dividing the nutrition facts by the Serving Size (column 1)
for (j in 2:(ncol(Mc)-1))
Mc[,j]=Mc[,j]/Mc[,1]
## removing the column Serving Size
Mc=Mc[,-1]
## fuzzy k-means
## (excluded the factor column Type (last column))
clust=FKM(Mc[,1:(ncol(Mc)-1)],k=6,m=1.5,stand=1)
## Xie and Beni index
xb=XB(clust$Xca,clust$U,clust$H,clust$m) |
infrared | readthedoc | YAML | infrared 2.0.1.dev3045 documentation
[infrared](index.html#document-index)
---
What is InfraRed?[¶](#what-is-infrared)
===
InfraRed is a plugin based system that aims to provide an easy-to-use CLI for Ansible based projects.
It aims to leverage the power of Ansible in managing / deploying systems, while providing an alternative, fully customized,
CLI experience that can be used by anyone, without prior Ansible knowledge.
The project originated from Red Hat OpenStack infrastructure team that looked for a solution to provide an “easier” method for installing OpenStack from CLI but has since grown and can be used for *any* Ansible based projects.
Welcome to infrared’s documentation![¶](#welcome-to-infrared-s-documentation)
---
### Bootstrap[¶](#bootstrap)
#### Setup[¶](#setup)
Clone infrared 2.0 from GitHub:
```
git clone https://github.com/redhat-openstack/infrared.git
```
Make sure that all [prerequisites](setup.html#Prerequisites) are installed.
Setup virtualenv and [install](setup.html#Installation) from source using pip:
```
cd infrared
virtualenv .venv && source .venv/bin/activate
pip install --upgrade pip
pip install --upgrade setuptools
pip install .
```
Warning
It’s important to upgrade `pip` first, as default `pip` version in RHEL (1.4) might fail on dependencies
Note
infrared will create a default [workspace](workspace.html#workspace) for you. This workspace will manage your environment details.
Note
For development work it’s better to install in editable mode and work with master branch:
```
pip install -e .
```
#### Provision[¶](#provision)
In this example we’ll use [virsh](virsh.html) provisioner in order to demonstrate how easy and fast it is to provision machines using infrared.
Add the virsh [plugin](plugins.html):
```
infrared plugin add plugins/virsh
```
Print virsh help message and all input options:
```
infrared virsh --help
```
For basic execution, the user should only provide data for the mandatory parameters; this can be done in two ways:
1. [CLI](#cli)
2. [Answers File](#answers-file)
##### CLI[¶](#cli)
Notice that the only three mandatory parameters of the virsh provisioner are:
> * `--host-address` - the host IP or FQDN to ssh to
> * `--host-key` - the private key file used to authenticate to your `host-address` server
> * `--topology-nodes` - type and role of nodes you would like to deploy (e.g: `controller:3` == 3 VMs that will act as controllers)
We can now execute the provisioning process by providing those parameters through the CLI:
```
infrared virsh --host-address $HOST --host-key $HOST_KEY --topology-nodes "undercloud:1,controller:1,compute:1"
```
That is it, the machines are now provisioned and accessible:
```
TASK [update inventory file symlink] *******************************************
[[ previous task time: 0:00:00.306717 = 0.31s / 209.71s ]]
changed: [localhost]
PLAY RECAP *********************************************************************
compute-0 : ok=4 changed=3 unreachable=0 failed=0
controller-0 : ok=5 changed=4 unreachable=0 failed=0
localhost : ok=4 changed=3 unreachable=0 failed=0
undercloud-0 : ok=4 changed=3 unreachable=0 failed=0
hypervisor : ok=85 changed=29 unreachable=0 failed=0
[[ previous task time: 0:00:00.237104 = 0.24s / 209.94s ]]
[[ previous play time: 0:00:00.555806 = 0.56s / 209.94s ]]
[[ previous playbook time: 0:03:29.943926 = 209.94s / 209.94s ]]
[[ previous total time: 0:03:29.944113 = 209.94s / 0.00s ]]
```
Note
You can also use the auto-generated ssh config file to easily access the machines
##### Answers File[¶](#answers-file)
Unlike with [CLI](#cli), here a new answers file (INI based) will be created.
This file contains all the default & mandatory parameters in a section of its own (named `virsh` in our case), so the user can easily replace all mandatory parameters.
When the file is ready, it should be provided as an input for the `--from-file` option.
Generate Answers file for virsh provisioner:
```
infrared virsh --generate-answers-file virsh_prov.ini
```
Review the config file and edit as required:
virsh_prov.ini[¶](#id3)
```
[virsh]
host-key = Required argument. Edit with any value, OR override with CLI: --host-key=<option>
host-address = Required argument. Edit with any value, OR override with CLI: --host-address=<option>
topology-nodes = Required argument. Edit with one of the allowed values OR override with CLI: --topology-nodes=<option>
host-user = root
```
Note
`host-key`, `host-address` and `topology-nodes` don’t have default values. All arguments can be edited in file or overridden directly from CLI.
Note
Do not use double quotes or apostrophes for the string values in the answers file. Infrared will NOT remove those quotation marks that surround the values.
Edit mandatory parameters values in the answers file:
```
[virsh]
host-key = ~/.ssh/id_rsa
host-address = my.host.address
topology-nodes = undercloud:1,controller:1,compute:1
host-user = root
```
Execute provisioning using the newly created answers file:
```
infrared virsh --from-file=virsh_prov.ini
```
Note
You can always overwrite parameters from answers file with parameters from CLI:
```
infrared virsh --from-file=virsh_prov.ini --topology-nodes="undercloud:1,controller:1,compute:1,ceph:1"
```
Done. Quick & Easy!
#### Installing[¶](#installing)
Now let's demonstrate the installation process by deploying an OpenStack environment using RHEL-OSP on the nodes we have provisioned in the previous stage.
##### Undercloud[¶](#undercloud)
First, we need to enable the tripleo-undercloud [plugin](plugins.html):
```
infrared plugin add plugins/tripleo-undercloud
```
Just like in the provisioning stage, here also the user should take care of the mandatory parameters
(by CLI or INI file) in order to be able to start the installation process.
Let’s deploy a [TripleO Undercloud](tripleo-undercloud.html):
```
infrared tripleo-undercloud --version 10 --images-task rpm
```
This will deploy OSP 10 (`Newton`) on the node `undercloud-0` provisioned previously.
Infrared provides support for upstream RDO deployments:
```
infrared tripleo-undercloud --version pike --images-task=import \
--images-url=https://images.rdoproject.org/pike/rdo_trunk/current-tripleo/stable/
```
This will deploy RDO Pike version (`OSP 12`) on the node `undercloud-0` provisioned previously.
Of course it is possible to use `--images-task=build` instead.
##### Overcloud[¶](#overcloud)
Like previously, need first to enable the associated [plugin](plugins.html):
```
infrared plugin add plugins/tripleo-overcloud
```
Let’s deploy a [TripleO Overcloud](tripleo-overcloud.html):
```
infrared tripleo-overcloud --deployment-files virt --version 10 --introspect yes --tagging yes --deploy yes
infrared cloud-config --deployment-files virt --tasks create_external_network,forward_overcloud_dashboard,network_time,tempest_deployer_input
```
This will deploy OSP 10 (`Newton`) overcloud from the undercloud defined previously.
Given the topology defined by the [Answers File](#answers-file) earlier, the overcloud should contain:
- 1 controller
- 1 compute
- 1 ceph storage
### Setup[¶](#setup)
#### Supported distros[¶](#supported-distros)
Currently supported distros are:
* Fedora 25, 26, 27
* RHEL 7.3, 7.4, 7.5
Warning
Python 2.7 and virtualenv are required.
#### Prerequisites[¶](#prerequisites)
Warning
sudo or root access is needed to install prerequisites!
General requirements:
```
sudo yum install git gcc libffi-devel openssl-devel
```
Note
Dependencies explained:
* git - version control of this project
* gcc - used for compilation of C backends for various libraries
* libffi-devel - required by [cffi](http://cffi.readthedocs.io/en/latest/)
* openssl-devel - required by [cryptography](http://cryptography.readthedocs.io/en/latest/)
A [Virtualenv](http://docs.python-guide.org/en/latest/dev/virtualenvs/) is required to create a clean python environment separated from the system:
```
sudo yum install python-virtualenv
```
Ansible requires [python binding for SELinux](http://docs.ansible.com/ansible/intro_installation.html#managed-node-requirements):
```
sudo yum install libselinux-python
```
otherwise it won’t be able to run modules with copy/file/template functions!
Note
libselinux-python is in [Prerequisites](#prerequisites) but doesn’t have a pip package. It must be installed on system level.
Note
Ansible requires also **libselinux-python** installed on all nodes using copy/file/template functions. Without this step all such tasks will fail!
#### Virtualenv[¶](#virtualenv)
`infrared` shares dependencies with other OpenStack products and projects.
Therefore there’s a high probability of conflicts with python dependencies,
which could result either in `infrared` failure or, worse, in breaking dependencies for other OpenStack products.
When working from source,
[virtualenv](http://docs.python-guide.org/en/latest/dev/virtualenvs/) usage is recommended for avoiding corrupting of system packages:
```
virtualenv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install --upgrade setuptools
```
Warning
**Use of latest ``pip`` is mandatory, especially on RHEL platform!**
Note
On Fedora 23 with EPEL repository enabled,
[RHBZ#1103566](https://bugzilla.redhat.com/show_bug.cgi?id=1103566) also requires
```
dnf install redhat-rpm-config
```
#### Installation[¶](#installation)
Clone stable branch from Github repository:
```
git clone https://github.com/redhat-openstack/infrared.git
```
Install `infrared` from source:
```
cd infrared
pip install .
```
Note
For development work it’s better to install in editable mode and work with master branch:
```
pip install -e .
```
#### Ansible Configuration[¶](#ansible-configuration)
A config file ([ansible.cfg](http://docs.ansible.com/ansible/latest/intro_configuration.html)) can be provided to customize Ansible's behavior.
Infrared tries to locate the Ansible config file (ansible.cfg) in several locations, in the following order:
> * ANSIBLE_CONFIG (an environment variable)
> * ansible.cfg (in the current directory)
> * ansible.cfg (in the Infrared home directory)
> * .ansible.cfg (in the home directory)
If none of these locations contains an Ansible config, InfraRed will create a default one in Infrared's home directory:
```
[defaults]
host_key_checking = False
forks = 500
timeout = 30
force_color = 1

[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
```
Note
Values for forks, host_key_checking and timeout have to be the same as shown above or greater.
#### Bash completion[¶](#bash-completion)
The bash completion script is in the etc/bash_completion.d directory of the git repository.
To enable global completion, copy this script to the proper path in the system (/etc/bash_completion.d):
```
cp etc/bash_completion.d/infrared /etc/bash_completion.d/
```
Alternatively, just source it to enable completion temporarily:
```
source etc/bash_completion.d/infrared
```
When working in a virtualenv, it might be a good idea to add the sourcing of this script to the virtualenv activation script:
```
echo ". $(pwd)/etc/bash_completion/infrared" >> ${VIRTUAL_ENV}/bin/activate
```
### Configuration[¶](#configuration)
Infrared uses the `IR_HOME` environment variable, which points to where infrared should keep all its internal configuration files and workspaces.
By default, `IR_HOME` points to the current working directory from which the infrared command is run.
To change that default location, simply set `IR_HOME`, for example:
```
$ IR_HOME=/tmp/newhome ir workspace list
```
This will generate the default configuration files in the specified directory.
#### Defaults from environment variables[¶](#defaults-from-environment-variables)
Infrared will load all environment variables starting with `IR_` and transform them into default argument values that are passed to all modules.
This means that `IR_FOO_BAR=1` will do the same thing as adding
`--foo-bar=1` to infrared CLI.
Infrared uses the same precedence order as Ansible when it decides which value to load; the first one found is used:
* command line argument
* environment variable
* configuration file
* code (plugin spec default) value
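For instance, reusing the `--foo-bar` example above, the following two invocations are equivalent, with the CLI argument winning if both are given (a sketch assuming a plugin that exposes `--foo-bar`, such as the bundled example plugin):
```
# Default supplied through the environment
IR_FOO_BAR=1 infrared example
# Same value passed explicitly on the command line (takes precedence)
infrared example --foo-bar=1
```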
#### Ansible configuration and limitations[¶](#ansible-configuration-and-limitations)
Usually infrared does not touch the settings specified in the Ansible configuration file (`ansible.cfg`), with a few exceptions.
Internally, infrared uses Ansible environment variables to set the directories for common resources (callback plugins, filter plugins, roles, etc.); this means that the following keys from the Ansible configuration files are ignored:
* `callback_plugins`
* `filter_plugins`
* `roles_path`
It is possible to define custom paths for those items by setting the corresponding environment variables, as shown in the example below:
* `ANSIBLE_CALLBACK_PLUGINS`
* `ANSIBLE_FILTER_PLUGINS`
* `ANSIBLE_ROLES_PATH`
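For example, custom resource directories can be exported before running infrared (the paths below are purely illustrative):
```
export ANSIBLE_CALLBACK_PLUGINS=/opt/custom-ansible/callback_plugins
export ANSIBLE_FILTER_PLUGINS=/opt/custom-ansible/filter_plugins
export ANSIBLE_ROLES_PATH=/opt/custom-ansible/roles
```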
### Workspaces[¶](#workspaces)
With workspaces, users can manage multiple environments created by infrared and switch between them.
All runtime files (Inventory, hosts, ssh configuration, ansible.cfg, etc…) will be loaded from a workspace directory and all output files
(Inventory, ssh keys, environment settings, facts caches, etc…) will be generated into that directory.
Create:
Creates a new workspace. If a name isn’t provided, infrared will generate one based on a timestamp:
```
infrared workspace create example
Workspace 'example' added
```
Note
The create option will not switch to the newly created workspace. In order to switch to the new workspace, the `checkout` command should be used
Inventory:
Fetch workspace inventory file (a symlink to the real file that might be changed by infrared executions):
```
infrared workspace inventory
/home/USER/.infrared/workspaces/example/hosts
```
Checkout:
Switches to the specified workspace:
```
infrared workspace checkout example3
Now using workspace: 'example3'
```
Creates a new workspace if the `--create` or `-c` flag is specified, and switches to it:
```
infrared workspace checkout --create example3
Workspace 'example3' added
Now using workspace: 'example3'
```
Note
The checked-out workspace is tracked via a status file in workspaces_dir, which means the checked-out workspace is persistent across shell sessions.
Alternatively, you can select the workspace with the `IR_WORKSPACE` environment variable, which is non-persistent:
```
ir workspace list
| Name | Is Active |
|---+---|
| bee | True |
| zoo | |
IR_WORKSPACE=zoo ir workspace list
| Name | Is Active |
|---+---|
| bee | |
| zoo | True |
ir workspace list
| Name | Is Active |
|---+---|
| bee | True |
| zoo | |
```
Warning
While `IR_WORKSPACE` is set ir workspace checkout is disabled
```
export IR_WORKSPACE=zoo
ir workspace checkout zoo
ERROR   'workspace checkout' command is disabled while IR_WORKSPACE environment variable is set.
```
List:
List all workspaces. The active workspace will be marked:
```
infrared workspace list
+---+---+
| Name | Active |
+---+---+
| example | |
| example2 | * |
| rdo_testing | |
+---+---+
```
Note
If the `--active` switch is given, only the active workspace will be printed
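For example, to print only the currently active workspace (a minimal sketch based on the listing above):
```
infrared workspace list --active
```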
Delete:
Deletes a workspace:
```
infrared workspace delete example
Workspace 'example' deleted
```
Delete multiple workspaces at once:
```
infrared workspace delete example1 example2 example3
Workspace 'example1' deleted
Workspace 'example2' deleted
Workspace 'example3' deleted
```
Cleanup:
Removes all files from a workspace. Unlike delete, this keeps the workspace itself and keeps it active if it was active before:
```
infrared workspace cleanup example2
```
Export:
> Packages a workspace in a tarball that can be shipped to, and loaded by, other infrared instances:
> ```
> infrared workspace export
> The active workspace example1 exported to example1.tar
> ```
> To export non-active workspaces, or control the output file:
> ```
> infrared workspace export -n example2 -f /tmp/look/at/my/workspace
> Workspace example2 exported to /tmp/look/at/my/workspace.tgz
> ```
Note
If the `-K`/`--copy-keys` flag is given, SSH keys from outside the workspace directory will be copied to the workspace directory and the inventory file will be changed accordingly.
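For instance, exporting a non-active workspace together with its SSH keys might look like this (a sketch combining the flags shown above; the output path is illustrative):
```
infrared workspace export -n example2 -K -f /tmp/example2-with-keys
```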
Import:
Load a previously exported workspace (local or remote):
```
infrared workspace import /tmp/look/at/my/new-workspace.tgz
infrared workspace import http://free.ir/workspaces/newworkspace.tgz
Workspace new-workspace was imported
```
Control the workspace name:
```
infrared workspace import /tmp/look/at/my/new-workspace --name example3
Workspace example3 was imported
```
Node list:
List nodes, managed by a specific workspace:
```
infrared workspace node-list
| Name | Address | Groups |
|---+---+---|
| controller-0 | 172.16.0.94 | overcloud_nodes, network, controller, openstack_nodes |
| controller-1 | 172.16.0.97 | overcloud_nodes, network, controller, openstack_nodes |
infrared workspace node-list --name some_workspace_name
```
`--group` - list nodes that are members of a specific group.
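For instance, to list only the nodes in the `controller` group (a sketch; the group name is taken from the sample output above):
```
infrared workspace node-list --group controller
```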
Group list:
List groups and nodes in them, managed by a specific workspace:
```
infrared workspace group-list
| Name | Nodes |
|---+---|
| overcloud_nodes | controller-0, compute-0, compute-1 |
| undercloud | undercloud-0 |
```
Note
To change the directory where Workspaces are managed, edit the `workspaces_base_folder` option.
Check the [Infrared Configuration](configuration.html) for details.
### Plugins[¶](#plugins)
In infrared 2.0, plugins are self-contained Ansible projects. They can still depend on common items provided by the core project.
Any Ansible project can become an `infrared` plugin by adhering to the following structure (see [tests/example](https://github.com/redhat-openstack/infrared/tree/master/tests/example) for an example plugin):
```
tests/example
├── main.yml # Main playbook. All execution starts here
├── plugin.spec # Plugin definition
├── roles # Add here roles for the project to use
│ └── example_role
│ └── tasks
│ └── main.yml
```
Note
This structure will work without any `ansible.cfg` file provided (unless common resources are used),
as Ansible will search for references in the relative paths described above. To use an `ansible.cfg` config file, use absolute paths to the plugin directory.
#### Plugin structure[¶](#plugin-structure)
##### Main entry[¶](#main-entry)
infrared will look for a playbook called `main.yml` to start the execution from.
Note
If you want to start the execution from another playbook, simply add it to the config section in `plugin.spec`:
```
config:
plugin_type: other
entry_point: your-playbook.yml
...
```
Plugins are regular Ansible projects, and as such, they might include or reference any item
(files, roles, var files, ansible plugins, modules, templates, etc…) using relative paths to current playbook.
They can also use roles, callback and filter plugins defined in the `common/` directory provided by infrared core.
An example of `plugin_dir/main.yml`:
```
- name: Main Play
  hosts: all
  vars_files:
      - vars/some_var_file.yml
  roles:
      - role: example_role
  tasks:
      - name: fail if no vars dict
        when: "provision is not defined"
        fail:

      - name: fail if input calls for it
        when: "provision.foo.bar == 'fail'"
        fail:

      - debug:
            var: inventory_dir
        tags: only_this

      - name: Test output
        vars:
            output_file: output.example
        file:
            path: "{{ inventory_dir }}/{{ output_file }}"
            state: touch
        when: "{{ provision is defined }}"
```
##### Plugin Specification[¶](#plugin-specification)
infrared gets all plugin info from the `plugin.spec` file, which follows YAML format.
This file defines the CLI flags this plugin exposes, its name and its type.
```
config:
plugin_type: provision
    entry_point: main.yml
subparsers:
example:
description: Example provisioner plugin
include_groups: ["Ansible options", "Inventory", "Common options", "Answers file"]
groups:
- title: Group A
options:
foo-bar:
type: Value
help: "foo.bar option"
default: "default string"
flag:
type: Flag
help: "flag option"
dictionary-val:
type: KeyValueList
help: "dictionary-val option"
- title: Group B
options:
iniopt:
type: IniType
help: "Help for '--iniopt'"
action: append
nestedlist:
type: NestedList
help: "Help for '--nestedlist'"
action: append
- title: Group C
options:
uni-dep:
type: Value
help: "Help for --uni-dep"
required_when: "req-arg-a == yes"
multi-dep:
type: Value
help: "Help for --multi-dep"
required_when:
- "req-arg-a == yes"
- "req-arg-b == yes"
req-arg-a:
type: Bool
help: "Help for --req-arg-a"
req-arg-b:
type: Bool
help: "Help for --req-arg-b"
- title: Group D
options:
deprecated-way:
type: Value
help: "Deprecated way to do it"
new-way:
deprecates: deprecated-way
type: Value
help: "New way to do it"
- title: Group E
options:
tasks:
type: ListOfFileNames
help: |
This is example for option which is with type "ListOfFileNames" and has
auto propagation of "Allowed Values" in help. When we ask for --help it
will look in plugin folder for directory name as 'lookup_dir' value, and
will add all file names to "Allowed Values"
lookup_dir: 'post_tasks'
```
Config section:
* Plugin type can be one of the following: `provision`, `install`, `test`, `other`.
* Entry point is the main playbook for the plugin. By default this will refer to the main.yml file but can be changed to any other file.
To access the options defined in the spec from your playbooks and roles use the plugin type with the option name.
For example, to access `dictionary-val` use `{{ provision.dictionary.val }}`.
Note
the vars-dict defined by [Complex option types](#complex-option-types) is nested under the `plugin_type` root key and passed to Ansible using `--extra-vars`, meaning that any vars file that has `plugin_type` as a root key will be overridden by that vars-dict. See [Ansible variable precedence](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable) for more details.
###### Include Groups[¶](#include-groups)
A plugin can reference preset control arguments to be included in its CLI
Answers File:
Instead of explicitly listing all CLI options every time, infrared plugins can read their input from an `INI` answers file, using the `--from-file` switch.
Use the `--generate-answers-file` switch to generate such a file. It will list all input arguments a plugin accepts, with their help and defaults.
CLI options still take precedence if explicitly listed, even when `--from-file`
is used.
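A typical round-trip might look like this (a sketch assuming the bundled `example` plugin is installed; the file name is illustrative):
```
# Generate an INI answers file listing all arguments the plugin accepts
infrared example --generate-answers-file example-answers.ini
# Edit the file, then feed it back; explicit CLI options still take precedence
infrared example --from-file example-answers.ini
```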
Common Options:
* `--dry-run`: Don’t execute the Ansible playbook. Only write the generated vars dict to stdout.
* `--output`: Redirect the generated vars dict from stdout to an explicit file (YAML format).
* `--extra-vars`: Inject custom input into the [vars dict](plugins.html#Complexoptiontypes).
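As an illustration, the common options can be combined with any plugin invocation (a sketch using the bundled `example` plugin; the file and variable names are made up):
```
# Only print the generated vars dict, without running Ansible
infrared example --dry-run
# Write the vars dict to a file and inject an extra variable
infrared example --output my-vars.yml -e foo=bar
```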
Inventory:
Load a new inventory into the active [workspace](workspace.html). The file is copied to the workspace directory so all `{{ inventory_dir }}` references in playbooks still point to the workspace directory (and not to the input file’s directory).
Note
This file permanently becomes the workspace’s inventory. To revert to the original inventory, the workspace must be cleaned.
Ansible options:
* `--verbose`: Set ansible verbosity level
* `--ansible-args`: Pass all subsequent input to Ansible as raw arguments. This is for power-users wishing to access Ansible functionality not exposed by infrared:
```
infrared [...] --ansible-args step;tags=tag1,tag2;forks=500
```
Is the equivalent of:
```
ansible-playbook [...] --step --tags=tag1,tag2 --forks 500
```
###### Complex option types[¶](#complex-option-types)
Infrared extends [argparse](https://docs.python.org/2/library/argparse.html) with the following option types.
These options are nested into the vars dict that is later passed to Ansible as extra-vars.
* Value:
String value.
* Bool:
Boolean value. Accepts any form of YAML boolean: `yes`/`no`, `true`/`false`, `on`/`off`.
Will fail if the string can’t be resolved to this type.
* Flag:
Acts as a flag, doesn’t parse any value.
Will always return `true`.
* IniType:
Value is in `section.option=value` format.
`append` is the default action for this type, so users can provide multiple args for the same parameter.
Warning
The IniType option is deprecated, use NestedDict instead.
* NestedDict:
Value is in `section.option=value` format.
`append` is the default action for this type, so users can provide multiple args for the same parameter. Example:
```
infrared example --foo option1=value1 --foo option2=value2
```
```
{"foo": {"option1": "value1",
"option2": "value2"}}
```
* NestedList:
The NestedList option inherits NestedDict attributes and differs from NestedDict by value format. It composes the value as a list of dictionaries. Example:
```
infrared example --foo option1=value1 --foo option1=value2
```
```
{"foo": [{"option1": "value1"},
{"option1": "value2"}]}
```
* KeyValueList:
String representation of a flat dict `--options option1:value1,option2:value2`
becomes:
```
{"options": {"option1": "value1",
"option2": "value2"}}
```
The nesting is done in the following manner: the option name is split by the `-` delimiter and each part is a key of a dict nested inside the previous one, starting with “plugin_type”. Then the value is nested at the inner-most level. Example:
```
infrared example --foo-bar=value1 --foo-another-bar=value2 --also_foo=value3
```
```
{
"provision": {
"foo": {
"bar": "value1",
"another": {
"bar": "value2"
}
},
"also_foo": "value3"
}
}
```
* FileValue: The absolute or relative path to a file. Infrared validates that the file exists and transforms the path into an absolute one.
* VarFile
Same as the `FileValue` type but additionally Infrared will check the following locations for a file:
+ `argument/name/option_value`
+ `<spec_root>/defaults/argument/name/option_value`
+ `<spec_root>/var/argument/name/option_value`
In the example above the CLI option name is `--argument-name`.
The VarFile type suits options that point to a file containing variables very well.
For example, a user can describe network topology parameters in separate files.
In that case, all these files can be put in the `<spec_root>/defaults/network` folder,
and the plugin specification can look like:
```
config:
plugin_type: provision
    entry_point: main.yml
subparsers:
my_plugin:
description: Provisioner virtual machines on a single Hypervisor using libvirt
groups:
- title: topology
options:
network:
type: VarFile
help: |
Network configuration to be used
__LISTYAMLS__
            default: default_3_nets
```
Then, the CLI call can simply look like:
```
infrared my_plugin --network=my_file
```
Here, the ‘my_file’ file should be present in the `/{defaults|var}/network` folder, otherwise infrared will display an error.
Infrared will transform that option into an absolute path and put it into the provision.network variable:
```
provision.network: /home/user/..../my_plugin/defaults/my_file
```
That variable can later be used in Ansible playbooks to load the appropriate network parameters.
Note
Infrared automatically checks for files with the .yml extension, so both `my_file` and
`my_file.yml` will be validated.
* ListOfVarFiles: A list of files. Same as `VarFile` but represents a comma-delimited (`,`) list of files.
* VarDir: The absolute or relative path to a directory. Same as `VarFile` but points to a directory instead of a file.
###### Placeholders[¶](#placeholders)
Placeholders allow users to add a level of sophistication to an option's help field.
* `__LISTYAMLS__`:
Will be replaced with a list of available YAML (`.yml`) files from the option’s settings dir.
Assume a plugin with the following directory tree is installed:
```
plugin_dir
├── main.yml # Main playbook. All execution starts here
├── plugin.spec # Plugin definition
└── vars # Add here variable files
├── yamlsopt
│ ├── file_A1.yml # This file will be listed for yamlsopt
│ └── file_A2.yml # This file will be listed also for yamlsopt
└── another
└──yamlsopt
├── file_B1.yml # This file will be listed for another-yamlsopt
└── file_B2.yml # This file will be listed also for another-yamlsopt
```
Content of `plugin_dir/plugin.spec`:
```
plugin_type: provision
description: Example provisioner plugin
subparsers:
example:
groups:
- title: GroupA
yamlsopt:
type: Value
help: |
help of yamlsopt option
__LISTYAMLS__
another-yamlsopt:
type: Value
help: |
help of another-yamlsopt option
__LISTYAMLS__
```
Execution of help command (`infrared example --help`) for the ‘example’ plugin, will produce the following help screen:
```
usage: infrared example [-h] [--another-yamlsopt ANOTHER-YAMLSOPT]
[--yamlsopt YAMLSOPT]
optional arguments:
-h, --help show this help message and exit
GroupA:
--another-yamlsopt ANOTHER-YAMLSOPT
help of another-yamlsopt option
Available values: ['file_B1', 'file_B2']
--yamlsopt YAMLSOPT help of yamlsopt option
Available values: ['file_A1', 'file_A2']
```
###### Required Arguments[¶](#required-arguments)
InfraRed provides the ability to mark an argument in a specification file as ‘required’ using two flags:
1. ‘required’ - A boolean value telling whether the argument is required or not. (default is ‘False’)
2. ‘required_when’ - Makes this argument required only when the mentioned argument is given and the condition is True.
More than one condition is allowed using YAML list style. In this case the argument will be required only if all the conditions are True.
For example, take a look at the `plugin.spec` (‘Group C’) in [Plugin Specification](#plugin-specification)
###### Argument Deprecation[¶](#argument-deprecation)
To deprecate an argument in InfraRed, add the ‘deprecates’ flag to the newer argument.
When a deprecated argument is used, InfraRed will warn you about it and will pass the new argument to Ansible with the value of the deprecated one.
For example, take a look at the `plugin.spec` (‘Group D’) in [Plugin Specification](#plugin-specification)
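As a sketch of how this behaves with the sample spec above (‘Group D’), the deprecated option is still accepted but its value is forwarded under the new name (the exact warning text is not shown here):
```
# Old, deprecated form - still accepted, triggers a deprecation warning
infrared example --deprecated-way=foo
# New, preferred form
infrared example --new-way=foo
```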
#### Plugin Manager[¶](#plugin-manager)
The following commands are used to manage infrared plugins
Add:
infrared will look for a [plugin.spec](plugins.html#plugin-specification) file in each given source and register the plugin under the given plugin-type (when source is ‘all’, all available plugins will be installed):
```
infrared plugin add tests/example
infrared plugin add example example2
infrared plugin add <git_url> [--revision <branch/tag/revision>]
infrared plugin add all
```
Note
`--revision` works with one plugin source only.
List:
List all available plugins, by type:
```
infrared plugin list
┌───────────┬─────────┐
│ Type │ Name │
├───────────┼─────────┤
│ provision │ example │
├───────────┼─────────┤
│ install │ │
├───────────┼─────────┤
│ test │ │
└───────────┴─────────┘
infrared plugin list --available
┌───────────┬────────────────────┬───────────┐
│ Type │ Name │ Installed │
├───────────┼────────────────────┼───────────┤
│ provision │ example │ * │
│ │ foreman │ │
│ │ openstack │ │
│ │ virsh │ │
├───────────┼────────────────────┼───────────┤
│ install │ collect-logs │ │
│ │ packstack │ │
│ │ tripleo-overcloud │ │
│ │ tripleo-undercloud │ │
├───────────┼────────────────────┼───────────┤
│ test │ rally │ │
│ │ tempest │ │
└───────────┴────────────────────┴───────────┘
```
Note
Supported plugin types are defined in the plugin settings file, which is auto-generated.
Check the [Infrared Configuration](configuration.html) for details.
Remove:
Remove the given plugins (when name is ‘all’, all plugins will be removed):
```
infrared plugin remove example example2
infrared plugin remove all
```
Freeze:
Output the installed plugins with their revisions in a [registry file format](plugins.html#registry-files).
When you need to install the exact same versions of plugins somewhere else, use the `freeze` command:
```
infrared plugin freeze > registry.yaml
```
Import:
Installs all plugins from the given registry file.
The registry file can be either a path to a local file or a URL:
```
infrared plugin import plugins/registry.yaml
infrared plugin import https://url/to/registry.yaml
```
Update:
Update a given Git-based plugin to a specific revision.
The update process pulls the latest changes from the remote and checks out a specific revision if one is given; otherwise, it will point to the tip of the updated branch.
If the `--skip_reqs` switch is set, the requirements installation will be skipped:
```
ir plugin update [--skip_reqs] [--hard-reset] name [revision]
```
Execute:
Plugins are added as subparsers under `plugin type` and will execute the [main playbook](plugins.html#Mainentry):
```
infrared example
```
#### Registry Files[¶](#registry-files)
Registry files are files containing a list of plugins to be installed using `infrared plugin import`.
These files are used to hold the result of `infrared plugin freeze` for the purpose of achieving repeatable installations.
The registry file contains a pinned version of everything that was installed when `infrared plugin freeze` was run.
##### Registry File Format[¶](#id2)
The registry file follows the YAML format.
Each section of the registry file contains an object which specifies the plugin to be installed:
* `src`: The path to the plugin. It can be either local path or git url
* `src_path`: (optional) Relative path within the repository where infrared plugin can be found.
* `rev`: (optional) If the plugin source is git, this allows specifying the revision to pull.
* `desc`: The plugin description.
* `type`: Plugin type can be one of the following: `provision`, `install`, `test`, `other`.
Example of a registry file:
```
---
plugin_name:
src: path/to/plugin/directory
rev: some_revision_hash
src_path: /path/to/plugin/in/repo
desc: Some plugin description
type: provision/test/install/other
```
#### How to create a new plugin[¶](#how-to-create-a-new-plugin)
Note
Check [COOKBOOK](plugins_guide.html) for the quick guide on how to create a plugin.
### Topology[¶](#topology)
A topology is a description of an environment you wish to provision.
We have divided it into two parts: [network topology](#network-topology) and [nodes topology](#nodes-topology).
#### Nodes topology[¶](#nodes-topology)
Before creating our environment, we need to decide how many and what type of nodes to create.
The following format is used to provide topology nodes:
```
infrared <provisioner_plugin> --topology-nodes NODENAME:AMOUNT
```
where `NODENAME` refers to files under `vars/topology/nodes/NODENAME.yml`
(or `defaults/topology/nodes/NODENAME.yml`)
and `AMOUNT` refers to the amount of nodes from the `NODENAME` we wish to create.
For example, if we choose the [Virsh](virsh.html) provisioner:
```
infrared virsh --topology-nodes undercloud:1,controller:3 ...
```
The above command will create 1 VM of type `undercloud` and 3 VMs of type `controller`.
For any node that is provided in the CLI `--topology-nodes` flag,
infrared looks for the node first under `vars/topology/nodes/NODENAME.yml`
and if not found, under `defaults/topology/nodes/NODENAME.yml`
where we supply a default set of supported / recommended topology files.
Let's examine the structure of a topology file (located at vars/topology/nodes/controller.yml):
```
name: controller # the name of the VM to create, in case of several of the same type, appended with "-#"
prefix: null     # in case we wish to add a prefix to the name
cpu: "4"         # number of vCPU to assign for the VM
memory: "8192"   # the amount of memory
swap: "0"        # swap allocation for the VM
disks:           # number of disks to create per VM
disk1: # the below values are passed `as is` to virt-install
import_url: null
path: "/var/lib/libvirt/images"
dev: "/dev/vda"
size: "40G"
cache: "unsafe"
preallocation: "metadata"
interfaces: # define the VM interfaces and to which network they should be connected
nic1:
network: "data"
nic2:
network: "management"
nic3:
network: "external"
external_network: management  # define what will be the default external network
groups:                       # ansible groups to assign to the newly created VM
- controller
- openstack_nodes
- overcloud_nodes
- network
```
For more topology file examples, please check out the default [available nodes](virsh_nodes)
To override default values in the topology dict the extra vars can be provided through the CLI. For example,
to add more memory to the controller node, the `override.controller.memory` value should be set:
```
infrared virsh --topology-nodes controller:1,compute:1 -e override.controller.memory=30720
```
#### Network topology[¶](#network-topology)
Before creating our environment, we need to decide on the number and types of networks to create. The following format is used to provide topology networks:
```
infrared <provisioner_plugin> --topology-network NET_TOPOLOGY
```
where `NET_TOPOLOGY` refers to files under `vars/topology/network/NET_TOPOLOGY.yml`
(or if not found, `defaults/topology/network/NET_TOPOLOGY.yml`)
To make it easier, we have created a default network topology file called `3_nets.yml` (you can find it under each provisioner plugin at defaults/topology/network/3_nets.yml) that will be created automatically.
For example, if we choose the [Virsh](virsh.html) provisioner:
```
infrared virsh --topology-network 3_nets ...
```
The above command will create 3 networks (based on the specification under `defaults/topology/network/3_nets.yml`):
* data network - an isolated network
* management network - NAT based network with a DHCP
* external network - NAT based network with DHCP
If we look in the `3_nets.yml` file, we will see this:
```
networks:
net1:
<snip>
net2:
name: "management" # the network name
external_connectivity: yes # whether we want it externally accessible
ip_address: "172.16.0.1" # the IP address of the bridge
netmask: "255.255.255.0"
forward: # forward method
type: "nat"
dhcp: # omit this if you don't want a DHCP
range: # the DHCP range to provide on that network
start: "172.16.0.2"
end: "172.16.0.100"
subnet_cidr: "172.16.0.0/24"
subnet_gateway: "172.16.0.1"
floating_ip: # whether you want to "save" a range for assigning IPs
start: "172.16.0.101"
end: "172.16.0.150"
net3:
<snip>
```
To override default values in the network dict the extra vars can be provided through the CLI. For example,
to change ip address of net2 network, the `override.networks.net2.ip_address` value should be set:
```
infrared virsh --topology-nodes controller:1,compute:1 -e override.networks.net2.ip_address=10.0.0.3
```
### Interactive SSH[¶](#interactive-ssh)
This plugin allows users to establish an interactive ssh session to a host managed by infrared. To do this, use:
```
infrared ssh <nodename>
```
where ‘nodename’ is a hostname from the inventory file.
For example:
```
infrared ssh controller-0
```
### New In infrared 2.0[¶](#new-in-infrared-2-0)
#### Highlights[¶](#highlights)
1. Workspaces:
Added [Workspaces](workspace.html). Every session must be tied to an active workspace.
All input and output files are taken from, and written to, the active workspace directory,
which allows easy migration of workspaces and avoids accidental overwrites of data
or corruption of the working directory.
This deprecates `ir-archive` in favor of `workspace import` and `workspace export`.
2. Stand-Alone Plugins:
Each plugin is fully contained within a single directory.
[Plugin structure](plugins.html) is fully defined and plugins can be loaded from any location on the system.
“Example plugin” shows contributors how to structure their Ansible projects to plug into infrared.
3. SSH:
Added the ability to establish an interactive ssh connection to nodes managed by a workspace, using the workspace’s inventory:
`infrared ssh <hostname>`
4. Single Entry-Point:
`ir-provisioner`, `ir-installer`, `ir-tester`
commands are deprecated in favor of a single `infrared` entry point (`ir` also works).
Type `infrared --help` to get the full usage manual.
5. TripleO:
`ir-installer ospd` was broken into two new plugins:
* [TripleO Undercloud](tripleo-undercloud.html):
Install undercloud up-to and including overcloud image creation
* [TripleO Overcloud](tripleo-overcloud.html):
Install overcloud using an existing undercloud.
6. Answers file:
The switch `--generate-conf-file` is renamed `--generate-answers-file` to avoid confusion with configuration files.
7. Topology:
The topology input type has been deprecated. Use KeyValueList to define node types and amounts, and `include_vars`
to add relevant files to playbooks. See the [Topology](topology.html) description for more information.
8. Cleanup:
the `--cleanup` option now accepts boolean values. Any YAML boolean is accepted
(“yes/no”, “true/false”, “on/off”)
9. Bootstrap:
On virtual environments, [tripleo-undercloud](tripleo-undercloud.html) can create a snapshot of the undercloud VM that can later be used to bypass the installation process.
#### Example Script Upgrade[¶](#example-script-upgrade)
infrared v2:
```
## CLEANUP ##
infrared virsh -v -o cleanup.yml \
--host-address example.redhat.com \
--host-key ~/.ssh/id_rsa \
--kill yes
## PROVISION ##
infrared virsh -v \
--topology-nodes undercloud:1,controller:1,compute:1 \
--host-address example.redhat.com \
--host-key ~/.ssh/id_rsa \
--image-url http://www.images.com/rhel-7.qcow2
## UNDERCLOUD ##
infrared tripleo-undercloud -v mirror tlv \
--version 9 \
--build passed_phase1 \
--ssl true \
--images-task rpm
## OVERCLOUD ##
infrared tripleo-overcloud -v \
--version 10 \
--introspect yes \
--tagging yes \
--deploy yes \
--deployment-files virt \
--network-backend vxlan \
--overcloud-ssl false \
--network-protocol ipv4
## POST TASKS ##
infrared cloud-config -v \
-o cloud-config.yml \
--deployment-files virt \
--tasks create_external_network,forward_overcloud_dashboard,network_time,tempest_deployer_input
## TEMPEST ##
infrared tempest -v \
--config-options "image.http_image=http://www.images.com/cirros.qcow2" \
--openstack-installer tripleo \
--openstack-version 9 \
--tests sanity
# Fetch inventory from active workspace
WORKSPACE=$(ir workspace list | awk '/*/ {print $2}')
ansible -i .workspaces/$WORKSPACE/hosts all -m ping
```
infrared v1:
```
## CLEANUP ##
ir-provisioner -d virsh -v \
--topology-nodes=undercloud:1,controller:1,compute:1 \
--host-address=example.redhat.com \
--host-key=~/.ssh/id_rsa \
--image-url=www.images.com/rhel-7.qcow2 \
--cleanup
## PROVISION ##
ir-provisioner -d virsh -v \
--topology-nodes=undercloud:1,controller:1,compute:1 \
--host-address=example.redhat.com \
--host-key=~/.ssh/id_rsa \
--image-url=http://www.images.com/rhel-7.qcow2
## OSPD ##
ir-installer --debug mirror tlv ospd -v -o install.yml\
--product-version=9 \
--product-build=latest \
--product-core-build=passed_phase1 \
--undercloud-ssl=true \
--images-task=rpm \
--deployment-files=$PWD/settings/installer/ospd/deployment/virt \
--network-backend=vxlan \
--overcloud-ssl=false \
--network-protocol=ipv4
ansible-playbook -i hosts -e @install.yml \
playbooks/installer/ospd/post_install/create_tempest_deployer_input_file.yml
## TEMPEST ##
ir-tester --debug tempest -v \
--config-options="image.http_image=http://www.images.com/cirros.qcow2" \
--tests=sanity.yml
ansible -i hosts all -m ping
```
### Advance Features[¶](#advance-features)
#### Injection points[¶](#injection-points)
Different people have different use cases which we cannot anticipate in advance.
To solve (partially) this need, we structured our playbooks in a way that breaks the logic into standalone plays.
Furthermore, each logical play can be overridden by the user at the invocation level.
Let's look at an example to make this point clearer.
Looking at our `virsh` main playbook, you will see:
```
- include: "{{ provision_cleanup | default('cleanup.yml') }}"
when: provision.cleanup|default(False)
```
Notice that the `include:` first tries to evaluate the variable `provision_cleanup` and afterwards defaults to our own cleanup playbook.
This condition allows users to inject their own custom cleanup process while still reusing all of our other playbooks.
##### Override playbooks[¶](#override-playbooks)
In this example we’ll use a custom playbook to override our cleanup play and replace it with the process described above.
First, let's create an empty playbook called `noop.yml`:
```
---
- name: Just another empty play
hosts: localhost
tasks:
- name: say hello!
debug:
msg: "Hello!"
```
Next, when invoking infrared, we will pass the variable that points to our new empty playbook:
```
infrared virsh --host-address $HOST --host-key $HOST_KEY --topology-nodes $TOPOLOGY --kill yes -e provision_cleanup=noop.yml
```
Now let's run it and see the results:
```
PLAY [Just another empty play] *************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [say hello!] **************************************************************
[[ previous task time: 0:00:00.459290 = 0.46s / 0.47s ]]
ok: [localhost] => {
"msg": "Hello!"
}
msg: Hello!
```
If you have a place you would like to have an injection point and one is not provided, please [contact us](contacts.html).
#### Infrared Ansible Tags[¶](#infrared-ansible-tags)
##### Stages and their corresponding Ansible tags[¶](#stages-and-their-corresponding-ansible-tags)
Each stage can be executed using the Ansible plugin with a set of Ansible tags that are passed to the infrared plugin command:
| Plugin | Stage | Ansible Tags |
| --- | --- | --- |
| virsh | Provision | pre, hypervisor, networks, vms, user, post |
| tripleo-undercloud | Undercloud Deploy | validation, hypervisor, init, install, shade, configure, deploy |
| | Images | images |
| tripleo-overcloud | Introspection | validation, init, introspect |
| | Tagging | tag |
| | Overcloud Deploy | loadbalancer, deploy_preparation, deploy |
| | Post tasks | post |
###### Usage examples:[¶](#usage-examples)
The ansible tags can be used by passing all subsequent input to Ansible as raw arguments.
Provision (virsh plugin):
```
infrared virsh \
-o provision_settings.yml \
--topology-nodes undercloud:1,controller:1,compute:1 \
--host-address <my.host.redhat.com> \
--host-key </path/to/host/key> \
--image-url <image-url> \
--ansible-args="tags=pre,hypervisor,networks,vms,user,post"
```
Undercloud Deploy stage (tripleo-undercloud plugin):
```
infrared tripleo-undercloud \
-o undercloud_settings.yml \
--mirror tlv \
--version 12 \
--build passed_phase1 \
--ansible-args="tags=validation,hypervisor,init,install,shade,configure,deploy"
```
##### Tags explanation:[¶](#tags-explanation)
* Provision
+ pre - Pre run configuration
+ Hypervisor - Prepare the hypervisor for provisioning
+ Networks - Create Networks
+ Vms - Provision Vms
+ User - Create a sudoer user for non root SSH login
+ Post - perform post provision tasks
* Undercloud Deploy
+ Validation - Perform validations
+ Hypervisor - Patch hypervisor for undercloud deployment
- Add rhos-release repos and update ipxe-roms
- Create the stack user on the hypervisor and allow SSH to hypervisor
+ Init - Pre Run Adjustments
+ Install - Configure and Install Undercloud Repositories
+ Shade - Prepare shade node
+ Configure - Configure Undercloud
+ Deploy - Installing the undercloud
* Images
+ Images - Get the undercloud version and prepare the images
* Introspection
+ Validation - Perform validations
+ Init - pre-tasks
+ Introspect - Introspect our machines
* Tagging
+ Tag - Tag our machines with proper flavors
* Overcloud Deploy
+ Loadbalancer - Provision loadbalancer node
+ Deploy_preparation - Environment setup
+ Deploy - Deploy the Overcloud
* Post tasks
+ Post - Perform post install tasks
#### Virthost packages/repo requirements[¶](#virthost-packages-repo-requirements)
##### Virsh[¶](#virsh)
##### UEFI mode related binaries[¶](#uefi-mode-related-binaries)
For a Virthost with RHEL 7.3, the OVMF package is available in the supplementary channel; please install the package from there and rerun the command. If the Virthost uses a different OS or OS version, please check below.
According to [Using UEFI with QEMU](https://fedoraproject.org/wiki/Using_UEFI_with_QEMU), there is only one way to get UEFI mode boot working with VMs, which is often required by the Ironic team due to lack of hardware or the impossibility of automating mode switching on baremetal nodes.
1. Add repo with OVMF binaries:
```
yum-config-manager --add-repo http://www.kraxel.org/repos/firmware.repo
```
2. Install OVMF binaries:
```
yum install -y edk2.git-ovmf-x64
```
3. Update QEMU config adding the following to the end of the /etc/libvirt/qemu.conf file:
```
nvram = [
"/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd"
]
```
4. Restart libvirt service:
```
systemctl restart libvirtd
```
##### IPv6 related host adjustments, which will also be required by UEFI[¶](#ipv6-related-host-adjustments-which-will-also-be-required-by-uefi)
When UEFI is in use, libvirt will require additional setup on the host, for IPv6 to be enabled:
1. Configure accept_ra = 2 in sysctl:
```
echo "net.ipv6.conf.all.accept_ra = 2" > /etc/sysctl.d/55-acceptra.conf
```
2. Enable the IPv6 related NAT modules:
```
modprobe nf_nat_ipv6
```
### Contact Us[¶](#contact-us)
#### Team[¶](#team)
| <NAME> | [<EMAIL>](mailto:fjansen%40redhat.com) |
| <NAME> | [<EMAIL>](mailto:obaranov%40redhat.com) |
| Mailing List | [<EMAIL>](mailto:rhos-infrared%40redhat.com) |
#### GitHub[¶](#github)
Issues are tracked via [GitHub](https://github.com/redhat-openstack/infrared/issues).
For any concern, please create a [new issue](https://github.com/redhat-openstack/infrared/issues/new).
#### IRC[¶](#irc)
We are available on `#infrared` irc channel on `freenode`.
### Contribute[¶](#contribute)
#### Red Hatters[¶](#red-hatters)
RedHat Employees should submit their changes via [review.gerrithub.io](https://review.gerrithub.io/#/q/project:redhat-openstack/infrared).
Only members of the `rhosqeauto-core` group on GerritHub or the
`redhat-openstack` (RDO) organization on GitHub can submit patches.
Ask any of the current members about it.
You can use git-review (dnf/yum/pip install).
To initialize the directory of `infrared` execute `git review -s`.
Every patch needs to have *Change-Id* in commit message
(`git review -s` installs post-commit hook to automatically add one).
For some more info about git review usage, read [GerritHub Intro](https://review.gerrithub.io/Documentation/intro-quick.html#_the_life_and_times_of_a_change) and [OpenStack Infra Manual](http://docs.openstack.org/infra/manual/developers.html).
#### Non Red Hatters[¶](#non-red-hatters)
Non-RedHat Employees should file pull requests to the [InfraRed project](https://github.com/redhat-openstack/infrared) on GitHub.
#### Release Notes[¶](#release-notes)
Infrared uses the [reno](https://docs.openstack.org/reno/latest/) tool for providing release notes.
That means that a patch can include a reno file (release notes) containing a detailed description of the impact.
A reno file is a YAML file written in the releasenotes/notes directory, which is generated using the reno tool this way:
> ```
> $ tox -e venv -- reno new <name-your-file>
> ```
where <name-your-file> can be:
* bugfix-<bug_name_or_id>
* newfeature-<feature_name>
* apichange-<description>
* deprecation-<description>
Refer to the reno documentation for the full list of sections.
#### When a release note is needed[¶](#when-a-release-note-is-needed)
A release note is required anytime a reno section is needed. Below are some examples for each section.
Any sections that would be blank should be left out of the note file entirely.
* upgrade: A configuration option change (deprecation, removal or modified default), changes in core that can affect users of the previous release, or any changes in the Infrared API.
* security: If the patch fixes a known vulnerability.
* features: A new feature in Infrared core or a new major feature in one of the core plugins; introduction of new API options or CLI flags.
* critical: Bugfixes categorized as Critical and above in Jira.
* fixes: Bugs with high importance that have been fixed.
Three sections are left intentionally unexplained (`prelude`, `issues` and `other`).
Those are targeted to be filled in close to the release time for providing details about the soon-ish release.
Don’t use them unless you know exactly what you are doing.
### OVB deployment[¶](#ovb-deployment)
Deploy TripleO OpenStack on virtual nodes provisioned from an [OpenStack cloud](openstack_provisioner.html)
In a TripleO OpenStack deployment, the undercloud needs to control the overcloud power management,
as well as serve its nodes with an operating system. Trying to do that inside an OpenStack cloud requires some modifications on the client side as well as on the OpenStack cloud.
The [OVB](http://openstack-virtual-baremetal.readthedocs.io/en/latest/introduction.html) (OpenStack virtual baremetal) project solves this problem, and we strongly recommend reading its documentation before moving on in this document.
#### OVB architecture overview[¶](#ovb-architecture-overview)
An OVB setup requires an additional node to be present: the Baremetal Controller (BMC).
This node captures all the IPMI requests dedicated to the OVB nodes and handles the machine power on/off operations, boot device changes and other operations performed during the introspection phase.
Network architecture overview:
```
+---+ Data +---+
| | network | |
| Undercloud +---+--->+ OVB1 |
| | | | |
+---+---+ | +---+
| |
Management | | +---+
network | | | |
+---+---+ +--->| OVB2 |
| | | | |
| BMC | | +---+
| | |
+---+ | +---+
| | |
+--->+ OVB3 |
| |
+---+
```
The BMC node should be connected to the management network. infrared brings up an IP address on its own management interface for every Overcloud node. This allows infrared to handle IPMI commands coming from the undercloud. Those IPs are later used in the generated
`instackenv.json` file.
For example, during the introspection phase, when the BMC sees the power off request for the OVB1 node, it performs a shutdown for the instance which corresponds to the OVB1 on the host cloud.
#### Provision ovb nodes[¶](#provision-ovb-nodes)
In order to provision ovb nodes, the [openstack provisioner](openstack_provisioner.html) can be used:
```
ir openstack -vvvv -o provision.yml \
--cloud=qeos7 \
--prefix=example-ovb- \
--topology-nodes=ovb_undercloud:1,bmc:1,ovb_controller:1,ovb_compute:1 \
--topology-network=3_nets_ovb \
--key-file ~/.ssh/example-key.pem \
--key-name=example-jenkins \
--image=rhel-guest-image-7.4-191
```
The `--topology-nodes` option should include the `bmc` instance. Also, instead of the standard `compute` and `controller` nodes, the appropriate nodes with the `ovb` prefix should be used.
Such an ovb node settings file holds several additional properties:
> * instance `image` details. Currently the `ipxe-boot` image should be used for all the ovb nodes.
> Only that image allows booting from the network after a restart.
> * `ovb` group in the groups section
> * network topology (NICs’ order)
For example, the ovb_compute settings can hold the following properties:
```
node_dict:
name: compute
image:
name: "ipxe-boot"
ssh_user: "root"
interfaces:
nic1:
network: "data"
nic2:
network: "management"
nic3:
network: "external"
external_network: external
groups:
- compute
- openstack_nodes
- overcloud_nodes
- ovb
```
The `--topology-network` should specify a topology with at least 3 networks:
`data`, `management` and `external`:
> * data network is used by the TripleO to provision the overcloud nodes
> * management is used by the BMC to control IPMI operations
> * external holds floating IPs and is used by infrared to access the nodes
DHCP should be enabled only for the external network.
infrared provides the default `3_nets_ovb` network topology that allows deploying the OVB setup.
The `--image` option should point to an existing OpenStack Glance image. This value affects all nodes except those configured to boot the `ipxe-boot` image.
#### Install OpenStack with TripleO[¶](#install-openstack-with-tripleo)
To install OpenStack on ovb nodes the process is almost standard, with small deviations.
The undercloud can be installed by running:
```
infrared tripleo-undercloud -v \
--version 10 \
--images-task rpm
```
The overcloud installation can be run with:
```
infrared tripleo-overcloud -v \
--version 10 \
--deployment-files ovb \
--public-network=yes \
--public-subnet=ovb_subnet \
--network-protocol ipv4 \
--post=yes \
--introspect=yes \
--tagging=yes
```
Here some OVB-specific options should be considered:
> * if the host cloud is not patched and configured for OVB deployments, the `--deployment-files`
> should point to the ovb templates to skip unsupported features. See the [OVB limitations](#ovb-limitations) for details
> * the `--public_subnet` should point to the subnet settings to match the OVB network topology
> and allocation addresses
A fully functional overcloud will be deployed onto the OVB nodes.
#### OVB limitations[¶](#ovb-limitations)
The OVB approach requires a host cloud to be [patched and configured](http://openstack-virtual-baremetal.readthedocs.io/en/latest/host-cloud/setup.html).
Otherwise the following features will **NOT** be available:
> * Network isolation
> * HA (high availability). A setup with more than 1 controller, etc., is not allowed.
> * Boot from network. This can be worked around by using the [ipxe_boot](https://github.com/cybertron/openstack-virtual-baremetal/tree/master/ipxe/elements/ipxe-boot-image) image for the OVB nodes.
### Troubleshoot[¶](#troubleshoot)
This page lists common pitfalls and known issues, and how to overcome them.
#### Ansible Failures[¶](#ansible-failures)
##### Unreachable[¶](#unreachable)
###### Symptoms:[¶](#symptoms)
```
fatal: [hypervisor]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}
```
###### Solution:[¶](#solution)
When Ansible fails with an `UNREACHABLE` reason, try to validate the SSH credentials and make sure that all hosts are SSH-able.
In the case of `virsh` plugin, it’s clear from the message above that the designated hypervisor is unreachable. Check that:
1. `--host-address` is a reachable address (IP or FQDN).
2. `--host-key` is a **private** (not **public**) key file and that its permissions are correct.
3. `--host-user` (defaults to `root`) exists on the host.
4. Try to manually ssh to the host using the given user private key:
```
ssh -i $HOST_KEY $HOST_USER@$HOST_ADDRESS
```
#### Virsh Failures[¶](#virsh-failures)
##### Cannot create VM’s[¶](#cannot-create-vm-s)
###### Symptoms:[¶](#id1)
Virsh cannot create a VM and displays the following message:
```
ERROR    Unable to add bridge management port XXX: Device or resource busy
Domain installation does not appear to have been successful.
Otherwise, you can restart your domain by running:
  virsh --connect qemu:///system start compute-0
otherwise, please restart your installation.
```
###### Solution:[¶](#id2)
This can often be caused by a misconfiguration of the hypervisor.
Check that all the `ovs` bridges are properly configured on the hypervisor:
```
$ ovs-vsctl show
6765bb7e-8f22-4dbe-848f-eaff2e94ed96
    Bridge brbm
Port "vnet1"
Interface "vnet1"
error: "could not open network device vnet1 (No such device)"
Port brbm
Interface brbm
                type: internal
    ovs_version: "2.6.1"
```
To fix the problem remove the broken bridge:
```
$ ovs-vsctl del-br brbm
```
##### Cannot activate IPv6 Network[¶](#cannot-activate-ipv6-network)
###### Symptoms:[¶](#id3)
Virsh fails on task ‘check if network is active’ or on task ‘Check if IPv6 enabled on host’ with one of the following error messages:
```
Failed to add IP address 2620:52:0:13b8::fe/64 to external
Network 'external' requires IPv6, but modules aren't loaded...
```
###### Solution:[¶](#id4)
IPv6 is disabled on the hypervisor. Please make sure to enable IPv6 on the hypervisor before creating a network with IPv6,
otherwise the IPv6 networks will be created but will remain in the ‘inactive’ state.
One possible solution on RH based OSes, is to enable IPv6 in kernel cmdline:
```
# sed -i s/ipv6.disable=1/ipv6.disable=0/ /etc/default/grub
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot
```
#### Frequently Asked Questions[¶](#frequently-asked-questions)
##### Where’s my inventory file?[¶](#where-s-my-inventory-file)
I’d like to run a personal Ansible playbook and/or an “ad-hoc” command, but I can’t find my inventory file.
All Ansible environment files are read from, and written to, [workspaces](workspace)
Use `infrared workspace inventory` to fetch a symlink to the active workspace’s inventory or `infrared workspace inventory WORKSPACE` for any workspace by name:
```
ansible -i `infrared workspace inventory` all -m ping
compute-0 | SUCCESS => {
"changed": false,
"ping": "pong"
}
compute-1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
controller-0 | SUCCESS => {
"changed": false,
"ping": "pong"
}
localhost | SUCCESS => {
"changed": false,
"ping": "pong"
}
hypervisor | SUCCESS => {
"changed": false,
"ping": "pong"
}
undercloud-0 | SUCCESS => {
"changed": false,
"ping": "pong"
```
### Temporary Workarounds[¶](#temporary-workarounds)
This page displays temporary hacks that were merged into the Infrared (IR) code. Since the core team is small and these fixes are tracked manually at the moment, we ask the user to review the status of the hacks/BZs.
| Plugin in which the hack is included | Bugzilla/Issue | User/#TODO |
| --- | --- | --- |
### Baremetal deployment[¶](#baremetal-deployment)
Infrared allows performing baremetal deployments.
Note
Overcloud templates for the deployment should be prepared separately.
1. Undercloud provision step. Foreman plugin will be used in this example.
```
infrared foreman -vv \
    -o provision.yml \
    --url foreman.example.com \
    --user foreman_user \
    --password foreman_password \
    --host-address name.of.undercloud.host \
    --host-key /path/to/host/key \
    --role baremetal,undercloud,tester
```
2. Deploy Undercloud.
```
infrared tripleo-undercloud -vv \
    -o undercloud-install.yml \
    --config-file path/to/undercloud.conf \
    --version 11 \
    --build 11 \
    --images-task rpm
```
3. Deploy Overcloud.
For baremetal deployments, in order to reflect the real networking,
templates should be prepared by the user before the deployment, including the `instackenv.json` file.
All additional parameters, like storage (`ceph` or `swift`) disks or any other parameters, should be added to the templates as well.
```
...
"cpu": "2",
"memory": "4096",
"disk": "0",
"disks": ["vda", "vdb"],
"arch": "x86_64",
...
infrared tripleo-overcloud -vv \
-o overcloud-install.yml \
--version 11 \
--instackenv-file path/to/instackenv.json \
--deployment-files /path/to/the/templates \
--overcloud-script /path/to/overcloud_deploy.sh \
--network-protocol ipv4 \
--network-backend vlan \
--public-network false \
--introspect yes \
--tagging yes \
--deploy yes
infrared cloud-config -vv \
-o cloud-config.yml \
--deployment-files virt \
--tasks create_external_network,forward_overcloud_dashboard,network_time,tempest_deployer_input
```
### Beaker[¶](#beaker)
Provision baremetal machines using Beaker.
#### Required arguments[¶](#required-arguments)
* `--url`: URL of the Beaker server.
* `--password`: The password for the login user.
* `--host-address`: Address/FQDN of the baremetal machine to be provisioned.
#### Optional arguments[¶](#optional-arguments)
* `--user`: Login username to authenticate to Beaker. (default: admin)
* `--web-service`: For cases where the Beaker user is not part of the kerberos system,
there is a need to set the Web service to RPC for authentication rather than rest. (default: rest)
* `--ca-cert`: For cases where the beaker user is not part of the kerberos system,
a CA Certificate is required for authentication with the Beaker server.
* `--host-user`: The username to SSH to the host with. (default: root)
* `--host-password`: User’s SSH password
* `--host-key`: User’s SSH key
* `--image`: The image to use for nodes provisioning. (Check the “sample.yml.example” under vars/image for example)
* `--cleanup`: Release the system
Note
Please run `ir beaker --help` for a full detailed list of all available options.
#### Execution example[¶](#execution-example)
Provision:
```
ir beaker --url=beaker.server.url --user=beaker.user --password=beaker.password --host-address=host.to.be.provisioned
```
Cleanup (Used for returning a loaned machine):
```
ir beaker --url=beaker.server.url --user=beaker.user --password=beaker.password --host-address=host.to.be.provisioned --cleanup=yes
```
### Foreman[¶](#foreman)
Provision baremetal machine using Foreman and add it to the inventory file.
#### Required arguments[¶](#required-arguments)
* `--url`: The Foreman API URL.
* `--user`: Foreman server login user.
* `--password`: Password for login user
* `--host-address`: Name or ID of the target host as listed in the Foreman server.
#### Optional arguments[¶](#optional-arguments)
* `--strategy`: Whether to use Foreman or system `ipmi` command. (default: foreman)
* `--action`: Which command to send with the power-management selected by mgmt_strategy. (default: cycle)
* `--wait`: Whether to wait for the host to return from rebuild or not. (default: yes)
* `--host-user`: The username to SSH to the host with. (default: root)
* `--host-password`: User’s SSH password
* `--host-key`: User’s SSH key
* `--host-ipmi-username`: Host IPMI username.
* `--host-ipmi-password`: Host IPMI password.
* `--roles`: Host roles
* `--os-id`: An integer representing the operating system ID to set
* `--medium-id`: An integer representing the medium ID to set
Note
Please run `ir foreman --help` for a full detailed list of all available options.
#### Execution example[¶](#execution-example)
```
ir foreman --url=foreman.server.api.url --user=foreman.user --password=foreman.password --host-address=host.to.be.provisioned
```
### OpenStack[¶](#openstack)
Provision VMs on an existing OpenStack cloud, using native Ansible [cloud modules](http://docs.ansible.com/ansible/list_of_cloud_modules.html#openstack).
#### OpenStack Cloud Details[¶](#openstack-cloud-details)
* `--cloud`: reference to OpenStack cloud credentials, using [os-client-config](http://docs.openstack.org/developer/os-client-config)
This library expects a properly configured `clouds.yml` file:
> clouds.yml[¶](#id2)
> ```
> clouds:
> cloud_name:
> auth_url: http://openstack_instance:5000/v2.0
> username: <username>
> password: <password>
> project_name: <project_name>
> ```
`cloud_name` can be then referenced with `--cloud` option:
```
infrared openstack --cloud cloud_name ...
```
`clouds.yml` is expected in either `~/.config/openstack` or `/etc/openstack` directories according to [documentation](http://docs.openstack.org/developer/os-client-config/#config-files):
> Note
> You can also omit the cloud parameter, and infrared will use the sourced openstackrc file:
> ```
> source keystonerc
> infrared openstack openstack ...
> ```
>
* `--key-file`: Private key that will be used to ssh to the provisioned VMs.
The matching public key will be uploaded to the OpenStack account,
unless `--key-name` is provided.
* `--key-name`: Name of an existing keypair under the OpenStack account.
The keypair should hold the public key that matches the provided private `--key-file`.
Use `openstack --os-cloud cloud_name keypair list` to list available keypairs.
* `--dns`: A Local DNS server used for the provisioned networks and VMs.
If not provided, OpenStack will use default DNS settings, which, in most cases,
will not resolve internal URLs.
#### Topology[¶](#topology)
* `--prefix`: prefix all resources with a string.
Use this with shared tenants to have unique resource names.
Note
`--prefix "XYZ"` will create router named `XYZrouter`.
Use `--prefix "XYZ-"` to create `XYZ-router`
* `--topology-network`: Description of the network topology.
By default, 3 networks will be provisioned with 1 router.
2 of them will be connected via the router to an external network discovered automatically
(when more than 1 external network is found, the first will be chosen).
The following is an example of a `3_nets.yml` file:
```
---
networks:
net1:
external_connectivity: no
name: "data"
ip_address: "192.168.24.254"
netmask: "255.255.255.0"
net2:
external_connectivity: yes
name: "management"
ip_address: "172.16.0.1"
netmask: "255.255.255.0"
forward: nat
dhcp:
range:
start: "172.16.0.2"
end: "172.16.0.100"
subnet_cidr: "172.16.0.0/24"
subnet_gateway: "172.16.0.1"
floating_ip:
start: "172.16.0.101"
end: "172.16.0.150"
net3:
external_connectivity: yes
name: "external"
ipv6:
ip_address: "2620:52:0:13b8::fe"
prefix: "64"
dhcp:
range:
start: "2620:52:0:13b8::fe:1"
end: "2620:52:0:13b8::fe:ff"
ip_address: "10.0.0.1"
netmask: "255.255.255.0"
forward: nat
dhcp:
range:
start: "10.0.0.2"
end: "10.0.0.100"
subnet_cidr: "10.0.0.0/24"
subnet_gateway: "10.0.0.1"
floating_ip:
start: "10.0.0.101"
end: "10.0.0.150"
nodes:
default:
interfaces:
- network: "data"
- network: "management"
- network: "external"
external_network:
network: "management"
novacontrol:
interfaces:
- network: "data"
- network: "management"
external_network:
network: "management"
odl:
interfaces:
- network: "management"
external_network:
network: "management"
```
* `--topology-nodes`: KeyValueList description of the nodes.
A floating IP will be provisioned on a designated network.
For more information about the structure of the topology files and how to create your own,
please refer to [Topology](topology.html) and [Virsh plugin](virsh.html#topology) description.
* `--image`: default image name or ID for the VMs. Use `openstack --os-cloud cloud_name image list` to see a list of available images
* `--cleanup` Boolean. Whether to provision resources, or clean them from the tenant.
Infrared registers all provisioned resources to the [workspace](workspace.html) on creation,
and will clean only registered resources:
```
infrared openstack --cleanup yes
```
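Putting the topology, image and prefix options together, a full provisioning run might look roughly like the following; the topology names, image and key are placeholders for illustration:
```
infrared openstack --cloud cloud_name \
    --prefix "test-" \
    --topology-network 3_nets \
    --topology-nodes controller:1,compute:1 \
    --image centos-7-cloud \
    --key-file ~/.ssh/id_rsa
```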
### Virsh[¶](#virsh)
Virsh provisioner is explicitly designed to be used for setup of virtual environments.
Such environments are used to emulate production environment like [tripleo-undercloud](tripleo-undercloud.html)
instances on one baremetal machine. It requires one prepared baremetal host (designated `hypervisor`)
to be reachable through SSH initially.
#### Hypervisor machine[¶](#hypervisor-machine)
Hypervisor machine is the target machine where infrared’s virsh provisioner will create virtual machines and networks (using libvirt) to emulate baremetal infrastructure.
As such there are several specific requirements it has to meet.
Generally, it needs to have **enough memory and disk** storage to hold multiple decent VMs
(each with gigabytes of RAM and dozens of GB of disk).
Also, for acceptable responsiveness (speed of deployment/testing), a CPU with fewer than 4 threads or a low clock speed is not recommended; with an old or weak CPU you may suffer performance-wise, and hence hit more timeouts during deployment or in tests.
In particular, for Ironic (TripleO) to control them, those **libvirt VMs** need to be bootable/controllable via **iPXE provisioning**.
In addition, an extra user has to exist who can SSH into the hypervisor and control (restart…) the libvirt VMs.
Note
infrared attempts to configure or validate all (most) of this, but that logic may be scattered across the provisioner/installer steps. The current infrared approach is moving toward being more idempotent, so failures on previous runs shouldn’t prevent successful execution of following runs.
What **user has to provide**:
> * have machine with **sudoer user ssh access** and **enough resources**,
> as minimum requirements for one VM are:
> + VCPU: 2|4|8
> + RAM: 8|16
> + HDD: 40GB+
> + in practice the disks may be smaller, as they are thin provisioned,
> as long as you don’t force writing all the data (e.g. Tempest with rhel-guest instead of cirros)
> * **RHEL-7.3** and **RHEL-7.4** are tested, **CentOS** is also expected to work
> + may work with other distributions (best-effort/limited support)
> * **yum repositories** have to be **preconfigured** by the user (foreman/…) before using infrared so it can install dependencies
> + especially for infrared to handle `ipxe-roms-qemu`, it requires the **RHEL-7.{3|4}-server channel**
What **infrared takes care of**:
> * `ipxe-roms-qemu` package of at least `version 2016xxyy` needs to be installed
> * other basic packages installed
> + `libvirt`, `libguestfs{-tools,-xfs}`, `qemu-kvm`, `wget`, `virt-install`
> * **virtualization support** (VT-x/AMD-V)
> + ideally with **nested=1** support
> * `stack` user created with polkit privileges for *org.libvirt.unix.manage*
> * **ssh key** with which infrared can authenticate (created and) added for *root* and *stack* user,
> ATM they are handled differently/separately:
> + for *root* the `infrared/id_rsa.pub` gets added to authorized_keys
> + for *stack* `infrared/id_rsa_undercloud.pub` is added to authorized_keys, created/added later during installation
First, Libvirt and KVM are installed and configured to provide a virtualized environment.
Then, virtual machines are created for all requested nodes.
#### Topology[¶](#topology)
The first thing you need to decide before you deploy your environment is the `Topology`.
This refers to the number and type of VMs in your desired deployment environment.
If we use OpenStack as an example, a topology may look something like:
> * 1 VM called undercloud
> * 1 VM called controller
> * 1 VM called compute
To control how each VM is created, we have created a YAML file that describes the specification of each VM.
For more information about the structure of the topology files and how to create your own,
please refer to [Topology](topology.html).
Please see [Bootstrap](bootstrap.html) guide where usage is demonstrated.
* `--host-memory-overcommit`
By default memory overcommitment is false and provisioning will fail if the Hypervisor’s free memory is lower than the memory required for all nodes. Use `--host-memory-overcommit True` to change the default behaviour.
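For illustration, a basic virsh provisioning call combining the hypervisor access options with a small topology could look like the following (host address, key and node counts are placeholders; the same pattern appears in later examples in this document):
```
infrared virsh -o provision.yml \
    --host-address hypervisor.example.com \
    --host-key ~/.ssh/id_rsa \
    --topology-nodes undercloud:1,controller:1,compute:1 \
    --host-memory-overcommit False
```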
##### Network layout[¶](#network-layout)
Baremetal machine used as host for such setup is called hypervisor. The whole deployment is designed to work within boundaries of this machine and (except public/natted traffic) shouldn’t reach beyond.
The following layout is part of default setup defined in
[plugins defaults](https://github.com/redhat-openstack/infrared/blob/master/plugins/virsh/defaults/topology/network/3_nets.yml):
```
hypervisor
|
+---+ nic0 - public IP
|
+---+ nic1 - not managed
|
... Libvirt VM's
| |
---+---+ data bridge (ctlplane, 192.0.2/24) +---+ data (nic0)
| | |
libvirt --+---+ management bridge (nat, dhcp, 172.16.0/24) +---+ management (nic1)
| | |
---+---+ external bridge (nat, dhcp, 10.0.0/24) +---+ external (nic2)
```
On hypervisor, there are 3 new bridges created with libvirt - data, management and external.
The most important is the data network, which has neither DHCP nor NAT enabled.
This network can later be used as `ctlplane` for OSP director deployments ([tripleo-undercloud](tripleo-undercloud.html)).
Other (usually physical) interfaces are not used (nic0, nic1, …) except for public/natted traffic.
External network is used for SSH forwarding so client (or Ansible) can access dynamically created nodes.
###### NAT Forwarding[¶](#nat-forwarding)
By default, all networks above are [NATed](https://wiki.libvirt.org/page/Networking#NAT_forwarding_.28aka_.22virtual_networks.22.29), meaning that they are private networks only reachable via the hypervisor node.
infrared configures the nodes' SSH connection to use the hypervisor host as a proxy.
###### Bridged Network[¶](#bridged-network)
Some use-cases call for [direct access](https://wiki.libvirt.org/page/Networking#Bridged_networking_.28aka_.22shared_physical_device.22.29) to some of the nodes.
This is achieved by adding a network with `forward: bridge` in its attributes to the network-topology file, and marking this network as external network on the relevant node files.
The result will create a virtual bridge on the hypervisor connected to the main NIC by default.
VMs attached to this bridge will be served by the same LAN as the hypervisor.
To specify any secondary NIC for the bridge, the `nic` property should be added to the network file under the bridge network:
```
net4:
name: br1
forward: bridge
nic: eth1
```
Warning
Be careful when using this feature. For example, an `undercloud` connected in this manner can disrupt the LAN by serving as an unauthorized DHCP server.
For example, see `tripleo` [node](tripleo) used in conjunction with `3_net_1_bridge`
[network file](1_bridge):
```
infrared virsh [...] --topology-nodes ironic:1,[...] --topology-network 3_net_1_bridge [...]
```
#### Workflow[¶](#workflow)
> 1. Setup libvirt and kvm environment
> 2. Setup libvirt networks
> 3. Download base image for undercloud (`--image-url`)
> 4. Create desired amount of images and integrate to libvirt
> 5. Define virtual machines with requested parameters (`--topology-nodes`)
> 6. Start virtual machines
Environments prepared in such a way are usually used as basic virtual infrastructure for [tripleo-undercloud](tripleo-undercloud.html).
Note
Virsh provisioner has idempotency issues, so `infrared virsh ... --kill` must be run before reprovisioning every time to remove libvirt resources related to active hosts from the workspace inventory, or `infrared virsh ... --cleanup` to remove ALL domains and networks (except ‘default’) from the hypervisor.
#### Topology Extend[¶](#topology-extend)
* `--topology-extend`: Extend existing deployment with nodes provided by topology.
If `--topology-extend` is True, all nodes from `--topology-nodes` will be added as new additional nodes
```
infrared virsh [...] --topology-nodes compute:1,[...] --topology-extend yes [...]
```
#### Topology Shrink[¶](#topology-shrink)
* `--remove-nodes`: Provides the option to remove nodes from an existing topology:
```
infrared virsh [...] --remove-nodes compute-2,compute-3
```
Warning
If you try to extend the topology after removing a node whose index is lower than the maximum, extending will fail.
For example, if you have 4 compute nodes (compute-0, compute-1, compute-2, compute-3), removing any node other than compute-3 will cause future topology extensions to fail.
#### Multiple environments[¶](#multiply-environments)
In some use cases it might be needed to have multiple environments on the same host. The Virsh provisioner currently supports that with the `--prefix` parameter. Using it, the user can assign a prefix to created resources such as virtual instances,
networks, routers etc.
Warning
`--prefix` shouldn’t be more than 4 characters long because of libvirt limitation on resources name length.
```
infrared virsh [...] --topology-nodes compute:1,controller:1,[...] --prefix foo [...]
```
Will create resources with the `foo` prefix.
Resources from different environments can be differentiated using the prefix, and the virsh plugin will take care that they do not interfere with each other in terms of networking, virtual instances etc.
The cleanup procedure also supports the `--prefix` parameter, allowing cleanup of only the needed environment; if `--prefix` is not given, all resources on the hypervisor will be cleaned.
### TripleO Undercloud[¶](#tripleo-undercloud)
Deploys a TripleO undercloud
#### Setup an Undercloud[¶](#setup-an-undercloud)
* `--version`: TripleO release to install.
Accepts either an integer for RHEL-OSP release, or a community release name (`Liberty`, `Mitaka`, `Newton`, etc…) for RDO release
* `--build`: Specify a build date or a label for the repositories.
Supports any rhos-release labels.
Examples: `passed_phase1`, `2016-08-11.1`, `Y1`, `Z3`, `GA`
Not used in case of RDO.
* `--buildmods`: Lets you add flags to rhos-release:
> `pin` - Pin puddle (dereference ‘latest’ links to prevent content from changing). This flag is selected by default
> `flea` - Enable flea repos.
> `unstable` - This will enable brew repos or poodles (in old releases).
> `none` - Use none of those flags.
> Note
> `--buildmods` and `--build` flags are for internal Red Hat users only.
* `--enable-testing-repos`: Lets you enable testing/pending repos with rhos-release. Multiple values have to be comma separated.
Examples: `--enable-testing-repos rhel,extras,ceph` or `--enable-testing-repos all`
* `--cdn` Register the undercloud with a Red Hat Subscription Management platform.
Accepts a file with subscription details.
> cdn_creds.yml[¶](#id2)
> ```
> server_hostname: example.redhat.com
> username: user
> password: HIDDEN_PASS
> autosubscribe: yes
> server_insecure: yes
> ```
For the full list of supported input, see the [module documentation](http://docs.ansible.com/ansible/redhat_subscription_module.html).
Note
Pre-registered underclouds are also supported if the `--cdn` flag is missing.
Warning
The contents of the file are hidden from the logged output, to protect private account credentials.
* `--from-source` Build tripleo components from the upstream git repository.
Accepts list of tripleo components. The delorean project is used to build rpm packages. For more information about delorean, visit [Delorean documentation](http://dlrn.readthedocs.io/en/latest).
To deploy specific tripleo components from git repository:
```
infrared tripleo-undercloud --version 13 \
--from-source name=openstack/python-tripleoclient \
--from-source name=openstack/neutron,refs=refs/changes/REF_ID \
--from-source name=openstack/puppet-neutron
```
Note
+ This feature is supported by OSP 13 or RDO queens versions.
+ This feature is experimental and should be used only for development.
Note
In case of **virsh** deployment **ipxe-roms-qemu** will be installed on hypervisor node.
This package can be found in a **rhel-server** repo in case of RedHat and in **Base** repo in case of CentOS
To deploy a working undercloud:
```
infrared tripleo-undercloud --version 10
```
For better fine-tuning of packages, see [custom repositories](#custom-repositories).
#### Overcloud Images[¶](#overcloud-images)
The final part of the undercloud installation calls for creating the images from which the OverCloud will be later created.
* Depending on `--images-task`, the undercloud can either:
> + `build` images:
> Build the overcloud images from a fresh guest image.
> To use a different image than the default CentOS cloud
> guest image, use `--images-url` to define a different base image.
> For OSP installation, you must provide a URL of a valid RHEL image.
> + `import` images from url:
> Download pre-built images from a given `--images-url`.
> + Download images via `rpm`:
> Starting from OSP 8, TripleO is packaged with pre-built images available via RPM.
> To use a different RPM, use `--images-url` to define the location of the RPM. You need
> to provide all dependencies of the remote RPM. Locations have to be separated with commas.
> Note
> This option is invalid for RDO installation.
>
* Use `--images-packages` to define a list of additional packages to install on the OverCloud image.
Packages can be specified by name or by providing direct url to the rpm file.
* Use `--images-remove-packages` to define a list of packages to uninstall from the OverCloud image.
Packages must be specified by name.
* `--images-cleanup` tells infrared to remove the original image files after they are uploaded to the undercloud’s Glance service.
To configure overcloud images:
```
infrared tripleo-undercloud --images-task rpm
```
Note
This assumes an undercloud was already installed and will skip [installation](tripleo-undercloud.html#SetupanUndercloud) stage because `--version` is missing.
When using RDO (or for OSP 7), the `rpm` strategy is unavailable. Use `import` with `--images-url` to download overcloud images from the web:
```
infrared tripleo-undercloud --images-task import --images-url http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean
```
Note
The RDO overcloud images can also be found here: <https://images.rdoproject.org>
If pre-packaged images are unavailable, tripleo can build the images locally on top of a regular cloud guest image:
```
infrared tripleo-undercloud --images-task build
```
CentOS or RHEL guest images will be used for RDO and OSP respectively.
To use a different image specify `--images-url`:
```
infrared tripleo-undercloud --images-task build --images-url http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
```
Note
Building the images takes a long time, and it’s usually quicker to download them.
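If extra packages need to be added to (or removed from) the overcloud image, the `--images-packages` and `--images-remove-packages` options described above can be combined with any images task; the package names below are purely illustrative:
```
infrared tripleo-undercloud --images-task rpm \
    --images-packages vim,https://myurl.com/mypackage.rpm \
    --images-remove-packages cloud-init
```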
In order to update the default overcloud image kernel provided by the sources (for example RPM) with the latest kernel present on the overcloud image,
specify `overcloud-update-kernel`.
Note
When installing kernel-rt inside the overcloud guest image, the latest RealTime kernel will be used instead of the default kernel.
See the [RDO deployment](rdo.html) page for more details on how to setup RDO product.
#### Undercloud Configuration[¶](#undercloud-configuration)
Undercloud is configured according to `undercloud.conf` file.
Use `--config-file` to provide this file, or let infrared generate one automatically, based on a sample file provided by the project.
Use `--config-options` to provide a list of `section.option=value` that will override specific fields in it.
Use the `--ssl=yes` option to enable SSL on the undercloud. If used, a self-signed SSL cert will be generated.
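As a sketch, the configuration options above can be combined as follows; the `section.option=value` pairs shown are illustrative undercloud.conf fields, not values enforced by infrared:
```
infrared tripleo-undercloud --version 12 \
    --config-file ~/my-undercloud.conf \
    --config-options DEFAULT.undercloud_debug=false,DEFAULT.local_interface=eth1 \
    --ssl yes
```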
#### Custom Repositories[¶](#custom-repositories)
Add custom repositories to the undercloud, after [installing the TripleO repositories](tripleo-undercloud.html#SetupanUndercloud).
* `--repos-config` setup repos using the ansible yum_repository module.
Using this option enables you to set specific options for each repository:
> repos_config.yml[¶](#id3)
> ```
> ---
> extra_repos:
> - name: my_repo1
> file: my_repo1.file
> description: my repo1
> baseurl: http://myurl.com/my_repo1
> enabled: 0
> gpgcheck: 0
> - name: my_repo2
> file: my_repo2.file
> description: my repo2
> baseurl: http://myurl.com/my_repo2
> enabled: 0
> gpgcheck: 0
> ...
> ```
> Note
> This explicitly supports some of the options found in
> yum_repository module (name, file, description, baseurl, enabled and gpgcheck).
> For more information about this module, visit [Ansible yum_repository documentation](https://docs.ansible.com/ansible/yum_repository_module.html).
> Note
> Custom repos generated by `--repos-config` can be uploaded to the Overcloud guest image by specifying `--upload-extra-repos true`
>
* `--repos-urls`: comma-separated list of URLs to download repo files to `/etc/yum.repos.d`
Both options can be used together:
```
infrared tripleo-undercloud [...] --repos-config repos_config.yml --repos-urls "http://yoururl.com/repofile1.repo,http://yoururl.com/repofile2.repo"
```
#### TripleO Undercloud User[¶](#tripleo-undercloud-user)
`--user-name` and `--user-password` define a user, with password,
for the undercloud. According to TripleO guidelines, the default username is `stack`.
The user will be created if necessary.
Note
The stack user password needs to be changed in case of public deployments.
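A minimal sketch of setting a custom undercloud user (the username and password below are placeholders):
```
infrared tripleo-undercloud --version 12 \
    --user-name stack \
    --user-password SomeStrongPassword
```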
#### Backup[¶](#backup)
When working on a virtual environment, infrared can create a snapshot of the installed undercloud that can be later used to [restore](#restore) it on a future run, thus saving installation time.
In order to use this feature, first follow the [Setup an Undercloud](#setup-an-undercloud) section.
Once an undercloud VM is up and ready, run the following:
```
ir tripleo-undercloud --snapshot-backup yes
```
Or optionally, provide the file name of the image to create (defaults to “undercloud-snapshot.qcow2”).
Note
The filename refers to a path on the hypervisor.
> ir tripleo-undercloud --snapshot-backup yes --snapshot-filename custom-name.qcow2
This will prepare a qcow2 image of your undercloud ready for usage with [Restore](#restore).
Note
this assumes an undercloud is already installed and will skip
[installation](tripleo-undercloud.html#SetupanUndercloud) and [images](tripleo-undercloud.html#OvercloudImages) stages.
#### Restore[¶](#restore)
When working on a virtual environment, infrared can use a pre-made undercloud image to quickly set up an environment.
To use this feature, simply run:
```
ir tripleo-undercloud --snapshot-restore yes
```
Or optionally, provide the file name of the image to restore from (defaults to “undercloud-snapshot.qcow2”).
Note
The filename refers to a path on the hypervisor.
#### Undercloud Upgrade[¶](#undercloud-upgrade)
Upgrade discovers the current Undercloud version and upgrades it to the next major one.
To upgrade Undercloud run the following command:
```
infrared tripleo-undercloud -v --upgrade yes
```
Note
The [Overcloud](tripleo-overcloud.html) won’t need new images to upgrade to. But you’d need to upgrade the images for OC nodes before you attempt to scale out nodes. Example for Undercloud upgrade and images update:
```
infrared tripleo-undercloud -v --upgrade yes --images-task rpm
```
Warning
Currently, there is upgrade possibility from version 9 to version 10 only.
Warning
Upgrading from version 11 to version 12 isn’t supported via the tripleo-undercloud plugin anymore. Please check the tripleo-upgrade plugin for 11 to 12 [upgrade instructions](tripleo_upgrade.html).
#### Undercloud Update[¶](#undercloud-update)
Update discovers the current Undercloud version and performs a minor version update.
To update Undercloud run the following command:
```
infrared tripleo-undercloud -v --update-undercloud yes
```
Example for update of Undercloud and Images:
```
infrared tripleo-undercloud -v --update-undercloud yes --images-task rpm
```
Warning
Infrared supports update for RHOSP from version 8.
#### Undercloud Workarounds[¶](#undercloud-workarounds)
Allow injecting workarounds defined in an external file before/after the undercloud installation:
```
infrared tripleo-undercloud -v --workarounds 'http://server.localdomain/workarounds.yml'
```
The workarounds can be either patches posted on review.openstack.org or arbitrary shell commands.
Below is an example of a workarounds file:
```
---
pre_undercloud_deploy_workarounds:
- BZ#1623061:
patch: false
basedir: ''
id: ''
command: 'touch /home/stack/pre_workaround_applied'
post_undercloud_deploy_workarounds:
- BZ#1637589:
patch: true
basedir: '/usr/share/openstack-tripleo-heat-templates/'
id: '601277'
command: ''
```
##### TLS Everywhere[¶](#tls-everywhere)
Setup TLS Everywhere with FreeIPA.
`tls-everywhere`: It will install FreeIPA on the first node from the freeipa group and configure the undercloud for TLS Everywhere.
### TripleO Upgrade[¶](#tripleo-upgrade)
Starting with OSP12 the upgrade/update of a TripleO deployment can be done via the tripleo-upgrade plugin.
tripleo-upgrade comes preinstalled as an InfraRed plugin. After a successful InfraRed overcloud deployment you need to run the following steps to upgrade the deployment:
Symlink roles path:
```
ln -s $(pwd)/plugins $(pwd)/plugins/tripleo-upgrade/infrared_plugin/roles
```
Set up undercloud upgrade repositories:
```
infrared tripleo-undercloud \
--upgrade yes \
--mirror ${mirror_location} \
--ansible-args="tags=upgrade_repos"
```
Upgrade undercloud:
```
infrared tripleo-upgrade \
--undercloud-upgrade yes
```
Set up overcloud upgrade repositories:
```
infrared tripleo-overcloud \
--deployment-files virt \
--upgrade yes \
--mirror ${mirror_location} \
--ansible-args="tags=upgrade_collect_info,upgrade_repos"
```
Upgrade overcloud:
```
infrared tripleo-upgrade \
--overcloud-upgrade yes
```
### TripleO Overcloud[¶](#tripleo-overcloud)
Deploys a TripleO overcloud from an existing undercloud
#### Stages Control[¶](#stages-control)
The run is broken into the following stages. Omitting any of the flags (or setting it to `no`) will skip that stage:
* `--introspect` the overcloud nodes
* `--tag` overcloud nodes with proper flavors
* `--deploy` overcloud of given `--version` (see below)
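For illustration, an invocation running all three stages might look like the following; the flag spellings match the RDO example later in this document (which uses `--tagging` for the tagging stage) and the values are placeholders:
```
infrared tripleo-overcloud --version 12 \
    --deployment-files virt \
    --introspect yes \
    --tagging yes \
    --deploy yes
```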
#### Containers[¶](#containers)
* `--containers`: boolean. Specifies if containers should be used for deployment. Default value: True
Note
Containers are supported by OSP version >=12.
* `--container-images-packages`: pairs of container image and package URL(s) to install into those images.
Container images don’t have any yum repositories enabled by default, hence specifying URL of an RPM to install is mandatory. This option can be used multiple times for different container images.
Note
Only specified image(s) will get the packages installed. All images that depend on an updated image have to be updated as well (using this option or otherwise).
Example:
```
--container-images-packages openstack-opendaylight-docker=https://kojipkgs.fedoraproject.org//packages/tmux/2.5/3.fc27/x86_64/tmux-2.5-3.fc27.x86_64.rpm,https://kojipkgs.fedoraproject.org//packages/vim/8.0.844/2.fc27/x86_64/vim-minimal-8.0.844-2.fc27.x86_64.rpm
```
* `--container-images-patch`: comma-separated list of docker container images to patch using the ‘/patched_rpm’ yum repository.
Patching involves ‘yum update’ inside the container. This feature is not supported when `registry-undercloud-skip`
is set to True. Also, if this option is not specified, InfraRed auto discovers images that should be updated. This option may be used to patch only a specific container image(s) without updating others that could be normally patched.
Example:
```
--container-images-patch openstack-opendaylight,openstack-nova-compute
```
* `--registry-undercloud-skip`: avoid using and mass populating the undercloud registry.
The registry or the `registry-mirror` will be used directly when possible; using this option is recommended when you have very good bandwidth to your registry.
* `--registry-mirror`: the alternative docker registry to use for deployment.
* `--registry-namespace`: the alternative docker registry namespace to use for deployment.
* The following options define the ceph container:
`--registry-ceph-tag`: tag used with the ceph container. Default value: latest
`--registry-ceph-namespace`: namespace for the ceph container
#### Deployment Description[¶](#deployment-description)
* `--deployment-files`: Mandatory.
Path to a directory, containing heat-templates describing the overcloud deployment.
Choose `virt` to enable preset templates for virtual POC environment ([virsh](virsh.html) or [ovb](ovb.html)).
* `--instackenv-file`:
Path to the instackenv.json configuration file used for introspection.
For [virsh](virsh.html) and [ovb](ovb.html) deployment, infrared can generate this file automatically.
* `--version`: TripleO release to install.
Accepts either an integer for RHEL-OSP release, or a community release name (`Liberty`, `Mitaka`, `Newton`, etc…) for RDO release
* The following options define the number of nodes in the overcloud:
`--controller-nodes`, `--compute-nodes`, `--storage-nodes`.
If not provided, infrared will try to evaluate the existing nodes and default to `1`
for `compute`/`controller` or `0` for `storage`.
* `--hybrid`: Specifies whether a hybrid environment is being deployed.
When this flag is set, the user should pass a link to a JSON/YAML file to the `--instackenv-file` parameter.
The file contains information about the bare-metal servers that will be added to the instackenv.json file during introspection.
* `--environment-plan`/`-p`: Import environment plan YAML file that details the plan to be deployed by TripleO.
Besides specifying Heat environments and parameters, one can also provide parameters for TripleO Mistral workflows.
Warning
This option is supported by RHOSP version 12 and greater.
Below are examples of a JSON & YAML files in a valid format:
bm_nodes.yml[¶](#id3)
```
---
nodes:
- "name": "aaa-compute-0"
"pm_addr": "172.16.0.1"
"mac": ["00:11:22:33:44:55"]
"cpu": "8"
"memory": "32768"
"disk": "40"
"arch": "x86_64"
"pm_type": "pxe_ipmitool"
"pm_user": "pm_user"
"pm_password": "pm_password"
"pm_port": "6230"
- "name": "aaa-compute-1"
"pm_addr": "172.16.0.1"
"mac": ["00:11:22:33:44:56"]
"cpu": "8"
"memory": "32768"
"disk": "40"
"arch": "x86_64"
"pm_type": "pxe_ipmitool"
"pm_user": "pm_user"
"pm_password": "pm_password"
"pm_port": "6231"
```
bm_nodes.json[¶](#id4)
```
{
"nodes": [
{
"name": "aaa-compute-0",
"pm_addr": "172.16.0.1",
"mac": ["00:11:22:33:44:55"],
"cpu": "8",
"memory": "32768",
"disk": "40",
"arch": "x86_64",
"pm_type": "pxe_ipmitool",
"pm_user": "pm_user",
"pm_password": "pm_password",
"pm_port": "6230"
},
{
"name": "aaa-compute-1",
"pm_addr": "172.16.0.1",
"mac": ["00:11:22:33:44:56"],
"cpu": "8",
"memory": "32768",
"disk": "40",
"arch": "x86_64",
"pm_type": "pxe_ipmitool",
"pm_user": "pm_user",
"pm_password": "pm_password",
"pm_port": "6231"
}
]
}
```
#### Overcloud Options[¶](#overcloud-options)
* `--overcloud-ssl`: Boolean. Enable SSL for the overcloud services.
* `--overcloud-debug`: Boolean. Enable debug mode for the overcloud services.
* `--overcloud-templates`: Add extra environment template files or custom templates to “overcloud deploy” command. Format:
sahara.yml[¶](#id5)
```
---
tripleo_heat_templates:
- /usr/share/openstack-tripleo-heat-templates/environments/services/sahara.yaml
```
ovs-security-groups.yml[¶](#id6)
```
---
tripleo_heat_templates:
[]
custom_templates:
parameter_defaults:
NeutronOVSFirewallDriver: openvswitch
```
* `--overcloud-script`: Customize the script that will deploy the overcloud.
A path to a `*.sh` file containing `openstack overcloud deploy` command.
This is for advanced users.
* `--heat-templates-basedir`: Allows to override the templates base dir to be used for deployment. Default value: “/usr/share/openstack-tripleo-heat-templates”
* `--resource-class-enabled`: Allows enabling or disabling scheduling based on resource classes.
With scheduling based on resource classes, a Compute service flavor is able to use the node’s resource_class field (available starting with Bare Metal API version 1.21)
for scheduling, instead of the CPU, RAM, and disk properties defined in the flavor.
A flavor can request exactly one instance of a bare metal resource class.
For more information about this feature, visit [Openstack documentation](https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html#scheduling-based-on-resource-classes).
To disable scheduling based on resource classes:
```
--resource-class-enabled False
```
Note
* Scheduling based on resource classes is supported by OSP version >=12.
* Scheduling based on resource classes is enabled by default for OSP version >=12.
* `--resource-class-override`: Allows creating a custom resource class and associating it with a flavor and instances.
The node field supports controller or controller-0 patterns, or a list of nodes split by the delimiter `:`, where controller means any node with such a name, while controller-0 is just that specific node.
Example:
```
--resource-class-override name=baremetal-ctr,flavor=controller,node=controller
--resource-class-override name=baremetal-cmp,flavor=compute,node=compute-0
--resource-class-override name=baremetal-other,flavor=compute,node=swift-0:baremetal
```
#### Tripleo Heat Templates configuration options[¶](#tripleo-heat-templates-configuration-options)
* `--config-heat`: Inject additional Tripleo Heat Templates configuration options under the “parameter_defaults”
entry point. Example:
```
--config-heat ComputeExtraConfig.nova::allow_resize_to_same_host=true
--config-heat NeutronOVSFirewallDriver=openvswitch
```
should inject the following yaml to “overcloud deploy” command:
```
---
parameter_defaults:
ComputeExtraConfig:
nova::allow_resize_to_same_host: true
NeutronOVSFirewallDriver: openvswitch
```
* `--config-resource`: Inject additional Tripleo Heat Templates configuration options under “resource_registry”
entry point. Example:
```
--config-resource OS::TripleO::BlockStorage::Net::SoftwareConfig=/home/stack/nic-configs/cinder-storage.yaml
```
should inject the following yaml to “overcloud deploy” command:
```
---
resource_registry:
OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/nic-configs/cinder-storage.yaml
```
#### Controlling Node Placement[¶](#controlling-node-placement)
The default behavior for the director is to randomly select nodes for each role, usually based on their profile tag.
However, the director provides the ability to define specific node placement. This is a useful method to:
> * Assign specific node IDs
> * Assign custom hostnames
> * Assign specific IP addresses
[Cookbook](control_placement.html) example
Note
Options are supported for OSP10+
* `--specific-node-ids`: Bool. Default tagging behaviour is to set properties/capabilities profile, which is based on the node_type for all nodes from this type. If this value is set to true/yes, default behaviour will be overwritten and profile will be removed, node id will be added to properties/capabilities and scheduler hints will be generated. Examples of node IDs include controller-0, controller-1, compute-0, compute-1, and so forth.
* `--custom-hostnames`: Option to provide custom Hostnames for the nodes. Custom hostnames can be provided as values or an env file. Examples:
```
--custom-hostnames controller-0=ctr-rack-1-0,compute-0=compute-rack-2-0,ceph-0=ceph-rack-3-0
```
```
--custom-hostnames local/path/to/custom_hostnames.yaml
```
```
---
parameter_defaults:
HostnameMap:
ceph-0: storage-0
ceph-1: storage-1
ceph-2: storage-2
compute-0: novacompute-0
compute-1: novacompute-1
controller-0: ctrl-0
controller-1: ctrl-1
controller-2: ctrl-2
networker-0: net-0
```
Warning
When custom hostnames are used, after the Overcloud install, the InfraRed inventory will be updated with the new node names. The original node name will be stored as an inventory variable named “original_name”. “original_name” can be used in playbooks as a normal host var.
* `--predictable-ips`: Bool, assign Overcloud nodes with specific IPs on each network. IPs have to be outside DHCP pools.
> Warning
> Currently InfraRed only creates the template for “resource_registry”. Node IPs need to be provided
> as a user environment template, with the option `--overcloud-templates`.
> Example of the template:
> ```
> ---
> parameter_defaults:
> CephStorageIPs:
> storage:
> - 172.16.1.100
> - 172.16.1.101
> - 172.16.1.102
> storage_mgmt:
> - 172.16.3.100
> - 172.16.3.101
> - 172.16.3.102
> ```
#### Overcloud Storage[¶](#overcloud-storage)
* `--storage-external`: Bool. If `no`, the overcloud will deploy and manage the storage nodes.
If `yes`, the overcloud will connect to an external, pre-existing storage service.
* `--storage-backend`:
The type of storage service used as backend.
* `--storage-config`:
Storage configuration (YAML) file.
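A hedged sketch of a deployment where the overcloud manages its own storage nodes; the `ceph` backend name and node count are assumptions for illustration:
```
infrared tripleo-overcloud --version 12 \
    --deployment-files virt \
    --storage-external no \
    --storage-backend ceph \
    --storage-nodes 1 \
    --deploy yes
```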
#### Composable Roles[¶](#composable-roles)
InfraRed allows to use custom roles to deploy overcloud. Check the [Composable roles](composable_roles.html) page for details.
#### Overcloud Upgrade[¶](#overcloud-upgrade)
Warning
Before Overcloud upgrade you need to perform upgrade of [Undercloud](tripleo-undercloud.html)
Warning
Upgrading from version 11 to version 12 isn’t supported via the tripleo-overcloud plugin anymore. Please check the tripleo-upgrade plugin for 11 to 12 [upgrade instructions](tripleo_upgrade.html).
Upgrade will detect Undercloud version and will upgrade Overcloud to the same version.
* `--upgrade`: Bool. If yes, the overcloud will be upgraded.
Example:
```
infrared tripleo-overcloud -v --upgrade yes --deployment-files virt
```
* `--build`: target build to upgrade to
* `--enable-testing-repos`: Lets you enable testing/pending repos with rhos-release. Multiple values have to be comma separated.
Examples: `--enable-testing-repos rhel,extras,ceph` or `--enable-testing-repos all`
Example:
```
infrared tripleo-overcloud -v --upgrade yes --build 2017-05-30.1 --deployment-files virt
```
Note
Upgrade assumes that the Overcloud Deployment script and files/templates, which were used during the initial deployment, are available on the Undercloud node in the home directory of the Undercloud user. The deployment script location is assumed to be “~/overcloud_deploy.sh”.
#### Overcloud Update[¶](#overcloud-update)
Warning
Before Overcloud update it’s recommended to update [Undercloud](tripleo-undercloud.html)
Warning
Overcloud Install, Overcloud Update and Overcloud Upgrade are mutually exclusive
Note
InfraRed supports minor updates from OpenStack 7
Minor update detects the Undercloud’s version and updates packages within the same version to the latest available.
* `--ocupdate`: Bool. Deprecates: `--updateto`. If yes, the overcloud will be updated.
* `--build`: target build to update to. Defaults to `None`, in which case update is disabled.
Possible values: build-date, `latest`, `passed_phase1`, `z3` and all other labels supported by `rhos-release`.
When specified, rhos-release repos will be set up and used for minor updates.
* `--enable-testing-repos`: Lets you enable testing/pending repos with rhos-release. Multiple values have to be comma separated.
Examples: `--enable-testing-repos rhel,extras,ceph` or `--enable-testing-repos all`
Example:
```
infrared tripleo-overcloud -v --ocupdate yes --build latest --deployment-files virt
```
Note
Minor update expects that the Overcloud Deployment script and files/templates,
used during the initial deployment, are available on the Undercloud node in the home directory of the Undercloud user.
The deployment script location is assumed to be “~/overcloud_deploy.sh”.
* `--buildmods`: Lets you add flags to rhos-release:
> `pin` - Pin puddle (dereference ‘latest’ links to prevent content from changing). This flag is selected by default
> `flea` - Enable flea repos.
> `unstable` - This will enable brew repos or poodles (in old releases).
> `none` - Use none of those flags.
> Note
> `--buildmods` flag is for internal Red Hat usage.
#### Overcloud Reboot[¶](#overcloud-reboot)
It is possible to reboot overcloud nodes. This is needed if the kernel got updated.
* `--postreboot`: Bool. If yes, reboot overcloud nodes one by one.
Example:
```
infrared tripleo-overcloud --deployment-files virt --postreboot yes
infrared tripleo-overcloud --deployment-files virt --ocupdate yes --build latest --postreboot yes
```
##### TLS Everywhere[¶](#tls-everywhere)
Setup TLS Everywhere with FreeIPA.
`tls-everywhere`: It will configure overcloud for TLS Everywhere.
### Cloud Config[¶](#cloud-config)
Collection of overcloud configuration tasks to run after Overcloud deploy (Overcloud post tasks)
#### Flags[¶](#flags)
* `--tasks`:
Run one or more tasks on the cloud, separated with commas.
```
# Example:
infrared cloud-config --tasks create_external_network,compute_ssh,instance_ha
```
* `--overcloud-stack`:
The overcloud stack name.
* `--resync`:
Bool. Whether we need to resync services.
#### External Network[¶](#external-network)
To create an external network we need to specify the task `create_external_network` in `--tasks` and then use the following flags:
* `--deployment-files`:
Name of the folder in the cloud user's home directory on the undercloud, containing the templates of the overcloud deployment.
* `--network-protocol`:
The overcloud network backend.
* `--public-net-name`:
Specifies the name of the public network.
Note
If not provided, it will use the default one for the OSP version.
* `--public-subnet`:
Path to file containing different values for the subnet of the network above.
* `--external-vlan`:
An optional external VLAN ID of the external network (not the Public API network).
Set this to `yes` if the overcloud’s external network is on a VLAN that’s unreachable from the undercloud. This will configure network access from the UnderCloud to the overcloud’s API/External (floating IPs)
network, creating a new VLAN interface connected to OVS’s `br-ctlplane` bridge.
Note
If your UnderCloud’s network is already configured properly, this could disrupt it, making the overcloud API unreachable. For more details, see:
[VALIDATING THE OVERCLOUD](https://access.redhat.com/documentation/en/red-hat-openstack-platform/10-beta/paged/director-installation-and-usage/chapter-6-performing-tasks-after-overcloud-creation)
```
# Example:
ir cloud-config --tasks create_external_network --deployment-files virt --public-subnet default_subnet --network-protocol ipv4
```
#### Scale Up/Down nodes[¶](#scale-up-down-nodes)
* `--scale-nodes`:
List of compute nodes to be added.
```
# Example:
ir cloud-config --tasks scale_up --scale-nodes compute-1,compute-2
```
* `--node-name`:
Name of the node to remove.
```
# Example:
ir cloud-config --tasks scale_down --node-name compute-0
```
#### Ironic Configuration[¶](#ironic-configuration)
* `vbmc-username`:
VBMC username.
* `vbmc-password`:
VBMC password.
Note
Necessary when Ironic’s driver is ‘pxe_ipmitool’ in OSP 11 and above.
#### Workload Launch[¶](#workload-launch)
* `--workload-image-url`:
Image source URL that should be used for uploading the workload Glance image.
* `--workload-memory`:
Amount of memory allocated to test workload flavor.
* `--workload-vcpu`:
Amount of v-cpus allocated to test workload flavor.
* `--workload-disk`:
Disk size allocated to test workload flavor.
* `--workload-index`:
Number of workload objects to be created.
```
# Example:
ir cloud-config --workload-memory 64 --workload-disk 1 --workload-index 3
```
### Tempest[¶](#tempest)
Runs Tempest tests against an OpenStack cloud.
#### Required arguments[¶](#required-arguments)
* `--openstack-installer`: The installer used to deploy OpenStack.
Enables extra configuration steps for certain installers. Supported installers are: `tripleo` and `packstack`.
* `--openstack-version`: The version of the OpenStack installed.
Enables additional configuration steps when version <= 7.
* `--tests`: The list of test suites to execute. For example: `network,compute`.
The complete list of the available suites can be found by running `ir tempest --help`
* `--openstackrc`: The [OpenStack RC](http://docs.openstack.org/user-guide/common/cli-set-environment-variables-using-openstack-rc.html) file.
The absolute and relative paths to the file are supported. When this option is not provided, infrared will try to use the keystonerc file from the active workspace.
The openstackrc file is copied to the tester station and used to configure and run Tempest.
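Putting the required arguments together, a basic run might look like this (the suite names and rc path are placeholders):
```
ir tempest --openstack-installer tripleo \
    --openstack-version 12 \
    --tests sanity,network \
    --openstackrc ~/keystonerc
```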
#### Optional arguments[¶](#optional-arguments)
The following useful arguments can be provided to tune tempest tester. Complete list of arguments can be found by running `ir tempest --help`.
* `--setup`: The setup type for the tempest.
Can be `git` (default), `rpm` or pip. Default tempest git repository is <https://git.openstack.org/openstack/tempest.git>. This value can be overridden with the `--extra-vars` cli option:
```
ir tempest -e setup.repo=my.custom.repo [...]
```
* `--revision`: Specifies the revision when tempest is installed from the git repository.
Default value is `HEAD`.
* `--deployer-input-file`: The deployer input file to use for Tempest configuration.
The absolute and relative paths to the file are supported. When this option is not provided, infrared will try to use the deployer-input-file.conf file from the active workspace folder.
For some OpenStack versions (kilo, juno, liberty) Tempest provides predefined deployer files. Those files can be downloaded from the git repo and passed to the Tempest tester:
```
BRANCH=liberty
wget https://raw.githubusercontent.com/redhat-openstack/tempest/$BRANCH/etc/deployer-input-$BRANCH.conf
ir tempest --tests=sanity \
--openstack-version=8 \
--openstack-installer=tripleo \
--deployer-input-file=deployer-input-$BRANCH.conf
```
* `--image`: Image to be uploaded to glance and used for testing. The path has to be a URL.
If image is not provided, tempest config will use the default.
Note
You can specify image ssh user with `--config-options compute.image_ssh_user=`
#### Tempest results[¶](#tempest-results)
infrared fetches all the tempest output files, such as results, to the `tempest_results` folder under the active [workspace](workspace.html) folder:
```
ll .workspace/my_workspace/tempest_results/tempest-*
-rw-rw-r--. tempest-results-minimal.xml
-rw-rw-r--. tempest-results-neutron.xml
```
#### Downstream tests[¶](#downstream-tests)
The tempest plugin provides the `--plugin` cli option which can be used to specify the plugin url to install. This option can be used, for example, to specify a downstream repo with tempest tests and run them:
```
ir tempest --tests=neutron_downstream \
--openstack-version=12 \
--openstack-installer=tripleo \
--plugin=https://downstrem.repo/tempest_neutron_plugin \
--setup rpm
```
The plugin flag can also specify the version of the plugin to clone by separating the url and version with a comma:
```
ir tempest --tests=neutron_downstream \
--openstack-version=12 \
--openstack-installer=tripleo \
--plugin=https://downstrem.repo/tempest_neutron_plugin,osp10 \
--setup rpm
```
The neutron_downstream.yml file can reference the upstream project in case the downstream repo is dependent on or imports any upstream modules:
```
---
test_dict:
test_regex: ''
whitelist:
- "^neutron_plugin.tests.scenario.*"
blacklist:
- "^tempest.api.network.*"
- "^tempest.scenario.test_network_basic_ops.test_hotplug_nic"
- "^tempest.scenario.test_network_basic_ops.test_update_instance_port_admin_state"
- "^tempest.scenario.test_network_basic_ops.test_port_security_macspoofing_port"
plugins:
upstream_neutron:
repo: "https://github.com/openstack/neutron.git"
```
### Collect-logs[¶](#collect-logs)
The Collect-logs plugin allows the user to collect files & directories from hosts managed by the active workspace. A list of paths to be archived is taken from
`vars/default_archives_list.yml` in the plugin’s dir. Logs are packed as `.tar` files by default, unless the user explicitly uses the
`--gzip` flag, which instructs the plugin to compress the logs with `gzip`.
It also supports the ‘[sosreport](https://access.redhat.com/solutions/3592)’ tool to collect configuration and diagnostic information from the system. It is possible to use both logger facilities: log files from the host and sosreport.
Note
All nodes must have yum repositories configured in order for the tasks to work on them.
Note
Users can manually edit the `default_archives_list.yml` if need to add/delete paths.
Note
To enable logging using all available facilities, i.e. host and sosreport, use the parameter `--logger=all`.
Usage example:
```
ir collect-logs --dest-dir=/tmp/ir_logs
ir collect-logs --dest-dir=/tmp/ir_logs --logger=sosreport
```
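A sketch combining the compression and logger options mentioned above (assuming `--gzip` takes a yes/no value like other infrared flags):
```
ir collect-logs --dest-dir=/tmp/ir_logs --logger=all --gzip yes
```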
### Gabbi Tester[¶](#gabbi-tester)
Runs telemetry tests against the OpenStack cloud.
#### Required arguments[¶](#required-arguments)
* `--openstack-version`: The version of the OpenStack installed.
That option also defines the list of tests to run against the OpenStack.
* `--openstackrc`: The [OpenStack RC](http://docs.openstack.org/user-guide/common/cli-set-environment-variables-using-openstack-rc.html) file.
The absolute and relative paths to the file are supported. When this option is not provided, infrared will try to use the keystonerc file from the active workspace.
The openstackrc file is copied to the tester station and used to run tests
* `--undercloudrc`: The undercloud RC file.
The absolute and relative paths to the file are supported. When this option is not provided, infrared will try to use the stackrc file from the active workspace.
#### Optional arguments[¶](#optional-arguments)
* `--network`: Network settings to use.
Default network configuration includes the `protocol` (ipv4 or ipv6) and `interfaces` sections:
```
network:
protocol: ipv4
interfaces:
- net: management
name: eth1
- net: external
name: eth2
```
* `--setup`: The setup variables, such as git repo name, folders to use on tester and others:
```
setup:
repo_dest: ~/TelemetryGabbits
gabbi_venv: ~/gbr
gabbits_repo: <private-repo-url>
```
#### Gabbi results[¶](#gabbi-results)
infrared fetches all the output files, such as results, to the `gabbi_results` folder under the active [workspace](workspace.html) folder.
### List builds[¶](#list-builds)
The List Builds plugin is used to list all the available puddles (builds) for the given OSP version.
Usage:
```
$ ir list-builds --version 12
```
This will produce output in ansible style.
Alternatively you can have a clean raw output by saving builds to the file and printing them:
```
$ ir list-builds --version 12 --file-output builds.txt &> /dev/null && cat builds.txt
```
Output:
```
2017-08-16.1 # 16-Aug-2017 05:48
latest # 16-Aug-2017 05:48
latest_containers # 16-Aug-2017 05:48
passed_phase1 # 16-Aug-2017 05:48
......
```
### Pytest Runner[¶](#pytest-runner)
The Pytest runner provides an option to execute tests on the Tester node.
Usage:
```
$ ir pytest-runner
```
This will run the default tests for container sanity.
Optional arguments:
* `--run`: Whether to run the test or only to prepare for it. Default value is ‘True’.
* `--repo`: Git repo which contains the test. Default value is ‘<https://code.engineering.redhat.com/gerrit/rhos-qe-core-installer>’
* `--file`: Location of the pytest file in the git repo. Default value is ‘tripleo/container_sanity.py’
### OSPD UI tester[¶](#ospd-ui-tester)
The OSPD UI tester runs tests against the [undercloud](tripleo-undercloud.html) UI and works with RHOS10+.
#### Environment[¶](#environment)
To use the OSPD UI tester the following requirements should be met:
1. Undercloud should be installed.
2. `Instackenv.json` should be generated and put into the undercloud machine.
3. A dedicated machine (uitester) should be provisioned. This machine will be used to run all the tests.
InfraRed allows setting up such an environment. For example, the [virsh](virsh.html) plugin can be used to provision the required machines:
```
ir virsh -vvvv -o provision.yml \
--topology-nodes=ironic:1,controller:3,compute:1,tester:1 \
--host-address=example.host.redhat.com \
--host-key ~/.ssh/example-key.pem
```
Note
Do not include undercloud machine into the tester group by using the `ironic` node.
To install undercloud use the [tripleo undercloud](tripleo-undercloud.html) plugin:
```
ir tripleo-undercloud -vvvv \
--version=10 \
--images-task=rpm
```
To deploy the undercloud with **ssl** support, run the tripleo-undercloud plugin with the `--ssl yes` option, or use a special template which sets `generate_service_certificate` to `true` and sets the undercloud_public_vip to allow external access to the undercloud:
```
ir tripleo-undercloud -vvvv \
--version=10 \
--images-task=rpm \
--ssl yes
```
The next step is to generate `instackenv.json` file. This step can be done using the [tripleo overcloud](tripleo-overcloud.html) plugin:
```
ir tripleo-overcloud -vvvv \
--version=10 \
--deployment-files=virt \
--ansible-args="tags=init,instack" \
--introspect=yes
```
For the overcloud plugin it is important to specify the `instack` ansible tag to limit overcloud execution to only the generation of the instackenv.json file.
#### OSPD UI tester options[¶](#ospd-ui-tester-options)
To run OSPD UI tester the following command can be used:
```
ir ospdui -vvvv \
--openstack-version=10 \
--tests=login \
--ssl yes \
--browser=chrome
```
Required arguments:
* `--openstack-version`: specifies the version of the product under test.
* `--tests`: the test suite to run. Run `ir ospdui --help` to see the list of all available suites to run.
Optional arguments:
* `--ssl`: specifies whether the undercloud was installed with ssl enabled or not. Default value is ‘no’.
* `--browser`: the webdriver to use. Default browser is firefox
* `--setup`: specifies the config parameters for the tester. See [Advanced configuration](#advanced-configuration) for details
* `--undercloudrc`: the absolute or relative path to the undercloud rc file. By default, the ‘stackrc’ file from the workspace dir will be used.
* `--topology-config`: the absolute or relative path to the topology configuration in json format. By default the following file is used:
```
{
"topology": {
"Controller": "3",
"Compute": "1",
"Ceph Storage": "3",
"Object Storage": "0",
"Block Storage": "0"
},
"network": {
"vlan": "10",
"allocation_pool_start": "192.168.200.10",
"allocation_pool_end": "192.168.200.150",
"gateway": "192.168.200.254",
"subnet_cidr": "192.168.200.0/24",
"allocation_pool_start_ipv6": "2001:db8:ca2:4::0010",
"allocation_pool_end_ipv6": "2001:db8:ca2:4::00f0",
"gateway_ipv6": "2001:db8:ca2:4::00fe",
"subnet_cidr_ipv6": "2001:db8:ca2:4::/64"
}
}
```
#### Advanced configuration[¶](#advanced-configuration)
By default all the tester parameters are read from the `vars/setup/default.yml` file under the plugin dir.
Setup variable file describes selenium, test repo and network parameters to use:
```
setup:
selenium:
chrome_driver:
url: http://chromedriver.storage.googleapis.com/2.27/chromedriver_linux64.zip
firefox_driver:
url: https://github.com/mozilla/geckodriver/releases/download/v0.14.0/geckodriver-v0.14.0-linux64.tar.gz
binary_name: geckodriver
ospdui:
repo: git://git.app.eng.bos.redhat.com/ospdui.git
revision: HEAD
dir: ~/ospdui_tests
network:
dev: eth0
ipaddr: 192.168.24.240
netmask: 255.255.255.0
```
To override any of these values you can copy `vars/setup/default.yml` to the same folder with a different name and change any value in that yml (for example the git revision).
New setup config (without .yml extension) then can be specified with the `--setup` flag:
```
ir ospdui -vvvv \
--openstack-version=10 \
--tests=login \
--setup=custom_setup
```
#### Debugging[¶](#debugging)
The OSPD UI tester starts a VNC server on the tester machine (by default on display `:1`). This allows you to remotely debug and observe what is happening on the tester.
If you have direct network access to the tester, you can use any VNC client and connect.
If you are using a virtual deployment, a tunnel through the hypervisor to the tester instance should be created:
```
client $> ssh -f <EMAIL> -L 5901:<tester ip address>:5901 -N
```
Then you can use VNC viewer and connect to the `localhost:5901`.
#### Known Issues[¶](#known-issues)
* Automated UI tests cannot be run on the Firefox browser when SSL is enabled on undercloud.
Follow this guide to fix that problem: <https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/director_installation_and_usage/appe-server_exceptions>
### RDO deployment[¶](#rdo-deployment)
Infrared allows to perform RDO based deployments.
To deploy RDO on virtual environment the following steps can be performed.
1. Provision virtual machines on a hypervisor with the virsh plugin. Use CentOS image:
```
infrared virsh -vv \
-o provision.yml \
--topology-nodes undercloud:1,controller:1,compute:1,ceph:1 \
--host-address my.host.redhat.com \
--host-key /path/to/host/key \
--image-url https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 \
-e override.controller.cpu=8 \
-e override.controller.memory=32768
```
2. Install the undercloud. Use RDO release name as a version:
```
infrared tripleo-undercloud -vv -o install.yml \
-o undercloud-install.yml \
--version pike
```
3. Build or import overcloud images from <https://images.rdoproject.org>:
```
# import images
infrared tripleo-undercloud -vv \
-o undercloud-images.yml \
--images-task=import \
--images-url=https://images.rdoproject.org/pike/rdo_trunk/current-tripleo/stable/
# or build images
infrared tripleo-undercloud -vv \
-o undercloud-images.yml \
--images-task=build
```
Note
Overcloud image build process often takes more time than import.
4. Install RDO:
```
infrared tripleo-overcloud -v \
-o overcloud-install.yml \
--version pike \
--deployment-files virt \
--introspect yes \
--tagging yes \
--deploy yes
infrared cloud-config -vv \
-o cloud-config.yml \
--deployment-files virt \
--tasks create_external_network,forward_overcloud_dashboard,network_time,tempest_deployer_input
```
To install containerized RDO version (pike and above) the
`--registry-*`, `--containers yes` and `--registry-skip-puddle yes`
parameters should be provided:
```
infrared tripleo-overcloud \
--version queens \
--deployment-files virt \
--introspect yes \
--tagging yes \
--deploy yes \
--containers yes \
--registry-mirror trunk.registry.rdoproject.org \
--registry-namespace master \
--registry-tag current-tripleo-rdo \
--registry-prefix=centos-binary- \
--registry-skip-puddle yes
infrared cloud-config -vv \
-o cloud-config.yml \
--deployment-files virt \
--tasks create_external_network,forward_overcloud_dashboard,network_time,tempest_deployer_input
```
Note
For the `--registry-tag` the following RDO tags can be used:
`current-passed-ci`, `current-tripleo`, `current`, `tripleo-ci-testing`, etc
#### Known issues[¶](#known-issues)
1. Overcloud deployment fails with the following message:
```
Error: /Stage[main]/Gnocchi::Db::Sync/Exec[gnocchi-db-sync]: Failed to call refresh: Command exceeded timeout
Error: /Stage[main]/Gnocchi::Db::Sync/Exec[gnocchi-db-sync]: Command exceeded timeout
```
> This error might be caused by <https://bugs.launchpad.net/tripleo/+bug/1695760>.
> To work around that issue, the `--overcloud-templates disable-telemetry` flag should be added to the tripleo-overcloud command:
> ```
> infrared tripleo-overcloud -v \
> -o overcloud-install.yml \
> --version pike \
> --deployment-files virt \
> --introspect yes \
> --tagging yes \
> --deploy yes \
> --overcloud-templates disable-telemetry
> infrared cloud-config -vv \
> -o cloud-config.yml \
> --deployment-files virt \
> --tasks create_external_network,forward_overcloud_dashboard,network_time,tempest_deployer_input
> ```
### SplitStack deployment[¶](#splitstack-deployment)
Infrared allows performing a [SplitStack](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/11/html/director_installation_and_usage/chap-configuring_basic_overcloud_requirements_on_pre_provisioned_nodes)-based deployment.
To deploy SplitStack on a virtual environment, the following steps can be performed.
1. Provision virtual machines on a hypervisor with the virsh plugin:
```
infrared virsh -o provision.yml \
--topology-nodes undercloud:1,controller:3,compute:1 \
--topology-network split_nets \
--host-address $host \
--host-key $key \
--host-memory-overcommit False \
--image-url http://cool_image_url \
-e override.undercloud.disks.disk1.size=55G \
-e override.controller.cpu=8 \
-e override.controller.memory=32768 \
-e override.controller.deploy_os=true \
-e override.compute.deploy_os=true
```
2. Install the undercloud using the required version (currently versions 11 and 12 have been tested):
```
infrared tripleo-undercloud -o install.yml \
-o undercloud-install.yml \
--mirror tlv \
--version 12 \
--build passed_phase1 \
--splitstack yes \
--ssl yes
```
3. Install overcloud:
```
infrared tripleo-overcloud -o overcloud-install.yml \
--version 12 \
--deployment-files splitstack \
--role-files default \
--deploy yes \
--splitstack yes
```
### Composable Roles[¶](#composable-roles)
InfraRed allows defining [Composable Roles](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/advanced_overcloud_customization/#Roles) while installing OpenStack with tripleo.
#### Overview[¶](#overview)
To deploy the overcloud with composable roles, the following additional templates should be provided:
* nodes template: lists all the roles and the list of services for every role. For example:
```
- name: ObjectStorage
CountDefault: 1
ServicesDefault:
- OS::TripleO::Services::CACerts
- OS::TripleO::Services::Kernel
- OS::TripleO::Services::Ntp
[...]
HostnameFormatDefault: swift-%index%
- name: Controller
CountDefault: 1
ServicesDefault:
- OS::TripleO::Services::CACerts
- OS::TripleO::Services::CephMon
- OS::TripleO::Services::CephExternal
- OS::TripleO::Services::CephRgw
[...]
HostnameFormatDefault: controller-%index%
- name: Compute
CountDefault: 1
ServicesDefault:
- OS::TripleO::Services::CACerts
- OS::TripleO::Services::CephClient
- OS::TripleO::Services::CephExternal
[....]
HostnameFormatDefault: compute-%index%
- name: Networker
CountDefault: 1
ServicesDefault:
- OS::TripleO::Services::CACerts
- OS::TripleO::Services::Kernel
[...]
HostnameFormatDefault: networker-%index%
```
* a template with the information about role counts, flavors and other defaults:
```
parameter_defaults:
ObjectStorageCount: 1
OvercloudSwiftStorageFlavor: swift
ControllerCount: 2
OvercloudControlFlavor: controller
ComputeCount: 1
OvercloudComputeFlavor: compute
NetworkerCount: 1
OvercloudNetworkerFlavor: networker
[...]
```
* a template with the information about role resources (usually network and port resources):
```
resource_registry:
OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/deployment_files/network/nic-configs/osp11/swift-storage.yaml
OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/deployment_files/network/nic-configs/osp11/controller.yaml
OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/deployment_files/network/nic-configs/osp11/compute.yaml
OS::TripleO::Networker::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
OS::TripleO::Networker::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
OS::TripleO::Networker::Net::SoftwareConfig: /home/stack/deployment_files/network/nic-configs/osp11/networker.yaml
[...]
```
Note
The nic-configs in the infrared deployment folder are stored in two folders (`osp11` and `legacy`) depending on the product version installed.
InfraRed simplifies the process of template generation and auto-populates the roles according to the deployed topology.
#### Defining topology and roles[¶](#defining-topology-and-roles)
Deployment approaches with composable roles differ between OSP11 and OSP12+ products.
For OSP11 the user should manually compose all the roles templates and provide them to the deploy script.
For OSP12 and above, tripleo provides the `openstack overcloud roles generate` command to automatically generate roles templates.
See [THT roles](https://github.com/openstack/tripleo-heat-templates/tree/master/roles) for more information about tripleo roles.
##### OSP12 Deployment[¶](#osp12-deployment)
Infrared provides three options to deploy OpenStack with composable roles in OSP12+.
**1) Automatically discover roles from the inventory.** In that case Infrared tries to determine which roles should be used based on the list of the `overcloud_nodes` from the inventory file. To enable automatic role discovery, the `--role-files`
option should be set to `auto` or any other non-list value (not separated with ','). For example:
```
# provision
ir virsh -vvvv \
--topology-nodes=undercloud:1,controller:2,compute:1,networker:1,swift:1 \
--host-address=seal52.qa.lab.tlv.redhat.com \
--host-key ~/.ssh/my-prov-key
# do undercloud install [...]
# overcloud
ir tripleo-overcloud -vvvv \
--version=12 \
--deploy=yes \
--role-files=auto \
--deployment-files=composable_roles \
[...]
```
**2) Manually specify roles to use.** In that case the user can specify the list of roles to use by setting the `--role-files` option to the list of roles from the [THT roles](https://github.com/openstack/tripleo-heat-templates/tree/master/roles):
```
# provision
ir virsh -vvvv \
--topology-nodes=undercloud:1,controller:2,compute:1,messaging:1,database:1,networker:1 \
--host-address=seal52.qa.lab.tlv.redhat.com \
--host-key ~/.ssh/my-prov-key
# do undercloud install [...]
# overcloud
ir tripleo-overcloud -vvvv \
--version=12 \
--deploy=yes \
--role-files=ControllerOpenstack,Compute,Messaging,Database,Networker \
--deployment-files=composable_roles \
[...]
```
**3) Use the legacy OSP11 approach to generate roles templates.** See the detailed description below.
To enable that approach, the `--tht-roles` flag should be set to `no` and the `--role-files` should point to the IR folder with the roles. For example:
```
# provision
ir virsh -vvvv \
--topology-nodes=undercloud:1,controller:2,compute:1,networker:1,swift:1 \
--host-address=seal52.qa.lab.tlv.redhat.com \
--host-key ~/.ssh/my-prov-key
# do undercloud install [...]
# overcloud
ir tripleo-overcloud -vvvv \
--version=12 \
--deploy=yes \
--role-files=networker \
--tht-roles=no \
--deployment-files=composable_roles \
[...]
```
##### OSP11 Deployment[¶](#osp11-deployment)
To deploy custom roles, InfraRed should know what nodes should be used for what roles. This involves a 2-step procedure.
**Step #1** Setup available nodes and store them in the InfraRed inventory. Those nodes can be configured by the `provision` plugin such as [virsh](virsh.html):
```
ir virsh -vvvv \
--topology-nodes=undercloud:1,controller:2,compute:1,networker:1,swift:1 \
--host-address=seal52.qa.lab.tlv.redhat.com \
--host-key ~/.ssh/my-prov-key
```
In that example we defined a `networker` node which holds all the neutron services.
**Step #2** Provide a path to the roles definition while [installing the overcloud](tripleo-overcloud.html) using the `--role-files` option:
```
ir tripleo-overcloud -vvvv \
--version=10 \
--deploy=yes \
--role-files=networker \
--deployment-files=composable_roles \
--introspect=yes \
--storage-backend=swift \
--tagging=yes \
--post=yes
```
In that example, to build the composable roles templates, InfraRed will look into the `<plugin_dir>/files/roles/networker` folder for the files that correspond to all the node names defined in the `inventory->overcloud_nodes` group.
All those role files hold role parameters. See [Role Description](#role-description) section for details.
When a role file is not found in the user-specified folder, InfraRed will try to use the `default` roles from the `<plugin_dir>/files/roles/default` folder.
For the topology described above with the networker custom role the following role files can be defined:
* <plugin_dir>/files/roles/**networker**/controller.yml - holds controller roles without neutron services
* <plugin_dir>/files/roles/**networker**/networker.yml - holds the networker role description with the neutron services
* <plugin_dir>/files/roles/**default**/compute.yml - a default compute role description
* <plugin_dir>/files/roles/**default**/swift.yml - a default swift role description
To deploy non-supported roles, a new folder should be created in `<plugin_dir>/files/roles/`.
Any role files that differ (e.g. in their service list) from the defaults should be put there. That folder can then be referenced with the `--role-files=<folder name>` argument, as sketched below.
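For illustration, a minimal sketch of such a custom folder (the folder name `my_roles` and its file list are hypothetical; the flags reuse the options shown above):
```
# hypothetical custom role folder
<plugin_dir>/files/roles/my_roles/controller.yml   # controller role with a modified service list
<plugin_dir>/files/roles/my_roles/networker.yml    # custom networker role
# roles not present here fall back to <plugin_dir>/files/roles/default/

# reference the folder during the overcloud deployment
ir tripleo-overcloud --role-files=my_roles --deployment-files=composable_roles [...]
```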
#### Role Description[¶](#role-description)
All the custom and defaults role descriptions are stored in the `<plugin_dir>/files/roles` folder.
Every role file holds the following information:
> * `name` - name of the role
> * `resource_registry` - all the resources required for a role.
> * `flavor` - the flavor to use for a role
> * `host_name_format` - the resulting host name format for the role node
> * `services` - the list of services the role holds
Below is an example of the controller default role:
```
controller_role:
name: Controller
# the primary role will be listed first in the roles_data.yaml template file.
primary_role: yes
# include resources
# the following vars can be used here:
# - ${ipv6_postfix}: will be replaced with _v6 when the ipv6 protocol is used for installation, otherwise is empty
# - ${deployment_dir} - will be replaced by the deployment folder location on the undercloud. Deployment folder can be specified with the ospd --deployment flag
# - ${nics_subfolder} - will be replaced by the appropriate subfolder with the nic-config's. The subfolder value
# is dependent on the product version installed.
resource_registry:
"OS::TripleO::Controller::Net::SoftwareConfig": "${deployment_dir}/network/nic-configs/${nics_subfolder}/controller${ipv6_postfix}.yaml"
# required to support OSP12 deployments
networks:
- External
- InternalApi
- Storage
- StorageMgmt
- Tenant
# we can also set a specific flavor for a role.
flavor: controller
host_name_format: 'controller-%index%'
# condition can be used to include or disable services. For example:
# - "{% if install.version |openstack_release < 11 %}OS::TripleO::Services::VipHosts{% endif %}"
services:
- OS::TripleO::Services::CACerts
- OS::TripleO::Services::CephClient
- OS::TripleO::Services::CephExternal
- OS::TripleO::Services::CephRgw
- OS::TripleO::Services::CinderApi
- OS::TripleO::Services::CinderBackup
- OS::TripleO::Services::CinderScheduler
- OS::TripleO::Services::CinderVolume
- OS::TripleO::Services::Core
- OS::TripleO::Services::Kernel
- OS::TripleO::Services::Keystone
- OS::TripleO::Services::GlanceApi
- OS::TripleO::Services::GlanceRegistry
- OS::TripleO::Services::HeatApi
- OS::TripleO::Services::HeatApiCfn
- OS::TripleO::Services::HeatApiCloudwatch
- OS::TripleO::Services::HeatEngine
- OS::TripleO::Services::MySQL
- OS::TripleO::Services::NeutronDhcpAgent
- OS::TripleO::Services::NeutronL3Agent
- OS::TripleO::Services::NeutronMetadataAgent
- OS::TripleO::Services::NeutronApi
- OS::TripleO::Services::NeutronCorePlugin
- OS::TripleO::Services::NeutronOvsAgent
- OS::TripleO::Services::RabbitMQ
- OS::TripleO::Services::HAproxy
- OS::TripleO::Services::Keepalived
- OS::TripleO::Services::Memcached
- OS::TripleO::Services::Pacemaker
- OS::TripleO::Services::Redis
- OS::TripleO::Services::NovaConductor
- OS::TripleO::Services::MongoDb
- OS::TripleO::Services::NovaApi
- OS::TripleO::Services::NovaMetadata
- OS::TripleO::Services::NovaScheduler
- OS::TripleO::Services::NovaConsoleauth
- OS::TripleO::Services::NovaVncProxy
- OS::TripleO::Services::Ntp
- OS::TripleO::Services::SwiftProxy
- OS::TripleO::Services::SwiftStorage
- OS::TripleO::Services::SwiftRingBuilder
- OS::TripleO::Services::Snmp
- OS::TripleO::Services::Timezone
- OS::TripleO::Services::CeilometerApi
- OS::TripleO::Services::CeilometerCollector
- OS::TripleO::Services::CeilometerExpirer
- OS::TripleO::Services::CeilometerAgentCentral
- OS::TripleO::Services::CeilometerAgentNotification
- OS::TripleO::Services::Horizon
- OS::TripleO::Services::GnocchiApi
- OS::TripleO::Services::GnocchiMetricd
- OS::TripleO::Services::GnocchiStatsd
- OS::TripleO::Services::ManilaApi
- OS::TripleO::Services::ManilaScheduler
- OS::TripleO::Services::ManilaBackendGeneric
- OS::TripleO::Services::ManilaBackendNetapp
- OS::TripleO::Services::ManilaBackendCephFs
- OS::TripleO::Services::ManilaShare
- OS::TripleO::Services::AodhApi
- OS::TripleO::Services::AodhEvaluator
- OS::TripleO::Services::AodhNotifier
- OS::TripleO::Services::AodhListener
- OS::TripleO::Services::SaharaApi
- OS::TripleO::Services::SaharaEngine
- OS::TripleO::Services::IronicApi
- OS::TripleO::Services::IronicConductor
- OS::TripleO::Services::NovaIronic
- OS::TripleO::Services::TripleoPackages
- OS::TripleO::Services::TripleoFirewall
- OS::TripleO::Services::OpenDaylightApi
- OS::TripleO::Services::OpenDaylightOvs
- OS::TripleO::Services::SensuClient
- OS::TripleO::Services::FluentdClient
- OS::TripleO::Services::VipHosts
```
The name of the role files should correspond to the node inventory name without prefix and index.
For example, for `user-prefix-controller-0` the name of the role should be `controller.yml`.
#### OSP11 Deployment example[¶](#osp11-deployment-example)
To deploy OpenStack with composable roles on a virtual environment, the following steps can be performed.
1. Provision all the required virtual machines on a hypervisor with the virsh plugin:
```
infrared virsh -vv \
-o provision.yml \
--topology-nodes undercloud:1,controller:3,db:3,messaging:3,networker:2,compute:1,ceph:1 \
--host-address my.host.redhat.com \
--host-key /path/to/host/key \
-e override.controller.cpu=8 \
-e override.controller.memory=32768
```
2. Install undercloud and overcloud images:
```
infrared tripleo-undercloud -vv -o install.yml \
-o undercloud-install.yml \
--version 11 \
--images-task rpm
```
3. Install overcloud:
```
infrared tripleo-overcloud -vv \
-o overcloud-install.yml \
--version 11 \
--role-files=composition \
--deployment-files composable_roles \
--introspect yes \
--tagging yes \
--deploy yes
infrared cloud-config -vv \
-o cloud-config.yml \
--deployment-files virt \
--tasks create_external_network,forward_overcloud_dashboard,network_time,tempest_deployer_input
```
### Tripleo OSP with Red Hat Subscriptions[¶](#tripleo-osp-with-red-hat-subscriptions)
#### Undercloud[¶](#undercloud)
To deploy OSP, the Undercloud [must be registered](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/11/html/director_installation_and_usage/chap-installing_the_undercloud#sect-Registering_your_System) to Red Hat channels.
Define the subscription details:
undercloud_cdn.yml[¶](#id1)
```
---
server_hostname: 'subscription.rhsm.redhat.com'
username: '<EMAIL>'
password: '123456'
autosubscribe: yes
server_insecure: yes
```
Warning
During run time, contents of the file are hidden from the logged output, to protect private account credentials.
For the full list of supported input, see the Ansible [module documentation](http://docs.ansible.com/ansible/redhat_subscription_module.html).
For example, `autosubscribe: yes` can be replaced with `pool_id` or `pool: REGEX`,
where `REGEX` is a regular expression that searches for matching available pools.
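For instance, a variant of the file above that attaches to a pool matched by a regular expression instead of auto-attaching (a sketch only; the pool expression is a placeholder):
```
---
server_hostname: 'subscription.rhsm.redhat.com'
username: '<EMAIL>'
password: '123456'
pool: '^Red Hat OpenStack'
server_insecure: yes
```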
Note
Pre-registered undercloud is also supported if `--cdn` flag is missing.
Deploy your undercloud. It’s recommended to use `--images-task rpm` to fetch pre-packaged images that are only available via Red Hat channels:
```
infrared tripleo-undercloud --version 11 --cdn undercloud_cdn.yml --images-task rpm
```
Warning
`--images-update` is not supported with cdn.
#### Overcloud[¶](#overcloud)
Once the undercloud is registered, the overcloud can be deployed. However, the overcloud nodes will not be registered and cannot receive updates. While the nodes can be later registered manually, Tripleo provides a way to register them automatically on deployment.
According to the [guide](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/advanced_overcloud_customization/sect-registering_the_overcloud) there are 2 heat-templates required. They can be included,
and their defaults overridden, using a [custom templates file](tripleo_overcloud.html).
overcloud_cdn.yml[¶](#id2)
```
---
tripleo_heat_templates:
- /usr/share/openstack-tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/rhel-registration-resource-registry.yaml
- /usr/share/openstack-tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/environment-rhel-registration.yaml
custom_templates:
parameter_defaults:
rhel_reg_activation_key: ""
rhel_reg_org: ""
rhel_reg_pool_id: ""
rhel_reg_method: "portal"
rhel_reg_sat_url: ""
rhel_reg_sat_repo: "rhel-7-server-rpms rhel-7-server-extras-rpms rhel-7-server-rh-common-rpms rhel-ha-for-rhel-7-server-rpms rhel-7-server-openstack-10-rpms"
rhel_reg_repos: ""
rhel_reg_auto_attach: ""
rhel_reg_base_url: "https://cdn.redhat.com"
rhel_reg_environment: ""
rhel_reg_force: "true"
rhel_reg_machine_name: ""
rhel_reg_password: "123456"
rhel_reg_release: ""
rhel_reg_server_url: "subscription.rhsm.redhat.com"
rhel_reg_service_level: ""
rhel_reg_user: "<EMAIL>"
rhel_reg_type: ""
rhel_reg_http_proxy_host: ""
rhel_reg_http_proxy_port: ""
rhel_reg_http_proxy_username: ""
rhel_reg_http_proxy_password: ""
```
Note
Please note that the repos in the file above are for OSP 10.
Deploy the overcloud with the custom templates file:
```
infrared tripleo-overcloud --version=11 --deployment-files=virt --introspect=yes --tagging=yes --deploy=yes --overcloud-templates overcloud_cdn.yml --post=yes
```
### Hybrid deployment[¶](#hybrid-deployment)
Infrared allows deploying a hybrid cloud. A hybrid cloud includes both virtual nodes and baremetal nodes.
#### Create network topology configuration file[¶](#create-network-topology-configuration-file)
First, the appropriate network configuration should be created.
The most common configuration consists of 3 bridged networks and one NAT network used for provisioning the virtual machines; for it the following configuration can be used:
```
cat << EOF > plugins/virsh/vars/topology/network/3_bridges_1_net.yml
networks:
net1:
name: br-ctlplane
forward: bridge
nic: eno2
ip_address: 192.0.70.200
netmask: 255.255.255.0
net2:
name: br-vlan
forward: bridge
nic: enp6s0f0
net3:
name: br-link
forward: bridge
nic: enp6s0f1
net4:
external_connectivity: yes
name: "management"
ip_address: "172.16.0.1"
netmask: "255.255.255.0"
forward: nat
dhcp:
range:
start: "172.16.0.2"
end: "172.16.0.100"
subnet_cidr: "172.16.0.0/24"
subnet_gateway: "172.16.0.1"
floating_ip:
start: "172.16.0.101"
end: "172.16.0.150"
EOF
```
Note
Change the nic names for the bridged networks to match the hypervisor interfaces.
Note
Make sure you have `ip_address` or `bootproto=dhcp` defined for the br-ctlplane bridge. This is needed to set up ssh access to the nodes after the deployment is completed.
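A minimal sketch of the same br-ctlplane bridge using DHCP instead of a static address (assuming the plugin accepts the `bootproto` key mentioned in the note above):
```
net1:
  name: br-ctlplane
  forward: bridge
  nic: eno2
  # hand out the bridge address via DHCP instead of the static ip_address/netmask pair
  bootproto: dhcp
```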
#### Create configurations files for the virtual nodes[¶](#create-configurations-files-for-the-virtual-nodes)
The next step is to add the network topology of the virtual nodes for the hybrid cloud: `controller` and `undercloud`.
The interface section of every node configuration should match the network configuration.
Add the undercloud configuration:
```
cat << EOF >> plugins/virsh/vars/topology/network/3_bridges_1_net.yml
nodes:
undercloud:
interfaces:
- network: "br-ctlplane"
bridged: yes
- network: "management"
external_network:
network: "management"
EOF
```
Add the controller configuration:
```
cat << EOF >> plugins/virsh/vars/topology/network/3_bridges_1_net.yml
controller:
interfaces:
- network: "br-ctlplane"
bridged: yes
- network: "br-vlan"
bridged: yes
- network: "br-link"
bridged: yes
- network: "management"
external_network:
network: "management"
EOF
```
#### Provision virtual nodes with virsh plugin[¶](#provision-virtual-nodes-with-virsh-plugin)
Once node configurations are done, the `virsh` plugin can be used to provision these nodes on a dedicated hypervisor:
```
infrared virsh -v \
--topology-nodes undercloud:1,controller:1 \
-e override.controller.memory=28672 \
-e override.undercloud.memory=28672 \
-e override.controller.cpu=6 \
-e override.undercloud.cpu=6 \
--host-address hypervisor.redhat.com \
--host-key ~/.ssh/key_file \
--topology-network 3_bridges_1_net
```
#### Install undercloud[¶](#install-undercloud)
Make sure you provide the undercloud.conf which corresponds to the baremetal environment:
```
infrared tripleo-undercloud -v \
--version=11 \
--build=passed_phase1 \
--images-task=rpm \
--config-file undercloud_hybrid.conf
```
#### Perform introspection and tagging[¶](#perform-introspection-and-tagging)
Create a JSON file which lists all the baremetal nodes required for the deployment:
```
cat << EOF > hybrid_nodes.json
{
"nodes": [
{
"name": "compute-0",
"pm_addr": "baremetal-mgmt.redhat.com",
"mac": ["14:02:ec:7c:88:30"],
"arch": "x86_64",
"pm_type": "pxe_ipmitool",
"pm_user": "admin",
"pm_password": "admin",
"cpu": "1",
"memory": "4096",
"disk": "40"
}]
}
EOF
```
Run introspection and tagging with infrared:
```
infrared tripleo-overcloud -vv -o prepare_instack.yml \
--version 11 \
--deployment-files virt \
--introspect=yes \
--tagging=yes \
--deploy=no \
-e provison_virsh_network_name=br-ctlplane \
--hybrid hybrid_nodes.json
```
Note
Make sure to provide the 'provison_virsh_network_name' variable to specify the network name to be used for provisioning.
#### Run deployment with appropriate templates[¶](#run-deployment-with-appropriate-templates)
Copy all the templates to the `plugins/tripleo-overcloud/vars/deployment/files/hybrid/` folder
and use the `--deployment-files hybrid` and `--deploy yes` flags to run the tripleo-overcloud deployment.
Additionally, the `--overcloud-templates` option can be used to pass additional templates:
```
infrared tripleo-overcloud -vv \
--version 11 \
--deployment-files hybrid \
--introspect=no \
--compute-nodes 1 \
--tagging=no \
--deploy=yes \
--overcloud-templates <list of templates>
```
Note
Make sure to provide the `--compute-nodes 1` option. It indicates the number of compute nodes to be used for deployment.
### How to create a new plugin[¶](#how-to-create-a-new-plugin)
This is a short guide on how a new plugin can be added to Infrared.
It is recommended to read the [Plugins](plugins.html) section before following the steps in this guide.
#### Create new Git repo for a plugin[¶](#create-new-git-repo-for-a-plugin)
The recommended way to store an Infrared plugin is to put it into a separate Git repo.
So create and init a new repo:
```
$ mkdir simple-plugin && cd simple-plugin
$ git init
```
Now you need to add two main files of every Infrared plugin:
* `plugin.spec`: describes the user interface of the plugin (CLI)
* `main.yml`: the default entry point ansible playbook which will be run by Infrared
#### Create plugin.spec[¶](#create-plugin-spec)
The `plugin.spec` holds the descriptions of all the CLI flags as well as plugin name and plugin descriptions.
A sample plugin specification file can look like this:
```
config:
plugin_type: other
entry_point: main.yml
subparsers:
# the actual name of the plugin
simple-plugin:
description: This is a simple demo plugin
include_groups: ["Ansible options", "Common options"]
groups:
- title: Option group.
options:
option1:
type: Value
help: Simple option with default value
default: foo
flag:
type: Bool
default: False
```
Config section:
* `plugin_type`:
Depending on what the plugin is intended to do, it can be `provision`, `install`, `test` or `other`.
See [plugin specification](plugins.html#plugin-specification) for details.
* `entry_point`:
The main playbook for the plugin. By default this refers to the main.yml file but can be changed to any other file.
Options:
* `plugin name` under the `subparsers`
Infrared extends its CLI with that name.
It is recommended to use `dash-separated-lowercase-words` for plugin names.
* `include_groups`: list what standard flags should be included to the plugin CLI.
Usually we include “Ansible options” to provide ansible specific options and “Common Options” to get `--extra-vars`, `--output` and `--dry-run`. See [plugins include groups](plugins.html#include-groups) for more information.
* `groups`: the list of option groups. A group gathers several logically connected options.
* `options`: the list of options in a group.
Infrared allows defining different types of options, setting option default values, marking options as required, etc. Check the [plugins option types](plugins.html#complex-option-types) for details.
#### Create main playbook[¶](#create-main-playbook)
Now that the plugin specification is ready, we need to put some business logic into the plugin.
Infrared collects user input from the command line and passes it to ansible by calling the main playbook - the one configured as entry_point in `plugin.spec`.
The main playbook is a regular ansible playbook and can look like:
```
- hosts: localhost
tasks:
- name: debug user variables
debug:
var: other.option1
- name: check bool flag
debug:
msg: "User flag is set"
when: other.flag
```
All the options provided by the user go into the plugin type namespace. Dashes in option names are translated to dots (`.`).
So for `--option1 bar` infrared will create the `other.option1: bar` ansible variable.
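As a concrete illustration (reusing the sample plugin defined above; the values are arbitrary):
```
# hypothetical invocation of the sample plugin
ir simple-plugin --option1 bar --flag yes

# inside main.yml the values are then available as ansible variables:
#   other.option1 == "bar"
#   other.flag == true
```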
#### Push changes to the remote repo[¶](#push-changes-to-the-remote-repo)
Commit all the files:
```
$ git add .
$ git commit -m "Initial commit"
```
Add the URL to the remote repo (for example a GitHub repo) and push all the changes:
```
$ git remote add origin <remote repository>
$ git push origin master
```
#### Add plugin to the infrared[¶](#add-plugin-to-the-infrared)
Now you are ready to install and use your plugin.
Install infrared and add the plugin by providing the URL to your plugin repo:
```
$ ir plugin add <remote repo>
$ ir plugin list
```
This should display the list of plugins and you should have your plugin name there:
```
┌───────────┬────────────────────┐
│ Type │ Name │
├───────────┼────────────────────┤
│ provision │ beaker │
│ │ virsh │
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
├───────────┼────────────────────┤
│ other │ simple-plugin │
│ │ collect-logs │
└───────────┴────────────────────┘
```
#### Run plugin[¶](#run-plugin)
Run the plugin with infrared and check the help message:
```
$ ir simple-plugin --help
```
You should see the user-defined options as well as the common options like `--extra-vars`.
Run the ir command and check the playbook output:
```
$ ir simple-plugin --option1 HW --flag yes
```
### Controlling Node Placement[¶](#controlling-node-placement)
#### Overview[¶](#overview)
The default behavior for the director is to randomly select nodes for each role, usually based on their profile tag.
However, the director provides the ability to define specific node placement. This is a useful method to:
> * Assign specific node IDs
> * Assign custom hostnames
> * Assign specific IP addresses
InfraRed supports this method in the [tripleo-overcloud](tripleo-overcloud.html#controlling-node-placement) plugin.
#### Defining topology and controlling node placement[¶](#defining-topology-and-controlling-node-placement)
The examples below show how to provision several nodes with the [virsh](virsh.html) plugin and then how to use the controlling node placement options during the Overcloud deployment.
##### Topology[¶](#topology)
The topology includes 1 undercloud, 3 controller, 2 compute and 3 ceph nodes:
```
$ ir virsh -vvvv \
--topology-nodes=undercloud:1,controller:3,compute:2,ceph:3 \
--host-address=seal52.qa.lab.tlv.redhat.com \
--host-key ~/.ssh/my-prov-key \
[...]
```
##### Overcloud Install[¶](#overcloud-install)
This step requires the [Undercloud](tripleo-undercloud.html) to be installed, and the tripleo-overcloud introspection and tagging to be done:
```
$ ir tripleo-overcloud -vvvv \
--version=12 \
--deploy=yes \
--deployment-files=virt \
--specific-node-ids yes \
--custom-hostnames ceph-0=storage-0,ceph-1=storage-1,ceph-2=storage-2,compute-0=novacompute-0,compute-1=novacompute-1,controller-0=ctrl-0,controller-1=ctrl-1,controller-2=ctrl-2 \
--predictable-ips yes \
--overcloud-templates ips \
[...]
```
Warning
Currently node IPs need to be provided as a user template with `--overcloud-templates`.
##### InfraRed Inventory[¶](#infrared-inventory)
After the Overcloud install, the InfraRed inventory contains the overcloud nodes with their new hostnames:
```
$ ir workspace node-list
+---+---+---+
| Name | Address | Groups |
+---+---+---+
| undercloud-0 | 172.16.0.5 | tester, undercloud, openstack_nodes |
+---+---+---+
| hypervisor | seal52.qa.lab.tlv.redhat.com | hypervisor, shade |
+---+---+---+
| novacompute-0 | 192.168.24.9 | overcloud_nodes, compute, openstack_nodes |
+---+---+---+
| novacompute-1 | 192.168.24.21 | overcloud_nodes, compute, openstack_nodes |
+---+---+---+
| storage-2 | 192.168.24.16 | overcloud_nodes, ceph, openstack_nodes |
+---+---+---+
| storage-1 | 192.168.24.6 | overcloud_nodes, ceph, openstack_nodes |
+---+---+---+
| storage-0 | 192.168.24.18 | overcloud_nodes, ceph, openstack_nodes |
+---+---+---+
| ctrl-2 | 192.168.24.10 | overcloud_nodes, network, controller, openstack_nodes |
+---+---+---+
| ctrl-0 | 192.168.24.15 | overcloud_nodes, network, controller, openstack_nodes |
+---+---+---+
| ctrl-1 | 192.168.24.14 | overcloud_nodes, network, controller, openstack_nodes |
+---+---+---+
```
### Controller replacement[¶](#controller-replacement)
The OSP Director allows performing a controller replacement procedure.
More details can be found here: <https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/director_installation_and_usage/sect-scaling_the_overcloud#sect-Replacing_Controller_Nodes>
The `cloud-config` plugin automates that procedure. Suppose you already have a deployment with more than one controller.
The first step is to extend the existing deployment with a new controller node. For a virtual deployment the `virsh` plugin can be used:
```
infrared virsh --topology-nodes controller:1 \
--topology-extend True \
--host-address my.hypervisor.address \
--host-key ~/.ssh/id_rsa
```
The next step is to perform the controller replacement procedure using the `cloud-config` plugin:
```
infrared cloud-config --tasks=replace_controller \
--controller-to-remove=controller-0 \
--controller-to-add=controller-3
```
This will replace controller-0 with the newly added controller-3 node. Node indices start from 0.
Currently controller replacement is supported only for OSP13 and above.
#### Advanced parameters[¶](#advanced-parameters)
In case the controller to be replaced cannot be reached by ssh, the `rc_controller_is_reachable` variable should be set to `no`.
This will skip some tasks that should be performed on the controller to be removed:
```
infrared cloud-config --tasks=replace_controller \
--controller-to-remove=controller-0 \
--controller-to-add=controller-3 \
-e rc_controller_is_reachable=no
```
### Standalone deployment[¶](#standalone-deployment)
Infrared allows deploying TripleO OpenStack in standalone mode. This means that all the openstack services will be hosted on one node.
See <https://blueprints.launchpad.net/tripleo/+spec/all-in-one> for details.
To start deployment the `standalone` host should be added to the inventory.
For the virtual deployment, the `virsh` infrared plugin can be used for that:
```
infrared virsh --topology-nodes standalone:1 \
--topology-network 1_net \
--host-address myvirthost.redhat.common \
--host-key ~/.ssh/host-key.pem
```
After that start standalone deployment:
```
ir tripleo-standalone --version 14
```
### In development[¶](#in-development)
#### New Features[¶](#new-features)
* Allow specifying target hosts for the collect-logs plugin.
Now the user can limit the list of servers from which IR should collect logs with the `--hosts` option:
```
infrared collect-logs --hosts undercloud
```
* Added reno tool usage to generate release notes.
Check <https://docs.openstack.org/reno/latest/> for details.
* Some nodes might use multiple disks. This means the director needs to identify the disk to use for the root disk during provisioning.
There are several properties you can use to help the director identify it:
> + model
> + vendor
> + serial
> + size
> + etc
This feature allows configuring the root disk for multi-disk nodes.
Example:
```
--root-disk-override node=compute,hint=size,hintvalue=50
# will set a root disk to be a on a device with 50GB for all compute nodes
--root-disk-override node=controller-1,hint=name,hintvalue=/dev/sdb
# will set a root disk for controller-1 to be /dev/sdb
```
For more info please check official docs at:
<https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/director_installation_and_usage/chap-configuring_basic_overcloud_requirements_with_the_cli_tools#sect-Defining_the_Root_Disk_for_Nodes>
Indices and tables[¶](#indices-and-tables)
---
* [Index](genindex.html)
* [Module Index](py-modindex.html)
* [Search Page](search.html) |
bibmanager | readthedoc | Markdown | bibmanager 1.4.9 documentation
[bibmanager](index.html#document-index)
---
bibmanager[¶](#bibmanager)
===
**The Next Standard in BibTeX Management**
---
| Author: | <NAME> and contributors (see [Contributors](#team)) |
| Contact: | [pcubillos[at]fulbrightmail.org](mailto:pcubillos%40fulbrightmail.org) |
| Organizations: | [Space Research Institute (IWF)](http://iwf.oeaw.ac.at/) |
| Web Site: | <https://github.com/pcubillos/bibmanager> |
| Date: | May 29, 2023 |
Features[¶](#features)
===
`bibmanager` is a command-line based application to facilitate the management of BibTeX entries, allowing the user to:
* Unify all BibTeX entries into a single database
* Automate .bib file generation when compiling a LaTeX project
* Automate duplicate detection and updates from arXiv to peer-reviewed
* Clean up (remove duplicates, ADS update) any external bibfile (since version 1.1.2)
* Keep a database of the entries’ PDFs and fetch PDFs from ADS (since version 1.2)
* Browse interactively through the database (since version 1.3)
* Keep track of the more relevant entries using custom-set tags (since version 1.4)
`bibmanager` also simplifies many other BibTeX-related tasks:
* Add or modify entries into the `bibmanager` database:
+ Merging user’s .bib files
+ Manually adding or editing entries
+ Add entries from ADS bibcodes
* entry adding via your default text editor
* Query entries in the `bibmanager` database by author, year, or title keywords
* Generate .bib files built from your .tex files
* Compile LaTeX projects with the `latex` or `pdflatex` directives
* Perform queries into ADS and add entries by bibcode
* Fetch PDF files from ADS (via their bibcode, new since version 1.2)
Be Kind[¶](#be-kind)
===
If `bibmanager` was useful for your research, please consider acknowledging the effort of the developers of this project. Here’s a BibTeX entry for that:
```
@MISC{Cubillos2020zndoBibmanager,
author = {{Cubillos}, <NAME>.},
title = "{bibmanager: A BibTeX manager for LaTeX projects, Zenodo, doi 10.5281/zenodo.2547042}",
year = 2020,
month = feb,
howpublished = {Zenodo},
eid = {10.5281/zenodo.2547042},
doi = {10.5281/zenodo.2547042},
publisher = {Zenodo},
url = {https://doi.org/10.5281/zenodo.2547042},
adsurl = {https://ui.adsabs.harvard.edu/abs/2020zndo...2547042C},
adsnote = {Provided by the SAO/NASA Astrophysics Data System},
}
```
Note
Did you know that [<NAME>](https://github.com/AaronDavidSchneider) built this totally amazing bibmanager graphic interface?
This extension lets you quickly browse through your database,
retrieve metadata (title, date, tags), open in ADS or PDF
(download if needed), or just copy things to the clipboard.
**I’ve tried it and I can only recommend checking it out!**
This is implemented via [Raycast](https://www.raycast.com/aaronschneider/bibmanager), which is available for Mac OS X users. To install Raycast and bibmanager extension check [these simple instructions](index.html#raycast).
Check out this video tutorial to get started with `bibmanager`:
And this one covering some other features:
Contributors[¶](#contributors)
===
`bibmanager` was created and is maintained by
[<NAME>](https://github.com/pcubillos/) ([pcubillos[at]fulbrightmail.org](mailto:pcubillos%40fulbrightmail.org)).
These people have directly contributed to make the software better:
* [<NAME>](https://github.com/michaelaye)
* [<NAME>](https://github.com/1313e)
* [<NAME>](https://github.com/AaronDavidSchneider)
Documentation[¶](#documentation)
===
Getting Started[¶](#getting-started)
---
`bibmanager` offers command-line tools to automate the management of BibTeX entries for LaTeX projects.
`bibmanager` places all of the user’s bibtex entries in a centralized database, which is beneficial because it allows `bibmanager` to automate duplicates detection, arxiv-to-peer review updates, and generate bibfiles with only the entries required for a specific LaTeX file.
There are four main categories for the `bibmanager` tools:
* [BibTeX Management](index.html#bibtex) tools help to create, edit, browse, and query from a
`bibmanager` database, containing all BibTeX entries that a user may need.
* [LaTeX Management](index.html#latex) tools help to generate (automatically) a bib file for specific LaTeX files, and compile LaTeX files without worrying about maintaining/updating their bib files.
* [ADS Management](index.html#ads) tools help to make queries into ADS, add entries from ADS, and cross-check the `bibmanager` database against ADS, to update arXiv-to-peer reviewed entries.
* [PDF Management](index.html#pdf) tools help to maintain a database of the PDF files associated to the BibTex entries: Fetch from ADS, set manually, and open in a PDF viewer.
Once installed (see below), take a look at the `bibmanager` main menu by executing the following command:
```
# Display bibmanager main help menu:
bibm -h
```
From there, take a look at the sub-command helps or the rest of these docs for further details, or see the [Quick Example](#qexample) for an introductory worked example.
### System Requirements[¶](#system-requirements)
`bibmanager` is compatible with Python3.6+ and has been [tested](https://travis-ci.com/pcubillos/bibmanager) to work in both Linux and OS X, with the following software:
* numpy (version 1.15.1+)
* requests (version 2.19.1+)
* packaging (version 17.1+)
* prompt_toolkit (version 3.0.5+)
* pygments (version 2.2.0+)
### Install[¶](#install)
To install `bibmanager` run the following command from the terminal:
```
pip install bibmanager
```
Or if you prefer conda:
```
conda install -c conda-forge bibmanager
```
Alternatively (e.g., for developers), clone the repository to your local machine with the following terminal commands:
```
git clone https://github.com/pcubillos/bibmanager
cd bibmanager
python setup.py develop
```
Note
To enable the ADS functionality, first you need to obtain an [ADS token](https://github.com/adsabs/adsabs-dev-api#access), and set it into the `ads_token` config parameter. To do this:
1. Create an account and login into the new [ADS system](https://ui.adsabs.harvard.edu/?bbbRedirect=1#user/account/login).
2. Get your token (or generate a new one) from [here](https://ui.adsabs.harvard.edu/#user/settings/token).
3. Set the `ads_token` bibmanager parameter:
```
# Set ads_token to 'my_ads_token':
bibm config ads_token my_ads_token
```
### Quick Example[¶](#quick-example)
Adding your BibTeX file into `bibmanager` is as simple as one command:
```
# Add this sample bibfile into the bibmanager database:
bibm merge ~/.bibmanager/examples/sample.bib
```
Compiling a LaTeX file that uses those BibTeX entries is equally simple:
```
# Compile your LaTeX project:
bibm latex ~/.bibmanager/examples/sample.tex
```
This command produced a BibTeX file according to the citations in sample.tex; then executed latex, bibtex, latex, latex; and finally produced a pdf file out of it. You can see the results in ~/.bibmanager/examples/sample.pdf.
As long as the citation keys are in the `bibmanager` database, you won’t need to worry about maintaining a bibfile anymore. The next sections will show all of the capabilities that `bibmanager` offers.
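If your project is meant to be compiled with pdflatex rather than latex, the equivalent call would look like this (a sketch; the `bibm pdflatex` sub-command is documented in the LaTeX Management section of the full docs):
```
# Compile your LaTeX project with pdflatex instead of latex:
bibm pdflatex ~/.bibmanager/examples/sample.tex
```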
BibTeX Management[¶](#bibtex-management)
---
### reset[¶](#reset)
Reset the bibmanager database.
**Usage**
```
bibm reset [-h] [-d | -c] [bibfile]
```
This command resets the bibmanager database from scratch.
It creates a .bibmanager/ folder in the user folder (if it does not exist already), and it resets the bibmanager configuration to its default values.
If the user provides the `bibfile` argument, this command will populate the database with the entries from that file; otherwise,
it will set an empty database.
Note that this will overwrite any pre-existing database. In principle the user should not execute this command more than once on a given machine.
**Options**
**bibfile**
Path to an existing BibTeX file.
**-d, --database**
Reset only the bibmanager database.
**-c, --config**
Reset only the bibmanager config parameters.
**-h, --help**
Show this help message and exit.
**Examples**
```
# Reset bibmanager database from scratch:
bibm reset
# Reset, including entries from a BibTeX file:
bibm reset my_file.bib
# Reset only the database (keep config parameters):
bibm reset my_file.bib -d
# Reset only the config parameters (keep database):
bibm reset -c
```
---
### merge[¶](#merge)
Merge a BibTeX file into the bibmanager database.
**Usage**
```
bibm merge [-h] bibfile [take]
```
**Description**
This command merges the content from an input BibTeX file with the bibmanager database.
The optional ‘take’ argument defines the protocol for possible-duplicate entries. Either take the ‘old’ entry (database), take the ‘new’ entry (bibfile), or ‘ask’ the user through the prompt
(displaying the alternatives). bibmanager considers four fields to check for duplicates: doi, isbn, bibcode, and eprint.
Additionally, bibmanager considers two more cases (always asking):
(1) new entry has duplicate key but different content, and
(2) new entry has duplicate title but different key.
**Options**
**bibfile**
Path to an existing BibTeX file.
**take**
Decision protocol for duplicates (choose: {old, new, ask}, default: old)
**-h, --help**
Show this help message and exit.
**Examples**
```
# Merge BibTeX file ignoring duplicates (unless they update from arXiv to peer-reviewed):
bibm merge my_file.bib
# Merge BibTeX file ovewriting entries if they are duplicates:
bibm merge my_file.bib new
# Merge BibTeX file asking the user which to take for each duplicate:
bibm merge my_file.bib ask
```
---
### edit[¶](#edit)
Edit the bibmanager database in a text editor.
**Usage**
```
bibm edit [-h]
```
**Description**
This command lets you manually edit the bibmanager database
in your pre-defined text editor. Once finished editing, save and close the text editor, and press ENTER in the terminal to incorporate the edits (edits after continuing on the terminal won’t count).
bibmanager selects the OS default text editor. But the user can set a preferred editor, see ‘bibm config -h’ for more information.
**Options**
**-h, --help**
Show this help message and exit.
**Examples**
```
# Launch text editor on the bibmanager BibTeX database:
bibm edit
```
#### Meta-Information[¶](#meta-information)
*(New since Version 1.2)*
`bibmanager` allows the user to add meta-information to the entries (info that is not contained in the BibTeX itself). This meta-info can be set while editing the database with the `bibm edit`
command, by writing it before an entry.
There are currently two meta-parameters:
* The *freeze* meta-parameter is a flag that freezes an entry, preventing it to be modified when running [ads-update](index.html#ads-update).
* The *pdf* meta-parameter links a PDF file to the entry. To do this,
type ‘*pdf:*’ followed by the path to a PDF file. If the PDF file is already in the *home/pdf* folder (see [config](#config)), there’s no need to specify the path to the file. Alternatively, see the commands in [PDF Management](index.html#pdf).
* The *tags* meta-parameter enables setting user-defined tags for grouping and searching entries *(New since Version 1.4)*
Below is an example that freezes an entry and links a PDF file to it:
```
This file was created by bibmanager https://pcubillos.github.io/bibmanager/
...
freeze
pdf: /home/user/Downloads/Rubin1980.pdf
@ARTICLE{1980ApJ...238..471R,
author = {{<NAME>. and {<NAME>. and {<NAME>.},
title = "{Rotational properties of 21 SC galaxies with a large range of luminosities and radii, from NGC 4605 (R=4kpc) to UGC 2885 (R=122kpc).}",
journal = {\apj},
year = "1980",
month = "Jun",
volume = {238},
pages = {471-487},
doi = {10.1086/158003},
adsurl = {https://ui.adsabs.harvard.edu/abs/1980ApJ...238..471R},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
...
```
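Tags could be attached in the same place. The snippet below is a sketch only: the one-line `tags:` syntax is an assumption modeled on the pdf line above (the interactive `bibm tag` command described later is the documented way to manage tags):
```
freeze
pdf: /home/user/Downloads/Rubin1980.pdf
tags: galaxies history
@ARTICLE{1980ApJ...238..471R,
...
```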
---
### add[¶](#add)
Add entries into the bibmanager database.
**Usage**
```
bibm add [-h] [take]
```
**Description**
This command allows the user to manually add BibTeX entries into the bibmanager database through the terminal prompt.
The optional ‘take’ argument defines the protocol for possible-duplicate entries. Either take the ‘old’ entry (database), take the ‘new’ entry (bibfile), or ‘ask’ the user through the prompt
(displaying the alternatives). bibmanager considers four fields to check for duplicates: doi, isbn, bibcode, and eprint.
Additionally, bibmanager considers two more cases (always asking):
(1) new entry has duplicate key but different content, and
(2) new entry has duplicate title but different key.
**Options**
**take**
Decision protocol for duplicates (choose: {old, new, ask}, default: new)
**-h, --help**
Show this help message and exit.
**Examples**
```
# Start multi-line prompt session to enter one or more BibTeX entries:
bibm add
```
---
### tag[¶](#tag)
Add or remove tags to entries in the database.
**Usage**
```
bibm tag [-h] [-d] [-v VERB]
```
**Description**
This command adds or removes user-defined tags to specified entries in the Bibmanager database, which can then be used for grouping and searches. The tags are case sensitive and should not contain blank spaces.
*(New since version 1.4)*
Additionally, if the user only sets tags (but no entries), this command will display the existing entries that contain those tags.
There are five levels of verbosity:
verb < 0: Display only the keys of the entries
verb = 0: Display the title, year, first author, and key
verb = 1: Display additionally the ADS/arXiv urls and meta info
verb = 2: Display additionally the full list of authors
verb > 2: Display the full BibTeX entries
**Options**
**-h, --help**
Show this help message and exit.
**-d, --delete**
Delete tags instead of add.
**-v VERB, --verb VERB**
Verbosity level if used to display entries.
**Examples**
```
# Add a tag to an entry:
bibm tag
(Syntax is: KEY_OR_BIBCODE KEY_OR_BIBCODE2 ... tags: TAG TAG2 ...)
Hunter2007ieeeMatplotlib tag: python
# Add multiple tags to multiple entries:
bibm tag
(Syntax is: KEY_OR_BIBCODE KEY_OR_BIBCODE2 ... tags: TAG TAG2 ...)
1913LowOB...2...56S 1918ApJ....48..154S tags: galaxies history
# Remove tags:
bibm tag -d
(Syntax is: KEY_OR_BIBCODE KEY_OR_BIBCODE2 ... tags: TAG TAG2 ...)
Slipher1913lobAndromedaRarialVelocity tags: galaxies
# Display all entries that contain the 'galaxies' tag:
bibm tag
(Syntax is: KEY_OR_BIBCODE KEY_OR_BIBCODE2 ... tags: TAG TAG2 ...)
tags: galaxies
```
---
### search[¶](#search)
Search entries in the bibmanager database.
**Usage**
```
bibm search [-h] [-v VERB]
```
**Description**
This command will trigger a prompt where the user can search for entries in the bibmanager database by authors, years, title keywords,
BibTeX key, or ADS bibcode. The matching results are displayed on screen according to the specified verbosity.
Search syntax is similar to ADS searches (including tab completion).
Multiple author, title keyword, and year queries act with AND logic;
whereas multiple-key queries and multiple-bibcode queries act with OR logic (see examples below).
There are five levels of verbosity:
verb < 0: Display only the keys of the entries
verb = 0: Display the title, year, first author, and key
verb = 1: Display additionally the ADS/arXiv urls and meta info
verb = 2: Display additionally the full list of authors
verb > 2: Display the full BibTeX entries
Note
1. There’s no need to worry about case in author names, unless they conflict with the BibTeX format rules:
<http://mirror.easyname.at/ctan/info/bibtex/tamethebeast/ttb_en.pdf>, p.23.
For example, *author:”oliphant, t”* will match *‘<NAME>’*
(because there is no ambiguity in first-von-last names), but
*author:”<NAME>”* won’t match, because the lowercase *‘travis’*
will be interpreted as the von part of the last name.
2. Title words/phrase searches are case-insensitive.
**Options**
**-v VERB, --verb VERB**
Set output verbosity.
**-h, --help**
Show this help message and exit.
**Examples**
Note
These examples below assume that you merged the sample bibfile already, i.e.: `bibm merge ~/.bibmanager/examples/sample.bib`
Searches follow the ADS search syntax. Pressing *tab* displays the search fields:
The tab-completion also displays extra information at the bottom when navigating through some options.
Name examples:
```
# Search by last name (press tab to prompt the autocompleter):
bibm search
(Press 'tab' for autocomplete)
author:"oliphant"
Title: Array programming with NumPy, 2020
Authors: {Harris}, <NAME>.; et al.
key: HarrisEtal2020natNumpy

Title: SciPy 1.0: fundamental algorithms for scientific computing in Python, 2020
Authors: {Virtanen}, Pauli; et al.
key: VirtanenEtal2020natmeScipy
```
```
# Search by last name and initials (note blanks require one to use quotes):
bibm search
(Press 'tab' for autocomplete)
author:"oliphant, t"
Title: Array programming with NumPy, 2020
Authors: {Harris}, <NAME>.; et al.
key: HarrisEtal2020natNumpy

Title: SciPy 1.0: fundamental algorithms for scientific computing in Python, 2020
Authors: {Virtanen}, Pauli; et al.
key: VirtanenEtal2020natmeScipy
```
```
# Search by first-author only:
bibm search author:"^Harris"
Title: Array programming with NumPy, 2020
Authors: {Harris}, <NAME>.; et al.
key: HarrisEtal2020natNumpy
```
```
# Search multiple authors (using AND logic):
bibm search
(Press 'tab' for autocomplete)
author:"harris" author:"virtanen"
Title: Array programming with NumPy, 2020
Authors: {Harris}, <NAME>.; et al.
key: HarrisEtal2020natNumpy

Title: SciPy 1.0: fundamental algorithms for scientific computing in Python, 2020
Authors: {Virtanen}, Pauli; et al.
key: VirtanenEtal2020natmeScipy
```
Combine search fields:
```
# Search by author, year, and title words/phrases (using AND logic):
bibm search
(Press 'tab' for autocomplete)
author:"oliphant, t" title:"numpy"
Title: Array programming with NumPy, 2020
Authors: {Harris}, <NAME>.; et al.
key: HarrisEtal2020natNumpy
```
```
# Search multiple words/phrases in title (using AND logic):
bibm search
(Press 'tab' for autocomplete)
title:"HD 209458b" title:"atmospheric circulation"
Title: Atmospheric Circulation of Hot Jupiters: Coupled Radiative-Dynamical
General Circulation Model Simulations of HD 189733b and HD 209458b, 2009
Authors: {Showman}, <NAME>.; et al.
key: ShowmanEtal2009apjRadGCM
```
Year examples:
```
# Search on specific year:
bibm search
(Press 'tab' for autocomplete)
year: 1913
Title: The radial velocity of the Andromeda Nebula, 1913
Authors: {Slipher}, <NAME>.
key: Slipher1913lobAndromedaRarialVelocity
```
```
# Search anything between the specified years (inclusive):
bibm search
(Press 'tab' for autocomplete)
year:2013-2016
Title: Novae in the Spiral Nebulae and the Island Universe Theory, 1917
Authors: {Curtis}, <NAME>.
key: Curtis1917paspIslandUniverseTheory

Title: The radial velocity of the Andromeda Nebula, 1913
Authors: {Slipher}, <NAME>.
key: Slipher1913lobAndromedaRarialVelocity
```
```
# Search anything up to the specified year (note this syntax is not available on ADS):
bibm search
(Press 'tab' for autocomplete)
year: -1917
Title: Novae in the Spiral Nebulae and the Island Universe Theory, 1917
Authors: {Curtis}, <NAME>.
key: Curtis1917paspIslandUniverseTheory

Title: The radial velocity of the Andromeda Nebula, 1913
Authors: {Slipher}, <NAME>.
key: Slipher1913lobAndromedaRarialVelocity
```
```
# Search anything since the specified year:
bibm search
(Press 'tab' for autocomplete)
author:"oliphant, t" year: 2020-
Title: Array programming with NumPy, 2020
Authors: {Harris}, <NAME>.; et al.
key: HarrisEtal2020natNumpy

Title: SciPy 1.0: fundamental algorithms for scientific computing in Python, 2020
Authors: {Virtanen}, Pauli; et al.
key: VirtanenEtal2020natmeScipy
```
ADS bibcode examples (same applies to searches by key):
```
# Search by bibcode:
bibm search
(Press 'tab' for autocomplete)
bibcode:2013A&A...558A..33A
Title: Astropy: A community Python package for astronomy, 2013
Authors: {Astropy Collaboration}; et al.
key: Astropycollab2013aaAstropy
# UTF-8 encoding also works just fine:
bibm search
(Press 'tab' for autocomplete)
bibcode:2013A%26A...558A..33A
Title: Astropy: A community Python package for astronomy, 2013
Authors: {Astropy Collaboration}; et al.
key: Astropycollab2013aaAstropy
```
Search multiple keys (same applies to multiple-bibcodes searches):
```
# Search multiple keys at once (using OR logic):
bibm search
(Press 'tab' for autocomplete)
key:Curtis1917paspIslandUniverseTheory key:Shapley1918apjDistanceGlobularClusters
Title: Novae in the Spiral Nebulae and the Island Universe Theory, 1917
Authors: {Curtis}, <NAME>.
key: Curtis1917paspIslandUniverseTheory

Title: Studies based on the colors and magnitudes in stellar clusters. VII.
The distances, distribution in space, and dimensions of 69 globular
clusters., 1918
Authors: {Shapley}, H.
key: Shapley1918apjDistanceGlobularClusters
```
Use the `-v VERB` command to set the verbosity:
```
# Display only the keys:
bibm search -v -1
(Press 'tab' for autocomplete)
year: 1910-1920
Keys:
Curtis1917paspIslandUniverseTheory Shapley1918apjDistanceGlobularClusters
Slipher1913lobAndromedaRarialVelocity
```
```
# Display title, year, first author, and all keys/urls:
bibm search -v 1
(Press 'tab' for autocomplete)
author:"<NAME>"
Title: Synthesis of the Elements in Stars, 1957
Authors: {Burbidge}, <NAME>; et al.
bibcode: 1957RvMP...29..547B
ADS url: https://ui.adsabs.harvard.edu/abs/1957RvMP...29..547B
key: BurbidgeEtal1957rvmpStellarElementSynthesis
```
```
# Display title, year, full author list, URLs, and meta info:
bibm search -v 2
(Press 'tab' for autocomplete)
author:"<NAME>"
Title: Synthesis of the Elements in Stars, 1957
Authors: {Burbidge}, <NAME>; {Burbidge}, <NAME>.; {Fowler}, <NAME>.; and
{<NAME>.
bibcode: 1957RvMP...29..547B
ADS url: https://ui.adsabs.harvard.edu/abs/1957RvMP...29..547B
key: BurbidgeEtal1957rvmpStellarElementSynthesis
```
```
# Display full BibTeX entry:
bibm search -v 3
(Press 'tab' for autocomplete)
author:"<NAME>"
@ARTICLE{BurbidgeEtal1957rvmpStellarElementSynthesis,
author = {{<NAME> and {<NAME>. and {Fowler}, <NAME>.
and {Hoyle}, F.},
title = "{Synthesis of the Elements in Stars}",
journal = {Reviews of Modern Physics},
year = 1957,
month = Jan,
volume = {29},
pages = {547-650},
doi = {10.1103/RevModPhys.29.547},
adsurl = {https://ui.adsabs.harvard.edu/abs/1957RvMP...29..547B},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
```
---
### browse[¶](#browse)
Browse through the bibmanager database.
*(New since version 1.3)*
**Usage**
```
bibm browse [-h]
```
**Description**
Display the entire bibmanager database in an interactive full-screen application that lets you:
- Navigate through or search for specific entries
- Visualize the entries’ full BibTeX content
- Select entries for printing to screen or to file
- Open the entries’ PDF files
- Open the entries in ADS through the web browser
- Select sub-group of entries by tags *(New since version 1.4)*
**Options**
**-h, --help**
Show this help message and exit.
**Examples**
```
bibm browse
```
---
### export[¶](#export)
Export the bibmanager database into a bib file.
**Usage**
```
bibm export [-h] bibfile
```
**Description**
Export the entire bibmanager database into a bibliography file to a
.bib or .bbl format according to the file extension of the
‘bibfile’ argument.
Caution
For the moment, only export to .bib.
**Options**
**bibfile**
Path to an output BibTeX file.
**-h, --help**
Show this help message and exit.
**-meta**
Also include meta-information in output file.
**Examples**
```
bibm export my_file.bib
```
---
### cleanup[¶](#cleanup)
Clean up a bibtex or latex file of duplicates and outdated entries.
**Usage**
```
bibm cleanup [-h] [-ads] bibfile
```
**Description**
Clean up a BibTeX (.bib) or LaTeX (.tex) file by removing duplicates,
sorting the entries,
and (if requested) updating the entries by cross-checking against the ADS database. All of this is done independently of the
`bibmanager` database. The original files are preserved by renaming them with the prefix ‘*orig_yyyy_mm_dd_*’ (using the corresponding date).
*(New since version 1.1.2)*
**Options**
**bibfile**
Path to an existing .tex or .bib file.
*(New since version 1.4.9 this can also update .tex files)*
**-ads**
Update the bibfile entries cross-checking against the ADS database.
**-h, --help**
Show this help message and exit.
**Examples**
```
# Remove duplicates and sort:
bibm cleanup file.bib
# Remove duplicates, update ADS entries, and sort:
bibm cleanup file.bib -ads
# Remove duplicates, update ADS entries, and sort a .tex file
# (and also its .bib file and other referenced .tex files in the main .tex file)
bibm cleanup file.tex -ads
```
---
### config[¶](#config)
Manage the bibmanager configuration parameters.
**Usage**
```
bibm config [-h] [param] [value]
```
**Description**
This command displays or sets the value of bibmanager config parameters.
These are the parameters that can be set by the user:
* The `style` parameter sets the color-syntax style of displayed BibTeX entries. The default style is ‘autumn’.
See <http://pygments.org/demo/6780986/> for a demo of the style options.
The available options are:
> default, emacs, friendly, colorful, autumn, murphy, manni, monokai, perldoc,
> pastie, borland, trac, native, fruity, bw, vim, vs, tango, rrt, xcode, igor,
> paraiso-light, paraiso-dark, lovelace, algol, algol_nu, arduino,
> rainbow_dash, abap
>
* The `text_editor` parameter sets the text editor to use when editing the bibmanager database manually (i.e., a call to: bibm edit). By default, bibmanager uses the OS-default text editor.
Typical text editors are: emacs, vim, gedit.
To set the OS-default editor, set text_editor to *‘default’*.
Note that aliases defined in the .bash file are not accessible.
* The `paper` parameter sets the default paper format for latex compilation outputs (not for pdflatex, which is automatic).
Typical options are ‘letter’ (e.g., for ApJ articles) or ‘A4’ (e.g., for A&A).
* The `ads_token` parameter sets the ADS token required for ADS requests.
To obtain a token, follow the steps described here: <https://github.com/adsabs/adsabs-dev-api#access>
* The `ads_display` parameter sets the number of entries to show at a time,
for an ADS search query. The default number of entries to display is 20.
* The `home` parameter sets the `bibmanager` home directory (this could be very handy, e.g., by placing the database in a Dropbox folder to share the same database across multiple machines).
The number of arguments determines the action of this command (see examples below):
* with no arguments, display all available parameters and values.
* with the ‘param’ argument, display detailed info on the specified parameter and its current value.
* with both ‘param’ and ‘value’ arguments, set the value of the parameter.
**Options**
**param**
A bibmanager config parameter.
**value**
Value for a bibmanager config parameter.
**-h, --help**
Show this help message and exit.
**Examples**
```
# Display all config parameters and values:
bibm config
bibmanager configuration file:
PARAMETER VALUE
--- ---
style        autumn
text_editor  default
paper        letter
ads_token    None
ads_display  20
home         /home/user/.bibmanager/
```
```
# Display value and help for the ads_token parameter:
bibm config ads_token
The 'ads_token' parameter sets the ADS token required for ADS requests.
To obtain a token follow the two steps described here:
https://github.com/adsabs/adsabs-dev-api#access
The current ADS token is 'None'
```
```
# Set the value of the BibTeX color-syntax:
bibm config style autumn
style updated to: autumn.
```
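As mentioned above, the `home` parameter can point at a synced folder to share the database across machines; a minimal sketch (the Dropbox path is hypothetical):
```
# Set the bibmanager home directory to a synced folder:
bibm config home ~/Dropbox/bibmanager
```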
LaTeX Management[¶](#latex-management)
---
### bibtex[¶](#bibtex)
Generate a bibtex file from a tex file.
**Usage**
```
bibm bibtex [-h] texfile [bibfile]
```
**Description**
This command generates a BibTeX file by searching for the citation keys in the input LaTeX file, and stores the output into a BibTeX file
named after the argument of the \bibliography{bib_file} call in the LaTeX file. Alternatively, the user can specify the name of the output BibTeX file with the `bibfile` argument.
Any citation key not found in the bibmanager database will be shown on the screen prompt.
**Options**
**texfile**
Path to an existing LaTeX file.
**bibfile**
Path to an output BibTeX file.
**-h, --help**
Show this help message and exit.
**Examples**
```
# Generate a BibTeX file with references cited in my_file.tex:
bibm bibtex my_file.tex
# Generate a BibTeX file with references cited in my_file.tex,
# naming the output file 'this_file.bib':
bibm bibtex my_file.tex this_file.bib
```
---
### latex[¶](#id1)
Compile a LaTeX file with the latex command.
**Usage**
```
bibm latex [-h] texfile [paper]
```
**Description**
This command compiles a LaTeX file using the latex command,
executing the following calls:
* Compute a BibTex file out of the citation calls in the .tex file.
* Remove all outputs from previous compilations.
* Call latex, bibtex, latex, latex to produce a .dvi file.
* Call dvips and ps2pdf to produce the final .pdf file.
Prefer this command over the `bibm pdflatex` command when the LaTeX file contains .ps or .eps figures (as opposed to .pdf, .png, or .jpeg).
Note that the user does not necessarily need to be in the dir where the LaTeX files are.
**Options**
**texfile**
Path to an existing LaTeX file.
**paper**
Paper format, e.g., letter or A4 (default=letter).
**-h, --help**
Show this help message and exit.
**Examples**
```
# Compile a LaTeX project:
bibm latex my_file.tex
# File extension can be omitted:
bibm latex my_file
# Compile, explicitly setting the output paper format:
bibm latex my_file A4
```
---
### pdflatex[¶](#pdflatex)
Compile a LaTeX file with the pdflatex command.
**Usage**
```
bibm pdflatex [-h] texfile
```
**Description**
This command compiles a LaTeX file using the pdflatex command,
executing the following calls:
* Compute a BibTeX file out of the citation calls in the LaTeX file.
* Remove all outputs from previous compilations.
* Call pdflatex, bibtex, pdflatex, pdflatex to produce a .pdf file.
Prefer this command over the `bibm latex` command when the LaTeX file contains .pdf, .png, or .jpeg figures (as opposed to .ps or .eps).
Note that the user does not necessarily need to be in the dir where the LaTeX files are.
**Options**
**texfile**
Path to an existing LaTeX file.
**-h, --help**
Show this help message and exit.
**Examples**
```
# Compile a LaTeX project:
bibm pdflatex my_file.tex
# File extension can be omitted:
bibm pdflatex my_file
```
ADS Management[¶](#ads-management)
---
Note
To enable the ADS functionality, first you need to obtain an ADS token [[1]](#adstoken), and set it into the `ads_token` config parameter. To do this:
1. Create an account and login into the new [ADS system](https://ui.adsabs.harvard.edu/?bbbRedirect=1#user/account/login).
2. Get your token (or generate a new one) from [here](https://ui.adsabs.harvard.edu/#user/settings/token).
3. Set the `ads_token` bibmanager parameter:
```
# Set ads_token to 'my_ads_token':
bibm config ads_token my_ads_token
```
---
### ads-search[¶](#ads-search)
Do a query on ADS.
**Usage**
```
bibm ads-search [-h] [-n] [-a] [-f] [-o]
```
**Description**
This command enables ADS queries. The query syntax is identical to a query in the new ADS’s one-box search engine:
<https://ui.adsabs.harvard.edu>.
Detailed documentation for ADS searches can be found here:
<https://adsabs.github.io/help/search/search-syntax>
See below for typical query examples.
If you set the `-a/--add` flag, the code will prompt to add entries to the database right after showing the ADS search results.
Similarly, set the `-f/--fetch` or `-o/--open` flags to prompt to fetch or open PDF files right after showing the ADS search results. Note that you can combine these to add and fetch/open at the same time (e.g., `bibm ads-search -a -o`), or you can fetch/open PDFs that are not in the database (e.g., `bibm ads-search -o`).
*(New since version 1.2.7)*
Note
Note that a query will display at most ‘ads_display’ entries on screen at once (see `bibm config ads_display`). If a query matches more entries, the user can execute `bibm ads-search -n`
to display the next set of entries.
Caution
When making an ADS query, note that ADS requires the field values (when necessary) to use double quotes.
For example: author:”^<NAME>”.
**Options**
**-n, --next**
Display next set of entries that matched the previous query.
**-a, --add**
Query to add an entry after displaying the search results.
*(New since version 1.2.7)*
**-f, --fetch**
Query to fetch a PDF after displaying the search results.
*(New since version 1.2.7)*
**-o, --open**
Query to fetch/open a PDF after displaying the search results.
*(New since version 1.2.7)*
**-h, --help**
Show this help message and exit.
**Examples**
```
# Search entries for given author (press tab to prompt the autocompleter):
bibm ads-search
(Press 'tab' for autocomplete)
author:"^<NAME>"
Title: Exploring A Photospheric Radius Correction to Model Secondary Eclipse
Spectra for Transiting Exoplanets Authors: Fortney, <NAME>.; et al.
adsurl: https://ui.adsabs.harvard.edu/abs/2019arXiv190400025F bibcode: 2019arXiv190400025F
Title: Laboratory Needs for Exoplanet Climate Modeling Authors: <NAME>.; et al.
adsurl: https://ui.adsabs.harvard.edu/abs/2018LPICo2065.2068F bibcode: 2018LPICo2065.2068F
...
Showing entries 1--20 out of 74 matches. To show the next set, execute:
bibm ads-search -n
```
Basic author search examples:
```
# Search by author in article:
bibm ads-search
(Press 'tab' for autocomplete)
author:"<NAME>"
# Search by first author:
bibm ads-search
(Press 'tab' for autocomplete)
author:"^<NAME>"
# Search multiple authors:
bibm ads-search
(Press 'tab' for autocomplete)
author:("<NAME>" AND "<NAME>")
```
Search combining multiple fields:
```
# Search by author AND year:
bibm ads-search
(Press 'tab' for autocomplete)
author:"<NAME>" year:2010
# Search by author AND year range:
bibm ads-search
(Press 'tab' for autocomplete)
author:"<NAME>" year:2010-2019
# Search by author AND words/phrases in title:
bibm ads-search
(Press 'tab' for autocomplete)
author:"<NAME>" title:Spitzer
# Search by author AND words/phrases in abstract:
bibm ads-search
(Press 'tab' for autocomplete)
author:"<NAME>" abs:"HD 209458b"
```
Restrict searches to articles or peer-reviewed articles:
```
# Search by author AND request only articles:
bibm ads-search
(Press 'tab' for autocomplete)
author:"<NAME>" property:article
# Search by author AND request only peer-reviewed articles:
bibm ads-search
(Press 'tab' for autocomplete)
author:"<NAME>" property:refereed
```
Add entries and fetch/open PDFs right after the ADS search:
```
# Search and prompt to open a PDF right after (fetched PDF is not stored in database):
bibm ads-search -o
(Press 'tab' for autocomplete)
author:"^<NAME>" property:refereed year:2015-2019
Title: Exploring a Photospheric Radius Correction to Model Secondary Eclipse
Spectra for Transiting Exoplanets Authors: <NAME>.; et al.
adsurl: https://ui.adsabs.harvard.edu/abs/2019ApJ...880L..16F bibcode: 2019ApJ...880L..16F
...
Fetch/open entry from ADS:
Syntax is: key: KEY_VALUE FILENAME
or: bibcode: BIBCODE_VALUE FILENAME
bibcode: 2019ApJ...880L..16F Fortney2019.pdf
```
```
# Search and prompt to add entry to database right after:
bibm ads-search -a
(Press 'tab' for autocomplete)
author:"^<NAME>" property:refereed year:2015-2019
Title: Exploring a Photospheric Radius Correction to Model Secondary Eclipse
Spectra for Transiting Exoplanets Authors: <NAME>.; et al.
adsurl: https://ui.adsabs.harvard.edu/abs/2019ApJ...880L..16F bibcode: 2019ApJ...880L..16F
...
Add entry from ADS:
Enter pairs of ADS bibcodes and BibTeX keys, one pair per line separated by blanks (press META+ENTER or ESCAPE ENTER when done):
2019ApJ...880L..16F FortneyEtal2019apjPhotosphericRadius
```
```
# Search and prompt to add entry and fetch/open its PDF right after:
bibm ads-search -a -f
(Press 'tab' for autocomplete)
author:"^<NAME>" property:refereed year:2015-2019
Title: Exploring a Photospheric Radius Correction to Model Secondary Eclipse
Spectra for Transiting Exoplanets Authors: Fortney, <NAME>.; et al.
adsurl: https://ui.adsabs.harvard.edu/abs/2019ApJ...880L..16F bibcode: 2019ApJ...880L..16F
...
Add entry from ADS:
Enter pairs of ADS bibcodes and BibTeX keys, one pair per line separated by blanks (press META+ENTER or ESCAPE ENTER when done):
2019ApJ...880L..16F FortneyEtal2019apjPhotosphericRadius
```
---
### ads-add[¶](#ads-add)
Add entries from ADS by bibcode into the bibmanager database.
**Usage**
```
bibm ads-add [-h] [-f] [-o] [bibcode key] [tag1 [tag2 ...]]
```
**Description**
This command adds BibTeX entries from ADS by specifying pairs of ADS bibcodes and BibTeX keys.
Executing this command without arguments (i.e., `bibm ads-add`) launches an interactive prompt session allowing the user to enter multiple bibcode, key pairs.
By default, added entries replace previously existing entries in the bibmanager database.
With the optional arguments `-f/--fetch` or `-o/--open`, the code will attempt to fetch or fetch/open (respectively) the associated PDF files of the added entries.
*(New since version 1.2.7)*
Either at `bibm ads-add` or later via the prompt you can specify tags for the entries to be added.
*(New since version 1.4)*
**Options**
**bibcode**
The ADS bibcode of an entry.
**key**
BibTeX key to assign to the entry.
**tags**
Optional BibTeX tags to assign to the entries.
*(New since version 1.4)*
**-f, --fetch**
Fetch the PDF of the added entries.
*(New since version 1.2.7)*
**-o, --open**
Fetch and open the PDF of the added entries.
*(New since version 1.2.7)*
**-h, --help**
Show this help message and exit.
**Examples**
```
# Let's search and add the greatest astronomy PhD thesis of all times:
bibm ads-search
(Press 'tab' for autocomplete)
author:"^payne, cecilia" doctype:phdthesis
Title: Stellar Atmospheres; a Contribution to the Observational Study of High
Temperature in the Reversing Layers of Stars.
Authors: Payne, <NAME>
adsurl: https://ui.adsabs.harvard.edu/abs/1925PhDT.........1P bibcode: 1925PhDT.........1P
```
```
# Add the entry to the bibmanager database:
bibm ads-add 1925PhDT.........1P Payne1925phdStellarAtmospheres
```
The user can optionally assign tags or request to fetch/open PDFs:
```
# Add the entry and assign a 'stars' tag to it:
bibm ads-add 1925PhDT.........1P Payne1925phdStellarAtmospheres stars
# Add the entry and fetch its PDF:
bibm ads-add -f 1925PhDT.........1P Payne1925phdStellarAtmospheres
# Add the entry and fetch/open its PDF:
bibm ads-add -o 1925PhDT.........1P Payne1925phdStellarAtmospheres
```
Alternatively, the call can be done without arguments, which allow the user to request multiple entries at once (and as above, set tags to each entry as desired):
```
# A call without bibcode,key arguments (interactive prompt):
bibm ads-add
Enter pairs of ADS bibcodes and BibTeX keys (plus optional tags)
Use one line for each BibTeX entry, separate fields with blank spaces.
(press META+ENTER or ESCAPE ENTER when done):
1925PhDT.........1P Payne1925phdStellarAtmospheres stars
# Multiple entries at once, assigning tags (interactive prompt):
bibm ads-add
Enter pairs of ADS bibcodes and BibTeX keys (plus optional tags)
Use one line for each BibTeX entry, separate fields with blank spaces.
(press META+ENTER or ESCAPE ENTER when done):
1925PhDT.........1P Payne1925phdStellarAtmospheres stars
1957RvMP...29..547B BurbidgeEtal1957rvmpStellarSynthesis stars nucleosynthesis
```
---
### ads-update[¶](#ads-update)
Update bibmanager database cross-checking entries with ADS.
**Usage**
```
bibm ads-update [-h] [update_keys]
```
**Description**
This command triggers an ADS search of all entries in the `bibmanager`
database that have a `bibcode`, replacing these entries with the output from ADS.
The main utility of this command is to auto-update entries that were added as their arXiv version, with their published version.
For arXiv updates, this command automatically updates the year and journal in the key (where possible). This is done by searching for the year and the string ‘arxiv’ in the key, using the bibcode info.
For example, an entry with key ‘NameEtal2010arxivGJ436b’ whose bibcode changed from ‘2010arXiv1007.0324B’ to ‘2011ApJ...731...16B’ will get a new key ‘NameEtal2011apjGJ436b’.
To disable this feature, set the `update_keys` optional argument to ‘no’.
**Options**
**update_keys**
Update the keys of the entries. (choose from: {no, arxiv}, default: arxiv).
**-h, --help**
Show this help message and exit.
**Examples**
Note
These example outputs assume that you merged the sample bibfile already, i.e.: `bibm merge ~/.bibmanager/examples/sample.bib`
```
# Look at this entry with old info from arXiv:
bibm search -v author:"^Beaulieu"
Title: Methane in the Atmosphere of the Transiting Hot Neptune GJ436b?, 2010 Authors: {Beaulieu}, J.-P.; et al.
bibcode: 2010arXiv1007.0324B ADS url: http://adsabs.harvard.edu/abs/2010arXiv1007.0324B arXiv url: http://arxiv.org/abs/arXiv:1007.0324 key: BeaulieuEtal2010arxivGJ436b
# Update bibmanager entries that are in ADS:
bibm ads-update
Merged 0 new entries.
(Not counting updated references)
There were 1 entries updated from ArXiv to their peer-reviewed version.
These ones changed their key:
BeaulieuEtal2010arxivGJ436b -> BeaulieuEtal2011apjGJ436b
# Let's take a look at this entry again:
bibm search -v author:"^Beaulieu"
Title: Methane in the Atmosphere of the Transiting Hot Neptune GJ436B?, 2011 Authors: {Beaulieu}, J. -P.; et al.
bibcode: 2011ApJ...731...16B ADS url: https://ui.adsabs.harvard.edu/abs/2011ApJ...731...16B arXiv url: http://arxiv.org/abs/1007.0324 key: BeaulieuEtal2011apjGJ436b
```
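To update the entries while keeping their current keys (as described for the `update_keys` option above), pass ‘no’ as argument:
```
# Update entries without renaming arXiv-style keys:
bibm ads-update no
```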
Note
There might be cases when one does not want to ADS-update an entry. To prevent this to happen, the user can set the *freeze*
meta-parameter through the `bibm edit` command (see [edit](index.html#edit)).
---
**References**
| [[1]](#id1) | <https://github.com/adsabs/adsabs-dev-api#access> |
PDF Management[¶](#pdf-management)
---
Since version 1.2, `bibmanager` also doubles as a PDF database. The following commands describe how to fetch PDF entries from ADS, or manually link and open the PDF files associated to the `bibmanager`
database. All PDF files are stored in the `home`/pdf folder
(see [config](index.html#config), for more info to set `home`).
PDF files can also be manually linked to the database entries via the
`bibm edit` command (see [Meta-Information](index.html#meta)).
---
### fetch[¶](#fetch)
Fetch a PDF file from ADS.
**Usage**
```
bibm fetch [-h] [-o] [keycode] [filename]
```
**Description**
This command attempts to fetch from ADS the PDF file associated to a BibTeX entry in the `bibmanager` database. The request is made to the Journal, then the ADS server, and lastly to ArXiv, until one succeeds.
The entry is specified by either the BibTeX key or ADS bibcode; these can be given in the initial command, or will be queried afterwards through the prompt (see examples).
If the output PDF filename is not specified, the routine will guess a name with this syntax: LastnameYYYY_Journal_vol_page.pdf
Requests for entries not in the database can be made only by ADS bibcode (and auto-completion won't be able to predict their bibcode IDs).
*(New since version 1.2)*
**Options**
**keycode**
Either a BibTex key or an ADS bibcode identifier.
**filename**
Name for fetched PDF file.
**-h, --help**
Show this help message and exit
**-o, --open**
Open the fetched PDF if the request succeeded.
**Examples**
Note
These examples assume that you have this entry into the database: <NAME>. et al. (1980), ApJ, 238, 471. E.g., with: `bibm ads-add 1980ApJ...238..471R RubinEtal1980apjGalaxiesRotation`
A `bibm fetch` call without arguments triggers a prompt search with auto-complete help.
Note that as you navigate through the options, the display shows info about the entries at the bottom. Also, as long as you provide a valid bibcode, you can fetch any PDF (it does not need to be an entry in the database).
```
# Fetch PDF for entry by BibTex key:
bibm fetch RubinEtal1980apjGalaxiesRotation
Fetching PDF file from Journal website:
Request failed with status code 404: NOT FOUND
Fetching PDF file from ADS website:
Saved PDF to: '/home/user/.bibmanager/pdf/Rubin1980_ApJ_238_471.pdf'.
To open the PDF file, execute:
bibm open RubinEtal1980apjGalaxiesRotation
# Fetch PDF for entry by ADS bibcode:
bibm fetch 1980ApJ...238..471R
...
Fetching PDF file from ADS website:
Saved PDF to: '/home/user/.bibmanager/pdf/Rubin1980_ApJ_238_471.pdf'.
To open the PDF file, execute:
bibm open RubinEtal1980apjGalaxiesRotation
# Fetch and set the output filename:
bibm fetch 1980ApJ...238..471R Rubin1980_gals_rotation.pdf
...
Fetching PDF file from ADS website:
Saved PDF to: '/home/user/.bibmanager/pdf/Rubin1980_gals_rotation.pdf'.
To open the PDF file, execute:
bibm open RubinEtal1980apjGalaxiesRotation
```
A `bibm fetch` call with the `-o/--open` flag automatically opens the PDF file after a successful fetch:
```
# Use prompt to find the BibTex entry (and open the PDF right after fetching):
bibm fetch RubinEtal1980apjGalaxiesRotation -o
Fetching PDF file from Journal website:
Request failed with status code 404: NOT FOUND
Fetching PDF file from ADS website:
Saved PDF to: '/home/user/.bibmanager/pdf/Rubin1980_ApJ_238_471.pdf'.
```
---
### open[¶](#open)
Open the PDF file of a BibTex entry in the database.
**Usage**
```
bibm open [-h] [keycode]
```
**Description**
This command opens the PDF file associated to a Bibtex entry in the
`bibmanager` database. The entry is specified by either its BibTex key,
its ADS bibcode, or its PDF filename. These can be specified on the initial command, or will be queried through the prompt (with auto-complete help).
If the user requests a PDF for an entry without a PDF file but with an ADS bibcode, `bibmanager` will ask if the user wants to fetch the PDF from ADS.
*(New since version 1.2)*
**Options**
**keycode**
Either a key or an ADS bibcode identifier.
**-h, --help**
Show this help message and exit
**Examples**
```
# Open setting the BibTex key:
bibm open RubinEtal1980apjGalaxiesRotation
# Open setting the ADS bibcode:
bibm open 1980ApJ...238..471R
# Open setting the PDF filename:
bibm open Rubin1980_ApJ_238_471.pdf
```
```
# Use the prompt to find the BibTex entry:
bibm open
Syntax is: key: KEY_VALUE
or: bibcode: BIBCODE_VALUE
or: pdf: PDF_VALUE
(Press 'tab' for autocomplete)
key: RubinEtal1980apjGalaxiesRotation
```
---
### pdf[¶](#id1)
Link a PDF file to a BibTex entry in the database.
**Usage**
```
bibm pdf [-h] [keycode pdf] [filename]
```
**Description**
This command manually links an existing PDF file to a Bibtex entry in the `bibmanager` database. The PDF file is moved to the *‘home/pdf’*
folder (see [config](index.html#config)).
The entry is specified by either the BibTeX key or ADS bibcode; these can be given in the initial command, or will be queried afterwards through the prompt (see examples).
If the output PDF filename is not specified, the code will preserve the file name. If the user sets *‘guess’* as filename, the code will guess a name based on the BibTex information.
*(New since version 1.2)*
**Options**
**keycode**
Either a key or an ADS bibcode identifier.
**pdf**
Path to PDF file to link to entry.
**filename**
New name for the linked PDF file.
**-h, --help**
Show this help message and exit
**Examples**
Say you already have an article’s PDF file here: *~/Downloads/Rubin1980.pdf*
```
# Link a downloaded PDF file to an entry:
bibm pdf 1980ApJ...238..471R ~/Downloads/Rubin1980.pdf
Saved PDF to: '/home/user/.bibmanager/pdf/Rubin1980.pdf'.
# Link a downloaded PDF file (guessing the name from BibTex):
bibm pdf 1980ApJ...238..471R ~/Downloads/Rubin1980.pdf guess
Saved PDF to: '/home/user/.bibmanager/pdf/Rubin1980_ApJ_238_471.pdf'.
# Link a downloaded PDF file (renaming the file):
bibm pdf 1980ApJ...238..471R ~/Downloads/Burbidge1957.pdf RubinEtal_1980.pdf
Saved PDF to: '/home/user/.bibmanager/pdf/RubinEtal_1980.pdf'.
```
```
# Use the prompt to find the BibTex entry:
bibm pdf
Syntax is: key: KEY_VALUE PDF_FILE FILENAME
or: bibcode: BIBCODE_VALUE PDF_FILE FILENAME
(output FILENAME is optional, set it to guess for automated naming)
key: RubinEtal1980apjGalaxiesRotation ~/Downloads/Rubin1980.pdf
Saved PDF to: '/home/user/.bibmanager/pdf/Rubin1980.pdf'.
```
FAQs and Resources[¶](#faqs-and-resources)
---
### Frequently Asked Questions[¶](#frequently-asked-questions)
#### Why should I use `bibmanager`? I have already my working ecosystem.[¶](#why-should-i-use-bibmanager-i-have-already-my-working-ecosystem)
`bibmanager` simply makes your life easier, keeping all of your references at the tip of your fingers:
* No need to wonder whether to start a new BibTeX file from scratch or reuse an old one (probably a massive file), nor to figure out which one was the most current.
* Easily add new entries: manually, from your existing BibTeX files, or from ADS, without risking having duplicates.
* Generate BibTeX files and compile a LaTeX project with a single command.
* You can stay up to date with ADS with a single command.
---
#### I use several machines to work, can I use a single database across all of them?[¶](#i-use-several-machines-to-work-can-i-use-a-single-database-across-all-of-them)
Yes! Since version 1.2 `bibmanager` has a `home` config parameter which sets the location of the database. By default `home` points at *~/.bibmanager*; however, you can set the `home` parameter to a folder in a Dropbox-type system. The only nuance is that you'll need to install and configure `bibmanager` on each machine, but now all of them will be pointing to the same database.
Note that the folder containing the associated PDF files (i.e.,
`home`/pdf) will also be moved into the new location.
---
#### I compiled my LaTeX file before merging its bibfile, did I just overwrite my own BibTeX file?[¶](#i-compiled-my-latex-file-before-merging-its-bibfile-did-i-just-overwite-my-own-bibtex-file)
No, if `bibmanager` has to overwrite a bibfile edited by the user (say,
‘myrefs.bib’), it saves the old file (and date) as
‘orig_yyyy-mm-dd_myrefs.bib’.
---
#### I merged the BibTeX file for my LaTeX project, but it says there are missing references when I compile. What's going on?[¶](#i-meged-the-bibtex-file-for-my-latex-project-but-it-says-there-are-missing-references-when-i-compile-what-s-going-on)
Probably, there were duplicates between your entries and previous entries in the
`bibmanager` database, but they had different keys. Simply do a search for your missing reference to check its key, something like:
```
# Surely, first author and year have not changed:
bibm search author:"^Author" year:the_year
```
Now, you can update the key in the LaTeX file (and as a bonus, you won't run into duplicate entries in the future).
---
#### That Raycast extension looks sweet! How do I install it?[¶](#that-raycast-extension-looks-sweet-how-do-i-install-it)
Right, Raycast rocks. To install Raycast, simply go to their homepage
(<https://www.raycast.com/>), click on the `Download` tab in the upper right corner and follow the instruction of the installer.
To install the `bibmanager` extension, click on the `Store` tab
(from Raycast home’s page), and search for bibmanager. Once redirected, you’ll see a `Install Extension` tab, click it and follow the instructions.
---
#### I installed `bibmanager` while being in a virtual environment. But I don’t want to start the virtual env every time I want to use `bibm`.[¶](#i-installed-bibmanager-while-being-in-a-virtual-environment-but-i-don-t-want-to-start-the-virtual-env-every-time-i-want-to-use-bibm)
(This is not a question! Please state your FAQ in the form of a question.) Anyway, no worries: the `bibm` executable entry point is safe to use even if you are not in the virtual environment.
What you can do is to add the path to the entry point into your bash:
```
# first, search for the entry-point executable (while in the virtual environment):
which bibm
/home/username/py36/bin/bibm
```
Then, add an alias with that path into your bash, e.g.: `alias bibm='/home/username/py36/bin/bibm'`. Now, you can access `bibm` at any time.
---
#### A unique database? Does it mean I need to have better keys to differentiate my entries?[¶](#a-unique-database-does-it-mean-i-need-to-have-better-keys-to-differentiate-my-entries)
Certainly, as a database grows, short BibTeX keys like ‘LastnameYYYY’
are sub-optimal, since they may conflict with other entries, and are not descriptive enough.
A good practice is to adopt a longer, more descriptive format.
I personally suggest this one:
| Authors | Format | Example |
| --- | --- | --- |
| 1 | LastYYYYjournalDescription | Shapley1918apjDistanceGClusters |
| 2 | Last1Last2YYYYjournalDescription | PerezGranger2007cseIPython |
| 3 | LastEtalYYYYjournalDescription | AstropycollabEtal2013aaAstropy |
That is:
* the first-author last name (capitalized)
* either nothing, the second-author last name (capitalized), or ‘Etal’
* the publication year
* the journal initials if any (and lower-cased)
* a couple words from the title that describe the article
(capitalized or best format at user’s discretion).
These long keys will keep you from running into issues, and will make the citations in your LaTeX documents nearly unambiguous at sight.
---
#### The code breaks with `UnicodeEncodeError` when running over ssh. What’s going on?[¶](#the-code-breaks-with-unicodeencodeerror-when-running-over-ssh-what-s-going-on)
As correctly guessed in this [Stack Overflow post](https://stackoverflow.com/questions/17374526), Python cannot determine the terminal encoding, and falls back to ASCII. You can fix this by setting the following environment variable, e.g., into your bash:
`export PYTHONIOENCODING=utf-8`
---
### Resources[¶](#resources)
Docs for queries in the new ADS:
<http://adsabs.github.io/help/search/search-syntax>
The ADS API:
<https://github.com/adsabs/adsabs-dev-api>
BibTeX author format:
<http://mirror.easyname.at/ctan/info/bibtex/tamethebeast/ttb_en.pdf>
<http://texdoc.net/texmf-dist/doc/bibtex/base/btxdoc.pdf>
Pygment style BibTeX options:
<http://pygments.org/demo/6693571/>
Set up conda:
<https://github.com/conda-forge/staged-recipes>
Testing:
<https://docs.pytest.org/>
<http://pythontesting.net/framework/pytest/pytest-fixtures-nuts-bolts/>
<https://blog.dbrgn.ch/2016/2/18/overriding_default_arguments_in_pytest/>
<https://www.patricksoftwareblog.com/monkeypatching-with-pytest/>
<https://requests-mock.readthedocs.io/en/>
Useful info from stackoverflow:
<https://stackoverflow.com/questions/17317219>
<https://stackoverflow.com/questions/18011902>
<https://stackoverflow.com/questions/26899001>
<https://stackoverflow.com/questions/2241348>
<https://stackoverflow.com/questions/1158076>
<https://stackoverflow.com/questions/17374526>
<https://stackoverflow.com/questions/43165341>
API[¶](#api)
---
### bibmanager[¶](#module-bibmanager)
### bibmanager.bib_manager[¶](#module-bibmanager.bib_manager)
*class* `bibmanager.bib_manager.``Bib`(*entry*, *pdf=None*, *freeze=None*, *tags=[]*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#Bib)[¶](#bibmanager.bib_manager.Bib)
```
Bibliographic-entry object.
Create a Bib() object from given entry.
Parameters
---
entry: String
A bibliographic entry text.
pdf: String
Name of PDF file associated with this entry.
freeze: Bool
Flag that, if True, prevents the entry from being ADS-updated.
Examples
---
>>> import bibmanager.bib_manager as bm
>>> entry = '''@Misc{JonesEtal2001scipy,
author = {<NAME> and <NAME> and <NAME>},
title = {{SciPy}: Open source scientific tools for {Python}},
year = {2001},
}'''
>>> bib = bm.Bib(entry)
>>> print(bib.title)
SciPy: Open source scientific tools for Python
>>> for author in bib.authors:
>>> print(author)
Author(last='Jones', first='Eric', von='', jr='')
Author(last='Oliphant', first='Travis', von='', jr='')
Author(last='Peterson', first='Pearu', von='', jr='')
>>> print(bib.sort_author)
Sort_author(last='jones', first='e', von='', jr='', year=2001, month=13)
```
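The optional `pdf`, `freeze`, and `tags` arguments shown in the signature attach meta-information at creation time; a minimal sketch reusing the `entry` string from the example above (the file name and tag are hypothetical):
```
>>> # Create an entry with meta-information attached:
>>> bib = bm.Bib(entry, pdf='Jones2001.pdf', freeze=True, tags=['software'])
>>> # Show the non-None meta information:
>>> print(bib.meta())
```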
`get_authors`(*format='short'*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#Bib.get_authors)[¶](#bibmanager.bib_manager.Bib.get_authors)
```
wrapper for string representation for the author list.
See bib_manager.get_authors() for docstring.
```
`meta`()[[source]](_modules/bibmanager/bib_manager/bib_manager.html#Bib.meta)[¶](#bibmanager.bib_manager.Bib.meta)
```
String containing the non-None meta information.
```
`published`()[[source]](_modules/bibmanager/bib_manager/bib_manager.html#Bib.published)[¶](#bibmanager.bib_manager.Bib.published)
```
Published status according to the ADS bibcode field:
Return -1 if bibcode is None.
Return 0 if bibcode is arXiv.
Return 1 if bibcode is peer-reviewed journal.
```
`update_content`(*other*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#Bib.update_content)[¶](#bibmanager.bib_manager.Bib.update_content)
```
Update the bibtex content of self with that of other.
```
`update_key`(*new_key*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#Bib.update_key)[¶](#bibmanager.bib_manager.Bib.update_key)
```
Update key with new_key, making sure to also update content.
```
`bibmanager.bib_manager.``display_bibs`(*labels*, *bibs*, *meta=False*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#display_bibs)[¶](#bibmanager.bib_manager.display_bibs)
```
Display a list of bib entries on screen with flying colors.
Parameters
---
labels: List of Strings
Header labels to show above each Bib() entry.
bibs: List of Bib() objects
BibTeX entries to display.
meta: Bool
If True, also display the meta-information.
Examples
---
>>> import bibmanager.bib_manager as bm
>>> e1 = '''@Misc{JonesEtal2001scipy,
author = {<NAME> and <NAME> and <NAME>},
title = {{SciPy}: Open source scientific tools for {Python}},
year = {2001},
}'''
>>> e2 = '''@Misc{Jones2001,
author = {<NAME> and <NAME> and <NAME>},
title = {SciPy: Open source scientific tools for Python},
year = {2001},
}'''
>>> bibs = [bm.Bib(e1), bm.Bib(e2)]
>>> bm.display_bibs(["DATABASE:\n", "NEW:\n"], bibs)
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
DATABASE:
@Misc{JonesEtal2001scipy,
author = {<NAME> and <NAME> and <NAME>},
title = {{SciPy}: Open source scientific tools for {Python}},
year = {2001},
}
NEW:
@Misc{Jones2001,
author = {<NAME> and <NAME> and <NAME>},
title = {SciPy: Open source scientific tools for Python},
year = {2001},
}
```
`bibmanager.bib_manager.``display_list`(*bibs*, *verb=-1*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#display_list)[¶](#bibmanager.bib_manager.display_list)
```
Display a list of BibTeX entries with different verbosity levels.
Although this might seem a duplication of display_bibs(), this function is meant to provide multiple levels of verbosity and generally to display longer lists of entries.
Parameters
---
bibs: List of Bib() objects
BibTeX entries to display.
verb: Integer
The desired verbosity level:
verb < 0: Display only the keys.
verb = 0: Display the title, year, first author, and key.
verb = 1: Display additionally the ADS and arXiv urls.
verb = 2: Display additionally the full list of authors.
verb > 2: Display the full BibTeX entry.
```
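A minimal usage sketch, following the verbosity levels listed above:
```
>>> import bibmanager.bib_manager as bm
>>> # Load the database and display title, year, first author, and key:
>>> bibs = bm.load()
>>> bm.display_list(bibs, verb=0)
```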
`bibmanager.bib_manager.``remove_duplicates`(*bibs*, *field*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#remove_duplicates)[¶](#bibmanager.bib_manager.remove_duplicates)
```
Look for duplicates (within a same list of entries) by field and remove them (in place).
Parameters
---
bibs: List of Bib() objects
Entries to filter.
field: String
Field to use for filtering ('doi', 'isbn', 'bibcode', or 'eprint').
Returns
---
replacements: dict
A dictionary of {old:new} duplicated keys that have been removed.
```
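A minimal sketch of filtering a list of entries by their ADS bibcode:
```
>>> import bibmanager.bib_manager as bm
>>> bibs = bm.load()
>>> # Remove entries that share the same bibcode (modifies 'bibs' in place):
>>> replacements = bm.remove_duplicates(bibs, field='bibcode')
```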
`bibmanager.bib_manager.``filter_field`(*bibs*, *new*, *field*, *take*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#filter_field)[¶](#bibmanager.bib_manager.filter_field)
```
Filter duplicate entries by field between new and bibs.
This routine modifies new removing the duplicates, and may modify bibs (depending on take argument).
Parameters
---
bibs: List of Bib() objects
Database entries.
new: List of Bib() objects
New entries to add.
field: String
Field to use for filtering.
take: String
Decision-making protocol to resolve conflicts when there are
duplicated entries:
'old': Take the database entry over new.
'new': Take the new entry over the database.
'ask': Ask user to decide (interactively).
```
`bibmanager.bib_manager.``read_file`(*bibfile=None*, *text=None*, *return_replacements=False*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#read_file)[¶](#bibmanager.bib_manager.read_file)
```
Create a list of Bib() objects from a BibTeX file (.bib file).
Parameters
---
bibfile: String
Path to an existing .bib file.
text: String
Content of a .bib file (ignored if bibfile is not None).
return_replacements: Bool
If True, also return a dictionary of replaced keys.
Returns
---
bibs: List of Bib() objects
List of Bib() objects of BibTeX entries in bibfile, sorted by
Sort_author() fields.
reps: Dict
A dictionary of replaced key names.
Examples
---
>>> import bibmanager.bib_manager as bm
>>> text = (
>>> "@misc{AASteamHendrickson2018aastex62,\n"
>>> "author = {{AAS Journals Team} and {Hendrickson}, Amy},\n"
>>> "title = {{AASJournals/AASTeX60: Version 6.2 official release}},\n"
>>> "year = 2018\n"
>>> "}")
>>> bibs = bm.read_file(text=text)
```
`bibmanager.bib_manager.``save`(*entries*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#save)[¶](#bibmanager.bib_manager.save)
```
Save list of Bib() entries into bibmanager pickle database.
Parameters
---
entries: List of Bib() objects
bib files to store.
Examples
---
>>> import bibmanager.bib_manager as bm
>>> # TBD: Load some entries
>>> bm.save(entries)
```
`bibmanager.bib_manager.``load`(*bm_database=None*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#load)[¶](#bibmanager.bib_manager.load)
```
Load a Bibmanager database of BibTeX entries.
Parameters
---
bm_database: String
A Bibmanager pickle database file. If None, defaults to the
database in the system.
Returns
---
bibs: List Bib() instances
Return an empty list if there is no database file.
Examples
---
>>> import bibmanager.bib_manager as bm
>>> bibs = bm.load()
```
`bibmanager.bib_manager.``find`(*key=None*, *bibcode=None*, *bibs=None*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#find)[¶](#bibmanager.bib_manager.find)
```
Find a specific entry in the database.
Parameters
---
key: String
Key of entry to find.
bibcode: String
Bibcode of entry to find (ignored if key is not None).
bibs: List of Bib() instances
Database where to search. If None, load the Bibmanager database.
Returns
---
bib: a Bib() instance
BibTex matching either key or bibcode.
```
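A minimal usage sketch (assuming the key and bibcode, taken from examples elsewhere in this documentation, exist in the database):
```
>>> import bibmanager.bib_manager as bm
>>> # Find an entry in the default database by its key:
>>> bib = bm.find(key='Astropycollab2013aaAstropy')
>>> # Find by bibcode within an already-loaded list of entries:
>>> bibs = bm.load()
>>> bib = bm.find(bibcode='2013A&A...558A..33A', bibs=bibs)
```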
`bibmanager.bib_manager.``get_version`(*bm_database=None*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#get_version)[¶](#bibmanager.bib_manager.get_version)
```
Get version of pickled database file.
If the database does not exist, return the current bibmanager version.
If the database does not contain a version, return '0.0.0'.
Parameters
---
bm_database: String
A Bibmanager pickle database file. If None, defaults to the
database in the system.
Returns
---
version: String
bibmanager version of pickled objects.
Examples
---
>>> import bibmanager.bib_manager as bm
>>> bibs = bm.get_version()
```
`bibmanager.bib_manager.``export`(*entries*, *bibfile=None*, *meta=False*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#export)[¶](#bibmanager.bib_manager.export)
```
Export list of Bib() entries into a .bib file.
Parameters
---
entries: List of Bib() objects
Entries to export.
bibfile: String
Output .bib file name. If None, export into home directory.
meta: Bool
If True, include meta information before the entries on the
output bib file.
```
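A minimal sketch that writes the current database into a .bib file (the output file name is hypothetical):
```
>>> import bibmanager.bib_manager as bm
>>> bibs = bm.load()
>>> # Export all entries, without meta-information:
>>> bm.export(bibs, bibfile='my_references.bib', meta=False)
```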
`bibmanager.bib_manager.``merge`(*bibfile=None*, *new=None*, *take='old'*, *base=None*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#merge)[¶](#bibmanager.bib_manager.merge)
```
Merge entries from a new bibfile into the bibmanager database
(or into an input database).
Parameters
---
bibfile: String
New .bib file to merge into the bibmanager database.
new: List of Bib() objects
List of new BibTeX entries (ignored if bibfile is not None).
take: String
Decision-making protocol to resolve conflicts when there are
partially duplicated entries.
'old': Take the database entry over new.
'new': Take the new entry over the database.
'ask': Ask user to decide (interactively).
base: List of Bib() objects
If None, merge new entries into the bibmanager database.
If not None, merge new entries into base.
Returns
---
bibs: List of Bib() objects
Merged list of BibTeX entries.
Examples
---
>>> import bibmanager.bib_manager as bm
>>> import os
>>> # TBD: Need to add sample2.bib into package.
>>> newbib = os.path.expanduser("~") + "/.bibmanager/examples/sample2.bib"
>>> # Merge newbib into database:
>>> bm.merge(newbib, take='old')
```
`bibmanager.bib_manager.``init`(*bibfile=None*, *reset_db=True*, *reset_config=False*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#init)[¶](#bibmanager.bib_manager.init)
```
Initialize bibmanager, reset database entries and config parameters.
Parameters
---
bibfile: String
A bibfile to include as the new bibmanager database.
If None, reset the bibmanager database with a clean slate.
reset_db: Bool
If True, reset the bibmanager database.
reset_config: Bool
If True, reset the config file.
Examples
---
>>> import bibmanager.bib_manager as bm
>>> import os
>>> bibfile = os.path.expanduser("~") + "/.bibmanager/examples/sample.bib"
>>> bm.init(bibfile)
```
`bibmanager.bib_manager.``add_entries`(*take='ask'*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#add_entries)[¶](#bibmanager.bib_manager.add_entries)
```
Manually add BibTeX entries through the prompt.
Parameters
---
take: String
Decision-making protocol to resolve conflicts when there are
partially duplicated entries.
'old': Take the database entry over new.
'new': Take the new entry over the database.
'ask': Ask user to decide (interactively).
```
`bibmanager.bib_manager.``edit`()[[source]](_modules/bibmanager/bib_manager/bib_manager.html#edit)[¶](#bibmanager.bib_manager.edit)
```
Manually edit the bibfile database in text editor.
Resources
---
https://stackoverflow.com/questions/17317219/
https://docs.python.org/3.6/library/subprocess.html
```
`bibmanager.bib_manager.``search`(*authors=None*, *year=None*, *title=None*, *key=None*, *bibcode=None*, *tags=None*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#search)[¶](#bibmanager.bib_manager.search)
```
Search in bibmanager database by different fields/properties.
Parameters
---
authors: String or List of strings
An author name (or list of names) with BibTeX format (see parse_name()
docstring). To restrict search to a first author, prepend the
'^' character to a name.
year: Integer or two-element integer tuple
If integer, match against year; if tuple, minimum and maximum
matching years (including).
title: String or iterable (list, tuple, or ndarray of strings)
Match entries that contain all input strings in the title (ignore case).
key: String or list of strings
Match any entry whose key is in the input key.
bibcode: String or list of strings
Match any entry whose bibcode is in the input bibcode.
tags: String or list of strings
Match entries containing all specified tags.
Returns
---
matches: List of Bib() objects
Entries that match all input criteria.
Examples
---
>>> import bibmanager.bib_manager as bm
>>> # Search by last name:
>>> matches = bm.search(authors="Cubillos")
>>> # Search by last name and initial:
>>> matches = bm.search(authors="<NAME>")
>>> # Search by author in given year:
>>> matches = bm.search(authors="<NAME>", year=2017)
>>> # Search by first author and co-author (using AND logic):
>>> matches = bm.search(authors=["^Cubillos", "Blecic"])
>>> # Search by keyword in title:
>>> matches = bm.search(title="Spitzer")
>>> # Search by keywords in title (using AND logic):
>>> matches = bm.search(title=["HD 189", "HD 209"])
>>> # Search by key (note that unlike the other fields, key and
>>> # bibcode use OR logic, so you can get many items at once):
>>> matches = bm.search(key="Astropycollab2013aaAstropy")
>>> # Search by bibcode (note no need to worry about UTF-8 encoding):
>>> matches = bm.search(bibcode=["2013A%26A...558A..33A",
>>> "1957RvMP...29..547B",
>>> "2017AJ....153....3C"])
```
`bibmanager.bib_manager.``prompt_search`(*keywords*, *field*, *prompt_text*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#prompt_search)[¶](#bibmanager.bib_manager.prompt_search)
```
Do an interactive prompt search in the Bibmanager database by the given keywords, with auto-complete and auto-suggest only offering non-None values of the given field.
Only one keyword must be set in the prompt.
A bottom toolbar dynamically shows additional info.
Parameters
---
keywords: List of strings
BibTex keywords to search by.
field: String
Filtering BibTex field for auto-complete and auto-suggest.
prompt_text: String
Text to display when launching the prompt.
Returns
---
kw_input: List of strings
List of the parsed input (same order as keywords).
Items are None for the keywords not defined.
extra: List of strings
Any further word written in the prompt.
Examples
---
>>> import bibmanager.bib_manager as bm
>>> # Search by key or bibcode, of entries with non-None bibcode:
>>> keywords = ['key', 'bibcode']
>>> field = 'bibcode'
>>> prompt_text = ("Sample search (Press 'tab' for autocomplete):\n")
>>> prompt_input = bm.prompt_search(keywords, field, prompt_text)
Sample search (Press 'tab' for autocomplete):
key: Astropy2013aaAstroPy
>>> # Look at the results (list corresponds to [key, bibcode]):
>>> print(prompt_input[0])
['Astropy2013aaAstroPy', None]
>>> print(f'extra = {prompt_input[1]}')
extra = [None]
>>> # Repeat search, now by bibcode:
>>> prompt_input = u.prompt_search(keywords, field, prompt_text)
Sample search (Press 'tab' for autocomplete):
bibcode: 2013A&A...558A..33A
>>> print(prompt_input[0])
[None, '2013A&A...558A..33A']
```
`bibmanager.bib_manager.``prompt_search_tags`(*prompt_text*)[[source]](_modules/bibmanager/bib_manager/bib_manager.html#prompt_search_tags)[¶](#bibmanager.bib_manager.prompt_search_tags)
```
Do an interactive prompt search in the Bibmanager database by the given keywords, with auto-complete and auto-suggest only offering non-None values of the given field.
Only one keyword must be set in the prompt.
A bottom toolbar dynamically shows additional info.
Parameters
---
prompt_text: String
Text to display when launching the prompt.
Returns
---
kw_input: List of strings
List of the parsed input (same order as keywords).
Items are None for the keywords not defined.
```
`bibmanager.bib_manager.``browse`()[[source]](_modules/bibmanager/bib_manager/browser.html#browse)[¶](#bibmanager.bib_manager.browse)
```
A browser for the bibmanager database.
```
### bibmanager.config_manager[¶](#module-bibmanager.config_manager)
`bibmanager.config_manager.``help`(*key*)[[source]](_modules/bibmanager/config_manager/config_manager.html#help)[¶](#bibmanager.config_manager.help)
```
Display help information.
Parameters
---
key: String
A bibmanager config parameter.
```
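A minimal usage sketch:
```
>>> import bibmanager.config_manager as cm
>>> # Display help information for a config parameter:
>>> cm.help('ads_display')
```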
`bibmanager.config_manager.``display`(*key=None*)[[source]](_modules/bibmanager/config_manager/config_manager.html#display)[¶](#bibmanager.config_manager.display)
```
Display the value(s) of the bibmanager config file on the prompt.
Parameters
---
key: String
bibmanager config parameter to display. Leave as None to display
the values from all parameters.
Examples
---
>>> import bibmanager.config_manager as cm
>>> # Show all parameters and values:
>>> cm.display()
bibmanager configuration file:
PARAMETER VALUE
--- ---
style        autumn
text_editor  default
paper        letter
ads_token    None
ads_display  20
home         /home/user/.bibmanager/
>>> # Show a specific parameter:
>>> cm.display('text_editor')
text_editor: default
```
`bibmanager.config_manager.``get`(*key*)[[source]](_modules/bibmanager/config_manager/config_manager.html#get)[¶](#bibmanager.config_manager.get)
```
Get the value of a parameter in the bibmanager config file.
Parameters
---
key: String
The requested parameter name.
Returns
---
value: String
Value of the requested parameter.
Examples
---
>>> import bibmanager.config_manager as cm
>>> cm.get('paper')
'letter'
>>> cm.get('style')
'autumn'
```
`bibmanager.config_manager.``set`(*key*, *value*)[[source]](_modules/bibmanager/config_manager/config_manager.html#set)[¶](#bibmanager.config_manager.set)
```
Set the value of a bibmanager config parameter.
Parameters
---
key: String
bibmanager config parameter to set.
value: String
Value to set for input parameter.
Examples
---
>>> import bibmanager.config_manager as cm
>>> # Update text editor:
>>> cm.set('text_editor', 'vim')
text_editor updated to: vim.
>>> # Invalid bibmanager parameter:
>>> cm.set('styles', 'arduino')
ValueError: 'styles' is not a valid bibmanager config parameter.
The available parameters are:
['style', 'text_editor', 'paper', 'ads_token', 'ads_display', 'home']
>>> # Attempt to set an invalid style:
>>> cm.set('style', 'fake_style')
ValueError: 'fake_style' is not a valid style option. Available options are:
default, emacs, friendly, colorful, autumn, murphy, manni, monokai, perldoc,
pastie, borland, trac, native, fruity, bw, vim, vs, tango, rrt, xcode, igor,
paraiso-light, paraiso-dark, lovelace, algol, algol_nu, arduino,
rainbow_dash, abap
>>> # Attempt to set an invalid command for text_editor:
>>> cm.set('text_editor', 'my_own_editor')
ValueError: 'my_own_editor' is not a valid text editor.
>>> # Beware, one can still set a valid command that doesn't edit text:
>>> cm.set('text_editor', 'less')
text_editor updated to: less.
```
`bibmanager.config_manager.``update_keys`()[[source]](_modules/bibmanager/config_manager/config_manager.html#update_keys)[¶](#bibmanager.config_manager.update_keys)
```
Update config in HOME with keys from ROOT, without overwriting values.
```
### bibmanager.latex_manager[¶](#module-bibmanager.latex_manager)
`bibmanager.latex_manager.``get_bibfile`(*texfile*)[[source]](_modules/bibmanager/latex_manager/latex_manager.html#get_bibfile)[¶](#bibmanager.latex_manager.get_bibfile)
```
Find and extract the bibfile used by a .tex file.
This is done by looking for a '\bibliography{}' call.
Parameters
---
texfile: String
Name of an input tex file.
Returns
---
bibfile: String
bib file referenced in texfile.
```
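A minimal usage sketch (the .tex file name is hypothetical):
```
>>> import bibmanager.latex_manager as lm
>>> # Get the .bib file referenced in the \bibliography{} call of a .tex file:
>>> bibfile = lm.get_bibfile('my_file.tex')
```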
`bibmanager.latex_manager.``no_comments`(*text*)[[source]](_modules/bibmanager/latex_manager/latex_manager.html#no_comments)[¶](#bibmanager.latex_manager.no_comments)
```
Remove comments from tex file, partially inspired by this:
https://stackoverflow.com/questions/2319019
Parameters
---
text: String
Content from a latex file.
Returns
---
no_comments_text: String
Input text with removed comments (as defined by latex format).
Examples
---
>>> import bibmanager.latex_manager as lm
>>> text = r'''
Hello, this is dog.
% This is a comment line.
This line ends with a comment. % A comment However, this is a percentage \%, not a comment.
OK, bye.'''
>>> print(lm.no_comments(text))
Hello, this is dog.
This line ends with a comment.
However, this is a percentage \%, not a comment.
OK, bye.
```
`bibmanager.latex_manager.``citations`(*text*)[[source]](_modules/bibmanager/latex_manager/latex_manager.html#citations)[¶](#bibmanager.latex_manager.citations)
```
Generator to find citations in a tex text. Partially inspired by this: https://stackoverflow.com/questions/29976397
Notes
---
Act recursively in case there are references inside the square brackets of the cite call. Only failing case I can think so far is if there are nested square brackets.
Parameters
---
text: String
String where to search for the latex citations.
Yields
---
citation: String
The citation key.
Examples
---
>>> import bibmanager.latex_manager as lm
>>> import os
>>> # Syntax matches any of these calls:
>>> tex = r'''
\citep{AuthorA}.
\citep[pre]{AuthorB}.
\citep[pre][post]{AuthorC}.
\citep [pre] [post] {AuthorD}.
\citep[{\pre},][post]{AuthorE, AuthorF}.
\citep[pre][post]{AuthorG} and \citep[pre][post]{AuthorH}.
\citep{
AuthorI}.
\citep
[][]{AuthorJ}.
\citep[pre
][post] {AuthorK, AuthorL}
\citep[see also \citealp{AuthorM}][]{AuthorN}'''
>>> for citation in lm.citations(tex):
>>> print(citation, end=" ")
AuthorA AuthorB AuthorC AuthorD AuthorE AuthorF AuthorG AuthorH AuthorI AuthorJ AuthorK AuthorL AuthorM AuthorN
>>> # Match all of these cite calls:
>>> tex = r'''
\cite{AuthorA}, \nocite{AuthorB}, \defcitealias{AuthorC}.
\citet{AuthorD}, \citet*{AuthorE}, \Citet{AuthorF}, \Citet*{AuthorG}.
\citep{AuthorH}, \citep*{AuthorI}, \Citep{AuthorJ}, \Citep*{AuthorK}.
\citealt{AuthorL}, \citealt*{AuthorM},
\Citealt{AuthorN}, \Citealt*{AuthorO}.
\citealp{AuthorP}, \citealp*{AuthorQ},
\Citealp{AuthorR}, \Citealp*{AuthorS}.
\citeauthor{AuthorT}, \citeauthor*{AuthorU}.
\Citeauthor{AuthorV}, \Citeauthor*{AuthorW}.
\citeyear{AuthorX}, \citeyear*{AuthorY}.
\citeyearpar{AuthorZ}, \citeyearpar*{AuthorAA}.'''
>>> for citation in lm.citations(tex):
>>> print(citation, end=" ")
AuthorA AuthorB AuthorC AuthorD AuthorE AuthorF AuthorG AuthorH AuthorI AuthorJ AuthorK AuthorL AuthorM AuthorN AuthorO AuthorP AuthorQ AuthorR AuthorS AuthorT AuthorU AuthorV AuthorW AuthorX AuthorY AuthorZ AuthorAA
>>> texfile = os.path.expanduser('~')+"/.bibmanager/examples/sample.tex"
>>> with open(texfile, encoding='utf-8') as f:
>>> tex = f.read()
>>> tex = lm.no_comments(tex)
>>> cites = [citation for citation in lm.citations(tex)]
>>> for key in np.unique(cites):
>>> print(key)
AASteamHendrickson2018aastex62 Astropycollab2013aaAstropy
Hunter2007ieeeMatplotlib JonesEtal2001scipy
MeurerEtal2017pjcsSYMPY PerezGranger2007cseIPython
vanderWaltEtal2011numpy
```
`bibmanager.latex_manager.``parse_subtex_files`(*tex*)[[source]](_modules/bibmanager/latex_manager/latex_manager.html#parse_subtex_files)[¶](#bibmanager.latex_manager.parse_subtex_files)
```
Recursively search for subfiles included in tex. Append their content at the end of tex and return.
Parameters
---
tex: String
String to parse.
Returns
---
tex: String
String with appended content from any subfile.
```
`bibmanager.latex_manager.``build_bib`(*texfile*, *bibfile=None*)[[source]](_modules/bibmanager/latex_manager/latex_manager.html#build_bib)[¶](#bibmanager.latex_manager.build_bib)
```
Generate a .bib file from a given tex file.
Parameters
---
texfile: String
Name of an input tex file.
bibfile: String
Name of an output bib file. If None, get bibfile name from
bibliography call inside the tex file.
Returns
---
missing: List of strings
List of the bibkeys not found in the bibmanager database.
```
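A minimal usage sketch (file names are hypothetical):
```
>>> import bibmanager.latex_manager as lm
>>> # Generate 'this_file.bib' from the citations in 'my_file.tex':
>>> missing = lm.build_bib('my_file.tex', bibfile='this_file.bib')
>>> # 'missing' lists any citation keys not found in the bibmanager database.
```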
`bibmanager.latex_manager.``update_keys`(*texfile*, *key_replacements*, *is_main*)[[source]](_modules/bibmanager/latex_manager/latex_manager.html#update_keys)[¶](#bibmanager.latex_manager.update_keys)
```
Update citation keys in a tex file according to the replace_dict.
Work our way recursively into sub-files.
Parameters
---
texfile: String
Path to an existing .tex file.
is_main: Bool
If True, ignore everything up to the '\begin{document}' call.
```
`bibmanager.latex_manager.``clear_latex`(*texfile*)[[source]](_modules/bibmanager/latex_manager/latex_manager.html#clear_latex)[¶](#bibmanager.latex_manager.clear_latex)
```
Remove by-products of previous latex compilations.
Parameters
---
texfile: String
Path to an existing .tex file.
Notes
---
For an input argument texfile='filename.tex', this function deletes the files that begin with 'filename' followed by:
.bbl, .blg, .out, .dvi,
.log, .aux, .lof, .lot,
.toc, .ps, .pdf, Notes.bib
```
`bibmanager.latex_manager.``compile_latex`(*texfile*, *paper=None*)[[source]](_modules/bibmanager/latex_manager/latex_manager.html#compile_latex)[¶](#bibmanager.latex_manager.compile_latex)
```
Compile a .tex file into a .pdf file using latex calls.
Parameters
---
texfile: String
Path to an existing .tex file.
paper: String
Paper size for output. For example, ApJ articles use letter
format, whereas A&A articles use A4 format.
Notes
---
This function executes the following calls:
- compute a bibfile out of the citation calls in the .tex file.
- removes all outputs from previous compilations (see clear_latex())
- calls latex, bibtex, latex, latex to produce a .dvi file
- calls dvips to produce a .ps file, redirecting the output to
ps2pdf to produce the final .pdf file.
```
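A usage sketch (the tex file name is hypothetical; the exact string accepted for paper may depend on your latex setup):
```
>>> import bibmanager.latex_manager as lm
>>> # Compile via latex/bibtex/dvips/ps2pdf with the default paper size:
>>> lm.compile_latex('paper.tex')
>>> # Request letter-sized output (e.g., for an ApJ-style manuscript):
>>> lm.compile_latex('paper.tex', paper='letter')
>>> # For a pdflatex workflow, see compile_pdflatex() below.
```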
`bibmanager.latex_manager.``compile_pdflatex`(*texfile*)[[source]](_modules/bibmanager/latex_manager/latex_manager.html#compile_pdflatex)[¶](#bibmanager.latex_manager.compile_pdflatex)
```
Compile a .tex file into a .pdf file using pdflatex calls.
Parameters
---
texfile: String
Path to an existing .tex file.
Notes
---
This function executes the following calls:
- compute a bibfile out of the citation calls in the .tex file.
- removes all outputs from previous compilations (see clear_latex())
- calls pdflatex, bibtex, pdflatex, pdflatex to produce a .pdf file
```
### bibmanager.ads_manager[¶](#module-bibmanager.ads_manager)
`bibmanager.ads_manager.``manager`(*query=None*)[[source]](_modules/bibmanager/ads_manager/ads_manager.html#manager)[¶](#bibmanager.ads_manager.manager)
```
A manager; it doesn't really do anything, it just delegates.
```
`bibmanager.ads_manager.``search`(*query*, *start=0*, *cache_rows=200*, *sort='pubdate+desc'*)[[source]](_modules/bibmanager/ads_manager/ads_manager.html#search)[¶](#bibmanager.ads_manager.search)
```
Make a query from ADS.
Parameters
---
query: String
A query string like an entry in the new ADS interface:
https://ui.adsabs.harvard.edu/
start: Integer
Starting index of entry to return.
cache_rows: Integer
Maximum number of entries to return.
sort: String
Sorting field and direction to use.
Returns
---
results: List of dicts
Query outputs between indices start and start+rows.
nmatch: Integer
Total number of entries matched by the query.
Resources
---
A comprehensive description of the query format:
- http://adsabs.github.io/help/search/
Description of the query parameters:
- https://github.com/adsabs/adsabs-dev-api/blob/master/Search_API.ipynb
Examples
---
>>> import bibmanager.ads_manager as am
>>> # Search entries by author (note the need for double quotes,
>>> # otherwise, the search might produce bogus results):
>>> query = 'author:"cubillos, p"'
>>> results, nmatch = am.search(query)
>>> # Search entries by first author:
>>> query = 'author:"^cubillos, p"'
>>> # Combine search by first author and year:
>>> query = 'author:"^cubillos, p" year:2017'
>>> # Restrict search to article-type entries:
>>> query = 'author:"^cubillos, p" property:article'
>>> # Restrict search to peer-reviewed articles:
>>> query = 'author:"^cubillos, p" property:refereed'
>>> # Attempt with invalid token:
>>> results, nmatch = am.search(query)
ValueError: Invalid ADS request: Unauthorized, check you have a valid ADS token.
>>> # Attempt with invalid query ('properties' instead of 'property'):
>>> results, nmatch = am.search('author:"^cubillos, p" properties:refereed')
ValueError: Invalid ADS request:
org.apache.solr.search.SyntaxError: org.apache.solr.common.SolrException: undefined field properties
```
`bibmanager.ads_manager.``display`(*results*, *start*, *index*, *rows*, *nmatch*, *short=True*)[[source]](_modules/bibmanager/ads_manager/ads_manager.html#display)[¶](#bibmanager.ads_manager.display)
```
Show on the prompt a list of entries from an ADS search.
Parameters
---
results: List of dicts
Subset of entries returned by a query.
start: Integer
Index assigned to first entry in results.
index: Integer
First index to display.
rows: Integer
Number of entries to display.
nmatch: Integer
Total number of entries corresponding to query (not necessarily
the number of entries in results).
short: Bool
Format for author list. If True, truncate with 'et al' after
the second author.
Examples
---
>>> import bibmanager.ads_manager as am
>>> start = index = 0
>>> rows = 20
>>> query = 'author:"^cubillos, p" property:refereed'
>>> results, nmatch = am.search(query, start=start)
>>> am.display(results, start, index, rows, nmatch)
```
`bibmanager.ads_manager.``add_bibtex`(*input_bibcodes*, *input_keys*, *eprints=[]*, *dois=[]*, *update_keys=True*, *base=None*, *tags=None*, *return_replacements=False*)[[source]](_modules/bibmanager/ads_manager/ads_manager.html#add_bibtex)[¶](#bibmanager.ads_manager.add_bibtex)
```
Add bibtex entries from a list of ADS bibcodes, with specified keys.
New entries that duplicate existing ones will replace the old entries without asking.
Parameters
---
input_bibcodes: List of strings
A list of ADS bibcodes.
input_keys: List of strings
BibTeX keys to assign to each bibcode.
eprints: List of strings
List of ArXiv IDs corresponding to the input bibcodes.
dois: List of strings
List of DOIs corresponding to the input bibcodes.
update_keys: Bool
If True, attempt to update keys of entries that were updated
from arxiv to published versions.
base: List of Bib() objects
If None, merge new entries into the bibmanager database.
If not None, merge new entries into base.
tags: Nested list of strings
The list of tags for each input bibcode.
return_replacements: Bool
If True, also return a dictionary of replaced keys.
Returns
---
bibs: List of Bib() objects
Updated list of BibTeX entries.
reps: Dict
A dictionary of replaced key names.
Examples
---
>>> import bibmanager.ads_manager as am
>>> # A successful add call:
>>> bibcodes = ['1925PhDT.........1P']
>>> keys = ['Payne1925phdStellarAtmospheres']
>>> am.add_bibtex(bibcodes, keys)
>>> # A failing add call:
>>> bibcodes = ['1925PhDT....X....1P']
>>> am.add_bibtex(bibcodes, keys)
Error: There were no entries found for the input bibcodes.
>>> # A successful add call with multiple entries:
>>> bibcodes = ['1925PhDT.........1P', '2018MNRAS.481.5286F']
>>> keys = ['Payne1925phdStellarAtmospheres', 'FolsomEtal2018mnrasHD219134']
>>> am.add_bibtex(bibcodes, keys)
>>> # A partially failing call will still add those that succeed:
>>> bibcodes = ['1925PhDT.....X...1P', '2018MNRAS.481.5286F']
>>> am.add_bibtex(bibcodes, keys)
Warning: bibcode '1925PhDT.....X...1P' not found.
```
`bibmanager.ads_manager.``update`(*update_keys=True*, *base=None*, *return_replacements=False*)[[source]](_modules/bibmanager/ads_manager/ads_manager.html#update)[¶](#bibmanager.ads_manager.update)
```
Do an ADS query by bibcode for all entries that have an ADS bibcode.
Replace old entries with the new ones. The main use of this function is to update arxiv versions of articles to their published versions.
Parameters
---
update_keys: Bool
If True, attempt to update keys of entries that were updated
from arxiv to published versions.
base: List of Bib() objects
The bibfile entries to update. If None, use the entries from
the bibmanager database as base.
return_replacements: Bool
If True, also return a dictionary of replaced keys.
Returns
---
reps: Dict
A dictionary of replaced key names.
```
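A usage sketch (this queries ADS, so it requires a valid ADS token):
```
>>> import bibmanager.ads_manager as am
>>> # Refresh all database entries that have an ADS bibcode, promoting
>>> # arxiv entries to their published versions when available:
>>> reps = am.update(return_replacements=True)
>>> # Keep the existing citation keys instead of renaming them:
>>> am.update(update_keys=False)
```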
`bibmanager.ads_manager.``key_update`(*key*, *bibcode*, *alternate_bibcode*)[[source]](_modules/bibmanager/ads_manager/ads_manager.html#key_update)[¶](#bibmanager.ads_manager.key_update)
```
Update a citation key, replacing the year and journal of its arxiv version with those of the published version.
This function will search and update the year in a key,
and the journal if the key contains the word 'arxiv' (case insensitive).
The function extracts the info from the old and new bibcodes.
ADS bibcode format: http://adsabs.github.io/help/actions/bibcode
Examples
---
>>> import bibmanager.ads_manager as am
>>> key = 'BeaulieuEtal2010arxivGJ436b'
>>> bibcode = '2011ApJ...731...16B'
>>> alternate_bibcode = '2010arXiv1007.0324B'
>>> new_key = am.key_update(key, bibcode, alternate_bibcode)
>>> print(f'{key}\n{new_key}')
BeaulieuEtal2010arxivGJ436b
BeaulieuEtal2011apjGJ436b
>>> key = 'CubillosEtal2018arXivRetrievals'
>>> bibcode = '2019A&A...550A.100B'
>>> alternate_bibcode = '2018arXiv123401234B'
>>> new_key = am.key_update(key, bibcode, alternate_bibcode)
>>> print(f'{key}\n{new_key}')
CubillosEtal2018arXivRetrievals
CubillosEtal2019aaRetrievals
```
### bibmanager.pdf_manager[¶](#module-bibmanager.pdf_manager)
`bibmanager.pdf_manager.``guess_name`(*bib*, *arxiv=False*)[[source]](_modules/bibmanager/pdf_manager/pdf_manager.html#guess_name)[¶](#bibmanager.pdf_manager.guess_name)
```
Guess a PDF filename for a BibTex entry. Include at least author and year. If the entry has a bibcode, include journal info.
Parameters
---
bib: A Bib() instance
BibTex entry to generate a PDF filename for.
arxiv: Bool
True if this PDF comes from ArXiv. If so, prepend 'arxiv_' into
the output name.
Returns
---
guess_filename: String
Suggested name for a PDF file of the entry.
Examples
---
>>> import bibmanager.bib_manager as bm
>>> import bibmanager.pdf_manager as pm
>>> bibs = bm.load()
>>> # Entry without bibcode:
>>> bib = bm.Bib('''@misc{AASteam2016aastex61,
>>> author = {{AAS Journals Team} and {Hendrickson}, A.},
>>> title = {AASJournals/AASTeX60: Version 6.1},
>>> year = 2016,
>>> }''')
>>> print(pm.guess_name(bib))
AASJournalsTeam2016.pdf
>>> # Entry with bibcode:
>>> bib = bm.Bib('''@ARTICLE{HuangEtal2014jqsrtCO2,
>>> author = {{<NAME>)}, Xinchuan and {Gamache}, <NAME>.},
>>> title = "{Reliable infrared line lists for 13 CO$_{2}$}",
>>> year = "2014",
>>> adsurl = {https://ui.adsabs.harvard.edu/abs/2014JQSRT.147..134H},
>>> }''')
>>> print(pm.guess_name(bib))
Huang2014_JQSRT_147_134.pdf
>>> # Say, we are querying from ArXiv:
>>> print(pm.guess_name(bib, arxiv=True))
Huang2014_arxiv_JQSRT_147_134.pdf
```
`bibmanager.pdf_manager.``open`(*pdf=None*, *key=None*, *bibcode=None*, *pdf_file=None*)[[source]](_modules/bibmanager/pdf_manager/pdf_manager.html#open)[¶](#bibmanager.pdf_manager.open)
```
Open the PDF file associated to the entry matching the input key or bibcode argument.
Parameters
---
pdf: String
PDF file to open. This refers to a filename located in
home/pdf/. Thus, it should not contain the file path.
key: String
Key of Bibtex entry to open its PDF (ignored if pdf is not None).
bibcode: String
Bibcode of Bibtex entry to open its PDF (ignored if pdf or key
is not None).
pdf_file: String
Absolute path to PDF file to open. If not None, this argument
takes precedence over pdf, key, and bibcode.
```
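A usage sketch (the key and bibcode come from the examples above; the PDF filename is hypothetical):
```
>>> import bibmanager.pdf_manager as pm
>>> # Open by citation key:
>>> pm.open(key='Payne1925phdStellarAtmospheres')
>>> # Open by ADS bibcode:
>>> pm.open(bibcode='1925PhDT.........1P')
>>> # Open a file stored in the database's pdf/ folder by its name:
>>> pm.open(pdf='Payne1925_thesis.pdf')
```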
`bibmanager.pdf_manager.``set_pdf`(*bib*, *pdf=None*, *bin_pdf=None*, *filename=None*, *arxiv=False*, *replace=False*)[[source]](_modules/bibmanager/pdf_manager/pdf_manager.html#set_pdf)[¶](#bibmanager.pdf_manager.set_pdf)
```
Update the PDF file of the given BibTex entry in the database. If pdf is not None, move the file into the database pdf folder.
Parameters
---
bib: String or Bib() instance
Entry to be updated (must exist in the Bibmanager database).
If string, the ADS bibcode or key ID of the entry.
pdf: String
Path to an existing PDF file.
Only one of pdf and bin_pdf must be not None.
bin_pdf: String
PDF content in binary format (e.g., as in req.content).
Only one of pdf and bin_pdf must be not None.
arxiv: Bool
Flag indicating the source of the PDF. If True, insert
'arxiv' into a guessed name.
filename: String
Filename to assign to the PDF file. If None, take name from
pdf input argument, or else from guess_name().
replace: Bool
Replace without asking if the entry already has a PDF assigned;
else, ask the user.
Returns
---
filename: String
If bib.pdf is not None at the end of this operation,
return the absolute path to the bib.pdf file (even if this points
to a pre-existing file).
Else, return None.
```
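A usage sketch (the entry key comes from the examples above; the PDF path is hypothetical):
```
>>> import bibmanager.pdf_manager as pm
>>> # Attach a local PDF to an existing database entry, identified by key;
>>> # replace=True avoids the interactive prompt if a PDF is already set:
>>> filename = pm.set_pdf(
>>>     'Payne1925phdStellarAtmospheres',
>>>     pdf='/home/user/Downloads/payne_thesis.pdf', replace=True)
```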
`bibmanager.pdf_manager.``request_ads`(*bibcode*, *source='journal'*)[[source]](_modules/bibmanager/pdf_manager/pdf_manager.html#request_ads)[¶](#bibmanager.pdf_manager.request_ads)
```
Request a PDF from ADS.
Parameters
---
bibcode: String
ADS bibcode of entry to request PDF.
source: String
Flag to indicate from which source make the request.
Choose between: 'journal', 'ads', or 'arxiv'.
Returns
---
req: requests.Response instance
The server's response to the HTTP request.
Return None if it failed to establish a connection.
Note
---
If the request succeeded, but the response content is not a PDF,
this function modifies the value of req.status_code (in a desperate attempt to give a meaningful answer).
Examples
---
>>> import bibmanager.pdf_manager as pm
>>> bibcode = '2017AJ....153....3C'
>>> req = pm.request_ads(bibcode)
>>> # On successful request, you can save the PDF file as, e.g.:
>>> with open('fetched_file.pdf', 'wb') as f:
>>> f.write(req.content)
>>> # Nature articles are not directly accessible from Journal:
>>> bibcode = '2018NatAs...2..220D'
>>> req = pm.request_ads(bibcode)
Request failed with status code 404: NOT FOUND
>>> # Get ArXiv instead:
>>> req = pm.request_ads(bibcode, source='arxiv')
```
`bibmanager.pdf_manager.``fetch`(*bibcode*, *filename=None*, *replace=None*)[[source]](_modules/bibmanager/pdf_manager/pdf_manager.html#fetch)[¶](#bibmanager.pdf_manager.fetch)
```
Attempt to fetch a PDF file from ADS. If successful, then add it into the database. If the fetch succeeds but the bibcode is not in the database, download file to current folder.
Parameters
---
bibcode: String
ADS bibcode of entry to update.
filename: String
Filename to assign to the PDF file. If None, get from
guess_name() function.
replace: Bool
If True, enforce replacing a PDF regardless of a pre-existing one.
If None (default), only ask when fetched PDF comes from arxiv.
Returns
---
filename: String
If successful, return the full path of the file name.
If not, return None.
```
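A usage sketch (the bibcodes come from the examples above; the explicit filename is hypothetical):
```
>>> import bibmanager.pdf_manager as pm
>>> # Fetch the PDF of an entry in the database; on success the file is
>>> # stored in the database's pdf/ folder:
>>> filename = pm.fetch('1925PhDT.........1P')
>>> # Fetch with an explicit filename, replacing any pre-existing PDF:
>>> filename = pm.fetch('2017AJ....153....3C',
>>>     filename='Cubillos2017_AJ_153_3.pdf', replace=True)
```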
### bibmanager.utils[¶](#module-bibmanager.utils)
`bibmanager.utils.``HOME`[¶](#bibmanager.utils.HOME)
```
os.path.expanduser('~') + '/.bibmanager/'
```
`bibmanager.utils.``ROOT`[¶](#bibmanager.utils.ROOT)
```
os.path.realpath(os.path.dirname(__file__) + '/..') + '/'
```
`bibmanager.utils.``BOLD`[¶](#bibmanager.utils.BOLD)
```
'\x1b[1m'
```
`bibmanager.utils.``END`[¶](#bibmanager.utils.END)
```
'\x1b[0m'
```
`bibmanager.utils.``BANNER`[¶](#bibmanager.utils.BANNER)
```
'\n::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::\n'
```
`bibmanager.utils.``ads_keywords`[¶](#bibmanager.utils.ads_keywords)
```
['author:"^"', 'author:""', 'year:', 'title:""', 'abstract:""', 'property:refereed', 'property:article', 'abs:""', 'ack:""', 'aff:""', 'arXiv:', 'arxiv_class:""', 'bibcode:', 'bibgroup:""', 'bibstem:', 'body:""', 'citations()', 'copyright:', 'data:""', 'database:astronomy', 'database:physics', 'doctype:abstract', 'doctype:article', 'doctype:book', 'doctype:bookreview', 'doctype:catalog', 'doctype:circular', 'doctype:eprint', 'doctype:erratum', 'doctype:inproceedings', 'doctype:inbook', 'doctype:mastersthesis', 'doctype:misc', 'doctype:newsletter', 'doctype:obituary', 'doctype:phdthesis', 'doctype:pressrelease', 'doctype:proceedings', 'doctype:proposal', 'doctype:software', 'doctype:talk', 'doctype:techreport', 'doi:', 'full:""', 'grant:', 'identifier:""', 'issue:', 'keyword:""', 'lang:""', 'object:""', 'orcid:', 'page:', 'property:ads_openaccess', 'property:eprint', 'property:eprint_openaccess', 'property:inproceedings', 'property:non_article', 'property:notrefereed', 'property:ocrabstract', 'property:openaccess', 'property:pub_openaccess', 'property:software', 'references()', 'reviews()', 'similar()', 'topn()', 'trending()', 'useful()', 'vizier:""', 'volume:']
```
`bibmanager.utils.``BM_DATABASE`()[[source]](_modules/bibmanager/utils/utils.html#BM_DATABASE)[¶](#bibmanager.utils.BM_DATABASE)
```
The database of BibTex entries
```
`bibmanager.utils.``BM_BIBFILE`()[[source]](_modules/bibmanager/utils/utils.html#BM_BIBFILE)[¶](#bibmanager.utils.BM_BIBFILE)
```
Bibfile representation of the database
```
`bibmanager.utils.``BM_TMP_BIB`()[[source]](_modules/bibmanager/utils/utils.html#BM_TMP_BIB)[¶](#bibmanager.utils.BM_TMP_BIB)
```
Temporary bibfile database for editing
```
`bibmanager.utils.``BM_CACHE`()[[source]](_modules/bibmanager/utils/utils.html#BM_CACHE)[¶](#bibmanager.utils.BM_CACHE)
```
ADS queries cache
```
`bibmanager.utils.``BM_HISTORY_SEARCH`()[[source]](_modules/bibmanager/utils/utils.html#BM_HISTORY_SEARCH)[¶](#bibmanager.utils.BM_HISTORY_SEARCH)
```
Search history
```
`bibmanager.utils.``BM_HISTORY_ADS`()[[source]](_modules/bibmanager/utils/utils.html#BM_HISTORY_ADS)[¶](#bibmanager.utils.BM_HISTORY_ADS)
```
ADS search history
```
`bibmanager.utils.``BM_HISTORY_PDF`()[[source]](_modules/bibmanager/utils/utils.html#BM_HISTORY_PDF)[¶](#bibmanager.utils.BM_HISTORY_PDF)
```
PDF search history
```
`bibmanager.utils.``BM_HISTORY_TAGS`()[[source]](_modules/bibmanager/utils/utils.html#BM_HISTORY_TAGS)[¶](#bibmanager.utils.BM_HISTORY_TAGS)
```
Tags search history
```
`bibmanager.utils.``BM_PDF`()[[source]](_modules/bibmanager/utils/utils.html#BM_PDF)[¶](#bibmanager.utils.BM_PDF)
```
Folder for PDF files of the BibTex entries
```
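These helpers resolve paths from the current bibmanager configuration; a rough sketch of how they can be queried (actual values depend on your setup):
```
>>> import bibmanager.utils as u
>>> # Paths under the configured bibmanager home directory:
>>> database_file = u.BM_DATABASE()
>>> bibfile = u.BM_BIBFILE()
>>> pdf_dir = u.BM_PDF()
```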
*class* `bibmanager.utils.``Author`(*last*, *first*, *von*, *jr*)[¶](#bibmanager.utils.Author)
```
Author(last, first, von, jr)
Initialize self. See help(type(self)) for accurate signature.
```
`count`(*value*, */*)[¶](#bibmanager.utils.Author.count)
```
Return number of occurrences of value.
```
`index`(*value*, *start=0*, *stop=9223372036854775807*, */*)[¶](#bibmanager.utils.Author.index)
```
Return first index of value.
Raises ValueError if the value is not present.
```
*class* `bibmanager.utils.``Sort_author`(*last*, *first*, *von*, *jr*, *year*, *month*)[¶](#bibmanager.utils.Sort_author)
```
Sort_author(last, first, von, jr, year, month)
Initialize self. See help(type(self)) for accurate signature.
```
`count`(*value*, */*)[¶](#bibmanager.utils.Sort_author.count)
```
Return number of occurrences of value.
```
`index`(*value*, *start=0*, *stop=9223372036854775807*, */*)[¶](#bibmanager.utils.Sort_author.index)
```
Return first index of value.
Raises ValueError if the value is not present.
```
`bibmanager.utils.``ignored`(**exceptions*)[[source]](_modules/bibmanager/utils/utils.html#ignored)[¶](#bibmanager.utils.ignored)
```
Context manager to ignore exceptions. Taken from here:
https://www.youtube.com/watch?v=anrOzOapJ2E
```
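A usage sketch (the temporary file name is hypothetical):
```
>>> import os
>>> import bibmanager.utils as u
>>> # Silently skip the error if the file does not exist:
>>> with u.ignored(OSError):
>>>     os.remove('no_such_file.tmp')
```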
`bibmanager.utils.``cd`(*newdir*)[[source]](_modules/bibmanager/utils/utils.html#cd)[¶](#bibmanager.utils.cd)
```
Context manager for changing the current working directory.
Taken from here: https://stackoverflow.com/questions/431684/
```
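A usage sketch:
```
>>> import os
>>> import bibmanager.utils as u
>>> # Temporarily work inside /tmp, then return to the original directory:
>>> with u.cd('/tmp'):
>>>     print(os.getcwd())
```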
`bibmanager.utils.``ordinal`(*number*)[[source]](_modules/bibmanager/utils/utils.html#ordinal)[¶](#bibmanager.utils.ordinal)
```
Get ordinal string representation for input number(s).
Parameters
---
number: Integer or 1D integer ndarray
An integer or array of integers.
Returns
---
ord: String or List of strings
Ordinal representation of input number(s). Return a string if
input is int; else, return a list of strings.
Examples
---
>>> from bibmanager.utils import ordinal
>>> print(ordinal(1))
1st
>>> print(ordinal(2))
2nd
>>> print(ordinal(11))
11th
>>> print(ordinal(111))
111th
>>> print(ordinal(121))
121st
>>> print(ordinal(np.arange(1,6)))
['1st', '2nd', '3rd', '4th', '5th']
```
`bibmanager.utils.``count`(*text*)[[source]](_modules/bibmanager/utils/utils.html#count)[¶](#bibmanager.utils.count)
```
Count net number of braces in text (add 1 for each opening brace,
subtract one for each closing brace).
Parameters
---
text: String
A string.
Returns
---
counts: Integer
Net number of braces.
Examples
---
>>> from bibmanager.utils import count
>>> count('{Hello} world')
0
```
`bibmanager.utils.``nest`(*text*)[[source]](_modules/bibmanager/utils/utils.html#nest)[¶](#bibmanager.utils.nest)
```
Get braces nesting level for each character in text.
Parameters
---
text: String
String to inspect.
Returns
---
counts: 1D integer list
Braces nesting level for each character.
Examples
---
>>> from bibmanager.utils import nest
>>> s = "{{P\\'erez}, F. and {Granger}, B.~E.},"
>>> n = nest(s)
>>> print(f"{s}\n{''.join([str(v) for v in n])}")
{{P\'erez}, F. and {Granger}, B.~E.},
0122222222111111111122222222111111110
```
`bibmanager.utils.``cond_split`(*text*, *pattern*, *nested=None*, *nlev=-1*, *ret_nests=False*)[[source]](_modules/bibmanager/utils/utils.html#cond_split)[¶](#bibmanager.utils.cond_split)
```
Conditional find and split strings in a text delimited by all occurrences of pattern where the brace-nested level is nlev.
Parameters
---
text: String
String where to search for pattern.
pattern: String
A regex pattern to search.
nested: 1D integer iterable
Braces nesting level of characters in text.
nlev: Integer
Required nested level to accept pattern match.
ret_nests: Bool
If True, return a list with the arrays of nested level for each
of the returned substrings.
Returns
---
substrings: List of strings
List of strings delimited by the accepted pattern matches.
nests: List of integer ndarrays [optional]
nested level for substrings.
Examples
---
>>> from bibmanager.utils import cond_split
>>> # Split an author list string delimited by ' and ' pattern:
>>> cond_split("{P\\'erez}, F. and {Granger}, B.~E.", " and ")
["{P\\'erez}, F.", '{Granger}, B.~E.']
>>> # Protected instances (within braces) won't count:
>>> cond_split("{AAS and Astropy Teams} and {Hendrickson}, A.", " and ")
['{AAS and Astropy Teams}', '{Hendrickson}, A.']
>>> # Matches at the beginning or end do not count for split:
>>> cond_split(",Jones, Oliphant, Peterson,", ",")
['Jones', ' Oliphant', ' Peterson']
>>> # But two consecutive matches do return an empty string:
>>> cond_split("Jones,, Peterson", ",")
['Jones', '', ' Peterson']
```
`bibmanager.utils.``cond_next`(*text*, *pattern*, *nested*, *nlev=1*)[[source]](_modules/bibmanager/utils/utils.html#cond_next)[¶](#bibmanager.utils.cond_next)
```
Find next instance of pattern in text where nested is nlev.
Parameters
---
text: String
Text where to search for regex.
pattern: String
Regular expression to search for.
nested: 1D integer iterable
Braces-nesting level of characters in text.
nlev: Integer
Requested nested level.
Returns
---
Index integer of pattern in text. If not found, return the
index of the last character in text.
Examples
---
>>> from bibmanager.utils import nest, cond_next
>>> text = '"{{HITEMP}, the high-temperature molecular database}",'
>>> nested = nest(text)
>>> # Ignore comma within braces:
>>> cond_next(text, ",", nested, nlev=0)
53
```
`bibmanager.utils.``find_closing_bracket`(*text*, *start_pos=0*, *get_open=False*)[[source]](_modules/bibmanager/utils/utils.html#find_closing_bracket)[¶](#bibmanager.utils.find_closing_bracket)
```
Find the closing bracket that matches the nearest opening bracket in text starting from start_pos.
Parameters
---
text: String
Text to search through.
start_pos: Integer
Starting position where to start looking for the brackets.
get_open: Bool
If True, return a tuple with the position of both
opening and closing brackets.
Returns
---
end_pos: Integer
The absolute position of the matching closing bracket in text.
Returns None if there are no matching brackets.
Examples
---
>>> import bibmanager.utils as u
>>> text = '@ARTICLE{key, author={last_name}, title={The Title}}'
>>> end_pos = u.find_closing_bracket(text)
>>> print(text[:end_pos+1])
@ARTICLE{key, author={last_name}, title={The Title}}
>>> start_pos = 14
>>> end_pos = u.find_closing_bracket(text, start_pos=start_pos)
>>> print(text[start_pos:end_pos+1])
author={last_name}
```
`bibmanager.utils.``parse_name`(*name*, *nested=None*, *key=None*)[[source]](_modules/bibmanager/utils/utils.html#parse_name)[¶](#bibmanager.utils.parse_name)
```
Parse first, last, von, and jr parts from a name, following these rules:
http://mirror.easyname.at/ctan/info/bibtex/tamethebeast/ttb_en.pdf Page 23.
Parameters
---
name: String
A name following the BibTeX format.
nested: 1D integer ndarray
Nested level of characters in name.
key: String
The entry that contains this author name (to display in case of
a warning).
Returns
---
author: Author namedtuple
Four element tuple with the parsed name.
Examples
---
>>> from bibmanager.utils import parse_name
>>> names = ['{Hendrickson}, A.',
>>> '<NAME>',
>>> '{AAS Journals Team}',
>>> "St{\\'{e}}<NAME>"]
>>> for name in names:
>>> print(f'{repr(name)}:\n{parse_name(name)}\n')
'{<NAME>.':
Author(last='{Hendrickson}', first='A.', von='', jr='')
'<NAME>':
Author(last='Jones', first='Eric', von='', jr='')
'{AAS Journals Team}':
Author(last='{AAS Journals Team}', first='', von='', jr='')
"St{\\'{e}}<NAME>":
Author(last='Walt', first="St{\\'{e}}fan", von='<NAME>', jr='')
```
`bibmanager.utils.``repr_author`(*Author*)[[source]](_modules/bibmanager/utils/utils.html#repr_author)[¶](#bibmanager.utils.repr_author)
```
Get string representation of an Author namedtuple in the format:
von Last, jr., First.
Parameters
---
Author: An Author() namedtuple
An author name.
Examples
---
>>> from bibmanager.utils import repr_author, parse_name
>>> names = ['Last', 'First Last', 'First von Last', 'von Last, First',
>>> 'von Last, sr., First']
>>> for name in names:
>>> print(f"{name!r:22}: {repr_author(parse_name(name))}")
'Last' : Last
'First Last' : Last, First
'First von Last' : von Last, First
'von Last, First' : von Last, First
'von Last, sr., First': von Last, sr., First
```
`bibmanager.utils.``purify`(*name*, *german=False*)[[source]](_modules/bibmanager/utils/utils.html#purify)[¶](#bibmanager.utils.purify)
```
Replace accented characters closely following these rules:
https://tex.stackexchange.com/questions/57743/
For a more complete list of special characters, see Table 2.2 of
'The Not so Short Introduction to LaTeX2e' by Oetiker et al. (2008).
Parameters
---
name: String
Name to be 'purified'.
german: Bool
Replace umlaut with german style (append 'e' after).
Returns
---
Lower-cased name without accent characters.
Examples
---
>>> from bibmanager.utils import purify
>>> names = ["St{\\'{e}}fan",
>>>          "{{\\v S}ime{\\v c}kov{\\'a}}",
>>>          "{AAS Journals Team}",
>>>          "Kov{\\'a}{\\v r}{\\'i}k",
>>>          "Jarom{\\'i}<NAME>{\\'a\\v r\\'i}k",
>>>          "{\\.I}volgin",
>>>          "Gon{\\c c}alez Nu{\~n}ez",
>>>          "Knausg{\\aa}rd Sm{\\o}rrebr{\\o}d",
>>>          'Schr{\\"o}ding<NAME>{\\ss}er']
>>> for name in names:
>>> print(f"{name!r:35}: {purify(name)}")
"St{\\'{e}}fan" : stefan
"{{\\v S}ime{\\v c}kov{\\'a}}" : simeckova
'{AAS Journals Team}' : aas journals team
"Kov{\\'a}{\\v r}{\\'i}k" : kovarik
"Jarom{\\'i}<NAME>{\\'a\\v r\\'i}k" : jaromir kovarik
'{\\.I}volgin' : ivolgin
'Gon{\\c c}<NAME>{\\~n}ez' : <NAME>
'Knausg{\\aa}rd Sm{\\o}rrebr{\\o}d' : knausgaard smorrebrod
'Schr{\\"o}<NAME>{\\ss}er' : schrodinger besser
```
`bibmanager.utils.``initials`(*name*)[[source]](_modules/bibmanager/utils/utils.html#initials)[¶](#bibmanager.utils.initials)
```
Get initials from a name.
Parameters
---
name: String
A name.
Returns
---
initials: String
Name initials (lower cased).
Examples
---
>>> from bibmanager.utils import initials
>>> names = ["", "D.", "<NAME>.", "G.O.", '{\\"O}. H.', "<NAME>.",
>>> "Phil", "<NAME>"]
>>> for name in names:
>>> print(f"{name!r:20}: {initials(name)!r}")
'' : ''
'D.' : 'd'
'D. W.' : 'dw'
'G.O.' : 'g'
'{\\"O}. H.' : 'oh'
'J. Y.-K.' : 'jyk'
'Phil' : 'p'
'<NAME>' : 'phs'
>>> # 'G.O.' is a typo by the user, should have had a blank in between.
```
`bibmanager.utils.``get_authors`(*authors*, *format='long'*)[[source]](_modules/bibmanager/utils/utils.html#get_authors)[¶](#bibmanager.utils.get_authors)
```
Get string representation for the author list.
Parameters
---
authors: List of Author() namedtuples
format: String
If format='ushort', display only the first author's last name,
followed by a '+' if there are more authors.
If format='short', display at most the first two authors, followed
by 'et al.' if applicable.
Else, display the full list of authors.
Returns
---
author_list: String
String representation of the author list in the requested format.
Examples
---
>>> from bibmanager.utils import get_authors, parse_name
>>> author_lists = [
>>> [parse_name('{Hunter}, J. D.')],
>>> [parse_name('{AAS Journals Team}'), parse_name('{Hendrickson}, A.')],
>>> [parse_name('<NAME>'), parse_name('<NAME>'),
>>> parse_name('<NAME>')]
>>> ]
>>> # Ultra-short format:
>>> for i,authors in enumerate(author_lists):
>>> print(f"{i+1} author(s): {get_authors(authors, format='ushort')}")
1 author(s): Hunter
2 author(s): AAS Journals Team+
3 author(s): Jones+
>>> # Short format:
>>> for i,authors in enumerate(author_lists):
>>> print(f"{i+1} author(s): {get_authors(authors, format='short')}")
1 author(s): {Hunter}, <NAME>.
2 author(s): {AAS Journals Team} and {Hendrickson}, A.
3 author(s): Jones, Eric; et al.
>>> # Long format:
>>> for i,authors in enumerate(author_lists):
>>> print(f"{i+1} author(s): {get_authors(authors)}")
1 author(s): {Hunter}, <NAME>.
2 author(s): {AAS Journals Team} and {Hendrickson}, A.
3 author(s): <NAME>; <NAME>; and <NAME>
```
`bibmanager.utils.``next_char`(*text*)[[source]](_modules/bibmanager/utils/utils.html#next_char)[¶](#bibmanager.utils.next_char)
```
Get index of next non-blank character in string text.
Return zero if all characters are blanks.
Parameters
---
text: String
A string, duh!
Examples
---
>>> from bibmanager.utils import next_char
>>> texts = ["Hello", " Hello", " Hello ", "", "\n Hello", " "]
>>> for text in texts:
>>> print(f"{text!r:11}: {next_char(text)}")
'Hello' : 0
' Hello' : 2
' Hello ' : 2
'' : 0
'\n Hello' : 2
' ' : 0
```
`bibmanager.utils.``last_char`(*text*)[[source]](_modules/bibmanager/utils/utils.html#last_char)[¶](#bibmanager.utils.last_char)
```
Get index of last non-blank character in string text.
Parameters
---
text: String
Any string.
Returns
---
index: Integer
Index of last non-blank character.
Examples
---
>>> from bibmanager.utils import last_char
>>> texts = ["Hello", " Hello", " Hello ", "", "\n Hello", " "]
>>> for text in texts:
>>> print(f"{text!r:12}: {last_char(text)}")
'Hello' : 5
' Hello' : 7
' Hello ' : 7
'' : 0
'\n Hello' : 7
' ' : 0
```
`bibmanager.utils.``get_fields`(*entry*)[[source]](_modules/bibmanager/utils/utils.html#get_fields)[¶](#bibmanager.utils.get_fields)
```
Generator to parse entries of a bibliographic entry.
Parameters
---
entry: String
A bibliographic entry text.
Yields
---
The first yield is the entry's key. All following yields are three-element tuples containing a field name, field value, and nested level of the field value.
Notes
---
Global quotations or braces on a value are removed before yielding.
Example
---
>>> from bibmanager.utils import get_fields
>>> entry = '''
@Article{Hunter2007ieeeMatplotlib,
Author = {{Hunter}, <NAME>.},
Title = {Matplotlib: A 2D graphics environment},
Journal = {Computing In Science \& Engineering},
Volume = {9},
Number = {3},
Pages = {90--95},
publisher = {IEEE COMPUTER SOC},
doi = {10.1109/MCSE.2007.55},
year = 2007
}'''
>>> fields = get_fields(entry)
>>> # Get the entry's key:
>>> print(next(fields))
Hunter2007ieeeMatplotlib
>>> # Now get the fields, values, and nested level:
>>> for key, value, nested in fields:
>>> print(f"{key:9}: {value}\n{'':11}{''.join([str(v) for v in nested])}")
author   : {Hunter}, <NAME>.
           233333332222222
title    : Matplotlib: A 2D graphics environment
           2222222222222222222222222222222222222
journal  : Computing In Science \& Engineering
           22222222222222222222222222222222222
volume   : 9
           2
number   : 3
           2
pages    : 90--95
           222222
publisher: IEEE COMPUTER SOC
           22222222222222222
doi      : 10.1109/MCSE.2007.55
           22222222222222222222
year     : 2007
           1111
```
`bibmanager.utils.``req_input`(*prompt*, *options*)[[source]](_modules/bibmanager/utils/utils.html#req_input)[¶](#bibmanager.utils.req_input)
```
Query for an answer to prompt message until the user provides a valid input (i.e., answer is in options).
Parameters
---
prompt: String
Prompt text for input()'s argument.
options: List
List of options to accept. Elements in list are cast into strings.
Returns
---
answer: String
The user's input.
Examples
---
>>> from bibmanager.utils import req_input
>>> req_input('Enter number between 0 and 9: ', options=np.arange(10))
>>> # Enter the number 10:
Enter number between 0 and 9: 10
>>> # Now enter the number 5:
Not a valid input. Try again: 5
'5'
```
`bibmanager.utils.``warnings_format`(*message*, *category*, *filename*, *lineno*, *file=None*, *line=None*)[[source]](_modules/bibmanager/utils/utils.html#warnings_format)[¶](#bibmanager.utils.warnings_format)
```
Custom format for warnings.
```
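A sketch of how such a formatter can be installed through Python's standard warnings hook (illustrative; this is an assumption about intended usage, not part of the documented API):
```
>>> import warnings
>>> import bibmanager.utils as u
>>> # Route warnings through the custom format:
>>> warnings.formatwarning = u.warnings_format
>>> warnings.warn("This warning will use the custom format.")
```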
`bibmanager.utils.``tokenizer`(*attribute*, *value*, *value_token=Token.Literal.String*)[[source]](_modules/bibmanager/utils/utils.html#tokenizer)[¶](#bibmanager.utils.tokenizer)
```
Shortcut to generate formatted-text tokens for attribute-value texts.
The attribute is set in a Token.Name.Attribute style, followed
by a colon (Token.Punctuation style), and followed by the value
(in value_token style).
Parameters
---
attribute: String
Name of the attribute.
value: String
The attribute's value.
value_token: a pygments.token object
The style for the attribute's value.
Returns
---
tokens: List of (style, text) tuples.
Tuples that can later be fed into a FormattedText() or
other prompt_toolkit text formatting calls.
Examples
---
>>> import bibmanager.utils as u
>>> tokens = u.tokenizer('Title', 'Synthesis of the Elements in Stars')
>>> print(tokens)
[(Token.Name.Attribute, 'Title'),
(Token.Punctuation, ': '),
(Token.Literal.String, 'Synthesis of the Elements in Stars'),
(Token.Text, '\n')]
>>> # Pretty printing:
>>> import prompt_toolkit
>>> from prompt_toolkit.formatted_text import PygmentsTokens
>>> from pygments.styles import get_style_by_name
>>> style = prompt_toolkit.styles.style_from_pygments_cls(
>>> get_style_by_name('autumn'))
>>> prompt_toolkit.print_formatted_text(
>>> PygmentsTokens(tokens), style=style)
Title: Synthesis of the Elements in Stars
```
`bibmanager.utils.``parse_search`(*input_text*)[[source]](_modules/bibmanager/utils/utils.html#parse_search)[¶](#bibmanager.utils.parse_search)
```
Parse field-value sets from an input string, which is then passed to bm.search(). The format is the same as in ADS, and it should be 'intuitive' given the auto-complete functionality; for documentation purposes, see the examples below.
Parameters
---
input_text: String
A user-input search string.
Returns
---
matches: List of Bib() objects
Entries that match all input criteria.
Examples
---
>>> # First-author: contain the '^' char and value in quotes:
>>> matches = u.parse_search('author:"^<NAME>"')
>>> # Author or Title: value should be in quotes:
>>> matches = u.parse_search('author:"<NAME>"')
>>> # Specific year:
>>> matches = u.parse_search('year: 1984')
>>> # Year range:
>>> matches = u.parse_search('year: 1984-2004')
>>> # Open-ended year range (starting from, up to):
>>> matches = u.parse_search('year: 1984-')
>>> matches = u.parse_search('year: -1984')
>>> # key, bibcode, and tags don't need quotes:
>>> matches = u.parse_search('key: Payne1925phdStellarAtmospheres')
>>> matches = u.parse_search('bibcode: 1925PhDT.........1P')
>>> matches = u.parse_search('tags: stars')
>>> # Multiple fields can certainly be combined:
>>> matches = u.parse_search('author:"<NAME>" year:1925-1930')
```
*class* `bibmanager.utils.``DynamicKeywordCompleter`(*key_words*)[[source]](_modules/bibmanager/utils/utils.html#DynamicKeywordCompleter)[¶](#bibmanager.utils.DynamicKeywordCompleter)
```
Provide tab-completion for keys and words in corresponding key.
Initialize self. See help(type(self)) for accurate signature.
```
`get_completions`(*document*, *complete_event*)[[source]](_modules/bibmanager/utils/utils.html#DynamicKeywordCompleter.get_completions)[¶](#bibmanager.utils.DynamicKeywordCompleter.get_completions)
```
Get right key/option completions.
```
`get_completions_async`(*document: prompt_toolkit.document.Document*, *complete_event: prompt_toolkit.completion.base.CompleteEvent*) → AsyncGenerator[prompt_toolkit.completion.base.Completion, NoneType][¶](#bibmanager.utils.DynamicKeywordCompleter.get_completions_async)
```
Asynchronous generator for completions. (Probably, you won't have to override this.)
Asynchronous generator of :class:`.Completion` objects.
```
*class* `bibmanager.utils.``DynamicKeywordSuggester`[[source]](_modules/bibmanager/utils/utils.html#DynamicKeywordSuggester)[¶](#bibmanager.utils.DynamicKeywordSuggester)
```
Give dynamic suggestions as in DynamicKeywordCompleter.
Initialize self. See help(type(self)) for accurate signature.
```
`get_suggestion`(*buffer*, *document*)[[source]](_modules/bibmanager/utils/utils.html#DynamicKeywordSuggester.get_suggestion)[¶](#bibmanager.utils.DynamicKeywordSuggester.get_suggestion)
```
Return `None` or a :class:`.Suggestion` instance.
We receive both :class:`~prompt_toolkit.buffer.Buffer` and
:class:`~prompt_toolkit.document.Document`. The reason is that auto suggestions are retrieved asynchronously. (Like completions.) The buffer text could be changed in the meantime, but ``document`` contains the buffer document like it was at the start of the auto suggestion call. So, from here, don't access ``buffer.text``, but use
``document.text`` instead.
:param buffer: The :class:`~prompt_toolkit.buffer.Buffer` instance.
:param document: The :class:`~prompt_toolkit.document.Document` instance.
```
`get_suggestion_async`(*buff: 'Buffer'*, *document: prompt_toolkit.document.Document*) → Optional[prompt_toolkit.auto_suggest.Suggestion][¶](#bibmanager.utils.DynamicKeywordSuggester.get_suggestion_async)
```
Return a :class:`.Future` which is set when the suggestions are ready.
This function can be overloaded in order to provide an asynchronous implementation.
```
*class* `bibmanager.utils.``KeyWordCompleter`(*words*, *bibs*)[[source]](_modules/bibmanager/utils/utils.html#KeyWordCompleter)[¶](#bibmanager.utils.KeyWordCompleter)
```
Simple autocompletion on a list of words.
:param words: List of words or callable that returns a list of words.
:param ignore_case: If True, case-insensitive completion.
:param meta_dict: Optional dict mapping words to their meta-text. (This
should map strings to strings or formatted text.)
:param WORD: When True, use WORD characters.
:param sentence: When True, don't complete by comparing the word before the
cursor, but by comparing all the text before the cursor. In this case,
the list of words is just a list of strings, where each string can
contain spaces. (Can not be used together with the WORD option.)
:param match_middle: When True, match not only the start, but also in the
middle of the word.
:param pattern: Optional compiled regex for finding the word before
the cursor to complete. When given, use this regex pattern instead of
default one (see document._FIND_WORD_RE)
Initialize self. See help(type(self)) for accurate signature.
```
`get_completions`(*document*, *complete_event*)[[source]](_modules/bibmanager/utils/utils.html#KeyWordCompleter.get_completions)[¶](#bibmanager.utils.KeyWordCompleter.get_completions)
```
Get right key/option completions.
```
`get_completions_async`(*document: prompt_toolkit.document.Document*, *complete_event: prompt_toolkit.completion.base.CompleteEvent*) → AsyncGenerator[prompt_toolkit.completion.base.Completion, NoneType][¶](#bibmanager.utils.KeyWordCompleter.get_completions_async)
```
Asynchronous generator for completions. (Probably, you won't have to override this.)
Asynchronous generator of :class:`.Completion` objects.
```
*class* `bibmanager.utils.``AutoSuggestCompleter`[[source]](_modules/bibmanager/utils/utils.html#AutoSuggestCompleter)[¶](#bibmanager.utils.AutoSuggestCompleter)
```
Give suggestions based on the words in WordCompleter.
Initialize self. See help(type(self)) for accurate signature.
```
`get_suggestion`(*buffer*, *document*)[[source]](_modules/bibmanager/utils/utils.html#AutoSuggestCompleter.get_suggestion)[¶](#bibmanager.utils.AutoSuggestCompleter.get_suggestion)
```
Return `None` or a :class:`.Suggestion` instance.
We receive both :class:`~prompt_toolkit.buffer.Buffer` and
:class:`~prompt_toolkit.document.Document`. The reason is that auto suggestions are retrieved asynchronously. (Like completions.) The buffer text could be changed in the meantime, but ``document`` contains the buffer document like it was at the start of the auto suggestion call. So, from here, don't access ``buffer.text``, but use
``document.text`` instead.
:param buffer: The :class:`~prompt_toolkit.buffer.Buffer` instance.
:param document: The :class:`~prompt_toolkit.document.Document` instance.
```
`get_suggestion_async`(*buff: 'Buffer'*, *document: prompt_toolkit.document.Document*) → Optional[prompt_toolkit.auto_suggest.Suggestion][¶](#bibmanager.utils.AutoSuggestCompleter.get_suggestion_async)
```
Return a :class:`.Future` which is set when the suggestions are ready.
This function can be overloaded in order to provide an asynchronous implementation.
```
*class* `bibmanager.utils.``AutoSuggestKeyCompleter`[[source]](_modules/bibmanager/utils/utils.html#AutoSuggestKeyCompleter)[¶](#bibmanager.utils.AutoSuggestKeyCompleter)
```
Give suggestions based on the words in WordCompleter.
Initialize self. See help(type(self)) for accurate signature.
```
`get_suggestion`(*buffer*, *document*)[[source]](_modules/bibmanager/utils/utils.html#AutoSuggestKeyCompleter.get_suggestion)[¶](#bibmanager.utils.AutoSuggestKeyCompleter.get_suggestion)
```
Return `None` or a :class:`.Suggestion` instance.
We receive both :class:`~prompt_toolkit.buffer.Buffer` and
:class:`~prompt_toolkit.document.Document`. The reason is that auto suggestions are retrieved asynchronously. (Like completions.) The buffer text could be changed in the meantime, but ``document`` contains the buffer document like it was at the start of the auto suggestion call. So, from here, don't access ``buffer.text``, but use
``document.text`` instead.
:param buffer: The :class:`~prompt_toolkit.buffer.Buffer` instance.
:param document: The :class:`~prompt_toolkit.document.Document` instance.
```
`get_suggestion_async`(*buff: 'Buffer'*, *document: prompt_toolkit.document.Document*) → Optional[prompt_toolkit.auto_suggest.Suggestion][¶](#bibmanager.utils.AutoSuggestKeyCompleter.get_suggestion_async)
```
Return a :class:`.Future` which is set when the suggestions are ready.
This function can be overloaded in order to provide an asynchronous implementation.
```
*class* `bibmanager.utils.``LastKeyCompleter`(*key_words*)[[source]](_modules/bibmanager/utils/utils.html#LastKeyCompleter)[¶](#bibmanager.utils.LastKeyCompleter)
```
Give completer options according to last key found in input.
Parameters
---
key_words: Dict
Dictionary containing the available keys and the
set of words corresponding to each key.
An empty-string key denotes the default set of words to
show when no key is found in the input text.
```
`get_completions`(*document*, *complete_event*)[[source]](_modules/bibmanager/utils/utils.html#LastKeyCompleter.get_completions)[¶](#bibmanager.utils.LastKeyCompleter.get_completions)
```
Get right key/option completions, i.e., the set of possible keys (except the latest key found in the input text) and the set of words according to the latest key in the input text.
```
`get_completions_async`(*document: prompt_toolkit.document.Document*, *complete_event: prompt_toolkit.completion.base.CompleteEvent*) → AsyncGenerator[prompt_toolkit.completion.base.Completion, NoneType][¶](#bibmanager.utils.LastKeyCompleter.get_completions_async)
```
Asynchronous generator for completions. (Probably, you won't have to override this.)
Asynchronous generator of :class:`.Completion` objects.
```
*class* `bibmanager.utils.``LastKeySuggestCompleter`[[source]](_modules/bibmanager/utils/utils.html#LastKeySuggestCompleter)[¶](#bibmanager.utils.LastKeySuggestCompleter)
```
Give suggestions based on the keys and words in LastKeyCompleter.
Initialize self. See help(type(self)) for accurate signature.
```
`get_suggestion`(*buffer*, *document*)[[source]](_modules/bibmanager/utils/utils.html#LastKeySuggestCompleter.get_suggestion)[¶](#bibmanager.utils.LastKeySuggestCompleter.get_suggestion)
```
Return `None` or a :class:`.Suggestion` instance.
We receive both :class:`~prompt_toolkit.buffer.Buffer` and
:class:`~prompt_toolkit.document.Document`. The reason is that auto suggestions are retrieved asynchronously. (Like completions.) The buffer text could be changed in the meantime, but ``document`` contains the buffer document like it was at the start of the auto suggestion call. So, from here, don't access ``buffer.text``, but use
``document.text`` instead.
:param buffer: The :class:`~prompt_toolkit.buffer.Buffer` instance.
:param document: The :class:`~prompt_toolkit.document.Document` instance.
```
`get_suggestion_async`(*buff: 'Buffer'*, *document: prompt_toolkit.document.Document*) → Optional[prompt_toolkit.auto_suggest.Suggestion][¶](#bibmanager.utils.LastKeySuggestCompleter.get_suggestion_async)
```
Return a :class:`.Future` which is set when the suggestions are ready.
This function can be overloaded in order to provide an asynchronous implementation.
```
*class* `bibmanager.utils.``KeyPathCompleter`(*words*, *bibs*)[[source]](_modules/bibmanager/utils/utils.html#KeyPathCompleter)[¶](#bibmanager.utils.KeyPathCompleter)
```
Simple autocompletion on a list of words.
:param words: List of words or callable that returns a list of words.
:param ignore_case: If True, case-insensitive completion.
:param meta_dict: Optional dict mapping words to their meta-text. (This
should map strings to strings or formatted text.)
:param WORD: When True, use WORD characters.
:param sentence: When True, don't complete by comparing the word before the
cursor, but by comparing all the text before the cursor. In this case,
the list of words is just a list of strings, where each string can
contain spaces. (Can not be used together with the WORD option.)
:param match_middle: When True, match not only the start, but also in the
middle of the word.
:param pattern: Optional compiled regex for finding the word before
the cursor to complete. When given, use this regex pattern instead of
default one (see document._FIND_WORD_RE)
Initialize self. See help(type(self)) for accurate signature.
```
`get_completions`(*document*, *complete_event*)[[source]](_modules/bibmanager/utils/utils.html#KeyPathCompleter.get_completions)[¶](#bibmanager.utils.KeyPathCompleter.get_completions)
```
Get right key/option/file completions.
```
`get_completions_async`(*document: prompt_toolkit.document.Document*, *complete_event: prompt_toolkit.completion.base.CompleteEvent*) → AsyncGenerator[prompt_toolkit.completion.base.Completion, NoneType][¶](#bibmanager.utils.KeyPathCompleter.get_completions_async)
```
Asynchronous generator for completions. (Probably, you won't have to override this.)
Asynchronous generator of :class:`.Completion` objects.
```
`path_completions`(*text*)[[source]](_modules/bibmanager/utils/utils.html#KeyPathCompleter.path_completions)[¶](#bibmanager.utils.KeyPathCompleter.path_completions)
```
Slightly modified from PathCompleter.get_completions()
```
*class* `bibmanager.utils.``AlwaysPassValidator`(*bibs*, *toolbar_text=''*)[[source]](_modules/bibmanager/utils/utils.html#AlwaysPassValidator)[¶](#bibmanager.utils.AlwaysPassValidator)
```
Validator that always passes (actually used for the bottom toolbar).
Initialize self. See help(type(self)) for accurate signature.
```
`from_callable`(*validate_func: Callable[[str], bool], error_message: str = 'Invalid input', move_cursor_to_end: bool = False*) → 'Validator'[¶](#bibmanager.utils.AlwaysPassValidator.from_callable)
```
Create a validator from a simple validate callable. E.g.:
.. code:: python
def is_valid(text):
return text in ['hello', 'world']
Validator.from_callable(is_valid, error_message='Invalid input')
:param validate_func: Callable that takes the input string, and returns
`True` if the input is valid input.
:param error_message: Message to be displayed if the input is invalid.
:param move_cursor_to_end: Move the cursor to the end of the input, if
the input is invalid.
```
`validate`(*document*)[[source]](_modules/bibmanager/utils/utils.html#AlwaysPassValidator.validate)[¶](#bibmanager.utils.AlwaysPassValidator.validate)
```
Validate the input.
If invalid, this should raise a :class:`.ValidationError`.
:param document: :class:`~prompt_toolkit.document.Document` instance.
```
`validate_async`(*document: prompt_toolkit.document.Document*) → None[¶](#bibmanager.utils.AlwaysPassValidator.validate_async)
```
Return a `Future` which is set when the validation is ready.
This function can be overloaded in order to provide an asynchronous implementation.
```
Contributing[¶](#contributing)
---
Feel free to contribute to this repository by submitting code pull requests, raising issues, or emailing the administrator directly.
### Raising Issues[¶](#raising-issues)
Whenever you want to raise a new issue, make sure that it has not already been mentioned in the issues list. If an issue exists, consider adding a comment if you have extra information that further describes the issue or may help to solve it.
If you are reporting a bug, make sure to be fully descriptive of the bug, including steps to reproduce the bug, error output logs, etc.
Make sure to designate appropriate tags to your issue.
An issue asking for new functionality must include the `wish list`
tag. These issues must explain why such a feature is necessary. Note that if you also provide ideas, literature references, etc. that contribute to the implementation of the requested functionality, the issue will have a better chance of being resolved.
### Programming Style[¶](#programming-style)
Everyone has his/her own programming style, and I respect that. However,
some people have [terrible style](http://www.abstrusegoose.com/432).
Following good coding practices (see [PEP 8](https://www.python.org/dev/peps/pep-0008/), [PEP 20](https://www.python.org/dev/peps/pep-0020/), and [PEP 257](https://www.python.org/dev/peps/pep-0257/)) makes everyone happier: it will increase the chances of your code being added to the main repo,
and will make me work less. I strongly recommend the following programming guidelines:
> * Always keep it simple.
> * Lines are strictly 80 characters long, no more.
> * **Never ever! use tabs (for any reason, just don’t).**
> * Avoid hard-coding values at all cost.
> * Avoid excessively short variable names (such as `x` or `a`).
> * Avoid excessively long variable names as well (just try to write a
> meaningful name).
> * Indent with 4 spaces.
> * Put whitespace around operators and after commas.
> * Separate blocks of code with 1 empty line.
> * Separate classes and functions with 2 empty lines.
> * Separate methods with 1 empty line.
> * Contraptions require meaningful comments.
> * Prefer commenting an entire block before the code than using
> in-line comments.
> * Always, always write docstrings.
> * Use `is` to compare with `None`, `True`, and `False`.
> * Limit try–except clauses to the bare minimum.
> * If you added a new functionality, make sure to also add its respective tests.
> * Make sure that your modifications pass the automated tests (Travis).
Good pieces of code that do not follow these principles will still be gratefully accepted, but with a frowny face.
### Pull Requests[¶](#pull-requests)
To submit a pull request you will need to first fork the repository into your account (only once). Make your changes in your fork; when making a commit, always include a descriptive message of what changed. Then, click on the pull request button.
License[¶](#license)
---
The MIT License (MIT)
Copyright (c) 2018-2023 <NAME>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Featured Articles[¶](#featured-articles)
===
[ADS Blog](http://adsabs.github.io/blog/): **User-Developed Tools for ADS**
*(30 Jul 2019)*
<http://adsabs.github.io/blog/3rd-party-tools>
[AstroBetter](https://www.astrobetter.com): **Bibmanager: A BibTex Manager Designed for Astronomers**
*(17 Feb 2020)*
<https://www.astrobetter.com/blog/2020/02/17/bibmanager-a-bibtex-manager-designed-for-astronomers/>
---
Please send any feedback or inquiries to:
> <NAME> (pcubillos[at]fulbrightmail.org)
Package ‘corrgram’
October 12, 2022
Title Plot a Correlogram
Version 1.14
Date 2021-04-29
Type Package
Description Calculates correlation of variables and displays the results
graphically. Included panel functions can display points, shading, ellipses, and
correlation values with confidence intervals. See Friendly (2002) <doi:10.1198/000313002533>.
Imports graphics, grDevices, stats
Suggests gridBase, knitr, Matrix, psych, rmarkdown, seriation,
sfsmisc, testthat
License GPL-3
LazyData yes
Encoding UTF-8
URL https://kwstat.github.io/corrgram/
BugReports https://github.com/kwstat/corrgram/issues/
VignetteBuilder knitr
RoxygenNote 7.1.0
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-0617-8673>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2021-04-29 17:20:06 UTC
R topics documented:
aut... 2
basebal... 3
corrgra... 4
vot... 7
auto Statistics of 1979 automobile models
Description
Statistics for 74 automobiles in the 1979 model year as sold in the US.
Usage
auto
Format
A data frame with 74 observations on the following 14 variables.
Model Make and model of car.
Origin a factor with levels A,E,J
Price Price in dollars.
MPG Miles per gallon.
Rep78 Repair record for 1978 on 1 (worst) to 5 (best) scale.
Rep77 Repair record for 1977 on 1 to 5 scale.
Hroom Headroom in inches.
Rseat Rear seat clearance in inches.
Trunk Trunk volume in cubic feet.
Weight Weight in pounds.
Length Length in inches.
Turn Turning diameter in feet.
Displa Engine displacement in cubic inches.
Gratio Gear ratio for high gear.
Details
The data is from various sources, primarily Consumer Reports, April, 1979, and the United States
government EPA statistics on fuel consumption.
Source
This data frame was created from http://euclid.psych.yorku.ca/ftp/sas/sssg/data/auto.sas
References
Originally published in Chambers, Cleveland, Kleiner, and Tukey, Graphical Methods for Data
Analysis, 1983, pages 352-355.
Examples
corrgram(auto[, -c(1:2)])
baseball Baseball Hitter’s Data
Description
Data are for 322 Major League Baseball regular and substitute hitters in 1986.
Usage
baseball
Format
A data frame with 322 observations on the following 22 variables.
Name The hitter/player’s name
League Player’s league (American/National) at the beginning of 1987
Team Player’s team at the beginning of 1987
Position Player’s position in 1986: 1B=First base, 2B=Second base, 3B=Third base, C=Catcher,
OF=Outfield, DH=Designated hitter, SS=Short stop, UT=Utility
Atbat Number of times at bat in 1986
Hits Number of hits in 1986
Homer Number of home runs in 1986
Runs Number of runs in 1986
RBI Runs batted in during 1986
Walks Number of walks in 1986
Years Number of years in the major leagues
Atbatc Number of times at bat in his career
Hitsc Number of hits in career
Homerc Number of home runs in career
Runsc Number of runs in career
RBIc Number of Runs Batted In in career
Walksc Number of walks in career
Putouts Number of putouts in 1986
Assists Number of assists in 1986
Errors Number of errors in 1986
Salary Annual salary (in thousands) on opening day 1987
logSal Log of salary
Details
The levels of the player’s positions have been collapsed to fewer levels for a simpler analysis. See
the original data for the full list of positions.
The salary data were taken from Sports Illustrated, April 20, 1987. The salary of any player not
included in that article is listed as an NA. The 1986 and career statistics were taken from The 1987
Baseball Encyclopedia Update published by <NAME>, Macmillan Publishing Company, New
York.
Source
The data was originally published for the 1988 ASA Statistical Graphics and Computing Data
Exposition: http://lib.stat.cmu.edu/data-expo/1988.html.
The version of the data used to create this data was found at http://euclid.psych.yorku.ca/ftp/sas/sssg/data/baseball.sas
References
<NAME> (2002). Corrgrams: Exploratory Displays for Correlation Matrices, The American
Statistician, Vol 56.
Examples
vars2 <- c("Assists","Atbat","Errors","Hits","Homer","logSal",
"Putouts","RBI","Runs","Walks","Years")
corrgram(baseball[,vars2],
lower.panel=panel.shade, upper.panel=panel.pie)
corrgram Draw a correlogram
Description
The corrgram function produces a graphical display of a correlation matrix, called a correlogram.
The cells of the matrix can be shaded or colored to show the correlation value.
Usage
corrgram(
x,
type = NULL,
order = FALSE,
labels,
panel = panel.shade,
lower.panel = panel,
upper.panel = panel,
diag.panel = NULL,
text.panel = textPanel,
label.pos = c(0.5, 0.5),
label.srt = 0,
cex.labels = NULL,
font.labels = 1,
row1attop = TRUE,
dir = "",
gap = 0,
abs = FALSE,
col.regions = colorRampPalette(c("red", "salmon", "white", "royalblue", "navy")),
cor.method = "pearson",
outer.labels = NULL,
...
)
Arguments
x A tall data frame with one observation per row, or a correlation matrix.
type Use ’data’ or ’cor’/’corr’ to explicitly specify that ’x’ is data or a correlation
matrix. Rarely needed.
order Should variables be re-ordered? Use TRUE or "PCA" for PCA-based re-ordering.
If the ’seriation’ package is loaded, this can also be set to "OLO" for optimal leaf
ordering, "GW", and "HC".
labels Labels to use (instead of data frame variable names) for diagonal panels. If the ’order’
option is used, this vector of labels will also be appropriately reordered by the function.
panel Function used to plot the contents of each panel.
lower.panel, upper.panel
Separate panel functions used below/above the diagonal.
diag.panel, text.panel
Panel function used on the diagonal.
label.pos Horizontal and vertical placement of label in diagonal panels.
label.srt String rotation for diagonal labels.
cex.labels, font.labels
Graphics parameter for diagonal panels.
row1attop TRUE for diagonal like " \ ", FALSE for diagonal like " / ".
dir Use dir="left" instead of ’row1attop’.
gap Distance between panels.
abs Use absolute value of correlations for clustering? Default FALSE.
col.regions A function returning a vector of colors.
cor.method Correlation method to use in panel functions. Default is ’pearson’. Alternatives:
’spearman’, ’kendall’.
outer.labels A list of the form ’list(bottom,left,top,right)’. If ’bottom=TRUE’ (for example),
variable labels are added along the bottom outside edge.
For more control, use ’bottom=list(labels,cex,srt,adj)’, where ’labels’ is a vec-
tor of variable labels, ’cex’ affects the size, ’srt’ affects the rotation, and ’adj’
affects the adjustment of the labels. Defaults: ’labels’ uses column names;
’cex=1’; ’srt=90’ (bottom/top), ’srt=0’ (left/right); ’adj=1’ (bottom/left), ’adj=0’
(top/right).
... Additional arguments passed to plotting methods.
Details
Note: Use the ’col.regions’ argument to specify colors.
Non-numeric columns in the data will be ignored.
The off-diagonal panels are specified with panel.pts, panel.pie, panel.shade, panel.fill,
panel.bar, panel.ellipse, panel.conf, panel.cor.
Diagonal panels are specified with panel.txt, panel.minmax, panel.density.
Use a NULL panel to omit drawing the panel.
This function is basically a modification of the pairs.default function with the use of customized
panel functions.
The panel.conf function uses cor.test and calculates pearson correlations. Confidence intervals
are not available in cor.test for other methods (kendall, spearman).
You can create your own panel functions by starting with one of the included panel functions and
making suitable modifications. Note that because of the way the panel functions are called inside
the main function, your custom panel function must include the arguments shown in the panel.pts
function, even if the custom panel function does not use those arguments!
TODO: legend, grid graphics version.
Value
The correlation matrix used for plotting is returned. The ’order’ and ’abs’ arguments affect the
returned value.
Author(s)
<NAME>
References
Friendly, Michael. 2002. Corrgrams: Exploratory Displays for Correlation Matrices. The American
Statistician, 56, 316–324. http://datavis.ca/papers/corrgram.pdf
<NAME> and <NAME>. 1996. A Graphical Display of Large Correlation Matrices. The
American Statistician, 50, 178-180.
Examples
# To reproduce the figures in <NAME>'s paper, see the
# vignette, or see the file 'friendly.r' in this package's
# test directory.
# Demonstrate density panel, correlation confidence panel
corrgram(iris, lower.panel=panel.pts, upper.panel=panel.conf,
diag.panel=panel.density)
# Demonstrate panel.shade, panel.pie, principal component ordering
vars2 <- c("Assists","Atbat","Errors","Hits","Homer","logSal",
"Putouts","RBI","Runs","Walks","Years")
corrgram(baseball[vars2], order=TRUE, main="Baseball data PC2/PC1 order",
lower.panel=panel.shade, upper.panel=panel.pie)
# CAUTION: The latticeExtra package also has a 'panel.ellipse' function
# that clashes with the same-named function in corrgram. In order to use
# the right one, the example below uses 'lower.panel=corrgram::panel.ellipse'.
# If you do not have latticeExtra loaded, you can just use
# 'lower.panel=panel.ellipse'.
# Demonstrate panel.bar, panel.ellipse, panel.minmax, col.regions
corrgram(auto, order=TRUE, main="Auto data (PC order)",
lower.panel=corrgram::panel.ellipse,
upper.panel=panel.bar, diag.panel=panel.minmax,
col.regions=colorRampPalette(c("darkgoldenrod4", "burlywood1",
"darkkhaki", "darkgreen")))
# 'vote' is a correlation matrix, not a data frame
corrgram(vote, order=TRUE, upper.panel=panel.cor)
# outer labels, all options, larger margins, xlab, ylab
labs=colnames(state.x77)
corrgram(state.x77, oma=c(7, 7, 2, 2),
outer.labels=list(bottom=list(labels=labs,cex=1.5,srt=60),
left=list(labels=labs,cex=1.5,srt=30,adj=c(1,0))))
mtext("Bottom", side=1, cex=2, line = -1.5, outer=TRUE, xpd=NA)
mtext("Left", side=2, cex=2, line = -1.5, outer=TRUE, xpd=NA)
vote Voting correlations
Description
Voting correlations
Usage
vote
Format
A 12x12 matrix.
Details
These are the correlations of traits, where each trait is measured for 17 developed countries (Europe,
US, Japan, Australia, New Zealand).
Source
<NAME> and <NAME> (2006). Electoral institutions and the politics of coalitions: Why
some democracies redistribute more than others. American Political Science Review, 100, 165-81.
Table A2.
References
Using Graphs Instead of Tables. http://tables2graphs.com/doku.php?id=03_descriptive_statistics
Examples
corrgram(vote, order=TRUE)
Crate cec1712_pac
===
Peripheral access API for CEC1712H_B2_SX microcontrollers (generated using svd2rust v0.25.1 ( ))
You can find an overview of the generated API here.
API features to be included in the next svd2rust release can be generated by cloning the svd2rust repository, checking out the above commit, and running `cargo doc --open`.
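As a rough orientation (not part of the generated documentation itself), a typical svd2rust crate is used by taking the singleton `Peripherals` struct once at startup. The sketch below is a minimal example under the assumption that a bare-metal runtime such as `cortex-m-rt` is already set up and that the peripheral field names match the struct names listed further down.

```
// Minimal usage sketch, not the crate's own example code.
use cec1712_pac as pac;

fn init() {
    // The device peripherals form a singleton; `take()` yields them exactly once.
    let dp = pac::Peripherals::take().unwrap();
    // Core peripherals (NVIC, SysTick, ...) are taken separately.
    let cp = pac::CorePeripherals::take().unwrap();

    // Individual blocks are fields named after the peripheral structs listed
    // below, e.g. `dp.WDT` or `cp.SYST` (assumed to match the struct names).
    let _ = (&dp.WDT, &cp.SYST);
}
```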
Re-exports
---
`pub use self::Interrupt as interrupt;``pub use dma_chan02 as dma_chan03;``pub use dma_chan02 as dma_chan04;``pub use dma_chan02 as dma_chan05;``pub use dma_chan02 as dma_chan06;``pub use dma_chan02 as dma_chan07;``pub use dma_chan02 as dma_chan08;``pub use dma_chan02 as dma_chan09;``pub use dma_chan02 as dma_chan10;``pub use dma_chan02 as dma_chan11;``pub use uart0 as uart1;``pub use uart0 as uart2;``pub use timer16_0 as timer16_1;``pub use timer32_0 as timer32_1;``pub use htm0 as htm1;``pub use tach0 as tach1;``pub use pwm0 as pwm2;``pub use pwm0 as pwm3;``pub use pwm0 as pwm5;``pub use pwm0 as pwm6;``pub use pwm0 as pwm7;``pub use led0 as led1;``pub use smb0 as smb1;``pub use smb0 as smb2;``pub use smb0 as smb3;``pub use smb0 as smb4;``pub use i2c0 as i2c1;``pub use i2c0 as i2c2;`Modules
---
adcThis block is designed to convert external analog voltage readings into digital values.
cctThis is a 16-bit auto-reloading timer/counter.
dma_chan00DMA Channel 00 Registers
dma_chan01DMA Channel 01 Registers
dma_chan02DMA Channel 02 Registers
dma_mainDMA Main Registers
ec_reg_bankThis block is designed to be accessed internally by the EC via the register interface.
eciaThe ECIA works in conjunction with the processor interrupt interface to handle hardware interrupts and exceptions.
gcrThe Logical Device Configuration registers support motherboard designs in which the resources required by their components are known and assigned by the BIOS at POST.
genericCommon register and bit access and modify traits
gpioGPIO Pin Control Registers
htm0The Hibernation Timer can generate a wake event to the Embedded Controller (EC) when it is in a hibernation mode
i2c0The I2C interface can handle standard I2C interface.
led0The LED is implemented using a PWM that can be driven either by the 48 MHz clock or by a 32.768 KHz clock input.
pcrThe Power, Clocks, and Resets (PCR) Section identifies clock sources, and reset inputs to the chip
pwm0The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
qmspiThe QMSPI may be used to communicate with various peripheral devices that use a Serial Peripheral Interface
rtcThis is the set of registers that are automatically counted by hardware every 1 second while the block is enabled
rtosRTOS is a 32-bit timer designed to operate on the 32kHz oscillator which is available during all chip sleep states.
smb0The SMBus interface can handle standard SMBus 2.0 protocols as well as I2C interface.
sys_tickSystem timer
system_controlSystem Control Registers
tach0This block monitors TACH output signals from various types of fans, and determines their speed.
tfdpThe TFDP serially transmits EC-originated diagnostic vectors to an external debug trace system.
timer16_0This 16-bit timer block offers a simple mechanism for firmware to maintain a time base
timer32_0This 32-bit timer block offers a simple mechanism for firmware to maintain a time base
uart0The 16550 UART is a full-function Two Pin Serial Port that supports the standard RS-232 Interface.
vbatThe VBAT Register Bank block is a block implemented for miscellaneous battery-backed registers
vbat_ramThe VBAT RAM is operational while the main power rail is operational, and will retain its values powered by battery power while the main rail is unpowered.
vciThe VBAT-Powered Control Interfaces with the RTC With Date and DST Adjustment as well as the Week Alarm.
wdtThe function of the Watchdog Timer is to provide a mechanism to detect if the internal embedded controller has failed.
weekThe Week Timer and the Sub-Week Timer assert the Power-Up Event Output which automatically powers-up the system from the G3 state
Structs
---
ADCThis block is designed to convert external analog voltage readings into digital values.
CBPCache and branch predictor maintenance operations
CCTThis is a 16-bit auto-reloading timer/counter.
CPUIDCPUID
CorePeripheralsCore peripherals
DCBDebug Control Block
DMA_CHAN00DMA Channel 00 Registers
DMA_CHAN01DMA Channel 01 Registers
DMA_CHAN02DMA Channel 02 Registers
DMA_CHAN03DMA Channel 02 Registers
DMA_CHAN04DMA Channel 02 Registers
DMA_CHAN05DMA Channel 02 Registers
DMA_CHAN06DMA Channel 02 Registers
DMA_CHAN07DMA Channel 02 Registers
DMA_CHAN08DMA Channel 02 Registers
DMA_CHAN09DMA Channel 02 Registers
DMA_CHAN10DMA Channel 02 Registers
DMA_CHAN11DMA Channel 02 Registers
DMA_MAINDMA Main Registers
DWTData Watchpoint and Trace unit
ECIAThe ECIA works in conjunction with the processor interrupt interface to handle hardware interrupts and exceptions.
EC_REG_BANKThis block is designed to be accessed internally by the EC via the register interface.
FPBFlash Patch and Breakpoint unit
GCRThe Logical Device Configuration registers support motherboard designs in which the resources required by their components are known and assigned by the BIOS at POST.
GPIOGPIO Pin Control Registers
HTM0The Hibernation Timer can generate a wake event to the Embedded Controller (EC) when it is in a hibernation mode
HTM1The Hibernation Timer can generate a wake event to the Embedded Controller (EC) when it is in a hibernation mode
I2C0The I2C interface can handle standard I2C interface.
I2C1The I2C interface can handle standard I2C interface.
I2C2The I2C interface can handle standard I2C interface.
ITMInstrumentation Trace Macrocell
LED0The LED is implemented using a PWM that can be driven either by the 48 MHz clock or by a 32.768 KHz clock input.
LED1The LED is implemented using a PWM that can be driven either by the 48 MHz clock or by a 32.768 KHz clock input.
MPUMemory Protection Unit
NVICNested Vector Interrupt Controller
PCRThe Power, Clocks, and Resets (PCR) Section identifies clock sources, and reset inputs to the chip
PWM0The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
PWM2The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
PWM3The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
PWM5The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
PWM6The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
PWM7The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
PeripheralsAll the peripherals
QMSPIThe QMSPI may be used to communicate with various peripheral devices that use a Serial Peripheral Interface
RTCThis is the set of registers that are automatically counted by hardware every 1 second while the block is enabled
RTOSRTOS is a 32-bit timer designed to operate on the 32kHz oscillator which is available during all chip sleep states.
SCBSystem Control Block
SMB0The SMBus interface can handle standard SMBus 2.0 protocols as well as I2C interface.
SMB1The SMBus interface can handle standard SMBus 2.0 protocols as well as I2C interface.
SMB2The SMBus interface can handle standard SMBus 2.0 protocols as well as I2C interface.
SMB3The SMBus interface can handle standard SMBus 2.0 protocols as well as I2C interface.
SMB4The SMBus interface can handle standard SMBus 2.0 protocols as well as I2C interface.
SYSTSysTick: System Timer
SYSTEM_CONTROLSystem Control Registers
SYS_TICKSystem timer
TACH0This block monitors TACH output signals from various types of fans, and determines their speed.
TACH1This block monitors TACH output signals from various types of fans, and determines their speed.
TFDPThe TFDP serially transmits EC-originated diagnostic vectors to an external debug trace system.
TIMER16_0This 16-bit timer block offers a simple mechanism for firmware to maintain a time base
TIMER16_1This 16-bit timer block offers a simple mechanism for firmware to maintain a time base
TIMER32_0This 32-bit timer block offers a simple mechanism for firmware to maintain a time base
TIMER32_1This 32-bit timer block offers a simple mechanism for firmware to maintain a time base
TPIUTrace Port Interface Unit
UART0The 16550 UART is a full-function Two Pin Serial Port that supports the standard RS-232 Interface.
UART1The 16550 UART is a full-function Two Pin Serial Port that supports the standard RS-232 Interface.
UART2The 16550 UART is a full-function Two Pin Serial Port that supports the standard RS-232 Interface.
VBATThe VBAT Register Bank block is a block implemented for miscellaneous battery-backed registers
VBAT_RAMThe VBAT RAM is operational while the main power rail is operational, and will retain its values powered by battery power while the main rail is unpowered.
VCIThe VBAT-Powered Control Interfaces with the RTC With Date and DST Adjustment as well as the Week Alarm.
WDTThe function of the Watchdog Timer is to provide a mechanism to detect if the internal embedded controller has failed.
WEEKThe Week Timer and the Sub-Week Timer assert the Power-Up Event Output which automatically powers-up the system from the G3 state
Enums
---
InterruptEnumeration of all the interrupts.
Constants
---
NVIC_PRIO_BITSNumber available in the NVIC for configuring priority
Attribute Macros
---
interrupt
Enum cec1712_pac::Interrupt
===
```
#[repr(u16)]
pub enum Interrupt {
GIRQ08,
GIRQ09,
GIRQ10,
GIRQ11,
GIRQ12,
GIRQ13,
GIRQ14,
GIRQ15,
GIRQ18,
GIRQ20,
GIRQ21,
GIRQ23,
GIRQ26,
I2CSMB0,
I2CSMB1,
I2CSMB2,
I2CSMB3,
DMA_CH00,
DMA_CH01,
DMA_CH02,
DMA_CH03,
DMA_CH04,
DMA_CH05,
DMA_CH06,
DMA_CH07,
DMA_CH08,
DMA_CH09,
DMA_CH10,
DMA_CH11,
UART0,
UART1,
UART2,
TACH0,
TACH1,
ADC_SNGL,
ADC_RPT,
LED0,
LED1,
QMSPI,
TMR,
HTMR0,
HTMR1,
WK,
WKSUB,
WKSEC,
WKSUBSEC,
SYSPWR,
RTC,
RTC_ALARM,
VCI_IN0,
VCI_IN1,
TIMER16_0,
TIMER16_1,
TIMER32_0,
TIMER32_1,
CCT,
CCT_CAP0,
CCT_CAP1,
CCT_CAP2,
CCT_CAP3,
CCT_CAP4,
CCT_CAP5,
CCT_CMP0,
CCT_CMP1,
I2CSMB4,
I2C0,
I2C1,
I2C2,
WDT,
}
```
Enumeration of all the interrupts.
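A hedged sketch of how this enum is typically used (assuming the crate's runtime support and the `cortex-m` crate are available): the `#[interrupt]` attribute macro binds a handler by variant name, and the `InterruptNumber` implementation lets the NVIC API accept the variants directly.

```
// Sketch only; peripheral flag handling inside the handler is omitted.
use cec1712_pac::{interrupt, Interrupt};
use cortex_m::peripheral::NVIC;

fn enable_timer_irq() {
    // Unmasking is unsafe because it can defeat mask-based critical sections.
    unsafe { NVIC::unmask(Interrupt::TIMER32_0) };
}

// The handler name must match an `Interrupt` variant.
#[interrupt]
fn TIMER32_0() {
    // acknowledge the peripheral's status flag here
}
```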
Variants
---
### `GIRQ08`
0 - GIRQ08
### `GIRQ09`
1 - GIRQ09
### `GIRQ10`
2 - GIRQ10
### `GIRQ11`
3 - GIRQ11
### `GIRQ12`
4 - GIRQ12
### `GIRQ13`
5 - GIRQ13
### `GIRQ14`
6 - GIRQ14
### `GIRQ15`
7 - GIRQ15
### `GIRQ18`
10 - GIRQ18
### `GIRQ20`
12 - GIRQ20
### `GIRQ21`
13 - GIRQ21
### `GIRQ23`
14 - GIRQ23
### `GIRQ26`
17 - GIRQ26
### `I2CSMB0`
20 - I2CSMB0
### `I2CSMB1`
21 - I2CSMB1
### `I2CSMB2`
22 - I2CSMB2
### `I2CSMB3`
23 - I2CSMB3
### `DMA_CH00`
24 - DMA_CH00
### `DMA_CH01`
25 - DMA_CH01
### `DMA_CH02`
26 - DMA_CH02
### `DMA_CH03`
27 - DMA_CH03
### `DMA_CH04`
28 - DMA_CH04
### `DMA_CH05`
29 - DMA_CH05
### `DMA_CH06`
30 - DMA_CH06
### `DMA_CH07`
31 - DMA_CH07
### `DMA_CH08`
32 - DMA_CH08
### `DMA_CH09`
33 - DMA_CH09
### `DMA_CH10`
34 - DMA_CH10
### `DMA_CH11`
35 - DMA_CH11
### `UART0`
40 - UART0
### `UART1`
41 - UART1
### `UART2`
44 - UART2
### `TACH0`
71 - TACH0
### `TACH1`
72 - TACH1
### `ADC_SNGL`
78 - ADC_SNGL
### `ADC_RPT`
79 - ADC_RPT
### `LED0`
83 - LED0
### `LED1`
84 - LED1
### `QMSPI`
91 - QMSPI
### `TMR`
111 - TMR
### `HTMR0`
112 - HTMR0
### `HTMR1`
113 - HTMR1
### `WK`
114 - WK
### `WKSUB`
115 - WKSUB
### `WKSEC`
116 - WKSEC
### `WKSUBSEC`
117 - WKSUBSEC
### `SYSPWR`
118 - SYSPWR
### `RTC`
119 - RTC
### `RTC_ALARM`
120 - RTC_ALARM
### `VCI_IN0`
122 - VCI_IN0
### `VCI_IN1`
123 - VCI_IN1
### `TIMER16_0`
136 - TIMER16_0
### `TIMER16_1`
137 - TIMER16_1
### `TIMER32_0`
140 - TIMER32_0
### `TIMER32_1`
141 - TIMER32_1
### `CCT`
146 - CCT
### `CCT_CAP0`
147 - CCT_CAP0
### `CCT_CAP1`
148 - CCT_CAP1
### `CCT_CAP2`
149 - CCT_CAP2
### `CCT_CAP3`
150 - CCT_CAP3
### `CCT_CAP4`
151 - CCT_CAP4
### `CCT_CAP5`
152 - CCT_CAP5
### `CCT_CMP0`
153 - CCT_CMP0
### `CCT_CMP1`
154 - CCT_CMP1
### `I2CSMB4`
158 - I2CSMB4
### `I2C0`
168 - I2C0
### `I2C1`
169 - I2C1
### `I2C2`
170 - I2C2
### `WDT`
171 - WDT
Trait Implementations
---
### impl Clone for Interrupt
#### fn clone(&self) -> Interrupt
Returns a copy of the value. Read more
1.0.0 · source#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`. Read more
### impl Debug for Interrupt
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl InterruptNumber for Interrupt
#### fn number(self) -> u16
Return the interrupt number associated with this variant. Read more
### impl PartialEq<Interrupt> for Interrupt
#### fn eq(&self, other: &Interrupt) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`. Read more
1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason. Read more
### impl Copy for Interrupt
### impl Eq for Interrupt
### impl StructuralEq for Interrupt
### impl StructuralPartialEq for Interrupt
Auto Trait Implementations
---
### impl RefUnwindSafe for Interrupt
### impl Send for Interrupt
### impl Sync for Interrupt
### impl Unpin for Interrupt
### impl UnwindSafe for Interrupt
Blanket Implementations
---
### impl<T> Any for Twhere T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for Twhere T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for Twhere T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Module cec1712_pac::dma_chan02
===
DMA Channel 02 Registers
Modules
---
activateEnable this channel for operation. The DMA Main Control: Activate must also be enabled for this channel to be operational.
ctrlDMA Channel N Control
dstartThis is the Master Device address.
ienDMA CHANNEL N INTERRUPT ENABLE
istsDMA Channel N Interrupt Status
mendThis is the ending address for the Memory device.
mstartThis is the starting address for the Memory device.
Structs
---
RegisterBlockRegister block
Type Definitions
---
ACTIVATEACTIVATE (rw) register accessor: an alias for `Reg<ACTIVATE_SPEC>`
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
DSTARTDSTART (rw) register accessor: an alias for `Reg<DSTART_SPEC>`
IENIEN (rw) register accessor: an alias for `Reg<IEN_SPEC>`
ISTSISTS (rw) register accessor: an alias for `Reg<ISTS_SPEC>`
MENDMEND (rw) register accessor: an alias for `Reg<MEND_SPEC>`
MSTARTMSTART (rw) register accessor: an alias for `Reg<MSTART_SPEC>`
Module cec1712_pac::uart0
===
The 16550 UART is a full-function Two Pin Serial Port that supports the standard RS-232 Interface.
Re-exports
---
`pub use data::DATA;``pub use dlab::DLAB;`Modules
---
dataCluster UART when DLAB=0
dlabCluster UART when DLAB=1
Structs
---
RegisterBlockRegister block
Module cec1712_pac::timer16_0
===
This 16-bit timer block offers a simple mechanism for firmware to maintain a time base
Modules
---
cntThis is the value of the Timer counter. This is updated by Hardware but may be set by Firmware.
ctrlTimer Control Register
ienThis is the interrupt enable for the status EVENT_INTERRUPT bit in the Timer Status Register
prldThis is the value of the Timer pre-load for the counter. This is used by H/W when the counter is to be restarted automatically; this will become the new value of the counter upon restart.
stsThis is the interrupt status that fires when the timer reaches its limit
Structs
---
RegisterBlockRegister block
Type Definitions
---
CNTCNT (rw) register accessor: an alias for `Reg<CNT_SPEC>`
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
IENIEN (rw) register accessor: an alias for `Reg<IEN_SPEC>`
PRLDPRLD (rw) register accessor: an alias for `Reg<PRLD_SPEC>`
STSSTS (rw) register accessor: an alias for `Reg<STS_SPEC>`
Module cec1712_pac::timer32_0
===
This 32-bit timer block offers a simple mechanism for firmware to maintain a time base
Modules
---
cntThis is the value of the Timer counter. This is updated by Hardware but may be set by Firmware.
ctrlTimer Control Register
ienThis is the interrupt enable for the status EVENT_INTERRUPT bit in the Timer Status Register
prldThis is the value of the Timer pre-load for the counter. This is used by H/W when the counter is to be restarted automatically; this will become the new value of the counter upon restart.
stsThis is the interrupt status that fires when the timer reaches its limit
Structs
---
RegisterBlockRegister block
Type Definitions
---
CNTCNT (rw) register accessor: an alias for `Reg<CNT_SPEC>`
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
IENIEN (rw) register accessor: an alias for `Reg<IEN_SPEC>`
PRLDPRLD (rw) register accessor: an alias for `Reg<PRLD_SPEC>`
STSSTS (rw) register accessor: an alias for `Reg<STS_SPEC>`
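As a hedged sketch (assuming the register block exposes fields named after the modules above, i.e. `prld`, `ctrl`, `cnt`, `sts`, and that the block's clock is already enabled via the PCR section), the preload and counter registers can be accessed through the usual svd2rust read/write closures:

```
use cec1712_pac as pac;

// Write a preload value and read back the current counter value.
fn preload_and_read(tmr: &pac::TIMER32_0) -> u32 {
    // Raw write of an example preload value; the CTRL bit layout is not
    // reproduced here, so this sketch does not start the timer.
    tmr.prld.write(|w| unsafe { w.bits(48_000_000) });
    tmr.cnt.read().bits()
}
```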
Module cec1712_pac::htm0
===
The Hibernation Timer can generate a wake event to the Embedded Controller (EC) when it is in a hibernation mode
Modules
---
cntThe current state of the Hibernation Timer.
ctrlHTimer Control Register
prld[15:0]
This register is used to set the Hibernation Timer Preload value.
Structs
---
RegisterBlockRegister block
Type Definitions
---
CNTCNT (r) register accessor: an alias for `Reg<CNT_SPEC>`
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
PRLDPRLD (rw) register accessor: an alias for `Reg<PRLD_SPEC>`
Module cec1712_pac::tach0
===
This block monitors TACH output signals from various types of fans, and determines their speed.
Modules
---
ctrlTACHx Control Register
lim_hiTACH HIGH LIMIT Register
lim_loTACHx Low Limit Register
stsTACHx Status Register
Structs
---
RegisterBlockRegister block
Type Definitions
---
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
LIM_HILIM_HI (rw) register accessor: an alias for `Reg<LIM_HI_SPEC>`
LIM_LOLIM_LO (rw) register accessor: an alias for `Reg<LIM_LO_SPEC>`
STSSTS (rw) register accessor: an alias for `Reg<STS_SPEC>`
Module cec1712_pac::pwm0
===
The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
Modules
---
cfgPWMx CONFIGURATION REGISTER
cnt_offThis field determine both the frequency and duty cycle of the PWM signal. Setting this field to a value of n will cause the Off time of the PWM to be n+1 cycles of the PWM Clock Source.
When this field is set to zero, the PWM_OUTPUT is held high (Full On).
cnt_onThis field determines both the frequency and duty cycle of the PWM signal. Setting this field to a value of n will cause the On time of the PWM to be n+1 cycles of the PWM Clock Source.
When this field is set to zero and the PWMX_COUNTER_OFF_TIME is not set to zero, the PWM_OUTPUT is held low (Full Off).
Structs
---
RegisterBlockRegister block
Type Definitions
---
CFGCFG (rw) register accessor: an alias for `Reg<CFG_SPEC>`
CNT_OFFCNT_OFF (rw) register accessor: an alias for `Reg<CNT_OFF_SPEC>`
CNT_ONCNT_ON (rw) register accessor: an alias for `Reg<CNT_ON_SPEC>`
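Per the `cnt_on`/`cnt_off` descriptions above, a programmed value of n yields n+1 PWM clock cycles of on or off time. The sketch below (assuming the register-block fields are named `cnt_on` and `cnt_off` as listed, and that the PWM clock source is already configured) programs an approximately 50% duty cycle:

```
use cec1712_pac as pac;

fn set_half_duty(pwm: &pac::PWM0, period_cycles: u32) {
    let half = period_cycles / 2;
    // On/Off times are (n + 1) clock cycles, so subtract 1 from each half.
    pwm.cnt_on.write(|w| unsafe { w.bits(half.saturating_sub(1)) });
    pwm.cnt_off.write(|w| unsafe { w.bits(half.saturating_sub(1)) });
}
```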
Module cec1712_pac::led0
===
The LED is implemented using a PWM that can be driven either by the 48 MHz clock or by a 32.768 KHz clock input.
Modules
---
cfgLED Configuration
dlyLED Delay
intrvlLED Update Interval
limitLED Limits This register may be written at any time. Values written into the register are held in an holding register, which is transferred into the actual register at the end of a PWM period. The two byte fields may be written independently. Reads of this register return the current contents and not the value of the holding register.
outdlyLED Output Delay
stepThis register has eight segment fields which provide the amount the current duty cycle is adjusted at the end of every PWM period. Segment field selection is decoded based on the segment index. The segment index equation utilized depends on the SYMMETRY bit in the LED Configuration Register. In Symmetric Mode the Segment_Index[2:0] = Duty Cycle Bits[7:5]. In Asymmetric Mode the Segment_Index[2:0] is the bit concatenation of the following: Segment_Index[2] = (FALLING RAMP TIME in Figure 30-3, Clipping Example) and Segment_Index[1:0] = Duty Cycle Bits[7:6].
Structs
---
RegisterBlockRegister block
Type Definitions
---
CFGCFG (rw) register accessor: an alias for `Reg<CFG_SPEC>`
DLYDLY (rw) register accessor: an alias for `Reg<DLY_SPEC>`
INTRVLINTRVL (rw) register accessor: an alias for `Reg<INTRVL_SPEC>`
LIMITLIMIT (rw) register accessor: an alias for `Reg<LIMIT_SPEC>`
OUTDLYOUTDLY (rw) register accessor: an alias for `Reg<OUTDLY_SPEC>`
STEPSTEP (rw) register accessor: an alias for `Reg<STEP_SPEC>`
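The segment selection rule quoted for `step` above reduces, in Symmetric Mode, to taking the top three bits of the 8-bit duty cycle. A small worked sketch of just that arithmetic (no register access involved):

```
// Symmetric Mode: Segment_Index[2:0] = Duty Cycle Bits[7:5].
fn symmetric_segment_index(duty_cycle: u8) -> u8 {
    (duty_cycle >> 5) & 0x07
}
// e.g. a duty cycle of 0xA0 (bits 7:5 = 0b101) selects segment field 5.
```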
Module cec1712_pac::smb0
===
The SMBus interface can handle standard SMBus 2.0 protocols as well as I2C interface.
Modules
---
bbctrlBit-Bang Control Register
blkidBlock ID Register
blkrevRevision Register
busclkBus Clock Register
cfgConfiguration Register
complCompletion Register
datatmData Timing Register
i2cdataThis register holds the data that are either shifted out to or shifted in from the I2C port.
idlscIdle Scaling Register
mcmdSMBus Master Command Register
mtr_rxbSMBus Master Receive Buffer Register
mtr_txbSMBus Master Transmit Buffer Register
own_addrOwn Address Register Note that the Data Register and Own Address fields are offset by one bit, so that programming Own Address 1 with a value of 55h will result in the value AAh being recognized as the SMB Controller Core slave address.
pecPacket Error Check (PEC) Register
prm_ctrlThis is the Promiscuous Control Register
prm_ienThis is the Promiscuous Interrupt Enable Register
prm_stsThis is the Promiscuous Interrupt Register
rshtmRepeated Start Hold Time Register
rstsStatus Register
rsvd1Reserved
rsvd2Reserved
scmdSMBus Slave Command Register
slv_addrThis is the Slave Address Register
slv_rxbSMBus Slave Receive Buffer Register
slv_txbSMBus Slave Transmit Buffer Register
testTest
tmoutscTime-Out Scaling Register
wake_enWAKE ENABLE Register
wake_stsWAKE STATUS Register
wctrlControl Register
Structs
---
RegisterBlockRegister block
Type Definitions
---
BBCTRLBBCTRL (rw) register accessor: an alias for `Reg<BBCTRL_SPEC>`
BLKIDBLKID (r) register accessor: an alias for `Reg<BLKID_SPEC>`
BLKREVBLKREV (r) register accessor: an alias for `Reg<BLKREV_SPEC>`
BUSCLKBUSCLK (rw) register accessor: an alias for `Reg<BUSCLK_SPEC>`
CFGCFG (rw) register accessor: an alias for `Reg<CFG_SPEC>`
COMPLCOMPL (rw) register accessor: an alias for `Reg<COMPL_SPEC>`
DATATMDATATM (rw) register accessor: an alias for `Reg<DATATM_SPEC>`
I2CDATAI2CDATA (rw) register accessor: an alias for `Reg<I2CDATA_SPEC>`
IDLSCIDLSC (rw) register accessor: an alias for `Reg<IDLSC_SPEC>`
MCMDMCMD (rw) register accessor: an alias for `Reg<MCMD_SPEC>`
MTR_RXBMTR_RXB (rw) register accessor: an alias for `Reg<MTR_RXB_SPEC>`
MTR_TXBMTR_TXB (rw) register accessor: an alias for `Reg<MTR_TXB_SPEC>`
OWN_ADDROWN_ADDR (rw) register accessor: an alias for `Reg<OWN_ADDR_SPEC>`
PECPEC (rw) register accessor: an alias for `Reg<PEC_SPEC>`
PRM_CTRLPRM_CTRL (rw) register accessor: an alias for `Reg<PRM_CTRL_SPEC>`
PRM_IENPRM_IEN (rw) register accessor: an alias for `Reg<PRM_IEN_SPEC>`
PRM_STSPRM_STS (rw) register accessor: an alias for `Reg<PRM_STS_SPEC>`
RSHTMRSHTM (rw) register accessor: an alias for `Reg<RSHTM_SPEC>`
RSTSRSTS (r) register accessor: an alias for `Reg<RSTS_SPEC>`
RSVD1RSVD1 (r) register accessor: an alias for `Reg<RSVD1_SPEC>`
RSVD2RSVD2 (r) register accessor: an alias for `Reg<RSVD2_SPEC>`
SCMDSCMD (rw) register accessor: an alias for `Reg<SCMD_SPEC>`
SLV_ADDRSLV_ADDR (rw) register accessor: an alias for `Reg<SLV_ADDR_SPEC>`
SLV_RXBSLV_RXB (rw) register accessor: an alias for `Reg<SLV_RXB_SPEC>`
SLV_TXBSLV_TXB (rw) register accessor: an alias for `Reg<SLV_TXB_SPEC>`
TESTTEST (r) register accessor: an alias for `Reg<TEST_SPEC>`
TMOUTSCTMOUTSC (rw) register accessor: an alias for `Reg<TMOUTSC_SPEC>`
WAKE_ENWAKE_EN (rw) register accessor: an alias for `Reg<WAKE_EN_SPEC>`
WAKE_STSWAKE_STS (rw) register accessor: an alias for `Reg<WAKE_STS_SPEC>`
WCTRLWCTRL (w) register accessor: an alias for `Reg<WCTRL_SPEC>`
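The `own_addr` note above (programming 55h results in AAh being matched on the bus) amounts to a one-bit offset between the register field and the address seen on the wire. A small sketch of that relationship, independent of any register access:

```
// Own Address field and matched bus address differ by one bit position.
fn recognized_bus_address(own_addr_field: u8) -> u8 {
    own_addr_field << 1 // 0x55 -> 0xAA, as in the description above
}
```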
Module cec1712_pac::i2c0
===
The I2C interface can handle standard I2C interface.
Modules
---
bb_ctrlBit-Bang Control Register
blkidBlock ID Register
blkrevRevision Register
busclkBus Clock Register
cfgConfiguration Register
clksyncThis is the Clock Sync Register. This register must not be written, or undesirable results may occur.
complCompletion Register
datatmData Timing Register
i2cdataThis register holds the data that are either shifted out to or shifted in from the I2C port.
own_addrOwn Address Register Note that the Data Register and Own Address fields are offset by one bit, so that programming Own Address 1 with a value of 55h will result in the value AAh being recognized as the SMB Controller Core slave address.
prm_ctrlThis is the Promiscuous Control Register. This register is functional only in Promiscuous mode.
prm_ienThis is the Promiscuous Interrupt Enable Register.
prm_stsThis is the Promiscuous Interrupt Register. This register bit will be functional only in Promiscuous mode.
rshtmRepeated Start Hold Time Register
rstsStatus Register
rsvd1Reserved
rsvd2Reserved
slv_addrThis is the Slave Address Register.
tmoutscTime-Out Scaling Register
wake_enWAKE ENABLE Register
wake_stsWAKE STATUS Register
wctrlControl Register
Structs
---
RegisterBlockRegister block
Type Definitions
---
BB_CTRLBB_CTRL (rw) register accessor: an alias for `Reg<BB_CTRL_SPEC>`
BLKIDBLKID (r) register accessor: an alias for `Reg<BLKID_SPEC>`
BLKREVBLKREV (r) register accessor: an alias for `Reg<BLKREV_SPEC>`
BUSCLKBUSCLK (rw) register accessor: an alias for `Reg<BUSCLK_SPEC>`
CFGCFG (rw) register accessor: an alias for `Reg<CFG_SPEC>`
CLKSYNCCLKSYNC (r) register accessor: an alias for `Reg<CLKSYNC_SPEC>`
COMPLCOMPL (rw) register accessor: an alias for `Reg<COMPL_SPEC>`
DATATMDATATM (rw) register accessor: an alias for `Reg<DATATM_SPEC>`
I2CDATAI2CDATA (rw) register accessor: an alias for `Reg<I2CDATA_SPEC>`
OWN_ADDROWN_ADDR (rw) register accessor: an alias for `Reg<OWN_ADDR_SPEC>`
PRM_CTRLPRM_CTRL (rw) register accessor: an alias for `Reg<PRM_CTRL_SPEC>`
PRM_IENPRM_IEN (rw) register accessor: an alias for `Reg<PRM_IEN_SPEC>`
PRM_STSPRM_STS (rw) register accessor: an alias for `Reg<PRM_STS_SPEC>`
RSHTMRSHTM (rw) register accessor: an alias for `Reg<RSHTM_SPEC>`
RSTSRSTS (r) register accessor: an alias for `Reg<RSTS_SPEC>`
RSVD1RSVD1 (r) register accessor: an alias for `Reg<RSVD1_SPEC>`
RSVD2RSVD2 (r) register accessor: an alias for `Reg<RSVD2_SPEC>`
SLV_ADDRSLV_ADDR (rw) register accessor: an alias for `Reg<SLV_ADDR_SPEC>`
TMOUTSCTMOUTSC (rw) register accessor: an alias for `Reg<TMOUTSC_SPEC>`
WAKE_ENWAKE_EN (rw) register accessor: an alias for `Reg<WAKE_EN_SPEC>`
WAKE_STSWAKE_STS (rw) register accessor: an alias for `Reg<WAKE_STS_SPEC>`
WCTRLWCTRL (w) register accessor: an alias for `Reg<WCTRL_SPEC>`
Module cec1712_pac::adc
===
This block is designed to convert external analog voltage readings into digital values.
Modules
---
cfgThe ADC Configuration Register is used to configure the ADC clock timing.
chan_rdAll 16 ADC channels return their results into a 32-bit reading register. In each case the low 10 bits of the reading register return the result of the Analog to Digital conversion and the upper 22 bits return 0.
chan_stsThe ADC Status Register indicates whether the ADC has completed a conversion cycle. All bits are cleared by being written with a 1.
0: conversion of the corresponding ADC channel is not complete 1: conversion of the corresponding ADC channel is complete
ctrlThe ADC Control Register is used to control the behavior of the Analog to Digital Converter.
delayThe ADC Delay register determines the delay from setting Start_Repeat in the ADC Control Register and the start of a conversion cycle. This register also controls the interval between conversion cycles in repeat mode.
rept_enThe ADC Repeat Register is used to control which ADC channels are captured during a repeat conversion cycle initiated by the Start_Repeat bit in the ADC Control Register.
sar_cfgThis is the SAR ADC Configuration Register.
sar_ctrlThis is the SAR ADC Control Register.
sng_enThe ADC Single Register is used to control which ADC channel is captured during a Single-Sample conversion cycle initiated by the Start_Single bit in the ADC Control Register.
APPLICATION NOTE: Do not change the bits in this register in the middle of a conversion cycle to insure proper operation.
0: single cycle conversions for this channel are disabled 1: single cycle conversions for this channel are enabled
vref_chanThe ADC Channel Register is used to configure the reference voltage to the clock timing.
vref_ctrlThis is the VREF Control Register
Structs
---
RegisterBlockRegister block
Type Definitions
---
CFGCFG (rw) register accessor: an alias for `Reg<CFG_SPEC>`
CHAN_RDCHAN_RD (rw) register accessor: an alias for `Reg<CHAN_RD_SPEC>`
CHAN_STSCHAN_STS (rw) register accessor: an alias for `Reg<CHAN_STS_SPEC>`
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
DELAYDELAY (rw) register accessor: an alias for `Reg<DELAY_SPEC>`
REPT_ENREPT_EN (rw) register accessor: an alias for `Reg<REPT_EN_SPEC>`
SAR_CFGSAR_CFG (rw) register accessor: an alias for `Reg<SAR_CFG_SPEC>`
SAR_CTRLSAR_CTRL (rw) register accessor: an alias for `Reg<SAR_CTRL_SPEC>`
SNG_ENSNG_EN (rw) register accessor: an alias for `Reg<SNG_EN_SPEC>`
VREF_CHANVREF_CHAN (rw) register accessor: an alias for `Reg<VREF_CHAN_SPEC>`
VREF_CTRLVREF_CTRL (rw) register accessor: an alias for `Reg<VREF_CTRL_SPEC>`
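Per the `chan_rd` description above, each 32-bit reading register carries the conversion result in bits [9:0] and returns 0 in the upper 22 bits, so extracting a sample is a simple mask. A minimal sketch of that extraction from a raw register value:

```
// Keep only the 10-bit conversion result.
fn adc_sample_from_raw(raw_reading: u32) -> u16 {
    (raw_reading & 0x03FF) as u16
}
```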
Module cec1712_pac::cct
===
This is a 16-bit auto-reloading timer/counter.
Modules
---
cap0This register saves the value copied from the Free Running timer on a programmed edge of ICT0.
cap0_ctrlThis register is used to configure capture and compare timers 0-3.
cap1This register saves the value copied from the Free Running timer on a programmed edge of ICT1.
cap1_ctrlThis register is used to configure capture and compare timers 4-5.
cap2This register saves the value copied from the Free Running timer on a programmed edge of ICT0.
cap3This register saves the value copied from the Free Running timer on a programmed edge of ICT0.
cap4This register saves the value copied from the Free Running timer on a programmed edge of ICT4.
cap5This register saves the value copied from the Free Running timer on a programmed edge of ICT5.
comp0A COMPARE 0 interrupt is generated when this register matches the value in the Free Running Timer.
comp1A COMPARE 1 interrupt is generated when this register matches the value in the Free Running Timer.
ctrlThis register controls the capture and compare timer.
free_runThis register contains the current value of the Free Running Timer.
mux_selThis register selects the pin mapping to the capture register.
Structs
---
RegisterBlockRegister block
Type Definitions
---
CAP0CAP0 (rw) register accessor: an alias for `Reg<CAP0_SPEC>`
CAP0_CTRLCAP0_CTRL (rw) register accessor: an alias for `Reg<CAP0_CTRL_SPEC>`
CAP1CAP1 (rw) register accessor: an alias for `Reg<CAP1_SPEC>`
CAP1_CTRLCAP1_CTRL (rw) register accessor: an alias for `Reg<CAP1_CTRL_SPEC>`
CAP2CAP2 (rw) register accessor: an alias for `Reg<CAP2_SPEC>`
CAP3CAP3 (rw) register accessor: an alias for `Reg<CAP3_SPEC>`
CAP4CAP4 (rw) register accessor: an alias for `Reg<CAP4_SPEC>`
CAP5CAP5 (rw) register accessor: an alias for `Reg<CAP5_SPEC>`
COMP0COMP0 (rw) register accessor: an alias for `Reg<COMP0_SPEC>`
COMP1COMP1 (rw) register accessor: an alias for `Reg<COMP1_SPEC>`
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
FREE_RUNFREE_RUN (rw) register accessor: an alias for `Reg<FREE_RUN_SPEC>`
MUX_SELMUX_SEL (rw) register accessor: an alias for `Reg<MUX_SEL_SPEC>`
Module cec1712_pac::dma_chan00
===
DMA Channel 00 Registers
Modules
---
activateEnable this channel for operation. The DMA Main Control: Activate must also be enabled for this channel to be operational.
crc_dataDMA CHANNEL N CRC DATA
crc_enDMA CHANNEL N CRC ENABLE
crc_post_stsDMA CHANNEL N CRC POST STATUS
ctrlDMA Channel N Control
dstartThis is the Master Device address.
ienDMA CHANNEL N INTERRUPT ENABLE
istsDMA Channel N Interrupt Status
mendThis is the ending address for the Memory device.
mstartThis is the starting address for the Memory device.
Structs
---
RegisterBlockRegister block
Type Definitions
---
ACTIVATEACTIVATE (rw) register accessor: an alias for `Reg<ACTIVATE_SPEC>`
CRC_DATACRC_DATA (rw) register accessor: an alias for `Reg<CRC_DATA_SPEC>`
CRC_ENCRC_EN (rw) register accessor: an alias for `Reg<CRC_EN_SPEC>`
CRC_POST_STSCRC_POST_STS (rw) register accessor: an alias for `Reg<CRC_POST_STS_SPEC>`
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
DSTARTDSTART (rw) register accessor: an alias for `Reg<DSTART_SPEC>`
IENIEN (rw) register accessor: an alias for `Reg<IEN_SPEC>`
ISTSISTS (rw) register accessor: an alias for `Reg<ISTS_SPEC>`
MENDMEND (rw) register accessor: an alias for `Reg<MEND_SPEC>`
MSTARTMSTART (rw) register accessor: an alias for `Reg<MSTART_SPEC>`
Module cec1712_pac::dma_chan01
===
DMA Channel 01 Registers
Modules
---
activateEnable this channel for operation. The DMA Main Control: Activate must also be enabled for this channel to be operational.
ctrlDMA Channel N Control
dstartThis is the Master Device address.
fill_dataDMA CHANNEL N FILL DATA
fill_enDMA CHANNEL N FILL ENABLE
fill_stsDMA CHANNEL N FILL STATUS
ienDMA CHANNEL N INTERRUPT ENABLE
istsDMA Channel N Interrupt Status
mendThis is the ending address for the Memory device.
mstartThis is the starting address for the Memory device.
Structs
---
RegisterBlockRegister block
Type Definitions
---
ACTIVATEACTIVATE (rw) register accessor: an alias for `Reg<ACTIVATE_SPEC>`
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
DSTARTDSTART (rw) register accessor: an alias for `Reg<DSTART_SPEC>`
FILL_DATAFILL_DATA (rw) register accessor: an alias for `Reg<FILL_DATA_SPEC>`
FILL_ENFILL_EN (rw) register accessor: an alias for `Reg<FILL_EN_SPEC>`
FILL_STSFILL_STS (rw) register accessor: an alias for `Reg<FILL_STS_SPEC>`
IENIEN (rw) register accessor: an alias for `Reg<IEN_SPEC>`
ISTSISTS (rw) register accessor: an alias for `Reg<ISTS_SPEC>`
MENDMEND (rw) register accessor: an alias for `Reg<MEND_SPEC>`
MSTARTMSTART (rw) register accessor: an alias for `Reg<MSTART_SPEC>`
Module cec1712_pac::dma_main
===
DMA Main Registers
Modules
---
actrstSoft reset the entire module. Enable the block's operation.
data_pktDebug register that has the data that is stored in the Data Packet. This data is read data from the currently active transfer source.
Structs
---
RegisterBlockRegister block
Type Definitions
---
ACTRSTACTRST (rw) register accessor: an alias for `Reg<ACTRST_SPEC>`
DATA_PKTDATA_PKT (r) register accessor: an alias for `Reg<DATA_PKT_SPEC>`
Module cec1712_pac::ec_reg_bank
===
This block is designed to be accessed internally by the EC via the register interface.
Modules
---
aesh_bswap_ctrlAES HASH Byte Swap Control Register.
ahb_err_addrAHB Error Address [0:0]
AHB_ERR_ADDR, In priority order:
ahb_err_ctrlAHB Error Control [0:0]
AHB_ERROR_DISABLE, 0: EC memory exceptions are enabled. 1: EC memory exceptions are disabled.
brom_stsThis register contains the VTR Reset Status for BOOT ROM.
crypto_srstSystem Shutdown Reset
debug_ctrlDebug Enable Register
etm_ctrlETM TRACE Enable [0:0]
TRACE_EN (TRACE_EN) This bit enables the ARM TRACE debug port (ETM/ITM). The Trace Debug Interface pins are forced to the TRACE functions. 0 = ARM TRACE port disabled, 1= ARM TRACE port enabled
fw_scr0BOOT ROM Scratch 0 Register
fw_scr1BOOT ROM Scratch 1 Register
fw_scr2BOOT ROM Scratch 2 Register
fw_scr3BOOT ROM Scratch 3 Register
gpio_bank_pwrGPIO Bank Power Register
intr_ctrlInterrupt Control [0:0]
NVIC_EN (NVIC_EN) This bit enables Alternate NVIC IRQ’s Vectors. The Alternate NVIC Vectors provides each interrupt event with a dedicated (direct) NVIC vector.
0 = Alternate NVIC vectors disabled, 1= Alternate NVIC vectors enabled
jtag_mcfgJTAG Master Configuration Register
jtag_mcmdJTAG Master Command Register
jtag_mstsJTAG Master Status Register
jtag_mtdiJTAG Master TDI Register
jtag_mtdoJTAG Master TDO Register
jtag_mtmsJTAG Master TMS Register
otp_lockLock Register
peci_disPECI Disable
stap_tmirThis register is a mirror of the Boot Control Register.
wdt_cntWDT Event Count [3:0]
WDT_COUNT (WDT_COUNT) These EC R/W bits are cleared to 0 on VCC1 POR, but not on a WDT.
Note: This field is written by Boot ROM firmware to indicate the number of times a WDT fired before loading a good EC code image.
Structs
---
RegisterBlockRegister block
Type Definitions
---
AESH_BSWAP_CTRLAESH_BSWAP_CTRL (rw) register accessor: an alias for `Reg<AESH_BSWAP_CTRL_SPEC>`
AHB_ERR_ADDRAHB_ERR_ADDR (rw) register accessor: an alias for `Reg<AHB_ERR_ADDR_SPEC>`
AHB_ERR_CTRLAHB_ERR_CTRL (rw) register accessor: an alias for `Reg<AHB_ERR_CTRL_SPEC>`
BROM_STSBROM_STS (rw) register accessor: an alias for `Reg<BROM_STS_SPEC>`
CRYPTO_SRSTCRYPTO_SRST (rw) register accessor: an alias for `Reg<CRYPTO_SRST_SPEC>`
DEBUG_CTRLDEBUG_CTRL (rw) register accessor: an alias for `Reg<DEBUG_CTRL_SPEC>`
ETM_CTRLETM_CTRL (rw) register accessor: an alias for `Reg<ETM_CTRL_SPEC>`
FW_SCR0FW_SCR0 (rw) register accessor: an alias for `Reg<FW_SCR0_SPEC>`
FW_SCR1FW_SCR1 (rw) register accessor: an alias for `Reg<FW_SCR1_SPEC>`
FW_SCR2FW_SCR2 (rw) register accessor: an alias for `Reg<FW_SCR2_SPEC>`
FW_SCR3FW_SCR3 (rw) register accessor: an alias for `Reg<FW_SCR3_SPEC>`
GPIO_BANK_PWRGPIO_BANK_PWR (rw) register accessor: an alias for `Reg<GPIO_BANK_PWR_SPEC>`
INTR_CTRLINTR_CTRL (rw) register accessor: an alias for `Reg<INTR_CTRL_SPEC>`
JTAG_MCFGJTAG_MCFG (rw) register accessor: an alias for `Reg<JTAG_MCFG_SPEC>`
JTAG_MCMDJTAG_MCMD (rw) register accessor: an alias for `Reg<JTAG_MCMD_SPEC>`
JTAG_MSTSJTAG_MSTS (r) register accessor: an alias for `Reg<JTAG_MSTS_SPEC>`
JTAG_MTDIJTAG_MTDI (rw) register accessor: an alias for `Reg<JTAG_MTDI_SPEC>`
JTAG_MTDOJTAG_MTDO (rw) register accessor: an alias for `Reg<JTAG_MTDO_SPEC>`
JTAG_MTMSJTAG_MTMS (rw) register accessor: an alias for `Reg<JTAG_MTMS_SPEC>`
OTP_LOCKOTP_LOCK (rw) register accessor: an alias for `Reg<OTP_LOCK_SPEC>`
PECI_DISPECI_DIS (rw) register accessor: an alias for `Reg<PECI_DIS_SPEC>`
STAP_TMIRSTAP_TMIR (r) register accessor: an alias for `Reg<STAP_TMIR_SPEC>`
WDT_CNTWDT_CNT (rw) register accessor: an alias for `Reg<WDT_CNT_SPEC>`
Module cec1712_pac::ecia
===
The ECIA works in conjunction with the processor interrupt interface to handle hardware interrupts and exceptions.
Modules
---
blk_en_clrBlock Enable Clear Register.
blk_en_setBlock Enable Set Register
blk_irq_vtorBlock IRQ Vector Register
en_clr8GIRQ8 Enable Clear Register
en_clr9GIRQ9 Enable Clear Register
en_clr10GIRQ10 Enable Clear Register
en_clr11GIRQ11 Enable Clear Register
en_clr12GIRQ12 Enable Clear Register
en_clr13GIRQ13 Enable Clear Register
en_clr14GIRQ14 Enable Clear Register
en_clr15GIRQ15 Enable Clear Register
en_clr16GIRQ16 Enable Clear Register
en_clr17GIRQ17 Enable Clear Register
en_clr18GIRQ18 Enable Clear Register
en_clr19GIRQ19 Enable Clear Register
en_clr20GIRQ20 Enable Clear Register
en_clr21GIRQ21 Enable Clear Register
en_clr22GIRQ22 Enable Clear Register
en_clr23GIRQ23 Enable Clear Register
en_clr24GIRQ24 Enable Clear Register
en_clr25GIRQ25 Enable Clear Register
en_clr26GIRQ26 Enable Clear Register
en_set8GIRQ8 Enable Set Register
en_set9GIRQ9 Enable Set Register
en_set10GIRQ10 Enable Set Register
en_set11GIRQ11 Enable Set Register
en_set12GIRQ12 Enable Set Register
en_set13GIRQ13 Enable Set Register
en_set14GIRQ14 Enable Set Register
en_set15GIRQ15 Enable Set Register
en_set16GIRQ16 Enable Set Register
en_set17GIRQ17 Enable Set Register
en_set18GIRQ18 Enable Set Register
en_set19GIRQ19 Enable Set Register
en_set20GIRQ20 Enable Set Register
en_set21GIRQ21 Enable Set Register
en_set22GIRQ22 Enable Set Register
en_set23GIRQ23 Enable Set Register
en_set24GIRQ24 Enable Set Register
en_set25GIRQ25 Enable Set Register
en_set26GIRQ26 Enable Set Register
result8GIRQ8 Result Register
result9GIRQ9 Result Register
result10GIRQ10 Result Register
result11GIRQ11 Result Register
result12GIRQ12 Result Register
result13GIRQ13 Result Register
result14GIRQ14 Result Register
result15GIRQ15 Result Register
result16GIRQ16 Result Register
result17GIRQ17 Result Register
result18GIRQ18 Result Register
result19GIRQ19 Result Register
result20GIRQ20 Result Register
result21GIRQ21 Result Register
result22GIRQ22 Result Register
result23GIRQ23 Result Register
result24GIRQ24 Result Register
result25GIRQ25 Result Register
result26GIRQ26 Result Register
src8GIRQ8 Source Register
src9GIRQ9 Source Register
src10GIRQ10 Source Register
src11GIRQ11 Source Register
src12GIRQ12 Source Register
src13GIRQ13 Source Register
src14GIRQ14 Source Register
src15GIRQ15 Source Register
src16GIRQ16 Source Register
src17GIRQ17 Source Register
src18GIRQ18 Source Register
src19GIRQ19 Source Register
src20GIRQ20 Source Register
src21GIRQ21 Source Register
src22GIRQ22 Source Register
src23GIRQ23 Source Register
src24GIRQ24 Source Register
src25GIRQ25 Source Register
src26GIRQ26 Source Register
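As an illustrative sketch of how these registers are typically driven through the owned `Peripherals` singleton: the register field names are assumed to mirror the lowercase module names above, and GIRQ13, the bit positions, and the write-1-to-clear behaviour of the source register are placeholders, not a specific device mapping.
```
use cec1712_pac::Peripherals;
fn girq13_sketch(p: &Peripherals) {
// Route GIRQ13 onto its aggregated NVIC output (bit position assumed).
p.ECIA.blk_en_set.write(|w| unsafe { w.bits(1 << 13) });
// Enable source bit 0 within GIRQ13 (placeholder source).
p.ECIA.en_set13.write(|w| unsafe { w.bits(1 << 0) });
// In the handler: check the result bit and clear the source (assumed write-1-to-clear).
if (p.ECIA.result13.read().bits() & 1) != 0 {
p.ECIA.src13.write(|w| unsafe { w.bits(1 << 0) });
}
}
```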
Structs
---
RegisterBlockRegister block
Type Definitions
---
BLK_EN_CLRBLK_EN_CLR (rw) register accessor: an alias for `Reg<BLK_EN_CLR_SPEC>`
BLK_EN_SETBLK_EN_SET (rw) register accessor: an alias for `Reg<BLK_EN_SET_SPEC>`
BLK_IRQ_VTORBLK_IRQ_VTOR (r) register accessor: an alias for `Reg<BLK_IRQ_VTOR_SPEC>`
EN_CLR8EN_CLR8 (rw) register accessor: an alias for `Reg<EN_CLR8_SPEC>`
EN_CLR9EN_CLR9 (rw) register accessor: an alias for `Reg<EN_CLR9_SPEC>`
EN_CLR10EN_CLR10 (rw) register accessor: an alias for `Reg<EN_CLR10_SPEC>`
EN_CLR11EN_CLR11 (rw) register accessor: an alias for `Reg<EN_CLR11_SPEC>`
EN_CLR12EN_CLR12 (rw) register accessor: an alias for `Reg<EN_CLR12_SPEC>`
EN_CLR13EN_CLR13 (rw) register accessor: an alias for `Reg<EN_CLR13_SPEC>`
EN_CLR14EN_CLR14 (rw) register accessor: an alias for `Reg<EN_CLR14_SPEC>`
EN_CLR15EN_CLR15 (rw) register accessor: an alias for `Reg<EN_CLR15_SPEC>`
EN_CLR16EN_CLR16 (rw) register accessor: an alias for `Reg<EN_CLR16_SPEC>`
EN_CLR17EN_CLR17 (rw) register accessor: an alias for `Reg<EN_CLR17_SPEC>`
EN_CLR18EN_CLR18 (rw) register accessor: an alias for `Reg<EN_CLR18_SPEC>`
EN_CLR19EN_CLR19 (rw) register accessor: an alias for `Reg<EN_CLR19_SPEC>`
EN_CLR20EN_CLR20 (rw) register accessor: an alias for `Reg<EN_CLR20_SPEC>`
EN_CLR21EN_CLR21 (rw) register accessor: an alias for `Reg<EN_CLR21_SPEC>`
EN_CLR22EN_CLR22 (rw) register accessor: an alias for `Reg<EN_CLR22_SPEC>`
EN_CLR23EN_CLR23 (rw) register accessor: an alias for `Reg<EN_CLR23_SPEC>`
EN_CLR24EN_CLR24 (rw) register accessor: an alias for `Reg<EN_CLR24_SPEC>`
EN_CLR25EN_CLR25 (rw) register accessor: an alias for `Reg<EN_CLR25_SPEC>`
EN_CLR26EN_CLR26 (rw) register accessor: an alias for `Reg<EN_CLR26_SPEC>`
EN_SET8EN_SET8 (rw) register accessor: an alias for `Reg<EN_SET8_SPEC>`
EN_SET9EN_SET9 (rw) register accessor: an alias for `Reg<EN_SET9_SPEC>`
EN_SET10EN_SET10 (rw) register accessor: an alias for `Reg<EN_SET10_SPEC>`
EN_SET11EN_SET11 (rw) register accessor: an alias for `Reg<EN_SET11_SPEC>`
EN_SET12EN_SET12 (rw) register accessor: an alias for `Reg<EN_SET12_SPEC>`
EN_SET13EN_SET13 (rw) register accessor: an alias for `Reg<EN_SET13_SPEC>`
EN_SET14EN_SET14 (rw) register accessor: an alias for `Reg<EN_SET14_SPEC>`
EN_SET15EN_SET15 (rw) register accessor: an alias for `Reg<EN_SET15_SPEC>`
EN_SET16EN_SET16 (rw) register accessor: an alias for `Reg<EN_SET16_SPEC>`
EN_SET17EN_SET17 (rw) register accessor: an alias for `Reg<EN_SET17_SPEC>`
EN_SET18EN_SET18 (rw) register accessor: an alias for `Reg<EN_SET18_SPEC>`
EN_SET19EN_SET19 (rw) register accessor: an alias for `Reg<EN_SET19_SPEC>`
EN_SET20EN_SET20 (rw) register accessor: an alias for `Reg<EN_SET20_SPEC>`
EN_SET21EN_SET21 (rw) register accessor: an alias for `Reg<EN_SET21_SPEC>`
EN_SET22EN_SET22 (rw) register accessor: an alias for `Reg<EN_SET22_SPEC>`
EN_SET23EN_SET23 (rw) register accessor: an alias for `Reg<EN_SET23_SPEC>`
EN_SET24EN_SET24 (rw) register accessor: an alias for `Reg<EN_SET24_SPEC>`
EN_SET25EN_SET25 (rw) register accessor: an alias for `Reg<EN_SET25_SPEC>`
EN_SET26EN_SET26 (rw) register accessor: an alias for `Reg<EN_SET26_SPEC>`
RESULT8RESULT8 (r) register accessor: an alias for `Reg<RESULT8_SPEC>`
RESULT9RESULT9 (r) register accessor: an alias for `Reg<RESULT9_SPEC>`
RESULT10RESULT10 (r) register accessor: an alias for `Reg<RESULT10_SPEC>`
RESULT11RESULT11 (r) register accessor: an alias for `Reg<RESULT11_SPEC>`
RESULT12RESULT12 (r) register accessor: an alias for `Reg<RESULT12_SPEC>`
RESULT13RESULT13 (r) register accessor: an alias for `Reg<RESULT13_SPEC>`
RESULT14RESULT14 (r) register accessor: an alias for `Reg<RESULT14_SPEC>`
RESULT15RESULT15 (r) register accessor: an alias for `Reg<RESULT15_SPEC>`
RESULT16RESULT16 (r) register accessor: an alias for `Reg<RESULT16_SPEC>`
RESULT17RESULT17 (r) register accessor: an alias for `Reg<RESULT17_SPEC>`
RESULT18RESULT18 (r) register accessor: an alias for `Reg<RESULT18_SPEC>`
RESULT19RESULT19 (r) register accessor: an alias for `Reg<RESULT19_SPEC>`
RESULT20RESULT20 (r) register accessor: an alias for `Reg<RESULT20_SPEC>`
RESULT21RESULT21 (r) register accessor: an alias for `Reg<RESULT21_SPEC>`
RESULT22RESULT22 (r) register accessor: an alias for `Reg<RESULT22_SPEC>`
RESULT23RESULT23 (r) register accessor: an alias for `Reg<RESULT23_SPEC>`
RESULT24RESULT24 (r) register accessor: an alias for `Reg<RESULT24_SPEC>`
RESULT25RESULT25 (r) register accessor: an alias for `Reg<RESULT25_SPEC>`
RESULT26RESULT26 (r) register accessor: an alias for `Reg<RESULT26_SPEC>`
SRC8SRC8 (rw) register accessor: an alias for `Reg<SRC8_SPEC>`
SRC9SRC9 (rw) register accessor: an alias for `Reg<SRC9_SPEC>`
SRC10SRC10 (rw) register accessor: an alias for `Reg<SRC10_SPEC>`
SRC11SRC11 (rw) register accessor: an alias for `Reg<SRC11_SPEC>`
SRC12SRC12 (rw) register accessor: an alias for `Reg<SRC12_SPEC>`
SRC13SRC13 (rw) register accessor: an alias for `Reg<SRC13_SPEC>`
SRC14SRC14 (rw) register accessor: an alias for `Reg<SRC14_SPEC>`
SRC15SRC15 (rw) register accessor: an alias for `Reg<SRC15_SPEC>`
SRC16SRC16 (rw) register accessor: an alias for `Reg<SRC16_SPEC>`
SRC17SRC17 (rw) register accessor: an alias for `Reg<SRC17_SPEC>`
SRC18SRC18 (rw) register accessor: an alias for `Reg<SRC18_SPEC>`
SRC19SRC19 (rw) register accessor: an alias for `Reg<SRC19_SPEC>`
SRC20SRC20 (rw) register accessor: an alias for `Reg<SRC20_SPEC>`
SRC21SRC21 (rw) register accessor: an alias for `Reg<SRC21_SPEC>`
SRC22SRC22 (rw) register accessor: an alias for `Reg<SRC22_SPEC>`
SRC23SRC23 (rw) register accessor: an alias for `Reg<SRC23_SPEC>`
SRC24SRC24 (rw) register accessor: an alias for `Reg<SRC24_SPEC>`
SRC25SRC25 (rw) register accessor: an alias for `Reg<SRC25_SPEC>`
SRC26SRC26 (rw) register accessor: an alias for `Reg<SRC26_SPEC>`
Module cec1712_pac::gcr
===
The Logical Device Configuration registers support motherboard designs in which the resources required by their components are known and assigned by the BIOS at POST.
Modules
---
dev_idA read-only register which provides device identification.
dev_revA read-only register which provides device revision information.
dev_subidA read-only register which provides device sub ID information.
ldnA write to this register selects the current logical device. This allows access to the control and configuration registers for each logical device. Note: The Activate command operates only on the selected logical device.
leg_dev_idA read-only register which provides legacy device identification.
leg_dev_revA read-only register which provides legacy device revision information.
Structs
---
RegisterBlockRegister block
Type Definitions
---
DEV_IDDEV_ID (r) register accessor: an alias for `Reg<DEV_ID_SPEC>`
DEV_REVDEV_REV (r) register accessor: an alias for `Reg<DEV_REV_SPEC>`
DEV_SUBIDDEV_SUBID (r) register accessor: an alias for `Reg<DEV_SUBID_SPEC>`
LDNLDN (rw) register accessor: an alias for `Reg<LDN_SPEC>`
LEG_DEV_IDLEG_DEV_ID (r) register accessor: an alias for `Reg<LEG_DEV_ID_SPEC>`
LEG_DEV_REVLEG_DEV_REV (r) register accessor: an alias for `Reg<LEG_DEV_REV_SPEC>`
Module cec1712_pac::generic
===
Common register and bit access and modify traits
Structs
---
ArrayProxyAccess an array of `COUNT` items of type `T` with the items `STRIDE` bytes apart. This is a zero-sized-type. No objects of this type are ever actually created, it is only a convenience for wrapping pointer arithmetic.
RRegister reader.
RegThis structure provides volatile access to registers. A usage sketch follows this module listing.
WRegister writer.
Traits
---
ReadableTrait implemented by readable registers to enable the `read` method.
RegisterSpecRaw register type
ResettableReset value of the register.
WritableTrait implemented by writeable registers.
Type Definitions
---
BitReaderBit-wise field reader
BitWriterBit-wise write field proxy
BitWriter0CBit-wise write field proxy
BitWriter0SBit-wise write field proxy
BitWriter0TBit-wise write field proxy
BitWriter1CBit-wise write field proxy
BitWriter1SBit-wise write field proxy
BitWriter1TBit-wise write field proxy
FieldReaderField reader.
FieldWriterWrite field Proxy with unsafe `bits`
FieldWriterSafeWrite field Proxy with safe `bits`
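A minimal sketch of the resulting access pattern, using the RTOS timer registers purely as an example; the register field names are assumed to mirror the lowercase module names generated for each peripheral.
```
use cec1712_pac::Peripherals;
fn generic_access_sketch(p: &Peripherals) {
// `read` returns an `R` reader; `bits` yields the raw register value.
let count = p.RTOS.cnt.read().bits();
// `write` starts from the reset value; `bits` is unsafe because it bypasses
// per-field range checks.
p.RTOS.prld.write(|w| unsafe { w.bits(count) });
// `modify` is a read-modify-write: here bit 0 is set and all other bits are kept.
p.RTOS.ctrl.modify(|r, w| unsafe { w.bits(r.bits() | 1) });
}
```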
Module cec1712_pac::gpio
===
GPIO Pin Control Registers
Modules
---
ctrl0GPIO Pin Control Register
ctrl1GPIO Pin Control Register
ctrl2GPIO Pin Control Register
ctrl2p0The GPIO PIN_CTRL2 Registers
ctrl2p1The GPIO PIN_CTRL2 Registers
ctrl2p2The GPIO PIN_CTRL2 Registers
ctrl2p3The GPIO PIN_CTRL2 Registers
ctrl2p4The GPIO PIN_CTRL2 Registers
ctrl2p5The GPIO PIN_CTRL2 Registers
ctrl2p6The GPIO PIN_CTRL2 Registers
ctrl2p7The GPIO PIN_CTRL2 Registers
ctrl2p10The GPIO PIN_CTRL2 Registers
ctrl2p11The GPIO PIN_CTRL2 Registers
ctrl2p12The GPIO PIN_CTRL2 Registers
ctrl2p13The GPIO PIN_CTRL2 Registers
ctrl2p14The GPIO PIN_CTRL2 Registers
ctrl2p15The GPIO PIN_CTRL2 Registers
ctrl2p16The GPIO PIN_CTRL2 Registers
ctrl2p17The GPIO PIN_CTRL2 Registers
ctrl2p20The GPIO PIN_CTRL2 Registers
ctrl2p21The GPIO PIN_CTRL2 Registers
ctrl2p22The GPIO PIN_CTRL2 Registers
ctrl2p23The GPIO PIN_CTRL2 Registers
ctrl2p24The GPIO PIN_CTRL2 Registers
ctrl2p25The GPIO PIN_CTRL2 Registers
ctrl2p26The GPIO PIN_CTRL2 Registers
ctrl3GPIO Pin Control Register
ctrl4GPIO Pin Control Register
ctrl5GPIO Pin Control Register
ctrl6GPIO Pin Control Register
ctrl7GPIO Pin Control Register
ctrl10GPIO Pin Control Register
ctrl11GPIO Pin Control Register
ctrl12GPIO Pin Control Register
ctrl13GPIO Pin Control Register
ctrl14GPIO Pin Control Register
ctrl15GPIO Pin Control Register
ctrl16GPIO Pin Control Register
ctrl17GPIO Pin Control Register
ctrl20GPIO Pin Control Register
ctrl21GPIO Pin Control Register
ctrl22GPIO Pin Control Register
ctrl23GPIO Pin Control Register
ctrl24GPIO Pin Control Register
ctrl25GPIO Pin Control Register
ctrl26GPIO Pin Control Register
parinThe GPIO Input Registers.
paroutThe GPIO Output Registers.
Structs
---
RegisterBlockRegister block
Type Definitions
---
CTRL0CTRL0 (rw) register accessor: an alias for `Reg<CTRL0_SPEC>`
CTRL1CTRL1 (rw) register accessor: an alias for `Reg<CTRL1_SPEC>`
CTRL2CTRL2 (rw) register accessor: an alias for `Reg<CTRL2_SPEC>`
CTRL2P0CTRL2P0 (rw) register accessor: an alias for `Reg<CTRL2P0_SPEC>`
CTRL2P1CTRL2P1 (rw) register accessor: an alias for `Reg<CTRL2P1_SPEC>`
CTRL2P2CTRL2P2 (rw) register accessor: an alias for `Reg<CTRL2P2_SPEC>`
CTRL2P3CTRL2P3 (rw) register accessor: an alias for `Reg<CTRL2P3_SPEC>`
CTRL2P4CTRL2P4 (rw) register accessor: an alias for `Reg<CTRL2P4_SPEC>`
CTRL2P5CTRL2P5 (rw) register accessor: an alias for `Reg<CTRL2P5_SPEC>`
CTRL2P6CTRL2P6 (rw) register accessor: an alias for `Reg<CTRL2P6_SPEC>`
CTRL2P7CTRL2P7 (rw) register accessor: an alias for `Reg<CTRL2P7_SPEC>`
CTRL2P10CTRL2P10 (rw) register accessor: an alias for `Reg<CTRL2P10_SPEC>`
CTRL2P11CTRL2P11 (rw) register accessor: an alias for `Reg<CTRL2P11_SPEC>`
CTRL2P12CTRL2P12 (rw) register accessor: an alias for `Reg<CTRL2P12_SPEC>`
CTRL2P13CTRL2P13 (rw) register accessor: an alias for `Reg<CTRL2P13_SPEC>`
CTRL2P14CTRL2P14 (rw) register accessor: an alias for `Reg<CTRL2P14_SPEC>`
CTRL2P15CTRL2P15 (rw) register accessor: an alias for `Reg<CTRL2P15_SPEC>`
CTRL2P16CTRL2P16 (rw) register accessor: an alias for `Reg<CTRL2P16_SPEC>`
CTRL2P17CTRL2P17 (rw) register accessor: an alias for `Reg<CTRL2P17_SPEC>`
CTRL2P20CTRL2P20 (rw) register accessor: an alias for `Reg<CTRL2P20_SPEC>`
CTRL2P21CTRL2P21 (rw) register accessor: an alias for `Reg<CTRL2P21_SPEC>`
CTRL2P22CTRL2P22 (rw) register accessor: an alias for `Reg<CTRL2P22_SPEC>`
CTRL2P23CTRL2P23 (rw) register accessor: an alias for `Reg<CTRL2P23_SPEC>`
CTRL2P24CTRL2P24 (rw) register accessor: an alias for `Reg<CTRL2P24_SPEC>`
CTRL2P25CTRL2P25 (rw) register accessor: an alias for `Reg<CTRL2P25_SPEC>`
CTRL2P26CTRL2P26 (rw) register accessor: an alias for `Reg<CTRL2P26_SPEC>`
CTRL3CTRL3 (rw) register accessor: an alias for `Reg<CTRL3_SPEC>`
CTRL4CTRL4 (rw) register accessor: an alias for `Reg<CTRL4_SPEC>`
CTRL5CTRL5 (rw) register accessor: an alias for `Reg<CTRL5_SPEC>`
CTRL6CTRL6 (rw) register accessor: an alias for `Reg<CTRL6_SPEC>`
CTRL7CTRL7 (rw) register accessor: an alias for `Reg<CTRL7_SPEC>`
CTRL10CTRL10 (rw) register accessor: an alias for `Reg<CTRL10_SPEC>`
CTRL11CTRL11 (rw) register accessor: an alias for `Reg<CTRL11_SPEC>`
CTRL12CTRL12 (rw) register accessor: an alias for `Reg<CTRL12_SPEC>`
CTRL13CTRL13 (rw) register accessor: an alias for `Reg<CTRL13_SPEC>`
CTRL14CTRL14 (rw) register accessor: an alias for `Reg<CTRL14_SPEC>`
CTRL15CTRL15 (rw) register accessor: an alias for `Reg<CTRL15_SPEC>`
CTRL16CTRL16 (rw) register accessor: an alias for `Reg<CTRL16_SPEC>`
CTRL17CTRL17 (rw) register accessor: an alias for `Reg<CTRL17_SPEC>`
CTRL20CTRL20 (rw) register accessor: an alias for `Reg<CTRL20_SPEC>`
CTRL21CTRL21 (rw) register accessor: an alias for `Reg<CTRL21_SPEC>`
CTRL22CTRL22 (rw) register accessor: an alias for `Reg<CTRL22_SPEC>`
CTRL23CTRL23 (rw) register accessor: an alias for `Reg<CTRL23_SPEC>`
CTRL24CTRL24 (rw) register accessor: an alias for `Reg<CTRL24_SPEC>`
CTRL25CTRL25 (rw) register accessor: an alias for `Reg<CTRL25_SPEC>`
CTRL26CTRL26 (rw) register accessor: an alias for `Reg<CTRL26_SPEC>`
PARINPARIN (rw) register accessor: an alias for `Reg<PARIN_SPEC>`
PAROUTPAROUT (rw) register accessor: an alias for `Reg<PAROUT_SPEC>`
Module cec1712_pac::pcr
===
The Power, Clocks, and Resets (PCR) Section identifies clock sources, and reset inputs to the chip
Modules
---
clk_req_0Clock Required 0 Register
clk_req_1Clock Required 1 Register
clk_req_2Clock Required 2 Register
clk_req_3Clock Required 3 Register
clk_req_4Clock Required 4 Register
lock_regLOCK Register
osc_idOscillator ID Register
proc_clk_ctrlProcessor Clock Control Register [7:0]: Processor Clock Divide Value (PROC_DIV)
pwr_rst_ctrlPower Reset Control Register
pwr_rst_stsPCR Power Reset Status Register
rst_en_0Reset Enable 0 Register
rst_en_1Reset Enable 1 Register
rst_en_2Reset Enable 2 Register
rst_en_3Reset Enable 3 Register
rst_en_4Reset Enable 4 Register
slow_clk_ctrlConfigures the EC_CLK clock domain
slp_en_0Sleep Enable 0 Register
slp_en_1Sleep Enable 1 Register
slp_en_2Sleep Enable 2 Register
slp_en_3Sleep Enable 3 Register
slp_en_4Sleep Enable 4 Register
sys_rstSystem Reset Register
sys_slp_ctrlSystem Sleep Control
Structs
---
RegisterBlockRegister block
Type Definitions
---
CLK_REQ_0CLK_REQ_0 (rw) register accessor: an alias for `Reg<CLK_REQ_0_SPEC>`
CLK_REQ_1CLK_REQ_1 (rw) register accessor: an alias for `Reg<CLK_REQ_1_SPEC>`
CLK_REQ_2CLK_REQ_2 (rw) register accessor: an alias for `Reg<CLK_REQ_2_SPEC>`
CLK_REQ_3CLK_REQ_3 (rw) register accessor: an alias for `Reg<CLK_REQ_3_SPEC>`
CLK_REQ_4CLK_REQ_4 (rw) register accessor: an alias for `Reg<CLK_REQ_4_SPEC>`
LOCK_REGLOCK_REG (rw) register accessor: an alias for `Reg<LOCK_REG_SPEC>`
OSC_IDOSC_ID (rw) register accessor: an alias for `Reg<OSC_ID_SPEC>`
PROC_CLK_CTRLPROC_CLK_CTRL (rw) register accessor: an alias for `Reg<PROC_CLK_CTRL_SPEC>`
PWR_RST_CTRLPWR_RST_CTRL (rw) register accessor: an alias for `Reg<PWR_RST_CTRL_SPEC>`
PWR_RST_STSPWR_RST_STS (rw) register accessor: an alias for `Reg<PWR_RST_STS_SPEC>`
RST_EN_0RST_EN_0 (rw) register accessor: an alias for `Reg<RST_EN_0_SPEC>`
RST_EN_1RST_EN_1 (rw) register accessor: an alias for `Reg<RST_EN_1_SPEC>`
RST_EN_2RST_EN_2 (rw) register accessor: an alias for `Reg<RST_EN_2_SPEC>`
RST_EN_3RST_EN_3 (rw) register accessor: an alias for `Reg<RST_EN_3_SPEC>`
RST_EN_4RST_EN_4 (rw) register accessor: an alias for `Reg<RST_EN_4_SPEC>`
SLOW_CLK_CTRLSLOW_CLK_CTRL (rw) register accessor: an alias for `Reg<SLOW_CLK_CTRL_SPEC>`
SLP_EN_0SLP_EN_0 (rw) register accessor: an alias for `Reg<SLP_EN_0_SPEC>`
SLP_EN_1SLP_EN_1 (rw) register accessor: an alias for `Reg<SLP_EN_1_SPEC>`
SLP_EN_2SLP_EN_2 (rw) register accessor: an alias for `Reg<SLP_EN_2_SPEC>`
SLP_EN_3SLP_EN_3 (rw) register accessor: an alias for `Reg<SLP_EN_3_SPEC>`
SLP_EN_4SLP_EN_4 (rw) register accessor: an alias for `Reg<SLP_EN_4_SPEC>`
SYS_RSTSYS_RST (rw) register accessor: an alias for `Reg<SYS_RST_SPEC>`
SYS_SLP_CTRLSYS_SLP_CTRL (rw) register accessor: an alias for `Reg<SYS_SLP_CTRL_SPEC>`
Module cec1712_pac::qmspi
===
The QMSPI may be used to communicate with various peripheral devices that use a Serial Peripheral Interface
Modules
---
buf_cnt_stsQMSPI Buffer Count Status Register
buf_cnt_trigQMSPI Buffer Count Trigger Register
cstmQMSPI Chip Select Timing Register
ctrlQMSPI SPI Control
descrQMSPI Description Buffer 0 Register
exeQMSPI Execute Register
ienQMSPI Interrupt Enable Register
ifctrlQMSPI Interface Control Register
modeQMSPI Mode Register
rx_fifoQMSPI Receive Buffer Register
stsQMSPI Status Register
tx_fifoQMSPI Transmit Buffer Register
Structs
---
RegisterBlockRegister block
Type Definitions
---
BUF_CNT_STSBUF_CNT_STS (rw) register accessor: an alias for `Reg<BUF_CNT_STS_SPEC>`
BUF_CNT_TRIGBUF_CNT_TRIG (rw) register accessor: an alias for `Reg<BUF_CNT_TRIG_SPEC>`
CSTMCSTM (rw) register accessor: an alias for `Reg<CSTM_SPEC>`
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
DESCRDESCR (rw) register accessor: an alias for `Reg<DESCR_SPEC>`
EXEEXE (rw) register accessor: an alias for `Reg<EXE_SPEC>`
IENIEN (rw) register accessor: an alias for `Reg<IEN_SPEC>`
IFCTRLIFCTRL (rw) register accessor: an alias for `Reg<IFCTRL_SPEC>`
MODEMODE (rw) register accessor: an alias for `Reg<MODE_SPEC>`
RX_FIFORX_FIFO (rw) register accessor: an alias for `Reg<RX_FIFO_SPEC>`
STSSTS (rw) register accessor: an alias for `Reg<STS_SPEC>`
TX_FIFOTX_FIFO (rw) register accessor: an alias for `Reg<TX_FIFO_SPEC>`
Module cec1712_pac::rtc
===
This is the set of registers that are automatically counted by hardware every 1 second while the block is enabled
Modules
---
ctrlRTC Control Register
day_of_monDay of Month Register
day_of_wkDay of Week Register
daylt_savbDaylight Savings Backward Register
daylt_savfDaylight Savings Forward Register
hrHours Register
hr_alarmHours Alarm Register
minMinutes Register
min_alarmMinutes Alarm Register
monthMonth Register
regaRegister A
regbRegister B
regcRegister C
regdRegister D
secSeconds Register
sec_alarmSeconds Alarm Register
wk_alarmWeek Alarm Register[7:0]
yearYear Register
Structs
---
RegisterBlockRegister block
Type Definitions
---
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
DAYLT_SAVBDAYLT_SAVB (rw) register accessor: an alias for `Reg<DAYLT_SAVB_SPEC>`
DAYLT_SAVFDAYLT_SAVF (rw) register accessor: an alias for `Reg<DAYLT_SAVF_SPEC>`
DAY_OF_MONDAY_OF_MON (rw) register accessor: an alias for `Reg<DAY_OF_MON_SPEC>`
DAY_OF_WKDAY_OF_WK (rw) register accessor: an alias for `Reg<DAY_OF_WK_SPEC>`
HRHR (rw) register accessor: an alias for `Reg<HR_SPEC>`
HR_ALARMHR_ALARM (rw) register accessor: an alias for `Reg<HR_ALARM_SPEC>`
MINMIN (rw) register accessor: an alias for `Reg<MIN_SPEC>`
MIN_ALARMMIN_ALARM (rw) register accessor: an alias for `Reg<MIN_ALARM_SPEC>`
MONTHMONTH (rw) register accessor: an alias for `Reg<MONTH_SPEC>`
REGAREGA (rw) register accessor: an alias for `Reg<REGA_SPEC>`
REGBREGB (rw) register accessor: an alias for `Reg<REGB_SPEC>`
REGCREGC (rw) register accessor: an alias for `Reg<REGC_SPEC>`
REGDREGD (rw) register accessor: an alias for `Reg<REGD_SPEC>`
SECSEC (rw) register accessor: an alias for `Reg<SEC_SPEC>`
SEC_ALARMSEC_ALARM (rw) register accessor: an alias for `Reg<SEC_ALARM_SPEC>`
WK_ALARMWK_ALARM (rw) register accessor: an alias for `Reg<WK_ALARM_SPEC>`
YEARYEAR (rw) register accessor: an alias for `Reg<YEAR_SPEC>`
Module cec1712_pac::rtos
===
RTOS is a 32-bit timer designed to operate on the 32kHz oscillator which is available during all chip sleep states.
Modules
---
cntRTOS Timer Count Register.
ctrlRTOS Timer Control Register
prldRTOS Timer Preload Register
softirqSoft Interrupt Register
Structs
---
RegisterBlockRegister block
Type Definitions
---
CNTCNT (rw) register accessor: an alias for `Reg<CNT_SPEC>`
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
PRLDPRLD (rw) register accessor: an alias for `Reg<PRLD_SPEC>`
SOFTIRQSOFTIRQ (w) register accessor: an alias for `Reg<SOFTIRQ_SPEC>`
Module cec1712_pac::sys_tick
===
System timer
Modules
---
calibSysTick Calibration Value Register
csrSysTick Control and Status Register
cvrSysTick Current Value Register
rvrSysTick Reload Value Register
Structs
---
RegisterBlockRegister block
Type Definitions
---
CALIBCALIB (r) register accessor: an alias for `Reg<CALIB_SPEC>`
CSRCSR (rw) register accessor: an alias for `Reg<CSR_SPEC>`
CVRCVR (rw) register accessor: an alias for `Reg<CVR_SPEC>`
RVRRVR (rw) register accessor: an alias for `Reg<RVR_SPEC>`
Module cec1712_pac::system_control
===
System Control Registers
Modules
---
actlrAuxiliary Control Register
adrAuxiliary Feature Register
afsrAuxiliary Fault Status Register
aircrApplication Interrupt and Reset Control Register
bfarBusFault Address Register
ccrConfiguration and Control Register
cfsrConfigurable Fault Status Register
cpacrCoprocessor Access Control Register
cpuidCPUID Base Register
dfrDebug Feature Register
dfsrDebug Fault Status Register
hfsrHardFault Status Register
icsrInterrupt Control and State Register
ictrInterrupt Controller Type Register
isarInstruction Set Attributes Register
mmfarMemManage Fault Address Register
mmfrMemory Model Feature Register
pfrProcessor Feature Register
scrSystem Control Register
shcsrSystem Handler Control and State Register
shpr1System Handler Priority Register 1
shpr2System Handler Priority Register 2
shpr3System Handler Priority Register 3
Structs
---
RegisterBlockRegister block
Type Definitions
---
ACTLRACTLR (rw) register accessor: an alias for `Reg<ACTLR_SPEC>`
ADRADR (r) register accessor: an alias for `Reg<ADR_SPEC>`
AFSRAFSR (rw) register accessor: an alias for `Reg<AFSR_SPEC>`
AIRCRAIRCR (rw) register accessor: an alias for `Reg<AIRCR_SPEC>`
BFARBFAR (rw) register accessor: an alias for `Reg<BFAR_SPEC>`
CCRCCR (rw) register accessor: an alias for `Reg<CCR_SPEC>`
CFSRCFSR (rw) register accessor: an alias for `Reg<CFSR_SPEC>`
CPACRCPACR (rw) register accessor: an alias for `Reg<CPACR_SPEC>`
CPUIDCPUID (r) register accessor: an alias for `Reg<CPUID_SPEC>`
DFRDFR (r) register accessor: an alias for `Reg<DFR_SPEC>`
DFSRDFSR (rw) register accessor: an alias for `Reg<DFSR_SPEC>`
HFSRHFSR (rw) register accessor: an alias for `Reg<HFSR_SPEC>`
ICSRICSR (rw) register accessor: an alias for `Reg<ICSR_SPEC>`
ICTRICTR (r) register accessor: an alias for `Reg<ICTR_SPEC>`
ISARISAR (r) register accessor: an alias for `Reg<ISAR_SPEC>`
MMFARMMFAR (rw) register accessor: an alias for `Reg<MMFAR_SPEC>`
MMFRMMFR (r) register accessor: an alias for `Reg<MMFR_SPEC>`
PFRPFR (rw) register accessor: an alias for `Reg<PFR_SPEC>`
SCRSCR (rw) register accessor: an alias for `Reg<SCR_SPEC>`
SHCSRSHCSR (rw) register accessor: an alias for `Reg<SHCSR_SPEC>`
SHPR1SHPR1 (rw) register accessor: an alias for `Reg<SHPR1_SPEC>`
SHPR2SHPR2 (rw) register accessor: an alias for `Reg<SHPR2_SPEC>`
SHPR3SHPR3 (rw) register accessor: an alias for `Reg<SHPR3_SPEC>`
Module cec1712_pac::tfdp
===
The TFDP serially transmits EC-originated diagnostic vectors to an external debug trace system.
Modules
---
ctrlDebug Control Register
msdataDebug data to be shifted out on the TFDP Debug port. While data is being shifted out, the Host Interface will ‘hold-off’ additional writes to the data register until the transfer is complete.
Structs
---
RegisterBlockRegister block
Type Definitions
---
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
MSDATAMSDATA (rw) register accessor: an alias for `Reg<MSDATA_SPEC>`
Module cec1712_pac::vbat
===
The VBAT Register Bank block is a block implemented for miscellaneous battery-backed registers
Modules
---
clk32_enCLOCK ENABLE
mcnt_hiCOUNTER HIWORD
mcnt_loMONOTONIC COUNTER
pfrsThe Power-Fail and Reset Status Register collects and retains the VBAT RST and WDT event status when VCC1 is unpowered.
sys_shdnSystem Shutdown Enable register.
vwr_bckpVWIRE_BACKUP
Structs
---
RegisterBlockRegister block
Type Definitions
---
CLK32_ENCLK32_EN (rw) register accessor: an alias for `Reg<CLK32_EN_SPEC>`
MCNT_HIMCNT_HI (rw) register accessor: an alias for `Reg<MCNT_HI_SPEC>`
MCNT_LOMCNT_LO (rw) register accessor: an alias for `Reg<MCNT_LO_SPEC>`
PFRSPFRS (rw) register accessor: an alias for `Reg<PFRS_SPEC>`
SYS_SHDNSYS_SHDN (rw) register accessor: an alias for `Reg<SYS_SHDN_SPEC>`
VWR_BCKPVWR_BCKP (rw) register accessor: an alias for `Reg<VWR_BCKP_SPEC>`
Module cec1712_pac::vbat_ram
===
The VBAT RAM is operational while the main power rail is operational, and will retain its values powered by battery power while the main rail is unpowered.
Modules
---
mem32-bits of VBAT powered RAM.
Structs
---
RegisterBlockRegister block
Type Definitions
---
MEMMEM (rw) register accessor: an alias for `Reg<MEM_SPEC>`
Module cec1712_pac::vci
===
The VBAT-Powered Control Interfaces with the RTC With Date and DST Adjustment as well as the Week Alarm.
Modules
---
buffer_enVCI Buffer Enable Register
ctrl_stsVCI Register
hldoff_cntHoldoff Count Register
input_enVCI Input Enable Register
latch_enLatch Enable Register
latch_rstLatch Resets Register
nedge_detVCI Negedge Detect Register
pedge_detVCI Posedge Detect Register
polarityVCI Polarity Register
Structs
---
RegisterBlockRegister block
Type Definitions
---
BUFFER_ENBUFFER_EN (rw) register accessor: an alias for `Reg<BUFFER_EN_SPEC>`
CTRL_STSCTRL_STS (rw) register accessor: an alias for `Reg<CTRL_STS_SPEC>`
HLDOFF_CNTHLDOFF_CNT (rw) register accessor: an alias for `Reg<HLDOFF_CNT_SPEC>`
INPUT_ENINPUT_EN (rw) register accessor: an alias for `Reg<INPUT_EN_SPEC>`
LATCH_ENLATCH_EN (rw) register accessor: an alias for `Reg<LATCH_EN_SPEC>`
LATCH_RSTLATCH_RST (rw) register accessor: an alias for `Reg<LATCH_RST_SPEC>`
NEDGE_DETNEDGE_DET (rw) register accessor: an alias for `Reg<NEDGE_DET_SPEC>`
PEDGE_DETPEDGE_DET (rw) register accessor: an alias for `Reg<PEDGE_DET_SPEC>`
POLARITYPOLARITY (rw) register accessor: an alias for `Reg<POLARITY_SPEC>`
Module cec1712_pac::wdt
===
The function of the Watchdog Timer is to provide a mechanism to detect if the internal embedded controller has failed.
Modules
---
cntThis read-only register provides the current WDT count.
ctrlWDT Control Register
ienWatch Dog Interrupt Enable Register.
kickThe WDT Kick Register is a strobe. Reads of this register return 0. Writes to this register cause the WDT to reload the WDT Load Register value and start decrementing when the WDT_ENABLE bit in the WDT Control Register is set to ‘1’. When the WDT_ENABLE bit in the WDT Control Register is cleared to ‘0’, writes to the WDT Kick Register have no effect. A minimal usage sketch follows this register list.
loadWriting this field reloads the Watch Dog Timer counter.
stsThis register provides the current WDT count.
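A minimal sketch of the load/enable/kick flow described above: register field names follow the lowercase module names, and treating WDT_ENABLE as bit 0 of the control register is an assumption for illustration.
```
use cec1712_pac::Peripherals;
fn watchdog_sketch(p: &Peripherals) {
// Program the reload value first (units depend on the WDT clock; see LOAD above).
p.WDT.load.write(|w| unsafe { w.bits(0x1000) });
// Set WDT_ENABLE (assumed bit 0) so that kicks are honoured.
p.WDT.ctrl.modify(|r, w| unsafe { w.bits(r.bits() | 1) });
// Strobe the kick register: reloads LOAD and restarts the countdown.
p.WDT.kick.write(|w| unsafe { w.bits(0) });
}
```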
Structs
---
RegisterBlockRegister block
Type Definitions
---
CNTCNT (r) register accessor: an alias for `Reg<CNT_SPEC>`
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
IENIEN (rw) register accessor: an alias for `Reg<IEN_SPEC>`
KICKKICK (w) register accessor: an alias for `Reg<KICK_SPEC>`
LOADLOAD (rw) register accessor: an alias for `Reg<LOAD_SPEC>`
STSSTS (rw) register accessor: an alias for `Reg<STS_SPEC>`
Module cec1712_pac::week
===
The Week Timer and the Sub-Week Timer assert the Power-Up Event Output which automatically powers-up the system from the G3 state
Modules
---
alarm_cntWeek Alarm Counter Register
clkdivClock Divider Register
ctrlControl Register
ss_intr_selSub-Second Programmable Interrupt Select Register
swk_alarmSub-Week Alarm Counter Register
swk_ctrlSub-Week Control Register
tmr_compWeek Timer Compare Register
Structs
---
RegisterBlockRegister block
Type Definitions
---
ALARM_CNTALARM_CNT (rw) register accessor: an alias for `Reg<ALARM_CNT_SPEC>`
CLKDIVCLKDIV (rw) register accessor: an alias for `Reg<CLKDIV_SPEC>`
CTRLCTRL (rw) register accessor: an alias for `Reg<CTRL_SPEC>`
SS_INTR_SELSS_INTR_SEL (rw) register accessor: an alias for `Reg<SS_INTR_SEL_SPEC>`
SWK_ALARMSWK_ALARM (r) register accessor: an alias for `Reg<SWK_ALARM_SPEC>`
SWK_CTRLSWK_CTRL (r) register accessor: an alias for `Reg<SWK_CTRL_SPEC>`
TMR_COMPTMR_COMP (rw) register accessor: an alias for `Reg<TMR_COMP_SPEC>`
Struct cec1712_pac::ADC
===
```
pub struct ADC { /* private fields */ }
```
This block is designed to convert external analog voltage readings into digital values.
Implementations
---
### impl ADC
#### pub const PTR: *constRegisterBlock = {0x40007c00 as *const adc::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *constRegisterBlock
Return the pointer to the register block
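Both access paths look like this in practice; a minimal sketch using only the items documented above:
```
use cec1712_pac::{adc, Peripherals, ADC};
fn adc_block_sketch() {
// Preferred: take the owned singleton once; `ADC` derefs to the register block.
if let Some(p) = Peripherals::take() {
let _block: &adc::RegisterBlock = &p.ADC;
}
// Raw access through the associated constant, e.g. where the owned singleton
// is not available; the caller must guarantee no conflicting aliasing.
let _block: &adc::RegisterBlock = unsafe { &*ADC::PTR };
}
```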
Trait Implementations
---
### impl Debug for ADC
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for ADC
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for ADC
Auto Trait Implementations
---
### impl RefUnwindSafe for ADC
### impl !Sync for ADC
### impl Unpin for ADC
### impl UnwindSafe for ADC
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::CBP
===
```
pub struct CBP { /* private fields */ }
```
Cache and branch predictor maintenance operations
Implementations
---
### impl CBP
#### pub fn iciallu(&mut self)
I-cache invalidate all to PoU
#### pub fn icimvau(&mut self, mva: u32)
I-cache invalidate by MVA to PoU
#### pub unsafe fn dcimvac(&mut self, mva: u32)
D-cache invalidate by MVA to PoC
#### pub unsafe fn dcisw(&mut self, set: u16, way: u16)
D-cache invalidate by set-way
`set` is masked to be between 0 and 3, and `way` between 0 and 511.
#### pub fn dccmvau(&mut self, mva: u32)
D-cache clean by MVA to PoU
#### pub fn dccmvac(&mut self, mva: u32)
D-cache clean by MVA to PoC
#### pub fn dccsw(&mut self, set: u16, way: u16)
D-cache clean by set-way
`set` is masked to be between 0 and 3, and `way` between 0 and 511.
#### pub fn dccimvac(&mut self, mva: u32)
D-cache clean and invalidate by MVA to PoC
#### pub fn dccisw(&mut self, set: u16, way: u16)
D-cache clean and invalidate by set-way
`set` is masked to be between 0 and 3, and `way` between 0 and 511.
#### pub fn bpiall(&mut self)
Branch predictor invalidate all
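A minimal sketch of driving these maintenance operations through the core-peripheral singleton; whether they have any observable effect depends on the cache configuration of the core.
```
use cec1712_pac::CorePeripherals;
fn cache_maintenance_sketch() {
let mut cp = CorePeripherals::take().unwrap();
// Invalidate the entire I-cache and the branch predictor.
cp.CBP.iciallu();
cp.CBP.bpiall();
// Clean and invalidate one D-cache line by set/way (arguments are masked, see above).
cp.CBP.dccisw(0, 0);
}
```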
### impl CBP
#### pub const PTR: *constRegisterBlock = {0xe000ef50 as *const cortex_m::peripheral::cbp::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *constRegisterBlock
👎Deprecated since 0.7.5: Use the associated constant `PTR` instead
Returns a pointer to the register block
Trait Implementations
---
### impl Deref for CBP
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &<CBP as Deref>::Target
Dereferences the value.
### impl Send for CBP
Auto Trait Implementations
---
### impl RefUnwindSafe for CBP
### impl !Sync for CBP
### impl Unpin for CBP
### impl UnwindSafe for CBP
Struct cec1712_pac::CCT
===
```
pub struct CCT { /* private fields */ }
```
This is a 16-bit auto-reloading timer/counter.
Implementations
---
### impl CCT
#### pub const PTR: *constRegisterBlock = {0x40001000 as *const cct::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *constRegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for CCT
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for CCT
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for CCT
Auto Trait Implementations
---
### impl RefUnwindSafe for CCT
### impl !Sync for CCT
### impl Unpin for CCT
### impl UnwindSafe for CCT
Struct cec1712_pac::CPUID
===
```
pub struct CPUID { /* private fields */ }
```
CPUID
Implementations
---
### impl CPUID
#### pub fn select_cache(&mut self, level: u8, ind: CsselrCacheType)
Selects the current CCSIDR
* `level`: the required cache level minus 1, e.g. 0 for L1, 1 for L2
* `ind`: select instruction cache or data/unified cache
`level` is masked to be between 0 and 7.
#### pub fn cache_num_sets_ways( &mut self, level: u8, ind: CsselrCacheType) -> (u16, u16)
Returns the number of sets and ways in the selected cache
#### pub fn cache_dminline() -> u32
Returns log2 of the number of words in the smallest cache line of all the data cache and unified caches that are controlled by the processor.
This is the `DminLine` field of the CTR register.
#### pub fn cache_iminline() -> u32
Returns log2 of the number of words in the smallest cache line of all the instruction caches that are controlled by the processor.
This is the `IminLine` field of the CTR register.
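A sketch of querying the cache geometry; `CsselrCacheType` is assumed to come from the `cortex_m::peripheral::cpuid` module that this wrapper builds on.
```
use cec1712_pac::CorePeripherals;
use cortex_m::peripheral::cpuid::CsselrCacheType;
fn l1_data_cache_geometry() -> (u16, u16) {
let mut cp = CorePeripherals::take().unwrap();
// Level 0 selects L1 (the required cache level minus 1), then query its geometry.
cp.CPUID.select_cache(0, CsselrCacheType::DataOrUnified);
cp.CPUID.cache_num_sets_ways(0, CsselrCacheType::DataOrUnified)
}
```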
### impl CPUID
#### pub const PTR: *constRegisterBlock = {0xe000ed00 as *const cortex_m::peripheral::cpuid::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *constRegisterBlock
👎Deprecated since 0.7.5: Use the associated constant `PTR` instead
Returns a pointer to the register block
Trait Implementations
---
### impl Deref for CPUID
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &<CPUID as Deref>::Target
Dereferences the value.
### impl Send for CPUID
Auto Trait Implementations
---
### impl RefUnwindSafe for CPUID
### impl !Sync for CPUID
### impl Unpin for CPUID
### impl UnwindSafe for CPUID
Struct cec1712_pac::CorePeripherals
===
```
pub struct CorePeripherals {
pub CBP: CBP,
pub CPUID: CPUID,
pub DCB: DCB,
pub DWT: DWT,
pub FPB: FPB,
pub FPU: FPU,
pub ICB: ICB,
pub ITM: ITM,
pub MPU: MPU,
pub NVIC: NVIC,
pub SAU: SAU,
pub SCB: SCB,
pub SYST: SYST,
pub TPIU: TPIU,
/* private fields */
}
```
Core peripherals
Fields
---
`CBP: CBP`Cache and branch predictor maintenance operations.
Not available on Armv6-M.
`CPUID: CPUID`CPUID
`DCB: DCB`Debug Control Block
`DWT: DWT`Data Watchpoint and Trace unit
`FPB: FPB`Flash Patch and Breakpoint unit.
Not available on Armv6-M.
`FPU: FPU`Floating Point Unit.
`ICB: ICB`Implementation Control Block.
The name is from the v8-M spec, but the block existed in earlier revisions, without a name.
`ITM: ITM`Instrumentation Trace Macrocell.
Not available on Armv6-M and Armv8-M Baseline.
`MPU: MPU`Memory Protection Unit
`NVIC: NVIC`Nested Vector Interrupt Controller
`SAU: SAU`Security Attribution Unit
`SCB: SCB`System Control Block
`SYST: SYST`SysTick: System Timer
`TPIU: TPIU`Trace Port Interface Unit.
Not available on Armv6-M.
Implementations
---
### impl Peripherals
#### pub fn take() -> Option<Peripherals>
Returns all the core peripherals *once*
#### pub unsafe fn steal() -> Peripherals
Unchecked version of `Peripherals::take`
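A minimal usage sketch of the two constructors:
```
use cec1712_pac::CorePeripherals;
fn init_core() {
// `take` succeeds exactly once; subsequent calls return `None`.
let cp = CorePeripherals::take().unwrap();
let _syst = cp.SYST;
// `steal` skips that bookkeeping and is therefore unsafe; reserve it for
// contexts where aliasing of the singleton is provably impossible.
let _cp2 = unsafe { CorePeripherals::steal() };
}
```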
Auto Trait Implementations
---
### impl RefUnwindSafe for Peripherals
### impl Send for Peripherals
### impl !Sync for Peripherals
### impl Unpin for Peripherals
### impl UnwindSafe for Peripherals
Struct cec1712_pac::DCB
===
```
pub struct DCB { /* private fields */ }
```
Debug Control Block
Implementations
---
### impl DCB
#### pub fn enable_trace(&mut self)
Enables TRACE. This is for example required by the
`peripheral::DWT` cycle counter to work properly.
According to ST's documentation, this flag is not reset on a soft reset, only on a power-on reset.
#### pub fn disable_trace(&mut self)
Disables TRACE. See `DCB::enable_trace()` for more details
#### pub fn is_debugger_attached() -> bool
Is there a debugger attached? (see note)
Note: This function is reported not to work
on Cortex-M0 devices. Per the ARM v6-M Architecture Reference Manual, “Access to the DHCSR from software running on the processor is IMPLEMENTATION DEFINED”. Indeed, from the Cortex-M0+ r0p1 Technical Reference Manual, “Note Software cannot access the debug registers.”
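A typical use is enabling TRACE before starting the DWT cycle counter. This is a sketch; `enable_cycle_counter` is assumed from the `cortex-m` crate's `DWT` API rather than documented here.
```
use cec1712_pac::{CorePeripherals, DCB};
fn start_cycle_counter() {
let mut cp = CorePeripherals::take().unwrap();
// TRACE must be on before the DWT cycle counter will count.
cp.DCB.enable_trace();
cp.DWT.enable_cycle_counter();
// Associated function; see the reliability note above for Cortex-M0 parts.
let _attached = DCB::is_debugger_attached();
}
```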
### impl DCB
#### pub const PTR: *constRegisterBlock = {0xe000edf0 as *const cortex_m::peripheral::dcb::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *constRegisterBlock
👎Deprecated since 0.7.5: Use the associated constant `PTR` instead
Returns a pointer to the register block
Trait Implementations
---
### impl Deref for DCB
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &<DCB as Deref>::Target
Dereferences the value.
### impl Send for DCB
Auto Trait Implementations
---
### impl RefUnwindSafe for DCB
### impl !Sync for DCB
### impl Unpin for DCB
### impl UnwindSafe for DCB
Struct cec1712_pac::DMA_CHAN00
===
```
pub struct DMA_CHAN00 { /* private fields */ }
```
DMA Channel 00 Registers
Implementations
---
### impl DMA_CHAN00
#### pub const PTR: *constRegisterBlock = {0x40002440 as *const dma_chan00::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *constRegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_CHAN00
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for DMA_CHAN00
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_CHAN00
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_CHAN00
### impl !Sync for DMA_CHAN00
### impl Unpin for DMA_CHAN00
### impl UnwindSafe for DMA_CHAN00
Struct cec1712_pac::DMA_CHAN01
===
```
pub struct DMA_CHAN01 { /* private fields */ }
```
DMA Channel 01 Registers
Implementations
---
### impl DMA_CHAN01
#### pub const PTR: *constRegisterBlock = {0x40002480 as *const dma_chan01::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *constRegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_CHAN01
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for DMA_CHAN01
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_CHAN01
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_CHAN01
### impl !Sync for DMA_CHAN01
### impl Unpin for DMA_CHAN01
### impl UnwindSafe for DMA_CHAN01
Struct cec1712_pac::DMA_CHAN02
===
```
pub struct DMA_CHAN02 { /* private fields */ }
```
DMA Channel 02 Registers
Implementations
---
### impl DMA_CHAN02
#### pub const PTR: *constRegisterBlock = {0x400024c0 as *const dma_chan02::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *constRegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_CHAN02
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for DMA_CHAN02
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_CHAN02
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_CHAN02
### impl !Sync for DMA_CHAN02
### impl Unpin for DMA_CHAN02
### impl UnwindSafe for DMA_CHAN02
Struct cec1712_pac::DMA_CHAN03
===
```
pub struct DMA_CHAN03 { /* private fields */ }
```
DMA Channel 03 Registers (same register layout as channel 02)
Implementations
---
### impl DMA_CHAN03
#### pub const PTR: *constRegisterBlock = {0x40002500 as *const dma_chan02::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *constRegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_CHAN03
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for DMA_CHAN03
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_CHAN03
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_CHAN03
### impl !Sync for DMA_CHAN03
### impl Unpin for DMA_CHAN03
### impl UnwindSafe for DMA_CHAN03
Struct cec1712_pac::DMA_CHAN04
===
```
pub struct DMA_CHAN04 { /* private fields */ }
```
DMA Channel 04 Registers (same register layout as channel 02)
Implementations
---
### impl DMA_CHAN04
#### pub const PTR: *constRegisterBlock = {0x40002540 as *const dma_chan02::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *constRegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_CHAN04
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for DMA_CHAN04
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_CHAN04
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_CHAN04
### impl !Sync for DMA_CHAN04
### impl Unpin for DMA_CHAN04
### impl UnwindSafe for DMA_CHAN04
Struct cec1712_pac::DMA_CHAN05
===
```
pub struct DMA_CHAN05 { /* private fields */ }
```
DMA Channel 05 Registers (same register layout as channel 02)
Implementations
---
### impl DMA_CHAN05
#### pub const PTR: *constRegisterBlock = {0x40002580 as *const dma_chan02::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *constRegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_CHAN05
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for DMA_CHAN05
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_CHAN05
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_CHAN05
### impl !Sync for DMA_CHAN05
### impl Unpin for DMA_CHAN05
### impl UnwindSafe for DMA_CHAN05
Struct cec1712_pac::DMA_CHAN06
===
```
pub struct DMA_CHAN06 { /* private fields */ }
```
DMA Channel 02 Registers
Implementations
---
### impl DMA_CHAN06
#### pub const PTR: *const RegisterBlock = {0x400025c0 as *const dma_chan02::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_CHAN06
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for DMA_CHAN06
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_CHAN06
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_CHAN06
### impl !Sync for DMA_CHAN06
### impl Unpin for DMA_CHAN06
### impl UnwindSafe for DMA_CHAN06
Struct cec1712_pac::DMA_CHAN07
===
```
pub struct DMA_CHAN07 { /* private fields */ }
```
DMA Channel 02 Registers
Implementations
---
### impl DMA_CHAN07
#### pub const PTR: *const RegisterBlock = {0x40002600 as *const dma_chan02::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_CHAN07
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for DMA_CHAN07
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_CHAN07
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_CHAN07
### impl !Sync for DMA_CHAN07
### impl Unpin for DMA_CHAN07
### impl UnwindSafe for DMA_CHAN07
Struct cec1712_pac::DMA_CHAN08
===
```
pub struct DMA_CHAN08 { /* private fields */ }
```
DMA Channel 02 Registers
Implementations
---
### impl DMA_CHAN08
#### pub const PTR: *const RegisterBlock = {0x40002640 as *const dma_chan02::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_CHAN08
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for DMA_CHAN08
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_CHAN08
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_CHAN08
### impl !Sync for DMA_CHAN08
### impl Unpin for DMA_CHAN08
### impl UnwindSafe for DMA_CHAN08
Struct cec1712_pac::DMA_CHAN09
===
```
pub struct DMA_CHAN09 { /* private fields */ }
```
DMA Channel 02 Registers
Implementations
---
### impl DMA_CHAN09
#### pub const PTR: *const RegisterBlock = {0x40002680 as *const dma_chan02::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_CHAN09
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for DMA_CHAN09
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_CHAN09
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_CHAN09
### impl !Sync for DMA_CHAN09
### impl Unpin for DMA_CHAN09
### impl UnwindSafe for DMA_CHAN09
Struct cec1712_pac::DMA_CHAN10
===
```
pub struct DMA_CHAN10 { /* private fields */ }
```
DMA Channel 02 Registers
Implementations
---
### impl DMA_CHAN10
#### pub const PTR: *const RegisterBlock = {0x400026c0 as *const dma_chan02::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_CHAN10
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for DMA_CHAN10
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_CHAN10
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_CHAN10
### impl !Sync for DMA_CHAN10
### impl Unpin for DMA_CHAN10
### impl UnwindSafe for DMA_CHAN10
Struct cec1712_pac::DMA_CHAN11
===
```
pub struct DMA_CHAN11 { /* private fields */ }
```
DMA Channel 02 Registers
Implementations
---
### impl DMA_CHAN11
#### pub const PTR: *const RegisterBlock = {0x40002700 as *const dma_chan02::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_CHAN11
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for DMA_CHAN11
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_CHAN11
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_CHAN11
### impl !Sync for DMA_CHAN11
### impl Unpin for DMA_CHAN11
### impl UnwindSafe for DMA_CHAN11
Struct cec1712_pac::DMA_MAIN
===
```
pub struct DMA_MAIN { /* private fields */ }
```
DMA Main Registers
Implementations
---
### impl DMA_MAIN
#### pub const PTR: *const RegisterBlock = {0x40002400 as *const dma_main::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for DMA_MAIN
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for DMA_MAIN
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for DMA_MAIN
Auto Trait Implementations
---
### impl RefUnwindSafe for DMA_MAIN
### impl !Sync for DMA_MAIN
### impl Unpin for DMA_MAIN
### impl UnwindSafe for DMA_MAIN
Struct cec1712_pac::DWT
===
```
pub struct DWT { /* private fields */ }
```
Data Watchpoint and Trace unit
Implementations
---
### impl DWT
#### pub fn num_comp() -> u8
Number of comparators implemented
A value of zero indicates no comparator support.
#### pub fn has_exception_trace() -> bool
Returns `true` if the implementation supports sampling and exception tracing
#### pub fn has_external_match() -> bool
Returns `true` if the implementation includes external match signals
#### pub fn has_cycle_counter() -> bool
Returns `true` if the implementation supports a cycle counter
#### pub fn has_profiling_counter() -> bool
Returns `true` if the implementation supports the profiling counters
#### pub fn enable_cycle_counter(&mut self)
Enables the cycle counter
The global trace enable (`DCB::enable_trace`) should be set before enabling the cycle counter; if global trace is disabled, the processor may ignore writes to the cycle counter enable (implementation defined behaviour).
#### pub fn disable_cycle_counter(&mut self)
Disables the cycle counter
#### pub fn cycle_counter_enabled() -> bool
Returns `true` if the cycle counter is enabled
#### pub fn get_cycle_count() -> u32
👎Deprecated since 0.7.4: Use `cycle_count` which follows the C-GETTER convention
Returns the current clock cycle count
#### pub fn cycle_count() -> u32
Returns the current clock cycle count
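A minimal sketch of the enable-then-read sequence described above, using the `cortex-m` crate's core `Peripherals`; the `DCB`/`DWT` calls are the ones documented on this page, while `Peripherals::take` is assumed from `cortex-m` 0.7:
```
use cortex_m::peripheral::{Peripherals, DWT};

fn cycles_of<F: FnOnce()>(f: F) -> u32 {
    let mut cp = Peripherals::take().unwrap();
    // Set the global trace enable first, otherwise the write to the
    // cycle counter enable may be ignored (implementation defined).
    cp.DCB.enable_trace();
    cp.DWT.enable_cycle_counter();
    let start = DWT::cycle_count();
    f();
    DWT::cycle_count().wrapping_sub(start)
}
```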
#### pub fn set_cycle_count(&mut self, count: u32)
Set the cycle count
#### pub fn unlock()
Removes the software lock on the DWT
Some devices, like the STM32F7, software lock the DWT after a power cycle.
#### pub fn cpi_count() -> u8
Get the CPI count
Counts additional cycles required to execute multi-cycle instructions,
except those recorded by `lsu_count`, and counts any instruction fetch stalls.
#### pub fn set_cpi_count(&mut self, count: u8)
Set the CPI count
#### pub fn exception_count() -> u8
Get the total cycles spent in exception processing
#### pub fn set_exception_count(&mut self, count: u8)
Set the exception count
#### pub fn sleep_count() -> u8
Get the total number of cycles that the processor is sleeping
ARM recommends that this counter counts all cycles when the processor is sleeping,
regardless of whether a WFI or WFE instruction, or the sleep-on-exit functionality,
caused the entry to sleep mode.
However, all sleep features are implementation defined and therefore when this counter counts is implementation defined.
#### pub fn set_sleep_count(&mut self, count: u8)
Set the sleep count
#### pub fn lsu_count() -> u8
Get the additional cycles required to execute all load or store instructions
#### pub fn set_lsu_count(&mut self, count: u8)
Set the lsu count
#### pub fn fold_count() -> u8
Get the folded instruction count
Increments on each instruction that takes 0 cycles.
#### pub fn set_fold_count(&mut self, count: u8)
Set the folded instruction count
### impl DWT
#### pub const PTR: *const RegisterBlock = {0xe0001000 as *const cortex_m::peripheral::dwt::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
👎Deprecated since 0.7.5: Use the associated constant `PTR` instead
Returns a pointer to the register block
Trait Implementations
---
### impl Deref for DWT
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &<DWT as Deref>::Target
Dereferences the value.
### impl Send for DWT
Auto Trait Implementations
---
### impl RefUnwindSafe for DWT
### impl !Sync for DWT
### impl Unpin for DWT
### impl UnwindSafe for DWT
Struct cec1712_pac::ECIA
===
```
pub struct ECIA { /* private fields */ }
```
The ECIA works in conjunction with the processor interrupt interface to handle hardware interrupts and exceptions.
Implementations
---
### impl ECIA
#### pub const PTR: *const RegisterBlock = {0x4000e000 as *const ecia::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for ECIA
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for ECIA
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for ECIA
Auto Trait Implementations
---
### impl RefUnwindSafe for ECIA
### impl !Sync for ECIA
### impl Unpin for ECIA
### impl UnwindSafe for ECIA
Struct cec1712_pac::EC_REG_BANK
===
```
pub struct EC_REG_BANK { /* private fields */ }
```
This block is designed to be accessed internally by the EC via the register interface.
Implementations
---
### impl EC_REG_BANK
#### pub const PTR: *const RegisterBlock = {0x4000fc00 as *const ec_reg_bank::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for EC_REG_BANK
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for EC_REG_BANK
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for EC_REG_BANK
Auto Trait Implementations
---
### impl RefUnwindSafe for EC_REG_BANK
### impl !Sync for EC_REG_BANK
### impl Unpin for EC_REG_BANK
### impl UnwindSafe for EC_REG_BANK
Struct cec1712_pac::FPB
===
```
pub struct FPB { /* private fields */ }
```
Flash Patch and Breakpoint unit
Implementations
---
### impl FPB
#### pub const PTR: *const RegisterBlock = {0xe0002000 as *const cortex_m::peripheral::fpb::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
👎Deprecated since 0.7.5: Use the associated constant `PTR` instead
Returns a pointer to the register block
Trait Implementations
---
### impl Deref for FPB
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &<FPB as Deref>::Target
Dereferences the value.
### impl Send for FPB
Auto Trait Implementations
---
### impl RefUnwindSafe for FPB
### impl !Sync for FPB
### impl Unpin for FPB
### impl UnwindSafe for FPB
Struct cec1712_pac::GCR
===
```
pub struct GCR { /* private fields */ }
```
The Logical Device Configuration registers support motherboard designs in which the resources required by their components are known and assigned by the BIOS at POST.
Implementations
---
### impl GCR
#### pub const PTR: *const RegisterBlock = {0x400fff00 as *const gcr::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for GCR
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for GCR
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for GCR
Auto Trait Implementations
---
### impl RefUnwindSafe for GCR
### impl !Sync for GCR
### impl Unpin for GCR
### impl UnwindSafe for GCR
Struct cec1712_pac::GPIO
===
```
pub struct GPIO { /* private fields */ }
```
GPIO Pin Control Registers
Implementations
---
### impl GPIO
#### pub const PTR: *const RegisterBlock = {0x40081000 as *const gpio::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for GPIO
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for GPIO
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for GPIO
Auto Trait Implementations
---
### impl RefUnwindSafe for GPIO
### impl !Sync for GPIO
### impl Unpin for GPIO
### impl UnwindSafe for GPIO
Struct cec1712_pac::HTM0
===
```
pub struct HTM0 { /* private fields */ }
```
The Hibernation Timer can generate a wake event to the Embedded Controller (EC) when it is in a hibernation mode
Implementations
---
### impl HTM0
#### pub const PTR: *const RegisterBlock = {0x40009800 as *const htm0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for HTM0
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for HTM0
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for HTM0
Auto Trait Implementations
---
### impl RefUnwindSafe for HTM0
### impl !Sync for HTM0
### impl Unpin for HTM0
### impl UnwindSafe for HTM0
Struct cec1712_pac::HTM1
===
```
pub struct HTM1 { /* private fields */ }
```
The Hibernation Timer can generate a wake event to the Embedded Controller (EC) when it is in a hibernation mode
Implementations
---
### impl HTM1
#### pub const PTR: *const RegisterBlock = {0x40009820 as *const htm0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for HTM1
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for HTM1
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for HTM1
Auto Trait Implementations
---
### impl RefUnwindSafe for HTM1
### impl !Sync for HTM1
### impl Unpin for HTM1
### impl UnwindSafe for HTM1
Struct cec1712_pac::I2C0
===
```
pub struct I2C0 { /* private fields */ }
```
The I2C interface supports the standard I2C protocol.
Implementations
---
### impl I2C0
#### pub const PTR: *const RegisterBlock = {0x40005100 as *const i2c0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Methods from Deref<Target = RegisterBlock>
---
#### pub fn rsts(&self) -> &RSTS
0x00 - Status Register
#### pub fn wctrl(&self) -> &WCTRL
0x00 - Control Register
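A minimal sketch of reaching these registers through the owned device peripherals; `Peripherals::take` and the whole-register `read().bits()` call follow the usual svd2rust API and are assumptions beyond what this page lists (field-level accessors are not shown here):
```
use cec1712_pac as pac;

fn i2c0_status_bits() -> u32 {
    let p = pac::Peripherals::take().unwrap();
    // RSTS (read) and WCTRL (write) are both documented at offset 0x00.
    p.I2C0.rsts().read().bits()
}
```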
Trait Implementations
---
### impl Debug for I2C0
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for I2C0
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for I2C0
Auto Trait Implementations
---
### impl RefUnwindSafe for I2C0
### impl !Sync for I2C0
### impl Unpin for I2C0
### impl UnwindSafe for I2C0
Struct cec1712_pac::I2C1
===
```
pub struct I2C1 { /* private fields */ }
```
The I2C interface supports the standard I2C protocol.
Implementations
---
### impl I2C1
#### pub const PTR: *const RegisterBlock = {0x40005200 as *const i2c0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Methods from Deref<Target = RegisterBlock>
---
#### pub fn rsts(&self) -> &RSTS
0x00 - Status Register
#### pub fn wctrl(&self) -> &WCTRL
0x00 - Control Register
Trait Implementations
---
### impl Debug for I2C1
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for I2C1
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for I2C1
Auto Trait Implementations
---
### impl RefUnwindSafe for I2C1
### impl !Sync for I2C1
### impl Unpin for I2C1
### impl UnwindSafe for I2C1
Struct cec1712_pac::I2C2
===
```
pub struct I2C2 { /* private fields */ }
```
The I2C interface supports the standard I2C protocol.
Implementations
---
### impl I2C2
#### pub const PTR: *const RegisterBlock = {0x40005300 as *const i2c0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Methods from Deref<Target = RegisterBlock>
---
#### pub fn rsts(&self) -> &RSTS
0x00 - Status Register
#### pub fn wctrl(&self) -> &WCTRL
0x00 - Control Register
Trait Implementations
---
### impl Debug for I2C2
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for I2C2
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for I2C2
Auto Trait Implementations
---
### impl RefUnwindSafe for I2C2
### impl !Sync for I2C2
### impl Unpin for I2C2
### impl UnwindSafe for I2C2
Struct cec1712_pac::ITM
===
```
pub struct ITM { /* private fields */ }
```
Instrumentation Trace Macrocell
Implementations
---
### impl ITM
#### pub const PTR: *mut RegisterBlock = {0xe0000000 as *mut cortex_m::peripheral::itm::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *mut RegisterBlock
👎Deprecated since 0.7.5: Use the associated constant `PTR` instead
Returns a pointer to the register block
Trait Implementations
---
### impl Deref for ITM
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &<ITM as Deref>::Target
Dereferences the value.
### impl DerefMut for ITM
#### fn deref_mut(&mut self) -> &mut <ITM as Deref>::Target
Mutably dereferences the value.
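Because `ITM` also implements `DerefMut`, the stimulus ports can be written through the owned core peripherals. A sketch using the `cortex-m` helpers; it assumes the debug tooling has already enabled ITM/SWO output, and stimulus port 0 is simply the conventional choice:
```
use cortex_m::{iprintln, peripheral::Peripherals};

fn itm_log() {
    let mut cp = Peripherals::take().unwrap();
    let stim = &mut cp.ITM.stim[0];
    // iprintln! waits for the port FIFO before pushing each write.
    iprintln!(stim, "hello from the EC");
}
```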
### impl Send for ITM
Auto Trait Implementations
---
### impl RefUnwindSafe for ITM
### impl !Sync for ITM
### impl Unpin for ITM
### impl UnwindSafe for ITM
Struct cec1712_pac::LED0
===
```
pub struct LED0 { /* private fields */ }
```
The LED is implemented using a PWM that can be driven either by the 48 MHz clock or by a 32.768 kHz clock input.
Implementations
---
### impl LED0
#### pub const PTR: *const RegisterBlock = {0x4000b800 as *const led0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for LED0
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for LED0
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for LED0
Auto Trait Implementations
---
### impl RefUnwindSafe for LED0
### impl !Sync for LED0
### impl Unpin for LED0
### impl UnwindSafe for LED0
Struct cec1712_pac::LED1
===
```
pub struct LED1 { /* private fields */ }
```
The LED is implemented using a PWM that can be driven either by the 48 MHz clock or by a 32.768 kHz clock input.
Implementations
---
### impl LED1
#### pub const PTR: *const RegisterBlock = {0x4000b900 as *const led0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for LED1
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for LED1
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for LED1
Auto Trait Implementations
---
### impl RefUnwindSafe for LED1
### impl !Sync for LED1
### impl Unpin for LED1
### impl UnwindSafe for LED1
Struct cec1712_pac::MPU
===
```
pub struct MPU { /* private fields */ }
```
Memory Protection Unit
Implementations
---
### impl MPU
#### pub const PTR: *const RegisterBlock = {0xe000ed90 as *const cortex_m::peripheral::mpu::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
👎Deprecated since 0.7.5: Use the associated constant `PTR` instead
Returns a pointer to the register block
Trait Implementations
---
### impl Deref for MPU
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &<MPU as Deref>::Target
Dereferences the value.
### impl Send for MPU
Auto Trait Implementations
---
### impl RefUnwindSafe for MPU
### impl !Sync for MPU
### impl Unpin for MPU
### impl UnwindSafe for MPU
Struct cec1712_pac::NVIC
===
```
pub struct NVIC { /* private fields */ }
```
Nested Vector Interrupt Controller
Implementations
---
### impl NVIC
#### pub fn request<I>(&mut self, interrupt: I)where I: InterruptNumber,
Request an IRQ in software
Writing a value to the INTID field is the same as manually pending an interrupt by setting the corresponding interrupt bit in an Interrupt Set Pending Register. This is similar to
`NVIC::pend`.
This method is not available on ARMv6-M chips.
#### pub fn mask<I>(interrupt: I)where I: InterruptNumber,
Disables `interrupt`
#### pub unsafe fn unmask<I>(interrupt: I)where I: InterruptNumber,
Enables `interrupt`
This function is `unsafe` because it can break mask-based critical sections
#### pub fn get_priority<I>(interrupt: I) -> u8where I: InterruptNumber,
Returns the NVIC priority of `interrupt`
*NOTE* NVIC encodes priority in the highest bits of a byte so values like `1` and `2` map to the same priority. Also for NVIC priorities, a lower value (e.g. `16`) has higher priority (urgency) than a larger value (e.g. `32`).
#### pub fn is_active<I>(interrupt: I) -> boolwhere I: InterruptNumber,
Is `interrupt` active or pre-empted and stacked
#### pub fn is_enabled<I>(interrupt: I) -> boolwhere I: InterruptNumber,
Checks if `interrupt` is enabled
#### pub fn is_pending<I>(interrupt: I) -> boolwhere I: InterruptNumber,
Checks if `interrupt` is pending
#### pub fn pend<I>(interrupt: I)where I: InterruptNumber,
Forces `interrupt` into pending state
#### pub unsafe fn set_priority<I>(&mut self, interrupt: I, prio: u8)where I: InterruptNumber,
Sets the “priority” of `interrupt` to `prio`
*NOTE* See `get_priority` method for an explanation of how NVIC priorities work.
On ARMv6-M, updating an interrupt priority requires a read-modify-write operation. On ARMv7-M, the operation is performed in a single atomic write operation.
##### Unsafety
Changing priority levels can break priority-based critical sections (see
`register::basepri`) and compromise memory safety.
#### pub fn unpend<I>(interrupt: I)where I: InterruptNumber,
Clears `interrupt`’s pending state
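A sketch tying the calls above together; it is generic over `InterruptNumber` because this page does not list the device's `Interrupt` variants, and the trait path is the usual `cortex-m` 0.7 one:
```
use cortex_m::{interrupt::InterruptNumber, peripheral::NVIC};

fn enable_with_priority<I: InterruptNumber + Copy>(nvic: &mut NVIC, irq: I, prio: u8) {
    unsafe {
        // Priorities sit in the high bits of the byte; lower values win.
        // Changing them can break priority-based critical sections.
        nvic.set_priority(irq, prio);
        // Unmasking can break mask-based critical sections, hence unsafe.
        NVIC::unmask(irq);
    }
    // Pending an interrupt from software is always safe.
    NVIC::pend(irq);
}
```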
### impl NVIC
#### pub const PTR: *const RegisterBlock = {0xe000e100 as *const cortex_m::peripheral::nvic::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
👎Deprecated since 0.7.5: Use the associated constant `PTR` instead
Returns a pointer to the register block
Trait Implementations
---
### impl Deref for NVIC
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &<NVIC as Deref>::Target
Dereferences the value.
### impl Send for NVIC
Auto Trait Implementations
---
### impl RefUnwindSafe for NVIC
### impl !Sync for NVIC
### impl Unpin for NVIC
### impl UnwindSafe for NVIC
Struct cec1712_pac::PCR
===
```
pub struct PCR { /* private fields */ }
```
The Power, Clocks, and Resets (PCR) section identifies clock sources and reset inputs to the chip.
Implementations
---
### impl PCR
#### pub const PTR: *const RegisterBlock = {0x40080100 as *const pcr::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for PCR
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Deref for PCR
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for PCR
Auto Trait Implementations
---
### impl RefUnwindSafe for PCR
### impl !Sync for PCR
### impl Unpin for PCR
### impl UnwindSafe for PCR
Struct cec1712_pac::PWM0
===
```
pub struct PWM0 { /* private fields */ }
```
The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
Implementations
---
### impl PWM0
#### pub const PTR: *const RegisterBlock = {0x40005800 as *const pwm0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for PWM0
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for PWM0
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for PWM0
Auto Trait Implementations
---
### impl RefUnwindSafe for PWM0
### impl !Sync for PWM0
### impl Unpin for PWM0
### impl UnwindSafe for PWM0
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::PWM2
===
```
pub struct PWM2 { /* private fields */ }
```
The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
Implementations
---
### impl PWM2
#### pub const PTR: *const RegisterBlock = {0x40005820 as *const pwm0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for PWM2
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for PWM2
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for PWM2
Auto Trait Implementations
---
### impl RefUnwindSafe for PWM2
### impl !Sync for PWM2
### impl Unpin for PWM2
### impl UnwindSafe for PWM2
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::PWM3
===
```
pub struct PWM3 { /* private fields */ }
```
The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
Implementations
---
### impl PWM3
#### pub const PTR: *const RegisterBlock = {0x40005830 as *const pwm0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for PWM3
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for PWM3
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for PWM3
Auto Trait Implementations
---
### impl RefUnwindSafe for PWM3
### impl !Sync for PWM3
### impl Unpin for PWM3
### impl UnwindSafe for PWM3
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::PWM5
===
```
pub struct PWM5 { /* private fields */ }
```
The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
Implementations
---
### impl PWM5
#### pub const PTR: *const RegisterBlock = {0x40005850 as *const pwm0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for PWM5
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for PWM5
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for PWM5
Auto Trait Implementations
---
### impl RefUnwindSafe for PWM5
### impl !Sync for PWM5
### impl Unpin for PWM5
### impl UnwindSafe for PWM5
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::PWM6
===
```
pub struct PWM6 { /* private fields */ }
```
The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
Implementations
---
### impl PWM6
#### pub const PTR: *const RegisterBlock = {0x40005860 as *const pwm0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for PWM6
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for PWM6
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for PWM6
Auto Trait Implementations
---
### impl RefUnwindSafe for PWM6
### impl !Sync for PWM6
### impl Unpin for PWM6
### impl UnwindSafe for PWM6
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::PWM7
===
```
pub struct PWM7 { /* private fields */ }
```
The PWM block generates an arbitrary duty cycle output at frequencies from less than 0.1 Hz to 24 MHz
Implementations
---
### impl PWM7
#### pub const PTR: *const RegisterBlock = {0x40005870 as *const pwm0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for PWM7
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for PWM7
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for PWM7
Auto Trait Implementations
---
### impl RefUnwindSafe for PWM7
### impl !Sync for PWM7
### impl Unpin for PWM7
### impl UnwindSafe for PWM7
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::Peripherals
===
```
pub struct Peripherals {
pub PCR: PCR,
pub DMA_MAIN: DMA_MAIN,
pub DMA_CHAN00: DMA_CHAN00,
pub DMA_CHAN01: DMA_CHAN01,
pub DMA_CHAN02: DMA_CHAN02,
pub DMA_CHAN03: DMA_CHAN03,
pub DMA_CHAN04: DMA_CHAN04,
pub DMA_CHAN05: DMA_CHAN05,
pub DMA_CHAN06: DMA_CHAN06,
pub DMA_CHAN07: DMA_CHAN07,
pub DMA_CHAN08: DMA_CHAN08,
pub DMA_CHAN09: DMA_CHAN09,
pub DMA_CHAN10: DMA_CHAN10,
pub DMA_CHAN11: DMA_CHAN11,
pub ECIA: ECIA,
pub GCR: GCR,
pub UART0: UART0,
pub UART1: UART1,
pub UART2: UART2,
pub GPIO: GPIO,
pub WDT: WDT,
pub TIMER16_0: TIMER16_0,
pub TIMER16_1: TIMER16_1,
pub TIMER32_0: TIMER32_0,
pub TIMER32_1: TIMER32_1,
pub CCT: CCT,
pub HTM0: HTM0,
pub HTM1: HTM1,
pub RTOS: RTOS,
pub RTC: RTC,
pub WEEK: WEEK,
pub TACH0: TACH0,
pub TACH1: TACH1,
pub PWM0: PWM0,
pub PWM2: PWM2,
pub PWM3: PWM3,
pub PWM5: PWM5,
pub PWM6: PWM6,
pub PWM7: PWM7,
pub ADC: ADC,
pub LED0: LED0,
pub LED1: LED1,
pub SMB0: SMB0,
pub SMB1: SMB1,
pub SMB2: SMB2,
pub SMB3: SMB3,
pub SMB4: SMB4,
pub I2C0: I2C0,
pub I2C1: I2C1,
pub I2C2: I2C2,
pub QMSPI: QMSPI,
pub TFDP: TFDP,
pub VCI: VCI,
pub VBAT_RAM: VBAT_RAM,
pub VBAT: VBAT,
pub EC_REG_BANK: EC_REG_BANK,
pub SYS_TICK: SYS_TICK,
pub SYSTEM_CONTROL: SYSTEM_CONTROL,
}
```
All the peripherals
Fields
---
`PCR: PCR`
`DMA_MAIN: DMA_MAIN`
`DMA_CHAN00: DMA_CHAN00`
`DMA_CHAN01: DMA_CHAN01`
`DMA_CHAN02: DMA_CHAN02`
`DMA_CHAN03: DMA_CHAN03`
`DMA_CHAN04: DMA_CHAN04`
`DMA_CHAN05: DMA_CHAN05`
`DMA_CHAN06: DMA_CHAN06`
`DMA_CHAN07: DMA_CHAN07`
`DMA_CHAN08: DMA_CHAN08`
`DMA_CHAN09: DMA_CHAN09`
`DMA_CHAN10: DMA_CHAN10`
`DMA_CHAN11: DMA_CHAN11`
`ECIA: ECIA`
`GCR: GCR`
`UART0: UART0`
`UART1: UART1`
`UART2: UART2`
`GPIO: GPIO`
`WDT: WDT`
`TIMER16_0: TIMER16_0`
`TIMER16_1: TIMER16_1`
`TIMER32_0: TIMER32_0`
`TIMER32_1: TIMER32_1`
`CCT: CCT`
`HTM0: HTM0`
`HTM1: HTM1`
`RTOS: RTOS`
`RTC: RTC`
`WEEK: WEEK`
`TACH0: TACH0`
`TACH1: TACH1`
`PWM0: PWM0`
`PWM2: PWM2`
`PWM3: PWM3`
`PWM5: PWM5`
`PWM6: PWM6`
`PWM7: PWM7`
`ADC: ADC`
`LED0: LED0`
`LED1: LED1`
`SMB0: SMB0`
`SMB1: SMB1`
`SMB2: SMB2`
`SMB3: SMB3`
`SMB4: SMB4`
`I2C0: I2C0`
`I2C1: I2C1`
`I2C2: I2C2`
`QMSPI: QMSPI`
`TFDP: TFDP`
`VCI: VCI`
`VBAT_RAM: VBAT_RAM`
`VBAT: VBAT`
`EC_REG_BANK: EC_REG_BANK`
`SYS_TICK: SYS_TICK`
`SYSTEM_CONTROL: SYSTEM_CONTROL`
Implementations
---
### impl Peripherals
#### pub fn take() -> Option<Self>
Returns all the peripherals *once*
#### pub unsafe fn steal() -> Self
Unchecked version of `Peripherals::take`
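A minimal sketch of the usual take-once pattern, with `steal` reserved for contexts where the singleton guarantee is deliberately bypassed; the function name below is illustrative only.
```
use cec1712_pac::Peripherals;
fn init() {
    // `take()` returns Some(..) exactly once; every later call yields None.
    let p = Peripherals::take().unwrap();
    let _pcr = p.PCR; // individual peripherals move out of the bundle
    // Escape hatch for exceptional contexts (e.g. fault handlers); this
    // bypasses the singleton check and is the caller's responsibility.
    let _p2 = unsafe { Peripherals::steal() };
}
```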
Auto Trait Implementations
---
### impl RefUnwindSafe for Peripherals
### impl Send for Peripherals
### impl !Sync for Peripherals
### impl Unpin for Peripherals
### impl UnwindSafe for Peripherals
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::QMSPI
===
```
pub struct QMSPI { /* private fields */ }
```
The QMSPI may be used to communicate with various peripheral devices that use a Serial Peripheral Interface
Implementations
---
### impl QMSPI
#### pub const PTR: *const RegisterBlock = {0x40070000 as *const qmspi::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for QMSPI
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for QMSPI
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for QMSPI
Auto Trait Implementations
---
### impl RefUnwindSafe for QMSPI
### impl !Sync for QMSPI
### impl Unpin for QMSPI
### impl UnwindSafe for QMSPI
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::RTC
===
```
pub struct RTC { /* private fields */ }
```
This is the set of registers that are automatically counted by hardware every 1 second while the block is enabled
Implementations
---
### impl RTC
#### pub const PTR: *const RegisterBlock = {0x400f5000 as *const rtc::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for RTC
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for RTC
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for RTC
Auto Trait Implementations
---
### impl RefUnwindSafe for RTC
### impl !Sync for RTC
### impl Unpin for RTC
### impl UnwindSafe for RTC
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::RTOS
===
```
pub struct RTOS { /* private fields */ }
```
RTOS is a 32-bit timer designed to operate on the 32kHz oscillator which is available during all chip sleep states.
Implementations
---
### impl RTOS
#### pub const PTR: *const RegisterBlock = {0x40007400 as *const rtos::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for RTOS
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for RTOS
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for RTOS
Auto Trait Implementations
---
### impl RefUnwindSafe for RTOS
### impl !Sync for RTOS
### impl Unpin for RTOS
### impl UnwindSafe for RTOS
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::SCB
===
```
pub struct SCB { /* private fields */ }
```
System Control Block
Implementations
---
### impl SCB
#### pub fn vect_active() -> VectActive
Returns the active exception number
### impl SCB
#### pub fn enable_icache(&mut self)
Enables I-cache if currently disabled.
This operation first invalidates the entire I-cache.
#### pub fn disable_icache(&mut self)
Disables I-cache if currently enabled.
This operation invalidates the entire I-cache after disabling.
#### pub fn icache_enabled() -> bool
Returns whether the I-cache is currently enabled.
#### pub fn invalidate_icache(&mut self)
Invalidates the entire I-cache.
#### pub fn enable_dcache(&mut self, cpuid: &mut CPUID)
Enables D-cache if currently disabled.
This operation first invalidates the entire D-cache, ensuring it does not contain stale values before being enabled.
#### pub fn disable_dcache(&mut self, cpuid: &mut CPUID)
Disables D-cache if currently enabled.
This operation subsequently cleans and invalidates the entire D-cache,
ensuring all contents are safely written back to main memory after disabling.
#### pub fn dcache_enabled() -> bool
Returns whether the D-cache is currently enabled.
#### pub fn clean_dcache(&mut self, cpuid: &mut CPUID)
Cleans the entire D-cache.
This function causes everything in the D-cache to be written back to main memory,
overwriting whatever is already there.
#### pub fn clean_invalidate_dcache(&mut self, cpuid: &mut CPUID)
Cleans and invalidates the entire D-cache.
This function causes everything in the D-cache to be written back to main memory,
and then marks the entire D-cache as invalid, causing future reads to first fetch from main memory.
#### pub unsafe fn invalidate_dcache_by_address(&mut self, addr: usize, size: usize)
Invalidates D-cache by address.
* `addr`: The address to invalidate, which must be cache-line aligned.
* `size`: Number of bytes to invalidate, which must be a multiple of the cache line size.
Invalidates D-cache cache lines, starting from the first line containing `addr`,
finishing once at least `size` bytes have been invalidated.
Invalidation causes the next read access to memory to be fetched from main memory instead of the cache.
##### Cache Line Sizes
Cache line sizes vary by core. For all Cortex-M7 cores, the cache line size is fixed to 32 bytes, which means `addr` must be 32-byte aligned and `size` must be a multiple of 32. At the time of writing, no other Cortex-M cores have data caches.
If `addr` is not cache-line aligned, or `size` is not a multiple of the cache line size,
other data before or after the desired memory would also be invalidated, which can very easily cause memory corruption and undefined behaviour.
##### Safety
After invalidating, the next read of invalidated data will be from main memory. This may cause recent writes to be lost, potentially including writes that initialized objects.
Therefore, this method may cause uninitialized memory or invalid values to be read,
resulting in undefined behaviour. You must ensure that main memory contains valid and initialized values before invalidating.
`addr` **must** be aligned to the size of the cache lines, and `size` **must** be a multiple of the cache line size, otherwise this function will invalidate other memory,
easily leading to memory corruption and undefined behaviour. This precondition is checked in debug builds using a `debug_assert!()`, but not checked in release builds to avoid a runtime-dependent `panic!()` call.
#### pub unsafe fn invalidate_dcache_by_ref<T>(&mut self, obj: &mut T)
Invalidates an object from the D-cache.
* `obj`: The object to invalidate.
Invalidates D-cache starting from the first cache line containing `obj`,
continuing to invalidate cache lines until all of `obj` has been invalidated.
Invalidation causes the next read access to memory to be fetched from main memory instead of the cache.
##### Cache Line Sizes
Cache line sizes vary by core. For all Cortex-M7 cores, the cache line size is fixed to 32 bytes, which means `obj` must be 32-byte aligned, and its size must be a multiple of 32 bytes. At the time of writing, no other Cortex-M cores have data caches.
If `obj` is not cache-line aligned, or its size is not a multiple of the cache line size,
other data before or after the desired memory would also be invalidated, which can very easily cause memory corruption and undefined behaviour.
##### Safety
After invalidating, `obj` will be read from main memory on next access. This may cause recent writes to `obj` to be lost, potentially including the write that initialized it.
Therefore, this method may cause uninitialized memory or invalid values to be read,
resulting in undefined behaviour. You must ensure that main memory contains a valid and initialized value for T before invalidating `obj`.
`obj` **must** be aligned to the size of the cache lines, and its size **must** be a multiple of the cache line size, otherwise this function will invalidate other memory,
easily leading to memory corruption and undefined behaviour. This precondition is checked in debug builds using a `debug_assert!()`, but not checked in release builds to avoid a runtime-dependent `panic!()` call.
#### pub unsafe fn invalidate_dcache_by_slice<T>(&mut self, slice: &mut [T])
Invalidates a slice from the D-cache.
* `slice`: The slice to invalidate.
Invalidates D-cache starting from the first cache line containing members of `slice`,
continuing to invalidate cache lines until all of `slice` has been invalidated.
Invalidation causes the next read access to memory to be fetched from main memory instead of the cache.
##### Cache Line Sizes
Cache line sizes vary by core. For all Cortex-M7 cores, the cache line size is fixed to 32 bytes, which means `slice` must be 32-byte aligned, and its size must be a multiple of 32 bytes. At the time of writing, no other Cortex-M cores have data caches.
If `slice` is not cache-line aligned, or its size is not a multiple of the cache line size,
other data before or after the desired memory would also be invalidated, which can very easily cause memory corruption and undefined behaviour.
##### Safety
After invalidating, `slice` will be read from main memory on next access. This may cause recent writes to `slice` to be lost, potentially including the write that initialized it.
Therefore, this method may cause uninitialized memory or invalid values to be read,
resulting in undefined behaviour. You must ensure that main memory contains valid and initialized values for T before invalidating `slice`.
`slice` **must** be aligned to the size of the cache lines, and its size **must** be a multiple of the cache line size, otherwise this function will invalidate other memory,
easily leading to memory corruption and undefined behaviour. This precondition is checked in debug builds using a `debug_assert!()`, but not checked in release builds to avoid a runtime-dependent `panic!()` call.
#### pub fn clean_dcache_by_address(&mut self, addr: usize, size: usize)
Cleans D-cache by address.
* `addr`: The address to start cleaning at.
* `size`: The number of bytes to clean.
Cleans D-cache cache lines, starting from the first line containing `addr`,
finishing once at least `size` bytes have been invalidated.
Cleaning the cache causes whatever data is present in the cache to be immediately written to main memory, overwriting whatever was in main memory.
##### Cache Line Sizes
Cache line sizes vary by core. For all Cortex-M7 cores, the cache line size is fixed to 32 bytes, which means `addr` should generally be 32-byte aligned and `size` should be a multiple of 32. At the time of writing, no other Cortex-M cores have data caches.
If `addr` is not cache-line aligned, or `size` is not a multiple of the cache line size,
other data before or after the desired memory will also be cleaned. From the point of view of the core executing this function, memory remains consistent, so this is not unsound,
but is worth knowing about.
#### pub fn clean_dcache_by_ref<T>(&mut self, obj: &T)
Cleans an object from the D-cache.
* `obj`: The object to clean.
Cleans D-cache starting from the first cache line containing `obj`,
continuing to clean cache lines until all of `obj` has been cleaned.
It is recommended that `obj` is both aligned to the cache line size and a multiple of the cache line size long, otherwise surrounding data will also be cleaned.
Cleaning the cache causes whatever data is present in the cache to be immediately written to main memory, overwriting whatever was in main memory.
#### pub fn clean_dcache_by_slice<T>(&mut self, slice: &[T])
Cleans a slice from D-cache.
* `slice`: The slice to clean.
Cleans D-cache starting from the first cache line containing members of `slice`,
continuing to clean cache lines until all of `slice` has been cleaned.
It is recommended that `slice` is both aligned to the cache line size and a multiple of the cache line size long, otherwise surrounding data will also be cleaned.
Cleaning the cache causes whatever data is present in the cache to be immediately written to main memory, overwriting whatever was in main memory.
#### pub fn clean_invalidate_dcache_by_address(&mut self, addr: usize, size: usize)
Cleans and invalidates D-cache by address.
* `addr`: The address to clean and invalidate.
* `size`: The number of bytes to clean and invalidate.
Cleans and invalidates D-cache starting from the first cache line containing `addr`,
finishing once at least `size` bytes have been cleaned and invalidated.
It is recommended that `addr` is aligned to the cache line size and `size` is a multiple of the cache line size, otherwise surrounding data will also be cleaned.
Cleaning and invalidating causes data in the D-cache to be written back to main memory,
and then marks that data in the D-cache as invalid, causing future reads to first fetch from main memory.
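A sketch of one of the safe maintenance calls above. It assumes the core peripherals are obtained through `CorePeripherals`, the usual svd2rust re-export of the Cortex-M core peripheral bundle, and it only demonstrates the call shape; whether this particular chip benefits from D-cache maintenance is not asserted here.
```
use cec1712_pac::{CorePeripherals, SCB};
fn flush_buffer(buf: &[u8]) {
    let mut cp = CorePeripherals::take().unwrap();
    if SCB::dcache_enabled() {
        // Write any cached copies of `buf` back to main memory.
        cp.SCB.clean_dcache_by_slice(buf);
    }
}
```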
### impl SCB
#### pub fn set_sleepdeep(&mut self)
Set the SLEEPDEEP bit in the SCR register
#### pub fn clear_sleepdeep(&mut self)
Clear the SLEEPDEEP bit in the SCR register
### impl SCB
#### pub fn set_sleeponexit(&mut self)
Set the SLEEPONEXIT bit in the SCR register
#### pub fn clear_sleeponexit(&mut self)
Clear the SLEEPONEXIT bit in the SCR register
### impl SCB
#### pub fn sys_reset() -> !
Initiate a system reset request to reset the MCU
### impl SCB
#### pub fn set_pendsv()
Set the PENDSVSET bit in the ICSR register which will pend the PendSV interrupt
#### pub fn is_pendsv_pending() -> bool
Check if PENDSVSET bit in the ICSR register is set meaning PendSV interrupt is pending
#### pub fn clear_pendsv()
Set the PENDSVCLR bit in the ICSR register which will clear a pending PendSV interrupt
#### pub fn set_pendst()
Set the PENDSTSET bit in the ICSR register which will pend a SysTick interrupt
#### pub fn is_pendst_pending() -> bool
Check if PENDSTSET bit in the ICSR register is set meaning SysTick interrupt is pending
#### pub fn clear_pendst()
Set the PENDSTCLR bit in the ICSR register which will clear a pending SysTick interrupt
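The pend/clear helpers are associated functions, so they can be called without holding an `SCB` instance; a minimal sketch with an illustrative function name:
```
use cec1712_pac::SCB;
fn request_context_switch() {
    // Pend PendSV; its handler runs once no higher-priority exception is active.
    SCB::set_pendsv();
    debug_assert!(SCB::is_pendsv_pending());
}
```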
### impl SCB
#### pub fn get_priority(system_handler: SystemHandler) -> u8
Returns the hardware priority of `system_handler`
*NOTE*: Hardware priority does not exactly match logical priority levels. See
`NVIC.get_priority` for more details.
#### pub unsafe fn set_priority(&mut self, system_handler: SystemHandler, prio: u8)
Sets the hardware priority of `system_handler` to `prio`
*NOTE*: Hardware priority does not exactly match logical priority levels. See
`NVIC.get_priority` for more details.
On ARMv6-M, updating a system handler priority requires a read-modify-write operation. On ARMv7-M, the operation is performed in a single, atomic write operation.
##### Unsafety
Changing priority levels can break priority-based critical sections (see
`register::basepri`) and compromise memory safety.
#### pub fn enable(&mut self, exception: Exception)
Enable the exception
If the exception is enabled, when the exception is triggered, the exception handler will be executed instead of the HardFault handler.
This function is only allowed on the following exceptions:
* `MemoryManagement`
* `BusFault`
* `UsageFault`
* `SecureFault` (can only be enabled from Secure state)
Calling this function with any other exception will do nothing.
#### pub fn disable(&mut self, exception: Exception)
Disable the exception
If the exception is disabled, when the exception is triggered, the HardFault handler will be executed instead of the exception handler.
This function is only allowed on the following exceptions:
* `MemoryManagement`
* `BusFault`
* `UsageFault`
* `SecureFault` (can not be changed from Non-secure state)
Calling this function with any other exception will do nothing.
#### pub fn is_enabled(&self, exception: Exception) -> bool
Check if an exception is enabled
This function is only allowed on the following exception:
* `MemoryManagement`
* `BusFault`
* `UsageFault`
* `SecureFault` (can not be read from Non-secure state)
Calling this function with any other exception will read `false`.
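A sketch of enabling one of the configurable fault exceptions. The `Exception` enum is assumed to be the re-exported cortex-m type; adjust the path to match your dependency tree.
```
use cec1712_pac::SCB;
use cortex_m::peripheral::scb::Exception;
fn route_bus_faults(scb: &mut SCB) {
    // Give bus faults their own handler instead of escalating to HardFault.
    scb.enable(Exception::BusFault);
    debug_assert!(scb.is_enabled(Exception::BusFault));
}
```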
### impl SCB
#### pub const PTR: *const RegisterBlock = {0xe000ed04 as *const cortex_m::peripheral::scb::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
👎Deprecated since 0.7.5: Use the associated constant `PTR` instead
Returns a pointer to the register block
Trait Implementations
---
### impl Deref for SCB
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &<SCB as Deref>::Target
Dereferences the value.
### impl Send for SCB
Auto Trait Implementations
---
### impl RefUnwindSafe for SCB
### impl !Sync for SCB
### impl Unpin for SCB
### impl UnwindSafe for SCB
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::SMB0
===
```
pub struct SMB0 { /* private fields */ }
```
The SMBus interface can handle standard SMBus 2.0 protocols as well as the I2C interface.
Implementations
---
### impl SMB0
#### pub const PTR: *const RegisterBlock = {0x40004000 as *const smb0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Methods from Deref<Target = RegisterBlock>
---
#### pub fn rsts(&self) -> &RSTS
0x00 - Status Register
#### pub fn wctrl(&self) -> &WCTRL
0x00 - Control Register
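Because `SMB0` derefs to its `RegisterBlock`, these accessors are available directly on the peripheral singleton. A minimal sketch using the usual svd2rust reader API; the field-level accessors live in the generated `smb0` module and are not shown here.
```
use cec1712_pac::Peripherals;
fn poll_smb0_status(p: &Peripherals) {
    let status = p.SMB0.rsts().read(); // snapshot of the Status Register
    let _raw = status.bits();          // raw register value
}
```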
Trait Implementations
---
### impl Debug for SMB0
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for SMB0
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for SMB0
Auto Trait Implementations
---
### impl RefUnwindSafe for SMB0
### impl !Sync for SMB0
### impl Unpin for SMB0
### impl UnwindSafe for SMB0
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::SMB1
===
```
pub struct SMB1 { /* private fields */ }
```
The SMBus interface can handle standard SMBus 2.0 protocols as well as the I2C interface.
Implementations
---
### impl SMB1
#### pub const PTR: *const RegisterBlock = {0x40004400 as *const smb0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Methods from Deref<Target = RegisterBlock>
---
#### pub fn rsts(&self) -> &RSTS
0x00 - Status Register
#### pub fn wctrl(&self) -> &WCTRL
0x00 - Control Register
Trait Implementations
---
### impl Debug for SMB1
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for SMB1
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for SMB1
Auto Trait Implementations
---
### impl RefUnwindSafe for SMB1
### impl !Sync for SMB1
### impl Unpin for SMB1
### impl UnwindSafe for SMB1
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::SMB2
===
```
pub struct SMB2 { /* private fields */ }
```
The SMBus interface can handle standard SMBus 2.0 protocols as well as the I2C interface.
Implementations
---
### impl SMB2
#### pub const PTR: *const RegisterBlock = {0x40004800 as *const smb0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Methods from Deref<Target = RegisterBlock>
---
#### pub fn rsts(&self) -> &RSTS
0x00 - Status Register
#### pub fn wctrl(&self) -> &WCTRL
0x00 - Control Register
Trait Implementations
---
### impl Debug for SMB2
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for SMB2
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for SMB2
Auto Trait Implementations
---
### impl RefUnwindSafe for SMB2
### impl !Sync for SMB2
### impl Unpin for SMB2
### impl UnwindSafe for SMB2
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::SMB3
===
```
pub struct SMB3 { /* private fields */ }
```
The SMBus interface can handle standard SMBus 2.0 protocols as well as the I2C interface.
Implementations
---
### impl SMB3
#### pub const PTR: *const RegisterBlock = {0x40004c00 as *const smb0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Methods from Deref<Target = RegisterBlock>
---
#### pub fn rsts(&self) -> &RSTS
0x00 - Status Register
#### pub fn wctrl(&self) -> &WCTRL
0x00 - Control Register
Trait Implementations
---
### impl Debug for SMB3
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for SMB3
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for SMB3
Auto Trait Implementations
---
### impl RefUnwindSafe for SMB3
### impl !Sync for SMB3
### impl Unpin for SMB3
### impl UnwindSafe for SMB3
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::SMB4
===
```
pub struct SMB4 { /* private fields */ }
```
The SMBus interface can handle standard SMBus 2.0 protocols as well as the I2C interface.
Implementations
---
### impl SMB4
#### pub const PTR: *const RegisterBlock = {0x40005000 as *const smb0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Methods from Deref<Target = RegisterBlock>
---
#### pub fn rsts(&self) -> &RSTS
0x00 - Status Register
#### pub fn wctrl(&self) -> &WCTRL
0x00 - Control Register
Trait Implementations
---
### impl Debug for SMB4
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for SMB4
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for SMB4
Auto Trait Implementations
---
### impl RefUnwindSafe for SMB4
### impl !Sync for SMB4
### impl Unpin for SMB4
### impl UnwindSafe for SMB4
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::SYST
===
```
pub struct SYST { /* private fields */ }
```
SysTick: System Timer
Implementations
---
### impl SYST
#### pub fn clear_current(&mut self)
Clears current value to 0
After calling `clear_current()`, the next call to `has_wrapped()` will return `false`.
#### pub fn disable_counter(&mut self)
Disables counter
#### pub fn disable_interrupt(&mut self)
Disables SysTick interrupt
#### pub fn enable_counter(&mut self)
Enables counter
*NOTE* The reference manual indicates that:
“The SysTick counter reload and current value are undefined at reset; the correct initialization sequence for the SysTick counter is:
* Program reload value
* Clear current value
* Program Control and Status register”
The sequence translates to `self.set_reload(x); self.clear_current(); self.enable_counter()` (see the sketch below).
#### pub fn enable_interrupt(&mut self)
Enables SysTick interrupt
#### pub fn get_clock_source(&mut self) -> SystClkSource
Gets clock source
*NOTE* This takes `&mut self` because the read operation is side effectful and can clear the bit that indicates that the timer has wrapped (cf. `SYST.has_wrapped`)
#### pub fn get_current() -> u32
Gets current value
#### pub fn get_reload() -> u32
Gets reload value
#### pub fn get_ticks_per_10ms() -> u32
Returns the reload value with which the counter would wrap once per 10 ms
Returns `0` if the value is not known (e.g. because the clock can change dynamically).
#### pub fn has_reference_clock() -> bool
Checks if an external reference clock is available
#### pub fn has_wrapped(&mut self) -> bool
Checks if the counter wrapped (underflowed) since the last check
*NOTE* This takes `&mut self` because the read operation is side effectful and will clear the bit of the read register.
#### pub fn is_counter_enabled(&mut self) -> bool
Checks if counter is enabled
*NOTE* This takes `&mut self` because the read operation is side effectful and can clear the bit that indicates that the timer has wrapped (cf. `SYST.has_wrapped`)
#### pub fn is_interrupt_enabled(&mut self) -> bool
Checks if SysTick interrupt is enabled
*NOTE* This takes `&mut self` because the read operation is side effectful and can clear the bit that indicates that the timer has wrapped (cf. `SYST.has_wrapped`)
#### pub fn is_precise() -> bool
Checks if the calibration value is precise
Returns `false` if using the reload value returned by
`get_ticks_per_10ms()` may result in a period significantly deviating from 10 ms.
#### pub fn set_clock_source(&mut self, clk_source: SystClkSource)
Sets clock source
#### pub fn set_reload(&mut self, value: u32)
Sets reload value
Valid values are between `1` and `0x00ffffff`.
*NOTE* To make the timer wrap every `N` ticks set the reload value to `N - 1`
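Putting the documented sequence together (clock source, reload, clear, enable), a minimal sketch that makes the counter wrap once per millisecond. The 48 MHz core clock is an assumption about the application, not something this crate guarantees, and `CorePeripherals` is assumed to be the svd2rust re-export of the Cortex-M core peripheral bundle.
```
use cec1712_pac::CorePeripherals;
use cortex_m::peripheral::syst::SystClkSource;
fn start_systick_1ms() {
    let mut cp = CorePeripherals::take().unwrap();
    cp.SYST.set_clock_source(SystClkSource::Core);
    cp.SYST.set_reload(48_000 - 1); // wrap every N ticks -> reload = N - 1
    cp.SYST.clear_current();
    cp.SYST.enable_counter();
    cp.SYST.enable_interrupt();
}
```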
### impl SYST
#### pub const PTR: *const RegisterBlock = {0xe000e010 as *const cortex_m::peripheral::syst::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
👎Deprecated since 0.7.5: Use the associated constant `PTR` instead
Returns a pointer to the register block
Trait Implementations
---
### impl Deref for SYST
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &<SYST as Deref>::Target
Dereferences the value.
### impl Send for SYST
Auto Trait Implementations
---
### impl RefUnwindSafe for SYST
### impl !Sync for SYST
### impl Unpin for SYST
### impl UnwindSafe for SYST
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::SYSTEM_CONTROL
===
```
pub struct SYSTEM_CONTROL { /* private fields */ }
```
System Control Registers
Implementations
---
### impl SYSTEM_CONTROL
#### pub const PTR: *const RegisterBlock = {0xe000e000 as *const system_control::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for SYSTEM_CONTROL
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for SYSTEM_CONTROL
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for SYSTEM_CONTROL
Auto Trait Implementations
---
### impl RefUnwindSafe for SYSTEM_CONTROL
### impl !Sync for SYSTEM_CONTROL
### impl Unpin for SYSTEM_CONTROL
### impl UnwindSafe for SYSTEM_CONTROL
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::SYS_TICK
===
```
pub struct SYS_TICK { /* private fields */ }
```
System timer
Implementations
---
### impl SYS_TICK
#### pub const PTR: *const RegisterBlock = {0xe000e010 as *const sys_tick::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
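When the owned `SYS_TICK` singleton is not available (for example inside a fault handler), the associated `PTR` constant documented above can be dereferenced directly. A minimal sketch; the `unsafe` block reflects that the caller must guarantee sound, non-conflicting access.

```
// Minimal sketch: raw access to the SysTick register block through `PTR`.
// Safety: creating a shared reference from the raw pointer is sound only if
// no conflicting mutable access exists elsewhere.
fn systick_block() -> &'static cec1712_pac::sys_tick::RegisterBlock {
    unsafe { &*cec1712_pac::SYS_TICK::PTR }
}
```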
Trait Implementations
---
### impl Debug for SYS_TICK
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for SYS_TICK
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for SYS_TICK
Auto Trait Implementations
---
### impl RefUnwindSafe for SYS_TICK
### impl !Sync for SYS_TICK
### impl Unpin for SYS_TICK
### impl UnwindSafe for SYS_TICK
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::TACH0
===
```
pub struct TACH0 { /* private fields */ }
```
This block monitors TACH output signals from various types of fans, and determines their speed.
Implementations
---
### impl TACH0
#### pub const PTR: *const RegisterBlock = {0x40006000 as *const tach0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for TACH0
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for TACH0
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for TACH0
Auto Trait Implementations
---
### impl RefUnwindSafe for TACH0
### impl !Sync for TACH0
### impl Unpin for TACH0
### impl UnwindSafe for TACH0
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::TACH1
===
```
pub struct TACH1 { /* private fields */ }
```
This block monitors TACH output signals from various types of fans, and determines their speed.
Implementations
---
### impl TACH1
#### pub const PTR: *const RegisterBlock = {0x40006010 as *const tach0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for TACH1
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for TACH1
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for TACH1
Auto Trait Implementations
---
### impl RefUnwindSafe for TACH1
### impl !Sync for TACH1
### impl Unpin for TACH1
### impl UnwindSafe for TACH1
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::TFDP
===
```
pub struct TFDP { /* private fields */ }
```
The TFDP serially transmits EC-originated diagnostic vectors to an external debug trace system.
Implementations
---
### impl TFDP
#### pub const PTR: *const RegisterBlock = {0x40008c00 as *const tfdp::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for TFDP
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for TFDP
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for TFDP
Auto Trait Implementations
---
### impl RefUnwindSafe for TFDP
### impl !Sync for TFDP
### impl Unpin for TFDP
### impl UnwindSafe for TFDP
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::TIMER16_0
===
```
pub struct TIMER16_0 { /* private fields */ }
```
This 16-bit timer block offers a simple mechanism for firmware to maintain a time base
Implementations
---
### impl TIMER16_0
#### pub const PTR: *const RegisterBlock = {0x40000c00 as *const timer16_0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for TIMER16_0
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for TIMER16_0
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for TIMER16_0
Auto Trait Implementations
---
### impl RefUnwindSafe for TIMER16_0
### impl !Sync for TIMER16_0
### impl Unpin for TIMER16_0
### impl UnwindSafe for TIMER16_0
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::TIMER16_1
===
```
pub struct TIMER16_1 { /* private fields */ }
```
This 16-bit timer block offers a simple mechanism for firmware to maintain a time base
Implementations
---
### impl TIMER16_1
#### pub const PTR: *const RegisterBlock = {0x40000c20 as *const timer16_0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for TIMER16_1
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for TIMER16_1
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for TIMER16_1
Auto Trait Implementations
---
### impl RefUnwindSafe for TIMER16_1
### impl !Sync for TIMER16_1
### impl Unpin for TIMER16_1
### impl UnwindSafe for TIMER16_1
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::TIMER32_0
===
```
pub struct TIMER32_0 { /* private fields */ }
```
This 32-bit timer block offers a simple mechanism for firmware to maintain a time base
Implementations
---
### impl TIMER32_0
#### pub const PTR: *const RegisterBlock = {0x40000c80 as *const timer32_0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for TIMER32_0
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for TIMER32_0
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for TIMER32_0
Auto Trait Implementations
---
### impl RefUnwindSafe for TIMER32_0
### impl !Sync for TIMER32_0
### impl Unpin for TIMER32_0
### impl UnwindSafe for TIMER32_0
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::TIMER32_1
===
```
pub struct TIMER32_1 { /* private fields */ }
```
This 32-bit timer block offers a simple mechanism for firmware to maintain a time base
Implementations
---
### impl TIMER32_1
#### pub const PTR: *const RegisterBlock = {0x40000ca0 as *const timer32_0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for TIMER32_1
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for TIMER32_1
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for TIMER32_1
Auto Trait Implementations
---
### impl RefUnwindSafe for TIMER32_1
### impl !Sync for TIMER32_1
### impl Unpin for TIMER32_1
### impl UnwindSafe for TIMER32_1
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::TPIU
===
```
pub struct TPIU { /* private fields */ }
```
Trace Port Interface Unit
Implementations
---
### impl TPIU
#### pub const PTR: *const RegisterBlock = {0xe0040000 as *const cortex_m::peripheral::tpiu::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
👎Deprecated since 0.7.5: Use the associated constant `PTR` instead
Returns a pointer to the register block
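A one-line sketch of the migration suggested by the deprecation note above: prefer the associated constant over the deprecated method.

```
fn tpiu_pointers() {
    let _old = cec1712_pac::TPIU::ptr(); // deprecated since 0.7.5
    let _new = cec1712_pac::TPIU::PTR;   // preferred associated constant
}
```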
Trait Implementations
---
### impl Deref for TPIU
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &<TPIU as Deref>::Target
Dereferences the value.
### impl Send for TPIU
Auto Trait Implementations
---
### impl RefUnwindSafe for TPIU
### impl !Sync for TPIU
### impl Unpin for TPIU
### impl UnwindSafe for TPIU
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::UART0
===
```
pub struct UART0 { /* private fields */ }
```
The 16550 UART is a full-function Two Pin Serial Port that supports the standard RS-232 Interface.
Implementations
---
### impl UART0
#### pub const PTR: *const RegisterBlock = {0x400f2400 as *const uart0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Methods from Deref<Target = RegisterBlock>
---
#### pub fn dlab(&self) -> &DLAB
0x00..0x3f1 - UART when DLAB=1
#### pub fn data(&self) -> &DATA
0x00..0x3f1 - UART when DLAB=0
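The two accessors above select between the overlapping register views of the same UART block (the 16550 divisor-latch view when DLAB=1, the data/interrupt view when DLAB=0). A minimal sketch of obtaining each view; no individual registers are touched, since they are not listed here.

```
// Sketch only: obtain the two `Deref` views documented above.
fn uart_views(uart: &cec1712_pac::uart0::RegisterBlock) {
    let _divisor_view = uart.dlab(); // registers visible when DLAB=1
    let _data_view = uart.data();    // registers visible when DLAB=0
}
```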
Trait Implementations
---
### impl Debug for UART0
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for UART0
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for UART0
Auto Trait Implementations
---
### impl RefUnwindSafe for UART0
### impl !Sync for UART0
### impl Unpin for UART0
### impl UnwindSafe for UART0
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::UART1
===
```
pub struct UART1 { /* private fields */ }
```
The 16550 UART is a full-function Two Pin Serial Port that supports the standard RS-232 Interface.
Implementations
---
### impl UART1
#### pub const PTR: *const RegisterBlock = {0x400f2800 as *const uart0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Methods from Deref<Target = RegisterBlock>
---
#### pub fn dlab(&self) -> &DLAB
0x00..0x3f1 - UART when DLAB=1
#### pub fn data(&self) -> &DATA
0x00..0x3f1 - UART when DLAB=0
Trait Implementations
---
### impl Debug for UART1
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for UART1
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for UART1
Auto Trait Implementations
---
### impl RefUnwindSafe for UART1
### impl !Sync for UART1
### impl Unpin for UART1
### impl UnwindSafe for UART1
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::UART2
===
```
pub struct UART2 { /* private fields */ }
```
The 16550 UART is a full-function Two Pin Serial Port that supports the standard RS-232 Interface.
Implementations
---
### impl UART2
#### pub const PTR: *const RegisterBlock = {0x400f2c00 as *const uart0::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Methods from Deref<Target = RegisterBlock>
---
#### pub fn dlab(&self) -> &DLAB
0x00..0x3f1 - UART when DLAB=1
#### pub fn data(&self) -> &DATA
0x00..0x3f1 - UART when DLAB=0
Trait Implementations
---
### impl Debug for UART2
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for UART2
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for UART2
Auto Trait Implementations
---
### impl RefUnwindSafe for UART2
### impl !Sync for UART2
### impl Unpin for UART2
### impl UnwindSafe for UART2
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::VBAT
===
```
pub struct VBAT { /* private fields */ }
```
The VBAT Register Bank block is a block implemented for miscellaneous battery-backed registers
Implementations
---
### impl VBAT
#### pub const PTR: *const RegisterBlock = {0x4000a400 as *const vbat::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for VBAT
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for VBAT
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for VBAT
Auto Trait Implementations
---
### impl RefUnwindSafe for VBAT
### impl !Sync for VBAT
### impl Unpin for VBAT
### impl UnwindSafe for VBAT
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::VBAT_RAM
===
```
pub struct VBAT_RAM { /* private fields */ }
```
The VBAT RAM is operational while the main power rail is operational, and will retain its values powered by battery power while the main rail is unpowered.
Implementations
---
### impl VBAT_RAM
#### pub const PTR: *const RegisterBlock = {0x4000a800 as *const vbat_ram::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for VBAT_RAM
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for VBAT_RAM
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for VBAT_RAM
Auto Trait Implementations
---
### impl RefUnwindSafe for VBAT_RAM
### impl !Sync for VBAT_RAM
### impl Unpin for VBAT_RAM
### impl UnwindSafe for VBAT_RAM
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::VCI
===
```
pub struct VCI { /* private fields */ }
```
The VBAT-Powered Control Interfaces with the RTC With Date and DST Adjustment as well as the Week Alarm.
Implementations
---
### impl VCI
#### pub const PTR: *const RegisterBlock = {0x4000ae00 as *const vci::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for VCI
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for VCI
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for VCI
Auto Trait Implementations
---
### impl RefUnwindSafe for VCI
### impl !Sync for VCI
### impl Unpin for VCI
### impl UnwindSafe for VCI
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::WDT
===
```
pub struct WDT { /* private fields */ }
```
The function of the Watchdog Timer is to provide a mechanism to detect if the internal embedded controller has failed.
Implementations
---
### impl WDT
#### pub const PTR: *const RegisterBlock = {0x40000400 as *const wdt::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for WDT
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for WDT
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for WDT
Auto Trait Implementations
---
### impl RefUnwindSafe for WDT
### impl !Sync for WDT
### impl Unpin for WDT
### impl UnwindSafe for WDT
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Struct cec1712_pac::WEEK
===
```
pub struct WEEK { /* private fields */ }
```
The Week Timer and the Sub-Week Timer assert the Power-Up Event Output which automatically powers-up the system from the G3 state
Implementations
---
### impl WEEK
#### pub const PTR: *const RegisterBlock = {0x4000ac80 as *const week::RegisterBlock}
Pointer to the register block
#### pub const fn ptr() -> *const RegisterBlock
Return the pointer to the register block
Trait Implementations
---
### impl Debug for WEEK
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Deref for WEEK
#### type Target = RegisterBlock
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl Send for WEEK
Auto Trait Implementations
---
### impl RefUnwindSafe for WEEK
### impl !Sync for WEEK
### impl Unpin for WEEK
### impl UnwindSafe for WEEK
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Constant cec1712_pac::NVIC_PRIO_BITS
===
```
pub const NVIC_PRIO_BITS: u8 = 3;
```
Number of priority bits available in the NVIC for configuring interrupt priority
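Only the top `NVIC_PRIO_BITS` bits of the Cortex-M 8-bit priority field are implemented, so a logical priority level is shifted into place before it is handed to the NVIC. A small sketch of that encoding; passing the result on (for example to `cortex_m::peripheral::NVIC::set_priority`) is assumed context and not shown here.

```
// With NVIC_PRIO_BITS = 3, logical levels 0..=7 map to hardware values
// 0, 32, 64, ..., 224 (level << 5).
fn hw_priority(level: u8) -> u8 {
    level << (8 - cec1712_pac::NVIC_PRIO_BITS)
}
```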
Attribute Macro cec1712_pac::interrupt
===
```
#[interrupt]
```
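A hedged sketch of how this attribute is typically used in an svd2rust PAC: the decorated function becomes the handler for the device interrupt of the same name. `UART0` below is a hypothetical name; a real handler must be named after an actual variant of this crate's `Interrupt` enum.

```
use cec1712_pac::interrupt;

// `UART0` is a hypothetical interrupt name used only for illustration.
#[interrupt]
fn UART0() {
    // acknowledge and service the interrupt source here
}
```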
github.com/aws/aws-sdk-go-v2/service/iotthingsgraph
===
Documentation
---
### Overview
Package iotthingsgraph provides the API client, operations, and parameter types for AWS IoT Things Graph.
AWS IoT Things Graph provides an integrated set of tools that enable developers to connect devices and services that use different standards, such as units of measure and communication protocols. AWS IoT Things Graph makes it possible to build IoT applications with little to no code by connecting devices and services and defining how they interact at an abstract level. For more information about how AWS IoT Things Graph works, see the User Guide (<https://docs.aws.amazon.com/thingsgraph/latest/ug/iot-tg-whatis.html>).
The AWS IoT Things Graph service is discontinued.
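Although the service is discontinued, the client is constructed like any other aws-sdk-go-v2 client. A minimal sketch follows, with `DescribeNamespace` chosen only as an illustration of calling one of the operations listed in the index below.

```
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph"
)

func main() {
	ctx := context.Background()

	// Load region and credentials from the default sources.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	client := iotthingsgraph.NewFromConfig(cfg)

	// Any of the operations listed in the index can be called on the client.
	out, err := client.DescribeNamespace(ctx, &iotthingsgraph.DescribeNamespaceInput{})
	if err != nil {
		log.Fatal(err)
	}
	_ = out
}
```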
### Index
* [Constants](#pkg-constants)
* [func NewDefaultEndpointResolver() *internalendpoints.Resolver](#NewDefaultEndpointResolver)
* [func WithAPIOptions(optFns ...func(*middleware.Stack) error) func(*Options)](#WithAPIOptions)
* [func WithEndpointResolver(v EndpointResolver) func(*Options)](#WithEndpointResolver)deprecated
* [func WithEndpointResolverV2(v EndpointResolverV2) func(*Options)](#WithEndpointResolverV2)
* [type AssociateEntityToThingInput](#AssociateEntityToThingInput)
* [type AssociateEntityToThingOutput](#AssociateEntityToThingOutput)
* [type Client](#Client)
* + [func New(options Options, optFns ...func(*Options)) *Client](#New)
+ [func NewFromConfig(cfg aws.Config, optFns ...func(*Options)) *Client](#NewFromConfig)
* + [func (c *Client) AssociateEntityToThing(ctx context.Context, params *AssociateEntityToThingInput, ...) (*AssociateEntityToThingOutput, error)](#Client.AssociateEntityToThing)deprecated
+ [func (c *Client) CreateFlowTemplate(ctx context.Context, params *CreateFlowTemplateInput, optFns ...func(*Options)) (*CreateFlowTemplateOutput, error)](#Client.CreateFlowTemplate)deprecated
+ [func (c *Client) CreateSystemInstance(ctx context.Context, params *CreateSystemInstanceInput, ...) (*CreateSystemInstanceOutput, error)](#Client.CreateSystemInstance)deprecated
+ [func (c *Client) CreateSystemTemplate(ctx context.Context, params *CreateSystemTemplateInput, ...) (*CreateSystemTemplateOutput, error)](#Client.CreateSystemTemplate)deprecated
+ [func (c *Client) DeleteFlowTemplate(ctx context.Context, params *DeleteFlowTemplateInput, optFns ...func(*Options)) (*DeleteFlowTemplateOutput, error)](#Client.DeleteFlowTemplate)deprecated
+ [func (c *Client) DeleteNamespace(ctx context.Context, params *DeleteNamespaceInput, optFns ...func(*Options)) (*DeleteNamespaceOutput, error)](#Client.DeleteNamespace)deprecated
+ [func (c *Client) DeleteSystemInstance(ctx context.Context, params *DeleteSystemInstanceInput, ...) (*DeleteSystemInstanceOutput, error)](#Client.DeleteSystemInstance)deprecated
+ [func (c *Client) DeleteSystemTemplate(ctx context.Context, params *DeleteSystemTemplateInput, ...) (*DeleteSystemTemplateOutput, error)](#Client.DeleteSystemTemplate)deprecated
+ [func (c *Client) DeploySystemInstance(ctx context.Context, params *DeploySystemInstanceInput, ...) (*DeploySystemInstanceOutput, error)](#Client.DeploySystemInstance)deprecated
+ [func (c *Client) DeprecateFlowTemplate(ctx context.Context, params *DeprecateFlowTemplateInput, ...) (*DeprecateFlowTemplateOutput, error)](#Client.DeprecateFlowTemplate)deprecated
+ [func (c *Client) DeprecateSystemTemplate(ctx context.Context, params *DeprecateSystemTemplateInput, ...) (*DeprecateSystemTemplateOutput, error)](#Client.DeprecateSystemTemplate)deprecated
+ [func (c *Client) DescribeNamespace(ctx context.Context, params *DescribeNamespaceInput, optFns ...func(*Options)) (*DescribeNamespaceOutput, error)](#Client.DescribeNamespace)deprecated
+ [func (c *Client) DissociateEntityFromThing(ctx context.Context, params *DissociateEntityFromThingInput, ...) (*DissociateEntityFromThingOutput, error)](#Client.DissociateEntityFromThing)deprecated
+ [func (c *Client) GetEntities(ctx context.Context, params *GetEntitiesInput, optFns ...func(*Options)) (*GetEntitiesOutput, error)](#Client.GetEntities)deprecated
+ [func (c *Client) GetFlowTemplate(ctx context.Context, params *GetFlowTemplateInput, optFns ...func(*Options)) (*GetFlowTemplateOutput, error)](#Client.GetFlowTemplate)deprecated
+ [func (c *Client) GetFlowTemplateRevisions(ctx context.Context, params *GetFlowTemplateRevisionsInput, ...) (*GetFlowTemplateRevisionsOutput, error)](#Client.GetFlowTemplateRevisions)deprecated
+ [func (c *Client) GetNamespaceDeletionStatus(ctx context.Context, params *GetNamespaceDeletionStatusInput, ...) (*GetNamespaceDeletionStatusOutput, error)](#Client.GetNamespaceDeletionStatus)deprecated
+ [func (c *Client) GetSystemInstance(ctx context.Context, params *GetSystemInstanceInput, optFns ...func(*Options)) (*GetSystemInstanceOutput, error)](#Client.GetSystemInstance)deprecated
+ [func (c *Client) GetSystemTemplate(ctx context.Context, params *GetSystemTemplateInput, optFns ...func(*Options)) (*GetSystemTemplateOutput, error)](#Client.GetSystemTemplate)deprecated
+ [func (c *Client) GetSystemTemplateRevisions(ctx context.Context, params *GetSystemTemplateRevisionsInput, ...) (*GetSystemTemplateRevisionsOutput, error)](#Client.GetSystemTemplateRevisions)deprecated
+ [func (c *Client) GetUploadStatus(ctx context.Context, params *GetUploadStatusInput, optFns ...func(*Options)) (*GetUploadStatusOutput, error)](#Client.GetUploadStatus)deprecated
+ [func (c *Client) ListFlowExecutionMessages(ctx context.Context, params *ListFlowExecutionMessagesInput, ...) (*ListFlowExecutionMessagesOutput, error)](#Client.ListFlowExecutionMessages)deprecated
+ [func (c *Client) ListTagsForResource(ctx context.Context, params *ListTagsForResourceInput, ...) (*ListTagsForResourceOutput, error)](#Client.ListTagsForResource)deprecated
+ [func (c *Client) SearchEntities(ctx context.Context, params *SearchEntitiesInput, optFns ...func(*Options)) (*SearchEntitiesOutput, error)](#Client.SearchEntities)deprecated
+ [func (c *Client) SearchFlowExecutions(ctx context.Context, params *SearchFlowExecutionsInput, ...) (*SearchFlowExecutionsOutput, error)](#Client.SearchFlowExecutions)deprecated
+ [func (c *Client) SearchFlowTemplates(ctx context.Context, params *SearchFlowTemplatesInput, ...) (*SearchFlowTemplatesOutput, error)](#Client.SearchFlowTemplates)deprecated
+ [func (c *Client) SearchSystemInstances(ctx context.Context, params *SearchSystemInstancesInput, ...) (*SearchSystemInstancesOutput, error)](#Client.SearchSystemInstances)deprecated
+ [func (c *Client) SearchSystemTemplates(ctx context.Context, params *SearchSystemTemplatesInput, ...) (*SearchSystemTemplatesOutput, error)](#Client.SearchSystemTemplates)deprecated
+ [func (c *Client) SearchThings(ctx context.Context, params *SearchThingsInput, optFns ...func(*Options)) (*SearchThingsOutput, error)](#Client.SearchThings)deprecated
+ [func (c *Client) TagResource(ctx context.Context, params *TagResourceInput, optFns ...func(*Options)) (*TagResourceOutput, error)](#Client.TagResource)deprecated
+ [func (c *Client) UndeploySystemInstance(ctx context.Context, params *UndeploySystemInstanceInput, ...) (*UndeploySystemInstanceOutput, error)](#Client.UndeploySystemInstance)deprecated
+ [func (c *Client) UntagResource(ctx context.Context, params *UntagResourceInput, optFns ...func(*Options)) (*UntagResourceOutput, error)](#Client.UntagResource)deprecated
+ [func (c *Client) UpdateFlowTemplate(ctx context.Context, params *UpdateFlowTemplateInput, optFns ...func(*Options)) (*UpdateFlowTemplateOutput, error)](#Client.UpdateFlowTemplate)deprecated
+ [func (c *Client) UpdateSystemTemplate(ctx context.Context, params *UpdateSystemTemplateInput, ...) (*UpdateSystemTemplateOutput, error)](#Client.UpdateSystemTemplate)deprecated
+ [func (c *Client) UploadEntityDefinitions(ctx context.Context, params *UploadEntityDefinitionsInput, ...) (*UploadEntityDefinitionsOutput, error)](#Client.UploadEntityDefinitions)deprecated
* [type CreateFlowTemplateInput](#CreateFlowTemplateInput)
* [type CreateFlowTemplateOutput](#CreateFlowTemplateOutput)
* [type CreateSystemInstanceInput](#CreateSystemInstanceInput)
* [type CreateSystemInstanceOutput](#CreateSystemInstanceOutput)
* [type CreateSystemTemplateInput](#CreateSystemTemplateInput)
* [type CreateSystemTemplateOutput](#CreateSystemTemplateOutput)
* [type DeleteFlowTemplateInput](#DeleteFlowTemplateInput)
* [type DeleteFlowTemplateOutput](#DeleteFlowTemplateOutput)
* [type DeleteNamespaceInput](#DeleteNamespaceInput)
* [type DeleteNamespaceOutput](#DeleteNamespaceOutput)
* [type DeleteSystemInstanceInput](#DeleteSystemInstanceInput)
* [type DeleteSystemInstanceOutput](#DeleteSystemInstanceOutput)
* [type DeleteSystemTemplateInput](#DeleteSystemTemplateInput)
* [type DeleteSystemTemplateOutput](#DeleteSystemTemplateOutput)
* [type DeploySystemInstanceInput](#DeploySystemInstanceInput)
* [type DeploySystemInstanceOutput](#DeploySystemInstanceOutput)
* [type DeprecateFlowTemplateInput](#DeprecateFlowTemplateInput)
* [type DeprecateFlowTemplateOutput](#DeprecateFlowTemplateOutput)
* [type DeprecateSystemTemplateInput](#DeprecateSystemTemplateInput)
* [type DeprecateSystemTemplateOutput](#DeprecateSystemTemplateOutput)
* [type DescribeNamespaceInput](#DescribeNamespaceInput)
* [type DescribeNamespaceOutput](#DescribeNamespaceOutput)
* [type DissociateEntityFromThingInput](#DissociateEntityFromThingInput)
* [type DissociateEntityFromThingOutput](#DissociateEntityFromThingOutput)
* [type EndpointParameters](#EndpointParameters)
* + [func (p EndpointParameters) ValidateRequired() error](#EndpointParameters.ValidateRequired)
+ [func (p EndpointParameters) WithDefaults() EndpointParameters](#EndpointParameters.WithDefaults)
* [type EndpointResolver](#EndpointResolver)
* + [func EndpointResolverFromURL(url string, optFns ...func(*aws.Endpoint)) EndpointResolver](#EndpointResolverFromURL)
* [type EndpointResolverFunc](#EndpointResolverFunc)
* + [func (fn EndpointResolverFunc) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error)](#EndpointResolverFunc.ResolveEndpoint)
* [type EndpointResolverOptions](#EndpointResolverOptions)
* [type EndpointResolverV2](#EndpointResolverV2)
* + [func NewDefaultEndpointResolverV2() EndpointResolverV2](#NewDefaultEndpointResolverV2)
* [type GetEntitiesInput](#GetEntitiesInput)
* [type GetEntitiesOutput](#GetEntitiesOutput)
* [type GetFlowTemplateInput](#GetFlowTemplateInput)
* [type GetFlowTemplateOutput](#GetFlowTemplateOutput)
* [type GetFlowTemplateRevisionsAPIClient](#GetFlowTemplateRevisionsAPIClient)
* [type GetFlowTemplateRevisionsInput](#GetFlowTemplateRevisionsInput)
* [type GetFlowTemplateRevisionsOutput](#GetFlowTemplateRevisionsOutput)
* [type GetFlowTemplateRevisionsPaginator](#GetFlowTemplateRevisionsPaginator)
* + [func NewGetFlowTemplateRevisionsPaginator(client GetFlowTemplateRevisionsAPIClient, ...) *GetFlowTemplateRevisionsPaginator](#NewGetFlowTemplateRevisionsPaginator)
* + [func (p *GetFlowTemplateRevisionsPaginator) HasMorePages() bool](#GetFlowTemplateRevisionsPaginator.HasMorePages)
+ [func (p *GetFlowTemplateRevisionsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*GetFlowTemplateRevisionsOutput, error)](#GetFlowTemplateRevisionsPaginator.NextPage)
* [type GetFlowTemplateRevisionsPaginatorOptions](#GetFlowTemplateRevisionsPaginatorOptions)
* [type GetNamespaceDeletionStatusInput](#GetNamespaceDeletionStatusInput)
* [type GetNamespaceDeletionStatusOutput](#GetNamespaceDeletionStatusOutput)
* [type GetSystemInstanceInput](#GetSystemInstanceInput)
* [type GetSystemInstanceOutput](#GetSystemInstanceOutput)
* [type GetSystemTemplateInput](#GetSystemTemplateInput)
* [type GetSystemTemplateOutput](#GetSystemTemplateOutput)
* [type GetSystemTemplateRevisionsAPIClient](#GetSystemTemplateRevisionsAPIClient)
* [type GetSystemTemplateRevisionsInput](#GetSystemTemplateRevisionsInput)
* [type GetSystemTemplateRevisionsOutput](#GetSystemTemplateRevisionsOutput)
* [type GetSystemTemplateRevisionsPaginator](#GetSystemTemplateRevisionsPaginator)
* + [func NewGetSystemTemplateRevisionsPaginator(client GetSystemTemplateRevisionsAPIClient, ...) *GetSystemTemplateRevisionsPaginator](#NewGetSystemTemplateRevisionsPaginator)
* + [func (p *GetSystemTemplateRevisionsPaginator) HasMorePages() bool](#GetSystemTemplateRevisionsPaginator.HasMorePages)
+ [func (p *GetSystemTemplateRevisionsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*GetSystemTemplateRevisionsOutput, error)](#GetSystemTemplateRevisionsPaginator.NextPage)
* [type GetSystemTemplateRevisionsPaginatorOptions](#GetSystemTemplateRevisionsPaginatorOptions)
* [type GetUploadStatusInput](#GetUploadStatusInput)
* [type GetUploadStatusOutput](#GetUploadStatusOutput)
* [type HTTPClient](#HTTPClient)
* [type HTTPSignerV4](#HTTPSignerV4)
* [type ListFlowExecutionMessagesAPIClient](#ListFlowExecutionMessagesAPIClient)
* [type ListFlowExecutionMessagesInput](#ListFlowExecutionMessagesInput)
* [type ListFlowExecutionMessagesOutput](#ListFlowExecutionMessagesOutput)
* [type ListFlowExecutionMessagesPaginator](#ListFlowExecutionMessagesPaginator)
* + [func NewListFlowExecutionMessagesPaginator(client ListFlowExecutionMessagesAPIClient, ...) *ListFlowExecutionMessagesPaginator](#NewListFlowExecutionMessagesPaginator)
* + [func (p *ListFlowExecutionMessagesPaginator) HasMorePages() bool](#ListFlowExecutionMessagesPaginator.HasMorePages)
+ [func (p *ListFlowExecutionMessagesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListFlowExecutionMessagesOutput, error)](#ListFlowExecutionMessagesPaginator.NextPage)
* [type ListFlowExecutionMessagesPaginatorOptions](#ListFlowExecutionMessagesPaginatorOptions)
* [type ListTagsForResourceAPIClient](#ListTagsForResourceAPIClient)
* [type ListTagsForResourceInput](#ListTagsForResourceInput)
* [type ListTagsForResourceOutput](#ListTagsForResourceOutput)
* [type ListTagsForResourcePaginator](#ListTagsForResourcePaginator)
* + [func NewListTagsForResourcePaginator(client ListTagsForResourceAPIClient, params *ListTagsForResourceInput, ...) *ListTagsForResourcePaginator](#NewListTagsForResourcePaginator)
* + [func (p *ListTagsForResourcePaginator) HasMorePages() bool](#ListTagsForResourcePaginator.HasMorePages)
+ [func (p *ListTagsForResourcePaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListTagsForResourceOutput, error)](#ListTagsForResourcePaginator.NextPage)
* [type ListTagsForResourcePaginatorOptions](#ListTagsForResourcePaginatorOptions)
* [type Options](#Options)
* + [func (o Options) Copy() Options](#Options.Copy)
* [type ResolveEndpoint](#ResolveEndpoint)
* + [func (m *ResolveEndpoint) HandleSerialize(ctx context.Context, in middleware.SerializeInput, ...) (out middleware.SerializeOutput, metadata middleware.Metadata, err error)](#ResolveEndpoint.HandleSerialize)
+ [func (*ResolveEndpoint) ID() string](#ResolveEndpoint.ID)
* [type SearchEntitiesAPIClient](#SearchEntitiesAPIClient)
* [type SearchEntitiesInput](#SearchEntitiesInput)
* [type SearchEntitiesOutput](#SearchEntitiesOutput)
* [type SearchEntitiesPaginator](#SearchEntitiesPaginator)
* + [func NewSearchEntitiesPaginator(client SearchEntitiesAPIClient, params *SearchEntitiesInput, ...) *SearchEntitiesPaginator](#NewSearchEntitiesPaginator)
* + [func (p *SearchEntitiesPaginator) HasMorePages() bool](#SearchEntitiesPaginator.HasMorePages)
+ [func (p *SearchEntitiesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*SearchEntitiesOutput, error)](#SearchEntitiesPaginator.NextPage)
* [type SearchEntitiesPaginatorOptions](#SearchEntitiesPaginatorOptions)
* [type SearchFlowExecutionsAPIClient](#SearchFlowExecutionsAPIClient)
* [type SearchFlowExecutionsInput](#SearchFlowExecutionsInput)
* [type SearchFlowExecutionsOutput](#SearchFlowExecutionsOutput)
* [type SearchFlowExecutionsPaginator](#SearchFlowExecutionsPaginator)
* + [func NewSearchFlowExecutionsPaginator(client SearchFlowExecutionsAPIClient, params *SearchFlowExecutionsInput, ...) *SearchFlowExecutionsPaginator](#NewSearchFlowExecutionsPaginator)
* + [func (p *SearchFlowExecutionsPaginator) HasMorePages() bool](#SearchFlowExecutionsPaginator.HasMorePages)
+ [func (p *SearchFlowExecutionsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*SearchFlowExecutionsOutput, error)](#SearchFlowExecutionsPaginator.NextPage)
* [type SearchFlowExecutionsPaginatorOptions](#SearchFlowExecutionsPaginatorOptions)
* [type SearchFlowTemplatesAPIClient](#SearchFlowTemplatesAPIClient)
* [type SearchFlowTemplatesInput](#SearchFlowTemplatesInput)
* [type SearchFlowTemplatesOutput](#SearchFlowTemplatesOutput)
* [type SearchFlowTemplatesPaginator](#SearchFlowTemplatesPaginator)
* + [func NewSearchFlowTemplatesPaginator(client SearchFlowTemplatesAPIClient, params *SearchFlowTemplatesInput, ...) *SearchFlowTemplatesPaginator](#NewSearchFlowTemplatesPaginator)
* + [func (p *SearchFlowTemplatesPaginator) HasMorePages() bool](#SearchFlowTemplatesPaginator.HasMorePages)
+ [func (p *SearchFlowTemplatesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*SearchFlowTemplatesOutput, error)](#SearchFlowTemplatesPaginator.NextPage)
* [type SearchFlowTemplatesPaginatorOptions](#SearchFlowTemplatesPaginatorOptions)
* [type SearchSystemInstancesAPIClient](#SearchSystemInstancesAPIClient)
* [type SearchSystemInstancesInput](#SearchSystemInstancesInput)
* [type SearchSystemInstancesOutput](#SearchSystemInstancesOutput)
* [type SearchSystemInstancesPaginator](#SearchSystemInstancesPaginator)
* + [func NewSearchSystemInstancesPaginator(client SearchSystemInstancesAPIClient, params *SearchSystemInstancesInput, ...) *SearchSystemInstancesPaginator](#NewSearchSystemInstancesPaginator)
* + [func (p *SearchSystemInstancesPaginator) HasMorePages() bool](#SearchSystemInstancesPaginator.HasMorePages)
+ [func (p *SearchSystemInstancesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*SearchSystemInstancesOutput, error)](#SearchSystemInstancesPaginator.NextPage)
* [type SearchSystemInstancesPaginatorOptions](#SearchSystemInstancesPaginatorOptions)
* [type SearchSystemTemplatesAPIClient](#SearchSystemTemplatesAPIClient)
* [type SearchSystemTemplatesInput](#SearchSystemTemplatesInput)
* [type SearchSystemTemplatesOutput](#SearchSystemTemplatesOutput)
* [type SearchSystemTemplatesPaginator](#SearchSystemTemplatesPaginator)
* + [func NewSearchSystemTemplatesPaginator(client SearchSystemTemplatesAPIClient, params *SearchSystemTemplatesInput, ...) *SearchSystemTemplatesPaginator](#NewSearchSystemTemplatesPaginator)
* + [func (p *SearchSystemTemplatesPaginator) HasMorePages() bool](#SearchSystemTemplatesPaginator.HasMorePages)
+ [func (p *SearchSystemTemplatesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*SearchSystemTemplatesOutput, error)](#SearchSystemTemplatesPaginator.NextPage)
* [type SearchSystemTemplatesPaginatorOptions](#SearchSystemTemplatesPaginatorOptions)
* [type SearchThingsAPIClient](#SearchThingsAPIClient)
* [type SearchThingsInput](#SearchThingsInput)
* [type SearchThingsOutput](#SearchThingsOutput)
* [type SearchThingsPaginator](#SearchThingsPaginator)
* + [func NewSearchThingsPaginator(client SearchThingsAPIClient, params *SearchThingsInput, ...) *SearchThingsPaginator](#NewSearchThingsPaginator)
* + [func (p *SearchThingsPaginator) HasMorePages() bool](#SearchThingsPaginator.HasMorePages)
+ [func (p *SearchThingsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*SearchThingsOutput, error)](#SearchThingsPaginator.NextPage)
* [type SearchThingsPaginatorOptions](#SearchThingsPaginatorOptions)
* [type TagResourceInput](#TagResourceInput)
* [type TagResourceOutput](#TagResourceOutput)
* [type UndeploySystemInstanceInput](#UndeploySystemInstanceInput)
* [type UndeploySystemInstanceOutput](#UndeploySystemInstanceOutput)
* [type UntagResourceInput](#UntagResourceInput)
* [type UntagResourceOutput](#UntagResourceOutput)
* [type UpdateFlowTemplateInput](#UpdateFlowTemplateInput)
* [type UpdateFlowTemplateOutput](#UpdateFlowTemplateOutput)
* [type UpdateSystemTemplateInput](#UpdateSystemTemplateInput)
* [type UpdateSystemTemplateOutput](#UpdateSystemTemplateOutput)
* [type UploadEntityDefinitionsInput](#UploadEntityDefinitionsInput)
* [type UploadEntityDefinitionsOutput](#UploadEntityDefinitionsOutput)
### Constants [¶](#pkg-constants)
```
const ServiceAPIVersion = "2018-09-06"
```
```
const ServiceID = "IoTThingsGraph"
```
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [NewDefaultEndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L33) [¶](#NewDefaultEndpointResolver)
```
func NewDefaultEndpointResolver() *[internalendpoints](/github.com/aws/aws-sdk-go-v2/service/[email protected]/internal/endpoints).[Resolver](/github.com/aws/aws-sdk-go-v2/service/[email protected]/internal/endpoints#Resolver)
```
NewDefaultEndpointResolver constructs a new service endpoint resolver
####
func [WithAPIOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_client.go#L152) [¶](#WithAPIOptions)
added in v1.0.0
```
func WithAPIOptions(optFns ...func(*[middleware](/github.com/aws/smithy-go/middleware).[Stack](/github.com/aws/smithy-go/middleware#Stack)) [error](/builtin#error)) func(*[Options](#Options))
```
WithAPIOptions returns a functional option for setting the Client's APIOptions option.
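A minimal, hedged sketch of wiring API options at construction time. The no-op stack mutator below is a placeholder for real middleware wiring, and the config loading follows the usual aws-sdk-go-v2 pattern; only the `WithAPIOptions` signature shown above is taken from this package.
```
package main
import (
"context"
"log"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph"
"github.com/aws/smithy-go/middleware"
)
func main() {
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
log.Fatal(err)
}
// Placeholder mutator: real code would add, swap, or remove middleware here.
noop := func(stack *middleware.Stack) error { return nil }
client := iotthingsgraph.NewFromConfig(cfg, iotthingsgraph.WithAPIOptions(noop))
_ = client
}
```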
####
func [WithEndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_client.go#L163)
deprecated
```
func WithEndpointResolver(v [EndpointResolver](#EndpointResolver)) func(*[Options](#Options))
```
Deprecated: EndpointResolver and WithEndpointResolver. Providing a value for this field will likely prevent you from using any endpoint-related service features released after the introduction of EndpointResolverV2 and BaseEndpoint.
To migrate an EndpointResolver implementation that uses a custom endpoint, set the client option BaseEndpoint instead.
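A hedged migration sketch: instead of WithEndpointResolver, set the BaseEndpoint client option. It assumes `cfg` is an `aws.Config` loaded as in the sketch above, that the `aws` helper package (github.com/aws/aws-sdk-go-v2/aws) is imported for `aws.String`, and the URL is a placeholder for whatever the custom resolver previously returned.
```
client := iotthingsgraph.NewFromConfig(cfg, func(o *iotthingsgraph.Options) {
// BaseEndpoint replaces the deprecated custom EndpointResolver.
o.BaseEndpoint = aws.String("https://iotthingsgraph.example.com") // placeholder endpoint
})
_ = client
```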
####
func [WithEndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_client.go#L171) [¶](#WithEndpointResolverV2)
added in v1.15.0
```
func WithEndpointResolverV2(v [EndpointResolverV2](#EndpointResolverV2)) func(*[Options](#Options))
```
WithEndpointResolverV2 returns a functional option for setting the Client's EndpointResolverV2 option.
### Types [¶](#pkg-types)
####
type [AssociateEntityToThingInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_AssociateEntityToThing.go#L38) [¶](#AssociateEntityToThingInput)
```
type AssociateEntityToThingInput struct {
// The ID of the device to be associated with the thing. The ID should be in the
// following format. urn:tdm:REGION/ACCOUNT ID/default:device:DEVICENAME
//
// This member is required.
EntityId *[string](/builtin#string)
// The name of the thing to which the entity is to be associated.
//
// This member is required.
ThingName *[string](/builtin#string)
// The version of the user's namespace. Defaults to the latest version of the
// user's namespace.
NamespaceVersion *[int64](/builtin#int64)
// contains filtered or unexported fields
}
```
####
type [AssociateEntityToThingOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_AssociateEntityToThing.go#L58) [¶](#AssociateEntityToThingOutput)
```
type AssociateEntityToThingOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [Client](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_client.go#L29) [¶](#Client)
```
type Client struct {
// contains filtered or unexported fields
}
```
Client provides the API client to make operations call for AWS IoT Things Graph.
####
func [New](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_client.go#L36) [¶](#New)
```
func New(options [Options](#Options), optFns ...func(*[Options](#Options))) *[Client](#Client)
```
New returns an initialized Client based on the functional options. Provide additional functional options to further configure the behavior of the client, such as changing the client's endpoint or adding custom middleware behavior.
####
func [NewFromConfig](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_client.go#L280) [¶](#NewFromConfig)
```
func NewFromConfig(cfg [aws](/github.com/aws/aws-sdk-go-v2/aws).[Config](/github.com/aws/aws-sdk-go-v2/aws#Config), optFns ...func(*[Options](#Options))) *[Client](#Client)
```
NewFromConfig returns a new client from the provided config.
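A short getting-started sketch: load the default configuration, build the client, and make one call. Leaving NamespaceName unset in DescribeNamespaceInput should describe the caller's own namespace (per the DescribeNamespaceInput documentation further down); everything else follows the standard aws-sdk-go-v2 pattern.
```
package main
import (
"context"
"fmt"
"log"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph"
)
func main() {
ctx := context.TODO()
cfg, err := config.LoadDefaultConfig(ctx)
if err != nil {
log.Fatal(err)
}
client := iotthingsgraph.NewFromConfig(cfg)
// Leaving NamespaceName unset describes the caller's own namespace.
out, err := client.DescribeNamespace(ctx, &iotthingsgraph.DescribeNamespaceInput{})
if err != nil {
log.Fatal(err)
}
fmt.Println(aws.ToString(out.NamespaceName), aws.ToInt64(out.NamespaceVersion))
}
```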
####
func (*Client) [AssociateEntityToThing](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_AssociateEntityToThing.go#L23)
deprecated
```
func (c *[Client](#Client)) AssociateEntityToThing(ctx [context](/context).[Context](/context#Context), params *[AssociateEntityToThingInput](#AssociateEntityToThingInput), optFns ...func(*[Options](#Options))) (*[AssociateEntityToThingOutput](#AssociateEntityToThingOutput), [error](/builtin#error))
```
Associates a device with a concrete thing that is in the user's registry. A thing can be associated with only one device at a time. If you associate a thing with a new device id, its previous association will be removed.
Deprecated: since: 2022-08-30
####
func (*Client) [CreateFlowTemplate](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_CreateFlowTemplate.go#L26)
deprecated
```
func (c *[Client](#Client)) CreateFlowTemplate(ctx [context](/context).[Context](/context#Context), params *[CreateFlowTemplateInput](#CreateFlowTemplateInput), optFns ...func(*[Options](#Options))) (*[CreateFlowTemplateOutput](#CreateFlowTemplateOutput), [error](/builtin#error))
```
Creates a workflow template. Workflows can be created only in the user's namespace. (The public namespace contains only entities.) The workflow can contain only entities in the specified namespace. The workflow is validated against the entities in the latest version of the user's namespace unless another namespace version is specified in the request.
Deprecated: since: 2022-08-30
####
func (*Client) [CreateSystemInstance](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_CreateSystemInstance.go#L34)
deprecated
```
func (c *[Client](#Client)) CreateSystemInstance(ctx [context](/context).[Context](/context#Context), params *[CreateSystemInstanceInput](#CreateSystemInstanceInput), optFns ...func(*[Options](#Options))) (*[CreateSystemInstanceOutput](#CreateSystemInstanceOutput), [error](/builtin#error))
```
Creates a system instance. This action validates the system instance, prepares the deployment-related resources. For Greengrass deployments, it updates the Greengrass group that is specified by the greengrassGroupName parameter. It also adds a file to the S3 bucket specified by the s3BucketName parameter. You need to call DeploySystemInstance after running this action. For Greengrass deployments, since this action modifies and adds resources to a Greengrass group and an S3 bucket on the caller's behalf, the calling identity must have write permissions to both the specified Greengrass group and S3 bucket. Otherwise, the call will fail with an authorization error. For cloud deployments, this action requires a flowActionsRoleArn value. This is an IAM role that has permissions to access AWS services, such as AWS Lambda and AWS IoT, that the flow uses when it executes. If the definition document doesn't specify a version of the user's namespace, the latest version will be used by default.
Deprecated: since: 2022-08-30
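A hedged sketch of a cloud-targeted input, assuming `ctx` and `client` from the getting-started sketch above and an import of the `types` package (github.com/aws/aws-sdk-go-v2/service/iotthingsgraph/types). The GraphQL text (`definitionText`), the role ARN, and the string-typed enum values are illustrative placeholders, and the DefinitionDocument field names are assumptions about the types package.
```
// Cloud deployments require FlowActionsRoleArn; Greengrass deployments would
// instead set GreengrassGroupName and S3BucketName.
input := &iotthingsgraph.CreateSystemInstanceInput{
Definition: &types.DefinitionDocument{
Language: types.DefinitionLanguage("GRAPHQL"), // assumed language value
Text:     aws.String(definitionText),          // placeholder GraphQL definition
},
Target:             types.DeploymentTarget("CLOUD"),
FlowActionsRoleArn: aws.String("arn:aws:iam::123456789012:role/FlowExecutionRole"), // placeholder
}
out, err := client.CreateSystemInstance(ctx, input)
if err != nil {
log.Fatal(err)
}
fmt.Println(out.Summary) // follow up with DeploySystemInstance, per the note above
```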
####
func (*Client) [CreateSystemTemplate](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_CreateSystemTemplate.go#L24)
deprecated
```
func (c *[Client](#Client)) CreateSystemTemplate(ctx [context](/context).[Context](/context#Context), params *[CreateSystemTemplateInput](#CreateSystemTemplateInput), optFns ...func(*[Options](#Options))) (*[CreateSystemTemplateOutput](#CreateSystemTemplateOutput), [error](/builtin#error))
```
Creates a system. The system is validated against the entities in the latest version of the user's namespace unless another namespace version is specified in the request.
Deprecated: since: 2022-08-30
####
func (*Client) [DeleteFlowTemplate](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeleteFlowTemplate.go#L24)
deprecated
```
func (c *[Client](#Client)) DeleteFlowTemplate(ctx [context](/context).[Context](/context#Context), params *[DeleteFlowTemplateInput](#DeleteFlowTemplateInput), optFns ...func(*[Options](#Options))) (*[DeleteFlowTemplateOutput](#DeleteFlowTemplateOutput), [error](/builtin#error))
```
Deletes a workflow. Any new system or deployment that contains this workflow will fail to update or deploy. Existing deployments that contain the workflow will continue to run (since they use a snapshot of the workflow taken at the time of deployment).
Deprecated: since: 2022-08-30
####
func (*Client) [DeleteNamespace](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeleteNamespace.go#L23)
deprecated
```
func (c *[Client](#Client)) DeleteNamespace(ctx [context](/context).[Context](/context#Context), params *[DeleteNamespaceInput](#DeleteNamespaceInput), optFns ...func(*[Options](#Options))) (*[DeleteNamespaceOutput](#DeleteNamespaceOutput), [error](/builtin#error))
```
Deletes the specified namespace. This action deletes all of the entities in the namespace. Delete the systems and flows that use entities in the namespace before performing this action. This action takes no request parameters.
Deprecated: since: 2022-08-30
####
func (*Client) [DeleteSystemInstance](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeleteSystemInstance.go#L23)
deprecated
```
func (c *[Client](#Client)) DeleteSystemInstance(ctx [context](/context).[Context](/context#Context), params *[DeleteSystemInstanceInput](#DeleteSystemInstanceInput), optFns ...func(*[Options](#Options))) (*[DeleteSystemInstanceOutput](#DeleteSystemInstanceOutput), [error](/builtin#error))
```
Deletes a system instance. Only system instances that have never been deployed, or that have been undeployed, can be deleted. Users can create a new system instance that has the same ID as a deleted system instance.
Deprecated: since: 2022-08-30
####
func (*Client) [DeleteSystemTemplate](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeleteSystemTemplate.go#L23)
deprecated
```
func (c *[Client](#Client)) DeleteSystemTemplate(ctx [context](/context).[Context](/context#Context), params *[DeleteSystemTemplateInput](#DeleteSystemTemplateInput), optFns ...func(*[Options](#Options))) (*[DeleteSystemTemplateOutput](#DeleteSystemTemplateOutput), [error](/builtin#error))
```
Deletes a system. New deployments can't contain the system after its deletion. Existing deployments that contain the system will continue to work because they use a snapshot of the system that is taken when it is deployed.
Deprecated: since: 2022-08-30
####
func (*Client) [DeploySystemInstance](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeploySystemInstance.go#L32)
deprecated
```
func (c *[Client](#Client)) DeploySystemInstance(ctx [context](/context).[Context](/context#Context), params *[DeploySystemInstanceInput](#DeploySystemInstanceInput), optFns ...func(*[Options](#Options))) (*[DeploySystemInstanceOutput](#DeploySystemInstanceOutput), [error](/builtin#error))
```
Greengrass and Cloud Deployments Deploys the system instance to the target specified in CreateSystemInstance. Greengrass Deployments If the system or any workflows and entities have been updated before this action is called, then the deployment will create a new Amazon Simple Storage Service resource file and then deploy it. Since this action creates a Greengrass deployment on the caller's behalf, the calling identity must have write permissions to the specified Greengrass group. Otherwise, the call will fail with an authorization error. For information about the artifacts that get added to your Greengrass core device when you use this API, see AWS IoT Things Graph and AWS IoT Greengrass (<https://docs.aws.amazon.com/thingsgraph/latest/ug/iot-tg-greengrass.html>).
Deprecated: since: 2022-08-30
####
func (*Client) [DeprecateFlowTemplate](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeprecateFlowTemplate.go#L23)
deprecated
```
func (c *[Client](#Client)) DeprecateFlowTemplate(ctx [context](/context).[Context](/context#Context), params *[DeprecateFlowTemplateInput](#DeprecateFlowTemplateInput), optFns ...func(*[Options](#Options))) (*[DeprecateFlowTemplateOutput](#DeprecateFlowTemplateOutput), [error](/builtin#error))
```
Deprecates the specified workflow. This action marks the workflow for deletion. Deprecated flows can't be deployed, but existing deployments will continue to run.
Deprecated: since: 2022-08-30
####
func (*Client) [DeprecateSystemTemplate](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeprecateSystemTemplate.go#L21)
deprecated
```
func (c *[Client](#Client)) DeprecateSystemTemplate(ctx [context](/context).[Context](/context#Context), params *[DeprecateSystemTemplateInput](#DeprecateSystemTemplateInput), optFns ...func(*[Options](#Options))) (*[DeprecateSystemTemplateOutput](#DeprecateSystemTemplateOutput), [error](/builtin#error))
```
Deprecates the specified system.
Deprecated: since: 2022-08-30
####
func (*Client) [DescribeNamespace](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DescribeNamespace.go#L22)
deprecated
```
func (c *[Client](#Client)) DescribeNamespace(ctx [context](/context).[Context](/context#Context), params *[DescribeNamespaceInput](#DescribeNamespaceInput), optFns ...func(*[Options](#Options))) (*[DescribeNamespaceOutput](#DescribeNamespaceOutput), [error](/builtin#error))
```
Gets the latest version of the user's namespace and the public version that it is tracking.
Deprecated: since: 2022-08-30
####
func (*Client) [DissociateEntityFromThing](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DissociateEntityFromThing.go#L24)
deprecated
```
func (c *[Client](#Client)) DissociateEntityFromThing(ctx [context](/context).[Context](/context#Context), params *[DissociateEntityFromThingInput](#DissociateEntityFromThingInput), optFns ...func(*[Options](#Options))) (*[DissociateEntityFromThingOutput](#DissociateEntityFromThingOutput), [error](/builtin#error))
```
Dissociates a device entity from a concrete thing. The action takes only the type of the entity that you need to dissociate because only one entity of a particular type can be associated with a thing.
Deprecated: since: 2022-08-30
####
func (*Client) [GetEntities](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetEntities.go#L34)
deprecated
```
func (c *[Client](#Client)) GetEntities(ctx [context](/context).[Context](/context#Context), params *[GetEntitiesInput](#GetEntitiesInput), optFns ...func(*[Options](#Options))) (*[GetEntitiesOutput](#GetEntitiesOutput), [error](/builtin#error))
```
Gets definitions of the specified entities. Uses the latest version of the user's namespace by default. This API returns the following TDM entities.
* Properties
* States
* Events
* Actions
* Capabilities
* Mappings
* Devices
* Device Models
* Services
This action doesn't return definitions for systems, flows, and deployments.
Deprecated: since: 2022-08-30
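A hedged lookup sketch, assuming `ctx` and `client` from the getting-started sketch above. The URN is a placeholder in the format described above, and the EntityDescription fields printed are assumptions for illustration.
```
out, err := client.GetEntities(ctx, &iotthingsgraph.GetEntitiesInput{
// Placeholder ID in the TDM URN format described above.
Ids: []string{"urn:tdm:us-east-1/123456789012/default:device:ExampleCamera"},
})
if err != nil {
log.Fatal(err)
}
for _, d := range out.Descriptions {
fmt.Println(aws.ToString(d.Id), d.Type) // Id/Type fields assumed on EntityDescription
}
```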
####
func (*Client) [GetFlowTemplate](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetFlowTemplate.go#L23)
deprecated
```
func (c *[Client](#Client)) GetFlowTemplate(ctx [context](/context).[Context](/context#Context), params *[GetFlowTemplateInput](#GetFlowTemplateInput), optFns ...func(*[Options](#Options))) (*[GetFlowTemplateOutput](#GetFlowTemplateOutput), [error](/builtin#error))
```
Gets the latest version of the DefinitionDocument and FlowTemplateSummary for the specified workflow.
Deprecated: since: 2022-08-30
####
func (*Client) [GetFlowTemplateRevisions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetFlowTemplateRevisions.go#L25)
deprecated
```
func (c *[Client](#Client)) GetFlowTemplateRevisions(ctx [context](/context).[Context](/context#Context), params *[GetFlowTemplateRevisionsInput](#GetFlowTemplateRevisionsInput), optFns ...func(*[Options](#Options))) (*[GetFlowTemplateRevisionsOutput](#GetFlowTemplateRevisionsOutput), [error](/builtin#error))
```
Gets revisions of the specified workflow. Only the last 100 revisions are stored. If the workflow has been deprecated, this action will return revisions that occurred before the deprecation. This action won't work for workflows that have been deleted.
Deprecated: since: 2022-08-30
####
func (*Client) [GetNamespaceDeletionStatus](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetNamespaceDeletionStatus.go#L22)
deprecated
```
func (c *[Client](#Client)) GetNamespaceDeletionStatus(ctx [context](/context).[Context](/context#Context), params *[GetNamespaceDeletionStatusInput](#GetNamespaceDeletionStatusInput), optFns ...func(*[Options](#Options))) (*[GetNamespaceDeletionStatusOutput](#GetNamespaceDeletionStatusOutput), [error](/builtin#error))
```
Gets the status of a namespace deletion task.
Deprecated: since: 2022-08-30
####
func (*Client) [GetSystemInstance](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemInstance.go#L22)
deprecated
```
func (c *[Client](#Client)) GetSystemInstance(ctx [context](/context).[Context](/context#Context), params *[GetSystemInstanceInput](#GetSystemInstanceInput), optFns ...func(*[Options](#Options))) (*[GetSystemInstanceOutput](#GetSystemInstanceOutput), [error](/builtin#error))
```
Gets a system instance.
Deprecated: since: 2022-08-30
####
func (*Client) [GetSystemTemplate](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemTemplate.go#L22)
deprecated
```
func (c *[Client](#Client)) GetSystemTemplate(ctx [context](/context).[Context](/context#Context), params *[GetSystemTemplateInput](#GetSystemTemplateInput), optFns ...func(*[Options](#Options))) (*[GetSystemTemplateOutput](#GetSystemTemplateOutput), [error](/builtin#error))
```
Gets a system.
Deprecated: since: 2022-08-30
####
func (*Client) [GetSystemTemplateRevisions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemTemplateRevisions.go#L25)
deprecated
```
func (c *[Client](#Client)) GetSystemTemplateRevisions(ctx [context](/context).[Context](/context#Context), params *[GetSystemTemplateRevisionsInput](#GetSystemTemplateRevisionsInput), optFns ...func(*[Options](#Options))) (*[GetSystemTemplateRevisionsOutput](#GetSystemTemplateRevisionsOutput), [error](/builtin#error))
```
Gets revisions made to the specified system template. Only the previous 100 revisions are stored. If the system has been deprecated, this action will return the revisions that occurred before its deprecation. This action won't work with systems that have been deleted.
Deprecated: since: 2022-08-30
####
func (*Client) [GetUploadStatus](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetUploadStatus.go#L23)
deprecated
```
func (c *[Client](#Client)) GetUploadStatus(ctx [context](/context).[Context](/context#Context), params *[GetUploadStatusInput](#GetUploadStatusInput), optFns ...func(*[Options](#Options))) (*[GetUploadStatusOutput](#GetUploadStatusOutput), [error](/builtin#error))
```
Gets the status of the specified upload.
Deprecated: since: 2022-08-30
####
func (*Client) [ListFlowExecutionMessages](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListFlowExecutionMessages.go#L23)
deprecated
```
func (c *[Client](#Client)) ListFlowExecutionMessages(ctx [context](/context).[Context](/context#Context), params *[ListFlowExecutionMessagesInput](#ListFlowExecutionMessagesInput), optFns ...func(*[Options](#Options))) (*[ListFlowExecutionMessagesOutput](#ListFlowExecutionMessagesOutput), [error](/builtin#error))
```
Returns a list of objects that contain information about events in a flow execution.
Deprecated: since: 2022-08-30
####
func (*Client) [ListTagsForResource](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListTagsForResource.go#L22)
deprecated
```
func (c *[Client](#Client)) ListTagsForResource(ctx [context](/context).[Context](/context#Context), params *[ListTagsForResourceInput](#ListTagsForResourceInput), optFns ...func(*[Options](#Options))) (*[ListTagsForResourceOutput](#ListTagsForResourceOutput), [error](/builtin#error))
```
Lists all tags on an AWS IoT Things Graph resource.
Deprecated: since: 2022-08-30
####
func (*Client) [SearchEntities](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchEntities.go#L23)
deprecated
```
func (c *[Client](#Client)) SearchEntities(ctx [context](/context).[Context](/context#Context), params *[SearchEntitiesInput](#SearchEntitiesInput), optFns ...func(*[Options](#Options))) (*[SearchEntitiesOutput](#SearchEntitiesOutput), [error](/builtin#error))
```
Searches for entities of the specified type. You can search for entities in your namespace and the public namespace that you're tracking.
Deprecated: since: 2022-08-30
####
func (*Client) [SearchFlowExecutions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowExecutions.go#L23)
deprecated
```
func (c *[Client](#Client)) SearchFlowExecutions(ctx [context](/context).[Context](/context#Context), params *[SearchFlowExecutionsInput](#SearchFlowExecutionsInput), optFns ...func(*[Options](#Options))) (*[SearchFlowExecutionsOutput](#SearchFlowExecutionsOutput), [error](/builtin#error))
```
Searches for AWS IoT Things Graph workflow execution instances.
Deprecated: since: 2022-08-30
####
func (*Client) [SearchFlowTemplates](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowTemplates.go#L22)
deprecated
```
func (c *[Client](#Client)) SearchFlowTemplates(ctx [context](/context).[Context](/context#Context), params *[SearchFlowTemplatesInput](#SearchFlowTemplatesInput), optFns ...func(*[Options](#Options))) (*[SearchFlowTemplatesOutput](#SearchFlowTemplatesOutput), [error](/builtin#error))
```
Searches for summary information about workflows.
Deprecated: since: 2022-08-30
####
func (*Client) [SearchSystemInstances](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemInstances.go#L22)
deprecated
```
func (c *[Client](#Client)) SearchSystemInstances(ctx [context](/context).[Context](/context#Context), params *[SearchSystemInstancesInput](#SearchSystemInstancesInput), optFns ...func(*[Options](#Options))) (*[SearchSystemInstancesOutput](#SearchSystemInstancesOutput), [error](/builtin#error))
```
Searches for system instances in the user's account.
Deprecated: since: 2022-08-30
####
func (*Client) [SearchSystemTemplates](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemTemplates.go#L24)
deprecated
```
func (c *[Client](#Client)) SearchSystemTemplates(ctx [context](/context).[Context](/context#Context), params *[SearchSystemTemplatesInput](#SearchSystemTemplatesInput), optFns ...func(*[Options](#Options))) (*[SearchSystemTemplatesOutput](#SearchSystemTemplatesOutput), [error](/builtin#error))
```
Searches for summary information about systems in the user's account. You can filter by the ID of a workflow to return only systems that use the specified workflow.
Deprecated: since: 2022-08-30
####
func (*Client) [SearchThings](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchThings.go#L27)
deprecated
```
func (c *[Client](#Client)) SearchThings(ctx [context](/context).[Context](/context#Context), params *[SearchThingsInput](#SearchThingsInput), optFns ...func(*[Options](#Options))) (*[SearchThingsOutput](#SearchThingsOutput), [error](/builtin#error))
```
Searches for things associated with the specified entity. You can search by both device and device model. For example, if two different devices, camera1 and camera2, implement the camera device model, the user can associate thing1 to camera1 and thing2 to camera2. SearchThings(camera2) will return only thing2, but SearchThings(camera) will return both thing1 and thing2. This action searches for exact matches and doesn't perform partial text matching.
Deprecated: since: 2022-08-30
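A hedged sketch mirroring the camera example above, assuming `ctx` and `client` as before. The entity URN is a placeholder, and the EntityId input field and the Things/ThingName output fields are assumptions for illustration.
```
out, err := client.SearchThings(ctx, &iotthingsgraph.SearchThingsInput{
// Placeholder device entity; searching by the device model URN instead
// would return things associated with any device implementing that model.
EntityId: aws.String("urn:tdm:us-east-1/123456789012/default:device:camera2"),
})
if err != nil {
log.Fatal(err)
}
for _, t := range out.Things {
fmt.Println(aws.ToString(t.ThingName)) // Things/ThingName assumed on the output type
}
```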
####
func (*Client) [TagResource](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_TagResource.go#L22)
deprecated
```
func (c *[Client](#Client)) TagResource(ctx [context](/context).[Context](/context#Context), params *[TagResourceInput](#TagResourceInput), optFns ...func(*[Options](#Options))) (*[TagResourceOutput](#TagResourceOutput), [error](/builtin#error))
```
Creates a tag for the specified resource.
Deprecated: since: 2022-08-30
####
func (*Client) [UndeploySystemInstance](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UndeploySystemInstance.go#L22)
deprecated
```
func (c *[Client](#Client)) UndeploySystemInstance(ctx [context](/context).[Context](/context#Context), params *[UndeploySystemInstanceInput](#UndeploySystemInstanceInput), optFns ...func(*[Options](#Options))) (*[UndeploySystemInstanceOutput](#UndeploySystemInstanceOutput), [error](/builtin#error))
```
Removes a system instance from its target (Cloud or Greengrass).
Deprecated: since: 2022-08-30
####
func (*Client) [UntagResource](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UntagResource.go#L21)
deprecated
```
func (c *[Client](#Client)) UntagResource(ctx [context](/context).[Context](/context#Context), params *[UntagResourceInput](#UntagResourceInput), optFns ...func(*[Options](#Options))) (*[UntagResourceOutput](#UntagResourceOutput), [error](/builtin#error))
```
Removes a tag from the specified resource.
Deprecated: since: 2022-08-30
####
func (*Client) [UpdateFlowTemplate](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UpdateFlowTemplate.go#L26)
deprecated
```
func (c *[Client](#Client)) UpdateFlowTemplate(ctx [context](/context).[Context](/context#Context), params *[UpdateFlowTemplateInput](#UpdateFlowTemplateInput), optFns ...func(*[Options](#Options))) (*[UpdateFlowTemplateOutput](#UpdateFlowTemplateOutput), [error](/builtin#error))
```
Updates the specified workflow. All deployed systems and system instances that use the workflow will see the changes in the flow when it is redeployed. If you don't want this behavior, copy the workflow (creating a new workflow with a different ID), and update the copy. The workflow can contain only entities in the specified namespace.
Deprecated: since: 2022-08-30
####
func (*Client) [UpdateSystemTemplate](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UpdateSystemTemplate.go#L24)
deprecated
```
func (c *[Client](#Client)) UpdateSystemTemplate(ctx [context](/context).[Context](/context#Context), params *[UpdateSystemTemplateInput](#UpdateSystemTemplateInput), optFns ...func(*[Options](#Options))) (*[UpdateSystemTemplateOutput](#UpdateSystemTemplateOutput), [error](/builtin#error))
```
Updates the specified system. You don't need to run this action after updating a workflow. Any deployment that uses the system will see the changes in the system when it is redeployed.
Deprecated: since: 2022-08-30
####
func (*Client) [UploadEntityDefinitions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UploadEntityDefinitions.go#L38)
deprecated
```
func (c *[Client](#Client)) UploadEntityDefinitions(ctx [context](/context).[Context](/context#Context), params *[UploadEntityDefinitionsInput](#UploadEntityDefinitionsInput), optFns ...func(*[Options](#Options))) (*[UploadEntityDefinitionsOutput](#UploadEntityDefinitionsOutput), [error](/builtin#error))
```
Asynchronously uploads one or more entity definitions to the user's namespace.
The document parameter is required if syncWithPublicNamespace and deprecateExistingEntities are false. If the syncWithPublicNamespace parameter is set to true, the user's namespace will synchronize with the latest version of the public namespace. If deprecateExistingEntities is set to true, all entities in the latest version will be deleted before the new DefinitionDocument is uploaded. When a user uploads entity definitions for the first time, the service creates a new namespace for the user. The new namespace tracks the public namespace. Currently users can have only one namespace. The namespace version increments whenever a user uploads entity definitions that are backwards-incompatible and whenever a user sets the syncWithPublicNamespace parameter or the deprecateExistingEntities parameter to true. The IDs for all of the entities should be in URN format. Each entity must be in the user's namespace. Users can't create entities in the public namespace, but entity definitions can refer to entities in the public namespace. Valid entities are Device, DeviceModel, Service, Capability, State, Action, Event, Property, Mapping, and Enum.
Deprecated: since: 2022-08-30
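A hedged upload sketch, assuming `ctx`, `client`, and the `types` import as in the earlier sketches. The Document input field, the UploadId output field, and the GraphQL text (`entityDefinitions`) are assumptions/placeholders; since the upload is asynchronous, the returned ID would be polled with GetUploadStatus.
```
out, err := client.UploadEntityDefinitions(ctx, &iotthingsgraph.UploadEntityDefinitionsInput{
Document: &types.DefinitionDocument{
Language: types.DefinitionLanguage("GRAPHQL"), // assumed language value
Text:     aws.String(entityDefinitions),       // placeholder entity definitions
},
})
if err != nil {
log.Fatal(err)
}
// The returned upload ID can be passed to GetUploadStatus to poll progress.
fmt.Println(aws.ToString(out.UploadId))
```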
####
type [CreateFlowTemplateInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_CreateFlowTemplate.go#L41) [¶](#CreateFlowTemplateInput)
```
type CreateFlowTemplateInput struct {
// The workflow DefinitionDocument .
//
// This member is required.
Definition *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DefinitionDocument](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DefinitionDocument)
// The namespace version in which the workflow is to be created. If no value is
// specified, the latest version is used by default.
CompatibleNamespaceVersion *[int64](/builtin#int64)
// contains filtered or unexported fields
}
```
####
type [CreateFlowTemplateOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_CreateFlowTemplate.go#L55) [¶](#CreateFlowTemplateOutput)
```
type CreateFlowTemplateOutput struct {
// The summary object that describes the created workflow.
Summary *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[FlowTemplateSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#FlowTemplateSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [CreateSystemInstanceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_CreateSystemInstance.go#L49) [¶](#CreateSystemInstanceInput)
```
type CreateSystemInstanceInput struct {
// A document that defines an entity.
//
// This member is required.
Definition *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DefinitionDocument](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DefinitionDocument)
// The target type of the deployment. Valid values are GREENGRASS and CLOUD .
//
// This member is required.
Target [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DeploymentTarget](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DeploymentTarget)
// The ARN of the IAM role that AWS IoT Things Graph will assume when it executes
// the flow. This role must have read and write access to AWS Lambda and AWS IoT
// and any other AWS services that the flow uses when it executes. This value is
// required if the value of the target parameter is CLOUD .
FlowActionsRoleArn *[string](/builtin#string)
// The name of the Greengrass group where the system instance will be deployed.
// This value is required if the value of the target parameter is GREENGRASS .
GreengrassGroupName *[string](/builtin#string)
// An object that specifies whether cloud metrics are collected in a deployment
// and, if so, what role is used to collect metrics.
MetricsConfiguration *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[MetricsConfiguration](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#MetricsConfiguration)
// The name of the Amazon Simple Storage Service bucket that will be used to store
// and deploy the system instance's resource file. This value is required if the
// value of the target parameter is GREENGRASS .
S3BucketName *[string](/builtin#string)
// Metadata, consisting of key-value pairs, that can be used to categorize your
// system instances.
Tags [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Tag](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Tag)
// contains filtered or unexported fields
}
```
####
type [CreateSystemInstanceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_CreateSystemInstance.go#L87) [¶](#CreateSystemInstanceOutput)
```
type CreateSystemInstanceOutput struct {
// The summary object that describes the new system instance.
Summary *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[SystemInstanceSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#SystemInstanceSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [CreateSystemTemplateInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_CreateSystemTemplate.go#L39) [¶](#CreateSystemTemplateInput)
```
type CreateSystemTemplateInput struct {
// The DefinitionDocument used to create the system.
//
// This member is required.
Definition *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DefinitionDocument](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DefinitionDocument)
// The namespace version in which the system is to be created. If no value is
// specified, the latest version is used by default.
CompatibleNamespaceVersion *[int64](/builtin#int64)
// contains filtered or unexported fields
}
```
####
type [CreateSystemTemplateOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_CreateSystemTemplate.go#L53) [¶](#CreateSystemTemplateOutput)
```
type CreateSystemTemplateOutput struct {
// The summary object that describes the created system.
Summary *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[SystemTemplateSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#SystemTemplateSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeleteFlowTemplateInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeleteFlowTemplate.go#L39) [¶](#DeleteFlowTemplateInput)
```
type DeleteFlowTemplateInput struct {
// The ID of the workflow to be deleted. The ID should be in the following format.
// urn:tdm:REGION/ACCOUNT ID/default:workflow:WORKFLOWNAME
//
// This member is required.
Id *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DeleteFlowTemplateOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeleteFlowTemplate.go#L50) [¶](#DeleteFlowTemplateOutput)
```
type DeleteFlowTemplateOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeleteNamespaceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeleteNamespace.go#L38) [¶](#DeleteNamespaceInput)
```
type DeleteNamespaceInput struct {
// contains filtered or unexported fields
}
```
####
type [DeleteNamespaceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeleteNamespace.go#L42) [¶](#DeleteNamespaceOutput)
```
type DeleteNamespaceOutput struct {
// The ARN of the namespace to be deleted.
NamespaceArn *[string](/builtin#string)
// The name of the namespace to be deleted.
NamespaceName *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeleteSystemInstanceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeleteSystemInstance.go#L38) [¶](#DeleteSystemInstanceInput)
```
type DeleteSystemInstanceInput struct {
// The ID of the system instance to be deleted.
Id *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DeleteSystemInstanceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeleteSystemInstance.go#L46) [¶](#DeleteSystemInstanceOutput)
```
type DeleteSystemInstanceOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeleteSystemTemplateInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeleteSystemTemplate.go#L38) [¶](#DeleteSystemTemplateInput)
```
type DeleteSystemTemplateInput struct {
// The ID of the system to be deleted. The ID should be in the following format.
// urn:tdm:REGION/ACCOUNT ID/default:system:SYSTEMNAME
//
// This member is required.
Id *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DeleteSystemTemplateOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeleteSystemTemplate.go#L49) [¶](#DeleteSystemTemplateOutput)
```
type DeleteSystemTemplateOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeploySystemInstanceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeploySystemInstance.go#L47) [¶](#DeploySystemInstanceInput)
```
type DeploySystemInstanceInput struct {
// The ID of the system instance. This value is returned by the
// CreateSystemInstance action. The ID should be in the following format.
// urn:tdm:REGION/ACCOUNT ID/default:deployment:DEPLOYMENTNAME
Id *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DeploySystemInstanceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeploySystemInstance.go#L57) [¶](#DeploySystemInstanceOutput)
```
type DeploySystemInstanceOutput struct {
// An object that contains summary information about a system instance that was
// deployed.
//
// This member is required.
Summary *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[SystemInstanceSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#SystemInstanceSummary)
// The ID of the Greengrass deployment used to deploy the system instance.
GreengrassDeploymentId *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeprecateFlowTemplateInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeprecateFlowTemplate.go#L38) [¶](#DeprecateFlowTemplateInput)
```
type DeprecateFlowTemplateInput struct {
// The ID of the workflow to be deleted. The ID should be in the following format.
// urn:tdm:REGION/ACCOUNT ID/default:workflow:WORKFLOWNAME
//
// This member is required.
Id *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DeprecateFlowTemplateOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeprecateFlowTemplate.go#L49) [¶](#DeprecateFlowTemplateOutput)
```
type DeprecateFlowTemplateOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DeprecateSystemTemplateInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeprecateSystemTemplate.go#L36) [¶](#DeprecateSystemTemplateInput)
```
type DeprecateSystemTemplateInput struct {
// The ID of the system to delete. The ID should be in the following format.
// urn:tdm:REGION/ACCOUNT ID/default:system:SYSTEMNAME
//
// This member is required.
Id *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DeprecateSystemTemplateOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DeprecateSystemTemplate.go#L47) [¶](#DeprecateSystemTemplateOutput)
```
type DeprecateSystemTemplateOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DescribeNamespaceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DescribeNamespace.go#L37) [¶](#DescribeNamespaceInput)
```
type DescribeNamespaceInput struct {
// The name of the user's namespace. Set this to aws to get the public namespace.
NamespaceName *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DescribeNamespaceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DescribeNamespace.go#L45) [¶](#DescribeNamespaceOutput)
```
type DescribeNamespaceOutput struct {
// The ARN of the namespace.
NamespaceArn *[string](/builtin#string)
// The name of the namespace.
NamespaceName *[string](/builtin#string)
// The version of the user's namespace to describe.
NamespaceVersion *[int64](/builtin#int64)
// The name of the public namespace that the latest namespace version is tracking.
TrackingNamespaceName *[string](/builtin#string)
// The version of the public namespace that the latest version is tracking.
TrackingNamespaceVersion *[int64](/builtin#int64)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [DissociateEntityFromThingInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DissociateEntityFromThing.go#L39) [¶](#DissociateEntityFromThingInput)
```
type DissociateEntityFromThingInput struct {
// The entity type from which to disassociate the thing.
//
// This member is required.
EntityType [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[EntityType](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#EntityType)
// The name of the thing to disassociate.
//
// This member is required.
ThingName *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [DissociateEntityFromThingOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_DissociateEntityFromThing.go#L54) [¶](#DissociateEntityFromThingOutput)
```
type DissociateEntityFromThingOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [EndpointParameters](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L265) [¶](#EndpointParameters)
added in v1.15.0
```
type EndpointParameters struct {
// The AWS region used to dispatch the request.
//
// Parameter is
// required.
//
// AWS::Region
Region *[string](/builtin#string)
// When true, use the dual-stack endpoint. If the configured endpoint does not
// support dual-stack, dispatching the request MAY return an error.
//
// Defaults to
// false if no value is provided.
//
// AWS::UseDualStack
UseDualStack *[bool](/builtin#bool)
// When true, send this request to the FIPS-compliant regional endpoint. If the
// configured endpoint does not have a FIPS compliant endpoint, dispatching the
// request will return an error.
//
// Defaults to false if no value is
// provided.
//
// AWS::UseFIPS
UseFIPS *[bool](/builtin#bool)
// Override the endpoint used to send this request
//
// Parameter is
// required.
//
// SDK::Endpoint
Endpoint *[string](/builtin#string)
}
```
EndpointParameters provides the parameters that influence how endpoints are resolved.
####
func (EndpointParameters) [ValidateRequired](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L303) [¶](#EndpointParameters.ValidateRequired)
added in v1.15.0
```
func (p [EndpointParameters](#EndpointParameters)) ValidateRequired() [error](/builtin#error)
```
ValidateRequired validates required parameters are set.
####
func (EndpointParameters) [WithDefaults](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L317) [¶](#EndpointParameters.WithDefaults)
added in v1.15.0
```
func (p [EndpointParameters](#EndpointParameters)) WithDefaults() [EndpointParameters](#EndpointParameters)
```
WithDefaults returns a shallow copy of EndpointParameters with default values applied to members where applicable.
####
type [EndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L26) [¶](#EndpointResolver)
```
type EndpointResolver interface {
ResolveEndpoint(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) ([aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), [error](/builtin#error))
}
```
EndpointResolver interface for resolving service endpoints.
####
func [EndpointResolverFromURL](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L51) [¶](#EndpointResolverFromURL)
added in v1.1.0
```
func EndpointResolverFromURL(url [string](/builtin#string), optFns ...func(*[aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint))) [EndpointResolver](#EndpointResolver)
```
EndpointResolverFromURL returns an EndpointResolver configured using the provided endpoint URL. By default, the returned resolver uses the client region as the signing region, and the endpoint source is set to EndpointSourceCustom. You can provide functional options to configure endpoint values for the resolved endpoint.
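A hedged sketch of the (deprecated) V1 resolver path documented here; the endpoint URL and region are placeholders, and the Region field on Options is assumed.
```
resolver := iotthingsgraph.EndpointResolverFromURL("https://iotthingsgraph.example.com")
client := iotthingsgraph.New(iotthingsgraph.Options{Region: "us-east-1"},
iotthingsgraph.WithEndpointResolver(resolver))
_ = client
```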
####
type [EndpointResolverFunc](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L40) [¶](#EndpointResolverFunc)
```
type EndpointResolverFunc func(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) ([aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), [error](/builtin#error))
```
EndpointResolverFunc is a helper utility that wraps a function so it satisfies the EndpointResolver interface. This is useful when you want to add additional endpoint resolving logic, or stub out specific endpoints with custom values.
####
func (EndpointResolverFunc) [ResolveEndpoint](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L42) [¶](#EndpointResolverFunc.ResolveEndpoint)
```
func (fn [EndpointResolverFunc](#EndpointResolverFunc)) ResolveEndpoint(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) (endpoint [aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), err [error](/builtin#error))
```
####
type [EndpointResolverOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L23) [¶](#EndpointResolverOptions)
added in v0.29.0
```
type EndpointResolverOptions = [internalendpoints](/github.com/aws/aws-sdk-go-v2/service/[email protected]/internal/endpoints).[Options](/github.com/aws/aws-sdk-go-v2/service/[email protected]/internal/endpoints#Options)
```
EndpointResolverOptions is the service endpoint resolver options
####
type [EndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L329) [¶](#EndpointResolverV2)
added in v1.15.0
```
type EndpointResolverV2 interface {
// ResolveEndpoint attempts to resolve the endpoint with the provided options,
// returning the endpoint if found. Otherwise an error is returned.
ResolveEndpoint(ctx [context](/context).[Context](/context#Context), params [EndpointParameters](#EndpointParameters)) (
[smithyendpoints](/github.com/aws/smithy-go/endpoints).[Endpoint](/github.com/aws/smithy-go/endpoints#Endpoint), [error](/builtin#error),
)
}
```
EndpointResolverV2 provides the interface for resolving service endpoints.
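A hedged sketch of a custom EndpointResolverV2 that pins every request to a single URL. The URI field on smithyendpoints.Endpoint is an assumption about the smithy-go endpoints package, and the URL is a placeholder.
```
package resolverexample
import (
"context"
"net/url"
"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph"
smithyendpoints "github.com/aws/smithy-go/endpoints"
)
// staticResolver resolves every request to one fixed endpoint.
type staticResolver struct{}
func (staticResolver) ResolveEndpoint(ctx context.Context, params iotthingsgraph.EndpointParameters) (smithyendpoints.Endpoint, error) {
u, err := url.Parse("https://iotthingsgraph.example.com") // placeholder endpoint
if err != nil {
return smithyendpoints.Endpoint{}, err
}
return smithyendpoints.Endpoint{URI: *u}, nil
}
// Wiring it in (cfg loaded as in the earlier sketches):
//
//	client := iotthingsgraph.NewFromConfig(cfg, iotthingsgraph.WithEndpointResolverV2(staticResolver{}))
```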
####
func [NewDefaultEndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L340) [¶](#NewDefaultEndpointResolverV2)
added in v1.15.0
```
func NewDefaultEndpointResolverV2() [EndpointResolverV2](#EndpointResolverV2)
```
####
type [GetEntitiesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetEntities.go#L49) [¶](#GetEntitiesInput)
```
type GetEntitiesInput struct {
// An array of entity IDs. The IDs should be in the following format.
// urn:tdm:REGION/ACCOUNT ID/default:device:DEVICENAME
//
// This member is required.
Ids [][string](/builtin#string)
// The version of the user's namespace. Defaults to the latest version of the
// user's namespace.
NamespaceVersion *[int64](/builtin#int64)
// contains filtered or unexported fields
}
```
####
type [GetEntitiesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetEntities.go#L64) [¶](#GetEntitiesOutput)
```
type GetEntitiesOutput struct {
// An array of descriptions for the specified entities.
Descriptions [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[EntityDescription](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#EntityDescription)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetFlowTemplateInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetFlowTemplate.go#L38) [¶](#GetFlowTemplateInput)
```
type GetFlowTemplateInput struct {
// The ID of the workflow. The ID should be in the following format.
// urn:tdm:REGION/ACCOUNT ID/default:workflow:WORKFLOWNAME
//
// This member is required.
Id *[string](/builtin#string)
// The number of the workflow revision to retrieve.
RevisionNumber *[int64](/builtin#int64)
// contains filtered or unexported fields
}
```
####
type [GetFlowTemplateOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetFlowTemplate.go#L52) [¶](#GetFlowTemplateOutput)
```
type GetFlowTemplateOutput struct {
// The object that describes the specified workflow.
Description *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[FlowTemplateDescription](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#FlowTemplateDescription)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetFlowTemplateRevisionsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetFlowTemplateRevisions.go#L149) [¶](#GetFlowTemplateRevisionsAPIClient)
added in v0.30.0
```
type GetFlowTemplateRevisionsAPIClient interface {
GetFlowTemplateRevisions([context](/context).[Context](/context#Context), *[GetFlowTemplateRevisionsInput](#GetFlowTemplateRevisionsInput), ...func(*[Options](#Options))) (*[GetFlowTemplateRevisionsOutput](#GetFlowTemplateRevisionsOutput), [error](/builtin#error))
}
```
GetFlowTemplateRevisionsAPIClient is a client that implements the GetFlowTemplateRevisions operation.
####
type [GetFlowTemplateRevisionsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetFlowTemplateRevisions.go#L40) [¶](#GetFlowTemplateRevisionsInput)
```
type GetFlowTemplateRevisionsInput struct {
// The ID of the workflow. The ID should be in the following format.
// urn:tdm:REGION/ACCOUNT ID/default:workflow:WORKFLOWNAME
//
// This member is required.
Id *[string](/builtin#string)
// The maximum number of results to return in the response.
MaxResults *[int32](/builtin#int32)
// The string that specifies the next page of results. Use this when you're
// paginating results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetFlowTemplateRevisionsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetFlowTemplateRevisions.go#L58) [¶](#GetFlowTemplateRevisionsOutput)
```
type GetFlowTemplateRevisionsOutput struct {
// The string to specify as nextToken when you request the next page of results.
NextToken *[string](/builtin#string)
// An array of objects that provide summary data about each revision.
Summaries [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[FlowTemplateSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#FlowTemplateSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetFlowTemplateRevisionsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetFlowTemplateRevisions.go#L167) [¶](#GetFlowTemplateRevisionsPaginator)
added in v0.30.0
```
type GetFlowTemplateRevisionsPaginator struct {
// contains filtered or unexported fields
}
```
GetFlowTemplateRevisionsPaginator is a paginator for GetFlowTemplateRevisions
####
func [NewGetFlowTemplateRevisionsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetFlowTemplateRevisions.go#L177) [¶](#NewGetFlowTemplateRevisionsPaginator)
added in v0.30.0
```
func NewGetFlowTemplateRevisionsPaginator(client [GetFlowTemplateRevisionsAPIClient](#GetFlowTemplateRevisionsAPIClient), params *[GetFlowTemplateRevisionsInput](#GetFlowTemplateRevisionsInput), optFns ...func(*[GetFlowTemplateRevisionsPaginatorOptions](#GetFlowTemplateRevisionsPaginatorOptions))) *[GetFlowTemplateRevisionsPaginator](#GetFlowTemplateRevisionsPaginator)
```
NewGetFlowTemplateRevisionsPaginator returns a new GetFlowTemplateRevisionsPaginator
####
func (*GetFlowTemplateRevisionsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetFlowTemplateRevisions.go#L201) [¶](#GetFlowTemplateRevisionsPaginator.HasMorePages)
added in v0.30.0
```
func (p *[GetFlowTemplateRevisionsPaginator](#GetFlowTemplateRevisionsPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*GetFlowTemplateRevisionsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetFlowTemplateRevisions.go#L206) [¶](#GetFlowTemplateRevisionsPaginator.NextPage)
added in v0.30.0
```
func (p *[GetFlowTemplateRevisionsPaginator](#GetFlowTemplateRevisionsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[GetFlowTemplateRevisionsOutput](#GetFlowTemplateRevisionsOutput), [error](/builtin#error))
```
NextPage retrieves the next GetFlowTemplateRevisions page.
####
type [GetFlowTemplateRevisionsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetFlowTemplateRevisions.go#L157) [¶](#GetFlowTemplateRevisionsPaginatorOptions)
added in v0.30.0
```
type GetFlowTemplateRevisionsPaginatorOptions struct {
// The maximum number of results to return in the response.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
GetFlowTemplateRevisionsPaginatorOptions is the paginator options for GetFlowTemplateRevisions
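Taken together, the paginator types above support the usual page loop. The sketch below is a hedged example that assumes an already-constructed *iotthingsgraph.Client; the workflow ID is supplied by the caller, the page size is an arbitrary example value, and the Id field on FlowTemplateSummary is assumed.
```
package example
import (
"context"
"fmt"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph"
)
// listRevisions pages through every revision of the given workflow.
func listRevisions(ctx context.Context, client *iotthingsgraph.Client, workflowID string) error {
p := iotthingsgraph.NewGetFlowTemplateRevisionsPaginator(client,
&iotthingsgraph.GetFlowTemplateRevisionsInput{
Id:         aws.String(workflowID),
MaxResults: aws.Int32(10),
},
func(o *iotthingsgraph.GetFlowTemplateRevisionsPaginatorOptions) {
// Stop if the service returns the same token twice in a row.
o.StopOnDuplicateToken = true
})
for p.HasMorePages() {
page, err := p.NextPage(ctx)
if err != nil {
return err
}
for _, s := range page.Summaries {
fmt.Println("revision of", aws.ToString(s.Id))
}
}
return nil
}
```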
####
type [GetNamespaceDeletionStatusInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetNamespaceDeletionStatus.go#L37) [¶](#GetNamespaceDeletionStatusInput)
```
type GetNamespaceDeletionStatusInput struct {
// contains filtered or unexported fields
}
```
####
type [GetNamespaceDeletionStatusOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetNamespaceDeletionStatus.go#L41) [¶](#GetNamespaceDeletionStatusOutput)
```
type GetNamespaceDeletionStatusOutput struct {
// An error code returned by the namespace deletion task.
ErrorCode [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[NamespaceDeletionStatusErrorCodes](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#NamespaceDeletionStatusErrorCodes)
// An error message returned by the namespace deletion task.
ErrorMessage *[string](/builtin#string)
// The ARN of the namespace that is being deleted.
NamespaceArn *[string](/builtin#string)
// The name of the namespace that is being deleted.
NamespaceName *[string](/builtin#string)
// The status of the deletion request.
Status [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[NamespaceDeletionStatus](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#NamespaceDeletionStatus)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetSystemInstanceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemInstance.go#L37) [¶](#GetSystemInstanceInput)
```
type GetSystemInstanceInput struct {
// The ID of the system deployment instance. This value is returned by
// CreateSystemInstance. The ID should be in the following format.
// urn:tdm:REGION/ACCOUNT ID/default:deployment:DEPLOYMENTNAME
//
// This member is required.
Id *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetSystemInstanceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemInstance.go#L49) [¶](#GetSystemInstanceOutput)
```
type GetSystemInstanceOutput struct {
// An object that describes the system instance.
Description *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[SystemInstanceDescription](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#SystemInstanceDescription)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetSystemTemplateInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemTemplate.go#L37) [¶](#GetSystemTemplateInput)
```
type GetSystemTemplateInput struct {
// The ID of the system to get. This ID must be in the user's namespace. The ID
// should be in the following format. urn:tdm:REGION/ACCOUNT
// ID/default:system:SYSTEMNAME
//
// This member is required.
Id *[string](/builtin#string)
// The number that specifies the revision of the system to get.
RevisionNumber *[int64](/builtin#int64)
// contains filtered or unexported fields
}
```
####
type [GetSystemTemplateOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemTemplate.go#L52) [¶](#GetSystemTemplateOutput)
```
type GetSystemTemplateOutput struct {
// An object that contains summary data about the system.
Description *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[SystemTemplateDescription](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#SystemTemplateDescription)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetSystemTemplateRevisionsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemTemplateRevisions.go#L150) [¶](#GetSystemTemplateRevisionsAPIClient)
added in v0.30.0
```
type GetSystemTemplateRevisionsAPIClient interface {
GetSystemTemplateRevisions([context](/context).[Context](/context#Context), *[GetSystemTemplateRevisionsInput](#GetSystemTemplateRevisionsInput), ...func(*[Options](#Options))) (*[GetSystemTemplateRevisionsOutput](#GetSystemTemplateRevisionsOutput), [error](/builtin#error))
}
```
GetSystemTemplateRevisionsAPIClient is a client that implements the GetSystemTemplateRevisions operation.
####
type [GetSystemTemplateRevisionsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemTemplateRevisions.go#L40) [¶](#GetSystemTemplateRevisionsInput)
```
type GetSystemTemplateRevisionsInput struct {
// The ID of the system template. The ID should be in the following format.
// urn:tdm:REGION/ACCOUNT ID/default:system:SYSTEMNAME
//
// This member is required.
Id *[string](/builtin#string)
// The maximum number of results to return in the response.
MaxResults *[int32](/builtin#int32)
// The string that specifies the next page of results. Use this when you're
// paginating results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetSystemTemplateRevisionsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemTemplateRevisions.go#L58) [¶](#GetSystemTemplateRevisionsOutput)
```
type GetSystemTemplateRevisionsOutput struct {
// The string to specify as nextToken when you request the next page of results.
NextToken *[string](/builtin#string)
// An array of objects that contain summary data about the system template
// revisions.
Summaries [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[SystemTemplateSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#SystemTemplateSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [GetSystemTemplateRevisionsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemTemplateRevisions.go#L169) [¶](#GetSystemTemplateRevisionsPaginator)
added in v0.30.0
```
type GetSystemTemplateRevisionsPaginator struct {
// contains filtered or unexported fields
}
```
GetSystemTemplateRevisionsPaginator is a paginator for GetSystemTemplateRevisions
####
func [NewGetSystemTemplateRevisionsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemTemplateRevisions.go#L179) [¶](#NewGetSystemTemplateRevisionsPaginator)
added in v0.30.0
```
func NewGetSystemTemplateRevisionsPaginator(client [GetSystemTemplateRevisionsAPIClient](#GetSystemTemplateRevisionsAPIClient), params *[GetSystemTemplateRevisionsInput](#GetSystemTemplateRevisionsInput), optFns ...func(*[GetSystemTemplateRevisionsPaginatorOptions](#GetSystemTemplateRevisionsPaginatorOptions))) *[GetSystemTemplateRevisionsPaginator](#GetSystemTemplateRevisionsPaginator)
```
NewGetSystemTemplateRevisionsPaginator returns a new GetSystemTemplateRevisionsPaginator
####
func (*GetSystemTemplateRevisionsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemTemplateRevisions.go#L203) [¶](#GetSystemTemplateRevisionsPaginator.HasMorePages)
added in v0.30.0
```
func (p *[GetSystemTemplateRevisionsPaginator](#GetSystemTemplateRevisionsPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*GetSystemTemplateRevisionsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemTemplateRevisions.go#L208) [¶](#GetSystemTemplateRevisionsPaginator.NextPage)
added in v0.30.0
```
func (p *[GetSystemTemplateRevisionsPaginator](#GetSystemTemplateRevisionsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[GetSystemTemplateRevisionsOutput](#GetSystemTemplateRevisionsOutput), [error](/builtin#error))
```
NextPage retrieves the next GetSystemTemplateRevisions page.
####
type [GetSystemTemplateRevisionsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetSystemTemplateRevisions.go#L158) [¶](#GetSystemTemplateRevisionsPaginatorOptions)
added in v0.30.0
```
type GetSystemTemplateRevisionsPaginatorOptions struct {
// The maximum number of results to return in the response.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
GetSystemTemplateRevisionsPaginatorOptions is the paginator options for GetSystemTemplateRevisions
####
type [GetUploadStatusInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetUploadStatus.go#L38) [¶](#GetUploadStatusInput)
```
type GetUploadStatusInput struct {
// The ID of the upload. This value is returned by the UploadEntityDefinitions
// action.
//
// This member is required.
UploadId *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [GetUploadStatusOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_GetUploadStatus.go#L49) [¶](#GetUploadStatusOutput)
```
type GetUploadStatusOutput struct {
// The date at which the upload was created.
//
// This member is required.
CreatedDate *[time](/time).[Time](/time#Time)
// The ID of the upload.
//
// This member is required.
UploadId *[string](/builtin#string)
// The status of the upload. The initial status is IN_PROGRESS. The response
// shows all validation failures if the upload fails.
//
// This member is required.
UploadStatus [types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[UploadStatus](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#UploadStatus)
// The reason for an upload failure.
FailureReason [][string](/builtin#string)
// The ARN of the upload.
NamespaceArn *[string](/builtin#string)
// The name of the upload's namespace.
NamespaceName *[string](/builtin#string)
// The version of the user's namespace. Defaults to the latest version of the
// user's namespace.
NamespaceVersion *[int64](/builtin#int64)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
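Because uploads start in the IN_PROGRESS state, callers typically poll this operation after UploadEntityDefinitions. The sketch below assumes the UploadStatus enum constants follow the SDK's usual naming (types.UploadStatusInProgress, types.UploadStatusFailed); the polling interval is arbitrary.
```
package example
import (
"context"
"fmt"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph"
"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph/types"
)
// waitForUpload polls GetUploadStatus until the upload leaves IN_PROGRESS.
// The upload ID comes from a prior UploadEntityDefinitions call.
func waitForUpload(ctx context.Context, client *iotthingsgraph.Client, uploadID string) error {
for {
out, err := client.GetUploadStatus(ctx, &iotthingsgraph.GetUploadStatusInput{
UploadId: aws.String(uploadID),
})
if err != nil {
return err
}
if out.UploadStatus != types.UploadStatusInProgress {
fmt.Println("upload finished with status:", out.UploadStatus)
if out.UploadStatus == types.UploadStatusFailed {
fmt.Println("failure reasons:", out.FailureReason)
}
return nil
}
// Simple fixed-interval poll; real code may prefer backoff or a deadline.
time.Sleep(5 * time.Second)
}
}
```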
####
type [HTTPClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_client.go#L177) [¶](#HTTPClient)
```
type HTTPClient interface {
Do(*[http](/net/http).[Request](/net/http#Request)) (*[http](/net/http).[Response](/net/http#Response), [error](/builtin#error))
}
```
####
type [HTTPSignerV4](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_client.go#L425) [¶](#HTTPSignerV4)
```
type HTTPSignerV4 interface {
SignHTTP(ctx [context](/context).[Context](/context#Context), credentials [aws](/github.com/aws/aws-sdk-go-v2/aws).[Credentials](/github.com/aws/aws-sdk-go-v2/aws#Credentials), r *[http](/net/http).[Request](/net/http#Request), payloadHash [string](/builtin#string), service [string](/builtin#string), region [string](/builtin#string), signingTime [time](/time).[Time](/time#Time), optFns ...func(*[v4](/github.com/aws/aws-sdk-go-v2/aws/signer/v4).[SignerOptions](/github.com/aws/aws-sdk-go-v2/aws/signer/v4#SignerOptions))) [error](/builtin#error)
}
```
####
type [ListFlowExecutionMessagesAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListFlowExecutionMessages.go#L147) [¶](#ListFlowExecutionMessagesAPIClient)
added in v0.30.0
```
type ListFlowExecutionMessagesAPIClient interface {
ListFlowExecutionMessages([context](/context).[Context](/context#Context), *[ListFlowExecutionMessagesInput](#ListFlowExecutionMessagesInput), ...func(*[Options](#Options))) (*[ListFlowExecutionMessagesOutput](#ListFlowExecutionMessagesOutput), [error](/builtin#error))
}
```
ListFlowExecutionMessagesAPIClient is a client that implements the ListFlowExecutionMessages operation.
####
type [ListFlowExecutionMessagesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListFlowExecutionMessages.go#L38) [¶](#ListFlowExecutionMessagesInput)
```
type ListFlowExecutionMessagesInput struct {
// The ID of the flow execution.
//
// This member is required.
FlowExecutionId *[string](/builtin#string)
// The maximum number of results to return in the response.
MaxResults *[int32](/builtin#int32)
// The string that specifies the next page of results. Use this when you're
// paginating results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListFlowExecutionMessagesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListFlowExecutionMessages.go#L55) [¶](#ListFlowExecutionMessagesOutput)
```
type ListFlowExecutionMessagesOutput struct {
// A list of objects that contain information about events in the specified flow
// execution.
Messages [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[FlowExecutionMessage](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#FlowExecutionMessage)
// The string to specify as nextToken when you request the next page of results.
NextToken *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListFlowExecutionMessagesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListFlowExecutionMessages.go#L165) [¶](#ListFlowExecutionMessagesPaginator)
added in v0.30.0
```
type ListFlowExecutionMessagesPaginator struct {
// contains filtered or unexported fields
}
```
ListFlowExecutionMessagesPaginator is a paginator for ListFlowExecutionMessages
####
func [NewListFlowExecutionMessagesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListFlowExecutionMessages.go#L175) [¶](#NewListFlowExecutionMessagesPaginator)
added in v0.30.0
```
func NewListFlowExecutionMessagesPaginator(client [ListFlowExecutionMessagesAPIClient](#ListFlowExecutionMessagesAPIClient), params *[ListFlowExecutionMessagesInput](#ListFlowExecutionMessagesInput), optFns ...func(*[ListFlowExecutionMessagesPaginatorOptions](#ListFlowExecutionMessagesPaginatorOptions))) *[ListFlowExecutionMessagesPaginator](#ListFlowExecutionMessagesPaginator)
```
NewListFlowExecutionMessagesPaginator returns a new ListFlowExecutionMessagesPaginator
####
func (*ListFlowExecutionMessagesPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListFlowExecutionMessages.go#L199) [¶](#ListFlowExecutionMessagesPaginator.HasMorePages)
added in v0.30.0
```
func (p *[ListFlowExecutionMessagesPaginator](#ListFlowExecutionMessagesPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListFlowExecutionMessagesPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListFlowExecutionMessages.go#L204) [¶](#ListFlowExecutionMessagesPaginator.NextPage)
added in v0.30.0
```
func (p *[ListFlowExecutionMessagesPaginator](#ListFlowExecutionMessagesPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListFlowExecutionMessagesOutput](#ListFlowExecutionMessagesOutput), [error](/builtin#error))
```
NextPage retrieves the next ListFlowExecutionMessages page.
####
type [ListFlowExecutionMessagesPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListFlowExecutionMessages.go#L155) [¶](#ListFlowExecutionMessagesPaginatorOptions)
added in v0.30.0
```
type ListFlowExecutionMessagesPaginatorOptions struct {
// The maximum number of results to return in the response.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListFlowExecutionMessagesPaginatorOptions is the paginator options for ListFlowExecutionMessages
####
type [ListTagsForResourceAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListTagsForResource.go#L144) [¶](#ListTagsForResourceAPIClient)
added in v0.30.0
```
type ListTagsForResourceAPIClient interface {
ListTagsForResource([context](/context).[Context](/context#Context), *[ListTagsForResourceInput](#ListTagsForResourceInput), ...func(*[Options](#Options))) (*[ListTagsForResourceOutput](#ListTagsForResourceOutput), [error](/builtin#error))
}
```
ListTagsForResourceAPIClient is a client that implements the ListTagsForResource operation.
####
type [ListTagsForResourceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListTagsForResource.go#L37) [¶](#ListTagsForResourceInput)
```
type ListTagsForResourceInput struct {
// The Amazon Resource Name (ARN) of the resource whose tags are to be returned.
//
// This member is required.
ResourceArn *[string](/builtin#string)
// The maximum number of tags to return.
MaxResults *[int32](/builtin#int32)
// The token that specifies the next page of results to return.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [ListTagsForResourceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListTagsForResource.go#L53) [¶](#ListTagsForResourceOutput)
```
type ListTagsForResourceOutput struct {
// The token that specifies the next page of results to return.
NextToken *[string](/builtin#string)
// List of tags returned by the ListTagsForResource operation.
Tags [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Tag](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Tag)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [ListTagsForResourcePaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListTagsForResource.go#L162) [¶](#ListTagsForResourcePaginator)
added in v0.30.0
```
type ListTagsForResourcePaginator struct {
// contains filtered or unexported fields
}
```
ListTagsForResourcePaginator is a paginator for ListTagsForResource
####
func [NewListTagsForResourcePaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListTagsForResource.go#L171) [¶](#NewListTagsForResourcePaginator)
added in v0.30.0
```
func NewListTagsForResourcePaginator(client [ListTagsForResourceAPIClient](#ListTagsForResourceAPIClient), params *[ListTagsForResourceInput](#ListTagsForResourceInput), optFns ...func(*[ListTagsForResourcePaginatorOptions](#ListTagsForResourcePaginatorOptions))) *[ListTagsForResourcePaginator](#ListTagsForResourcePaginator)
```
NewListTagsForResourcePaginator returns a new ListTagsForResourcePaginator
####
func (*ListTagsForResourcePaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListTagsForResource.go#L195) [¶](#ListTagsForResourcePaginator.HasMorePages)
added in v0.30.0
```
func (p *[ListTagsForResourcePaginator](#ListTagsForResourcePaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*ListTagsForResourcePaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListTagsForResource.go#L200) [¶](#ListTagsForResourcePaginator.NextPage)
added in v0.30.0
```
func (p *[ListTagsForResourcePaginator](#ListTagsForResourcePaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListTagsForResourceOutput](#ListTagsForResourceOutput), [error](/builtin#error))
```
NextPage retrieves the next ListTagsForResource page.
####
type [ListTagsForResourcePaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_ListTagsForResource.go#L152) [¶](#ListTagsForResourcePaginatorOptions)
added in v0.30.0
```
type ListTagsForResourcePaginatorOptions struct {
// The maximum number of tags to return.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
ListTagsForResourcePaginatorOptions is the paginator options for ListTagsForResource
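A small hedged example of accumulating every tag on a resource across pages; it assumes the Tag type's Key and Value fields are string pointers, as elsewhere in the types package.
```
package example
import (
"context"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph"
)
// allTags collects every tag on the resource into a map by paging through
// ListTagsForResource results.
func allTags(ctx context.Context, client *iotthingsgraph.Client, resourceArn string) (map[string]string, error) {
tags := map[string]string{}
p := iotthingsgraph.NewListTagsForResourcePaginator(client, &iotthingsgraph.ListTagsForResourceInput{
ResourceArn: aws.String(resourceArn),
})
for p.HasMorePages() {
page, err := p.NextPage(ctx)
if err != nil {
return nil, err
}
for _, t := range page.Tags {
tags[aws.ToString(t.Key)] = aws.ToString(t.Value)
}
}
return tags, nil
}
```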
####
type [Options](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_client.go#L60) [¶](#Options)
```
type Options struct {
// Set of options to modify how an operation is invoked. These apply to all
// operations invoked for this client. Use functional options on operation call to
// modify this list for per operation behavior.
APIOptions []func(*[middleware](/github.com/aws/smithy-go/middleware).[Stack](/github.com/aws/smithy-go/middleware#Stack)) [error](/builtin#error)
// The optional application specific identifier appended to the User-Agent header.
AppID [string](/builtin#string)
// This endpoint will be given as input to an EndpointResolverV2. It is used for
// providing a custom base endpoint that is subject to modifications by the
// processing EndpointResolverV2.
BaseEndpoint *[string](/builtin#string)
// Configures the events that will be sent to the configured logger.
ClientLogMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[ClientLogMode](/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode)
// The credentials object to use when signing requests.
Credentials [aws](/github.com/aws/aws-sdk-go-v2/aws).[CredentialsProvider](/github.com/aws/aws-sdk-go-v2/aws#CredentialsProvider)
// The configuration DefaultsMode that the SDK should use when constructing the
// clients initial default settings.
DefaultsMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[DefaultsMode](/github.com/aws/aws-sdk-go-v2/aws#DefaultsMode)
// The endpoint options to be used when attempting to resolve an endpoint.
EndpointOptions [EndpointResolverOptions](#EndpointResolverOptions)
// The service endpoint resolver.
//
// Deprecated: EndpointResolver and WithEndpointResolver. Providing a
// value for this field will likely prevent you from using any endpoint-related
// service features released after the introduction of EndpointResolverV2 and
// BaseEndpoint. To migrate an EndpointResolver implementation that uses a custom
// endpoint, set the client option BaseEndpoint instead.
EndpointResolver [EndpointResolver](#EndpointResolver)
// Resolves the endpoint used for a particular service. This should be used over
// the deprecated EndpointResolver
EndpointResolverV2 [EndpointResolverV2](#EndpointResolverV2)
// Signature Version 4 (SigV4) Signer
HTTPSignerV4 [HTTPSignerV4](#HTTPSignerV4)
// The logger writer interface to write logging messages to.
Logger [logging](/github.com/aws/smithy-go/logging).[Logger](/github.com/aws/smithy-go/logging#Logger)
// The region to send requests to. (Required)
Region [string](/builtin#string)
// RetryMaxAttempts specifies the maximum number of attempts an API client will
// make when calling an operation that fails with a retryable error. A value of 0
// is ignored and will not be used to configure the API client's default retryer
// or modify a per-operation call's retry max attempts. When creating a new API
// client, this member is only used if the Retryer Options member is nil. This
// value is ignored if Retryer is not nil. If specified in an operation call's
// functional options with a value different from the constructed client's
// Options, the client's Retryer will be wrapped to use the operation's specific
// RetryMaxAttempts value.
RetryMaxAttempts [int](/builtin#int)
// RetryMode specifies the retry mode the API client will be created with, if the
// Retryer option is not also specified. When creating a new API client, this
// member is only used if the Retryer Options member is nil. This value is ignored
// if Retryer is not nil. Per-operation call overrides are not currently
// supported, but may be in the future.
RetryMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[RetryMode](/github.com/aws/aws-sdk-go-v2/aws#RetryMode)
// Retryer guides how HTTP requests should be retried in case of recoverable
// failures. When nil the API client will use a default retryer. The kind of
// default retry created by the API client can be changed with the RetryMode
// option.
Retryer [aws](/github.com/aws/aws-sdk-go-v2/aws).[Retryer](/github.com/aws/aws-sdk-go-v2/aws#Retryer)
// The RuntimeEnvironment configuration, only populated if the DefaultsMode is set
// to DefaultsModeAuto and is initialized using config.LoadDefaultConfig. You
// should not populate this structure programmatically, or rely on the values here
// within your applications.
RuntimeEnvironment [aws](/github.com/aws/aws-sdk-go-v2/aws).[RuntimeEnvironment](/github.com/aws/aws-sdk-go-v2/aws#RuntimeEnvironment)
// The HTTP client to invoke API calls with. Defaults to client's default HTTP
// implementation if nil.
HTTPClient [HTTPClient](#HTTPClient)
// contains filtered or unexported fields
}
```
####
func (Options) [Copy](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_client.go#L182) [¶](#Options.Copy)
```
func (o [Options](#Options)) Copy() [Options](#Options)
```
Copy creates a clone where the APIOptions list is deep copied.
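Options fields are normally set through functional options passed to NewFromConfig rather than by mutating the struct directly. The sketch below is illustrative only; the region, retry count, and timeout are arbitrary example values, and it relies on *http.Client satisfying the HTTPClient interface.
```
package example
import (
"context"
"log"
"net/http"
"time"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph"
)
// newClient builds a service client and overrides a few Options fields using
// functional options at construction time.
func newClient(ctx context.Context) *iotthingsgraph.Client {
cfg, err := config.LoadDefaultConfig(ctx)
if err != nil {
log.Fatal(err)
}
return iotthingsgraph.NewFromConfig(cfg, func(o *iotthingsgraph.Options) {
o.Region = "us-east-1"
o.RetryMaxAttempts = 5
// *http.Client satisfies the HTTPClient interface (it has a Do method).
o.HTTPClient = &http.Client{Timeout: 30 * time.Second}
})
}
```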
####
type [ResolveEndpoint](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L67) [¶](#ResolveEndpoint)
```
type ResolveEndpoint struct {
Resolver [EndpointResolver](#EndpointResolver)
Options [EndpointResolverOptions](#EndpointResolverOptions)
}
```
####
func (*ResolveEndpoint) [HandleSerialize](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L76) [¶](#ResolveEndpoint.HandleSerialize)
```
func (m *[ResolveEndpoint](#ResolveEndpoint)) HandleSerialize(ctx [context](/context).[Context](/context#Context), in [middleware](/github.com/aws/smithy-go/middleware).[SerializeInput](/github.com/aws/smithy-go/middleware#SerializeInput), next [middleware](/github.com/aws/smithy-go/middleware).[SerializeHandler](/github.com/aws/smithy-go/middleware#SerializeHandler)) (
out [middleware](/github.com/aws/smithy-go/middleware).[SerializeOutput](/github.com/aws/smithy-go/middleware#SerializeOutput), metadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata), err [error](/builtin#error),
)
```
####
func (*ResolveEndpoint) [ID](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/endpoints.go#L72) [¶](#ResolveEndpoint.ID)
```
func (*[ResolveEndpoint](#ResolveEndpoint)) ID() [string](/builtin#string)
```
####
type [SearchEntitiesAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchEntities.go#L158) [¶](#SearchEntitiesAPIClient)
added in v0.30.0
```
type SearchEntitiesAPIClient interface {
SearchEntities([context](/context).[Context](/context#Context), *[SearchEntitiesInput](#SearchEntitiesInput), ...func(*[Options](#Options))) (*[SearchEntitiesOutput](#SearchEntitiesOutput), [error](/builtin#error))
}
```
SearchEntitiesAPIClient is a client that implements the SearchEntities operation.
####
type [SearchEntitiesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchEntities.go#L38) [¶](#SearchEntitiesInput)
```
type SearchEntitiesInput struct {
// The entity types for which to search.
//
// This member is required.
EntityTypes [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[EntityType](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#EntityType)
// Optional filter to apply to the search. Valid filters are NAME, NAMESPACE,
// SEMANTIC_TYPE_PATH, and REFERENCED_ENTITY_ID. REFERENCED_ENTITY_ID filters on
// entities that are used by the entity in the result set. For example, you can
// filter on the ID of a property that is used in a state. Multiple filters
// function as OR criteria in the query. Multiple values passed inside the filter
// function as AND criteria.
Filters [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[EntityFilter](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#EntityFilter)
// The maximum number of results to return in the response.
MaxResults *[int32](/builtin#int32)
// The version of the user's namespace. Defaults to the latest version of the
// user's namespace.
NamespaceVersion *[int64](/builtin#int64)
// The string that specifies the next page of results. Use this when you're
// paginating results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [SearchEntitiesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchEntities.go#L67) [¶](#SearchEntitiesOutput)
```
type SearchEntitiesOutput struct {
// An array of descriptions for each entity returned in the search result.
Descriptions [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[EntityDescription](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#EntityDescription)
// The string to specify as nextToken when you request the next page of results.
NextToken *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [SearchEntitiesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchEntities.go#L175) [¶](#SearchEntitiesPaginator)
added in v0.30.0
```
type SearchEntitiesPaginator struct {
// contains filtered or unexported fields
}
```
SearchEntitiesPaginator is a paginator for SearchEntities
####
func [NewSearchEntitiesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchEntities.go#L184) [¶](#NewSearchEntitiesPaginator)
added in v0.30.0
```
func NewSearchEntitiesPaginator(client [SearchEntitiesAPIClient](#SearchEntitiesAPIClient), params *[SearchEntitiesInput](#SearchEntitiesInput), optFns ...func(*[SearchEntitiesPaginatorOptions](#SearchEntitiesPaginatorOptions))) *[SearchEntitiesPaginator](#SearchEntitiesPaginator)
```
NewSearchEntitiesPaginator returns a new SearchEntitiesPaginator
####
func (*SearchEntitiesPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchEntities.go#L208) [¶](#SearchEntitiesPaginator.HasMorePages)
added in v0.30.0
```
func (p *[SearchEntitiesPaginator](#SearchEntitiesPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*SearchEntitiesPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchEntities.go#L213) [¶](#SearchEntitiesPaginator.NextPage)
added in v0.30.0
```
func (p *[SearchEntitiesPaginator](#SearchEntitiesPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[SearchEntitiesOutput](#SearchEntitiesOutput), [error](/builtin#error))
```
NextPage retrieves the next SearchEntities page.
####
type [SearchEntitiesPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchEntities.go#L165) [¶](#SearchEntitiesPaginatorOptions)
added in v0.30.0
```
type SearchEntitiesPaginatorOptions struct {
// The maximum number of results to return in the response.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
SearchEntitiesPaginatorOptions is the paginator options for SearchEntities
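A hedged sketch of driving SearchEntities through its paginator: it assumes the EntityType enum constant types.EntityTypeDevice and an Id field on EntityDescription, and the page size is an arbitrary example value.
```
package example
import (
"context"
"fmt"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph"
"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph/types"
)
// listDevices pages through all DEVICE entities in the user's namespace.
func listDevices(ctx context.Context, client *iotthingsgraph.Client) error {
p := iotthingsgraph.NewSearchEntitiesPaginator(client, &iotthingsgraph.SearchEntitiesInput{
EntityTypes: []types.EntityType{types.EntityTypeDevice},
MaxResults:  aws.Int32(25),
})
for p.HasMorePages() {
page, err := p.NextPage(ctx)
if err != nil {
return err
}
for _, d := range page.Descriptions {
fmt.Println("device:", aws.ToString(d.Id))
}
}
return nil
}
```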
####
type [SearchFlowExecutionsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowExecutions.go#L156) [¶](#SearchFlowExecutionsAPIClient)
added in v0.30.0
```
type SearchFlowExecutionsAPIClient interface {
SearchFlowExecutions([context](/context).[Context](/context#Context), *[SearchFlowExecutionsInput](#SearchFlowExecutionsInput), ...func(*[Options](#Options))) (*[SearchFlowExecutionsOutput](#SearchFlowExecutionsOutput), [error](/builtin#error))
}
```
SearchFlowExecutionsAPIClient is a client that implements the SearchFlowExecutions operation.
####
type [SearchFlowExecutionsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowExecutions.go#L38) [¶](#SearchFlowExecutionsInput)
```
type SearchFlowExecutionsInput struct {
// The ID of the system instance that contains the flow.
//
// This member is required.
SystemInstanceId *[string](/builtin#string)
// The date and time of the latest flow execution to return.
EndTime *[time](/time).[Time](/time#Time)
// The ID of a flow execution.
FlowExecutionId *[string](/builtin#string)
// The maximum number of results to return in the response.
MaxResults *[int32](/builtin#int32)
// The string that specifies the next page of results. Use this when you're
// paginating results.
NextToken *[string](/builtin#string)
// The date and time of the earliest flow execution to return.
StartTime *[time](/time).[Time](/time#Time)
// contains filtered or unexported fields
}
```
####
type [SearchFlowExecutionsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowExecutions.go#L64) [¶](#SearchFlowExecutionsOutput)
```
type SearchFlowExecutionsOutput struct {
// The string to specify as nextToken when you request the next page of results.
NextToken *[string](/builtin#string)
// An array of objects that contain summary information about each workflow
// execution in the result set.
Summaries [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[FlowExecutionSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#FlowExecutionSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [SearchFlowExecutionsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowExecutions.go#L174) [¶](#SearchFlowExecutionsPaginator)
added in v0.30.0
```
type SearchFlowExecutionsPaginator struct {
// contains filtered or unexported fields
}
```
SearchFlowExecutionsPaginator is a paginator for SearchFlowExecutions
####
func [NewSearchFlowExecutionsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowExecutions.go#L183) [¶](#NewSearchFlowExecutionsPaginator)
added in v0.30.0
```
func NewSearchFlowExecutionsPaginator(client [SearchFlowExecutionsAPIClient](#SearchFlowExecutionsAPIClient), params *[SearchFlowExecutionsInput](#SearchFlowExecutionsInput), optFns ...func(*[SearchFlowExecutionsPaginatorOptions](#SearchFlowExecutionsPaginatorOptions))) *[SearchFlowExecutionsPaginator](#SearchFlowExecutionsPaginator)
```
NewSearchFlowExecutionsPaginator returns a new SearchFlowExecutionsPaginator
####
func (*SearchFlowExecutionsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowExecutions.go#L207) [¶](#SearchFlowExecutionsPaginator.HasMorePages)
added in v0.30.0
```
func (p *[SearchFlowExecutionsPaginator](#SearchFlowExecutionsPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*SearchFlowExecutionsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowExecutions.go#L212) [¶](#SearchFlowExecutionsPaginator.NextPage)
added in v0.30.0
```
func (p *[SearchFlowExecutionsPaginator](#SearchFlowExecutionsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[SearchFlowExecutionsOutput](#SearchFlowExecutionsOutput), [error](/builtin#error))
```
NextPage retrieves the next SearchFlowExecutions page.
####
type [SearchFlowExecutionsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowExecutions.go#L164) [¶](#SearchFlowExecutionsPaginatorOptions)
added in v0.30.0
```
type SearchFlowExecutionsPaginatorOptions struct {
// The maximum number of results to return in the response.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
SearchFlowExecutionsPaginatorOptions is the paginator options for SearchFlowExecutions
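The StartTime and EndTime fields make it easy to scope a search to a recent window. The sketch below queries the last 24 hours for a placeholder system instance URN and assumes FlowExecutionSummary exposes a FlowExecutionId field.
```
package example
import (
"context"
"fmt"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph"
)
// recentExecutions lists flow executions for a system instance over the last
// 24 hours. The system instance URN is a placeholder.
func recentExecutions(ctx context.Context, client *iotthingsgraph.Client) error {
now := time.Now()
out, err := client.SearchFlowExecutions(ctx, &iotthingsgraph.SearchFlowExecutionsInput{
SystemInstanceId: aws.String("urn:tdm:us-east-1/123456789012/default:deployment:MyDeployment"),
StartTime:        aws.Time(now.Add(-24 * time.Hour)),
EndTime:          aws.Time(now),
})
if err != nil {
return err
}
for _, s := range out.Summaries {
fmt.Println("flow execution:", aws.ToString(s.FlowExecutionId))
}
return nil
}
```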
####
type [SearchFlowTemplatesAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowTemplates.go#L145) [¶](#SearchFlowTemplatesAPIClient)
added in v0.30.0
```
type SearchFlowTemplatesAPIClient interface {
SearchFlowTemplates([context](/context).[Context](/context#Context), *[SearchFlowTemplatesInput](#SearchFlowTemplatesInput), ...func(*[Options](#Options))) (*[SearchFlowTemplatesOutput](#SearchFlowTemplatesOutput), [error](/builtin#error))
}
```
SearchFlowTemplatesAPIClient is a client that implements the SearchFlowTemplates operation.
####
type [SearchFlowTemplatesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowTemplates.go#L37) [¶](#SearchFlowTemplatesInput)
```
type SearchFlowTemplatesInput struct {
// An array of objects that limit the result set. The only valid filter is
// DEVICE_MODEL_ID.
Filters [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[FlowTemplateFilter](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#FlowTemplateFilter)
// The maximum number of results to return in the response.
MaxResults *[int32](/builtin#int32)
// The string that specifies the next page of results. Use this when you're
// paginating results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [SearchFlowTemplatesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowTemplates.go#L53) [¶](#SearchFlowTemplatesOutput)
```
type SearchFlowTemplatesOutput struct {
// The string to specify as nextToken when you request the next page of results.
NextToken *[string](/builtin#string)
// An array of objects that contain summary information about each workflow in the
// result set.
Summaries [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[FlowTemplateSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#FlowTemplateSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [SearchFlowTemplatesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowTemplates.go#L163) [¶](#SearchFlowTemplatesPaginator)
added in v0.30.0
```
type SearchFlowTemplatesPaginator struct {
// contains filtered or unexported fields
}
```
SearchFlowTemplatesPaginator is a paginator for SearchFlowTemplates
####
func [NewSearchFlowTemplatesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowTemplates.go#L172) [¶](#NewSearchFlowTemplatesPaginator)
added in v0.30.0
```
func NewSearchFlowTemplatesPaginator(client [SearchFlowTemplatesAPIClient](#SearchFlowTemplatesAPIClient), params *[SearchFlowTemplatesInput](#SearchFlowTemplatesInput), optFns ...func(*[SearchFlowTemplatesPaginatorOptions](#SearchFlowTemplatesPaginatorOptions))) *[SearchFlowTemplatesPaginator](#SearchFlowTemplatesPaginator)
```
NewSearchFlowTemplatesPaginator returns a new SearchFlowTemplatesPaginator
####
func (*SearchFlowTemplatesPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowTemplates.go#L196) [¶](#SearchFlowTemplatesPaginator.HasMorePages)
added in v0.30.0
```
func (p *[SearchFlowTemplatesPaginator](#SearchFlowTemplatesPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*SearchFlowTemplatesPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowTemplates.go#L201) [¶](#SearchFlowTemplatesPaginator.NextPage)
added in v0.30.0
```
func (p *[SearchFlowTemplatesPaginator](#SearchFlowTemplatesPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[SearchFlowTemplatesOutput](#SearchFlowTemplatesOutput), [error](/builtin#error))
```
NextPage retrieves the next SearchFlowTemplates page.
####
type [SearchFlowTemplatesPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchFlowTemplates.go#L153) [¶](#SearchFlowTemplatesPaginatorOptions)
added in v0.30.0
```
type SearchFlowTemplatesPaginatorOptions struct {
// The maximum number of results to return in the response.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
SearchFlowTemplatesPaginatorOptions is the paginator options for SearchFlowTemplates
####
type [SearchSystemInstancesAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemInstances.go#L143) [¶](#SearchSystemInstancesAPIClient)
added in v0.30.0
```
type SearchSystemInstancesAPIClient interface {
SearchSystemInstances([context](/context).[Context](/context#Context), *[SearchSystemInstancesInput](#SearchSystemInstancesInput), ...func(*[Options](#Options))) (*[SearchSystemInstancesOutput](#SearchSystemInstancesOutput), [error](/builtin#error))
}
```
SearchSystemInstancesAPIClient is a client that implements the SearchSystemInstances operation.
####
type [SearchSystemInstancesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemInstances.go#L37) [¶](#SearchSystemInstancesInput)
```
type SearchSystemInstancesInput struct {
// Optional filter to apply to the search. Valid filters are SYSTEM_TEMPLATE_ID,
// STATUS, and GREENGRASS_GROUP_NAME. Multiple filters function as OR criteria in
// the query. Multiple values passed inside the filter function as AND criteria.
Filters [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[SystemInstanceFilter](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#SystemInstanceFilter)
// The maximum number of results to return in the response.
MaxResults *[int32](/builtin#int32)
// The string that specifies the next page of results. Use this when you're
// paginating results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [SearchSystemInstancesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemInstances.go#L54) [¶](#SearchSystemInstancesOutput)
```
type SearchSystemInstancesOutput struct {
// The string to specify as nextToken when you request the next page of results.
NextToken *[string](/builtin#string)
// An array of objects that contain summary data about the system instances in the
// result set.
Summaries [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[SystemInstanceSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#SystemInstanceSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [SearchSystemInstancesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemInstances.go#L161) [¶](#SearchSystemInstancesPaginator)
added in v0.30.0
```
type SearchSystemInstancesPaginator struct {
// contains filtered or unexported fields
}
```
SearchSystemInstancesPaginator is a paginator for SearchSystemInstances
####
func [NewSearchSystemInstancesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemInstances.go#L170) [¶](#NewSearchSystemInstancesPaginator)
added in v0.30.0
```
func NewSearchSystemInstancesPaginator(client [SearchSystemInstancesAPIClient](#SearchSystemInstancesAPIClient), params *[SearchSystemInstancesInput](#SearchSystemInstancesInput), optFns ...func(*[SearchSystemInstancesPaginatorOptions](#SearchSystemInstancesPaginatorOptions))) *[SearchSystemInstancesPaginator](#SearchSystemInstancesPaginator)
```
NewSearchSystemInstancesPaginator returns a new SearchSystemInstancesPaginator
####
func (*SearchSystemInstancesPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemInstances.go#L194) [¶](#SearchSystemInstancesPaginator.HasMorePages)
added in v0.30.0
```
func (p *[SearchSystemInstancesPaginator](#SearchSystemInstancesPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*SearchSystemInstancesPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemInstances.go#L199) [¶](#SearchSystemInstancesPaginator.NextPage)
added in v0.30.0
```
func (p *[SearchSystemInstancesPaginator](#SearchSystemInstancesPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[SearchSystemInstancesOutput](#SearchSystemInstancesOutput), [error](/builtin#error))
```
NextPage retrieves the next SearchSystemInstances page.
####
type [SearchSystemInstancesPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemInstances.go#L151) [¶](#SearchSystemInstancesPaginatorOptions)
added in v0.30.0
```
type SearchSystemInstancesPaginatorOptions struct {
// The maximum number of results to return in the response.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
SearchSystemInstancesPaginatorOptions is the paginator options for SearchSystemInstances
####
type [SearchSystemTemplatesAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemTemplates.go#L147) [¶](#SearchSystemTemplatesAPIClient)
added in v0.30.0
```
type SearchSystemTemplatesAPIClient interface {
SearchSystemTemplates([context](/context).[Context](/context#Context), *[SearchSystemTemplatesInput](#SearchSystemTemplatesInput), ...func(*[Options](#Options))) (*[SearchSystemTemplatesOutput](#SearchSystemTemplatesOutput), [error](/builtin#error))
}
```
SearchSystemTemplatesAPIClient is a client that implements the SearchSystemTemplates operation.
####
type [SearchSystemTemplatesInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemTemplates.go#L39) [¶](#SearchSystemTemplatesInput)
```
type SearchSystemTemplatesInput struct {
// An array of filters that limit the result set. The only valid filter is
// FLOW_TEMPLATE_ID.
Filters [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[SystemTemplateFilter](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#SystemTemplateFilter)
// The maximum number of results to return in the response.
MaxResults *[int32](/builtin#int32)
// The string that specifies the next page of results. Use this when you're
// paginating results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [SearchSystemTemplatesOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemTemplates.go#L55) [¶](#SearchSystemTemplatesOutput)
```
type SearchSystemTemplatesOutput struct {
// The string to specify as nextToken when you request the next page of results.
NextToken *[string](/builtin#string)
// An array of objects that contain summary information about each system
// deployment in the result set.
Summaries [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[SystemTemplateSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#SystemTemplateSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [SearchSystemTemplatesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemTemplates.go#L165) [¶](#SearchSystemTemplatesPaginator)
added in v0.30.0
```
type SearchSystemTemplatesPaginator struct {
// contains filtered or unexported fields
}
```
SearchSystemTemplatesPaginator is a paginator for SearchSystemTemplates
####
func [NewSearchSystemTemplatesPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemTemplates.go#L174) [¶](#NewSearchSystemTemplatesPaginator)
added in v0.30.0
```
func NewSearchSystemTemplatesPaginator(client [SearchSystemTemplatesAPIClient](#SearchSystemTemplatesAPIClient), params *[SearchSystemTemplatesInput](#SearchSystemTemplatesInput), optFns ...func(*[SearchSystemTemplatesPaginatorOptions](#SearchSystemTemplatesPaginatorOptions))) *[SearchSystemTemplatesPaginator](#SearchSystemTemplatesPaginator)
```
NewSearchSystemTemplatesPaginator returns a new SearchSystemTemplatesPaginator
####
func (*SearchSystemTemplatesPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemTemplates.go#L198) [¶](#SearchSystemTemplatesPaginator.HasMorePages)
added in v0.30.0
```
func (p *[SearchSystemTemplatesPaginator](#SearchSystemTemplatesPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*SearchSystemTemplatesPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemTemplates.go#L203) [¶](#SearchSystemTemplatesPaginator.NextPage)
added in v0.30.0
```
func (p *[SearchSystemTemplatesPaginator](#SearchSystemTemplatesPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[SearchSystemTemplatesOutput](#SearchSystemTemplatesOutput), [error](/builtin#error))
```
NextPage retrieves the next SearchSystemTemplates page.
####
type [SearchSystemTemplatesPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchSystemTemplates.go#L155) [¶](#SearchSystemTemplatesPaginatorOptions)
added in v0.30.0
```
type SearchSystemTemplatesPaginatorOptions struct {
// The maximum number of results to return in the response.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
SearchSystemTemplatesPaginatorOptions is the paginator options for SearchSystemTemplates
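As an illustration (not taken from the generated documentation), a typical pagination loop over this operation might look like the following sketch; it assumes an already-configured service client satisfying SearchSystemTemplatesAPIClient:
```
import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/service/iotthingsgraph"
)

func listSystemTemplates(client iotthingsgraph.SearchSystemTemplatesAPIClient) {
	paginator := iotthingsgraph.NewSearchSystemTemplatesPaginator(
		client,
		&iotthingsgraph.SearchSystemTemplatesInput{},
		func(o *iotthingsgraph.SearchSystemTemplatesPaginatorOptions) {
			o.Limit = 25 // maximum number of results per page
		},
	)
	for paginator.HasMorePages() {
		page, err := paginator.NextPage(context.TODO())
		if err != nil {
			log.Fatal(err)
		}
		for _, summary := range page.Summaries {
			fmt.Printf("%+v\n", summary) // summary information for each system template
		}
	}
}
```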
####
type [SearchThingsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchThings.go#L154) [¶](#SearchThingsAPIClient)
added in v0.30.0
```
type SearchThingsAPIClient interface {
SearchThings([context](/context).[Context](/context#Context), *[SearchThingsInput](#SearchThingsInput), ...func(*[Options](#Options))) (*[SearchThingsOutput](#SearchThingsOutput), [error](/builtin#error))
}
```
SearchThingsAPIClient is a client that implements the SearchThings operation.
####
type [SearchThingsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchThings.go#L42) [¶](#SearchThingsInput)
```
type SearchThingsInput struct {
// The ID of the entity to which the things are associated. The IDs should be in
// the following format. urn:tdm:REGION/ACCOUNT ID/default:device:DEVICENAME
//
// This member is required.
EntityId *[string](/builtin#string)
// The maximum number of results to return in the response.
MaxResults *[int32](/builtin#int32)
// The version of the user's namespace. Defaults to the latest version of the
// user's namespace.
NamespaceVersion *[int64](/builtin#int64)
// The string that specifies the next page of results. Use this when you're
// paginating results.
NextToken *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [SearchThingsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchThings.go#L64) [¶](#SearchThingsOutput)
```
type SearchThingsOutput struct {
// The string to specify as nextToken when you request the next page of results.
NextToken *[string](/builtin#string)
// An array of things in the result set.
Things [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Thing](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Thing)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [SearchThingsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchThings.go#L171) [¶](#SearchThingsPaginator)
added in v0.30.0
```
type SearchThingsPaginator struct {
// contains filtered or unexported fields
}
```
SearchThingsPaginator is a paginator for SearchThings
####
func [NewSearchThingsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchThings.go#L180) [¶](#NewSearchThingsPaginator)
added in v0.30.0
```
func NewSearchThingsPaginator(client [SearchThingsAPIClient](#SearchThingsAPIClient), params *[SearchThingsInput](#SearchThingsInput), optFns ...func(*[SearchThingsPaginatorOptions](#SearchThingsPaginatorOptions))) *[SearchThingsPaginator](#SearchThingsPaginator)
```
NewSearchThingsPaginator returns a new SearchThingsPaginator
####
func (*SearchThingsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchThings.go#L204) [¶](#SearchThingsPaginator.HasMorePages)
added in v0.30.0
```
func (p *[SearchThingsPaginator](#SearchThingsPaginator)) HasMorePages() [bool](/builtin#bool)
```
HasMorePages returns a boolean indicating whether more pages are available
####
func (*SearchThingsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchThings.go#L209) [¶](#SearchThingsPaginator.NextPage)
added in v0.30.0
```
func (p *[SearchThingsPaginator](#SearchThingsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[SearchThingsOutput](#SearchThingsOutput), [error](/builtin#error))
```
NextPage retrieves the next SearchThings page.
####
type [SearchThingsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_SearchThings.go#L161) [¶](#SearchThingsPaginatorOptions)
added in v0.30.0
```
type SearchThingsPaginatorOptions struct {
// The maximum number of results to return in the response.
Limit [int32](/builtin#int32)
// Set to true if pagination should stop if the service returns a pagination token
// that matches the most recent token provided to the service.
StopOnDuplicateToken [bool](/builtin#bool)
}
```
SearchThingsPaginatorOptions is the paginator options for SearchThings
####
type [TagResourceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_TagResource.go#L37) [¶](#TagResourceInput)
```
type TagResourceInput struct {
// The Amazon Resource Name (ARN) of the resource whose tags are returned.
//
// This member is required.
ResourceArn *[string](/builtin#string)
// A list of tags to add to the resource.
//
// This member is required.
Tags [][types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[Tag](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#Tag)
// contains filtered or unexported fields
}
```
####
type [TagResourceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_TagResource.go#L52) [¶](#TagResourceOutput)
```
type TagResourceOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [UndeploySystemInstanceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UndeploySystemInstance.go#L37) [¶](#UndeploySystemInstanceInput)
```
type UndeploySystemInstanceInput struct {
// The ID of the system instance to remove from its target.
Id *[string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [UndeploySystemInstanceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UndeploySystemInstance.go#L45) [¶](#UndeploySystemInstanceOutput)
```
type UndeploySystemInstanceOutput struct {
// An object that contains summary information about the system instance that was
// removed from its target.
Summary *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[SystemInstanceSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#SystemInstanceSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [UntagResourceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UntagResource.go#L36) [¶](#UntagResourceInput)
```
type UntagResourceInput struct {
// The Amazon Resource Name (ARN) of the resource whose tags are to be removed.
//
// This member is required.
ResourceArn *[string](/builtin#string)
// A list of tag key names to remove from the resource. You don't specify the
// value. Both the key and its associated value are removed. This parameter to the
// API requires a JSON text string argument. For information on how to format a
// JSON parameter for the various command line tool environments, see Using JSON
// for Parameters (<https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters.html#cli-using-param-json>)
// in the AWS CLI User Guide.
//
// This member is required.
TagKeys [][string](/builtin#string)
// contains filtered or unexported fields
}
```
####
type [UntagResourceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UntagResource.go#L56) [¶](#UntagResourceOutput)
```
type UntagResourceOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [UpdateFlowTemplateInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UpdateFlowTemplate.go#L41) [¶](#UpdateFlowTemplateInput)
```
type UpdateFlowTemplateInput struct {
// The DefinitionDocument that contains the updated workflow definition.
//
// This member is required.
Definition *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DefinitionDocument](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DefinitionDocument)
// The ID of the workflow to be updated. The ID should be in the following format.
// urn:tdm:REGION/ACCOUNT ID/default:workflow:WORKFLOWNAME
//
// This member is required.
Id *[string](/builtin#string)
// The version of the user's namespace. If no value is specified, the latest
// version is used by default. Use the GetFlowTemplateRevisions if you want to
// find earlier revisions of the flow to update.
CompatibleNamespaceVersion *[int64](/builtin#int64)
// contains filtered or unexported fields
}
```
####
type [UpdateFlowTemplateOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UpdateFlowTemplate.go#L62) [¶](#UpdateFlowTemplateOutput)
```
type UpdateFlowTemplateOutput struct {
// An object containing summary information about the updated workflow.
Summary *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[FlowTemplateSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#FlowTemplateSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [UpdateSystemTemplateInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UpdateSystemTemplate.go#L39) [¶](#UpdateSystemTemplateInput)
```
type UpdateSystemTemplateInput struct {
// The DefinitionDocument that contains the updated system definition.
//
// This member is required.
Definition *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DefinitionDocument](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DefinitionDocument)
// The ID of the system to be updated. The ID should be in the following format.
// urn:tdm:REGION/ACCOUNT ID/default:system:SYSTEMNAME
//
// This member is required.
Id *[string](/builtin#string)
// The version of the user's namespace. Defaults to the latest version of the
// user's namespace. If no value is specified, the latest version is used by
// default.
CompatibleNamespaceVersion *[int64](/builtin#int64)
// contains filtered or unexported fields
}
```
####
type [UpdateSystemTemplateOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UpdateSystemTemplate.go#L60) [¶](#UpdateSystemTemplateOutput)
```
type UpdateSystemTemplateOutput struct {
// An object containing summary information about the updated system.
Summary *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[SystemTemplateSummary](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#SystemTemplateSummary)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
```
####
type [UploadEntityDefinitionsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UploadEntityDefinitions.go#L53) [¶](#UploadEntityDefinitionsInput)
```
type UploadEntityDefinitionsInput struct {
// A Boolean that specifies whether to deprecate all entities in the latest
// version before uploading the new DefinitionDocument . If set to true , the
// upload will create a new namespace version.
DeprecateExistingEntities [bool](/builtin#bool)
// The DefinitionDocument that defines the updated entities.
Document *[types](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types).[DefinitionDocument](/github.com/aws/aws-sdk-go-v2/service/[email protected]/types#DefinitionDocument)
// A Boolean that specifies whether to synchronize with the latest version of the
// public namespace. If set to true , the upload will create a new namespace
// version.
SyncWithPublicNamespace [bool](/builtin#bool)
// contains filtered or unexported fields
}
```
####
type [UploadEntityDefinitionsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotthingsgraph/v1.16.2/service/iotthingsgraph/api_op_UploadEntityDefinitions.go#L71) [¶](#UploadEntityDefinitionsOutput)
```
type UploadEntityDefinitionsOutput struct {
// The ID that specifies the upload action. You can use this to track the status
// of the upload.
//
// This member is required.
UploadId *[string](/builtin#string)
// Metadata pertaining to the operation's result.
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata)
// contains filtered or unexported fields
}
``` |
github.com/RH12503/Triangula | go | Go | README
[¶](#section-readme)
---
![](https://github.com/RH12503/Triangula/raw/v1.2.0/assets/logo.svg)
An iterative algorithm to generate high quality triangulated images.
![Test status](https://github.com/RH12503/Triangula/actions/workflows/test.yml/badge.svg)
[![Go Reference](https://pkg.go.dev/badge/github.com/RH12503/Triangula.svg)](https://pkg.go.dev/github.com/RH12503/Triangula)
[![Go Report Card](https://goreportcard.com/badge/github.com/RH12503/Triangula)](https://goreportcard.com/report/github.com/RH12503/Triangula)
[![](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?text=An%20iterative%20algorithm%20to%20triangulate%20images.&url=https://github.com/RH12503/triangula&hashtags=golang,geneticalgorithm,generativeart)
Triangula uses a modified genetic algorithm to triangulate images. It works best with images smaller than 3000px and with fewer than 3000 points, typically producing an optimal result within a couple of minutes. For a full explanation of the algorithm, see [this page in the wiki](https://github.com/RH12503/Triangula/wiki/Explanation-of-the-algorithm).
You can try the algorithm out in your browser [here](https://rh12503.github.io/triangula/), but the desktop app will typically be 20-50x faster.
### Install
#### GUI
Install the [GUI](https://github.com/RH12503/Triangula-GUI) from the [releases page](https://github.com/RH12503/Triangula/releases).
The GUI uses [Wails](https://wails.app/) for its frontend.
![](https://github.com/RH12503/Triangula/raw/v1.2.0/assets/triangula.gif)
If the app doesn't run on Linux, go to the Permissions tab in the executable's properties and tick `Allow executing file as program`.
#### CLI
Install the [CLI](https://github.com/RH12503/Triangula-CLI) by running:
```
go get -u github.com/RH12503/Triangula-CLI/triangula
```
Your `PATH` variable also needs to include your `go/bin` directory, which is `~/go/bin` on macOS, `$GOPATH/bin` on Linux, and `c:\Go\bin` on Windows.
Then run it using the command:
```
triangula run -img <path to image> -out <path to output JSON>
```
and when you're happy with its fitness, render a SVG:
```
triangula render -in <path to outputted JSON> -img <path to image> -out <path to output SVG>
```
For more detailed instructions, including rendering PNGs with effects see [this page](https://github.com/RH12503/Triangula-CLI/blob/main/README.md).
### Options
For almost all cases, only changing the number of points and leaving all other options with their default values will generate an optimal result.
| Name | Flag | Default | Usage |
| --- | --- | --- | --- |
| Points | `--points, -p` | 300 | The number of points to use in the triangulation |
| Mutations | `--mutations, --mut, -m` | 2 | The number of mutations to make |
| Variation | `--variation, -v` | 0.3 | The variation each mutation causes |
| Population | `--population, --pop, --size` | 400 | The population size in the algorithm |
| Cutoff | `--cutoff, --cut` | 5 | The cutoff value of the algorithm |
| Cache | `--cache, -c` | 22 | The cache size as a power of 2 |
| Block | `--block, -b` | 5 | The size of the blocks used when rendering |
| Threads | `--threads, -t` | 0 | The number of threads to use or 0 to use all cores |
| Repetitions | `--reps, -r` | 500 | The number of generations before saving to the output file (CLI only) |
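For example, a run that only raises the point count and the checkpoint interval (file names here are just placeholders) could look like:
```
triangula run -img portrait.png -out portrait.json --points 800 --reps 400
```
All other options keep their defaults, which is usually what you want.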
### Examples of output
![](https://github.com/RH12503/Triangula/raw/v1.2.0/assets/output/grad.png)
![](https://github.com/RH12503/Triangula/raw/v1.2.0/assets/output/plane.png)
![](https://github.com/RH12503/Triangula/raw/v1.2.0/assets/output/sf.png)
![](https://github.com/RH12503/Triangula/raw/v1.2.0/assets/output/astro.png)
#### Comparison to [esimov/triangle](https://github.com/esimov/triangle)
esimov/triangle seems to be a similar project to Triangula that is also written in Go. However, the two appear to generate very different styles. One big advantage of triangle is that it generates an image almost instantaneously, while Triangula needs to run many iterations.
esimov/triangle results were taken from their [Github repo](https://github.com/esimov/triangle), and Triangula's results were generated over 1-2 minutes.
| esimov/triangle | Triangula |
| --- | --- |
| *(example images)* | *(example images)* |
##### Difference from [fogleman/primitive](https://github.com/fogleman/primitive) and [gheshu/image_decompiler](https://github.com/gheshu/image_decompiler)
A lot of people have commented about Triangula's similarities to these other algorithms. While all these algorithms are iterative algorithms, the main difference is that in the other algorithms triangles can overlap while Triangula generates a triangulation.
### API
Simple example:
```
package main

import (
	"fmt"
	"image"
	"log"
	"os"

	imageData "github.com/RH12503/Triangula/image"
	// the algorithm, evaluator, fitness, generator, mutation and normgeom
	// packages from the same module are also required by the code below
)

func main() {
	// Open and decode a PNG/JPEG
	file, err := os.Open("image.png")
	if err != nil {
		log.Fatal(err)
	}
	decoded, _, err := image.Decode(file)
	file.Close()
	if err != nil {
		log.Fatal(err)
	}

	// Convert to Triangula's internal image representation
	img := imageData.ToData(decoded)

	// Generator for the initial random point groups
	pointFactory := func() normgeom.NormPointGroup {
		return (generator.RandomGenerator{}).Generate(200) // 200 points
	}

	// Fitness evaluator factory
	evaluatorFactory := func(n int) evaluator.Evaluator {
		// 22 for the cache size and 5 for the block size
		// use PolygonsImageFunctions for polygons
		return evaluator.NewParallel(fitness.TrianglesImageFunctions(img, 5, n), 22)
	}

	var mutator mutation.Method
	// 1% mutation rate and 30% variation
	mutator = mutation.NewGaussianMethod(0.01, 0.3)

	// 400 population size and 5 cutoff
	algo := algorithm.NewModifiedGenetic(pointFactory, 400, 5, evaluatorFactory, mutator)

	// Run the algorithm
	for {
		algo.Step()
		fmt.Println(algo.Stats().BestFitness)
	}
}
```
### Contribute
Any contributions are welcome. Currently help is needed with:
* Support for exporting effects to SVGs.
* Supporting more image types for the CLI and GUI (e.g. .tiff, .webp, .heic).
* Allowing drag and drop of images from the web for the GUI.
* More effects.
* Any optimizations.
Thank you to these contributors for helping to improve Triangula:
* [@bstncartwright](https://github.com/bstncartwright)
* [@2BoysAndHats](https://github.com/2BoysAndHats)
None |
github.com/souvikinator/lsx | go | Go | README
[¶](#section-readme)
---
[lsx](https://github.com/souvikinator/lsx)
===
### Navigate through terminal like a pro 😎
[![license](https://img.shields.io/badge/licence-MIT-brightgreen)](https://opensource.org/licenses/)
[![](https://img.shields.io/github/issues/souvikinator/lsx)](https://github.com/souvikinator/lsx/issues)
[![codebeat badge](https://codebeat.co/badges/08315931-e796-4828-bfb0-18b6750d6f2a)](https://codebeat.co/projects/github-com-souvikinator-lsx-master)
![](https://img.shields.io/badge/made%20with-Go-blue)
![go report card](https://goreportcard.com/badge/github.com/souvikinator/lsx)
[💻 Demo](#-Demo) •
[⚗️ Install & Update](#%EF%B8%8F-install) •
[🐜 Contribution](#-contribution) •
[❗Known Issues](#known-issues)
### ❓ Why?
It's a pain to `cd` and `ls` multiple times to reach the desired directory in the terminal (*this may be subjective*). **ls-Xtended (lsx)** solves this problem by letting users smoothly navigate and search directories on the go with just one command. It also lets users create aliases for paths, making it easier to remember the path to the desired directory.
**It also ranks your directories based on how often you access them, placing the most-used ones on top of the list to reduce searching and navigation time.**
### 💻 Demo
> **Note**: once you reach the desired destination, use `ctrl+c` to exit lsx and stay in that directory
#### Navigate through terminal and perform search:
* use `/` to trigger search and start typing to search
```
lsx
```
![](https://github.com/souvikinator/lsx/raw/master/assets/demo.gif)
#### Show hidden files as well
```
lsx -a
```
![](https://github.com/souvikinator/lsx/raw/master/assets/all-mode.gif)
#### Set **alias** for directory paths
```
lsx set-alias -n somealias -p path/to/be/aliased
```
or
```
lsx set-alias --path-name somealias --path path/to/be/aliased
```
![](https://github.com/souvikinator/lsx/raw/master/assets/set-alias.gif)
#### Updating Alias
`set-alias` can also be used to update any existing alias. Let's say alias `abc` already exists for path `a/b/c`; one can update it like so:
```
lsx set-alias -n abc -p d/e/f
```
#### List **alias** created by user
```
lsx alias
```
![](https://github.com/souvikinator/lsx/raw/master/assets/list-alias.gif)
#### Use **alias**
```
lsx somealias
```
![](https://github.com/souvikinator/lsx/raw/master/assets/use-alias.gif)
#### Remove existing **alias**
```
lsx remove-alias aliasname
```
![](https://github.com/souvikinator/lsx/raw/master/assets/remove-alias.gif)
Documentation
[¶](#section-documentation)
---
![The Go Gopher](/static/shared/gopher/airplane-1200x945.svg)
There is no documentation for this package. |
starlette-context | readthedoc | Python | ## Installation
The only dependency for this project is Starlette, therefore this library should work with all Starlette-based frameworks, such as Responder, FastAPI or Flama.
```
$ pip install starlette-context
```
## How to use
You can access the magic `context` object if and only if these two conditions are met:
* you access it within a request-response cycle
* you used a `ContextMiddleware` or `RawContextMiddleware` in your ASGI app
Minimal working example
```
# app.py
from starlette.middleware import Middleware
from starlette.applications import Starlette
from starlette_context.middleware import RawContextMiddleware
middleware = [Middleware(RawContextMiddleware)]
app = Starlette(middleware=middleware)
```
```
# views.py
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette_context import context
from .app import app
@app.route("/")
async def index(request: Request):
    return JSONResponse(context.data)
```
The context object utilizes `ContextVar` to store the information.
This `ContextVar` is a native Python object, introduced in Python 3.7.
For more information, see the official docs of contextvars.
Warning
If you see `ContextDoesNotExistError`, please see Errors.
The idea was to create something like the `g` object in `Flask`. In `Django` there's no similar built-in solution, though it can be compared to anything that allows you to store data in the thread, such as django-currentuser or django-crum. The interface hides the implementation; you can use the `context` object as if it were a native `dict`.
The most significant difference is that you can't serialize the context object itself. You'd have to use `json.dumps(context.data)`, as `.data` returns a `dict`.
The following operations work as expected:
* `**context` (`dict` unpacking)
* `context["key"]`
* `context.get("key")`
* `context.items()`
* `context["key"] = "value"`
To make it available during the request-response cycle, it needs to be instantiated in the Starlette app with one of the middlewares, or the
context manager, which can be useful in FastAPI Depends or unit tests requiring an available context.
The middleware approach offers extended capabilities in the form of plugins to process and extend the request, but needs to redefine the error response in case those plugins raise an Exception.
The context manager approach is more barebones, which will likely lead you to implement the initial context population yourself.
## What for
The middleware effectively creates the context for the request, so you must configure your app to use it. More usage detail along with code examples can be found in Plugins.
## Errors and Middlewares in Starlette
There may be a validation error occurring while processing the request in the plugins, which requires sending an error response. Starlette, however, does not let middleware use the regular error handler (more details), so a middleware facing a validation error has to send a response by itself.
By default, the response sent will be a 400 with no body or extra header, as a Starlette `Response(status_code=400)`. This response can be customized at both middleware and plugin level. The middlewares accept a `Response` object (or anything that inherits from it, such as a `JSONResponse`) through the `default_error_response` keyword argument at init. This response will be sent on raised `starlette_context.errors.MiddleWareValidationError` exceptions, if the exception doesn't include a response itself.
```
middleware = [
    Middleware(
        ContextMiddleware,
        default_error_response=JSONResponse(
            status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
            content={"Error": "Invalid request"},
        ),
        # plugins = ...
    )
]
```
## Why are there two middlewares that do the same thing
Warning
`ContextMiddleware` middleware is deprecated and will be removed in version 1.0.0.
Use `RawContextMiddleware` instead. For more information, see this ticket.
`ContextMiddleware` inherits from `BaseHTTPMiddleware`, which is an interface prepared by `encode`.
That is, in theory, the “normal” way of creating a middleware. It’s simple and convenient.
However, if you are using `StreamingResponse` , you might bump into memory issues. See
The authors recently started to discourage the use of BaseHTTPMiddleware in favor of what they call "raw middleware". The problem with the "raw" one is that there are no docs for how to actually create it.
The `RawContextMiddleware` does more or less the same thing.
It is entirely possible that `ContextMiddleware` will be removed in a future release.
It is also possible that the authors will make some changes to `BaseHTTPMiddleware` to fix this issue.
I'd advise using only `RawContextMiddleware`.
Warning
Due to how Starlette handles application exceptions, the `enrich_response` method won’t run,
and the default error response will not be used after an unhandled exception.
Therefore, this middleware is not capable of setting response headers for 500 responses. You can try to use your own 500 handler, but beware that the context will not be available.
## How does it work
First, an empty "storage" is created that's bound to the context of your async request. The `set_context` method allows you to assign something to the context on creation, so that's the best place to add everything that might come in handy later on. You can always alter the context afterwards, adding or removing items, but each operation comes with some cost. All `plugins` are executed when the `set_context` method is called. If you want to add something else there, you can either write your own plugin or just override the `set_context` method, which returns a `dict`.
Then, once the response is created, we iterate over plugins so it’s possible to set some response headers based on the context contents.
Finally, the “storage” that async python apps can access is removed.
Context plugins allow you to extract any data you want from the request and store it in the context object. Plugins for the most common scenarios have been created, such as extracting Correlation ID.
## Using a plugin
There may be a validation error occurring while processing the request in the plugins, which requires sending an error response. Starlette however does not let middleware use the regular error handler (more details on this), so middlewares facing a validation error have to send a response by themselves.
By default, the response sent will be a 400 with no body or extra header, as a Starlette Response(status_code=400). This response can be customized at both middleware and plugin level.
## Example usage
```
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette_context import plugins
from starlette_context.middleware import ContextMiddleware
middleware = [
    Middleware(
        ContextMiddleware,
        plugins=(
            plugins.RequestIdPlugin(),
            plugins.CorrelationIdPlugin()
        )
    )
]
app = Starlette(middleware=middleware)
```
You can use the middleware without plugins; it will only create the context for the request and not populate it directly.
## Built-in plugins
`starlette-context` includes the following plugins you can import and use as shown above.
They are all accessible from the plugins module.
Do note that headers are case-insensitive, as per RFC9110.
You can access the header value through the `<plugin class>.key` attribute, or through the `starlette_context.header_keys.HeaderKeys` enum.
## UUID Plugins
UUID plugins accept `force_new_uuid=True` to enforce the creation of a new UUID. Defaults to `False`. If the target header has a value, it is validated to be a UUID (although kept as str in the context). The error response sent if this validation fails can be customized with `error_response=<Response object>`.
If no error response was specified, the middleware's default response will be used.
This validation can be turned off altogether with `validate=False`.
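As a sketch of the knobs described above (the plugin choice and the concrete values here are just for illustration), a correlation-ID plugin could be configured like this:
```
from starlette.responses import JSONResponse
from starlette_context import plugins

correlation_id = plugins.CorrelationIdPlugin(
    force_new_uuid=False,  # keep an incoming valid UUID instead of generating a new one
    validate=True,         # reject header values that are not valid UUIDs
    error_response=JSONResponse(
        status_code=400, content={"error": "invalid correlation id"}
    ),
)
```
The resulting plugin instance is then passed to the middleware's `plugins` tuple as in the earlier examples.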
## Implementing your own
You can implement your own plugin with varying degrees of ease and flexibility.
## Easy mode
You want a Plugin to extract a header that is not already available in the built-in ones. There are indeed many, and your app may even want to use a custom header.
You just need to define the header key that you’re looking for.
```
from starlette_context.plugins import Plugin
class AcceptLanguagePlugin(Plugin):
key = "Accept-Language"
```
That’s it! Just load it in your Middleware’s plugins, and the value of the `Accept-Language` header will be put in the context,
which you can later get with
```
context.get(AcceptLanguagePlugin.key)
```
or
```
context.get("Accept-Language")
```
Hopefully you can use it to try and serve locally appropriate content. You can notice the `key` attributes is both used to define the header you want to extract data from, and the key with which it is inserted in the context.
## Intermediate
What if you don’t want to put the header’s value as a plain str, or don’t even want to take data from the header?
You need to override the `process_request` method.
This gives you full access to the request, freedom to perform any processing in-between, and to return any value type.
Whatever is returned will be put in the context, again under the plugin's defined `key`.
Any Exception raised from a middleware in Starlette would normally become a hard 500 response. However, you might find cases where you want to send a validation error instead. For those cases, `starlette_context` provides a `MiddleWareValidationError` exception you can raise and include a Starlette `Response` object in. The middleware class will take care of sending it.
You can also raise a `MiddleWareValidationError` without attaching a response; the middleware's default response will then be used.
You can also do more than extract data from requests: plugins have a hook to modify the response before it's sent, `enrich_response`. It can access the Response object and, of course, the context, fully populated by that point.
Here is an example of a plugin that extracts a Session from the request cookies, expects it to be encoded in base64, and attempts to decode it before returning it to the context. It generates an error response if the cookie cannot be decoded. On the way out, it retrieves the value it put in the context and sets a new cookie.
```
import base64
import logging
from typing import Any, Optional, Union
from starlette.responses import Response
from starlette.requests import HTTPConnection, Request
from starlette.types import Message
from starlette_context.plugins import Plugin
from starlette_context.errors import MiddleWareValidationError
from starlette_context import context
class MySessionPlugin(Plugin):
    # The returned value will be inserted in the context with this key
    key = "session_cookie"

    async def process_request(
        self, request: Union[Request, HTTPConnection]
    ) -> Optional[Any]:
        # access any part of the request
        raw_cookie = request.cookies.get("Session")
        if not raw_cookie:
            # it will be inserted as None in the context.
            return None
        try:
            decoded_cookie = base64.b64decode(bytes(raw_cookie, encoding="utf-8"))
        except Exception as e:
            logging.error("Raw cookie couldn't be decoded", exc_info=e)
            # create a response to signal the user of the invalid cookie.
            response = Response(
                content=f"Invalid cookie: {raw_cookie}", status_code=400
            )
            # pass the response object in the exception so the middleware can abort processing and send it.
            raise MiddleWareValidationError("Cookie problem", error_response=response)
        return decoded_cookie

    async def enrich_response(self, response: Union[Response, Message]) -> None:
        # can access the populated context here.
        previous_cookie = context.get("session_cookie")
        response.set_cookie("PreviousSession", previous_cookie)
        response.set_cookie("Session", "SGVsbG8gV29ybGQ=")
        # mutate the response in-place, return nothing.
```
Do note that the type of the request and response arguments received depends on the middleware class used. The example shown here is valid for use with the `ContextMiddleware`, receiving built Starlette `Request` and `Response` objects.
In a `RawContextMiddleware`, the hooks will receive `HTTPConnection` and `Message` objects passed as arguments.
## ContextDoesNotExistError
You will see this error whenever you try to access the `context` object outside of the request-response cycle. To be more specific:
1. `ContextVar` store not created. `RawContextMiddleware` uses `ContextVar` to create a storage that will be available within the request-response cycle. So you will see the error if you try to access this object before using `RawContextMiddleware` (e.g. in another middleware), which instantiates the `ContextVar` that belongs to an event in the event loop.
2. Wrong order of middlewares
```
class FirstMiddleware(BaseHTTPMiddleware): pass # can't access context
class SecondMiddleware(RawContextMiddleware): pass # creates a context and can add into it
class ThirdContextMiddleware(BaseHTTPMiddleware): pass # can access context
middlewares = [
Middleware(FirstMiddleware),
Middleware(SecondMiddleware),
Middleware(ThirdContextMiddleware),
]
app = Starlette(debug=True, middleware=middlewares)
```
As stated above, the order of middlewares matters. If you want to read more about the order of execution of middlewares, have a look at #479.
Note that the contents of this `context` object are gone once the response passes `SecondMiddleware` in this example.
3. Outside of the request-response cycle. Depending on how you set up your logging, it's possible that your server (`uvicorn`) or other 3rd-party loggers will sometimes be able to access `context` and sometimes not. You might want to check `context.exists()` to log it only if it's available. This also applies to your tests: you can't send a request, get the response, and then check what's in the context object. For that you'd either have to use some ugly mocking or return the context in the response as a `dict`.
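A small sketch of the `context.exists()` check mentioned above (the helper name is made up), for including context data in logs only when it is available:
```
from starlette_context import context

def context_log_extra() -> dict:
    # only read the context inside a request-response cycle
    return context.data if context.exists() else {}
```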
As part of your application logic flow, you will come to functions that expect the context to be available with specific data at runtime.
Testing such functions, while perfectly valid during usual server runtime, may be trickier in a testing environment, as a test is not an actual request-response cycle. While it can be done in a full-blown integration test, if you want to test the functionality of your function extensively while keeping a limited scope in unit tests, you can use the `request_cycle_context` context manager.
```
import logging
from starlette_context import context, request_cycle_context
# original function assuming a context available
def original_function():
    client_id = context["x-client-id"]
    return client_id

# test
def test_my_function():
    assumed_context = {"x-client-id": "unit testing!"}
    with request_cycle_context(assumed_context):
        assert original_function() == "unit testing!"
```
Or using pytest fixture
```
import pytest

from starlette_context import context, request_cycle_context
from starlette_context.ctx import _Context
from starlette_context.errors import ConfigurationError

@pytest.fixture
def ctx_store():
    return {"a": 0, "b": 1, "c": 2}

@pytest.fixture
def mocked_context(ctx_store) -> None:
    with request_cycle_context(ctx_store):
        yield context

def test_my_function(mocked_context, ctx_store):
    assert mocked_context == ctx_store
```
A runnable example can be found under `example` in the repo.
```
import structlog
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.middleware.base import (
BaseHTTPMiddleware,
RequestResponseEndpoint,
)
from starlette.requests import Request
from starlette.responses import JSONResponse, Response
from starlette_context import context, plugins
from starlette_context.middleware import RawContextMiddleware
logger = structlog.get_logger("starlette_context_example")
class LoggingMiddleware(BaseHTTPMiddleware):
    """Example logging middleware."""

    async def dispatch(
        self, request: Request, call_next: RequestResponseEndpoint
    ) -> Response:
        await logger.info("request log", request=request)
        response = await call_next(request)
        await logger.info("response log", response=response)
        return response

middlewares = [
    Middleware(
        RawContextMiddleware,
        plugins=(
            plugins.CorrelationIdPlugin(),
            plugins.RequestIdPlugin(),
        ),
    ),
    Middleware(LoggingMiddleware),
]

app = Starlette(debug=True, middleware=middlewares)

@app.on_event("startup")
async def startup_event() -> None:
    from setup_logging import setup_logging

    setup_logging()

@app.route("/")
async def index(request: Request):
    context["something else"] = "This will be visible even in the response log"
    await logger.info("log from view")
    return JSONResponse(context.data)
```
Although FastAPI is built on top of Starlette, its popularity justifies having a section dedicated to FastAPI. As both are built on top of the ASGI standard, the `starlette_context` library is compatible with the FastAPI framework.
It can be used in the same way, with the same middlewares, as a regular Starlette application.
FastAPI, however, offers another interesting feature with its Depends system and auto-generated OpenAPI documentation. Using a middleware escapes this documentation generation, so if your app requires some specific headers from a middleware, those would not appear in your API documentation, which is quite unfortunate.
FastAPI `Depends` offers a way to solve this issue.
Instead of using middlewares, you can use a common Dependency, taking the data it needs from the request there, all while documenting it.
A FastAPI Depends with a `yield` can play a similar role to a middleware in managing the context, allowing code to be executed before as well as after the request.
You can find more information regarding this usage in the FastAPI documentation. The same context manager presented in Testing can be used to create this Depends.
As an upside, errors raised there follow the regular error handling.
As a downside, it cannot use the plugins system for middlewares, so any extraction of data from headers or elsewhere must be implemented yourself, respecting FastAPI usage.
```
from starlette_context import context, request_cycle_context
from fastapi import FastAPI, Depends, HTTPException, Header
async def my_context_dependency(x_client_id=Header(...)):
    # When used as a Depends(), this function gets the `X-Client-ID` header,
    # which will be documented as a required header by FastAPI.
    # use `x_client_id: str = Header(None)` for an optional header.
    data = {"x_client_id": x_client_id}
    with request_cycle_context(data):
        # yield allows it to pass along to the rest of the request
        yield

# use it as Depends across the whole FastAPI app
app = FastAPI(dependencies=[Depends(my_context_dependency)])

@app.get("/")
async def hello():
    client = context["x_client_id"]
    return f"hello {client}"
```
I’m very happy with all the tickets you open. Feel free to open PRs if you feel like it. If you’ve found a bug but don’t want to get involved, that’s more than ok and I’d appreciate such ticket as well.
* If you have opened a PR it can’t be merged until CI passed. Stuff that is checked:
*
codecov has to be kept at 100%
*
pre commit hooks consist of flake8 and mypy, so consider installing hooks before commiting. Otherwise CI might fail
Sometimes one pre-commit hook will affect another so you will run them a few times.
You can run tests with docker of with venv.
## With docker
With docker, run tests with `make testdocker`.
If you want to plug the docker env into your IDE, run the `tests` service from `docker-compose.yml`.
## Local setup
Running `make init` will create a local venv with dependencies. Then you can `make test` or plug the venv into your IDE.
MIT License
Copyright (c) 2022 <NAME>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
This document records all notable changes to starlette-context. This project adheres to Semantic Versioning.
Latest release
## 0.4.0
Release date: TBA
remove ContextMiddleware https://github.com/tomwojcik/starlette-context/issues/47
## 0.3.6
Release date: February 16, 2023
* fix for being unable to catch some exceptions with a try/except due to the base exc inheriting from BaseException (Thanks @soundstripe) https://github.com/tomwojcik/starlette-context/issues/90
* minimal Python version required is now 3.8
## 0.3.5
Release date: November 26, 2022
fix for accessing the context in error handlers (Thanks @hhamana) https://github.com/tomwojcik/starlette-context/issues/74
## 0.3.4
Release date: June 22, 2022
* add request_cycle_context. It's a context manager that allows for easier testing and cleaner code (Thanks @hhamana) https://github.com/tomwojcik/starlette-context/issues/46
* fix for accessing context during logging, outside of the request-response cycle. Technically it should raise an exception, but it makes sense to include the context by default (in logs) and if it's not available, some logs are better than no logs. Now it will show context data if context is available, with a fallback to an empty dict (instead of raising an exc) https://github.com/tomwojcik/starlette-context/issues/65
* add `ContextMiddleware` deprecation warning
* `**context` context unpacking seems to be working now
## 0.3.3
Release date: June 28, 2021
add support for custom error responses if error occurred in plugin / middleware -> fix for 500 (Thanks @hhamana)
*
better (custom) exceptions with a base StarletteContextError (Thanks @hhamana)
## 0.3.2
Release date: April 22, 2021
* `ContextDoesNotExistError` is raised when the context object can't be accessed. Previously it was `RuntimeError`. For backwards compatibility, it inherits from `RuntimeError` so it shouldn't result in any regressions.
* Added `py.typed` file so your mypy should never complain (Thanks @ginomempin)
## 0.3.1
Release date: October 17, 2020
add
`ApiKeyPlugin` plugin for `X-API-Key` header
## 0.3.0
Release date: October 10, 2020
* add `RawContextMiddleware` for `Streaming` and `File` responses
* add flake8, isort, mypy
* small refactor of the base plugin, moved directories and removed one redundant method (potentially breaking changes)
## 0.2.3
Release date: July 27, 2020
add docs on read the docs
fix bug with
`force_new_uuid=True` returning the same uuid constantly
due to ^ a lot of tests had to be refactored as well
## 0.2.2
Release date: Apr 26, 2020
for correlation id and request id plugins, add support for enforcing the generation of a new value
for ^ plugins add support for validating uuid. It’s a default behavior so will break things for people who don’t use uuid4 there. If you don’t want this validation, you need to pass validate=False to the plugin
you can now check if context is available (Thanks @VukW)
## 0.2.1
Release date: Apr 18, 2020
dropped with_plugins from the middleware as Starlette has it’s own way of doing this
due to ^ this change some tests are simplified
if context is not available no LookupError will be raised, instead there will be RuntimeError, because this error might mean one of two things: user either didn’t use ContextMiddleware or is trying to access context object outside of request-response cycle
## 0.2.0
Release date: Feb 21, 2020
changed parent of context object. More or less the API is the same but due to this change the implementation itself is way more simple and now it’s possible to use .items() or keys() like in a normal dict, out of the box. Still, unpacking
`**kwargs` is not supported and I don’t think it ever will be. I tried to inherit from the builtin dict but nothing good came out of this. Now you access context as dict using context.data, not context.dict()
there was an issue related to not having awaitable plugins. Now both middleware and plugins are fully async compatible. It’s a breaking change as it forces to use await, hence new minor version
## 0.1.6
Release date: Jan 2, 2020
breaking changes
one middleware, one context, multiple plugins for middleware
very easy testing and writing custom plugins
## 0.1.5
Release date: Jan 1, 2020
lint
tests (100% cov)
separate class for header constants
BasicContextMiddleware add some logic
## 0.1.4
Release date: Dec 31, 2019
get_many in context object
cicd improvements
type annotations
### mvp until 0.1.4
experiments and tests with ContextVar
|
python-baremetrikcs | readthedoc | Unknown | Python wrapper for Baremetrics API
* Free software: Apache Software License 2.0
* Documentation: https://python-baremetrics.readthedocs.io.
## Features¶
* TODO
## Credits¶
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
## Stable release¶
To install python-baremetrics, run this command in your terminal:
```
$ pip install python_baremetrics
```
This is the preferred method to install python-baremetrics, as it will always install the most recent stable release.
If you don’t have pip installed, this Python installation guide can guide you through the process.
## From sources¶
The sources for python-baremetrics can be downloaded from the Github repo.
You can either clone the public repository:
```
$ git clone git://github.com/budurli/python_baremetrics
```
Or download the tarball:
```
$ curl -OL https://github.com/budurli/python_baremetrics/tarball/master
```
Once you have a copy of the source, you can install it with:
```
$ python setup.py install
```
# Usage¶
To use python-baremetrics in a project: `import python_baremetrics`
Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.
You can contribute in many ways:
## Types of Contributions¶
### Report Bugs¶
Report bugs at https://github.com/budurli/python_baremetrics/issues.
If you are reporting a bug, please include:
* Your operating system name and version.
* Any details about your local setup that might be helpful in troubleshooting.
* Detailed steps to reproduce the bug.
### Fix Bugs¶
### Implement Features¶
### Write Documentation¶
python-baremetrics could always use more documentation, whether as part of the official python-baremetrics docs, in docstrings, or even on the web in blog posts, articles, and such.
### Submit Feedback¶
The best way to send feedback is to file an issue at https://github.com/budurli/python_baremetrics/issues.
If you are proposing a feature:
* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that contributions are welcome :)
## Get Started!¶
Ready to contribute? Here’s how to set up python_baremetrics for local development.
Fork the python_baremetrics repo on GitHub.
*
Clone your fork locally:
> $ git clone <EMAIL>:your_name_here/python_baremetrics.git
*
Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development:
```
$ mkvirtualenv python_baremetrics
$ cd python_baremetrics/
$ python setup.py develop
```
Create a branch for local development:
> $ git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
*
When you’re done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox:
```
$ flake8 python_baremetrics tests
$ python setup.py test or py.test
$ tox
```
To get flake8 and tox, just pip install them into your virtualenv.
*
Commit your changes and push your branch to GitHub:
```
$ git add .
$ git commit -m "Your detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature
```
Submit a pull request through the GitHub website.
## Pull Request Guidelines¶
Before you submit a pull request, check that it meets these guidelines:
* The pull request should include tests.
* If the pull request adds functionality, the docs should be updated. Put your new functionality into a function with a docstring, and add the feature to the list in README.rst.
* The pull request should work for Python 2.6, 2.7, 3.3, 3.4 and 3.5, and for PyPy. Check https://travis-ci.org/budurli/python_baremetrics/pull_requests and make sure that the tests pass for all supported Python versions.
## Tips¶
To run a subset of tests:
```
$ python -m unittest tests.test_python_baremetrics
```
# Credits¶
## Development Lead¶
<NAME> <smirnoffmg@gmail.com>
## Contributors¶
None yet. Why not be the first?
# History¶
## 0.1.0 (2017-07-05)¶
First release on PyPI.
|
gopkg.in/cheggaaa/pb.v2 | go | Go | README
[¶](#section-readme)
---
### Warning
Please use the v3 version `go get github.com/cheggaaa/pb/v3` - it's a continuation of v2, but in the master branch and with support for Go modules
Terminal progress bar for Go
===
[![Coverage Status](https://coveralls.io/repos/github/cheggaaa/pb/badge.svg?branch=v2)](https://coveralls.io/github/cheggaaa/pb?branch=v2)
### It's beta, some features may be changed
This is a proposal for the second version of the progress bar
* based on text/template
* can take custom elements
* using colors is easy
Installation
---
```
go get gopkg.in/cheggaaa/pb.v2
```
Usage
---
```
package main

import (
	"gopkg.in/cheggaaa/pb.v2"
	"time"
)

func main() {
	simple()
	fromPreset()
	customTemplate(`Custom template: {{counters . }}`)
	customTemplate(`{{ red "With colors:" }} {{bar . | green}} {{speed . | blue }}`)
	customTemplate(`{{ red "With funcs:" }} {{ bar . "<" "-" (cycle . "↖" "↗" "↘" "↙" ) "." ">"}} {{speed . | rndcolor }}`)
	customTemplate(`{{ bar . "[<" "·····•·····" (rnd "ᗧ" "◔" "◕" "◷" ) "•" ">]"}}`)
}

func simple() {
	count := 1000
	bar := pb.StartNew(count)
	for i := 0; i < count; i++ {
		bar.Increment()
		time.Sleep(time.Millisecond * 2)
	}
	bar.Finish()
}

func fromPreset() {
	count := 1000
	//bar := pb.Default.Start(total)
	//bar := pb.Simple.Start(total)
	bar := pb.Full.Start(count)
	defer bar.Finish()
	bar.Set("prefix", "fromPreset(): ")
	for i := 0; i < count/2; i++ {
		bar.Add(2)
		time.Sleep(time.Millisecond * 4)
	}
}

func customTemplate(tmpl string) {
	count := 1000
	bar := pb.ProgressBarTemplate(tmpl).Start(count)
	defer bar.Finish()
	for i := 0; i < count/2; i++ {
		bar.Add(2)
		time.Sleep(time.Millisecond * 4)
	}
}
```
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [func CellCount(s string) int](#CellCount)
* [func RegisterElement(name string, el Element, adaptive bool)](#RegisterElement)
* [func StripString(s string, w int) string](#StripString)
* [func StripStringToBuffer(s string, w int, buf *bytes.Buffer)](#StripStringToBuffer)
* [type Element](#Element)
* [type ElementFunc](#ElementFunc)
* + [func (e ElementFunc) ProgressElement(state *State, args ...string) string](#ElementFunc.ProgressElement)
* [type ProgressBar](#ProgressBar)
* + [func New(total int) *ProgressBar](#New)
+ [func New64(total int64) *ProgressBar](#New64)
+ [func Start64(total int64) *ProgressBar](#Start64)
+ [func StartNew(total int) *ProgressBar](#StartNew)
* + [func (pb *ProgressBar) Add(value int) *ProgressBar](#ProgressBar.Add)
+ [func (pb *ProgressBar) Add64(value int64) *ProgressBar](#ProgressBar.Add64)
+ [func (pb *ProgressBar) Current() int64](#ProgressBar.Current)
+ [func (pb *ProgressBar) Err() error](#ProgressBar.Err)
+ [func (pb *ProgressBar) Finish() *ProgressBar](#ProgressBar.Finish)
+ [func (pb *ProgressBar) Format(v int64) string](#ProgressBar.Format)
+ [func (pb *ProgressBar) Get(key interface{}) interface{}](#ProgressBar.Get)
+ [func (pb *ProgressBar) GetBool(key interface{}) bool](#ProgressBar.GetBool)
+ [func (pb *ProgressBar) Increment() *ProgressBar](#ProgressBar.Increment)
+ [func (pb *ProgressBar) IsStarted() bool](#ProgressBar.IsStarted)
+ [func (pb *ProgressBar) NewProxyReader(r io.Reader) *Reader](#ProgressBar.NewProxyReader)
+ [func (pb *ProgressBar) ProgressElement(s *State, args ...string) string](#ProgressBar.ProgressElement)
+ [func (pb *ProgressBar) Set(key, value interface{}) *ProgressBar](#ProgressBar.Set)
+ [func (pb *ProgressBar) SetCurrent(value int64) *ProgressBar](#ProgressBar.SetCurrent)
+ [func (pb *ProgressBar) SetErr(err error) *ProgressBar](#ProgressBar.SetErr)
+ [func (pb *ProgressBar) SetRefreshRate(dur time.Duration) *ProgressBar](#ProgressBar.SetRefreshRate)
+ [func (pb *ProgressBar) SetTemplate(tmpl ProgressBarTemplate) *ProgressBar](#ProgressBar.SetTemplate)
+ [func (pb *ProgressBar) SetTemplateString(tmpl string) *ProgressBar](#ProgressBar.SetTemplateString)
+ [func (pb *ProgressBar) SetTotal(value int64) *ProgressBar](#ProgressBar.SetTotal)
+ [func (pb *ProgressBar) SetWidth(width int) *ProgressBar](#ProgressBar.SetWidth)
+ [func (pb *ProgressBar) SetWriter(w io.Writer) *ProgressBar](#ProgressBar.SetWriter)
+ [func (pb *ProgressBar) Start() *ProgressBar](#ProgressBar.Start)
+ [func (pb *ProgressBar) StartTime() time.Time](#ProgressBar.StartTime)
+ [func (pb *ProgressBar) String() string](#ProgressBar.String)
+ [func (pb *ProgressBar) Total() int64](#ProgressBar.Total)
+ [func (pb *ProgressBar) Width() (width int)](#ProgressBar.Width)
+ [func (pb *ProgressBar) Write() *ProgressBar](#ProgressBar.Write)
* [type ProgressBarTemplate](#ProgressBarTemplate)
* + [func (pbt ProgressBarTemplate) New(total int) *ProgressBar](#ProgressBarTemplate.New)
+ [func (pbt ProgressBarTemplate) Start(total int) *ProgressBar](#ProgressBarTemplate.Start)
+ [func (pbt ProgressBarTemplate) Start64(total int64) *ProgressBar](#ProgressBarTemplate.Start64)
* [type Reader](#Reader)
* + [func (r *Reader) Close() (err error)](#Reader.Close)
+ [func (r *Reader) Read(p []byte) (n int, err error)](#Reader.Read)
* [type State](#State)
* + [func (s *State) AdaptiveElWidth() int](#State.AdaptiveElWidth)
+ [func (s *State) Id() uint64](#State.Id)
+ [func (s *State) IsAdaptiveWidth() bool](#State.IsAdaptiveWidth)
+ [func (s *State) IsFinished() bool](#State.IsFinished)
+ [func (s *State) IsFirst() bool](#State.IsFirst)
+ [func (s *State) Time() time.Time](#State.Time)
+ [func (s *State) Total() int64](#State.Total)
+ [func (s *State) Value() int64](#State.Value)
+ [func (s *State) Width() int](#State.Width)
### Constants [¶](#pkg-constants)
```
const (
// Bytes means we're working with byte sizes. Numbers will print as Kb, Mb, etc
// bar.Set(pb.Bytes, true)
Bytes key = 1 << [iota](/builtin#iota)
// Terminal means we will print to a terminal and can use ANSI escape sequences
// Also we will try to use the terminal width
Terminal
// Static means the progress bar will not update automatically
Static
// ReturnSymbol - by default in terminal mode it's '\r'
ReturnSymbol
// Color is true by default when the output is a tty; set it to false to disable colors
Color
)
```
```
const Version = "2.0.6"
```
Version of ProgressBar library
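A minimal, hypothetical sketch of how these flags might be set on a bar; the total and chunk sizes are made-up values:
```
package main

import (
    "time"

    "gopkg.in/cheggaaa/pb.v2"
)

func main() {
    total := int64(1 << 20) // assumed total of 1 MiB
    bar := pb.New64(total)
    bar.Set(pb.Bytes, true) // print values as byte sizes (Kb, Mb, ...)
    bar.Start()
    for done := int64(0); done < total; done += 64 << 10 {
        bar.Add64(64 << 10) // pretend a 64 KiB chunk was processed
        time.Sleep(10 * time.Millisecond)
    }
    bar.Finish()
}
```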
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [CellCount](https://github.com/cheggaaa/pb/blob/v2.0.7/util.go#L21) [¶](#CellCount)
added in v2.0.2
```
func CellCount(s [string](/builtin#string)) [int](/builtin#int)
```
####
func [RegisterElement](https://github.com/cheggaaa/pb/blob/v2.0.7/element.go#L47) [¶](#RegisterElement)
```
func RegisterElement(name [string](/builtin#string), el [Element](#Element), adaptive [bool](/builtin#bool))
```
RegisterElement gives you a chance to use custom elements
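A hedged sketch of what registering a custom element could look like; the element name "done" and the template are invented for illustration, assuming registered elements become usable in templates by name:
```
package main

import (
    "fmt"

    "gopkg.in/cheggaaa/pb.v2"
)

func main() {
    // A simple, non-adaptive custom element built from ElementFunc.
    done := pb.ElementFunc(func(state *pb.State, args ...string) string {
        return fmt.Sprintf("done %d of %d", state.Value(), state.Total())
    })
    pb.RegisterElement("done", done, false)

    // The registered name can then be referenced inside a template.
    bar := pb.ProgressBarTemplate(`{{done . }} {{bar . }}`).Start(100)
    defer bar.Finish()
    for i := 0; i < 100; i++ {
        bar.Increment()
    }
}
```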
####
func [StripString](https://github.com/cheggaaa/pb/blob/v2.0.7/util.go#L29) [¶](#StripString)
added in v2.0.2
```
func StripString(s [string](/builtin#string), w [int](/builtin#int)) [string](/builtin#string)
```
####
func [StripStringToBuffer](https://github.com/cheggaaa/pb/blob/v2.0.7/util.go#L39) [¶](#StripStringToBuffer)
added in v2.0.2
```
func StripStringToBuffer(s [string](/builtin#string), w [int](/builtin#int), buf *[bytes](/bytes).[Buffer](/bytes#Buffer))
```
### Types [¶](#pkg-types)
####
type [Element](https://github.com/cheggaaa/pb/blob/v2.0.7/element.go#L21) [¶](#Element)
```
type Element interface {
ProgressElement(state *[State](#State), args ...[string](/builtin#string)) [string](/builtin#string)
}
```
Element is an interface for bar elements
####
type [ElementFunc](https://github.com/cheggaaa/pb/blob/v2.0.7/element.go#L26) [¶](#ElementFunc)
```
type ElementFunc func(state *[State](#State), args ...[string](/builtin#string)) [string](/builtin#string)
```
The ElementFunc type implements the Element interface and was created to simplify element definitions
```
var ElementBar [ElementFunc](#ElementFunc) = func(state *[State](#State), args ...[string](/builtin#string)) [string](/builtin#string) {
// init
var p = getProgressObj(state, args...)
total, value := state.Total(), state.Value()
if total < 0 {
total = -total
}
if value < 0 {
value = -value
}
if total != 0 && value > total {
total = value
}
p.buf.Reset()
var widthLeft = state.AdaptiveElWidth()
if widthLeft <= 0 || !state.IsAdaptiveWidth() {
widthLeft = 30
}
if p.cc[0] < widthLeft {
widthLeft -= p.write(state, 0, p.cc[0])
} else {
p.write(state, 0, widthLeft)
return p.buf.String()
}
if p.cc[4] < widthLeft {
widthLeft -= p.cc[4]
} else {
p.write(state, 4, widthLeft)
return p.buf.String()
}
var curCount [int](/builtin#int)
if total > 0 {
curCount = [int](/builtin#int)([math](/math).[Ceil](/math#Ceil)(([float64](/builtin#float64)(value) / [float64](/builtin#float64)(total)) * [float64](/builtin#float64)(widthLeft)))
}
if total == value && state.IsFinished() {
widthLeft -= p.write(state, 1, curCount)
} else if toWrite := curCount - p.cc[2]; toWrite > 0 {
widthLeft -= p.write(state, 1, toWrite)
widthLeft -= p.write(state, 2, p.cc[2])
} else if curCount > 0 {
widthLeft -= p.write(state, 2, curCount)
}
if widthLeft > 0 {
widthLeft -= p.write(state, 3, widthLeft)
}
p.write(state, 4, p.cc[4])
return p.buf.String()
}
```
ElementBar renders the progress bar view [-->__]
It can optionally take up to 5 string arguments. Defaults are "[", "-", ">", "_", "]"
In template use as follows: {{bar . }} or {{bar . "<" "oOo" "|" "~" ">"}}
Color args: {{bar . (red "[") (green "-") ...
```
var ElementCounters [ElementFunc](#ElementFunc) = func(state *[State](#State), args ...[string](/builtin#string)) [string](/builtin#string) {
var f [string](/builtin#string)
if state.Total() > 0 {
f = argsHelper(args).getNotEmptyOr(0, "%s / %s")
} else {
f = argsHelper(args).getNotEmptyOr(1, "%[1]s")
}
return [fmt](/fmt).[Sprintf](/fmt#Sprintf)(f, state.Format(state.Value()), state.Format(state.Total()))
}
```
ElementCounters shows current and total values.
Optionally can take one or two string arguments.
First string will be used as format value when Total is present (>0). Default is "%s / %s"
Second string will be used when total <= 0. Default is "%[1]s"
In template use as follows: {{counters .}} or {{counters . "%s/%s"}} or {{counters . "%s/%s" "%s/?"}}
```
var ElementCycle [ElementFunc](#ElementFunc) = func(state *[State](#State), args ...[string](/builtin#string)) [string](/builtin#string) {
if [len](/builtin#len)(args) == 0 {
return ""
}
n, _ := state.Get(cycleObj).([int](/builtin#int))
if n >= [len](/builtin#len)(args) {
n = 0
}
state.Set(cycleObj, n+1)
return args[n]
}
```
ElementCycle returns the next argument on every call. In template use as follows: {{cycle . "1" "2" "3"}}
Or mix with other elements: {{ bar . "" "" (cycle . "↖" "↗" "↘" "↙" )}}
```
var ElementElapsedTime [ElementFunc](#ElementFunc) = func(state *[State](#State), args ...[string](/builtin#string)) [string](/builtin#string) {
etm := state.Time().Truncate([time](/time).[Second](/time#Second)).Sub(state.StartTime().Truncate([time](/time).[Second](/time#Second)))
return [fmt](/fmt).[Sprintf](/fmt#Sprintf)(argsHelper(args).getOr(0, "%s"), etm.String())
}
```
ElementElapsedTime shows the elapsed time. Optionally it can take one argument: the format for the time string.
In template use as follows: {{etime .}} or {{etime . "%s elapsed"}}
```
var ElementPercent [ElementFunc](#ElementFunc) = func(state *[State](#State), args ...[string](/builtin#string)) [string](/builtin#string) {
argsh := argsHelper(args)
if state.Total() > 0 {
return [fmt](/fmt).[Sprintf](/fmt#Sprintf)(
argsh.getNotEmptyOr(0, "%.02f%%"),
[float64](/builtin#float64)(state.Value())/([float64](/builtin#float64)(state.Total())/[float64](/builtin#float64)(100)),
)
}
return argsh.getOr(1, "?%")
}
```
ElementPercent shows current percent of progress.
Optionally can take one or two string arguments.
First string will be used as value for format float64, default is "%.02f%%".
Second string will be used when percent can't be calculated, default is "?%"
In template use as follows: {{percent .}} or {{percent . "%.03f%%"}} or {{percent . "%.03f%%" "?"}}
```
var ElementRemainingTime [ElementFunc](#ElementFunc) = func(state *[State](#State), args ...[string](/builtin#string)) [string](/builtin#string) {
var rts [string](/builtin#string)
sp := getSpeedObj(state).value(state)
if !state.IsFinished() {
if sp > 0 {
remain := [float64](/builtin#float64)(state.Total() - state.Value())
remainDur := [time](/time).[Duration](/time#Duration)(remain/sp) * [time](/time).[Second](/time#Second)
rts = remainDur.String()
} else {
return argsHelper(args).getOr(2, "?")
}
} else {
rts = state.Time().Truncate([time](/time).[Second](/time#Second)).Sub(state.StartTime().Truncate([time](/time).[Second](/time#Second))).String()
return [fmt](/fmt).[Sprintf](/fmt#Sprintf)(argsHelper(args).getOr(1, "%s"), rts)
}
return [fmt](/fmt).[Sprintf](/fmt#Sprintf)(argsHelper(args).getOr(0, "%s"), rts)
}
```
ElementRemainingTime calculates remaining time based on speed (EWMA)
Optionally can take one or two string arguments.
First string will be used as value for format time duration string, default is "%s".
Second string will be used when bar finished and value indicates elapsed time, default is "%s"
Third string will be used when value not available, default is "?"
In template use as follows: {{rtime .}} or {{rtime . "%s remain"}} or {{rtime . "%s remain" "%s total" "???"}}
```
var ElementSpeed [ElementFunc](#ElementFunc) = func(state *[State](#State), args ...[string](/builtin#string)) [string](/builtin#string) {
sp := getSpeedObj(state).value(state)
if sp == 0 {
return argsHelper(args).getNotEmptyOr(1, "? p/s")
}
return [fmt](/fmt).[Sprintf](/fmt#Sprintf)(argsHelper(args).getNotEmptyOr(0, "%s p/s"), state.Format([int64](/builtin#int64)(round(sp))))
}
```
ElementSpeed calculates the current speed by EWMA. Optionally it can take one or two string arguments.
First string will be used as the format value for the speed, default is "%s p/s".
Second string will be used when the speed is not available, default is "? p/s".
In template use as follows: {{speed .}} or {{speed . "%s per second"}} or {{speed . "%s ps" "..."}}
```
var ElementString [ElementFunc](#ElementFunc) = func(state *[State](#State), args ...[string](/builtin#string)) [string](/builtin#string) {
if [len](/builtin#len)(args) == 0 {
return ""
}
v := state.Get(args[0])
if v == [nil](/builtin#nil) {
return ""
}
return [fmt](/fmt).[Sprint](/fmt#Sprint)(v)
}
```
ElementString gets a value from the bar by the given key and prints it, e.g. bar.Set("myKey", "string to print")
In template use as follows: {{string . "myKey"}}
####
func (ElementFunc) [ProgressElement](https://github.com/cheggaaa/pb/blob/v2.0.7/element.go#L29) [¶](#ElementFunc.ProgressElement)
```
func (e [ElementFunc](#ElementFunc)) ProgressElement(state *[State](#State), args ...[string](/builtin#string)) [string](/builtin#string)
```
ProgressElement just calls the function itself
####
type [ProgressBar](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L77) [¶](#ProgressBar)
```
type ProgressBar struct {
// contains filtered or unexported fields
}
```
ProgressBar is the main object of bar
####
func [New](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L50) [¶](#New)
added in v2.0.3
```
func New(total [int](/builtin#int)) *[ProgressBar](#ProgressBar)
```
New creates new ProgressBar object
####
func [New64](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L55) [¶](#New64)
added in v2.0.3
```
func New64(total [int64](/builtin#int64)) *[ProgressBar](#ProgressBar)
```
New64 creates new ProgressBar object using int64 as total
####
func [Start64](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L66) [¶](#Start64)
added in v2.0.3
```
func Start64(total [int64](/builtin#int64)) *[ProgressBar](#ProgressBar)
```
Start64 starts new ProgressBar with Default template. Using int64 as total.
####
func [StartNew](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L61) [¶](#StartNew)
added in v2.0.4
```
func StartNew(total [int](/builtin#int)) *[ProgressBar](#ProgressBar)
```
StartNew starts new ProgressBar with Default template
####
func (*ProgressBar) [Add](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L241) [¶](#ProgressBar.Add)
```
func (pb *[ProgressBar](#ProgressBar)) Add(value [int](/builtin#int)) *[ProgressBar](#ProgressBar)
```
Add adds the given int value to the bar value
####
func (*ProgressBar) [Add64](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L235) [¶](#ProgressBar.Add64)
```
func (pb *[ProgressBar](#ProgressBar)) Add64(value [int64](/builtin#int64)) *[ProgressBar](#ProgressBar)
```
Add64 adds the given int64 value to the bar value
####
func (*ProgressBar) [Current](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L230) [¶](#ProgressBar.Current)
added in v2.0.2
```
func (pb *[ProgressBar](#ProgressBar)) Current() [int64](/builtin#int64)
```
Current returns the current bar value
####
func (*ProgressBar) [Err](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L460) [¶](#ProgressBar.Err)
```
func (pb *[ProgressBar](#ProgressBar)) Err() [error](/builtin#error)
```
Err returns a possible error. When everything is OK it will be nil. May contain template.Execute errors.
####
func (*ProgressBar) [Finish](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L345) [¶](#ProgressBar.Finish)
```
func (pb *[ProgressBar](#ProgressBar)) Finish() *[ProgressBar](#ProgressBar)
```
Finish stops the bar
####
func (*ProgressBar) [Format](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L337) [¶](#ProgressBar.Format)
```
func (pb *[ProgressBar](#ProgressBar)) Format(v [int64](/builtin#int64)) [string](/builtin#string)
```
Format converts an int64 to a string according to the current settings
####
func (*ProgressBar) [Get](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L262) [¶](#ProgressBar.Get)
```
func (pb *[ProgressBar](#ProgressBar)) Get(key interface{}) interface{}
```
Get returns a value by key
####
func (*ProgressBar) [GetBool](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L273) [¶](#ProgressBar.GetBool)
```
func (pb *[ProgressBar](#ProgressBar)) GetBool(key interface{}) [bool](/builtin#bool)
```
GetBool returns a value by key and tries to convert it to a boolean. If the value is not set or is not a boolean, it returns false.
####
func (*ProgressBar) [Increment](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L246) [¶](#ProgressBar.Increment)
```
func (pb *[ProgressBar](#ProgressBar)) Increment() *[ProgressBar](#ProgressBar)
```
Increment atomically increments the progress
####
func (*ProgressBar) [IsStarted](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L365) [¶](#ProgressBar.IsStarted)
added in v2.0.2
```
func (pb *[ProgressBar](#ProgressBar)) IsStarted() [bool](/builtin#bool)
```
IsStarted reports whether the progress bar has been started
####
func (*ProgressBar) [NewProxyReader](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L387) [¶](#ProgressBar.NewProxyReader)
added in v2.0.5
```
func (pb *[ProgressBar](#ProgressBar)) NewProxyReader(r [io](/io).[Reader](/io#Reader)) *[Reader](#Reader)
```
NewProxyReader creates a wrapper for the given reader that updates the progress as it is read. It takes an io.Reader or io.ReadCloser. It also automatically switches the progress bar to handle units as bytes.
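A hedged sketch of wrapping a reader; the file name is hypothetical and io.Discard stands in for a real destination (Go 1.16+):
```
package main

import (
    "io"
    "os"

    "gopkg.in/cheggaaa/pb.v2"
)

func main() {
    src, err := os.Open("archive.tar.gz") // assumed input file
    if err != nil {
        panic(err)
    }
    defer src.Close()

    info, err := src.Stat()
    if err != nil {
        panic(err)
    }

    bar := pb.Start64(info.Size())
    reader := bar.NewProxyReader(src) // the bar switches to byte units automatically

    if _, err := io.Copy(io.Discard, reader); err != nil {
        panic(err)
    }
    bar.Finish()
}
```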
####
func (*ProgressBar) [ProgressElement](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L473) [¶](#ProgressBar.ProgressElement)
```
func (pb *[ProgressBar](#ProgressBar)) ProgressElement(s *[State](#State), args ...[string](/builtin#string)) [string](/builtin#string)
```
ProgressElement implements Element interface
####
func (*ProgressBar) [Set](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L251) [¶](#ProgressBar.Set)
```
func (pb *[ProgressBar](#ProgressBar)) Set(key, value interface{}) *[ProgressBar](#ProgressBar)
```
Set sets any value by any key
####
func (*ProgressBar) [SetCurrent](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L224) [¶](#ProgressBar.SetCurrent)
```
func (pb *[ProgressBar](#ProgressBar)) SetCurrent(value [int64](/builtin#int64)) *[ProgressBar](#ProgressBar)
```
SetCurrent sets the current bar value
####
func (*ProgressBar) [SetErr](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L450) [¶](#ProgressBar.SetErr)
```
func (pb *[ProgressBar](#ProgressBar)) SetErr(err [error](/builtin#error)) *[ProgressBar](#ProgressBar)
```
SetErr sets an error on the ProgressBar. The error will be available via Err().
####
func (*ProgressBar) [SetRefreshRate](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L309) [¶](#ProgressBar.SetRefreshRate)
added in v2.0.2
```
func (pb *[ProgressBar](#ProgressBar)) SetRefreshRate(dur [time](/time).[Duration](/time#Duration)) *[ProgressBar](#ProgressBar)
```
####
func (*ProgressBar) [SetTemplate](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L380) [¶](#ProgressBar.SetTemplate)
```
func (pb *[ProgressBar](#ProgressBar)) SetTemplate(tmpl [ProgressBarTemplate](#ProgressBarTemplate)) *[ProgressBar](#ProgressBar)
```
SetTemplate sets the ProgressBarTemplate and parses it
####
func (*ProgressBar) [SetTemplateString](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L372) [¶](#ProgressBar.SetTemplateString)
added in v2.0.2
```
func (pb *[ProgressBar](#ProgressBar)) SetTemplateString(tmpl [string](/builtin#string)) *[ProgressBar](#ProgressBar)
```
SetTemplateString sets the ProgressBar template string and parses it
####
func (*ProgressBar) [SetTotal](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L218) [¶](#ProgressBar.SetTotal)
```
func (pb *[ProgressBar](#ProgressBar)) SetTotal(value [int64](/builtin#int64)) *[ProgressBar](#ProgressBar)
```
SetTotal sets the total bar value
####
func (*ProgressBar) [SetWidth](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L282) [¶](#ProgressBar.SetWidth)
```
func (pb *[ProgressBar](#ProgressBar)) SetWidth(width [int](/builtin#int)) *[ProgressBar](#ProgressBar)
```
SetWidth sets the bar width. When the given value is <= 0, the terminal width (if available) or a default value is used.
####
func (*ProgressBar) [SetWriter](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L320) [¶](#ProgressBar.SetWriter)
```
func (pb *[ProgressBar](#ProgressBar)) SetWriter(w [io](/io).[Writer](/io#Writer)) *[ProgressBar](#ProgressBar)
```
SetWriter sets the io.Writer the bar will write to. By default this is os.Stderr.
####
func (*ProgressBar) [Start](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L147) [¶](#ProgressBar.Start)
```
func (pb *[ProgressBar](#ProgressBar)) Start() *[ProgressBar](#ProgressBar)
```
Start starts the bar
####
func (*ProgressBar) [StartTime](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L330) [¶](#ProgressBar.StartTime)
```
func (pb *[ProgressBar](#ProgressBar)) StartTime() [time](/time).[Time](/time#Time)
```
StartTime returns the time when the bar started
####
func (*ProgressBar) [String](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L467) [¶](#ProgressBar.String)
```
func (pb *[ProgressBar](#ProgressBar)) String() [string](/builtin#string)
```
String returns the current string representation of the ProgressBar
####
func (*ProgressBar) [Total](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L213) [¶](#ProgressBar.Total)
```
func (pb *[ProgressBar](#ProgressBar)) Total() [int64](/builtin#int64)
```
Total returns the current total bar value
####
func (*ProgressBar) [Width](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L291) [¶](#ProgressBar.Width)
```
func (pb *[ProgressBar](#ProgressBar)) Width() (width [int](/builtin#int))
```
Width returns the bar width: the current terminal width or the value set via SetWidth.
####
func (*ProgressBar) [Write](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L181) [¶](#ProgressBar.Write)
added in v2.0.3
```
func (pb *[ProgressBar](#ProgressBar)) Write() *[ProgressBar](#ProgressBar)
```
Write performs a write to the output
####
type [ProgressBarTemplate](https://github.com/cheggaaa/pb/blob/v2.0.7/template.go#L12) [¶](#ProgressBarTemplate)
added in v2.0.2
```
type ProgressBarTemplate [string](/builtin#string)
```
ProgressBarTemplate is the template string type
```
var (
// Full - preset with all default available elements
// Example: 'Prefix 20/100 [-->______] 20% 1 p/s ETA 1m Suffix'
Full [ProgressBarTemplate](#ProgressBarTemplate) = `{{string . "prefix"}}{{counters . }} {{bar . }} {{percent . }} {{speed . }} {{rtime . "ETA %s"}}{{string . "suffix"}}`
// Default - preset like Full but without the remaining time (ETA)
// Example: 'Prefix 20/100 [-->______] 20% 1 p/s Suffix'
Default [ProgressBarTemplate](#ProgressBarTemplate) = `{{string . "prefix"}}{{counters . }} {{bar . }} {{percent . }} {{speed . }}{{string . "suffix"}}`
// Simple - preset without speed and any timers. Only counters, bar and percents
// Example: 'Prefix 20/100 [-->______] 20% Suffix'
Simple [ProgressBarTemplate](#ProgressBarTemplate) = `{{string . "prefix"}}{{counters . }} {{bar . }} {{percent . }}{{string . "suffix"}}`
)
```
####
func (ProgressBarTemplate) [New](https://github.com/cheggaaa/pb/blob/v2.0.7/template.go#L15) [¶](#ProgressBarTemplate.New)
added in v2.0.2
```
func (pbt [ProgressBarTemplate](#ProgressBarTemplate)) New(total [int](/builtin#int)) *[ProgressBar](#ProgressBar)
```
New creates new bar from template
####
func (ProgressBarTemplate) [Start](https://github.com/cheggaaa/pb/blob/v2.0.7/template.go#L25) [¶](#ProgressBarTemplate.Start)
added in v2.0.2
```
func (pbt [ProgressBarTemplate](#ProgressBarTemplate)) Start(total [int](/builtin#int)) *[ProgressBar](#ProgressBar)
```
Start creates and starts a new bar with the given int total value
####
func (ProgressBarTemplate) [Start64](https://github.com/cheggaaa/pb/blob/v2.0.7/template.go#L20) [¶](#ProgressBarTemplate.Start64)
added in v2.0.2
```
func (pbt [ProgressBarTemplate](#ProgressBarTemplate)) Start64(total [int64](/builtin#int64)) *[ProgressBar](#ProgressBar)
```
Start64 creates and starts a new bar with the given int64 total value
####
type [Reader](https://github.com/cheggaaa/pb/blob/v2.0.7/reader.go#L8) [¶](#Reader)
added in v2.0.5
```
type Reader struct {
[io](/io).[Reader](/io#Reader)
// contains filtered or unexported fields
}
```
Reader is a wrapper for a given reader that updates the progress bar as data is read
####
func (*Reader) [Close](https://github.com/cheggaaa/pb/blob/v2.0.7/reader.go#L21) [¶](#Reader.Close)
added in v2.0.5
```
func (r *[Reader](#Reader)) Close() (err [error](/builtin#error))
```
Close the wrapped reader when it implements io.Closer
####
func (*Reader) [Read](https://github.com/cheggaaa/pb/blob/v2.0.7/reader.go#L14) [¶](#Reader.Read)
added in v2.0.5
```
func (r *[Reader](#Reader)) Read(p [][byte](/builtin#byte)) (n [int](/builtin#int), err [error](/builtin#error))
```
Read reads bytes from the wrapped reader and adds the number of bytes read to the progress bar
####
type [State](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L482) [¶](#State)
```
type State struct {
*[ProgressBar](#ProgressBar)
// contains filtered or unexported fields
}
```
State represents the current state of the bar. It is needed by bar elements.
####
func (*State) [AdaptiveElWidth](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L518) [¶](#State.AdaptiveElWidth)
```
func (s *[State](#State)) AdaptiveElWidth() [int](/builtin#int)
```
AdaptiveElWidth - adaptive elements must return string with given cell count (when AdaptiveElWidth > 0)
####
func (*State) [Id](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L498) [¶](#State.Id)
added in v2.0.4
```
func (s *[State](#State)) Id() [uint64](/builtin#uint64)
```
Id is the current state identifier
- incremental
- starts with 1
- resets after finish/start
####
func (*State) [IsAdaptiveWidth](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L523) [¶](#State.IsAdaptiveWidth)
```
func (s *[State](#State)) IsAdaptiveWidth() [bool](/builtin#bool)
```
IsAdaptiveWidth returns true when element must be shown as adaptive
####
func (*State) [IsFinished](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L528) [¶](#State.IsFinished)
```
func (s *[State](#State)) IsFinished() [bool](/builtin#bool)
```
IsFinished returns true when the bar is finished
####
func (*State) [IsFirst](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L533) [¶](#State.IsFirst)
```
func (s *[State](#State)) IsFirst() [bool](/builtin#bool)
```
IsFirst returns true only on the first render
####
func (*State) [Time](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L538) [¶](#State.Time)
added in v2.0.4
```
func (s *[State](#State)) Time() [time](/time).[Time](/time#Time)
```
Time when state was created
####
func (*State) [Total](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L503) [¶](#State.Total)
```
func (s *[State](#State)) Total() [int64](/builtin#int64)
```
Total is the bar's int64 total
####
func (*State) [Value](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L508) [¶](#State.Value)
```
func (s *[State](#State)) Value() [int64](/builtin#int64)
```
Value is the current value
####
func (*State) [Width](https://github.com/cheggaaa/pb/blob/v2.0.7/pb.go#L513) [¶](#State.Width)
```
func (s *[State](#State)) Width() [int](/builtin#int)
```
Width of bar |
itemloaders | readthedoc | Unknown | itemloaders Documentation
Zyte
Apr 21, 2023
itemloaders provides a convenient mechanism for populating data records. Its design provides a flexible, efficient and easy mechanism for extending and overriding different field parsing rules, either by raw data, or by source format (HTML, XML, etc) without becoming a nightmare to maintain.
To install itemloaders, run:
pip install itemloaders

Note: Under the hood, itemloaders uses itemadapter as a common interface. This means you can use any of the types supported by itemadapter here.
Warning: dataclasses and attrs support is still experimental. Please, refer to default_item_class in the
API Reference for more information.
CHAPTER ONE: GETTING STARTED WITH ITEMLOADERS

To use an Item Loader, you must first instantiate it. You can either instantiate it with a dict-like object (item) or without one, in which case an item is automatically instantiated in the Item Loader __init__ method using the item class specified in the ItemLoader.default_item_class attribute.
Then, you start collecting values into the Item Loader, typically using CSS or XPath Selectors. You can add more than one value to the same item field; the Item Loader will know how to “join” those values later using a proper processing function.
Note: Collected data is stored internally as lists, allowing to add several values to the same field. If an item argument is passed when creating a loader, each of the item’s values will be stored as-is if it’s already an iterable, or wrapped with a list if it’s a single value.
Here is a typical Item Loader usage:
from itemloaders import ItemLoader
from parsel import Selector

html_data = '''
<!DOCTYPE html>
<html>
<head>
<title>Some random product page</title>
</head>
<body>
<div class="product_name">Some random product page</div>
<p id="price">$ 100.12</p>
</body>
</html>
'''
l = ItemLoader(selector=Selector(html_data))
l.add_xpath('name', '//div[@class="product_name"]/text()')
l.add_xpath('name', '//div[@class="product_title"]/text()')
l.add_css('price', '#price::text')
l.add_value('last_updated', 'today')  # you can also use literal values
item = l.load_item()
item
# {'name': ['Some random product page'], 'price': ['$ 100.12'], 'last_updated': ['today']}
By quickly looking at that code, we can see the name field is being extracted from two different XPath locations in the page:
1. //div[@class="product_name"]
2. //div[@class="product_title"]
In other words, data is being collected by extracting it from two XPath locations, using the add_xpath() method. This is the data that will be assigned to the name field later.
Afterwards, similar calls are used for price field using a CSS selector with the add_css() method, and finally the last_update field is populated directly with a literal value (today) using a different method: add_value().
Finally, when all data is collected, the ItemLoader.load_item() method is called which actually returns the item populated with the data previously extracted and collected with the add_xpath(), add_css(), and add_value()
calls.
1.1 Contents

1.1.1 Declaring Item Loaders

Item Loaders are declared by using a class definition syntax. Here is an example:
from itemloaders import ItemLoader
from itemloaders.processors import TakeFirst, MapCompose, Join

class ProductLoader(ItemLoader):
    default_output_processor = TakeFirst()

    name_in = MapCompose(str.title)
    name_out = Join()

    # using a built-in processor
    price_in = MapCompose(str.strip)

    # using a function
    def price_out(self, values):
        return float(values[0])

loader = ProductLoader()
loader.add_value('name', 'plasma TV')
loader.add_value('price', '999.98')
loader.load_item()
# {'name': 'Plasma Tv', 'price': 999.98}
As you can see, input processors are declared using the _in suffix while output processors are declared using the _out suffix. And you can also declare default input/output processors using the ItemLoader.default_input_processor and ItemLoader.default_output_processor attributes.
The precedence order, for both input and output processors, is as follows:
1. Item Loader field-specific attributes: field_in and field_out (most precedence)
2. Field metadata (input_processor and output_processor keys).
Check out itemadapter field metadata for more information.
New in version 1.0.1.
3. Item Loader defaults: ItemLoader.default_input_processor() and ItemLoader.default_output_processor() (least precedence)
See also: Reusing and extending Item Loaders.
1.1.2 Input and Output processors

An Item Loader contains one input processor and one output processor for each (item) field. The input processor processes the extracted data as soon as it's received (through the add_xpath(), add_css() or add_value() methods) and the result of the input processor is collected and kept inside the ItemLoader. After collecting all data, the ItemLoader.load_item() method is called to populate and get the populated item object. That's when the output processor is called with the data previously collected (and processed using the input processor). The result of the output processor is the final value that gets assigned to the item.
Let’s see an example to illustrate how the input and output processors are called for a particular field (the same applies for any other field):
l = ItemLoader(selector=some_selector)
l.add_xpath('name', xpath1) # (1)
l.add_xpath('name', xpath2) # (2)
l.add_css('name', css) # (3)
l.add_value('name', 'test') # (4)
return l.load_item() # (5)
So what happens is:
1. Data from xpath1 is extracted, and passed through the input processor of the name field. The result of the input
processor is collected and kept in the Item Loader (but not yet assigned to the item).
2. Data from xpath2 is extracted, and passed through the same input processor used in (1). The result of the input
processor is appended to the data collected in (1) (if any).
3. This case is similar to the previous ones, except that the data is extracted from the css CSS selector, and passed
through the same input processor used in (1) and (2). The result of the input processor is appended to the data
collected in (1) and (2) (if any).
4. This case is also similar to the previous ones, except that the value to be collected is assigned directly, instead of
being extracted from a XPath expression or a CSS selector. However, the value is still passed through the input
processors. In this case, since the value is not iterable it is converted to an iterable of a single element before
passing it to the input processor, because input processor always receive iterables.
5. The data collected in steps (1), (2), (3) and (4) is passed through the output processor of the name field. The
result of the output processor is the value assigned to the name field in the item.
It’s worth noticing that processors are just callable objects, which are called with the data to be parsed, and return a parsed value. So you can use any function as input or output processor. The only requirement is that they must accept one (and only one) positional argument, which will be an iterable.
Note: Both input and output processors must receive an iterable as their first argument. The output of those functions can be anything. The result of input processors will be appended to an internal list (in the Loader) containing the collected values (for that field). The result of the output processors is the value that will be finally assigned to the item.
The other thing you need to keep in mind is that the values returned by input processors are collected internally (in lists) and then passed to output processors to populate the fields.
Last, but not least, itemloaders comes with some commonly used processors built-in for convenience.
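As a rough, made-up sketch (the field name and values are invented), a plain function can serve as an output processor while MapCompose wraps single-value helpers on the input side, mirroring the ProductLoader pattern shown earlier:
from itemloaders import ItemLoader
from itemloaders.processors import MapCompose

def strip_currency(value):
    # single-value helper composed via MapCompose
    return value.replace('$', '').strip()

class PriceLoader(ItemLoader):
    price_in = MapCompose(strip_currency, float)

    def price_out(self, values):
        # receives the iterable of collected values
        return max(values)

loader = PriceLoader()
loader.add_value('price', ['$ 10', '$ 25.5'])
loader.load_item()
# {'price': 25.5}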
1.1.3 Item Loader Context

The Item Loader Context is a mechanism that allows changing the behavior of the input/output processors. It's just a dict of arbitrary key/values which is shared among all processors. By default, the context contains the selector and any other keyword arguments sent to the Loader's __init__. The context can be passed when declaring, instantiating or using an Item Loader.
For example, suppose you have a function parse_length which receives a text value and extracts a length from it:
def parse_length(text, loader_context):
    unit = loader_context.get('unit', 'm')
    # ... length parsing code goes here ...
    return parsed_length

By accepting a loader_context argument the function is explicitly telling the Item Loader that it's able to receive an Item Loader context, so the Item Loader passes the currently active context when calling it, and the processor function (parse_length in this case) can thus use it.
There are several ways to modify Item Loader context values:
1. By modifying the currently active Item Loader context (context attribute):
loader = ItemLoader(product)
loader.context['unit'] = 'cm'
2. On Item Loader instantiation (the keyword arguments of Item Loader __init__ method are stored in the Item
Loader context):
loader = ItemLoader(product, unit='cm')
3. On Item Loader declaration, for those input/output processors that support instantiating them with an Item Loader
context. MapCompose is one of them:
class ProductLoader(ItemLoader):
    length_out = MapCompose(parse_length, unit='cm')
1.1.4 Nested Loaders

When parsing related values from a subsection of a document, it can be useful to create nested loaders. Imagine you're extracting details from a footer of a page that looks something like:
Example:
<footer>
<a class="social" href="https://facebook.com/whatever">Like Us</a>
<a class="social" href="https://twitter.com/whatever">Follow Us</a>
<a class="email" href="mailto:<EMAIL>">Email Us</a>
</footer>
Without nested loaders, you need to specify the full xpath (or css) for each value that you wish to extract.
Example:
loader = ItemLoader()
# load stuff not in the footer
loader.add_xpath('social', '//footer/a[@class = "social"]/@href')
loader.add_xpath('email', '//footer/a[@class = "email"]/@href')
loader.load_item()
Instead, you can create a nested loader with the footer selector and add values relative to the footer. The functionality is the same but you avoid repeating the footer selector.
Example:
loader = ItemLoader()
# load stuff not in the footer
footer_loader = loader.nested_xpath('//footer')
footer_loader.add_xpath('social', 'a[@class = "social"]/@href')
footer_loader.add_xpath('email', 'a[@class = "email"]/@href')
# no need to call footer_loader.load_item()
loader.load_item()
You can nest loaders arbitrarily and they work with either xpath or css selectors. As a general guideline, use nested loaders when they make your code simpler but do not go overboard with nesting or your parser can become difficult to read.
1.1.5 Reusing and extending Item Loaders

Item Loaders are designed to ease the maintenance burden of parsing rules, without losing flexibility and, at the same time, providing a convenient mechanism for extending and overriding them. For this reason Item Loaders support traditional Python class inheritance for dealing with differences in data schemas.
Suppose, for example, that you get some particular product names enclosed in three dashes (e.g. ---Plasma TV---)
and you don’t want to end up with those dashes in the final product names.
Here’s how you can remove those dashes by reusing and extending the default Product Item Loader (ProductLoader):
from itemloaders.processors import MapCompose
from myproject.loaders import ProductLoader

def strip_dashes(x):
    return x.strip('-')

class SiteSpecificLoader(ProductLoader):
    name_in = MapCompose(strip_dashes, ProductLoader.name_in)
Another case where extending Item Loaders can be very helpful is when you have multiple source formats, for example XML and HTML. In the XML version you may want to remove CDATA occurrences. Here’s an example of how to do it:
from itemloaders.processors import MapCompose
from myproject.ItemLoaders import ProductLoader
from myproject.utils.xml import remove_cdata

class XmlProductLoader(ProductLoader):
    name_in = MapCompose(remove_cdata, ProductLoader.name_in)
And that’s how you typically extend input/output processors.
There are many other possible ways to extend, inherit and override your Item Loaders, and different Item Loaders hierarchies may fit better for different projects. itemloaders only provides the mechanism; it doesn't impose any specific organization of your Loaders collection - that's up to you and your project's needs.
1.1.6 Available built-in processors

Even though you can use any callable function as input and output processors, itemloaders provides some commonly used processors, which are described below.
Some of them, like the MapCompose (which is typically used as input processor) compose the output of several functions executed in order, to produce the final parsed value.
Here is a list of all built-in processors:
This module provides some commonly used processors for Item Loaders.
See documentation in docs/topics/loaders.rst

class itemloaders.processors.Compose(*functions, **default_loader_context)
A processor which is constructed from the composition of the given functions. This means that each input value
of this processor is passed to the first function, and the result of that function is passed to the second function,
and so on, until the last function returns the output value of this processor.
By default, processing stops on a None value. This behaviour can be changed by passing the keyword argument stop_on_none=False.
Example:
>>> from itemloaders.processors import Compose
>>> proc = Compose(lambda v: v[0], str.upper)
>>> proc(['hello', 'world'])
'HELLO'
Each function can optionally receive a loader_context parameter. For those which do, this processor will pass
the currently active Loader context through that parameter.
The keyword arguments passed in the __init__ method are used as the default Loader context values passed to
each function call. However, the final Loader context values passed to functions are overridden with the currently
active Loader context accessible through the ItemLoader.context attribute.
class itemloaders.processors.Identity
The simplest processor, which doesn’t do anything. It returns the original values unchanged. It doesn’t receive
any __init__ method arguments, nor does it accept Loader contexts.
Example:
>>> from itemloaders.processors import Identity
>>> proc = Identity()
>>> proc(['one', 'two', 'three'])
['one', 'two', 'three']
class itemloaders.processors.Join(separator=' ')
Returns the values joined with the separator given in the __init__ method, which defaults to ' '. It doesn’t
accept Loader contexts.
When using the default separator, this processor is equivalent to the function: ' '.join
Examples:
>>> from itemloaders.processors import Join
>>> proc = Join()
>>> proc(['one', 'two', 'three'])
'one two three'
>>> proc = Join('<br>')
>>> proc(['one', 'two', 'three'])
'one<br>two<br>three'
class itemloaders.processors.MapCompose(*functions, **default_loader_context)
A processor which is constructed from the composition of the given functions, similar to the Compose processor.
The difference with this processor is the way internal results are passed among functions, which is as follows:
The input value of this processor is iterated and the first function is applied to each element. The results of these
function calls (one for each element) are concatenated to construct a new iterable, which is then used to apply
the second function, and so on, until the last function is applied to each value of the list of values collected so
far. The output values of the last function are concatenated together to produce the output of this processor.
Each particular function can return a value or a list of values, which is flattened with the list of values returned
by the same function applied to the other input values. The functions can also return None in which case the
output of that function is ignored for further processing over the chain.
This processor provides a convenient way to compose functions that only work with single values (instead of
iterables). For this reason the MapCompose processor is typically used as input processor, since data is often
extracted using the extract() method of parsel selectors, which returns a list of unicode strings.
The example below should clarify how it works:
>>> def filter_world(x):
... return None if x == 'world' else x
...
>>> from itemloaders.processors import MapCompose
>>> proc = MapCompose(filter_world, str.upper)
>>> proc(['hello', 'world', 'this', 'is', 'something'])
['HELLO', 'THIS', 'IS', 'SOMETHING']
As with the Compose processor, functions can receive Loader contexts, and __init__ method keyword argu-
ments are used as default context values. See Compose processor for more info.
class itemloaders.processors.SelectJmes(json_path)
Query the input string for the jmespath (given at instantiation), and return the answer. Requires: jmespath (https://github.com/jmespath/jmespath). Note: SelectJmes accepts only one input element at a time.
Example:
>>> from itemloaders.processors import SelectJmes, Compose, MapCompose
>>> proc = SelectJmes("foo") #for direct use on lists and dictionaries
>>> proc({'foo': 'bar'})
'bar'
>>> proc({'foo': {'bar': 'baz'}})
{'bar': 'baz'}
Working with Json:
>>> import json
>>> proc_single_json_str = Compose(json.loads, SelectJmes("foo"))
>>> proc_single_json_str('{"foo": "bar"}')
'bar'
>>> proc_json_list = Compose(json.loads, MapCompose(SelectJmes('foo')))
>>> proc_json_list('[{"foo":"bar"}, {"baz":"tar"}]')
['bar']
class itemloaders.processors.TakeFirst
Returns the first non-null/non-empty value from the values received, so it’s typically used as an output processor
to single-valued fields. It doesn’t receive any __init__ method arguments, nor does it accept Loader contexts.
Example:
>>> from itemloaders.processors import TakeFirst
>>> proc = TakeFirst()
>>> proc(['', 'one', 'two', 'three'])
'one'
1.1.7 API Reference

class itemloaders.ItemLoader(item=None, selector=None, parent=None, **context)
Return a new Item Loader for populating the given item. If no item is given, one is instantiated automatically
using the class in default_item_class.
When instantiated with a selector parameter, the ItemLoader class provides convenient mechanisms for extracting data from web pages using parsel selectors.
Parameters
• item (dict object) – The item instance to populate using subsequent calls to add_xpath(),
add_css(), add_jmes() or add_value().
• selector (Selector object) – The selector to extract data from, when using
the add_xpath() (resp. add_css(), add_jmes()) or replace_xpath() (resp.
replace_css(), replace_jmes()) method.
The item, selector and the remaining keyword arguments are assigned to the Loader context (accessible through
the context attribute).
item
The item object being parsed by this Item Loader. This is mostly used as a property so when attempting to
override this value, you may want to check out default_item_class first.
context
The currently active Context of this Item Loader. Refer to the Item Loader Context section for more information about the Loader Context.
default_item_class
An Item class (or factory), used to instantiate items when not given in the __init__ method.
Warning: Currently, this factory/class needs to be callable/instantiated without any arguments. If you
are using dataclasses, please consider the following alternative:
from dataclasses import dataclass, field
from typing import Optional
@dataclass
class Product:
    name: Optional[str] = field(default=None)
    price: Optional[float] = field(default=None)
default_input_processor
The default input processor to use for those fields which don’t specify one.
default_output_processor
The default output processor to use for those fields which don’t specify one.
selector
The Selector object to extract data from. It’s the selector given in the __init__ method. This attribute
is meant to be read-only.
add_css(field_name, css, *processors, re=None, **kw)
Similar to ItemLoader.add_value() but receives a CSS selector instead of a value, which is used to
extract a list of unicode strings from the selector associated with this ItemLoader.
See get_css() for kwargs.
Parameters
css (str) – the CSS selector to extract data from
Examples:
# HTML snippet: <p class="product-name">Color TV</p>
loader.add_css('name', 'p.product-name')
# HTML snippet: <p id="price">the price is $1200</p>
loader.add_css('price', 'p#price', re='the price is (.*)')
add_jmes(field_name, jmes, *processors, re=None, **kw)
Similar to ItemLoader.add_value() but receives a JMESPath selector instead of a value, which is used
to extract a list of unicode strings from the selector associated with this ItemLoader.
See get_jmes() for kwargs.
Parameters
jmes (str) – the JMESPath selector to extract data from
Examples:
# JSON snippet: {"name": "Color TV"}
loader.add_jmes('name')
# JSON snippet: {"price": "the price is $1200"}
loader.add_jmes('price', TakeFirst(), re='the price is (.*)')
add_value(field_name, value, *processors, re=None, **kw)
Process and then add the given value for the given field.
The value is first passed through get_value() by giving the processors and kwargs, and then passed
through the field input processor and its result appended to the data collected for that field. If the field
already contains collected data, the new data is added.
The given field_name can be None, in which case values for multiple fields may be added. And the
processed value should be a dict with field_name mapped to values.
Examples:
loader.add_value('name', 'Color TV')
loader.add_value('colours', ['white', 'blue'])
loader.add_value('length', '100')
loader.add_value('name', 'name: foo', TakeFirst(), re='name: (.+)')
loader.add_value(None, {'name': 'foo', 'sex': 'male'})
add_xpath(field_name, xpath, *processors, re=None, **kw)
Similar to ItemLoader.add_value() but receives an XPath instead of a value, which is used to extract a
list of strings from the selector associated with this ItemLoader.
See get_xpath() for kwargs.
Parameters
xpath (str) – the XPath to extract data from
Examples:
# HTML snippet: <p class="product-name">Color TV</p>
loader.add_xpath('name', '//p[@class="product-name"]')
# HTML snippet: <p id="price">the price is $1200</p>
loader.add_xpath('price', '//p[@id="price"]', re='the price is (.*)')
get_collected_values(field_name)
Return the collected values for the given field.
get_css(css, *processors, re=None, **kw)
Similar to ItemLoader.get_value() but receives a CSS selector instead of a value, which is used to
extract a list of unicode strings from the selector associated with this ItemLoader.
Parameters
• css (str) – the CSS selector to extract data from
• re (str or Pattern) – a regular expression to use for extracting data from the selected
CSS region
Examples:
# HTML snippet: <p class="product-name">Color TV</p>
loader.get_css('p.product-name')
# HTML snippet: <p id="price">the price is $1200</p>
loader.get_css('p#price', TakeFirst(), re='the price is (.*)')
get_jmes(jmes, *processors, re=None, **kw)
Similar to ItemLoader.get_value() but receives a JMESPath selector instead of a value, which is used
to extract a list of unicode strings from the selector associated with this ItemLoader.
Parameters
• jmes (str) – the JMESPath selector to extract data from
• re (str or Pattern) – a regular expression to use for extracting data from the selected
JMESPath
Examples:
# JSON snippet: {"name": "Color TV"}
loader.get_jmes('name')
# JSON snippet: {"price": "the price is $1200"}
loader.get_jmes('price', TakeFirst(), re='the price is (.*)')
get_output_value(field_name)
Return the collected values parsed using the output processor, for the given field. This method doesn’t
populate or modify the item at all.
get_value(value, *processors, re=None, **kw)
Process the given value by the given processors and keyword arguments.
Available keyword arguments:
Parameters
re (str or Pattern) – a regular expression to use for extracting data from the given value
using extract_regex() method, applied before processors
Examples:
>>> from itemloaders import ItemLoader
>>> from itemloaders.processors import TakeFirst
>>> loader = ItemLoader()
>>> loader.get_value('name: foo', TakeFirst(), str.upper, re='name: (.+)')
'FOO'
get_xpath(xpath, *processors, re=None, **kw)
Similar to ItemLoader.get_value() but receives an XPath instead of a value, which is used to extract a
list of unicode strings from the selector associated with this ItemLoader.
Parameters
• xpath (str) – the XPath to extract data from
• re (str or Pattern) – a regular expression to use for extracting data from the selected
XPath region
Examples:
# HTML snippet: <p class="product-name">Color TV</p>
loader.get_xpath('//p[@class="product-name"]')
# HTML snippet: <p id="price">the price is $1200</p>
loader.get_xpath('//p[@id="price"]', TakeFirst(), re='the price is (.*)')
load_item()
Populate the item with the data collected so far, and return it. The data collected is first passed through the
output processors to get the final value to assign to each item field.
nested_css(css, **context)
Create a nested loader with a css selector. The supplied selector is applied relative to selector associated with
this ItemLoader. The nested loader shares the item with the parent ItemLoader so calls to add_xpath(),
add_value(), replace_value(), etc. will behave as expected.
nested_xpath(xpath, **context)
Create a nested loader with an xpath selector. The supplied selector is applied relative to selector asso-
ciated with this ItemLoader. The nested loader shares the item with the parent ItemLoader so calls to
add_xpath(), add_value(), replace_value(), etc. will behave as expected.
replace_css(field_name, css, *processors, re=None, **kw)
Similar to add_css() but replaces collected data instead of adding it.
replace_jmes(field_name, jmes, *processors, re=None, **kw)
Similar to add_jmes() but replaces collected data instead of adding it.
replace_value(field_name, value, *processors, re=None, **kw)
Similar to add_value() but replaces the collected data with the new value instead of adding it.
replace_xpath(field_name, xpath, *processors, re=None, **kw)
Similar to add_xpath() but replaces collected data instead of adding it.
1.1.8 Release notes

itemloaders 1.1.0 (2023-04-21)
• Added JMESPath support (ItemLoader.add_jmes() etc.), requiring Parsel 1.8.1+ (#68)
• Added official support for Python 3.11 (#59)
• Removed official support for Python 3.6 (#61)
• Internal code cleanup (#65, #66)
• Added pre-commit support and applied changes from black and flake8 (#70).
• Improved CI (#60)
itemloaders 1.0.6 (2022-08-29)
Fixes a regression introduced in 1.0.5 that would cause the re parameter of ItemLoader.add_xpath() and similar methods to be passed to lxml, which would trigger an exception when the value of re was a compiled pattern and not a string (#56)
itemloaders 1.0.5 (2022-08-25)
• Allow additional args to be passed when calling ItemLoader.add_xpath() (#48)
• Fixed missing space in an exception message (#47)
• Updated company name in author and copyright sections (#42)
• Added official support for Python 3.9 and improved PyPy compatibility (#44)
• Added official support for Python 3.10 (#53)
itemloaders 1.0.4 (2020-11-12)
• When adding a scrapy.item.Item object as a value into an ItemLoader object, that item is now added as is, instead of becoming a list of keys from its scrapy.item.Item.fields (#28, #29)
• Increased test coverage (#27)
itemloaders 1.0.3 (2020-09-09)
• Calls to ItemLoader.get_output_value() no longer affect the output of ItemLoader.load_item() (#21,
#22)
• Fixed some documentation links (#19, #23)
• Fixed some test warnings (#24)
itemloaders 1.0.2 (2020-08-05)
• Included the license file in the source releases (#13)
• Cleaned up some remnants of Python 2 (#16, #17)
itemloaders 1.0.1 (2020-07-02)
• Extended item type support to all item types supported by itemadapter (#13)
• Input and output processors defined in item field metadata are now taken into account (#13)
• Lowered some minimum dependency versions (#10):
– parsel: 1.5.2 → 1.5.0
– w3lib: 1.21.0 → 1.17.0
• Improved the README file (#9)
• Improved continuous integration (e62d95b)
itemloaders 1.0.0 (2020-05-18)
Initial release, based on a part of the Scrapy code base.
|
github.com/aws/aws-sdk-go/internal/shareddefaults | go | Go | None
Documentation
[¶](#section-documentation)
---
[Rendered for](https://go.dev/about#build-context)
linux/amd64 windows/amd64 darwin/amd64 js/wasm
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [Variables](#pkg-variables)
* [func SharedConfigFilename() string](#SharedConfigFilename)
* [func SharedCredentialsFilename() string](#SharedCredentialsFilename)
* [func UserHomeDir() string](#UserHomeDir)
### Constants [¶](#pkg-constants)
```
const (
// ECSCredsProviderEnvVar is an environmental variable key used to
// determine which path needs to be hit.
ECSCredsProviderEnvVar = "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
)
```
### Variables [¶](#pkg-variables)
```
var ECSContainerCredentialsURI = "http://169.254.170.2"
```
ECSContainerCredentialsURI is the endpoint to retrieve container credentials. This can be overridden to test to ensure the credential process is behaving correctly.
### Functions [¶](#pkg-functions)
####
func [SharedConfigFilename](https://github.com/aws/aws-sdk-go/blob/v1.45.26/internal/shareddefaults/shared_config.go#L26) [¶](#SharedConfigFilename)
```
func SharedConfigFilename() [string](/builtin#string)
```
SharedConfigFilename returns the SDK's default file path for the shared config file.
Builds the shared config file path based on the OS's platform.
* Linux/Unix: $HOME/.aws/config
* Windows: %USERPROFILE%\.aws\config
####
func [SharedCredentialsFilename](https://github.com/aws/aws-sdk-go/blob/v1.45.26/internal/shareddefaults/shared_config.go#L15) [¶](#SharedCredentialsFilename)
```
func SharedCredentialsFilename() [string](/builtin#string)
```
SharedCredentialsFilename returns the SDK's default file path for the shared credentials file.
Builds the shared config file path based on the OS's platform.
* Linux/Unix: $HOME/.aws/credentials
* Windows: %USERPROFILE%\.aws\credentials
####
func [UserHomeDir](https://github.com/aws/aws-sdk-go/blob/v1.45.26/internal/shareddefaults/shared_config.go#L32) [¶](#UserHomeDir)
```
func UserHomeDir() [string](/builtin#string)
```
UserHomeDir returns the home directory for the user the process is running under.
### Types [¶](#pkg-types)
This section is empty. |
matrisk | cran | R | Package ‘matrisk’
May 2, 2023
Title Macroeconomic-at-Risk
Version 0.1.0
Description The Macroeconomics-at-Risk (MaR) approach is based on a two-step semi-parametric estimation
procedure that allows forecasting the full conditional distribution of an economic variable at a given
horizon, as a function of a set of factors. These density forecasts are then used to produce coherent
forecasts for any downside risk measure, e.g., value-at-risk, expected shortfall, downside entropy.
Initially introduced by Adrian et al. (2019) <doi:10.1257/aer.20161923> to reveal the vulnerability of
economic growth to financial conditions, the MaR approach is currently extensively used by international
financial institutions to provide Value-at-Risk (VaR) type forecasts for GDP growth (Growth-at-Risk)
or inflation (Inflation-at-Risk). This package provides methods for estimating these models. Datasets
for the US and the Eurozone are available to allow testing of the Adrian et al. (2019) model. This
package constitutes a useful toolbox (data and functions) for private practitioners, scholars as well
as policymakers.
Depends R (>= 2.10)
License GPL-3
Encoding UTF-8
RoxygenNote 7.2.3
Imports stats, quantreg, sn, dfoptim, plot3D
NeedsCompilation no
Author <NAME> [aut, cre],
<NAME> [aut],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-05-02 08:30:05 UTC
R topics documented:
data_eur... 2
data_U... 2
f_compile_quantil... 3
f_distri... 4
f_distrib_hist... 5
f_E... 7
f_Va... 8
data_euro Historical data for the eurozone (GDP and Financial Conditions) from
2008:Q4 to 2022:Q3
Description
data_euro contains:
- Quarterly annualized GDP, from 2008:Q4 to 2022:Q3
- Financial Condition Index of the euro Area, from 2008:Q4 to 2022:Q3
- Composite Indicator of Systemic Stress, from 2008:Q4 to 2022:Q3
Sources: https://sdw.ecb.europa.eu/browseExplanation.do?node=9689686, https://webstat.banque-france.fr/ws_wsen/browseSelection.do?node=DATASETS_FCI, https://fred.stlouisfed.org/series/CLVMEURSCAB1GQEA1
Usage
data("data_euro")
Format
A data frame with 57 observations on the following 4 variables.
DATE Vector of dates.
GDP Vector of annualized GDP.
FCI Historical values of the Financial Condition Index (FCI).
CISS Historical values of the Composite Indicator of Systemic Stress (CISS).
data_US Historical data for the US (GDP and Financial Conditions) from
1973:Q1 to 2022:Q3
Description
data_US contains:
- Quarterly annualized GDP, from 1973:Q1 to 2022:Q3
- National Financial Condition Index of the US, from 1973:Q1 to 2022:Q3
Sources: https://www.chicagofed.org/research/data/nfci/current-data, https://fred.stlouisfed.org/series/A191RL1Q225SBEA
Usage
data("data_US")
Format
A data frame with 200 observations on the following 3 variables.
DATE Vector of dates.
GDP Vector of annualized GDP.
NFCI Historical values of the National Financial Condition Index (NFCI).
f_compile_quantile Estimation of quantiles
Description
Predicted values based on each quantile regression (Koenker and Basset, 1978), at time=t_trgt, for
each quantile in qt_trgt.
Usage
f_compile_quantile(qt_trgt, v_dep, v_expl, t_trgt)
Arguments
qt_trgt Numeric vector, dim k, of k quantiles for different qt-estimations
v_dep Numeric vector of the dependent variable
v_expl Numeric vector of the (k) explanatory variable(s)
t_trgt Numeric time target (optional)
Value
Numeric matrix with the predicted values based on each quantile regression, at time fixed in input
References
Koenker, Roger, and <NAME>. "Regression quantiles." Econometrica: journal of the
Econometric Society (1978): 33-50.
Examples
# Import data
data("data_euro")
# Data process
PIB_euro_forward_4 = data_euro["GDP"][c(5:length(data_euro["GDP"][,1])),]
FCI_euro_lag_4 = data_euro["FCI"][c(1:(length(data_euro["GDP"][,1]) - 4)),]
CISS_euro_lag_4 = data_euro["CISS"][c(1:(length(data_euro["GDP"][,1]) - 4)),]
quantile_target <- as.vector(c(0.10,0.25,0.75,0.90))
results_quantile_reg <- f_compile_quantile(qt_trgt=quantile_target,
v_dep=PIB_euro_forward_4,
v_expl=cbind(FCI_euro_lag_4, CISS_euro_lag_4),
t_trgt = 30)
f_distrib Distribution
Description
This function is used to estimate the parameters of the distribution (mean and standard deviation for
Gaussian, xi, omega, alpha, and nu for skew-t) based on the quantile regression results (Koenker and
Basset, 1978). See Adrian et al. (2019) and Adrian et al. (2022) for more details on the estimation
steps.
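In the spirit of Adrian et al. (2019), the distribution parameters are typically chosen so that the quantiles of the fitted distribution match the regression quantiles, e.g. by minimizing a criterion of the form below (a sketch of the idea; the exact objective optimized by the package may differ):
\hat{\theta} = \arg\min_{\theta} \sum_{\tau} \left( \hat{q}_{\tau} - F^{-1}(\tau; \theta) \right)^2
where \hat{q}_{\tau} is the predicted value from the quantile regression at level \tau (a row of compile_qt) and F(\cdot; \theta) is the Gaussian or skew-t distribution function with parameter vector \theta.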
Usage
f_distrib(type_function, compile_qt, starting_values)
Arguments
type_function String argument: "gaussian" for the normal distribution or "skew-t" for the skew-t
distribution
compile_qt Numeric matrix containing different quantiles and associated values
starting_values
Numeric vector with initial values for optimization
Value
a data.frame with the parameters of the distribution
References
Adrian, Tobias, <NAME>, and <NAME>. "Vulnerable growth." American Eco-
nomic Review 109.4 (2019): 1263-89.
Adrian, Tobias, et al. "The term structure of growth-at-risk. " American Economic Journal: Macroe-
conomics 14.3 (2022): 283-323.
<NAME>, and <NAME> Jr. "Regression quantiles." Econometrica: journal of the
Econometric Society (1978): 33-50.
Examples
# Import data
data("data_euro")
# Data process
PIB_euro_forward_4 = data_euro["GDP"][c(5:length(data_euro["GDP"][,1])),]
FCI_euro_lag_4 = data_euro["FCI"][c(1:(length(data_euro["GDP"][,1]) - 4)),]
CISS_euro_lag_4 = data_euro["CISS"][c(1:(length(data_euro["GDP"][,1]) - 4)),]
# for a gaussian
quantile_target <- as.vector(c(0.25,0.75))
results_quantile_reg <- f_compile_quantile(qt_trgt=quantile_target,
v_dep=PIB_euro_forward_4,
v_expl=cbind(FCI_euro_lag_4, CISS_euro_lag_4),
t_trgt = 30)
results_g <- f_distrib(type_function="gaussian",
compile_qt=results_quantile_reg,
starting_values=c(0, 1))
# for a skew-t
quantile_target <- as.vector(c(0.10,0.25,0.75,0.90))
results_quantile_reg <- f_compile_quantile(qt_trgt=quantile_target,
v_dep=PIB_euro_forward_4,
v_expl=cbind(FCI_euro_lag_4, CISS_euro_lag_4),
t_trgt = 30)
results_s <- f_distrib(type_function="skew-t",
compile_qt=results_quantile_reg,
starting_values=c(0, 1, -0.5, 1.3))
f_distrib_histo Historical distributions
Description
This function is based on f_distrib function (Adrian et al., 2019; Adrian et al., 2022) and is used
to get historical estimation of empirical distributions and associated parameters. Results allow to
realize a 3D graphical representation.
Usage
f_distrib_histo(
qt_trgt,
v_dep,
v_expl,
type_function,
starting_values,
step,
x_min,
x_max
)
Arguments
qt_trgt Numeric vector, dim k, of k quantiles for different qt-estimations
v_dep Numeric vector of the dependent variable
v_expl Numeric vector of the (k) explanatory variable(s)
type_function String argument: "gaussian" for the normal distribution or "skew-t" for the skew-t
distribution
starting_values
Numeric vector with initial values for optimization
step Numeric argument for accuracy graphics abscissa
x_min Numeric optional argument (default value = -15)
x_max Numeric optional argument (default value = 10)
Value
A list with:
distrib_histo Numeric matrix with historical values of x, y and t
param_histo Numeric matrix containing the parameters of the distribution for each period
References
Adrian, Tobias, <NAME>, and <NAME>. "Vulnerable growth." American Eco-
nomic Review 109.4 (2019): 1263-89.
Adrian, Tobias, et al. "The term structure of growth-at-risk." American Economic Journal: Macroe-
conomics 14.3 (2022): 283-323.
Examples
# Import data
data("data_euro")
# Data process
PIB_euro_forward_4 = data_euro["GDP"][c(5:length(data_euro["GDP"][,1])),]
FCI_euro_lag_4 = data_euro["FCI"][c(1:(length(data_euro["GDP"][,1]) - 4)),]
CISS_euro_lag_4 = data_euro["CISS"][c(1:(length(data_euro["GDP"][,1]) - 4)),]
results_histo <- f_distrib_histo(qt_trgt=c(0.10,0.25,0.75,0.90), v_dep=PIB_euro_forward_4,
v_expl=cbind(FCI_euro_lag_4,CISS_euro_lag_4),
type_function="skew-t",
starting_values=c(0, 1, -0.5, 1.3),
step=5, x_min=-10, x_max=5)
library(plot3D) # load
scatter3D(results_histo$distrib_histo[,3],
results_histo$distrib_histo[,1],
results_histo$distrib_histo[,2],
pch = 10, theta = 70, phi = 10,
main = "Distribution of GDP Growth over time - Euro Area",
xlab = "Date",
ylab ="Pib",
zlab="", cex = 0.3)
f_ES Expected Shortfall
Description
The function computes the Expected Shortfall for a given distribution. It takes as parameters
alpha (risk level), a distribution and the parameters associated with this distribution. For example,
for a normal distribution, the user must enter the mean and the standard deviation. Currently,
the function can calculate the Expected Shortfall for the normal distribution and for the skew-t
distribution (Azzalini and Capitanio, 2003).
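For reference, under the downside (left-tail) convention common in the Growth-at-Risk literature, Value-at-Risk and Expected Shortfall are linked by (a sketch; how the alpha argument of f_ES maps to the tail probability is determined by the package itself, not by this note):
\mathrm{ES}_{a} = \mathbb{E}\left[ X \mid X \le \mathrm{VaR}_{a} \right] = \frac{1}{a} \int_{0}^{a} F^{-1}(u)\, du, \qquad \mathrm{VaR}_{a} = F^{-1}(a)
with F the fitted Gaussian or skew-t distribution.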
Usage
f_ES(alpha, dist, params, accuracy = 1e-05)
Arguments
alpha Numeric argument for Expected-Shortfall, between 0 and 1
dist String for the type of distribution (gaussian or skew-t)
params Numeric vector containing parameters of the distribution
accuracy Scalar value which regulates the accuracy of the ES (default value 1e-05)
Value
Numeric value for the expected-shortfall given the distribution and the alpha risk
References
Azzalini, Adelchi, and <NAME>. "Distributions generated by perturbation of symmetry
with emphasis on a multivariate skew t-distribution." Journal of the Royal Statistical Society: Series
B (Statistical Methodology) 65.2 (2003): 367-389.
Azzalini, Adelchi, and Maintainer <NAME>. "Package ‘sn’." The skew-normal and skew-t
distributions (2015): 1-3.
Examples
f_ES(0.95, "gaussian", params=c(0,1))
f_ES(0.95, "gaussian", params=c(0,1), accuracy=1e-05)
f_ES(0.95, "gaussian", params=c(0,1), accuracy=1e-04)
f_VaR Value-at-Risk
Description
The function computes the Value-at-Risk for a given distribution. It takes as parameters alpha
(risk level), a distribution and the parameters associated with this distribution. For example, for a
normal distribution, the user must enter the mean and the standard deviation. Currently, the function
can calculate the Value-at-Risk for the normal distribution and for the skew-t distribution (Azzalini
and Capitanio, 2003).
Usage
f_VaR(alpha, dist, params)
Arguments
alpha Numeric argument for the Value-at-Risk risk level, between 0 and 1
dist String for the type of distribution (gaussian or skew-t)
params Numeric vector containing parameters of the distribution
Value
Numeric value for the Value-at-Risk given the distribution and the alpha risk
References
Azzalini, Adelchi, and <NAME>. "Distributions generated by perturbation of symmetry
with emphasis on a multivariate skew t-distribution." Journal of the Royal Statistical Society: Series
B (Statistical Methodology) 65.2 (2003): 367-389.
Azzalini, Adelchi, and Maintainer <NAME>. "Package ‘sn’." The skew-normal and skew-t
distributions (2015): 1-3.
Examples
f_VaR(0.95, "gaussian", params=c(0,1)) |
GNSSseg | cran | R | Package ‘GNSSseg’
October 12, 2022
Type Package
Title Homogenization of GNSS Series
Version 6.0
Date 2020-05-12
Author <NAME> [aut, cre], <NAME> [aut], <NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Description Homogenize GNSS (Global Navigation Satellite System) time-series. The general
model is a segmentation in the mean model including a periodic function and considering
monthly variances, see Quarello (2020) <arXiv:2005.04683>.
LazyData true
License GPL-3
Encoding UTF-8
Imports robustbase, capushe
RoxygenNote 7.0.2
Suggests knitr, rmarkdown
NeedsCompilation no
Repository CRAN
Date/Publication 2020-06-02 15:20:16 UTC
R topics documented:
Dat... 2
GNSSse... 2
plot_GNS... 5
Data Example of data
Description
A data frame [n x 2] containing a simulated Gaussian series for the two years 1995 and 1996, with
size n=731. 3 changes are considered at positions 100, 150 and 500 (or at the dates 1995-04-10,
1995-05-30 and 1996-05-14). The means of the segments alternates between 0 and 1 (beginning
by 0). The functional part is 0.4*cos(2*pi*time/lyear) where lyear is 365.25 and time is centred
according to the first date and expressed in seconds: time=(date-date[1])/86400. The standard
deviation of the noise of the 12 months are drawn from an uniform distribution between 0.1 and
0.8. The date is expressed as yyyy-mm-dd in the "calendar time" (class POSIXct).
Usage
data(Data)
Format
A data frame with 731 observations on the following 2 variables.
signal a numeric vector
date a date vector expressed as yyyy-mm-dd in the "calendar time" (class POSIXct)
Details
signal: the values of the observed signal; date: the dates in calendar time
Examples
library(GNSSseg)
data(Data)
class(Data$date)
plot(Data$date,Data$signal,type="l")
GNSSseg Homogenization of GNSS series
Description
Fit a segmentation in the mean model, taking into account a functional part and a heterogeneous
variance (default is monthly)
Usage
GNSSseg(
Data,
lyear = 365.25,
lmin = 1,
Kmax = 30,
selection.K = "BM_BJ",
S = 0.75,
f = TRUE,
selection.f = FALSE,
threshold = 0.001,
tol = 1e-04
)
Arguments
Data a data frame, with size [n x 2], containing the signal (e.g. the daily GPS-ERAI
series for GNSS) and the dates (in format yyyy-mm-dd of type "calendar time"
(class POSIXct))
lyear the length of the year in the signal. Default is 365.25
lmin the minimum length of the segments. Default is 1
Kmax the maximal number of segments (must be lower than n). Default is 30
selection.K a name indicating the model selection criterion to select the number of segments
K (mBIC, Lav, BM_BJ or BM_slope). "none" indicates that no selection
is performed and the procedure considers Kmax segments or Kmax-1 changes. If
selection.K="All", the results for the four possible criteria are given. Default
is "BM_BJ"
S the threshold used in the Lav’s criterion. Default is 0.75
f a boolean indicating if the functional part is taking into account in the model.
Default is TRUE and note that if f=FALSE, only a segmentation is performed
selection.f a boolean indicating if a selection on the functions of the Fourier decomposition
of order 4 is performed. Default is FALSE
threshold a numeric value lower than 1 used for the selection of the functions of the Fourier
decomposition of order 4. Default is 0.001
tol the stopping rule for the iterative procedure. Default is 1e-4
Details
The function performs homogenization of GNSS series. The considered model is such that: (1) the
average is composed of a piecewise function (changes in the mean) with a functional part and (2)
the variance is heterogeneous on fixed intervals. By default the latter intervals are the months. The
inference procedure consists of two steps. First, the number of segments is fixed to Kmax and the
parameters are estimated by maximum likelihood using the following procedure:
first the variances are robustly estimated and then the segmentation and the functional parts are
iteratively estimated. Then the number of segments is chosen using model selection criteria. The
possible criteria are mBIC the modified BIC criterion, Lav the criterion proposed by Lavielle, and
BM_BJ and BM_slope the criteria in which the penalty constant is calibrated using the Biggest Jump and the
slope, respectively.
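In formula form, the model described above can be written as (notation introduced here only for illustration; these symbols are not used elsewhere in the package documentation):
y_t = \mu_k + f(t) + \varepsilon_t, \quad t \in I_k, \qquad \varepsilon_t \sim \mathcal{N}\left(0, \sigma^2_{m(t)}\right)
where \mu_k is the mean of segment I_k, f is the periodic (Fourier) functional part and \sigma^2_{m(t)} is the variance of the fixed interval (by default the month) containing time t.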
• The data is a data frame with 2 columns: $signal is the signal to be homogenized (a daily
series) and $date is the date. The date will be in format yyyy-mm-dd of type "calendar time"
(class POSIXct).
• The functional part is estimated using a Fourier decomposition of order 4 with selection.f=FALSE.
selection.f=TRUE consists in selecting the significant functions of the Fourier decomposition
of order 4 (for which p.values are lower than threshold)
• If selection.K="none", the procedure is performed with Kmax segments.
• Missing data in the signal are accepted.
Value
A file containing
• K that corresponds to the selected number of segments or K-1 corresponds to the number of
changes. If selection.K="none", the number of segments is Kmax.
• seg that corresponds to the estimation of the segmentation parameters (the beginning and end
positions of each segment with the estimated mean).
• funct that corresponds to the estimation of the functional part. If f==FALSE, funct is FALSE
• coeff that corresponds to the estimation of the coefficients of the Fourier decomposition. The
vector contains 8 coefficients if selection.f=FALSE or as many coefficients as the number
of selected functions if selection.f=TRUE. If f==FALSE, coeff is FALSE
• variances that corresponds to the estimated variances of each fixed interval
• SSR that corresponds to the Residuals Sum of Squares for k=1,...,Kmax. If selection.K="none",
it contains only the SSR for Kmax segments
• Tot is a list. Each component contains all the results for k segments (k=1,...,Kmax). If selection.K="none",
Tot is NA
If selection.K="All", the outputs K, seg, funct and coeff are each a list containing the corre-
sponding results obtained for the four model selection criteria
Examples
data(Data)
lyear=365.25
Kmax=4
lmin=1
result=GNSSseg(Data,lyear,Kmax=Kmax,selection.K="none")
plot_GNSS(Data,result$seg,result$funct)
plot_GNSS Graph of the result
Description
plot the signal with the estimated average
Usage
plot_GNSS(Data, segmentation, functional)
Arguments
Data a data frame, with size [n x 2], containing the signal (e.g. the daily GPS-ERAI
series for GNSS) and the dates (in format yyyy-mm-dd of type "calendar time"
(class POSIXct))
segmentation the estimated segmentation (result of the GNSSseg function)
functional the estimated functional (result of the GNSSseg function)
Details
The function gives the plot of the results with the signal
Value
a plot of the results with the signal
Examples
data(Data)
lyear=365.25
Kmax=4
lmin=1
result=GNSSseg(Data,lyear,selection.K="none",Kmax=Kmax)
plot_GNSS(Data,result$seg,result$funct) |
SwiftyStoreKit | cocoapods | Objective-C | swiftystorekit
===
Introduction:
---
Welcome to the documentation for SwiftyStoreKit, a lightweight In-App Purchases framework written in Swift. In this guide, you will learn how to integrate SwiftyStoreKit into your iOS applications to enable seamless In-App Purchases functionality.
Installation:
---
### Using CocoaPods:
1. Open your terminal and navigate to your project directory.
2. Initialize a Podfile by running the command `pod init`.
3. Add SwiftStoreKit to your Podfile:
```
pod 'SwiftyStoreKit'
```
4. Save the Podfile and run the command `pod install` to install the framework.
5. Make sure to open the project using the newly generated `.xcworkspace` file.
### Manually:
1. Visit the [SwiftyStoreKit repository](https://github.com/bizz84/SwiftyStoreKit) on GitHub.
2. Download the latest release zip file.
3. Unzip the downloaded file and drag the SwiftyStoreKit source files into your Xcode project. Make sure to select “Copy items if needed” when prompted.
Getting Started:
---
### Initializing SwiftyStoreKit:
To get started, import SwiftyStoreKit into your Swift file:
```
import SwiftyStoreKit
```
### Verifying App Receipt:
Before making any In-App Purchase requests, it is essential to verify the app receipt to ensure its authenticity. Use the following method to verify the receipt:
```
// sharedSecret is your app-specific shared secret from App Store Connect
let appleValidator = AppleReceiptValidator(service: .production, sharedSecret: sharedSecret)
SwiftyStoreKit.verifyReceipt(using: appleValidator) { result in
switch result {
case .success(let receipt):
// Receipt verification successful
print("Receipt verified: \(receipt)")
case .error(let error):
// Receipt verification failed
print("Receipt verification failed: \(error)")
}
}
```
### Making a Purchase:
Once the app receipt is verified, you can proceed with making a purchase request. Use the following method to initiate a purchase request:
```
SwiftyStoreKit.purchaseProduct(productId) { result in
switch result {
case .success(let purchase):
// Purchase successful
print("Purchase Success: \(purchase.productId)")
// Provide the user with the purchased content
case .error(let error):
// Purchase failed
print("Purchase Failed: \(error)")
}
}
```
### Restoring Purchases:
In case a user wants to restore their previous purchases, use the following method:
```
SwiftyStoreKit.restorePurchases() { results in
if results.restoreFailedPurchases.count > 0 {
// Restoring purchases failed for some products
print("Restore Failed: \(results.restoreFailedPurchases)")
}
else if results.restoredPurchases.count > 0 {
// All purchases restored successfully
print("Purchases Restored: \(results.restoredPurchases)")
}
else {
// No previous purchases found
print("Nothing to Restore")
}
}
```
Frequently Asked Questions (FAQ):
---
### How do I test In-App Purchases in the sandbox environment?
Follow these steps:
1. Create a Sandbox Tester Account on App Store Connect.
2. In Xcode, go to “Signing & Capabilities” and add your Sandbox Tester Account.
3. Ensure that you are using a sandbox build for testing.
4. You can now make test purchases using the Sandbox Tester Account credentials.
### How do I display localized pricing?
To display localized pricing, use the following method:
```
SwiftyStoreKit.retrieveProductsInfo([productId]) { result in
if let product = result.retrievedProducts.first {
let priceString = product.localizedPrice
// Display the localized price to the user
}
}
```
Conclusion:
---
Congratulations! You have successfully integrated SwiftyStoreKit into your iOS application. You can now provide users with a seamless In-App Purchases experience. For more advanced usage and additional functionality, make sure to refer to the official [SwiftyStoreKit repository](https://github.com/bizz84/SwiftyStoreKit) documentation. Happy coding! |
auth | readthedoc | Python | auth stable documentation
[auth](index.html#document-index)
---
Auth | Authorization for Humans[¶](#auth-authorization-for-humans)
===
RESTful, Simple Authorization system with ZERO configuration.
What is Auth?[¶](#what-is-auth)
---
Auth is a module that makes authorization simple, scalable and powerful. It also has a beautiful RESTful API for use in micro-service architectures and platforms. It was originally designed for use in Appido, a scalable media market in Iran.
It supports Python2.6+ and if you have a mongodb backbone, you need ZERO configurations steps. Just type `auth-server` and press enter!
I use Travis and Codecov to keep myself honest.
requirements[¶](#requirements)
---
You need to access to **mongodb**. If you are using a remote mongodb, provide these environment variables:
`MONGO_HOST` and `MONGO_PORT`
Installation[¶](#installation)
---
```
pip install auth
```
Show me an example[¶](#show-me-an-example)
---
OK, let's imagine you have two users, **Jack** and **Sara**. Sara can cook and Jack can dance. Both can laugh.
You also need to choose a secret key for your application, because you may want to use Auth in various tools and each must have a secret key for separating their scope.
```
my_secret_key = "pleaSeDoN0tKillMyC_at"
from auth import Authorization
cas = Authorization(my_secret_key)
```
Now, Lets add 3 groups, Cookers, Dancers and Laughers. Remember that groups are Roles. So when we create a group, indeed we create a role:
```
cas.add_group('cookers')
cas.add_group('dancers')
cas.add_group('laughers')
```
Ok, great. You have 3 groups and you need to authorize them to do special things.
```
cas.add_permission('cookers', 'cook')
cas.add_permission('dancers', 'dance')
cas.add_permission('laughers', 'laugh')
```
Good. You let cookers cook and dancers dance, etc.
The final part is to set memberships for Sara and Jack:
```
cas.add_membership('sara', 'cookers')
cas.add_membership('sara', 'laughers')
cas.add_membership('jack', 'dancers')
cas.add_membership('jack', 'laughers')
```
That’s all we need. Now let's ensure that Jack can dance:
```
if cas.user_has_permission('jack', 'dance'):
print('YES!!! Jack can dance.')
```
Authorization Methods[¶](#authirization-methods)
---
use pydoc to see all methods:
```
pydoc auth.Authorization
```
RESTful API[¶](#restful-api)
---
Lets run the server on port 4000:
```
from auth import api, serve
serve('localhost', 4000, api)
```
Or, from version 0.1.2+ you can use this command:
```
auth-server
```
Simple! Authorization server is ready to use.
You can use it via simple curl or using the mighty Requests module. So in your remote application, you can do something like this:
```
import requests

secret_key = "pleaSeDoN0tKillMyC_at"
auth_api = "http://127.0.0.1:4000/api"
```
Lets create admin group:
```
requests.post(auth_api+'/role/'+secret_key+'/admin')
```
And lets make Jack an admin:
```
requests.post(auth_api+'/permission/'+secret_key+'/jack/admin')
```
And finally let’s check if Sara still can cook:
```
requests.get(auth_api+'/has_permission/'+secret_key+'/sara/cook')
```
RESTful API helpers[¶](#restful-api-helpers)
---
auth comes with a helper class that makes your life easy.
```
from auth.client import Client
service = Client('srv201', 'http://192.168.99.100:4000')
print(service)
service.get_roles()
service.add_role(role='admin')
```
API Methods[¶](#api-methods)
---
```
pydoc auth.CAS.REST.service
```
* `/ping` [GET]
> Ping API, useful for your monitoring tools
* `/api/membership/{KEY}/{user}/{role}` [GET/POST/DELETE]
> Adding, removing and getting membership information.
* `/api/permission/{KEY}/{role}/{name}` [GET/POST/DELETE]
> Adding, removing and getting permissions
* `/api/has_permission/{KEY}/{user}/{name}` [GET]
> Getting user permission info
* `/api/role/{KEY}/{role}` [GET/POST/DELETE]
Adding, removing and getting roles
* `/api/which_roles_can/{KEY}/{name}` [GET]
For example: Which roles can send_mail?
* `/api/which_users_can/{KEY}/{name}` [GET]
For example: Which users can send_mail?
* `/api/user_permissions/{KEY}/{user}` [GET]
Get all permissions that a user has
* `/api/role_permissions/{KEY}/{role}` [GET]
Get all permissions that a role has
* `/api/user_roles/{KEY}/{user}` [GET]
> Get roles that the user is assigned to
>
* `/api/roles/{KEY}` [GET]
> Get all available roles
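For example, the user-roles endpoint listed above can be queried with Requests, following the same pattern as the earlier examples (a minimal sketch, assuming the server from the examples above is running on 127.0.0.1:4000 with the same secret key; the exact response payload is not documented here, so we only print the raw body):
```
import requests

secret_key = "pleaSeDoN0tKillMyC_at"
auth_api = "http://127.0.0.1:4000/api"

# Which roles is jack assigned to?
response = requests.get(auth_api + '/user_roles/' + secret_key + '/jack')
print(response.status_code, response.text)
```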
Deployment[¶](#deployment)
---
Deploying Auth module in production environment is easy:
```
gunicorn auth:api
```
Dockerizing[¶](#dockerizing)
---
It’s simple:
```
docker build -t python/auth-server https://raw.githubusercontent.com/ourway/auth/master/Dockerfile
docker run --name=auth -e MONGO_HOST='192.168.99.100' -p 4000:4000 -d --restart=always --link=mongodb-server python/auth-server
```
Copyright[¶](#copyright)
---
* <NAME> [@](mailto:<EMAIL>)
Documentation[¶](#documentation)
---
Feel free to dig into source code. If you think you can improve the documentation, please do so and send me a pull request.
Unit Tests and Coverage[¶](#unit-tests-and-coverage)
---
I am trying to add tests as much as I can, but still there are areas that need improvement.
To DO[¶](#to-do)
---
* Add Authentication features
* Improve Code Coverage |
django-polymorphic | readthedoc | Python | django-polymorphic 3.1 documentation
[django-polymorphic](index.html#document-index)
---
Welcome to django-polymorphic’s documentation![¶](#welcome-to-django-polymorphic-s-documentation)
===
Django-polymorphic builds on top of the standard Django model inheritance.
It makes using inherited models easier. When a query is made at the base model,
the inherited model classes are returned.
When we store models that inherit from a `Project` model…
```
>>> Project.objects.create(topic="Department Party")
>>> ArtProject.objects.create(topic="Painting with Tim", artist="<NAME>")
>>> ResearchProject.objects.create(topic="Swallow Aerodynamics", supervisor="Dr. Winter")
```
…and want to retrieve all our projects, the subclassed models are returned!
```
>>> Project.objects.all()
[ <Project: id 1, topic "Department Party">,
<ArtProject: id 2, topic "Painting with Tim", artist "<NAME>">,
<ResearchProject: id 3, topic "Swallow Aerodynamics", supervisor "Dr. Winter"> ]
```
Using vanilla Django, we get the base class objects, which is rarely what we wanted:
```
>>> Project.objects.all()
[ <Project: id 1, topic "Department Party">,
<Project: id 2, topic "Painting with Tim">,
<Project: id 3, topic "Swallow Aerodynamics"> ]
```
Features[¶](#features)
---
* Full admin integration.
* ORM integration:
> * Support for ForeignKey, ManyToManyField, OneToOneField descriptors.
> * Support for proxy models.
> * Filtering/ordering of inherited models (`ArtProject___artist`).
> * Filtering model types: `instance_of(...)` and `not_instance_of(...)`
> * Combining querysets of different models (`qs3 = qs1 | qs2`)
> * Support for custom user-defined managers.
* Formset support.
* Uses the minimum amount of queries needed to fetch the inherited models.
* Disabling polymorphic behavior when needed.
Getting started[¶](#getting-started)
---
### Quickstart[¶](#quickstart)
Install the project using:
```
pip install django-polymorphic
```
Update the settings file:
```
INSTALLED_APPS += (
'polymorphic',
'django.contrib.contenttypes',
)
```
The current release of *django-polymorphic* supports Django 2.2 - 4.0 and Python 3.6+.
#### Making Your Models Polymorphic[¶](#making-your-models-polymorphic)
Use `PolymorphicModel` instead of Django’s `models.Model`, like so:
```
from polymorphic.models import PolymorphicModel
class Project(PolymorphicModel):
topic = models.CharField(max_length=30)
class ArtProject(Project):
artist = models.CharField(max_length=30)
class ResearchProject(Project):
supervisor = models.CharField(max_length=30)
```
All models inheriting from your polymorphic models will be polymorphic as well.
#### Using Polymorphic Models[¶](#using-polymorphic-models)
Create some objects:
```
>>> Project.objects.create(topic="Department Party")
>>> ArtProject.objects.create(topic="Painting with Tim", artist="T. Turner")
>>> ResearchProject.objects.create(topic="Swallow Aerodynamics", supervisor="Dr. Winter")
```
Get polymorphic query results:
```
>>> Project.objects.all()
[ <Project: id 1, topic "Department Party">,
<ArtProject: id 2, topic "Painting with Tim", artist "T. Turner">,
<ResearchProject: id 3, topic "Swallow Aerodynamics", supervisor "Dr. Winter"> ]
```
Use `instance_of` or `not_instance_of` for narrowing the result to specific subtypes:
```
>>> Project.objects.instance_of(ArtProject)
[ <ArtProject: id 2, topic "Painting with Tim", artist "T. Turner"> ]
```
```
>>> Project.objects.instance_of(ArtProject) | Project.objects.instance_of(ResearchProject)
[ <ArtProject: id 2, topic "Painting with Tim", artist "<NAME>">,
<ResearchProject: id 3, topic "Swallow Aerodynamics", supervisor "<NAME>"> ]
```
Polymorphic filtering: Get all projects where Mr. Turner is involved as an artist or supervisor (note the three underscores):
```
>>> Project.objects.filter(Q(ArtProject___artist='<NAME>') | Q(ResearchProject___supervisor='T. Turner'))
[ <ArtProject: id 2, topic "Painting with Tim", artist "<NAME>">,
<ResearchProject: id 4, topic "Color Use in Late Cubism", supervisor "<NAME>"> ]
```
This is basically all you need to know, as *django-polymorphic* mostly works fully automatic and just delivers the expected results.
Note: When using the `dumpdata` management command on polymorphic tables
(or any table that has a reference to [`ContentType`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/contenttypes/#django.contrib.contenttypes.models.ContentType)),
include the `--natural` flag in the arguments. This makes sure the
[`ContentType`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/contenttypes/#django.contrib.contenttypes.models.ContentType) models will be referenced by name instead of their primary key as that changes between Django instances.
Note
While *django-polymorphic* makes subclassed models easy to use in Django,
we still encourage using them with caution. Each subclassed model requires Django to perform an `INNER JOIN` to fetch the model fields from the database.
While taking this in mind, there are valid reasons for using subclassed models.
That’s what this library is designed for!
### Django admin integration[¶](#django-admin-integration)
Of course, it’s possible to register individual polymorphic models in the Django admin interface.
However, to use these models in a single cohesive interface, some extra base classes are available.
#### Setup[¶](#setup)
Both the parent model and child model need to have a `ModelAdmin` class.
The shared base model should use the [`PolymorphicParentModelAdmin`](index.html#polymorphic.admin.PolymorphicParentModelAdmin) as base class.
> * [`base_model`](index.html#polymorphic.admin.PolymorphicParentModelAdmin.base_model) should be set
> * [`child_models`](index.html#polymorphic.admin.PolymorphicParentModelAdmin.child_models) or
> [`get_child_models()`](index.html#polymorphic.admin.PolymorphicParentModelAdmin.get_child_models) should return an iterable of Model classes.
The admin class for every child model should inherit from [`PolymorphicChildModelAdmin`](index.html#polymorphic.admin.PolymorphicChildModelAdmin)
> * [`base_model`](index.html#polymorphic.admin.PolymorphicChildModelAdmin.base_model) should be set.
Although the child models are registered too, they won’t be shown in the admin index page.
This only happens when [`show_in_index`](index.html#polymorphic.admin.PolymorphicChildModelAdmin.show_in_index) is set to `True`.
##### Fieldset configuration[¶](#fieldset-configuration)
The parent admin is only used for the list display of models,
and for the edit/delete view of non-subclassed models.
All other model types are redirected to the edit/delete/history view of the child model admin.
Hence, the fieldset configuration should be placed on the child admin.
Tip
When the child admin is used as base class for various derived classes, avoid using the standard `ModelAdmin` attributes `form` and `fieldsets`.
Instead, use the `base_form` and `base_fieldsets` attributes.
This allows the [`PolymorphicChildModelAdmin`](index.html#polymorphic.admin.PolymorphicChildModelAdmin) class to detect any additional fields in case the child model is overwritten.
Changed in version 1.0: It’s now needed to register the child model classes too.
In *django-polymorphic* 0.9 and below, the `child_models` was a tuple of a `(Model, ChildModelAdmin)`.
The admin classes were registered in an internal class, and kept away from the main admin site.
This caused various subtle problems with the `ManyToManyField` and related field wrappers,
which are fixed by registering the child admin classes too. Note that they are hidden from the main view, unless [`show_in_index`](index.html#polymorphic.admin.PolymorphicChildModelAdmin.show_in_index) is set.
#### Example[¶](#example)
The models are taken from [Advanced features](index.html#advanced-features).
```
from django.contrib import admin
from polymorphic.admin import PolymorphicParentModelAdmin, PolymorphicChildModelAdmin, PolymorphicChildModelFilter
from .models import ModelA, ModelB, ModelC, StandardModel
class ModelAChildAdmin(PolymorphicChildModelAdmin):
""" Base admin class for all child models """
base_model = ModelA # Optional, explicitly set here.
# By using these `base_...` attributes instead of the regular ModelAdmin `form` and `fieldsets`,
# the additional fields of the child models are automatically added to the admin form.
base_form = ...
base_fieldsets = (
...
)
@admin.register(ModelB)
class ModelBAdmin(ModelAChildAdmin):
base_model = ModelB # Explicitly set here!
# define custom features here
@admin.register(ModelC)
class ModelCAdmin(ModelBAdmin):
base_model = ModelC # Explicitly set here!
show_in_index = True # makes child model admin visible in main admin site
# define custom features here
@admin.register(ModelA)
class ModelAParentAdmin(PolymorphicParentModelAdmin):
""" The parent model admin """
base_model = ModelA # Optional, explicitly set here.
child_models = (ModelB, ModelC)
list_filter = (PolymorphicChildModelFilter,) # This is optional.
```
#### Filtering child types[¶](#filtering-child-types)
Child model types can be filtered by adding a [`PolymorphicChildModelFilter`](index.html#polymorphic.admin.PolymorphicChildModelFilter)
to the `list_filter` attribute. See the example above.
#### Inline models[¶](#inline-models)
New in version 1.0.
Inline models are handled via a special [`StackedPolymorphicInline`](index.html#polymorphic.admin.StackedPolymorphicInline) class.
For models with a generic foreign key, there is a [`GenericStackedPolymorphicInline`](index.html#polymorphic.admin.GenericStackedPolymorphicInline) class available.
When the inline is included to a normal [`ModelAdmin`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/admin/#django.contrib.admin.ModelAdmin),
make sure the [`PolymorphicInlineSupportMixin`](index.html#polymorphic.admin.PolymorphicInlineSupportMixin) is included.
This is not needed when the admin inherits from the
[`PolymorphicParentModelAdmin`](index.html#polymorphic.admin.PolymorphicParentModelAdmin) /
[`PolymorphicChildModelAdmin`](index.html#polymorphic.admin.PolymorphicChildModelAdmin) classes.
In the following example, the `PaymentInline` supports several types.
These are defined as separate inline classes.
The child classes can be nested for clarity, but this is not a requirement.
```
from django.contrib import admin
from polymorphic.admin import PolymorphicInlineSupportMixin, StackedPolymorphicInline
from .models import Order, Payment, CreditCardPayment, BankPayment, SepaPayment
class PaymentInline(StackedPolymorphicInline):
"""
An inline for a polymorphic model.
The actual form appearance of each row is determined by
the child inline that corresponds with the actual model type.
"""
class CreditCardPaymentInline(StackedPolymorphicInline.Child):
model = CreditCardPayment
class BankPaymentInline(StackedPolymorphicInline.Child):
model = BankPayment
class SepaPaymentInline(StackedPolymorphicInline.Child):
model = SepaPayment
model = Payment
child_inlines = (
CreditCardPaymentInline,
BankPaymentInline,
SepaPaymentInline,
)
@admin.register(Order)
class OrderAdmin(PolymorphicInlineSupportMixin, admin.ModelAdmin):
"""
Admin for orders.
The inline is polymorphic.
To make sure the inlines are properly handled,
the ``PolymorphicInlineSupportMixin`` is needed to
"""
inlines = (PaymentInline,)
```
##### Using polymorphic models in standard inlines[¶](#using-polymorphic-models-in-standard-inlines)
To add a polymorphic child model as an Inline for another model, add a field to the inline’s `readonly_fields` list formed by the lowercased name of the polymorphic parent model with the string `_ptr` appended to it.
Otherwise, trying to save that model in the admin will raise an AttributeError with the message “can’t set attribute”.
```
from django.contrib import admin
from .models import ModelB, StandardModel
class ModelBInline(admin.StackedInline):
model = ModelB
fk_name = 'modelb'
readonly_fields = ['modela_ptr']
@admin.register(StandardModel)
class StandardModelAdmin(admin.ModelAdmin):
inlines = [ModelBInline]
```
#### Internal details[¶](#internal-details)
The polymorphic admin interface works in a simple way:
* The add screen gains an additional step where the desired child model is selected.
* The edit screen displays the admin interface of the child model.
* The list screen still displays all objects of the base class.
The polymorphic admin is implemented via a parent admin that redirects the *edit* and *delete* views to the `ModelAdmin` of the derived child model. The *list* page is still implemented by the parent model admin.
##### The parent model[¶](#the-parent-model)
The parent model needs to inherit [`PolymorphicParentModelAdmin`](index.html#polymorphic.admin.PolymorphicParentModelAdmin), and implement the following:
> * [`base_model`](index.html#polymorphic.admin.PolymorphicParentModelAdmin.base_model) should be set
> * [`child_models`](index.html#polymorphic.admin.PolymorphicParentModelAdmin.child_models) or
> [`get_child_models()`](index.html#polymorphic.admin.PolymorphicParentModelAdmin.get_child_models) should return an iterable of Model classes.
The exact implementation can depend on the way your module is structured.
For simple inheritance situations, `child_models` is the best solution.
For large applications, `get_child_models()` can be used to query a plugin registration system.
By default, the non_polymorphic() method will be called on the queryset, so only the Parent model will be provided to the list template. This is to avoid the performance hit of retrieving child models.
This can be controlled by setting the `polymorphic_list` property on the parent admin. Setting it to True will provide child models to the list template.
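For example, enabling it on the parent admin from the example above looks like this (a minimal sketch; whether to enable it depends on whether the extra queries for the child models are acceptable on the change list):
```
@admin.register(ModelA)
class ModelAParentAdmin(PolymorphicParentModelAdmin):
    base_model = ModelA
    child_models = (ModelB, ModelC)
    polymorphic_list = True  # provide child instances to the list template
```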
If you use other applications such as [django-reversion](https://github.com/etianen/django-reversion) or [django-mptt](https://github.com/django-mptt/django-mptt), please check the Third-party applications support section.
Note: If you are using non-integer primary keys in your model, you have to edit `pk_regex`,
for example `pk_regex = '([\w-]+)'` if you use UUIDs. Otherwise you cannot change model entries.
##### The child models[¶](#the-child-models)
The admin interface of the derived models should inherit from [`PolymorphicChildModelAdmin`](index.html#polymorphic.admin.PolymorphicChildModelAdmin).
Again, [`base_model`](index.html#polymorphic.admin.PolymorphicChildModelAdmin.base_model) should be set in this class as well.
This class implements the following features:
* It corrects the breadcrumbs in the admin pages.
* It extends the template lookup paths, to look for both the parent model and child model in the `admin/app/model/change_form.html` path.
* It allows to set [`base_form`](index.html#polymorphic.admin.PolymorphicChildModelAdmin.base_form) so the derived class will automatically include other fields in the form.
* It allows to set [`base_fieldsets`](index.html#polymorphic.admin.PolymorphicChildModelAdmin.base_fieldsets) so the derived class will automatically display any extra fields.
* Although it must be registered with admin site, by default it’s hidden from admin site index page.
This can be overriden by adding [`show_in_index`](index.html#polymorphic.admin.PolymorphicChildModelAdmin.show_in_index) = `True` in admin class.
### Performance Considerations[¶](#performance-considerations)
Usually, when Django users create their own polymorphic ad-hoc solution without a tool like *django-polymorphic*, this usually results in a variation of
```
result_objects = [ o.get_real_instance() for o in BaseModel.objects.filter(...) ]
```
which has very bad performance, as it introduces one additional SQL query for every object in the result which is not of class `BaseModel`.
Compared to these solutions, *django-polymorphic* has the advantage that it only needs 1 SQL query *per object type*, and not *per object*.
The current implementation does not use any custom SQL or Django DB layer internals - it is purely based on the standard Django ORM. Specifically, the query:
```
result_objects = list( ModelA.objects.filter(...) )
```
performs one SQL query to retrieve `ModelA` objects and one additional query for each unique derived class occurring in result_objects.
The best case for retrieving 100 objects is 1 SQL query if all are class `ModelA`. If 50 objects are `ModelA` and 50 are `ModelB`, then two queries are executed. The pathological worst case is 101 db queries if result_objects contains 100 different object types (with all of them subclasses of `ModelA`).
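When the subclass fields are not needed, for example for a plain listing, the per-type follow-up queries can be avoided by switching the polymorphic behaviour off for that query (a minimal sketch using the ModelA example models; `non_polymorphic()` is the queryset method also mentioned in the admin internals above):
```
# Single query: returns plain ModelA instances, without upcasting to ModelB/ModelC.
base_rows = ModelA.objects.non_polymorphic().filter(field1__startswith='A')
```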
#### ContentType retrieval[¶](#contenttype-retrieval)
When fetching the [`ContentType`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/contenttypes/#django.contrib.contenttypes.models.ContentType) class,
it’s tempting to read the `object.polymorphic_ctype` field directly.
However, this performs an additional query via the [`ForeignKey`](https://docs.djangoproject.com/en/4.0/_objects/ref/models/fields/#django.db.models.ForeignKey) object to fetch the [`ContentType`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/contenttypes/#django.contrib.contenttypes.models.ContentType).
Instead, use:
```
from django.contrib.contenttypes.models import ContentType
ctype = ContentType.objects.get_for_id(object.polymorphic_ctype_id)
```
This uses the [`get_for_id()`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/contenttypes/#django.contrib.contenttypes.models.ContentTypeManager.get_for_id) function which caches the results internally.
#### Database notes[¶](#database-notes)
Current relational DBM systems seem to have general problems with the SQL queries produced by object relational mappers like the Django ORM, if these use multi-table inheritance like Django’s ORM does.
The “inner joins” in these queries can perform very badly.
This is independent of django_polymorphic and affects all uses of multi table Model inheritance.
Please also see this [post (and comments) from <NAME>](http://jacobian.org/writing/concrete-inheritance/).
### Third-party applications support[¶](#third-party-applications-support)
#### django-guardian support[¶](#django-guardian-support)
New in version 1.0.2.
You can configure [django-guardian](https://github.com/django-guardian/django-guardian) to use the base model for object level permissions.
Add this option to your settings:
```
GUARDIAN_GET_CONTENT_TYPE = 'polymorphic.contrib.guardian.get_polymorphic_base_content_type'
```
This option requires [django-guardian](https://github.com/django-guardian/django-guardian) >= 1.4.6. Details about how this option works are available in the
[django-guardian documentation](https://django-guardian.readthedocs.io/en/latest/configuration.html#guardian-get-content-type).
#### django-rest-framework support[¶](#django-rest-framework-support)
The [django-rest-polymorphic](https://github.com/apirobot/django-rest-polymorphic) package provides polymorphic serializers that help you integrate your polymorphic models with django-rest-framework.
##### Example[¶](#example)
Define serializers:
```
from rest_framework import serializers
from rest_polymorphic.serializers import PolymorphicSerializer
from .models import Project, ArtProject, ResearchProject
class ProjectSerializer(serializers.ModelSerializer):
class Meta:
model = Project
fields = ('topic', )
class ArtProjectSerializer(serializers.ModelSerializer):
class Meta:
model = ArtProject
fields = ('topic', 'artist')
class ResearchProjectSerializer(serializers.ModelSerializer):
class Meta:
model = ResearchProject
fields = ('topic', 'supervisor')
class ProjectPolymorphicSerializer(PolymorphicSerializer):
model_serializer_mapping = {
Project: ProjectSerializer,
ArtProject: ArtProjectSerializer,
ResearchProject: ResearchProjectSerializer
}
```
Create viewset with serializer_class equals to your polymorphic serializer:
```
from rest_framework import viewsets
from .models import Project
from .serializers import ProjectPolymorphicSerializer
class ProjectViewSet(viewsets.ModelViewSet):
queryset = Project.objects.all()
serializer_class = ProjectPolymorphicSerializer
```
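To expose the viewset, ordinary django-rest-framework routing applies; nothing polymorphic-specific is needed (a sketch, the URL prefix is an arbitrary choice):
```
from rest_framework.routers import DefaultRouter
from .views import ProjectViewSet

router = DefaultRouter()
router.register(r'projects', ProjectViewSet, basename='project')
urlpatterns = router.urls
```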
#### django-extra-views[¶](#django-extra-views)
New in version 1.1.
The [`polymorphic.contrib.extra_views`](index.html#module-polymorphic.contrib.extra_views) package provides classes to display polymorphic formsets using the classes from [django-extra-views](https://github.com/AndrewIngram/django-extra-views). See the documentation of:
* [`PolymorphicFormSetView`](index.html#polymorphic.contrib.extra_views.PolymorphicFormSetView)
* [`PolymorphicInlineFormSetView`](index.html#polymorphic.contrib.extra_views.PolymorphicInlineFormSetView)
* [`PolymorphicInlineFormSet`](index.html#polymorphic.contrib.extra_views.PolymorphicInlineFormSet)
#### django-mptt support[¶](#django-mptt-support)
Combining polymorphic with [django-mptt](https://github.com/django-mptt/django-mptt) is certainly possible, but not straightforward.
It involves combining both managers, querysets, models, meta-classes and admin classes using multiple inheritance.
The [django-polymorphic-tree](https://github.com/django-polymorphic/django-polymorphic-tree) package provides this out of the box.
#### django-reversion support[¶](#django-reversion-support)
Support for [django-reversion](https://github.com/etianen/django-reversion) works as expected with polymorphic models.
However, they require more setup than standard models, because:
* Manually register the child models with [django-reversion](https://github.com/etianen/django-reversion), so their `follow` parameter can be set.
* Polymorphic models use [multi-table inheritance](https://docs.djangoproject.com/en/dev/topics/db/models/#multi-table-inheritance).
See the [reversion documentation](https://django-reversion.readthedocs.io/en/latest/api.html#multi-table-inheritance)
how to deal with this by adding a `follow` field for the primary key.
* Both admin classes redefine `object_history_template`.
##### Example[¶](#id1)
The admin [admin example](index.html#admin-example) becomes:
```
from django.contrib import admin
from polymorphic.admin import PolymorphicParentModelAdmin, PolymorphicChildModelAdmin
from reversion.admin import VersionAdmin
from reversion import revisions
from .models import ModelA, ModelB, ModelC
class ModelAChildAdmin(PolymorphicChildModelAdmin, VersionAdmin):
base_model = ModelA # optional, explicitly set here.
base_form = ...
base_fieldsets = (
...
)
class ModelBAdmin(ModelAChildAdmin, VersionAdmin):
    # define custom features here
    pass
class ModelCAdmin(ModelBAdmin):
    # define custom features here
    pass
class ModelAParentAdmin(VersionAdmin, PolymorphicParentModelAdmin):
base_model = ModelA # optional, explicitly set here.
child_models = (
(ModelB, ModelBAdmin),
(ModelC, ModelCAdmin),
)
revisions.register(ModelB, follow=['modela_ptr'])
revisions.register(ModelC, follow=['modelb_ptr'])
admin.site.register(ModelA, ModelAParentAdmin)
```
Redefine a `admin/polymorphic/object_history.html` template, so it combines both worlds:
```
{% extends 'reversion/object_history.html' %}
{% load polymorphic_admin_tags %}
{% block breadcrumbs %}
{% breadcrumb_scope base_opts %}{{ block.super }}{% endbreadcrumb_scope %}
{% endblock %}
```
This makes sure both the reversion template is used, and the breadcrumb is corrected for the polymorphic model.
#### django-reversion-compare support[¶](#django-reversion-compare-support)
The [django-reversion-compare](https://github.com/jedie/django-reversion-compare) views work as expected, the admin requires a little tweak.
In your parent admin, include the following method:
```
def compare_view(self, request, object_id, extra_context=None):
"""Redirect the reversion-compare view to the child admin."""
real_admin = self._get_real_admin(object_id)
return real_admin.compare_view(request, object_id, extra_context=extra_context)
```
As the compare view resolves to the parent admin, it uses its base model to find revisions.
This doesn’t work, since it needs to look for revisions of the child model. Using this tweak,
the view of the actual child model is used, similar to the way the regular change and delete views are redirected.
Advanced topics[¶](#advanced-topics)
---
### Formsets[¶](#formsets)
New in version 1.0.
Polymorphic models can be used in formsets.
The implementation is almost identical to the regular Django formsets.
As extra parameter, the factory needs to know how to display the child models.
Provide a list of [`PolymorphicFormSetChild`](index.html#polymorphic.formsets.PolymorphicFormSetChild) objects for this.
```
from polymorphic.formsets import polymorphic_modelformset_factory, PolymorphicFormSetChild
ModelAFormSet = polymorphic_modelformset_factory(ModelA, formset_children=(
PolymorphicFormSetChild(ModelB),
PolymorphicFormSetChild(ModelC),
))
```
The formset can be used just like all other formsets:
```
if request.method == "POST":
formset = ModelAFormSet(request.POST, request.FILES, queryset=ModelA.objects.all())
if formset.is_valid():
formset.save()
else:
formset = ModelAFormSet(queryset=ModelA.objects.all())
```
Like standard Django formsets, there are 3 factory methods available:
* [`polymorphic_modelformset_factory()`](index.html#polymorphic.formsets.polymorphic_modelformset_factory) - create a regular model formset.
* [`polymorphic_inlineformset_factory()`](index.html#polymorphic.formsets.polymorphic_inlineformset_factory) - create an inline model formset.
* [`generic_polymorphic_inlineformset_factory()`](index.html#polymorphic.formsets.generic_polymorphic_inlineformset_factory) - create an inline formset for a generic foreign key.
Each one uses a different base class:
* [`BasePolymorphicModelFormSet`](index.html#polymorphic.formsets.BasePolymorphicModelFormSet)
* [`BasePolymorphicInlineFormSet`](index.html#polymorphic.formsets.BasePolymorphicInlineFormSet)
* [`BaseGenericPolymorphicInlineFormSet`](index.html#polymorphic.formsets.BaseGenericPolymorphicInlineFormSet)
When needed, the base class can be overwritten and provided to the factory via the `formset` parameter.
### Migrating existing models to polymorphic[¶](#migrating-existing-models-to-polymorphic)
Existing models can be migrated to become polymorphic models.
During the migration, the `polymorphic_ctype` field needs to be filled in.
This can be done in the following steps:
1. Inherit your model from [`PolymorphicModel`](index.html#polymorphic.models.PolymorphicModel).
2. Create a Django migration file to create the `polymorphic_ctype_id` database column.
3. Make sure the proper [`ContentType`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/contenttypes/#django.contrib.contenttypes.models.ContentType) value is filled in.
#### Filling the content type value[¶](#filling-the-content-type-value)
The following Python code can be used to fill the value of a model:
```
from django.contrib.contenttypes.models import ContentType
from myapp.models import MyModel
new_ct = ContentType.objects.get_for_model(MyModel)
MyModel.objects.filter(polymorphic_ctype__isnull=True).update(polymorphic_ctype=new_ct)
```
The creation and update of the `polymorphic_ctype_id` column can be included in a single Django migration. For example:
```
# -*- coding: utf-8 -*-
from django.db import migrations, models
def forwards_func(apps, schema_editor):
MyModel = apps.get_model('myapp', 'MyModel')
ContentType = apps.get_model('contenttypes', 'ContentType')
new_ct = ContentType.objects.get_for_model(MyModel)
MyModel.objects.filter(polymorphic_ctype__isnull=True).update(polymorphic_ctype=new_ct)
class Migration(migrations.Migration):
dependencies = [
('contenttypes', '0001_initial'),
('myapp', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='mymodel',
name='polymorphic_ctype',
field=models.ForeignKey(related_name='polymorphic_myapp.mymodel_set+', editable=False, to='contenttypes.ContentType', null=True),
),
migrations.RunPython(forwards_func, migrations.RunPython.noop),
]
```
It’s recommended to let `makemigrations` create the migration file,
and include the `RunPython` manually before running the migration.
New in version 1.1.
When the model is created elsewhere, you can also use the [`polymorphic.utils.reset_polymorphic_ctype()`](index.html#polymorphic.utils.reset_polymorphic_ctype) function:
```
from polymorphic.utils import reset_polymorphic_ctype
from myapp.models import Base, Sub1, Sub2
reset_polymorphic_ctype(Base, Sub1, Sub2)
reset_polymorphic_ctype(Base, Sub1, Sub2, ignore_existing=True)
```
### Custom Managers, Querysets & Manager Inheritance[¶](#custom-managers-querysets-manager-inheritance)
#### Using a Custom Manager[¶](#using-a-custom-manager)
A nice feature of Django is the possibility to define one’s own custom object managers.
This is fully supported with django_polymorphic: For creating a custom polymorphic manager class, just derive your manager from `PolymorphicManager` instead of
`models.Manager`. As with vanilla Django, in your model class, you should explicitly add the default manager first, and then your custom manager:
```
from polymorphic.models import PolymorphicModel
from polymorphic.managers import PolymorphicManager
class TimeOrderedManager(PolymorphicManager):
def get_queryset(self):
qs = super(TimeOrderedManager,self).get_queryset()
return qs.order_by('-start_date') # order the queryset
def most_recent(self):
qs = self.get_queryset() # get my ordered queryset
return qs[:10] # limit => get ten most recent entries
class Project(PolymorphicModel):
objects = PolymorphicManager() # add the default polymorphic manager first
objects_ordered = TimeOrderedManager() # then add your own manager
    start_date = models.DateTimeField() # project start is this date/time
```
The first manager defined (‘objects’ in the example) is used by Django as automatic manager for several purposes, including accessing related objects. It must not filter objects and it’s safest to use the plain `PolymorphicManager` here.
#### Manager Inheritance[¶](#manager-inheritance)
Polymorphic models inherit/propagate all managers from their base models, as long as these are polymorphic. This means that all managers defined in polymorphic base models continue to work as expected in models inheriting from this base model:
```
from django.db import models
from polymorphic.models import PolymorphicModel
from polymorphic.managers import PolymorphicManager
class TimeOrderedManager(PolymorphicManager):
def get_queryset(self):
qs = super(TimeOrderedManager,self).get_queryset()
return qs.order_by('-start_date') # order the queryset
def most_recent(self):
qs = self.get_queryset() # get my ordered queryset
return qs[:10] # limit => get ten most recent entries
class Project(PolymorphicModel):
objects = PolymorphicManager() # add the default polymorphic manager first
objects_ordered = TimeOrderedManager() # then add your own manager
    start_date = models.DateTimeField() # project start is this date/time
class ArtProject(Project): # inherit from Project, inheriting its fields and managers
artist = models.CharField(max_length=30)
```
ArtProject inherited the managers `objects` and `objects_ordered` from Project.
`ArtProject.objects_ordered.all()` will return all art projects ordered regarding their start time and `ArtProject.objects_ordered.most_recent()`
will return the ten most recent art projects.
#### Using a Custom Queryset Class[¶](#using-a-custom-queryset-class)
The `PolymorphicManager` class accepts one initialization argument,
which is the queryset class the manager should use. Just as with vanilla Django,
you may define your own custom queryset classes. Just use PolymorphicQuerySet instead of Django’s QuerySet as the base class:
```
from polymorphic.models import PolymorphicModel
from polymorphic.managers import PolymorphicManager
from polymorphic.query import PolymorphicQuerySet
class MyQuerySet(PolymorphicQuerySet):
def my_queryset_method(self):
...
class MyModel(PolymorphicModel):
my_objects = PolymorphicManager.from_queryset(MyQuerySet)()
...
```
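With a manager created through `from_queryset()`, the queryset methods also become available on the manager itself. A minimal usage sketch, assuming the `MyModel`/`MyQuerySet` definitions above:
```
# my_queryset_method() is copied onto the manager by from_queryset(),
# and also remains available after chaining other queryset calls.
MyModel.my_objects.my_queryset_method()
MyModel.my_objects.filter(pk__gt=0).my_queryset_method()
```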
### Advanced features[¶](#advanced-features)
In the examples below, these models are being used:
```
from django.db import models
from polymorphic.models import PolymorphicModel
class ModelA(PolymorphicModel):
field1 = models.CharField(max_length=10)
class ModelB(ModelA):
field2 = models.CharField(max_length=10)
class ModelC(ModelB):
field3 = models.CharField(max_length=10)
```
#### Filtering for classes (equivalent to python’s isinstance() ):[¶](#filtering-for-classes-equivalent-to-python-s-isinstance)
```
>>> ModelA.objects.instance_of(ModelB)
.
[ <ModelB: id 2, field1 (CharField), field2 (CharField)>,
<ModelC: id 3, field1 (CharField), field2 (CharField), field3 (CharField)> ]
```
In general, including or excluding parts of the inheritance tree:
```
ModelA.objects.instance_of(ModelB [, ModelC ...])
ModelA.objects.not_instance_of(ModelB [, ModelC ...])
```
You can also use this feature in Q-objects (with the same result as above):
```
>>> ModelA.objects.filter( Q(instance_of=ModelB) )
```
#### Polymorphic filtering (for fields in inherited classes)[¶](#polymorphic-filtering-for-fields-in-inherited-classes)
For example, cherrypicking objects from multiple derived classes anywhere in the inheritance tree, using Q objects (with the syntax: `exact model name + three _ + field name`):
```
>>> ModelA.objects.filter( Q(ModelB___field2 = 'B2') | Q(ModelC___field3 = 'C3') )
.
[ <ModelB: id 2, field1 (CharField), field2 (CharField)>,
<ModelC: id 3, field1 (CharField), field2 (CharField), field3 (CharField)> ]
```
#### Combining Querysets[¶](#combining-querysets)
Querysets could now be regarded as object containers that allow the aggregation of different object types, very similar to python lists - as long as the objects are accessed through the manager of a common base class:
```
>>> Base.objects.instance_of(ModelX) | Base.objects.instance_of(ModelY)
.
[ <ModelX: id 1, field_x (CharField)>,
<ModelY: id 2, field_y (CharField)> ]
```
#### ManyToManyField, ForeignKey, OneToOneField[¶](#manytomanyfield-foreignkey-onetoonefield)
Relationship fields referring to polymorphic models work as expected: like polymorphic querysets, they always return the referred objects with the same type/class they were created and saved as.
E.g., if in your model you define:
```
field1 = OneToOneField(ModelA)
```
then field1 may now also refer to objects of type `ModelB` or `ModelC`.
A ManyToManyField example:
```
# The model holding the relation may be any kind of model, polymorphic or not
class RelatingModel(models.Model):
many2many = models.ManyToManyField('ModelA') # ManyToMany relation to a polymorphic model
>>> o=RelatingModel.objects.create()
>>> o.many2many.add(ModelA.objects.get(id=1))
>>> o.many2many.add(ModelB.objects.get(id=2))
>>> o.many2many.add(ModelC.objects.get(id=3))
>>> o.many2many.all()
[ <ModelA: id 1, field1 (CharField)>,
<ModelB: id 2, field1 (CharField), field2 (CharField)>,
<ModelC: id 3, field1 (CharField), field2 (CharField), field3 (CharField)> ]
```
#### Copying Polymorphic objects[¶](#copying-polymorphic-objects)
When creating a copy of a polymorphic object, both the
`.id` and the `.pk` of the object need to be set to `None` before saving so that both the base table and the derived table will be updated to the new object:
```
>>> o = ModelB.objects.first()
>>> o.field1 = 'new val' # leave field2 unchanged
>>> o.pk = None
>>> o.id = None
>>> o.save()
```
#### Using Third Party Models (without modifying them)[¶](#using-third-party-models-without-modifying-them)
Third party models can be used as polymorphic models without restrictions by subclassing them. E.g. using a third party model as the root of a polymorphic inheritance tree:
```
from polymorphic.models import PolymorphicModel
from thirdparty import ThirdPartyModel
class MyThirdPartyBaseModel(PolymorphicModel, ThirdPartyModel):
pass # or add fields
```
Or instead integrating the third party model anywhere into an existing polymorphic inheritance tree:
```
class MyBaseModel(SomePolymorphicModel):
my_field = models.CharField(max_length=10)
class MyModelWithThirdParty(MyBaseModel, ThirdPartyModel):
pass # or add fields
```
#### Non-Polymorphic Queries[¶](#non-polymorphic-queries)
If you insert `.non_polymorphic()` anywhere into the query chain, then django_polymorphic will simply leave out the final step of retrieving the real objects, and the manager/queryset will return objects of the type of the base class you used for the query, like vanilla Django would
(`ModelA` in this example).
```
>>> qs=ModelA.objects.non_polymorphic().all()
>>> qs
[ <ModelA: id 1, field1 (CharField)>,
<ModelA: id 2, field1 (CharField)>,
<ModelA: id 3, field1 (CharField)> ]
```
There are no other changes in the behaviour of the queryset. For example,
enhancements for `filter()` or `instance_of()` etc. still work as expected.
If you do the final step yourself, you get the usual polymorphic result:
```
>>> ModelA.objects.get_real_instances(qs)
[ <ModelA: id 1, field1 (CharField)>,
<ModelB: id 2, field1 (CharField), field2 (CharField)>,
<ModelC: id 3, field1 (CharField), field2 (CharField), field3 (CharField)> ]
```
#### About Queryset Methods[¶](#about-queryset-methods)
* `annotate()` and `aggregate()` work just as usual, with the addition that the `ModelX___field` syntax can be used for the keyword arguments (but not for the non-keyword arguments).
* `order_by()` similarly supports the `ModelX___field` syntax for specifying ordering through a field in a submodel (see the sketch after this list).
* `distinct()` works as expected. It only regards the fields of the base class, but this should never make a difference.
* `select_related()` works just as usual, but it can not (yet) be used to select relations in inherited models
(like `ModelA.objects.select_related('ModelC___fieldxy')` )
* `extra()` works as expected (it returns polymorphic results) but currently has one restriction: The resulting objects are required to have a unique primary key within the result set - otherwise an error is thrown
(this case could be made to work, however it may be mostly unneeded).
The keyword-argument “polymorphic” is no longer supported.
You can get back the old non-polymorphic behaviour by using `ModelA.objects.non_polymorphic().extra(...)`.
* `get_real_instances()` allows you to turn a queryset or list of base model objects efficiently into the real objects.
For example, you could do `base_objects_queryset=ModelA.extra(...).non_polymorphic()`
and then call `real_objects=base_objects_queryset.get_real_instances()`. Or alternatively,
`real_objects=ModelA.objects.get_real_instances(base_objects_queryset_or_object_list)`.
* `values()` & `values_list()` currently do not return polymorphic results. This may change in the future however. If you want to use these methods now, it’s best if you use `Model.base_objects.values...` as this is guaranteed to not change.
* `defer()` and `only()` work as expected. On Django 1.5+ they support the `ModelX___field` syntax, but on Django 1.4 it is only possible to pass fields on the base model into these methods.
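A short sketch of the `ModelX___field` keyword syntax mentioned in the bullets above, using the `ModelA`/`ModelB`/`ModelC` example models (illustrative only; the exact behaviour may vary between versions):
```
from django.db.models import Count

# ordering through a field that only exists on the ModelC subclass
ModelA.objects.order_by('ModelC___field3')

# the same syntax inside keyword arguments of aggregate()/annotate()
ModelA.objects.aggregate(field3_count=Count('ModelC___field3'))

# values()/values_list() are not polymorphic; base_objects makes that explicit
ModelA.base_objects.values('id', 'field1')
```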
#### Using enhanced Q-objects in any Places[¶](#using-enhanced-q-objects-in-any-places)
The queryset enhancements (e.g. `instance_of`) only work as arguments to the member functions of a polymorphic queryset. Occasionally it may be useful to be able to use Q objects with these enhancements in other places.
As Django doesn’t understand these enhanced Q objects, you need to transform them manually into normal Q objects before you can feed them to a Django queryset or function:
```
normal_q_object = ModelA.translate_polymorphic_Q_object( Q(instance_of=ModelB) )
```
This function cannot be used at model creation time however (in models.py),
as it may need to access the ContentTypes database table.
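For illustration, a small sketch of the transformation described above (reusing the `ModelA`/`ModelB` example models):
```
from django.db.models import Q

# translate the enhanced Q object into one that plain Django understands ...
normal_q_object = ModelA.translate_polymorphic_Q_object(Q(instance_of=ModelB))

# ... so it can be used in places that are not polymorphic-aware,
# for example a non-polymorphic queryset on the same model
ModelA.objects.non_polymorphic().filter(normal_q_object)
```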
#### Nicely Displaying Polymorphic Querysets[¶](#nicely-displaying-polymorphic-querysets)
In order to get the output as seen in all examples here, you need to use the
`ShowFieldType` class mixin:
```
from django.db import models
from polymorphic.models import PolymorphicModel
from polymorphic.showfields import ShowFieldType
class ModelA(ShowFieldType, PolymorphicModel):
field1 = models.CharField(max_length=10)
```
You may also use `ShowFieldContent`
or `ShowFieldTypeAndContent` to display additional information when printing querysets (or converting them to text).
When showing field contents, they will be truncated to 20 characters. You can modify this behaviour by setting a class variable in your model like this:
```
class ModelA(ShowFieldType, PolymorphicModel):
polymorphic_showfield_max_field_width = 20
...
```
Similarly, pre-V1.0 output formatting can be reinstated by using
`polymorphic_showfield_old_format = True`.
#### Restrictions & Caveats[¶](#restrictions-caveats)
* Database Performance regarding concrete Model inheritance in general.
Please see the [Performance Considerations](index.html#performance).
* Queryset methods `values()`, `values_list()`, and `select_related()`
are not yet fully supported (see above). `extra()` has one restriction:
the resulting objects are required to have a unique primary key within the result set.
* Diamond shaped inheritance: There seems to be a general problem with diamond shaped multiple model inheritance with Django models
(tested with V1.1 - V1.3).
An example is here: <http://code.djangoproject.com/ticket/10808>.
This problem is aggravated when trying to enhance models.Model by subclassing it instead of modifying Django core (as we do here with PolymorphicModel).
* The enhanced filter-definitions/Q-objects only work as arguments for the methods of the polymorphic querysets. Please see above for `translate_polymorphic_Q_object`.
* When using the `dumpdata` management command on polymorphic tables
(or any table that has a reference to
[`ContentType`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/contenttypes/#django.contrib.contenttypes.models.ContentType)),
include the `--natural` flag in the arguments.
### Changelog[¶](#changelog)
#### Changes in 3.1 (2021-11-18)[¶](#changes-in-3-1-2021-11-18)
* Added support for Django 4.0.
* Fixed crash when the admin “add type” view has no choices; will show a permission denied.
* Fixed missing `locale` folder in sdist.
* Fixed missing `QuerySet.bulk_create(.., ignore_conflicts=True)` parameter support.
* Fixed `FilteredRelation` support.
* Fixed supporting class keyword arguments in model definitions for `__init_subclass__()`.
* Fixed including `polymorphic.tests.migrations` in the sdist.
* Fixed non-polymorphic parent handling, which has no `_base_objects`.
* Fixed missing `widgets` support for `modelform_factory()`.
* Fixed `has_changed` handling for `polymorphic_ctype_id` due to implicit str to int conversions.
* Fixed `Q` object handling when lists are used (e.g. in [django-advanced-filters](https://github.com/modlinltd/django-advanced-filters)).
* Fixed Django Admin support when using a script-prefix.
Many thanks to everyone providing clear pull requests!
#### Changes in 3.0.0 (2020-08-21)[¶](#changes-in-3-0-0-2020-08-21)
* Support for Django 3.X
* Dropped support for python 2.X
* A lot of various fixes and improvements by various authors. Thanks a lot!
#### Changes in 2.1.2 (2019-07-15)[¶](#changes-in-2-1-2-2019-07-15)
* Fix `PolymorphicInlineModelAdmin` media jQuery include for Django 2.0+
#### Changes in 2.1.1 (2019-07-15)[¶](#changes-in-2-1-1-2019-07-15)
* Fixed admin import error due to `isort` changes.
#### Changes in 2.1 (2019-07-15)[¶](#changes-in-2-1-2019-07-15)
* Added Django 2.2 support.
* Changed `.non_polymorphic()` to use a different iterable class that completely circumvents the polymorphic machinery.
* Changed SQL for `instance_of` filter: use `IN` statement instead of `OR` clauses.
* Changed queryset iteration to implement `prefetch_related()` support.
* Fixed Django 3.0 alpha compatibility.
* Fixed compatibility with current [django-extra-views](https://github.com/AndrewIngram/django-extra-views) in `polymorphic.contrib.extra_views`.
* Fixed `prefetch_related()` support on polymorphic M2M relations.
* Fixed model subclass `___` selector for abstract/proxy models.
* Fixed model subclass `___` selector for models with a custom `OneToOneField(parent_link=True)`.
* Fixed unwanted results on calling `queryset.get_real_instances([])`.
* Fixed unwanted `TypeError` exception when `PolymorphicTypeInvalid` should have raised.
* Fixed hiding the add-button of polymorphic lines in the Django admin.
* Reformatted all files with black
#### Changes in 2.0.3 (2018-08-24)[¶](#changes-in-2-0-3-2018-08-24)
* Fixed admin crash for Django 2.1 with missing `use_required_attribute`.
#### Changes in 2.0.2 (2018-02-05)[¶](#changes-in-2-0-2-2018-02-05)
* Fixed manager inheritance behavior for Django 1.11, by automatically enabling `Meta.manager_inheritance_from_future` if it’s not defined.
This restores the manager inheritance behavior that *django-polymorphic 1.3* provided for Django 1.x projects.
* Fixed internal `base_objects` usage.
#### Changes in 2.0.1 (2018-02-05)[¶](#changes-in-2-0-1-2018-02-05)
* Fixed manager inheritance detection for Django 1.11.
It’s recommended to use `Meta.manager_inheritance_from_future` so Django 1.x code also inherit the `PolymorphicManager` in all subclasses. Django 2.0 already does this by default.
* Deprecated the `base_objects` manager. Use `objects.non_polymorphic()` instead.
* Optimized detection for dumpdata behavior, avoiding the performance hit of `__getattribute__()`.
* Fixed test management commands
#### Changes in 2.0 (2018-01-22)[¶](#changes-in-2-0-2018-01-22)
* **BACKWARDS INCOMPATIBILITY:** Dropped Django 1.8 and 1.10 support.
* **BACKWARDS INCOMPATIBILITY:** Removed old deprecated code from 1.0, thus:
> * Import managers from `polymorphic.managers` (plural), not `polymorphic.manager`.
> * Register child models to the admin as well using `@admin.register()` or `admin.site.register()`,
> as this is no longer done automatically.
* Added Django 2.0 support.
Also backported into 1.3.1:
* Added `PolymorphicTypeUndefined` exception for incomplete imported models.
When a data migration or import creates a polymorphic model,
the `polymorphic_ctype_id` field should be filled in manually too.
The `polymorphic.utils.reset_polymorphic_ctype` function can be used for that.
* Added `PolymorphicTypeInvalid` exception when database was incorrectly imported.
* Added `polymorphic.utils.get_base_polymorphic_model()` to find the base model for types.
* Using `base_model` on the polymorphic admins is no longer required, as this can be autodetected.
* Fixed manager errors for swappable models.
* Fixed `deleteText` of `|as_script_options` template filter.
* Fixed `.filter(applabel__ModelName___field=...)` lookups.
* Fixed proxy model support in formsets.
* Fixed error with .defer and child models that use the same parent.
* Fixed error message when `polymorphic_ctype_id` is null.
* Fixed fieldsets recursion in the admin.
* Improved `polymorphic.utils.reset_polymorphic_ctype()` to accept models in random ordering.
* Fix fieldsets handling in the admin (`declared_fieldsets` is removed since Django 1.9)
#### Version 1.3.1 (2018-04-16)[¶](#version-1-3-1-2018-04-16)
Backported various fixes from 2.x to support older Django versions:
* Added `PolymorphicTypeUndefined` exception for incomplete imported models.
When a data migration or import creates a polymorphic model,
the `polymorphic_ctype_id` field should be filled in manually too.
The `polymorphic.utils.reset_polymorphic_ctype` function can be used for that.
* Added `PolymorphicTypeInvalid` exception when database was incorrectly imported.
* Added `polymorphic.utils.get_base_polymorphic_model()` to find the base model for types.
* Using `base_model` on the polymorphic admins is no longer required, as this can be autodetected.
* Fixed manager errors for swappable models.
* Fixed `deleteText` of `|as_script_options` template filter.
* Fixed `.filter(applabel__ModelName___field=...)` lookups.
* Fixed proxy model support in formsets.
* Fixed error with .defer and child models that use the same parent.
* Fixed error message when `polymorphic_ctype_id` is null.
* Fixed fieldsets recursion in the admin.
* Improved `polymorphic.utils.reset_polymorphic_ctype()` to accept models in random ordering.
* Fix fieldsets handling in the admin (`declared_fieldsets` is removed since Django 1.9)
#### Version 1.3 (2017-08-01)[¶](#version-1-3-2017-08-01)
* **BACKWARDS INCOMPATIBILITY:** Dropped Django 1.4, 1.5, 1.6, 1.7, 1.9 and Python 2.6 support.
Only official Django releases (1.8, 1.10, 1.11) are supported now.
* Allow expressions to pass unchanged in `.order_by()`
* Fixed Django 1.11 accessor checks (to support subclasses of `ForwardManyToOneDescriptor`, like `ForwardOneToOneDescriptor`)
* Fixed polib syntax error messages in translations.
#### Version 1.2 (2017-05-01)[¶](#version-1-2-2017-05-01)
* Django 1.11 support.
* Fixed `PolymorphicInlineModelAdmin` to explicitly exclude `polymorphic_ctype`.
* Fixed Python 3 TypeError in the admin when preserving the query string.
* Fixed Python 3 issue due to `force_unicode()` usage instead of `force_text()`.
* Fixed `z-index` attribute for admin menu appearance.
#### Version 1.1 (2017-02-03)[¶](#version-1-1-2017-02-03)
* Added class based formset views in `polymorphic/contrib/extra_views`.
* Added helper function `polymorphic.utils.reset_polymorphic_ctype()`.
This eases the migration of existing models to polymorphic.
* Fixed Python 2.6 issue.
* Fixed Django 1.6 support.
#### Version 1.0.2 (2016-10-14)[¶](#version-1-0-2-2016-10-14)
* Added helper function for [django-guardian](https://github.com/django-guardian/django-guardian); add
`GUARDIAN_GET_CONTENT_TYPE = 'polymorphic.contrib.guardian.get_polymorphic_base_content_type'`
to the project settings to let guardian handle inherited models properly.
* Fixed `polymorphic_modelformset_factory()` usage.
* Fixed Python 3 bug for inline formsets.
* Fixed CSS for Grappelli, so model choice menu properly overlaps.
* Fixed `ParentAdminNotRegistered` exception for models that are registered via a proxy model instead of the real base model.
#### Version 1.0.1 (2016-09-11)[¶](#version-1-0-1-2016-09-11)
* Fixed compatibility with manager changes in Django 1.10.1
#### Version 1.0 (2016-09-02)[¶](#version-1-0-2016-09-02)
* Added Django 1.10 support.
* Added **admin inline** support for polymorphic models.
* Added **formset** support for polymorphic models.
* Added support for polymorphic queryset limiting effects on *proxy models*.
* Added support for multiple databases with the `.using()` method and `using=..` keyword argument.
* Fixed modifying passed `Q()` objects in place.
Note
This version provides a new method for registering the admin models.
While the old method is still supported, we recommend to upgrade your code.
The new registration style improves the compatibility in the Django admin.
* Register each `PolymorphicChildModelAdmin` with the admin site too.
* The `child_models` attribute of the `PolymorphicParentModelAdmin` should be a flat list of all child models.
The `(model, admin)` tuple is obsolete.
Also note that proxy models will now limit the queryset too.
##### Fixed since 1.0b1 (2016-08-10)[¶](#fixed-since-1-0b1-2016-08-10)
* Fix formset empty-form display when there are form errors.
* Fix formset empty-form hiding for [Grappelli](http://grappelliproject.com/).
* Fixed packing `admin/polymorphic/edit_inline/stacked.html` in the wheel format.
#### Version 0.9.2 (2016-05-04)[¶](#version-0-9-2-2016-05-04)
* Fix error when using `date_hierarchy` field in the admin
* Fixed Django 1.10 warning in admin add-type view.
#### Version 0.9.1 (2016-02-18)[¶](#version-0-9-1-2016-02-18)
* Fixed support for `PolymorphicManager.from_queryset()` for custom query sets.
* Fixed Django 1.7 `changeform_view()` redirection to the child admin site.
This fixes custom admin code that uses these views, such as [django-reversion](https://github.com/etianen/django-reversion)’s `revision_view()` / `recover_view()`.
* Fixed `.only('pk')` field support.
* Fixed `object_history_template` breadcrumb.
**NOTE:** when using [django-reversion](https://github.com/etianen/django-reversion) / [django-reversion-compare](https://github.com/jedie/django-reversion-compare), make sure to implement a `admin/polymorphic/object_history.html` template in your project that extends from `reversion/object_history.html` or `reversion-compare/object_history.html` respectively.
#### Version 0.9 (2016-02-17)[¶](#version-0-9-2016-02-17)
* Added `.only()` and `.defer()` support.
* Added support for Django 1.8 complex expressions in `.annotate()` / `.aggregate()`.
* Fix Django 1.9 handling of custom URLs.
The new change-URL redirect overlapped any custom URLs defined in the child admin.
* Fix Django 1.9 support in the admin.
* Fix setting an extra custom manager without overriding the `_default_manager`.
* Fix missing `history_view()` redirection to the child admin, which is important for [django-reversion](https://github.com/etianen/django-reversion) support.
See the documentation for hints for [django-reversion-compare support](index.html#django-reversion-compare-support).
#### Version 0.8.1 (2015-12-29)[¶](#version-0-8-1-2015-12-29)
* Fixed support for reverse relations for `relname___field` when the field starts with an `_` character.
Otherwise, the query will be interpreted as subclass lookup (`ClassName___field`).
#### Version 0.8 (2015-12-28)[¶](#version-0-8-2015-12-28)
* Added Django 1.9 compatibility.
* Renamed `polymorphic.manager` => `polymorphic.managers` for consistency.
* **BACKWARDS INCOMPATIBILITY:** The import paths have changed to support Django 1.9.
Instead of `from polymorphic import X`,
you’ll have to import from the proper package. For example:
```
from polymorphic.models import PolymorphicModel
from polymorphic.managers import PolymorphicManager, PolymorphicQuerySet
from polymorphic.showfields import ShowFieldContent, ShowFieldType, ShowFieldTypeAndContent
```
* **BACKWARDS INCOMPATIBILITY:** Removed `__version__.py` in favor of a standard `__version__` in `polymorphic/__init__.py`.
* **BACKWARDS INCOMPATIBILITY:** Removed automatic proxying of method calls to the queryset class.
Use the standard Django methods instead:
```
# In model code:
objects = PolymorphicQuerySet.as_manager()
# For manager code:
MyCustomManager = PolymorphicManager.from_queryset(MyCustomQuerySet)
```
#### Version 0.7.2 (2015-10-01)[¶](#version-0-7-2-2015-10-01)
* Added `queryset.as_manager()` support for Django 1.7/1.8
* Optimize model access for non-dumpdata usage; avoid `__getattribute__()` call each time to access the manager.
* Fixed 500 error when using invalid PK’s in the admin URL, return 404 instead.
* Fixed possible issues when using a custom `AdminSite` class for the parent object.
* Fixed Pickle exception when polymorphic model is cached.
#### Version 0.7.1 (2015-04-30)[¶](#version-0-7-1-2015-04-30)
* Fixed Django 1.8 support for related field widgets.
#### Version 0.7 (2015-04-08)[¶](#version-0-7-2015-04-08)
* Added Django 1.8 support
* Added support for custom primary key defined using `mybase_ptr = models.OneToOneField(BaseClass, parent_link=True, related_name="...")`.
* Fixed Python 3 issue in the admin
* Fixed `_default_manager` to be consistent with Django, it’s now assigned directly instead of using `add_to_class()`
* Fixed 500 error for admin URLs without a ‘/’, e.g. `admin/app/parentmodel/id`.
* Fixed preserved filter for Django admin in delete views
* Removed test noise for diamond inheritance problem (which Django 1.7 detects)
#### Version 0.6.1 (2014-12-30)[¶](#version-0-6-1-2014-12-30)
* Remove Django 1.7 warnings
* Fix Django 1.4/1.5 queryset calls on related objects for unknown methods.
The `RelatedManager` code overrides `get_query_set()` while `__getattr__()` used the new-style `get_queryset()`.
* Fix validate_model_fields(), caused errors when metaclass raises errors
#### Version 0.6 (2014-10-14)[¶](#version-0-6-2014-10-14)
* Added Django 1.7 support.
* Added permission check for all child types.
* **BACKWARDS INCOMPATIBILITY:** the `get_child_type_choices()` method receives 2 arguments now (request, action).
If you have overwritten this method in your code, make sure the method signature is updated accordingly.
#### Version 0.5.6 (2014-07-21)[¶](#version-0-5-6-2014-07-21)
* Added `pk_regex` to the `PolymorphicParentModelAdmin` to support non-integer primary keys.
* Fixed passing `?ct_id=` to the add view for Django 1.6 (fixes compatibility with [django-parler](https://github.com/django-parler/django-parler)).
#### Version 0.5.5 (2014-04-29)[¶](#version-0-5-5-2014-04-29)
* Fixed `get_real_instance_class()` for proxy models (broke in 0.5.4).
#### Version 0.5.4 (2014-04-09)[¶](#version-0-5-4-2014-04-09)
* Fix `.non_polymorphic()` to return a clone of the queryset, instead of affecting the existing queryset.
* Fix missing `alters_data = True` annotations on the overwritten `save()` methods.
* Fix infinite recursion bug in the admin with Django 1.6+
* Added detection of bad `ContentType` table data.
#### Version 0.5.3 (2013-09-17)[¶](#version-0-5-3-2013-09-17)
* Fix TypeError when `base_form` was not defined.
* Fix passing `/admin/app/model/id/XYZ` urls to the correct admin backend.
There is no need to include a `?ct_id=..` field, as the ID already provides enough information.
#### Version 0.5.2 (2013-09-05)[¶](#version-0-5-2-2013-09-05)
* Fix [Grappelli](http://grappelliproject.com/) breadcrumb support in the views.
* Fix unwanted `___` handling in the ORM when a field name starts with an underscore;
this detects you meant `relatedfield__ _underscorefield` instead of `ClassName___field`.
* Fix missing permission check in the “add type” view. This was caught however in the next step.
* Fix admin validation errors related to additional non-model form fields.
#### Version 0.5.1 (2013-07-05)[¶](#version-0-5-1-2013-07-05)
* Add Django 1.6 support.
* Fix [Grappelli](http://grappelliproject.com/) theme support in the “Add type” view.
#### Version 0.5 (2013-04-20)[¶](#version-0-5-2013-04-20)
* Add Python 3.2 and 3.3 support
* Fix errors with ContentType objects that don’t refer to an existing model.
#### Version 0.4.2 (2013-04-10)[¶](#version-0-4-2-2013-04-10)
* Used proper `__version__` marker.
#### Version 0.4.1 (2013-04-10)[¶](#version-0-4-1-2013-04-10)
* Add Django 1.5 and 1.6 support
* Add proxy model support
* Add default admin `list_filter` for polymorphic model type.
* Fix queryset support of related objects.
* Performed an overall cleanup of the project
* **Deprecated** the `queryset_class` argument of the `PolymorphicManager` constructor, use the class attribute instead.
* **Dropped** Django 1.1, 1.2 and 1.3 support
#### Version 0.4 (2013-03-25)[¶](#version-0-4-2013-03-25)
* Update example project for Django 1.4
* Added tox and Travis configuration
#### Version 0.3.1 (2013-02-28)[¶](#version-0-3-1-2013-02-28)
* SQL optimization, avoid query in pre_save_polymorphic()
#### Version 0.3 (2013-02-28)[¶](#version-0-3-2013-02-28)
Many changes to the codebase happened, but no new version was released to pypi for years.
0.3 contains fixes submitted by many contributors, huge thanks to everyone!
* Added a polymorphic admin interface.
* PEP8 and code cleanups by various authors
#### Version 0.2 (2011-04-27)[¶](#version-0-2-2011-04-27)
The 0.2 release serves as legacy release.
It supports Django 1.1 up till 1.4 and Python 2.4 up till 2.7.
For a detailed list of its changes, see the [archived changelog](index.html#document-changelog_archive).
### Contributing[¶](#contributing)
You can contribute to *django-polymorphic* by forking the code on GitHub:
> <https://github.com/django-polymorphic/django-polymorphic>
#### Running tests[¶](#running-tests)
We require features to be backed by a unit test.
This way, we can test *django-polymorphic* against new Django versions.
To run the included test suite, execute:
```
./runtests.py
```
To test support for multiple Python and Django versions, run tox from the repository root:
```
pip install tox
tox
```
The Python versions need to be installed on your system.
On Linux, download the versions at <http://www.python.org/download/releases/>.
On MacOS X, use [Homebrew](http://mxcl.github.io/homebrew/) to install other Python versions.
We currently support Python 3.5, 3.6, 3.7, and 3.8.
#### Example project[¶](#example-project)
The repository contains a complete Django project that may be used for tests or experiments,
without any installation needed.
The management command `pcmd.py` in the app `pexp` can be used for quick tests or experiments - modify this file (pexp/management/commands/pcmd.py) to your liking.
#### Supported Django versions[¶](#supported-django-versions)
The current release should be usable with the supported releases of Django;
the current stable release and the previous release. Supporting older Django versions is a nice-to-have feature, but not mandatory.
In case you need to use *django-polymorphic* with older Django versions,
consider installing a previous version.
### API Documentation[¶](#api-documentation)
#### polymorphic.admin[¶](#polymorphic-admin)
##### ModelAdmin classes[¶](#modeladmin-classes)
###### The `PolymorphicParentModelAdmin` class[¶](#the-polymorphicparentmodeladmin-class)
*class* `polymorphic.admin.``PolymorphicParentModelAdmin`(*model*, *admin_site*, **args*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin)
Bases: `django.contrib.admin.options.ModelAdmin`
An admin interface that displays different change/delete pages, depending on the polymorphic model.
To use this class, one attribute needs to be defined:
* [`child_models`](#polymorphic.admin.PolymorphicParentModelAdmin.child_models) should be a list of models.
Alternatively, the following methods can be implemented:
* [`get_child_models()`](#polymorphic.admin.PolymorphicParentModelAdmin.get_child_models) should return a list of models.
* optionally, [`get_child_type_choices()`](#polymorphic.admin.PolymorphicParentModelAdmin.get_child_type_choices) can be overwritten to refine the choices for the add dialog.
This class needs to be inherited by the model admin base class that is registered in the site.
The derived models should *not* register the ModelAdmin, but instead it should be returned by [`get_child_models()`](#polymorphic.admin.PolymorphicParentModelAdmin.get_child_models).
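A minimal registration sketch following the description above; it reuses the `Project`/`ArtProject` models from the manager examples, and the admin class names are hypothetical:
```
from django.contrib import admin
from polymorphic.admin import (
    PolymorphicChildModelAdmin,
    PolymorphicChildModelFilter,
    PolymorphicParentModelAdmin,
)
from myapp.models import ArtProject, Project

@admin.register(ArtProject)
class ArtProjectAdmin(PolymorphicChildModelAdmin):
    base_model = Project

@admin.register(Project)
class ProjectParentAdmin(PolymorphicParentModelAdmin):
    base_model = Project                          # optional; auto-detected
    child_models = (ArtProject,)                  # flat list of child models
    list_filter = (PolymorphicChildModelFilter,)  # optional type filter
```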
`add_type_form`[¶](#polymorphic.admin.PolymorphicParentModelAdmin.add_type_form)
alias of `polymorphic.admin.forms.PolymorphicModelChoiceForm`
`__init__`(*model*, *admin_site*, **args*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.__init__)
Initialize self. See help(type(self)) for accurate signature.
`add_type_view`(*request*, *form_url=''*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.add_type_view)
Display a choice form to select which page type to add.
`add_view`(*request*, *form_url=''*, *extra_context=None*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.add_view)
Redirect the add view to the real admin.
`change_view`(*request*, *object_id*, **args*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.change_view)
Redirect the change view to the real admin.
`changeform_view`(*request*, *object_id=None*, **args*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.changeform_view)
`delete_view`(*request*, *object_id*, *extra_context=None*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.delete_view)
Redirect the delete view to the real admin.
`get_child_models`()[¶](#polymorphic.admin.PolymorphicParentModelAdmin.get_child_models)
Return the derived model classes which this admin should handle.
This should return a list of tuples, exactly like [`child_models`](#polymorphic.admin.PolymorphicParentModelAdmin.child_models) is.
The model classes can be retrieved as `base_model.__subclasses__()`,
a setting in a config file, or a query of a plugin registration system, at your option.
`get_child_type_choices`(*request*, *action*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.get_child_type_choices)
Return a list of polymorphic types for which the user has the permission to perform the given action.
`get_preserved_filters`(*request*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.get_preserved_filters)
Return the preserved filters querystring.
`get_queryset`(*request*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.get_queryset)
Return a QuerySet of all model instances that can be edited by the admin site. This is used by changelist_view.
`get_urls`()[¶](#polymorphic.admin.PolymorphicParentModelAdmin.get_urls)
Expose the custom URLs for the subclasses and the URL resolver.
`history_view`(*request*, *object_id*, *extra_context=None*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.history_view)
Redirect the history view to the real admin.
`register_child`(*model*, *model_admin*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.register_child)
Register a model with admin to display.
`render_add_type_form`(*request*, *context*, *form_url=''*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.render_add_type_form)
Render the page type choice form.
`subclass_view`(*request*, *path*)[¶](#polymorphic.admin.PolymorphicParentModelAdmin.subclass_view)
Forward any request to a custom view of the real admin.
`add_type_template` *= None*[¶](#polymorphic.admin.PolymorphicParentModelAdmin.add_type_template)
`base_model` *= None*[¶](#polymorphic.admin.PolymorphicParentModelAdmin.base_model)
The base model that the class uses (auto-detected if not set explicitly)
`change_list_template`[¶](#polymorphic.admin.PolymorphicParentModelAdmin.change_list_template)
`child_models` *= None*[¶](#polymorphic.admin.PolymorphicParentModelAdmin.child_models)
The child models that should be displayed
`media`[¶](#polymorphic.admin.PolymorphicParentModelAdmin.media)
`pk_regex` *= '(\\d+|__fk__)'*[¶](#polymorphic.admin.PolymorphicParentModelAdmin.pk_regex)
The regular expression to filter the primary key in the URL.
This accepts only numbers as a defensive measure against catch-all URLs.
If your primary key consists of string values, update this regular expression.
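For instance, a hypothetical pattern for UUID-style primary keys (keeping the `__fk__` alternative from the default) might look like:
```
from polymorphic.admin import PolymorphicParentModelAdmin

class UUIDParentAdmin(PolymorphicParentModelAdmin):
    # hypothetical: accept hexadecimal digits and dashes in the PK part of the URL
    pk_regex = '([0-9a-f-]+|__fk__)'
```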
`polymorphic_list` *= False*[¶](#polymorphic.admin.PolymorphicParentModelAdmin.polymorphic_list)
Whether the list should be polymorphic too; leave it to `False` to optimize.
###### The `PolymorphicChildModelAdmin` class[¶](#the-polymorphicchildmodeladmin-class)
*class* `polymorphic.admin.``PolymorphicChildModelAdmin`(*model*, *admin_site*, **args*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicChildModelAdmin)
Bases: `django.contrib.admin.options.ModelAdmin`
The *optional* base class for the admin interface of derived models.
This base class defines some convenience behavior for the admin interface:
* It corrects the breadcrumbs in the admin pages.
* It adds the base model to the template lookup paths.
* It allows setting `base_form` so the derived class will automatically include other fields in the form.
* It allows setting `base_fieldsets` so the derived class will automatically display any extra fields (see the sketch after this list).
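A sketch of a child admin that uses `base_form` and `base_fieldsets` as described; it reuses the `Project` model from the manager examples, while the form class and fieldset contents are assumptions:
```
from django import forms
from polymorphic.admin import PolymorphicChildModelAdmin
from myapp.models import Project

class ProjectForm(forms.ModelForm):
    class Meta:
        model = Project
        fields = '__all__'

class ProjectChildAdmin(PolymorphicChildModelAdmin):
    base_model = Project
    base_form = ProjectForm    # subclass fields are appended to the form automatically
    base_fieldsets = (         # shared fieldsets; subclass fields get their own fieldset
        ('Project', {'fields': ('start_date',)}),
    )
```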
`__init__`(*model*, *admin_site*, **args*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicChildModelAdmin.__init__)
Initialize self. See help(type(self)) for accurate signature.
`delete_view`(*request*, *object_id*, *context=None*)[¶](#polymorphic.admin.PolymorphicChildModelAdmin.delete_view)
`get_base_fieldsets`(*request*, *obj=None*)[¶](#polymorphic.admin.PolymorphicChildModelAdmin.get_base_fieldsets)
`get_fieldsets`(*request*, *obj=None*)[¶](#polymorphic.admin.PolymorphicChildModelAdmin.get_fieldsets)
Hook for specifying fieldsets.
`get_form`(*request*, *obj=None*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicChildModelAdmin.get_form)
Return a Form class for use in the admin add view. This is used by add_view and change_view.
`get_model_perms`(*request*)[¶](#polymorphic.admin.PolymorphicChildModelAdmin.get_model_perms)
Return a dict of all perms for this model. This dict has the keys
`add`, `change`, `delete`, and `view` mapping to the True/False for each of those actions.
`get_subclass_fields`(*request*, *obj=None*)[¶](#polymorphic.admin.PolymorphicChildModelAdmin.get_subclass_fields)
`history_view`(*request*, *object_id*, *extra_context=None*)[¶](#polymorphic.admin.PolymorphicChildModelAdmin.history_view)
The ‘history’ admin view for this model.
`render_change_form`(*request*, *context*, *add=False*, *change=False*, *form_url=''*, *obj=None*)[¶](#polymorphic.admin.PolymorphicChildModelAdmin.render_change_form)
`response_post_save_add`(*request*, *obj*)[¶](#polymorphic.admin.PolymorphicChildModelAdmin.response_post_save_add)
Figure out where to redirect after the ‘Save’ button has been pressed when adding a new object.
`response_post_save_change`(*request*, *obj*)[¶](#polymorphic.admin.PolymorphicChildModelAdmin.response_post_save_change)
Figure out where to redirect after the ‘Save’ button has been pressed when editing an existing object.
`base_fieldsets` *= None*[¶](#polymorphic.admin.PolymorphicChildModelAdmin.base_fieldsets)
By setting `base_fieldsets` instead of `fieldsets`,
any subclass fields can be automatically added.
This is useful when your model admin class is inherited by others.
`base_form` *= None*[¶](#polymorphic.admin.PolymorphicChildModelAdmin.base_form)
By setting `base_form` instead of `form`, any subclass fields are automatically added to the form.
This is useful when your model admin class is inherited by others.
`base_model` *= None*[¶](#polymorphic.admin.PolymorphicChildModelAdmin.base_model)
The base model that the class uses (auto-detected if not set explicitly)
`change_form_template`[¶](#polymorphic.admin.PolymorphicChildModelAdmin.change_form_template)
`delete_confirmation_template`[¶](#polymorphic.admin.PolymorphicChildModelAdmin.delete_confirmation_template)
`extra_fieldset_title` *= 'Contents'*[¶](#polymorphic.admin.PolymorphicChildModelAdmin.extra_fieldset_title)
Default title for extra fieldset
`media`[¶](#polymorphic.admin.PolymorphicChildModelAdmin.media)
`object_history_template`[¶](#polymorphic.admin.PolymorphicChildModelAdmin.object_history_template)
`show_in_index` *= False*[¶](#polymorphic.admin.PolymorphicChildModelAdmin.show_in_index)
Whether the child admin model should be visible in the admin index page.
##### List filtering[¶](#list-filtering)
###### The `PolymorphicChildModelFilter` class[¶](#the-polymorphicchildmodelfilter-class)
*class* `polymorphic.admin.``PolymorphicChildModelFilter`(*request*, *params*, *model*, *model_admin*)[¶](#polymorphic.admin.PolymorphicChildModelFilter)
Bases: `django.contrib.admin.filters.SimpleListFilter`
An admin list filter for the PolymorphicParentModelAdmin which enables filtering by its child models.
This can be used in the parent admin:
```
list_filter = (PolymorphicChildModelFilter,)
```
##### Inlines support[¶](#inlines-support)
###### The `StackedPolymorphicInline` class[¶](#the-stackedpolymorphicinline-class)
*class* `polymorphic.admin.``StackedPolymorphicInline`(*parent_model*, *admin_site*)[¶](#polymorphic.admin.StackedPolymorphicInline)
Bases: `polymorphic.admin.inlines.PolymorphicInlineModelAdmin`
Stacked inline for django-polymorphic models.
Since tabular doesn’t make much sense with changed fields, just offer this one.
###### The `GenericStackedPolymorphicInline` class[¶](#the-genericstackedpolymorphicinline-class)
*class* `polymorphic.admin.``GenericStackedPolymorphicInline`(*parent_model*, *admin_site*)[¶](#polymorphic.admin.GenericStackedPolymorphicInline)
Bases: `polymorphic.admin.generic.GenericPolymorphicInlineModelAdmin`
The stacked layout for generic inlines.
`media`[¶](#polymorphic.admin.GenericStackedPolymorphicInline.media)
`template` *= 'admin/polymorphic/edit_inline/stacked.html'*[¶](#polymorphic.admin.GenericStackedPolymorphicInline.template)
The default template to use.
###### The `PolymorphicInlineSupportMixin` class[¶](#the-polymorphicinlinesupportmixin-class)
*class* `polymorphic.admin.``PolymorphicInlineSupportMixin`[¶](#polymorphic.admin.PolymorphicInlineSupportMixin)
Bases: `object`
A Mixin to add to the regular admin, so it can work with our polymorphic inlines.
This mixin needs to be included in the admin that hosts the `inlines`.
It makes sure the generated admin forms have different fieldsets/fields depending on the polymorphic type of the form instance.
This is achieved by overwriting [`get_inline_formsets()`](#polymorphic.admin.PolymorphicInlineSupportMixin.get_inline_formsets) to return a [`PolymorphicInlineAdminFormSet`](#polymorphic.admin.PolymorphicInlineAdminFormSet) instead of a standard Django
`InlineAdminFormSet` for the polymorphic formsets.
`get_inline_formsets`(*request*, *formsets*, *inline_instances*, *obj=None*, **args*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicInlineSupportMixin.get_inline_formsets)
Overwritten version to produce the proper admin wrapping for the polymorphic inline formset. This fixes the media and form appearance of the inline polymorphic models.
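A sketch that wires the mixin together with a stacked polymorphic inline; the `Order`/`Item` model names are borrowed from the `extra_views` examples below, and the inline layout itself is an assumption:
```
from django.contrib import admin
from polymorphic.admin import PolymorphicInlineSupportMixin, StackedPolymorphicInline
from myapp.models import Item, ItemSubclass1, ItemSubclass2, Order

class ItemInline(StackedPolymorphicInline):
    model = Item

    class ItemSubclass1Inline(StackedPolymorphicInline.Child):
        model = ItemSubclass1

    class ItemSubclass2Inline(StackedPolymorphicInline.Child):
        model = ItemSubclass2

    child_inlines = (ItemSubclass1Inline, ItemSubclass2Inline)

@admin.register(Order)
class OrderAdmin(PolymorphicInlineSupportMixin, admin.ModelAdmin):
    # the mixin makes the regular ModelAdmin render the polymorphic inline formsets
    inlines = (ItemInline,)
```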
##### Low-level classes[¶](#low-level-classes)
These classes are useful when extending parts of the admin classes.
*class* `polymorphic.admin.``PolymorphicModelChoiceForm`(**args*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicModelChoiceForm)
Bases: `django.forms.forms.Form`
The default form for the `add_type_form`. Can be overwritten and replaced.
**Form fields:**
* `ct_id`: Type (`ChoiceField`)
`__init__`(**args*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicModelChoiceForm.__init__)
Initialize self. See help(type(self)) for accurate signature.
`media`[¶](#polymorphic.admin.PolymorphicModelChoiceForm.media)
`type_label` *= 'Type'*[¶](#polymorphic.admin.PolymorphicModelChoiceForm.type_label)
Define the label for the radiofield
*class* `polymorphic.admin.``PolymorphicInlineModelAdmin`(*parent_model*, *admin_site*)[¶](#polymorphic.admin.PolymorphicInlineModelAdmin)
Bases: `django.contrib.admin.options.InlineModelAdmin`
A polymorphic inline, where each formset row can be a different form.
Note that:
* Permissions are only checked on the base model.
* The child inlines can’t override the base model fields, only this parent inline can do that.
*class* `Child`(*parent_inline*)[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.Child)
Bases: `django.contrib.admin.options.InlineModelAdmin`
The child inline, which allows configuring the admin options for the child appearance.
Note that not all options will be honored by the parent, notably the formset options:
* [`extra`](#polymorphic.admin.PolymorphicInlineModelAdmin.Child.extra)
* `min_num`
* `max_num`
The model form options however, will all be read.
`formset_child`[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.Child.formset_child)
alias of `polymorphic.formsets.models.PolymorphicFormSetChild`
`__init__`(*parent_inline*)[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.Child.__init__)
Initialize self. See help(type(self)) for accurate signature.
`get_fields`(*request*, *obj=None*)[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.Child.get_fields)
Hook for specifying fields.
`get_formset`(*request*, *obj=None*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.Child.get_formset)
Return a BaseInlineFormSet class for use in admin add/change views.
`get_formset_child`(*request*, *obj=None*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.Child.get_formset_child)
Return the formset child that the parent inline can use to represent us.
| Return type: | [PolymorphicFormSetChild](index.html#polymorphic.formsets.PolymorphicFormSetChild) |
`extra` *= 0*[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.Child.extra)
`media`[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.Child.media)
`formset`[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.formset)
alias of `polymorphic.formsets.models.BasePolymorphicInlineFormSet`
`__init__`(*parent_model*, *admin_site*)[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.__init__)
Initialize self. See help(type(self)) for accurate signature.
`get_child_inline_instance`(*model*)[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.get_child_inline_instance)
Find the child inline for a given model.
| Return type: | [PolymorphicInlineModelAdmin.Child](index.html#polymorphic.admin.PolymorphicInlineModelAdmin.Child) |
`get_child_inline_instances`()[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.get_child_inline_instances)
| Return type: | List[[PolymorphicInlineModelAdmin.Child](index.html#polymorphic.admin.PolymorphicInlineModelAdmin.Child)] |
`get_fields`(*request*, *obj=None*)[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.get_fields)
Hook for specifying fields.
`get_fieldsets`(*request*, *obj=None*)[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.get_fieldsets)
Hook for specifying fieldsets.
`get_formset`(*request*, *obj=None*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.get_formset)
Construct the inline formset class.
This passes all class attributes to the formset.
| Return type: | type |
`get_formset_children`(*request*, *obj=None*)[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.get_formset_children)
The formset ‘children’ provide the details for all child models that are part of this formset.
It provides a stripped version of the modelform/formset factory methods.
`child_inlines` *= ()*[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.child_inlines)
Inlines for all model sub types that can be displayed in this inline.
Each row is a [`PolymorphicInlineModelAdmin.Child`](#polymorphic.admin.PolymorphicInlineModelAdmin.Child)
`extra` *= 0*[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.extra)
The extra forms to show. By default there are no ‘extra’ forms as the desired type is unknown.
Instead, add each new item using JavaScript that first offers a type-selection.
`media`[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.media)
`polymorphic_media` *= Media(css={'all': ['polymorphic/css/polymorphic_inlines.css']}, js=['admin/js/vendor/jquery/jquery.min.js', 'admin/js/jquery.init.js', 'polymorphic/js/polymorphic_inlines.js'])*[¶](#polymorphic.admin.PolymorphicInlineModelAdmin.polymorphic_media)
The extra media to add for the polymorphic inlines effect.
This can be redefined for subclasses.
*class* `polymorphic.admin.``GenericPolymorphicInlineModelAdmin`(*parent_model*, *admin_site*)[¶](#polymorphic.admin.GenericPolymorphicInlineModelAdmin)
Bases: `polymorphic.admin.inlines.PolymorphicInlineModelAdmin`, [`django.contrib.contenttypes.admin.GenericInlineModelAdmin`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/contenttypes/#django.contrib.contenttypes.admin.GenericInlineModelAdmin)
Base class for variation of inlines based on generic foreign keys.
*class* `Child`(*parent_inline*)[¶](#polymorphic.admin.GenericPolymorphicInlineModelAdmin.Child)
Bases: `polymorphic.admin.inlines.Child`
Variation for generic inlines.
`formset_child`[¶](#polymorphic.admin.GenericPolymorphicInlineModelAdmin.Child.formset_child)
alias of `polymorphic.formsets.generic.GenericPolymorphicFormSetChild`
`get_formset_child`(*request*, *obj=None*, ***kwargs*)[¶](#polymorphic.admin.GenericPolymorphicInlineModelAdmin.Child.get_formset_child)
Return the formset child that the parent inline can use to represent us.
| Return type: | [PolymorphicFormSetChild](index.html#polymorphic.formsets.PolymorphicFormSetChild) |
`content_type`[¶](#polymorphic.admin.GenericPolymorphicInlineModelAdmin.Child.content_type)
Expose the ContentType that the child relates to.
This can be used for the `polymorphic_ctype` field.
`ct_field` *= 'content_type'*[¶](#polymorphic.admin.GenericPolymorphicInlineModelAdmin.Child.ct_field)
`ct_fk_field` *= 'object_id'*[¶](#polymorphic.admin.GenericPolymorphicInlineModelAdmin.Child.ct_fk_field)
`media`[¶](#polymorphic.admin.GenericPolymorphicInlineModelAdmin.Child.media)
`formset`[¶](#polymorphic.admin.GenericPolymorphicInlineModelAdmin.formset)
alias of `polymorphic.formsets.generic.BaseGenericPolymorphicInlineFormSet`
`get_formset`(*request*, *obj=None*, ***kwargs*)[¶](#polymorphic.admin.GenericPolymorphicInlineModelAdmin.get_formset)
Construct the generic inline formset class.
`media`[¶](#polymorphic.admin.GenericPolymorphicInlineModelAdmin.media)
*class* `polymorphic.admin.``PolymorphicInlineAdminForm`(*formset*, *form*, *fieldsets*, *prepopulated_fields*, *original*, *readonly_fields=None*, *model_admin=None*, *view_on_site_url=None*)[¶](#polymorphic.admin.PolymorphicInlineAdminForm)
Bases: `django.contrib.admin.helpers.InlineAdminForm`
Expose the admin configuration for a form
*class* `polymorphic.admin.``PolymorphicInlineAdminFormSet`(**args*, ***kwargs*)[¶](#polymorphic.admin.PolymorphicInlineAdminFormSet)
Bases: `django.contrib.admin.helpers.InlineAdminFormSet`
Internally used class to expose the formset in the template.
#### polymorphic.contrib.extra_views[¶](#module-polymorphic.contrib.extra_views)
The `extra_views.formsets` provides a simple way to handle formsets.
The `extra_views.advanced` provides a method to combine that with a create/update form.
This package provides classes that support both options for polymorphic formsets.
*class* `polymorphic.contrib.extra_views.``PolymorphicFormSetView`(***kwargs*)[¶](#polymorphic.contrib.extra_views.PolymorphicFormSetView)
Bases: `polymorphic.contrib.extra_views.PolymorphicFormSetMixin`, `extra_views.formsets.ModelFormSetView`
A view that displays a single polymorphic formset.
```
from polymorphic.formsets import PolymorphicFormSetChild
class ItemsView(PolymorphicFormSetView):
model = Item
formset_children = [
PolymorphicFormSetChild(ItemSubclass1),
PolymorphicFormSetChild(ItemSubclass2),
]
```
`formset_class`[¶](#polymorphic.contrib.extra_views.PolymorphicFormSetView.formset_class)
alias of `polymorphic.formsets.models.BasePolymorphicModelFormSet`
*class* `polymorphic.contrib.extra_views.``PolymorphicInlineFormSetView`(***kwargs*)[¶](#polymorphic.contrib.extra_views.PolymorphicInlineFormSetView)
Bases: `polymorphic.contrib.extra_views.PolymorphicFormSetMixin`, `extra_views.formsets.InlineFormSetView`
A view that displays a single polymorphic formset - with one parent object.
This is a variation of the `extra_views` package classes for django-polymorphic.
```
from polymorphic.formsets import PolymorphicFormSetChild
class OrderItemsView(PolymorphicInlineFormSetView):
model = Order
inline_model = Item
formset_children = [
PolymorphicFormSetChild(ItemSubclass1),
PolymorphicFormSetChild(ItemSubclass2),
]
```
`formset_class`[¶](#polymorphic.contrib.extra_views.PolymorphicInlineFormSetView.formset_class)
alias of `polymorphic.formsets.models.BasePolymorphicInlineFormSet`
*class* `polymorphic.contrib.extra_views.``PolymorphicInlineFormSet`(*parent_model*, *request*, *instance*, *view_kwargs=None*, *view=None*)[¶](#polymorphic.contrib.extra_views.PolymorphicInlineFormSet)
Bases: `polymorphic.contrib.extra_views.PolymorphicFormSetMixin`, `extra_views.advanced.InlineFormSetFactory`
An inline to add to the `inlines` of the `CreateWithInlinesView`
and `UpdateWithInlinesView` class.
```
from polymorphic.formsets import PolymorphicFormSetChild
class ItemsInline(PolymorphicInlineFormSet):
model = Item
formset_children = [
PolymorphicFormSetChild(ItemSubclass1),
PolymorphicFormSetChild(ItemSubclass2),
]
class OrderCreateView(CreateWithInlinesView):
model = Order
inlines = [ItemsInline]
def get_success_url(self):
return self.object.get_absolute_url()
```
`formset_class`[¶](#polymorphic.contrib.extra_views.PolymorphicInlineFormSet.formset_class)
alias of `polymorphic.formsets.models.BasePolymorphicInlineFormSet`
#### polymorphic.contrib.guardian[¶](#module-polymorphic.contrib.guardian)
`polymorphic.contrib.guardian.``get_polymorphic_base_content_type`(*obj*)[¶](#polymorphic.contrib.guardian.get_polymorphic_base_content_type)
Helper function to return the base polymorphic content type id. This should be used with django-guardian and the GUARDIAN_GET_CONTENT_TYPE option.
See the django-guardian documentation for more information:
<https://django-guardian.readthedocs.io/en/latest/configuration.html#guardian-get-content-type>
#### polymorphic.formsets[¶](#module-polymorphic.formsets)
This allows creating formsets where each row can be a different form type.
The logic of the formsets works similarly to the standard Django formsets;
there are factory methods to construct the classes with the proper form settings.
The “parent” formset hosts the base model and its child models.
For every child type, there is a [`PolymorphicFormSetChild`](#polymorphic.formsets.PolymorphicFormSetChild) instance that describes how to display and construct the child.
Its parameters are very similar to the parent’s factory method.
##### Model formsets[¶](#model-formsets)
`polymorphic.formsets.``polymorphic_modelformset_factory`(*model*, *formset_children*, *formset=<class 'polymorphic.formsets.models.BasePolymorphicModelFormSet'>*, *form=<class 'django.forms.models.ModelForm'>*, *fields=None*, *exclude=None*, *extra=1*, *can_order=False*, *can_delete=True*, *max_num=None*, *formfield_callback=None*, *widgets=None*, *validate_max=False*, *localized_fields=None*, *labels=None*, *help_texts=None*, *error_messages=None*, *min_num=None*, *validate_min=False*, *field_classes=None*, *child_form_kwargs=None*)[¶](#polymorphic.formsets.polymorphic_modelformset_factory)
Construct the class for a polymorphic model formset.
All arguments are identical to `modelformset_factory()`,
with the exception of the `formset_children` argument.
| Parameters: | **formset_children** (*Iterable**[*[*PolymorphicFormSetChild*](index.html#polymorphic.formsets.PolymorphicFormSetChild)*]*) – A list of all child [`PolymorphicFormSetChild`](#polymorphic.formsets.PolymorphicFormSetChild) objects that tell the inline how to render the child model types. |
| Return type: | type |
*class* `polymorphic.formsets.``PolymorphicFormSetChild`(*model*, *form=<class 'django.forms.models.ModelForm'>*, *fields=None*, *exclude=None*, *formfield_callback=None*, *widgets=None*, *localized_fields=None*, *labels=None*, *help_texts=None*, *error_messages=None*)[¶](#polymorphic.formsets.PolymorphicFormSetChild)
Metadata to define the inline of a polymorphic child.
Provide this information in the `polymorphic_inlineformset_factory()` construction.
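A brief usage sketch of the factory together with the child metadata described above; the `Item` model names are borrowed from the `extra_views` examples:
```
from polymorphic.formsets import PolymorphicFormSetChild, polymorphic_modelformset_factory
from myapp.models import Item, ItemSubclass1, ItemSubclass2

ItemFormSet = polymorphic_modelformset_factory(
    Item,
    formset_children=(
        PolymorphicFormSetChild(ItemSubclass1),
        PolymorphicFormSetChild(ItemSubclass2),
    ),
)

# afterwards it behaves like a regular model formset
formset = ItemFormSet(queryset=Item.objects.all())
```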
##### Inline formsets[¶](#inline-formsets)
`polymorphic.formsets.``polymorphic_inlineformset_factory`(*parent_model*, *model*, *formset_children*, *formset=<class 'polymorphic.formsets.models.BasePolymorphicInlineFormSet'>*, *fk_name=None*, *form=<class 'django.forms.models.ModelForm'>*, *fields=None*, *exclude=None*, *extra=1*, *can_order=False*, *can_delete=True*, *max_num=None*, *formfield_callback=None*, *widgets=None*, *validate_max=False*, *localized_fields=None*, *labels=None*, *help_texts=None*, *error_messages=None*, *min_num=None*, *validate_min=False*, *field_classes=None*, *child_form_kwargs=None*)[¶](#polymorphic.formsets.polymorphic_inlineformset_factory)
Construct the class for an inline polymorphic formset.
All arguments are identical to `inlineformset_factory()`,
with the exception of the `formset_children` argument.
| Parameters: | **formset_children** (*Iterable**[*[*PolymorphicFormSetChild*](index.html#polymorphic.formsets.PolymorphicFormSetChild)*]*) – A list of all child [`PolymorphicFormSetChild`](#polymorphic.formsets.PolymorphicFormSetChild) objects that tell the inline how to render the child model types. |
| Return type: | type |
##### Generic formsets[¶](#generic-formsets)
`polymorphic.formsets.``generic_polymorphic_inlineformset_factory`(*model*, *formset_children*, *form=<class 'django.forms.models.ModelForm'>*, *formset=<class 'polymorphic.formsets.generic.BaseGenericPolymorphicInlineFormSet'>*, *ct_field='content_type'*, *fk_field='object_id'*, *fields=None*, *exclude=None*, *extra=1*, *can_order=False*, *can_delete=True*, *max_num=None*, *formfield_callback=None*, *validate_max=False*, *for_concrete_model=True*, *min_num=None*, *validate_min=False*, *child_form_kwargs=None*)[¶](#polymorphic.formsets.generic_polymorphic_inlineformset_factory)
Construct the class for a generic inline polymorphic formset.
All arguments are identical to [`generic_inlineformset_factory()`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/contenttypes/#django.contrib.contenttypes.forms.generic_inlineformset_factory),
with the exception of the `formset_children` argument.
| Parameters: | **formset_children** (*Iterable**[*[*PolymorphicFormSetChild*](index.html#polymorphic.formsets.PolymorphicFormSetChild)*]*) – A list of all child [`PolymorphicFormSetChild`](#polymorphic.formsets.PolymorphicFormSetChild) objects that tell the inline how to render the child model types. |
| Return type: | type |
##### Low-level features[¶](#low-level-features)
The internal machinery can be used to extend the formset classes. This includes:
`polymorphic.formsets.``polymorphic_child_forms_factory`(*formset_children*, ***kwargs*)[¶](#polymorphic.formsets.polymorphic_child_forms_factory)
Construct the forms for the formset children.
This is mostly used internally, and rarely needs to be used by external projects.
When using the factory methods (such as `polymorphic_inlineformset_factory()`),
this feature is already called for you.
*class* `polymorphic.formsets.``BasePolymorphicModelFormSet`(**args*, ***kwargs*)[¶](#polymorphic.formsets.BasePolymorphicModelFormSet)
Bases: [`django.forms.models.BaseModelFormSet`](https://docs.djangoproject.com/en/4.0/_objects/topics/forms/modelforms/#django.forms.models.BaseModelFormSet)
A formset that can produce different forms depending on the object type.
Note that the ‘add’ feature is therefore more complex,
as all variations need to be exposed somewhere.
When switching existing formsets to the polymorphic formset,
note that the ID field will no longer be named `model_ptr`,
but just appear as `id`.
*class* `polymorphic.formsets.``BasePolymorphicInlineFormSet`(*data=None*, *files=None*, *instance=None*, *save_as_new=False*, *prefix=None*, *queryset=None*, ***kwargs*)[¶](#polymorphic.formsets.BasePolymorphicInlineFormSet)
Bases: [`django.forms.models.BaseInlineFormSet`](https://docs.djangoproject.com/en/4.0/_objects/topics/forms/modelforms/#django.forms.models.BaseInlineFormSet), `polymorphic.formsets.models.BasePolymorphicModelFormSet`
Polymorphic formset variation for inline formsets
*class* `polymorphic.formsets.``BaseGenericPolymorphicInlineFormSet`(*data=None*, *files=None*, *instance=None*, *save_as_new=False*, *prefix=None*, *queryset=None*, ***kwargs*)[¶](#polymorphic.formsets.BaseGenericPolymorphicInlineFormSet)
Bases: [`django.contrib.contenttypes.forms.BaseGenericInlineFormSet`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/contenttypes/#django.contrib.contenttypes.forms.BaseGenericInlineFormSet), `polymorphic.formsets.models.BasePolymorphicModelFormSet`
Polymorphic formset variation for inline generic formsets
#### polymorphic.managers[¶](#module-polymorphic.managers)
The manager class for use in the models.
##### The `PolymorphicManager` class[¶](#the-polymorphicmanager-class)
*class* `polymorphic.managers.``PolymorphicManager`[¶](#polymorphic.managers.PolymorphicManager)
Bases: `django.db.models.manager.Manager`
Manager for PolymorphicModel
Usually not explicitly needed, except if a custom manager or a custom queryset class is to be used.
`queryset_class`[¶](#polymorphic.managers.PolymorphicManager.queryset_class)
alias of `polymorphic.query.PolymorphicQuerySet`
`get_queryset`()[¶](#polymorphic.managers.PolymorphicManager.get_queryset)
Return a new QuerySet object. Subclasses can override this method to customize the behavior of the Manager.
##### The `PolymorphicQuerySet` class[¶](#the-polymorphicqueryset-class)
*class* `polymorphic.managers.``PolymorphicQuerySet`(**args*, ***kwargs*)[¶](#polymorphic.managers.PolymorphicQuerySet)
Bases: [`django.db.models.query.QuerySet`](https://docs.djangoproject.com/en/4.0/_objects/ref/models/querysets/#django.db.models.query.QuerySet)
QuerySet for PolymorphicModel
Contains the core functionality for PolymorphicModel
Usually not explicitly needed, except if a custom queryset class is to be used.
`__init__`(**args*, ***kwargs*)[¶](#polymorphic.managers.PolymorphicQuerySet.__init__)
Initialize self. See help(type(self)) for accurate signature.
`aggregate`(**args*, ***kwargs*)[¶](#polymorphic.managers.PolymorphicQuerySet.aggregate)
translate the polymorphic field paths in the kwargs, then call vanilla aggregate.
We need no polymorphic object retrieval for aggregate => switch it off.
`annotate`(**args*, ***kwargs*)[¶](#polymorphic.managers.PolymorphicQuerySet.annotate)
translate the polymorphic field paths in the kwargs, then call vanilla annotate.
_get_real_instances will do the rest of the job after executing the query.
`bulk_create`(*objs*, *batch_size=None*, *ignore_conflicts=False*)[¶](#polymorphic.managers.PolymorphicQuerySet.bulk_create)
Insert each of the instances into the database. Do *not* call save() on each of the instances, do not send any pre/post_save signals, and do not set the primary key attribute if it is an autoincrement field (except if features.can_return_rows_from_bulk_insert=True).
Multi-table models are not supported.
`defer`(**fields*)[¶](#polymorphic.managers.PolymorphicQuerySet.defer)
Translate the field paths in the args, then call vanilla defer.
Also retain a copy of the original fields passed, which we’ll need when we’re retrieving the real instance (since we’ll need to translate them again, as the model will have changed).
`get_real_instances`(*base_result_objects=None*)[¶](#polymorphic.managers.PolymorphicQuerySet.get_real_instances)
Cast a list of objects to their actual classes.
This does roughly the same as:
```
return [ o.get_real_instance() for o in base_result_objects ]
```
but more efficiently.
| Return type: | [PolymorphicQuerySet](index.html#polymorphic.managers.PolymorphicQuerySet) |
`instance_of`(**args*)[¶](#polymorphic.managers.PolymorphicQuerySet.instance_of)
Filter the queryset to only include the classes in args (and their subclasses).
`non_polymorphic`()[¶](#polymorphic.managers.PolymorphicQuerySet.non_polymorphic)
switch off polymorphic behaviour for this query.
When the queryset is evaluated, only objects of the type of the base class used for this query are returned.
`not_instance_of`(**args*)[¶](#polymorphic.managers.PolymorphicQuerySet.not_instance_of)
Filter the queryset to exclude the classes in args (and their subclasses).
`only`(**fields*)[¶](#polymorphic.managers.PolymorphicQuerySet.only)
Translate the field paths in the args, then call vanilla only.
Also retain a copy of the original fields passed, which we’ll need when we’re retrieving the real instance (since we’ll need to translate them again, as the model will have changed).
`order_by`(**field_names*)[¶](#polymorphic.managers.PolymorphicQuerySet.order_by)
translate the field paths in the args, then call vanilla order_by.
#### polymorphic.models[¶](#module-polymorphic.models)
Seamless Polymorphic Inheritance for Django Models
*class* `polymorphic.models.``PolymorphicModel`(**args*, ***kwargs*)[¶](#polymorphic.models.PolymorphicModel)
Bases: `django.db.models.base.Model`
Abstract base class that provides polymorphic behaviour for any model directly or indirectly derived from it.
PolymorphicModel declares one field for internal use ([`polymorphic_ctype`](#polymorphic.models.PolymorphicModel.polymorphic_ctype))
and provides a polymorphic manager as the default manager (and as ‘objects’).
| Parameters: | **polymorphic_ctype** (ForeignKey to [`ContentType`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/contenttypes/#django.contrib.contenttypes.models.ContentType)) – Polymorphic ctype |
`__init__`(**args*, ***kwargs*)[¶](#polymorphic.models.PolymorphicModel.__init__)
Replace Django’s inheritance accessor member functions for our model
(self.__class__) with our own versions.
We monkey patch them until a patch can be added to Django
(which would probably be very small and make all of this obsolete).
If we have inheritance of the form ModelA -> ModelB ->ModelC then Django creates accessors like this:
- ModelA: modelb
- ModelB: modela_ptr, modelb, modelc
- ModelC: modela_ptr, modelb, modelb_ptr, modelc
These accessors allow Django (and everyone else) to travel up and down the inheritance tree for the db object at hand.
The original Django accessors use our polymorphic manager.
But they should not. So we replace them with our own accessors that use our appropriate base_objects manager.
`get_real_instance`()[¶](#polymorphic.models.PolymorphicModel.get_real_instance)
Upcast an object to its actual type.
If a non-polymorphic manager (like base_objects) has been used to retrieve objects, then the complete object with its real class/type and all fields may be retrieved with this method.
Note
Each method call executes one db query (if necessary).
Use the [`get_real_instances()`](index.html#polymorphic.managers.PolymorphicQuerySet.get_real_instances)
method to upcast a complete list in a single efficient query.
`get_real_instance_class`()[¶](#polymorphic.models.PolymorphicModel.get_real_instance_class)
Return the actual model type of the object.
If a non-polymorphic manager (like base_objects) has been used to retrieve objects, then the real class/type of these objects may be determined using this method.
`pre_save_polymorphic`(*using='default'*)[¶](#polymorphic.models.PolymorphicModel.pre_save_polymorphic)
Make sure the `polymorphic_ctype` value is correctly set on this model.
`save`(**args*, ***kwargs*)[¶](#polymorphic.models.PolymorphicModel.save)
Calls [`pre_save_polymorphic()`](#polymorphic.models.PolymorphicModel.pre_save_polymorphic) and saves the model.
`polymorphic_ctype`[¶](#polymorphic.models.PolymorphicModel.polymorphic_ctype)
**Model field:** polymorphic ctype, accesses the [`ContentType`](https://docs.djangoproject.com/en/4.0/_objects/ref/contrib/contenttypes/#django.contrib.contenttypes.models.ContentType) model.
#### polymorphic.templatetags.polymorphic_admin_tags[¶](#module-polymorphic.templatetags)
Template tags for polymorphic
##### The `polymorphic_formset_tags` Library[¶](#the-polymorphic-formset-tags-library)
New in version 1.1.
To render formsets in the frontend, the `polymorphic_tags` provides extra filters to implement HTML rendering of polymorphic formsets.
The following filters are provided:
* `{{ formset|as_script_options }}` render the `data-options` for a JavaScript formset library.
* `{{ formset|include_empty_form }}` provide the placeholder form for an add button.
* `{{ form|as_form_type }}` return the model name that the form instance uses.
* `{{ model|as_model_name }}` performs the same, for a model class or instance.
```
{% load i18n polymorphic_formset_tags %}
<div class="inline-group" id="{{ formset.prefix }}-group" data-options="{{ formset|as_script_options }}">
{% block add_button %}
{% if formset.show_add_button|default_if_none:'1' %}
{% if formset.empty_forms %}
{# django-polymorphic formset (e.g. PolymorphicInlineFormSetView) #}
<div class="btn-group" role="group">
{% for model in formset.child_forms %}
<a type="button" data-type="{{ model|as_model_name }}" class="js-add-form btn btn-default">{% glyphicon 'plus' %} {{ model|as_verbose_name }}</a>
{% endfor %}
</div>
{% else %}
<a class="btn btn-default js-add-form">{% trans "Add" %}</a>
{% endif %}
{% endif %}
{% endblock %}
{{ formset.management_form }}
{% for form in formset|include_empty_form %}
{% block formset_form_wrapper %}
<div id="{{ form.prefix }}" data-inline-type="{{ form|as_form_type|lower }}" class="inline-related{% if '__prefix__' in form.prefix %} empty-form{% endif %}">
{{ form.non_field_errors }}
{# Add the 'pk' field that is not mentioned in crispy #}
{% for field in form.hidden_fields %}
{{ field }}
{% endfor %}
{% block formset_form %}
{% crispy form %}
{% endblock %}
</div>
{% endblock %}
{% endfor %}
</div>
```
##### The `polymorphic_admin_tags` Library[¶](#the-polymorphic-admin-tags-library)
The `{% breadcrumb_scope ... %}` tag makes sure the `{{ opts }}` and `{{ app_label }}`
values are temporarily based on the provided `{{ base_opts }}`.
This allows fixing the breadcrumb in admin templates:
```
{% extends "admin/change_form.html" %}
{% load polymorphic_admin_tags %}
{% block breadcrumbs %}
{% breadcrumb_scope base_opts %}{{ block.super }}{% endbreadcrumb_scope %}
{% endblock %}
```
#### polymorphic.utils[¶](#module-polymorphic.utils)
`polymorphic.utils.``get_base_polymorphic_model`(*ChildModel*, *allow_abstract=False*)[¶](#polymorphic.utils.get_base_polymorphic_model)
Return the first concrete model in the inheritance chain that inherited from the PolymorphicModel.
`polymorphic.utils.``reset_polymorphic_ctype`(**models*, ***filters*)[¶](#polymorphic.utils.reset_polymorphic_ctype)
Set the polymorphic content-type ID field to the proper model. Sort the `*models` from base class to descending class,
to make sure the content types are properly assigned.
Add `ignore_existing=True` to skip models which already have a polymorphic content type.
`polymorphic.utils.``sort_by_subclass`(**classes*)[¶](#polymorphic.utils.sort_by_subclass)
Sort a series of models by their inheritance order.
Indices and tables[¶](#indices-and-tables)
===
* [Index](genindex.html)
* [Module Index](py-modindex.html)
* [Search Page](search.html) |
HLMdiag | cran | R | Package ‘HLMdiag’
October 12, 2022
Type Package
Title Diagnostic Tools for Hierarchical (Multilevel) Linear Models
Version 0.5.0
Maintainer <NAME> <<EMAIL>>
Description A suite of diagnostic tools for hierarchical
(multilevel) linear models. The tools include
not only leverage and traditional deletion diagnostics (Cook's
distance, covratio, covtrace, and MDFFITS) but also
convenience functions and graphics for residual analysis. Models
can be fit using either lmer in the 'lme4' package or lme in the 'nlme' package.
Depends R (>= 3.5.0)
Imports ggplot2 (>= 0.9.2), stats, methods, plyr, reshape2, MASS,
Matrix, mgcv, dplyr, magrittr, stringr, purrr, tibble,
tidyselect, janitor, Rcpp, rlang, ggrepel, diagonals
LinkingTo Rcpp, RcppArmadillo
Suggests mlmRev, WWGbook, lme4 (>= 1.0), nlme, testthat, knitr,
rmarkdown, car, gridExtra, qqplotr
License GPL-2
LazyLoad yes
LazyData yes
Collate 'diagnostic_functions.R' 'group_level_residual_functions.R'
'identification.R' 'plot_functions.R' 'adjust_formula_lmList.R'
'case_delete.R' 'LSresids.R' 'HLMdiag-deprecated.R'
'HLMdiag-package.R' 'hlm_augment.R' 'hlm_influence.R'
'hlm_resid.R' 'help.R' 'influence_functions.R'
'utility_functions.R' 'rotate_ranefs.R' 'autism.R' 'ahd.R'
'radon.R' 'pull_resid.R' 'residual_functions.R'
'HLMdiag-defunct.R'
Encoding UTF-8
URL https://github.com/aloy/HLMdiag
BugReports https://github.com/aloy/HLMdiag/issues
RoxygenNote 7.1.1
VignetteBuilder knitr
NeedsCompilation yes
Author <NAME> [cre, aut],
<NAME> [aut],
<NAME> [aut]
Repository CRAN
Date/Publication 2021-05-02 04:30:08 UTC
R topics documented:
adjust_lmList.formula
ahd
autism
case_delete.default
compare_eb_ls
covratio.default
dotplot_diag
extract_design
HLMdiag
HLMdiag-defunct
HLMdiag-deprecated
hlm_augment
hlm_influence.default
hlm_resid.default
leverage.default
LSresids.default
mdffits.default
pull_resid.default
radon
resid_conditional.default
resid_marginal.default
resid_ranef
rotate_ranef.default
rvc.default
varcomp.mer
wages
adjust_lmList.formula Fitting Common Models via lm
Description
Separate linear models are fit via lm, similar to lmList; however, adjust_lmList can handle models
where a factor takes only one level within a group. In this case, the formula is updated, eliminating
the offending factors from the formula for that group, as their effect is absorbed into the intercept.
Usage
## S3 method for class 'formula'
adjust_lmList(object, data, pool)
Arguments
object a linear formula such as that used by lmList, e.g. y ~ x1 + ... + xn | g, where
g is a grouping factor.
data a data frame containing the variables in the model.
pool a logical value that indicates whether the pooled standard deviation/error should
be used.
References
<NAME>, <NAME> and <NAME> (2012). lme4: Linear mixed-effects models using
S4 classes. R package version 0.999999-0.
See Also
lmList, lm
Examples
data(Exam, package = 'mlmRev')
sepLM <- adjust_lmList(normexam ~ standLRT + sex + schgend | school, data = Exam)
confint(sepLM)
ahd Methylprednisolone data
Description
Data from a longitudinal study examining the effectiveness of Methylprednisolone as a treatment
for patients with severe alcoholic hepatitis. Subjects were randomly assigned to a treatment (31
received a placebo, 35 received the treatment) and serum bilirubin was measured each week for four
weeks.
Usage
data(ahd)
Format
A data frame with 330 observations on the following 5 variables:
treatment The treatment a subject received - a factor. Levels are placebo and treated.
subject Subject ID - a factor.
week Week of the study (0–4) - the time variable.
sbvalue Serum bilirubin level (in µmol/L).
baseline A subject’s serum bilirubin level at week 0.
Source
<NAME>. and <NAME>. (1997) Linear and Nonlinear Models for the Analysis of Re-
peated Measurements. Marcel Dekker, New York.
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. & Maddrey,
<NAME>. (1989) Methylprednisolone therapy in patients with severe alcoholic hepatitis. Annals of
Internal Medicine, 110(9), 685–690.
autism Autism data
Description
Data from a prospective longitudinal study following 214 children between the ages of 2 and 13
who were diagnosed with either autism spectrum disorder or non-spectrum developmental delays
at age 2.
Usage
data(autism)
Format
A data frame with 604 observation on the following 7 variables:
childid Child ID.
sicdegp Sequenced Inventory of Communication Development group (an assessment of expressive
language development) - a factor. Levels are low, med, and high.
age2 Age (in years) centered around age 2 (age at diagnosis).
vsae Vineland Socialization Age Equivalent
gender Child’s gender - a factor. Levels are male and female.
race Child’s race - a factor. Levels are white and nonwhite.
bestest2 Diagnosis at age 2 - a factor. Levels are autism and pdd (pervasive developmental disor-
der).
Source
http://www-personal.umich.edu/~kwelch/
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., et al. (2007). Patterns
of growth in verbal abilities among children with autism spectrum disorder. Journal of Consulting
and Clinical Psychology, 75(4), 594–604.
<NAME>., <NAME>., <NAME>., & <NAME>. (2009). Patterns of Growth in Adaptive Social
Abilities Among Children with Autism Spectrum Disorders. Journal of Abnormal Child Psychol-
ogy, 37(7), 1019–1034.
case_delete.default Case Deletion for mer/ lmerMod objects
Description
This function is used to iteratively delete groups corresponding to the levels of a hierarchical linear
model. It uses lmer() to fit the models for each deleted case (i.e. uses brute force). To investigate
numerous levels of the model, the function will need to be called multiple times, specifying the
group (level) of interest each time.
Usage
## Default S3 method:
case_delete(model, ...)
## S3 method for class 'mer'
case_delete(
model,
level = 1,
type = c("both", "fixef", "varcomp"),
delete = NULL,
...
)
## S3 method for class 'lmerMod'
case_delete(
model,
level = 1,
type = c("both", "fixef", "varcomp"),
delete = NULL,
...
)
## S3 method for class 'lme'
case_delete(
model,
level = 1,
type = c("both", "fixef", "varcomp"),
delete = NULL,
...
)
Arguments
model the original hierarchical model fit using lmer()
... do not use
level a variable used to define the group for which cases will be deleted. If level = 1
(default), then the function will delete individual observations.
type the part of the model for which you are obtaining deletion diagnostics: the fixed
effects ("fixef"), variance components ("varcomp"), or "both" (default).
delete numeric index of individual cases to be deleted. If the level parameter is spec-
ified, delete may also take the form of a character vector consisting of group
names as they appear in flist. It is possible to set level and delete individual
cases from different groups using delete, so numeric indices should be double
checked to confirm that they encompass entire groups. If delete = NULL then
all cases are iteratively deleted.
Value
a list with the following components:
fixef.original the original fixed effects estimates
ranef.original the original predicted random effects
vcov.original the original variance-covariance matrix for the fixed effects
varcomp.original the original estimated variance components
fixef.delete a list of the fixed effects estimated after case deletion
ranef.delete a list of the random effects predicted after case deletion
vcov.delete a list of the variance-covariance matrices for the fixed effects obtained after case
deletion
fitted.delete a list of the fitted values obtained after case deletion
varcomp.delete a list of the estimated variance components obtained after case deletion
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., <NAME>., and <NAME>. (1992) Case-Deletion Diagnostics for Mixed Mod-
els, Technometrics, 34, 38 – 45.
Schabenberger, O. (2004) Mixed Model Influence Diagnostics, in Proceedings of the Twenty-Ninth
SAS Users Group International Conference, SAS Users Group International.
Examples
data(sleepstudy, package = 'lme4')
fm <- lme4::lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
# Deleting every Subject
fmDel <- case_delete(model = fm, level = "Subject", type = "both")
# Deleting only subject 308
del308 <- case_delete(model = fm, level = "Subject", type = "both", delete = 308)
# Deleting a subset of subjects
delSubset <- case_delete(model = fm, level = "Subject", type = "both", delete = 308:310)
compare_eb_ls Visually comparing shrinkage and LS estimates
Description
This function creates a plot (using qplot()) where the shrinkage estimate appears on the horizontal
axis and the LS estimate appears on the vertical axis.
Usage
compare_eb_ls(eb, ols, identify = FALSE, silent = TRUE, ...)
Arguments
eb a matrix of random effects
ols a matrix of the OLS estimates found using random_ls_coef
identify the percentage of points to identify as unusual, FALSE if you do not want the
points identified.
silent logical: should the list of data frames used to make the plots be suppressed.
... other arguments to be passed to qplot()
Author(s)
<NAME> <<EMAIL>>
Examples
wages.fm1 <- lme4::lmer(lnw ~ exper + (exper | id), data = wages)
wages.sepLM <- adjust_lmList(lnw ~ exper | id, data = wages)
rancoef.eb <- coef(wages.fm1)$id
rancoef.ols <- coef(wages.sepLM)
compare_eb_ls(eb = rancoef.eb, ols = rancoef.ols, identify = 0.01)
covratio.default Influence on precision of fixed effects in HLMs
Description
These functions calculate measures of the change in the covariance matrices for the fixed effects
based on the deletion of an observation, or group of observations, for a hierarchical linear model fit
using lmer.
Usage
## Default S3 method:
covratio(object, ...)
## Default S3 method:
covtrace(object, ...)
## S3 method for class 'mer'
covratio(object, level = 1, delete = NULL, ...)
## S3 method for class 'lmerMod'
covratio(object, level = 1, delete = NULL, ...)
## S3 method for class 'lme'
covratio(object, level = 1, delete = NULL, ...)
## S3 method for class 'mer'
covtrace(object, level = 1, delete = NULL, ...)
## S3 method for class 'lmerMod'
covtrace(object, level = 1, delete = NULL, ...)
## S3 method for class 'lme'
covtrace(object, level = 1, delete = NULL, ...)
Arguments
object fitted object of class mer or lmerMod
... do not use
level variable used to define the group for which cases will be deleted. If level = 1
(default), then individual cases will be deleted.
delete index of individual cases to be deleted. To delete specific observations the row
number must be specified. To delete higher level units the group ID and group
parameter must be specified. If delete = NULL then all cases are iteratively
deleted.
Details
Both the covariance ratio (covratio) and the covariance trace (covtrace) measure the change in
the covariance matrix of the fixed effects based on the deletion of a subset of observations. The key
difference is how the variance covariance matrices are compared: covratio compares the ratio of
the determinants while covtrace compares the trace of the ratio.
Value
If delete = NULL then a vector corresponding to each deleted observation/group is returned.
If delete is specified then a single value is returned corresponding to the deleted subset specified.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., <NAME>., & <NAME>. (1992) Case-deletion diagnostics for mixed models.
Technometrics, 34(1), 38–45.
<NAME>. (2004) Mixed Model Influence Diagnostics, in Proceedings of the Twenty-Ninth
SAS Users Group International Conference, SAS Users Group International.
See Also
leverage.mer, cooks.distance.mer mdffits.mer, rvc.mer
Examples
data(sleepstudy, package = 'lme4')
ss <- lme4::lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
# covratio for individual observations
ss.cr1 <- covratio(ss)
# covratio for subject-level deletion
ss.cr2 <- covratio(ss, level = "Subject")
## Not run:
## A larger example
data(Exam, package = 'mlmRev')
fm <- lme4::lmer(normexam ~ standLRT * schavg + (standLRT | school), data = Exam)
# covratio for individual observations
cr1 <- covratio(fm)
# covratio for school-level deletion
cr2 <- covratio(fm, level = "school")
## End(Not run)
# covtrace for individual observations
ss.ct1 <- covtrace(ss)
# covtrace for subject-level deletion
ss.ct2 <- covtrace(ss, level = "Subject")
## Not run:
## Returning to the larger example
# covtrace for individual observations
ct1 <- covtrace(fm)
# covtrace for school-level deletion
ct2 <- covtrace(fm, level = "school")
## End(Not run)
dotplot_diag Dot plots for influence diagnostics
Description
This is a function that can be used to create (modified) dotplots for the diagnostic measures. The
plot allows the user to understand the distribution of the diagnostic measure and visually identify
unusual cases.
Usage
dotplot_diag(
x,
cutoff,
name = c("cooks.distance", "mdffits", "covratio", "covtrace", "rvc", "leverage"),
data,
index = NULL,
modify = FALSE,
...
)
Arguments
x values of the diagnostic of interest
cutoff value(s) specifying the boundary for unusual values of the diagnostic. The cut-
off(s) can either be supplied by the user, or automatically calculated using mea-
sures of internal scaling if cutoff = "internal".
name what diagnostic is being plotted (one of "cooks.distance", "mdffits", "covratio",
"covtrace", "rvc", or "leverage"). This is used for the calculation of "inter-
nal" cutoffs.
data data frame to use (optional)
index optional parameter to specify index (IDs) of x values. If NULL (default), values
will be indexed in the order of the vector passed to x.
modify specifies the geom to be used to produce a space-saving modification: either
"dotplot" or "boxplot"
... other arguments to be passed to ggplot()
Note
The resulting plot uses coord_flip to rotate the plot, so when adding customized axis labels you
will need to flip the names between the x and y axes.
Author(s)
<NAME> <<EMAIL>>
Examples
data(sleepstudy, package = 'lme4')
fm <- lme4::lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
#Observation level deletion and diagnostics
obs.infl <- hlm_influence(fm, level = 1)
dotplot_diag(x = obs.infl$cooksd, cutoff = "internal", name = "cooks.distance", modify = FALSE)
dotplot_diag(x = obs.infl$mdffits, cutoff = "internal", name = "mdffits", modify = FALSE)
# Subject level deletion and diagnostics
subject.infl <- hlm_influence(fm, level = "Subject")
dotplot_diag(x = subject.infl$cooksd, cutoff = "internal",
name = "cooks.distance", modify = FALSE)
dotplot_diag(x = subject.infl$mdffits, cutoff = "internal", name = "mdffits", modify = "dotplot")
extract_design Extracting covariance matrices from lme
Description
This function extracts the full covariance matrices from a mixed/hierarchical linear model fit using
lme.
Usage
extract_design(b)
Arguments
b a fitted model object of class lme.
Value
A list of matrices is returned.
• D contains the covariance matrix of the random effects.
• V contains the covariance matrix of the response.
• X contains the fixed-effect model matrix.
• Z contains the random-effect model matrix.
Author(s)
<NAME> <<EMAIL>>
References
This method has been adapted from the method mgcv::extract.lme.cov in the mgcv package,
written by <NAME> <<EMAIL>>.
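The following illustrative sketch (an editor addition, not part of the original manual) shows a
typical call, reusing the sleepstudy model fit with nlme::lme from the hlm_resid examples:
data(sleepstudy, package = "lme4")
fm.lme <- nlme::lme(Reaction ~ Days, random = ~Days|Subject, data = sleepstudy)
# Extract the design and covariance matrices from the fitted lme object
mats <- extract_design(fm.lme)
names(mats) # "D", "V", "X", "Z", as described under Value
dim(mats$X) # fixed-effect model matrix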
HLMdiag Diagnostic tools for hierarchical (multilevel) linear models
Description
HLMdiag provides a suite of diagnostic tools for hierarchical (multilevel) linear models fit using
the lme4 or nlme packages. These tools are grouped below by purpose. See the help documentation
for additional information about each function.
Details
Residual analysis
HLMdiag’s hlm_resid function provides a wrapper that extracts residuals and fitted values for
individual observations or groups of observations. In addition to being a wrapper function for
functions implemented in the lme4 and nlme packages, hlm_resid provides access to the marginal
and least squares residuals.
Influence analysis
HLMdiag’s hlm_influence function provides a convenient wrapper to obtain influence diagnostics
for each observation or group of observations appended to the data used to fit the model. The
diagnostics returned by hlm_influence include Cook's distance, MDFFITS, covariance trace
(covtrace), covariance ratio (covratio), leverage, and relative variance change (RVC). HLMdiag also
contains functions to calculate these diagnostics individually, as discussed below.
Influence on fitted values
HLMdiag provides leverage that calculates the influence that observations/groups have on the fit-
ted values (leverage). For mixed/hierarchical models leverage can be decomposed into two parts:
the fixed part and the random part. We refer the user to the references cited in the help documenta-
tion for additional explanation.
Influence on fixed effects estimates
HLMdiag provides cooks.distance and mdffits to assess the influence of subsets of observations
on the fixed effects.
Influence on precision of fixed effects
HLMdiag provides covratio and covtrace to assess the influence of subsets of observations on
the precision of the fixed effects.
Influence on variance components
HLMdiag’s rvc calculates the relative variance change to assess the influence of subsets of obser-
vations on the variance components.
Graphics
HLMdiag also strives to make graphical assessment easier in the ggplot2 framework by providing
dotplots for influence diagnostics (dotplot_diag), grouped Q-Q plots (group_qqnorm), and Q-Q
plots that combine the functionality of qqnorm and qqline (ggplot_qqnorm).
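As an illustrative sketch (an editor addition, not taken from the package manual), a typical
diagnostic workflow combining these tools on the sleepstudy data used throughout this manual
might look like:
data(sleepstudy, package = "lme4")
fm <- lme4::lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
resids <- hlm_resid(fm) # residuals and fitted values appended to the data
infl <- hlm_influence(fm, level = "Subject") # Subject-level influence diagnostics
dotplot_diag(x = infl$cooksd, cutoff = "internal", name = "cooks.distance")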
HLMdiag-defunct Defunct functions in package HLMdiag
Description
These functions are defunct and no longer available.
Usage
HLMresid(...)
HLMresid.default(...)
HLMresid.lmerMod(...)
HLMresid.mer(...)
diagnostics(...)
group_qqnorm(...)
ggplot_qqnorm(...)
Arguments
... arguments passed to defunct functions
Details
HLMresid is replaced by hlm_resid
diagnostics is replaced by hlm_influence
group_qqnorm and ggplot_qqnorm are replaced by functions in qqplotr. See stat_qq_point,
stat_qq_line, and stat_qq_band.
HLMdiag-deprecated Deprecated functions in HLMdiag
Description
These functions still work but will be removed (defunct) in the next version.
Details
• HLMresid: This function is deprecated, and will be removed in the next version of this pack-
age.
• diagnostics: This function is deprecated, and will be removed in the next version of this
package.
hlm_augment Calculating residuals and influence diagnostics for HLMs
Description
hlm_augment is used to compute residuals, fitted values, and influence diagnostics for a hierarchical
linear model. The residuals and fitted values are computed using Least Squares (LS) and Empirical
Bayes (EB) methods. The influence diagnostics are computed through one step approximations.
Usage
hlm_augment(object, ...)
## Default S3 method:
hlm_augment(object, ...)
## S3 method for class 'lmerMod'
hlm_augment(object, level = 1, include.ls = TRUE, data = NULL, ...)
Arguments
object an object of class lmerMod or lme.
... currently not used
level which residuals should be extracted and what cases should be deleted for influ-
ence diagnostics. If level = 1 (default), then within-group (case-level) residuals
are returned and influence diagnostics are calculated for individual observations.
Otherwise, level should be the name of a grouping factor as defined in flist
for a lmerMod object or as in groups for a lme object. This will return between-
group residuals and influence diagnostics calculated for each group.
include.ls a logical indicating if LS residuals should be included in the return tibble. include.ls
= FALSE decreases runtime substantially.
data the original data frame passed to ‘lmer‘. This is only necessary for ‘lmerMod‘
models where ‘na.action = "na.exclude"‘
Details
The hlm_augment function combines functionality from hlm_resid and hlm_influence for a sim-
pler way of obtaining residuals and influence diagnostics. Please see ?hlm_resid and ?hlm_influence
for additional information about the returned values.
Note
hlm_augment does not allow for the deletion of specific cases, the specification of other types of
leverage, or the use of full refits of the model instead of one step approximations for influence
diagnostics. If this additional functionality is desired, hlm_influence should be used instead. The
additional parameter standardize is available in hlm_resid; if this is desired, hlm_resid should
be used instead.
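An illustrative sketch (an editor addition, not part of the original manual), assuming the
sleepstudy model used in the other examples:
data(sleepstudy, package = "lme4")
fm <- lme4::lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
# Observation-level residuals and influence diagnostics appended to the model frame
aug1 <- hlm_augment(fm)
# Subject-level quantities, skipping the slower LS residuals
aug2 <- hlm_augment(fm, level = "Subject", include.ls = FALSE)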
hlm_influence.default Calculating influence diagnostics for HLMs
Description
This function is used to compute influence diagnostics for a hierarchical linear model. It takes a
model fit as a lmerMod object or as a lme object and returns a tibble with Cook’s distance, MDF-
FITS, covtrace, covratio, and leverage.
Usage
## Default S3 method:
hlm_influence(model, ...)
## S3 method for class 'lmerMod'
hlm_influence(
model,
level = 1,
delete = NULL,
approx = TRUE,
leverage = "overall",
data = NULL,
...
)
## S3 method for class 'lme'
hlm_influence(
model,
level = 1,
delete = NULL,
approx = TRUE,
leverage = "overall",
...
)
Arguments
model an object of class lmerMod or lme
... not in use
level used to define the group for which cases are deleted and influence diagnostics
are calculated. If level = 1 (default), then influence diagnostics are calculated
for individual observations. Otherwise, level should be the name of a grouping
factor as defined in flist for a lmerMod object or as in groups for a lme object.
delete numeric index of individual cases to be deleted. If the level parameter is spec-
ified, delete may also take the form of a character vector consisting of group
names as they appear in flist for lme4 models or as in groups for nlme models.
If delete = NULL then all cases are iteratively deleted.
approx logical parameter used to determine how the influence diagnostics are calcu-
lated. If TRUE (default), influence diagnostics are calculated using a one step
approximation. If FALSE, influence diagnostics are calculated by iteratively delet-
ing groups and refitting the model using lmer. This method is more accurate,
but slower than the one step approximation. If approx = FALSE, the returned
tibble also contains columns for relative variance change (RVC).
leverage a character vector to determine which types of leverage should be included in
the returned tibble. There are four options: ’overall’ (default), ’fixef’, ’ranef’,
or ’ranef.uc’. One or more types may be specified. For additional information
about the types of leverage, see ?leverage.
data (optional) the data frame used to fit the model. This is only necessary for
lmerMod models if na.action = "na.exclude" was set.
Details
The hlm_influence function provides a wrapper that appends influence diagnostics to the original
data. The approximated influence diagnostics returned by this function are equivalent to those
returned by cooks.distance, mdffits, covtrace, covratio, and leverage. The exact influence
diagnostics obtained through a full refit of the data are also available through case_delete and the
accompanying functions cooks.distance, mdffits, covtrace, and covratio that can be called
directly on the case_delete object.
Note
It is possible to set level and delete individual cases from different groups using delete, so nu-
meric indices should be double checked to confirm that they encompass entire groups. Additionally,
if delete is specified, leverage values are not returned in the resulting tibble.
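An illustrative sketch (an editor addition, not part of the original manual), again using the
sleepstudy model:
data(sleepstudy, package = "lme4")
fm <- lme4::lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
# One step approximations for every observation (the default, approx = TRUE)
infl1 <- hlm_influence(fm)
# Full refits with each Subject deleted in turn; also returns RVC columns
infl2 <- hlm_influence(fm, level = "Subject", approx = FALSE)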
hlm_resid.default Calculating residuals from HLMs
Description
hlm_resid takes a hierarchical linear model fit as a lmerMod or lme object and extracts residuals
and predicted values from the model, using Least Squares (LS) and Empirical Bayes (EB) methods.
It then appends them to the model data frame in the form of a tibble inspired by the augment
function in broom. This unified framework enables the analyst to more easily conduct an upward
residual analysis during model exploration/checking.
Usage
## Default S3 method:
hlm_resid(object, ...)
## S3 method for class 'lmerMod'
hlm_resid(
object,
level = 1,
standardize = FALSE,
include.ls = TRUE,
data = NULL,
...
)
## S3 method for class 'lme'
hlm_resid(
object,
level = 1,
standardize = FALSE,
include.ls = TRUE,
data = NULL,
...
)
Arguments
object an object of class lmerMod or lme.
... do not use
level which residuals should be extracted: 1 for within-group (case-level) residuals,
the name of a grouping factor for between-group residuals (as defined in flist
in lmerMod objects or in groups in lme objects)
standardize for any level, if standardize = TRUE the standardized residuals will be returned
for any group; for level-1 only, if standardize = "semi" then the semi-standardized
level-1 residuals will be returned
include.ls a logical indicating if LS residuals be included in the return tibble. include.ls
= FALSE decreases runtime substantially.
data if na.action = na.exclude, the user must provide the data set used to fit the
model, otherwise NULL.
Details
The hlm_resid function provides a wrapper that will extract residuals and predicted values from
a fitted lmerMod or lme object. The function provides access to residual quantities already made
available by the functions resid, predict, and ranef, but adds additional functionality. Below is
a list of types of residuals and predicted values that are extracted and appended to the model data.
level-1 residuals
.resid and .fitted Residuals calculated using the EB method (using maximum likelihood). Level-
1 EB residuals are interrelated with higher level residuals. Equivalent to the residuals extracted
by resid(object) and predict(object) respectively. When standardize = TRUE, residu-
als are standardized by sigma components of the model object.
.ls.resid and .ls.fitted Residuals calculated by fitting separate LS regression mod-
els for each group. Level-1 LS residuals are unconfounded by higher level residuals, but
unreliable for small within-group sample sizes.
.mar.resid and .mar.fitted Marginal residuals only consider the fixed effect portion of the es-
timates. Equivalent to resid(object, level=0) in nlme, not currently implemented within
the lme4::resid function. When standardize = TRUE, Cholesky marginal residuals are re-
turned.
higher-level residuals (random effects)
.ranef.var_name The group level random effects using the EB method of estimating parameters.
Equivalent to ranef(object) on the specified level. EB residuals are preferred at higher
levels as LS residuals are dependent on a large sample size.
.ls.var_name The group level random effects using the LS method of estimating parameters. Cal-
culated using ranef on a lmList object to compare the random effects of individual models
to the global model.
Note that standardize = "semi" is only implemented for level-1 LS residuals.
Author(s)
<NAME> <<EMAIL>>, <NAME>, <NAME>
References
Hilden-Minton, J. (1995) Multilevel diagnostics for mixed and hierarchical linear models. Univer-
sity of California Los Angeles.
<NAME>., <NAME>., & <NAME>. (2004) Cholesky Residuals for Assessing Normal Er-
rors in a Linear Model With Correlated Outcomes. Journal of the American Statistical Association,
99(466), 383–394.
<NAME> and <NAME> (2020). broom: Convert Statistical Analysis Objects into Tidy
Tibbles. R package version 0.5.6. https://CRAN.R-project.org/package=broom
See Also
hlm_augment, resid, ranef
Examples
data(sleepstudy, package = "lme4")
fm.lmer <- lme4::lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
fm.lme <- nlme::lme(Reaction ~ Days, random = ~Days|Subject, sleepstudy)
# level-1 and marginal residuals
fm.lmer.res1 <- hlm_resid(fm.lmer) ## raw level-1 and mar resids
fm.lmer.res1
fm.lme.std1 <- hlm_resid(fm.lme, standardize = TRUE) ## std level-1 and mar resids
fm.lme.std1
# level-2 residuals
fm.lmer.res2 <- hlm_resid(fm.lmer, level = "Subject") ## level-2 ranefs
fm.lmer.res2
fm.lme.res2 <- hlm_resid(fm.lme, level = "Subject", include.ls = FALSE) ##level-2 ranef, no LS
fm.lme.res2
leverage.default Leverage for HLMs
Description
This function calculates the leverage of a hierarchical linear model fit by lmer.
Usage
## Default S3 method:
leverage(object, ...)
## S3 method for class 'mer'
leverage(object, level = 1, ...)
## S3 method for class 'lmerMod'
leverage(object, level = 1, ...)
## S3 method for class 'lme'
leverage(object, level = 1, ...)
Arguments
object fitted object of class mer or lmerMod
... do not use
level the level at which the leverage should be calculated: either 1 for observation
level leverage (default) or the name of the grouping factor (as defined in flist
of the mer object) for group level leverage. leverage assumes that the grouping
factors are unique; thus, if IDs are repeated within each unit, unique IDs must
be generated by the user prior to use of leverage.
Details
Demidenko and Stukel (2005) describe leverage for mixed (hierarchical) linear models as being the
sum of two components, a leverage associated with the fixed effects ($H_1$) and a leverage associated with
the random effects ($H_2$), where
$$H_1 = X(X'V^{-1}X)^{-1}X'V^{-1}$$
and
$$H_2 = ZDZ'V^{-1}(I - H_1).$$
Nobre and Singer (2011) propose using
$$H_2^* = ZDZ'$$
as the random effects leverage as it does not rely on the fixed effects.
For individual observations leverage uses the diagonal elements of the above matrices as the mea-
sure of leverage. For higher-level units, leverage uses the mean trace of the above matrices asso-
ciated with each higher-level unit.
Value
leverage returns a data frame with the following columns:
overall The overall leverage, i.e. $H = H_1 + H_2$.
fixef The leverage corresponding to the fixed effects.
ranef The leverage corresponding to the random effects proposed by Demidenko and Stukel (2005).
ranef.uc The (unconfounded) leverage corresponding to the random effects proposed by Nobre
and Singer (2011).
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., & <NAME>. (2005) Influence analysis for linear mixed-effects models. Statistics
in Medicine, 24(6), 893–909.
<NAME>., & <NAME>. (2011) Leverage analysis for linear mixed models. Journal of Applied
Statistics, 38(5), 1063–1072.
See Also
cooks.distance.mer, mdffits.mer, covratio.mer, covtrace.mer, rvc.mer
Examples
data(sleepstudy, package = 'lme4')
fm <- lme4::lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
# Observation level leverage
lev1 <- leverage(fm, level = 1)
head(lev1)
# Group level leverage
lev2 <- leverage(fm, level = "Subject")
head(lev2)
LSresids.default Calculating least squares residuals
Description
This function calculates least squares (LS) residuals found by fitting separate LS regression models
to each case. For examples see the documentation for HLMresid.
Usage
## Default S3 method:
LSresids(object, ...)
## S3 method for class 'mer'
LSresids(object, level, sim = NULL, standardize = FALSE, ...)
## S3 method for class 'lmerMod'
LSresids(object, level, standardize = FALSE, ...)
## S3 method for class 'lme'
LSresids(object, level, standardize = FALSE, ...)
Arguments
object an object of class mer or lmerMod.
... do not use
level which residuals should be extracted: 1 for case-level residuals or the name of
a grouping factor (as defined in flist of the mer object) for between-group
residuals.
sim optional argument giving the data frame used for LS residuals. This is used
mainly when dealing with simulations. Removed in version 0.3.2.
standardize if TRUE the standardized level-1 residuals will also be returned (if level = 1); if
"semi" then the semi-standardized level-1 residuals will be returned.
Author(s)
<NAME> <<EMAIL>>
References
Hilden-Minton, J. (1995) Multilevel diagnostics for mixed and hierarchical linear models. Univer-
sity of California Los Angeles.
See Also
HLMresid
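An illustrative sketch (an editor addition, not part of the original manual):
data(sleepstudy, package = "lme4")
fm <- lme4::lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
# Case-level least squares residuals
ls1 <- LSresids(fm, level = 1)
# Standardized case-level LS residuals
ls1.std <- LSresids(fm, level = 1, standardize = TRUE)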
mdffits.default Influence on fixed effects of HLMs
Description
These functions calculate measures of the change in the fixed effects estimates based on the deletion
of an observation, or group of observations, for a hierarchical linear model fit using lmer.
Usage
## Default S3 method:
mdffits(object, ...)
## S3 method for class 'mer'
cooks.distance(model, level = 1, delete = NULL, ...)
## S3 method for class 'lmerMod'
cooks.distance(model, level = 1, delete = NULL, include.attr = FALSE, ...)
## S3 method for class 'lme'
cooks.distance(model, level = 1, delete = NULL, include.attr = FALSE, ...)
## S3 method for class 'mer'
mdffits(object, level = 1, delete = NULL, ...)
## S3 method for class 'lmerMod'
mdffits(object, level = 1, delete = NULL, include.attr = FALSE, ...)
## S3 method for class 'lme'
mdffits(object, level = 1, delete = NULL, include.attr = FALSE, ...)
Arguments
object fitted object of class mer or lmerMod
... do not use
model fitted model of class mer or lmerMod
level variable used to define the group for which cases will be deleted. If level = 1
(default), then individual cases will be deleted.
delete index of individual cases to be deleted. To delete specific observations the row
number must be specified. To delete higher level units the group ID and group
parameter must be specified. If delete = NULL then all cases are iteratively
deleted.
include.attr logical value determining whether the difference between the full and deleted
parameter estimates should be included. If FALSE (default), a numeric vector
of Cook’s distance or MDFFITS is returned. If TRUE, a tibble with the Cook’s
distance or MDFFITS values in the first column and the parameter differences
in the remaining columns is returned.
Details
Both Cook’s distance and MDFFITS measure the change in the fixed effects estimates based on the
deletion of a subset of observations. The key difference between the two diagnostics is that Cook’s
distance uses the covariance matrix for the fixed effects from the original model while MDFFITS
uses the covariance matrix from the deleted model.
Value
Both functions return a numeric vector (or single value if delete has been specified) as the default.
If include.attr = TRUE, then a tibble is returned. The first column consists of the Cook’s distance
or MDFFITS values, and the later columns capture the difference between the full and deleted
parameter estimates.
Note
Because MDFFITS requires the calculation of the covariance matrix for the fixed effects for every
model, it will be slower.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., <NAME>., & <NAME>. (1992) Case-deletion diagnostics for mixed models.
Technometrics, 34, 38–45.
<NAME>. (2004) Mixed Model Influence Diagnostics, in Proceedings of the Twenty-Ninth
SAS Users Group International Conference, SAS Users Group International.
See Also
leverage.mer, covratio.mer, covtrace.mer, rvc.mer
Examples
data(sleepstudy, package = 'lme4')
ss <- lme4::lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
# Cook's distance for individual observations
ss.cd.lev1 <- cooks.distance(ss)
# Cook's distance for each Subject
ss.cd.subject <- cooks.distance(ss, level = "Subject")
## Not run:
data(Exam, package = 'mlmRev')
fm <- lme4::lmer(normexam ~ standLRT * schavg + (standLRT | school), Exam)
# Cook's distance for individual observations
cd.lev1 <- cooks.distance(fm)
# Cook's distance for each school
cd.school <- cooks.distance(fm, level = "school")
# Cook's distance when school 1 is deleted
cd.school1 <- cooks.distance(fm, level = "school", delete = 1)
## End(Not run)
# MDFFITS for individual observations
ss.m1 <- mdffits(ss)
# MDFFITS for each Subject
ss.m.subject <- mdffits(ss, level = "Subject")
## Not run:
# MDFFITS for individual observations
m1 <- mdffits(fm)
# MDFFITS for each school
m.school <- mdffits(fm, level = "school")
## End(Not run)
pull_resid.default Computationally Efficient HLM Residuals
Description
pull_resid takes a hierarchical linear model fit as a lmerMod or lme object and returns various
types of level-1 residuals as a vector. Because pull_resid only calculates one type of residual,
it is more efficient than using hlm_resid and indexing the resulting tibble. pull_resid is designed
to be used with methods that take a long time to run, such as the resampling methods found in the
lmeresampler package.
Usage
## Default S3 method:
pull_resid(object, ...)
## S3 method for class 'lmerMod'
pull_resid(object, type = "ls", standardize = FALSE, ...)
## S3 method for class 'lme'
pull_resid(object, type = "ls", standardize = FALSE, ...)
Arguments
object an object of class lmerMod or lme.
... not in use
type which residuals should be returned. Can be either ’ls’, ’eb’, or ’marginal’
standardize a logical indicating if residuals should be standardized
Details
type = "ls" Residuals calculated by fitting separate LS regression models for each group. LS
residuals are unconfounded by higher level residuals, but unreliable for small within-group
sample sizes. When standardize = TRUE, residuals are standardized by sigma components
of the model object.
type = "eb" Residuals calculated using the empirical Bayes (EB) method using maximum likeli-
hood. EB residuals are interrelated with higher level residuals. When standardize = TRUE,
residuals are standardized by sigma components of the model object.
type = "marginal" Marginal residuals only consider the fixed effect portion of the estimates.
When standardize = TRUE, Cholesky residuals are returned.
See Also
hlm_resid
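An illustrative sketch (an editor addition, not part of the original manual):
data(sleepstudy, package = "lme4")
fm <- lme4::lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
# Level-1 least squares residuals as a plain vector
r.ls <- pull_resid(fm, type = "ls")
# Standardized EB (conditional) residuals
r.eb <- pull_resid(fm, type = "eb", standardize = TRUE)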
radon Radon data
Description
Radon measurements of 919 owner-occupied homes in 85 counties of Minnesota.
Usage
data(radon)
Format
A data frame with 919 observations on the following 5 variables:
log.radon Radon measurement (in log pCi/L, i.e., log picoCurie per liter)
basement Indicator for the level of the home at which the radon measurement was taken - 0 =
basement, 1 = first floor.
uranium Average county-level soil uranium content.
county County ID.
county.name County name - a factor.
Source
http://www.stat.columbia.edu/~gelman/arm/software/
References
<NAME>., <NAME>. and <NAME>. (1996) Bayesian prediction of mean indoor radon concen-
trations for Minnesota counties. Health Physics. 71(6), 922–936.
<NAME>. and <NAME>. (2007) Data analysis using regression and multilevel/hierarchical models.
Cambridge University Press.
resid_conditional.default
Conditional residuals
Description
Calculates conditional residuals of lmerMod and lme model objects.
Usage
## Default S3 method:
resid_conditional(object, type)
## S3 method for class 'lmerMod'
resid_conditional(
object,
type = c("raw", "pearson", "studentized", "cholesky")
)
## S3 method for class 'lme'
resid_conditional(
object,
type = c("raw", "pearson", "studentized", "cholesky")
)
Arguments
object an object of class lmerMod or lme.
type a character string specifying what type of residuals should be calculated. It is
set to "raw" (observed - fitted) by default. Other options include "pearson",
"studentized", and "cholesky". Partial matching of arguments is used, so
only the first character needs to be provided.
Details
For a model of the form $Y = X\beta + Zb + \varepsilon$, four types of conditional residuals can be calculated:
raw: $e = Y - X\hat{\beta} - Z\hat{b}$
pearson: $e / \sqrt{\mathrm{diag}(\widehat{\mathrm{Var}}(Y \mid b))}$
studentized: $e / \sqrt{\mathrm{diag}(\widehat{\mathrm{Var}}(e))}$
cholesky: $\hat{C}^{-1} e$ where $\hat{C}\hat{C}' = \widehat{\mathrm{Var}}(e)$
Value
A vector of conditional residuals.
References
<NAME>., <NAME>., & <NAME>. (2017). Graphical Tools for Detecting Departures
from Linear Mixed Model Assumptions and Some Remedial Measures. International Statistical
Review, 85, 290–324.
<NAME>. (2004) Mixed Model Influence Diagnostics, in Proceedings of the Twenty-Ninth
SAS Users Group International Conference, SAS Users Group International.
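An illustrative sketch (an editor addition, not part of the original manual):
data(sleepstudy, package = "lme4")
fm <- lme4::lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
# Raw (observed - fitted) conditional residuals, the default type
rc.raw <- resid_conditional(fm)
# Pearson-scaled conditional residuals; partial matching allows type = "p"
rc.pearson <- resid_conditional(fm, type = "pearson")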
resid_marginal.default
Marginal residuals
Description
Calculates marginal residuals of lmerMod and lme model objects.
Usage
## Default S3 method:
resid_marginal(object, type)
## S3 method for class 'lmerMod'
resid_marginal(object, type = c("raw", "pearson", "studentized", "cholesky"))
## S3 method for class 'lme'
resid_marginal(object, type = c("raw", "pearson", "studentized", "cholesky"))
Arguments
object an object of class lmerMod or lme.
type a character string specifying what type of residuals should be calculated. It is
set to "raw" (observed - fitted) by default. Other options include "pearson",
"studentized", and "cholesky". Partial matching of arguments is used, so
only the first character needs to be provided.
Details
For a model of the form $Y = X\beta + Zb + \varepsilon$, four types of marginal residuals can be calculated:
raw: $r = Y - X\hat{\beta}$
pearson: $r / \sqrt{\mathrm{diag}(\widehat{\mathrm{Var}}(Y))}$
studentized: $r / \sqrt{\mathrm{diag}(\widehat{\mathrm{Var}}(r))}$
cholesky: $\hat{C}^{-1} r$ where $\hat{C}\hat{C}' = \widehat{\mathrm{Var}}(Y)$
Value
A vector of marginal residuals.
References
<NAME>., <NAME>., & <NAME>. (2017). Graphical Tools for Detecting Departures
from Linear Mixed Model Assumptions and Some Remedial Measures. International Statistical
Review, 85, 290–324.
<NAME>. (2004) Mixed Model Influence Diagnostics, in Proceedings of the Twenty-Ninth
SAS Users Group International Conference, SAS Users Group International.
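An illustrative sketch (an editor addition, not part of the original manual):
data(sleepstudy, package = "lme4")
fm <- lme4::lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
# Cholesky marginal residuals, useful for checking the normal-errors assumption
rm.chol <- resid_marginal(fm, type = "cholesky")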
resid_ranef Random effects residuals
Description
Calculates Random effects residuals of lmerMod model objects.
Usage
resid_ranef(object, level, which, standardize)
Arguments
object an object of class lmerMod.
level DESCRIPTION
which DESCRIPTION
standardize DESCRIPTION
Value
A vector of conditional residuals.
rotate_ranef.default Calculate s-dimensional rotated random effects
Description
This function calculates reduced dimensional rotated random effects. The rotation reduces the
influence of the residuals from other levels of the model so that distributional assessment of the
resulting random effects is possible.
Usage
## Default S3 method:
rotate_ranef(.mod, ...)
## S3 method for class 'mer'
rotate_ranef(.mod, .L, s = NULL, .varimax = FALSE, ...)
## S3 method for class 'lmerMod'
rotate_ranef(.mod, .L, s = NULL, .varimax = FALSE, ...)
## S3 method for class 'lme'
rotate_ranef(.mod, .L, s = NULL, .varimax = FALSE, ...)
Arguments
.mod an object of class mer or lmerMod.
... do not use
.L a matrix defining which combination of random effects are of interest.
s the dimension of the subspace of interest.
.varimax if .varimax = TRUE then the raw varimax rotation will be applied to the resulting
rotation.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>. & <NAME>. (in press). Are you Normal? The Problem of Confounded Residual Struc-
tures in Hierarchical Linear Models. Journal of Computational and Graphical Statistics.
rvc.default Relative variance change for HLMs
Description
This function calculates the relative variance change (RVC) of hierarchical linear models fit via
lmer.
Usage
## Default S3 method:
rvc(object, ...)
## S3 method for class 'mer'
rvc(object, level = 1, delete = NULL, ...)
## S3 method for class 'lmerMod'
rvc(object, level = 1, delete = NULL, ...)
## S3 method for class 'lme'
rvc(object, level = 1, delete = NULL, ...)
Arguments
object fitted object of class mer or lmerMod
... do not use
level variable used to define the group for which cases will be deleted. If level = 1
(default), then individual cases will be deleted.
delete index of individual cases to be deleted. To delete specific observations the row
number must be specified. To delete higher level units the group ID and group
parameter must be specified. If delete = NULL then all cases are iteratively
deleted.
Value
If delete = NULL a matrix with columns corresponding to the variance components of the model
and rows corresponding to the deleted observation/group is returned.
If delete is specified then a named vector is returned.
The residual variance is named sigma2 and the other variance components are named D** where
the trailing digits give the position in the covariance matrix of the random effects.
Author(s)
<NAME> <<EMAIL>>
References
Dillane, D. (2005) Deletion Diagnostics for the Linear Mixed Model. Ph.D. thesis, Trinity College
Dublin
See Also
leverage.mer, cooks.distance.mer, mdffits.mer, covratio.mer, covtrace.mer
varcomp.mer Extracting variance components
Description
This function extracts the variance components from a mixed/hierarchical linear model fit using
lmer.
Usage
varcomp.mer(object)
Arguments
object a fitted model object of class mer or lmerMod.
Value
A named vector is returned. sigma2 denotes the residual variance. The other variance components
are named D** where the trailing digits specify the position of that variance component in the
covariance matrix of the random effects.
Author(s)
<NAME> <<EMAIL>>
Examples
data(sleepstudy, package = "lme4")
fm1 <- lme4::lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
varcomp.mer(fm1)
wages Wages for male high school dropouts
Description
Data on the labor-market experience of male high school dropouts.
Format
A data frame with 6402 observations on the following 15 variables.
id respondent id - a factor with 888 levels.
lnw natural log of wages expressed in 1990 dollars.
exper years of experience in the work force
ged equals 1 if respondent has obtained a GED as of the time of survey, 0 otherwise
postexp labor force participation since obtaining a GED (in years) - before a GED is earned postexp
= 0, and on the day a GED is earned postexp = 0
black factor - equals 1 if subject is black, 0 otherwise
hispanic factor - equals 1 if subject is hispanic, 0 otherwise
hgc highest grade completed - takes integers 6 through 12
hgc.9 hgc - 9, a centered version of hgc
uerate local area unemployment rate for that year
ue.7
ue.centert1
ue.mean
ue.person.cen
ue1
Source
These data are originally from the 1979 National Longitudinal Survey on Youth (NLSY79).
Singer and Willett (2003) used these data for examples in chapter (insert info. here) and the data
sets used can be found on the UCLA Statistical Computing website: https://stats.idre.ucla.
edu/other/examples/alda/
Additionally the data were discussed by Cook and Swayne (2003) and the data can be found on the
GGobi website: http://ggobi.org/book.html.
References
<NAME>. and <NAME>. (2003), Applied Longitudinal Data Analysis: Modeling Change and
Event Occurrence, New York: Oxford University Press.
<NAME>. and <NAME>. (2007), Interactive and Dynamic Graphics for Data Analysis with R
and GGobi, Springer.
Examples
str(wages)
summary(wages)
## Not run:
library(lme4)
lmer(lnw ~ exper + (exper | id), data = wages)
## End(Not run)
django-widgy[¶](#django-widgy)
===
django-widgy is a heterogeneous tree editor for Django that is well-suited for use as a CMS. A heterogeneous tree is a tree where each node can be a different type—just like HTML. Widgy provides the representation for heterogeneous trees as well as an interactive JavaScript editor for them. Widgy supports Django 1.4+.
Widgy was originally created for powerful content management, but it can have many different uses.
Design[¶](#design)
---
django-widgy is a heterogeneous tree editor for Django. It enables you to combine models of different types into a tree structure.
The django-widgy project is split into two main pieces. Widgy core provides
`Node`, the `Content` abstract class,
versioning models, views, configuration helpers, and the JavaScript editor code. Much like in Django, django-widgy has many contrib packages that provide the batteries.
### Data Model[¶](#data-model)
Central to Widgy are Nodes, Contents, and Widgets. `Node`
is a subclass of Treebeard’s [`MP_Node`](https://tabo.pe/projects/django-treebeard/docs/2.0b1/mp_tree.html#treebeard.mp_tree.MP_Node).
Nodes concern themselves with the tree structure. Each Node is associated with an instance of a `Content` subclass. A Node + Content combination is called a Widget.
Storing all the structure data in Node and having that point to any subclass of Content allows us to have all the benefits of a tree, but also the flexibility to store very different data within a tree.
`Nodes` are associated with their
`Content` through a
[`GenericForeignKey`](https://docs.djangoproject.com/en/1.5/ref/contrib/contenttypes/#django.contrib.contenttypes.generic.GenericForeignKey).
This is what a hypothetical Widgy tree might look like:
```
Node (TwoColumnLayout)
|
+-- Node (MainBucket)
| |
| +-- Node (Text)
| |
| +-- Node (Image)
| |
| +-- Node (Form)
| |
| +-- Node (Input)
| |
| +-- Node (Checkboxes)
| |
| +-- Node (SubmitButton)
|
+-- Node (SidebarBucket)
|
+-- Node (CallToAction)
```
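To get a feel for how Nodes and Contents fit together, here is a minimal sketch (the root node lookup is hypothetical) that walks such a tree using the `prefetch_tree()` and `depth_first_order()` methods documented in the API reference below:

```
from widgy.models.base import Node

root = Node.objects.get(pk=1)   # hypothetical: fetch some root node
root.prefetch_tree()            # load the whole subtree in a few queries
for node in root.depth_first_order():
    print(node.content)         # each Node's Content is the actual widget
```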
### Versioning[¶](#versioning)
Widgy comes with an optional but powerful versioning system inspired by Git. Versioning works by putting another model called a version tracker between the owner and the root node. Just like in Git, each
`VersionTracker` has a reference to a current working copy and then a list of commits. A
`VersionCommit` is a frozen snapshot of the tree.
Versioning also supports delayed publishing of commits. Normally commits are visible immediately, but it is possible to set a publish time on a commit so that its content is published in the future.
To enable versioning, all you need to do is use
`widgy.db.fields.VersionedWidgyField` instead of
`widgy.db.fields.WidgyField`.
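A minimal sketch of an owner model using the versioned field; the `site` and `root_choices` keyword arguments shown here are assumptions meant to mirror `WidgyField`, not a complete reference:

```
from django.db import models

from widgy.db.fields import VersionedWidgyField

class MyPage(models.Model):
    title = models.CharField(max_length=255)
    # the keyword arguments below are illustrative assumptions
    content = VersionedWidgyField(
        site='myproject.widgy_site.site',
        root_choices=['page_builder.DefaultLayout'],
    )
```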
Todo
diagram
### Customization[¶](#customization)
There are two main ways to customize the behavior of Widgy and existing widgets.
The first is through the [`WidgySite`](index.html#widgy.site.WidgySite).
[`WidgySite`](index.html#widgy.site.WidgySite) is a centralized source of configuration for a Widgy instance, much like Django’s
[`AdminSite`](https://docs.djangoproject.com/en/1.5/ref/contrib/admin/#django.contrib.admin.AdminSite). You can also configure each widget’s behavior by subclassing it with a proxy.
#### WidgySite[¶](#widgysite)
* tracks installed widgets
* stores URLs
* provides authorization
* allows centralized overriding of compatibility between components
* accommodates multiple instances of widgy
#### Proxying a Widget[¶](#proxying-a-widget)
Widgy uses a special subclass of
[`GenericForeignKey`](https://docs.djangoproject.com/en/1.5/ref/contrib/contenttypes/#django.contrib.contenttypes.generic.GenericForeignKey) that supports retrieving proxy models. Subclassing a model as a proxy is a lightweight method for providing custom behavior for widgets that you don’t control.
A more in-depth tutorial on proxying widgets can be found at the
[*Proxy Widgy Model Tutorial*](index.html#document-tutorials/proxy-widget).
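As a quick illustration (the tutorial covers the details), here is a sketch that swaps in a proxy for an existing widget; the proxy class and its tooltip are hypothetical:

```
import widgy
from widgy.contrib.page_builder.models import Button

widgy.unregister(Button)          # stop offering the stock Button on the shelf

@widgy.register
class TrackedButton(Button):      # hypothetical proxy with custom behavior
    class Meta:
        proxy = True

    tooltip = 'A button whose clicks we track.'
```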
### Owners[¶](#owners)
A Widgy owner is a model that has a `WidgyField`.
#### Admin[¶](#admin)
* Use `WidgyAdmin` (or a `WidgyForm` for your admin form).
* Use `get_action_links` to add a preview button to the editor.
#### Page Builder[¶](#page-builder)
If layouts should extend something other than `layout_base.html`, set the
`base_template` property on your owner.
#### Form Builder[¶](#form-builder)
`widgy.contrib.form_builder` requires a `get_form_action_url` method on the owner. It accepts the form widget and the Widgy context, and returns a URL for forms to submit to. You will normally submit to your own view and mix in
`HandleFormMixin` to help with handling the form submission. Make sure that re-rendering the page after a validation error works.
Todo
tutorials/owner
#### Tutorial[¶](#tutorial)
> * It’s probably a good idea to render the entire page through Widgy, so I’ve
> used a template like this:
> ```
> {# product_list.html #}
> {% load widgy_tags %}{% render_root category 'content' %}
> ```
> I have been inserting the ‘view’ style functionality, in this case a list
> of products in a category, with a `ProductList` widget.
You’ll probably have to add support for `root_node_override` to your view,
like this:
```
root_node_pk = self.kwargs.get('root_node_pk')
if root_node_pk:
    site.authorize_view(self.request, self)
    kwargs['root_node_override'] = get_object_or_404(Node, pk=root_node_pk)
elif hasattr(self, 'form_node'):
    kwargs['root_node_override'] = self.form_node.get_root()
```
### Editor[¶](#editor)
Widgy provides a drag and drop JavaScript editor interface to the tree in the form of a Django formfield.
The editor is built on Backbone.js and RequireJS to provide a modular and customizable interface.
Contrib Packages[¶](#contrib-packages)
---
Here is where we keep the batteries. These packages are Django apps that add functionality or widgets to Widgy.
### Page Builder[¶](#page-builder)
Page builder is a collection of widgets for the purpose of creating HTML pages.
#### Installation[¶](#installation)
Page builder depends on the following packages:
* django-filer
* markdown
* bleach
* sorl-thumbnail
You can install them manually, or you can install them using the django-widgy package:
```
$ pip install django-widgy[page_builder]
```
#### Widgets[¶](#widgets)
*class* `widgy.contrib.page_builder.models.``DefaultLayout`[¶](#widgy.contrib.page_builder.models.DefaultLayout)
Todo
Who actually uses DefaultLayout?
*class* `widgy.contrib.page_builder.models.``MainContent`[¶](#widgy.contrib.page_builder.models.MainContent)
*class* `widgy.contrib.page_builder.models.``Sidebar`[¶](#widgy.contrib.page_builder.models.Sidebar)
*class* `widgy.contrib.page_builder.models.``Markdown`[¶](#widgy.contrib.page_builder.models.Markdown)
*class* `widgy.contrib.page_builder.models.``Html`[¶](#widgy.contrib.page_builder.models.Html)
The HTML Widget provides a CKEditor field. It is useful for large blocks of text that need simple inline styling. It purposefully doesn’t have the capability to add images or tables, because there are already widgets that the developer can control.
Note
There is a possible permission escalation vulnerability with allowing any admin user to add HTML. For this reason, the [`Html`](#widgy.contrib.page_builder.models.Html) widget sanitizes all the HTML using [bleach](https://pypi.python.org/pypi/bleach). If you want to add unsanitized HTML, please use the [`UnsafeHtml`](#widgy.contrib.page_builder.models.UnsafeHtml) widget.
*class* `widgy.contrib.page_builder.models.``UnsafeHtml`[¶](#widgy.contrib.page_builder.models.UnsafeHtml)
This is a widget which allows the user to output arbitrary HTML. It is unsafe because a non-superuser could escalate their permissions by publishing unsanitized HTML containing XSS code on the website.
Warning
The `page_builder.add_unsafehtml` and `page_builder.edit_unsafehtml`
permissions are equivalent to `is_superuser` status because of the possibility of a staff user inserting JavaScript that a superuser will execute.
*class* `widgy.contrib.page_builder.models.``CalloutWidget`[¶](#widgy.contrib.page_builder.models.CalloutWidget)
*class* `widgy.contrib.page_builder.models.``Accordion`[¶](#widgy.contrib.page_builder.models.Accordion)
*class* `widgy.contrib.page_builder.models.``Tabs`[¶](#widgy.contrib.page_builder.models.Tabs)
*class* `widgy.contrib.page_builder.models.``Section`[¶](#widgy.contrib.page_builder.models.Section)
*class* `widgy.contrib.page_builder.models.``Image`[¶](#widgy.contrib.page_builder.models.Image)
*class* `widgy.contrib.page_builder.models.``Video`[¶](#widgy.contrib.page_builder.models.Video)
*class* `widgy.contrib.page_builder.models.``Figure`[¶](#widgy.contrib.page_builder.models.Figure)
*class* `widgy.contrib.page_builder.models.``GoogleMap`[¶](#widgy.contrib.page_builder.models.GoogleMap)
*class* `widgy.contrib.page_builder.models.``Button`[¶](#widgy.contrib.page_builder.models.Button)
#### Tables[¶](#tables)
*class* `widgy.contrib.page_builder.models.``Table`[¶](#widgy.contrib.page_builder.models.Table)
*class* `widgy.contrib.page_builder.models.``TableRow`[¶](#widgy.contrib.page_builder.models.TableRow)
*class* `widgy.contrib.page_builder.models.``TableHeaderData`[¶](#widgy.contrib.page_builder.models.TableHeaderData)
*class* `widgy.contrib.page_builder.models.``TableData`[¶](#widgy.contrib.page_builder.models.TableData)
#### Database Fields[¶](#database-fields)
*class* `widgy.contrib.page_builder.db.fields.``ImageField`[¶](#widgy.contrib.page_builder.db.fields.ImageField)
A [FilerFileField](http://django-filer.readthedocs.org/en/latest/usage.html#usage) that only accepts images. Includes sensible defaults for use in Widgy — `null=True`,
`related_name='+'` and `on_delete=PROTECT`.
### Form Builder[¶](#form-builder)
Form builder is a collection of tools built on top of Page Builder that help with the creation of HTML forms.
To enable Form Builder, add `widgy.contrib.form_builder` to your [`INSTALLED_APPS`](https://docs.djangoproject.com/en/1.5/ref/settings/#std:setting-INSTALLED_APPS).
#### Installation[¶](#installation)
Form builder depends on the following packages:
* django-widgy[page_builder]
* django-extensions
* html2text
* phonenumbers
You can install them manually, or you can install them using the django-widgy package:
```
$ pip install django-widgy[page_builder,form_builder]
```
#### Success Handlers[¶](#success-handlers)
When a user submits a [`Form`](#widgy.contrib.form_builder.models.Form), the [`Form`](#widgy.contrib.form_builder.models.Form) will loop through all of the success handler widgets to do the things that you would normally put in the
`form_valid` method of a `django.views.generic.FormView`, for example. Form Builder provides a couple of built-in success handlers that do things like saving the data, sending emails, or submitting to Salesforce.
#### Widgets[¶](#widgets)
*class* `widgy.contrib.form_builder.models.``Form`[¶](#widgy.contrib.form_builder.models.Form)
This widget corresponds to the HTML `<form>` tag. It acts as a container and also can be used to construct a Django Form class.
`build_form_class`(*self*)[¶](#widgy.contrib.form_builder.models.Form.build_form_class)
Returns a Django Form class based on the FormField widgets inside the form.
*class* `widgy.contrib.form_builder.models.``Uncaptcha`[¶](#widgy.contrib.form_builder.models.Uncaptcha)
*class* `widgy.contrib.form_builder.models.``FormField`[¶](#widgy.contrib.form_builder.models.FormField)
[`FormField`](#widgy.contrib.form_builder.models.FormField) is an abstract base class for the following widgets.
Each [`FormField`](#widgy.contrib.form_builder.models.FormField) has the following fields which correspond to the same attributes on `django.forms.fields.Field`.
`label`[¶](#widgy.contrib.form_builder.models.FormField.label)
Corresponds with the HTML `<label>` tag. This is the text that will go inside the label.
`required`[¶](#widgy.contrib.form_builder.models.FormField.required)
Indicates whether or not this field is required. Defaults to True.
`help_text`[¶](#widgy.contrib.form_builder.models.FormField.help_text)
A TextField for outputting help text.
*class* `widgy.contrib.form_builder.models.``FormInput`[¶](#widgy.contrib.form_builder.models.FormInput)
This is a widget for all simple `<input>` types. It supports the following input types: `text`, `number`, `email`, `tel`,
`checkbox`, `date`. Respectively they correspond with the following Django formfields: [`CharField`](https://docs.djangoproject.com/en/1.5/ref/forms/fields/#django.forms.CharField),
[`IntegerField`](https://docs.djangoproject.com/en/1.5/ref/forms/fields/#django.forms.IntegerField),
[`EmailField`](https://docs.djangoproject.com/en/1.5/ref/forms/fields/#django.forms.EmailField), `PhoneNumberField`,
[`BooleanField`](https://docs.djangoproject.com/en/1.5/ref/forms/fields/#django.forms.BooleanField),
[`DateField`](https://docs.djangoproject.com/en/1.5/ref/forms/fields/#django.forms.DateField).
*class* `widgy.contrib.form_builder.models.``Textarea`[¶](#widgy.contrib.form_builder.models.Textarea)
*class* `widgy.contrib.form_builder.models.``ChoiceField`[¶](#widgy.contrib.form_builder.models.ChoiceField)
*class* `widgy.contrib.form_builder.models.``MultipleChoiceField`[¶](#widgy.contrib.form_builder.models.MultipleChoiceField)
#### Owner Contract[¶](#owner-contract)
For custom [Widgy owners](index.html#owners), Form Builder needs to have a view to use for handling form submissions.
1. Each widgy owner should implement a
`get_form_action_url(form, widgy_context)` method that returns a URL that points to a view (see step 2).
2. Create a view to handle form submissions for each owner. Form Builder provides the class-based views mixin,
[`HandleFormMixin`](#widgy.contrib.form_builder.models.widgy.contrib.form_builder.views.HandleFormMixin), to make this easier.
#### Views[¶](#views)
*class* `widgy.contrib.form_builder.views.``HandleFormMixin`[¶](#widgy.contrib.form_builder.models.widgy.contrib.form_builder.views.HandleFormMixin)
An abstract view mixin for handling form_builder.Form submissions. It inherits from
[`django.views.generic.edit.FormMixin`](https://docs.djangoproject.com/en/1.5/ref/class-based-views/mixins-editing/#django.views.generic.edit.FormMixin).
It should be registered with a URL similar to the following.
```
url('^form/(?P<form_node_pk>[^/]*)/$', 'your_view')
```
[`HandleFormMixin`](#widgy.contrib.form_builder.models.widgy.contrib.form_builder.views.HandleFormMixin) does not implement a GET method, so your subclass should handle that. Here is an example of a fully functioning implementation:
```
from django.views.generic import DetailView

from widgy.contrib.form_builder.views import HandleFormMixin

class EventDetailView(HandleFormMixin, DetailView):
    model = Event

    def post(self, *args, **kwargs):
        self.object = self.get_object()
        return super(EventDetailView, self).post(*args, **kwargs)
```
`widgy.contrib.widgy_mezzanine.views.HandleFormView` provides an even more robust example implementation.
### Widgy Mezzanine[¶](#widgy-mezzanine)
This app provides integration with the [Mezzanine](http://mezzanine.jupo.org/) project. Widgy Mezzanine uses Mezzanine for site structure and Widgy for page content. It does this by providing a subclass of Mezzanine’s Page model called
[`WidgyPage`](#widgy.contrib.widgy_mezzanine.models.WidgyPage) which delegates to Page Builder for all content.
The dependencies for Widgy Mezzanine (Mezzanine and Widgy’s Page Builder app)
are not installed by default when you install widgy; you can install them yourself:
```
$ pip install Mezzanine django-widgy[page_builder]
```
or you can install them through the widgy package:
```
$ pip install django-widgy[page_builder,widgy_mezzanine]
```
In order to use Widgy Mezzanine, you must provide `WIDGY_MEZZANINE_SITE` in your settings. This is a fully-qualified import path to an instance of
[`WidgySite`](index.html#widgy.site.WidgySite). You also need to install the URLs.
```
url(r'^widgy-mezzanine/', include('widgy.contrib.widgy_mezzanine.urls')),
```
*class* `widgy.contrib.widgy_mezzanine.models.``WidgyPage`[¶](#widgy.contrib.widgy_mezzanine.models.WidgyPage)
The `WidgyPage` class is
`swappable` like [`User`](https://docs.djangoproject.com/en/1.5/ref/contrib/auth/#django.contrib.auth.models.User). If you want to override it, specify a `WIDGY_MEZZANINE_PAGE_MODEL` in your settings. The
`widgy.contrib.widgy_mezzanine.models.WidgyPageMixin` mixin is provided for ease of overriding. Any code that references a
[`WidgyPage`](#widgy.contrib.widgy_mezzanine.models.WidgyPage) should use the
`widgy.contrib.widgy_mezzanine.get_widgypage_model()` to get the correct class.
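For example, rather than importing `WidgyPage` directly:

```
from widgy.contrib.widgy_mezzanine import get_widgypage_model

WidgyPage = get_widgypage_model()   # honors WIDGY_MEZZANINE_PAGE_MODEL
```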
### Review Queue[¶](#review-queue)
Some companies have stricter policies for who can edit and who can publish content on their websites. The review queue app is an extension to versioning which collects commits for approval by a user with permissions.
The `review_queue.change_reviewedversioncommit` permission is used to determine who is allowed to approve commits.
To enable the review queue,
1. Add `widgy.contrib.review_queue` to your
[`INSTALLED_APPS`](https://docs.djangoproject.com/en/1.5/ref/settings/#std:setting-INSTALLED_APPS).
2. Your [`WidgySite`](index.html#widgy.site.WidgySite) needs to inherit from
`ReviewedWidgySite`.
3. Register a subclass of [`VersionCommitAdminBase`](#widgy.contrib.review_queue.admin.VersionCommitAdminBase).
```
from django.contrib import admin

from widgy.contrib.review_queue.admin import VersionCommitAdminBase
from widgy.contrib.review_queue.models import ReviewedVersionCommit

class VersionCommitAdmin(VersionCommitAdminBase):
    def get_site(self):
        return my_site

admin.site.register(ReviewedVersionCommit, VersionCommitAdmin)
```
4. If upgrading from a non-reviewed site, a
`widgy.contrib.review_queue.models.ReviewedVersionCommit`
object must be created for each `widgy.models.VersionCommit`.
There is a management command to do this for you. It assumes that all existing commits should be approved.
```
./manage.py populate_review_queue
```
*class* `admin.``VersionCommitAdminBase`[¶](#widgy.contrib.review_queue.admin.VersionCommitAdminBase)
This is an abstract [`ModelAdmin`](https://docs.djangoproject.com/en/1.5/ref/contrib/admin/#django.contrib.admin.ModelAdmin)
class that displays the pending changes for approval. It is abstract because it doesn't know which [`WidgySite`](index.html#widgy.site.WidgySite) to use.
`get_site`(*self*)[¶](#widgy.contrib.review_queue.admin.VersionCommitAdminBase.get_site)
The [`WidgySite`](index.html#widgy.site.WidgySite) that this specific
[`VersionCommitAdminBase`](#widgy.contrib.review_queue.admin.VersionCommitAdminBase) needs to work on.
Note
The review queue's undo support (it can undo approvals) requires Django >= 1.5 or the session-based `MESSAGE_STORAGE`:
```
MESSAGE_STORAGE = 'django.contrib.messages.storage.session.SessionStorage'
```
API Reference[¶](#api-reference)
---
### Base Models[¶](#base-models)
*class* `widgy.models.base.``Content`[[source]](_modules/widgy/models/base.html#Content)[¶](#widgy.models.base.Content)
`node`[¶](#widgy.models.base.Content.node)
Accessor for the [`Node`](#widgy.models.base.Node) that the [`Content`](#widgy.models.base.Content) belongs to.
Tree Traversal
With the exception [`depth_first_order()`](#widgy.models.base.Content.depth_first_order), the following methods are all like the traversal API provided by [`Treebeard`](https://tabo.pe/projects/django-treebeard/docs/2.0b1/api.html#module-treebeard.models), but instead of returning [`Nodes`](#widgy.models.base.Node), they return [`Contents`](#widgy.models.base.Content).
`get_root`(*self*)[[source]](_modules/widgy/models/base.html#Content.get_root)[¶](#widgy.models.base.Content.get_root)
`get_ancestors`(*self*)[[source]](_modules/widgy/models/base.html#Content.get_ancestors)[¶](#widgy.models.base.Content.get_ancestors)
`get_parent`(*self*)[[source]](_modules/widgy/models/base.html#Content.get_parent)[¶](#widgy.models.base.Content.get_parent)
`get_next_sibling`(*self*)[[source]](_modules/widgy/models/base.html#Content.get_next_sibling)[¶](#widgy.models.base.Content.get_next_sibling)
`get_children`(*self*)[[source]](_modules/widgy/models/base.html#Content.get_children)[¶](#widgy.models.base.Content.get_children)
`depth_first_order`(*self*)[[source]](_modules/widgy/models/base.html#Content.depth_first_order)[¶](#widgy.models.base.Content.depth_first_order)
Convenience method for iterating over all the [`Contents`](#widgy.models.base.Content) in a subtree in order. This is similar to Treebeard’s
[`get_descendants()`](https://tabo.pe/projects/django-treebeard/docs/2.0b1/api.html#treebeard.models.Node.get_descendants), but includes itself.
Tree Manipulation
The following methods mirror those of [`Node`](#widgy.models.base.Node), but accept a
[`WidgySite`](index.html#widgy.site.WidgySite) as the first argument. You must call these methods on [`Content`](#widgy.models.base.Content) and not on [`Node`](#widgy.models.base.Node).
```
>>> root = Layout.add_root(widgy_site)
>>> main = root.add_child(widgy_site, MainContent)
>>> sidebar = main.add_sibling(widgy_site, Sidebar, title='Alerts')
# move the sidebar to the left of the main content.
>>> sidebar.reposition(widgy_site, right=main)
```
*classmethod* `add_root`(*cls*, *site*, ***kwargs*)[[source]](_modules/widgy/models/base.html#Content.add_root)[¶](#widgy.models.base.Content.add_root)
Creates a root node widget. Any kwargs will be passed to the Content class’s initialize method.
`add_child`(*self*, *site*, *cls*, ***kwargs*)[[source]](_modules/widgy/models/base.html#Content.add_child)[¶](#widgy.models.base.Content.add_child)
Adds a new instance of `cls` as the last child of the current widget.
`add_sibling`(*self*, *site*, *cls*, ***kwargs*)[[source]](_modules/widgy/models/base.html#Content.add_sibling)[¶](#widgy.models.base.Content.add_sibling)
Adds a new instance of `cls` to the right of the current widget.
`reposition`(*self*, *site*, *right=None*, *parent=None*)[[source]](_modules/widgy/models/base.html#Content.reposition)[¶](#widgy.models.base.Content.reposition)
Moves the current widget to the left of `right` or to the last child position of `parent`.
`post_create`(*self*, *site*)[[source]](_modules/widgy/models/base.html#Content.post_create)[¶](#widgy.models.base.Content.post_create)
Hook for doing things after a widget has been created (a
[`Content`](#widgy.models.base.Content) has been created and put in the tree). This is useful if you want to have default children for a widget, for example.
`delete`(*self*, *raw=False*)[[source]](_modules/widgy/models/base.html#Content.delete)[¶](#widgy.models.base.Content.delete)
If `raw` is `True` the widget is being deleted due to a failure in widget creation, so `post_create` will not have been run yet.
`clone`(*self*)[[source]](_modules/widgy/models/base.html#Content.clone)[¶](#widgy.models.base.Content.clone)
This method is called by `Node.clone_tree()`. You may wish to override it if your Content has special needs like a ManyToManyField.
Warning
Clone is used to freeze tree state in Versioning. If your
[`clone()`](#widgy.models.base.Content.clone) method is incorrect, your history will be corrupt.
Editing
`display_name`[¶](#widgy.models.base.Content.display_name)
A human-readable short name for widgets. This defaults to the
`verbose_name` of the widget.
Hint
You can use the `@property` decorator to make this dynamic.
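For instance, a sketch (borrowing the Slide widget from the tutorial later in this document) that labels each slide by its caption in the editor:

```
from django.db import models

from widgy.models import Content

class Slide(Content):
    caption = models.CharField(max_length=255)

    @property
    def display_name(self):
        # hypothetical: show the caption instead of the verbose_name default
        return self.caption or 'slide'
```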
Todo
screenshot
`tooltip`[¶](#widgy.models.base.Content.tooltip)
A class attribute that sets the tooltip for this widget on the shelf.
`css_classes`[¶](#widgy.models.base.Content.css_classes)
A list of CSS classes to apply to the widget element in the Editor.
Defaults to `app_label` and `module_name` of the widget.
`shelf = False`
Denotes whether this widget has a shelf. Root nodes automatically have a shelf. The shelf is where widgets exist in the interface before they are dragged on. It is useful to set `shelf` to
`True` if there are a large number of widgets that can only go in a specific subtree.
`component_name = 'widget'`
Specifies which JavaScript component to use for this widget.
Todo
Write documentation about components.
`pop_out = CANNOT_POP_OUT`
It is possible to open a subtree in its own editing window.
`pop_out` controls if a widget can be popped out. There are three values for `pop_out`:
`CANNOT_POP_OUT`[¶](#widgy.models.base.Content.CANNOT_POP_OUT)
`CAN_POP_OUT`[¶](#widgy.models.base.Content.CAN_POP_OUT)
`MUST_POP_OUT`[¶](#widgy.models.base.Content.MUST_POP_OUT)
`form = ModelForm`
The form class to use for editing. Also see [`get_form_class()`](#widgy.models.base.Content.get_form_class).
`formfield_overrides = {}`
Similar to [`ModelAdmin`](https://docs.djangoproject.com/en/1.5/ref/contrib/admin/#django.contrib.admin.ModelAdmin),
[`Content`](#widgy.models.base.Content) allows you to override the form fields for specific model field classes.
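A sketch, assuming the same dictionary format as `ModelAdmin.formfield_overrides`; the widget itself is hypothetical:

```
from django import forms
from django.db import models

from widgy.models import Content

class Quote(Content):   # hypothetical widget
    text = models.TextField(blank=True)
    editable = True

    # assumed to use the same format as ModelAdmin.formfield_overrides
    formfield_overrides = {
        models.TextField: {'widget': forms.Textarea(attrs={'rows': 4})},
    }
```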
`draggable = True`
Denotes whether this widget may be moved through the editing interface.
`deletable = True`
Denotes whether this widget may be deleted through the editing interface.
`editable = False`
Denotes whether this widget may be edited through the editing interface. Widgy will automatically generate a
[`ModelForm`](https://docs.djangoproject.com/en/1.5/topics/forms/modelforms/#django.forms.ModelForm) to provide the editing functionality. Also see `form` and [`get_form_class()`](#widgy.models.base.Content.get_form_class).
`preview_templates`[¶](#widgy.models.base.Content.preview_templates)
A template name or list of template names for rendering in the widgy Editor. See [`get_templates_hierarchy()`](#widgy.models.base.Content.get_templates_hierarchy) for how the default value is derived.
`edit_templates`[¶](#widgy.models.base.Content.edit_templates)
A template name or list of template names for rendering the edit interface in the widgy Editor. See [`get_templates_hierarchy()`](#widgy.models.base.Content.get_templates_hierarchy)
for how the default value is derived.
`get_form_class`(*self*, *request*)[[source]](_modules/widgy/models/base.html#Content.get_form_class)[¶](#widgy.models.base.Content.get_form_class)
Returns a [`ModelForm`](https://docs.djangoproject.com/en/1.5/topics/forms/modelforms/#django.forms.ModelForm) class that is used for editing.
`get_form`(*self*, *request*, ***form_kwargs*)[[source]](_modules/widgy/models/base.html#Content.get_form)[¶](#widgy.models.base.Content.get_form)
Returns a form instance to use for editing.
*classmethod* `get_templates_hierarchy`(*cls*, ***kwargs*)[[source]](_modules/widgy/models/base.html#Content.get_templates_hierarchy)[¶](#widgy.models.base.Content.get_templates_hierarchy)
Loops through MRO to return a list of possible template names for a widget. For example the preview template for something like
[`Tabs`](index.html#widgy.contrib.page_builder.models.Tabs) might look like:
* `widgy/page_builder/tabs/preview.html`
* `widgy/mixins/tabbed/preview.html`
* `widgy/page_builder/accordion/preview.html`
* `widgy/page_builder/bucket/preview.html`
* `widgy/models/content/preview.html`
* `widgy/page_builder/preview.html`
* `widgy/mixins/preview.html`
* `widgy/page_builder/preview.html`
* `widgy/models/preview.html`
* `widgy/preview.html`
Frontend Rendering
`render`(*self*, *context*, *template=None*)[[source]](_modules/widgy/models/base.html#Content.render)[¶](#widgy.models.base.Content.render)
The method that is called by the
[`render()`](index.html#widgy.templatetags.widgy_tags.render) template tag to render the Content. It is useful to override this if you need to inject things into the context.
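A sketch of such an override, using the hypothetical `ProductList` widget mentioned in the owner tutorial; the `Product` model and its app are assumptions:

```
from widgy.models import Content

from products.models import Product   # hypothetical app providing Product

class ProductList(Content):
    def render(self, context, template=None):
        # inject extra data for the render template to use
        context['products'] = Product.objects.order_by('name')
        return super(ProductList, self).render(context, template)
```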
`get_render_templates`(*self*, *context*)[[source]](_modules/widgy/models/base.html#Content.get_render_templates)[¶](#widgy.models.base.Content.get_render_templates)
Returns a template name or list of template names for frontend rendering.
Compatibility
Widgy provides robust machinery for compatibility between Contents. Widgy uses the compatibility system to validate the relationships between parent and child Contents.
Compatibility is checked when rendering the shelf and when adding or moving widgets in the tree.
`accepting_children = False`
An easy compatibility configuration attribute. See
[`valid_parent_of()`](#widgy.models.base.Content.valid_parent_of) for more details.
`valid_parent_of`(*self*, *cls*, *obj=None*)[[source]](_modules/widgy/models/base.html#Content.valid_parent_of)[¶](#widgy.models.base.Content.valid_parent_of)
If `obj` is provided, return `True` if it could be a child of the current widget. `cls` is the type of `obj`.
If `obj` isn’t provided, return `True` if a new instance of `cls`
could be a child of the current widget.
`obj` is `None` when the child widget is being created or Widgy is checking the compatibility of the widgets on the shelf. If it is being moved from another location, there will be an instance. A parent and child are only compatible if both [`valid_parent_of()`](#widgy.models.base.Content.valid_parent_of) and
[`valid_child_of()`](#widgy.models.base.Content.valid_child_of) return `True`. This defaults to the value of
`accepting_children`.
Here is an example of a parent that only accepts three instances of
`B`:
```
class A(Content):
def valid_parent_of(self, cls, obj=None):
# If this is already my child, it can stay my child.
# This works for obj=None because self.get_children()
# will never contain None.
if obj in self.get_children():
return True
else:
# Make sure it is of type B
return (issubclass(cls, B)
# And that I don't already have three children.
and len(self.get_children()) < 3)
```
*classmethod* `valid_child_of`(*cls*, *parent*, *obj=None*)[[source]](_modules/widgy/models/base.html#Content.valid_child_of)[¶](#widgy.models.base.Content.valid_child_of)
If `obj` is provided, return `True` if it can be a child of
`parent`. `obj` will be an instance of `cls`—it may feel like an instance method.
If `obj` isn’t provided, return `True` if a new instance of `cls`
could be a child of `parent`.
This defaults to `True`.
Here is an example of a Content that can not live inside another instance of itself:
```
class Foo(Content):
@classmethod
def valid_child_of(cls, parent, obj=None):
for p in list(parent.get_ancestors()) + [parent]:
if isinstance(p, Foo):
return False
return super(Foo, cls).valid_child_of(parent, obj)
```
`equal`(*self*, *other*)[[source]](_modules/widgy/models/base.html#Content.equal)[¶](#widgy.models.base.Content.equal)
Should return `True` if `self` is equal to `other`. The default implementation checks the equality of each widget’s
`get_attributes()`.
*class* `widgy.models.base.``Node`[[source]](_modules/widgy/models/base.html#Node)[¶](#widgy.models.base.Node)
`content`[¶](#widgy.models.base.Node.content)
A generic foreign key pointing to our [`Content`](#widgy.models.base.Content) instance.
`is_frozen`[¶](#widgy.models.base.Node.is_frozen)
A boolean field indicating whether this node is frozen and can’t be changed in any way. This is used to preserve old tree versions for versioning.
`render`(*self*, **args*, ***kwargs*)[[source]](_modules/widgy/models/base.html#Node.render)[¶](#widgy.models.base.Node.render)
Renders this subtree and returns a string. Normally you shouldn’t call it directly, use `widgy.db.fields.WidgyField.render()`
or [`widgy.templatetags.widgy_tags.render()`](index.html#widgy.templatetags.widgy_tags.render).
`depth_first_order`(*self*)[[source]](_modules/widgy/models/base.html#Node.depth_first_order)[¶](#widgy.models.base.Node.depth_first_order)
Like [`Content.depth_first_order()`](#widgy.models.base.Content.depth_first_order), but over nodes.
`prefetch_tree`(*self*)[[source]](_modules/widgy/models/base.html#Node.prefetch_tree)[¶](#widgy.models.base.Node.prefetch_tree)
Efficiently fetches an entire tree (or subtree), including content instances. It uses `1 + m` queries, where `m` is the number of distinct content types in the tree.
*classmethod* `prefetch_trees`(*cls*, **root_nodes*)[[source]](_modules/widgy/models/base.html#Node.prefetch_trees)[¶](#widgy.models.base.Node.prefetch_trees)
Prefetches multiple trees. Uses `n + m` queries, where `n`
is the number of trees and `m` is the number of distinct content types across all the trees.
`maybe_prefetch_tree`(*self*)[[source]](_modules/widgy/models/base.html#Node.maybe_prefetch_tree)[¶](#widgy.models.base.Node.maybe_prefetch_tree)
Prefetches the tree unless it has been prefetched already.
*classmethod* `find_widgy_problems`(*cls*, *site=None*)[[source]](_modules/widgy/models/base.html#Node.find_widgy_problems)[¶](#widgy.models.base.Node.find_widgy_problems)
When a Widgy tree is edited without protection from a transaction, it is possible to get into an inconsistent state. This method returns a tuple containing two lists:
> 1. A list of node pks whose content pointer is dangling –
> pointing to a content that doesn’t exist.
> 2. A list of node pks whose content_type doesn’t exist. This might
> happen when you switch branches and remove the code for a widget,
> but still have the widget in your database. These are represented
> by `UnknownWidget` instances.
### Widgy Site[¶](#widgy-site)
*class* `widgy.site.``WidgySite`[[source]](_modules/widgy/site.html#WidgySite)[¶](#widgy.site.WidgySite)
`get_all_content_classes`(*self*)[[source]](_modules/widgy/site.html#WidgySite.get_all_content_classes)[¶](#widgy.site.WidgySite.get_all_content_classes)
Returns a list (or set) of available Content classes (widget classes). This is used
> * To find layouts from `root_choices`
> * To find widgets to put on the shelf (using
> [`validate_relationship()`](#widgy.site.WidgySite.validate_relationship) against all existing widgets in
> a tree)
`urls`(*self*)[¶](#widgy.site.WidgySite.urls)
Returns the urlpatterns needed for this Widgy site. It should be included in your urlpatterns:
```
('^admin/widgy/', include(widgy_site.urls)),
```
`get_urls`(*self*)[[source]](_modules/widgy/site.html#WidgySite.get_urls)[¶](#widgy.site.WidgySite.get_urls)
This method only exists to follow the example that
[`ModelAdmin`](https://docs.djangoproject.com/en/1.5/ref/contrib/admin/#django.contrib.admin.ModelAdmin) sets.
> Todo
> is `urls` or `get_urls` the preferred interface?
`reverse`(*self*, **args*, ***kwargs*)[[source]](_modules/widgy/site.html#WidgySite.reverse)[¶](#widgy.site.WidgySite.reverse)
Todo
explain reverse
`authorize_view`(*self*, *request*, *view*)[[source]](_modules/widgy/site.html#WidgySite.authorize_view)[¶](#widgy.site.WidgySite.authorize_view)
Every Widgy view will call this before doing anything. It can be considered a ‘view’ or ‘read’ permission. It should raise a
[`PermissionDenied`](https://docs.djangoproject.com/en/1.5/ref/exceptions/#django.core.exceptions.PermissionDenied) when the request is not authorized. It can be used to implement permission checking that should happen on every view, like limiting access to staff members:
```
def authorize_view(self, request, view):
    if not request.user.is_staff:
        raise PermissionDenied
    super(WidgySite, self).authorize_view(request, view)
```
`has_add_permission`(*self*, *request*, *content_class*)[[source]](_modules/widgy/site.html#WidgySite.has_add_permission)[¶](#widgy.site.WidgySite.has_add_permission)
Given a `Content` class, can this request add a new instance? Returns `True` or `False`. The default implementation uses the Django Permission framework.
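For example, a sketch of a site that tightens the rules for the `UnsafeHtml` widget beyond the default permission check (the policy itself is illustrative):

```
from widgy.contrib.page_builder.models import UnsafeHtml
from widgy.site import WidgySite

class MySite(WidgySite):
    def has_add_permission(self, request, content_class):
        # illustrative policy: only superusers may add raw, unsanitized HTML
        if issubclass(content_class, UnsafeHtml):
            return request.user.is_superuser
        return super(MySite, self).has_add_permission(request, content_class)
```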
`has_change_permission`(*self*, *request*, *obj_or_class*)[[source]](_modules/widgy/site.html#WidgySite.has_change_permission)[¶](#widgy.site.WidgySite.has_change_permission)
Like [`has_add_permission()`](#widgy.site.WidgySite.has_add_permission), but for changing. It receives an instance if one is available, otherwise a class.
`has_delete_permission`(*self*, *request*, *obj_or_class_or_list*)[[source]](_modules/widgy/site.html#WidgySite.has_delete_permission)[¶](#widgy.site.WidgySite.has_delete_permission)
Like [`has_change_permission()`](#widgy.site.WidgySite.has_change_permission), but for deleting.
`obj_or_class_or_list` can also be a list, when attempting to delete a widget that has children.
`validate_relationship`(*self*, *parent*, *child*)[[source]](_modules/widgy/site.html#WidgySite.validate_relationship)[¶](#widgy.site.WidgySite.validate_relationship)
The single compatibility checking entry point. The default implementation delegates to [`valid_parent_of()`](#widgy.site.WidgySite.valid_parent_of) and
[`valid_child_of()`](#widgy.site.WidgySite.valid_child_of).
`parent` is always an instance, `child` can be a class or an instance.
`valid_parent_of`(*self*, *parent*, *child_class*, *child=None*)[[source]](_modules/widgy/site.html#WidgySite.valid_parent_of)[¶](#widgy.site.WidgySite.valid_parent_of)
Does `parent` accept the `child` instance, or a new
`child_class` instance, as a child?
The default implementation just delegates to
`Content.valid_parent_of`.
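A sketch of centralized compatibility overriding at the site level, forbidding one combination without touching either widget (the rule itself is arbitrary):

```
from widgy.contrib.page_builder.models import Accordion, Table
from widgy.site import WidgySite

class MySite(WidgySite):
    def valid_parent_of(self, parent, child_class, child=None):
        # arbitrary example rule: never allow a Table inside an Accordion
        if isinstance(parent, Accordion) and issubclass(child_class, Table):
            return False
        return super(MySite, self).valid_parent_of(parent, child_class, child)
```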
`valid_child_of`(*self*, *parent*, *child_class*, *child=None*)[[source]](_modules/widgy/site.html#WidgySite.valid_child_of)[¶](#widgy.site.WidgySite.valid_child_of)
Will the `child` instance, or a new instance of `child_class`,
accept `parent` as a parent?
The default implementation just delegates to
`Content.valid_child_of`.
`get_version_tracker_model`(*self*)[[source]](_modules/widgy/site.html#WidgySite.get_version_tracker_model)[¶](#widgy.site.WidgySite.get_version_tracker_model)
Returns the class to use as a `VersionTracker`.
This can be overridden to customize versioning behavior.
Views
Each of these properties returns a view callable. A urlpattern is built in
[`get_urls()`](#widgy.site.WidgySite.get_urls). It is important that the same callable is used for the lifetime of the site, so
`django.utils.functional.cached_property` is helpful.
`node_view`(*self*)[¶](#widgy.site.WidgySite.node_view)
`content_view`(*self*)[¶](#widgy.site.WidgySite.content_view)
`shelf_view`(*self*)[¶](#widgy.site.WidgySite.shelf_view)
`node_edit_view`(*self*)[¶](#widgy.site.WidgySite.node_edit_view)
`node_templates_view`(*self*)[¶](#widgy.site.WidgySite.node_templates_view)
`node_parents_view`(*self*)[¶](#widgy.site.WidgySite.node_parents_view)
`commit_view`(*self*)[¶](#widgy.site.WidgySite.commit_view)
`history_view`(*self*)[¶](#widgy.site.WidgySite.history_view)
`revert_view`(*self*)[¶](#widgy.site.WidgySite.revert_view)
`diff_view`(*self*)[¶](#widgy.site.WidgySite.diff_view)
`reset_view`(*self*)[¶](#widgy.site.WidgySite.reset_view)
Media Files
Note
These properties are cached at server start-up, so new ones won’t be detected until the server restarts. This means that when using `runserver`, you have to manually restart the server when adding a new file.
`scss_files`[¶](#widgy.site.WidgySite.scss_files)
Returns a list of SCSS files to be included on the front-end.
Widgets can add SCSS files just by making a file available at a location determined by its app label and name (see
`widgy.models.Content.get_templates_hierarchy()`). For example:
```
widgy/page_builder/html.scss
```
`js_files`[¶](#widgy.site.WidgySite.js_files)
Like `scss_files`, but JavaScript files.
`admin_scss_files`[¶](#widgy.site.WidgySite.admin_scss_files)
Like `scss_files`, but for the back-end editing interface. These paths look like, for an app:
```
widgy/page_builder/admin.scss
```
and for a widget:
```
widgy/page_builder/table.admin.scss
```
If you want to include JavaScript for the editing interface, you should use a `component`.
### Links Framework[¶](#links-framework)
Widgy core also provides a linking framework that allows any model to point to any other model without really knowing which models are available for linking.
This is the mechanism by which Page Builder’s
[`Button`](index.html#widgy.contrib.page_builder.models.Button) can link to Widgy Mezzanine’s [`WidgyPage`](index.html#widgy.contrib.widgy_mezzanine.models.WidgyPage) without even knowing that `WidgyPage` exists. There are two components to the links framework, [`LinkField`](#widgy.models.links.LinkField) and the link registry.
#### Model Field[¶](#model-field)
*class* `widgy.models.links.``LinkField`[[source]](_modules/widgy/models/links.html#LinkField)[¶](#widgy.models.links.LinkField)
[`LinkField`](#widgy.models.links.LinkField) is a subclass of
[`django.contrib.contenttypes.generic.GenericForeignKey`](https://docs.djangoproject.com/en/1.5/ref/contrib/contenttypes/#django.contrib.contenttypes.generic.GenericForeignKey). If you want to add a link to any model, you can just add a [`LinkField`](#widgy.models.links.LinkField)
to it.
```
from django.db import models

from widgy.models import links

class MyModel(models.Model):
    title = models.CharField(max_length=255)
    link = links.LinkField()
```
[`LinkField`](#widgy.models.links.LinkField) will automatically add the two required fields for GenericForeignKey, the
[`ContentType`](https://docs.djangoproject.com/en/1.5/ref/contrib/contenttypes/#django.contrib.contenttypes.models.ContentType) ForeignKey and the PositiveIntegerField. If you need to override this, you can pass in the `ct_field` and `fk_field` options that GenericForeignKey takes.
Note
Unfortunately, because Django currently lacks support for composite fields,
if you need to display the [`LinkField`](#widgy.models.links.LinkField) in a form, there are a couple of things you need to do.
1. Your Form class needs to mixin the `LinkFormMixin`.
2. You need to explicitly define a `LinkFormField` on your Form class.
Hopefully in future iterations of Django, these steps will be obsoleted.
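A sketch of those two steps for the `MyModel` example above; the import locations of `LinkFormMixin` and `LinkFormField` (and of `MyModel`) are assumptions:

```
from django import forms

from widgy.forms import LinkFormField, LinkFormMixin   # import path assumed

from myapp.models import MyModel   # hypothetical location of the model above

class MyModelForm(LinkFormMixin, forms.ModelForm):
    link = LinkFormField(required=False)

    class Meta:
        model = MyModel
        fields = ['title', 'link']
```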
#### Registry[¶](#registry)
If you want to expose your model to the link framework to allow things to link to it, you need to do two things.
1. You need to register your model with the links registry.
```
from django.db import models

from widgy.models import links

class Blog(models.Model):
    # ...

links.register(Blog)
```
The `register()` function also works as a class decorator.
```
from django.db import models

from widgy.models import links

@links.register
class Blog(models.Model):
    # ...
```
2. You need to make sure that your model defines a `get_absolute_url`
method.
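For example (the URL name is hypothetical):

```
from django.core.urlresolvers import reverse
from django.db import models

class Blog(models.Model):
    # ...

    def get_absolute_url(self):
        return reverse('blog_detail', kwargs={'pk': self.pk})
```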
### Template Tags[¶](#template-tags)
To use these, you’ll need to `{% load widgy_tags %}`.
`widgy.templatetags.widgy_tags.``render`(*node*)[[source]](_modules/widgy/templatetags/widgy_tags.html#render)[¶](#widgy.templatetags.widgy_tags.render)
Renders a node. Use this in your `render.html` templates to render any node that isn’t a root node. Under the hood, this template tag calls
`Content.render` with the current context.
Example:
```
{% for child in self.get_children %}
{% render child %}
{% endfor %}
```
`widgy.templatetags.widgy_tags.``scss_files`(*site*)[[source]](_modules/widgy/templatetags/widgy_tags.html#scss_files)[¶](#widgy.templatetags.widgy_tags.scss_files)
`widgy.templatetags.widgy_tags.``js_files`(*site*)[[source]](_modules/widgy/templatetags/widgy_tags.html#js_files)[¶](#widgy.templatetags.widgy_tags.js_files)
These template tags provide a way to extract the [`WidgySite.scss_files`](index.html#widgy.site.WidgySite.scss_files) off of a site. This is required if you don’t have access to the site in the context, but do have a reference to it in your settings file. [`scss_files()`](#widgy.templatetags.widgy_tags.scss_files) and [`js_files()`](#widgy.templatetags.widgy_tags.js_files) can also accept an import path to the site.
```
{% for js_file in 'WIDGY_MEZZANINE_SITE'|js_files %}
<script src="{% static js_file %}"></script>
{% endfor %}
```
`widgy.templatetags.widgy_tags.``render_root`(*owner*, *field_name*)[[source]](_modules/widgy/templatetags/widgy_tags.html#render_root)[¶](#widgy.templatetags.widgy_tags.render_root)
The template entry point for rendering a tree. It delegates to
`WidgyField.render`. The
`root_node_override` template context variable can be used to override the root node that is rendered (for preview).
```
{% render_root owner_obj 'content' %}
```
Tutorials[¶](#tutorials)
---
### Quickstart[¶](#quickstart)
This quickstart assumes you wish to use the following packages:
* Widgy Mezzanine
* Page Builder
* Form Builder
Install the Widgy package:
```
pip install django-widgy[all]
```
Add Mezzanine apps to `INSTALLED_APPS` in `settings.py`:
```
'mezzanine.conf',
'mezzanine.core',
'mezzanine.generic',
'mezzanine.pages',
'django_comments',
'django.contrib.sites',
'filebrowser_safe',
'grappelli_safe',
```
add Widgy to `INSTALLED_APPS`:
```
'widgy',
'widgy.contrib.page_builder',
'widgy.contrib.form_builder',
'widgy.contrib.widgy_mezzanine',
```
add required Widgy apps to `INSTALLED_APPS`:
```
'filer',
'easy_thumbnails',
'compressor',
'argonauts',
'sorl.thumbnail',
```
`django.contrib.admin` should be installed after Mezzanine and Widgy,
so move it under them in `INSTALLED_APPS`.
add Mezzanine middleware to `MIDDLEWARE_CLASSES`:
```
'mezzanine.core.request.CurrentRequestMiddleware',
'mezzanine.core.middleware.AdminLoginInterfaceSelectorMiddleware',
'mezzanine.pages.middleware.PageMiddleware',
```
Mezzanine settings:
```
# settings.py
PACKAGE_NAME_FILEBROWSER = "filebrowser_safe"
PACKAGE_NAME_GRAPPELLI = "grappelli_safe"
ADMIN_MEDIA_PREFIX = STATIC_URL + "grappelli/"
TESTING = False
GRAPPELLI_INSTALLED = True
SITE_ID = 1
```
If you want mezzanine to use
[`WidgyPage`](index.html#widgy.contrib.widgy_mezzanine.models.WidgyPage) as the default page,
you can add the following line to `settings.py`:
```
ADD_PAGE_ORDER = (
'widgy_mezzanine.WidgyPage',
)
```
add Mezzanine’s context processors. If you don’t already have
`TEMPLATE_CONTEXT_PROCESSORS` in your settings file, you should copy the default before adding Mezzanine’s:
```
TEMPLATE_CONTEXT_PROCESSORS = (
# Defaults
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
"django.core.context_processors.debug",
"django.core.context_processors.i18n",
"django.core.context_processors.static",
"django.core.context_processors.media",
"django.core.context_processors.request",
# Mezzanine
"mezzanine.conf.context_processors.settings",
"mezzanine.pages.context_processors.page",
)
```
make a [`Widgy site`](index.html#widgy.site.WidgySite) and set it in settings:
```
# demo/widgy_site.py
from widgy.site import WidgySite

class WidgySite(WidgySite):
    pass

site = WidgySite()

# settings.py
WIDGY_MEZZANINE_SITE = 'demo.widgy_site.site'
```
Configure django-compressor:
```
# settings.py
STATICFILES_FINDERS = (
    'compressor.finders.CompressorFinder',
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
)
COMPRESS_ENABLED = True
COMPRESS_PRECOMPILERS = (
    ('text/x-scss', 'django_pyscss.compressor.DjangoScssFilter'),
)
```
Note
Widgy requires that django-compressor be configured with a precompiler for `text/x-scss`. Widgy uses the [django-pyscss](https://github.com/fusionbox/django-pyscss) package for easily integrating the [pyScss](https://github.com/Kronuz/pyScss) library with Django.
Note
If you are using a version of Django older than 1.7, you will need to use South 1.0 or set SOUTH_MIGRATION_MODULES.
Then run the following command:
```
$ python manage.py migrate
```
Note
If you are on a version of Django older than 1.7, you will need to run the following command as well:
```
$ python manage.py syncdb
```
add urls:
```
# urls.py
from django.conf.urls import patterns, include, url

from demo.widgy_site import site as widgy_site

urlpatterns = patterns('',
    # ...
    # widgy admin
    url(r'^admin/widgy/', include(widgy_site.urls)),
    # widgy frontend
    url(r'^widgy/', include('widgy.contrib.widgy_mezzanine.urls')),
    url(r'^', include('mezzanine.urls')),
)
```
Make sure you have a url pattern named `home` or the admin templates will not work right.
If you are using `GenericTemplateFinderMiddleware`, use the one from
`fusionbox.mezzanine.middleware`. It has been patched to work with Mezzanine.
#### How to edit home page[¶](#how-to-edit-home-page)
1. Add the homepage to your urls.py:
```
url(r'^$', 'mezzanine.pages.views.page', {'slug': '/'}, name='home'),
```
**Note:** it must be a named URL, with the name ‘home’
2. Make a page with the slug `/` and publish it.
3. Make a template called `pages/index.html` and put:
```
{% extends "pages/widgypage.html" %}
```
**Note:** If you don’t do this you will likely get the following error:
```
AttributeError: 'Settings' object has no attribute 'FORMS_EXTRA_FIELDS'
```
This is caused by Mezzanine falling back to its own template
`pages/index.html`, which tries to provide the inline editing feature,
which requires `mezzanine.forms` to be installed.
#### Admin center[¶](#admin-center)
A nice `ADMIN_MENU_ORDER`:
```
# settings.py
ADMIN_MENU_ORDER = [
    ('Widgy', (
        'pages.Page',
        'page_builder.Callout',
        'form_builder.Form',
        ('Review queue', 'review_queue.ReviewedVersionCommit'),
        ('File manager', 'filer.Folder'),
    )),
]
```
#### urlconf include[¶](#urlconf-include)
`urlconf_include` is an optional application that allows you to install urlpatterns in the Mezzanine page tree. To use it, put it in
`INSTALLED_APPS`:
```
'widgy.contrib.urlconf_include',
```
then add the `urlconf_include` middleware:
```
'widgy.contrib.urlconf_include.middleware.PatchUrlconfMiddleware',
```
then set `URLCONF_INCLUDE_CHOICES` to a list of allowed urlpatterns. For example:
```
URLCONF_INCLUDE_CHOICES = (
('blog.urls', 'Blog'),
)
```
#### Adding Widgy to Mezzanine[¶](#adding-widgy-to-mezzanine)
If you are adding widgy to an existing mezzanine site, there are some additional considerations.
If you have not done so already, add the root directory of your mezzanine install to INSTALLED_APPS.
Also, take care when setting the WIDGY_MEZZANINE_SITE variable in your settings.py file. Because mezzanine is using an old Django directory structure,
it uses your root directory as your project file:
```
# Use:
WIDGY_MEZZANINE_SITE = 'myproject.demo.widgy_site.site'
# Not:
WIDGY_MEZZANINE_SITE = 'demo.widgy_site.site'
```
#### Common Customizations[¶](#common-customizations)
If you only have [`WidgyPages`](index.html#widgy.contrib.widgy_mezzanine.models.WidgyPage), you can choose to unregister the Mezzanine-provided `RichTextPage`. Simply add the following code to an `admin.py`
file in your project:
```
from django.contrib import admin
from mezzanine.pages.models import RichTextPage
admin.site.unregister(RichTextPage)
```
### Building Widgy’s JavaScript With RequireJS[¶](#building-widgy-s-javascript-with-requirejs)
Widgy’s editing interface uses RequireJS to handle dependency management and to encourage code modularity. This is convenient for development,
but might be slow in production due to the many small JavaScript files.
Widgy supports building its JavaScript with the [RequireJS optimizer](http://requirejs.org/docs/optimization.html)
to remedy this. This is entirely optional and only necessary if the performance of loading many small JavaScript and template files bothers you.
To build the JavaScript,
> * Install `django-require`:
> ```
> pip install django-require
> ```
> * Add the settings for django-require:
> ```
> REQUIRE_BUILD_PROFILE = 'widgy.build.js'
> REQUIRE_BASE_URL = 'widgy/js'
> STATICFILES_STORAGE = 'require.storage.OptimizedStaticFilesStorage'
> ```
> * Install `node` or `rhino` to run `r.js`. django-require will
> detect which one you installed. `rhino` is nice because you can
> apt-get it:
> ```
> apt-get install rhino
> ```
Now the JavaScript will automatically be built during
[`collectstatic`](https://docs.djangoproject.com/en/1.5/ref/contrib/staticfiles/#django-admin-collectstatic).
### Writing Your First Widget[¶](#writing-your-first-widget)
In this tutorial, we will build a Slideshow widget. You probably want to read the [*Quickstart*](index.html#document-tutorials/widgy-mezzanine-tutorial) to get a Widgy site running before you do this one.
We currently have a static slideshow that we need to make editable. Users need to be able to add any number of slides. Users also want to be able to change the delay between each slide.
Here is the current slideshow HTML that is using [jQuery Cycle2](http://jquery.malsup.com/cycle2/):
```
<div class="cycle-slideshow"
     data-cycle-timeout="2000"
     data-cycle-caption-template="{% templatetag openvariable %}alt{% templatetag closevariable %}">
  <div class="cycle-caption"></div>
  <img src="http://placekitten.com/800/300" alt="Cute cat">
  <img src="http://placekitten.com/800/300" alt="Fuzzy kitty">
  <img src="http://placekitten.com/800/300" alt="Another cute cat">
  <img src="http://placekitten.com/800/300" alt="Awwww">
</div>
```
See also
[`templatetag`](https://docs.djangoproject.com/en/1.5/ref/templates/builtins/#std:templatetag-templatetag)
This template tag allows inserting the `{{` and `}}` characters needed by Cycle2.
#### 1. Write the Models[¶](#write-the-models)
The first step in writing a widget is to write the models. We are going to make a new Django app for these widgets.
```
$ python manage.py startapp slideshow
```
(Don’t forget to add `slideshow` to your [`INSTALLED_APPS`](https://docs.djangoproject.com/en/1.5/ref/settings/#std:setting-INSTALLED_APPS)).
Now let’s write the models. We need to make a `Slideshow` model as the container and a `Slide` model that represents the individual images.
```
# slideshow/models.py
from django.db import models

import widgy
from widgy.models import Content

@widgy.register
class Slideshow(Content):
    delay = models.PositiveIntegerField(default=2,
        help_text="The delay in seconds between slides.")

    accepting_children = True
    editable = True

@widgy.register
class Slide(Content):
    image = models.ImageField(upload_to='slides/', null=True)
    caption = models.CharField(max_length=255)

    editable = True
```
All widget classes inherit from [`widgy.models.base.Content`](index.html#widgy.models.base.Content). This creates the relationship with [`widgy.models.base.Node`](index.html#widgy.models.base.Node) and ensures that all of the required methods are implemented.
In order to make a widget visible to Widgy, you have to add it to the registry.
There are two functions in the `widgy` module that help with this,
`widgy.register()` and `widgy.unregister()`. You should use the
`widgy.register()` class decorator on any model class that you wish to use as a widget.
Both widgets need to have `editable` set to
`True`. This will make an edit button appear in the editor, allowing the user to set the `image`, `caption`, and `delay` values.
`Slideshow` has `accepting_children` set to
`True` so that you can put a `Slide` in it. The default implementation of
[`valid_parent_of()`](index.html#widgy.models.base.Content.valid_parent_of) checks
`accepting_children`. We only need this until we override [`valid_parent_of()`](index.html#widgy.models.base.Content.valid_parent_of) in [Step 3](#slideshow-compatibility).
Note
As you can see, the `image` field is `null=True`. It is necessary for all fields in a widget either to be `null=True` or to provide a default.
This is because when a widget is dragged onto a tree, it needs to be saved without data.
[`CharFields`](https://docs.djangoproject.com/en/1.5/ref/models/fields/#django.db.models.CharField) don’t need to be
`null=True` because if they are non-NULL, the default is an empty string.
For most other field types, you must have `null=True` or a default value.
Now we need to generate a migration for this app.
```
$ python manage.py schemamigration --initial slideshow
```
And now run the migration.
```
$ python manage.py migrate
```
#### 2. Write the Templates[¶](#write-the-templates)
After that, we need to write our templates. The templates are expected to be named `widgy/slideshow/slideshow/render.html` and
`widgy/slideshow/slide/render.html`.
To create the slideshow template, add a file at
`slideshow/templates/widgy/slideshow/slideshow/render.html`.
```
{% load widgy_tags %}
<div class="cycle-slideshow"
data-cycle-timeout="{{ self.get_delay_milliseconds }}"
data-cycle-caption-template="{% templatetag openvariable %}alt{% templatetag closevariable %}">
<div class="cycle-caption"></div {% for child in self.get_children %}
{% render child %}
{% endfor %}
</div>
```
For the slide, it’s `slideshow/templates/widgy/slideshow/slide/render.html`.
```
<img src="{{ self.image.url }}" alt="{{ self.caption }}">
```
See also
[`Content.get_templates_hierarchy`](index.html#widgy.models.base.Content.get_templates_hierarchy)
Documentation for how templates are discovered.
The current `Slideshow` instance is available in the context as `self`.
Because jQuery Cycle2 only accepts milliseconds instead of seconds for the delay, we need to add a method to the `Slideshow` class.
```
class Slideshow(Content):
# ...
def get_delay_milliseconds(self):
return self.delay * 1000
```
The [`Content`](index.html#widgy.models.base.Content) class mirrors several methods of the
[`TreeBeard API`](https://tabo.pe/projects/django-treebeard/docs/2.0b1/api.html#module-treebeard.models), so you can call
[`get_children()`](index.html#widgy.models.base.Content.get_children) to get all the children. To render a child [`Content`](index.html#widgy.models.base.Content), use the
[`render()`](index.html#widgy.templatetags.widgy_tags.render) template tag.
Caution
You might be tempted to include the HTML for each `Slide` inside the render template for `Slideshow`. While this does work, it is a violation of the single responsibility principle and makes it difficult for slides
(or subclasses thereof) to change how they are rendered.
#### 3. Write the Compatibility[¶](#write-the-compatibility)
Right now, the `Slideshow` and `Slide` render correctly and could be considered complete; however, as written, `Slideshow` will accept any widget as a child and a `Slide` can go in any parent. To prevent this, we have to write some [Compatibility](index.html#compatibility) methods.
```
class Slideshow(Content):
    def valid_parent_of(self, cls, obj=None):
        # only accept Slides
        return issubclass(cls, Slide)

class Slide(Content):
    @classmethod
    def valid_child_of(cls, parent, obj=None):
        # only go in Slideshows
        return isinstance(parent, Slideshow)
```
Done.
#### Addendum: Limit Number of Children[¶](#addendum-limit-number-of-children)
Say you want to limit the number of `Slide` children to 3 for your
`Slideshow`. You do so like this:
```
class Slideshow(Content):
    def valid_parent_of(self, cls, obj=None):
        if obj in self.get_children():
            # If it's already one of our children, it is valid
            return True
        else:
            # Make sure it's a Slide and that you aren't full
            return (issubclass(cls, Slide) and
                    len(self.get_children()) < 3)
```
### Proxy Widgy Model Tutorial[¶](#proxy-widgy-model-tutorial)
Widgy was developed with a batteries included philosophy like Django.
When building your own widgy project, you may find that you need to change the behavior of certain widgets. With `widgy.unregister()`, you can disable an existing widgy model and then re-register a custom proxy model in its place with `widgy.register()`.
This tutorial will cover a simple case where we add HTML classes to the
`<input>` tags in the contrib module, Form Builder. This tutorial assumes that you have a working widgy project. Please go through the
[*Quickstart*](index.html#document-tutorials/widgy-mezzanine-tutorial) if you do not have a working project.
In a sample project, we are adding Bootstrap styling to our forms.
Widgy uses a simple template hierarchy that lets you replace any of its templates for styling; however, when it comes to adding the styling class
'form-control' to each of the input boxes in our forms, there is no template to replace.
See also
[`Content.get_templates_hierarchy`](index.html#widgy.models.base.Content.get_templates_hierarchy)
Documentation for how templates are discovered.
Widgy uses the power of Django to create a widget with predefined attributes.
To insert the class, you will need to override the attribute `widget_attrs`
in [`widgy.contrib.form_builder.models.FormInput`](index.html#widgy.contrib.form_builder.models.FormInput).
Start by creating a models.py file in your project and add your project to the `INSTALLED_APPS` if you have not done so already.
Then add this to your models.py file:
```
import widgy
from widgy.contrib.form_builder.models import FormInput
widgy.unregister(FormInput)
@widgy.register
class BootstrapFormInput(FormInput):
    class Meta:
        proxy = True
        verbose_name = 'Form Input'
        verbose_name_plural = 'Form Inputs'

    @property
    def widget_attrs(self):
        attrs = super(BootstrapFormInput, self).widget_attrs
        attrs['class'] = attrs.get('class', '') + ' form-control'
        return attrs
```
This code simply unregisters the existing FormInput and registers our proxied version in its place, overriding the `widget_attrs` attribute.
To test it, simply create a form with a form input field and preview it in the widgy interface. When you view the HTML source for that field, you will see that the HTML class form-control is now added to `<input>`.
In another example, if you wanted to override the compatibility and `verbose_name` for Page Builder’s `CalloutBucket`, you could do the following:
```
import widgy
from widgy.contrib.page_builder.models import CalloutBucket

widgy.unregister(CalloutBucket)

@widgy.register
class MyCalloutBucket(CalloutBucket):
    class Meta:
        proxy = True
        verbose_name = 'Awesome Callout'

    def valid_parent_of(self, cls, obj=None):
        return issubclass(cls, MyWidget) or \
            super(MyCalloutBucket, self).valid_parent_of(cls, obj)
```
Finally, when using proxy models, note that if you proxy and unregister a model that already has saved instances in the database, those instances will continue to use the old class.
If you want the existing widgets to use the new proxy model,
you will need to write a database migration to update their content types.
Here is a sample of what may be required for this migration:
```
Node.objects.filter(content_type=old_content_type).update(content_type=new_content_type)
```
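As a rough sketch only (the import paths and the `BootstrapFormInput` proxy model are assumptions carried over from the example above; adapt them to your own app and migration framework), the migration body could look something like this:
```
# Hypothetical data-migration body; import paths are assumptions.
from django.contrib.contenttypes.models import ContentType
from widgy.models import Node

from widgy.contrib.form_builder.models import FormInput
from myapp.models import BootstrapFormInput  # your proxy model

old_content_type = ContentType.objects.get_for_model(FormInput)
# for_concrete_model=False fetches the proxy model's own content type
new_content_type = ContentType.objects.get_for_model(BootstrapFormInput,
                                                     for_concrete_model=False)

Node.objects.filter(content_type=old_content_type).update(content_type=new_content_type)
```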
More info on proxying models can be found in [Django's documentation on proxy models](https://docs.djangoproject.com/en/1.5/topics/db/models/#proxy-models).
Changelog[¶](#changelog)
---
### 0.7.1 (2015-08-18)[¶](#id1)
* Fix python 3 compatibility: SortedDict.keys() was returning an iterator instead of a view. This was causing `form_builder/forms/XX` not to display properly.
### 0.7.0 (2015-07-31)[¶](#id2)
* **Possible Breaking Change** Updated the [django-pyscss](https://github.com/fusionbox/django-pyscss) dependency. Please see the [django-pyscss changelog](https://pypi.python.org/pypi/django-pyscss/2.0.0#changelog) for documentation on how/if you need to change anything.
* Django 1.8 compatibility.
* Python 3 compatibility
* Django 1.7 is now the minimum supported version
* Mezzanine 4.0 is now the minimum supported version
* `Content.clone` now copies simple many-to-many relationships. If you have a widget with a many-to-many field and an overridden clone method that calls super, you should take this into account. If you have many-to-many relationships that use a custom through table, you will have to continue to override clone to clone those.
* **Backwards Incompatible** `WidgySite.has_add_permission` signature changed.
* Multisite support
+ One widgy project can now respond to multiple domains. Use cases could be
Widgy as a Service or multi-franchise website.
+ This feature depends on [Mezzanine multi-tenancy](http://mezzanine.jupo.org/docs/multi-tenancy.html)
+ Callouts are now tied to a django site
+ This feature is provided by
`widgy.contrib.widgy_mezzanine.site.MultiSitePermissionMixin`
### 0.6.1 (2015-05-01)[¶](#id3)
* Fix non-determinism bug with find_media_files.
### 0.6.0 (2015-04-30)[¶](#id4)
* Improved the compatibility error messages [#299, #193]
* Remove the recommendation to use mezzanine.boot as it was not required [#291]
* **Possible Breaking Change** Updated the [django-pyscss](https://github.com/fusionbox/django-pyscss) dependency. Please see the [django-pyscss changelog](https://pypi.python.org/pypi/django-pyscss/2.0.0#changelog) for documentation on how/if you need to change anything.
* By default, Widgy views are restricted to staff members. Previously any authenticated user was allowed. This effects the preview view and pop out edit view, among others. If you were relying on the ability for any user to access those, override `authorize_view` in your `WidgySite`. [#267]:
```
class MyWidgySite(WidgySite):
    def authorize_view(self, request, view):
        if not request.user.is_authenticated():
            raise PermissionDenied
```
### 0.5.0 (2015-04-17)[¶](#id6)
* **Backwards Incompatible** RichTextPage is no longer unregistered by default in widgy_mezzanine. If you wish to unregister it, you can add the following to your admin.py file:
```
from django.contrib import admin
from mezzanine.pages.models import RichTextPage

admin.site.unregister(RichTextPage)
```
* Bugfix: Previously, the Widgy editor would break if CSRF_COOKIE_HTTPONLY was set to True [#311]
* Switched to py.test for testing. [#309]
### 0.4.0 (2015-03-12)[¶](#id7)
* Django 1.7 support. Requires upgrade to South 1.0 (Or use of SOUTH_MIGRATION_MODULES) if you stay on Django < 1.7. You may have to `--fake` some migrations to upgrade to the builtin Django migrations. Make sure your database is up to date using South, then upgrade Django and run:
```
./manage.py migrate --fake widgy
./manage.py migrate --fake easy_thumbnails
./manage.py migrate
```
* Support for installing Widgy without the dependencies of its contrib apps.
The ‘django-widgy’ package only has dependencies required for Widgy core.
Each contrib package has a setuptools ‘extra’. To install everything, replace
‘django-widgy’ with ‘django-widgy[all]’. [#221]
* Switched to tox for test running and allow running core tests without contrib. [#294]
* Stopped relying on urls with consecutive ‘/’ characters [#233]. This adds a new urlpattern for widgy_mezzanine’s preview page and form submission handler.
The old ones will keep working, but you should reverse with ‘page_pk’ instead of ‘slug’. For example:
```
url = urlresolvers.reverse('widgy.contrib.widgy_mezzanine.views.preview', kwargs={
    'node_pk': node.pk,
    'page_pk': page.pk,
})
```
* Treat help_text for fields in a widget form as safe (HTML will not be escaped) [#298]. If you were relying on HTML special characters being escaped, you should replace `help_text="1 is < 2"` with
`help_text=django.utils.html.escape("1 is < 2")`.
* Reverse URLs in form_builder admin with consideration for Form subclasses [#274].
### 0.3.5 (2015-01-30)[¶](#id8)
Bugfix release:
* Set model at runtime for ClonePageView and UnpublishView [Rocky Meza, #286]
### 0.3.4 (2015-01-22)[¶](#id9)
Bugfix release:
* Documentation fixes [Rocky Meza and Gavin Wahl]
* Fixes unintentional horizontal scrolling of Widgy content [Justin Stollsteimer]
* Increased spacing after widget title paragraphs [Justin Stollsteimer]
* Fixed styles in ckeditor to show justifications [Z<NAME>f, #279]
* Eliminated the margins for InvisibleMixin [Rocky Meza]
* CSS support for adding fields to Image. [Rocky Meza]
* Additional mezzanine container style overflow fixes [Justin Stollsteimer]
* Fix r.js optimization errors with daisydiff [Rocky Meza]
* Remove delete button from widgypage add form [<NAME>ahl]
### 0.3.3 (2014-12-22)[¶](#id10)
Bugfix release:
* Allow cloning with an overridden WIDGY_MEZZANINE_PAGE_MODEL [<NAME>, #269]
* SCSS syntax error [Rivo Laks, #271]
### 0.3.2 (2014-10-16)[¶](#id11)
Bugfix release:
* Allow WidgyAdmin to check for ReviewedWidgySite without review_queue installed [<NAME>, #265]
* Fix handling of related_name on ProxyGenericRelation [#264]
### 0.3.1 (2014-10-01)[¶](#id12)
Bugfix release for 0.3.0. #261, #263.
### 0.3.0 (2014-09-24)[¶](#id13)
This release mainly focuses on the New Save Flow feature, but also includes several bug fixes and some nice CSS touch-ups. There have been some updates to the dependencies, so please be sure to check the [How to Upgrade](#how-to-upgrade) section to make sure that you get everything updated correctly.
#### Major Changes[¶](#major-changes)
* New Save Flow **Requires upgrading Mezzanine to at least 3.1.10** [<NAME>, Rocky Meza, #241]
We have updated the workflow for WidgyPage. We consider this an experiment that we can hopefully expand to all WidgyAdmins in the future. We hope that this new save flow is more intuitive and less tedious.
Screenshot of before:
Screenshot of after:
As you can see, we have rearranged some of the buttons and have gotten rid of the Published Status button. The new save buttons on the bottom right now will control the publish state as well as the commit status. This means that now instead of committing and saving being a two-step process, it all lives in one button. This should make editing and saving a smoother process.
Additionally, we have renamed some buttons to make their intent more obvious.
#### Bug Fixes[¶](#bug-fixes)
* Updated overridden directory_table template for django-filer 0.9.6. **Requires upgrading django-filer to at least 0.9.6**. [<NAME>, #179]
* Fix bug in ReviewedVersionTrackerQuerySet.published [<NAME>, #240]
* Made commit buttons not look disabled [<NAME>, #250, #205]
* (Demo) Added ADD_PAGE_ORDER to demo settings [<NAME>, #248]
* (Demo) Updated demo project requirements [Scott Clark, #234]
* Make Widgy’s jQuery private to prevent clashes with other admin extensions [<NAME>, #246]
#### Documentation[¶](#documentation)
* Update recommend ADMIN_MENU_ORDER to clarify django-filer [<NAME>, #249]
#### How to Upgrade[¶](#how-to-upgrade)
In this release, widgy has updated two of its dependencies:
* The minimum supported version of django-filer is now 0.9.6 (previously 0.9.5).
* The minimum supported version of Mezzanine is now 3.1.10 (previously 1.3.0).
If you `pip install django-widgy==0.3.0`, it should upgrade the dependencies for you, but just to be sure, you may want to also run
```
pip install 'django-filer>=0.9.6' 'Mezzanine>=3.1.10'
```
to make sure that you get the updates.
Note
Please note that if you are upgrading from an older version of Mezzanine,
that the admin center has been restyled a little bit.
### 0.2.0 (2014-08-04)[¶](#id14)
#### Changes[¶](#id15)
* Widgy is now Apache Licensed
* **Breaking Change** Use [django-pyscss](https://github.com/fusionbox/django-pyscss) for SCSS compilation. [Rocky Meza, #175]
Requires an update to the `COMPRESS_PRECOMPILERS` setting:
```
COMPRESS_PRECOMPILERS = (
    ('text/x-scss', 'django_pyscss.compressor.DjangoScssFilter'),
)
```
You may also have to update `@import` statements in your SCSS, because django-pyscss uses a different (more consistent) rule for path resolution.
For example, `@import 'widgy_common'` should be changed to `@import
'/widgy/css/widgy_common'`
* Added help_text to Section to help user avoid bug [<NAME>, #135]
* Allow UI to updated based on new data after reposition [<NAME>, #199]
* Changed Button’s css_classes in shelf [Rocky Meza, #203]
* Added loading cursor while ajax is in flight [<NAME>, #215, #208]
* Get rid of “no content” [<NAME>, #206]
* Use sprites for the widget icons [<NAME> and Rocky Meza, #89, #227]
* Only show approve/unapprove buttons for interesting commits [<NAME>, #228]
* Updated demo app to have new design and new widgets [<NAME>, <NAME>, <NAME> and Rocky Meza, #129, #176]
* Added cloning for WidgyPages [<NAME>, #235]
* Use a more realistic context to render pages for search [<NAME>, #166]
* Add default children to Accordion and Tabs [Rocky Meza, #238]
#### Bugfixes[¶](#bugfixes)
* Fix cursors related to dragging [<NAME>, #155]
* Update safe urls [G<NAME>, #212]
* Fix widgy_mezzanine preview for Mezzanine==3.1.2 [Rocky Meza, #201]
* Allow RichTextPage in the admin [Zach Metcalf, #197]
* Don’t assume the response has a content-type header [G<NAME>, #216]
* Fix bug with FileUpload having empty values [Rocky Meza, #217]
* Fix urlconf_include login_required handling [<NAME>, #200]
* Patch fancybox to work with jQuery 1.9 [<NAME>, #222]
* Fix some import errors in SCSS [Rocky Meza, #230]
* Fix restore page in newer versions of Mezzanine [<NAME>, #232]
* Use unicode format strings in review queue [<NAME>, #236]
#### Documentation[¶](#id16)
* Updated quickstart to cover south migrations with easy_thumbnails [Zach Metcalf, #202]
* Added Proxy Widgy Model Tutorial [Zach Metcalf, #210]
### 0.1.6 (2014-09-09)[¶](#id17)
* Fix migrations containing unsupported KeywordsField from mezzanine [Scott Clark]
* Rename package to django-widgy
### 0.1.5 (2013-11-23)[¶](#id18)
* Fix Widgy migrations without Mezzanine [Gavin Wahl]
* Drop target collision detection [Gavin Wahl]
* Fix Figure and StrDisplayNameMixin [Gavin Wahl]
* Avoid loading review_queue when it’s not installed [Scott Clark]
* Fix multi-table inheritance with LinkFields [Gavin Wahl]
### 0.1.4 (2013-11-04)[¶](#id19)
* Add StrDisplayNameMixin
### 0.1.3 (2013-10-25)[¶](#id20)
* Fix image widget validation with the S3 storage backend
### 0.1.2 (2013-10-23)[¶](#id21)
* Fix Widgy admin for static files hosted on a different domain
### 0.1.1 (2013-10-21)[¶](#id22)
* Adjust `MANIFEST.in` to fix PyPi install.
* Fix layout having a unicode `verbose_name`
### 0.1.0 (2013-10-18)[¶](#id23)
First release.
Basic features:
* Heterogeneous tree editor (`widgy`)
* CMS (`widgy.contrib.widgy_mezzanine`)
* CMS Plugins (`widgy.contrib.urlconf_include`)
* Widgets (`widgy.contrib.page_builder`)
* Form builder (`widgy.contrib.form_builder`)
* Multilingual pages (`widgy.contrib.widgy_i18n`)
* Review queue (`widgy.contrib.review_queue`)
Development[¶](#development)
---
You can follow and contribute to Widgy’s development on [GitHub](https://github.com/fusionbox/django-widgy). There is a developers mailing list available at [<EMAIL>](https://groups.google.com/a/fusionbox.com/forum/#!forum/widgy)
### [Table Of Contents](index.html#document-index)
* [Design](index.html#document-design/index)
+ [Data Model](index.html#document-design/data-model)
+ [Versioning](index.html#document-design/versioning)
+ [Customization](index.html#document-design/site)
+ [Owners](index.html#document-design/owners)
+ [Editor](index.html#document-design/javascript)
* [Contrib Packages](index.html#document-contrib/index)
+ [Page Builder](index.html#document-contrib/page-builder/index)
+ [Form Builder](index.html#document-contrib/form-builder/index)
+ [Widgy Mezzanine](index.html#document-contrib/widgy-mezzanine/index)
+ [Review Queue](index.html#document-contrib/review-queue/index)
* [API Reference](index.html#document-api/index)
+ [Base Models](index.html#document-api/models)
+ [Widgy Site](index.html#document-api/site)
+ [Links Framework](index.html#document-api/links)
+ [Template Tags](index.html#document-api/template-tags)
* [Tutorials](index.html#document-tutorials/index)
+ [Quickstart](index.html#document-tutorials/widgy-mezzanine-tutorial)
+ [Building Widgy’s JavaScript With RequireJS](index.html#document-tutorials/require-optimizer)
+ [Writing Your First Widget](index.html#document-tutorials/first-widget)
+ [Proxy Widgy Model Tutorial](index.html#document-tutorials/proxy-widget)
* [Changelog](index.html#document-changelog)
|
flagmojis | hex | Erlang | API Reference
===
Modules
---
[Flagmojis](Flagmojis.html)
A micro library that provides an easy lookup to country emoji information including ISO, Unicode, Emoji and Name.
[Flagmojis.Data](Flagmojis.Data.html)
[Flagmojis.Flag](Flagmojis.Flag.html)
Flagmojis
===
A micro library that provides an easy lookup to country emoji information including ISO, Unicode, Emoji and Name.
Summary
===
[Functions](#functions)
---
[by_country_name(name)](#by_country_name/1)
Returns a Flagmojis.Flag struct containing all flag information by name
[by_iso(iso)](#by_iso/1)
Returns a Flagmojis.Flag struct containing all flag information by country ISO
[by_unicode(unicode)](#by_unicode/1)
Returns a Flagmojis.Flag struct containing all flag information by unicode
Functions
===
Returns a Flagmojis.Flag struct containing all flag information by name
Example
---
```
iex> Flagmojis.by_country_name("Cyprus")
%Flagmojis.Flag{
emoji: "🇨🇾",
iso: "CY",
name: "Cyprus",
unicode: "U+1F1E8 U+1F1FE"
}
```
Returns a Flagmojis.Flag struct containing all flag information by country ISO
Example
---
```
iex> Flagmojis.by_iso("GB")
%Flagmojis.Flag{
emoji: "🇬🇧",
iso: "GB",
name: "United Kingdom",
unicode: "U+1F1EC U+1F1E7"
}
```
Returns a Flagmojis.Flag struct containing all flag information by unicode
Example
---
```
iex> Flagmojis.by_unicode("U+1F1EC U+1F1E7")
%Flagmojis.Flag{
emoji: "🇬🇧",
iso: "GB",
name: "United Kingdom",
unicode: "U+1F1EC U+1F1E7"
}
```
Flagmojis.Data
===
Summary
===
[Functions](#functions)
---
[all()](#all/0)
Functions
===
Flagmojis.Flag
=== |
cluster-toolkit | readthedoc | Python | cluster_toolkit 1.0 documentation
[cluster_toolkit](index.html#document-index)
---
Cluster Toolkit Documentation[¶](#cluster-toolkit-documentation)
===
Cluster Toolkit is a Python package specifically built for calculating weak lensing signals from galaxy clusters and cluster cosmology.
It consists of a Python front end wrapped around a well optimized back end in C, merged with [cffi](https://cffi.readthedocs.io/en/latest/).
The core functionality of the package includes:
> * 3D density functions \(\rho(r)\)
> * 3D correlation functions \(\xi(r)\)
> * Halo bias models \(b(M)\)
> * Projected density and differential profiles \(\Sigma(R)\) and \(\Delta\Sigma\)
> * Radially averaged profiles \(\overline{\Delta\Sigma}\)
> * Boost factor models \(\mathcal{B} = (1-f_{\rm cl})^{-1}\)
> * Miscentering effects on projected profiles \(R_{\rm mis}\)
> * Halo mass functions \(\frac{dn}{dM}(M,z)\)
> * Mass-concentration relations \(M-c\)
> * Sunyaev-Zel’dovich (SZ) cluster signals \(Y_{SZ}\) (in development)
> * Cluster magnification \(\kappa(\theta)\) and shear profiles \(\gamma(\theta)\) (in development)
The source code is publicly available at <https://github.com/tmcclintock/cluster_toolkit>.
Note
Unless stated otherwise, all distances are assumed to be \({\rm Mpc}/h\) comoving and masses \({\rm M}_\odot/h\). Furthermore, power spectra \(P(k)\) must be in units of \(({\rm Mpc}/h)^3\) with wavenumber \(k\) in units of \(h/{\rm Mpc}\).
Installation[¶](#installation)
---
To install the cluster_toolkit you currently need to build it from source:
```
git clone https://github.com/tmcclintock/cluster_toolkit.git
cd cluster_toolkit
python setup.py install
```
To run the tests you can do:
```
python setup.py test
```
### Requirements[¶](#requirements)
This package has only ever been tested with Python 2.7.x and has some dependencies. The Python dependencies that you can get with pip are:
* [Numpy](http://www.numpy.org/): 1.13 or later
* [cffi](https://cffi.readthedocs.io/en/latest/): 1.10 or later
* [pytest](https://docs.pytest.org/en/latest/): 3.x or later for testing
In addition, you must have the [GNU Science Library](https://www.gnu.org/software/gsl/) (GSL) installed. If you follow the instructions in their INSTALL file you will be done in only a few lines in the terminal. There is a pip installable GSL, but I do not know if it will work with the cluster toolkit.
Furthermore, while it is not a dependency of this code, for some of the functions you will need a way to calculate the linear and nonlinear matter power spectra \(P(k)\). Two good options are [CAMB](http://camb.info/) and [CLASS](http://class-code.net/). Both are also available in the [Core Cosmology Library](https://github.com/LSSTDESC/CCL).
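Many of the examples below assume that you already have arrays `k` and `P`. As a minimal sketch only (assuming the `camb` Python package is installed; the cosmological parameter values here are arbitrary examples), a linear matter power spectrum in the expected units could be obtained like this:
```
import camb

# Arbitrary example cosmology; replace with your own parameters.
pars = camb.CAMBparams()
pars.set_cosmology(H0=70.0, ombh2=0.022, omch2=0.12)
pars.InitPower.set_params(ns=0.96)
pars.set_matter_power(redshifts=[0.0], kmax=10.0)

results = camb.get_results(pars)
# kh is in h/Mpc and pk is in (Mpc/h)^3, as the toolkit expects.
kh, z, pk = results.get_matter_power_spectrum(minkh=1e-4, maxkh=10.0, npoints=200)
k, P = kh, pk[0]  # pk[0] is the z=0 slice
```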
Density Profiles[¶](#density-profiles)
---
Most cluster observables are closely related to their density profiles. This repository contains routines for calculating the NFW and Einasto profiles.
### NFW Profile[¶](#nfw-profile)
The NFW profile ([arxiv](https://arxiv.org/abs/astro-ph/9508025)) is a 3D density profile given by:
\[\rho_{\rm nfw}(r) = \frac{\Omega_m\rho_{\rm crit}\delta_c}{\left(\frac{r}{r_s}\right)\left(1+\frac{r}{r_s}\right)^2}.\]
The free parameters are the cluster mass \(M_\Delta\) and concentration \(c_\Delta = r_\Delta/r_s\). In this module we choose to define the density with respect to the matter background density \(\Omega_m\rho_{\rm crit}\). The scale radius \(r_s\) is given in \(h^{-1}{\rm Mpc}\), however the code uses concentration \(c_\Delta\) as an argument instead. The normalization \(\delta_c\) is calculated internally and depends only on the concentration. As written, because of the choice of units the only cosmological parameter that needs to be passed in is \(\Omega_m\).
Note
The density profiles can use \(\Delta\neq 200\).
To use this, you would do:
```
from cluster_toolkit import density
import numpy as np

radii = np.logspace(-2, 3, 100) #Mpc/h comoving
mass = 1e14 #Msun/h
concentration = 5 #arbitrary
Omega_m = 0.3
rho_nfw = density.rho_nfw_at_r(radii, mass, concentration, Omega_m)
```
### Einasto Profile[¶](#einasto-profile)
The [Einasto profile](http://adsabs.harvard.edu/abs/1965TrAlm...5...87E) is a 3D density profile given by:
\[\rho_{\rm ein}(r) = \rho_s\exp\left(-\frac{2}{\alpha}\left(\frac{r}{r_s}\right)^\alpha\right)\]
In this model, the free parameters are the scale radius \(r_s\), \(\alpha\), and the cluster mass \(M_\Delta\). The scale density \(\rho_s\) is calculated internally, or can be passed in instead of mass. To use this, you would do:
```
from cluster_toolkit import density
import numpy as np

radii = np.logspace(-2, 3, 100) #Mpc/h comoving
mass = 1e14 #Msun/h
r_scale = 1.0 #Mpc/h comoving scale radius
alpha = 0.19 #arbitrary; a typical value
Omega_m = 0.3
rho_ein = density.rho_einasto_at_r(radii, mass, r_scale, alpha, Omega_m)
```
We can see the difference between these two profiles here:
Correlation Functions[¶](#correlation-functions)
---
Cluster density profiles are closely related to the correlation function
\[\rho(r) = \rho_0(1+\xi_{\rm hm}(r))\]
That is, the average density of halos some distance \(r\) from the center of the halo is proportional to mean density \(\rho_0=\Omega_m\rho_{\rm crit}\) of the universe and the *halo-matter* correlation function, or the tendency to find matter near halos. This module makes various correlation functions available. The correlation functions for the NFW and Einasto profiles can also be computed directly from the density profiles by inverting the above equation.
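For example, here is a small sketch of that inversion using the density module described above (the value of \(\rho_{\rm crit}\) in \({\rm M}_\odot h^2/{\rm Mpc}^3\) is an assumed constant):
```
import numpy as np
from cluster_toolkit import density

radii = np.logspace(-2, 3, 100) #Mpc/h comoving
mass = 1e14 #Msun/h
concentration = 5 #arbitrary
Omega_m = 0.3
rhocrit = 2.775e11 #Msun h^2/Mpc^3 comoving; assumed value

rho_nfw = density.rho_nfw_at_r(radii, mass, concentration, Omega_m)
xi_nfw = rho_nfw / (Omega_m * rhocrit) - 1.0
```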
Note
By definition \(\int {\rm d}V\ \xi(\vec{r}) = 0\). Almost no analytic correlation function has this built in, however.
### NFW Profile[¶](#nfw-profile)
The NFW profile ([arxiv](https://arxiv.org/abs/astro-ph/9508025)) has a corresponding 3D correlation function given by:
\[\xi_{\rm nfw}(r) = \frac{\rho_{\rm nfw}(r)}{\Omega_m\rho_{\rm crit}} - 1\]
The free parameters are the cluster mass \(M_\Delta\) and concentration \(c_\Delta = r_\Delta/r_s\). In this module we choose to define the density with respect to the matter background density \(\Omega_m\rho_{\rm crit}\). The scale radius \(r_s\) is given in \(h^{-1}{\rm Mpc}\), however the code uses concentration \(c_\Delta\) as an argument instead. As written, because of the choice of units the only cosmological parameter that needs to be passed in is \(\Omega_m\). The arguments are identical to the density profile.
Note
The correlation functions can use \(\Delta\neq 200\) as an argument: `delta=200`.
To use this, you would do:
```
from cluster_toolkit import xi
import numpy as np

radii = np.logspace(-2, 3, 100) #Mpc/h comoving
mass = 1e14 #Msun/h
concentration = 5 #arbitrary
Omega_m = 0.3
xi_nfw = xi.xi_nfw_at_r(radii, mass, concentration, Omega_m)
```
### Einasto Profile[¶](#einasto-profile)
The [Einasto profile](http://adsabs.harvard.edu/abs/1965TrAlm...5...87E) has a corresponding 3D correlation function given by:
\[\xi_{\rm ein}(r) = \frac{\rho_{\rm ein}(r)}{\Omega_m\rho_{\rm crit}} - 1\]
In this model, the free parameters are the scale radius \(r_s\), \(\alpha\), and the cluster mass \(M_\Delta\). The scale density \(\rho_s\) is calculated internally, or can be passed in instead of mass. This is the same arguments as the density profile. To use this, you would do:
```
from cluster_toolkit import xi
import numpy as np

radii = np.logspace(-2, 3, 100) #Mpc/h comoving
mass = 1e14 #Msun/h
r_scale = 1.0 #Mpc/h comoving scale radius
alpha = 0.19 #arbitrary; a typical value
Omega_m = 0.3
xi_ein = xi.xi_einasto_at_r(radii, mass, r_scale, alpha, Omega_m)
```
### 1-halo vs. 2-halo Terms[¶](#halo-vs-2-halo-terms)
The NFW and Einasto profiles describe the “1-halo” density of halos, or the density of a halo within its boundary. However, halos tend to be found near other halos, and for this reason the *average density* of halos should have a 2-halo term. In other words, as you move far from the center of a halo (e.g. past the scale radius or \(r_{200}\) or a splashback radius or any arbitrary boundary) the halo-matter correlation function becomes the “2-halo” term, or the likelihood that one is in a second halo
\[\xi_{\rm hm}(r \gg r_s) = \xi_{\rm 2-halo}(r).\]
The two halo term, as detailed below, should be related to the matter density field and a bias. The specific treatment of this is not unanimously agreed upon.
### Matter Correlation Function[¶](#matter-correlation-function)
The matter (auto)correlation function describes the average density of matter.
\[\rho_{\rm m}(r) = \rho_0(1+\xi_{\rm mm}(r))\]
By definition it is related to the matter power spectrum by a Fourier transform
\[\xi_{\rm mm}(r) = \frac{1}{2\pi^2}\int_0^\infty {\rm d}k\ k^2 P(k) \frac{\sin kr}{kr}.\]
There is not consensus on what power spectrum to use. [Hayashi & White](https://arxiv.org/abs/0709.3933) use the *linear* matter power spectrum, while [Zu et al.](https://arxiv.org/abs/1207.3794) use the *nonlinear* matter power spectrum. Anecdotally, the former is generally better for lower mass halos and the latter is better for higher mass halos. Regardless of what you use, to call this you would do
```
from cluster_toolkit import xi
import numpy as np

radii = np.logspace(-2, 3, 100) #Mpc/h comoving
#Assume that k and P come from somewhere, e.g. CAMB or CLASS
xi_mm = xi.xi_mm_at_r(radii, k, P)
```
### 2-halo Correlation Function[¶](#halo-correlation-function)
Halos are *biased* tracers of the matter density field, meaning the 2-halo correlation function is
\[\xi_{\rm 2-halo}(r,M) = b(M)\xi_{\rm mm}(r)\]
The bias is described in more detail in the bias section of this documentation (in progress). To calculate the 2-halo term you would do
```
from cluster_toolkit import xi
from cluster_toolkit import bias
import numpy as np

radii = np.logspace(-2, 3, 100) #Mpc/h comoving
mass = 1e14 #Msun/h
Omega_m = 0.3
#Assume that k and P come from somewhere, e.g. CAMB or CLASS
xi_mm = xi.xi_mm_at_r(radii, k, P)
#Assume that k and P_linear came from somewhere, e.g. CAMB or CLASS
bias = bias.bias_at_M(mass, k, P_linear, Omega_m)
xi_2halo = xi.xi_2halo(bias, xi_mm)
```
### Halo-matter Correlation Function[¶](#halo-matter-correlation-function)
At small scales, the correlation function follows the 1-halo term (e.g. NFW or Einasto) while at large scales it follows the 2-halo term. There is no consensus on how to combine the two. [Zu et al.](https://arxiv.org/abs/1207.3794) take the max of the two terms, while [Chang et al.](https://arxiv.org/abs/1710.06808) sum the two. The default behavior of this module is to follow [Zu et al.](https://arxiv.org/abs/1207.3794), and in the near future it will be easy to switch between different options. Mathematically this is
\[\xi_{\rm hm}(r,M) = \max(\xi_{\rm 1-halo},\xi_{\rm 2-halo}).\]
To use this you would do
```
from cluster_toolkit import xi
#Calculate 1-halo and 2-halo terms here
xi_hm = xi.xi_hm(xi_1halo, xi_2halo)
```
Here are all of these correlation functions plotted together:
Halo Bias[¶](#halo-bias)
---
Halos, which host galaxies and galaxy clusters, are *biased* tracers of the matter density field. This means at large scales the correlation function is
\[\xi_{\rm hm}(R) = b\xi_{\rm mm}\]
where the bias is a function of mass (and cosmological parameters). This module implements the [Tinker et al. 2010](https://arxiv.org/abs/1001.3162) halo bias model, which is accurate to 6%. Other biases will be available in the future. To use this you would do:
```
from cluster_toolkit import bias

mass = 1e14 #Msun/h
Omega_m = 0.3
#Assume that k and P_linear came from somewhere, e.g. CAMB or CLASS
bias = bias.bias_at_M(mass, k, P_linear, Omega_m)
```
Note
The bias can use \(\Delta\neq 200\) as an argument `delta=200`.
This module also allows for conversions between mass and RMS density variance \(\sigma^2\) and peak height \(\nu\).
```
from cluster_toolkit import bias

mass = 1e14 #Msun/h
Omega_m = 0.3
#Assume that k and P_linear came from somewhere, e.g. CAMB or CLASS
sigma2 = bias.sigma2_at_M(mass, k, P_linear, Omega_m)
nu = bias.nu_at_M(mass, k, P_linear, Omega_m)
```
The bias as a function of mass is seen here for a basic cosmology:
Projected Density profiles \(\Sigma\) and \(\Delta\Sigma\)[¶](#projected-density-profiles-sigma-and-delta-sigma)
---
Weak lensing measurements of galaxy clusters involve calculating the projected and differential density profiles of the cluster.
### Surface Mass Density \(\Sigma(R)\)[¶](#surface-mass-density-sigma-r)
The projected density (or the surface mass density) is defined as
\[\Sigma(R) = \Omega_m\rho_{\rm crit}\int_{-\infty}^{+\infty}{\rm d}z\ \xi_{\rm hm}(\sqrt{R^2+z^2}).\]
where \(\xi_{\rm hm}\) is the halo-matter correlation function (see the [Correlation Functions](#correlation-functions) section). The integral is along the line of sight, meaning that \(R\) is the distance on the sky from the center of the cluster.
Note
\(\Sigma\) and \(\Delta\Sigma\) use units of \(h{\rm M_\odot/pc^2}\), following convention in the literature.
Note
This module is called `cluster_toolkit.deltasigma`, even though it contains routines to calculate \(\Sigma\) as well as \(\Delta\Sigma\).
To calculate this using the module you would use:
```
from cluster_toolkit import deltasigma
import numpy as np

mass = 1e14 #Msun/h
concentration = 5 #arbitrary
Omega_m = 0.3
R_perp = np.logspace(-2, 2.4, 100) #Mpc/h comoving; distance on the sky
#Assume that radii and xi_hm are computed here
Sigma = deltasigma.Sigma_at_R(R_perp, radii, xi_hm, mass, concentration, Omega_m)
```
### NFW \(\Sigma(R)\)[¶](#nfw-sigma-r)
The example code above computes a \(\Sigma\) given any halo-matter correlation function, but you can compute \(\Sigma_{nfw}\) directly using
```
from cluster_toolkit import deltasigma
import numpy as np

mass = 1e14 #Msun/h
concentration = 5 #arbitrary
Omega_m = 0.3
R_perp = np.logspace(-2, 2.4, 100) #Mpc/h comoving; distance on the sky
Sigma_nfw = deltasigma.Sigma_nfw_at_R(R_perp, mass, concentration, Omega_m)
```
If you know of an analytic form of \(\Sigma(R)\) for the Einasto profile, please let me know.
### Differential Surface Density \(\Delta\Sigma(R)\)[¶](#differential-surface-density-delta-sigma-r)
The differential (or excess) surface mass density is defined as
\[\Delta\Sigma = \bar{\Sigma}(<R) - \Sigma(R)\]
where \(\Sigma\) is given above and
\[\bar{\Sigma}(<R) = \frac{2}{R^2}\int_0^R {\rm d}R'\ R'\Sigma(R'),\]
or the average surface mass density within a circle of radius \(R\). To calculate this you would use
```
from cluster_toolkit import deltasigma
import numpy as np

mass = 1e14 #Msun/h
concentration = 5 #arbitrary
Omega_m = 0.3
#Assume that Sigma at Rp is calculated here
R_perp = np.logspace(-2, 2.4, 100) #Mpc/h comoving; distance on the sky
DeltaSigma = deltasigma.DeltaSigma_at_R(R_perp, Rp, Sigma, mass, concentration, Omega_m)
```
As you can see, the code is structured so that the input \(\Sigma\) profile is arbitrary.
Note
Mass, concentration, and \(\Omega_m\) are also arguments to \(\Delta\Sigma\), because an NFW profile is used to extrapolate the integrand for \(\bar{\Sigma}(<R)\) at very small scales. To avoid issues when using an Einasto or other profile, make sure that the input profiles are calculated to fairly large and small scales.
This figure shows the different \(\Sigma(R)\) profiles, including with miscentering
This figure shows the different \(\Delta\Sigma(R)\) profiles, including with miscentering
Radially Averaged Projected Profiles[¶](#radially-averaged-projected-profiles)
---
Weak lensing measurements are bin-averaged quantities. That is, they are measurements of a quantity with a radial bin around the lens. This module allows for calculating radially averaged projected profiles from continuous profiles. Mathematically this is
\[\overline{\Delta\Sigma} = \frac{2}{R_2^2-R_1^2}\int_{R_1}^{R_2}{\rm d}R' R'\Delta\Sigma(R').\]
This can be computed in the code by using
```
from cluster_toolkit import averaging
import numpy as np

#Assume DeltaSigma at R_perp are computed here
N_bins = 15
bin_edges = np.logspace(np.log10(0.2), np.log10(30.), N_bins+1)
#Bin edges are from 200 kpc/h to 30 Mpc/h
averaged_DeltaSigma = averaging.average_profile_in_bins(bin_edges, R_perp, DeltaSigma)
```
Note
The `average_profile_in_bins` function can work with any projected profile.
Note
The returned average profile will be an array of length \(N_{\rm bins}\).
Boost Factors[¶](#boost-factors)
---
In galaxy cluster weak lensing, a significant systematic issue is cluster member dilution also known as boost factors. The idea is that if some cluster galaxies are misidentified as source (background) galaxies, then your weak lensing signal is diluted due to the fact that the cluster member galaxy won’t be sheared or magnified. Traditionally, one calculates or estimates this correction and “boosts” the data vector. This boost factor is radially dependent, since you will tend to misidentify cluster members close to the cluster more than those farther out. Mathematically this looks like
\[\Delta\Sigma_{\rm corrected}(R) = (1-f_{\rm cl})^{-1}(R)\Delta\Sigma(R)\]
where \(f_{\rm cl}\) is the fraction of cluster members misidentified as being source galaxies. For shorthand, we write \(\mathcal{B} = (1-f_{\rm cl})^{-1}\). This module provides multiple models for \(\mathcal{B}\).
### NFW Boost Model[¶](#nfw-boost-model)
In McClintock et al. (in prep.) we model the boost factor with an NFW model:
\[\mathcal{B}(R) = 1+B_0\frac{1-F(x)}{x^2-1}\]
where \(x=R/R_s\) and
\[F(x) = \begin{cases}
\frac{\tan^{-1}\sqrt{x^2-1}}{\sqrt{x^2-1}} & x > 1\\
1 & x = 1\\
\frac{\tanh^{-1}\sqrt{1-x^2}}{\sqrt{1-x^2}} & x < 1.
\end{cases}\]
Parameters that need to be specified by the user are \(B_0\) and the scale radius \(R_s\). To use this, you would do:
```
from cluster_toolkit import boostfactors
import numpy as np

R = np.logspace(-2, 3, 100) #Mpc/h comoving
B0 = 0.1 #Typical value
Rs = 1.0 #Mpc/h comoving; typical value
B = boostfactors.boost_nfw_at_R(R, B0, Rs)
```
### Powerlaw Boost Model[¶](#powerlaw-boost-model)
In [Melchior et al.](https://arxiv.org/abs/1610.06890) we used a power law for the boost factor.
\[\mathcal{B} = 1 + B_0\left(\frac{R}{R_s}\right)^\alpha\]
Here, the input parameters are \(B_0\), the scale radius \(R_s\), and the exponent \(\alpha\). This is also available in this module:
```
from cluster_toolkit import boostfactors
import numpy as np

R = np.logspace(-2, 3, 100) #Mpc/h comoving
B0 = 0.1 #Typical value
Rs = 1.0 #Mpc/h comoving; typical value
alpha = -1.0 #arbitrary
B = boostfactors.boost_powerlaw_at_R(R, B0, Rs, alpha)
```
This figure shows the NFW boost factor model:
This figure shows how the boost factor changes the \(\Delta\Sigma(R)\) profile:
Miscentering Effects[¶](#miscentering-effects)
---
If galaxy cluster centers are not properly identified on the sky, then quantities measured in annuli around that center will not match theoretical models. This effect is detailed in [Johnston et al. (2007)](http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:astro-ph/0507467) and [Yang et al. (2006)](https://arxiv.org/abs/astro-ph/0607552).
To summarize, if a cluster center is incorrectly identified on the sky by a distance \(R_{\rm mis}\) then the surface mass density becomes:
\[\Sigma_{\rm mis}^{\rm single\ cluster}(R, R_{\rm mis}) = \int_0^{2\pi} \frac{{\rm d}\theta}{2\pi}\ \Sigma\left(\sqrt{R^2+R_{\rm mis}^2 + 2RR_{\rm mis}\cos\theta}\right).\]
That is, the average surface mass density at distance \(R\) away from the incorrect center is the average of the circle drawn around that incorrect center. To get the miscentered profiles of a *single cluster* you would use
```
from cluster_toolkit import miscentering

mass = 1e14 #Msun/h
conc = 5 #arbitrary
Omega_m = 0.3
#Calculate Rp and Sigma here, where Sigma is centered
Rmis = 0.25 #Mpc/h; typical value
Sigma_mis_single = miscentering.Sigma_mis_single_at_R(Rp, Rp, Sigma, mass, conc, Omega_m, Rmis)
```
As you can see, `Rp` is passed in twice: first as the locations at which to evaluate `Sigma_mis`, and then as the locations at which the input `Sigma` is known. If you want, those two radial arrays can be different, as shown below.
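For instance (a small sketch reusing the variables above; `R_eval` is just an assumed name), you could evaluate the miscentered profile on a different set of radii than the ones where `Sigma` is known:
```
import numpy as np

R_eval = np.logspace(-1, 1, 50) #Mpc/h comoving; different from Rp
Sigma_mis_coarse = miscentering.Sigma_mis_single_at_R(R_eval, Rp, Sigma, mass, conc, Omega_m, Rmis)
```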
The \(\Delta\Sigma\) profile is defined the usual way
\[\Delta\Sigma(R,R_{\rm mis}) = \bar{\Sigma}_{\rm mis}(<R,R_{\rm mis}) - \Sigma_{\rm mis}(R,R_{\rm mis})\]
which can be calculated using this module using
```
DeltaSigma_mis_single = miscentering.DeltaSigma_mis_at_R(Rp, Rp, Sigma_mis_single)
```
### Stacked Miscentering[¶](#stacked-miscentering)
In a stack of clusters, the amount of miscentering will follow a distribution \(P(R'|R_{\rm mis})\) given some characteristic miscentering length \(R_{\rm mis}\). That is, some clusters will be miscentered more than others. [Simet et al. (2017)](https://arxiv.org/abs/1603.06953) for SDSS and [Melchior et al. (2017)](https://arxiv.org/abs/1610.06890) assume a Rayleigh distribution for the amount of miscentering \(R'\):
\[P(R'|R_{\rm mis}) = \frac{R'}{R^2_{\rm mis}}\exp[-R'^2/2R_{\rm mis}^2]\,.\]
In [McClintock et al. (2019)](http://adsabs.harvard.edu/abs/2019MNRAS.482.1352M) we used a Gamma distribution for the miscentering:
\[P(R'|R_{\rm mis}) = \frac{R'}{R^2_{\rm mis}}\exp[-R'/R_{\rm mis}]\,.\]
Both of these are available in the toolkit. We see that \(R_{\rm mis}\) is a free parameter, giving rise to a miscentered projected stacked density profile:
\[\Sigma_{\rm mis}^{\rm stack}(R) = \int_0^\infty{\rm d}R'\ P(R'|R_{\rm mis})\Sigma_{\rm mis}^{\rm single\ cluster}(R, R')\]
which can then itself be integrated to get \(\Delta\Sigma_{\rm mis}^{\rm stack}\). To calculate these in the code you would use:
```
from cluster_toolkit import miscentering
#Assume Sigma at R_perp are computed here
Sigma_mis = miscentering.Sigma_mis_at_R(R_perp, R_perp, Sigma, mass, concentration, Omega_m, R_mis)
DeltaSigma_mis = miscentering.DeltaSigma_mis_at_R(R_perp, R_perp, Sigma_mis)
```
Halo Mass Function[¶](#halo-mass-function)
---
Clusters and galaxies live inside of dark matter halos, which represent peaks in the matter density field. This means that it is possible to use the abundance of clusters to constrain cosmology, provided you have a well-understood mapping from clusters onto halos (which is outside the scope of this code).
The abundance of halos is also known as the *halo mass function*. A general mathematical form for the mass function, given in [Tinker et al. (2008)](https://arxiv.org/abs/0803.2706) (where they cite [Press & Schechter (1974)](http://adsabs.harvard.edu/abs/1974ApJ...187..425P), [Jenkins et al. (2000)](https://arxiv.org/abs/astro-ph/0005260) and other papers that you can look up), is:
\[\frac{{\rm d}n}{{\rm d}M} = f(\sigma)\frac{\rho_m}{M}\frac{{\rm d}\ln\sigma^{-1}}{{\rm d}M}.\]
Where \(\sigma\) is the RMS variance of a spherical top hat containing a mass \(M\), \(\rho_m=\rho_{\rm crit}\Omega_m\) is the mean matter density and \(f(\sigma)\) is known as the *halo multiplicity function*. Practically speaking, what sets one mass function model apart from another is how the multiplicity is written down.
At some point in the future the toolkit will have other options for the mass function, but for now it implements the mass function from [Tinker et al. (2008)](https://arxiv.org/abs/0803.2706). Specifically, the version in Appendix C of that paper, which is usually the one people mean when they refer to this paper.
Note
Implicitly, the mass function depends on the linear matter power spectrum \(P(k)\) in order to map from \(M\) to \(\sigma\). Since the toolkit doesn't have its own power spectrum implemented, the user must input one from, e.g. CLASS or CAMB.
### Tinker Mass Function[¶](#tinker-mass-function)
The Tinker mass function is defined by its multiplicity function, which looks like
\[f(\sigma) = B\left[\left(\frac{\sigma}{e}\right)^{-d} + \sigma^{-f}\right]\exp(-g/\sigma^2).\]
In this mass function \(d\), \(e\), \(f\), and \(g\) are free parameters that depend on halo definition. By default they are set to the values associated with \(M_{200m}\). If you want to switch to other values for other halo definitions, see [Tinker et al. (2008)](https://arxiv.org/abs/0803.2706). The normalization \(B\) is calculated internally so that you don’t have to pass it in.
To use this in the code you would do:
```
from cluster_toolkit import massfunction
import numpy as np

#Assume that k and P come from somewhere, e.g. CAMB or CLASS
#Units of k and P are h/Mpc and (Mpc/h)^3
Mass = 1e14 #Msun/h
Omega_m = 0.3 #example value
dndM = massfunction.dndM_at_M(Mass, k, P, Omega_m)
#Or could also use an array
Masses = np.logspace(12, 16)
dndM = massfunction.dndM_at_M(Masses, k, P, Omega_m)
```
### Binned Mass Functions[¶](#binned-mass-functions)
In reality in a simulation or in cluster abundance the real observable is the number density of objects in some mass bin of finite width. Written out, this is
\[n = \int_{M_1}^{M_2}{\rm d}M\ \frac{{\rm d}n}{{\rm d}M}.\]
In the toolkit this is available by first calculating \({\rm d}n/{\rm d}M\) and then passing that back to the toolkit. This is available in the code by using
```
from cluster_toolkit import massfunction
import numpy as np

#Assume that k and P come from somewhere, e.g. CAMB or CLASS
#Units of k and P are h/Mpc and (Mpc/h)^3
Omega_m = 0.3 #example value
M = np.logspace(12, 16) #Msun/h
dndM = massfunction.dndM_at_M(M, k, P, Omega_m)
M1 = 1e12 #Msun/h
M2 = 1e13 #Msun/h
n = massfunction.n_in_bin(M1, M2, M, dndM)
```
You can also pass in many bin edges at once:
```
edges = np.array([1e12, 5e12, 1e13, 5e13])
n = massfunction.n_in_bins(edges, M, dndM)
```
Mass-concentration Relations[¶](#mass-concentration-relations)
---
The inner regions of clusters are called the 1-halo regime. The 1-halo regime is often modeled analytically using either an NFW or Einasto profile. Both of these profiles are functions of at least two variables. Numerous papers have examined the relationship between mass and NFW halo concentration. This module implements the Diemer-Kravtsov (2015) M-c relation. At present, only \(M_{200c}\) and \(M_{200m}\) mass definitions are supported. In the near future this will be expanded to \(\Delta\neq200\) as well as \(M_{\rm vir}\).
To call this function you would do the following:
```
from cluster_toolkit import concentration as conc

M = 1e14 #Msun/h
Omega_m = 0.3 #Matter fraction
Omega_b = 0.05 #Baryon fraction
ns = 0.96 #Power spectrum index
h = 0.7 #Hubble constant
#Assume that k and P come from somewhere, e.g. CAMB or CLASS
#k are wavenumbers in h/Mpc and P is the linear power spectrum
#in (Mpc/h)^3
#The Mass_type argument can either be 'mean' or 'crit'
Mass_type = 'mean'
c = conc.concentration_at_M(M, k, P, ns, Omega_b, Omega_m, h, Mass_type=Mass_type)
```
cluster_toolkit[¶](#cluster-toolkit)
---
### cluster_toolkit package[¶](#cluster-toolkit-package)
#### Submodules[¶](#submodules)
##### cluster_toolkit.averaging module[¶](#module-cluster_toolkit.averaging)
Averaging projected cluster profiles.
`cluster_toolkit.averaging.``average_profile_in_bin`(*Rlow*, *Rhigh*, *R*, *prof*)[[source]](_modules/cluster_toolkit/averaging.html#average_profile_in_bin)[¶](#cluster_toolkit.averaging.average_profile_in_bin)
Average profile in a bin.
Calculates the average of some projected profile in a radial bin in Mpc/h comoving.
| Parameters: | * **Rlow** (*float*) – Inner radii.
* **Rhigh** (*float*) – Outer radii.
* **R** (*array like*) – Radii of the profile.
* **prof** (*array like*) – Projected profile.
|
| Returns: | Average profile in the radial bin, or annulus. |
| Return type: | float |
`cluster_toolkit.averaging.``average_profile_in_bins`(*Redges*, *R*, *prof*)[[source]](_modules/cluster_toolkit/averaging.html#average_profile_in_bins)[¶](#cluster_toolkit.averaging.average_profile_in_bins)
Average profile in bins.
Calculates the average of some projected profile in a radial bins in Mpc/h comoving.
| Parameters: | * **Redges** (*array like*) – Array of radial bin edges.
* **R** (*array like*) – Radii of the profile.
* **prof** (*array like*) – Projected profile.
|
| Returns: | Average profile in bins between the edges provided. |
| Return type: | numpy.array |
##### cluster_toolkit.bias module[¶](#module-cluster_toolkit.bias)
Halo bias.
`cluster_toolkit.bias.``bias_at_M`(*M*, *k*, *P*, *Omega_m*, *delta=200*)[[source]](_modules/cluster_toolkit/bias.html#bias_at_M)[¶](#cluster_toolkit.bias.bias_at_M)
Tinker et al. 2010 bias at mass M [Msun/h].
| Parameters: | * **M** (*float* *or* *array like*) – Mass in Msun/h.
* **k** (*array like*) – Wavenumbers of power spectrum in h/Mpc comoving.
* **P** (*array like*) – Power spectrum in (Mpc/h)^3 comoving.
* **Omega_m** (*float*) – Matter density fraction.
* **delta** (*int; optional*) – Overdensity, default is 200.
|
| Returns: | Halo bias. |
| Return type: | float or array like |
`cluster_toolkit.bias.``bias_at_R`(*R*, *k*, *P*, *delta=200*)[[source]](_modules/cluster_toolkit/bias.html#bias_at_R)[¶](#cluster_toolkit.bias.bias_at_R)
Tinker 2010 bias at mass M [Msun/h] corresponding to radius R [Mpc/h comoving].
| Parameters: | * **R** (*float* *or* *array like*) – Lagrangian radius in Mpc/h comoving.
* **k** (*array like*) – Wavenumbers of power spectrum in h/Mpc comoving.
* **P** (*array like*) – Power spectrum in (Mpc/h)^3 comoving.
* **delta** (*int; optional*) – Overdensity, default is 200.
|
| Returns: | Halo bias. |
| Return type: | float or array like |
`cluster_toolkit.bias.``bias_at_nu`(*nu*, *delta=200*)[[source]](_modules/cluster_toolkit/bias.html#bias_at_nu)[¶](#cluster_toolkit.bias.bias_at_nu)
Tinker 2010 bias at peak height nu.
| Parameters: | * **nu** (*float* *or* *array like*) – Peak height.
* **delta** (*int; optional*) – Overdensity, default is 200.
|
| Returns: | Halo bias. |
| Return type: | float or array like |
`cluster_toolkit.bias.``dbiasdM_at_M`(*M*, *k*, *P*, *Omega_m*, *delta=200*)[[source]](_modules/cluster_toolkit/bias.html#dbiasdM_at_M)[¶](#cluster_toolkit.bias.dbiasdM_at_M)
d/dM of Tinker et al. 2010 bias at mass M [Msun/h].
| Parameters: | * **M** (*float* *or* *array like*) – Mass in Msun/h.
* **k** (*array like*) – Wavenumbers of power spectrum in h/Mpc comoving.
* **P** (*array like*) – Power spectrum in (Mpc/h)^3 comoving.
* **Omega_m** (*float*) – Matter density fraction.
* **delta** (*int; optional*) – Overdensity, default is 200.
|
| Returns: | Derivative of the halo bias. |
| Return type: | float or array like |
##### cluster_toolkit.boostfactors module[¶](#module-cluster_toolkit.boostfactors)
Galaxy cluster boost factors, also known as membership dilution models.
`cluster_toolkit.boostfactors.``boost_nfw_at_R`(*R*, *B0*, *R_scale*)[[source]](_modules/cluster_toolkit/boostfactors.html#boost_nfw_at_R)[¶](#cluster_toolkit.boostfactors.boost_nfw_at_R)
NFW boost factor model.
| Parameters: | * **R** (*float* *or* *array like*) – Distances on the sky in the same units as R_scale. Mpc/h comoving suggested for consistency with other modules.
* **B0** (*float*) – NFW profile amplitude.
* **R_scale** (*float*) – NFW profile scale radius.
|
| Returns: | NFW boost factor profile; B = (1-fcl)^-1. |
| Return type: | float or array like |
`cluster_toolkit.boostfactors.``boost_powerlaw_at_R`(*R*, *B0*, *R_scale*, *alpha*)[[source]](_modules/cluster_toolkit/boostfactors.html#boost_powerlaw_at_R)[¶](#cluster_toolkit.boostfactors.boost_powerlaw_at_R)
Power law boost factor model.
| Parameters: | * **R** (*float* *or* *array like*) – Distances on the sky in the same units as R_scale. Mpc/h comoving suggested for consistency with other modules.
* **B0** (*float*) – Boost factor amplitude.
* **R_scale** (*float*) – Power law scale radius.
* **alpha** (*float*) – Power law exponent.
|
| Returns: | Power law boost factor profile; B = (1-fcl)^-1. |
| Return type: | float or array like |
##### cluster_toolkit.concentration module[¶](#module-cluster_toolkit.concentration)
Halo concentration.
`cluster_toolkit.concentration.``concentration_at_M`(*Mass*, *k*, *P*, *n_s*, *Omega_b*, *Omega_m*, *h*, *T_CMB=2.7255*, *delta=200*, *Mass_type='crit'*)[[source]](_modules/cluster_toolkit/concentration.html#concentration_at_M)[¶](#cluster_toolkit.concentration.concentration_at_M)
Concentration of the NFW profile at mass M [Msun/h].
Only implemented relation at the moment is Diemer & Kravtsov (2015).
Note: only single concentrations at a time are allowed at the moment.
| Parameters: | * **Mass** (*float*) – Mass in Msun/h.
* **k** (*array like*) – Wavenumbers of power spectrum in h/Mpc comoving.
* **P** (*array like*) – Linear matter power spectrum in (Mpc/h)^3 comoving.
* **n_s** (*float*) – Power spectrum tilt.
* **Omega_b** (*float*) – Baryonic matter density fraction.
* **Omega_m** (*float*) – Matter density fraction.
* **h** (*float*) – Reduced Hubble constant.
* **T_CMB** (*float*) – CMB temperature in Kelvin, default is 2.7255.
* **delta** (*int; optional*) – Overdensity, default is 200.
* **Mass_type** (*string; optional*) – Either 'mean' or 'crit'; default is 'crit'.
|
| Returns: | NFW concentration. |
| Return type: | float |
##### cluster_toolkit.deltasigma module[¶](#module-cluster_toolkit.deltasigma)
Galaxy cluster shear and magnification profiles also known as DeltaSigma and Sigma, respectively.
`cluster_toolkit.deltasigma.``DeltaSigma_at_R`(*R*, *Rs*, *Sigma*, *mass*, *concentration*, *Omega_m*, *delta=200*)[[source]](_modules/cluster_toolkit/deltasigma.html#DeltaSigma_at_R)[¶](#cluster_toolkit.deltasigma.DeltaSigma_at_R)
Excess surface mass density given Sigma [Msun h/pc^2 comoving].
| Parameters: | * **R** (*float* *or* *array like*) – Projected radii Mpc/h comoving.
* **Rs** (*array like*) – Projected radii of Sigma, the surface mass density.
* **Sigma** (*array like*) – Surface mass density.
* **mass** (*float*) – Halo mass Msun/h.
* **concentration** (*float*) – concentration.
* **Omega_m** (*float*) – Matter density fraction.
* **delta** (*int; optional*) – Overdensity, default is 200.
|
| Returns: | Excess surface mass density Msun h/pc^2 comoving. |
| Return type: | float or array like |
`cluster_toolkit.deltasigma.``Sigma_at_R`(*R*, *Rxi*, *xi*, *mass*, *concentration*, *Omega_m*, *delta=200*)[[source]](_modules/cluster_toolkit/deltasigma.html#Sigma_at_R)[¶](#cluster_toolkit.deltasigma.Sigma_at_R)
Surface mass density given some 3d profile [Msun h/pc^2 comoving].
| Parameters: | * **R** (*float* *or* *array like*) – Projected radii Mpc/h comoving.
* **Rxi** (*array like*) – 3D radii of xi_hm Mpc/h comoving.
* **xi** (*array like*) – Halo-matter correlation function evaluated at Rxi.
* **mass** (*float*) – Halo mass Msun/h.
* **concentration** (*float*) – concentration.
* **Omega_m** (*float*) – Matter density fraction.
* **delta** (*int; optional*) – Overdensity, default is 200.
|
| Returns: | Surface mass density Msun h/pc^2 comoving. |
| Return type: | float or array like |
`cluster_toolkit.deltasigma.``Sigma_nfw_at_R`(*R*, *mass*, *concentration*, *Omega_m*, *delta=200*)[[source]](_modules/cluster_toolkit/deltasigma.html#Sigma_nfw_at_R)[¶](#cluster_toolkit.deltasigma.Sigma_nfw_at_R)
Surface mass density of an NFW profile [Msun h/pc^2 comoving].
| Parameters: | * **R** (*float* *or* *array like*) – Projected radii Mpc/h comoving.
* **mass** (*float*) – Halo mass Msun/h.
* **concentration** (*float*) – concentration.
* **Omega_m** (*float*) – Matter density fraction.
* **delta** (*int; optional*) – Overdensity, default is 200.
|
| Returns: | Surface mass density Msun h/pc^2 comoving. |
| Return type: | float or array like |
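To sketch how these routines chain together, an analytic NFW Sigma profile can be fed into `DeltaSigma_at_R`. The halo mass, concentration, and radial grids below are illustrative:
```
import numpy as np
from cluster_toolkit import deltasigma
mass, conc, Omega_m = 1e14, 5.0, 0.3    # illustrative halo parameters
Rs = np.logspace(-2, 2.4, 1000)         # radii of the Sigma profile, Mpc/h comoving
Sigma = deltasigma.Sigma_nfw_at_R(Rs, mass, conc, Omega_m)
R = np.logspace(-1, 1.5, 50)            # radii at which DeltaSigma is evaluated
DeltaSigma = deltasigma.DeltaSigma_at_R(R, Rs, Sigma, mass, conc, Omega_m)
```
The same pattern applies when Sigma is built from a full halo-matter correlation function via `Sigma_at_R` instead of the analytic NFW form.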
##### cluster_toolkit.density module[¶](#module-cluster_toolkit.density)
Galaxy cluster density profiles.
`cluster_toolkit.density.``rho_einasto_at_r`(*r*, *M*, *rs*, *alpha*, *Omega_m*, *delta=200*, *rhos=-1.0*)[[source]](_modules/cluster_toolkit/density.html#rho_einasto_at_r)[¶](#cluster_toolkit.density.rho_einasto_at_r)
Einasto halo density profile. Distances are Mpc/h comoving.
| Parameters: | * **r** (*float* *or* *array like*) – 3d distances from halo center.
* **M** (*float*) – Mass in Msun/h; not used if rhos is specified.
* **rhos** (*float*) – Scale density in Msun h^2/Mpc^3 comoving; optional.
* **rs** (*float*) – Scale radius.
* **alpha** (*float*) – Profile exponent.
* **Omega_m** (*float*) – Omega_matter, matter fraction of the density.
* **delta** (*int*) – Overdensity, default is 200.
|
| Returns: | Einasto halo density profile in Msun h^2/Mpc^3 comoving. |
| Return type: | float or array like |
`cluster_toolkit.density.``rho_nfw_at_r`(*r*, *M*, *c*, *Omega_m*, *delta=200*)[[source]](_modules/cluster_toolkit/density.html#rho_nfw_at_r)[¶](#cluster_toolkit.density.rho_nfw_at_r)
NFW halo density profile.
| Parameters: | * **r** (*float* *or* *array like*) – 3d distances from halo center in Mpc/h comoving.
* **M** (*float*) – Mass in Msun/h.
* **c** (*float*) – Concentration.
* **Omega_m** (*float*) – Omega_matter, matter fraction of the density.
* **delta** (*int; optional*) – Overdensity, default is 200.
|
| Returns: | NFW halo density profile in Msun h^2/Mpc^3 comoving. |
| Return type: | float or array like |
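A minimal, illustrative example of evaluating the two density profiles; the mass, concentration, scale radius, and Einasto slope are placeholder values:
```
import numpy as np
from cluster_toolkit import density
r = np.logspace(-2, 1, 100)   # 3d radii, Mpc/h comoving
rho_nfw = density.rho_nfw_at_r(r, 1e14, 5.0, 0.3)
rho_ein = density.rho_einasto_at_r(r, 1e14, 1.0, 0.19, 0.3)   # rs = 1, alpha = 0.19
```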
##### cluster_toolkit.exclusion module[¶](#module-cluster_toolkit.exclusion)
Correlation functions with halo exclusion.
`cluster_toolkit.exclusion.``theta_at_r`(*radii*, *rt*, *beta*)[[source]](_modules/cluster_toolkit/exclusion.html#theta_at_r)[¶](#cluster_toolkit.exclusion.theta_at_r)
Truncation function.
| Parameters: | * **radii** (*float* *or* *array-like*) – Radii of the profile in Mpc/h
* **rt** (*float*) – truncation radius in Mpc/h
* **beta** (*float*) – width of the truncation distribution (erfc) in Mpc/h
|
| Returns: | Truncation function |
| Return type: | float or array-like |
`cluster_toolkit.exclusion.``xi_1h_exclusion_at_r`(*radii*, *Mass*, *conc*, *alpha*, *rt*, *beta*, *Omega_m*, *delta=200*)[[source]](_modules/cluster_toolkit/exclusion.html#xi_1h_exclusion_at_r)[¶](#cluster_toolkit.exclusion.xi_1h_exclusion_at_r)
Halo-matter correlation function with halo exclusion incorporated,
but just the 1-halo term.
| Parameters: | * **radii** (*float* *or* *array-like*) – Radii of the profile in Mpc/h
* **Mass** (*float*) – Mass in Msun/h
* **conc** (*float*) – concentration of the 1-halo profile
* **alpha** (*float*) – Einasto parameter
* **rt** (*float*) – truncation radius in Mpc/h
* **beta** (*float*) – width of the truncation distribution (erfc) in Mpc/h
* **Omega_m** (*float*) – Matter density fraction
* **delta** (*float*) – halo overdensity; default is 200
|
| Returns: | 1-halo of the exclusion profile at each radii |
| Return type: | float or array-like |
`cluster_toolkit.exclusion.``xi_2h_exclusion_at_r`(*radii*, *r_eff*, *beta_eff*, *bias*, *xi_mm*)[[source]](_modules/cluster_toolkit/exclusion.html#xi_2h_exclusion_at_r)[¶](#cluster_toolkit.exclusion.xi_2h_exclusion_at_r)
2-halo term in the halo-matter correlation function using halo exclusion theory.
| Parameters: | * **radii** (*float* *or* *array-like*) – Radii of the profile in Mpc/h
* **r_eff** (*float*) – effective radius for 2-halo subtraction in Mpc/h
* **beta_eff** (*float*) – width for effective radius truncation
* **bias** (*float*) – halo bias at large scales
* **xi_mm** (*float* *or* *array-like*) – matter correlation function.
Must have same shape as the radii.
|
| Returns: | 2-halo of the exclusion profile at each radii |
| Return type: | float or array-like |
`cluster_toolkit.exclusion.``xi_C_at_r`(*radii*, *r_A*, *r_B*, *beta_ex*, *xi_2h*)[[source]](_modules/cluster_toolkit/exclusion.html#xi_C_at_r)[¶](#cluster_toolkit.exclusion.xi_C_at_r)
Halo-matter correlation function with halo exclusion incorporated.
| Parameters: | * **radii** (*float* *or* *array-like*) – Radii of the profile in Mpc/h
* **r_A** (*float*) – radius of first correction term in Mpc/h
* **r_B** (*float*) – radius of second correction term in Mpc/h
* **beta_ex** (*float*) – width parameter for exclusion terms
* **xi_2h** (*float* *or* *array-like*) – 2-halo term of the exclusion profile
|
| Returns: | correction term for the exclusion profile |
| Return type: | float or array-like |
`cluster_toolkit.exclusion.``xi_hm_exclusion_at_r`(*radii*, *Mass*, *conc*, *alpha*, *rt*, *beta*, *r_eff*, *beta_eff*, *r_A*, *r_B*, *beta_ex*, *bias*, *xi_mm*, *Omega_m*, *delta=200*)[[source]](_modules/cluster_toolkit/exclusion.html#xi_hm_exclusion_at_r)[¶](#cluster_toolkit.exclusion.xi_hm_exclusion_at_r)
Halo-matter correlation function with halo exclusion incorporated.
| Parameters: | * **radii** (*float* *or* *array-like*) – Radii of the profile in Mpc/h
* **Mass** (*float*) – Mass in Msun/h
* **conc** (*float*) – concentration of the 1-halo profile
* **alpha** (*float*) – Einasto parameter
* **rt** (*float*) – truncation radius in Mpc/h
* **beta** (*float*) – width of the truncation distribution (erfc) in Mpc/h
* **r_eff** (*float*) – effective radius for 2-halo subtraction in Mpc/h
* **beta_eff** (*float*) – width for effective radius truncation
* **r_A** (*float*) – radius of first correction term in Mpc/h
* **r_B** (*float*) – radius of second correction term in Mpc/h
* **beta_ex** (*float*) – width parameter for exclusion terms
* **bias** (*float*) – linear halo bias
* **xi_mm** (*float* *or* *array-like*) – matter correlation function.
same shape as radii
* **Omega_m** (*float*) – matter density fraction
* **delta** (*int*) – halo overdensity. Default is 200
|
| Returns: | exclusion profile at each radii |
| Return type: | float or array-like |
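The truncation function is the simplest building block of the exclusion model; the full `xi_hm_exclusion_at_r` profile additionally requires a bias, a matter correlation function, and the various exclusion radii. A minimal sketch of the truncation function alone, with illustrative parameters:
```
import numpy as np
from cluster_toolkit import exclusion
radii = np.logspace(-1, 1.5, 100)   # Mpc/h
rt, beta = 1.5, 0.2                 # illustrative truncation radius and width
theta = exclusion.theta_at_r(radii, rt, beta)
```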
##### cluster_toolkit.massfunction module[¶](#module-cluster_toolkit.massfunction)
Halo mass function.
`cluster_toolkit.massfunction.``G_at_M`(*M*, *k*, *P*, *Omega_m*, *d=1.97*, *e=1.0*, *f=0.51*, *g=1.228*)[[source]](_modules/cluster_toolkit/massfunction.html#G_at_M)[¶](#cluster_toolkit.massfunction.G_at_M)
Tinker et al. 2008 appendix C multiplicity function G(M) as a function of mass. Default behavior is for \(M_{200m}\) mass definition.
| Parameters: | * **M** (*float* *or* *array like*) – Mass in Msun/h.
* **k** (*array like*) – Wavenumbers of the matter power spectrum in h/Mpc comoving.
* **P** (*array like*) – Linear matter power spectrum in (Mpc/h)^3 comoving.
* **Omega_m** (*float*) – Matter density fraction.
* **d** (*float; optional*) – First Tinker parameter. Default is 1.97.
* **e** (*float; optional*) – Second Tinker parameter. Default is 1.
* **f** (*float; optional*) – Third Tinker parameter. Default is 0.51.
* **g** (*float; optional*) – Fourth Tinker parameter. Default is 1.228.
|
| Returns: | Halo multiplicity \(G(M)\). |
| Return type: | float or array like |
`cluster_toolkit.massfunction.``G_at_sigma`(*sigma*, *d=1.97*, *e=1.0*, *f=0.51*, *g=1.228*)[[source]](_modules/cluster_toolkit/massfunction.html#G_at_sigma)[¶](#cluster_toolkit.massfunction.G_at_sigma)
Tinker et al. 2008 appendix C multiplicity function G(sigma) as a function of sigma.
NOTE: by default, this function is only valid at \(z=0\). For use at higher redshifts either recompute the parameters yourself, or wait for this behavior to be patched.
| Parameters: | * **sigma** (*float* *or* *array like*) – RMS variance of the matter density field.
* **d** (*float; optional*) – First Tinker parameter. Default is 1.97.
* **e** (*float; optional*) – Second Tinker parameter. Default is 1.
* **f** (*float; optional*) – Third Tinker parameter. Default is 0.51.
* **g** (*float; optional*) – Fourth Tinker parameter. Default is 1.228.
|
| Returns: | Halo multiplicity G(sigma). |
| Return type: | float or array like |
`cluster_toolkit.massfunction.``dndM_at_M`(*M*, *k*, *P*, *Omega_m*, *d=1.97*, *e=1.0*, *f=0.51*, *g=1.228*)[[source]](_modules/cluster_toolkit/massfunction.html#dndM_at_M)[¶](#cluster_toolkit.massfunction.dndM_at_M)
Tinker et al. 2008 appendix C mass function at a given mass.
Default behavior is for \(M_{200m}\) mass definition.
NOTE: by default, this function is only valid at \(z=0\). For use at higher redshifts either recompute the parameters yourself, or wait for this behavior to be patched.
| Parameters: | * **M** (*float* *or* *array like*) – Mass in Msun/h.
* **k** (*array like*) – Wavenumbers of the matter power spectrum in h/Mpc comoving.
* **P** (*array like*) – Linear matter power spectrum in (Mpc/h)^3 comoving.
* **Omega_m** (*float*) – Matter density fraction.
* **d** (*float; optional*) – First Tinker parameter. Default is 1.97.
* **e** (*float; optional*) – Second Tinker parameter. Default is 1.
* **f** (*float; optional*) – Third Tinker parameter. Default is 0.51.
* **g** (*float; optional*) – Fourth Tinker parameter. Default is 1.228.
|
| Returns: | Mass function \(dn/dM\). |
| Return type: | float or array like |
`cluster_toolkit.massfunction.``n_in_bin`(*Mlo*, *Mhi*, *Marr*, *dndM*)[[source]](_modules/cluster_toolkit/massfunction.html#n_in_bin)[¶](#cluster_toolkit.massfunction.n_in_bin)
Tinker et al. 2008 appendix C binned mass function.
| Parameters: | * **Mlo** (*float*) – Lower mass edge.
* **Mhi** (*float*) – Upper mass edge.
* **Marr** (*array like*) – Array of locations that dndM has been evaluated at.
* **dndM** (*array like*) – Array of dndM.
|
| Returns: | number density of halos in the mass bin. |
| Return type: | float |
`cluster_toolkit.massfunction.``n_in_bins`(*edges*, *Marr*, *dndM*)[[source]](_modules/cluster_toolkit/massfunction.html#n_in_bins)[¶](#cluster_toolkit.massfunction.n_in_bins)
Tinker et al. 2008 appendix C binned mass function.
| Parameters: | * **edges** (*array like*) – Edges of the mass bins.
* **Marr** (*array like*) – Array of locations that dndM has been evaluated at.
* **dndM** (*array like*) – Array of dndM.
|
| Returns: | number density of halos in the mass bins. Length is `len(edges)-1`. |
| Return type: | numpy.ndarray |
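A minimal sketch of computing the mass function and binned number densities, assuming arrays `k` and `P` hold a precomputed linear power spectrum; the mass grid, bin edges, and Omega_m are illustrative:
```
import numpy as np
from cluster_toolkit import massfunction
# k [h/Mpc] and P [(Mpc/h)^3]: precomputed linear matter power spectrum (assumed).
Marr = np.logspace(12, 16, 100)                 # Msun/h
dndM = massfunction.dndM_at_M(Marr, k, P, 0.3)  # Tinker 2008 mass function
edges = np.logspace(13, 15, 5)                  # mass bin edges, Msun/h
n = massfunction.n_in_bins(edges, Marr, dndM)   # number density of halos in each bin
```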
##### cluster_toolkit.miscentering module[¶](#module-cluster_toolkit.miscentering)
Miscentering effects for projected profiles.
`cluster_toolkit.miscentering.``DeltaSigma_mis_at_R`(*R*, *Rsigma*, *Sigma_mis*)[[source]](_modules/cluster_toolkit/miscentering.html#DeltaSigma_mis_at_R)[¶](#cluster_toolkit.miscentering.DeltaSigma_mis_at_R)
Miscentered excess surface mass density profile at R. Units are Msun h/pc^2 comoving.
| Parameters: | * **R** (*float* *or* *array like*) – Projected radii to evaluate profile.
* **Rsigma** (*array like*) – Projected radii of miscentered Sigma profile.
* **Sigma_mis** (*array like*) – Miscentered Sigma profile.
|
| Returns: | Miscentered excess surface mass density profile. |
| Return type: | float or array like |
`cluster_toolkit.miscentering.``Sigma_mis_at_R`(*R*, *Rsigma*, *Sigma*, *M*, *conc*, *Omega_m*, *Rmis*, *delta=200*, *kernel='rayleigh'*)[[source]](_modules/cluster_toolkit/miscentering.html#Sigma_mis_at_R)[¶](#cluster_toolkit.miscentering.Sigma_mis_at_R)
Miscentered surface mass density [Msun h/pc^2 comoving]
convolved with a distribution for Rmis. Units are Msun h/pc^2 comoving.
| Parameters: | * **R** (*float* *or* *array like*) – Projected radii Mpc/h comoving.
* **Rsigma** (*array like*) – Projected radii of the centered surface mass density profile.
* **Sigma** (*float* *or* *array like*) – Surface mass density Msun h/pc^2 comoving.
* **M** (*float*) – Halo mass Msun/h.
* **conc** (*float*) – concentration.
* **Omega_m** (*float*) – Matter density fraction.
* **Rmis** (*float*) – Miscentered distance in Mpc/h comoving.
* **delta** (*int; optional*) – Overdensity, default is 200.
* **kernel** (*string; optional*) – Kernel for convolution. Options: rayleigh or gamma.
|
| Returns: | Miscentered projected surface mass density. |
| Return type: | float or array like |
`cluster_toolkit.miscentering.``Sigma_mis_single_at_R`(*R*, *Rsigma*, *Sigma*, *M*, *conc*, *Omega_m*, *Rmis*, *delta=200*)[[source]](_modules/cluster_toolkit/miscentering.html#Sigma_mis_single_at_R)[¶](#cluster_toolkit.miscentering.Sigma_mis_single_at_R)
Miscentered surface mass density [Msun h/pc^2 comoving] of a profile miscentered by an amount Rmis Mpc/h comoving. Units are Msun h/pc^2 comoving.
| Parameters: | * **R** (*float* *or* *array like*) – Projected radii Mpc/h comoving.
* **Rsigma** (*array like*) – Projected radii of the centered surface mass density profile.
* **Sigma** (*float* *or* *array like*) – Surface mass density Msun h/pc^2 comoving.
* **M** (*float*) – Halo mass Msun/h.
* **conc** (*float*) – concentration.
* **Omega_m** (*float*) – Matter density fraction.
* **Rmis** (*float*) – Miscentered distance in Mpc/h comoving.
* **delta** (*int; optional*) – Overdensity, default is 200.
|
| Returns: | Miscentered projected surface mass density. |
| Return type: | float or array like |
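A minimal sketch showing how a centered Sigma profile (here the analytic NFW form) is convolved into a miscentered profile; the miscentering length and halo parameters are illustrative:
```
import numpy as np
from cluster_toolkit import deltasigma, miscentering
mass, conc, Omega_m = 1e14, 5.0, 0.3
Rsigma = np.logspace(-2, 2.4, 1000)    # Mpc/h comoving
Sigma = deltasigma.Sigma_nfw_at_R(Rsigma, mass, conc, Omega_m)
Rmis = 0.25                            # illustrative miscentering length, Mpc/h comoving
Sigma_mis = miscentering.Sigma_mis_at_R(Rsigma, Rsigma, Sigma, mass, conc,
                                        Omega_m, Rmis)
R = np.logspace(-1, 1, 50)             # radii for the miscentered DeltaSigma
DeltaSigma_mis = miscentering.DeltaSigma_mis_at_R(R, Rsigma, Sigma_mis)
```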
##### cluster_toolkit.peak_height module[¶](#module-cluster_toolkit.peak_height)
Integrals of the power spectrum. This includes the RMS variance of the density field, sigma2, as well as the peak height, nu. These were previously implemented in the bias module, but have been migrated here.
`cluster_toolkit.peak_height.``dsigma2dM_at_M`(*M*, *k*, *P*, *Omega_m*)[[source]](_modules/cluster_toolkit/peak_height.html#dsigma2dM_at_M)[¶](#cluster_toolkit.peak_height.dsigma2dM_at_M)
Derivative w.r.t. mass of RMS variance in top hat sphere of lagrangian radius R [Mpc/h comoving] corresponding to a mass M [Msun/h] of linear power spectrum.
| Parameters: | * **M** (*float* *or* *array like*) – Mass in Msun/h.
* **k** (*array like*) – Wavenumbers of power spectrum in h/Mpc comoving.
* **P** (*array like*) – Power spectrum in (Mpc/h)^3 comoving.
* **Omega_m** (*float*) – Omega_matter, matter density fraction.
|
| Returns: | d/dM of RMS variance of top hat sphere. |
| Return type: | float or array like |
`cluster_toolkit.peak_height.``nu_at_M`(*M*, *k*, *P*, *Omega_m*)[[source]](_modules/cluster_toolkit/peak_height.html#nu_at_M)[¶](#cluster_toolkit.peak_height.nu_at_M)
Peak height of top hat sphere of lagrangian radius R [Mpc/h comoving] corresponding to a mass M [Msun/h] of linear power spectrum.
| Parameters: | * **M** (*float* *or* *array like*) – Mass in Msun/h.
* **k** (*array like*) – Wavenumbers of power spectrum in h/Mpc comoving.
* **P** (*array like*) – Power spectrum in (Mpc/h)^3 comoving.
* **Omega_m** (*float*) – Omega_matter, matter density fraction.
|
| Returns: | Peak height. |
| Return type: | nu (float or array like) |
`cluster_toolkit.peak_height.``nu_at_R`(*R*, *k*, *P*)[[source]](_modules/cluster_toolkit/peak_height.html#nu_at_R)[¶](#cluster_toolkit.peak_height.nu_at_R)
Peak height of top hat sphere of radius R [Mpc/h comoving] of linear power spectrum.
| Parameters: | * **R** (*float* *or* *array like*) – Radius in Mpc/h comoving.
* **k** (*array like*) – Wavenumbers of power spectrum in h/Mpc comoving.
* **P** (*array like*) – Power spectrum in (Mpc/h)^3 comoving.
|
| Returns: | Peak height. |
| Return type: | float or array like |
`cluster_toolkit.peak_height.``sigma2_at_M`(*M*, *k*, *P*, *Omega_m*)[[source]](_modules/cluster_toolkit/peak_height.html#sigma2_at_M)[¶](#cluster_toolkit.peak_height.sigma2_at_M)
RMS variance in top hat sphere of lagrangian radius R [Mpc/h comoving] corresponding to a mass M [Msun/h] of linear power spectrum.
| Parameters: | * **M** (*float* *or* *array like*) – Mass in Msun/h.
* **k** (*array like*) – Wavenumbers of power spectrum in h/Mpc comoving.
* **P** (*array like*) – Power spectrum in (Mpc/h)^3 comoving.
* **Omega_m** (*float*) – Omega_matter, matter density fraction.
|
| Returns: | RMS variance of top hat sphere. |
| Return type: | float or array like |
`cluster_toolkit.peak_height.``sigma2_at_R`(*R*, *k*, *P*)[[source]](_modules/cluster_toolkit/peak_height.html#sigma2_at_R)[¶](#cluster_toolkit.peak_height.sigma2_at_R)
RMS variance in top hat sphere of radius R [Mpc/h comoving] of linear power spectrum.
| Parameters: | * **R** (*float* *or* *array like*) – Radius in Mpc/h comoving.
* **k** (*array like*) – Wavenumbers of power spectrum in h/Mpc comoving.
* **P** (*array like*) – Power spectrum in (Mpc/h)^3 comoving.
|
| Returns: | RMS variance of a top hat sphere. |
| Return type: | float or array like |
##### cluster_toolkit.profile_derivatives module[¶](#module-cluster_toolkit.profile_derivatives)
Derivatives of halo profiles. Used to plot splashback results.
`cluster_toolkit.profile_derivatives.``drho_nfw_dr_at_R`(*Radii*, *Mass*, *conc*, *Omega_m*, *delta=200*)[[source]](_modules/cluster_toolkit/profile_derivatives.html#drho_nfw_dr_at_R)[¶](#cluster_toolkit.profile_derivatives.drho_nfw_dr_at_R)
Derivative of the NFW halo density profile.
| Parameters: | * **Radii** (*float* *or* *array like*) – 3d distances from halo center in Mpc/h comoving
* **Mass** (*float*) – Mass in Msun/h
* **conc** (*float*) – Concentration
* **Omega_m** (*float*) – Matter fraction of the density
* **delta** (*int; optional*) – Overdensity, default is 200
|
| Returns: | derivative of the NFW profile. |
| Return type: | float or array like |
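A minimal, illustrative call:
```
import numpy as np
from cluster_toolkit import profile_derivatives
Radii = np.logspace(-2, 1, 100)   # Mpc/h comoving
drho_dr = profile_derivatives.drho_nfw_dr_at_R(Radii, 1e14, 5.0, 0.3)
```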
##### cluster_toolkit.xi module[¶](#module-cluster_toolkit.xi)
Correlation functions for matter and halos.
`cluster_toolkit.xi.``xi_2halo`(*bias*, *xi_mm*)[[source]](_modules/cluster_toolkit/xi.html#xi_2halo)[¶](#cluster_toolkit.xi.xi_2halo)
2-halo term in halo-matter correlation function
| Parameters: | * **bias** (*float*) – Halo bias
* **xi_mm** (*float* *or* *array like*) – Matter-matter correlation function
|
| Returns: | 2-halo term in halo-matter correlation function |
| Return type: | float or array like |
`cluster_toolkit.xi.``xi_DK`(*r*, *M*, *conc*, *be*, *se*, *k*, *P*, *om*, *delta=200*, *rhos=-1.0*, *alpha=-1.0*, *beta=-1.0*, *gamma=-1.0*)[[source]](_modules/cluster_toolkit/xi.html#xi_DK)[¶](#cluster_toolkit.xi.xi_DK)
Diemer-Kravtsov 2014 profile.
| Parameters: | * **r** (*float* *or* *array like*) – radii in Mpc/h comoving
* **M** (*float*) – mass in Msun/h
* **conc** (*float*) – Einasto concentration
* **be** (*float*) – DK transition parameter
* **se** (*float*) – DK transition parameter
* **k** (*array like*) – wavenumbers in h/Mpc
* **P** (*array like*) – matter power spectrum in [Mpc/h]^3
* **om** (*float*) – Omega_m, the matter density fraction
* **delta** (*float*) – overdensity of matter. Optional, default is 200
* **rhos** (*float*) – Einasto density. Optional, default is computed from the mass
* **alpha** (*float*) – Einasto parameter. Optional, default is computed from peak height
* **beta** (*float*) – DK 2-halo parameter. Optional, default is 4
* **gamma** (*float*) – DK 2-halo parameter. Optional, default is 8
|
| Returns: | DK profile evaluated at the input radii |
| Return type: | float or array like |
`cluster_toolkit.xi.``xi_DK_appendix1`(*r*, *M*, *conc*, *be*, *se*, *k*, *P*, *om*, *bias*, *xi_mm*, *delta=200*, *rhos=-1.0*, *alpha=-1.0*, *beta=-1.0*, *gamma=-1.0*)[[source]](_modules/cluster_toolkit/xi.html#xi_DK_appendix1)[¶](#cluster_toolkit.xi.xi_DK_appendix1)
Diemer-Kravtsov 2014 profile, first form from the appendix, eq. A3.
| Parameters: | * **r** (*float* *or* *array like*) – radii in Mpc/h comoving
* **M** (*float*) – mass in Msun/h
* **conc** (*float*) – Einasto concentration
* **be** (*float*) – DK transition parameter
* **se** (*float*) – DK transition parameter
* **k** (*array like*) – wavenumbers in h/Mpc
* **P** (*array like*) – matter power spectrum in [Mpc/h]^3
* **om** (*float*) – Omega_m, the matter density fraction
* **bias** (*float*) – halo bias
* **xi_mm** (*float* *or* *array like*) – matter correlation function at r
* **delta** (*float*) – overdensity of matter. Optional, default is 200
* **rhos** (*float*) – Einasto density. Optional, default is computed from the mass
* **alpha** (*float*) – Einasto parameter. Optional, default is computed from peak height
* **beta** (*float*) – DK 2-halo parameter. Optional, default is 4
* **gamma** (*float*) – DK 2-halo parameter. Optional, default is 8
|
| Returns: | DK profile evaluated at the input radii |
| Return type: | float or array like |
`cluster_toolkit.xi.``xi_DK_appendix2`(*r*, *M*, *conc*, *be*, *se*, *k*, *P*, *om*, *bias*, *xi_mm*, *delta=200*, *rhos=-1.0*, *alpha=-1.0*, *beta=-1.0*, *gamma=-1.0*)[[source]](_modules/cluster_toolkit/xi.html#xi_DK_appendix2)[¶](#cluster_toolkit.xi.xi_DK_appendix2)
Diemer-Kravtsov 2014 profile, second form from the appendix, eq. A4.
| Parameters: | * **r** (*float* *or* *array like*) – radii in Mpc/h comoving
* **M** (*float*) – mass in Msun/h
* **conc** (*float*) – Einasto concentration
* **be** (*float*) – DK transition parameter
* **se** (*float*) – DK transition parameter
* **k** (*array like*) – wavenumbers in h/Mpc
* **P** (*array like*) – matter power spectrum in [Mpc/h]^3
* **om** (*float*) – Omega_m, the matter density fraction
* **bias** (*float*) – halo bias
* **xi_mm** (*float* *or* *array like*) – matter correlation function at r
* **delta** (*float*) – overdensity of matter. Optional, default is 200
* **rhos** (*float*) – Einasto density. Optional, default is computed from the mass
* **alpha** (*float*) – Einasto parameter. Optional, default is computed from peak height
* **beta** (*float*) – DK 2-halo parameter. Optional, default is 4
* **gamma** (*float*) – DK 2-halo parameter. Optional, default is 8
|
| Returns: | DK profile evaluated at the input radii |
| Return type: | float or array like |
`cluster_toolkit.xi.``xi_einasto_at_r`(*r*, *M*, *conc*, *alpha*, *om*, *delta=200*, *rhos=-1.0*)[[source]](_modules/cluster_toolkit/xi.html#xi_einasto_at_r)[¶](#cluster_toolkit.xi.xi_einasto_at_r)
Einasto halo profile.
| Parameters: | * **r** (*float* *or* *array like*) – 3d distances from halo center in Mpc/h comoving
* **M** (*float*) – Mass in Msun/h; not used if rhos is specified
* **conc** (*float*) – Concentration
* **alpha** (*float*) – Profile exponent
* **om** (*float*) – Omega_matter, matter fraction of the density
* **delta** (*int*) – Overdensity, default is 200
* **rhos** (*float*) – Scale density in Msun h^2/Mpc^3 comoving; optional
|
| Returns: | Einasto halo profile. |
| Return type: | float or array like |
`cluster_toolkit.xi.``xi_hm`(*xi_1halo*, *xi_2halo*, *combination='max'*)[[source]](_modules/cluster_toolkit/xi.html#xi_hm)[¶](#cluster_toolkit.xi.xi_hm)
Halo-matter correlation function
Note: at the moment you can combine the 1-halo and 2-halo terms by either taking the max of the two or the sum of the two. The ‘combination’ field must be set to either ‘max’ (default) or ‘sum’.
| Parameters: | * **xi_1halo** (*float* *or* *array like*) – 1-halo term
* **xi_2halo** (*float* *or* *array like**,* *same size as xi_1halo*) – 2-halo term
* **combination** (*string; optional*) – specifies how the 1-halo and 2-halo terms are combined, default is ‘max’ which takes the max of the two
|
| Returns: | Halo-matter correlation function |
| Return type: | float or array like |
`cluster_toolkit.xi.``xi_mm_at_r`(*r*, *k*, *P*, *N=500*, *step=0.005*, *exact=False*)[[source]](_modules/cluster_toolkit/xi.html#xi_mm_at_r)[¶](#cluster_toolkit.xi.xi_mm_at_r)
Matter-matter correlation function.
| Parameters: | * **r** (*float* *or* *array like*) – 3d distances from halo center in Mpc/h comoving
* **k** (*array like*) – Wavenumbers of power spectrum in h/Mpc comoving
* **P** (*array like*) – Matter power spectrum in (Mpc/h)^3 comoving
* **N** (*int; optional*) – Quadrature step count, default is 500
* **step** (*float; optional*) – Quadrature step size, default is 5e-3
* **exact** (*boolean*) – Use the slow, exact calculation; default is False
|
| Returns: | Matter-matter correlation function |
| Return type: | float or array like |
`cluster_toolkit.xi.``xi_nfw_at_r`(*r*, *M*, *c*, *Omega_m*, *delta=200*)[[source]](_modules/cluster_toolkit/xi.html#xi_nfw_at_r)[¶](#cluster_toolkit.xi.xi_nfw_at_r)
NFW halo profile correlation function.
| Parameters: | * **r** (*float* *or* *array like*) – 3d distances from halo center in Mpc/h comoving
* **M** (*float*) – Mass in Msun/h
* **c** (*float*) – Concentration
* **Omega_m** (*float*) – Omega_matter, matter fraction of the density
* **delta** (*int; optional*) – Overdensity, default is 200
|
| Returns: | NFW halo profile. |
| Return type: | float or array like |
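To sketch how the pieces of this module combine into a halo-matter correlation function, assuming arrays `k` and `P` hold the linear matter power spectrum; the mass, concentration, and bias are illustrative:
```
import numpy as np
from cluster_toolkit import xi
# k [h/Mpc] and P [(Mpc/h)^3]: precomputed linear matter power spectrum (assumed).
r = np.logspace(-2, 2, 200)                   # 3d radii, Mpc/h comoving
xi_1h = xi.xi_nfw_at_r(r, 1e14, 5.0, 0.3)     # 1-halo (NFW) term
xi_mm = xi.xi_mm_at_r(r, k, P)                # matter-matter correlation function
xi_2h = xi.xi_2halo(2.0, xi_mm)               # 2-halo term with an illustrative bias of 2
xi_full = xi.xi_hm(xi_1h, xi_2h)              # combined with the default 'max' rule
```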
#### Module contents[¶](#module-cluster_toolkit)
cluster_toolkit is a module for computing galaxy cluster models.
Using CLASS[¶](#using-class)
---
[CLASS](http://class-code.net/) is a code used to compute the matter power spectrum. The power spectrum is a key input for the cluster-toolkit. This is meant to be a very short example of how you can call CLASS to get the linear and nonlinear power spectra.
Note
The CLASS github page is [here](https://github.com/lesgourg/class_public). The CLASS documentation is [found here](https://github.com/lesgourg/class_public/blob/master/explanatory.ini).
Note
CLASS uses units of \(Mpc^{-1}\) for \(k\) and \(Mpc^3\) for \(P\).
```
from classy import Class
import numpy as np

#Start by specifying the cosmology
Omega_b = 0.05
Omega_m = 0.3
Omega_cdm = Omega_m - Omega_b
h = 0.7 #H0/100
A_s = 2.1e-9
n_s = 0.96

#Create a params dictionary
#Need to specify the max wavenumber
k_max = 10 #UNITS: 1/Mpc

params = {
    'output':'mPk',
    'non linear':'halofit',
    'Omega_b':Omega_b,
    'Omega_cdm':Omega_cdm,
    'h':h,
    'A_s':A_s,
    'n_s':n_s,
    'P_k_max_1/Mpc':k_max,
    'z_max_pk':10. #Default value is 10
}

#Initialize the cosmology and compute everything
cosmo = Class()
cosmo.set(params)
cosmo.compute()

#Specify k and z
k = np.logspace(-5, np.log10(k_max), num=1000) #Mpc^-1
z = 1.

#Call these for the nonlinear and linear matter power spectra
Pnonlin = np.array([cosmo.pk(ki, z) for ki in k])
Plin = np.array([cosmo.pk_lin(ki, z) for ki in k])

#NOTE: You will need to convert these to h/Mpc and (Mpc/h)^3
#to use in the toolkit. To do this you would do:
k /= h
Plin *= h**3
Pnonlin *= h**3
```
Using CAMB[¶](#using-camb)
---
[CAMB](http://camb.readthedocs.io/en/latest/) is similar to CLASS in that it is a Boltzmann code used to compute the matter power spectrum. A Python wrapper for CAMB was created fairly recently, but it is less documented and less modular than CLASS. For the sake of comparison, you can follow this script to use CAMB to calculate the linear and nonlinear matter power spectra.
Note
CAMB and CLASS have differences that can cause >1% level changes in things like the mass function and possibly lensing. In general, pick one and make it explicit that you are using it when you describe your work.
Note
CAMB outputs tend to have different shapes than you would expect.
```
import camb
from camb import model, initialpower

#Set cosmological parameters
pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
pars.set_dark_energy(w=-1.0)
pars.InitPower.set_params(ns=0.965)

#This sets the k limits and specifies redshifts
pars.set_matter_power(redshifts=[0., 0.8], kmax=2.0)

#Linear P(k)
pars.NonLinear = model.NonLinear_none
results = camb.get_results(pars)
kh, z, pk = results.get_matter_power_spectrum(minkh=1e-4, maxkh=1, npoints=1000)
#Note: the above function has the maxkh argument for specifying a different
#kmax than was used above.
#Note: pk has the shape (N_z, N_k)

#Non-Linear spectra (Halofit)
pars.NonLinear = model.NonLinear_both
results.calc_power_spectra(pars)
khnl, znl, pknl = results.get_matter_power_spectrum(minkh=1e-4, maxkh=1, npoints=1000)
```
Frequently Asked Questions[¶](#frequently-asked-questions)
---
### I’m getting an interpolation error[¶](#i-m-getting-an-interpolation-error)
The backend of the toolkit is written in C. It is likely that you are passing in data that is single precision (32 bit) floating point in Python, while the C code expects double precision (64 bit). You must cast your arrays in Python to correct this, as in the snippet below.
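For example, a hypothetical input array `R` can be promoted to double precision before being passed to any toolkit function:
```
import numpy as np
R = np.asarray(R, dtype=np.float64)   # ensure 64-bit floats for the C backend
```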
### I’m getting random GSL errors[¶](#i-m-getting-random-gsl-errors)
Numpy arrays are not always laid out in memory the way the C backend expects, for instance after slicing or transposing. This can cause strange behavior as seemingly random memory addresses get used, and sometimes even segmentation faults. It is likely that your array in Python is not “C-ordered”. Use [this numpy function](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ascontiguousarray.html) to force your input arrays to have the correct ordering in memory, as in the snippet below.
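For example, a hypothetical array `P` can be forced into C order before being passed to the toolkit:
```
import numpy as np
P = np.ascontiguousarray(P)   # force C-ordering, e.g. after slicing or transposing
```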
These two points are outstanding issues on the [github issues page](https://github.com/tmcclintock/cluster_toolkit/issues). One day they will be taken care of automatically without causing the code to slow significantly. |
mimepost | ruby | Ruby | mimepost
===
Mimepost - the Ruby gem for the mimepost
MimePost API for sending email. You can find out more about MimePost at [https://mimepost.com](http://mimepost.com). For this sample, you can use the api key `special-key` to test the authorization filters.
This SDK is automatically generated by the [Swagger Codegen](https://github.com/swagger-api/swagger-codegen) project:
* API version: 0.1.0
* Package version: 1.0.0
* Build package: io.swagger.codegen.languages.RubyClientCodegen
Installation
---
### Build a gem
To build the Ruby code into a gem:
```
gem build mimepost.gemspec
```
Then either install the gem locally:
```
gem install ./mimepost-1.0.0.gem
```
(for development, run `gem install --dev ./mimepost-1.0.0.gem` to install the development dependencies)
or publish the gem to a gem hosting service, e.g. [RubyGems](https://rubygems.org/).
Finally add this to the Gemfile:
```
gem 'mimepost', '~> 1.0.0'
```
### Install from Git
If the Ruby gem is hosted at a git repository: <https://github.com/mimepost/mimepost-ruby>, then add the following in the Gemfile:
```
gem 'mimepost', :git => 'https://github.com/mimepost/mimepost-ruby.git'
```
### Include the Ruby code directly
Include the Ruby code directly using `-I` as follows:
```
ruby -Ilib script.rb
```
Getting Started
---
Please follow the [installation](#installation) procedure and then run the following code:
```
# Load the gem
require 'mimepost'

# Setup authorization
Mimepost.configure do |config|
  # Configure API key authorization: api_key
  config.api_key['X-Auth-Token'] = 'YOUR API KEY'
  # Uncomment the following line to set a prefix for the API key, e.g. 'Bearer' (defaults to nil)
  #config.api_key_prefix['X-Auth-Token'] = 'Bearer'
end

api_instance = Mimepost::AccountsApi.new

begin
  # Get account profile details
  result = api_instance.account_profile_get
  p result
rescue Mimepost::ApiError => e
  puts "Exception when calling AccountsApi->account_profile_get: #{e}"
end
```
Documentation for API Endpoints
---
All URIs are relative to *<https://api.mimepost.com/v1/>*
| Class | Method | HTTP request | Description |
| --- | --- | --- | --- |
| *Mimepost::AccountsApi* | [**account_profile_get**](docs/AccountsApi.md#account_profile_get) | **GET** /account/profile/ | Get account profile details |
| *Mimepost::AccountsApi* | [**account_profile_post**](docs/AccountsApi.md#account_profile_post) | **POST** /account/profile/ | Update account profile details |
| *Mimepost::AccountsApi* | [**settings_get**](docs/AccountsApi.md#settings_get) | **GET** /settings/ | Get all the settings |
| *Mimepost::AccountsApi* | [**settings_post**](docs/AccountsApi.md#settings_post) | **POST** /settings/ | Set a setting |
| *Mimepost::DomainsApi* | [**domains_get**](docs/DomainsApi.md#domains_get) | **GET** /domains/ | Get a list of all the domains |
| *Mimepost::DomainsApi* | [**domains_id_approve_post**](docs/DomainsApi.md#domains_id_approve_post) | **POST** /domains/id/approve/ | Submit request for the approval of a verified domain |
| *Mimepost::DomainsApi* | [**domains_id_delete**](docs/DomainsApi.md#domains_id_delete) | **DELETE** /domains/id | Remove a single domain |
| *Mimepost::DomainsApi* | [**domains_id_get**](docs/DomainsApi.md#domains_id_get) | **GET** /domains/id | Get the details of a single domain |
| *Mimepost::DomainsApi* | [**domains_id_verify_dkim_post**](docs/DomainsApi.md#domains_id_verify_dkim_post) | **POST** /domains/id/verify_dkim/ | Request for the verification of DKIM record for a single domain |
| *Mimepost::DomainsApi* | [**domains_id_verify_spf_post**](docs/DomainsApi.md#domains_id_verify_spf_post) | **POST** /domains/id/verify_spf/ | Request for the verification of SPF record for a single domain |
| *Mimepost::DomainsApi* | [**domains_id_verify_tracking_post**](docs/DomainsApi.md#domains_id_verify_tracking_post) | **POST** /domains/id/verify_tracking/ | Request for the verification of tracking record for a single domain |
| *Mimepost::DomainsApi* | [**domains_post**](docs/DomainsApi.md#domains_post) | **POST** /domains/ | Add single domain |
| *Mimepost::EmailsApi* | [**send_email**](docs/EmailsApi.md#send_email) | **POST** /emails/ | Send email |
| *Mimepost::StatsApi* | [**emaillogs_get**](docs/StatsApi.md#emaillogs_get) | **GET** /emaillogs/ | Get the logs of a particular date |
| *Mimepost::StatsApi* | [**stats_get**](docs/StatsApi.md#stats_get) | **GET** /stats/ | Get the summary of stats for a range of dates |
| *Mimepost::WebhooksApi* | [**webhooks_get**](docs/WebhooksApi.md#webhooks_get) | **GET** /webhooks/ | Get the list of all the webhooks |
| *Mimepost::WebhooksApi* | [**webhooks_id_delete**](docs/WebhooksApi.md#webhooks_id_delete) | **DELETE** /webhooks/id | Remove a single webhook |
| *Mimepost::WebhooksApi* | [**webhooks_id_get**](docs/WebhooksApi.md#webhooks_id_get) | **GET** /webhooks/id | Get the details of a single webhook |
| *Mimepost::WebhooksApi* | [**webhooks_id_put**](docs/WebhooksApi.md#webhooks_id_put) | **PUT** /webhooks/id | Update the details of a single webhook |
| *Mimepost::WebhooksApi* | [**webhooks_post**](docs/WebhooksApi.md#webhooks_post) | **POST** /webhooks/ | Add single webhook |
Documentation for Models
---
* [Mimepost::AccountProfile](docs/AccountProfile.md)
* [Mimepost::AccountProfileResponse](docs/AccountProfileResponse.md)
* [Mimepost::AccountSettings](docs/AccountSettings.md)
* [Mimepost::ApiResponse](docs/ApiResponse.md)
* [Mimepost::ApiResponseAllWebhooks](docs/ApiResponseAllWebhooks.md)
* [Mimepost::ApiResponseAllWebhooksData](docs/ApiResponseAllWebhooksData.md)
* [Mimepost::ApiResponseDomainsList](docs/ApiResponseDomainsList.md)
* [Mimepost::ApiResponseDomainsListData](docs/ApiResponseDomainsListData.md)
* [Mimepost::ApiResponseEmaillogs](docs/ApiResponseEmaillogs.md)
* [Mimepost::ApiResponseEmaillogsData](docs/ApiResponseEmaillogsData.md)
* [Mimepost::ApiResponseSingleWebhooks](docs/ApiResponseSingleWebhooks.md)
* [Mimepost::ApiResponseStats](docs/ApiResponseStats.md)
* [Mimepost::ApiResponseStatsData](docs/ApiResponseStatsData.md)
* [Mimepost::ApiResponseStatsDataDatewiseSummary](docs/ApiResponseStatsDataDatewiseSummary.md)
* [Mimepost::ApiResponseStatsDataGraphSummary](docs/ApiResponseStatsDataGraphSummary.md)
* [Mimepost::ApiResponseStatsDataTotalSummary](docs/ApiResponseStatsDataTotalSummary.md)
* [Mimepost::ApiResponseStatsDataTotalSummaryStatus](docs/ApiResponseStatsDataTotalSummaryStatus.md)
* [Mimepost::ApiResponseWebhooks](docs/ApiResponseWebhooks.md)
* [Mimepost::ApiResponseWebhooksData](docs/ApiResponseWebhooksData.md)
* [Mimepost::Domain](docs/Domain.md)
* [Mimepost::Email](docs/Email.md)
* [Mimepost::EmailAttachments](docs/EmailAttachments.md)
* [Mimepost::EmailGlobalMergeVars](docs/EmailGlobalMergeVars.md)
* [Mimepost::EmailMergeVars](docs/EmailMergeVars.md)
* [Mimepost::EmailTo](docs/EmailTo.md)
* [Mimepost::Webhook](docs/Webhook.md)
* [Mimepost::Webhook1](docs/Webhook1.md)
Documentation for Authorization
---
### api_key
* **Type**: API key
* **API key parameter name**: X-Auth-Token
* **Location**: HTTP header |
tpr | cran | R | Package ‘tpr’
October 17, 2022
Type Package
Title Temporal Process Regression
Version 0.3-3
Author <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Description Regression models for temporal process responses with
time-varying coefficient.
Depends R (>= 4.0), stats, lgtdl
License GPL (>= 3)
NeedsCompilation yes
Repository CRAN
Date/Publication 2022-10-17 07:10:02 UTC
R topics documented:
tpr-packag... 1
ci.plo... 2
dnas... 3
tp... 5
tpr.pfi... 7
tpr.tes... 8
tpr-package Temporal Process Regression
Description
Fit regression models for temporal process responses with time-varying and time-independent co-
efficients.
Details
An overview of how to use the package, including the most important functions
Author(s)
<NAME> <<EMAIL>>
References
Fine, Yan, and Kosorok (2004): Temporal Process Regression. Biometrika.
ci.plot Confidence Interval Plot
Description
Plotting time-varying coefficient with pointwise confidence.
Usage
ci.plot(x, y, se, level = 0.95, ylim = NULL, newplot = TRUE,
fun = gaussian()$linkinv, dfun = gaussian()$mu.eta, ...)
Arguments
x the x coordinate
y the y coordinate
se the standard error of y
level confidence level
ylim the range of y axis
newplot if TRUE, draw a new plot
fun a transform function
dfun the derivative of the transform function
... arguments to be passed to plot
Author(s)
<NAME> <<EMAIL>>
dnase rhDNase Data
Description
Randomized trial of rhDNase for treatment of cystic fibrosis
Usage
data(dnase)
Format
A data frame with 767 observations on the following 6 variables.
id subject id
rx treatment arm: 0 = placebo, 1 = rhDNase
fev forced expiratory volume, a measure of lung capacity
futime follow time
iv1 IV start time
iv2 IV stop time
Details
During an exacerbation, patients received intravenous (IV) antibiotics and were considered unsus-
ceptible until seven exacerbation-free days beyond the end of IV therapy.
A few subjects were infected at the time of enrollment, for instance a subject has a first infection
interval of -21 to 7. We do not count this first infection as an "event", and the subject first enters the
risk set at day 7.
Source
Therneau and Grambsch (2000). Modeling Survival Data: Extending the Cox model. Springer.
http://www.mayo.edu/hsr/people/therneau/book/data/dnase.html
References
<NAME> Fine (2008). Analysis of Episodic Data with Application to Recurrent Pulmonary Exacer-
bations in Cystic Fibrosis Patients. JASA.
Examples
## This example steps through how to set up for the tpr function.
## Three objects are needed:
## 1) response process (an object of "lgtdl")
## 2) data availability process (an object of "lgtdl")
## 3) a time-independent covariate matrix
data(dnase)
## extracting the unique id and subject level information
dat <- unique(dnase[,c("id", "futime", "fev", "rx")])
## construct temporal process response for recurrent event
rec <- lapply(split(dnase[,c("id", "iv1", "futime")], dnase$id),
function(x) {
v <- x$iv1
maxfu <- max(x$futime)
## iv1 may be negative!!!
if (is.na(v[1])) c(0, maxfu + 1.0)
else if (v[1] < 0) c(v[1] - 1, v[!is.na(v)], maxfu + 1.0)
else c(0, v[!is.na(v)], maxfu + 1.0)
})
yrec <- lapply(rec,
function(x) {
dat <- data.frame(time=x, cov=1:length(x)-1)
len <- length(x)
dat$cov[len] <- dat$cov[len - 1]
as.lgtdl(dat)
})
## construct temporal process response for accumulative days exacerbation
do1.acc <- function(x) {
gap <- x$iv2 - x$iv1 + 1
if (all(is.na(gap))) yy <- tt <- NULL
else {
gap <- na.omit(gap)
yy <- cumsum(rep(1, sum(gap)))
tt <- unlist(sapply(1:length(gap), function(i)
seq(x$iv1[i], x$iv2[i], by=1.0)))
}
yy <- c(0, yy, rev(yy)[1])
if (!is.null(tt[1]) && tt[1] < 0)
tt <- c(tt[1] - 1, tt, max(x$futime) + 1.0)
else tt <- c(0, tt, max(x$futime) + 1.0)
as.lgtdl(data.frame(time=tt, cov=yy))
}
yacc <- lapply(split(dnase[,c("id", "iv1", "iv2", "futime")], dnase$id),
do1.acc)
## construct data availability (or at risk) indicator process
tu <- max(dat$futime) + 0.001
rt <- lapply(1:nrow(dat),
function(i) {
x <- dat[i, "futime"]
time <- c(0, x, tu)
cov <- c(1, 0, 0)
as.lgtdl(data.frame(time=time, cov=cov))
})
## time-independent covariate matrix
xmat <- model.matrix(~ rx + fev, data=dat)
## time-window in days
tlim <- c(10, 168)
good <- unlist(lapply(yrec, function(x) x$time[1] == 0))
## fully functional temporal process regression
## for recurrent event
m.rec <- tpr(yrec, rt, xmat[,1:3], list(), xmat[,-(1:3),drop=FALSE], list(),
tis=10:160, w = rep(1, 151), family = poisson(),
evstr = list(link = 5, v = 3))
par(mfrow=c(1,3), mgp=c(2,1,0), mar=c(4,2,1,0), oma=c(0,2,0,0))
for(i in 1:3) ci.plot(m.rec$tis, m.rec$alpha[,i], sqrt(m.rec$valpha[,i]))
## hypothesis test of significance
## integral test, covariate index 2 and 3
sig.test.int.ff(m.rec, idx=2:3, ncut=2)
sig.test.boots.ff(m.rec, idx=2:3, nsim=1000)
## constant fit
cfit <- cst.fit.ff(m.rec, idx=2:3)
## goodness-of-fit test for constant fit
gof.test.int.ff(m.rec, idx=2:3, ncut=2)
gof.test.boots.ff(m.rec, idx=2:3, nsim=1000)
## for cumulative days in exacerbation
m.acc <- tpr(yacc, rt, xmat[,1:3], list(), xmat[,-(1:3),drop=FALSE], list(),
tis=10:160, w = rep(1, 151), family = gaussian(),
evstr = list(link = 1, v = 1))
par(mfrow=c(1,3), mgp=c(2,1,0), mar=c(4,2,1,0), oma=c(0,2,0,0))
for(i in 1:3) ci.plot(m.acc$tis, m.acc$alpha[,i], sqrt(m.acc$valpha[,i]))
tpr Temporal Process Regression
Description
Regression for temporal process responses and time-independent covariate. Some covariates have
time-varying coefficients while others have time-independent coefficients.
Usage
tpr(y, delta, x, xtv=list(), z, ztv=list(), w, tis,
family = poisson(),
evstr = list(link = 5, v = 3),
alpha = NULL, theta = NULL,
tidx = 1:length(tis),
kernstr = list(kern=1, poly=1, band=range(tis)/50),
control = list(maxit=25, tol=0.0001, smooth=0, intsmooth=0))
Arguments
y Response, a list of "lgtdl" objects.
delta Data availability indicator, a list of "lgtdl" objects.
x Covariate matrix for time-varying coefficients.
xtv A list of list of "lgtdl" for time-varying covariates with time-varying coefficients.
z NOT READY YET; Covariate matrix for time-independent coefficients.
ztv NOT READY YET; A list of list of "lgtdl" for time-varying covariates with
time-independent coefficients.
w Weight vector with the same length of tis.
tis A vector of time points at which the model is to be fitted.
family Specification of the response distribution; see family for glm; this argument is
used in getting initial estimates.
evstr A list of two named components, link function and variance function. link: 1 =
identity, 2 = logit, 3 = probit, 4 = cloglog, 5 = log; v: 1 = gaussian, 2 = binomial,
3 = poisson
alpha A matrix supplying initial values of alpha.
theta A numeric vector supplying initial values of theta.
tidx indices for time points used to get initial values.
kernstr A list of two names components: kern: 1 = Epanechnikov, 2 = triangular, 0 =
uniform; band: bandwidth
control A list of named components: maxit: maximum number of iterations; tol: toler-
ance level of iterations. smooth: 1 = smoothing; 0 = no smoothing.
Details
This wrapper function can be made more user-friendly in the future. For example, evstr can be
determined from the family argument.
Value
An object of class "tpr":
tis same as the input argument
alpha estimate of time-varying coefficients
beta estimate of time-independent coefficients
valpha a matrix of variance of alpha at tis
vbeta a matrix of variance of beta at tis
niter the number of iterations used
infAlpha a list of influence functions for alpha
infBeta a matrix of influence functions for beta
Author(s)
<NAME> <<EMAIL>>
References
Fine, Yan, and Kosorok (2004). Temporal Process Regression. Biometrika.
Yan and Huang (2009). Partly Functional Temporal Process Regression with Semiparametric Profile
Estimating Functions. Biometrics.
tpr.pfit Constant fit of coefficients in a TPR model
Description
Weighted least square estimate of a constant model for time-varying coefficients in a TPR model.
Usage
cst.fit.ff(fit, idx)
Arguments
fit a fitted object from tpr
idx the index of the time-varying coefficients to be fitted as constants
Value
The estimated constant fit, standard error, z-value and p-value.
Author(s)
<NAME> <<EMAIL>>
References
Fine, Yan, and Kosorok (2004). Temporal Process Regression. Biometrika.
See Also
tpr.test
tpr.test Significance and Goodness-of-fit Test of TPR
Description
Two kinds of tests are provided for inference on the coefficients in a fully functional TPR model:
integral test and bootstrap test.
Usage
sig.test.int.ff(fit, chypo = 0, idx, weight = TRUE, ncut = 2)
sig.test.boots.ff(fit, chypo = 0, idx, nsim = 1000, plot = FALSE)
gof.test.int.ff(fit, cfitList = NULL, idx, weight = TRUE, ncut = 2)
gof.test.boots.ff(fit, cfitList = NULL, idx, nsim = 1000, plot = FALSE)
gof.test.boots.pf(fit1, fit2, nsim, p = NULL, q = 1)
Arguments
fit a fitted object from tpr
chypo hypothesized value of coefficients
idx the index of the coefficients to be tested
weight whether or not to use the inverse variation weight
ncut the number of cuts of the interval of interest in integral test
cfitList a list of fitted object from cst.fit.ff
nsim the number of bootstrap samples in bootstrap test
plot whether or not plot
fit1 fit of H0 model (reduced)
fit2 fit of H1 model (full)
p the index of the time-varying estimation in fit2
q the index of the time-independent estimation in fit1
Value
Test statistics and their p-values.
Author(s)
<NAME> <<EMAIL>>
References
Fine, Yan, and Kosorok (2004). Temporal Process Regression. Biometrika.
See Also
tpr
Examples
## see ?tpr |
macro_env | rust | Rust | Crate macro_env
===
Macro_env: An environment variable seeking crate
---
Macro_env is a crate to find environment variables.
Originally designed to easily fetch environment variables from different places without having to change a lot of different code.
By simply changing the SearchType in the macro or in the function, it fetches the variable from a different location.
### Usage
First add:
```
[dependencies]
macro_env = "0.1.*"
```
**Macro**
```
// Import the crate, importing the whole crate is the easiest
// You can also manually import the function you need, for .env search for example:
// `use macro_env::{dotenvreader, macro_env};`
use macro_env::*;
// Fetch the environment variable "OS" from the .env file at the cargo.toml level
macro_env!(File, "OS");

// Fetch the environment variable "OS" from the system environment variables
macro_env!(System, "OS");

// Ask the user to enter the input through the terminal
macro_env!(Input);

// All, without specifying the searchtype, will try to find the variable through all 3 methods:
// First it checks for a .env file
// Then by searching for a system variable
// And if both fail, it will ask the user for input
macro_env!(All, "OS");
macro_env!("OS");
```
**EnvSeeker()**
```
use macro_env::*;
use macro_env::SearchType::*;
// You can use envseeker() when you prefer using a function over a macro
envseeker(Envfile, "OS");
```
Macros
---
* macro_env`macro_env!()` is used to fetch environment variables.
Enums
---
* SearchTypeSearchtype for the `fn envseeker()`, this will define what type of search it performs
Functions
---
* dotenvreaderReads the .env file and tries to find the .env variable.
* envseekerA function instead of a macro to find the environment variable
* inputRequest user input
`input()` fetches stdin.read_lines() and then trims them.
* systemreaderFetch the environment variable from the system environment variable
Macro macro_env::macro_env
===
```
macro_rules! macro_env {
(File, $envvariablename:literal) => { ... };
(System, $envvariablename:literal) => { ... };
(Input) => { ... };
(All, $envvariablename:literal) => { ... };
($envvariablename:literal) => { ... };
}
```
`macro_env!()` is used to fetch environment variables.
Example
---
```
// Import the crate, importing the whole crate is the easiest
// You can also manually import the function you need, for .env search for example:
// `use macro_env::dotenvreader;`
use macro_env::*;
// Fetch the environment variable "OS" from the .env file at the cargo.toml level
macro_env!(File, "OS");

// Fetch the environment variable "OS" from the system environment variables
macro_env!(System, "OS");

// Ask the user to enter the input through the terminal
macro_env!(Input);

// All, and not specifying the searchtype, will try to find the variable through all 3 methods:
// First it checks for a .env file
// Then by searching for a system variable
// And if both fail, it will ask the user for input
macro_env!(All, "OS");
macro_env!("OS");
```
Enum macro_env::SearchType
===
```
pub enum SearchType {
Envfile,
System,
Input,
All,
}
```
Searchtype for the `fn envseeker()`, this will define what type of search it performs
Variants
---
### Envfile
Searching for a .env file
### System
Searching for a system variable
### Input
Requesting user input
### All
First searching for a .env file, then search for a system variable, and finally request the user to input one if all fails
Auto Trait Implementations
---
### impl RefUnwindSafe for SearchType
### impl Send for SearchType
### impl Sync for SearchType
### impl Unpin for SearchType
### impl UnwindSafe for SearchType
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Function macro_env::dotenvreader
===
```
pub fn dotenvreader(envvariablename: String) -> Result<String, Error>
```
Reads the .env file and tries to find the .env variable.
Example
---
```
use macro_env::dotenvreader;
let envvariable :String = dotenvreader("OS".to_string()).unwrap();
```
Function macro_env::envseeker
===
```
pub fn envseeker(searchtype: SearchType, envvariablename: &str) -> String
```
A function instead of a macro to find the environment variable
Example
---
```
use macro_env::*;
use macro_env::SearchType::*;
// Fetch a variable from .env
let filevariable: String = envseeker(Envfile, "OS");

// Fetch a system variable
let systemvariable: String = envseeker(System, "OS");

// Request user input
let inputvariable: String = envseeker(Input, "OS");

// Perform all three methods to find a variable
let allvariable: String = envseeker(All, "OS");
```
Function macro_env::input
===
```
pub fn input() -> Result<String, Error>
```
Request user input
`input()` fetches stdin.read_lines() and then trims them.
Example
---
```
use macro_env::input;
// Request the user to input a variable
let envvariable: String = input().unwrap();
```
Function macro_env::systemreader
===
```
pub fn systemreader(envvariablename: String) -> Result<String, VarError>
```
Fetch the environment variable from the system environment variable
Example
---
```
use macro_env::systemreader;
// Using systemreader is just a shortcut for std::env::var()
let envvariable :String = systemreader("OS".to_string()).unwrap();
``` |
svines | cran | R | Package ‘svines’
October 14, 2022
Title Stationary Vine Copula Models
Version 0.1.4
Description Provides functionality to fit and simulate from stationary vine
copula models for time series, see Nagler et al. (2022)
<doi:10.1016/j.jeconom.2021.11.015>.
License GPL-3
Encoding UTF-8
LazyData true
URL https://github.com/tnagler/svines
BugReports https://github.com/tnagler/svines/issues
Depends R (>= 3.3.0), rvinecopulib (>= 0.6.1.1.2)
Imports Rcpp, assertthat, univariateML, wdm, fGarch
LinkingTo RcppEigen, Rcpp, RcppThread, BH, wdm, rvinecopulib
Suggests testthat, ggraph, covr
RoxygenNote 7.1.2
NeedsCompilation yes
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-04-13 08:20:02 UTC
R topics documented:
return... 2
svin... 2
svineco... 3
svinecop_dis... 6
svinecop_hessia... 7
svinecop_logli... 8
svinecop_score... 9
svinecop_si... 10
svine_bootstrap_model... 11
svine_dis... 12
svine_hessia... 13
svine_logli... 14
svine_score... 14
svine_si... 15
returns Stock returns of 20 companies
Description
A dataset containing the daily log-returns of 20 companies. The observation period is
from 2015-01-01 to 2019-12-31.
Usage
returns
Format
A data frame with 1296 rows and 20 variables:
Source
Yahoo finance.
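Examples
# illustrative usage: load the data and inspect a few of the series
data(returns)
dim(returns) # 1296 x 20
head(returns[, 1:3])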
svine Stationary vine distribution models
Description
Automated fitting or creation of custom S-vine distribution models
Usage
svine(
data,
p,
margin_families = univariateML::univariateML_models,
selcrit = "aic",
...
)
Arguments
data a matrix or data.frame of data.
p the Markov order.
margin_families
either a vector of univariateML families to select from (used for every margin) or
a list with one entry for every variable. Can also be "empirical" for empirical
cdfs.
selcrit criterion for family selection, either "loglik", "aic", "bic", "mbicv".
... arguments passed to svinecop().
Value
Returns the fitted model as an object with classes svine and svine_dist. A list with entries
• $margins: list of marginal models from univariateML,
• $copula: an object of svinecop_dist.
See Also
svine_dist, svine_loglik, svine_sim, svine_bootstrap_models
Examples
# load data set
data(returns)
# fit parametric S-vine model with Markov order 1
fit <- svine(returns[1:100, 1:3], p = 1, family_set = "parametric")
fit
summary(fit)
plot(fit$copula)
contour(fit$copula)
logLik(fit)
pairs(svine_sim(500, rep = 1, fit))
svinecop Stationary vine copula models
Description
Automated fitting or creation of custom S-vine copula models
Usage
svinecop(
data,
p,
var_types = rep("c", NCOL(data)),
family_set = "all",
cs_structure = NA,
out_vertices = NA,
in_vertices = NA,
type = "S",
par_method = "mle",
nonpar_method = "constant",
mult = 1,
selcrit = "aic",
weights = numeric(),
psi0 = 0.9,
presel = TRUE,
trunc_lvl = Inf,
tree_crit = "tau",
threshold = 0,
keep_data = FALSE,
show_trace = FALSE,
cores = 1
)
Arguments
data a matrix or data.frame (copula data should have approximately uniform mar-
gins).
p the Markov order.
var_types variable types; discrete variables not (yet) allowed.
family_set a character vector of families; see rvinecopulib::bicop() for additional op-
tions.
cs_structure the cross-sectional vine structure (see rvinecopulib::rvine_structure());
cs_structure = NA performs automatic structure selection.
out_vertices the out-vertex; if NA, the out-vertex is selected automatically if no structure is
provided, and is equivalent to 1 if a structure is provided.
in_vertices the in-vertex; if NA, the in-vertex is selected automatically if no structure is pro-
vided, and is equivalent to 1 if a structure is provided.
type type of stationary vine; "S" (default) for general S-vines, "D" for Smith’s long
D-vine, "M" for Beare and Seo’s M-vine.
par_method the estimation method for parametric models, either "mle" for sequential maximum
likelihood, or "itau" for inversion of Kendall’s tau (only available for one-parameter
families and "t").
nonpar_method the estimation method for nonparametric models, either "constant" for the
standard transformation estimator, or "linear"/"quadratic" for the local-likelihood
approximations of order one/two.
mult multiplier for the smoothing parameters of nonparametric families. Values larger
than 1 make the estimate more smooth, values less than 1 less smooth.
selcrit criterion for family selection, either "loglik", "aic", "bic", "mbic". For
vinecop() there is the additional option "mbicv".
weights optional vector of weights for each observation.
psi0 prior probability of a non-independence copula (only used for selcrit = "mbic"
and selcrit = "mbicv").
presel whether the family set should be thinned out according to symmetry character-
istics of the data.
trunc_lvl currently unsupported.
tree_crit the criterion for tree selection, one of "tau", "rho", "hoeffd", or "mcor" for
Kendall’s τ , Spearman’s ρ, Hoeffding’s D, and maximum correlation, respec-
tively.
threshold for thresholded vine copulas; NA indicates that the threshold should be selected
automatically by rvinecopulib::mBICV().
keep_data whether the data should be stored (necessary for using fitted()).
show_trace logical; whether a trace of the fitting progress should be printed.
cores number of cores to use; if more than 1, estimation of pair copulas within a tree
is done in parallel.
Value
Returns the fitted model as an object with classes svinecop and svinecop_dist. Also inherits
from vinecop, vinecop_dist such that many functions from rvinecopulib can be called.
Examples
# load data set
data(returns)
# convert to pseudo observations with empirical cdf for marginal distributions
u <- pseudo_obs(returns[1:100, 1:3])
# fit parametric S-vine copula model with Markov order 1
fit <- svinecop(u, p = 1, family_set = "parametric")
fit
summary(fit)
plot(fit)
contour(fit)
logLik(fit)
pairs(svinecop_sim(500, rep = 1, fit))
svinecop_dist Custom S-vine models
Description
Custom S-vine models
Usage
svinecop_dist(
pair_copulas,
cs_structure,
p,
out_vertices,
in_vertices,
var_types = rep("c", dim(cs_structure)[1])
)
Arguments
pair_copulas A nested list of ’bicop_dist’ objects, where pair_copulas[[t]][[e]] corre-
sponds to the pair-copula at edge e in tree t. Only the most-left unique pair
copulas are used, others can be omitted.
cs_structure The cross-sectional structure. Either a matrix, or an rvine_structure object;
see rvinecopulib::rvine_structure()
p the Markov order.
out_vertices the out-vertex; if NA, the out-vertex is selected automatically if no structure is
provided, and is equivalent to 1 if a structure is provided.
in_vertices the in-vertex; if NA, the in-vertex is selected automatically if no structure is pro-
vided, and is equivalent to 1 if a structure is provided.
var_types variable types; discrete variables not (yet) allowed.
Value
Returns the model as an object with classes svinecop_dist. Also inherits from vinecop_dist
such that many functions from rvinecopulib can be called.
See Also
svinecop_loglik, svinecop_sim, svinecop_hessian, svinecop_scores
Examples
cs_struct <- cvine_structure(1:2)
pcs <- list(
list( # first tree
bicop_dist("clayton", 0, 3), # cross sectional copula
bicop_dist("gaussian", 0, -0.1) # serial copula
),
list( # second tree
bicop_dist("gaussian", 0, 0.2), bicop_dist("indep")
),
list( # third tree
bicop_dist("indep")
)
)
cop <- svinecop_dist(
pcs, cs_struct, p = 1, out_vertices = 1:2, in_vertices = 1:2)
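# a quick sanity check of the hand-built copula: simulate and plot
# (svinecop_sim() is documented separately; rep = 1 gives an n-by-2 matrix here)
pairs(svinecop_sim(500, rep = 1, cop))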
svinecop_hessian Expected hessian for S-vine copula models
Description
Expected hessian for S-vine copula models
Usage
svinecop_hessian(u, model, cores = 1)
Arguments
u the data; should have approximately uniform margins.
model model inheriting from class svinecop_dist.
cores number of cores to use; if larger than one, computations are done in parallel across
cores batches.
Value
Returns the observed Hessian matrix. Rows/columns correspond to model parameters in the
order: copula parameters of first tree, copula parameters of second tree, etc. Duplicated parameters
in the copula model are omitted.
See Also
svinecop_scores
Examples
# load data set
data(returns)
# convert to uniform margins
u <- pseudo_obs(returns[1:100, 1:3])
# fit parametric S-vine copula model with Markov order 1
fit <- svinecop(u, p = 1, family_set = "parametric")
svinecop_loglik(u, fit)
svinecop_scores(u, fit)
svinecop_hessian(u, fit)
svinecop_loglik Log-likelihood for S-vine copula models
Description
Log-likelihood for S-vine copula models
Usage
svinecop_loglik(u, model, cores = 1)
Arguments
u the data; should have approximately uniform margins.
model model inheriting from class svinecop_dist.
cores number of cores to use; if larger than one, computations are done in parallel across
cores batches.
Value
Returns the log-likelihood of the data for the model.
Examples
# load data set
data(returns)
# convert to uniform margins
u <- pseudo_obs(returns[1:100, 1:3])
# fit parametric S-vine copula model with Markov order 1
fit <- svinecop(u, p = 1, family_set = "parametric")
svinecop_loglik(u, fit)
svinecop_scores(u, fit)
svinecop_hessian(u, fit)
svinecop_scores Log-likelihood scores for S-vine copula models
Description
Log-likelihood scores for S-vine copula models
Usage
svinecop_scores(u, model, cores = 1)
Arguments
u the data; should have approximately uniform margins.
model model inheriting from class svinecop_dist.
cores number of cores to use; if larger than one, computations are done in parallel across
cores batches.
Value
A matrix containing the score vectors in its rows, where each row corresponds to one observation
(row in u). The columns correspond to model parameters in the order: copula parameters of first
tree, copula parameters of second tree, etc. Duplicated parameters in the copula model are omitted.
See Also
svinecop_hessian
Examples
# load data set
data(returns)
# convert to uniform margins
u <- pseudo_obs(returns[1:100, 1:3])
# fit parametric S-vine copula model with Markov order 1
fit <- svinecop(u, p = 1, family_set = "parametric")
svinecop_loglik(u, fit)
svinecop_scores(u, fit)
svinecop_hessian(u, fit)
svinecop_sim Simulate from a S-vine copula model
Description
Simulate from a S-vine copula model
Usage
svinecop_sim(n, rep, model, past = NULL, qrng = FALSE, cores = 1)
Arguments
n how many steps of the time series to simulate.
rep number of replications; rep time series of length n are generated.
model a S-vine copula model object (inheriting from svinecop_dist).
past (optional) matrix of past observations. If provided, time series are simulated
conditional on the past.
qrng if TRUE, generates quasi-random numbers using the multivariate Generalized
Halton sequence up to dimension 300 and the Generalized Sobol sequence in
higher dimensions (default qrng = FALSE).
cores number of cores to use; if larger than one, computations are done parallel over
replications.
Value
An n-by-d-by-rep array, where d is the cross-sectional dimension of the model. This reduces to an
n-by-d matrix if rep == 1.
Examples
# load data set
data(returns)
# convert to uniform margins
u <- pseudo_obs(returns[1:100, 1:3])
# fit parametric S-vine copula model with Markov order 1
fit <- svinecop(u, p = 1, family_set = "parametric")
pairs(u) # original data
pairs(svinecop_sim(100, rep = 1, model = fit)) # simulated data
# simulate the next day conditionally on the past 100 times
pairs(t(svinecop_sim(1, rep = 100, model = fit, past = u)[1, , ]))
svine_bootstrap_models
Bootstrap S-vine models
Description
Computes bootstrap replicates of a given model using the one-step block multiplier bootstrap of
Nagler et al. (2022).
Usage
svine_bootstrap_models(n_models, model)
Arguments
n_models number of bootstrap replicates.
model the initial fitted model
Value
A list of length n_models, with each entry representing one bootstrapped model as object of class
svine.
Examples
data(returns)
dat <- returns[1:100, 1:2]
# fit parametric S-vine model with Markov order 1
model <- svine(dat, p = 1, family_set = "parametric")
# compute 10 bootstrap replicates of the model
boot_models <- svine_bootstrap_models(10, model)
# compute bootstrap replicates of 90%-quantile of X_1 + X_2.
mu_boot <- sapply(
boot_models,
function(m) {
xx <- rowSums(t(svine_sim(1, 10^2, m, past = dat)[1, ,]))
quantile(xx, 0.9)
}
)
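# summarise the bootstrap distribution of the quantile estimate,
# e.g. with a simple percentile interval (illustrative only)
quantile(mu_boot, c(0.025, 0.975))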
svine_dist Custom S-vine distribution models
Description
Custom S-vine distribution models
Usage
svine_dist(margins, copula)
Arguments
margins A list of length d containing univariateML objects.
copula the copula model; an object of class svinecop_dist with cross-sectional di-
mension d.
Value
Returns the model as an object with class svine_dist. A list with entries
• $margins: list of marginal models from univariateML,
• $copula: an object of svinecop_dist.
See Also
svine_dist, svine_loglik, svine_sim, svine_bootstrap_models
Examples
## marginal objects
# create dummy univariateML models
univ1 <- univ2 <- univariateML::mlnorm(rnorm(10))
# modify the parameters to N(5, 10) and N(0, 2) distributions
univ1[] <- c(5, 10)
univ2[] <- c(0, 2)
## copula object
cs_struct <- cvine_structure(1:2)
pcs <- list(
list( # first tree
bicop_dist("clayton", 0, 3), # cross sectional copula
bicop_dist("gaussian", 0, -0.1) # serial copula
),
list( # second tree
bicop_dist("gaussian", 0, 0.2), bicop_dist("indep")
),
list( # third tree
bicop_dist("indep")
)
)
cop <- svinecop_dist(
pcs, cs_struct, p = 1, out_vertices = 1:2, in_vertices = 1:2)
model <- svine_dist(margins = list(univ1, univ2), copula = cop)
summary(model)
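# the hand-built model can be used like a fitted one, e.g. to simulate
# a short bivariate series (svine_sim() is documented separately)
pairs(svine_sim(200, rep = 1, model))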
svine_hessian Expected hessian of a parametric S-vine model
Description
Expected hessian of a parametric S-vine model
Usage
svine_hessian(x, model, cores = 1)
Arguments
x the data.
model S-vine model (inheriting from svine_dist).
cores number of cores to use.
Value
Returns a k-by-k matrix, where k is the total number of parameters in the model. Parameters
are ordered as follows: marginal parameters, copula parameters of first tree, copula parameters of
second tree, etc. Duplicated parameters in the copula model are omitted.
Examples
data(returns)
dat <- returns[1:100, 1:2]
# fit parametric S-vine model with Markov order 1
model <- svine(dat, p = 1, family_set = "parametric")
# Implementation of asymptotic variances
I <- cov(svine_scores(dat, model))
H <- svine_hessian(dat, model)
Hi <- solve(H)
Hi %*% I %*% t(Hi) / nrow(dat)
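# the square roots of the diagonal give approximate standard errors
se <- sqrt(diag(Hi %*% I %*% t(Hi) / nrow(dat)))
se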
svine_loglik Log-likelihood for S-vine models
Description
Log-likelihood for S-vine models
Usage
svine_loglik(x, model, cores = 1)
Arguments
x the data.
model model inheriting from class svine_dist.
cores number of cores to use; if larger than one, computations are done in parallel across
cores batches.
Value
Returns the log-likelihood of the data for the model.
Examples
# load data set
data(returns)
# fit parametric S-vine model with Markov order 1
fit <- svine(returns[1:100, 1:3], p = 1, family_set = "parametric")
svine_loglik(returns[1:100, 1:3], fit)
svine_scores Score function of parametric S-vine models
Description
Score function of parametric S-vine models
Usage
svine_scores(x, model, cores = 1)
Arguments
x the data.
model S-vine model (inheriting from svine_dist).
cores number of cores to use.
Value
Returns an n-by-k matrix, where n = NROW(x) and k is the total number of parameters in the model.
Parameters are ordered as follows: marginal parameters, copula parameters of first tree, copula
parameters of second tree, etc. Duplicated parameters in the copula model are omitted.
Examples
data(returns)
dat <- returns[1:100, 1:2]
# fit parametric S-vine model with Markov order 1
model <- svine(dat, p = 1, family_set = "parametric")
# Implementation of asymptotic variances
I <- cov(svine_scores(dat, model))
H <- svine_hessian(dat, model)
Hi <- solve(H)
Hi %*% I %*% t(Hi) / nrow(dat)
svine_sim Simulate from a S-vine model
Description
Simulate from a S-vine model
Usage
svine_sim(n, rep, model, past = NULL, qrng = FALSE, cores = 1)
Arguments
n how many steps of the time series to simulate.
rep number of replications; rep time series of length n are generated.
model an S-vine distribution model object (inheriting from svine_dist).
past (optional) matrix of past observations. If provided, time series are simulated
conditional on the past.
qrng if TRUE, generates quasi-random numbers using the multivariate Generalized
Halton sequence up to dimension 300 and the Generalized Sobol sequence in
higher dimensions (default qrng = FALSE).
cores number of cores to use; if larger than one, computations are done parallel over
replications.
Value
An n-by-d-by-rep array, where d is the cross-sectional dimension of the model. This reduces to an
n-by-d matrix if rep == 1.
Examples
# load data set
data(returns)
returns <- returns[1:100, 1:3]
# fit parametric S-vine model with Markov order 1
fit <- svine(returns, p = 1, family_set = "parametric")
pairs(returns) # original data
pairs(svine_sim(100, rep = 1, model = fit)) # simulated data
# simulate the next day conditionally on the past 100 times
pairs(t(svine_sim(1, rep = 100, model = fit, past = returns)[1, , ])) |
github.com/mattermost/focalboard | go | Go | README
[¶](#section-readme)
---
### Focalboard
![CI Status](https://github.com/mattermost/focalboard/actions/workflows/ci.yml/badge.svg)
![CodeQL](https://github.com/mattermost/focalboard/actions/workflows/codeql-analysis.yml/badge.svg)
![Dev Release](https://github.com/mattermost/focalboard/actions/workflows/dev-release.yml/badge.svg)
![Prod Release](https://github.com/mattermost/focalboard/actions/workflows/prod-release.yml/badge.svg)
[![Translation status](https://translate.mattermost.com/widgets/focalboard/-/svg-badge.svg)](https://translate.mattermost.com/engage/focalboard/)
Like what you see? 👀 Give us a GitHub Star! ⭐
[![Focalboard](https://github.com/mattermost/focalboard/raw/v7.10.4/website/site/static/img/hero.jpg)](https://www.focalboard.com)
[Focalboard](https://www.focalboard.com) is an open source, multilingual, self-hosted project management tool that's an alternative to Trello, Notion, and Asana.
It helps define, organize, track and manage work across individuals and teams. Focalboard comes in two main editions:
* **[Mattermost Boards](https://www.focalboard.com/download/mattermost/)**: A self-hosted or **[free cloud server](https://mattermost.com/sign-up/?utm_source=github&utm_campaign=focalboard)** for your team to plan and collaborate.
* **[Personal Desktop](https://www.focalboard.com/download/personal-edition/desktop/)**: A standalone, single-user [macOS](https://apps.apple.com/app/apple-store/id1556908618?pt=2114704&ct=website&mt=8), [Windows](https://www.microsoft.com/store/apps/9NLN2T0SX9VF?cid=website), or [Linux](https://www.focalboard.com/download/personal-edition/desktop/#linux-desktop) desktop app for your own todos and personal projects.
Focalboard can also be installed as a standalone **[Personal Server](https://www.focalboard.com/download/personal-edition/ubuntu/)** for development and personal use.
#### Try Focalboard
##### Mattermost Boards - [now available as a free cloud server](https://mattermost.com/sign-up/?utm_source=github&utm_campaign=focalboard)
**Mattermost Boards** combines project management tools with messaging and collaboration for teams of all sizes. To access and use **Mattermost Boards**, install or upgrade to Mattermost v6.0 or later as a [self-hosted server](https://docs.mattermost.com/guides/deployment.html?utm_source=github&utm_campaign=focalboard) or [Cloud server](https://mattermost.com/sign-up/?utm_source=github&utm_campaign=focalboard). After logging into Mattermost, select the menu in the top left corner and select **Boards**.
***Mattermost Boards** is installed and enabled by default in Mattermost v6.0 and later.*
See the [plugin setup guide](https://www.focalboard.com/download/mattermost/) for more details.
##### Personal Desktop (Windows, Mac or Linux Desktop)
* **Windows**: Download from the [Windows App Store](https://www.microsoft.com/store/productId/9NLN2T0SX9VF) or download `focalboard-win.zip` from the [latest release](https://github.com/mattermost/focalboard/releases), unpack, and run `Focalboard.exe`.
* **Mac**: Download from the [Mac App Store](https://apps.apple.com/us/app/focalboard-insiders/id1556908618?mt=12).
* **Linux Desktop**: Download `focalboard-linux.tar.gz` from the [latest release](https://github.com/mattermost/focalboard/releases), unpack, and open `focalboard-app`.
##### Personal Server
**Ubuntu**: You can download and run the compiled Focalboard **Personal Server** on Ubuntu by following [our latest install guide](https://www.focalboard.com/download/personal-edition/ubuntu/).
##### API Docs
Boards API docs can be found over at <https://htmlpreview.github.io/?https://github.com/mattermost/focalboard/blob/main/server/swagger/docs/html/index.html>
#### Contribute to Focalboard
Contribute code, bug reports, and ideas to the future of the Focalboard project. We welcome your input! Please see [CONTRIBUTING](https://github.com/mattermost/focalboard/blob/v7.10.4/CONTRIBUTING.md) for details on how to get involved.
##### Getting started
Our [developer guide](https://developers.mattermost.com/contribute/focalboard/personal-server-setup-guide) has detailed instructions on how to set up your development environment for the **Personal Server**. It also provides more information about contributing to our open source community.
Clone [mattermost-server](https://github.com/mattermost/mattermost-server) into sibling directory.
Create an `.env` file in the focalboard directory that contains:
```
EXCLUDE_ENTERPRISE="1"
```
To build the server:
```
make prebuild
make
```
To run the server:
```
./bin/focalboard-server
```
Then navigate your browser to [`http://localhost:8000`](http://localhost:8000) to access your Focalboard server. The port is configured in `config.json`.
Once the server is running, you can rebuild just the web app via `make webapp` in a separate terminal window. Reload your browser to see the changes.
##### Building and running standalone desktop apps
You can build standalone apps that package the server to run locally against SQLite:
* **Windows**:
+ *Requires Windows 10, [Windows 10 SDK](https://developer.microsoft.com/en-us/windows/downloads/sdk-archive/) 10.0.19041.0, and .NET 4.8 developer pack*
+ Open a `git-bash` prompt.
+ Run `make prebuild`
+ The above prebuild step needs to be run only when you make changes to or want to install your npm dependencies, etc.
+ Once the prebuild is completed, you can keep repeating the below steps to build the app & see the changes.
+ Run `make win-wpf-app`
+ Run `cd win-wpf/msix && focalboard.exe`
* **Mac**:
+ *Requires macOS 11.3+ and Xcode 13.2.1+*
+ Run `make prebuild`
+ The above prebuild step needs to be run only when you make changes to or want to install your npm dependencies, etc.
+ Once the prebuild is completed, you can keep repeating the below steps to build the app & see the changes.
+ Run `make mac-app`
+ Run `open mac/dist/Focalboard.app`
* **Linux**:
+ *Tested on Ubuntu 18.04*
+ Install `webgtk` dependencies
- Run `sudo apt-get install libgtk-3-dev`
- Run `sudo apt-get install libwebkit2gtk-4.0-dev`
+ Run `make prebuild`
+ The above prebuild step needs to be run only when you make changes to or want to install your npm dependencies, etc.
+ Once the prebuild is completed, you can keep repeating the below steps to build the app & see the changes.
+ Run `make linux-app`
+ Uncompress `linux/dist/focalboard-linux.tar.gz` to a directory of your choice
+ Run `focalboard-app` from the directory you have chosen
* **Docker**:
	+ To run it locally from the official image:
- `docker run -it -p 80:8000 mattermost/focalboard`
+ To build it for your current architecture:
- `docker build -f docker/Dockerfile .`
+ To build it for a custom architecture (experimental):
- `docker build -f docker/Dockerfile --platform linux/arm64 .`
Cross-compilation currently isn't fully supported, so please build on the appropriate platform. Refer to the GitHub Actions workflows (`build-mac.yml`, `build-win.yml`, `build-ubuntu.yml`) for the detailed list of steps on each platform.
##### Unit testing
Before checking in commits, run `make ci`, which is similar to the `.gitlab-ci.yml` workflow and includes:
* **Server unit tests**: `make server-test`
* **Web app ESLint**: `cd webapp; npm run check`
* **Web app unit tests**: `cd webapp; npm run test`
* **Web app UI tests**: `cd webapp; npm run cypress:ci`
##### Translating
Help translate Focalboard! The app is already translated into several languages. We welcome corrections and new language translations! You can add new languages or improve existing translations at [Weblate](https://translate.mattermost.com/engage/focalboard/).
##### Staying informed
Are you interested in influencing the future of the Focalboard open source project? Here's how you can get involved:
* **Changes**: See the [CHANGELOG](https://github.com/mattermost/focalboard/blob/v7.10.4/CHANGELOG.md) for the latest updates
* **GitHub Discussions**: Join the [Developer Discussion](https://github.com/mattermost/focalboard/discussions) board
* **Bug Reports**: [File a bug report](https://github.com/mattermost/focalboard/issues/new?assignees=&labels=bug&template=bug_report.md&title=)
* **Chat**: Join the [Focalboard community channel](https://community.mattermost.com/core/channels/focalboard)
|
frab | cran | R | Package ‘frab’
August 16, 2023
Type Package
Title How to Add Two Tables
Version 0.0-3
Maintainer <NAME> <<EMAIL>>
Description Methods to ``add'' two tables; also an alternative
interpretation of named vectors as generalized tables, so that
c(a=1,b=2,c=3) + c(b=3,a=-1) will return c(b=5,c=3). Uses
'disordR' discipline (Hankin, 2022, <arxiv:2210.03856>).
Extraction and replacement methods are provided. The underlying
mathematical structure is the Free Abelian group, hence the name.
To cite in publications please use Hankin (2023)
<arxiv:2307:13184>.
License GPL (>= 2)
Depends R (>= 3.5.0)
Suggests knitr, markdown, rmarkdown, testthat, mvtnorm
VignetteBuilder knitr
Imports Rcpp (>= 1.0-7), mathjaxr, disordR (>= 0.9-8-1), methods
LinkingTo Rcpp
URL https://github.com/RobinHankin/frab
BugReports https://github.com/RobinHankin/frab
RdMacros mathjaxr
R topics documented:
frab-package
Arith
Compare-methods
Extract
frab
frab-class
misc
namedvector
pmax
print
rfrab
sparsetable
table
zero
frab-package How to Add Two Tables
Description
Methods to "add" two tables; also an alternative interpretation of named vectors as generalized
tables, so that c(a=1,b=2,c=3) + c(b=3,a=-1) will return c(b=5,c=3). Uses ’disordR’ discipline
(Hankin, 2022, <arxiv:2210.03856>). Extraction and replacement methods are provided. The un-
derlying mathematical structure is the Free Abelian group, hence the name. To cite in publications
please use Hankin (2023) <arxiv:2307:13184>.
Details
The DESCRIPTION file:
Package: frab
Type: Package
Title: How to Add Two Tables
Version: 0.0-3
Authors@R: person(given=c("Robin", "<NAME>."), family="Hankin", role = c("aut","cre"), email="<EMAIL>
Maintainer: <NAME> <<EMAIL>>
Description: Methods to "add" two tables; also an alternative interpretation of named vectors as generalized tables,
License: GPL (>= 2)
Depends: R (>= 3.5.0)
Suggests: knitr, markdown, rmarkdown, testthat, mvtnorm
VignetteBuilder: knitr
Imports: Rcpp (>= 1.0-7), mathjaxr, disordR (>= 0.9-8-1), methods
LinkingTo: Rcpp
URL: https://github.com/RobinHankin/frab
BugReports: https://github.com/RobinHankin/frab
RdMacros: mathjaxr
Author: <NAME> [aut, cre] (<https://orcid.org/0000-0001-5982-0415>)
Index of help topics:
Compare-methods Comparison methods
arith Arithmetic methods for class '"frab"'
extract Extraction and replacement methods for class
'"frab"'
frab Creating 'frab' objects
frab-class Class "frab"
frab-package How to Add Two Tables
is.namedvector Named vectors and the frab package
misc Miscellaneous functions
pmax Parallel maxima and minima for frabs
print Methods for printing frabs
rfrab Random frabs
sparsetable Generalized sparse tables: 'sparsetable'
objects
table Tables and frab objects
zero The zero frab object
Author(s)
NA
Maintainer: <NAME> <<EMAIL>>
Examples
x <- frab(c(a=1, b=2, c=5))
y <- frab(c(b=-2, c=1, d=8))
x+y
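# the named-vector example from the Description
frab(c(a=1, b=2, c=3)) + frab(c(b=3, a=-1)) # equals frab(c(b=5, c=3))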
Arith Arithmetic methods for class "frab"
Description
The frab class provides basic arithmetic methods for frab objects. Low-level helper functions
c_frab_eq() and c_frab_pmax() are documented here for consistency; but technically c_frab_eq()
is a Comparison operator, and c_frab_pmax() is an “Extremes” function. They are documented at
Compare.Rd and pmax.Rd respectively.
Usage
frab_negative(x)
frab_reciprocal(x)
frab_plus_frab(F1,F2)
frab_multiply_numeric(e1,e2)
frab_power_numeric(e1,e2)
numeric_power_frab(e1,e2)
frab_unary(e1,e2)
frab_arith_frab(e1,e2)
frab_plus_numeric(e1,e2)
frab_arith_numeric(e1,e2)
numeric_arith_frab(e1,e2)
Arguments
e1,e2,x,F1,F2 Objects of class frab, coerced if needed
Value
Return frab objects
Methods
Arith signature(e1="frab" , e2="missing"): blah blah blah
Arith signature(e1="frab" , e2="frab" ): ...
Arith signature(e1="frab" , e2="numeric"): ...
Arith signature(e1="numeric", e2="frab" ): ...
Arith signature(e1="ANY" , e2="frab" ): ...
Arith signature(e1="frab" , e2="ANY" ): ...
Author(s)
<NAME>
See Also
Compare
Examples
(x <- frab(c(a=1,b=2,c=3)))
(y <- frab(c(b=-2,d=8,x=1,y=7)))
(z <- frab(c(c=2,x=5,b=1,a=6)))
x+y
x+y+z
x*y
Compare-methods Comparison methods
Description
Methods for comparison (greater than, etc) in the frab package.
Functions frab_gt_num() etc follow a consistent naming convention; the mnemonic is the old
Fortran .GT. scheme [for “greater than”].
Function frab_eq() is an odd-ball, formally documented at Arith.Rd. It is slightly different from
the other comparisons: it calls low-level helper function c_frab_eq(), which calls its C namesake
which is written for speed (specifically, returning FALSE as soon as it spots a difference between its
two arguments). Note that if any value is NA, frab_eq() will return FALSE.
Usage
frab_eq(e1,e2)
frab_compare_frab(e1,e2)
frab_eq_num(e1,e2)
frab_ne_num(e1,e2)
frab_gt_num(e1,e2)
frab_ge_num(e1,e2)
frab_lt_num(e1,e2)
frab_le_num(e1,e2)
frab_compare_numeric(e1,e2)
num_eq_frab(e1,e2)
num_ne_frab(e1,e2)
num_gt_frab(e1,e2)
num_ge_frab(e1,e2)
num_lt_frab(e1,e2)
num_le_frab(e1,e2)
numeric_compare_frab(e1,e2)
Arguments
e1,e2 Objects of class frab
Value
Generally, return a frab or a logical
Author(s)
<NAME>
See Also
Arith
Examples
rfrab()
a <- rfrab(26,sym=letters)
a[a<4] <- 100
Extract Extraction and replacement methods for class "frab"
Description
The frab class provides basic arithmetic and extract/replace methods for frab objects.
Class index is taken from the excellent Matrix package and is a setClassUnion() of classes
numeric, logical, and character.
Value
Generally, return a frab object.
Methods
[ signature(x = "frab", i = "character", j = "missing"): x["a"] <- 33
[ signature(x = "frab", i = "disord", j = "missing"): x[x>3]
[ signature(x = "frab", i = "missing", j = "missing"): x[]
[<- signature(x = "frab", i = "character",j = "missing", value = "ANY"): x["a"] <- 3
[<- signature(x = "frab", i = "disord", j = "missing",value="frab"): x[x<0] <- -x[x<0];
not implemented
[<- signature(x = "frab", i = "disord", j = "missing",value="logical"): x[x<0] <- NA
[<- signature(x = "frab", i = "ANY",j = "ANY", value = "ANY"): not implemented
[<- signature(x = "frab", i = "disindex",j = "missing",value = "numeric"): x[x>0] <- 3
[<- signature(x = "frab", i = "character", j = "missing", value = "logical"): x["c"] <-
NA
Double square extraction, as in x[[i]] and x[[i]] <- value, is not currently defined. In replace-
ment methods, if value is logical it is coerced to numeric (this includes NA).
Author(s)
<NAME>
Examples
frab(setNames(seq_len(0),letters[seq_len(0)]))
a <- rfrab(26,sym=letters)
a<4
a[a<4]
a[a<4] <- 100
a
x <- rfrab()
values(x) <- values(x) + 66
x <- rfrabb()
v <- values(x)
v[v<0] <- abs(v[v<0]) + 50
values(x) <- v
names(x) <- toupper(names(x))
x
frab Creating frab objects
Description
Package idiom for creating frab objects
Usage
frab(x)
as.frab(x)
is.frab(x)
list_to_frab(L)
Arguments
x object coerced to, or tested for, frab
L List of two elements, a numeric vector named values and a character vector
named names
Details
Function frab() is the creation method, taking a named numeric vector as its argument; it is the
only function in the package that actually calls new("frab", ...).
Function as.frab() tries a bit harder to be useful and can coerce different types of object to a frab.
If given a list it dispatches to list_to_frab(). If given a table it dispatches to table_to_frab(),
documented at table.Rd.
Value
Returns a frab, or a boolean
Author(s)
<NAME>
See Also
frab-class
Examples
frab(c(x=6,y=6,z=-4,u=0,x=3))
as.frab(c(a=2,b=1,c=77))
as.frab(list(names=letters[5:2],values=1:4))
x <- rfrab()
y <- rfrab()
x+y
frab-class Class “frab”
Description
The formal S4 class for frab objects
Usage
## S4 method for signature 'frab'
namedvector(x)
Arguments
x Object of class frab
Objects from the Class
Formal class frab has a single slot x which is a named numeric vector.
The class has three accessor methods: names(), values(), and namedvector().
Author(s)
<NAME>
Examples
new("frab",x=c(a=6,b=4,c=1)) # formal creation method (discouraged)
frab(c(a=4,b=1,c=5)) # use frab() in day-to-day work
frab(c(a=4,b=0,c=5)) # zero entries are discarded
frab(c(a=4,b=3,b=5)) # repeated entries are summed
frab(c(apple=4,orange=3,cherry=5)) # any names are OK
x <- frab(c(d=1,y=3,a=2,b=5,rug=7,c=2))
(y <- rfrab())
x+y # addition works as expected
x + 2*y # arithmetic
x>2 # extraction
x[x>3] <- 99 # replacement
# sum(x) # some summary methods implemented
# max(x)
misc Miscellaneous functions
Description
This page documents various functions that work for frabs, and I will add to these from time to time
as I add new functions that make sense for frab objects. To use functions like sin() and abs() on
frab object x, work with values(x) (which is a disord object). However, there are a few functions
that are a little more involved:
• length() returns the length of the data component of the object.
• which() returns an error when called with a frab object, but is useful here because it returns
a disind when given a Boolean disord object. This is useful for idiom such as x[x>0]
• Functions is.na() and is.notna() return a disind object
Usage
## S4 method for signature 'frab'
length(x)
Arguments
x Object of class frab
Value
Generally return frabs
Note
Constructions such as !is.na(x) do not work if x is a frab object: this is because is.na() returns
a disind object, not a logical. Use is.notna() to identify elements that are not NA.
Author(s)
<NAME>
See Also
extract
Examples
(a <- frab(c(a=1,b=NA,c=44,x=NA,h=4)))
is.na(a)
(x <- frab(c(x=5,y=2,z=3,a=7,b=6)))
which(x>3)
x[which(x>3)]
x[which(x>3)] <- 4
x
is.na(x) <- x<3
x
x[is.na(x)] <- 100
x
y <- frab(c(a=5,b=NA,c=3,d=NA))
y[is.notna(y)] <- 199
y
namedvector Named vectors and the frab package
Description
Named vectors are closely related to frab objects, but are not the same. However, there is a natural
coercion from one to the other.
Usage
is.namedvector(v)
is.namedlogical(v)
is.unnamedlogical(v)
is.unnamedvector(v)
Arguments
v Argument to be tested or coerced
Details
Coercion and testing for named vectors. Function nv_to_frab(), documented at frab.Rd, coerces
a named vector to a frab.
Value
Function is.namedvector() returns a boolean, function as.namedvector() returns a named vec-
tor.
Author(s)
<NAME>
Examples
x <- c(a=5, b=3, c=-2,b=-3, x=33)
is.namedvector(x)
as.namedvector(frab(x))
x <- c(a=5, b=3, c=-2)
y <- c(p=1, c=2, d= 6)
x
y
x+y
frab(x) + frab(y)
pmax Parallel maxima and minima for frabs
Description
Parallel (pairwise) maxima and minima for frabs.
Usage
pmax_pair(F1,F2)
pmin_pair(F1,F2)
pmax_dots(x, ...)
pmin_dots(x, ...)
## S4 method for signature 'frab'
pmax(...)
## S4 method for signature 'frab'
pmin(...)
Arguments
F1, F2, x, ... Frab objects
Details
Pairwise minima and maxima for frabs, using names as the primary key.
Function pmax_pair() calls the low-level helper c_frab_pmax(); pmin_pair() works analogously.
Functions pmax() and pmin() use the same mechanism as cbrob() of the Brobdingnag package,
originally due to <NAME> (pers. comm.)
Value
Returns a frab object
Author(s)
<NAME>
Examples
x <- rfrab()
y <- rfrab()
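# pairwise extrema, matched by name
pmax(x, y)
pmin(x, y)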
print Methods for printing frabs
Description
Methods for printing frabs nicely
Usage
## S4 method for signature 'frab'
show(object)
frab_print(object)
Arguments
object An object of class frab
Details
The method is sensitive to option frab_print_hash. If TRUE, the hash code is printed; otherwise it
is not.
Function frab_print() returns its argument, invisibly.
There is special dispensation for the empty frab object.
Value
Returns its argument, invisibly
Author(s)
<NAME>
Examples
print(rfrab()) # default
options(frab_print_hash = TRUE)
print(rfrab()) # prints hash code
options(frab_print_hash = NULL) # restore default
rfrab Random frabs
Description
Random frab objects, intended as quick “get you going” examples
Usage
rfrab(n = 9, v = seq_len(5), symb = letters[seq_len(9)])
rfrabb(n = 100, v = -5:5, symb = letters)
rfrabbb(n = 5000, v = -10:10, symb = letters,i=3)
Arguments
n Length of object to return
v Values to assign to symbols (see details)
symb Symbols to use
i Exponentiating index for rfrabbb()
Details
What you see is what you get, basically. If a symbol is chosen more than once, as in c(a=1,b=2,a=3),
then the value for a will be summed.
Use function rfrab() for a small, easily-managed object; rfrabb() and rfrabbb() give successively
larger objects.
Value
Returns a frab object
Author(s)
<NAME>
Examples
rfrab()
sparsetable Generalized sparse tables: sparsetable objects
Description
Package idiom for creating and manipulating sparsetable objects
Usage
sparsetable(i,v=1)
rspar(n=15,l=3,d=3)
rspar2(n=15,l=6)
rsparr(n=20,d=6,l=5,s=4)
sparsetable_to_array(x)
array_to_sparsetable(x)
sparsetable_to_frab(x)
## S4 method for signature 'sparsetable'
index(x)
## S4 method for signature 'sparsetable'
values(x)
## S4 method for signature 'sparsetable'
dimnames(x)
## S4 method for signature 'sparsetable'
dim(x)
Arguments
x In functions like index(), an object of class sparsetable
i,v In standard constructor function sparsetable(), argument i is the index matrix
of strings, and v a numeric vector of values
n,l,d,s In functions rspar(), rspar2(), and rsparr(), n is the number of terms, l the
number of letters, d the dimensionality and s the number of distinct marginal
values to return
Details
Most functions here mirror their equivalents in the spray package [from which the C code is largely
copied] or the frab functionality. So, for example, num_eq_sparsetable() is the equivalent
of num_eq_spray().
The print method treats arity-2 sparsetable objects differently from other arities. By default,
arity-2 sparsetable objects are displayed as two-dimensional tables. Control this behaviour with
option print_2dsparsetables_as_matrices:
options("print_2dsparsetables_as_matrices" = FALSE)
The default value for this option, non-FALSE (including its out-of-the-box status of “unset”), directs
the print method to coerce arity-2 sparsetable objects to two-dimensional tables before printing.
If this option is FALSE, arity-2 sparsetables are printed using matrix index form, just the same as
any other arity.
Functions rspar(), rspar2(), and rsparr() create random sparsetable objects of increasing
complexity. The defaults are chosen to make the values of sensible sizes.
Function drop() takes a sparsetable object of arity one and coerces to a frab object.
Function dim() returns a named vector, with names being the dimnames of its argument.
Extraction and replacement methods are a subset of spray methods, but most should work. There is
special dispensation so that standard idiom for arrays [e.g. x['a','b','a'] and x['a','b','a']
<- 55] work as expected, although the general expectation is that access and replacement use (char-
acter) matrices and an index object. However, indexing by disord and disindex objects should
also work [e.g. x[x>7]].
The spray source code and the sparsetable functionality have about 90% overlap; there were enough
small differences between the codes to make it worth maintaining two sets of source code, IMO.
There is a discussion of package idiom in the vignette, vignette("frab").
Note
The pronunciation of “sparsetable” has the emphasis on the first syllable, so it rhymes with “Barn-
able” or “Barnstaple”.
Author(s)
<NAME>
See Also
frab-class
Examples
sparsetable(matrix(sample(letters[1:4],36,replace=TRUE),ncol=2),1:18)
sparsetable(matrix(sample(letters[1:4],39,replace=TRUE),ncol=3),1:13)
(x <- rspar2(9))
(y <- rspar2(9))
x + y
x["KT","FF"] <- 100
x
rsparr()
a <- rspar(d=4)
asum(a,"Feb")
table Tables and frab objects
Description
Various methods and functions to deal with tables in the frab package.
Usage
## S4 method for signature 'frab'
as.table(x,...)
table_to_frab(x)
Arguments
x Object of class frab or table
... Further arguments, currently ignored
Details
If a frab object has non-negative entries it may be interpreted as a table. However, in base R, table
objects do not have sensible addition methods which is why the frab package is needed.
Function is.1dtable() checks for its argument being a one-dimensional table. The idea is that a
table like table(sample(letters,30,TRUE)), being a table of a single observation, is accepted
but a table like table(data.frame(rnorm(20)>0,rnorm(20)>0)) is not acceptable because it is
a two-dimensional contingency table.
Value
Generally return a table or frab.
Note
The order of the entries may be changed during the coercion, as per disordR discipline. Function
as.frab() takes a table, dispatching to table_to_frab().
Author(s)
<NAME>
Examples
X <- table(letters[c(1,1,1,1,2,3,3)])
Y <- table(letters[c(1,1,1,1,3,4,4)])
Z <- table(letters[c(1,1,2,3,4,5,5)])
X+Y # defined but nonsense
# X+Z # returns an error
as.frab(X) + as.frab(Y) # correct answer
plot(as.table(rfrab()))
zero The zero frab object
Description
Test for a frab object’s being zero (empty).
Usage
zero(...)
is.zero(x)
is.empty(x)
Arguments
x Object of class frab
... Further arguments (currently ignored)
Details
Function zero() returns the empty frab object; this is the additive identity 0 with property x + 0 =
0 + x = x.
Function is.zero() returns TRUE if its argument is indeed the zero object.
Function is.empty() is a synonym for is.zero(). Sometimes one is thinking about the free
Abelian group, in which case is.zero() makes more sense, and sometimes one is thinking about
maps and tables, in which case is.empty() is more appropriate.
Value
Function zero() returns the zero frab object, function is.zero() a Boolean
Author(s)
<NAME>
Examples
zero()
zero() + zero()
x <- rfrab()
x+zero() == x
is.zero(zero()) |
github.com/zalando-incubator/stackset-controller | go | Go | README
[¶](#section-readme)
---
### Kubernetes StackSet Controller
[![Build Status](https://travis-ci.org/zalando-incubator/stackset-controller.svg?branch=master)](https://travis-ci.org/zalando-incubator/stackset-controller)
[![Coverage Status](https://coveralls.io/repos/github/zalando-incubator/stackset-controller/badge.svg?branch=master)](https://coveralls.io/github/zalando-incubator/stackset-controller?branch=master)
The Kubernetes StackSet Controller is a concept (along with an implementation) for easing and automating application life cycle for certain types of applications running on Kubernetes.
It is not meant to be a generic solution for all types of applications but explicitly focuses on "Web Applications", that is, applications which receive HTTP traffic and are continuously deployed with new versions which should receive traffic either instantly or gradually fading traffic from one version of the application to the next one. Think Blue/Green deployments as one example.
By default Kubernetes offers the
[Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
resource type which, combined with a
[Service](https://kubernetes.io/docs/concepts/services-networking/service/),
can provide some level of application life cycle in the form of rolling updates.
While rolling updates are a powerful concept, there are some limitations for certain use cases:
* Switching traffic in a Blue/Green style is not possible with rolling updates.
* Splitting traffic between versions of the application can only be done by scaling the number of Pods. E.g. if you want to give 1% of traffic to a new version, you need at least 100 Pods.
* Impossible to run smoke tests against a new version of the application before it gets traffic.
To work around these limitations I propose a different type of resource called a `StackSet` which has the concept of `Stacks`.
The `StackSet` is a declarative way of describing the application stack as a whole, and the `Stacks` describe individual versions of the application. The `StackSet` also allows defining a "global" load balancer spanning all stacks of the stackset which makes it possible to switch traffic to different stacks at the load balancer (for example Ingress) level.
```
            +---------------------------------------+
            |                                       |
            |             Load Balancer             |
            |         (for example Ingress)         |
            |                                       |
            +------+-------------+-------------+----+
                   | 0%          | 20%         | 80%
                   |             |             |
            +------v----+  +-----v-----+  +----v------+
            |           |  |           |  |           |
            |   Stack   |  |   Stack   |  |   Stack   |
            | Version 1 |  | Version 2 |  | Version 3 |
            |           |  |           |  |           |
            +-----------+  +-----------+  +-----------+
```
The `StackSet` and `Stack` resources are implemented as
[CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/). A `StackSet` looks like this:
```
apiVersion: zalando.org/v1
kind: StackSet
metadata:
name: my-app
spec:
# optional Ingress definition.
ingress:
hosts: [my-app.example.org, alt.name.org]
backendPort: 80
# optional desired traffic weights defined by stack
traffic:
- stackName: mystack-v1
weight: 80
- stackName: mystack-v2
weight: 20
# optional percentage of required Replicas ready to allow traffic switch
# if none specified, defaults to 100
minReadyPercent: 90
stackLifecycle:
scaledownTTLSeconds: 300
limit: 5 # maximum number of scaled down stacks to keep.
# If there are more than `limit` stacks, the oldest stacks which are scaled down
# will be deleted.
stackTemplate:
spec:
version: v1 # version of the Stack.
replicas: 3
# optional autoscaler definition (will create an HPA for the stack).
autoscaler:
minReplicas: 3
maxReplicas: 10
metrics:
- type: CPU
averageUtilization: 50
# full Pod template.
podTemplate:
spec:
containers:
- name: skipper
image: ghcr.io/zalando/skipper:latest
args:
- skipper
- -inline-routes
- '* -> inlineContent("OK") -> <shunt>'
- -address=:80
ports:
- containerPort: 80
name: ingress
resources:
limits:
cpu: 10m
memory: 50Mi
requests:
cpu: 10m
memory: 50Mi
```
The above `StackSet` would generate a `Stack` that looks like this:
```
apiVersion: zalando.org/v1
kind: Stack
metadata:
name: my-app-v1
labels:
stackset: my-app
stackset-version: v1
spec:
replicas: 3
autoscaler:
minReplicas: 3
maxReplicas: 10
metrics:
- type: CPU
averageUtilization: 50
podTemplate:
spec:
containers:
- name: skipper
image: ghcr.io/zalando/skipper:latest
args:
- skipper
- -inline-routes
- '* -> inlineContent("OK") -> <shunt>'
- -address=:80
ports:
- containerPort: 80
name: ingress
resources:
limits:
cpu: 10m
memory: 50Mi
requests:
cpu: 10m
memory: 50Mi
```
For each `Stack` a `Service` and `Deployment` resource will be created automatically with the right labels. The service will also be attached to the
"global" Ingress if the stack is configured to get traffic. An optional
`autoscaler` resource can also be created per stack for horizontally scaling the deployment.
For the most part the `Stacks` will be dynamically managed by the system and the users don't have to touch them. You can think of this as similar to the relationship between `Deployments` and `ReplicaSets`.
If the `Stack` is deleted the related resources like `Service` and
`Deployment` will be automatically cleaned up.
The `stackLifecycle` lets you configure two settings to change the cleanup behavior for the `StackSet`:
* `scaleDownTTLSeconds` defines for how many seconds a stack should not receive traffic before it's scaled down.
* `limit` defines the total number of stacks to keep. That is, if you have a
`limit` of `5` and currently have `6` stacks for the `StackSet` then it will clean up the oldest stack which is **NOT** getting traffic. The `limit` is not enforced if it would mean deleting a stack with traffic. E.g. if you set a `limit` of `1` and have two stacks with `50%` then none of them would be deleted. However, if you switch to `100%` traffic for one of the stacks then the other will be deleted after it has not received traffic for
`scaleDownTTLSeconds`.
#### Features
* Automatically create new Stacks when the `StackSet` is updated with a new version in the `stackTemplate`.
* Do traffic switching between Stacks at the Ingress layer, if you have the ingress definition in the spec. Ingress resources are automatically updated when new stacks are created. (This require that your ingress controller implements the annotation
`zalando.org/backend-weights: {"my-app-1": 80, "my-app-2": 20}`, for example use [skipper](https://github.com/zalando/skipper) for Ingress) or read the information from stackset `status.traffic`.
* Safely switch traffic to scaled down stacks. If a stack is scaled down, it will be scaled up automatically before traffic is directed to it.
* Dynamically provision Ingresses per stack, with per stack host names. I.e.
`my-app.example.org`, `my-app-v1.example.org`, `my-app-v2.example.org`.
* Automatically scale down stacks when they don't get traffic for a specified period.
* Automatically delete stacks that have been scaled down and are not getting any traffic for longer time.
* Automatically clean up all dependent resources when a `StackSet` or
`Stack` resource is deleted. This includes `Service`,
`Deployment`, `Ingress` and optionally `HorizontalPodAutoscaler`.
* Command line utility (`traffic`) for showing and switching traffic between stacks.
* You can opt-out of the global `Ingress` creation with
`externalIngress:` spec, such that external controllers can manage the Ingress or CRD creation, that will configure the routing into the cluster.
* You can use skipper's
[RouteGroups](https://opensource.zalando.com/skipper/kubernetes/routegroups)
to configure more complex routing rules.
#### Docs
* [How To's](https://github.com/zalando-incubator/stackset-controller/blob/v1.4.17/docs/howtos.md)
##### Kubernetes Compatibility
The StackSet controller works with Kubernetes `>=v1.23`.
#### How it works
The controller watches for `StackSet` resources and creates `Stack` resources whenever the version is updated in the `StackSet` `stackTemplate`. For each
`StackSet` it will create an optional "main" `Ingress` resource and keep it up to date when new `Stacks` are created for the `StackSet`. For each `Stack` it will create a `Deployment`, a `Service` and optionally a
`HorizontalPodAutoscaler` for the `Deployment`. These resources are all owned by the `Stack` and will be cleaned up if the stack is deleted.
#### Setup
Use an existing cluster or create a test cluster with [kind](https://kind.sigs.k8s.io/docs/user/quick-start/)
```
kind create cluster --name testcluster001
```
The `stackset-controller` can be run as a deployment in the cluster.
See [deployment.yaml](https://github.com/zalando-incubator/stackset-controller/blob/v1.4.17/docs/deployment.yaml).
The controller depends on the [StackSet](https://github.com/zalando-incubator/stackset-controller/blob/v1.4.17/docs/stackset_crd.yaml) and
[Stack](https://github.com/zalando-incubator/stackset-controller/blob/v1.4.17/docs/stack_crd.yaml)
[CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
You must install these into your cluster before running the controller:
```
$ kubectl apply -f docs/stackset_crd.yaml -f docs/stack_crd.yaml
```
After the CRDs are installed the controller can be deployed:
*please adjust the controller version and cluster-domain to your environment*
```
$ kubectl apply -f docs/rbac.yaml -f docs/deployment.yaml
```
##### Custom configuration
#### controller-id
There are cases where it might be desirable to run multiple instances of the stackset-controller in the same cluster, e.g. for development.
To prevent the controllers from fighting over the same `StackSet` resources they can be configured with the flag `--controller-id=<some-id>` which indicates that the controller should only manage the `StackSets` which have an annotation `stackset-controller.zalando.org/controller=<some-id>` defined.
If the controller-id is not configured, the controller will manage all
`StackSets` which do not have the annotation defined.
#### Quick intro
Once you have deployed the controller you can create your first `StackSet`
resource:
```
$ kubectl apply -f docs/stackset.yaml
stackset.zalando.org/my-app created
```
This will create the stackset in the cluster:
```
$ kubectl get stacksets
NAME     CREATED AT
my-app   21s
```
And soon after you will see the first `Stack` of the `my-app`
stackset:
```
$ kubectl get stacks
NAME        CREATED AT
my-app-v1   30s
```
It will also create `Ingress`, `Service`, `Deployment` and
`HorizontalPodAutoscaler` resources:
```
$ kubectl get ingress,service,deployment.apps,hpa -l stackset=my-app
NAME                           HOSTS                   ADDRESS                                  PORTS   AGE
ingress.extensions/my-app      my-app.example.org      kube-ing-lb-3es9a....elb.amazonaws.com   80      7m
ingress.extensions/my-app-v1   my-app-v1.example.org   kube-ing-lb-3es9a....elb.amazonaws.com   80      7m

NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/my-app-v1   ClusterIP   10.3.204.136   <none>        80/TCP    7m

NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-app-v1   1         1         1            1           7m

NAME                                            REFERENCE              TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/my-app-v1   Deployment/my-app-v1   <unknown>/50%   3         10        0          20s
```
Imagine you want to roll out a new version of your stackset. You can do this by changing the `StackSet` resource. E.g. by changing the version:
```
$ kubectl patch stackset my-app --type='json' -p='[{"op": "replace", "path": "/spec/stackTemplate/spec/version", "value": "v2"}]'
stackset.zalando.org/my-app patched
```
Soon after, we will see a new stack:
```
$ kubectl get stacks -l stackset=my-app
NAME        CREATED AT
my-app-v1   14m
my-app-v2   46s
```
And using the `traffic` tool we can see how the traffic is distributed (see below for how to build the tool):
```
./build/traffic my-app
STACK       TRAFFIC WEIGHT
my-app-v1   100.0%
my-app-v2   0.0%
```
If we want to switch 100% traffic to the new stack we can do it like this:
```
# traffic <stackset> <stack> <traffic>
./build/traffic my-app my-app-v2 100
STACK       TRAFFIC WEIGHT
my-app-v1   0.0%
my-app-v2   100.0%
```
Since the `my-app-v1` stack is no longer getting traffic it will be scaled down after some time and eventually deleted.
If you want to delete it manually, you can simply do:
```
$ kubectl delete appstack my-app-v1
stacksetstack.zalando.org "my-app-v1" deleted
```
And all the related resources will be gone shortly after:
```
$ kubectl get ingress,service,deployment.apps,hpa -l stackset=my-app,stackset-version=v1
No resources found.
```
#### Building
This project uses [Go modules](https://github.com/golang/go/wiki/Modules) as introduced in Go 1.11 therefore you need Go >=1.11 installed in order to build.
If using Go 1.11 you also need to [activate Module support](https://github.com/golang/go/wiki/Modules#installing-and-activating-module-support).
Assuming Go has been setup with module support it can be built simply by running:
```
$ export GO111MODULE=on # needed if the project is checked out in your $GOPATH.
$ make
```
Note that the Go client interface for talking to the custom `StackSet` and
`Stack` CRD is generated code living in `pkg/client/` and
`pkg/apis/zalando.org/v1/zz_generated_deepcopy.go`. If you make changes to
`pkg/apis/*` then you must run `make clean && make` to regenerate the code.
To understand how this works see the upstream
[example](https://github.com/kubernetes/apiextensions-apiserver/tree/master/examples/client-go)
for generating client interface code for CRDs.
#### Upgrade
##### <= v1.0.0 to >= v1.1.0
Clients that write the desired traffic switching value have to move from ingress annotation `zalando.org/stack-traffic-weights: '{"mystack-v1":80, "mystack-v2": 20}'`
to stackset `spec.traffic`:
```
spec:
traffic:
- stackName: mystack-v1
weight: 80
- stackName: mystack-v2
weight: 20
```
|
multimark | cran | R | Package ‘multimark’
March 10, 2023
Type Package
Title Capture-Mark-Recapture Analysis using Multiple Non-Invasive
Marks
Version 2.1.6
Date 2023-03-09
Depends R (>= 3.2.1)
Imports parallel, Matrix, coda, statmod, RMark, Brobdingnag, mvtnorm,
graphics, methods, stats, utils, prodlim, sp, raster
Description Traditional and spatial capture-mark-recapture analysis with
multiple non-invasive marks. The models implemented in 'multimark' combine
encounter history data arising from two different non-invasive ``marks'',
such as images of left-sided and right-sided pelage patterns of bilaterally
asymmetrical species, to estimate abundance and related demographic
parameters while accounting for imperfect detection. Bayesian models are
specified using simple formulae and fitted using Markov chain Monte Carlo.
Addressing deficiencies in currently available software, 'multimark' also
provides a user-friendly interface for performing Bayesian multimodel
inference using non-spatial or spatial capture-recapture data consisting of a single
conventional mark or multiple non-invasive marks. See McClin-
tock (2015) <doi:10.1002/ece3.1676> and Maronde et al. (2020) <doi:10.1002/ece3.6990>.
Suggests testthat
License GPL-2
LazyData yes
ByteCompile TRUE
RoxygenNote 7.2.3
Encoding UTF-8
NeedsCompilation yes
Author <NAME> [aut, cre],
<NAME> [ctb, cph] (C original matrix library,
https://github.com/najela/matrix.h),
<NAME> [ctb] (Fortran original ranlib library),
<NAME> [ctb] (Fortran original ranlib library),
<NAME> [ctb] (C original ranlib library,
http://people.sc.fsu.edu/~jburkardt/c_src/ranlib),
<NAME> [ctb] (C original linpack library,
http://www.kkant.net/geist/ranlib/),
<NAME> [ctb] (modified snippets of R package SPACECAP code)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-03-10 11:20:06 UTC
R topics documented:
bobcat
bobcatSCR
getdensityClosedSCR
getprobsCJS
getprobsClosed
getprobsClosedSCR
markCJS
markClosed
markClosedSCR
multimarkCJS
multimarkClosed
multimarkClosedSCR
multimarkSCRsetup-class
multimarksetup-class
multimodelCJS
multimodelClosed
multimodelClosedSCR
plotSpatialData
processdata
processdataSCR
simdataCJS
simdataClosed
simdataClosedSCR
tiger
bobcat Bobcat data
Description
Example bobcat data for multimark package.
Format
The data are summarized in a 46x8 matrix containing observed encounter histories for 46 bobcats
across 8 sampling occasions. Bobcats are bilaterally asymmetrical, and sampling was conducted
using camera stations consisting of a single camera.
Because the left-side cannot be reconciled with the right-side, the two types of “marks” in this case
are the pelage patterns on the left- and right-side of each individual. Encounter type 0 corresponds
to non-detection, encounter type 1 corresponds to left-sided detection, encounter type 2 corresponds
to right-sided detection.
Both-sided encounters were never observed in this dataset, hence the most appropriate multimark
data type is data.type="never".
Source
<NAME>., <NAME>., <NAME>., and <NAME>. 2013. Integrated modeling of
bilateral photo-identification data in mark-recapture analyses. Ecology 94: 1464-1471.
See Also
multimarkClosed, processdata
Examples
data(bobcat)
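#A quick check of the dimensions and encounter codes described above
#(a minimal sketch using base R only; nothing beyond the bobcat object is assumed)
dim(bobcat)     # 46 individuals x 8 sampling occasions
table(bobcat)   # counts of non-detections (0), left-sided (1), and right-sided (2) encounters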
bobcatSCR Bobcat spatial capture-recapture data
Description
Example spatial bobcat data for multimark package.
Format
These spatial capture-recapture data with multiple mark types are summarized in a list of length 3
containing the following objects:
Enc.Mat is a 42 x (noccas*ntraps) matrix containing observed encounter histories for 42 bobcats
across noccas=187 sampling occasions and ntraps=30 traps. The first 187 columns correspond to
trap 1, the second 187 columns correspond to trap 2, etc.
trapCoords is a matrix of dimension ntraps x (2 + noccas) indicating the Cartesian coordi-
nates and operating occasions for the traps, where rows correspond to trap, the first column the
x-coordinate, and the second column the y-coordinate. The last noccas columns indicate whether
or not the trap was operating on each of the occasions, where ‘1’ indicates the trap was operating
and ‘0’ indicates the trap was not operating.
studyArea is a 3-column matrix containing the coordinates for the centroids of the contiguous grid
of 1023 cells that define the study area and available habitat. Each row corresponds to a grid cell.
The first 2 columns indicate the Cartesian x- and y-coordinate for the centroid of each grid cell, and
the third column indicates whether the cell is available habitat (=1) or not (=0). The grid cells are
0.65x0.65km resolution.
Bobcats are bilaterally asymmetrical, and sampling was conducted using camera stations consisting
of a single camera. Because the left-side cannot be reconciled with the right-side, the two types of
“marks” in this case are the pelage patterns on the left- and right-side of each individual. Encounter
type 0 corresponds to non-detection, encounter type 1 corresponds to left-sided detection, encounter
type 2 corresponds to right-sided detection.
Both-sided encounters were never observed in this dataset, hence the most appropriate multimark
data type is data.type="never".
The first 15 rows of bobcatSCR$Enc.Mat correspond to individuals for which both the left and
right sides were known because they were physically captured for telemetry deployments prior
to sampling surveys. The encounter histories for these 15 individuals are therefore known with
certainty and should be specified as such using the known argument in processdataSCR and/or
multimarkClosedSCR (see example below).
These data were obtained from the R package SPIM (Augustine et al. 2017) and modified by pro-
jecting onto a regular rectangular grid consisting of square grid cells (as is required by the spatial
capture-recapture models in multimark).
Details
We thank <NAME> and co-authors for making these data publicly available in the SPIM package
(Augustine et al. 2017).
Source
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. 2017.
Spatial capture-recapture with partial identity: an application to camera traps. bioRxiv doi: https://doi.org/10.1101/056804
See Also
multimarkClosedSCR, processdataSCR
Examples
data(bobcatSCR)
#plot the traps and available habitat within the study area
plotSpatialData(trapCoords=bobcatSCR$trapCoords,studyArea=bobcatSCR$studyArea)
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
# Fit spatial model to bobcat data
Enc.Mat <- bobcatSCR$Enc.Mat
trapCoords <- bobcatSCR$trapCoords
studyArea <- bobcatSCR$studyArea
# specify known encounter histories
known <- c(rep(1,15),rep(0,nrow(Enc.Mat)-15))
# specify prior bounds for sigma2_scr
sig_bounds <- c(0.1,max(diff(range(studyArea[,"x"])),diff(range(studyArea[,"y"]))))
mmsSCR <- processdataSCR(Enc.Mat,trapCoords,studyArea,known=known)
bobcatSCR.dot.type <- multimarkClosedSCR(mms=mmsSCR,iter=200,adapt=100,burnin=100,
sigma_bounds=sig_bounds)
summary(bobcatSCR.dot.type$mcmc)
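#As a sanity check on the Enc.Mat layout described under Format
#(a minimal sketch; none of this is required by multimark)
ntraps <- nrow(bobcatSCR$trapCoords)         # 30 traps
noccas <- ncol(bobcatSCR$Enc.Mat)/ntraps     # 187 sampling occasions
c(ntraps = ntraps, noccas = noccas)
table(bobcatSCR$Enc.Mat)                     # encounter codes 0 (none), 1 (left-sided), 2 (right-sided)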
getdensityClosedSCR Calculate population density estimates
Description
This function calculates posterior population density estimates from multimarkClosedSCR output
as D = N/A, where D is density, N is abundance, and A is the area of available habitat within the
study area.
Usage
getdensityClosedSCR(out)
Arguments
out List of output returned by multimarkClosedSCR.
Value
An object of class mcmc.list containing the following:
D Posterior samples for density.
Author(s)
<NAME>
See Also
multimarkClosedSCR
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Run default model for simulated data with constant detection probability (i.e., mod.p=~1)
sim.data<-simdataClosedSCR()
Enc.Mat<-sim.data$Enc.Mat
trapCoords<-sim.data$spatialInputs$trapCoords
studyArea<-sim.data$spatialInputs$studyArea
example.dot <- multimarkClosedSCR(Enc.Mat,trapCoords,studyArea,mod.p=~1)
#Calculate posterior population density estimates
D <- getdensityClosedSCR(example.dot)
summary(D)
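#Density can also be computed by hand from the D = N/A relationship given in the Description.
#A minimal sketch using the bobcatSCR study area (0.65 km cell resolution, as stated in its
#Format section) and hypothetical posterior draws of N (illustrative values only):
data(bobcatSCR)
cell.res <- 0.65                                       # km, per the bobcatSCR Format section
A <- sum(bobcatSCR$studyArea[, 3] == 1) * cell.res^2   # area of available habitat (km^2)
N.draws <- c(55, 60, 58)                               # hypothetical posterior draws of abundance
D.draws <- N.draws / A                                 # density (individuals per km^2)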
getprobsCJS Calculate posterior capture and survival probabilities
Description
This function calculates posterior capture (p) and survival (φ) probabilities for each sampling occa-
sion from multimarkCJS output.
Usage
getprobsCJS(out, link = "probit")
Arguments
out List of output returned by multimarkCJS
link Link function for p and φ. Must be "probit" or "logit". Note that multimarkCJS
is currently implemented for the probit link only.
Value
An object of class mcmc.list containing the following:
p Posterior samples for capture probability (p[c, t]) for each release cohort (c =
1, . . . , T − 1) and sampling occasion (t = 2, . . . , T ).
phi Posterior samples for survival probability (φ[c, k]) for each release cohort (c =
1, . . . , T − 1) and interval (k = 1, . . . , T − 1).
Author(s)
<NAME>. McClintock
See Also
multimarkCJS
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Simulate open population data with temporal variation in survival
noccas <- 5
data <- simdataCJS(noccas=noccas, phibeta=rnorm(noccas-1,1.6,0.1))
#Fit open population model with temporal variation in survival
sim.time <- multimarkCJS(data$Enc.Mat,mod.phi=~time)
#Calculate capture and survival probabilities for each cohort and time
pphi <- getprobsCJS(sim.time)
summary(pphi)
getprobsClosed Calculate posterior capture and recapture probabilities
Description
This function calculates posterior capture (p) and recapture (c) probabilities for each sampling oc-
casion from multimarkClosed output.
Usage
getprobsClosed(out, link = "logit")
Arguments
out List of output returned by multimarkClosed.
link Link function for detection probability. Must be "logit" or "probit". Note that
multimarkClosed is currently implemented for the logit link only.
Value
An object of class mcmc.list containing the following:
p Posterior samples for capture probability (p) for each sampling occasion.
c Posterior samples for recapture probability (c) for each sampling occasion.
Author(s)
<NAME>
See Also
multimarkClosed
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Run behavior model for bobcat data with constant detection probability (i.e., mod.p=~c)
bobcat.c <- multimarkClosed(bobcat,mod.p=~c)
#Calculate capture and recapture probabilities
pc <- getprobsClosed(bobcat.c)
summary(pc)
getprobsClosedSCR Calculate posterior capture and recapture probabilities
Description
This function calculates posterior spatial capture (p) and recapture (c) probabilities (at zero distance
from an activity center) for each sampling occasion from multimarkClosedSCR output.
Usage
getprobsClosedSCR(out, link = "cloglog")
Arguments
out List of output returned by multimarkClosedSCR.
link Link function for detection probability. Must be "cloglog". Note that multimarkClosedSCR
is currently implemented for the cloglog link only.
Value
An object of class mcmc.list containing the following:
p Posterior samples for capture probability (p) for each sampling occasion (first
index) and trap (second index).
c Posterior samples for recapture probability (c) for each sampling occasion (first
index) and trap (second index).
Author(s)
<NAME>
See Also
multimarkClosedSCR
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Run behavior model for simulated data with constant detection probability (i.e., mod.p=~c)
sim.data<-simdataClosedSCR()
Enc.Mat<-sim.data$Enc.Mat
trapCoords<-sim.data$spatialInputs$trapCoords
studyArea<-sim.data$spatialInputs$studyArea
example.c <- multimarkClosedSCR(Enc.Mat,trapCoords,studyArea,mod.p=~c,
iter=1000,adapt=500,burnin=500)
#Calculate capture and recapture probabilities
pc <- getprobsClosedSCR(example.c)
summary(pc)
markCJS Fit open population survival models for “traditional” capture-mark-
recapture data consisting of a single mark type
Description
This function fits Cormack-Jolly-Seber (CJS) open population models for survival probability (φ)
and capture probability (p) for “traditional” capture-mark-recapture data consisting of a single mark
type. Using Bayesian analysis methods, Markov chain Monte Carlo (MCMC) is used to draw
samples from the joint posterior distribution.
Usage
markCJS(
Enc.Mat,
covs = data.frame(),
mod.p = ~1,
mod.phi = ~1,
parms = c("pbeta", "phibeta"),
nchains = 1,
iter = 12000,
adapt = 1000,
bin = 50,
thin = 1,
burnin = 2000,
taccept = 0.44,
tuneadjust = 0.95,
proppbeta = 0.1,
propzp = 1,
propsigmap = 1,
propphibeta = 0.1,
propzphi = 1,
propsigmaphi = 1,
pbeta0 = 0,
pSigma0 = 1,
phibeta0 = 0,
phiSigma0 = 1,
l0p = 1,
d0p = 0.01,
l0phi = 1,
d0phi = 0.01,
initial.values = NULL,
link = "probit",
printlog = FALSE,
...
)
Arguments
Enc.Mat A matrix of observed encounter histories with rows corresponding to individuals
and columns corresponding to sampling occasions. With a single mark type,
encounter histories consist of only non-detections (0) and type 1 encounters (1).
covs A data frame of temporal covariates for detection probabilities (ignored unless
mms=NULL). The number of rows in the data frame must equal the number of
sampling occasions. Covariate names cannot be "time", "age", or "h"; these
names are reserved for temporal, behavioral, and individual effects when speci-
fying mod.p and mod.phi.
mod.p Model formula for detection probability (p). For example, mod.p=~1 spec-
ifies no effects (i.e., intercept only), mod.p~time specifies temporal effects,
mod.p~age specifies age effects, mod.p~h specifies individual heterogeneity,
and mod.p~time+age specifies additive temporal and age effects.
mod.phi Model formula for survival probability (φ). For example, mod.phi=~1 speci-
fies no effects (i.e., intercept only), mod.phi~time specifies temporal effects,
mod.phi~age specifies age effects, mod.phi~h specifies individual heterogene-
ity, and mod.phi~time+age specifies additive temporal and age effects.
parms A character vector giving the names of the parameters and latent variables to
monitor. Possible parameters are probit-scale detection probability parameters
("pbeta" for p and "phibeta" for φ), probit-scale individual heterogeneity vari-
ance terms ("sigma2_zp" for p and "sigma2_zphi" for φ), and probit-scale in-
dividual effects ("zp" and "zphi"). Latent variable indicators for whether each
individual was alive (1) or dead (0) during each sampling occasion ("q") and
the log likelihood ("loglike") may also be monitored. Setting parms="all"
monitors all possible parameters and latent variables.
nchains The number of parallel MCMC chains for the model.
iter The number of MCMC iterations.
adapt Ignored; no adaptive phase is needed for "probit" link.
bin Ignored; no adaptive phase is needed for "probit" link.
thin Thinning interval for monitored parameters.
burnin Number of burn-in iterations (0 <= burnin < iter).
taccept Ignored; no adaptive phase is needed for "probit" link.
tuneadjust Ignored; no adaptive phase is needed for "probit" link.
proppbeta Ignored; no adaptive phase is needed for "probit" link.
propzp Ignored; no adaptive phase is needed for "probit" link.
propsigmap Ignored; no adaptive phase is needed for "probit" link.
propphibeta Ignored; no adaptive phase is needed for "probit" link.
propzphi Ignored; no adaptive phase is needed for "probit" link.
propsigmaphi Ignored; no adaptive phase is needed for "probit" link.
pbeta0 Scalar or vector (of length k) specifying the mean of the pbeta ~ multivariateNormal(pbeta0,
pSigma0) prior. If pbeta0 is a scalar, then this value is used for all j = 1, ..., k.
Default is pbeta0 = 0.
pSigma0 Scalar or k x k matrix specifying the covariance matrix of the pbeta ~ multivariateNormal(pbeta0,
pSigma0) prior. If pSigma0 is a scalar, then this value is used for all pSigma0[j,j]
for j = 1, ..., k (with pSigma0[j,l] = 0 for all j ≠ l). Default is pSigma0 = 1.
phibeta0 Scalar or vector (of length k) specifying the mean of the phibeta ~ multivariateNormal(phibeta0,
phiSigma0) prior. If phibeta0 is a scalar, then this value is used for all j = 1, ..., k.
Default is phibeta0 = 0.
phiSigma0 Scalar or k x k matrix specifying the covariance matrix of the phibeta ~ multivariateNormal(phibeta0,
phiSigma0) prior. If phiSigma0 is a scalar, then this value is used for all phiSigma0[j,j]
for j = 1, ..., k (with phiSigma0[j,l] = 0 for all j ≠ l). Default is phiSigma0 = 1.
l0p Specifies "shape" parameter for [sigma2_zp] ~ invGamma(l0p,d0p) prior. De-
fault is l0p = 1.
d0p Specifies "scale" parameter for [sigma2_zp] ~ invGamma(l0p,d0p) prior. De-
fault is d0p = 0.01.
l0phi Specifies "shape" parameter for [sigma2_zphi] ~ invGamma(l0phi,d0phi) prior.
Default is l0phi = 1.
d0phi Specifies "scale" parameter for [sigma2_zphi] ~ invGamma(l0phi,d0phi) prior.
Default is d0phi = 0.01.
initial.values Optional list of nchain list(s) specifying initial values for "pbeta", "phibeta",
"sigma2_zp", "sigma2_zphi", "zp", "zphi", and "q". Default is initial.values
= NULL, which causes initial values to be generated automatically.
link Link function for survival and capture probabilities. Only probit link is currently
implemented.
printlog Logical indicating whether to print the progress of chains and any errors to a log
file in the working directory. Ignored when nchains=1. Updates are printed to
log file as 1% increments of iter of each chain are completed. With >1 chains,
setting printlog=TRUE is probably most useful for Windows users because
progress and errors are automatically printed to the R console for "Unix-like"
machines (i.e., Mac and Linux) when printlog=FALSE. Default is printlog=FALSE.
... Additional "parameters" arguments for specifying mod.p and mod.phi. See
RMark::make.design.data.
Details
The first time markCJS (or markClosed) is called, it will likely produce a firewall warning alerting
users that R has requested the ability to accept incoming network connections. Incoming network
connections are required to use parallel processing as implemented in markCJS. Note that
setting parms="all" is required for any markCJS model output to be used in multimodelCJS.
Value
A list containing the following:
mcmc Markov chain Monte Carlo object of class mcmc.list.
mod.p Model formula for detection probability (as specified by mod.p above).
mod.phi Model formula for survival probability (as specified by mod.phi above).
mod.delta Formula always NULL; only for internal use in multimodelCJS.
DM A list of design matrices for detection and survival probability respectively gen-
erated by mod.p and mod.phi, where DM$p is the design matrix for capture
probability (p) and DM$phi is the design matrix for survival probability (φ).
initial.values A list containing the parameter and latent variable values at iteration iter for
each chain. Values are provided for "pbeta", "phibeta", "sigma2_zp", "sigma2_zphi",
"zp", "zphi", and "q".
mms An object of class multimarksetup
Author(s)
<NAME>
See Also
processdata, multimodelCJS
Examples
# These examples are excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Simulate open population data using defaults
data <- simdataCJS(delta_1=1,delta_2=0)$Enc.Mat
#Fit default open population model
sim.dot <- markCJS(data)
#Posterior summary for monitored parameters
summary(sim.dot$mcmc)
plot(sim.dot$mcmc)
#Fit ``age'' model with 2 age classes (e.g., juvenile and adult) for survival
#using 'parameters' and 'right' arguments from RMark::make.design.data
sim.age <- markCJS(data,mod.phi=~age,
parameters=list(Phi=list(age.bins=c(0,1,4))),right=FALSE)
summary(getprobsCJS(sim.age))
markClosed Fit closed population abundance models for “traditional” capture-
mark-recapture data consisting of a single mark type
Description
This function fits closed population abundance models for “traditional” capture-mark-recapture data
consisting of a single mark type using Bayesian analysis methods. Markov chain Monte Carlo
(MCMC) is used to draw samples from the joint posterior distribution.
Usage
markClosed(
Enc.Mat,
covs = data.frame(),
mod.p = ~1,
parms = c("pbeta", "N"),
nchains = 1,
iter = 12000,
adapt = 1000,
bin = 50,
thin = 1,
burnin = 2000,
taccept = 0.44,
tuneadjust = 0.95,
proppbeta = 0.1,
propzp = 1,
propsigmap = 1,
npoints = 500,
a = 25,
mu0 = 0,
sigma2_mu0 = 1.75,
initial.values = NULL,
printlog = FALSE,
...
)
Arguments
Enc.Mat A matrix of observed encounter histories with rows corresponding to individuals
and columns corresponding to sampling occasions. With a single mark type,
encounter histories consist of only non-detections (0) and type 1 encounters (1).
covs A data frame of temporal covariates for detection probabilities (ignored unless
mms=NULL). The number of rows in the data frame must equal the number of
sampling occasions. Covariate names cannot be "time", "age", or "h"; these
names are reserved for temporal, behavioral, and individual effects when speci-
fying mod.p and mod.phi.
mod.p Model formula for detection probability. For example, mod.p=~1 specifies no
effects (i.e., intercept only), mod.p~time specifies temporal effects, mod.p~c
specifies behavioral response (i.e., trap "happy" or "shy"), mod.p~h specifies in-
dividual heterogeneity, and mod.p~time+c specifies additive temporal and be-
havioral effects.
parms A character vector giving the names of the parameters and latent variables to
monitor. Possible parameters are logit-scale detection probability parameters
("pbeta"), population abundance ("N"), logit-scale individual heterogeneity vari-
ance term ("sigma2_zp"), and logit-scale individual effects ("zp"). The log pos-
terior density ("logPosterior") may also be monitored. Setting parms="all"
monitors all possible parameters and latent variables.
nchains The number of parallel MCMC chains for the model.
iter The number of MCMC iterations.
adapt The number of iterations for proposal distribution adaptation. If adapt = 0 then
no adaptation occurs.
bin Bin length for calculating acceptance rates during adaptive phase (0 < bin <=
iter).
thin Thinning interval for monitored parameters.
burnin Number of burn-in iterations (0 <= burnin < iter).
taccept Target acceptance rate during adaptive phase (0 < taccept <= 1). Acceptance
rate is monitored every bin iterations. Default is taccept = 0.44.
tuneadjust Adjustment term during adaptive phase (0 < tuneadjust <= 1). If acceptance
rate is less than taccept, then proposal term (proppbeta, propzp, or propsigmap)
is multiplied by tuneadjust. If acceptance rate is greater than or equal to
taccept, then proposal term is divided by tuneadjust. Default is tuneadjust
= 0.95.
proppbeta Scalar or vector (of length k) specifying the initial standard deviation of the
Normal(pbeta[j], proppbeta[j]) proposal distribution. If proppbeta is a scalar,
then this value is used for all j = 1, ..., k. Default is proppbeta = 0.1.
propzp Scalar or vector (of length M) specifying the initial standard deviation of the
Normal(zp[i], propzp[i]) proposal distribution. If propzp is a scalar, then this
value is used for all i = 1, ..., M individuals. Default is propzp = 1.
propsigmap Scalar specifying the initial Gamma(shape = 1/propsigmap, scale = sigma_zp *
propsigmap) proposal distribution for sigma_zp = sqrt(sigma2_zp). Default is
propsigmap = 1.
npoints Number of Gauss-Hermite quadrature points to use for numerical integration.
Accuracy increases with the number of points, but so does computation time (see
the quadrature sketch after this argument list).
a Scale parameter for the [sigma_z] ~ half-Cauchy(a) prior for the individual heterogeneity
term sigma_zp = sqrt(sigma2_zp). Default is “uninformative” a = 25.
mu0 Scalar or vector (of length k) specifying the mean of the pbeta[j] ~ Normal(mu0[j],
sigma2_mu0[j]) prior. If mu0 is a scalar, then this value is used for all j = 1, ...,
k. Default is mu0 = 0.
sigma2_mu0 Scalar or vector (of length k) specifying the variance of the pbeta[j] ~ Normal(mu0[j],
sigma2_mu0[j]) prior. If sigma2_mu0 is a scalar, then this value is used for all j
= 1, ..., k. Default is sigma2_mu0 = 1.75.
initial.values Optional list of nchain list(s) specifying initial values for "pbeta", "zp", "sigma2_zp",
and "N". Default is initial.values = NULL, which causes initial values to be
generated automatically.
printlog Logical indicating whether to print the progress of chains and any errors to a log
file in the working directory. Ignored when nchains=1. Updates are printed to
log file as 1% increments of iter of each chain are completed. With >1 chains,
setting printlog=TRUE is probably most useful for Windows users because
progress and errors are automatically printed to the R console for "Unix-like"
machines (i.e., Mac and Linux) when printlog=FALSE. Default is printlog=FALSE.
... Additional "parameters" arguments for specifying mod.p. See make.design.data.
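To illustrate what npoints controls, the following is a minimal sketch of Gauss-Hermite quadrature
using gauss.quad from the statmod package (which multimark already imports); it approximates a
generic expectation over a normal random effect and does not reproduce the internal likelihood
calculations:
library(statmod)
npoints <- 20
gh <- gauss.quad(npoints, kind = "hermite")  # nodes and weights for weight function exp(-x^2)
beta <- 0.5
sigma2 <- 1
# E[plogis(beta + z)] with z ~ Normal(0, sigma2), via the change of variables z = sqrt(2*sigma2)*x
approx <- sum(gh$weights * plogis(beta + sqrt(2 * sigma2) * gh$nodes)) / sqrt(pi)
exact <- integrate(function(z) plogis(beta + z) * dnorm(z, 0, sqrt(sigma2)), -Inf, Inf)$value
c(quadrature = approx, integrate = exact)    # the two values should agree closely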
Details
The first time markClosed (or markCJS) is called, it will likely produce a firewall warning alerting
users that R has requested the ability to accept incoming network connections. Incoming network
connections are required to use parallel processing as implemented in markClosed. Note that setting
parms="all" is required for any markClosed model output to be used in multimodelClosed.
Value
A list containing the following:
mcmc Markov chain Monte Carlo object of class mcmc.list.
mod.p Model formula for detection probability (as specified by mod.p above).
mod.delta Formula always NULL; only for internal use in multimodelClosed.
DM A list of design matrices for detection probability generated for model mod.p,
where DM$p is the design matrix for initial capture probability (p) and DM$c
is the design matrix for recapture probability (c).
initial.values A list containing the parameter and latent variable values at iteration iter for
each chain. Values are provided for "pbeta", "zp", "sigma2_zp", and "N".
mms An object of class multimarksetup
Author(s)
<NAME>
See Also
multimodelClosed
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Run single chain using the default model for simulated ``traditional'' data
data<-simdataClosed(delta_1=1,delta_2=0)$Enc.Mat
sim.dot<-markClosed(data)
#Posterior summary for monitored parameters
summary(sim.dot$mcmc)
plot(sim.dot$mcmc)
markClosedSCR Fit spatial population abundance models for “traditional” capture-
mark-recapture data consisting of a single mark type
Description
This function fits spatial population abundance models for “traditional” capture-mark-recapture data
consisting of a single mark type using Bayesian analysis methods. Markov chain Monte Carlo
(MCMC) is used to draw samples from the joint posterior distribution.
Usage
markClosedSCR(
Enc.Mat,
trapCoords,
studyArea = NULL,
buffer = NULL,
ncells = 1024,
covs = data.frame(),
mod.p = ~1,
detection = "half-normal",
parms = c("pbeta", "N"),
nchains = 1,
iter = 12000,
adapt = 1000,
bin = 50,
thin = 1,
burnin = 2000,
taccept = 0.44,
tuneadjust = 0.95,
proppbeta = 0.1,
propsigma = 1,
propcenter = NULL,
sigma_bounds = NULL,
mu0 = 0,
sigma2_mu0 = 1.75,
initial.values = NULL,
scalemax = 10,
printlog = FALSE,
...
)
Arguments
Enc.Mat A matrix containing the observed encounter histories with rows corresponding
to individuals and (ntraps*noccas) columns corresponding to traps and sam-
pling occasions. The first noccas columns correspond to trap 1, the second
noccas columns correspond to trap 2, etc.
trapCoords A matrix of dimension ntraps x (2 + noccas) indicating the Cartesian coor-
dinates and operating occasions for the traps, where rows correspond to trap,
the first column the x-coordinate (“x”), and the second column the y-coordinate
(“y”). The last noccas columns indicate whether or not the trap was operat-
ing on each of the occasions, where ‘1’ indicates the trap was operating and ‘0’
indicates the trap was not operating. Ignored unless mms=NULL.
studyArea is a 3-column matrix containing the coordinates for the centroids of a contiguous
grid of cells that define the study area and available habitat. Each row corre-
sponds to a grid cell. The first 2 columns (“x” and “y”) indicate the Carte-
sian x- and y-coordinate for the centroid of each grid cell, and the third column
(“avail”) indicates whether the cell is available habitat (=1) or not (=0). All cells
must have the same resolution. If studyArea=NULL (the default) and mms=NULL,
then a square study area grid composed of ncells cells of available habitat is
drawn around the bounding box of trapCoords based on buffer. Ignored un-
less mms=NULL. Note that rows should be ordered by raster cell order (raster cell
numbers start at 1 in the upper left corner, and increase from left to right, and
then from top to bottom).
buffer A scalar in the same units as trapCoords indicating the buffer around the bounding
box of trapCoords for defining the study area when studyArea=NULL. Ignored
unless studyArea=NULL.
ncells The number of grid cells in the study area when studyArea=NULL. The square
root of ncells must be a whole number. Default is ncells=1024. Ignored
unless studyArea=NULL and mms=NULL.
covs A data frame of time- and/or trap-dependent covariates for detection probabil-
ities (ignored unless mms=NULL). The number of rows in the data frame must
equal the number of traps times the number of sampling occasions (ntraps*noccas),
where the first noccas rows correspond to trap 1, the second noccas rows correspond
to trap 2, etc. Covariate names cannot be "time", "age", or "h"; these names are
reserved for temporal, behavioral, and individual effects when specifying mod.p
and mod.phi.
mod.p Model formula for detection probability. For example, mod.p=~1 specifies no
effects (i.e., intercept only), mod.p~time specifies temporal effects, mod.p~c
specifies behavioral response (i.e., trap "happy" or "shy"), mod.p~trap speci-
fies trap effects, and mod.p~time+c specifies additive temporal and behavioral
effects.
detection Model for detection probability as a function of distance from activity centers.
Must be "half-normal" (of the form exp(−d²/(2σ²)), where d is distance) or
"exponential" (of the form exp(−d/λ)). See the sketch after this argument list.
parms A character vector giving the names of the parameters and latent variables to
monitor. Possible parameters are cloglog-scale detection probability param-
eters ("pbeta"), population abundance ("N"), and cloglog-scale distance term
for the detection function ("sigma2_scr" when detection=``half-normal''
or "lambda" when detection=``exponential''). Individual activity centers
("centers") and the log posterior density ("logPosterior") may also be mon-
itored. Setting parms="all" monitors all possible parameters and latent vari-
ables.
nchains The number of parallel MCMC chains for the model.
iter The number of MCMC iterations.
adapt The number of iterations for proposal distribution adaptation. If adapt = 0 then
no adaptation occurs.
bin Bin length for calculating acceptance rates during adaptive phase (0 < bin <=
iter).
thin Thinning interval for monitored parameters.
burnin Number of burn-in iterations (0 <= burnin < iter).
taccept Target acceptance rate during adaptive phase (0 < taccept <= 1). Acceptance
rate is monitored every bin iterations. Default is taccept = 0.44.
tuneadjust Adjustment term during adaptive phase (0 < tuneadjust <= 1). If acceptance
rate is less than taccept, then proposal term (proppbeta or propsigma) is mul-
tiplied by tuneadjust. If acceptance rate is greater than or equal to taccept,
then proposal term is divided by tuneadjust. Default is tuneadjust = 0.95.
proppbeta Scalar or vector (of length k) specifying the initial standard deviation of the
Normal(pbeta[j], proppbeta[j]) proposal distribution. If proppbeta is a scalar,
then this value is used for all j = 1, ..., k. Default is proppbeta = 0.1.
propsigma Scalar specifying the initial Gamma(shape = 1/propsigma, scale = sigma_scr *
propsigma) proposal distribution for sigma_scr = sqrt(sigma2_scr). Default is
propsigma = 1.
propcenter Scalar specifying the neighborhood distance when proposing updates to activity
centers. When propcenter=NULL (the default), then propcenter = a*10, where a
is the cell size for the study area grid, and each cell has (at most) approximately
300 neighbors.
sigma_bounds Positive vector of length 2 giving the lower and upper bounds of the [sigma_scr] ~
Uniform(sigma_bounds[1], sigma_bounds[2]) (or [sqrt(lambda)] when detection=``exponential'')
prior for the detection function term sigma_scr = sqrt(sigma2_scr) (or sqrt(lambda)).
When sigma_bounds = NULL (the default), then sigma_bounds = c(1.e-6, max(diff(range(studyArea[,"x"])), diff(range(studyArea[,"y"])))).
mu0 Scalar or vector (of length k) specifying the mean of the pbeta[j] ~ Normal(mu0[j],
sigma2_mu0[j]) prior. If mu0 is a scalar, then this value is used for all j = 1, ...,
k. Default is mu0 = 0.
sigma2_mu0 Scalar or vector (of length k) specifying the variance of the pbeta[j] ~ Normal(mu0[j],
sigma2_mu0[j]) prior. If sigma2_mu0 is a scalar, then this value is used for all j
= 1, ..., k. Default is sigma2_mu0 = 1.75.
initial.values Optional list of nchain list(s) specifying initial values for "pbeta", "N", "sigma2_scr",
and "centers". Default is initial.values = NULL, which causes initial values
to be generated automatically.
scalemax Upper bound for internal re-scaling of grid cell centroid coordinates. Default is
scalemax=10, which re-scales the centroids to be between 0 and 10. Re-scaling
is done internally to avoid numerical overflows during model fitting.
printlog Logical indicating whether to print the progress of chains and any errors to a log
file in the working directory. Ignored when nchains=1. Updates are printed to
log file as 1% increments of iter of each chain are completed. With >1 chains,
setting printlog=TRUE is probably most useful for Windows users because
progress and errors are automatically printed to the R console for "Unix-like"
machines (i.e., Mac and Linux) when printlog=FALSE. Default is printlog=FALSE.
... Additional "parameters" arguments for specifying mod.p. See make.design.data.
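As referenced under the detection argument above, a minimal sketch comparing the two detection
functions over distance from an activity center (the values of sigma2_scr and lambda are
illustrative only, not taken from any fitted model):
d <- seq(0, 5, length.out = 100)   # distance from activity center
sigma2_scr <- 1                    # illustrative half-normal scale
lambda <- 1                        # illustrative exponential scale
half.normal <- exp(-d^2/(2*sigma2_scr))
exponential <- exp(-d/lambda)
plot(d, half.normal, type = "l", xlab = "distance", ylab = "detection function")
lines(d, exponential, lty = 2)
legend("topright", legend = c("half-normal", "exponential"), lty = 1:2)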
Details
The first time markClosedSCR is called, it will likely produce a firewall warning alerting users
that R has requested the ability to accept incoming network connections. Incoming network con-
nections are required to use parallel processing as implemented in markClosedSCR. Note that setting
parms="all" is required for any markClosedSCR model output to be used in multimodelClosedSCR.
Value
A list containing the following:
mcmc Markov chain Monte Carlo object of class mcmc.list.
mod.p Model formula for detection probability (as specified by mod.p above).
mod.delta Formula always NULL; only for internal use in multimodelClosedSCR.
mod.det Model formula for detection function (as specified by detection above).
DM A list of design matrices for detection probability generated for model mod.p,
where DM$p is the design matrix for initial capture probability (p) and DM$c
is the design matrix for recapture probability (c).
initial.values A list containing the parameter and latent variable values at iteration iter for
each chain. Values are provided for "pbeta", "N", "sigma2_scr", and "centers".
mms An object of class multimarkSCRsetup
Author(s)
<NAME>
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. and Karanth,
K.U. 2012. Program SPACECAP: software for estimating animal density using spatially explicit
capture-recapture models. Methods in Ecology and Evolution 3:1067-1072.
<NAME>., <NAME>., <NAME>., and <NAME>. 2016. Capture-recapture abundance
estimation using a semi-complete data likelihood approach. The Annals of Applied Statistics 10:
264-285
<NAME>., <NAME>., <NAME>. and <NAME>. 2009. Bayesian inference in
camera trapping studies for a class of spatial capture-recapture models. Ecology 90: 3233-3244.
See Also
multimodelClosedSCR
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Run single chain using the default model for ``traditional'' tiger data of Royle et al (2009)
Enc.Mat<-tiger$Enc.Mat
trapCoords<-tiger$trapCoords
studyArea<-tiger$studyArea
tiger.dot<-markClosedSCR(Enc.Mat,trapCoords,studyArea,iter=100,adapt=50,burnin=50)
#Posterior summary for monitored parameters
summary(tiger.dot$mcmc)
plot(tiger.dot$mcmc)
multimarkCJS Fit open population survival models for capture-mark-recapture data
consisting of multiple non-invasive marks
Description
This function fits Cormack-Jolly-Seber (CJS) open population models for survival probability (φ)
and capture probability (p) from capture-mark-recapture data consisting of multiple non-invasive
marks. Using Bayesian analysis methods, Markov chain Monte Carlo (MCMC) is used to draw
samples from the joint posterior distribution.
Usage
multimarkCJS(
Enc.Mat,
data.type = "never",
covs = data.frame(),
mms = NULL,
mod.p = ~1,
mod.phi = ~1,
mod.delta = ~type,
parms = c("pbeta", "phibeta", "delta"),
nchains = 1,
iter = 12000,
adapt = 1000,
bin = 50,
thin = 1,
burnin = 2000,
taccept = 0.44,
tuneadjust = 0.95,
proppbeta = 0.1,
propzp = 1,
propsigmap = 1,
propphibeta = 0.1,
propzphi = 1,
propsigmaphi = 1,
maxnumbasis = 1,
pbeta0 = 0,
pSigma0 = 1,
phibeta0 = 0,
phiSigma0 = 1,
l0p = 1,
d0p = 0.01,
l0phi = 1,
d0phi = 0.01,
a0delta = 1,
a0alpha = 1,
b0alpha = 1,
a0psi = 1,
b0psi = 1,
initial.values = NULL,
known = integer(),
link = "probit",
printlog = FALSE,
...
)
Arguments
Enc.Mat A matrix of observed encounter histories with rows corresponding to individuals
and columns corresponding to sampling occasions (ignored unless mms=NULL).
data.type Specifies the encounter history data type. All data types include non-detections
(type 0 encounter), type 1 encounter (e.g., left-side), and type 2 encounters (e.g.,
right-side). When both type 1 and type 2 encounters occur for the same individ-
ual within a sampling occasion, these can either be "non-simultaneous" (type 3
encounter) or "simultaneous" (type 4 encounter). Three data types are currently
permitted:
data.type="never" indicates both type 1 and type 2 encounters are never ob-
served for the same individual within a sampling occasion, and observed en-
counter histories therefore include only type 1 or type 2 encounters (e.g., only
left- and right-sided photographs were collected). Observed encounter histories
can consist of non-detections (0), type 1 encounters (1), and type 2 encounters
(2). See bobcat. Latent encounter histories consist of non-detections (0), type
1 encounters (1), type 2 encounters (2), and type 3 encounters (3).
data.type="sometimes" indicates both type 1 and type 2 encounters are some-
times observed (e.g., both-sided photographs are sometimes obtained, but not
necessarily for all individuals). Observed encounter histories can consist of non-
detections (0), type 1 encounters (1), type 2 encounters (2), type 3 encounters
(3), and type 4 encounters (4). Type 3 encounters can only be observed when
an individual has at least one type 4 encounter. Latent encounter histories con-
sist of non-detections (0), type 1 encounters (1), type 2 encounters (2), type 3
encounters (3), and type 4 encounters (4).
data.type="always" indicates both type 1 and type 2 encounters are always
observed, but some encounter histories may still include only type 1 or type
2 encounters. Observed encounter histories can consist of non-detections (0),
type 1 encounters (1), type 2 encounters (2), and type 4 encounters (4). Latent
encounter histories consist of non-detections (0), type 1 encounters (1), type 2
encounters (2), and type 4 encounters (4).
covs A data frame of temporal covariates for detection probabilities (ignored unless
mms=NULL). The number of rows in the data frame must equal the number of
sampling occasions. Covariate names cannot be "time", "age", or "h"; these
names are reserved for temporal, behavioral, and individual effects when speci-
fying mod.p and mod.phi.
mms An optional object of class multimarksetup-class; if NULL it is created. See
processdata.
mod.p Model formula for detection probability (p). For example, mod.p=~1 spec-
ifies no effects (i.e., intercept only), mod.p~time specifies temporal effects,
mod.p~age specifies age effects, mod.p~h specifies individual heterogeneity,
and mod.p~time+age specifies additive temporal and age effects.
mod.phi Model formula for survival probability (φ). For example, mod.phi=~1 speci-
fies no effects (i.e., intercept only), mod.phi~time specifies temporal effects,
mod.phi~age specifies age effects, mod.phi~h specifies individual heterogene-
ity, and mod.phi~time+age specifies additive temporal and age effects.
mod.delta Model formula for conditional probabilities of type 1 (delta_1) and type 2 (delta_2)
encounters, given detection. Currently only mod.delta=~1 (i.e., δ1 = δ2 ) and
mod.delta=~type (i.e., δ1 ≠ δ2) are implemented.
parms A character vector giving the names of the parameters and latent variables to
monitor. Possible parameters are probit-scale detection probability parameters
("pbeta" for p and "phibeta" for φ), conditional probability of type 1 or type
2 encounter, given detection ("delta"), probability of simultaneous type 1 and
type 2 detection, given both types encountered ("alpha"), probit-scale individ-
ual heterogeneity variance terms ("sigma2_zp" for p and "sigma2_zphi" for
φ), probit-scale individual effects ("zp" and "zphi"), and the probability that a
randomly selected individual from the M = nrow(Enc.Mat) observed individuals
belongs to the n unique individuals encountered at least once ("psi"). Individ-
ual encounter history indices ("H"), latent variable indicators for whether each
individual was alive (1) or dead (0) during each sampling occasion ("q"), and
the log likelihood ("loglike") may also be monitored. Setting parms="all"
monitors all possible parameters and latent variables.
nchains The number of parallel MCMC chains for the model.
iter The number of MCMC iterations.
adapt Ignored; no adaptive phase is needed for "probit" link.
bin Ignored; no adaptive phase is needed for "probit" link.
thin Thinning interval for monitored parameters.
burnin Number of burn-in iterations (0 <= burnin < iter).
taccept Ignored; no adaptive phase is needed for "probit" link.
tuneadjust Ignored; no adaptive phase is needed for "probit" link.
proppbeta Ignored; no adaptive phase is needed for "probit" link.
propzp Ignored; no adaptive phase is needed for "probit" link.
propsigmap Ignored; no adaptive phase is needed for "probit" link.
propphibeta Ignored; no adaptive phase is needed for "probit" link.
propzphi Ignored; no adaptive phase is needed for "probit" link.
propsigmaphi Ignored; no adaptive phase is needed for "probit" link.
maxnumbasis Maximum number of basis vectors to use when proposing latent history fre-
quency updates. Default is maxnumbasis = 1, but higher values can potentially
improve mixing.
pbeta0 Scalar or vector (of length k) specifying the mean of the pbeta ~ multivariateNormal(pbeta0,
pSigma0) prior. If pbeta0 is a scalar, then this value is used for all j = 1, ..., k.
Default is pbeta0 = 0.
pSigma0 Scalar or k x k matrix specifying the covariance matrix of the pbeta ~ multivariateNormal(pbeta0,
pSigma0) prior. If pSigma0 is a scalar, then this value is used for all pSigma0[j,j]
for j = 1, ..., k (with pSigma0[j,l] = 0 for all j ≠ l). Default is pSigma0 = 1.
phibeta0 Scalar or vector (of length k) specifying the mean of the phibeta ~ multivariateNormal(phibeta0,
phiSigma0) prior. If phibeta0 is a scalar, then this value is used for all j = 1, ..., k.
Default is phibeta0 = 0.
phiSigma0 Scalar or k x k matrix specifying the covariance matrix of the phibeta ~ multivariateNormal(phibeta0,
phiSigma0) prior. If phiSigma0 is a scalar, then this value is used for all phiSigma0[j,j]
for j = 1, ..., k (with phiSigma0[j,l] = 0 for all j ≠ l). Default is phiSigma0 = 1.
l0p Specifies "shape" parameter for [sigma2_zp] ~ invGamma(l0p,d0p) prior. De-
fault is l0p = 1.
d0p Specifies "scale" parameter for [sigma2_zp] ~ invGamma(l0p,d0p) prior. De-
fault is d0p = 0.01.
l0phi Specifies "shape" parameter for [sigma2_zphi] ~ invGamma(l0phi,d0phi) prior.
Default is l0phi = 1.
d0phi Specifies "scale" parameter for [sigma2_zphi] ~ invGamma(l0phi,d0phi) prior.
Default is d0phi = 0.01.
a0delta Scalar or vector (of length d) specifying the prior for the conditional (on detec-
tion) probability of type 1 (delta_1), type 2 (delta_2), and both type 1 and type 2
encounters (1-delta_1-delta_2). If a0delta is a scalar, then this value is used for
all a0delta[j] for j = 1, ..., d. For mod.delta=~type, d=3 with [delta_1, delta_2,
1-delta_1-delta_2] ~ Dirichlet(a0delta) prior. For mod.delta=~1, d=2 with [tau]
~ Beta(a0delta[1],a0delta[2]) prior, where (delta_1,delta_2,1-delta_1-delta_2) =
(tau/2,tau/2,1-tau). See McClintock et al. (2013) for more details.
a0alpha Specifies "shape1" parameter for [alpha] ~ Beta(a0alpha, b0alpha) prior. Only
applicable when data.type = "sometimes". Default is a0alpha = 1. Note that
when a0alpha = 1 and b0alpha = 1, then [alpha] ~ Unif(0,1).
b0alpha Specifies "shape2" parameter for [alpha] ~ Beta(a0alpha, b0alpha) prior. Only
applicable when data.type = "sometimes". Default is b0alpha = 1. Note that
when a0alpha = 1 and b0alpha = 1, then [alpha] ~ Unif(0,1).
a0psi Specifies "shape1" parameter for [psi] ~ Beta(a0psi,b0psi) prior. Default is
a0psi = 1.
b0psi Specifies "shape2" parameter for [psi] ~ Beta(a0psi,b0psi) prior. Default is
b0psi = 1.
initial.values Optional list of nchain list(s) specifying initial values for parameters and latent
variables. Default is initial.values = NULL, which causes initial values to be
generated automatically. In addition to the parameters ("pbeta", "phibeta",
"delta_1", "delta_2", "alpha", "sigma2_zp", "sigma2_zphi", "zp", "zphi",
and "psi"), initial values can be specified for the initial latent history frequencies
("x"), initial individual encounter history indices ("H"), and initial latent variable
indicators for whether each individual was alive (1) or dead (0) during each
sampling occasion ("q").
known Optional integer vector indicating whether the encounter history of an individual
is known with certainty (i.e., the observed encounter history is the true encounter
history). Encounter histories with at least one type 4 encounter are automatically
assumed to be known, and known does not need to be specified unless there ex-
ist encounter histories that do not contain a type 4 encounter that happen to be
known with certainty (e.g., from independent telemetry studies). If specified,
known = c(v_1,v_2,...,v_M) must be a vector of length M = nrow(Enc.Mat)
where v_i = 1 if the encounter history for individual i is known (v_i = 0 other-
wise). Note that known all-zero encounter histories (e.g., ‘000’) are ignored.
link Link function for survival and capture probabilities. Only probit link is currently
implemented.
printlog Logical indicating whether to print the progress of chains and any errors to a log
file in the working directory. Ignored when nchains=1. Updates are printed to
log file as 1% increments of iter of each chain are completed. With >1 chains,
setting printlog=TRUE is probably most useful for Windows users because
progress and errors are automatically printed to the R console for "Unix-like"
machines (i.e., Mac and Linux) when printlog=FALSE. Default is printlog=FALSE.
... Additional "parameters" arguments for specifying mod.p and mod.phi. See
RMark::make.design.data.
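As referenced under the data.type argument above, a toy observed encounter matrix for
data.type="never" (purely illustrative; 4 individuals over 3 occasions using the 0/1/2 codes
described there, with each observed history containing a single encounter type because the
left and right sides cannot be linked):
Enc.Mat.toy <- matrix(c(1, 0, 1,
                        0, 2, 0,
                        1, 1, 0,
                        0, 0, 2),
                      nrow = 4, byrow = TRUE)
Enc.Mat.toy   # 0 = non-detection, 1 = type 1 (e.g., left-side), 2 = type 2 (e.g., right-side)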
Details
The first time multimarkCJS (or multimarkClosed) is called, it will likely produce a firewall warn-
ing alerting users that R has requested the ability to accept incoming network connections. Incom-
ing network connections are required to use parallel processing as implemented in multimarkCJS.
Note that setting parms="all" is required for any multimarkCJS model output to be used in
multimodelCJS.
Value
A list containing the following:
mcmc Markov chain Monte Carlo object of class mcmc.list.
mod.p Model formula for detection probability (as specified by mod.p above).
mod.phi Model formula for survival probability (as specified by mod.phi above).
mod.delta Formula always NULL; only for internal use in multimodelCJS.
DM A list of design matrices for detection and survival probability respectively gen-
erated by mod.p and mod.phi, where DM$p is the design matrix for capture
probability (p) and DM$phi is the design matrix for survival probability (φ).
initial.values A list containing the parameter and latent variable values at iteration iter for
each chain. Values are provided for "pbeta", "phibeta", "delta_1", "delta_2",
"alpha", "sigma2_zp" "sigma2_zphi", "zp", "zphi", "psi", "x", "H", and "q".
mms An object of class multimarksetup
Author(s)
<NAME>
References
<NAME>., and <NAME>. 2013. Mark-recapture with multiple, non-invasive marks. Biometrics
69: 766-775.
<NAME>., <NAME>., <NAME>., and <NAME>. 2013. Integrated modeling of
bilateral photo-identification data in mark-recapture analyses. Ecology 94: 1464-1471.
<NAME>., <NAME>., <NAME>., and <NAME>. 2014. Probit models for capture-
recapture data subject to imperfect detection, individual heterogeneity and misidentification. The
Annals of Applied Statistics 8: 2461-2484.
See Also
processdata, multimodelCJS
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Simulate open population data using defaults
data <- simdataCJS()
#Fit default open population model
sim.dot <- multimarkCJS(data$Enc.Mat)
#Posterior summary for monitored parameters
summary(sim.dot$mcmc)
plot(sim.dot$mcmc)
#Fit ``age'' model with 2 age classes (e.g., juvenile and adult) for survival
#using 'parameters' and 'right' arguments from RMark::make.design.data
sim.age <- multimarkCJS(data$Enc.Mat,mod.phi=~age,
parameters=list(Phi=list(age.bins=c(0,1,4))),right=FALSE)
summary(getprobsCJS(sim.age))
multimarkClosed Fit closed population abundance models for capture-mark-recapture
data consisting of multiple non-invasive marks
Description
This function fits closed population abundance models for capture-mark-recapture data consist-
ing of multiple non-invasive marks using Bayesian analysis methods. Markov chain Monte Carlo
(MCMC) is used to draw samples from the joint posterior distribution.
Usage
multimarkClosed(
Enc.Mat,
data.type = "never",
covs = data.frame(),
mms = NULL,
mod.p = ~1,
mod.delta = ~type,
parms = c("pbeta", "delta", "N"),
nchains = 1,
iter = 12000,
adapt = 1000,
bin = 50,
thin = 1,
burnin = 2000,
taccept = 0.44,
tuneadjust = 0.95,
proppbeta = 0.1,
propzp = 1,
propsigmap = 1,
npoints = 500,
maxnumbasis = 1,
a0delta = 1,
a0alpha = 1,
b0alpha = 1,
a = 25,
mu0 = 0,
sigma2_mu0 = 1.75,
a0psi = 1,
b0psi = 1,
initial.values = NULL,
known = integer(),
printlog = FALSE,
...
)
Arguments
Enc.Mat A matrix of observed encounter histories with rows corresponding to individuals
and columns corresponding to sampling occasions (ignored unless mms=NULL).
data.type Specifies the encounter history data type. All data types include non-detections
(type 0 encounter), type 1 encounter (e.g., left-side), and type 2 encounters (e.g.,
right-side). When both type 1 and type 2 encounters occur for the same individ-
ual within a sampling occasion, these can either be "non-simultaneous" (type 3
encounter) or "simultaneous" (type 4 encounter). Three data types are currently
permitted:
data.type="never" indicates both type 1 and type 2 encounters are never ob-
served for the same individual within a sampling occasion, and observed en-
counter histories therefore include only type 1 or type 2 encounters (e.g., only
left- and right-sided photographs were collected). Observed encounter histories
can consist of non-detections (0), type 1 encounters (1), and type 2 encounters
(2). See bobcat. Latent encounter histories consist of non-detections (0), type
1 encounters (1), type 2 encounters (2), and type 3 encounters (3).
data.type="sometimes" indicates both type 1 and type 2 encounters are some-
times observed (e.g., both-sided photographs are sometimes obtained, but not
necessarily for all individuals). Observed encounter histories can consist of non-
detections (0), type 1 encounters (1), type 2 encounters (2), type 3 encounters
(3), and type 4 encounters (4). Type 3 encounters can only be observed when
an individual has at least one type 4 encounter. Latent encounter histories con-
sist of non-detections (0), type 1 encounters (1), type 2 encounters (2), type 3
encounters (3), and type 4 encounters (4).
data.type="always" indicates both type 1 and type 2 encounters are always
observed, but some encounter histories may still include only type 1 or type
2 encounters. Observed encounter histories can consist of non-detections (0),
type 1 encounters (1), type 2 encounters (2), and type 4 encounters (4). Latent
encounter histories consist of non-detections (0), type 1 encounters (1), type 2
encounters (2), and type 4 encounters (4).
covs A data frame of temporal covariates for detection probabilities (ignored unless
mms=NULL). The number of rows in the data frame must equal the number of
sampling occasions. Covariate names cannot be "time", "c", or "h"; these names
are reserved for temporal, behavioral, and individual effects when specifying
mod.p and mod.phi.
mms An optional object of class multimarksetup-class; if NULL it is created. See
processdata.
mod.p Model formula for detection probability. For example, mod.p=~1 specifies no
effects (i.e., intercept only), mod.p~time specifies temporal effects, mod.p~c
specifies behavioral response (i.e., trap "happy" or "shy"), mod.p~h specifies in-
dividual heterogeneity, and mod.p~time+c specifies additive temporal and be-
havioral effects.
mod.delta Model formula for conditional probabilities of type 1 (delta_1) and type 2 (delta_2)
encounters, given detection. Currently only mod.delta=~1 (i.e., δ1 = δ2 ) and
mod.delta=~type (i.e., δ1 ≠ δ2) are implemented.
parms A character vector giving the names of the parameters and latent variables to
monitor. Possible parameters are logit-scale detection probability parameters
("pbeta"), population abundance ("N"), conditional probability of type 1 or type
2 encounter, given detection ("delta"), probability of simultaneous type 1 and
type 2 detection, given both types encountered ("alpha"), logit-scale individ-
ual heterogeneity variance term ("sigma2_zp"), logit-scale individual effects
("zp"), and the probability that a randomly selected individual from the M =
nrow(Enc.Mat) observed individuals belongs to the n unique individuals en-
countered at least once ("psi"). Individual encounter history indices ("H") and
the log posterior density ("logPosterior") may also be monitored. Setting
parms="all" monitors all possible parameters and latent variables.
nchains The number of parallel MCMC chains for the model.
iter The number of MCMC iterations.
adapt The number of iterations for proposal distribution adaptation. If adapt = 0 then
no adaptation occurs.
bin Bin length for calculating acceptance rates during adaptive phase (0 < bin <=
iter).
thin Thinning interval for monitored parameters.
burnin Number of burn-in iterations (0 <= burnin < iter).
taccept Target acceptance rate during adaptive phase (0 < taccept <= 1). Acceptance
rate is monitored every bin iterations. Default is taccept = 0.44.
tuneadjust Adjustment term during adaptive phase (0 < tuneadjust <= 1). If acceptance
rate is less than taccept, then proposal term (proppbeta, propzp, or propsigmap)
is multiplied by tuneadjust. If acceptance rate is greater than or equal to
taccept, then proposal term is divided by tuneadjust. Default is tuneadjust
= 0.95.
proppbeta Scalar or vector (of length k) specifying the initial standard deviation of the
Normal(pbeta[j], proppbeta[j]) proposal distribution. If proppbeta is a scalar,
then this value is used for all j = 1, ..., k. Default is proppbeta = 0.1.
propzp Scalar or vector (of length M) specifying the initial standard deviation of the
Normal(zp[i], propzp[i]) proposal distribution. If propzp is a scalar, then this
value is used for all i = 1, ..., M individuals. Default is propzp = 1.
propsigmap Scalar specifying the initial Gamma(shape = 1/propsigmap, scale = sigma_zp *
propsigmap) proposal distribution for sigma_zp = sqrt(sigma2_zp). Default is
propsigmap = 1.
npoints Number of Gauss-Hermite quadrature points to use for numerical integration.
Accuracy increases with number of points, but so does computation time.
maxnumbasis Maximum number of basis vectors to use when proposing latent history fre-
quency updates. Default is maxnumbasis = 1, but higher values can potentially
improve mixing.
a0delta Scalar or vector (of length d) specifying the prior for the conditional (on detec-
tion) probability of type 1 (delta_1), type 2 (delta_2), and both type 1 and type 2
encounters (1-delta_1-delta_2). If a0delta is a scalar, then this value is used for
all a0delta[j] for j = 1, ..., d. For mod.delta=~type, d=3 with [delta_1, delta_2,
1-delta_1-delta_2] ~ Dirichlet(a0delta) prior. For mod.delta=~1, d=2 with [tau]
~ Beta(a0delta[1],a0delta[2]) prior, where (delta_1,delta_2,1-delta_1-delta_2) =
(tau/2,tau/2,1-tau). See McClintock et al. (2013) for more details.
a0alpha Specifies "shape1" parameter for [alpha] ~ Beta(a0alpha, b0alpha) prior. Only
applicable when data.type = "sometimes". Default is a0alpha = 1. Note that
when a0alpha = 1 and b0alpha = 1, then [alpha] ~ Unif(0,1).
b0alpha Specifies "shape2" parameter for [alpha] ~ Beta(a0alpha, b0alpha) prior. Only
applicable when data.type = "sometimes". Default is b0alpha = 1. Note that
when a0alpha = 1 and b0alpha = 1, then [alpha] ~ Unif(0,1).
a Scale parameter for the [sigma_z] ~ half-Cauchy(a) prior for the individual heterogeneity
term sigma_zp = sqrt(sigma2_zp). Default is “uninformative” a = 25.
mu0 Scalar or vector (of length k) specifying the mean of the pbeta[j] ~ Normal(mu0[j],
sigma2_mu0[j]) prior. If mu0 is a scalar, then this value is used for all j = 1, ...,
k. Default is mu0 = 0.
sigma2_mu0 Scalar or vector (of length k) specifying the variance of the pbeta[j] ~ Normal(mu0[j],
sigma2_mu0[j]) prior. If sigma2_mu0 is a scalar, then this value is used for all j
= 1, ..., k. Default is sigma2_mu0 = 1.75.
a0psi Specifies "shape1" parameter for [psi] ~ Beta(a0psi,b0psi) prior. Default is
a0psi = 1.
b0psi Specifies "shape2" parameter for [psi] ~ Beta(a0psi,b0psi) prior. Default is
b0psi = 1.
initial.values Optional list of nchain list(s) specifying initial values for parameters and la-
tent variables. Default is initial.values = NULL, which causes initial values
to be generated automatically. In addition to the parameters ("pbeta", "N",
"delta_1", "delta_2", "alpha", "sigma2_zp", "zp", and "psi"), initial val-
ues can be specified for the initial latent history frequencies ("x") and initial
individual encounter history indices ("H").
known Optional integer vector indicating whether the encounter history of an individual
is known with certainty (i.e., the observed encounter history is the true encounter
history). Encounter histories with at least one type 4 encounter are automatically
assumed to be known, and known does not need to be specified unless there ex-
ist encounter histories that do not contain a type 4 encounter that happen to be
known with certainty (e.g., from independent telemetry studies). If specified,
known = c(v_1,v_2,...,v_M) must be a vector of length M = nrow(Enc.Mat)
where v_i = 1 if the encounter history for individual i is known (v_i = 0 other-
wise). Note that known all-zero encounter histories (e.g., ‘000’) are ignored.
printlog Logical indicating whether to print the progress of chains and any errors to a log
file in the working directory. Ignored when nchains=1. Updates are printed to
log file as 1% increments of iter of each chain are completed. With >1 chains,
setting printlog=TRUE is probably most useful for Windows users because
progress and errors are automatically printed to the R console for "Unix-like"
machines (i.e., Mac and Linux) when printlog=FALSE. Default is printlog=FALSE.
... Additional "parameters" arguments for specifying mod.p. See make.design.data.
Details
The first time multimarkClosed (or multimarkCJS) is called, it will likely produce a firewall warn-
ing alerting users that R has requested the ability to accept incoming network connections. Incoming
network connections are required to use parallel processing as implemented in multimarkClosed.
Note that setting parms="all" is required for any multimarkClosed model output to be used in
multimodelClosed.
Value
A list containing the following:
mcmc Markov chain Monte Carlo object of class mcmc.list.
mod.p Model formula for detection probability (as specified by mod.p above).
mod.delta Model formula for conditional probability of type 1 or type 2 encounter, given
detection (as specified by mod.delta above).
DM A list of design matrices for detection probability generated for model mod.p,
where DM$p is the design matrix for initial capture probability (p) and DM$c
is the design matrix for recapture probability (c).
initial.values A list containing the parameter and latent variable values at iteration iter for
each chain. Values are provided for "pbeta", "N", "delta_1", "delta_2", "alpha",
"sigma2_zp", "zp", "psi", "x", and "H".
mms An object of class multimarksetup
Author(s)
<NAME>
References
<NAME>., and <NAME>. 2013. Mark-recapture with multiple, non-invasive marks. Biometrics
69: 766-775.
<NAME>., <NAME>., <NAME>., and <NAME>. 2013. Integrated modeling of
bilateral photo-identification data in mark-recapture analyses. Ecology 94: 1464-1471.
<NAME>., <NAME>., <NAME>., and <NAME>. 2014. Probit models for capture-
recapture data subject to imperfect detection, individual heterogeneity and misidentification. The
Annals of Applied Statistics 8: 2461-2484.
See Also
bobcat, processdata, multimodelClosed
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Run single chain using the default model for bobcat data
bobcat.dot<-multimarkClosed(bobcat)
#Posterior summary for monitored parameters
summary(bobcat.dot$mcmc)
plot(bobcat.dot$mcmc)
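#For illustration of the known argument only: a minimal sketch that flags two
#hypothetical individuals (indices invented here) whose encounter histories are
#assumed known with certainty, e.g. from independent telemetry
known.vec <- integer(nrow(bobcat))
known.vec[c(3,7)] <- 1
bobcat.known <- multimarkClosed(bobcat,known=known.vec)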
multimarkClosedSCR Fit spatially-explicit population abundance models for capture-mark-
recapture data consisting of multiple non-invasive marks
Description
This function fits spatially-explicit population abundance models for capture-mark-recapture data
consisting of multiple non-invasive marks using Bayesian analysis methods. Markov chain Monte
Carlo (MCMC) is used to draw samples from the joint posterior distribution.
Usage
multimarkClosedSCR(
Enc.Mat,
trapCoords,
studyArea = NULL,
buffer = NULL,
ncells = 1024,
data.type = "never",
covs = data.frame(),
mms = NULL,
mod.p = ~1,
mod.delta = ~type,
detection = "half-normal",
parms = c("pbeta", "delta", "N"),
nchains = 1,
iter = 12000,
adapt = 1000,
bin = 50,
thin = 1,
burnin = 2000,
taccept = 0.44,
tuneadjust = 0.95,
proppbeta = 0.1,
propsigma = 1,
propcenter = NULL,
maxnumbasis = 1,
a0delta = 1,
a0alpha = 1,
b0alpha = 1,
sigma_bounds = NULL,
mu0 = 0,
sigma2_mu0 = 1.75,
a0psi = 1,
b0psi = 1,
initial.values = NULL,
known = integer(),
scalemax = 10,
printlog = FALSE,
...
)
Arguments
Enc.Mat A matrix containing the observed encounter histories with rows corresponding
to individuals and (ntraps*noccas) columns corresponding to traps and sam-
pling occasions. The first noccas columns correspond to trap 1, the second
noccas columns correspond to trap 2, etc. Ignored unless mms=NULL.
trapCoords A matrix of dimension ntraps x (2 + noccas) indicating the Cartesian coor-
dinates and operating occasions for the traps, where rows correspond to trap,
the first column the x-coordinate (“x”), and the second column the y-coordinate
(“y”). The last noccas columns indicate whether or not the trap was operat-
ing on each of the occasions, where ‘1’ indciates the trap was operating and ‘0’
indicates the trap was not operating. Ignored unless mms=NULL.
studyArea is a 3-column matrix containing the coordinates for the centroids of a contigu-
ous grid of cells that define the study area and available habitat. Each row cor-
responds to a grid cell. The first 2 columns (“x” and “y”) indicate the Carte-
sian x- and y-coordinate for the centroid of each grid cell, and the third column
(“avail”) indicates whether the cell is available habitat (=1) or not (=0). All cells
must be square and have the same resolution. If studyArea=NULL (the default)
and mms=NULL, then a square study area grid composed of ncells cells of avail-
able habitat is drawn around the bounding box of trapCoords based on buffer.
Ignored unless mms=NULL. Note that rows should be ordered in raster cell order
(raster cell numbers start at 1 in the upper left corner, and increase from left to
right, and then from top to bottom).
buffer A scalar in the same units as trapCoords indicating the buffer around the bounding
box of trapCoords for defining the study area when studyArea=NULL. Ignored
unless studyArea=NULL and mms=NULL.
ncells The number of grid cells in the study area when studyArea=NULL. The square
root of ncells must be a whole number. Default is ncells=1024. Ignored
unless studyArea=NULL and mms=NULL.
data.type Specifies the encounter history data type. All data types include non-detections
(type 0 encounter), type 1 encounter (e.g., left-side), and type 2 encounters (e.g.,
right-side). When both type 1 and type 2 encounters occur for the same individ-
ual within a sampling occasion, these can either be "non-simultaneous" (type 3
encounter) or "simultaneous" (type 4 encounter). Three data types are currently
permitted:
data.type="never" indicates both type 1 and type 2 encounters are never ob-
served for the same individual within a sampling occasion, and observed en-
counter histories therefore include only type 1 or type 2 encounters (e.g., only
left- and right-sided photographs were collected). Observed encounter histories
can consist of non-detections (0), type 1 encounters (1), and type 2 encounters
(2). See bobcat. Latent encounter histories consist of non-detections (0), type
1 encounters (1), type 2 encounters (2), and type 3 encounters (3).
data.type="sometimes" indicates both type 1 and type 2 encounters are some-
times observed (e.g., both-sided photographs are sometimes obtained, but not
necessarily for all individuals). Observed encounter histories can consist of non-
detections (0), type 1 encounters (1), type 2 encounters (2), type 3 encounters
(3), and type 4 encounters (4). Type 3 encounters can only be observed when
an individual has at least one type 4 encounter. Latent encounter histories con-
sist of non-detections (0), type 1 encounters (1), type 2 encounters (2), type 3
encounters (3), and type 4 encounters (4).
data.type="always" indicates both type 1 and type 2 encounters are always
observed, but some encounter histories may still include only type 1 or type
2 encounters. Observed encounter histories can consist of non-detections (0),
type 1 encounters (1), type 2 encounters (2), and type 4 encounters (4). Latent
encounter histories consist of non-detections (0), type 1 encounters (1), type 2
encounters (2), and type 4 encounters (4).
covs A data frame of time- and/or trap-dependent covariates for detection probabil-
ities (ignored unless mms=NULL). The number of rows in the data frame must
equal the number of traps times the number of sampling occasions (ntraps*noccas),
where the first noccas rows correspond to trap 1, the second noccas rows cor-
respond to trap 2, etc. Covariate names cannot be "time", "age", or "h"; these
names are reserved for temporal, behavioral, and individual effects when speci-
fying mod.p and mod.phi.
mms An optional object of class multimarkSCRsetup-class; if NULL it is created.
See processdataSCR.
mod.p Model formula for detection probability as a function of distance from activity
centers. For example, mod.p=~1 specifies no effects (i.e., intercept only) other
than distance, mod.p=~time specifies temporal effects, mod.p=~c specifies behav-
ioral response (i.e., trap "happy" or "shy"), mod.p=~trap specifies trap effects, and
mod.p=~time+c specifies additive temporal and behavioral effects.
mod.delta Model formula for conditional probabilities of type 1 (delta_1) and type 2 (delta_2)
encounters, given detection. Currently only mod.delta=~1 (i.e., δ1 = δ2) and
mod.delta=~type (i.e., δ1 ≠ δ2) are implemented.
detection Model for detection probability as a function of distance from activity centers.
Must be "half-normal" (of the form exp(−d²/(2σ²)), where d is distance) or
"exponential" (of the form exp(−d/λ)).
parms A character vector giving the names of the parameters and latent variables to
monitor. Possible parameters are cloglog-scale detection probability parameters
("pbeta"), population abundance ("N"), conditional probability of type 1 or type
2 encounter, given detection ("delta"), probability of simultaneous type 1 and
type 2 detection, given both types encountered ("alpha"), cloglog-scale distance
term for the detection function ("sigma2_scr" when detection="half-normal"
or "lambda" when detection="exponential"), and the probability that a
randomly selected individual from the M = nrow(Enc.Mat) observed individuals
belongs to the n unique individuals encountered at least once ("psi"). Individual
activity centers ("centers"), encounter history indices ("H"), and the log poste-
rior density ("logPosterior") may also be monitored. Setting parms="all"
monitors all possible parameters and latent variables.
nchains The number of parallel MCMC chains for the model.
iter The number of MCMC iterations.
adapt The number of iterations for proposal distribution adaptation. If adapt = 0 then
no adaptation occurs.
bin Bin length for calculating acceptance rates during adaptive phase (0 < bin <=
iter).
thin Thinning interval for monitored parameters.
burnin Number of burn-in iterations (0 <= burnin < iter).
taccept Target acceptance rate during adaptive phase (0 < taccept <= 1). Acceptance
rate is monitored every bin iterations. Default is taccept = 0.44.
tuneadjust Adjustment term during adaptive phase (0 < tuneadjust <= 1). If acceptance
rate is less than taccept, then proposal term (proppbeta or propsigma) is mul-
tiplied by tuneadjust. If acceptance rate is greater than or equal to taccept,
then proposal term is divided by tuneadjust. Default is tuneadjust = 0.95.
proppbeta Scalar or vector (of length k) specifying the initial standard deviation of the
Normal(pbeta[j], proppbeta[j]) proposal distribution. If proppbeta is a scalar,
then this value is used for all j = 1, ..., k. Default is proppbeta = 0.1.
propsigma Scalar specifying the initial Gamma(shape = 1/propsigma, scale = sigma_scr *
propsigma) proposal distribution for sigma_scr = sqrt(sigma2_scr) (or sqrt(lambda)
if detection="exponential"). Default is propsigma=1.
propcenter Scalar specifying the neighborhood distance when proposing updates to activity
centers. When propcenter=NULL (the default), then propcenter = a*10, where a
is the cell size for the study area grid, and each cell has (at most) approximately
300 neighbors.
maxnumbasis Maximum number of basis vectors to use when proposing latent history fre-
quency updates. Default is maxnumbasis = 1, but higher values can potentially
improve mixing.
a0delta Scalar or vector (of length d) specifying the prior for the conditional (on detec-
tion) probability of type 1 (delta_1), type 2 (delta_2), and both type 1 and type 2
encounters (1-delta_1-delta_2). If a0delta is a scalar, then this value is used for
all a0delta[j] for j = 1, ..., d. For mod.delta=~type, d=3 with [delta_1, delta_2,
1-delta_1-delta_2] ~ Dirichlet(a0delta) prior. For mod.delta=~1, d=2 with [tau]
~ Beta(a0delta[1],a0delta[2]) prior, where (delta_1,delta_2,1-delta_1-delta_2) =
(tau/2,tau/2,1-tau). See McClintock et al. (2013) for more details.
a0alpha Specifies "shape1" parameter for [alpha] ~ Beta(a0alpha, b0alpha) prior. Only
applicable when data.type = "sometimes". Default is a0alpha = 1. Note that
when a0alpha = 1 and b0alpha = 1, then [alpha] ~ Unif(0,1).
b0alpha Specifies "shape2" parameter for [alpha] ~ Beta(a0alpha, b0alpha) prior. Only
applicable when data.type = "sometimes". Default is b0alpha = 1. Note that
when a0alpha = 1 and b0alpha = 1, then [alpha] ~ Unif(0,1).
sigma_bounds Positive vector of length 2 for the lower and upper bounds for the [sigma_scr] ~
Uniform(sigma_bounds[1], sigma_bounds[2]) (or [sqrt(lambda)] when detection="exponential")
prior for the detection function term sigma_scr = sqrt(sigma2_scr) (or sqrt(lambda)).
When sigma_bounds = NULL (the default), then sigma_bounds = c(1.e-6,max(diff(range(studyArea
mu0 Scalar or vector (of length k) specifying mean of pbeta[j] ~ Normal(mu0[j],
sigma2_mu0[j]) prior. If mu0 is a scalar, then this value is used for all j = 1, ...,
k. Default is mu0 = 0.
sigma2_mu0 Scalar or vector (of length k) specifying variance of pbeta[j] ~ Normal(mu0[j],
sigma2_mu0[j]) prior. If sigma2_mu0 is a scalar, then this value is used for all j
= 1, ..., k. Default is sigma2_mu0 = 1.75.
a0psi Specifies "shape1" parameter for [psi] ~ Beta(a0psi,b0psi) prior. Default is
a0psi = 1.
b0psi Specifies "shape2" parameter for [psi] ~ Beta(a0psi,b0psi) prior. Default is
b0psi = 1.
initial.values Optional list of nchain list(s) specifying initial values for parameters and la-
tent variables. Default is initial.values = NULL, which causes initial val-
ues to be generated automatically. In addition to the parameters ("pbeta", "N",
"delta_1", "delta_2", "alpha", "sigma2_scr", "centers", and "psi"), initial
values can be specified for the initial latent history frequencies ("x") and initial
individual encounter history indices ("H").
known Optional integer vector indicating whether the encounter history of an individual
is known with certainty (i.e., the observed encounter history is the true encounter
history). Encounter histories with at least one type 4 encounter are automatically
assumed to be known, and known does not need to be specified unless there ex-
ist encounter histories that do not contain a type 4 encounter that happen to be
known with certainty (e.g., from independent telemetry studies). If specified,
known = c(v_1,v_2,...,v_M) must be a vector of length M = nrow(Enc.Mat)
where v_i = 1 if the encounter history for individual i is known (v_i = 0 other-
wise). Note that known all-zero encounter histories (e.g., ‘000’) are ignored.
scalemax Upper bound for internal re-scaling of grid cell centroid coordinates. Default is
scalemax=10, which re-scales the centroids to be between 0 and 10. Re-scaling
is done internally to avoid numerical overflows during model fitting. Ignored
unless mms=NULL.
printlog Logical indicating whether to print the progress of chains and any errors to a log
file in the working directory. Ignored when nchains=1. Updates are printed to
log file as 1% increments of iter of each chain are completed. With >1 chains,
setting printlog=TRUE is probably most useful for Windows users because
progress and errors are automatically printed to the R console for "Unix-like"
machines (i.e., Mac and Linux) when printlog=FALSE. Default is printlog=FALSE.
... Additional "parameters" arguments for specifying mod.p. See make.design.data.
Details
The first time multimarkClosedSCR is called, it will likely produce a firewall warning alerting users
that R has requested the ability to accept incoming network connections. Incoming network connec-
tions are required to use parallel processing as implemented in multimarkClosedSCR. Note that setting
parms="all" is required for any multimarkClosedSCR model output to be used in multimodelClosedSCR.
Value
A list containing the following:
mcmc Markov chain Monte Carlo object of class mcmc.list.
mod.p Model formula for detection probability (as specified by mod.p above).
mod.delta Model formula for conditional probability of type 1 or type 2 encounter, given
detection (as specified by mod.delta above).
mod.det Model for the detection function (as specified by detection above).
DM A list of design matrices for detection probability generated for model mod.p,
where DM$p is the design matrix for initial capture probability (p) and DM$c
is the design matrix for recapture probability (c).
initial.values A list containing the parameter and latent variable values at iteration iter for
each chain. Values are provided for "pbeta", "N", "delta_1", "delta_2", "alpha",
"sigma2_scr", "centers", "psi", "x", and "H".
mms An object of class multimarkSCRsetup
Author(s)
<NAME>
References
<NAME>., and Holmberg J. 2013. Mark-recapture with multiple, non-invasive marks. Biometrics
69: 766-775.
Gopalaswamy, A.M., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. and Karanth,
K.U. 2012. Program SPACECAP: software for estimating animal density using spatially explicit
capture-recapture models. Methods in Ecology and Evolution 3:1067-1072.
<NAME>., <NAME>., <NAME>., and <NAME>. 2016. Capture-recapture abundance
estimation using a semi-complete data likelihood approach. The Annals of Applied Statistics 10:
264-285
<NAME>., <NAME>., <NAME>., and <NAME>. 2013. Integrated modeling of
bilateral photo-identification data in mark-recapture analyses. Ecology 94: 1464-1471.
<NAME>., <NAME>., <NAME>., and <NAME>. 2014. Probit models for capture-
recapture data subject to imperfect detection, individual heterogeneity and misidentification. The
Annals of Applied Statistics 8: 2461-2484.
<NAME>., <NAME>., <NAME>. and <NAME>. 2009. Bayesian inference in
camera trapping studies for a class of spatial capture-recapture models. Ecology 90: 3233-3244.
See Also
processdataSCR.
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Generate object of class "multimarkSCRsetup" from simulated data
sim.data<-simdataClosedSCR()
Enc.Mat <- sim.data$Enc.Mat
trapCoords <- sim.data$spatialInputs$trapCoords
studyArea <- sim.data$spatialInputs$studyArea
#Run single chain using the default model for simulated data
example.dot<-multimarkClosedSCR(Enc.Mat,trapCoords,studyArea)
#Posterior summary for monitored parameters
summary(example.dot$mcmc)
plot(example.dot$mcmc)
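#For illustration only: the half-normal and exponential forms named in the
#detection argument, sketched as functions of distance d from an activity
#center (parameter values below are arbitrary)
halfnormal <- function(d,sigma2) exp(-d^2/(2*sigma2))
exponential <- function(d,lambda) exp(-d/lambda)
d <- seq(0,5,length=100)
plot(d,halfnormal(d,sigma2=1),type="l",ylab="relative detection probability")
lines(d,exponential(d,lambda=1),lty=2)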
multimarkSCRsetup-class
Class "multimarkSCRsetup"
Description
A class of spatial ’multimark’ model inputs
Slots
Enc.Mat Object of class "matrix". The observed encounter histories (with rows corresponding to
individuals and columns corresponding to sampling occasions).
data.type Object of class "character". The encounter history data type ("never", "sometimes",
or "always").
vAll.hists Object of class "integer". An ordered vector containing all possible encounter his-
tories in sequence.
Aprime Object of class "sparseMatrix". Transpose of the A matrix mapping latent encounter
histories to observed histories.
indBasis Object of class "numeric". An ordered vector of the indices of the three encounter his-
tories updated by each basis vector.
ncolbasis Object of class "integer". The number of needed basis vectors.
knownx Object of class "integer". Frequencies of known encounter histories.
C Object of class "integer". Sampling occasion of first capture for each encounter history.
L Object of class "integer". Sampling occasion of last capture for each encounter history.
naivex Object of class "integer". “Naive” latent history frequencies assuming a one-to-one map-
ping with Enc.Mat.
covs Object of class "data.frame". Temporal covariates for detection probability (the number of
rows in the data frame must equal the number of sampling occasions).
spatialInputs Object of class "list". List is of length 4 containing trapCoords and studyArea
after re-scaling coordinates based on scalemax, as well as the original (not re-scaled) grid cell
resolution (origCellRes) and re-scaling range (Srange).
Objects from the Class
Objects can be created by calls of the form processdataSCR(Enc.Mat, ...) or new("multimarkSCRsetup",
...).
Methods
No methods defined with class "multimarkSCRsetup".
Author(s)
<NAME>
See Also
processdataSCR
Examples
showClass("multimarkSCRsetup")
multimarksetup-class Class "multimarksetup"
Description
A class of ’multimark’ model inputs
Slots
Enc.Mat Object of class "matrix". The observed encounter histories (with rows corresponding to
individuals and columns corresponding to sampling occasions).
data.type Object of class "character". The encounter history data type ("never", "sometimes",
or "always").
vAll.hists Object of class "integer". An ordered vector containing all possible encounter his-
tories in sequence.
Aprime Object of class "sparseMatrix". Transpose of the A matrix mapping latent encounter
histories to observed histories.
indBasis Object of class "numeric". An ordered vector of the indices of the three encounter his-
tories updated by each basis vector.
ncolbasis Object of class "integer". The number of needed basis vectors.
knownx Object of class "integer". Frequencies of known encounter histories.
C Object of class "integer". Sampling occasion of first capture for each encounter history.
L Object of class "integer". Sampling occasion of last capture for each encounter history.
naivex Object of class "integer". “Naive” latent history frequencies assuming a one-to-one map-
ping with Enc.Mat.
covs Object of class "data.frame". Temporal covariates for detection probability (the number of
rows in the data frame must equal the number of sampling occasions).
Objects from the Class
Objects can be created by calls of the form processdata(Enc.Mat, ...) or new("multimarksetup",
...).
Methods
No methods defined with class "multimarksetup".
Author(s)
<NAME>
See Also
processdata
Examples
showClass("multimarksetup")
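#For illustration only: because multimarksetup is an S4 class, the slots listed
#above can be inspected with standard S4 accessors (values shown are whatever
#processdata produces for the bobcat data)
setup <- processdata(bobcat)
slotNames(setup)   #slots described above
setup@data.type    #encounter history data type
dim(setup@Enc.Mat) #individuals x sampling occasions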
multimodelCJS Multimodel inference for ’multimark’ open population survival models
Description
This function performs Bayesian multimodel inference for a set of ’multimark’ open population
survival (i.e., Cormack-Jolly-Seber) models using the reversible jump Markov chain Monte Carlo
(RJMCMC) algorithm proposed by Barker & Link (2013).
Usage
multimodelCJS(
modlist,
modprior = rep(1/length(modlist), length(modlist)),
monparms = "phi",
miter = NULL,
mburnin = 0,
mthin = 1,
M1 = NULL,
pbetapropsd = 1,
zppropsd = NULL,
phibetapropsd = 1,
zphipropsd = NULL,
sigppropshape = 1,
sigppropscale = 0.01,
sigphipropshape = 1,
sigphipropscale = 0.01,
printlog = FALSE
)
Arguments
modlist A list of individual model output lists returned by multimarkCJS. The models
must have the same number of chains and MCMC iterations.
modprior Vector of length length(modlist) containing prior model probabilities. De-
fault is modprior = rep(1/length(modlist), length(modlist)).
monparms Parameters to monitor. Only parameters common to all models can be monitored
(e.g., "pbeta[(Intercept)]", "phibeta[(Intercept)]", "psi"), but derived
survival ("phi") and capture ("p") probabilities can also be monitored. Default
is monparms = "phi".
miter The number of RJMCMC iterations per chain. If NULL, then the number of
MCMC iterations for each individual model chain is used.
mburnin Number of burn-in iterations (0 <= mburnin < miter).
mthin Thinning interval for monitored parameters.
M1 Integer vector indicating the initial model for each chain, where M1_j=i initial-
izes the RJMCMC algorithm for chain j in the model corresponding to modlist[[i]]
for i=1,..., length(modlist). If NULL, the algorithm for all chains is initialized
in the most general model. Default is M1=NULL.
pbetapropsd Scalar specifying the standard deviation of the Normal(0, pbetapropsd) proposal
distribution for "pbeta" parameters. Default is pbetapropsd=1. See Barker &
Link (2013) for more details.
zppropsd Scalar specifying the standard deviation of the Normal(0, zppropsd) proposal
distribution for "zp" parameters. Only applies if at least one (but not all) model(s)
include individual heterogeneity in detection probability. If NULL, zppropsd =
sqrt(sigma2_zp) is used. Default is zppropsd=NULL. See Barker & Link (2013)
for more details.
phibetapropsd Scalar specifying the standard deviation of the Normal(0, phibetapropsd) pro-
posal distribution for "phibeta" parameters. Default is phibetapropsd=1. See
Barker & Link (2013) for more details.
zphipropsd Scalar specifying the standard deviation of the Normal(0, zphipropsd) proposal
distribution for "zphi" parameters. Only applies if at least one (but not all)
model(s) include individual heterogeneity in survival probability. If NULL, zphipropsd
= sqrt(sigma2_zphi) is used. Default is zphipropsd=NULL. See Barker & Link
(2013) for more details.
sigppropshape Scalar specifying the shape parameter of the invGamma(shape = sigppropshape,
scale = sigppropscale) proposal distribution for "sigma2_zp". Only applies if
at least one (but not all) model(s) include individual heterogeneity in detection
probability. Default is sigppropshape=1. See Barker & Link (2013) for more
details.
sigppropscale Scalar specifying the scale parameter of the invGamma(shape = sigppropshape,
scale = sigppropscale) proposal distribution for "sigma2_zp". Only applies if
at least one (but not all) model(s) include individual heterogeneity in detection
probability. Default is sigppropscale=0.01. See Barker & Link (2013) for
more details.
sigphipropshape
Scalar specifying the shape parameter of the invGamma(shape = sigphipropshape,
scale = sigphipropscale) proposal distribution for "sigma2_zphi". Only
applies if at least one (but not all) model(s) include individual heterogeneity in
survival probability. Default is sigphipropshape=1. See Barker & Link (2013)
for more details.
sigphipropscale
Scalar specifying the scale parameter of the invGamma(shape = sigphipropshape,
scale = sigphipropscale) proposal distribution for "sigma2_zphi". Only
applies if at least one (but not all) model(s) include individual heterogeneity in
survival probability. Default is sigphipropscale=0.01. See Barker & Link
(2013) for more details.
printlog Logical indicating whether to print the progress of chains and any errors to a log
file in the working directory. Ignored when nchains=1. Updates are printed to
log file as 1% increments of iter of each chain are completed. With >1 chains,
setting printlog=TRUE is probably most useful for Windows users because
progress and errors are automatically printed to the R console for "Unix-like"
machines (i.e., Mac and Linux) when printlog=FALSE. Default is printlog=FALSE.
Details
Note that setting parms="all" is required when fitting individual multimarkCJS models to be
included in modlist.
Value
A list containing the following:
rjmcmc Reversible jump Markov chain Monte Carlo object of class mcmc.list. Includes
RJMCMC output for monitored parameters and the current model at each itera-
tion ("M").
pos.prob A list of calculated posterior model probabilities for each chain, including the
overall posterior model probabilities across all chains.
Author(s)
<NAME>. McClintock
References
<NAME>. and Link, W. A. 2013. Bayesian multimodel inference by RJMCMC: a Gibbs sampling
approach. The American Statistician 67: 150-156.
See Also
multimarkCJS, processdata
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Generate object of class "multimarksetup" from simulated data
data_type = "always"
noccas <- 5
phibetaTime <- seq(2,0,length=noccas-1) # declining trend in survival
data <- simdataCJS(noccas=5,phibeta=phibetaTime,data.type=data_type)
setup <- processdata(data$Enc.Mat,data.type=data_type)
#Run single chain using the default model. Note parms="all".
sim.pdot.phidot <- multimarkCJS(mms=setup,parms="all",iter=1000,adapt=500,burnin=500)
#Run single chain with temporal trend for phi. Note parms="all".
sim.pdot.phiTime <- multimarkCJS(mms=setup,mod.phi=~Time,parms="all",iter=1000,adapt=500,burnin=500)
#Perform RJMCMC using defaults
modlist <- list(mod1=sim.pdot.phidot,mod2=sim.pdot.phiTime)
sim.M <- multimodelCJS(modlist=modlist)
#Posterior model probabilities
sim.M$pos.prob
#multimodel posterior summary for survival (display first cohort only)
summary(sim.M$rjmcmc[,paste0("phi[1,",1:(noccas-1),"]")])
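#For illustration only: since the rjmcmc output records the current model
#indicator ("M") at each iteration, the proportions below should roughly agree
#with the posterior model probabilities reported in sim.M$pos.prob
M.draws <- unlist(lapply(sim.M$rjmcmc,function(x) x[,"M"]))
table(M.draws)/length(M.draws)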
multimodelClosed Multimodel inference for ’multimark’ closed population abundance
models
Description
This function performs Bayesian multimodel inference for a set of ’multimark’ closed population
abundance models using the reversible jump Markov chain Monte Carlo (RJMCMC) algorithm
proposed by <NAME> (2013).
Usage
multimodelClosed(
modlist,
modprior = rep(1/length(modlist), length(modlist)),
monparms = "N",
miter = NULL,
mburnin = 0,
mthin = 1,
M1 = NULL,
pbetapropsd = 1,
zppropsd = NULL,
sigppropshape = 6,
sigppropscale = 4,
printlog = FALSE
)
Arguments
modlist A list of individual model output lists returned by multimarkClosed or markClosed.
The models must have the same number of chains and MCMC iterations.
modprior Vector of length length(modlist) containing prior model probabilities. De-
fault is modprior = rep(1/length(modlist), length(modlist)).
monparms Parameters to monitor. Only parameters common to all models can be moni-
tored (e.g., "pbeta[(Intercept)]", "N"), but derived capture ("p") and recap-
ture ("c") probabilities can also be monitored. Default is monparms = "N".
miter The number of RJMCMC iterations per chain. If NULL, then the number of
MCMC iterations for each individual model chain is used.
mburnin Number of burn-in iterations (0 <= mburnin < miter).
mthin Thinning interval for monitored parameters.
M1 Integer vector indicating the initial model for each chain, where M1_j=i initial-
izes the RJMCMC algorithm for chain j in the model corresponding to modlist[[i]]
for i=1,..., length(modlist). If NULL, the algorithm for all chains is initialized
in the most general model. Default is M1=NULL.
pbetapropsd Scalar specifying the standard deviation of the Normal(0, pbetapropsd) proposal
distribution for "pbeta" parameters. Default is pbetapropsd=1. See Barker &
Link (2013) for more details.
zppropsd Scalar specifying the standard deviation of the Normal(0, zppropsd) proposal
distribution for "zp" parameters. Only applies if at least one (but not all) model(s)
include individual heterogeneity in detection probability. If NULL, zppropsd =
sqrt(sigma2_zp) is used. Default is zppropsd=NULL. See Barker & Link (2013)
for more details.
sigppropshape Scalar specifying the shape parameter of the invGamma(shape = sigppropshape,
scale = sigppropscale) proposal distribution for sigma_zp. Only applies if at
least one (but not all) model(s) include individual heterogeneity in detection prob-
ability. Default is sigppropshape=6. See Barker & Link (2013) for more de-
tails.
sigppropscale Scalar specifying the scale parameter of the invGamma(shape = sigppropshape,
scale = sigppropscale) proposal distribution for sigma_zp. Only applies if at
least one (but not all) model(s) include individual heterogeneity in detection prob-
ability. Default is sigppropscale=4. See Barker & Link (2013) for more de-
tails.
printlog Logical indicating whether to print the progress of chains and any errors to a log
file in the working directory. Ignored when nchains=1. Updates are printed to
log file as 1% increments of iter of each chain are completed. With >1 chains,
setting printlog=TRUE is probably most useful for Windows users because
progress and errors are automatically printed to the R console for "Unix-like"
machines (i.e., Mac and Linux) when printlog=FALSE. Default is printlog=FALSE.
Details
Note that setting parms="all" is required when fitting individual multimarkClosed or markClosed
models to be included in modlist.
Value
A list containing the following:
rjmcmc Reversible jump Markov chain Monte Carlo object of class mcmc.list. Includes
RJMCMC output for monitored parameters and the current model at each itera-
tion ("M").
pos.prob A list of calculated posterior model probabilities for each chain, including the
overall posterior model probabilities across all chains.
Author(s)
<NAME>. McClintock
References
Barker, <NAME>. and Link, <NAME>. 2013. Bayesian multimodel inference by RJMCMC: a Gibbs sampling
approach. The American Statistician 67: 150-156.
See Also
multimarkClosed, markClosed, processdata
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Generate object of class "multimarksetup"
setup <- processdata(bobcat)
#Run single chain using the default model for bobcat data. Note parms="all".
bobcat.dot <- multimarkClosed(mms=setup,parms="all",iter=1000,adapt=500,burnin=500)
#Run single chain for bobcat data with time effects. Note parms="all".
bobcat.time <- multimarkClosed(mms=setup,mod.p=~time,parms="all",iter=1000,adapt=500,burnin=500)
#Perform RJMCMC using defaults
modlist <- list(mod1=bobcat.dot,mod2=bobcat.time)
bobcat.M <- multimodelClosed(modlist=modlist,monparms=c("N","p"))
#Posterior model probabilities
bobcat.M$pos.prob
#multimodel posterior summary for abundance
summary(bobcat.M$rjmcmc[,"N"])
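#For illustration only: unequal prior model probabilities can be supplied via
#modprior (the 1:3 weighting below is arbitrary)
bobcat.M.wt <- multimodelClosed(modlist=modlist,modprior=c(0.25,0.75),
                                monparms=c("N","p"))
bobcat.M.wt$pos.prob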
multimodelClosedSCR Multimodel inference for ’multimark’ spatial population abundance
models
Description
This function performs Bayesian multimodel inference for a set of ’multimark’ spatial population
abundance models using the reversible jump Markov chain Monte Carlo (RJMCMC) algorithm
proposed by <NAME> (2013).
Usage
multimodelClosedSCR(
modlist,
modprior = rep(1/length(modlist), length(modlist)),
monparms = "N",
miter = NULL,
mburnin = 0,
mthin = 1,
M1 = NULL,
pbetapropsd = 1,
sigpropmean = 0.8,
sigpropsd = 0.4,
printlog = FALSE
)
Arguments
modlist A list of individual model output lists returned by multimarkClosedSCR or
markClosedSCR. The models must have the same number of chains and MCMC
iterations.
modprior Vector of length length(modlist) containing prior model probabilities. De-
fault is modprior = rep(1/length(modlist), length(modlist)).
monparms Parameters to monitor. Only parameters common to all models can be monitored
(e.g., "pbeta[(Intercept)]", "N", "sigma2_scr"), but derived density ("D") as
well as capture ("p") and recapture ("c") probabilities (at distance zero from
activity centers) can also be monitored. Default is monparms = "N".
miter The number of RJMCMC iterations per chain. If NULL, then the number of
MCMC iterations for each individual model chain is used.
mburnin Number of burn-in iterations (0 <= mburnin < miter).
mthin Thinning interval for monitored parameters.
M1 Integer vector indicating the initial model for each chain, where M1_j=i initial-
izes the RJMCMC algorithm for chain j in the model corresponding to modlist[[i]]
for i=1,..., length(modlist). If NULL, the algorithm for all chains is initialized
in the most general model. Default is M1=NULL.
pbetapropsd Scalar specifying the standard deviation of the Normal(0, pbetapropsd) proposal
distribution for "pbeta" parameters. Default is pbetapropsd=1. See Barker &
Link (2013) for more details.
sigpropmean Scalar specifying the mean of the inverse Gamma proposal distribution for sigma2_scr
(or lambda if detection="exponential"). Only applies if models do not
have the same detection function (i.e., “half-normal” or “exponential”). Default
is sigpropmean=0.8. See Barker & Link (2013) for more details.
sigpropsd Scalar specifying the standard deviation of the inverse Gamma proposal dis-
tribution for sigma2_scr (or lambda if detection="exponential"). Only
applies if models do not have the same detection function (i.e., “half-normal” or
“exponential”). Default is sigpropsd=0.4. See Barker & Link (2013) for more
details.
printlog Logical indicating whether to print the progress of chains and any errors to a log
file in the working directory. Ignored when nchains=1. Updates are printed to
log file as 1% increments of iter of each chain are completed. With >1 chains,
setting printlog=TRUE is probably most useful for Windows users because
progress and errors are automatically printed to the R console for "Unix-like"
machines (i.e., Mac and Linux) when printlog=FALSE. Default is printlog=FALSE.
Details
Note that setting parms="all" is required when fitting individual multimarkClosedSCR or markClosedSCR
models to be included in modlist.
Value
A list containing the following:
rjmcmc Reversible jump Markov chain Monte Carlo object of class mcmc.list. Includes
RJMCMC output for monitored parameters and the current model at each itera-
tion ("M").
pos.prob A list of calculated posterior model probabilities for each chain, including the
overall posterior model probabilities across all chains.
Author(s)
<NAME>
References
Barker, <NAME>. and Link, W. A. 2013. Bayesian multimodel inference by RJMCMC: a Gibbs sampling
approach. The American Statistician 67: 150-156.
See Also
multimarkClosedSCR, processdataSCR
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Generate object of class "multimarkSCRsetup"
sim.data<-simdataClosedSCR()
Enc.Mat<-sim.data$Enc.Mat
trapCoords<-sim.data$spatialInputs$trapCoords
studyArea<-sim.data$spatialInputs$studyArea
setup<-processdataSCR(Enc.Mat,trapCoords,studyArea)
#Run single chain using the default model for simulated data. Note parms="all".
example.dot <- multimarkClosedSCR(mms=setup,parms="all",iter=1000,adapt=500,burnin=500)
#Run single chain for simulated data with behavior effects. Note parms="all".
example.c <- multimarkClosedSCR(mms=setup,mod.p=~c,parms="all",iter=1000,adapt=500,burnin=500)
#Perform RJMCMC using defaults
modlist <- list(mod1=example.dot,mod2=example.c)
example.M <- multimodelClosedSCR(modlist=modlist,monparms=c("N","D","sigma2_scr"))
#Posterior model probabilities
example.M$pos.prob
#multimodel posterior summary for abundance and density
summary(example.M$rjmcmc[,c("N","D")])
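#For illustration only: sigpropmean and sigpropsd only come into play when the
#candidate models use different detection functions; the sketch below (default
#proposal settings, unrealistically short chains) adds an exponential-detection
#model to the candidate set from the example above
example.exp <- multimarkClosedSCR(mms=setup,detection="exponential",
                                  parms="all",iter=1000,adapt=500,burnin=500)
modlist.det <- list(mod1=example.dot,mod2=example.exp)
example.M.det <- multimodelClosedSCR(modlist=modlist.det,
                                     sigpropmean=0.8,sigpropsd=0.4)
example.M.det$pos.prob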
plotSpatialData Plot spatial capture-mark-recapture data
Description
This function plots the study area grid, available habitat, and trap coordinates for spatial capture-
recapture studies. Activity centers and capture locations can also be plotted.
Usage
plotSpatialData(
mms = NULL,
trapCoords,
studyArea,
centers = NULL,
trapLines = FALSE
)
Arguments
mms An optional object of class multimarkSCRsetup-class from which the (re-
scaled) study area and trap coordinates are plotted.
trapCoords A matrix of dimension ntraps x (2 + noccas) indicating the Cartesian coordi-
nates and operating occasions for the traps, where rows correspond to trap, the
first column the x-coordinate, and the second column the y-coordinate. The last
noccas columns indicate whether or not the trap was operating on each of the
occasions, where ‘1’ indicates the trap was operating and ‘0’ indicates the trap
was not operating. Ignored unless mms=NULL.
studyArea A 3-column matrix defining the study area and available habitat. Each row
corresponds to a grid cell. The first 2 columns indicate the Cartesian x- and
y-coordinate for the centroid of each grid cell, and the third column indicates
whether the cell is available habitat (=1) or not (=0). All cells must have the
same resolution. Ignored unless mms=NULL. Note that rows should be ordered
in raster cell order (raster cell numbers start at 1 in the upper left corner, and
increase from left to right, and then from top to bottom).
centers An optional vector indicating the grid cell (i.e., the row of studyArea) that
contains the true (latent) activity centers for each individual. If mms is provided,
then centers must be of length nrow(Enc.Mat) (i.e., a center must be provided
for each observed individual).
trapLines Logical indicating whether to draw lines from activity centers to respective traps
at which each individual was captured. Default is trapLines=FALSE. Ignored
when mms=NULL or centers=NULL.
Author(s)
<NAME>
Examples
#Plot the tiger example data
plotSpatialData(trapCoords=tiger$trapCoords,studyArea=tiger$studyArea)
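#For illustration only: a small hand-built input satisfying the stated
#requirements (square cells of equal resolution, rows in raster cell order);
#all coordinate values below are arbitrary
noccas <- 3
trapCoords <- cbind(x=c(1,1,2,2),y=c(1,2,1,2),
                    matrix(1,nrow=4,ncol=noccas)) #all traps always operating
xy <- seq(0.5,2.5,length=4)
studyArea <- as.matrix(cbind(expand.grid(x=xy,y=rev(xy)),avail=1))
plotSpatialData(trapCoords=trapCoords,studyArea=studyArea)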
processdata Generate model inputs for fitting ’multimark’ models
Description
This function generates an object of class multimarksetup that is required to fit ‘multimark’ mod-
els.
Usage
processdata(
Enc.Mat,
data.type = "never",
covs = data.frame(),
known = integer()
)
Arguments
Enc.Mat A matrix of observed encounter histories with rows corresponding to individuals
and columns corresponding to sampling occasions (ignored unless mms=NULL).
data.type Specifies the encounter history data type. All data types include non-detections
(type 0 encounter), type 1 encounter (e.g., left-side), and type 2 encounters (e.g.,
right-side). When both type 1 and type 2 encounters occur for the same individ-
ual within a sampling occasion, these can either be "non-simultaneous" (type 3
encounter) or "simultaneous" (type 4 encounter). Three data types are currently
permitted:
data.type="never" indicates both type 1 and type 2 encounters are never ob-
served for the same individual within a sampling occasion, and observed en-
counter histories therefore include only type 1 or type 2 encounters (e.g., only
left- and right-sided photographs were collected). Observed encounter histories
can consist of non-detections (0), type 1 encounters (1), and type 2 encounters
(2). See bobcat. Latent encounter histories consist of non-detections (0), type
1 encounters (1), type 2 encounters (2), and type 3 encounters (3).
data.type="sometimes" indicates both type 1 and type 2 encounters are some-
times observed (e.g., both-sided photographs are sometimes obtained, but not
necessarily for all individuals). Observed encounter histories can consist of non-
detections (0), type 1 encounters (1), type 2 encounters (2), type 3 encounters
(3), and type 4 encounters (4). Type 3 encounters can only be observed when
an individual has at least one type 4 encounter. Latent encounter histories con-
sist of non-detections (0), type 1 encounters (1), type 2 encounters (2), type 3
encounters (3), and type 4 encounters (4).
data.type="always" indicates both type 1 and type 2 encounters are always
observed, but some encounter histories may still include only type 1 or type
2 encounters. Observed encounter histories can consist of non-detections (0),
type 1 encounters (1), type 2 encounters (2), and type 4 encounters (4). Latent
encounter histories consist of non-detections (0), type 1 encounters (1), type 2
encounters (2), and type 4 encounters (4).
covs A data frame of temporal covariates for detection probabilities (ignored unless
mms=NULL). The number of rows in the data frame must equal the number of
sampling occasions. Covariate names cannot be "time", "age", or "h"; these
names are reserved for temporal, behavioral, and individual effects when speci-
fying mod.p and mod.phi.
known Optional integer vector indicating whether the encounter history of an individual
is known with certainty (i.e., the observed encounter history is the true encounter
history). Encounter histories with at least one type 4 encounter are automatically
assumed to be known, and known does not need to be specified unless there ex-
ist encounter histories that do not contain a type 4 encounter that happen to be
known with certainty (e.g., from independent telemetry studies). If specified,
known = c(v_1,v_2,...,v_M) must be a vector of length M = nrow(Enc.Mat)
where v_i = 1 if the encounter history for individual i is known (v_i = 0 other-
wise). Note that known all-zero encounter histories (e.g., ‘000’) are ignored.
Value
An object of class multimarksetup.
Author(s)
<NAME>. McClintock
References
<NAME>., and <NAME>. 2013. Mark-recapture with multiple, non-invasive marks. Biometrics
69: 766-775.
<NAME>., <NAME>., <NAME>., and <NAME>. 2013. Integrated modeling of
bilateral photo-identification data in mark-recapture analyses. Ecology 94: 1464-1471.
See Also
multimarksetup-class, multimarkClosed, bobcat
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Generate object of class "multimarksetup"
setup <- processdata(bobcat)
#Run single chain using the default model for bobcat data
bobcat.dot<-multimarkClosed(mms=setup)
#Run single chain for bobcat data with temporal effects (i.e., mod.p=~time)
bobcat.time <- multimarkClosed(mms=setup,mod.p=~time)
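#For illustration only: a temporal covariate supplied via covs (the covariate
#name "effort" and its random values are hypothetical), assuming the covariate
#name can then be referenced in mod.p as the covs description implies
noccas <- ncol(bobcat)
setup.cov <- processdata(bobcat,covs=data.frame(effort=rnorm(noccas)))
bobcat.effort <- multimarkClosed(mms=setup.cov,mod.p=~effort)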
processdataSCR Generate model inputs for fitting spatial ’multimark’ models
Description
This function generates an object of class multimarkSCRsetup that is required to fit spatial ‘multi-
mark’ models.
Usage
processdataSCR(
Enc.Mat,
trapCoords,
studyArea = NULL,
buffer = NULL,
ncells = NULL,
data.type = "never",
covs = data.frame(),
known = integer(),
scalemax = 10
)
Arguments
Enc.Mat A matrix containing the observed encounter histories with rows corresponding
to individuals and (ntraps*noccas) columns corresponding to traps and sam-
pling occasions. The first noccas columns correspond to trap 1, the second
noccas columns correspond to trap 2, etc. Ignored unless mms=NULL.
trapCoords A matrix of dimension ntraps x (2 + noccas) indicating the Cartesian coordi-
nates and operating occasions for the traps, where rows correspond to trap, the
first column the x-coordinate, and the second column the y-coordinate. The last
noccas columns indicate whether or not the trap was operating on each of the
occasions, where ‘1’ indicates the trap was operating and ‘0’ indicates the trap
was not operating.
studyArea is a 3-column matrix containing the coordinates for the centroids of a contigu-
ous grid of cells that define the study area and available habitat. Each row
corresponds to a grid cell. The first 2 columns indicate the Cartesian x- and
y-coordinate for the centroid of each grid cell, and the third column indicates
whether the cell is available habitat (=1) or not (=0). All cells must be square
and have the same resolution. If studyArea=NULL (the default), then a square
study area grid composed of ncells cells of available habitat is drawn around
the bounding box of trapCoords based on buffer. Note that rows should be
ordered in raster cell order (raster cell numbers start at 1 in the upper left corner,
and increase from left to right, and then from top to bottom).
buffer A scalar in the same units as trapCoords indicating the buffer around the bounding
box of trapCoords for defining the study area when studyArea=NULL. Ignored
unless studyArea=NULL.
ncells The number of grid cells in the study area when studyArea=NULL. The square
root of ncells must be a whole number. Default is ncells=1024. Ignored
unless studyArea=NULL.
data.type Specifies the encounter history data type. All data types include non-detections
(type 0 encounter), type 1 encounter (e.g., left-side), and type 2 encounters (e.g.,
right-side). When both type 1 and type 2 encounters occur for the same individ-
ual within a sampling occasion, these can either be "non-simultaneous" (type 3
encounter) or "simultaneous" (type 4 encounter). Three data types are currently
permitted:
data.type="never" indicates both type 1 and type 2 encounters are never ob-
served for the same individual within a sampling occasion, and observed en-
counter histories therefore include only type 1 or type 2 encounters (e.g., only
left- and right-sided photographs were collected). Observed encounter histories
can consist of non-detections (0), type 1 encounters (1), and type 2 encounters
(2). See bobcat. Latent encounter histories consist of non-detections (0), type
1 encounters (1), type 2 encounters (2), and type 3 encounters (3).
data.type="sometimes" indicates both type 1 and type 2 encounters are some-
times observed (e.g., both-sided photographs are sometimes obtained, but not
necessarily for all individuals). Observed encounter histories can consist of non-
detections (0), type 1 encounters (1), type 2 encounters (2), type 3 encounters
(3), and type 4 encounters (4). Type 3 encounters can only be observed when
an individual has at least one type 4 encounter. Latent encounter histories con-
sist of non-detections (0), type 1 encounters (1), type 2 encounters (2), type 3
encounters (3), and type 4 encounters (4).
data.type="always" indicates both type 1 and type 2 encounters are always
observed, but some encounter histories may still include only type 1 or type
2 encounters. Observed encounter histories can consist of non-detections (0),
type 1 encounters (1), type 2 encounters (2), and type 4 encounters (4). Latent
encounter histories consist of non-detections (0), type 1 encounters (1), type 2
encounters (2), and type 4 encounters (4).
covs A data frame of time- and/or trap-dependent covariates for detection probabil-
ities (ignored unless mms=NULL). The number of rows in the data frame must
equal the number of traps times the number of sampling occasions (ntraps*noccas),
where the first noccas rows correspond to trap 1, the second noccas rows cor-
respond to trap 2, etc. Covariate names cannot be "time", "age", or "h"; these
names are reserved for temporal, behavioral, and individual effects when speci-
fying mod.p and mod.phi.
known Optional integer vector indicating whether the encounter history of an individual
is known with certainty (i.e., the observed encounter history is the true encounter
history). Encounter histories with at least one type 4 encounter are automatically
assumed to be known, and known does not need to be specified unless there ex-
ist encounter histories that do not contain a type 4 encounter that happen to be
known with certainty (e.g., from independent telemetry studies). If specified,
known = c(v_1,v_2,...,v_M) must be a vector of length M = nrow(Enc.Mat)
where v_i = 1 if the encounter history for individual i is known (v_i = 0 other-
wise). Note that known all-zero encounter histories (e.g., ‘000’) are ignored.
scalemax Upper bound for internal re-scaling of grid cell centroid coordinates. Default is
scalemax=10, which re-scales the centroids to be between 0 and 10. Re-scaling
is done internally to avoid numerical overflows during model fitting.
Value
An object of class multimarkSCRsetup.
Author(s)
<NAME>
References
<NAME>., and <NAME>. 2013. Mark-recapture with multiple, non-invasive marks. Biometrics
69: 766-775.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. and Karanth,
K.U. 2012. Program SPACECAP: software for estimating animal density using spatially explicit
capture-recapture models. Methods in Ecology and Evolution 3:1067-1072.
<NAME>., <NAME>., <NAME>., and <NAME>. 2013. Integrated modeling of
bilateral photo-identification data in mark-recapture analyses. Ecology 94: 1464-1471.
<NAME>., <NAME>., <NAME>. and <NAME>. 2009. Bayesian inference in
camera trapping studies for a class of spatial capture-recapture models. Ecology 90: 3233-3244.
See Also
multimarkSCRsetup-class, multimarkClosedSCR
Examples
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
#Generate object of class "multimarksetup" from simulated data
sim.data<-simdataClosedSCR()
Enc.Mat <- sim.data$Enc.Mat
trapCoords <- sim.data$spatialInputs$trapCoords
studyArea <- sim.data$spatialInputs$studyArea
setup <- processdataSCR(Enc.Mat,trapCoords,studyArea)
#Run single chain using the default model for simulated data
example.dot<-multimarkClosedSCR(mms=setup)
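#For illustration only: when no studyArea is supplied, processdataSCR builds
#the grid from buffer and ncells as described above (the buffer value is
#arbitrary; ncells is chosen so its square root is a whole number)
setup.buf <- processdataSCR(Enc.Mat,trapCoords,buffer=2,ncells=256)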
simdataCJS Simulate open population capture-mark-recapture data arising from
multiple non-invasive marks
Description
This function generates encounter histories from simulated open population capture-mark-recapture
data consisting of multiple non-invasive marks.
Usage
simdataCJS(
N = 100,
noccas = 5,
pbeta = -0.25,
sigma2_zp = 0,
phibeta = 1.6,
sigma2_zphi = 0,
delta_1 = 0.4,
delta_2 = 0.4,
alpha = 0.5,
data.type = "never",
link = "probit"
)
Arguments
N Number of individuals.
noccas Number of sampling occasions. floor(N/noccas) individuals are first encoun-
tered on each occasion.
pbeta Logit- or probit-scale intercept term(s) for capture probability (p). Must be a
scalar or vector of length noccas.
sigma2_zp Logit- or probit-scale individual heterogeneity variance term for capture proba-
bility (p).
phibeta Logit- or probit-scale intercept term(s) for survival probability (φ). Must be a
scalar or vector of length noccas.
sigma2_zphi Logit- or probit-scale individual heterogeneity variance term for survival proba-
bility (φ).
delta_1 Conditional probability of type 1 encounter, given detection.
delta_2 Conditional probability of type 2 encounter, given detection.
alpha Conditional probability of simultaneous type 1 and type 2 detection, given both
types encountered. Only applies when data.type="sometimes".
data.type Specifies the encounter history data type. All data types include non-detections
(type 0 encounter), type 1 encounter (e.g., left-side), and type 2 encounters (e.g.,
right-side). When both type 1 and type 2 encounters occur for the same individ-
ual within a sampling occasion, these can either be "non-simultaneous" (type 3
encounter) or "simultaneous" (type 4 encounter). Three data types are currently
permitted:
data.type="never" indicates both type 1 and type 2 encounters are never ob-
served for the same individual within a sampling occasion, and observed en-
counter histories therefore include only type 1 or type 2 encounters (e.g., only
left- and right-sided photographs were collected). Observed encounter histories
can consist of non-detections (0), type 1 encounters (1), and type 2 encounters
(2). See bobcat. Latent encounter histories consist of non-detections (0), type
1 encounters (1), type 2 encounters (2), and type 3 encounters (3).
data.type="sometimes" indicates both type 1 and type 2 encounters are some-
times observed (e.g., both-sided photographs are sometimes obtained, but not
necessarily for all individuals). Observed encounter histories can consist of non-
detections (0), type 1 encounters (1), type 2 encounters (2), type 3 encounters
(3), and type 4 encounters (4). Type 3 encounters can only be observed when
an individual has at least one type 4 encounter. Latent encounter histories con-
sist of non-detections (0), type 1 encounters (1), type 2 encounters (2), type 3
encounters (3), and type 4 encounters (4).
data.type="always" indicates both type 1 and type 2 encounters are always
observed, but some encounter histories may still include only type 1 or type
2 encounters. Observed encounter histories can consist of non-detections (0),
type 1 encounters (1), type 2 encounters (2), and type 4 encounters (4). Latent
encounter histories consist of non-detections (0), type 1 encounters (1), type 2
encounters (2), and type 4 encounters (4).
link Link function for detection probability. Must be "logit" or "probit". Note that
multimarkCJS is currently implemented for the probit link only.
Value
A list containing the following:
Enc.Mat A matrix containing the observed encounter histories with rows corresponding
to individuals and columns corresponding to sampling occasions.
trueEnc.Mat A matrix containing the true (latent) encounter histories with rows correspond-
ing to individuals and columns corresponding to sampling occasions.
Author(s)
<NAME>
References
<NAME>., and <NAME>. 2013. Mark-recapture with multiple, non-invasive marks. Biometrics
69: 766-775.
<NAME>., <NAME>., <NAME>., and <NAME>. 2013. Integrated modeling of
bilateral photo-identification data in mark-recapture analyses. Ecology 94: 1464-1471.
See Also
processdata, multimarkCJS
Examples
#simulate data for data.type="sometimes" using defaults
data<-simdataCJS(data.type="sometimes")
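A minimal sketch (parameter values are the defaults above; not taken from the package documentation) contrasting the three data types and the encounter codes they produce:
# illustrative only: simulate under each data type and tabulate encounter codes
sim_never <- simdataCJS(data.type = "never") # observed codes 0, 1, 2
sim_sometimes <- simdataCJS(data.type = "sometimes") # observed codes 0, 1, 2, 3, 4
sim_always <- simdataCJS(data.type = "always") # observed codes 0, 1, 2, 4
table(sim_sometimes$Enc.Mat) # distribution of observed encounter codes
table(sim_sometimes$trueEnc.Mat) # latent histories retain the full encounter information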
simdataClosed Simulate closed population capture-mark-recapture data arising from
multiple non-invasive marks
Description
This function generates encounter histories from simulated closed population capture-mark-recapture
data consisting of multiple non-invasive marks.
Usage
simdataClosed(
N = 100,
noccas = 5,
pbeta = -0.4,
tau = 0,
sigma2_zp = 0,
delta_1 = 0.4,
delta_2 = 0.4,
alpha = 0.5,
data.type = "never",
link = "logit"
)
Arguments
N True population size or abundance.
noccas The number of sampling occasions.
pbeta Logit- or probit-scale intercept term(s) for capture probability (p). Must be a
scalar or vector of length noccas.
tau Additive logit- or probit-scale behavioral effect term for recapture probability
(c).
sigma2_zp Logit- or probit-scale individual heterogeneity variance term.
delta_1 Conditional probability of type 1 encounter, given detection.
delta_2 Conditional probability of type 2 encounter, given detection.
alpha Conditional probability of simultaneous type 1 and type 2 detection, given both
types encountered. Only applies when data.type="sometimes".
data.type Specifies the encounter history data type. All data types include non-detections
(type 0 encounter), type 1 encounter (e.g., left-side), and type 2 encounters (e.g.,
right-side). When both type 1 and type 2 encounters occur for the same individ-
ual within a sampling occasion, these can either be "non-simultaneous" (type 3
encounter) or "simultaneous" (type 4 encounter). Three data types are currently
permitted:
data.type="never" indicates both type 1 and type 2 encounters are never ob-
served for the same individual within a sampling occasion, and observed en-
counter histories therefore include only type 1 or type 2 encounters (e.g., only
left- and right-sided photographs were collected). Observed encounter histories
can consist of non-detections (0), type 1 encounters (1), and type 2 encounters
(2). See bobcat. Latent encounter histories consist of non-detections (0), type
1 encounters (1), type 2 encounters (2), and type 3 encounters (3).
data.type="sometimes" indicates both type 1 and type 2 encounters are some-
times observed (e.g., both-sided photographs are sometimes obtained, but not
necessarily for all individuals). Observed encounter histories can consist of non-
detections (0), type 1 encounters (1), type 2 encounters (2), type 3 encounters
(3), and type 4 encounters (4). Type 3 encounters can only be observed when
an individual has at least one type 4 encounter. Latent encounter histories con-
sist of non-detections (0), type 1 encounters (1), type 2 encounters (2), type 3
encounters (3), and type 4 encounters (4).
data.type="always" indicates both type 1 and type 2 encounters are always
observed, but some encounter histories may still include only type 1 or type
2 encounters. Observed encounter histories can consist of non-detections (0),
type 1 encounters (1), type 2 encounters (2), and type 4 encounters (4). Latent
encounter histories consist of non-detections (0), type 1 encounters (1), type 2
encounters (2), and type 4 encounters (4).
link Link function for detection probability. Must be "logit" or "probit". Note that
multimarkClosed is currently implemented for the logit link only.
Value
A list containing the following:
Enc.Mat A matrix containing the observed encounter histories with rows corresponding
to individuals and columns corresponding to sampling occasions.
trueEnc.Mat A matrix containing the true (latent) encounter histories with rows correspond-
ing to individuals and columns corresponding to sampling occasions.
Author(s)
<NAME>
References
<NAME>., and <NAME>. 2013. Mark-recapture with multiple, non-invasive marks. Biometrics
69: 766-775.
<NAME>., <NAME>., <NAME>., and <NAME>. 2013. Integrated modeling of
bilateral photo-identification data in mark-recapture analyses. Ecology 94: 1464-1471.
See Also
processdata, multimarkClosed
Examples
#simulate data for data.type="sometimes" using defaults
data<-simdataClosed(data.type="sometimes")
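A further hedged sketch (values are illustrative only) adding a behavioral effect and individual heterogeneity:
# illustrative only: trap-shy behavioral effect (tau < 0) and logit-scale heterogeneity
sim <- simdataClosed(N = 200, noccas = 6, pbeta = -0.4, tau = -0.5,
sigma2_zp = 0.5, data.type = "never", link = "logit")
nrow(sim$Enc.Mat) # number of simulated encounter histories
table(sim$Enc.Mat) # codes 0 (no detection), 1 (type 1), 2 (type 2)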
simdataClosedSCR Simulate spatially-explicit capture-mark-recapture data from a (demo-
graphically) closed population with multiple non-invasive marks
Description
This function generates encounter histories from spatially-explicit capture-mark-recapture data con-
sisting of multiple non-invasive marks.
Usage
simdataClosedSCR(
N = 30,
ntraps = 9,
noccas = 5,
pbeta = 0.25,
tau = 0,
sigma2_scr = 0.75,
lambda = 0.75,
delta_1 = 0.4,
delta_2 = 0.4,
alpha = 0.5,
data.type = "never",
detection = "half-normal",
spatialInputs = NULL,
buffer = 3 * sqrt(sigma2_scr),
ncells = 1024,
scalemax = 10,
plot = TRUE
)
Arguments
N True population size or abundance.
ntraps The number of traps. If trapCoords=NULL, the square root of ntraps must be a
whole number in order to create a regular grid of trap coordinates on a square.
noccas Scalar indicating the number of sampling occasions per trap.
pbeta Complementary loglog-scale intercept term for detection probability (p). Must
be a scalar or vector of length noccas.
tau Additive complementary loglog-scale behavioral effect term for recapture prob-
ability (c).
sigma2_scr Complementary loglog-scale term for effect of distance in the “half-normal”
detection function. Ignored unless detection=``half-normal''.
lambda Complementary loglog-scale term for effect of distance in the “exponential”
detection function. Ignored unless detection=``exponential''.
delta_1 Conditional probability of type 1 encounter, given detection.
delta_2 Conditional probability of type 2 encounter, given detection.
alpha Conditional probability of simultaneous type 1 and type 2 detection, given both
types encountered. Only applies when data.type="sometimes".
data.type Specifies the encounter history data type. All data types include non-detections
(type 0 encounter), type 1 encounter (e.g., left-side), and type 2 encounters (e.g.,
right-side). When both type 1 and type 2 encounters occur for the same individ-
ual within a sampling occasion, these can either be "non-simultaneous" (type 3
encounter) or "simultaneous" (type 4 encounter). Three data types are currently
permitted:
data.type="never" indicates both type 1 and type 2 encounters are never ob-
served for the same individual within a sampling occasion, and observed en-
counter histories therefore include only type 1 or type 2 encounters (e.g., only
left- and right-sided photographs were collected). Observed encounter histories
can consist of non-detections (0), type 1 encounters (1), and type 2 encounters
(2). See bobcat. Latent encounter histories consist of non-detections (0), type
1 encounters (1), type 2 encounters (2), and type 3 encounters (3).
data.type="sometimes" indicates both type 1 and type 2 encounters are some-
times observed (e.g., both-sided photographs are sometimes obtained, but not
necessarily for all individuals). Observed encounter histories can consist of non-
detections (0), type 1 encounters (1), type 2 encounters (2), type 3 encounters
(3), and type 4 encounters (4). Type 3 encounters can only be observed when
an individual has at least one type 4 encounter. Latent encounter histories con-
sist of non-detections (0), type 1 encounters (1), type 2 encounters (2), type 3
encounters (3), and type 4 encounters (4).
data.type="always" indicates both type 1 and type 2 encounters are always
observed, but some encounter histories may still include only type 1 or type
2 encounters. Observed encounter histories can consist of non-detections (0),
type 1 encounters (1), type 2 encounters (2), and type 4 encounters (4). Latent
encounter histories consist of non-detections (0), type 1 encounters (1), type 2
encounters (2), and type 4 encounters (4).
detection Model for detection probability as a function of distance from activity centers.
Must be "half-normal" (of the form exp (−d2 /(2 ∗ σ 2 )), where d is distance)
or "exponential" (of the form exp (−d/λ)).
spatialInputs A list of length 3 composed of objects named trapCoords, studyArea, and
centers:
trapCoords is a matrix of dimension ntraps x (2 + noccas) indicating the
Cartesian coordinates and operating occasions for the traps, where rows corre-
spond to trap, the first column the x-coordinate (“x”), and the second column
the y-coordinate (“y”). The last noccas columns indicate whether or not the
trap was operating on each of the occasions, where ‘1’ indicates the trap was
operating and ‘0’ indicates the trap was not operating.
studyArea is a 3-column matrix defining the study area and available habitat.
Each row corresponds to a grid cell. The first 2 columns (“x” and “y”) indicate
the Cartesian x- and y-coordinate for the centroid of each grid cell, and the third
column (“avail”) indicates whether the cell is available habitat (=1) or not (=0).
All grid cells must have the same resolution. Note that rows should be ordered
in raster cell order (raster cell numbers start at 1 in the upper left corner, and
increase from left to right, and then from top to bottom).
centers is a N-vector indicating the grid cell (i.e., the row of studyArea) that
contains the true (latent) activity centers for each individual in the population.
If spatialInputs=NULL (the default), then all traps are assumed to be operating
on all occasions, the study area is assumed to be composed of ncells grid cells,
grid cells within buffer of the trap array are assumed to be available habitat,
and the activity centers are randomly assigned to grid cells of available habitat.
buffer A scalar indicating the buffer around the bounding box of trapCoords for defin-
ing the study area and available habitat when spatialInputs=NULL. Default is
buffer=3*sqrt(sigma2_scr). Ignored unless spatialInputs=NULL.
ncells The number of grid cells in the study area when studyArea=NULL. The square
root of ncells must be a whole number. Default is ncells=1024. Ignored
unless spatialInputs=NULL.
scalemax Upper bound for grid cell centroid x- and y-coordinates. Default is scalemax=10,
which scales the x- and y-coordinates to be between 0 and 10. Ignored unless
spatialInputs=NULL.
plot Logical indicating whether to plot the simulated trap coordinates, study area,
and activity centers using plotSpatialData. Default is plot=TRUE.
Details
Please be very careful when specifying your own spatialInputs; multimarkClosedSCR and
markClosedSCR do little to verify that these make sense during model fitting.
Value
A list containing the following:
Enc.Mat Matrix containing the observed encounter histories with rows corresponding to
individuals and (ntraps*noccas) columns corresponding to traps and sampling
occasions. The first noccas columns correspond to trap 1, the second noccas
columns correspond to trap 2, etc.
trueEnc.Mat Matrix containing the true (latent) encounter histories with rows corresponding
to individuals and (ntraps*noccas) columns corresponding to traps and sam-
pling occasions. The first noccas columns correspond to trap 1, the second
noccas columns correspond to trap 2, etc.
spatialInputs List of length 2 with objects named trapCoords and studyArea:
trapCoords is a matrix of dimension ntraps x (2 + noccas) indicating the
Cartesian coordinates and operating occasions for the traps, where rows cor-
respond to trap, the first column the x-coordinate, and the second column the
y-coordinate. The last noccas columns indicate whether or not the trap was op-
erating on each of the occasions, where ‘1’ indicates the trap was operating and
‘0’ indicates the trap was not operating.
studyArea is a 3-column matrix containing the coordinates for the centroids of a
contiguous grid of cells that define the study area and available habitat. Each
row corresponds to a grid cell. The first 2 columns indicate the Cartesian x- and
y-coordinate for the centroid of each grid cell, and the third column indicates
whether the cell is available habitat (=1) or not (=0). All cells must have the
same resolution.
centers N-vector indicating the grid cell (i.e., the row of spatialInputs$studyArea)
that contains the true (latent) activity centers for each individual in the popula-
tion.
Author(s)
<NAME>. McClintock
References
<NAME>., and <NAME>. 2013. Mark-recapture with multiple, non-invasive marks. Biometrics
69: 766-775.
<NAME>., <NAME>., <NAME>., and <NAME>. 2013. Integrated modeling of
bilateral photo-identification data in mark-recapture analyses. Ecology 94: 1464-1471.
<NAME>., <NAME>., <NAME>. and <NAME>. 2009. Bayesian inference in
camera trapping studies for a class of spatial capture-recapture models. Ecology 90: 3233-3244.
See Also
processdataSCR, multimarkClosedSCR, markClosedSCR
Examples
#simulate data for data.type="sometimes" using defaults
data<-simdataClosedSCR(data.type="sometimes")
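A rough sketch of user-specified spatialInputs (coordinates and values are illustrative only; as the Details note, these inputs are only minimally checked during model fitting):
# 3 x 3 trap grid, all traps operating on all noccas occasions
noccas <- 5
trapxy <- as.matrix(expand.grid(x = c(3.5, 5, 6.5), y = c(3.5, 5, 6.5)))
trapCoords <- cbind(trapxy, matrix(1, nrow = nrow(trapxy), ncol = noccas))
# 32 x 32 grid of cell centroids in raster cell order (left to right, top to bottom),
# all cells marked as available habitat
n <- 32
grd <- expand.grid(x = seq(0, 10, length.out = n), y = seq(10, 0, length.out = n))
studyArea <- cbind(as.matrix(grd), avail = 1)
# activity centers drawn uniformly from the available cells
N <- 30
centers <- sample(which(studyArea[, "avail"] == 1), N, replace = TRUE)
data <- simdataClosedSCR(N = N, ntraps = nrow(trapCoords), noccas = noccas,
spatialInputs = list(trapCoords = trapCoords,
studyArea = studyArea,
centers = centers))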
tiger Tiger data
Description
Example tiger data for multimark package.
Format
These spatial capture-recapture data with a single mark type are summarized in a list of length 3
containing the following objects:
Enc.Mat is a 44 x (noccas*ntraps) matrix containing observed encounter histories for 44 tigers
across noccas=48 sampling occasions and ntraps=120 traps.
trapCoords is a matrix of dimension ntraps x (2 + noccas) indicating the Cartesian coordi-
nates and operating occasions for the traps, where rows correspond to trap, the first column the
x-coordinate, and the second column the y-coordinate. The last noccas columns indicate whether
or not the trap was operating on each of the occasions, where ‘1’ indicates the trap was operating
and ‘0’ indicates the trap was not operating.
studyArea is a 3-column matrix containing the coordinates for the centroids of the contiguous grid
of cells that define the study area and available habitat. Each row corresponds to a grid cell. The
first 2 columns indicate the Cartesian x- and y-coordinate for the centroid of each grid cell, and the
third column indicates whether the cell is available habitat (=1) or not (=0). The grid cells are 0.336
km^2 resolution.
These data were obtained from the R package SPACECAP and modified by projecting onto a regu-
lar rectangular grid consisting of square grid cells (as is required by the spatial capture-recapture
models in multimark).
Details
We thank Ullas Karanth, Wildlife Conservation Society, for providing the tiger data for use as an
example with this package.
Source
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. and Karanth,
K.U. 2012. Program SPACECAP: software for estimating animal density using spatially explicit
capture-recapture models. Methods in Ecology and Evolution 3:1067-1072.
<NAME>., <NAME>., <NAME>. and <NAME>. 2009. Bayesian inference in
camera trapping studies for a class of spatial capture-recapture models. Ecology 90: 3233-3244.
See Also
markClosedSCR
Examples
data(tiger)
#plot the traps and available habitat within the study area
plotSpatialData(trapCoords=tiger$trapCoords,studyArea=tiger$studyArea)
# This example is excluded from testing to reduce package check time
# Example uses unrealistically low values for nchain, iter, and burnin
# Fit spatial model to tiger data
Enc.Mat<-tiger$Enc.Mat
trapCoords<-tiger$trapCoords
studyArea<-tiger$studyArea
tiger.dot<-markClosedSCR(Enc.Mat,trapCoords,studyArea,iter=100,adapt=50,burnin=50)
summary(tiger.dot$mcmc)
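Given the Enc.Mat layout (noccas columns per trap), total detections per trap can be tallied directly; a hedged sketch assuming the same trap-major column ordering used by simdataClosedSCR:
data(tiger)
noccas <- 48; ntraps <- 120
trap.index <- rep(seq_len(ntraps), each = noccas) # column j belongs to trap trap.index[j]
det.per.col <- colSums(tiger$Enc.Mat > 0) # tigers detected at each trap-occasion
det.per.trap <- tapply(det.per.col, trap.index, sum)
head(sort(det.per.trap, decreasing = TRUE)) # traps with the most detections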
HttpCommon
[`HttpCommon.Headers`](#HttpCommon.Headers) — Type`Headers` represents the header fields for an HTTP request.
[`HttpCommon.Request`](#HttpCommon.Request) — TypeA `Request` represents an HTTP request sent by a client to a server. It has five fields:
* `method`: an HTTP method string (e.g. "GET")
* `resource`: the resource requested (e.g. "/hello/world")
* `headers`: see `Headers` above
* `data`: the request data as a vector of bytes
* `uri`: additional details, normally not used
[`HttpCommon.Response`](#HttpCommon.Response) — TypeA `Response` represents an HTTP response sent to a client by a server. It has six fields:
* `status`: HTTP status code (see `STATUS_CODES`) [default: `200`]
* `headers`: `Headers` [default: `HttpCommon.headers()`]
* `cookies`: Dictionary of strings => `Cookie`s
* `data`: the request data as a vector of bytes [default: `UInt8[]`]
* `finished`: `true` if the `Response` is valid, meaning that it can be converted to an actual HTTP response [default: `false`]
* `requests`: the history of requests that generated the response; it can contain more than one request if a redirect was involved.
Response has many constructors - use `methods(Response)` for full list.
[`HttpCommon.escapeHTML`](#HttpCommon.escapeHTML-Tuple{String}) — MethodescapeHTML(i::String)
Returns a string with special HTML characters escaped: &, <, >, ", '
[`HttpCommon.parsequerystring`](#HttpCommon.parsequerystring-Union{Tuple{T}, Tuple{T}} where T<:AbstractString) — Methodparsequerystring(query::String)
Convert a valid querystring to a Dict:
```
q = "foo=bar&baz=%3Ca%20href%3D%27http%3A%2F%2Fwww.hackershool.com%27%3Ehello%20world%21%3C%2Fa%3E"
parsequerystring(q)
# Dict{String,String} with 2 entries:
# "baz" => "<a href='http://www.hackershool.com'>hello world!</a>"
# "foo" => "bar"
```
Creating Quizzes to match Bible Passages with Verses
<NAME>. Story
Table of Contents
* 1 Introduction
* 2 Declaring Bible verses and passages
* 3 Basic commands and environments
* 4 Other customizations
1. Introduction
The eq-fetchbbl package is an application to the exerquiz (eq) and fetchbibpes (fetchbbl) packages. This package defines several commands and two environments that are used to _conveniently_ build quizzes that challenges the user to match Bible passages with their corresponding verse references. Technically speaking, such quizzes may be built without this package using the techniques illustrated in Exerquiz: Match-type questions and in Exerquiz: Randomized matching-type questions.1,2 When working with Biblical topics, however, it is easier to incorporate the fetching capabilities of fetchbibpes. All the examples given here are reproduced, with additional variations, in the sample file doc-examples.tex, found in the examples folder of this distribution.
Footnote 1: [http://www.acrotex.net/blog/?p=1446](http://www.acrotex.net/blog/?p=1446)
Footnote 2: [http://www.acrotex.net/blog/?p=1449](http://www.acrotex.net/blog/?p=1449)
Match the quotations (NKJV) with the Bible references on the right. Each problem is worth \(2\) points; passing is \(100\%\).
Therefore do not fear them. For there is nothing covered that will not be revealed, and hidden that will not be known.
Then a voice came from heaven, "You are My beloved Son, in whom I am well pleased."
For there is nothing covered that will not be revealed, nor hidden that will not be known.
Answers:
Throughout this document, the markup for the above quiz is used to illustrate the commands and environments of this package.
2. Declaring Bible verses and passages
There are two ways of declaring Bible verses and passages:
* Through a database of verses: \usepackage[deffolder=exmpldefs, \usepsexes=verses]{fetchbibpes}[2021/03/08] Refer to the demo file bible-quiz-uv.tex for an example of this method.
* Through direct specification of passages in the document using the declareBVs environment. Refer to the demo files bible-quiz.tex and bible-quiz-rt.tex.
This documentation declares its verses and passages directly with the declareBVs environment described above.
3. Basic commands and environments
When you take this quiz, you'll notice that the \CorrAnsButton (Ans) button is aligned with the first line of the passage rather than the last line of the passage.
The previous quiz used \priorRBT; the command \priorPsg is illustrated in the doc-examples.tex file.
Using a numbering scheme (\useNumbersOn) allows a consistent structure for incorporating other types of questions into the quiz, as shown above.
You can change the appearance properties of the underlying \RespBoxTxt command with the \everyRespBoxTxt command, as was done in the previous quiz, in which \everyRespBoxTxt{\textColor{blue}} is expanded just below the \adjCAB command.
A final note. For matching, there are two "blocks" of items: the passages and the verse references. The document author determines how to present these two blocks. Originally, I used a multicols environment; later, I switched over to enclosing the two blocks in separate minipages and placing them side-by-side. Both methods have problems if you cross a page boundary.
4. Other customizations
The color of the verse reference labels is, by default, blue (A, B, ...). Change this color with the exerquiz command \quesNumColor; e.g., \quesNumColor{red} changes the verse reference labels to red. Any other changes to the labeling of verse references require a redefinition of BibVrs. Refer to doc-examples.tex for an example of redefining BibVrs.
\setRBTWidthTo{(_content_)} (both define \RBTWidth)
\setRBTWidth{(_length_)}
Both commands define \RBTWidth to a width determined by their arguments. For \setRBTWidthTo, (_content_) is any text; it is measured to determine its width and that width becomes the expansion of \RBTWidth. For \setRBTWidth, (_length_) is any length; \RBTWidth is then defined to expand to (_length_). The command \RBTWidth is the width of the \RespBoxTxt control; the default is set by \setRBTWidthTo{AA}.
Now, I simply must get back to my retirement.
October 14, 2022
Type Package
Title Interface to Praat
Version 1.3.2-1
Encoding UTF-8
Maintainer <NAME> <<EMAIL>>
Description Read, write and manipulate 'Praat' TextGrid, PitchTier, Pitch, IntensityTier, For-
mant, Sound, and Collection files <https://www.fon.hum.uva.nl/praat/>.
URL https://github.com/bbTomas/rPraat/
BugReports https://github.com/bbTomas/rPraat/issues
License MIT + file LICENSE
LazyData TRUE
Depends R (>= 3.4.0)
Imports graphics (>= 3.1.0), dplyr (>= 0.8.5), stringr (>= 1.4.0),
readr(>= 1.3.1), dygraphs (>= 1.1.1.6), tuneR (>= 1.3.3)
RoxygenNote 7.1.1
Suggests testthat
NeedsCompilation no
Author <NAME> [aut, cre]
Repository CRAN
Date/Publication 2021-02-27 22:40:02 UTC
R topics documented:
as.formant, as.it, as.pitch, as.pt, as.snd, as.tg
col.read, col.write, detectEncoding
formant.cut, formant.cut0, formant.getPointIndexHigherThanTime, formant.getPointIndexLowerThanTime,
formant.getPointIndexNearestTime, formant.plot, formant.read, formant.sample, formant.toArray,
formant.toFrame, formant.write
ifft, isInt, isLogical, isNum, isString
it.cut, it.cut0, it.getPointIndexHigherThanTime, it.getPointIndexLowerThanTime,
it.getPointIndexNearestTime, it.interpolate, it.legendre, it.legendreDemo, it.legendreSynth,
it.plot, it.read, it.sample, it.write
pitch.cut, pitch.cut0, pitch.getPointIndexHigherThanTime, pitch.getPointIndexLowerThanTime,
pitch.getPointIndexNearestTime, pitch.plot, pitch.read, pitch.sample, pitch.toArray,
pitch.toFrame, pitch.write
pt.cut, pt.cut0, pt.getPointIndexHigherThanTime, pt.getPointIndexLowerThanTime,
pt.getPointIndexNearestTime, pt.Hz2ST, pt.interpolate, pt.legendre, pt.legendreDemo,
pt.legendreSynth, pt.plot, pt.read, pt.sample, pt.write
round2, seqM
snd.cut, snd.cut0, snd.getPointIndexHigherThanTime, snd.getPointIndexLowerThanTime,
snd.getPointIndexNearestTime, snd.plot, snd.read, snd.sample, snd.write
strTrim, str_contains, str_find, str_find1
tg.boundaryMagnet, tg.checkTierInd, tg.countLabels, tg.createNewTextGrid, tg.cut, tg.cut0,
tg.duplicateTier, tg.duplicateTierMergeSegments, tg.findLabels, tg.getEndTime,
tg.getIntervalDuration, tg.getIntervalEndTime, tg.getIntervalIndexAtTime, tg.getIntervalStartTime,
tg.getLabel, tg.getNumberOfIntervals, tg.getNumberOfPoints, tg.getNumberOfTiers,
tg.getPointIndexHigherThanTime, tg.getPointIndexLowerThanTime, tg.getPointIndexNearestTime,
tg.getPointTime, tg.getStartTime, tg.getTierName, tg.getTotalDuration, tg.insertBoundary,
tg.insertInterval, tg.insertNewIntervalTier, tg.insertNewPointTier, tg.insertPoint,
tg.isIntervalTier, tg.isPointTier, tg.plot, tg.read, tg.removeIntervalBothBoundaries,
tg.removeIntervalLeftBoundary, tg.removeIntervalRightBoundary, tg.removePoint, tg.removeTier,
tg.repairContinuity, tg.sample, tg.sampleProblem, tg.setLabel, tg.setTierName, tg.write
as.formant as.formant
Description
Renames the class(formant)["name"] attribute and sets class(formant)["type"] <- "Formant
2" (if it is not already set)
Usage
as.formant(formant, name = "")
Arguments
formant Formant 2 object
name New name
Value
Formant 2 object
Examples
class(formant.sample())
class(as.formant(formant.sample(), name = "New Name"))
as.it as.it
Description
Renames the class(it)["name"] attribute and sets class(it)["type"] <- "IntensityTier" (if
it is not already set)
Usage
as.it(it, name = "")
Arguments
it IntensityTier object
name New name
Value
IntensityTier object
Examples
class(it.sample())
class(as.it(it.sample(), name = "New Name"))
as.pitch as.pitch
Description
Renames the class(pitch)["name"] attribute and sets class(pitch)["type"] <- "Pitch 1" (if
it is not already set)
Usage
as.pitch(pitch, name = "")
Arguments
pitch Pitch 1 object
name New name
Value
Pitch 1 object
Examples
class(pitch.sample())
class(as.pitch(pitch.sample(), name = "New Name"))
as.pt as.pt
Description
Renames the class(pt)["name"] attribute and sets class(pt)["type"] <- "PitchTier" (if it is
not already set)
Usage
as.pt(pt, name = "")
Arguments
pt PitchTier object
name New name
Value
PitchTier object
Examples
class(pt.sample())
class(as.pt(pt.sample(), name = "New Name"))
as.snd as.snd
Description
Renames the class(snd)["name"] attribute and sets class(snd)["type"] <- "Sound" (if it is
not already set)
Usage
as.snd(snd, name = "")
Arguments
snd snd object
name New name
Details
At least the $sig and $fs members must be present in the snd list.
If not present, it calculates $t, $nChannels, $nBits (default: 16), $nSamples, and $duration
members of snd list
Value
snd object
Examples
class(snd.sample())
class(as.snd(snd.sample(), name = "New Name"))
as.tg as.tg
Description
Renames the class(tg)["name"] attribute and sets class(tg)["type"] <- "TextGrid" (if it is not
already set)
Usage
as.tg(tg, name = "")
Arguments
tg TextGrid object
name New name
Value
TextGrid object
Examples
class(tg.sample())
class(as.tg(tg.sample(), name = "New Name"))
col.read col.read
Description
Loads Collection from Praat in Text or Short text format. A Collection may contain a combination
of TextGrids, PitchTiers, Pitch objects, Formant objects, and IntensityTiers.
Usage
col.read(fileName, encoding = "UTF-8")
Arguments
fileName Input file name
encoding File encoding (default: "UTF-8"), "auto" for auto-detect of Unicode encoding
Value
Collection object
See Also
tg.read, pt.read, pitch.read, formant.read, it.read
Examples
## Not run:
coll <- col.read("coll_text.Collection")
length(coll) # number of objects in collection
class(coll[[1]])["type"] # 1st object type
class(coll[[1]])["name"] # 1st object name
it <- coll[[1]] # 1st object
it.plot(it)
class(coll[[2]])["type"] # 2nd object type
class(coll[[2]])["name"] # 2nd object name
tg <- coll[[2]] # 2nd object
tg.plot(tg)
length(tg) # number of tiers in TextGrid
tg$word$label
class(coll[[3]])["type"] # 3rd object type
class(coll[[3]])["name"] # 3rd object type
pitch <- coll[[3]] # 3rd object
names(pitch)
pitch$nx # number of frames
pitch$t[4] # time instance of the 4th frame
pitch$frame[[4]] # 4th frame: pitch candidates
pitch$frame[[4]]$frequency[2]
pitch$frame[[4]]$strength[2]
class(coll[[4]])["type"] # 4th object type
class(coll[[4]])["name"] # 4th object name
pt <- coll[[4]] # 2nd object
pt.plot(pt)
## End(Not run)
col.write col.write
Description
Saves a Collection of objects to a file (in UTF-8 encoding). col is a list of objects; each item col[[i]]
must contain class(col[[i]])["type"] ("TextGrid", "PitchTier", "IntensityTier", "Pitch 1", or
"Formant 2") and class(col[[i]])["name"] (name of the object) parameters set. These pa-
rameters can be created easily using "as.something()" functions: as.tg(), as.pt(), as.it(),
as.pitch(), as.formant()
Usage
col.write(col, fileNameCollection, format = "short")
Arguments
col Collection object = list of objects (col[[1]], col[[2]], etc.) with class(col[[i]])["type"]
and class(col[[i]])["name"] parameters set
fileNameCollection
file name to be created
format Output file format ("short" (short text format) or "text" (a.k.a. full text for-
mat))
Details
Sound objects in col.read() and col.write() are not supported at this moment because they
would occupy too much disc space in text format.
See Also
col.read
Examples
## Not run:
col <- list(as.tg(tg.sample(), "My textgrid"), as.pt(pt.sample(), "My PitchTier 1"),
as.pt(pt.Hz2ST(pt.sample()), "My PitchTier 2"), as.it(it.sample(), "My IntensityTier"),
as.pitch(pitch.sample(), "My Pitch"), as.formant(formant.sample(), "My Formant"))
col.write(col, "my_collection.Collection")
## End(Not run)
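A hedged round-trip sketch (file name illustrative) confirming that object types and names survive writing and re-reading:
## Not run:
col <- list(as.tg(tg.sample(), "My textgrid"), as.pt(pt.sample(), "My PitchTier"))
col.write(col, "roundtrip.Collection")
col2 <- col.read("roundtrip.Collection")
length(col2) # 2 objects
sapply(col2, function(x) class(x)["type"]) # "TextGrid", "PitchTier"
sapply(col2, function(x) class(x)["name"]) # names set via as.tg() / as.pt()
## End(Not run)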
detectEncoding detectEncoding
Description
Detects unicode encoding of Praat text files
Usage
detectEncoding(fileName)
Arguments
fileName Input file name
Value
detected encoding of the text input file
Examples
## Not run:
detectEncoding("demo/H.TextGrid")
detectEncoding("demo/H_UTF16.TextGrid")
## End(Not run)
formant.cut formant.cut
Description
Cut the specified interval from the Formant object and preserve time
Usage
formant.cut(formant, tStart = -Inf, tEnd = Inf)
Arguments
formant Formant object (either in Frame or Array format)
tStart beginning time of interval to be cut (default -Inf = cut from the xmin of the
Formant)
tEnd final time of interval to be cut (default Inf = cut to the xmax of the Formant)
Value
Formant object
See Also
formant.cut0, tg.cut, tg.cut0, formant.read, formant.plot
Examples
formant <- formant.sample()
formant2 <- formant.cut(formant, tStart = 3)
formant2_0 <- formant.cut0(formant, tStart = 3)
formant3 <- formant.cut(formant, tStart = 2, tEnd = 3)
formant3_0 <- formant.cut0(formant, tStart = 2, tEnd = 3)
formant4 <- formant.cut(formant, tEnd = 1)
formant4_0 <- formant.cut0(formant, tEnd = 1)
formant5 <- formant.cut(formant, tStart = -1, tEnd = 1)
formant5_0 <- formant.cut0(formant, tStart = -1, tEnd = 1)
## Not run:
formant.plot(formant)
formant.plot(formant2)
formant.plot(formant2_0)
formant.plot(formant3)
formant.plot(formant3_0)
formant.plot(formant4)
formant.plot(formant4_0)
formant.plot(formant5)
formant.plot(formant5_0)
## End(Not run)
formant.cut0 formant.cut0
Description
Cut the specified interval from the Formant object and shift time so that the new xmin = 0
Usage
formant.cut0(formant, tStart = -Inf, tEnd = Inf)
Arguments
formant Formant object (either in Frame or Array format)
tStart beginning time of interval to be cut (default -Inf = cut from the xmin of the
Formant)
tEnd final time of interval to be cut (default Inf = cut to the xmax of the Formant)
Value
Formant object
See Also
formant.cut, tg.cut, tg.cut0, formant.read, formant.plot
Examples
formant <- formant.sample()
formant2 <- formant.cut(formant, tStart = 3)
formant2_0 <- formant.cut0(formant, tStart = 3)
formant3 <- formant.cut(formant, tStart = 2, tEnd = 3)
formant3_0 <- formant.cut0(formant, tStart = 2, tEnd = 3)
formant4 <- formant.cut(formant, tEnd = 1)
formant4_0 <- formant.cut0(formant, tEnd = 1)
formant5 <- formant.cut(formant, tStart = -1, tEnd = 1)
formant5_0 <- formant.cut0(formant, tStart = -1, tEnd = 1)
## Not run:
formant.plot(formant)
formant.plot(formant2)
formant.plot(formant2_0)
formant.plot(formant3)
formant.plot(formant3_0)
formant.plot(formant4)
formant.plot(formant4_0)
formant.plot(formant5)
formant.plot(formant5_0)
## End(Not run)
formant.getPointIndexHigherThanTime
formant.getPointIndexHigherThanTime
Description
Returns index of frame which is nearest the given time from right, i.e. time <= frameTime.
Usage
formant.getPointIndexHigherThanTime(formant, time)
Arguments
formant Formant object
time time which is going to be found in frames
Value
integer
See Also
formant.getPointIndexNearestTime, formant.getPointIndexLowerThanTime
Examples
formant <- formant.sample()
formant.getPointIndexHigherThanTime(formant, 0.5)
formant.getPointIndexLowerThanTime
formant.getPointIndexLowerThanTime
Description
Returns index of frame which is nearest the given time from left, i.e. frameTime <= time.
Usage
formant.getPointIndexLowerThanTime(formant, time)
Arguments
formant Formant object
time time which is going to be found in frames
Value
integer
See Also
formant.getPointIndexNearestTime, formant.getPointIndexHigherThanTime
Examples
formant <- formant.sample()
formant.getPointIndexLowerThanTime(formant, 0.5)
formant.getPointIndexNearestTime
formant.getPointIndexNearestTime
Description
Returns index of frame which is nearest the given time (from both sides).
Usage
formant.getPointIndexNearestTime(formant, time)
Arguments
formant Formant object
time time which is going to be found in frames
Value
integer
See Also
formant.getPointIndexLowerThanTime, formant.getPointIndexHigherThanTime
Examples
formant <- formant.sample()
formant.getPointIndexNearestTime(formant, 0.5)
formant.plot formant.plot
Description
Plots interactive Formant object using dygraphs package.
Usage
formant.plot(formant, scaleIntensity = TRUE, drawBandwidth = TRUE, group = "")
Arguments
formant Formant object
scaleIntensity Point size scaled according to relative intensity
drawBandwidth Draw formant bandwidth
group [optional] character string, name of group for dygraphs synchronization
See Also
formant.read, formant.sample, formant.toArray, tg.plot
Examples
## Not run:
formant <- formant.sample()
formant.plot(formant, drawBandwidth = TRUE)
## End(Not run)
formant.read formant.read
Description
Reads Formant object from Praat. Supported formats: text file, short text file.
Usage
formant.read(fileNameFormant, encoding = "UTF-8")
Arguments
fileNameFormant
file name of Formant object
encoding File encoding (default: "UTF-8"), "auto" for auto-detect of Unicode encoding
Value
A Formant object represents formants as a function of time.
[ref: Praat help, https://www.fon.hum.uva.nl/praat/manual/Formant.html]
f$xmin ... start time (seconds)
f$xmax ... end time (seconds)
f$nx ... number of frames
f$dx ... time step = frame duration (seconds)
f$x1 ... time associated with the first frame (seconds)
f$t ... vector of time instances associated with all frames
f$maxnFormants ... maximum number of formants in frame
f$frame[[1]] to f$frame[[f$nx]] ... frames
f$frame[[1]]$intensity ... intensity of the frame
f$frame[[1]]$nFormants ... actual number of formants in this frame
f$frame[[1]]$frequency ... vector of formant frequencies (in Hz)
f$frame[[1]]$bandwidth ... vector of formant bandwidths (in Hz)
See Also
formant.write, formant.plot, formant.cut, formant.getPointIndexNearestTime, pitch.read,
pt.read, tg.read, it.read, col.read
Examples
## Not run:
f <- formant.read('demo/maminka.Formant')
names(f)
f$nx
f$t[4] # time instance of the 4th frame
f$frame[[4]] # 4th frame: formants
f$frame[[4]]$frequency[2]
f$frame[[4]]$bandwidth[2]
## End(Not run)
formant.sample formant.sample
Description
Returns sample Formant object.
Usage
formant.sample()
Value
Formant
See Also
tg.sample, pt.sample, it.sample, pitch.sample
Examples
formant <- formant.sample()
formant.toArray formant.toArray
Description
formant.toArray
Usage
formant.toArray(formant)
Arguments
formant Formant object
Value
Formant object with frames converted to frequency and bandwidth arrays and intensity vector
See Also
formant.read, formant.plot
Examples
formantArray <- formant.toArray(formant.sample())
formantArray$t[1:10]
formantArray$frequencyArray[, 1:10]
formantArray$bandwidthArray[, 1:10]
formantArray$intensityVector[1:10]
## Not run:
plot(formantArray$t, formantArray$frequencyArray[1, ]) # draw 1st formant track
## End(Not run)
formant.toFrame formant.toFrame
Description
formant.toFrame
Usage
formant.toFrame(formantArray)
Arguments
formantArray Formant object (array format)
Value
Formant object with frames
See Also
formant.toArray, formant.read, formant.plot
Examples
formantArray <- formant.toArray(formant.sample())
formant <- formant.toFrame(formantArray)
formant.write formant.write
Description
Saves Formant to the file.
Usage
formant.write(formant, fileNameFormant, format = "short")
Arguments
formant Formant object
fileNameFormant
Output file name
format Output file format ("short" (default, short text format) or "text" (a.k.a. full
text format))
See Also
formant.read, tg.read
Examples
## Not run:
formant <- formant.sample()
formant.write(formant, "demo_output.Formant")
## End(Not run)
ifft ifft
Description
Inverse Fast Fourier Transform (discrete FT), Matlab-like behavior.
Usage
ifft(sig)
Arguments
sig input vector
Details
This really is the inverse of the fft function, so ifft(fft(x)) == x.
Value
output vector of the same length as the input vector
See Also
fft, Re, Im, Mod, Conj
Examples
ifft(fft(1:5))
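Because the result is a complex vector, a real input is typically recovered with Re(); a short sketch:
x <- c(1, 0, -2, 3.5, 4)
y <- ifft(fft(x)) # complex vector with negligible imaginary parts
max(abs(Im(y))) # numerically close to zero
all.equal(Re(y), x) # TRUE up to floating-point error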
isInt isInt
Description
Returns TRUE / FALSE whether it is exactly 1 integer number (in fact, the class can be numeric but
the number must be integer), non-missing
Usage
isInt(num)
Arguments
num variable to be tested
Value
TRUE / FALSE
See Also
isNum, isLogical, isString
Examples
isInt(2)
isInt(2L)
isInt(-2)
isInt(-2L)
isInt(2.1)
isInt(-2.1)
isInt(1:5)
isInt(NA_integer_)
isInt(integer(0))
isLogical isLogical
Description
Returns TRUE / FALSE whether it is exactly 1 logical value, non-missing
Usage
isLogical(logical)
Arguments
logical variable to be tested
Value
TRUE / FALSE
See Also
isNum, isInt, isString
Examples
isLogical(TRUE)
isLogical(FALSE)
isLogical(1)
isLogical(0)
isLogical(2)
isLogical(NA)
isLogical(NaN)
isLogical(logical(0))
isNum isNum
Description
Returns TRUE / FALSE whether it is exactly 1 number (numeric or integer vector of length 1, non-
missing)
Usage
isNum(num)
Arguments
num variable to be tested
Value
TRUE / FALSE
See Also
isInt, isLogical, isString
Examples
isNum(2)
isNum(2L)
isNum(-2)
isNum(-2L)
isNum(2.1)
isNum(-2.1)
isNum(1:5)
isNum(NA_real_)
isNum(numeric(0))
isString isString
Description
Returns TRUE / FALSE whether it is exactly 1 character string (character vector of length 1, non-
missing)
Usage
isString(string)
Arguments
string variable to be tested
Value
TRUE / FALSE
See Also
isInt, isNum, isLogical
Examples
isString("hello")
isString(2)
isString(c("hello", "world"))
isString(NA_character_)
it.cut it.cut
Description
Cut the specified interval from the IntensityTier and preserve time
Usage
it.cut(it, tStart = -Inf, tEnd = Inf)
Arguments
it IntensityTier object
tStart beginning time of interval to be cut (default -Inf = cut from the tmin of the
IntensityTier)
tEnd final time of interval to be cut (default Inf = cut to the tmax of the IntensityTier)
Value
IntensityTier object
See Also
it.cut0, it.read, it.plot, it.interpolate, it.legendre, it.legendreSynth, it.legendreDemo
Examples
it <- it.sample()
it2 <- it.cut(it, tStart = 0.3)
it2_0 <- it.cut0(it, tStart = 0.3)
it3 <- it.cut(it, tStart = 0.2, tEnd = 0.3)
it3_0 <- it.cut0(it, tStart = 0.2, tEnd = 0.3)
it4 <- it.cut(it, tEnd = 0.3)
it4_0 <- it.cut0(it, tEnd = 0.3)
it5 <- it.cut(it, tStart = -1, tEnd = 1)
it5_0 <- it.cut0(it, tStart = -1, tEnd = 1)
## Not run:
it.plot(it)
it.plot(it2)
it.plot(it2_0)
it.plot(it3)
it.plot(it3_0)
it.plot(it4)
it.plot(it4_0)
it.plot(it5)
it.plot(it5_0)
## End(Not run)
it.cut0 it.cut0
Description
Cut the specified interval from the IntensityTier and shift time so that the new tmin = 0
Usage
it.cut0(it, tStart = -Inf, tEnd = Inf)
Arguments
it IntensityTier object
tStart beginning time of interval to be cut (default -Inf = cut from the tmin of the
IntensityTier)
tEnd final time of interval to be cut (default Inf = cut to the tmax of the IntensityTier)
Value
IntensityTier object
See Also
it.cut, it.read, it.plot, it.interpolate, it.legendre, it.legendreSynth, it.legendreDemo
Examples
it <- it.sample()
it2 <- it.cut(it, tStart = 0.3)
it2_0 <- it.cut0(it, tStart = 0.3)
it3 <- it.cut(it, tStart = 0.2, tEnd = 0.3)
it3_0 <- it.cut0(it, tStart = 0.2, tEnd = 0.3)
it4 <- it.cut(it, tEnd = 0.3)
it4_0 <- it.cut0(it, tEnd = 0.3)
it5 <- it.cut(it, tStart = -1, tEnd = 1)
it5_0 <- it.cut0(it, tStart = -1, tEnd = 1)
## Not run:
it.plot(it)
it.plot(it2)
it.plot(it2_0)
it.plot(it3)
it.plot(it3_0)
it.plot(it4)
it.plot(it4_0)
it.plot(it5)
it.plot(it5_0)
## End(Not run)
it.getPointIndexHigherThanTime
it.getPointIndexHigherThanTime
Description
Returns index of point which is nearest the given time from right, i.e. time <= pointTime.
Usage
it.getPointIndexHigherThanTime(it, time)
Arguments
it IntensityTier object
time time which is going to be found in points
Value
integer
See Also
it.getPointIndexNearestTime, it.getPointIndexLowerThanTime
Examples
it <- it.sample()
it.getPointIndexHigherThanTime(it, 0.5)
it.getPointIndexLowerThanTime
it.getPointIndexLowerThanTime
Description
Returns index of point which is nearest the given time from left, i.e. pointTime <= time.
Usage
it.getPointIndexLowerThanTime(it, time)
Arguments
it IntensityTier object
time time which is going to be found in points
Value
integer
See Also
it.getPointIndexNearestTime, it.getPointIndexHigherThanTime
Examples
it <- it.sample()
it.getPointIndexLowerThanTime(it, 0.5)
it.getPointIndexNearestTime
it.getPointIndexNearestTime
Description
Returns index of point which is nearest the given time (from both sides).
Usage
it.getPointIndexNearestTime(it, time)
Arguments
it IntensityTier object
time time which is going to be found in points
Value
integer
See Also
it.getPointIndexLowerThanTime, it.getPointIndexHigherThanTime
Examples
it <- it.sample()
it.getPointIndexNearestTime(it, 0.5)
it.interpolate it.interpolate
Description
Interpolates IntensityTier contour in given time instances.
Usage
it.interpolate(it, t)
Arguments
it IntensityTier object
t vector of time instances of interest
Details
a) If t < min(it$t) (or t > max(it$t)), returns the first (or the last) value of it$i. b) If t is an
existing point in it$t, returns the respective it$i. c) If t is between two existing points, returns
the linear interpolation of these two points.
Value
IntensityTier object
See Also
it.getPointIndexNearestTime, it.read, it.write, it.plot, it.cut, it.cut0, it.legendre
Examples
it <- it.sample()
it2 <- it.interpolate(it, seq(it$t[1], it$t[length(it$t)], by = 0.001))
## Not run:
it.plot(it)
it.plot(it2)
## End(Not run)
it.legendre it.legendre
Description
Interpolate the IntensityTier in npoints equidistant points and approximate it by Legendre polyno-
mials
Usage
it.legendre(it, npoints = 1000, npolynomials = 4)
Arguments
it IntensityTier object
npoints Number of points of IntensityTier interpolation
npolynomials Number of polynomials to be used for Legendre modelling
Value
Vector of Legendre polynomials coefficients
See Also
it.legendreSynth, it.legendreDemo, it.cut, it.cut0, it.read, it.plot, it.interpolate
Examples
it <- it.sample()
it <- it.cut(it, tStart = 0.2, tEnd = 0.4) # cut IntensityTier and preserve time
c <- it.legendre(it)
print(c)
leg <- it.legendreSynth(c)
itLeg <- it
itLeg$t <- seq(itLeg$tmin, itLeg$tmax, length.out = length(leg))
itLeg$i <- leg
## Not run:
plot(it$t, it$i, xlab = "Time (sec)", ylab = "Intensity (dB)")
lines(itLeg$t, itLeg$i, col = "blue")
## End(Not run)
it.legendreDemo it.legendreDemo
Description
Plots first four Legendre polynomials
Usage
it.legendreDemo()
See Also
it.legendre, it.legendreSynth, it.read, it.plot, it.interpolate
Examples
## Not run:
it.legendreDemo()
## End(Not run)
it.legendreSynth it.legendreSynth
Description
Synthesize the contour from a vector of Legendre polynomial coefficients c in npoints equidistant points
Usage
it.legendreSynth(c, npoints = 1000)
Arguments
c Vector of Legendre polynomials coefficients
npoints Number of points of IntensityTier interpolation
Value
Vector of values of the synthesized contour
See Also
it.legendre, it.legendreDemo, it.read, it.plot, it.interpolate
Examples
it <- it.sample()
it <- it.cut(it, tStart = 0.2, tEnd = 0.4) # cut IntensityTier and preserve time
c <- it.legendre(it)
print(c)
leg <- it.legendreSynth(c)
itLeg <- it
itLeg$t <- seq(itLeg$tmin, itLeg$tmax, length.out = length(leg))
itLeg$i <- leg
## Not run:
plot(it$t, it$i, xlab = "Time (sec)", ylab = "Intensity (dB)")
lines(itLeg$t, itLeg$i, col = "blue")
## End(Not run)
it.plot it.plot
Description
Plots interactive IntensityTier using dygraphs package.
Usage
it.plot(it, group = "", snd = NULL)
Arguments
it IntensityTier object
group [optional] character string, name of group for dygraphs synchronization
snd [optional] Sound object
See Also
it.read, tg.plot, it.cut, it.cut0, it.interpolate, it.write
Examples
## Not run:
it <- it.sample()
it.plot(it)
## End(Not run)
it.read it.read
Description
Reads IntensityTier from Praat. Supported formats: text file, short text file.
Usage
it.read(fileNameIntensityTier, encoding = "UTF-8")
Arguments
fileNameIntensityTier
file name of IntensityTier
encoding File encoding (default: "UTF-8"), "auto" for auto-detect of Unicode encoding
Value
IntensityTier object
See Also
it.write, it.plot, it.cut, it.cut0, it.interpolate, tg.read, pt.read, pitch.read, formant.read,
col.read
Examples
## Not run:
it <- it.read("demo/maminka.IntensityTier")
it.plot(it)
## End(Not run)
it.sample it.sample
Description
Returns sample IntensityTier.
Usage
it.sample()
Value
IntensityTier
See Also
it.plot
Examples
it <- it.sample()
it.plot(it)
it.write it.write
Description
Saves IntensityTier to a file (in UTF-8 encoding). it is a list with at least $t and $i vectors (of the same
length). If there are no $tmin and $tmax values, they are set to the min and max of the $t vector.
Usage
it.write(it, fileNameIntensityTier, format = "short")
Arguments
it IntensityTier object
fileNameIntensityTier
file name to be created
format Output file format ("short" (short text format - default), "text" (a.k.a. full text
format))
See Also
it.read, tg.write, it.interpolate
Examples
## Not run:
it <- it.sample()
it.plot(it)
it.write(it, "demo/intensity.IntensityTier")
## End(Not run)
pitch.cut pitch.cut
Description
Cut the specified interval from the Pitch object and preserve time
Usage
pitch.cut(pitch, tStart = -Inf, tEnd = Inf)
Arguments
pitch Pitch object (either in Frame or Array format)
tStart beginning time of interval to be cut (default -Inf = cut from the xmin of the
Pitch)
tEnd final time of interval to be cut (default Inf = cut to the xmax of the Pitch)
Value
Pitch object
See Also
pitch.cut0, tg.cut, tg.cut0, pitch.read, pitch.plot
Examples
pitch <- pitch.sample()
pitch2 <- pitch.cut(pitch, tStart = 3)
pitch2_0 <- pitch.cut0(pitch, tStart = 3)
pitch3 <- pitch.cut(pitch, tStart = 2, tEnd = 3)
pitch3_0 <- pitch.cut0(pitch, tStart = 2, tEnd = 3)
pitch4 <- pitch.cut(pitch, tEnd = 1)
pitch4_0 <- pitch.cut0(pitch, tEnd = 1)
pitch5 <- pitch.cut(pitch, tStart = -1, tEnd = 1)
pitch5_0 <- pitch.cut0(pitch, tStart = -1, tEnd = 1)
## Not run:
pitch.plot(pitch)
pitch.plot(pitch2)
pitch.plot(pitch2_0)
pitch.plot(pitch3)
pitch.plot(pitch3_0)
pitch.plot(pitch4)
pitch.plot(pitch4_0)
pitch.plot(pitch5)
pitch.plot(pitch5_0)
## End(Not run)
pitch.cut0 pitch.cut0
Description
Cut the specified interval from the Pitch object and shift time so that the new xmin = 0
Usage
pitch.cut0(pitch, tStart = -Inf, tEnd = Inf)
Arguments
pitch Pitch object (either in Frame or Array format)
tStart beginning time of interval to be cut (default -Inf = cut from the xmin of the
Pitch)
tEnd final time of interval to be cut (default Inf = cut to the xmax of the Pitch)
Value
Pitch object
See Also
pitch.cut, tg.cut, tg.cut0, pitch.read, pitch.plot
Examples
pitch <- pitch.sample()
pitch2 <- pitch.cut(pitch, tStart = 3)
pitch2_0 <- pitch.cut0(pitch, tStart = 3)
pitch3 <- pitch.cut(pitch, tStart = 2, tEnd = 3)
pitch3_0 <- pitch.cut0(pitch, tStart = 2, tEnd = 3)
pitch4 <- pitch.cut(pitch, tEnd = 1)
pitch4_0 <- pitch.cut0(pitch, tEnd = 1)
pitch5 <- pitch.cut(pitch, tStart = -1, tEnd = 1)
pitch5_0 <- pitch.cut0(pitch, tStart = -1, tEnd = 1)
## Not run:
pitch.plot(pitch)
pitch.plot(pitch2)
pitch.plot(pitch2_0)
pitch.plot(pitch3)
pitch.plot(pitch3_0)
pitch.plot(pitch4)
pitch.plot(pitch4_0)
pitch.plot(pitch5)
pitch.plot(pitch5_0)
## End(Not run)
pitch.getPointIndexHigherThanTime
pitch.getPointIndexHigherThanTime
Description
Returns index of frame which is nearest the given time from right, i.e. time <= frameTime.
Usage
pitch.getPointIndexHigherThanTime(pitch, time)
Arguments
pitch Pitch object
time time which is going to be found in frames
Value
integer
See Also
pitch.getPointIndexNearestTime, pitch.getPointIndexLowerThanTime
Examples
pitch <- pitch.sample()
pitch.getPointIndexHigherThanTime(pitch, 0.5)
pitch.getPointIndexLowerThanTime
pitch.getPointIndexLowerThanTime
Description
Returns index of frame which is nearest the given time from left, i.e. frameTime <= time.
Usage
pitch.getPointIndexLowerThanTime(pitch, time)
Arguments
pitch Pitch object
time time which is going to be found in frames
Value
integer
See Also
pitch.getPointIndexNearestTime, pitch.getPointIndexHigherThanTime
Examples
pitch <- pitch.sample()
pitch.getPointIndexLowerThanTime(pitch, 0.5)
pitch.getPointIndexNearestTime
pitch.getPointIndexNearestTime
Description
Returns index of frame which is nearest the given time (from both sides).
Usage
pitch.getPointIndexNearestTime(pitch, time)
Arguments
pitch Pitch object
time time which is going to be found in frames
Value
integer
See Also
pitch.getPointIndexLowerThanTime, pitch.getPointIndexHigherThanTime
Examples
pitch <- pitch.sample()
pitch.getPointIndexNearestTime(pitch, 0.5)
pitch.plot pitch.plot
Description
Plots interactive Pitch object using dygraphs package.
Usage
pitch.plot(
pitch,
scaleIntensity = TRUE,
showStrength = FALSE,
group = "",
pt = NULL
)
Arguments
pitch Pitch object
scaleIntensity Point size scaled according to relative intensity
showStrength Show strength annotation
group [optional] character string, name of group for dygraphs synchronization
pt [optional] PitchTier object
See Also
pitch.read, pitch.sample, pitch.toArray, tg.plot, pt.plot, formant.plot
Examples
## Not run:
pitch <- pitch.sample()
pitch.plot(pitch, scaleIntensity = TRUE, showStrength = TRUE)
pitch.plot(pitch, scaleIntensity = TRUE, showStrength = TRUE, pt = pt.sample())
## End(Not run)
pitch.read pitch.read
Description
Reads Pitch object from Praat. Supported formats: text file, short text file.
Usage
pitch.read(fileNamePitch, encoding = "UTF-8")
Arguments
fileNamePitch file name of Pitch object
encoding File encoding (default: "UTF-8"), "auto" for auto-detect of Unicode encoding
Value
A Pitch object represents periodicity candidates as a function of time.
[ref: Praat help, https://www.fon.hum.uva.nl/praat/manual/Pitch.html]
p$xmin ... start time (seconds)
p$xmax ... end time (seconds)
p$nx ... number of frames
p$dx ... time step = frame duration (seconds)
p$x1 ... time associated with the first frame (seconds)
p$t ... vector of time instances associated with all frames
p$ceiling ... a frequency above which a candidate is considered voiceless (Hz)
p$maxnCandidates ... maximum number of candidates in frame
p$frame[[1]] to p$frame[[p$nx]] ... frames
p$frame[[1]]$intensity ... intensity of the frame
p$frame[[1]]$nCandidates ... actual number of candidates in this frame
p$frame[[1]]$frequency ... vector of candidates’ frequency (in Hz)
(for a voiced candidate), or 0 (for an unvoiced candidate)
p$frame[[1]]$strength ... vector of degrees of periodicity of candidates (between 0 and 1)
See Also
pitch.write, pitch.plot, pitch.cut, pitch.getPointIndexNearestTime, pt.read, tg.read,
it.read, col.read
Examples
## Not run:
p <- pitch.read('demo/sound.Pitch')
names(p)
p$nx
p$t[4] # time instance of the 4th frame
p$frame[[4]] # 4th frame: pitch candidates
p$frame[[4]]$frequency[2]
p$frame[[4]]$strength[2]
## End(Not run)
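A hedged sketch extracting one candidate per frame via pitch.toArray (assuming here that the first stored candidate is the one of interest; 0 marks an unvoiced candidate, as described above):
## Not run:
pitchArray <- pitch.toArray(pitch.sample())
f0 <- pitchArray$frequencyArray[1, ] # first candidate in each frame
f0[f0 == 0] <- NA # 0 = unvoiced candidate
plot(pitchArray$t, f0, xlab = "Time (sec)", ylab = "Frequency (Hz)")
## End(Not run)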
pitch.sample pitch.sample
Description
Returns sample Pitch object.
Usage
pitch.sample()
Value
Pitch
See Also
tg.sample, pt.sample, it.sample, formant.sample
Examples
pitch <- pitch.sample()
pitch.toArray pitch.toArray
Description
pitch.toArray
Usage
pitch.toArray(pitch)
Arguments
pitch Pitch object (frame format)
Value
Pitch object with frames converted to frequency and strength arrays and intensity vector
See Also
pitch.toFrame, pitch.read, pitch.plot
Examples
pitchArray <- pitch.toArray(pitch.sample())
pitchArray$t[1:10]
pitchArray$frequencyArray[, 1:10]
pitchArray$strengthArray[, 1:10]
pitchArray$intensityVector[1:10]
pitch.toFrame pitch.toFrame
Description
pitch.toFrame
Usage
pitch.toFrame(pitchArray)
Arguments
pitchArray Pitch object (array format)
Value
Pitch object with frames
See Also
pitch.toArray, pitch.read, pitch.plot
Examples
pitchArray <- pitch.toArray(pitch.sample())
pitch <- pitch.toFrame(pitchArray)
pitch.write pitch.write
Description
Saves Pitch to the file.
Usage
pitch.write(pitch, fileNamePitch, format = "short")
Arguments
pitch Pitch object
fileNamePitch Output file name
format Output file format ("short" (default, short text format) or "text" (a.k.a. full
text format))
See Also
pitch.read, pt.read
Examples
## Not run:
pitch <- pitch.sample()
pitch.write(pitch, "demo_output.Pitch")
## End(Not run)
pt.cut pt.cut
Description
Cut the specified interval from the PitchTier and preserve time
Usage
pt.cut(pt, tStart = -Inf, tEnd = Inf)
Arguments
pt PitchTier object
tStart beginning time of interval to be cut (default -Inf = cut from the tmin of the
PitchTier)
tEnd final time of interval to be cut (default Inf = cut to the tmax of the PitchTier)
Value
PitchTier object
See Also
pt.cut0, tg.cut, tg.cut0, pt.read, pt.plot, pt.Hz2ST, pt.interpolate, pt.legendre, pt.legendreSynth,
pt.legendreDemo
Examples
pt <- pt.sample()
pt2 <- pt.cut(pt, tStart = 3)
pt2_0 <- pt.cut0(pt, tStart = 3)
pt3 <- pt.cut(pt, tStart = 2, tEnd = 3)
pt3_0 <- pt.cut0(pt, tStart = 2, tEnd = 3)
pt4 <- pt.cut(pt, tEnd = 1)
pt4_0 <- pt.cut0(pt, tEnd = 1)
pt5 <- pt.cut(pt, tStart = -1, tEnd = 1)
pt5_0 <- pt.cut0(pt, tStart = -1, tEnd = 1)
## Not run:
pt.plot(pt)
pt.plot(pt2)
pt.plot(pt2_0)
pt.plot(pt3)
pt.plot(pt3_0)
pt.plot(pt4)
pt.plot(pt4_0)
pt.plot(pt5)
pt.plot(pt5_0)
## End(Not run)
pt.cut0 pt.cut0
Description
Cut the specified interval from the PitchTier and shift time so that the new tmin = 0
Usage
pt.cut0(pt, tStart = -Inf, tEnd = Inf)
Arguments
pt PitchTier object
tStart beginning time of interval to be cut (default -Inf = cut from the tmin of the
PitchTier)
tEnd final time of interval to be cut (default Inf = cut to the tmax of the PitchTier)
Value
PitchTier object
See Also
pt.cut, pt.read, pt.plot, pt.Hz2ST, pt.interpolate, pt.legendre, pt.legendreSynth, pt.legendreDemo
Examples
pt <- pt.sample()
pt2 <- pt.cut(pt, tStart = 3)
pt2_0 <- pt.cut0(pt, tStart = 3)
pt3 <- pt.cut(pt, tStart = 2, tEnd = 3)
pt3_0 <- pt.cut0(pt, tStart = 2, tEnd = 3)
pt4 <- pt.cut(pt, tEnd = 1)
pt4_0 <- pt.cut0(pt, tEnd = 1)
pt5 <- pt.cut(pt, tStart = -1, tEnd = 1)
pt5_0 <- pt.cut0(pt, tStart = -1, tEnd = 1)
## Not run:
pt.plot(pt)
pt.plot(pt2)
pt.plot(pt2_0)
pt.plot(pt3)
pt.plot(pt3_0)
pt.plot(pt4)
pt.plot(pt4_0)
pt.plot(pt5)
pt.plot(pt5_0)
## End(Not run)
pt.getPointIndexHigherThanTime
pt.getPointIndexHigherThanTime
Description
Returns index of point which is nearest the given time from right, i.e. time <= pointTime.
Usage
pt.getPointIndexHigherThanTime(pt, time)
Arguments
pt PitchTier object
time time which is going to be found in points
Value
integer
See Also
pt.getPointIndexNearestTime, pt.getPointIndexLowerThanTime
Examples
pt <- pt.sample()
pt.getPointIndexHigherThanTime(pt, 0.5)
pt.getPointIndexLowerThanTime
pt.getPointIndexLowerThanTime
Description
Returns index of point which is nearest the given time from left, i.e. pointTime <= time.
Usage
pt.getPointIndexLowerThanTime(pt, time)
Arguments
pt PitchTier object
time time which is going to be found in points
Value
integer
See Also
pt.getPointIndexNearestTime, pt.getPointIndexHigherThanTime
Examples
pt <- pt.sample()
pt.getPointIndexLowerThanTime(pt, 0.5)
pt.getPointIndexNearestTime
pt.getPointIndexNearestTime
Description
Returns index of point which is nearest the given time (from both sides).
Usage
pt.getPointIndexNearestTime(pt, time)
Arguments
pt PitchTier object
time time which is going to be found in points
Value
integer
See Also
pt.getPointIndexLowerThanTime, pt.getPointIndexHigherThanTime
Examples
pt <- pt.sample()
pt.getPointIndexNearestTime(pt, 0.5)
pt.Hz2ST pt.Hz2ST
Description
Converts Hz to Semitones with given reference (default 0 ST = 100 Hz).
Usage
pt.Hz2ST(pt, ref = 100)
Arguments
pt PitchTier object
ref reference value (in Hz) for 0 ST. Default: 100 Hz.
Value
PitchTier object
See Also
pt.read, pt.write, pt.plot, pt.interpolate, pt.cut, pt.cut0
Examples
pt <- pt.sample()
pt2 <- pt.Hz2ST(pt, ref = 200)
## Not run:
pt.plot(pt) %>% dygraphs::dyAxis("y", label = "Frequency (Hz)")
pt.plot(pt2) %>% dygraphs::dyAxis("y", label = "Frequency (ST re 200 Hz)")
## End(Not run)
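A minimal sketch of the arithmetic behind the conversion, assuming the standard 12 * log2 semitone relation with the chosen reference:
hz <- 200
ref <- 100
st <- 12 * log2(hz / ref)  # one octave above the reference
st  # 12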
pt.interpolate pt.interpolate
Description
Interpolates PitchTier contour in given time instances.
Usage
pt.interpolate(pt, t)
Arguments
pt PitchTier object
t vector of time instances of interest
Details
a) If t < min(pt$t) (or t > max(pt$t)), returns the first (or the last) value of pt$f.
b) If t is an existing point in pt$t, returns the respective pt$f.
c) If t is between two existing points, returns the linear interpolation of these two points.
Value
PitchTier object
See Also
pt.getPointIndexNearestTime, pt.read, pt.write, pt.plot, pt.Hz2ST, pt.cut, pt.cut0,
pt.legendre
Examples
pt <- pt.sample()
pt <- pt.Hz2ST(pt, ref = 100) # conversion of Hz to Semitones, reference 0 ST = 100 Hz.
pt2 <- pt.interpolate(pt, seq(pt$t[1], pt$t[length(pt$t)], by = 0.001))
## Not run:
pt.plot(pt)
pt.plot(pt2)
## End(Not run)
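A minimal sketch of the three cases a)-c) from Details (the concrete values depend on the sample PitchTier, so the comments only describe the expected behaviour):
pt <- pt.sample()
pt.interpolate(pt, pt$t[1] - 1)$f       # a) before the first point: the first pt$f value
pt.interpolate(pt, pt$t[2])$f           # b) exactly at an existing point: pt$f[2]
pt.interpolate(pt, mean(pt$t[1:2]))$f   # c) between two points: linear interpolation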
pt.legendre pt.legendre
Description
Interpolates the PitchTier in npoints equidistant points and approximates it by Legendre polynomials.
Usage
pt.legendre(pt, npoints = 1000, npolynomials = 4)
Arguments
pt PitchTier object
npoints Number of points of PitchTier interpolation
npolynomials Number of polynomials to be used for Legendre modelling
Value
Vector of Legendre polynomials coefficients
See Also
pt.legendreSynth, pt.legendreDemo, pt.cut, pt.cut0, pt.read, pt.plot, pt.Hz2ST, pt.interpolate
Examples
pt <- pt.sample()
pt <- pt.Hz2ST(pt)
pt <- pt.cut(pt, tStart = 3) # cut PitchTier from t = 3 sec and preserve time
c <- pt.legendre(pt)
print(c)
leg <- pt.legendreSynth(c)
ptLeg <- pt
ptLeg$t <- seq(ptLeg$tmin, ptLeg$tmax, length.out = length(leg))
ptLeg$f <- leg
## Not run:
plot(pt$t, pt$f, xlab = "Time (sec)", ylab = "F0 (ST re 100 Hz)")
lines(ptLeg$t, ptLeg$f, col = "blue")
## End(Not run)
pt.legendreDemo pt.legendreDemo
Description
Plots first four Legendre polynomials
Usage
pt.legendreDemo()
See Also
pt.legendre, pt.legendreSynth, pt.read, pt.plot, pt.Hz2ST, pt.interpolate
Examples
## Not run:
pt.legendreDemo()
## End(Not run)
pt.legendreSynth pt.legendreSynth
Description
Synthesizes the contour from the vector of Legendre polynomial coefficients c in npoints equidistant points.
Usage
pt.legendreSynth(c, npoints = 1000)
Arguments
c Vector of Legendre polynomials coefficients
npoints Number of points of PitchTier interpolation
Value
Vector of values of the synthesized contour
See Also
pt.legendre, pt.legendreDemo, pt.read, pt.plot, pt.Hz2ST, pt.interpolate
Examples
pt <- pt.sample()
pt <- pt.Hz2ST(pt)
pt <- pt.cut(pt, tStart = 3) # cut PitchTier from t = 3 sec and preserve time
c <- pt.legendre(pt)
print(c)
leg <- pt.legendreSynth(c)
ptLeg <- pt
ptLeg$t <- seq(ptLeg$tmin, ptLeg$tmax, length.out = length(leg))
ptLeg$f <- leg
## Not run:
plot(pt$t, pt$f, xlab = "Time (sec)", ylab = "F0 (ST re 100 Hz)")
lines(ptLeg$t, ptLeg$f, col = "blue")
## End(Not run)
pt.plot pt.plot
Description
Plots interactive PitchTier using dygraphs package.
Usage
pt.plot(pt, group = "")
Arguments
pt PitchTier object
group [optional] character string, name of group for dygraphs synchronization
See Also
pt.read, pt.Hz2ST, pt.cut, pt.cut0, pt.interpolate, pt.write, tg.plot, pitch.plot, formant.plot
Examples
## Not run:
pt <- pt.sample()
pt.plot(pt)
## End(Not run)
pt.read pt.read
Description
Reads PitchTier from Praat. Supported formats: text file, short text file, spreadsheet, headerless
spreadsheet (headerless not recommended, it does not contain tmin and tmax info).
Usage
pt.read(fileNamePitchTier, encoding = "UTF-8")
Arguments
fileNamePitchTier
file name of PitchTier
encoding File encoding (default: "UTF-8"), "auto" for auto-detect of Unicode encoding
Value
PitchTier object
See Also
pt.write, pt.plot, pt.Hz2ST, pt.cut, pt.cut0, pt.interpolate, pt.legendre, tg.read, pitch.read,
formant.read, it.read, col.read
Examples
## Not run:
pt <- pt.read("demo/H.PitchTier")
pt.plot(pt)
## End(Not run)
pt.sample pt.sample
Description
Returns sample PitchTier.
Usage
pt.sample()
Value
PitchTier
See Also
pt.plot
Examples
pt <- pt.sample()
pt.plot(pt)
pt.write pt.write
Description
Saves PitchTier to a file (in UTF-8 encoding). pt is a list with at least $t and $f vectors (of the same
length). If there are no $tmin and $tmax values, they are set as the min and max of the $t vector.
Usage
pt.write(pt, fileNamePitchTier, format = "spreadsheet")
Arguments
pt PitchTier object
fileNamePitchTier
file name to be created
format Output file format ("short" (short text format), "text" (a.k.a. full text for-
mat), "spreadsheet" (default), "headerless" (not recommended, it does not
contain tmin and tmax info))
See Also
pt.read, tg.write, pt.Hz2ST, pt.interpolate
Examples
## Not run:
pt <- pt.sample()
pt <- pt.Hz2ST(pt) # conversion of Hz to Semitones, reference 0 ST = 100 Hz.
pt.plot(pt)
pt.write(pt, "demo/H_st.PitchTier")
## End(Not run)
round2 round2
Description
Rounds a number to the specified order. Rounds half away from zero (this is the difference from the
built-in round function).
Usage
round2(x, order = 0)
Arguments
x number to be rounded
order 0 (default) = units, -1 = 0.1, +1 = 10
Value
rounded number to the specified order
See Also
round, trunc, ceiling, floor
Examples
round2(23.5) # = 24, compare: round(23.5) = 24
round2(23.4) # = 23
round2(24.5) # = 25, compare: round(24.5) = 24
round2(-23.5) # = -24, compare: round(-23.5) = -24
round2(-23.4) # = -23
round2(-24.5) # = -25, compare: round(-24.5) = -24
round2(123.456, -1) # 123.5
round2(123.456, -2) # 123.46
round2(123.456, 1) # 120
round2(123.456, 2) # 100
round2(123.456, 3) # 0
round2(-123.456, -1) # -123.5
round2(-123.456, -2) # -123.46
round2(-123.456, 1) # -120
round2(-123.456, 2) # -100
round2(-123.456, 3) # 0
seqM seqM
Description
Matlab-like behaviour of colon operator or linspace for creating sequences, for-loop friendly.
Usage
seqM(from = NA, to = NA, by = NA, length.out = NA)
Arguments
from starting value of the sequence (the first number)
to end value of the sequence (the last number or the boundary number)
by increment of the sequence (if specified, do not use the length.out parameter).
If both by and length.out are not specified, then by = +1.
length.out desired length of the sequence (if specified, do not use the by parameter)
Details
Like seq() but with Matlab-like behavior ([: operator] with by, or [linspace] with length.out).
In a for-loop it is convenient that 3:1 with the default step +1 yields an empty vector (instead of counting
down) and that seq(3, 1, by = 1) yields an empty vector instead of an error. seqM provides exactly this behavior.
Value
returns a vector of type "integer" or "double"
Comparison
R: seqM (call -> result) | Matlab (result) | R: seq (result)
seqM(1, 3) -> [1] 1 2 3 | 1:3 -> the same | the same
seqM(1, 3, by=.8) -> [1] 1.0 1.8 2.6 | 1:.8:3 -> the same | the same
seqM(1, 3, by=5) -> [1] 1 | 1:5:3 -> the same | the same
seqM(3, 1) -> integer(0) | 3:1 -> the same | [1] 3 2 1
seqM(3, 1, by=+1) -> integer(0) | 3:1:1 -> the same | Error: wrong 'by'
seqM(3, 1, by=-1) -> [1] 3 2 1 | 3:-1:1 -> the same | the same
seqM(3, 1, by=-3) -> [1] 3 | 3:-3:1 -> the same | the same
seqM(1, 3, len=5) -> [1] 1.0 1.5 2.0 2.5 3.0 | linspace(1,3,5) -> the same | the same
seqM(1, 3, len=3) -> [1] 1 2 3 | linspace(1,3,3) -> the same | the same
seqM(1, 3, len=2) -> [1] 1 3 | linspace(1,3,2) -> the same | the same
seqM(1, 3, len=1) -> [1] 3 | linspace(1,3,1) -> the same | [1] 1
seqM(1, 3, len=0) -> integer(0) + warning | linspace(1,3,0) -> the same without warning | the same without warning
seqM(3, 1, len=3) -> [1] 3 2 1 | linspace(3,1,3) -> the same | the same
See Also
round2, isNum, isInt, ifft.
Examples
seqM(1, 3)
seqM(1, 3, by=.8)
seqM(1, 3, by=5)
seqM(3, 1)
seqM(3, 1, by=+1)
seqM(3, 1, by=-1)
seqM(3, 1, by=-3)
seqM(1, 3, len=5)
seqM(1, 3, len=3)
seqM(1, 3, len=2)
seqM(1, 3, len=1)
seqM(1, 3, len=0)
seqM(3, 1, len=3)
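A minimal sketch of the for-loop behaviour described in Details (the loop body is simply skipped when the sequence is empty):
n <- 0
for (i in seqM(1, n)) {  # seqM(1, 0) is integer(0), so nothing is printed
  print(i)
}
for (i in seqM(1, 3)) {  # prints 1, 2, 3
  print(i)
}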
snd.cut snd.cut
Description
Cut the specified interval from the Sound object and preserve time
Usage
snd.cut(snd, Start = -Inf, End = Inf, units = "seconds")
Arguments
snd Sound object (list with $sig and $fs members at least)
Start beginning sample/time of interval to be cut (default -Inf = cut from the begin-
ning of the Sound)
End final sample/time of interval to be cut (default Inf = cut to the end of the Sound)
units Units of Start and End arguments: "samples" (starting from 1, i.e., 1 == index
of the 1st sample) or "seconds" (starting from 0)
Value
Sound object
See Also
snd.cut0, tg.cut, tg.cut0, snd.read, snd.plot
Examples
snd <- snd.sample()
snd2 <- snd.cut(snd, Start = 0.3)
snd2_0 <- snd.cut0(snd, Start = 0.3)
snd3 <- snd.cut(snd, Start = 0.2, End = 0.3)
snd3_0 <- snd.cut0(snd, Start = 0.2, End = 0.3)
snd4 <- snd.cut(snd, End = 0.1)
snd4_0 <- snd.cut0(snd, End = 0.1)
snd5 <- snd.cut(snd, Start = -0.1, End = 0.1)
snd5_0 <- snd.cut0(snd, Start = -0.1, End = 0.1)
snd6 <- snd.cut(snd, End = 1000, units = "samples")
snd6_0 <- snd.cut0(snd, End = 1000, units = "samples")
## Not run:
snd.plot(snd)
snd.plot(snd2)
snd.plot(snd2_0)
snd.plot(snd3)
snd.plot(snd3_0)
snd.plot(snd4)
snd.plot(snd4_0)
snd.plot(snd5)
snd.plot(snd5_0)
snd.plot(snd6)
snd.plot(snd6_0)
## End(Not run)
snd.cut0 snd.cut0
Description
Cut the specified interval from the Sound object and shift time so that the new snd$t[1] = 0
Usage
snd.cut0(snd, Start = -Inf, End = Inf, units = "seconds")
Arguments
snd Sound object (list with $sig and $fs members at least)
Start beginning sample/time of interval to be cut (default -Inf = cut from the begin-
ning of the Sound)
End final sample/time of interval to be cut (default Inf = cut to the end of the Sound)
units Units of Start and End arguments: "samples" (starting from 1, i.e., 1 == index
of the 1st sample) or "seconds" (starting from 0)
Value
Sound object
See Also
snd.cut, tg.cut, tg.cut0, snd.read, snd.plot
Examples
snd <- snd.sample()
snd2 <- snd.cut(snd, Start = 0.3)
snd2_0 <- snd.cut0(snd, Start = 0.3)
snd3 <- snd.cut(snd, Start = 0.2, End = 0.3)
snd3_0 <- snd.cut0(snd, Start = 0.2, End = 0.3)
snd4 <- snd.cut(snd, End = 0.1)
snd4_0 <- snd.cut0(snd, End = 0.1)
snd5 <- snd.cut(snd, Start = -0.1, End = 0.1)
snd5_0 <- snd.cut0(snd, Start = -0.1, End = 0.1)
snd6 <- snd.cut(snd, End = 1000, units = "samples")
snd6_0 <- snd.cut0(snd, End = 1000, units = "samples")
## Not run:
snd.plot(snd)
snd.plot(snd2)
snd.plot(snd2_0)
snd.plot(snd3)
snd.plot(snd3_0)
snd.plot(snd4)
snd.plot(snd4_0)
snd.plot(snd5)
snd.plot(snd5_0)
snd.plot(snd6)
snd.plot(snd6_0)
## End(Not run)
snd.getPointIndexHigherThanTime
snd.getPointIndexHigherThanTime
Description
Returns index of sample which is nearest the given time from right, i.e. time <= sampleTime.
Usage
snd.getPointIndexHigherThanTime(snd, time)
Arguments
snd Sound object
time time which is going to be found in samples
Value
integer
See Also
snd.getPointIndexNearestTime, snd.getPointIndexLowerThanTime
Examples
snd <- snd.sample()
snd.getPointIndexHigherThanTime(snd, 0.5)
snd.getPointIndexLowerThanTime
snd.getPointIndexLowerThanTime
Description
Returns index of sample which is nearest the given time from left, i.e. sampleTime <= time.
Usage
snd.getPointIndexLowerThanTime(snd, time)
Arguments
snd Sound object
time time which is going to be found in samples
Value
integer
See Also
snd.getPointIndexNearestTime, snd.getPointIndexHigherThanTime
Examples
snd <- snd.sample()
snd.getPointIndexLowerThanTime(snd, 0.5)
snd.getPointIndexNearestTime
snd.getPointIndexNearestTime
Description
Returns index of sample which is nearest the given time (from both sides).
Usage
snd.getPointIndexNearestTime(snd, time)
Arguments
snd Sound object
time time which is going to be found in samples
Value
integer
See Also
snd.getPointIndexLowerThanTime, snd.getPointIndexHigherThanTime
Examples
snd <- snd.sample()
snd.getPointIndexNearestTime(snd, 0.5)
snd.plot snd.plot
Description
Plots interactive Sound object using dygraphs package. If the sound is 2-channel (stereo), the 1st
channel is plotted around mean value +1, the 2nd around mean value -1.
Usage
snd.plot(snd, group = "", stemPlot = FALSE)
Arguments
snd Sound object (with $sig and $fs members at least)
group [optional] character string, name of group for dygraphs synchronization
stemPlot [optional] discrete (stem) style of plot
See Also
snd.read
Examples
## Not run:
snd <- snd.sample()
snd.plot(snd)
snd.plot(list(sig = sin(seq(0, 2*pi, length.out = 4000)), fs = 8000))
## End(Not run)
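A minimal sketch of the stereo case described above, using a synthetic two-channel signal (the channel contents are arbitrary; the 1st channel is drawn around +1, the 2nd around -1):
## Not run:
left <- 0.3 * sin(seq(0, 2*pi*440, length.out = 4000))
right <- 0.5 * sin(seq(0, 2*pi*220, length.out = 4000))
snd.plot(list(sig = matrix(c(left, right), ncol = 2), fs = 8000))
## End(Not run)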
snd.read snd.read
Description
Loads sound file (.wav or .mp3) using tuneR package.
Usage
snd.read(
fileNameSound,
fileType = "auto",
from = 1,
to = Inf,
units = "samples"
)
Arguments
fileNameSound Sound file name (.wav or .mp3)
fileType "wav", "mp3" or "auto"
from Where to start reading in units (beginning "samples": 1, "seconds": 0)
to Where to stop reading in units (Inf = end of the file)
units Units of from and to arguments: "samples" (starting from 1) or "seconds"
(starting from 0)
Value
Sound object with normalized amplitude (PCM / (2^(nbits-1) - 1)) resulting in the range of [-1; +1].
In fact, the minimum value can be one quantization step lower (e.g., PCM 16 bit: -32768).
$t ... vector of discrete time instances (seconds)
$sig ... signal matrix (nrow(snd$sig) = number of samples, ncol(snd$sig) = number of channels, i.e., $sig[, 1] ... 1st channel)
$fs ... sample rate (Hz)
$nChannels ... number of signal channels (ncol(snd$sig)), 1 == mono, 2 == stereo
$nBits ... number of bits per one sample
$nSamples ... number of samples (nrow(snd$sig))
$duration ... duration of signal (seconds), snd$duration == snd$nSamples/snd$fs
See Also
snd.write, snd.plot, snd.cut, snd.getPointIndexNearestTime
Examples
## Not run:
snd <- snd.read("demo/H.wav")
snd.plot(snd)
## End(Not run)
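A minimal sketch of inspecting the returned structure, using the bundled sample sound instead of a file on disk (assuming the sample object carries the same members as described above for snd.read):
snd <- snd.sample()
snd$fs                  # sample rate (Hz)
snd$nSamples / snd$fs   # equals snd$duration
range(snd$sig[, 1])     # normalized amplitude, roughly within [-1; +1]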
snd.sample snd.sample
Description
Returns sample Sound object.
Usage
snd.sample()
Value
snd
See Also
snd.plot
Examples
snd <- snd.sample()
snd.plot(snd)
snd.write snd.write
Description
Saves Sound object to a file. snd is a list with at least $sig and $fs members. If $nBits is
not present, a default value of 16 bits is used. If the sound signal is 2-channel
(stereo), $sig must be a two-column matrix (1st column corresponds to the left channel, 2nd column
to the right channel). If the sound is 1-channel (mono), $sig can be either a numeric vector or a
one-column matrix. The optional $t, $nChannels, $nSamples and $duration members are ignored.
Usage
snd.write(snd, fileNameSound)
Arguments
snd Sound object (with $sig, $nBits and $fs members)
fileNameSound file name to be created
See Also
snd.read
Examples
## Not run:
snd <- snd.sample()
snd.plot(snd)
snd.write(snd, "temp1.wav")
signal <- 0.8*sin(seq(0, 2*pi*440, length.out = 8000))
snd.write(list(sig = signal, fs = 8000, nBits = 16), "temp2.wav")
left <- 0.3*sin(seq(0, 2*pi*440, length.out = 4000))
right <- 0.5*sin(seq(0, 2*pi*220, length.out = 4000))
snd.write(list(sig = matrix(c(left, right), ncol = 2), fs = 8000, nBits = 16), "temp3.wav")
## End(Not run)
strTrim strTrim
Description
Trim leading and trailing whitespace in character string.
Usage
strTrim(string)
Arguments
string character string
Details
Like str_trim() in the stringr package or trimws() in R >= 3.2.0, but way faster.
Source: <NAME> comment at https://stackoverflow.com/questions/2261079/how-to-trim-leading-and-trailing-whitespace-in-r
Value
returns a character string with removed leading and trailing whitespace characters.
See Also
isString for testing whether it is 1 character vector, str_contains for finding string in string
without regexp, str_find for all indices without regexp, str_find1 for the first index withoud
regexp.
Examples
strTrim(" Hello World! ")
str_contains str_contains
Description
Finds a string in another string (without regular expressions); returns TRUE / FALSE.
Usage
str_contains(string, patternNoRegex)
Arguments
string string in which we try to find something
patternNoRegex string we want to find, "as it is" - no regular expressions
Value
TRUE / FALSE
See Also
str_find, str_find1, isString
Examples
str_contains("Hello world", "wor") # TRUE
str_contains("Hello world", "WOR") # FALSE
str_contains(tolower("Hello world"), tolower("wor")) # TRUE
str_contains("Hello world", "") # TRUE
str_find str_find
Description
Finds a string in another string (without regular expressions); returns indices of all occurrences.
Usage
str_find(string, patternNoRegex)
Arguments
string string in which we try to find something
patternNoRegex string we want to find, "as it is" - no regular expressions
Value
indices of all occurrences (1 = 1st character)
See Also
str_find1, str_contains, isString
Examples
str_find("Hello, hello, hello world", "ell") # 2 9 16
str_find("Hello, hello, hello world", "q") # integer(0)
str_find1 str_find1
Description
Finds a string in another string (without regular expressions); returns the index of the first occurrence
only.
Usage
str_find1(string, patternNoRegex)
Arguments
string string in which we try to find something
patternNoRegex string we want to find, "as it is" - no regular expressions
Value
index of the first occurrence only (1 = 1st character)
See Also
str_find, str_contains, isString
Examples
str_find1("Hello, hello, hello world", "ell") # 2
str_find1("Hello, hello, hello world", "q") # integer(0)
tg.boundaryMagnet tg.boundaryMagnet
Description
Aligns boundaries of intervals in the target tier (typically: "word") to the closest boundaries in the
pattern tier (typically: "phone"). If there is no boundary within the tolerance limit in the pattern tier,
the boundary position in the target tier is kept at its original position.
Usage
tg.boundaryMagnet(
tg,
targetTier,
patternTier,
boundaryTolerance = Inf,
verbose = TRUE
)
Arguments
tg TextGrid object
targetTier index or "name" of the tier to be aligned
patternTier index or "name" of the pattern tier
boundaryTolerance
if there is not any boundary in the pattern tier within this tolerance, the target
boundary is kept at its position [default: Inf]
verbose if TRUE, every boundary shift is printed [default: TRUE]
Value
TextGrid object
See Also
tg.insertBoundary, tg.insertInterval, tg.duplicateTier
Examples
## Not run:
tg <- tg.sample()
tg <- tg.removeTier(tg, "phoneme")
tg <- tg.removeTier(tg, "syllable")
tg <- tg.removeTier(tg, "phrase")
# garble times in "word" tier a little
n <- length(tg$word$label)
deltaT <- runif(n - 1, min = -0.01, max = 0.015)
tg$word$t2[1: (n-1)] <- tg$word$t2[1: (n-1)] + deltaT
tg$word$t1[2: n] <- tg$word$t2[1: (n-1)]
tg.plot(tg)
# align "word" tier according to "phone tier"
tg2 <- tg.boundaryMagnet(tg, targetTier = "word", patternTier = "phone")
tg.plot(tg2)
## End(Not run)
tg.checkTierInd tg.checkTierInd
Description
Returns tier index. Input can be either index (number) or tier name (character string). It performs
checks whether the tier exists.
Usage
tg.checkTierInd(tg, tierInd)
Arguments
tg TextGrid object
tierInd Tier index or "name"
Value
Tier index
See Also
tg.getTierName, tg.isIntervalTier, tg.isPointTier, tg.plot, tg.getNumberOfTiers
Examples
tg <- tg.sample()
tg.checkTierInd(tg, 4)
tg.checkTierInd(tg, "word")
tg.countLabels tg.countLabels
Description
Returns number of labels with the specified label.
Usage
tg.countLabels(tg, tierInd, label)
Arguments
tg TextGrid object
tierInd tier index or "name"
label character string: label to be counted
Value
integer number
See Also
tg.findLabels, tg.getLabel
Examples
tg <- tg.sample()
tg.countLabels(tg, "phone", "a")
tg.createNewTextGrid tg.createNewTextGrid
Description
Creates new and empty TextGrid. tMin and tMax specify the total start and end time for the
TextGrid. If a new interval tier is added later without specified start and end, they are set to TextGrid
start and end.
Usage
tg.createNewTextGrid(tMin, tMax)
Arguments
tMin Start time of TextGrid
tMax End time of TextGrid
Details
On its own, this empty TextGrid cannot be used for much; at least one tier should be inserted using
tg.insertNewIntervalTier() or tg.insertNewPointTier().
Value
TextGrid object
See Also
tg.insertNewIntervalTier, tg.insertNewPointTier
Examples
tg <- tg.createNewTextGrid(0, 5)
tg <- tg.insertNewIntervalTier(tg, 1, "word")
tg <- tg.insertInterval(tg, "word", 1, 2, "hello")
tg.plot(tg)
tg.cut tg.cut
Description
Cut the specified time frame from the TextGrid and preserve time
Usage
tg.cut(tg, tStart = -Inf, tEnd = Inf)
Arguments
tg TextGrid object
tStart beginning time of time frame to be cut (default -Inf = cut from the tmin of the
TextGrid)
tEnd final time of time frame to be cut (default Inf = cut to the tmax of the TextGrid)
Value
TextGrid object
See Also
tg.cut0, pt.cut, pt.cut0, tg.read, tg.plot, tg.write, tg.insertInterval
Examples
tg <- tg.sample()
tg2 <- tg.cut(tg, tStart = 3)
tg2_0 <- tg.cut0(tg, tStart = 3)
tg3 <- tg.cut(tg, tStart = 2, tEnd = 3)
tg3_0 <- tg.cut0(tg, tStart = 2, tEnd = 3)
tg4 <- tg.cut(tg, tEnd = 1)
tg4_0 <- tg.cut0(tg, tEnd = 1)
tg5 <- tg.cut(tg, tStart = -1, tEnd = 5)
tg5_0 <- tg.cut0(tg, tStart = -1, tEnd = 5)
## Not run:
tg.plot(tg)
tg.plot(tg2)
tg.plot(tg2_0)
tg.plot(tg3)
tg.plot(tg3_0)
tg.plot(tg4)
tg.plot(tg4_0)
tg.plot(tg5)
tg.plot(tg5_0)
## End(Not run)
tg.cut0 tg.cut0
Description
Cut the specified time frame from the TextGrid and shift time so that the new tmin = 0
Usage
tg.cut0(tg, tStart = -Inf, tEnd = Inf)
Arguments
tg TextGrid object
tStart beginning time of time frame to be cut (default -Inf = cut from the tmin of the
TextGrid)
tEnd final time of time frame to be cut (default Inf = cut to the tmax of the TextGrid)
Value
TextGrid object
See Also
tg.cut, pt.cut, pt.cut0, tg.read, tg.plot, tg.write, tg.insertInterval
Examples
tg <- tg.sample()
tg2 <- tg.cut(tg, tStart = 3)
tg2_0 <- tg.cut0(tg, tStart = 3)
tg3 <- tg.cut(tg, tStart = 2, tEnd = 3)
tg3_0 <- tg.cut0(tg, tStart = 2, tEnd = 3)
tg4 <- tg.cut(tg, tEnd = 1)
tg4_0 <- tg.cut0(tg, tEnd = 1)
tg5 <- tg.cut(tg, tStart = -1, tEnd = 5)
tg5_0 <- tg.cut0(tg, tStart = -1, tEnd = 5)
## Not run:
tg.plot(tg)
tg.plot(tg2)
tg.plot(tg2_0)
tg.plot(tg3)
tg.plot(tg3_0)
tg.plot(tg4)
tg.plot(tg4_0)
tg.plot(tg5)
tg.plot(tg5_0)
## End(Not run)
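A quick check of the difference between tg.cut and tg.cut0 via the resulting start times; a minimal sketch (exact values follow from the sample TextGrid):
tg <- tg.sample()
a <- tg.cut(tg, tStart = 2, tEnd = 3)
b <- tg.cut0(tg, tStart = 2, tEnd = 3)
tg.getStartTime(a)  # 2 (time preserved)
tg.getStartTime(b)  # 0 (time shifted so that the new tmin = 0)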
tg.duplicateTier tg.duplicateTier
Description
Duplicates tier originalInd to a new tier with the specified index newInd (existing tiers are shifted). It is
highly recommended to set a name for the new tier (this can also be done later by tg.setTierName()).
Otherwise, both the original and the new tier have the same name, which is permitted but not recom-
mended, because then we cannot use the convenience of referring to the tier by name instead of by index
in other functions.
Usage
tg.duplicateTier(tg, originalInd, newInd = Inf, newTierName = "")
Arguments
tg TextGrid object
originalInd tier index or "name"
newInd new tier index (1 = the first, Inf = the last [default])
newTierName [optional but recommended] name of the new tier
Value
TextGrid object
See Also
tg.duplicateTierMergeSegments, tg.setTierName, tg.removeTier, tg.boundaryMagnet
Examples
tg <- tg.sample()
tg2 <- tg.duplicateTier(tg, "word", 1, "NEW")
tg.plot(tg2)
tg.duplicateTierMergeSegments
tg.duplicateTierMergeSegments
Description
Duplicate tier originalInd and merge segments (according to the pattern) to the new tier with
specified index newInd (existing tiers are shifted). Typical use: create new syllable tier from phone
tier. It merges phones into syllables according to separators in pattern.
Usage
tg.duplicateTierMergeSegments(
tg,
originalInd,
newInd = Inf,
newTierName,
pattern,
sep = "-"
)
Arguments
tg TextGrid object
originalInd tier index or "name"
newInd new tier index (1 = the first, Inf = the last [default])
newTierName name of the new tier
pattern merge segments pattern for the new tier (e.g., "he-llo-world")
sep separator in pattern (default: "-")
Details
Note 1: there can be segments with empty labels in the original tier (pauses); do not specify them in
the pattern.
Note 2: if there is a segment with an empty label in the original tier at the place of a separator in the
pattern, the empty segment is duplicated into the new tier, i.e. at the position of the separator there
may or may not be an empty segment; if there is one, it is duplicated. Such empty segments are not specified
in the pattern.
Note 3: if a segment with an empty label is not at a position corresponding to a separator, it leads to
an error - the part specified in the pattern between separators cannot be split by empty segments.
Note 4: beware of labels that appear empty but are not (space, new line character etc.) - these
segments are handled as classical non-empty labels. See the example - one label is " ", therefore it
must be specified in the pattern.
Value
TextGrid object
See Also
tg.duplicateTier, tg.setTierName, tg.removeTier
Examples
tg <- tg.sample()
tg <- tg.removeTier(tg, "syllable")
collapsed <- paste0(tg$phone$label, collapse = "") # get actual labels
print(collapsed) # all labels in collapsed form - copy the string, include separators -> pattern
pattern <- "ja:-ci-P\\ek-nu-t_so-?u-J\\e-la:S- -nej-dP\\i:f-naj-deZ-h\\ut_S-ku-?a-?a-ta-ma-na:"
tg2 <- tg.duplicateTierMergeSegments(tg, "phone", 1, "syll", pattern, sep = "-")
## Not run:
tg.plot(tg)
tg.plot(tg2)
## End(Not run)
tg.findLabels tg.findLabels
Description
Finds a label or a consecutive sequence of labels and returns their indices.
Usage
tg.findLabels(tg, tierInd, labelVector, returnTime = FALSE)
Arguments
tg TextGrid object
tierInd tier index or "name"
labelVector character string (one label) or vector of character strings (consecutive sequence
of labels) to be found
returnTime If TRUE, return vectors of begin (t1) and end time (t2) for each found group of
sequence of labels instead of indices (when FALSE = default).
Value
If returnTime == FALSE, returns a list of all occurrences; each member of the list is one occurrence
and contains a vector of label indices. If returnTime == TRUE, returns a list with vectors t1 (begin)
and t2 (end) for each found group of the sequence of labels.
See Also
tg.countLabels, tg.getLabel, tg.duplicateTierMergeSegments
Examples
tg <- tg.sample()
i <- tg.findLabels(tg, "phoneme", "n")
i
length(i)
i[[1]]
i[[2]]
tg$phoneme$label[unlist(i)]
i <- tg.findLabels(tg, "phone", c("?", "a"))
i
length(i)
tg$phone$label[i[[1]]]
tg$phone$label[i[[2]]]
tg$phone$label[unlist(i)]
t <- tg.findLabels(tg, "phone", c("?", "a"), returnTime = TRUE)
t
t$t2[1] - t$t1[1] # duration of the first result
t$t2[2] - t$t1[2] # duration of the second result
i <- tg.findLabels(tg.sample(), "word", c("ti", "reknu", "co"))
i
length(i)
length(i[[1]])
i[[1]]
i[[1]][3]
tg$word$label[i[[1]]]
t <- tg.findLabels(tg.sample(), "word", c("ti", "reknu", "co"), returnTime = TRUE)
pt <- pt.sample()
tStart <- t$t1[1]
tEnd <- t$t2[1]
## Not run:
pt.plot(pt.cut(pt, tStart, tEnd))
## End(Not run)
tg.getEndTime tg.getEndTime
Description
Returns end time. If a tier index is specified, it returns the end time of the tier; if it is not specified, it
returns the end time of the whole TextGrid.
Usage
tg.getEndTime(tg, tierInd = 0)
Arguments
tg TextGrid object
tierInd [optional] tier index or "name"
Value
numeric
See Also
tg.getStartTime, tg.getTotalDuration
Examples
tg <- tg.sample()
tg.getEndTime(tg)
tg.getEndTime(tg, "phone")
tg.getIntervalDuration
tg.getIntervalDuration
Description
Return duration (i.e., end - start time) of interval in interval tier.
Usage
tg.getIntervalDuration(tg, tierInd, index)
Arguments
tg TextGrid object
tierInd tier index or "name"
index index of interval
Value
numeric
See Also
tg.getIntervalStartTime, tg.getIntervalEndTime, tg.getIntervalIndexAtTime, tg.findLabels
Examples
tg <- tg.sample()
tg.getIntervalDuration(tg, "phone", 5)
tg.getIntervalEndTime tg.getIntervalEndTime
Description
Return end time of interval in interval tier.
Usage
tg.getIntervalEndTime(tg, tierInd, index)
Arguments
tg TextGrid object
tierInd tier index or "name"
index index of interval
Value
numeric
See Also
tg.getIntervalStartTime, tg.getIntervalDuration, tg.getIntervalIndexAtTime, tg.findLabels
Examples
tg <- tg.sample()
tg.getIntervalEndTime(tg, "phone", 5)
tg.getIntervalIndexAtTime
tg.getIntervalIndexAtTime
Description
Returns index of interval which includes the given time, i.e. tStart <= time < tEnd. Tier index must
belong to interval tier.
Usage
tg.getIntervalIndexAtTime(tg, tierInd, time)
Arguments
tg TextGrid object
tierInd tier index or "name"
time time which is going to be found in intervals
Value
integer
See Also
tg.getIntervalStartTime, tg.getIntervalEndTime, tg.getLabel, tg.findLabels
Examples
tg <- tg.sample()
tg.getIntervalIndexAtTime(tg, "word", 0.5)
tg.getIntervalStartTime
tg.getIntervalStartTime
Description
Returns start time of interval in interval tier.
Usage
tg.getIntervalStartTime(tg, tierInd, index)
Arguments
tg TextGrid object
tierInd tier index or "name"
index index of interval
Value
numeric
See Also
tg.getIntervalEndTime, tg.getIntervalDuration, tg.getIntervalIndexAtTime, tg.findLabels
Examples
tg <- tg.sample()
tg.getIntervalStartTime(tg, "phone", 5)
tg.getLabel tg.getLabel
Description
Return label of point or interval at the specified index.
Usage
tg.getLabel(tg, tierInd, index)
Arguments
tg TextGrid object
tierInd tier index or "name"
index index of point or interval
Value
character string
See Also
tg.setLabel, tg.countLabels, tg.findLabels
Examples
tg <- tg.sample()
tg.getLabel(tg, "phoneme", 4)
tg.getLabel(tg, "phone", 4)
tg.getNumberOfIntervals
tg.getNumberOfIntervals
Description
Returns number of intervals in the given interval tier.
Usage
tg.getNumberOfIntervals(tg, tierInd)
Arguments
tg TextGrid object
tierInd tier index or "name"
Value
integer
See Also
tg.getNumberOfPoints
Examples
tg <- tg.sample()
tg.getNumberOfIntervals(tg, "phone")
tg.getNumberOfPoints tg.getNumberOfPoints
Description
Returns number of points in the given point tier.
Usage
tg.getNumberOfPoints(tg, tierInd)
Arguments
tg TextGrid object
tierInd tier index or "name"
Value
integer
See Also
tg.getNumberOfIntervals
Examples
tg <- tg.sample()
tg.getNumberOfPoints(tg, "phoneme")
tg.getNumberOfTiers tg.getNumberOfTiers
Description
Returns number of tiers.
Usage
tg.getNumberOfTiers(tg)
Arguments
tg TextGrid object
Value
integer
See Also
tg.getTierName, tg.isIntervalTier, tg.isPointTier
Examples
tg <- tg.sample()
tg.getNumberOfTiers(tg)
tg.getPointIndexHigherThanTime
tg.getPointIndexHigherThanTime
Description
Returns index of point which is nearest the given time from right, i.e. time <= pointTime. Tier
index must belong to point tier.
Usage
tg.getPointIndexHigherThanTime(tg, tierInd, time)
Arguments
tg TextGrid object
tierInd tier index or "name"
time time which is going to be found in points
Value
integer
See Also
tg.getPointIndexNearestTime, tg.getPointIndexLowerThanTime, tg.getLabel, tg.findLabels
Examples
tg <- tg.sample()
tg.getPointIndexHigherThanTime(tg, "phoneme", 0.5)
tg.getPointIndexLowerThanTime
tg.getPointIndexLowerThanTime
Description
Returns index of point which is nearest the given time from left, i.e. pointTime <= time. Tier index
must belong to point tier.
Usage
tg.getPointIndexLowerThanTime(tg, tierInd, time)
Arguments
tg TextGrid object
tierInd tier index or "name"
time time which is going to be found in points
Value
integer
See Also
tg.getPointIndexNearestTime, tg.getPointIndexHigherThanTime, tg.getLabel, tg.findLabels
Examples
tg <- tg.sample()
tg.getPointIndexLowerThanTime(tg, "phoneme", 0.5)
tg.getPointIndexNearestTime
tg.getPointIndexNearestTime
Description
Returns index of point which is nearest the given time (from both sides). Tier index must belong to
point tier.
Usage
tg.getPointIndexNearestTime(tg, tierInd, time)
Arguments
tg TextGrid object
tierInd tier index or "name"
time time which is going to be found in points
Value
integer
See Also
tg.getPointIndexLowerThanTime, tg.getPointIndexHigherThanTime, tg.getLabel, tg.findLabels
Examples
tg <- tg.sample()
tg.getPointIndexNearestTime(tg, "phoneme", 0.5)
tg.getPointTime tg.getPointTime
Description
Return time of point at the specified index in point tier.
Usage
tg.getPointTime(tg, tierInd, index)
Arguments
tg TextGrid object
tierInd tier index or "name"
index index of point
Value
numeric
See Also
tg.getLabel, tg.getPointIndexNearestTime, tg.getPointIndexLowerThanTime,
tg.getPointIndexHigherThanTime, tg.findLabels
Examples
tg <- tg.sample()
tg.getPointTime(tg, "phoneme", 4)
tg.getStartTime tg.getStartTime
Description
Returns start time. If a tier index is specified, it returns the start time of the tier; if it is not specified, it
returns the start time of the whole TextGrid.
Usage
tg.getStartTime(tg, tierInd = 0)
Arguments
tg TextGrid object
tierInd [optional] tier index or "name"
Value
numeric
See Also
tg.getEndTime, tg.getTotalDuration
Examples
tg <- tg.sample()
tg.getStartTime(tg)
tg.getStartTime(tg, "phone")
tg.getTierName tg.getTierName
Description
Returns name of the tier.
Usage
tg.getTierName(tg, tierInd)
Arguments
tg TextGrid object
tierInd tier index or "name"
Value
character string
See Also
tg.setTierName, tg.isIntervalTier, tg.isPointTier
Examples
tg <- tg.sample()
tg.getTierName(tg, 2)
tg.getTotalDuration tg.getTotalDuration
Description
Returns total duration. If a tier index is specified, it returns the duration of the tier; if it is not specified, it
returns the total duration of the TextGrid.
Usage
tg.getTotalDuration(tg, tierInd = 0)
Arguments
tg TextGrid object
tierInd [optional] tier index or "name"
Value
numeric
See Also
tg.getStartTime, tg.getEndTime
Examples
tg <- tg.sample()
tg.getTotalDuration(tg)
tg.getTotalDuration(tg, "phone")
tg.insertBoundary tg.insertBoundary
Description
Inserts new boundary into interval tier. This creates a new interval, to which we can set the label
(optional argument).
Usage
tg.insertBoundary(tg, tierInd, time, label = "")
Arguments
tg TextGrid object
tierInd tier index or "name"
time time of the new boundary
label [optional] label of the new interval
Details
There are more possible situations which influence where the new label will be set.
a) New boundary inside an existing interval (the most common situation): the interval is split into
two parts. The left part preserves the label of the original interval, the right part is set to the new (optional)
label (see the sketch after the examples below).
b) To the left of the existing intervals (i.e., enlarging the tier size): the new interval starts with the new
boundary and ends at the start of the originally first existing interval. The label is set to this new interval.
c) To the right of the existing intervals (i.e., enlarging the tier size): the new interval starts at the end of
the originally last existing interval and ends with the new boundary. The label is set to this new interval.
This is somewhat different behaviour than in a) and b), where the new label is set to the interval
on the right of the new boundary; in c), the new label is set on the left of the new boundary,
but this is the only logical possibility.
It makes no sense to insert a boundary between existing intervals at a position where there is no
interval. This is against the basic logic of Praat interval tiers where, at the beginning, there is one
large empty interval from beginning to end, which is then divided into smaller intervals by adding
new boundaries. Nevertheless, if the TextGrid is created by external programmes, you may rarely
find such discontinuities. In such a case, first use the tg.repairContinuity() function.
Value
TextGrid object
See Also
tg.insertInterval, tg.removeIntervalLeftBoundary, tg.removeIntervalRightBoundary,
tg.removeIntervalBothBoundaries, tg.boundaryMagnet, tg.duplicateTierMergeSegments
Examples
tg <- tg.sample()
tg2 <- tg.insertNewIntervalTier(tg, 1, "INTERVALS")
tg2 <- tg.insertBoundary(tg2, "INTERVALS", 0.8)
tg2 <- tg.insertBoundary(tg2, "INTERVALS", 0.1, "Interval A")
tg2 <- tg.insertInterval(tg2, "INTERVALS", 1.2, 2.5, "Interval B")
## Not run:
tg.plot(tg2)
## End(Not run)
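A minimal sketch of case a) from Details, splitting one labelled interval on a freshly created tier (tier name and times are illustrative only):
tg <- tg.createNewTextGrid(0, 5)
tg <- tg.insertNewIntervalTier(tg, 1, "demo")
tg <- tg.insertInterval(tg, "demo", 1, 3, "original")
tg <- tg.insertBoundary(tg, "demo", 2, "right part")  # left part keeps "original"
tg$demo$label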
tg.insertInterval tg.insertInterval
Description
Inserts a new interval into an empty space in an interval tier: a) into an already existing interval with an
empty label (the most common situation because, e.g., a new interval tier has one empty interval from
beginning to the end), or b) outside of existing intervals (left or right), which may create another empty
interval in between.
Usage
tg.insertInterval(tg, tierInd, tStart, tEnd, label = "")
Arguments
tg TextGrid object
tierInd tier index or "name"
tStart start time of the new interval
tEnd end time of the new interval
label [optional] label of the new interval
Details
In most cases, this function is equivalent to 1.) tg.insertBoundary(tEnd) and 2.) tg.insertBoundary(tStart,
"new label"). But additional checks are performed: either a) tStart and tEnd belong to the same
empty interval, or b) both times are outside of the existing intervals (both left or both right).
An intersection of the new interval with more than one already existing interval (even an empty one) does
not make sense and is forbidden.
In many situations this function in fact creates more than one interval. E.g., let's assume an empty
interval tier with one empty interval from 0 to 5 sec. 1.) We insert a new interval from 1 to 2 with
label "he". Result: three intervals, 0-1 "", 1-2 "he", 2-5 "". 2.) Then, we insert an interval from
7 to 8 with label "lot". Result: five intervals, 0-1 "", 1-2 "he", 2-5 "", 5-7 "", 7-8 "lot". Note:
the empty 5-7 "" interval is inserted because we are going outside of the existing tier. 3.) Now, we
insert a new interval exactly between 2 and 3 with label "said". Result: really only one interval is
created (and only the right boundary is added, because the left one already exists): 0-1 "", 1-2 "he",
2-3 "said", 3-5 "", 5-7 "", 7-8 "lot". 4.) After this, we want to insert another interval, 3 to 5, with
label "a". In fact, this does not create any new interval at all; instead, it only sets the label of
the already existing interval 3-5. Result: 0-1 "", 1-2 "he", 2-3 "said", 3-5 "a", 5-7 "", 7-8 "lot".
This function is not implemented in Praat (6.0.14). It is very useful for adding separate inter-
vals to an empty area in an interval tier, e.g., the result of a voice activity detection algorithm. On the
other hand, if we want to continuously add new consecutive intervals, tg.insertBoundary() may be
more useful. In tg.insertInterval(), if we calculate both boundaries separately for each interval,
strange situations may happen due to numeric round-up errors, like 3.14*5 != 15.7. In such cases, it may
be hard to obtain precisely consecutive time instances. As 3.14*5 is slightly larger than 15.7 (try to
calculate 15.7 - 3.14*5), if you calculate tEnd of the first interval as 3.14*5 and tStart of the second
interval as 15.7, this function refuses to create the second interval because it would be an intersection.
In the opposite case (tEnd of the 1st: 15.7, tStart of the 2nd: 3.14*5), it would create another "micro"
interval between these two slightly different time instances. Instead, if you insert only one boundary
using the tg.insertBoundary() function, you are safe that only one new interval is created. And if you
calculate the "15.7" (no matter how), store it in a variable and then use this variable in
tg.insertInterval() both for the tEnd of the 1st interval and the tStart of the 2nd interval, you are
also safe; it works fine (see the sketch after the examples below).
Value
TextGrid object
See Also
tg.insertBoundary, tg.removeIntervalLeftBoundary, tg.removeIntervalRightBoundary,
tg.removeIntervalBothBoundaries, tg.boundaryMagnet, tg.duplicateTierMergeSegments
Examples
tg <- tg.sample()
tg2 <- tg.insertNewIntervalTier(tg, 1, "INTERVALS")
tg2 <- tg.insertBoundary(tg2, "INTERVALS", 0.8)
tg2 <- tg.insertBoundary(tg2, "INTERVALS", 0.1, "Interval A")
tg2 <- tg.insertInterval(tg2, "INTERVALS", 1.2, 2.5, "Interval B")
## Not run:
tg.plot(tg2)
## End(Not run)
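A minimal sketch of the "compute the shared boundary once and reuse it" advice from Details (tier name and times are illustrative only):
tg <- tg.createNewTextGrid(0, 20)
tg <- tg.insertNewIntervalTier(tg, 1, "safe")
boundary <- 3.14 * 5  # compute the shared time instance only once
tg <- tg.insertInterval(tg, "safe", 0, boundary, "first")
tg <- tg.insertInterval(tg, "safe", boundary, 20, "second")  # no micro interval, no intersection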
tg.insertNewIntervalTier
tg.insertNewIntervalTier
Description
Inserts new interval tier to the specified index (existing tiers are shifted). The new tier contains one
empty interval from beginning to end. Then, if we add new boundaries, this interval is divided to
smaller pieces.
Usage
tg.insertNewIntervalTier(tg, newInd = Inf, newTierName, tMin = NA, tMax = NA)
Arguments
tg TextGrid object
newInd new tier index (1 = the first, Inf = the last [default])
newTierName new tier name
tMin [optional] start time of the new tier
tMax [optional] end time of the new tier
Value
TextGrid object
See Also
tg.insertInterval, tg.insertNewPointTier, tg.duplicateTier, tg.duplicateTierMergeSegments,
tg.removeTier
Examples
## Not run:
tg <- tg.sample()
tg2 <- tg.insertNewIntervalTier(tg, 1, "INTERVALS")
tg2 <- tg.insertBoundary(tg2, "INTERVALS", 0.8)
tg2 <- tg.insertBoundary(tg2, "INTERVALS", 0.1, "Interval A")
tg2 <- tg.insertInterval(tg2, "INTERVALS", 1.2, 2.5, "Interval B")
tg2 <- tg.insertNewIntervalTier(tg2, Inf, "LastTier")
tg2 <- tg.insertInterval(tg2, "LastTier", 1, 3, "This is the last tier")
tg.plot(tg2)
## End(Not run)
tg.insertNewPointTier tg.insertNewPointTier
Description
Inserts new point tier to the specified index (existing tiers are shifted).
Usage
tg.insertNewPointTier(tg, newInd = Inf, newTierName)
Arguments
tg TextGrid object
newInd new tier index (1 = the first, Inf = the last [default])
newTierName new tier name
Value
TextGrid object
See Also
tg.insertPoint, tg.insertNewIntervalTier, tg.duplicateTier, tg.removeTier
Examples
## Not run:
tg <- tg.sample()
tg2 <- tg.insertNewPointTier(tg, 1, "POINTS")
tg2 <- tg.insertPoint(tg2, "POINTS", 3, "MY POINT")
tg2 <- tg.insertNewPointTier(tg2, Inf, "POINTS2") # the last tier
tg2 <- tg.insertPoint(tg2, "POINTS2", 2, "point in the last tier")
tg.plot(tg2)
## End(Not run)
tg.insertPoint tg.insertPoint
Description
Inserts new point to point tier of the given index.
Usage
tg.insertPoint(tg, tierInd, time, label)
Arguments
tg TextGrid object
tierInd tier index or "name"
time time of the new point
label label of the new point
Value
TextGrid object
See Also
tg.removePoint, tg.insertInterval, tg.insertBoundary
Examples
## Not run:
tg <- tg.sample()
tg2 <- tg.insertPoint(tg, "phoneme", 1.4, "NEW POINT")
tg.plot(tg2)
## End(Not run)
tg.isIntervalTier tg.isIntervalTier
Description
Returns TRUE if the tier is IntervalTier, FALSE otherwise.
Usage
tg.isIntervalTier(tg, tierInd)
Arguments
tg TextGrid object
tierInd tier index or "name"
Value
TRUE / FALSE
See Also
tg.isPointTier, tg.getTierName, tg.findLabels
Examples
tg <- tg.sample()
tg.isIntervalTier(tg, 1)
tg.isIntervalTier(tg, "word")
tg.isPointTier tg.isPointTier
Description
Returns TRUE if the tier is PointTier, FALSE otherwise.
Usage
tg.isPointTier(tg, tierInd)
Arguments
tg TextGrid object
tierInd tier index or "name"
Value
TRUE / FALSE
See Also
tg.isIntervalTier, tg.getTierName, tg.findLabels
Examples
tg <- tg.sample()
tg.isPointTier(tg, 1)
tg.isPointTier(tg, "word")
tg.plot tg.plot
Description
Plots interactive TextGrid using dygraphs package.
Usage
tg.plot(
tg,
group = "",
pt = NULL,
it = NULL,
formant = NULL,
formantScaleIntensity = TRUE,
formantDrawBandwidth = TRUE,
pitch = NULL,
pitchScaleIntensity = TRUE,
pitchShowStrength = FALSE,
snd = NULL
)
Arguments
tg TextGrid object
group [optional] character string, name of group for dygraphs synchronization
pt [optional] PitchTier object
it [optional] IntensityTier object
formant [optional] Formant object
formantScaleIntensity
[optional] Point size scaled according to relative intensity
formantDrawBandwidth
[optional] Draw formant bandwidth
pitch [optional] Pitch object
pitchScaleIntensity
[optional] Point size scaled according to relative intensity
pitchShowStrength
[optional] Show strength annotation
snd [optional] Sound object
See Also
tg.read, pt.plot, it.plot, pitch.plot
Examples
## Not run:
tg <- tg.sample()
tg.plot(tg)
tg.plot(tg.sample(), pt = pt.sample())
## End(Not run)
tg.read tg.read
Description
Loads TextGrid from Praat in Text or Short text format (UTF-8); it handles both Interval and Point
tiers. Labels may contain quotation marks and new lines.
Usage
tg.read(fileNameTextGrid, encoding = "UTF-8")
Arguments
fileNameTextGrid
Input file name
encoding File encoding (default: "UTF-8"), "auto" for auto-detect of Unicode encoding
Value
TextGrid object
See Also
tg.write, tg.plot, tg.repairContinuity, tg.createNewTextGrid, tg.findLabels, tg.duplicateTierMergeSegments,
pt.read, pitch.read, formant.read, it.read, col.read
Examples
## Not run:
tg <- tg.read("demo/H.TextGrid")
tg.plot(tg)
## End(Not run)
tg.removeIntervalBothBoundaries
tg.removeIntervalBothBoundaries
Description
Removes both the left and right boundary of the interval of the given index in an interval tier. In fact, this
operation concatenates three intervals into one (and their labels). It cannot be applied to the first
and the last interval because they contain the beginning or end boundary of the tier. E.g., let's assume
intervals 1-2-3. We remove both boundaries of the 2nd interval. The result is one interval 123. If
we do not want to concatenate labels (e.g., we want to remove the label together with its interval), we can
set the label of the second interval to the empty string "" before this operation (see the sketch after the
examples below). If we only want to remove the label of an interval "without concatenation", i.e., the desired
result is 1-empty-3, that is not this operation of removing boundaries; just set the label of the second interval
to the empty string "".
Usage
tg.removeIntervalBothBoundaries(tg, tierInd, index)
Arguments
tg TextGrid object
tierInd tier index or "name"
index index of the interval
Value
TextGrid object
See Also
tg.removeIntervalLeftBoundary, tg.removeIntervalRightBoundary, tg.insertBoundary,
tg.insertInterval
Examples
## Not run:
tg <- tg.sample()
tg.plot(tg)
tg2 <- tg.removeIntervalBothBoundaries(tg, "word", 3)
tg.plot(tg2)
## End(Not run)
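A minimal sketch of the "clear the label first" trick mentioned in the description (interval index 3 of the "word" tier is just an illustration, as in the example above):
## Not run:
tg <- tg.sample()
tg2 <- tg.setLabel(tg, "word", 3, "")                   # drop the label so it is not concatenated
tg2 <- tg.removeIntervalBothBoundaries(tg2, "word", 3)  # neighbours merge without the old label
tg.plot(tg2)
## End(Not run)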
tg.removeIntervalLeftBoundary
tg.removeIntervalLeftBoundary
Description
Removes the left boundary of the interval of the given index in an interval tier. In fact, it concatenates two
intervals into one (and their labels). It cannot be applied to the first interval because its left boundary is
the start boundary of the tier. E.g., we have intervals 1-2-3; if we remove the left boundary of the 2nd interval,
the result is two intervals: 12-3. If we do not want to concatenate labels, we have to set the label to
the empty string "" before this operation.
Usage
tg.removeIntervalLeftBoundary(tg, tierInd, index)
Arguments
tg TextGrid object
tierInd tier index or "name"
index index of the interval
Value
TextGrid object
See Also
tg.removeIntervalRightBoundary, tg.removeIntervalBothBoundaries, tg.insertBoundary,
tg.insertInterval
Examples
## Not run:
tg <- tg.sample()
tg.plot(tg)
tg2 <- tg.removeIntervalLeftBoundary(tg, "word", 3)
tg.plot(tg2)
## End(Not run)
tg.removeIntervalRightBoundary
tg.removeIntervalRightBoundary
Description
Removes the right boundary of the interval of the given index in an interval tier. In fact, it concatenates
two intervals into one (and their labels). It cannot be applied to the last interval because its right boundary is
the end boundary of the tier. E.g., we have intervals 1-2-3; if we remove the right boundary of the 2nd interval,
the result is two intervals: 1-23. If we do not want to concatenate labels, we have to set the label to
the empty string "" before this operation.
Usage
tg.removeIntervalRightBoundary(tg, tierInd, index)
Arguments
tg TextGrid object
tierInd tier index or "name"
index index of the interval
Value
TextGrid object
See Also
tg.removeIntervalLeftBoundary, tg.removeIntervalBothBoundaries, tg.insertBoundary,
tg.insertInterval
Examples
## Not run:
tg <- tg.sample()
tg.plot(tg)
tg2 <- tg.removeIntervalRightBoundary(tg, "word", 3)
tg.plot(tg2)
## End(Not run)
tg.removePoint tg.removePoint
Description
Remove point of the given index from the point tier.
Usage
tg.removePoint(tg, tierInd, index)
Arguments
tg TextGrid object
tierInd tier index or "name"
index index of point to be removed
Value
TextGrid object
See Also
tg.insertPoint, tg.getNumberOfPoints, tg.removeIntervalBothBoundaries
Examples
tg <- tg.sample()
tg$phoneme$label
tg2 <- tg.removePoint(tg, "phoneme", 1)
tg2$phoneme$label
tg.removeTier tg.removeTier
Description
Removes tier of the given index.
Usage
tg.removeTier(tg, tierInd)
Arguments
tg TextGrid object
tierInd tier index or "name"
Value
TextGrid object
See Also
tg.insertNewIntervalTier, tg.insertNewPointTier, tg.duplicateTier
Examples
## Not run:
tg <- tg.sample()
tg.plot(tg)
tg2 <- tg.removeTier(tg, "word")
tg.plot(tg2)
## End(Not run)
tg.repairContinuity tg.repairContinuity
Description
Repairs the problem of continuity of T2 and T1 in interval tiers. This problem is very rare and should
not appear. However, e.g., the automatic segmentation tool Prague Labeller produces random numeric
round-up errors where, e.g., T2 of the preceding interval is slightly higher than T1 of the current
interval. Because of that, the boundary cannot be moved manually in the Praat editor window.
Usage
tg.repairContinuity(tg, verbose = TRUE)
Arguments
tg TextGrid object
verbose [optional, default=TRUE] If FALSE, the function performs everything quietly.
Value
TextGrid object
See Also
tg.sampleProblem
Examples
## Not run:
tgProblem <- tg.sampleProblem()
tgNew <- tg.repairContinuity(tgProblem)
tg.write(tgNew, "demo_problem_OK.TextGrid")
## End(Not run)
tg.sample tg.sample
Description
Returns sample TextGrid.
Usage
tg.sample()
Value
TextGrid
See Also
tg.plot
Examples
tg <- tg.sample()
tg.plot(tg)
tg.sampleProblem tg.sampleProblem
Description
Returns sample TextGrid with continuity problem.
Usage
tg.sampleProblem()
Value
TextGrid
See Also
tg.repairContinuity
Examples
tg <- tg.sampleProblem()
tg2 <- tg.repairContinuity(tg)
tg2 <- tg.repairContinuity(tg2)
tg.plot(tg2)
tg.setLabel tg.setLabel
Description
Sets (changes) label of interval or point of the given index in the interval or point tier.
Usage
tg.setLabel(tg, tierInd, index, newLabel)
Arguments
tg TextGrid object
tierInd tier index or "name"
index index of interval or point
newLabel new "label"
See Also
tg.getLabel
Examples
tg <- tg.sample()
tg2 <- tg.setLabel(tg, "word", 3, "New Label")
tg.getLabel(tg2, "word", 3)
tg.setTierName tg.setTierName
Description
Sets (changes) name of tier of the given index.
Usage
tg.setTierName(tg, tierInd, name)
Arguments
tg TextGrid object
tierInd tier index or "name"
name new "name" of the tier
See Also
tg.getTierName
Examples
tg <- tg.sample()
tg2 <- tg.setTierName(tg, "word", "WORDTIER")
tg.getTierName(tg2, 4)
tg.write tg.write
Description
Saves TextGrid to a file. A TextGrid may contain both interval and point tiers (tg[[1]], tg[[2]],
tg[[3]], etc.). If the tier type is not specified in $type, it is assumed to be "interval". If specified,
$type has to be "interval" or "point". If there is no class(tg)["tmin"] and class(tg)["tmax"],
they are calculated as the min and max of all tiers. The file is saved in UTF-8 encoding.
Usage
tg.write(tg, fileNameTextGrid, format = "short")
Arguments
tg TextGrid object
fileNameTextGrid
Output file name
format Output file format ("short" (default, short text format) or "text" (a.k.a. full
text format))
See Also
tg.read, pt.write
Examples
## Not run:
tg <- tg.sample()
tg.write(tg, "demo_output.TextGrid")
## End(Not run)
gatsby-plugin-offline
===
Adds drop-in support for making a Gatsby site work offline and more resistant to bad network connections. It creates a service worker for the site and loads the service worker into the client.
If you're using this plugin with `gatsby-plugin-manifest` (recommended) this plugin should be listed *after* that plugin so the manifest file can be included in the service worker.
Install
---
`npm install --save gatsby-plugin-offline`
How to use
---
```
// In your gatsby-config.js
plugins: [`gatsby-plugin-offline`]
```
Overriding options
---
When adding this plugin to your `gatsby-config.js`, you can pass in options to override the default [Workbox](https://developers.google.com/web/tools/workbox/modules/workbox-build) config.
The default config is as follows. Warning: you can break the offline support by changing these options, so tread carefully.
```
const options = {
  importWorkboxFrom: `local`,
  globDirectory: rootDir,
  globPatterns,
  modifyUrlPrefix: {
    // If `pathPrefix` is configured by user, we should replace
    // the default prefix with `pathPrefix`.
    "/": `${pathPrefix}/`,
  },
  cacheId: `gatsby-plugin-offline`,
  // Don't cache-bust JS or CSS files, and anything in the static directory,
  // since these files have unique URLs and their contents will never change
  dontCacheBustUrlsMatching: /(\.js$|\.css$|static\/)/,
  runtimeCaching: [
    {
      // Use cacheFirst since these don't need to be revalidated (same RegExp
      // and same reason as above)
      urlPattern: /(\.js$|\.css$|static\/)/,
      handler: `cacheFirst`,
    },
    {
      // Add runtime caching of various other page resources
      urlPattern: /^https?:.*\.(png|jpg|jpeg|webp|svg|gif|tiff|js|woff|woff2|json|css)$/,
      handler: `staleWhileRevalidate`,
    },
    {
      // Google Fonts CSS (doesn't end in .css so we need to specify it)
      urlPattern: /^https?:\/\/fonts\.googleapis\.com\/css/,
      handler: `staleWhileRevalidate`,
    },
  ],
  skipWaiting: true,
  clientsClaim: true,
}
```
Remove
---
If you want to remove `gatsby-plugin-offline` from your site at a later point,
substitute it with [`gatsby-plugin-remove-serviceworker`](https://www.npmjs.com/package/gatsby-plugin-remove-serviceworker)
to safely remove the service worker. First, install the new package:
```
npm install gatsby-plugin-remove-serviceworker
npm uninstall gatsby-plugin-offline
```
Then, update your `gatsby-config.js`:
```
  plugins: [
-   `gatsby-plugin-offline`,
+   `gatsby-plugin-remove-serviceworker`,
  ]
```
This will ensure that the worker is properly unregistered, instead of leaving an outdated version registered in users' browsers.
Notes
---
### Empty View Source and SEO
Gatsby offers great SEO capabilities and that is no different with `gatsby-plugin-offline`. However, you shouldn't think that Gatsby doesn't serve HTML tags anymore when looking at your source code in the browser (with `Right click` => `View source`). `View source` doesn't represent the actual HTML data since `gatsby-plugin-offline` registers and loads a service worker that will cache and handle this differently. Your site is loaded from the service worker, not from its actual source (check your `Network` tab in the DevTools for that).
To see the HTML data that crawlers will receive, run this in your terminal:
```
curl https://www.yourdomain.tld
```
Alternatively you can have a look at the `/public/index.html` file in your project folder.
Readme
---
### Keywords
* gatsby
* gatsby-plugin
* offline
* precache
* service-worker
dotenv
===
Dotenv is a zero-dependency module that loads environment variables from a `.env` file into [`process.env`](https://nodejs.org/docs/latest/api/process.html#process_process_env). Storing configuration in the environment separate from code is based on [The Twelve-Factor App](http://12factor.net/config) methodology.
Install
---
```
npm install dotenv --save
```
Usage
---
As early as possible in your application, require and configure dotenv.
```
require('dotenv').config()
```
Create a `.env` file in the root directory of your project. Add environment-specific variables on new lines in the form of `NAME=VALUE`.
For example:
```
DB_HOST=localhost
DB_USER=root
DB_PASS=s1mpl3
```
That's it.
`process.env` now has the keys and values you defined in your `.env` file.
```
var db = require('db')
db.connect({
host: process.env.DB_HOST,
username: process.env.DB_USER,
password: process.env.DB_PASS
})
```
###
Preload
If you are using iojs-v1.6.0 or later, you can use the `--require` (`-r`) command line option to preload dotenv. By doing this, you do not need to require and load dotenv in your application code.
```
$ node -r dotenv/config your_script.js
```
The configuration options below are supported as command line arguments in the format `dotenv_config_<option>=value`
```
$ node -r dotenv/config your_script.js dotenv_config_path=/custom/path/to/your/env/vars
```
Config
---
*Alias: `load`*
`config` will read your .env file, parse the contents, assign it to
[`process.env`](https://nodejs.org/docs/latest/api/process.html#process_process_env),
and return an Object with a `parsed` key containing the loaded content or an `error` key if it failed.
```
const result = dotenv.config()
if (result.error) {
throw result.error
}
console.log(result.parsed)
```
You can additionally, pass options to `config`.
###
Options
####
Path
Default: `.env`
You can specify a custom path if your file containing environment variables is named or located differently.
```
require('dotenv').config({path: '/custom/path/to/your/env/vars'})
```
####
Encoding
Default: `utf8`
You may specify the encoding of your file containing environment variables using this option.
```
require('dotenv').config({encoding: 'base64'})
```
Parse
---
The engine which parses the contents of your file containing environment variables is available to use. It accepts a String or Buffer and will return an Object with the parsed keys and values.
```
var dotenv = require('dotenv')
var buf = new Buffer('BASIC=basic')
var config = dotenv.parse(buf) // will return an object
console.log(typeof config, config) // object { BASIC : 'basic' }
```
###
Rules
The parsing engine currently supports the following rules:
* `BASIC=basic` becomes `{BASIC: 'basic'}`
* empty lines are skipped
* lines beginning with `#` are treated as comments
* empty values become empty strings (`EMPTY=` becomes `{EMPTY: ''}`)
* single and double quoted values are escaped (`SINGLE_QUOTE='quoted'` becomes `{SINGLE_QUOTE: "quoted"}`)
* new lines are expanded if in double quotes (`MULTILINE="new\nline"` becomes `{MULTILINE: 'new\nline'}`)
* inner quotes are maintained (think JSON) (`JSON={"foo": "bar"}` becomes `{JSON:"{\"foo\": \"bar\"}"}`)
* whitespace is removed from both ends of the value (see more on [`trim`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/Trim)) (`FOO=" some value "` becomes `{FOO: 'some value'}`)
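Taken together, the rules above can be checked with a short sketch (not from the original README; the variable names and sample keys are purely illustrative):
```
var dotenv = require('dotenv')

// an illustrative buffer combining several of the rules above
var buf = Buffer.from(
  '# comments are skipped\n' +
  'BASIC=basic\n' +
  'EMPTY=\n' +
  "SINGLE_QUOTE='quoted'\n" +
  'TRIMMED="  some value  "\n'
)

// per the rules above, this should log:
// { BASIC: 'basic', EMPTY: '', SINGLE_QUOTE: 'quoted', TRIMMED: 'some value' }
console.log(dotenv.parse(buf))
```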
FAQ
---
###
Should I commit my `.env` file?
No. We **strongly** recommend against committing your `.env` file to version control. It should only include environment-specific values such as database passwords or API keys. Your production database should have a different password than your development database.
###
Should I have multiple `.env` files?
No. We **strongly** recommend against having a "main" `.env` file and an "environment" `.env` file like `.env.test`. Your config should vary between deploys, and you should not be sharing values between environments.
> In a twelve-factor app, env vars are granular controls, each fully orthogonal to other env vars. They are never grouped together as “environments”, but instead are independently managed for each deploy. This is a model that scales up smoothly as the app naturally expands into more deploys over its lifetime.
> – [The Twelve-Factor App](http://12factor.net/config)
###
What happens to environment variables that were already set?
We will never modify any environment variables that have already been set. In particular, if there is a variable in your `.env` file which collides with one that already exists in your environment, then that variable will be skipped. This behavior allows you to override all `.env` configurations with a machine-specific environment, although it is not recommended.
If you want to override `process.env` you can do something like this:
```
const fs = require('fs')
const dotenv = require('dotenv')
const envConfig = dotenv.parse(fs.readFileSync('.env.override'))
for (var k in envConfig) {
process.env[k] = envConfig[k]
}
```
###
Can I customize/write plugins for dotenv?
For `[email protected]`: Yes. `dotenv.config()` now returns an object representing the parsed `.env` file. This gives you everything you need to continue setting values on `process.env`. For example:
```
var dotenv = require('dotenv')
var variableExpansion = require('dotenv-expand')
const myEnv = dotenv.config()
variableExpansion(myEnv)
```
###
What about variable expansion?
For `[email protected]`: Use [dotenv-expand](https://github.com/motdotla/dotenv-expand).
For `[email protected]`: We haven't been presented with a compelling use case for expanding variables and believe it leads to env vars that are not "fully orthogonal" as [The Twelve-Factor App](http://12factor.net/config) outlines.[[1](https://github.com/motdotla/dotenv/issues/39)][[2](https://github.com/motdotla/dotenv/pull/97)] Please open an issue if you have a compelling use case.
###
How do I use dotenv with `import`?
ES2015 and beyond offers modules that allow you to `export` any top-level `function`, `class`, `var`, `let`, or `const`.
> When you run a module containing an `import` declaration, the modules it imports are loaded first, then each module body is executed in a depth-first traversal of the dependency graph, avoiding cycles by skipping anything already executed.
> – [ES6 In Depth: Modules](https://hacks.mozilla.org/2015/08/es6-in-depth-modules/)
You must run `dotenv.config()` before referencing any environment variables. Here's an example of problematic code:
`errorReporter.js`:
```
import { Client } from 'best-error-reporting-service'
export const client = new Client(process.env.BEST_API_KEY)
```
`index.js`:
```
import dotenv from 'dotenv'
dotenv.config()
import errorReporter from './errorReporter'
errorReporter.client.report(new Error('faq example'))
```
`client` will not be configured correctly because it was constructed before `dotenv.config()` was executed. There are (at least) 3 ways to make this work.
1. Preload dotenv: `node --require dotenv/config index.js` (*Note: you do not need to `import` dotenv with this approach*)
2. Import `dotenv/config` instead of `dotenv` (*Note: you do not need to call `dotenv.config()` and must pass options via the command line with this approach*)
3. Create a separate file that will execute `config` first as outlined in [this comment on #133](https://github.com/motdotla/dotenv/issues/133#issuecomment-255298822)
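For instance, option 2 can be sketched as follows (not from the original README; it reuses the `errorReporter.js` example above and imports its named `client` export):
```
// index.js
// `dotenv/config` calls dotenv.config() as a side effect and, because module
// imports are evaluated in order, it runs before './errorReporter' is evaluated
import 'dotenv/config'
import { client } from './errorReporter'

client.report(new Error('faq example'))
```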
Contributing Guide
---
See [CONTRIBUTING.md](https://github.com/motdotla/dotenv/blob/HEAD/CONTRIBUTING.md)
Change Log
---
See [CHANGELOG.md](https://github.com/motdotla/dotenv/blob/HEAD/CHANGELOG.md)
License
---
See [LICENSE](https://github.com/motdotla/dotenv/blob/HEAD/LICENSE)
Who's using dotenv
---
Here's just a few of many repositories using dotenv:
* [jaws](https://github.com/jaws-framework/jaws-core-js)
* [node-lambda](https://github.com/motdotla/node-lambda)
* [resume-cli](https://www.npmjs.com/package/resume-cli)
* [phant](https://www.npmjs.com/package/phant)
* [adafruit-io-node](https://github.com/adafruit/adafruit-io-node)
* [mockbin](https://www.npmjs.com/package/mockbin)
* [and many more...](https://www.npmjs.com/browse/depended/dotenv)
Go well with dotenv
---
Here's some projects that expand on dotenv. Check them out.
* [require-environment-variables](https://github.com/bjoshuanoah/require-environment-variables)
* [dotenv-safe](https://github.com/rolodato/dotenv-safe)
* [envalid](https://github.com/af/envalid)
Readme
---
### Keywords
* dotenv
* env
* .env
* environment
* variables
* config
* settings |
ravedash | cran | R | Package ‘ravedash’
October 16, 2022
Type Package
Title Dashboard System for Reproducible Visualization of 'iEEG'
Version 0.1.2
Description Dashboard system to display the analysis results produced by 'RAVE'
(<NAME>., <NAME>., <NAME>. (2020), R analysis
and visualizations of 'iEEG' <doi:10.1016/j.neuroimage.2020.117341>).
Provides infrastructure to integrate customized analysis pipelines into
dashboard modules, including file structures, front-end widgets, and
event handlers.
License MIT + file LICENSE
Encoding UTF-8
Language en-US
Imports dipsaus (>= 0.2.0), logger (>= 0.2.2), raveio (>= 0.0.6),
rpymat (>= 0.1.2), shidashi (>= 0.1.1), shiny (>= 1.7.1),
shinyWidgets (>= 0.6.2), threeBrain (>= 0.2.4), shinyvalidate,
htmlwidgets
Suggests htmltools, fastmap (>= 1.1.0), rlang (>= 1.0.2), crayon (>=
1.4.2), rstudioapi, knitr, httr, rmarkdown
RoxygenNote 7.2.1
URL https://dipterix.org/ravedash/
BugReports https://github.com/dipterix/ravedash/issues
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre, cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-10-15 23:50:02 UTC
R topics documented:
card_url
debug_modules
get_active_module_info
group_box
logger
module_server_common
new_rave_shiny_component_container
output_gadget
random-text
rave-input-output-card
rave-runtime-events
rave-session
rave-ui-preset
ravedash_footer
register_output
run_analysis_button
safe_observe
shiny_icons
simple_layout
standalone_viewer
temp_file
card_url Set ’URL’ scheme for modules
Description
Automatically generates href for input_card and output_card
Usage
set_card_url_scheme(module_id, root, sep = "/")
card_href(title, type = "input", module_id = NULL)
Arguments
module_id the module ID
root ’URL’ default route
sep separation
title a title string that will be used to generate ’URL’
type type of the card; choices are 'input' or 'output'
Value
The hyper reference of suggested card ’URL’
Examples
set_card_url_scheme(
module_id = "power_explorer",
root = "https://openwetware.org/wiki/RAVE:ravebuiltins",
sep = ":")
card_href("Set Electrodes", type = "input", module_id = "power_explorer")
debug_modules Debug ’RAVE’ modules interactively in local project folder
Description
Debug ’RAVE’ modules interactively in local project folder
Usage
debug_modules(
module_root = rstudioapi::getActiveProject(),
host = "127.0.0.1",
port = 17283,
jupyter = FALSE,
...
)
Arguments
module_root root of modules, usually the project folder created from 'shidashi' template
host, port host and port of the application
jupyter whether to launch 'Jupyter' server; default is false
... passed to render
Value
'RStudio' job ID
get_active_module_info
Get current active module information, internally used
Description
Get current active module information, internally used
Usage
get_active_module_info(session = shiny::getDefaultReactiveDomain())
Arguments
session shiny reactive domain, default is current domain
Value
A named list, including module ID, module label, internal 'rave_id'.
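A minimal sketch (not part of the original manual; it assumes a shiny reactive context inside a running ’RAVE’ module):
info <- get_active_module_info()
# named list with the module ID, module label, and internal 'rave_id'
str(info)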
group_box Group input elements into a box with title
Description
Only works in template framework provided by 'shidashi' package, see use_template
Usage
group_box(title, ..., class = NULL)
flex_group_box(title, ..., class = NULL, wrap = "wrap", direction = "row")
Arguments
title the box title
... elements to be included or to be passed to other methods
class additional class of the box
wrap, direction
see flex_container
Value
A ’HTML’ tag
Examples
library(shiny)
library(shidashi)
library(ravedash)
group_box(
title = "Analysis Group A",
selectInput("a", "Condition", choices = c("A", "B")),
sliderInput("b", "Time range", min = 0, max = 1, value = c(0,1))
)
flex_group_box(
title = "Project and Subject",
flex_item( "Some input 1" ),
flex_item( "Some input 2" ),
flex_break(),
flex_item( "Some input in new line" )
)
logger Logger system used by ’RAVE’
Description
Keep track of messages printed by modules
Usage
logger(
...,
level = c("info", "warning", "error", "fatal", "debug", "trace"),
calc_delta = "auto",
.envir = parent.frame(),
.sep = "",
use_glue = FALSE,
reset_timer = FALSE
)
set_logger_path(root_path, max_bytes, max_files)
logger_threshold(
level = c("info", "warning", "error", "fatal", "debug", "trace"),
module_id,
type = c("console", "file", "both")
)
logger_error_condition(cond, level = "error")
error_notification(
cond,
title = "Error found!",
type = "danger",
class = "error_notif",
delay = 30000,
autohide = TRUE,
session = shiny::getDefaultReactiveDomain()
)
with_error_notification(expr, envir = parent.frame(), quoted = FALSE, ...)
Arguments
..., .envir, .sep
passed to glue, if use_glue is true
level the level of message, choices are 'info' (default), 'warning', 'error', 'fatal',
'debug', 'trace'
calc_delta whether to calculate time difference between current message and previous mes-
sage; default is 'auto', which prints time difference when level is 'debug'.
This behavior can be changed by setting calc_delta to a logical TRUE to enable or FALSE to disable it.
use_glue whether to use glue to combine ...; default is false
reset_timer whether to reset timer used by calc_delta
root_path root directory if you want log messages to be saved to hard disks; if root_path
is NULL, "", or nullfile, then logger path will be unset.
max_bytes maximum file size for each logger partitions
max_files maximum number of partition files to hold the log; old files will be deleted.
module_id ’RAVE’ module identification string, or name-space; default is 'ravedash'
type which type of logging should be set; default is 'console', if file log is enabled
through set_logger_path, type could be 'file' or 'both'. Default log level
is 'info' on console and 'debug' on file.
cond condition to log
class, title, delay, autohide
passed to show_notification
session shiny session
expr expression to evaluate
envir environment to evaluate expr
quoted whether expr is quoted; default is false
Value
The message without time-stamps
Examples
logger("This is a message")
a <- 1
logger("A message with glue: a={a}")
logger("A message without glue: a={a}", use_glue = FALSE)
logger("Message A", calc_delta = TRUE, reset_timer = TRUE)
logger("Seconds before logging another message", calc_delta = TRUE)
# by default, debug and trace messages won't be displayed
logger('debug message', level = 'debug')
# adjust logger level, make sure `module_id` is a valid RAVE module ID
logger_threshold('debug', module_id = NULL)
# Debug message will display
logger('debug message', level = 'debug')
# Trace message will not display as it's lower than debug level
logger('trace message', level = 'trace')
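The notification helpers are typically used inside a shiny server function. The sketch below is not part of the original manual, and the input ID 'run' is purely hypothetical:
server <- function(input, output, session){
shiny::observeEvent(input$run, {
with_error_notification({
# any error raised here is logged and shown as a dashboard notification
stop("Invalid electrode selection")
})
})
}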
module_server_common Default module server function
Description
Common shiny server function to enable modules that requires data loader panel.
Usage
module_server_common(
module_id,
check_data_loaded,
...,
session = shiny::getDefaultReactiveDomain(),
parse_env = NULL
)
Arguments
module_id ’RAVE’ module ID
check_data_loaded
a function that takes zero to one argument and must return either TRUE if data
has been loaded or FALSE if loader needs to be open to load data.
... ignored
session shiny session
parse_env environment used to parse module
Value
A list of server utility functions; see ’Examples’ below.
Examples
# Debug in non-reactive session: create fake session
fake_session <- shiny::MockShinySession$new()
# register common-server function
module_server_common(module_id = "mock-session",
session = fake_session)
server_tools <- get_default_handlers(fake_session)
# Print each function to see the usage
server_tools$auto_recalculate
server_tools$run_analysis_onchange
server_tools$run_analysis_flag
server_tools$module_is_active
server_tools$simplify_view
# 'RAVE' module server function
server <- function(input, output, session, ...){
pipeline_path <- "PATH to module pipeline"
module_server_common(
module_id = session$ns(NULL),
check_data_loaded = function(first_time){
re <- tryCatch({
# Try to read data from pipeline results
repo <- raveio::pipeline_read(
'repository',
pipe_dir = pipeline_path
)
# Fire event to update footer message
ravedash::fire_rave_event('loader_message',
"Data loaded")
# Return TRUE indicating data has been loaded
TRUE
}, error = function(e){
# Fire event to remove footer message
ravedash::fire_rave_event('loader_message', NULL)
# Return FALSE indicating no data has been found
FALSE
})
}, session = session
)
}
new_rave_shiny_component_container
Creates a container for preset components
Description
Creates a container for preset components
Usage
new_rave_shiny_component_container(
module_id,
pipeline_name,
pipeline_path = raveio::pipeline_find(pipeline_name),
settings_file = "settings.yaml"
)
Arguments
module_id ’RAVE’ module ID
pipeline_name the name of pipeline to run
pipeline_path path of the pipeline
settings_file the settings file of the pipeline, usually stores the pipeline input information;
default is "settings.yaml"
Value
A 'RAVEShinyComponentContainer' instance
Examples
f <- tempfile()
dir.create(f, showWarnings = FALSE, recursive = TRUE)
file.create(file.path(f, "settings.yaml"))
container <- new_rave_shiny_component_container(
module_id = "module_power_phase_coherence",
pipeline_name = "power_phase_coherence_pipeline",
pipeline_path = f
)
loader_project <- presets_loader_project()
loader_subject <- presets_loader_subject()
container$add_components(
loader_project, loader_subject
)
output_gadget ’RAVE’ dashboard output gadgets
Description
’RAVE’ dashboard output gadgets
Usage
output_gadget(
outputId,
icon = NULL,
type = c("standalone", "download", "actionbutton", "custom"),
class = NULL,
inputId = NULL,
...
)
output_gadget_container(
expr,
gadgets = c("standalone", "download"),
quoted = FALSE,
env = parent.frame(),
outputId = NULL,
class = NULL,
container = NULL,
wrapper = TRUE
)
Arguments
outputId output ID in the root scope of shiny session
icon gadget icon
type, gadgets gadget type(s), currently supported: 'standalone', 'download', 'actionbutton'
class additional class to the gadget or its container
inputId input ID, automatically assigned internally
... ignored
expr shiny output call expression, for example, shiny::plotOutput({...})
quoted whether expr is quoted; default is false
env environment where expr should be evaluated
container optional container for the gadgets and outputs; will be ignored if wrapper is
false
wrapper whether to wrap the gadgets and the output within a ’HTML’ container
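A minimal sketch (not part of the original manual): wrapping a plot output so it gains the default 'standalone' and 'download' gadgets; the output ID 'my_plot' is purely illustrative.
output_gadget_container(
shiny::plotOutput("my_plot"),
gadgets = c("standalone", "download")
)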
random-text Randomly choose a text from a list of strings
Description
Randomly choose a text from a list of strings
Usage
be_patient_text(candidates)
finished_text(candidates)
Arguments
candidates character vectors, a list of candidates
Value
be_patient_text returns a text asking users to be patient; finished_text returns the text indi-
cating the task has finished.
Examples
be_patient_text()
finished_text()
rave-input-output-card
Input and output card (front-end element)
Description
Input and output card (front-end element)
Usage
input_card(
title,
...,
class = "",
class_header = "shidashi-anchor",
class_body = "padding-10",
class_foot = "padding-10",
href = "auto",
tools = NULL,
footer = NULL,
append_tools = TRUE,
toggle_advanced = FALSE,
module_id = get0("module_id", ifnotfound = NULL, envir = parent.frame())
)
output_card(
title,
...,
class = "",
class_body = "padding-10",
class_foot = "padding-10",
href = "auto",
tools = NULL,
append_tools = TRUE,
module_id = get0("module_id", ifnotfound = NULL, envir = parent.frame())
)
Arguments
title title of the card
... additional elements to be included in the card, see card
class the ’HTML’ class for card
class_header the ’HTML’ class for card header; default is 'shidashi-anchor', which will
generate shortcuts at the page footers
class_body the ’HTML’ class for card body; default is "padding-10", with '10px' at each
direction
class_foot the ’HTML’ class for card footer; default is "padding-10", with '10px' at each
direction
href hyper reference link of the card
tools a list of additional card tools, see card_tool
footer footer elements
append_tools whether to append tools to the default list; default is true
toggle_advanced
whether to show links in the footer to toggle elements with ’HTML’ class 'rave-optional'
module_id the ’RAVE’ module ID
Value
’HTML’ tags
See Also
card
Examples
input_card(title = "Condition selector",
"Please select experimental conditions:",
shiny::selectInput(
inputId = "condition", label = "Condition",
choices = c("Audio", "Visual")
))
rave-runtime-events ’RAVE’ run-time events
Description
A set of preset behaviors used by ’RAVE’ modules
Usage
register_rave_session(
session = shiny::getDefaultReactiveDomain(),
.rave_id = NULL
)
get_default_handlers(session = shiny::getDefaultReactiveDomain())
fire_rave_event(
key,
value,
global = FALSE,
force = FALSE,
session = shiny::getDefaultReactiveDomain(),
.internal_ok = FALSE
)
get_session_by_rave_id(rave_id)
get_rave_event(key, session = shiny::getDefaultReactiveDomain())
open_loader(session = shiny::getDefaultReactiveDomain())
close_loader(session = shiny::getDefaultReactiveDomain())
watch_loader_opened(session = shiny::getDefaultReactiveDomain())
watch_data_loaded(session = shiny::getDefaultReactiveDomain())
current_shiny_theme(default, session = shiny::getDefaultReactiveDomain())
Arguments
session shiny session, usually automatically determined
key event key to fire or to monitor
value event value
global whether to notify other sessions (experimental and not recommended)
force whether to force firing the event even the value hasn’t changed
.internal_ok internally used
rave_id, .rave_id
internally used to store unique session identification key
default default value if not found
Details
The goal of these event functions is to simplify the dashboard logic without requiring knowledge of the internal details or passing global variables around. Everything starts with register_rave_session. This
function registers a unique identification to the session, and adds a set of registries to monitor the
changes of themes, built-in, and custom events. If you have called module_server_common, then
register_rave_session has already been called.
register_rave_session make initial registries, must be called, returns a list of registries
fire_rave_event send signals to make changes to a event; returns nothing
get_rave_event watch and get the event values; must run in shiny reactive context
open_loader fire an event with a special key 'open_loader' to open the data-loading panel; re-
turns nothing
close_loader reset an event with a special key 'open_loader' to close the data-loading panel if
possible; returns nothing
watch_loader_opened watch in shiny reactive context whether the loader is opened; returns a
logical value, but raise errors when reactive context is missing
watch_data_loaded watch a special event with key 'data_loaded'; returns a logical value of
whether new data has been loaded, or raise errors when reactive context is missing
current_shiny_theme watch and returns a list of theme parameters, for example, light or dark
theme
Value
See ’Details’
Built-in Events
The following event keys are built-in. Please do not fire them using fire_rave_event or the
’RAVE’ application might crash
’simplify_toggle’ toggle visibility of ’HTML’ elements with class 'rave-option'
’run_analysis’ notifies the module to run pipeline
’save_pipeline’, ’load_pipeline’ notifies the module to save or load pipeline
’data_loaded’ notifies the module that new data has been loaded
’open_loader’, ’toggle_loader’ notifies the internal server code to show or hide the data load-
ing panel
’active_module’ internally used to store current active module information
Examples
library(shiny)
library(ravedash)
ui <- fluidPage(
actionButton("btn", "Fire event"),
actionButton("btn2", "Toggle loader")
)
server <- function(input, output, session) {
# Create event registries
register_rave_session()
shiny::bindEvent(
shiny::observe({
fire_rave_event("my_event_key", Sys.time())
}),
input$btn,
ignoreInit = TRUE,
ignoreNULL = TRUE
)
shiny::bindEvent(
shiny::observe({
cat("An event fired with value:", get_rave_event("my_event_key"), "\n")
}),
get_rave_event("my_event_key"),
ignoreNULL = TRUE
)
shiny::bindEvent(
shiny::observe({
if(watch_loader_opened()){
close_loader()
} else {
open_loader()
}
}),
input$btn2,
ignoreInit = TRUE,
ignoreNULL = TRUE
)
shiny::bindEvent(
shiny::observe({
cat("Loader is", ifelse(watch_loader_opened(), "opened", "closed"), "\n")
}),
watch_loader_opened(),
ignoreNULL = TRUE
)
}
if(interactive()){
shinyApp(ui, server)
}
rave-session Create, register, list, and remove ’RAVE’ sessions
Description
Create, register, list, and remove ’RAVE’ sessions
Usage
new_session(update = FALSE)
use_session(x)
launch_session(
x,
host = "127.0.0.1",
port = NULL,
options = list(jupyter = TRUE, jupyter_port = NULL, as_job = TRUE, launch_browser =
TRUE, single_session = FALSE)
)
session_getopt(keys, default = NA, namespace = "default")
session_setopt(..., .list = NULL, namespace = "default")
remove_session(x)
remove_all_sessions()
list_session(path = session_root(), order = c("none", "ascend", "descend"))
start_session(
session,
new = NA,
host = "127.0.0.1",
port = NULL,
jupyter = NA,
jupyter_port = NULL,
as_job = TRUE,
launch_browser = TRUE,
single_session = FALSE
)
shutdown_session(
returnValue = invisible(),
session = shiny::getDefaultReactiveDomain()
)
Arguments
update logical, whether to update to latest ’RAVE’ template
host host ’IP’ address, default is ’localhost’
port port to listen
options additional options, including jupyter, jupyter_port, as_job, and launch_browser
keys vector of characters, one or more keys of which the values should be obtained
default default value if key is missing
namespace namespace of the option; default is 'default'
..., .list named list of key-value pairs of session options. The keys must be characters,
and values must be simple data types (such as numeric vectors, characters)
path root path to store the sessions; default is the "tensor_temp_path" in raveio_getopt
order whether to order the session by date created; choices are 'none' (default),
'ascend', 'descend'
session, x session identification string, or session object; use list_session to list all ex-
isting sessions
new whether to create a new session instead of using the most recent one, default is
false
jupyter logical, whether to launch ’jupyter’ instance as well. It requires additional setups
to enable ’jupyter’ lab
jupyter_port port used by ’jupyter’ lab, can be set by 'jupyter_port' option in raveio_setopt
as_job whether to launch the application as ’RStudio’ job, default is true if ’RStudio’
is detected; when running without ’RStudio’, this option is always false
launch_browser whether to launch browser, default is true
single_session whether to enable single-session mode. Under this mode, closing the main frame
will terminate ’RAVE’ run-time session, otherwise the ’RAVE’ instance will still
open in the background
returnValue passed to stopApp
Value
new_session returns a session object with character 'session_id' and a function 'launch_session'
to launch the application from this session
use_session returns a session object, the same as new_session under the condition that corre-
sponding session exists, or raise an error if the session is missing
list_session returns a list of all existing session objects under the session root
remove_session returns a logical whether the corresponding session has been found and removed
Examples
if(interactive()){
sess <- new_session()
sess$launch_session()
all_sessions <- list_session()
print(all_sessions)
# Use existing session
session_id <- all_sessions[[1]]$session_id
sess <- use_session(session_id)
sess$launch_session()
# Remove session
remove_session(session_id)
list_session()
}
rave-ui-preset Preset reusable front-end components for ’RAVE’ modules
Description
For examples and use cases, please check new_rave_shiny_component_container.
Usage
presets_analysis_electrode_selector2(
id = "electrode_text",
varname = "analysis_electrodes",
label = "Select Electrodes",
loader_project_id = "loader_project_name",
loader_subject_id = "loader_subject_code",
pipeline_repository = "repository",
start_simple = FALSE,
multiple = TRUE
)
presets_analysis_ranges(
id = "analysis_ranges",
varname = "analysis_ranges",
label = "Configure Analysis",
pipeline_repository = "repository",
max_components = 2
)
presets_baseline_choices(
id = "baseline_choices",
varname = "baseline",
label = "Baseline Settings",
pipeline_repository = "repository",
baseline_choices = c("Decibel", "% Change Power", "% Change Amplitude",
"z-score Power", "z-score Amplitude"),
baseline_along_choices = c("Per frequency, trial, and electrode", "Across electrode",
"Across trial", "Across trial and electrode")
)
presets_condition_groups(
id = "condition_groups",
varname = "condition_groups",
label = "Create Condition Contrast",
pipeline_repository = "repository"
)
presets_import_export_subject_pipeline(
id = "im_ex_pipeline",
loader_project_id = "loader_project_name",
loader_subject_id = "loader_subject_code",
pipeline_repository = "repository",
settings_entries = c("loaded_electrodes", "epoch_choice", "epoch_choice__trial_starts",
"epoch_choice__trial_ends", "reference_name"),
fork_mode = c("exclude", "include")
)
presets_import_setup_blocks(
id = "import_blocks",
label = "Format & session blocks",
import_setup_id = "import_setup",
max_components = 5
)
presets_import_setup_channels(
id = "import_channels",
label = "Channel information",
import_setup_id = "import_setup",
import_blocks_id = "import_blocks"
)
presets_import_setup_native(
id = "import_setup",
label = "Select project & subject"
)
presets_loader_3dviewer(
id = "loader_3d_viewer",
height = "600px",
loader_project_id = "loader_project_name",
loader_subject_id = "loader_subject_code",
loader_reference_id = "loader_reference_name",
loader_electrodes_id = "loader_electrode_text",
gadgets = c("standalone", "download")
)
presets_loader_3dviewer2(
id = "loader_3d_viewer",
height = "600px",
loader_project_id = "loader_project_name",
loader_subject_id = "loader_subject_code",
loader_electrodes_id = "loader_electrode_text",
gadgets = c("standalone", "download")
)
presets_loader_electrodes(
id = "loader_electrode_text",
varname = "loaded_electrodes",
label = "Electrodes",
loader_project_id = "loader_project_name",
loader_subject_id = "loader_subject_code"
)
presets_loader_epoch(
id = "loader_epoch_name",
varname = "epoch_choice",
label = "Epoch and Trial Duration",
loader_project_id = "loader_project_name",
loader_subject_id = "loader_subject_code"
)
presets_loader_project(
id = "loader_project_name",
varname = "project_name",
label = "Project"
)
presets_loader_reference(
id = "loader_reference_name",
varname = "reference_name",
label = "Reference name",
loader_project_id = "loader_project_name",
loader_subject_id = "loader_subject_code",
mode = c("default", "create")
)
presets_loader_subject(
id = "loader_subject_code",
varname = "subject_code",
label = "Subject",
loader_project_id = "loader_project_name",
checks = c("notch", "wavelet")
)
presets_loader_subject_only(
id = "loader_subject_code",
varname = "subject_code",
label = "Subject",
multiple = FALSE
)
presets_loader_sync_project_subject(
id = "loader_sync_project_subject",
label = "Sync subject from most recently loaded",
varname = "loader_sync_project_subject",
loader_project_id = "loader_project_name",
loader_subject_id = "loader_subject_code",
from_module = NULL,
project_varname = "project_name",
subject_varname = "subject_code"
)
Arguments
id input or output ID of the element; this ID will be prepended with module names-
pace
varname variable name(s) in the module’s settings file
label readable label(s) of the element
loader_project_id
the ID of presets_loader_project if different to the default
loader_subject_id
the ID of presets_loader_subject if different to the default
pipeline_repository
the pipeline name that represents the ’RAVE’ repository from functions such as
prepare_subject_bare, prepare_subject_with_epoch, and prepare_subject_power
start_simple whether to start in simple view and hide optional inputs
multiple whether to allow multiple inputs
max_components maximum number of components for compound inputs
baseline_choices
the possible approaches to calculate baseline
baseline_along_choices
the units of baseline
settings_entries
used when importing pipelines, pipeline variable names to be included or ex-
cluded, depending on fork_mode
fork_mode 'exclude' (default) or 'include'; in 'exclude' mode, settings_entries
will be excluded from the pipeline settings; in 'include' mode, only settings_entries
can be imported.
import_setup_id
the ID of presets_import_setup_native if different to the default
import_blocks_id
the ID of presets_import_setup_blocks if different to the default
height height of the element
loader_reference_id
the ID of presets_loader_reference if different to the default
loader_electrodes_id
the ID of presets_loader_electrodes if different to the default
gadgets gadget types to include; see type argument in function output_gadget
mode whether to create new reference, or simply to choose from existing references
checks whether to check if subject has been applied with ’Notch’ filters or ’Wavelet’;
default is both.
from_module which module to extract input settings
project_varname, subject_varname
variable names that should be extracted from the settings file
Value
A 'RAVEShinyComponent' instance.
See Also
new_rave_shiny_component_container
ravedash_footer A hovering footer at bottom-right
Description
Internally used. Do not call explicitly
Usage
ravedash_footer(
module_id = NULL,
label = "Run Analysis",
auto_recalculation = TRUE,
class = NULL,
style = NULL
)
Arguments
module_id ’RAVE’ module ID
label run-analysis button label; default is "Run Analysis"
auto_recalculation
whether to show the automatic calculation button; default is true
class additional class for the footer
style additional style for the footer
Value
’HTML’ tags
Examples
library(shiny)
# dummy variables for the example
data_loaded <- TRUE
# UI code
ravedash_footer("my_module")
# server code to set message
server <- function(input, output, session){
module_server_common(input, output, session, function(){
# check if data has been loaded
if(data_loaded) {
# if yes, then set the footer message
fire_rave_event("loader_message",
"my_project/subject - Epoch: Auditory")
return(TRUE)
} else {
# No data found, unset the footer message
fire_rave_event("loader_message", NULL)
return(FALSE)
}
})
}
register_output Register output and output options
Description
Enable advanced output gadgets such as expanding the output in another browser window, or down-
loading the rendered data.
Usage
register_output_options(
outputId,
...,
.opt = list(),
extras = list(),
session = shiny::getDefaultReactiveDomain()
)
get_output_options(outputId, session = shiny::getDefaultReactiveDomain())
register_output(
render_function,
outputId,
export_type = c("none", "custom", "pdf", "csv", "3dviewer", "htmlwidget"),
export_settings = list(),
quoted = FALSE,
output_opts = list(),
session = shiny::getDefaultReactiveDomain()
)
get_output(outputId, session = shiny::getDefaultReactiveDomain())
Arguments
outputId output ID in the scope of current shiny session
..., output_opts, .opt
output options
extras extra information to store
session shiny session instance
render_function
shiny render function
export_type type of export file formats supported, options are 'none' (do not export), 'custom',
'pdf' (for figures), 'csv' (for tables), '3dviewer' (for ’RAVE’ 3D viewers),
'htmlwidget' (for ’HTML’ widgets).
export_settings
a list of settings, depending on export type; see ’Details’.
quoted whether render_function is quoted; default is false
Details
Default shiny output does not provide handlers for downloading the figures or data, and is often
limited to the ’HTML’ layouts. ’RAVE’ dashboard provides such mechanisms automatically with
a few extra configurations.
Value
Registered output or output options.
Examples
if(interactive()) {
library(shiny)
library(ravedash)
rave_id <- paste(sample(c(letters, LETTERS, 0:9), 20, replace = TRUE),
collapse = "")
ui <- function(req) {
query_string <- req$QUERY_STRING
if(length(query_string) != 1) {
query_string <- "/"
}
query_result <- httr::parse_url(query_string)
if(!identical(toupper(query_result$query$standalone), "TRUE")) {
# normal page
basicPage(
output_gadget_container(
plotOutput("plot", brush = shiny::brushOpts("plot__brush")),
)
)
} else {
# standalone viewer
uiOutput("viewer")
}
}
server <- function(input, output, session) {
bindEvent(
safe_observe({
query_string <- session$clientData$url_search
query_result <- httr::parse_url(query_string)
if(!identical(toupper(query_result$query$module), "standalone_viewer")) {
# normal page
register_rave_session(session = session, .rave_id = rave_id)
register_output(
renderPlot({
plot(1:100, pch = 16)
}),
outputId = "plot", export_type = "pdf",
output_opts = list(brush = shiny::brushOpts("plot__brush"))
)
output$plot <- renderPlot({
input$btn
plot(rnorm(100), pch = 16)
})
} else {
# standalone viewer
standalone_viewer(outputId = "plot", rave_id = rave_id)
}
}),
session$clientData$url_search
)
}
shinyApp(ui, server, options = list(port = 8989))
}
run_analysis_button Button to trigger analysis
Description
A button that triggers 'run_analysis' event; see also get_rave_event
Usage
run_analysis_button(
label = "Run analysis (Ctrl+Enter)",
icon = NULL,
width = NULL,
type = "primary",
btn_type = c("button", "link"),
class = "",
style = "",
...
)
Arguments
label label to display
icon icon before the label
width, class, style, ...
passed to ’HTML’ tag
type used to calculate class
btn_type button style, choices are 'button' or 'link'
Value
A ’HTML’ button tag
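A minimal sketch (not part of the original manual): placing the button in the footer of an input card; the card title and contents are purely illustrative.
input_card(
title = "Analysis settings",
"Inputs go here",
footer = run_analysis_button(label = "Run analysis (Ctrl+Enter)", btn_type = "button")
)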
safe_observe Safe-wrapper of ’shiny’ observe function
Description
Safely wrap expression x such that the shiny application does not hang when the expression raises an error.
Usage
safe_observe(x, env = NULL, quoted = FALSE, priority = 0L, domain = NULL, ...)
Arguments
x, env, quoted, priority, domain, ...
passed to observe
Value
’shiny’ observer instance
Examples
values <- shiny::reactiveValues(A=1)
obsB <- safe_observe({
print(values$A + 1)
})
shiny_icons Shiny icons
Description
Shiny icons
Usage
shiny_icons
Format
An object of class ravedash_shiny_icons of length 0.
Details
The goal of creating this list is to keep ’shiny’ icons (which are essentially ’font-awesome’ icons)
up-to-date.
simple_layout Simple input-output layout
Description
Provides simple layout, with inputs on the left, and outputs on the right. Only useful in 'shidashi'
framework.
Usage
simple_layout(
input_ui,
output_ui,
input_width = 4L,
container_fixed = FALSE,
container_style = NULL,
scroll = FALSE
)
Arguments
input_ui the ’HTML’ tags for the inputs
output_ui the ’HTML’ tags for the outputs
input_width width of inputs, must be an integer from 1 to 11
container_fixed
whether the maximum width of the container should be fixed; default is no
container_style
additional ’CSS’ style of the container
scroll whether to stretch the container to full height and scroll the input and output
separately.
Value
’HTML’ tags
Examples
library(shiny)
library(ravedash)
simple_layout(
input_ui = list(
ravedash::input_card(
title = "Data Selection",
"Add inputs here"
)
),
output_ui = list(
ravedash::output_card(
title = "Result A",
"Add outputs here"
)
)
)
standalone_viewer Register shiny-output options to allow display in stand-alone viewers
Description
Save the output options such that the additional configurations can be used by stand-alone viewer
Usage
standalone_viewer(
outputId,
module_session,
rave_id,
session = shiny::getDefaultReactiveDomain(),
wrapper_id = "viewer"
)
Arguments
outputId the full shiny output ID
module_session the module shiny session; if not provided, then the session will be inferred by
rave_id
rave_id the unique identification key for ’RAVE’ module sessions, can be obtained via
get_active_module_info
session shiny session object
wrapper_id the wrapping render ID, default is "viewer"
Details
’RAVE’ dashboard provides powerful stand-alone viewers where users can display almost any out-
puts from other modules and interact with these viewers while sending messages back.
Value
nothing
Examples
if(interactive()) {
library(shiny)
library(ravedash)
rave_id <- paste(sample(c(letters, LETTERS, 0:9), 20, replace = TRUE),
collapse = "")
ui <- function(req) {
query_string <- req$QUERY_STRING
if(length(query_string) != 1) {
query_string <- "/"
}
query_result <- httr::parse_url(query_string)
if(!identical(toupper(query_result$query$standalone), "TRUE")) {
# normal page
basicPage(
actionButton("btn", "Click Me"),
plotOutput("plot")
)
} else {
# standalone viewer
uiOutput("viewer")
}
}
server <- function(input, output, session) {
bindEvent(
safe_observe({
query_string <- session$clientData$url_search
query_result <- httr::parse_url(query_string)
if(!identical(toupper(query_result$query$standalone), "TRUE")) {
# normal page
register_rave_session(session = session, .rave_id = rave_id)
output$plot <- renderPlot({
input$btn
plot(rnorm(100), pch = 16)
})
} else {
# standalone viewer
standalone_viewer(outputId = "plot", rave_id = rave_id)
}
}),
session$clientData$url_search
)
}
shinyApp(ui, server, options = list(port = 8989))
# Now open http://127.0.0.1:8989/?standalone=TRUE
}
temp_file Create a random temporary file path for current session
Description
Create a random temporary file path for current session
Usage
temp_file(
pattern = "file",
fileext = "",
persist = c("process", "app-session", "package-cache")
)
temp_dir(check = FALSE, persist = c("process", "app-session", "package-cache"))
Arguments
pattern, fileext
see tempfile
persist persist level; choices are 'process' (default), 'app-session' (’RAVE’ application
session), and 'package-cache' (package-level cache directory); see ’Details’.
check whether to create the temporary directory
Details
R default tempdir usually gets removed once the R process ends. This behavior might not meet
all the needs for ’RAVE’ modules. For example, some data are ’RAVE’ session-based, like current
or last visited subject, project, or state data (like bookmarks, configurations). This session-based
information will be useful when launching the same ’RAVE’ instance next time, hence should not
be removed when users close R. Other data, such as subject-related, or package-related should last
even longer. These types of data may be cache of subject power, package-generated color schemes,
often irrelevant from R or ’RAVE’ sessions, and can be shared across different ’RAVE’ instances.
The default scheme is persist='process'. Under this mode, this function behaves the same as
tempfile. To store data in ’RAVE’ session-based manner, please use persist='app-session'.
The actual path will be inside of ’RAVE’ session folder, hence this option is valid only if ’RAVE’ in-
stance is running. When ’RAVE’ instance is not running, the result falls back to persist='process'.
To cache larger and session-irrelevant data, use 'package-cache'.
The ’RAVE’ session and package cache are not cleared even when R process ends. Users need
to clean the data by themselves. See remove_session or remove_all_sessions about removing
session-based folders, or clear_cached_files to remove package-based cache.
Value
A file or a directory path to persist temporary data cache
Examples
temp_dir()
temp_dir(persist = "package-cache") |
react-recaptcha | npm | JavaScript | [react](http://facebook.github.io/react/)-recaptcha
===
A [react.js](http://facebook.github.io/react/) reCAPTCHA for Google. The FREE anti-abuse service. Easy to add, advanced security, accessible to a wide range of users and platforms.
What is reCAPTCHA?
===
reCAPTCHA is a free service that protects your site from spam and abuse. It uses an advanced risk analysis engine to tell humans and bots apart. With the new API, a significant number of your valid human users will pass the reCAPTCHA challenge without having to solve a CAPTCHA (see the blog for more details). reCAPTCHA comes in the form of a widget that you can easily add to your blog, forum, registration form, etc.
See [the details](https://www.google.com/recaptcha/intro/index.html).
Sign up for an API key pair
===
To use reCAPTCHA, you need to [sign up for an API key pair](http://www.google.com/recaptcha/admin) for your site. The key pair consists of a site key and secret. The site key is used to display the widget on your site. The secret authorizes communication between your application backend and the reCAPTCHA server to verify the user's response. The secret needs to be kept safe for security purposes.
Installation
===
Install package via [node.js](http://nodejs.org/)
```
$ npm install --save react-recaptcha
```
Usage
===
You can see the [full example](https://github.com/appleboy/react-recaptcha/blob/HEAD/example) by following these steps.
```
$ npm install
$ npm start
```
Open `http://localhost:3000` in your browser.
Node support
===
Node >= v6 is required for this package. Run `node -v` in your command prompt if you're unsure which Node version you have installed.
### Automatically render the reCAPTCHA widget
Html example code:
```
<html>
  <head>
    <title>reCAPTCHA demo: Simple page</title>
    <script src="build/react.js"></script>
    <script src="https://www.google.com/recaptcha/api.js" async defer></script>
  </head>
  <body>
    <div id="example"></div>
    <script src="build/index.js"></script>
  </body>
</html>
```
Jsx example code: `build/index.js`
```
var Recaptcha = require('react-recaptcha');

ReactDOM.render(
  <Recaptcha
    sitekey="<KEY>"
  />,
  document.getElementById('example')
);
```
### Explicitly render the reCAPTCHA widget
Deferring the render can be achieved by specifying your onload callback function and adding parameters to the JavaScript resource.
```
<html>
  <head>
    <title>reCAPTCHA demo: Simple page</title>
    <script src="build/react.js"></script>
    <script src="https://www.google.com/recaptcha/api.js?onload=onloadCallback&render=explicit" async defer></script>
  </head>
  <body>
    <div id="example"></div>
    <script src="build/index.js"></script>
  </body>
</html>
```
Jsx example code: `build/index.js`
```
var Recaptcha = require('react-recaptcha');

// specifying your onload callback function
var callback = function () {
  console.log('Done!!!!');
};

ReactDOM.render(
  <Recaptcha
    sitekey="<KEY>"
    render="explicit"
    onloadCallback={callback}
  />,
  document.getElementById('example')
);
```
Define the verify callback function
```
var Recaptcha = require('react-recaptcha');

// specifying your onload callback function
var callback = function () {
  console.log('Done!!!!');
};

// specifying verify callback function
var verifyCallback = function (response) {
  console.log(response);
};

ReactDOM.render(
  <Recaptcha
    sitekey="<KEY>"
    render="explicit"
    verifyCallback={verifyCallback}
    onloadCallback={callback}
  />,
  document.getElementById('example')
);
```
Change the color theme of the widget with the `theme` property (`light|dark`). Default value is `light`.
```
ReactDOM.render(
  <Recaptcha
    sitekey="<KEY>"
    theme="dark"
  />,
  document.getElementById('example')
);
```
Change the type of CAPTCHA to serve with the `type` property (`audio|image`). Default value is `image`.
```
ReactDOM.render(
  <Recaptcha
    sitekey="<KEY>"
    type="audio"
  />,
  document.getElementById('example')
);
```
### Explicitly reset the reCAPTCHA widget
The reCAPTCHA widget can be manually reset by accessing the component instance via a callback ref and calling `.reset()` on the instance.
```
var Recaptcha = require('react-recaptcha');

// create a variable to store the component instance
let recaptchaInstance;

// create a reset function
const resetRecaptcha = () => {
  recaptchaInstance.reset();
};

ReactDOM.render(
  <div>
    <Recaptcha
      ref={e => recaptchaInstance = e}
      sitekey="<KEY>"
    />
    <button onClick={resetRecaptcha}>
      Reset
    </button>
  </div>,
  document.getElementById('example')
);
```
Component props
---
### Available props
The following props can be passed into the React reCAPTCHA component. These can also be viewed in the [source code](https://github.com/appleboy/react-recaptcha/blob/master/src/index.js#L4-L21)
* `className` : the class for the reCAPTCHA div.
* `onloadCallbackName` : the name of your onloadCallback function (see `onloadCallback` below).
* `elementID` : the #id for the reCAPTCHA div.
* `onloadCallback` : the callback to pass into the reCAPTCHA API if [rendering the reCAPTCHA explicitly](https://github.com/appleboy/react-recaptcha#explicitly-render-the-recaptcha-widget).
* `verifyCallback` : the callback that fires after reCAPTCHA has verified a user.
* `expiredCallback` : optional. A callback to pass into the reCAPTCHA if the reCAPTCHA response has expired.
* `render` : specifies the render type for the component (e.g. explicit), see `onloadCallback` and [explicit rendering](https://github.com/appleboy/react-recaptcha#explicitly-render-the-recaptcha-widget).
* `sitekey` : the sitekey for the reCAPTCHA widget, obtained after signing up for an API key.
* `theme` : the color theme for the widget, either light or dark.
* `type` : the type of reCAPTCHA you'd like to render, list of reCAPTCHA types [available here](https://developers.google.com/recaptcha/docs/versions).
* `verifyCallbackName` : the name of your verifyCallback function, see `verifyCallback` above.
* `expiredCallbackName` : the name of your expiredCallbackName function, see `expiredCallback` above.
* `size` : the desired size of the reCAPTCHA widget, can be either 'compact' or 'normal'.
* `tabindex` : optional: The tabindex of the widget and challenge. If other elements in your page use tabindex, it should be set to make user navigation easier. More info on tabindex [available here](https://developer.mozilla.org/en-US/docs/Web/HTML/Global_attributes/tabindex).
* `hl` : optional. Forces the widget to render in a specific language. Auto-detects the user's language if unspecified. List of language codes [available here](https://developers.google.com/recaptcha/docs/language).
* `badge` : optional. Reposition the reCAPTCHA badge. 'inline' allows you to control the CSS.
### Default props
If not specified when rendering the component, the following props will be passed into the reCAPTCHA widget:
```
{
  elementID: 'g-recaptcha',
  onloadCallback: undefined,
  onloadCallbackName: 'onloadCallback',
  verifyCallback: undefined,
  verifyCallbackName: 'verifyCallback',
  expiredCallback: undefined,
  expiredCallbackName: 'expiredCallback',
  render: 'onload',
  theme: 'light',
  type: 'image',
  size: 'normal',
  tabindex: '0',
  hl: 'en',
  badge: 'bottomright',
};
```
### Using invisible reCAPTCHA
Use the invisible reCAPTCHA by setting the `size` prop to 'invisible'. Since it is invisible, the reCAPTCHA widget must be executed programmatically.
```
var Recaptcha = require('react-recaptcha');

// create a variable to store the component instance
let recaptchaInstance;

// manually trigger reCAPTCHA execution
const executeCaptcha = function () {
  recaptchaInstance.execute();
};

// executed once the captcha has been verified
// can be used to post forms, redirect, etc.
const verifyCallback = function (response) {
  console.log(response);
  document.getElementById("someForm").submit();
};

ReactDOM.render(
  <div>
    <form id="someForm" action="/search" method="get">
      <input type="text" name="query" />
    </form>
    <button onClick={executeCaptcha}>
      Submit
    </button>
    <Recaptcha
      ref={e => recaptchaInstance = e}
      sitekey="<KEY>"
      size="invisible"
      verifyCallback={verifyCallback}
    />
  </div>,
  document.getElementById('example')
);
```
Contributing
===
1. Fork it
2. Create your feature branch (git checkout -b my-new-feature)
3. Commit your changes (git commit -am 'Add some feature')
4. Push to the branch (git push origin my-new-feature)
5. Create new Pull Request
Readme
---
### Keywords
* react
* react-component
* reCAPTCHA
* component |
macaddress | readthedoc | Unknown | macaddress
Jul 25, 2021
Contents:
1 Introduction
2 Installing macaddress
3 Using macaddress
4 Patterns for macaddress
5 Testing macaddress
6 Other languages
7 macaddress
8 Indices
Python Module Index
Index
CHAPTER 1
Introduction

Media access control (MAC) addresses play an important role in local-area networks. They also pack a lot of information into 48-bit hexadecimal strings!

The macaddress library makes it easy to evaluate the properties of MAC addresses and the extended identifiers of which they are subclasses.
CHAPTER 2
Installing macaddress

macaddress is available on GitHub at https://github.com/critical-path/macaddress.

If you do not have pip version 18.1 or higher, then run the following command from your shell.

[user@host ~]$ sudo pip install --upgrade pip

To install macaddress with test-related dependencies, run the following command from your shell.

[user@host ~]$ sudo pip install --editable git+https://github.com/critical-path/macaddress.git#egg=macaddress[test]
To install it without test-related dependencies, run the following command from your shell.
[user@host ~]$ sudo pip install git+https://github.com/critical-path/macaddress.git
(If necessary, replace pip with pip3.)
CHAPTER 3
Using macaddress

While macaddress contains multiple classes, the only one with which you need to interact directly is MediaAccessControlAddress.
Import MediaAccessControlAddress.
>>> from macaddress import MediaAccessControlAddress

Instantiate MediaAccessControlAddress by passing in a MAC address in plain, hyphen, colon, or dot notation.
>>> mac = MediaAccessControlAddress("a0b1c2d3e4f5")
>>> mac = MediaAccessControlAddress("a0-b1-c2-d3-e4-f5")
>>> mac = MediaAccessControlAddress("a0:b1:c2:d3:e4:f5")
>>> mac = MediaAccessControlAddress("a0b1.c2d3.e4f5")
To determine whether the MAC address is a broadcast, a multicast (layer-two), or a unicast address, access its is_broadcast, is_multicast, and is_unicast properties.
>>> print(mac.is_broadcast)
False
>>> print(mac.is_multicast)
False
>>> print(mac.is_unicast)
True

To determine whether the MAC address is a universally-administered address (UAA) or a locally-administered address
(LAA), access its is_uaa and is_laa properties.
>>> print(mac.is_uaa)
True
>>> print(mac.is_laa)
False

To work with the MAC address’s octets, access its octets property, which contains six Octet objects.
>>> print(mac.octets)
[Octet('a0'), Octet('b1'), Octet('c2'), Octet('d3'), Octet('e4'), Octet('f5')]
To determine whether the MAC address is an extended unique identifier (EUI), an extended local identifier (ELI), or unknown, access its type property.
>>> print(mac.type)
unique

To determine whether the MAC address has an organizationally-unique identifier (OUI) or a company ID (CID), access its has_oui and has_cid properties.
>>> print(mac.has_oui)
True
>>> print(mac.has_cid)
False

To view the decimal equivalent of the MAC address, access its decimal property.
>>> print(mac.decimal)
176685338322165

To view the binary equivalent of the MAC address, access its binary and reverse_binary properties. With binary, the most-significant digit of each octet appears first. With reverse_binary, the least-significant digit of each octet appears first.
>>> print(mac.binary)
101000001011000111000010110100111110010011110101
>>> print(mac.reverse_binary)
000001011000110101000011110010110010011110101111

To return the MAC address’s two “fragments,” call the to_fragments method. For an EUI, this means the 24-bit OUI as the first fragment and the remaining interface-specific bits as the second fragment. For an ELI, this means the 24-bit CID as the first fragment and the remaining interface-specific bits as the second fragment.
>>> fragments = mac.to_fragments()
>>> print(fragments)
('a0b1c2', 'd3e4f5')
To return the MAC address in different notations, call the to_plain_notation, to_hyphen_notation,
to_colon_notation, and to_dot_notation methods.
>>> plain = mac.to_plain_notation()
>>> print(plain)
a0b1c2d3e4f5
>>> hyphen = mac.to_hyphen_notation()
>>> print(hyphen)
a0-b1-c2-d3-e4-f5
>>> colon = mac.to_colon_notation()
>>> print(colon)
a0:b1:c2:d3:e4:f5
>>> dot = mac.to_dot_notation()
>>> print(dot)
a0b1.c2d3.e4f5
CHAPTER 4
Patterns for macaddress

4.1 Create a range of MAC addresses
# Import `pprint.pprint` and `macaddress.MediaAccessControlAddress`.
>>> from pprint import pprint
>>> from macaddress import MediaAccessControlAddress
# Identify the start and end of the range.
>>> start_mac = MediaAccessControlAddress("a0b1c2d3e4f5")
>>> end_mac = MediaAccessControlAddress("a0b1c2d3e4ff")
# Create a list containing one `MediaAccessControlAddress` object
# for each address in the range.
>>> mac_range = [
... MediaAccessControlAddress(format(decimal, "x"))
... for decimal in range(start_mac.decimal, end_mac.decimal + 1)
... ]
# Do something useful with the results, such as returning
# the colon notation of each MAC address in the list.
>>> colons = [
... mac.to_colon_notation() for mac in mac_range
... ]
>>> pprint(colons)
["a0:b1:c2:d3:e4:f5",
"a0:b1:c2:d3:e4:f6",
"a0:b1:c2:d3:e4:f7",
"a0:b1:c2:d3:e4:f8",
"a0:b1:c2:d3:e4:f9",
"a0:b1:c2:d3:e4:fa",
"a0:b1:c2:d3:e4:fb",
"a0:b1:c2:d3:e4:fc",
"a0:b1:c2:d3:e4:fd",
"a0:b1:c2:d3:e4:fe",
"a0:b1:c2:d3:e4:ff"]
4.2 Map-reduce a list of MAC addresses
# Import `functools.reduce`, `pprint.pprint`, and
# `macaddress.MediaAccessControlAddress`.
>>> from functools import reduce
>>> from pprint import pprint
>>> from macaddress import MediaAccessControlAddress
# Define `transform`, which is our map function.
>>> def transform(mac, attributes):
... transformed = {}
... transformed[mac.normalized] = {}
... for attribute in attributes:
... transformed[mac.normalized][attribute] = getattr(mac, attribute)
... return transformed
...
# Define `fold`, which is our reduce function.
>>> def fold(current_mac, next_mac):
... for key, value in next_mac.items():
... if key in current_mac:
... pass
... else:
... current_mac[key] = value
... return current_mac
...
# Define `map_reduce`, which calls `functools.reduce`, `transform`, and `fold`.
>>> def map_reduce(macs, attributes):
... return reduce(fold, [transform(mac, attributes) for mac in macs])
...
# Identify addresses of interest.
>>> addresses = [
... "a0:b1:c2:d3:e4:f5",
... "a0:b1:c2:d3:e4:f6",
... "a0:b1:c2:d3:e4:f7",
... "a0:b1:c2:d3:e4:f8",
... "a0:b1:c2:d3:e4:f9",
... "a0:b1:c2:d3:e4:fa",
... "a0:b1:c2:d3:e4:fb",
... "a0:b1:c2:d3:e4:fc",
... "a0:b1:c2:d3:e4:fd",
... "a0:b1:c2:d3:e4:fe",
... "a0:b1:c2:d3:e4:ff"
... ]
# Create a list containing one `MediaAccessControlAddress` object
# for each address of interest.
>>> macs = [
... MediaAccessControlAddress(address) for address in addresses
... ]
# Create a list with attributes of interest.
>>> attributes = [
... "is_unicast",
... "is_uaa"
... ]
# Call `map_reduce`, passing in the lists of `MediaAccessControlAddress`
# objects and attributes.
>>> mapped_reduced = map_reduce(macs, attributes)
>>> pprint(mapped_reduced)
{"a0b1c2d3e4f5": {"is_uaa": True, "is_unicast": True},
"a0b1c2d3e4f6": {"is_uaa": True, "is_unicast": True},
"a0b1c2d3e4f7": {"is_uaa": True, "is_unicast": True},
"a0b1c2d3e4f8": {"is_uaa": True, "is_unicast": True},
"a0b1c2d3e4f9": {"is_uaa": True, "is_unicast": True},
"a0b1c2d3e4fa": {"is_uaa": True, "is_unicast": True},
"a0b1c2d3e4fb": {"is_uaa": True, "is_unicast": True},
"a0b1c2d3e4fc": {"is_uaa": True, "is_unicast": True},
"a0b1c2d3e4fd": {"is_uaa": True, "is_unicast": True},
"a0b1c2d3e4fe": {"is_uaa": True, "is_unicast": True},
"a0b1c2d3e4ff": {"is_uaa": True, "is_unicast": True}}
4.3 Serialize the attributes of a MAC address
# Import `json.dumps`.
>>> from json import dumps
# Identify the addresses and attributes of interest.
>>> unserialized = {
... "a0b1c2d3e4f5": {"is_uaa": True, "is_unicast": True},
... "a0b1c2d3e4f6": {"is_uaa": True, "is_unicast": True},
... "a0b1c2d3e4f7": {"is_uaa": True, "is_unicast": True},
... "a0b1c2d3e4f8": {"is_uaa": True, "is_unicast": True},
... "a0b1c2d3e4f9": {"is_uaa": True, "is_unicast": True},
... "a0b1c2d3e4fa": {"is_uaa": True, "is_unicast": True},
... "a0b1c2d3e4fb": {"is_uaa": True, "is_unicast": True},
... "a0b1c2d3e4fc": {"is_uaa": True, "is_unicast": True},
... "a0b1c2d3e4fd": {"is_uaa": True, "is_unicast": True},
... "a0b1c2d3e4fe": {"is_uaa": True, "is_unicast": True},
... "a0b1c2d3e4ff": {"is_uaa": True, "is_unicast": True}
... }
# Call `json.dumps` on the unserialized addresses.
>>> serialized = dumps(unserialized, indent=2)
>>> print(serialized)
{
"a0b1c2d3e4f5": {
"is_uaa": true,
"is_unicast": true
},
"a0b1c2d3e4f6": {
"is_uaa": true,
"is_unicast": true
},
"a0b1c2d3e4f7": {
"is_uaa": true,
"is_unicast": true
},
"a0b1c2d3e4f8": {
"is_uaa": true,
"is_unicast": true
},
"a0b1c2d3e4f9": {
"is_uaa": true,
"is_unicast": true
},
"a0b1c2d3e4fa": {
"is_uaa": true,
"is_unicast": true
},
"a0b1c2d3e4fb": {
"is_uaa": true,
"is_unicast": true
},
"a0b1c2d3e4fc": {
"is_uaa": true,
"is_unicast": true
},
"a0b1c2d3e4fd": {
"is_uaa": true,
"is_unicast": true
},
"a0b1c2d3e4fe": {
"is_uaa": true,
"is_unicast": true
},
"a0b1c2d3e4ff": {
"is_uaa": true,
"is_unicast": true
}
}
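# The unserialized mapping can also be built directly from
# `MediaAccessControlAddress` objects instead of being written by hand.
# A minimal sketch, reusing the `macs` and `attributes` lists from the
# map-reduce pattern above.
>>> built = {
... mac.normalized: {
... attribute: getattr(mac, attribute) for attribute in attributes
... }
... for mac in macs
... }
>>> print(dumps(built["a0b1c2d3e4f5"]))
{"is_unicast": true, "is_uaa": true}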
CHAPTER 5
Testing macaddress
To conduct testing, run the following commands from your shell.
[user@host macaddress]$ flake8 --count --ignore E125 macaddress
[user@host macaddress]$ pytest --cov --cov-report=term-missing
CHAPTER 6
Other languages
The macaddress library is also available in the following languages:
• JavaScript
• Ruby
• Rust
CHAPTER 7
macaddress
7.1 macaddress package
7.1.1 macaddress.macaddress module
This module includes MediaAccessControlAddress and AddressError.
exception macaddress.macaddress.AddressError
Bases: Exception
MediaAccessControlAddress raises AddressError if instantiated with an invalid argument.
Parameters message (str) – A human-readable error message.
class macaddress.macaddress.MediaAccessControlAddress(address)
Bases: macaddress.ei48.ExtendedIdentifier48
MediaAccessControlAddress makes it easy to work with media access control (MAC) addresses.
is_broadcast
Whether the MAC address is a broadcast address.
“ffffffffffff” = broadcast.
Type bool
is_multicast
Whether the MAC address is a multicast address (layer-two multicast, not layer-three multicast).
The least-significant bit in the first octet of a MAC address determines whether it is a multicast or a unicast.
1 = multicast.
Type bool
is_unicast
Whether the MAC address is a unicast address.
The least-significant bit in the first octet of a MAC address determines whether it is a multicast or a unicast.
0 = unicast.
Type bool
is_uaa
Whether the MAC address is a universally-administered address (UAA).
The second-least-significant bit in the first octet of a MAC address determines whether it is a UAA or an
LAA.
0 = UAA.
Type bool
is_laa
Whether the MAC address is a locally-administered address (LAA).
The second-least-significant bit in the first octet of a MAC address determines whether it is a UAA or an
LAA.
1 = LAA.
Type bool
7.1.2 macaddress.ei48 module
This module includes ExtendedIdentifier48 and IdentifierError.
exception macaddress.ei48.IdentifierError
Bases: Exception
ExtendedIdentifier48 raises IdentifierError if instantiated with an invalid argument.
Parameters message (str) – A human-readable error message.
class macaddress.ei48.ExtendedIdentifier48(identifier)
Bases: object
ExtendedIdentifier48 makes it easy to work with the IEEE’s 48-bit extended unique identifiers (EUI) and ex-
tended local identifiers (ELI).
The first 24 or 36 bits of an EUI are called an organizationally-unique identifier (OUI), while the first 24 or 36
bits of an ELI are called a company ID (CID).
Visit the IEEE’s website for more information on EUIs and ELIs.
Helpful link: https://standards.ieee.org/products-services/regauth/tut/index.html
original
The hexadecimal identifier passed in by the user.
Type str
normalized
The hexadecimal identifier after replacing all uppercase letters with lowercase letters and removing all
hyphens, colons, and dots.
For example, if the user passes in A0-B1-C2-D3-E4-F5, then ExtendedIdentifier48 will return
a0b1c2d3e4f5.
Type str
is_valid
Whether the user passed in a valid hexadecimal identifier.
Type bool
octets
Each of the hexadecimal identifier’s six octets.
Type list
first_octet
The hexadecimal identifier’s first octet.
Type Octet
type
The hexadecimal identifier’s type, where type is unique, local, or unknown.
The two least-significant bits in the first octet of an extended identifier determine whether it is an EUI.
00 = unique.
The four least-significant bits in the first octet of an extended identifier determine whether it is an ELI.
1010 = local.
Type str
has_oui
Whether the hexadecimal identifier has an OUI.
If the identifier is an EUI, then it has an OUI.
Type bool
has_cid
Whether the hexadecimal identifier has a CID.
If the identifier is an ELI, then it has a CID.
Type bool
decimal
The decimal equivalent of the hexadecimal digits passed in by the user.
For example, if the user passes in A0-B1-C2-D3-E4-F5, then ExtendedIdentifier48 will return
176685338322165.
Type int
binary
The binary equivalent of the hexadecimal identifier passed in by the user. The most-significant digit of
each octet appears first.
For example, if the user passes in A0-B1-C2-D3-E4-F5, then ExtendedIdentifier48 will return
101000001011000111000010110100111110010011110101.
Type str
reverse_binary
The reverse-binary equivalent of the hexadecimal identifier passed in by the user. The least-significant
digit of each octet appears first.
For example, if the user passes in A0-B1-C2-D3-E4-F5, then ExtendedIdentifier48 will return
000001011000110101000011110010110010011110101111.
Type str
Parameters identifier (str) – Twelve hexadecimal digits (0-9, A-F, or a-f).
Raises IdentifierError
to_fragments(bits=24)
Returns the hexadecimal identifier’s two “fragments.”
For an EUI, this means the 24- or 36-bit OUI as the first fragment and the remaining device- or object-
specific bits as the second fragment.
For an ELI, this means the 24- or 36-bit CID as the first fragment and the remaining device- or object-
specific bits as the second fragment.
For example, if the user passes in A0-B1-C2-D3-E4-F5 and calls this method with either bits=24 or no
keyword argument, then ExtendedIdentifier48 will return (a0b1c2, d3e4f5).
If the user passes in A0-B1-C2-D3-E4-F5 and calls this method with bits=36, then ExtendedIdentifier48
will return (a0b1c2d3e, 4f5).
Parameters bits (int) – The number of bits for the OUI or CID.
The default value is 24.
to_plain_notation()
Returns the hexadecimal identifier in plain notation (for example, a0b1c2d3e4f5).
to_hyphen_notation()
Returns the hexadecimal identifier in hyphen notation (for example, a0-b1-c2-d3-e4-f5).
to_colon_notation()
Returns the hexadecimal identifier in colon notation (for example, a0:b1:c2:d3:e4:f5).
to_dot_notation()
Returns the hexadecimal identifier in dot notation (for example, a0b1.c2d3.e4f5).
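A minimal doctest-style sketch tying these attributes and methods together; the values follow the A0-B1-C2-D3-E4-F5 examples given above, and ExtendedIdentifier48 is imported from the package as documented in the module contents below.
>>> from macaddress import ExtendedIdentifier48
>>> identifier = ExtendedIdentifier48("A0-B1-C2-D3-E4-F5")
>>> print(identifier.normalized)
a0b1c2d3e4f5
>>> print(identifier.type)
unique
>>> print(identifier.first_octet.binary)
10100000
>>> print(identifier.to_fragments(bits=36))
('a0b1c2d3e', '4f5')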
7.1.3 macaddress.octet module
This module includes Octet and OctetError.
exception macaddress.octet.OctetError
Bases: Exception
Octet raises OctetError if instantiated with an invalid argument.
Parameters message (str) – A human-readable error message.
class macaddress.octet.Octet(digits)
Bases: object
Octet makes it easy to convert two hexadecimal digits to eight binary or reverse-binary digits.
This is useful when working with the IEEE’s extended unique identifiers and extended local identifiers.
original
The hexadecimal digits passed in by the user.
Type str
normalized
The hexadecimal digits after replacing all uppercase letters with lowercase letters.
For example, if the user passes in A0, then Octet will return a0.
Type str
is_valid
Whether the user passed in valid hexadecimal digits.
Type bool
decimal
The decimal equivalent of the hexadecimal digits passed in by the user.
For example, if the user passes in A0, then Octet will return 160.
Type int
binary
The binary equivalent of the hexadecimal digits passed in by the user. The most-significant digit appears
first.
For example, if the user passes in A0, then Octet will return 10100000.
Type str
reverse_binary
The reverse-binary equivalent of the hexadecimal digits passed in by the user. The least-significant digit
appears first.
For example, if the user passes in A0, then Octet will return 00000101.
Type str
Parameters digits (str) – Two hexadecimal digits (0-9, A-F, or a-f).
Raises OctetError
7.1.4 Module contents
The macaddress library makes it easy to work with media access control (MAC) addresses.
class macaddress.ExtendedIdentifier48(identifier)
Bases: object
ExtendedIdentifier48 makes it easy to work with the IEEE’s 48-bit extended unique identifiers (EUI) and ex-
tended local identifiers (ELI).
The first 24 or 36 bits of an EUI are called an organizationally-unique identifier (OUI), while the first 24 or 36
bits of an ELI are called a company ID (CID).
Visit the IEEE’s website for more information on EUIs and ELIs.
Helpful link: https://standards.ieee.org/products-services/regauth/tut/index.html
original
The hexadecimal identifier passed in by the user.
Type str
normalized
The hexadecimal identifier after replacing all uppercase letters with lowercase letters and removing all
hyphens, colons, and dots.
For example, if the user passes in A0-B1-C2-D3-E4-F5, then ExtendedIdentifier48 will return
a0b1c2d3e4f5.
Type str
is_valid
Whether the user passed in a valid hexadecimal identifier.
Type bool
octets
Each of the hexadecimal identifier’s six octets.
Type list
first_octet
The hexadecimal identifier’s first octet.
Type Octet
type
The hexadecimal identifier’s type, where type is unique, local, or unknown.
The two least-significant bits in the first octet of an extended identifier determine whether it is an EUI.
00 = unique.
The four least-significant bits in the first octet of an extended identifier determine whether it is an ELI.
1010 = local.
Type str
has_oui
Whether the hexadecimal identifier has an OUI.
If the identifier is an EUI, then it has an OUI.
Type bool
has_cid
Whether the hexadecimal identifier has a CID.
If the identifier is an ELI, then it has a CID.
Type bool
decimal
The decimal equivalent of the hexadecimal digits passed in by the user.
For example, if the user passes in A0-B1-C2-D3-E4-F5, then ExtendedIdentifier48 will return
176685338322165.
Type int
binary
The binary equivalent of the hexadecimal identifier passed in by the user. The most-significant digit of
each octet appears first.
For example, if the user passes in A0-B1-C2-D3-E4-F5, then ExtendedIdentifier48 will return
101000001011000111000010110100111110010011110101.
Type str
reverse_binary
The reverse-binary equivalent of the hexadecimal identifier passed in by the user. The least-significant
digit of each octet appears first.
For example, if the user passes in A0-B1-C2-D3-E4-F5, then ExtendedIdentifier48 will return
000001011000110101000011110010110010011110101111.
Type str
Parameters identifier (str) – Twelve hexadecimal digits (0-9, A-F, or a-f).
Raises IdentifierError
to_fragments(bits=24)
Returns the hexadecimal identifier’s two “fragments.”
For an EUI, this means the 24- or 36-bit OUI as the first fragment and the remaining device- or object-
specific bits as the second fragment.
For an ELI, this means the 24- or 36-bit CID as the first fragment and the remaining device- or object-
specific bits as the second fragment.
For example, if the user passes in A0-B1-C2-D3-E4-F5 and calls this method with either bits=24 or no
keyword argument, then ExtendedIdentifier48 will return (a0b1c2, d3e4f5).
If the user passes in A0-B1-C2-D3-E4-F5 and calls this method with bits=36, then ExtendedIdentifier48
will return (a0b1c2d3e, 4f5).
Parameters bits (int) – The number of bits for the OUI or CID.
The default value is 24.
to_plain_notation()
Returns the hexadecimal identifier in plain notation (for example, a0b1c2d3e4f5).
to_hyphen_notation()
Returns the hexadecimal identifier in hyphen notation (for example, a0-b1-c2-d3-e4-f5).
to_colon_notation()
Returns the hexadecimal identifier in colon notation (for example, a0:b1:c2:d3:e4:f5).
to_dot_notation()
Returns the hexadecimal identifier in dot notation (for example, a0b1.c2d3.e4f5).
class macaddress.MediaAccessControlAddress(address)
Bases: macaddress.ei48.ExtendedIdentifier48
MediaAccessControlAddress makes it easy to work with media access control (MAC) addresses.
is_broadcast
Whether the MAC address is a broadcast address.
“ffffffffffff” = broadcast.
Type bool
is_multicast
Whether the MAC address is a multicast address (layer-two multicast, not layer-three multicast).
The least-significant bit in the first octet of a MAC address determines whether it is a multicast or a unicast.
1 = multicast.
Type bool
is_unicast
Whether the MAC address is a unicast address.
The least-significant bit in the first octet of a MAC address determines whether it is a multicast or a unicast.
0 = unicast.
Type bool
is_uaa
Whether the MAC address is a universally-administered address (UAA).
The second-least-significant bit in the first octet of a MAC address determines whether it is a UAA or an
LAA.
0 = UAA.
Type bool
is_laa
Whether the MAC address is a locally-administered address (LAA).
The second-least-significant bit in the first octet of a MAC address determines whether it is a UAA or an
LAA.
1 = LAA.
Type bool
class macaddress.Octet(digits)
Bases: object
Octet makes it easy to convert two hexadecimal digits to eight binary or reverse-binary digits.
This is useful when working with the IEEE’s extended unique identifiers and extended local identifiers.
original
The hexadecimal digits passed in by the user.
Type str
normalized
The hexadecimal digits after replacing all uppercase letters with lowercase letters.
For example, if the user passes in A0, then Octet will return a0.
Type str
is_valid
Whether the user passed in valid hexadecimal digits.
Type bool
decimal
The decimal equivalent of the hexadecimal digits passed in by the user.
For example, if the user passes in A0, then Octet will return 160.
Type int
binary
The binary equivalent of the hexadecimal digits passed in by the user. The most-significant digit appears
first.
For example, if the user passes in A0, then Octet will return 10100000.
Type str
reverse_binary
The reverse-binary equivalent of the hexadecimal digits passed in by the user. The least-significant digit
appears first.
For example, if the user passes in A0, then Octet will return 00000101.
Type str
Parameters digits (str) – Two hexadecimal digits (0-9, A-F, or a-f).
Raises OctetError
exception macaddress.IdentifierError
Bases: Exception
ExtendedIdentifier48 raises IdentifierError if instantiated with an invalid argument.
Parameters message (str) – A human-readable error message.
exception macaddress.AddressError
Bases: Exception
MediaAccessControlAddress raises AddressError if instantiated with an invalid argument.
Parameters message (str) – A human-readable error message.
exception macaddress.OctetError
Bases: Exception
Octet raises OctetError if instantiated with an invalid argument.
Parameters message (str) – A human-readable error message.
CHAPTER 8
Indices
• genindex
• modindex
• search
Python Module Index
m
macaddress
macaddress.ei48
macaddress.macaddress
macaddress.octet
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures | go | Go | README
[¶](#section-readme)
---
### Azure Features Module for Go
[![PkgGoDev](https://pkg.go.dev/badge/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures)
The `armfeatures` module provides operations for working with Azure Features.
[Source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/resources/armfeatures)
### Getting started
#### Prerequisites
* an [Azure subscription](https://azure.microsoft.com/free/)
* Go 1.18 or above (You can download and install the latest version of Go from [here](https://go.dev/doc/install). It will replace the existing Go on your machine. If you want to install multiple Go versions on the same machine, you can refer to this [doc](https://go.dev/doc/manage-install).)
#### Install the package
This project uses [Go modules](https://github.com/golang/go/wiki/Modules) for versioning and dependency management.
Install the Azure Features module:
```
go get github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures
```
#### Authorization
When creating a client, you will need to provide a credential for authenticating with Azure Features. The `azidentity` module provides facilities for various ways of authenticating with Azure including client/secret, certificate, managed identity, and more.
```
cred, err := azidentity.NewDefaultAzureCredential(nil)
```
For more information on authentication, please see the documentation for `azidentity` at [pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity).
#### Client Factory
The Azure Features module consists of one or more clients. We provide a client factory that can be used to create any client in this module.
```
clientFactory, err := armfeatures.NewClientFactory(<subscription ID>, cred, nil)
```
You can use `ClientOptions` in package `github.com/Azure/azure-sdk-for-go/sdk/azcore/arm` to set endpoint to connect with public and sovereign clouds as well as Azure Stack. For more information, please see the documentation for `azcore` at [pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore).
```
options := arm.ClientOptions {
ClientOptions: azcore.ClientOptions {
Cloud: cloud.AzureChina,
},
}
clientFactory, err := armfeatures.NewClientFactory(<subscription ID>, cred, &options)
```
#### Clients
A client groups a set of related APIs, providing access to its functionality. Create one or more clients to access the APIs you require using the client factory.
```
client := clientFactory.NewClient()
```
#### Provide Feedback
If you encounter bugs or have suggestions, please
[open an issue](https://github.com/Azure/azure-sdk-for-go/issues) and assign the `Features` label.
### Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution.
For details, visit <https://cla.microsoft.com>.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label,
comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information, see the
[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [<EMAIL>](mailto:<EMAIL>) with any additional questions or comments.
Documentation
[¶](#section-documentation)
---
### Index [¶](#pkg-index)
* [type AuthorizationProfile](#AuthorizationProfile)
* + [func (a AuthorizationProfile) MarshalJSON() ([]byte, error)](#AuthorizationProfile.MarshalJSON)
+ [func (a *AuthorizationProfile) UnmarshalJSON(data []byte) error](#AuthorizationProfile.UnmarshalJSON)
* [type Client](#Client)
* + [func NewClient(subscriptionID string, credential azcore.TokenCredential, ...) (*Client, error)](#NewClient)
* + [func (client *Client) Get(ctx context.Context, resourceProviderNamespace string, featureName string, ...) (ClientGetResponse, error)](#Client.Get)
+ [func (client *Client) NewListAllPager(options *ClientListAllOptions) *runtime.Pager[ClientListAllResponse]](#Client.NewListAllPager)
+ [func (client *Client) NewListPager(resourceProviderNamespace string, options *ClientListOptions) *runtime.Pager[ClientListResponse]](#Client.NewListPager)
+ [func (client *Client) Register(ctx context.Context, resourceProviderNamespace string, featureName string, ...) (ClientRegisterResponse, error)](#Client.Register)
+ [func (client *Client) Unregister(ctx context.Context, resourceProviderNamespace string, featureName string, ...) (ClientUnregisterResponse, error)](#Client.Unregister)
* [type ClientFactory](#ClientFactory)
* + [func NewClientFactory(subscriptionID string, credential azcore.TokenCredential, ...) (*ClientFactory, error)](#NewClientFactory)
* + [func (c *ClientFactory) NewClient() *Client](#ClientFactory.NewClient)
+ [func (c *ClientFactory) NewFeatureClient() *FeatureClient](#ClientFactory.NewFeatureClient)
+ [func (c *ClientFactory) NewSubscriptionFeatureRegistrationsClient() *SubscriptionFeatureRegistrationsClient](#ClientFactory.NewSubscriptionFeatureRegistrationsClient)
* [type ClientGetOptions](#ClientGetOptions)
* [type ClientGetResponse](#ClientGetResponse)
* [type ClientListAllOptions](#ClientListAllOptions)
* [type ClientListAllResponse](#ClientListAllResponse)
* [type ClientListOptions](#ClientListOptions)
* [type ClientListResponse](#ClientListResponse)
* [type ClientRegisterOptions](#ClientRegisterOptions)
* [type ClientRegisterResponse](#ClientRegisterResponse)
* [type ClientUnregisterOptions](#ClientUnregisterOptions)
* [type ClientUnregisterResponse](#ClientUnregisterResponse)
* [type ErrorDefinition](#ErrorDefinition)
* + [func (e ErrorDefinition) MarshalJSON() ([]byte, error)](#ErrorDefinition.MarshalJSON)
+ [func (e *ErrorDefinition) UnmarshalJSON(data []byte) error](#ErrorDefinition.UnmarshalJSON)
* [type ErrorResponse](#ErrorResponse)
* + [func (e ErrorResponse) MarshalJSON() ([]byte, error)](#ErrorResponse.MarshalJSON)
+ [func (e *ErrorResponse) UnmarshalJSON(data []byte) error](#ErrorResponse.UnmarshalJSON)
* [type FeatureClient](#FeatureClient)
* + [func NewFeatureClient(credential azcore.TokenCredential, options *arm.ClientOptions) (*FeatureClient, error)](#NewFeatureClient)
* + [func (client *FeatureClient) NewListOperationsPager(options *FeatureClientListOperationsOptions) *runtime.Pager[FeatureClientListOperationsResponse]](#FeatureClient.NewListOperationsPager)
* [type FeatureClientListOperationsOptions](#FeatureClientListOperationsOptions)
* [type FeatureClientListOperationsResponse](#FeatureClientListOperationsResponse)
* [type FeatureOperationsListResult](#FeatureOperationsListResult)
* + [func (f FeatureOperationsListResult) MarshalJSON() ([]byte, error)](#FeatureOperationsListResult.MarshalJSON)
+ [func (f *FeatureOperationsListResult) UnmarshalJSON(data []byte) error](#FeatureOperationsListResult.UnmarshalJSON)
* [type FeatureProperties](#FeatureProperties)
* + [func (f FeatureProperties) MarshalJSON() ([]byte, error)](#FeatureProperties.MarshalJSON)
+ [func (f *FeatureProperties) UnmarshalJSON(data []byte) error](#FeatureProperties.UnmarshalJSON)
* [type FeatureResult](#FeatureResult)
* + [func (f FeatureResult) MarshalJSON() ([]byte, error)](#FeatureResult.MarshalJSON)
+ [func (f *FeatureResult) UnmarshalJSON(data []byte) error](#FeatureResult.UnmarshalJSON)
* [type Operation](#Operation)
* + [func (o Operation) MarshalJSON() ([]byte, error)](#Operation.MarshalJSON)
+ [func (o *Operation) UnmarshalJSON(data []byte) error](#Operation.UnmarshalJSON)
* [type OperationDisplay](#OperationDisplay)
* + [func (o OperationDisplay) MarshalJSON() ([]byte, error)](#OperationDisplay.MarshalJSON)
+ [func (o *OperationDisplay) UnmarshalJSON(data []byte) error](#OperationDisplay.UnmarshalJSON)
* [type OperationListResult](#OperationListResult)
* + [func (o OperationListResult) MarshalJSON() ([]byte, error)](#OperationListResult.MarshalJSON)
+ [func (o *OperationListResult) UnmarshalJSON(data []byte) error](#OperationListResult.UnmarshalJSON)
* [type ProxyResource](#ProxyResource)
* + [func (p ProxyResource) MarshalJSON() ([]byte, error)](#ProxyResource.MarshalJSON)
+ [func (p *ProxyResource) UnmarshalJSON(data []byte) error](#ProxyResource.UnmarshalJSON)
* [type SubscriptionFeatureRegistration](#SubscriptionFeatureRegistration)
* + [func (s SubscriptionFeatureRegistration) MarshalJSON() ([]byte, error)](#SubscriptionFeatureRegistration.MarshalJSON)
+ [func (s *SubscriptionFeatureRegistration) UnmarshalJSON(data []byte) error](#SubscriptionFeatureRegistration.UnmarshalJSON)
* [type SubscriptionFeatureRegistrationApprovalType](#SubscriptionFeatureRegistrationApprovalType)
* + [func PossibleSubscriptionFeatureRegistrationApprovalTypeValues() []SubscriptionFeatureRegistrationApprovalType](#PossibleSubscriptionFeatureRegistrationApprovalTypeValues)
* [type SubscriptionFeatureRegistrationList](#SubscriptionFeatureRegistrationList)
* + [func (s SubscriptionFeatureRegistrationList) MarshalJSON() ([]byte, error)](#SubscriptionFeatureRegistrationList.MarshalJSON)
+ [func (s *SubscriptionFeatureRegistrationList) UnmarshalJSON(data []byte) error](#SubscriptionFeatureRegistrationList.UnmarshalJSON)
* [type SubscriptionFeatureRegistrationProperties](#SubscriptionFeatureRegistrationProperties)
* + [func (s SubscriptionFeatureRegistrationProperties) MarshalJSON() ([]byte, error)](#SubscriptionFeatureRegistrationProperties.MarshalJSON)
+ [func (s *SubscriptionFeatureRegistrationProperties) UnmarshalJSON(data []byte) error](#SubscriptionFeatureRegistrationProperties.UnmarshalJSON)
* [type SubscriptionFeatureRegistrationState](#SubscriptionFeatureRegistrationState)
* + [func PossibleSubscriptionFeatureRegistrationStateValues() []SubscriptionFeatureRegistrationState](#PossibleSubscriptionFeatureRegistrationStateValues)
* [type SubscriptionFeatureRegistrationsClient](#SubscriptionFeatureRegistrationsClient)
* + [func NewSubscriptionFeatureRegistrationsClient(subscriptionID string, credential azcore.TokenCredential, ...) (*SubscriptionFeatureRegistrationsClient, error)](#NewSubscriptionFeatureRegistrationsClient)
* + [func (client *SubscriptionFeatureRegistrationsClient) CreateOrUpdate(ctx context.Context, providerNamespace string, featureName string, ...) (SubscriptionFeatureRegistrationsClientCreateOrUpdateResponse, error)](#SubscriptionFeatureRegistrationsClient.CreateOrUpdate)
+ [func (client *SubscriptionFeatureRegistrationsClient) Delete(ctx context.Context, providerNamespace string, featureName string, ...) (SubscriptionFeatureRegistrationsClientDeleteResponse, error)](#SubscriptionFeatureRegistrationsClient.Delete)
+ [func (client *SubscriptionFeatureRegistrationsClient) Get(ctx context.Context, providerNamespace string, featureName string, ...) (SubscriptionFeatureRegistrationsClientGetResponse, error)](#SubscriptionFeatureRegistrationsClient.Get)
+ [func (client *SubscriptionFeatureRegistrationsClient) NewListAllBySubscriptionPager(options *SubscriptionFeatureRegistrationsClientListAllBySubscriptionOptions) ...](#SubscriptionFeatureRegistrationsClient.NewListAllBySubscriptionPager)
+ [func (client *SubscriptionFeatureRegistrationsClient) NewListBySubscriptionPager(providerNamespace string, ...) ...](#SubscriptionFeatureRegistrationsClient.NewListBySubscriptionPager)
* [type SubscriptionFeatureRegistrationsClientCreateOrUpdateOptions](#SubscriptionFeatureRegistrationsClientCreateOrUpdateOptions)
* [type SubscriptionFeatureRegistrationsClientCreateOrUpdateResponse](#SubscriptionFeatureRegistrationsClientCreateOrUpdateResponse)
* [type SubscriptionFeatureRegistrationsClientDeleteOptions](#SubscriptionFeatureRegistrationsClientDeleteOptions)
* [type SubscriptionFeatureRegistrationsClientDeleteResponse](#SubscriptionFeatureRegistrationsClientDeleteResponse)
* [type SubscriptionFeatureRegistrationsClientGetOptions](#SubscriptionFeatureRegistrationsClientGetOptions)
* [type SubscriptionFeatureRegistrationsClientGetResponse](#SubscriptionFeatureRegistrationsClientGetResponse)
* [type SubscriptionFeatureRegistrationsClientListAllBySubscriptionOptions](#SubscriptionFeatureRegistrationsClientListAllBySubscriptionOptions)
* [type SubscriptionFeatureRegistrationsClientListAllBySubscriptionResponse](#SubscriptionFeatureRegistrationsClientListAllBySubscriptionResponse)
* [type SubscriptionFeatureRegistrationsClientListBySubscriptionOptions](#SubscriptionFeatureRegistrationsClientListBySubscriptionOptions)
* [type SubscriptionFeatureRegistrationsClientListBySubscriptionResponse](#SubscriptionFeatureRegistrationsClientListBySubscriptionResponse)
#### Examples [¶](#pkg-examples)
* [Client.Get](#example-Client.Get)
* [Client.NewListAllPager](#example-Client.NewListAllPager)
* [Client.NewListPager](#example-Client.NewListPager)
* [Client.Register](#example-Client.Register)
* [Client.Unregister](#example-Client.Unregister)
* [FeatureClient.NewListOperationsPager](#example-FeatureClient.NewListOperationsPager)
* [SubscriptionFeatureRegistrationsClient.CreateOrUpdate](#example-SubscriptionFeatureRegistrationsClient.CreateOrUpdate)
* [SubscriptionFeatureRegistrationsClient.Delete](#example-SubscriptionFeatureRegistrationsClient.Delete)
* [SubscriptionFeatureRegistrationsClient.Get](#example-SubscriptionFeatureRegistrationsClient.Get)
* [SubscriptionFeatureRegistrationsClient.NewListAllBySubscriptionPager](#example-SubscriptionFeatureRegistrationsClient.NewListAllBySubscriptionPager)
* [SubscriptionFeatureRegistrationsClient.NewListBySubscriptionPager](#example-SubscriptionFeatureRegistrationsClient.NewListBySubscriptionPager)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [AuthorizationProfile](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L15) [¶](#AuthorizationProfile)
```
type AuthorizationProfile struct {
// READ-ONLY; The approved time
ApprovedTime *[time](/time).[Time](/time#Time) `json:"approvedTime,omitempty" azure:"ro"`
// READ-ONLY; The approver
Approver *[string](/builtin#string) `json:"approver,omitempty" azure:"ro"`
// READ-ONLY; The requested time
RequestedTime *[time](/time).[Time](/time#Time) `json:"requestedTime,omitempty" azure:"ro"`
// READ-ONLY; The requester
Requester *[string](/builtin#string) `json:"requester,omitempty" azure:"ro"`
// READ-ONLY; The requester object id
RequesterObjectID *[string](/builtin#string) `json:"requesterObjectId,omitempty" azure:"ro"`
}
```
AuthorizationProfile - Authorization Profile
####
func (AuthorizationProfile) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L20) [¶](#AuthorizationProfile.MarshalJSON)
```
func (a [AuthorizationProfile](#AuthorizationProfile)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type AuthorizationProfile.
####
func (*AuthorizationProfile) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L31) [¶](#AuthorizationProfile.UnmarshalJSON)
```
func (a *[AuthorizationProfile](#AuthorizationProfile)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type AuthorizationProfile.
####
type [Client](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/client.go#L26) [¶](#Client)
added in v0.2.0
```
type Client struct {
// contains filtered or unexported fields
}
```
Client contains the methods for the Features group.
Don't use this type directly, use NewClient() instead.
####
func [NewClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/client.go#L35) [¶](#NewClient)
added in v0.2.0
```
func NewClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[Client](#Client), [error](/builtin#error))
```
NewClient creates a new instance of Client with the specified values.
* subscriptionID - The Azure subscription ID.
* credential - used to authorize requests. Usually a credential from azidentity.
* options - pass nil to accept the default values.
####
func (*Client) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/client.go#L54) [¶](#Client.Get)
added in v0.2.0
```
func (client *[Client](#Client)) Get(ctx [context](/context).[Context](/context#Context), resourceProviderNamespace [string](/builtin#string), featureName [string](/builtin#string), options *[ClientGetOptions](#ClientGetOptions)) ([ClientGetResponse](#ClientGetResponse), [error](/builtin#error))
```
Get - Gets the preview feature with the specified name.
If the operation fails it returns an *azcore.ResponseError type.
Generated from API version 2021-07-01
* resourceProviderNamespace - The resource provider namespace for the feature.
* featureName - The name of the feature to get.
* options - ClientGetOptions contains the optional parameters for the Client.Get method.
Example [¶](#example-Client.Get)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/0cc5e2efd6ffccf30e80d1e150b488dd87198b94/specification/resources/resource-manager/Microsoft.Features/stable/2021-07-01/examples/getFeature.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armfeatures.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
res, err := clientFactory.NewClient().Get(ctx, "Resource Provider Namespace", "feature", nil)
if err != nil {
log.Fatalf("failed to finish the request: %v", err)
}
// You could use response here. We use blank identifier for just demo purposes.
_ = res
// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// res.FeatureResult = armfeatures.FeatureResult{
// Name: to.Ptr("Feature1"),
// Type: to.Ptr("type1"),
// ID: to.Ptr("feature_id1"),
// Properties: &armfeatures.FeatureProperties{
// State: to.Ptr("registered"),
// },
// }
}
```
```
Output:
```
####
func (*Client) [NewListAllPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/client.go#L172) [¶](#Client.NewListAllPager)
added in v0.4.0
```
func (client *[Client](#Client)) NewListAllPager(options *[ClientListAllOptions](#ClientListAllOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[ClientListAllResponse](#ClientListAllResponse)]
```
NewListAllPager - Gets all the preview features that are available through AFEC for the subscription.
Generated from API version 2021-07-01
* options - ClientListAllOptions contains the optional parameters for the Client.NewListAllPager method.
Example [¶](#example-Client.NewListAllPager)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/0cc5e2efd6ffccf30e80d1e150b488dd87198b94/specification/resources/resource-manager/Microsoft.Features/stable/2021-07-01/examples/listSubscriptionFeatures.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armfeatures.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
pager := clientFactory.NewClient().NewListAllPager(nil)
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
log.Fatalf("failed to advance page: %v", err)
}
for _, v := range page.Value {
// You could use page here. We use blank identifier for just demo purposes.
_ = v
}
// If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// page.FeatureOperationsListResult = armfeatures.FeatureOperationsListResult{
// Value: []*armfeatures.FeatureResult{
// {
// Name: to.Ptr("Feature1"),
// Type: to.Ptr("type1"),
// ID: to.Ptr("feature_id1"),
// Properties: &armfeatures.FeatureProperties{
// State: to.Ptr("registered"),
// },
// },
// {
// Name: to.Ptr("Feature2"),
// Type: to.Ptr("type2"),
// ID: to.Ptr("feature_id2"),
// Properties: &armfeatures.FeatureProperties{
// State: to.Ptr("unregistered"),
// },
// }},
// }
}
}
```
```
Output:
```
####
func (*Client) [NewListPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/client.go#L109) [¶](#Client.NewListPager)
added in v0.4.0
```
func (client *[Client](#Client)) NewListPager(resourceProviderNamespace [string](/builtin#string), options *[ClientListOptions](#ClientListOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[ClientListResponse](#ClientListResponse)]
```
NewListPager - Gets all the preview features in a provider namespace that are available through AFEC for the subscription.
Generated from API version 2021-07-01
* resourceProviderNamespace - The namespace of the resource provider for getting features.
* options - ClientListOptions contains the optional parameters for the Client.NewListPager method.
Example [¶](#example-Client.NewListPager)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/0cc5e2efd6ffccf30e80d1e150b488dd87198b94/specification/resources/resource-manager/Microsoft.Features/stable/2021-07-01/examples/listProviderFeatures.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armfeatures.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
pager := clientFactory.NewClient().NewListPager("Resource Provider Namespace", nil)
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
log.Fatalf("failed to advance page: %v", err)
}
for _, v := range page.Value {
// You could use page here. We use blank identifier for just demo purposes.
_ = v
}
// If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// page.FeatureOperationsListResult = armfeatures.FeatureOperationsListResult{
// Value: []*armfeatures.FeatureResult{
// {
// Name: to.Ptr("Feature1"),
// Type: to.Ptr("type1"),
// ID: to.Ptr("feature_id1"),
// Properties: &armfeatures.FeatureProperties{
// State: to.Ptr("registered"),
// },
// },
// {
// Name: to.Ptr("Feature2"),
// Type: to.Ptr("type2"),
// ID: to.Ptr("feature_id2"),
// Properties: &armfeatures.FeatureProperties{
// State: to.Ptr("unregistered"),
// },
// }},
// }
}
}
```
```
Output:
```
####
func (*Client) [Register](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/client.go#L234) [¶](#Client.Register)
added in v0.2.0
```
func (client *[Client](#Client)) Register(ctx [context](/context).[Context](/context#Context), resourceProviderNamespace [string](/builtin#string), featureName [string](/builtin#string), options *[ClientRegisterOptions](#ClientRegisterOptions)) ([ClientRegisterResponse](#ClientRegisterResponse), [error](/builtin#error))
```
Register - Registers the preview feature for the subscription.
If the operation fails it returns an *azcore.ResponseError type.
Generated from API version 2021-07-01
* resourceProviderNamespace - The namespace of the resource provider.
* featureName - The name of the feature to register.
* options - ClientRegisterOptions contains the optional parameters for the Client.Register method.
Example [¶](#example-Client.Register)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/0cc5e2efd6ffccf30e80d1e150b488dd87198b94/specification/resources/resource-manager/Microsoft.Features/stable/2021-07-01/examples/registerFeature.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armfeatures.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
res, err := clientFactory.NewClient().Register(ctx, "Resource Provider Namespace", "feature", nil)
if err != nil {
log.Fatalf("failed to finish the request: %v", err)
}
// You could use response here. We use blank identifier for just demo purposes.
_ = res
// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// res.FeatureResult = armfeatures.FeatureResult{
// Name: to.Ptr("Feature1"),
// Type: to.Ptr("type1"),
// ID: to.Ptr("feature_id1"),
// Properties: &armfeatures.FeatureProperties{
// State: to.Ptr("registered"),
// },
// }
}
```
```
Output:
```
####
func (*Client) [Unregister](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/client.go#L291) [¶](#Client.Unregister)
added in v0.2.0
```
func (client *[Client](#Client)) Unregister(ctx [context](/context).[Context](/context#Context), resourceProviderNamespace [string](/builtin#string), featureName [string](/builtin#string), options *[ClientUnregisterOptions](#ClientUnregisterOptions)) ([ClientUnregisterResponse](#ClientUnregisterResponse), [error](/builtin#error))
```
Unregister - Unregisters the preview feature for the subscription.
If the operation fails it returns an *azcore.ResponseError type.
Generated from API version 2021-07-01
* resourceProviderNamespace - The namespace of the resource provider.
* featureName - The name of the feature to unregister.
* options - ClientUnregisterOptions contains the optional parameters for the Client.Unregister method.
Example [¶](#example-Client.Unregister)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/0cc5e2efd6ffccf30e80d1e150b488dd87198b94/specification/resources/resource-manager/Microsoft.Features/stable/2021-07-01/examples/unregisterFeature.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armfeatures.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
res, err := clientFactory.NewClient().Unregister(ctx, "Resource Provider Namespace", "feature", nil)
if err != nil {
log.Fatalf("failed to finish the request: %v", err)
}
// You could use response here. We use blank identifier for just demo purposes.
_ = res
// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// res.FeatureResult = armfeatures.FeatureResult{
// Name: to.Ptr("Feature1"),
// Type: to.Ptr("Microsoft.Features/providers/features"),
// ID: to.Ptr("/subscriptions/ff23096b-f5a2-46ea-bd62-59c3e93fef9a/providers/Microsoft.Features/providers/Microsoft.Test/features/Feature1"),
// Properties: &armfeatures.FeatureProperties{
// State: to.Ptr("unregistered"),
// },
// }
}
```
```
Output:
```
####
type [ClientFactory](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/client_factory.go#L19) [¶](#ClientFactory)
added in v1.1.0
```
type ClientFactory struct {
// contains filtered or unexported fields
}
```
ClientFactory is a client factory used to create any client in this module.
Don't use this type directly, use NewClientFactory instead.
####
func [NewClientFactory](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/client_factory.go#L30) [¶](#NewClientFactory)
added in v1.1.0
```
func NewClientFactory(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[ClientFactory](#ClientFactory), [error](/builtin#error))
```
NewClientFactory creates a new instance of ClientFactory with the specified values.
The parameter values will be propagated to any client created from this factory.
* subscriptionID - The Azure subscription ID.
* credential - used to authorize requests. Usually a credential from azidentity.
* options - pass nil to accept the default values.
####
func (*ClientFactory) [NewClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/client_factory.go#L46) [¶](#ClientFactory.NewClient)
added in v1.1.0
```
func (c *[ClientFactory](#ClientFactory)) NewClient() *[Client](#Client)
```
####
func (*ClientFactory) [NewFeatureClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/client_factory.go#L41) [¶](#ClientFactory.NewFeatureClient)
added in v1.1.0
```
func (c *[ClientFactory](#ClientFactory)) NewFeatureClient() *[FeatureClient](#FeatureClient)
```
####
func (*ClientFactory) [NewSubscriptionFeatureRegistrationsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/client_factory.go#L51) [¶](#ClientFactory.NewSubscriptionFeatureRegistrationsClient)
added in v1.1.0
```
func (c *[ClientFactory](#ClientFactory)) NewSubscriptionFeatureRegistrationsClient() *[SubscriptionFeatureRegistrationsClient](#SubscriptionFeatureRegistrationsClient)
```
####
type [ClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L33) [¶](#ClientGetOptions)
added in v0.2.0
```
type ClientGetOptions struct {
}
```
ClientGetOptions contains the optional parameters for the Client.Get method.
####
type [ClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/response_types.go#L13) [¶](#ClientGetResponse)
added in v0.2.0
```
type ClientGetResponse struct {
[FeatureResult](#FeatureResult)
}
```
ClientGetResponse contains the response from method Client.Get.
####
type [ClientListAllOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L38) [¶](#ClientListAllOptions)
added in v0.2.0
```
type ClientListAllOptions struct {
}
```
ClientListAllOptions contains the optional parameters for the Client.NewListAllPager method.
####
type [ClientListAllResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/response_types.go#L18) [¶](#ClientListAllResponse)
added in v0.2.0
```
type ClientListAllResponse struct {
[FeatureOperationsListResult](#FeatureOperationsListResult)
}
```
ClientListAllResponse contains the response from method Client.NewListAllPager.
####
type [ClientListOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L43) [¶](#ClientListOptions)
added in v0.2.0
```
type ClientListOptions struct {
}
```
ClientListOptions contains the optional parameters for the Client.NewListPager method.
####
type [ClientListResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/response_types.go#L23) [¶](#ClientListResponse)
added in v0.2.0
```
type ClientListResponse struct {
[FeatureOperationsListResult](#FeatureOperationsListResult)
}
```
ClientListResponse contains the response from method Client.NewListPager.
####
type [ClientRegisterOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L48) [¶](#ClientRegisterOptions)
added in v0.2.0
```
type ClientRegisterOptions struct {
}
```
ClientRegisterOptions contains the optional parameters for the Client.Register method.
####
type [ClientRegisterResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/response_types.go#L28) [¶](#ClientRegisterResponse)
added in v0.2.0
```
type ClientRegisterResponse struct {
[FeatureResult](#FeatureResult)
}
```
ClientRegisterResponse contains the response from method Client.Register.
####
type [ClientUnregisterOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L53) [¶](#ClientUnregisterOptions)
added in v0.2.0
```
type ClientUnregisterOptions struct {
}
```
ClientUnregisterOptions contains the optional parameters for the Client.Unregister method.
####
type [ClientUnregisterResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/response_types.go#L33) [¶](#ClientUnregisterResponse)
added in v0.2.0
```
type ClientUnregisterResponse struct {
[FeatureResult](#FeatureResult)
}
```
ClientUnregisterResponse contains the response from method Client.Unregister.
####
type [ErrorDefinition](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L58) [¶](#ErrorDefinition)
```
type ErrorDefinition struct {
// Internal error details.
Details []*[ErrorDefinition](#ErrorDefinition) `json:"details,omitempty"`
// READ-ONLY; Service specific error code which serves as the substatus for the HTTP error code.
Code *[string](/builtin#string) `json:"code,omitempty" azure:"ro"`
// READ-ONLY; Description of the error.
Message *[string](/builtin#string) `json:"message,omitempty" azure:"ro"`
}
```
ErrorDefinition - Error definition.
####
func (ErrorDefinition) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L63) [¶](#ErrorDefinition.MarshalJSON)
```
func (e [ErrorDefinition](#ErrorDefinition)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type ErrorDefinition.
####
func (*ErrorDefinition) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L72) [¶](#ErrorDefinition.UnmarshalJSON)
added in v1.1.0
```
func (e *[ErrorDefinition](#ErrorDefinition)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type ErrorDefinition.
####
type [ErrorResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L70) [¶](#ErrorResponse)
```
type ErrorResponse struct {
// The error details.
Error *[ErrorDefinition](#ErrorDefinition) `json:"error,omitempty"`
}
```
ErrorResponse - Error response indicates that the service is not able to process the incoming request.
####
func (ErrorResponse) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L98) [¶](#ErrorResponse.MarshalJSON)
added in v1.1.0
```
func (e [ErrorResponse](#ErrorResponse)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type ErrorResponse.
####
func (*ErrorResponse) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L105) [¶](#ErrorResponse.UnmarshalJSON)
added in v1.1.0
```
func (e *[ErrorResponse](#ErrorResponse)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type ErrorResponse.
####
type [FeatureClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/feature_client.go#L23) [¶](#FeatureClient)
```
type FeatureClient struct {
// contains filtered or unexported fields
}
```
FeatureClient contains the methods for the FeatureClient group.
Don't use this type directly, use NewFeatureClient() instead.
####
func [NewFeatureClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/feature_client.go#L30) [¶](#NewFeatureClient)
```
func NewFeatureClient(credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[FeatureClient](#FeatureClient), [error](/builtin#error))
```
NewFeatureClient creates a new instance of FeatureClient with the specified values.
* credential - used to authorize requests. Usually a credential from azidentity.
* options - pass nil to accept the default values.
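A minimal sketch of constructing the client directly from the signature above, as an alternative to the ClientFactory used in the generated examples (the azidentity default credential is just one possible azcore.TokenCredential):
```
package main
import (
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
// Pass nil options to accept the defaults, as documented above.
client, err := armfeatures.NewFeatureClient(cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
_ = client // e.g. client.NewListOperationsPager(nil)
}
```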
####
func (*FeatureClient) [NewListOperationsPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/feature_client.go#L46) [¶](#FeatureClient.NewListOperationsPager)
added in v0.4.0
```
func (client *[FeatureClient](#FeatureClient)) NewListOperationsPager(options *[FeatureClientListOperationsOptions](#FeatureClientListOperationsOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[FeatureClientListOperationsResponse](#FeatureClientListOperationsResponse)]
```
NewListOperationsPager - Lists all of the available Microsoft.Features REST API operations.
Generated from API version 2021-07-01
* options - FeatureClientListOperationsOptions contains the optional parameters for the FeatureClient.NewListOperationsPager method.
Example [¶](#example-FeatureClient.NewListOperationsPager)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/0cc5e2efd6ffccf30e80d1e150b488dd87198b94/specification/resources/resource-manager/Microsoft.Features/stable/2021-07-01/examples/listFeaturesOperations.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armfeatures.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
pager := clientFactory.NewFeatureClient().NewListOperationsPager(nil)
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
log.Fatalf("failed to advance page: %v", err)
}
for _, v := range page.Value {
// You could use page here. We use blank identifier for just demo purposes.
_ = v
}
// If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// page.OperationListResult = armfeatures.OperationListResult{
// Value: []*armfeatures.Operation{
// {
// Name: to.Ptr("FeaturesOpeartion1"),
// Display: &armfeatures.OperationDisplay{
// Operation: to.Ptr("Read"),
// Provider: to.Ptr("Microsoft.ResourceProvider"),
// Resource: to.Ptr("Resource1"),
// },
// },
// {
// Name: to.Ptr("FeaturesOpeartion2"),
// Display: &armfeatures.OperationDisplay{
// Operation: to.Ptr("Write"),
// Provider: to.Ptr("Microsoft.ResourceProvider"),
// Resource: to.Ptr("Resource2"),
// },
// }},
// }
}
}
```
####
type [FeatureClientListOperationsOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L76) [¶](#FeatureClientListOperationsOptions)
```
type FeatureClientListOperationsOptions struct {
}
```
FeatureClientListOperationsOptions contains the optional parameters for the FeatureClient.NewListOperationsPager method.
####
type [FeatureClientListOperationsResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/response_types.go#L38) [¶](#FeatureClientListOperationsResponse)
```
type FeatureClientListOperationsResponse struct {
[OperationListResult](#OperationListResult)
}
```
FeatureClientListOperationsResponse contains the response from method FeatureClient.NewListOperationsPager.
####
type [FeatureOperationsListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L81) [¶](#FeatureOperationsListResult)
```
type FeatureOperationsListResult struct {
// The URL to use for getting the next set of results.
NextLink *[string](/builtin#string) `json:"nextLink,omitempty"`
// The array of features.
Value []*[FeatureResult](#FeatureResult) `json:"value,omitempty"`
}
```
FeatureOperationsListResult - List of previewed features.
####
func (FeatureOperationsListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L125) [¶](#FeatureOperationsListResult.MarshalJSON)
```
func (f [FeatureOperationsListResult](#FeatureOperationsListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type FeatureOperationsListResult.
####
func (*FeatureOperationsListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L133) [¶](#FeatureOperationsListResult.UnmarshalJSON)
added in v1.1.0
```
func (f *[FeatureOperationsListResult](#FeatureOperationsListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type FeatureOperationsListResult.
####
type [FeatureProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L90) [¶](#FeatureProperties)
```
type FeatureProperties struct {
// The registration state of the feature for the subscription.
State *[string](/builtin#string) `json:"state,omitempty"`
}
```
FeatureProperties - Information about feature.
####
func (FeatureProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L156) [¶](#FeatureProperties.MarshalJSON)
added in v1.1.0
```
func (f [FeatureProperties](#FeatureProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type FeatureProperties.
####
func (*FeatureProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L163) [¶](#FeatureProperties.UnmarshalJSON)
added in v1.1.0
```
func (f *[FeatureProperties](#FeatureProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type FeatureProperties.
####
type [FeatureResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L96) [¶](#FeatureResult)
```
type FeatureResult struct {
// The resource ID of the feature.
ID *[string](/builtin#string) `json:"id,omitempty"`
// The name of the feature.
Name *[string](/builtin#string) `json:"name,omitempty"`
// Properties of the previewed feature.
Properties *[FeatureProperties](#FeatureProperties) `json:"properties,omitempty"`
// The resource type of the feature.
Type *[string](/builtin#string) `json:"type,omitempty"`
}
```
FeatureResult - Previewed feature information.
####
func (FeatureResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L183) [¶](#FeatureResult.MarshalJSON)
added in v1.1.0
```
func (f [FeatureResult](#FeatureResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type FeatureResult.
####
func (*FeatureResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L193) [¶](#FeatureResult.UnmarshalJSON)
added in v1.1.0
```
func (f *[FeatureResult](#FeatureResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type FeatureResult.
####
type [Operation](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L111) [¶](#Operation)
```
type Operation struct {
// The object that represents the operation.
Display *[OperationDisplay](#OperationDisplay) `json:"display,omitempty"`
// Operation name: {provider}/{resource}/{operation}
Name *[string](/builtin#string) `json:"name,omitempty"`
}
```
Operation - Microsoft.Features operation
####
func (Operation) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L222) [¶](#Operation.MarshalJSON)
added in v1.1.0
```
func (o [Operation](#Operation)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type Operation.
####
func (*Operation) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L230) [¶](#Operation.UnmarshalJSON)
added in v1.1.0
```
func (o *[Operation](#Operation)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type Operation.
####
type [OperationDisplay](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L120) [¶](#OperationDisplay)
```
type OperationDisplay struct {
// Operation type: Read, write, delete, etc.
Operation *[string](/builtin#string) `json:"operation,omitempty"`
// Service provider: Microsoft.Features
Provider *[string](/builtin#string) `json:"provider,omitempty"`
// Resource on which the operation is performed: Profile, endpoint, etc.
Resource *[string](/builtin#string) `json:"resource,omitempty"`
}
```
OperationDisplay - The object that represents the operation.
####
func (OperationDisplay) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L253) [¶](#OperationDisplay.MarshalJSON)
added in v1.1.0
```
func (o [OperationDisplay](#OperationDisplay)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type OperationDisplay.
####
func (*OperationDisplay) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L262) [¶](#OperationDisplay.UnmarshalJSON)
added in v1.1.0
```
func (o *[OperationDisplay](#OperationDisplay)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type OperationDisplay.
####
type [OperationListResult](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L133) [¶](#OperationListResult)
```
type OperationListResult struct {
// URL to get the next set of operation list results if there are any.
NextLink *[string](/builtin#string) `json:"nextLink,omitempty"`
// List of Microsoft.Features operations.
Value []*[Operation](#Operation) `json:"value,omitempty"`
}
```
OperationListResult - Result of the request to list Microsoft.Features operations. It contains a list of operations and a URL link to get the next set of results.
####
func (OperationListResult) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L288) [¶](#OperationListResult.MarshalJSON)
```
func (o [OperationListResult](#OperationListResult)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type OperationListResult.
####
func (*OperationListResult) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L296) [¶](#OperationListResult.UnmarshalJSON)
added in v1.1.0
```
func (o *[OperationListResult](#OperationListResult)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type OperationListResult.
####
type [ProxyResource](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L142) [¶](#ProxyResource)
```
type ProxyResource struct {
// READ-ONLY; Azure resource Id.
ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"`
// READ-ONLY; Azure resource name.
Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"`
// READ-ONLY; Azure resource type.
Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"`
}
```
ProxyResource - An Azure proxy resource.
####
func (ProxyResource) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L319) [¶](#ProxyResource.MarshalJSON)
added in v1.1.0
```
func (p [ProxyResource](#ProxyResource)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type ProxyResource.
####
func (*ProxyResource) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L328) [¶](#ProxyResource.UnmarshalJSON)
added in v1.1.0
```
func (p *[ProxyResource](#ProxyResource)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type ProxyResource.
####
type [SubscriptionFeatureRegistration](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L154) [¶](#SubscriptionFeatureRegistration)
```
type SubscriptionFeatureRegistration struct {
Properties *[SubscriptionFeatureRegistrationProperties](#SubscriptionFeatureRegistrationProperties) `json:"properties,omitempty"`
// READ-ONLY; Azure resource Id.
ID *[string](/builtin#string) `json:"id,omitempty" azure:"ro"`
// READ-ONLY; Azure resource name.
Name *[string](/builtin#string) `json:"name,omitempty" azure:"ro"`
// READ-ONLY; Azure resource type.
Type *[string](/builtin#string) `json:"type,omitempty" azure:"ro"`
}
```
SubscriptionFeatureRegistration - Subscription feature registration details
####
func (SubscriptionFeatureRegistration) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L354) [¶](#SubscriptionFeatureRegistration.MarshalJSON)
added in v1.1.0
```
func (s [SubscriptionFeatureRegistration](#SubscriptionFeatureRegistration)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type SubscriptionFeatureRegistration.
####
func (*SubscriptionFeatureRegistration) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L364) [¶](#SubscriptionFeatureRegistration.UnmarshalJSON)
added in v1.1.0
```
func (s *[SubscriptionFeatureRegistration](#SubscriptionFeatureRegistration)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type SubscriptionFeatureRegistration.
####
type [SubscriptionFeatureRegistrationApprovalType](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/constants.go#L18) [¶](#SubscriptionFeatureRegistrationApprovalType)
```
type SubscriptionFeatureRegistrationApprovalType [string](/builtin#string)
```
SubscriptionFeatureRegistrationApprovalType - The feature approval type.
```
const (
SubscriptionFeatureRegistrationApprovalTypeApprovalRequired [SubscriptionFeatureRegistrationApprovalType](#SubscriptionFeatureRegistrationApprovalType) = "ApprovalRequired"
SubscriptionFeatureRegistrationApprovalTypeAutoApproval [SubscriptionFeatureRegistrationApprovalType](#SubscriptionFeatureRegistrationApprovalType) = "AutoApproval"
SubscriptionFeatureRegistrationApprovalTypeNotSpecified [SubscriptionFeatureRegistrationApprovalType](#SubscriptionFeatureRegistrationApprovalType) = "NotSpecified"
)
```
####
func [PossibleSubscriptionFeatureRegistrationApprovalTypeValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/constants.go#L27) [¶](#PossibleSubscriptionFeatureRegistrationApprovalTypeValues)
```
func PossibleSubscriptionFeatureRegistrationApprovalTypeValues() [][SubscriptionFeatureRegistrationApprovalType](#SubscriptionFeatureRegistrationApprovalType)
```
PossibleSubscriptionFeatureRegistrationApprovalTypeValues returns the possible values for the SubscriptionFeatureRegistrationApprovalType const type.
####
type [SubscriptionFeatureRegistrationList](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L168) [¶](#SubscriptionFeatureRegistrationList)
```
type SubscriptionFeatureRegistrationList struct {
// The link used to get the next page of subscription feature registrations list.
NextLink *[string](/builtin#string) `json:"nextLink,omitempty"`
// The list of subscription feature registrations.
Value []*[SubscriptionFeatureRegistration](#SubscriptionFeatureRegistration) `json:"value,omitempty"`
}
```
SubscriptionFeatureRegistrationList - The list of subscription feature registrations.
####
func (SubscriptionFeatureRegistrationList) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L393) [¶](#SubscriptionFeatureRegistrationList.MarshalJSON)
```
func (s [SubscriptionFeatureRegistrationList](#SubscriptionFeatureRegistrationList)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type SubscriptionFeatureRegistrationList.
####
func (*SubscriptionFeatureRegistrationList) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L401) [¶](#SubscriptionFeatureRegistrationList.UnmarshalJSON)
added in v1.1.0
```
func (s *[SubscriptionFeatureRegistrationList](#SubscriptionFeatureRegistrationList)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type SubscriptionFeatureRegistrationList.
####
type [SubscriptionFeatureRegistrationProperties](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L176) [¶](#SubscriptionFeatureRegistrationProperties)
```
type SubscriptionFeatureRegistrationProperties struct {
// Authorization Profile
AuthorizationProfile *[AuthorizationProfile](#AuthorizationProfile) `json:"authorizationProfile,omitempty"`
// The feature description.
Description *[string](/builtin#string) `json:"description,omitempty"`
// Key-value pairs for meta data.
Metadata map[[string](/builtin#string)]*[string](/builtin#string) `json:"metadata,omitempty"`
// Indicates whether feature should be displayed in Portal.
ShouldFeatureDisplayInPortal *[bool](/builtin#bool) `json:"shouldFeatureDisplayInPortal,omitempty"`
// The state.
State *[SubscriptionFeatureRegistrationState](#SubscriptionFeatureRegistrationState) `json:"state,omitempty"`
// READ-ONLY; The feature approval type.
ApprovalType *[SubscriptionFeatureRegistrationApprovalType](#SubscriptionFeatureRegistrationApprovalType) `json:"approvalType,omitempty" azure:"ro"`
// READ-ONLY; The featureDisplayName.
DisplayName *[string](/builtin#string) `json:"displayName,omitempty" azure:"ro"`
// READ-ONLY; The feature documentation link.
DocumentationLink *[string](/builtin#string) `json:"documentationLink,omitempty" azure:"ro"`
// READ-ONLY; The featureName.
FeatureName *[string](/builtin#string) `json:"featureName,omitempty" azure:"ro"`
// READ-ONLY; The providerNamespace.
ProviderNamespace *[string](/builtin#string) `json:"providerNamespace,omitempty" azure:"ro"`
// READ-ONLY; The feature registration date.
RegistrationDate *[time](/time).[Time](/time#Time) `json:"registrationDate,omitempty" azure:"ro"`
// READ-ONLY; The feature release date.
ReleaseDate *[time](/time).[Time](/time#Time) `json:"releaseDate,omitempty" azure:"ro"`
// READ-ONLY; The subscriptionId.
SubscriptionID *[string](/builtin#string) `json:"subscriptionId,omitempty" azure:"ro"`
// READ-ONLY; The tenantId.
TenantID *[string](/builtin#string) `json:"tenantId,omitempty" azure:"ro"`
}
```
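The READ-ONLY fields above are populated by the service; on requests only the writable fields matter. A minimal sketch of building and marshalling the writable part (the to.Ptr helper from sdk/azcore/to is assumed, as in the generated example comments elsewhere in this doc):
```
package main
import (
"encoding/json"
"fmt"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
// Only writable fields are set; the service fills in the READ-ONLY ones.
props := armfeatures.SubscriptionFeatureRegistrationProperties{
Description: to.Ptr("Preview feature registration for my subscription"),
Metadata: map[string]*string{"team": to.Ptr("platform")},
ShouldFeatureDisplayInPortal: to.Ptr(false),
}
body, err := json.Marshal(props)
if err != nil {
log.Fatalf("failed to marshal properties: %v", err)
}
fmt.Println(string(body))
}
```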
####
func (SubscriptionFeatureRegistrationProperties) [MarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L424) [¶](#SubscriptionFeatureRegistrationProperties.MarshalJSON)
```
func (s [SubscriptionFeatureRegistrationProperties](#SubscriptionFeatureRegistrationProperties)) MarshalJSON() ([][byte](/builtin#byte), [error](/builtin#error))
```
MarshalJSON implements the json.Marshaller interface for type SubscriptionFeatureRegistrationProperties.
####
func (*SubscriptionFeatureRegistrationProperties) [UnmarshalJSON](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models_serde.go#L444) [¶](#SubscriptionFeatureRegistrationProperties.UnmarshalJSON)
```
func (s *[SubscriptionFeatureRegistrationProperties](#SubscriptionFeatureRegistrationProperties)) UnmarshalJSON(data [][byte](/builtin#byte)) [error](/builtin#error)
```
UnmarshalJSON implements the json.Unmarshaller interface for type SubscriptionFeatureRegistrationProperties.
####
type [SubscriptionFeatureRegistrationState](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/constants.go#L36) [¶](#SubscriptionFeatureRegistrationState)
```
type SubscriptionFeatureRegistrationState [string](/builtin#string)
```
SubscriptionFeatureRegistrationState - The state.
```
const (
SubscriptionFeatureRegistrationStateNotRegistered [SubscriptionFeatureRegistrationState](#SubscriptionFeatureRegistrationState) = "NotRegistered"
SubscriptionFeatureRegistrationStateNotSpecified [SubscriptionFeatureRegistrationState](#SubscriptionFeatureRegistrationState) = "NotSpecified"
SubscriptionFeatureRegistrationStatePending [SubscriptionFeatureRegistrationState](#SubscriptionFeatureRegistrationState) = "Pending"
SubscriptionFeatureRegistrationStateRegistered [SubscriptionFeatureRegistrationState](#SubscriptionFeatureRegistrationState) = "Registered"
SubscriptionFeatureRegistrationStateRegistering [SubscriptionFeatureRegistrationState](#SubscriptionFeatureRegistrationState) = "Registering"
SubscriptionFeatureRegistrationStateUnregistered [SubscriptionFeatureRegistrationState](#SubscriptionFeatureRegistrationState) = "Unregistered"
SubscriptionFeatureRegistrationStateUnregistering [SubscriptionFeatureRegistrationState](#SubscriptionFeatureRegistrationState) = "Unregistering"
)
```
####
func [PossibleSubscriptionFeatureRegistrationStateValues](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/constants.go#L49) [¶](#PossibleSubscriptionFeatureRegistrationStateValues)
```
func PossibleSubscriptionFeatureRegistrationStateValues() [][SubscriptionFeatureRegistrationState](#SubscriptionFeatureRegistrationState)
```
PossibleSubscriptionFeatureRegistrationStateValues returns the possible values for the SubscriptionFeatureRegistrationState const type.
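A minimal sketch of one way such a helper might be used, for example to validate a state value before sending it (the isKnownState helper is illustrative and not part of the SDK):
```
package main
import (
"fmt"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
// isKnownState reports whether s is one of the generated const values.
// Illustrative helper only; it is not part of the SDK.
func isKnownState(s armfeatures.SubscriptionFeatureRegistrationState) bool {
for _, v := range armfeatures.PossibleSubscriptionFeatureRegistrationStateValues() {
if v == s {
return true
}
}
return false
}
func main() {
fmt.Println(isKnownState(armfeatures.SubscriptionFeatureRegistrationStateRegistered)) // true
fmt.Println(isKnownState(armfeatures.SubscriptionFeatureRegistrationState("Bogus")))  // false
}
```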
####
type [SubscriptionFeatureRegistrationsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/subscriptionfeatureregistrations_client.go#L26) [¶](#SubscriptionFeatureRegistrationsClient)
```
type SubscriptionFeatureRegistrationsClient struct {
// contains filtered or unexported fields
}
```
SubscriptionFeatureRegistrationsClient contains the methods for the SubscriptionFeatureRegistrations group.
Don't use this type directly, use NewSubscriptionFeatureRegistrationsClient() instead.
####
func [NewSubscriptionFeatureRegistrationsClient](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/subscriptionfeatureregistrations_client.go#L35) [¶](#NewSubscriptionFeatureRegistrationsClient)
```
func NewSubscriptionFeatureRegistrationsClient(subscriptionID [string](/builtin#string), credential [azcore](/github.com/Azure/azure-sdk-for-go/sdk/azcore).[TokenCredential](/github.com/Azure/azure-sdk-for-go/sdk/azcore#TokenCredential), options *[arm](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm).[ClientOptions](/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm#ClientOptions)) (*[SubscriptionFeatureRegistrationsClient](#SubscriptionFeatureRegistrationsClient), [error](/builtin#error))
```
NewSubscriptionFeatureRegistrationsClient creates a new instance of SubscriptionFeatureRegistrationsClient with the specified values.
* subscriptionID - The Azure subscription ID.
* credential - used to authorize requests. Usually a credential from azidentity.
* options - pass nil to accept the default values.
####
func (*SubscriptionFeatureRegistrationsClient) [CreateOrUpdate](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/subscriptionfeatureregistrations_client.go#L55) [¶](#SubscriptionFeatureRegistrationsClient.CreateOrUpdate)
```
func (client *[SubscriptionFeatureRegistrationsClient](#SubscriptionFeatureRegistrationsClient)) CreateOrUpdate(ctx [context](/context).[Context](/context#Context), providerNamespace [string](/builtin#string), featureName [string](/builtin#string), options *[SubscriptionFeatureRegistrationsClientCreateOrUpdateOptions](#SubscriptionFeatureRegistrationsClientCreateOrUpdateOptions)) ([SubscriptionFeatureRegistrationsClientCreateOrUpdateResponse](#SubscriptionFeatureRegistrationsClientCreateOrUpdateResponse), [error](/builtin#error))
```
CreateOrUpdate - Create or update a feature registration.
If the operation fails it returns an *azcore.ResponseError type.
Generated from API version 2021-07-01
* providerNamespace - The provider namespace.
* featureName - The feature name.
* options - SubscriptionFeatureRegistrationsClientCreateOrUpdateOptions contains the optional parameters for the SubscriptionFeatureRegistrationsClient.CreateOrUpdate method.
Example [¶](#example-SubscriptionFeatureRegistrationsClient.CreateOrUpdate)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/0cc5e2efd6ffccf30e80d1e150b488dd87198b94/specification/resources/resource-manager/Microsoft.Features/stable/2021-07-01/examples/FeatureRegistration/SubscriptionFeatureRegistrationPUT.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armfeatures.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
res, err := clientFactory.NewSubscriptionFeatureRegistrationsClient().CreateOrUpdate(ctx, "subscriptionFeatureRegistrationGroupTestRG", "testFeature", &armfeatures.SubscriptionFeatureRegistrationsClientCreateOrUpdateOptions{SubscriptionFeatureRegistrationType: &armfeatures.SubscriptionFeatureRegistration{
Properties: &armfeatures.SubscriptionFeatureRegistrationProperties{},
},
})
if err != nil {
log.Fatalf("failed to finish the request: %v", err)
}
// You could use response here. We use blank identifier for just demo purposes.
_ = res
// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// res.SubscriptionFeatureRegistration = armfeatures.SubscriptionFeatureRegistration{
// Name: to.Ptr("testFeature"),
// Type: to.Ptr("Microsoft.Features/featureProviders/subscriptionFeatureRegistrations"),
// ID: to.Ptr("/subscriptions/00000000-1111-2222-3333-444444444444/providers/Microsoft.Features/featureProviders/Microsoft.TestRP/subscriptionFeatureRegistrations/testFeature"),
// Properties: &armfeatures.SubscriptionFeatureRegistrationProperties{
// ApprovalType: to.Ptr(armfeatures.SubscriptionFeatureRegistrationApprovalTypeApprovalRequired),
// AuthorizationProfile: &armfeatures.AuthorizationProfile{
// },
// FeatureName: to.Ptr("testFeature"),
// ProviderNamespace: to.Ptr("Microsoft.TestRP"),
// RegistrationDate: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2020-02-26T01:57:51.734777Z"); return t}()),
// ReleaseDate: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2019-11-05T00:34:53.1243228Z"); return t}()),
// State: to.Ptr(armfeatures.SubscriptionFeatureRegistrationStatePending),
// SubscriptionID: to.Ptr("00000000-1111-2222-3333-444444444444"),
// },
// }
}
```
####
func (*SubscriptionFeatureRegistrationsClient) [Delete](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/subscriptionfeatureregistrations_client.go#L116) [¶](#SubscriptionFeatureRegistrationsClient.Delete)
```
func (client *[SubscriptionFeatureRegistrationsClient](#SubscriptionFeatureRegistrationsClient)) Delete(ctx [context](/context).[Context](/context#Context), providerNamespace [string](/builtin#string), featureName [string](/builtin#string), options *[SubscriptionFeatureRegistrationsClientDeleteOptions](#SubscriptionFeatureRegistrationsClientDeleteOptions)) ([SubscriptionFeatureRegistrationsClientDeleteResponse](#SubscriptionFeatureRegistrationsClientDeleteResponse), [error](/builtin#error))
```
Delete - Deletes a feature registration. If the operation fails it returns an *azcore.ResponseError type.
Generated from API version 2021-07-01
* providerNamespace - The provider namespace.
* featureName - The feature name.
* options - SubscriptionFeatureRegistrationsClientDeleteOptions contains the optional parameters for the SubscriptionFeatureRegistrationsClient.Delete method.
Example [¶](#example-SubscriptionFeatureRegistrationsClient.Delete)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/0cc5e2efd6ffccf30e80d1e150b488dd87198b94/specification/resources/resource-manager/Microsoft.Features/stable/2021-07-01/examples/FeatureRegistration/SubscriptionFeatureRegistrationDELETE.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armfeatures.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
_, err = clientFactory.NewSubscriptionFeatureRegistrationsClient().Delete(ctx, "subscriptionFeatureRegistrationGroupTestRG", "testFeature", nil)
if err != nil {
log.Fatalf("failed to finish the request: %v", err)
}
}
```
####
func (*SubscriptionFeatureRegistrationsClient) [Get](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/subscriptionfeatureregistrations_client.go#L165) [¶](#SubscriptionFeatureRegistrationsClient.Get)
```
func (client *[SubscriptionFeatureRegistrationsClient](#SubscriptionFeatureRegistrationsClient)) Get(ctx [context](/context).[Context](/context#Context), providerNamespace [string](/builtin#string), featureName [string](/builtin#string), options *[SubscriptionFeatureRegistrationsClientGetOptions](#SubscriptionFeatureRegistrationsClientGetOptions)) ([SubscriptionFeatureRegistrationsClientGetResponse](#SubscriptionFeatureRegistrationsClientGetResponse), [error](/builtin#error))
```
Get - Returns a feature registration. If the operation fails it returns an *azcore.ResponseError type.
Generated from API version 2021-07-01
* providerNamespace - The provider namespace.
* featureName - The feature name.
* options - SubscriptionFeatureRegistrationsClientGetOptions contains the optional parameters for the SubscriptionFeatureRegistrationsClient.Get method.
Example [¶](#example-SubscriptionFeatureRegistrationsClient.Get)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/0cc5e2efd6ffccf30e80d1e150b488dd87198b94/specification/resources/resource-manager/Microsoft.Features/stable/2021-07-01/examples/FeatureRegistration/SubscriptionFeatureRegistrationGET.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armfeatures.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
res, err := clientFactory.NewSubscriptionFeatureRegistrationsClient().Get(ctx, "subscriptionFeatureRegistrationGroupTestRG", "testFeature", nil)
if err != nil {
log.Fatalf("failed to finish the request: %v", err)
}
// You could use response here. We use blank identifier for just demo purposes.
_ = res
// If the HTTP response code is 200 as defined in example definition, your response structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// res.SubscriptionFeatureRegistration = armfeatures.SubscriptionFeatureRegistration{
// Name: to.Ptr("testFeature"),
// Type: to.Ptr("Microsoft.Features/featureProviders/subscriptionFeatureRegistrations"),
// ID: to.Ptr("/subscriptions/00000000-1111-2222-3333-444444444444/providers/Microsoft.Features/featureProviders/Microsoft.TestRP/subscriptionFeatureRegistrations/testFeature"),
// Properties: &armfeatures.SubscriptionFeatureRegistrationProperties{
// ApprovalType: to.Ptr(armfeatures.SubscriptionFeatureRegistrationApprovalTypeApprovalRequired),
// AuthorizationProfile: &armfeatures.AuthorizationProfile{
// },
// FeatureName: to.Ptr("testFeature"),
// ProviderNamespace: to.Ptr("Microsoft.TestRP"),
// RegistrationDate: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2020-02-26T01:57:51.734777Z"); return t}()),
// ReleaseDate: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2019-11-05T00:34:53.1243228Z"); return t}()),
// State: to.Ptr(armfeatures.SubscriptionFeatureRegistrationStatePending),
// SubscriptionID: to.Ptr("00000000-1111-2222-3333-444444444444"),
// },
// }
}
```
####
func (*SubscriptionFeatureRegistrationsClient) [NewListAllBySubscriptionPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/subscriptionfeatureregistrations_client.go#L220) [¶](#SubscriptionFeatureRegistrationsClient.NewListAllBySubscriptionPager)
added in v0.4.0
```
func (client *[SubscriptionFeatureRegistrationsClient](#SubscriptionFeatureRegistrationsClient)) NewListAllBySubscriptionPager(options *[SubscriptionFeatureRegistrationsClientListAllBySubscriptionOptions](#SubscriptionFeatureRegistrationsClientListAllBySubscriptionOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[SubscriptionFeatureRegistrationsClientListAllBySubscriptionResponse](#SubscriptionFeatureRegistrationsClientListAllBySubscriptionResponse)]
```
NewListAllBySubscriptionPager - Returns subscription feature registrations for given subscription.
Generated from API version 2021-07-01
* options - SubscriptionFeatureRegistrationsClientListAllBySubscriptionOptions contains the optional parameters for the SubscriptionFeatureRegistrationsClient.NewListAllBySubscriptionPager method.
Example [¶](#example-SubscriptionFeatureRegistrationsClient.NewListAllBySubscriptionPager)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/0cc5e2efd6ffccf30e80d1e150b488dd87198b94/specification/resources/resource-manager/Microsoft.Features/stable/2021-07-01/examples/FeatureRegistration/SubscriptionFeatureRegistrationLISTALL.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armfeatures.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
pager := clientFactory.NewSubscriptionFeatureRegistrationsClient().NewListAllBySubscriptionPager(nil)
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
log.Fatalf("failed to advance page: %v", err)
}
for _, v := range page.Value {
// You could use page here. We use blank identifier for just demo purposes.
_ = v
}
// If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// page.SubscriptionFeatureRegistrationList = armfeatures.SubscriptionFeatureRegistrationList{
// Value: []*armfeatures.SubscriptionFeatureRegistration{
// {
// Name: to.Ptr("testFeature"),
// Type: to.Ptr("Microsoft.Features/featureProviders/subscriptionFeatureRegistrations"),
// ID: to.Ptr("/subscriptions/00000000-1111-2222-3333-444444444444/providers/Microsoft.Features/featureProviders/Microsoft.TestRP/subscriptionFeatureRegistrations/testFeature"),
// Properties: &armfeatures.SubscriptionFeatureRegistrationProperties{
// ApprovalType: to.Ptr(armfeatures.SubscriptionFeatureRegistrationApprovalTypeApprovalRequired),
// AuthorizationProfile: &armfeatures.AuthorizationProfile{
// },
// FeatureName: to.Ptr("testFeature"),
// ProviderNamespace: to.Ptr("Microsoft.TestRP"),
// RegistrationDate: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2020-02-26T01:57:51.734777Z"); return t}()),
// ReleaseDate: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2019-11-05T00:34:53.1243228Z"); return t}()),
// State: to.Ptr(armfeatures.SubscriptionFeatureRegistrationStatePending),
// SubscriptionID: to.Ptr("00000000-1111-2222-3333-444444444444"),
// },
// }},
// }
}
}
```
####
func (*SubscriptionFeatureRegistrationsClient) [NewListBySubscriptionPager](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/subscriptionfeatureregistrations_client.go#L281) [¶](#SubscriptionFeatureRegistrationsClient.NewListBySubscriptionPager)
added in v0.4.0
```
func (client *[SubscriptionFeatureRegistrationsClient](#SubscriptionFeatureRegistrationsClient)) NewListBySubscriptionPager(providerNamespace [string](/builtin#string), options *[SubscriptionFeatureRegistrationsClientListBySubscriptionOptions](#SubscriptionFeatureRegistrationsClientListBySubscriptionOptions)) *[runtime](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime).[Pager](/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime#Pager)[[SubscriptionFeatureRegistrationsClientListBySubscriptionResponse](#SubscriptionFeatureRegistrationsClientListBySubscriptionResponse)]
```
NewListBySubscriptionPager - Returns subscription feature registrations for given subscription and provider namespace.
Generated from API version 2021-07-01
* providerNamespace - The provider namespace.
* options - SubscriptionFeatureRegistrationsClientListBySubscriptionOptions contains the optional parameters for the SubscriptionFeatureRegistrationsClient.NewListBySubscriptionPager method.
Example [¶](#example-SubscriptionFeatureRegistrationsClient.NewListBySubscriptionPager)
Generated from example definition: <https://github.com/Azure/azure-rest-api-specs/blob/0cc5e2efd6ffccf30e80d1e150b488dd87198b94/specification/resources/resource-manager/Microsoft.Features/stable/2021-07-01/examples/FeatureRegistration/SubscriptionFeatureRegistrationLIST.json>
```
package main
import (
"context"
"log"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armfeatures"
)
func main() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
ctx := context.Background()
clientFactory, err := armfeatures.NewClientFactory("<subscription-id>", cred, nil)
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
pager := clientFactory.NewSubscriptionFeatureRegistrationsClient().NewListBySubscriptionPager("subscriptionFeatureRegistrationGroupTestRG", nil)
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
log.Fatalf("failed to advance page: %v", err)
}
for _, v := range page.Value {
// You could use page here. We use blank identifier for just demo purposes.
_ = v
}
// If the HTTP response code is 200 as defined in example definition, your page structure would look as follows. Please pay attention that all the values in the output are fake values for just demo purposes.
// page.SubscriptionFeatureRegistrationList = armfeatures.SubscriptionFeatureRegistrationList{
// Value: []*armfeatures.SubscriptionFeatureRegistration{
// {
// Name: to.Ptr("testFeature"),
// Type: to.Ptr("Microsoft.Features/featureProviders/subscriptionFeatureRegistrations"),
// ID: to.Ptr("/subscriptions/00000000-1111-2222-3333-444444444444/providers/Microsoft.Features/featureProviders/Microsoft.TestRP/subscriptionFeatureRegistrations/testFeature"),
// Properties: &armfeatures.SubscriptionFeatureRegistrationProperties{
// ApprovalType: to.Ptr(armfeatures.SubscriptionFeatureRegistrationApprovalTypeApprovalRequired),
// AuthorizationProfile: &armfeatures.AuthorizationProfile{
// },
// FeatureName: to.Ptr("testFeature"),
// ProviderNamespace: to.Ptr("Microsoft.TestRP"),
// RegistrationDate: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2020-02-26T01:57:51.734777Z"); return t}()),
// ReleaseDate: to.Ptr(func() time.Time { t, _ := time.Parse(time.RFC3339Nano, "2019-11-05T00:34:53.1243228Z"); return t}()),
// State: to.Ptr(armfeatures.SubscriptionFeatureRegistrationStatePending),
// SubscriptionID: to.Ptr("00000000-1111-2222-3333-444444444444"),
// },
// }},
// }
}
}
```
####
type [SubscriptionFeatureRegistrationsClientCreateOrUpdateOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L222) [¶](#SubscriptionFeatureRegistrationsClientCreateOrUpdateOptions)
added in v0.2.0
```
type SubscriptionFeatureRegistrationsClientCreateOrUpdateOptions struct {
// Subscription Feature Registration Type details.
SubscriptionFeatureRegistrationType *[SubscriptionFeatureRegistration](#SubscriptionFeatureRegistration)
}
```
SubscriptionFeatureRegistrationsClientCreateOrUpdateOptions contains the optional parameters for the SubscriptionFeatureRegistrationsClient.CreateOrUpdate method.
####
type [SubscriptionFeatureRegistrationsClientCreateOrUpdateResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/response_types.go#L43) [¶](#SubscriptionFeatureRegistrationsClientCreateOrUpdateResponse)
added in v0.2.0
```
type SubscriptionFeatureRegistrationsClientCreateOrUpdateResponse struct {
[SubscriptionFeatureRegistration](#SubscriptionFeatureRegistration)
}
```
SubscriptionFeatureRegistrationsClientCreateOrUpdateResponse contains the response from method SubscriptionFeatureRegistrationsClient.CreateOrUpdate.
####
type [SubscriptionFeatureRegistrationsClientDeleteOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L229) [¶](#SubscriptionFeatureRegistrationsClientDeleteOptions)
added in v0.2.0
```
type SubscriptionFeatureRegistrationsClientDeleteOptions struct {
}
```
SubscriptionFeatureRegistrationsClientDeleteOptions contains the optional parameters for the SubscriptionFeatureRegistrationsClient.Delete method.
####
type [SubscriptionFeatureRegistrationsClientDeleteResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/response_types.go#L48) [¶](#SubscriptionFeatureRegistrationsClientDeleteResponse)
added in v0.2.0
```
type SubscriptionFeatureRegistrationsClientDeleteResponse struct {
}
```
SubscriptionFeatureRegistrationsClientDeleteResponse contains the response from method SubscriptionFeatureRegistrationsClient.Delete.
####
type [SubscriptionFeatureRegistrationsClientGetOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L235) [¶](#SubscriptionFeatureRegistrationsClientGetOptions)
added in v0.2.0
```
type SubscriptionFeatureRegistrationsClientGetOptions struct {
}
```
SubscriptionFeatureRegistrationsClientGetOptions contains the optional parameters for the SubscriptionFeatureRegistrationsClient.Get method.
####
type [SubscriptionFeatureRegistrationsClientGetResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/response_types.go#L53) [¶](#SubscriptionFeatureRegistrationsClientGetResponse)
added in v0.2.0
```
type SubscriptionFeatureRegistrationsClientGetResponse struct {
[SubscriptionFeatureRegistration](#SubscriptionFeatureRegistration)
}
```
SubscriptionFeatureRegistrationsClientGetResponse contains the response from method SubscriptionFeatureRegistrationsClient.Get.
####
type [SubscriptionFeatureRegistrationsClientListAllBySubscriptionOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L241) [¶](#SubscriptionFeatureRegistrationsClientListAllBySubscriptionOptions)
added in v0.2.0
```
type SubscriptionFeatureRegistrationsClientListAllBySubscriptionOptions struct {
}
```
SubscriptionFeatureRegistrationsClientListAllBySubscriptionOptions contains the optional parameters for the SubscriptionFeatureRegistrationsClient.NewListAllBySubscriptionPager method.
####
type [SubscriptionFeatureRegistrationsClientListAllBySubscriptionResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/response_types.go#L58) [¶](#SubscriptionFeatureRegistrationsClientListAllBySubscriptionResponse)
added in v0.2.0
```
type SubscriptionFeatureRegistrationsClientListAllBySubscriptionResponse struct {
[SubscriptionFeatureRegistrationList](#SubscriptionFeatureRegistrationList)
}
```
SubscriptionFeatureRegistrationsClientListAllBySubscriptionResponse contains the response from method SubscriptionFeatureRegistrationsClient.NewListAllBySubscriptionPager.
####
type [SubscriptionFeatureRegistrationsClientListBySubscriptionOptions](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/models.go#L247) [¶](#SubscriptionFeatureRegistrationsClientListBySubscriptionOptions)
added in v0.2.0
```
type SubscriptionFeatureRegistrationsClientListBySubscriptionOptions struct {
}
```
SubscriptionFeatureRegistrationsClientListBySubscriptionOptions contains the optional parameters for the SubscriptionFeatureRegistrationsClient.NewListBySubscriptionPager method.
####
type [SubscriptionFeatureRegistrationsClientListBySubscriptionResponse](https://github.com/Azure/azure-sdk-for-go/blob/sdk/resourcemanager/resources/armfeatures/v1.1.0/sdk/resourcemanager/resources/armfeatures/response_types.go#L63) [¶](#SubscriptionFeatureRegistrationsClientListBySubscriptionResponse)
added in v0.2.0
```
type SubscriptionFeatureRegistrationsClientListBySubscriptionResponse struct {
[SubscriptionFeatureRegistrationList](#SubscriptionFeatureRegistrationList)
}
```
SubscriptionFeatureRegistrationsClientListBySubscriptionResponse contains the response from method SubscriptionFeatureRegistrationsClient.NewListBySubscriptionPager.
github.com/studio-b12/gowebdav
README
[¶](#section-readme)
---
### GoWebDAV
[![Unit Tests Status](https://github.com/studio-b12/gowebdav/actions/workflows/tests.yml/badge.svg)](https://github.com/studio-b12/gowebdav/actions/workflows/tests.yml)
[![Build Artifacts Status](https://github.com/studio-b12/gowebdav/actions/workflows/artifacts.yml/badge.svg)](https://github.com/studio-b12/gowebdav/actions/workflows/artifacts.yml)
[![GoDoc](https://godoc.org/github.com/studio-b12/gowebdav?status.svg)](https://godoc.org/github.com/studio-b12/gowebdav)
[![Go Report Card](https://goreportcard.com/badge/github.com/studio-b12/gowebdav)](https://goreportcard.com/report/github.com/studio-b12/gowebdav)
A pure Golang WebDAV client library that comes with a [reference implementation](https://github.com/studio-b12/gowebdav/tree/master/cmd/gowebdav).
#### Features at a glance
Our `gowebdav` library allows you to perform the following actions on a remote WebDAV server:
* [create path](#readme-create-path-on-a-webdav-server)
* [get files list](#readme-get-files-list)
* [download file](#readme-download-file-to-byte-array)
* [upload file](#readme-upload-file-from-byte-array)
* [get information about specified file/folder](#readme-get-information-about-specified-filefolder)
* [move file to another location](#readme-move-file-to-another-location)
* [copy file to another location](#readme-copy-file-to-another-location)
* [delete file](#readme-delete-file)
It also provides an [authentication API](#readme-type-authenticator) that makes it easy to encapsulate and control complex authentication challenges.
The default implementation negotiates the algorithm based on the user's preferences and the methods offered by the remote server.
Out-of-box authentication support for:
* [BasicAuth](https://en.wikipedia.org/wiki/Basic_access_authentication)
* [DigestAuth](https://en.wikipedia.org/wiki/Digest_access_authentication)
* [MS-PASS](https://github.com/studio-b12/gowebdav/pull/70#issuecomment-1421713726)
* [WIP Kerberos](https://github.com/studio-b12/gowebdav/pull/71#issuecomment-1416465334)
* [WIP Bearer Token](https://github.com/studio-b12/gowebdav/issues/61)
#### Usage
First of all, you should create a `Client` instance using the `NewClient()` function:
```
root := "https://webdav.mydomain.me"
user := "user"
password := "password"
c := gowebdav.NewClient(root, user, password)
c.Connect()
// kick off your work!
```
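If you need explicit control over authentication (see the authentication API above), you can also build the client from an `Authorizer`. A minimal sketch using `NewAutoAuth` and `NewAuthClient` from the API index below:
```
root := "https://webdav.mydomain.me"
// NewAutoAuth negotiates the scheme (Basic, Digest, ...) based on the server's challenge.
auth := gowebdav.NewAutoAuth("user", "password")
c := gowebdav.NewAuthClient(root, auth)
if err := c.Connect(); err != nil {
	// handle the error in your code
}
```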
After that, you can use this `Client` to perform the actions described below.
**NOTICE:** We do not check for errors in the examples, to keep the focus on the `gowebdav` library's code, but you should do so in your own code!
##### Create path on a WebDAV server
```
err := c.Mkdir("folder", 0644)
```
In case you want to create several folders you can use `c.MkdirAll()`:
```
err := c.MkdirAll("folder/subfolder/subfolder2", 0644)
```
##### Get files list
```
files, _ := c.ReadDir("folder/subfolder")
for _, file := range files {
//notice that [file] has os.FileInfo type
fmt.Println(file.Name())
}
```
##### Download file to byte array
```
webdavFilePath := "folder/subfolder/file.txt"
localFilePath := "/tmp/webdav/file.txt"
bytes, _ := c.Read(webdavFilePath)
os.WriteFile(localFilePath, bytes, 0644)
```
##### Download file via reader
You can also use the `c.ReadStream()` method:
```
webdavFilePath := "folder/subfolder/file.txt"
localFilePath := "/tmp/webdav/file.txt"
reader, _ := c.ReadStream(webdavFilePath)
file, _ := os.Create(localFilePath)
defer file.Close()
io.Copy(file, reader)
```
##### Upload file from byte array
```
webdavFilePath := "folder/subfolder/file.txt"
localFilePath := "/tmp/webdav/file.txt"
bytes, _ := os.ReadFile(localFilePath)
c.Write(webdavFilePath, bytes, 0644)
```
##### Upload file via writer
```
webdavFilePath := "folder/subfolder/file.txt"
localFilePath := "/tmp/webdav/file.txt"
file, _ := os.Open(localFilePath)
defer file.Close()
c.WriteStream(webdavFilePath, file, 0644)
```
##### Get information about specified file/folder
```
webdavFilePath := "folder/subfolder/file.txt"
info, _ := c.Stat(webdavFilePath)
//notice that [info] has os.FileInfo type
fmt.Println(info)
```
##### Move file to another location
```
oldPath := "folder/subfolder/file.txt"
newPath := "folder/subfolder/moved.txt"
isOverwrite := true
c.Rename(oldPath, newPath, isOverwrite)
```
##### Copy file to another location
```
oldPath := "folder/subfolder/file.txt"
newPath := "folder/subfolder/file-copy.txt"
isOverwrite := true
c.Copy(oldPath, newPath, isOverwrite)
```
##### Delete file
```
webdavFilePath := "folder/subfolder/file.txt"
c.Remove(webdavFilePath)
```
#### Links
You can read more details about WebDAV servers in the following resources:
* [RFC 4918 - HTTP Extensions for Web Distributed Authoring and Versioning (WebDAV)](https://tools.ietf.org/html/rfc4918)
* [RFC 5689 - Extended MKCOL for Web Distributed Authoring and Versioning (WebDAV)](https://tools.ietf.org/html/rfc5689)
* [RFC 2616 - HTTP/1.1 Status Code Definitions](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html "HTTP/1.1 Status Code Definitions")
* [WebDav: Next Generation Collaborative Web Authoring By <NAME>](https://books.google.de/books?isbn=0130652083)
**NOTICE:** RFC 2518 was obsoleted by RFC 4918 in June 2007.
#### Contributing
All contributions are welcome. If you have any suggestions or find a bug, please create an issue to help us make this project better. We appreciate your help!
#### License
This library is distributed under the BSD 3-Clause license found in the [LICENSE](https://github.com/studio-b12/gowebdav/raw/master/LICENSE) file.
#### API
`import "github.com/studio-b12/gowebdav"`
* [Overview](#readme-pkg-overview)
* [Index](#readme-pkg-index)
* [Examples](#readme-pkg-examples)
* [Subdirectories](#readme-pkg-subdirectories)
##### Overview
Package gowebdav is a WebDAV client library with a command line tool included.
##### Index
* [Constants](#readme-pkg-constants)
* [Variables](#readme-pkg-variables)
* [func FixSlash(s string) string](#readme-FixSlash)
* [func FixSlashes(s string) string](#readme-FixSlashes)
* [func IsErrCode(err error, code int) bool](#readme-IsErrCode)
* [func IsErrNotFound(err error) bool](#readme-IsErrNotFound)
* [func Join(path0 string, path1 string) string](#readme-Join)
* [func NewPathError(op string, path string, statusCode int) error](#readme-NewPathError)
* [func NewPathErrorErr(op string, path string, err error) error](#readme-NewPathErrorErr)
* [func PathEscape(path string) string](#readme-PathEscape)
* [func ReadConfig(uri, netrc string) (string, string)](#readme-ReadConfig)
* [func String(r io.Reader) string](#readme-String)
* [type AuthFactory](#readme-AuthFactory)
* [type Authenticator](#readme-Authenticator)
+ [func NewDigestAuth(login, secret string, rs *http.Response) (Authenticator, error)](#readme-NewDigestAuth)
+ [func NewPassportAuth(c *http.Client, user, pw, partnerURL string, header *http.Header) (Authenticator, error)](#readme-NewPassportAuth)
* [type Authorizer](#readme-Authorizer)
+ [func NewAutoAuth(login string, secret string) Authorizer](#readme-NewAutoAuth)
+ [func NewEmptyAuth() Authorizer](#readme-NewEmptyAuth)
+ [func NewPreemptiveAuth(auth Authenticator) Authorizer](#readme-NewPreemptiveAuth)
* [type BasicAuth](#readme-BasicAuth)
+ [func (b *BasicAuth) Authorize(c *http.Client, rq *http.Request, path string) error](#readme-BasicAuth.Authorize)
+ [func (b *BasicAuth) Clone() Authenticator](#readme-BasicAuth.Clone)
+ [func (b *BasicAuth) Close() error](#readme-BasicAuth.Close)
+ [func (b *BasicAuth) String() string](#readme-BasicAuth.String)
+ [func (b *BasicAuth) Verify(c *http.Client, rs *http.Response, path string) (redo bool, err error)](#readme-BasicAuth.Verify)
* [type Client](#readme-Client)
+ [func NewAuthClient(uri string, auth Authorizer) *Client](#readme-NewAuthClient)
+ [func NewClient(uri, user, pw string) *Client](#readme-NewClient)
+ [func (c *Client) Connect() error](#readme-Client.Connect)
+ [func (c *Client) Copy(oldpath, newpath string, overwrite bool) error](#readme-Client.Copy)
+ [func (c *Client) Mkdir(path string, _ os.FileMode) (err error)](#readme-Client.Mkdir)
+ [func (c *Client) MkdirAll(path string, _ os.FileMode) (err error)](#readme-Client.MkdirAll)
+ [func (c *Client) Read(path string) ([]byte, error)](#readme-Client.Read)
+ [func (c *Client) ReadDir(path string) ([]os.FileInfo, error)](#readme-Client.ReadDir)
+ [func (c *Client) ReadStream(path string) (io.ReadCloser, error)](#readme-Client.ReadStream)
+ [func (c *Client) ReadStreamRange(path string, offset, length int64) (io.ReadCloser, error)](#readme-Client.ReadStreamRange)
+ [func (c *Client) Remove(path string) error](#readme-Client.Remove)
+ [func (c *Client) RemoveAll(path string) error](#readme-Client.RemoveAll)
+ [func (c *Client) Rename(oldpath, newpath string, overwrite bool) error](#readme-Client.Rename)
+ [func (c *Client) SetHeader(key, value string)](#readme-Client.SetHeader)
+ [func (c *Client) SetInterceptor(interceptor func(method string, rq *http.Request))](#readme-Client.SetInterceptor)
+ [func (c *Client) SetJar(jar http.CookieJar)](#readme-Client.SetJar)
+ [func (c *Client) SetTimeout(timeout time.Duration)](#readme-Client.SetTimeout)
+ [func (c *Client) SetTransport(transport http.RoundTripper)](#readme-Client.SetTransport)
+ [func (c *Client) Stat(path string) (os.FileInfo, error)](#readme-Client.Stat)
+ [func (c *Client) Write(path string, data []byte, _ os.FileMode) (err error)](#readme-Client.Write)
+ [func (c *Client) WriteStream(path string, stream io.Reader, _ os.FileMode) (err error)](#readme-Client.WriteStream)
* [type DigestAuth](#readme-DigestAuth)
+ [func (d *DigestAuth) Authorize(c *http.Client, rq *http.Request, path string) error](#readme-DigestAuth.Authorize)
+ [func (d *DigestAuth) Clone() Authenticator](#readme-DigestAuth.Clone)
+ [func (d *DigestAuth) Close() error](#readme-DigestAuth.Close)
+ [func (d *DigestAuth) String() string](#readme-DigestAuth.String)
+ [func (d *DigestAuth) Verify(c *http.Client, rs *http.Response, path string) (redo bool, err error)](#readme-DigestAuth.Verify)
* [type File](#readme-File)
+ [func (f File) ContentType() string](#readme-File.ContentType)
+ [func (f File) ETag() string](#readme-File.ETag)
+ [func (f File) IsDir() bool](#readme-File.IsDir)
+ [func (f File) ModTime() time.Time](#readme-File.ModTime)
+ [func (f File) Mode() os.FileMode](#readme-File.Mode)
+ [func (f File) Name() string](#readme-File.Name)
+ [func (f File) Path() string](#readme-File.Path)
+ [func (f File) Size() int64](#readme-File.Size)
+ [func (f File) String() string](#readme-File.String)
+ [func (f File) Sys() interface{}](#readme-File.Sys)
* [type PassportAuth](#readme-PassportAuth)
+ [func (p *PassportAuth) Authorize(c *http.Client, rq *http.Request, path string) error](#readme-PassportAuth.Authorize)
+ [func (p *PassportAuth) Clone() Authenticator](#readme-PassportAuth.Clone)
+ [func (p *PassportAuth) Close() error](#readme-PassportAuth.Close)
+ [func (p *PassportAuth) String() string](#readme-PassportAuth.String)
+ [func (p *PassportAuth) Verify(c *http.Client, rs *http.Response, path string) (redo bool, err error)](#readme-PassportAuth.Verify)
* [type StatusError](#readme-StatusError)
+ [func (se StatusError) Error() string](#readme-StatusError.Error)
##### Examples
* [PathEscape](#readme-example_PathEscape)
##### Package files
[auth.go](https://github.com/studio-b12/gowebdav/raw/master/auth.go) [basicAuth.go](https://github.com/studio-b12/gowebdav/raw/master/basicAuth.go) [client.go](https://github.com/studio-b12/gowebdav/raw/master/client.go) [digestAuth.go](https://github.com/studio-b12/gowebdav/raw/master/digestAuth.go) [doc.go](https://github.com/studio-b12/gowebdav/raw/master/doc.go) [errors.go](https://github.com/studio-b12/gowebdav/raw/master/errors.go) [file.go](https://github.com/studio-b12/gowebdav/raw/master/file.go) [netrc.go](https://github.com/studio-b12/gowebdav/raw/master/netrc.go) [passportAuth.go](https://github.com/studio-b12/gowebdav/raw/master/passportAuth.go) [requests.go](https://github.com/studio-b12/gowebdav/raw/master/requests.go) [utils.go](https://github.com/studio-b12/gowebdav/raw/master/utils.go)
##### Constants
```
const XInhibitRedirect = "X-Gowebdav-Inhibit-Redirect"
```
##### Variables
```
var ErrAuthChanged = errors.New("authentication failed, change algorithm")
```
ErrAuthChanged must be returned from the Verify method as an error to trigger a re-authentication / negotiation with a new authenticator.
```
var ErrTooManyRedirects = errors.New("stopped after 10 redirects")
```
ErrTooManyRedirects will be used as return error if a request exceeds 10 redirects.
##### func [FixSlash](https://github.com/studio-b12/gowebdav/raw/master/utils.go?s=354:384#L23)
```
func FixSlash(s string) string
```
FixSlash appends a trailing / to our string
##### func [FixSlashes](https://github.com/studio-b12/gowebdav/raw/master/utils.go?s=506:538#L31)
```
func FixSlashes(s string) string
```
FixSlashes appends and prepends a / if they are missing
##### func [IsErrCode](https://github.com/studio-b12/gowebdav/raw/master/errors.go?s=740:780#L29)
```
func IsErrCode(err error, code int) bool
```
IsErrCode returns true if the given error is an os.PathError wrapping a StatusError with the given status code.
##### func [IsErrNotFound](https://github.com/studio-b12/gowebdav/raw/master/errors.go?s=972:1006#L39)
```
func IsErrNotFound(err error) bool
```
IsErrNotFound is shorthand for IsErrCode for status 404.
##### func [Join](https://github.com/studio-b12/gowebdav/raw/master/utils.go?s=639:683#L40)
```
func Join(path0 string, path1 string) string
```
Join joins two paths
##### func [NewPathError](https://github.com/studio-b12/gowebdav/raw/master/errors.go?s=1040:1103#L43)
```
func NewPathError(op string, path string, statusCode int) error
```
##### func [NewPathErrorErr](https://github.com/studio-b12/gowebdav/raw/master/errors.go?s=1194:1255#L51)
```
func NewPathErrorErr(op string, path string, err error) error
```
##### func [PathEscape](https://github.com/studio-b12/gowebdav/raw/master/utils.go?s=153:188#L14)
```
func PathEscape(path string) string
```
PathEscape escapes all segments of a given path
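The package documentation ships a runnable example for `PathEscape`, reproduced here:
```
fmt.Println(PathEscape(""))
fmt.Println(PathEscape("/"))
fmt.Println(PathEscape("/web"))
fmt.Println(PathEscape("/web/"))
fmt.Println(PathEscape("/w e b/d a v/s%u&c#k:s/"))

// Output:
//
// /
// /web
// /web/
// /w%20e%20b/d%20a%20v/s%25u&c%23k:s/
```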
##### func [ReadConfig](https://github.com/studio-b12/gowebdav/raw/master/netrc.go?s=428:479#L27)
```
func ReadConfig(uri, netrc string) (string, string)
```
ReadConfig reads login and password configuration from a netrc file (e.g. `~/.netrc`) containing lines of the form `machine foo.com login username password 123456`.
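As a sketch (the netrc path and server URL below are placeholders), credentials read this way can be fed straight into `NewClient`:
```
root := "https://webdav.mydomain.me"
user, password := gowebdav.ReadConfig(root, "/home/user/.netrc")
c := gowebdav.NewClient(root, user, password)
```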
##### func [String](https://github.com/studio-b12/gowebdav/raw/master/utils.go?s=813:844#L45)
```
func String(r io.Reader) string
```
String pulls a string out of our io.Reader
##### type [AuthFactory](https://github.com/studio-b12/gowebdav/raw/master/auth.go?s=150:251#L13)
```
type AuthFactory func(c *http.Client, rs *http.Response, path string) (auth Authenticator, err error)
```
AuthFactory prototype function to create a new Authenticator
##### type [Authenticator](https://github.com/studio-b12/gowebdav/raw/master/auth.go?s=2155:2695#L56)
```
type Authenticator interface {
// Authorizes a request. Usually by adding some authorization headers.
Authorize(c *http.Client, rq *http.Request, path string) error
// Verifies the response if the authorization was successful.
// May trigger some round trips to pass the authentication.
// May also trigger a new Authenticator negotiation by returning `ErrAuthChenged`
Verify(c *http.Client, rs *http.Response, path string) (redo bool, err error)
// Creates a copy of the underlying Authenticator.
Clone() Authenticator
io.Closer
}
```
An Authenticator implements a specific way to authorize requests.
Each request is bound to a separate Authenticator instance.
The authentication flow itself is broken down into `Authorize`
and `Verify` steps. The former method runs before, and the latter runs after, the `Request` is submitted.
This makes it easy to encapsulate and control complex authentication challenges.
Some authentication flows require additional authentication round trips,
which can be achieved by returning `redo = true` from the `Verify` method: a new `Request` is spawned, which must be authorized, sent, and verified again, until the action is successfully submitted.
The preferred way is to handle the authentication ping-pong within `Verify`, and then `redo` with fresh credentials.
The result of the `Verify` method can also trigger an
`Authenticator` change by returning `ErrAuthChanged`
as an error. Depending on the `Authorizer`, this may trigger an `Authenticator` negotiation.
Set the `XInhibitRedirect` header to '1' in the `Authorize`
method to get control over request redirection.
Attention! You must handle the incoming request yourself.
To store shared session state, the `Clone` method **must**
return a new instance, initialized with the shared state.
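To make the contract concrete, here is a minimal sketch of a custom bearer-token Authenticator; the `BearerAuth` type is purely illustrative and not part of the package:
```
// BearerAuth is a hypothetical Authenticator that adds a static bearer token.
type BearerAuth struct{ token string }

func (b *BearerAuth) Authorize(c *http.Client, rq *http.Request, path string) error {
	rq.Header.Set("Authorization", "Bearer "+b.token)
	return nil
}

func (b *BearerAuth) Verify(c *http.Client, rs *http.Response, path string) (redo bool, err error) {
	if rs.StatusCode == http.StatusUnauthorized {
		// a static token cannot be refreshed, so give up instead of redoing
		return false, gowebdav.NewPathError("Authorize", path, rs.StatusCode)
	}
	return false, nil
}

func (b *BearerAuth) Clone() gowebdav.Authenticator { return &BearerAuth{token: b.token} }
func (b *BearerAuth) Close() error                  { return nil }
```
Such an Authenticator could then be wired into a client with `gowebdav.NewAuthClient(root, gowebdav.NewPreemptiveAuth(&BearerAuth{token: "..."}))`.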
###### func [NewDigestAuth](https://github.com/studio-b12/gowebdav/raw/master/digestAuth.go?s=324:406#L21)
```
func NewDigestAuth(login, secret string, rs *http.Response) (Authenticator, error)
```
NewDigestAuth creates a new instance of our Digest Authenticator
###### func [NewPassportAuth](https://github.com/studio-b12/gowebdav/raw/master/passportAuth.go?s=386:495#L21)
```
func NewPassportAuth(c *http.Client, user, pw, partnerURL string, header *http.Header) (Authenticator, error)
```
NewPassportAuth is the constructor for PassportAuth; it creates a new PassportAuth object and automatically authenticates against the given partnerURL.
##### type [Authorizer](https://github.com/studio-b12/gowebdav/raw/master/auth.go?s=349:764#L17)
```
type Authorizer interface {
// Creates a new Authenticator Shim per request.
// It may track request related states and perform payload buffering
// for authentication round trips.
// The underlying Authenticator will perform the real authentication.
NewAuthenticator(body io.Reader) (Authenticator, io.Reader)
// Registers a new Authenticator factory to a key.
AddAuthenticator(key string, fn AuthFactory)
}
```
Authorizer is our Authenticator factory which creates an
`Authenticator` per action/request.
###### func [NewAutoAuth](https://github.com/studio-b12/gowebdav/raw/master/auth.go?s=3789:3845#L109)
```
func NewAutoAuth(login string, secret string) Authorizer
```
NewAutoAuth creates an auto Authenticator factory.
It negotiates the default authentication method based on the order of the registered Authenticators and the remotely offered authentication methods.
First In, First Out.
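For example (URL and credentials below are placeholders), the auto-negotiating authorizer can be wired up explicitly; this is essentially what `NewClient` does for you:
```
auth := gowebdav.NewAutoAuth("user", "password")
c := gowebdav.NewAuthClient("https://webdav.mydomain.me", auth)
if err := c.Connect(); err != nil {
	log.Fatal(err)
}
```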
###### func [NewEmptyAuth](https://github.com/studio-b12/gowebdav/raw/master/auth.go?s=4694:4724#L132)
```
func NewEmptyAuth() Authorizer
```
NewEmptyAuth creates an empty Authenticator factory. The order in which Authenticators are added matters:
First In, First Out.
Apart from that, it offers the same features as `NewAutoAuth`.
###### func [NewPreemptiveAuth](https://github.com/studio-b12/gowebdav/raw/master/auth.go?s=5300:5353#L148)
```
func NewPreemptiveAuth(auth Authenticator) Authorizer
```
NewPreemptiveAuth creates a preemptive Authorizer. The preemptive authorizer uses the provided Authenticator for every request, regardless of any `Www-Authenticate` header.
It supports only one authentication method,
so calling `AddAuthenticator` **will panic**!
Note that this is the leanest implementation and performs no synchronisation;
it is still safe to use with `BasicAuth` across goroutines.
##### type [BasicAuth](https://github.com/studio-b12/gowebdav/raw/master/basicAuth.go?s=94:145#L9)
```
type BasicAuth struct {
// contains filtered or unexported fields
}
```
BasicAuth structure holds our credentials
###### func (*BasicAuth) [Authorize](https://github.com/studio-b12/gowebdav/raw/master/basicAuth.go?s=180:262#L15)
```
func (b *BasicAuth) Authorize(c *http.Client, rq *http.Request, path string) error
```
Authorize the current request
###### func (*BasicAuth) [Clone](https://github.com/studio-b12/gowebdav/raw/master/basicAuth.go?s=666:707#L34)
```
func (b *BasicAuth) Clone() Authenticator
```
Clone creates a Copy of itself
###### func (*BasicAuth) [Close](https://github.com/studio-b12/gowebdav/raw/master/basicAuth.go?s=581:614#L29)
```
func (b *BasicAuth) Close() error
```
Close cleans up all resources
###### func (*BasicAuth) [String](https://github.com/studio-b12/gowebdav/raw/master/basicAuth.go?s=778:813#L40)
```
func (b *BasicAuth) String() string
```
String toString
###### func (*BasicAuth) [Verify](https://github.com/studio-b12/gowebdav/raw/master/basicAuth.go?s=352:449#L21)
```
func (b *BasicAuth) Verify(c *http.Client, rs *http.Response, path string) (redo bool, err error)
```
Verify verifies if the authentication was successful
##### type [Client](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=220:388#L19)
```
type Client struct {
// contains filtered or unexported fields
}
```
Client defines our structure
###### func [NewAuthClient](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=608:663#L33)
```
func NewAuthClient(uri string, auth Authorizer) *Client
```
NewAuthClient creates a new client instance with a custom Authorizer
###### func [NewClient](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=436:480#L28)
```
func NewClient(uri, user, pw string) *Client
```
NewClient creates a new instance of client
###### func (*Client) [Connect](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=1829:1861#L74)
```
func (c *Client) Connect() error
```
Connect connects to our dav server
###### func (*Client) [Copy](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=6815:6883#L310)
```
func (c *Client) Copy(oldpath, newpath string, overwrite bool) error
```
Copy copies a file from A to B
###### func (*Client) [Mkdir](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=5790:5852#L259)
```
func (c *Client) Mkdir(path string, _ os.FileMode) (err error)
```
Mkdir makes a directory
###### func (*Client) [MkdirAll](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=6065:6130#L273)
```
func (c *Client) MkdirAll(path string, _ os.FileMode) (err error)
```
MkdirAll like mkdir -p, but for webdav
###### func (*Client) [Read](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=6989:7039#L315)
```
func (c *Client) Read(path string) ([]byte, error)
```
Read reads the contents of a remote file
###### func (*Client) [ReadDir](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=2855:2915#L117)
```
func (c *Client) ReadDir(path string) ([]os.FileInfo, error)
```
ReadDir reads the contents of a remote directory
###### func (*Client) [ReadStream](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=7350:7413#L333)
```
func (c *Client) ReadStream(path string) (io.ReadCloser, error)
```
ReadStream reads the stream for a given path
###### func (*Client) [ReadStreamRange](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=8162:8252#L355)
```
func (c *Client) ReadStreamRange(path string, offset, length int64) (io.ReadCloser, error)
```
ReadStreamRange reads the stream representing a subset of bytes for a given path,
utilizing HTTP Range Requests if the server supports it.
The range is expressed as offset from the start of the file and length, for example offset=10, length=10 will return bytes 10 through 19.
If the server does not support partial content requests and returns full content instead,
this function will emulate the behavior by skipping `offset` bytes and limiting the result to `length`.
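A short sketch of a ranged read (path and range are illustrative; `io.ReadAll` and `fmt` are assumed to be imported):
```
// read bytes 10 through 19 of the remote file
reader, _ := c.ReadStreamRange("folder/subfolder/file.txt", 10, 10)
defer reader.Close()
part, _ := io.ReadAll(reader)
fmt.Printf("read %d bytes\n", len(part))
```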
###### func (*Client) [Remove](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=5296:5338#L236)
```
func (c *Client) Remove(path string) error
```
Remove removes a remote file
###### func (*Client) [RemoveAll](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=5404:5449#L241)
```
func (c *Client) RemoveAll(path string) error
```
RemoveAll removes remote files
###### func (*Client) [Rename](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=6649:6719#L305)
```
func (c *Client) Rename(oldpath, newpath string, overwrite bool) error
```
Rename moves a file from A to B
###### func (*Client) [SetHeader](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=1092:1137#L49)
```
func (c *Client) SetHeader(key, value string)
```
SetHeader lets us set arbitrary headers for a given client
###### func (*Client) [SetInterceptor](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=1244:1326#L54)
```
func (c *Client) SetInterceptor(interceptor func(method string, rq *http.Request))
```
SetInterceptor lets us set an arbitrary interceptor for a given client
###### func (*Client) [SetJar](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=1727:1770#L69)
```
func (c *Client) SetJar(jar http.CookieJar)
```
SetJar exposes the ability to set a cookie jar to the client.
###### func (*Client) [SetTimeout](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=1428:1478#L59)
```
func (c *Client) SetTimeout(timeout time.Duration)
```
SetTimeout exposes the ability to set a time limit for requests
###### func (*Client) [SetTransport](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=1571:1629#L64)
```
func (c *Client) SetTransport(transport http.RoundTripper)
```
SetTransport exposes the ability to define custom transports
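Putting a few of these setters together, a client might be configured like this (all values are illustrative):
```
c := gowebdav.NewClient("https://webdav.mydomain.me", "user", "password")
c.SetTimeout(30 * time.Second)
c.SetHeader("X-Custom-Header", "value")
c.SetTransport(&http.Transport{MaxIdleConns: 10})
c.SetInterceptor(func(method string, rq *http.Request) {
	// inspect or modify every outgoing request here
})
```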
###### func (*Client) [Stat](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=4241:4296#L184)
```
func (c *Client) Stat(path string) (os.FileInfo, error)
```
Stat returns the file stats for a specified path
###### func (*Client) [Write](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=9272:9347#L389)
```
func (c *Client) Write(path string, data []byte, _ os.FileMode) (err error)
```
Write writes data to a given path
###### func (*Client) [WriteStream](https://github.com/studio-b12/gowebdav/raw/master/client.go?s=9771:9857#L419)
```
func (c *Client) WriteStream(path string, stream io.Reader, _ os.FileMode) (err error)
```
WriteStream writes a stream
##### type [DigestAuth](https://github.com/studio-b12/gowebdav/raw/master/digestAuth.go?s=157:254#L14)
```
type DigestAuth struct {
// contains filtered or unexported fields
}
```
DigestAuth structure holds our credentials
###### func (*DigestAuth) [Authorize](https://github.com/studio-b12/gowebdav/raw/master/digestAuth.go?s=525:608#L26)
```
func (d *DigestAuth) Authorize(c *http.Client, rq *http.Request, path string) error
```
Authorize the current request
###### func (*DigestAuth) [Clone](https://github.com/studio-b12/gowebdav/raw/master/digestAuth.go?s=1228:1270#L49)
```
func (d *DigestAuth) Clone() Authenticator
```
Clone creates a copy of itself
###### func (*DigestAuth) [Close](https://github.com/studio-b12/gowebdav/raw/master/digestAuth.go?s=1142:1176#L44)
```
func (d *DigestAuth) Close() error
```
Close cleans up all resources
###### func (*DigestAuth) [String](https://github.com/studio-b12/gowebdav/raw/master/digestAuth.go?s=1466:1502#L58)
```
func (d *DigestAuth) String() string
```
String toString
###### func (*DigestAuth) [Verify](https://github.com/studio-b12/gowebdav/raw/master/digestAuth.go?s=912:1010#L36)
```
func (d *DigestAuth) Verify(c *http.Client, rs *http.Response, path string) (redo bool, err error)
```
Verify checks for authentication issues and may trigger a re-authentication
##### type [File](https://github.com/studio-b12/gowebdav/raw/master/file.go?s=93:253#L10)
```
type File struct {
// contains filtered or unexported fields
}
```
File is our structure for a given file
###### func (File) [ContentType](https://github.com/studio-b12/gowebdav/raw/master/file.go?s=476:510#L31)
```
func (f File) ContentType() string
```
ContentType returns the content type of a file
###### func (File) [ETag](https://github.com/studio-b12/gowebdav/raw/master/file.go?s=929:956#L56)
```
func (f File) ETag() string
```
ETag returns the ETag of a file
###### func (File) [IsDir](https://github.com/studio-b12/gowebdav/raw/master/file.go?s=1035:1061#L61)
```
func (f File) IsDir() bool
```
IsDir let us see if a given file is a directory or not
###### func (File) [ModTime](https://github.com/studio-b12/gowebdav/raw/master/file.go?s=836:869#L51)
```
func (f File) ModTime() time.Time
```
ModTime returns the modified time of a file
###### func (File) [Mode](https://github.com/studio-b12/gowebdav/raw/master/file.go?s=665:697#L41)
```
func (f File) Mode() os.FileMode
```
Mode will return the mode of a given file
###### func (File) [Name](https://github.com/studio-b12/gowebdav/raw/master/file.go?s=378:405#L26)
```
func (f File) Name() string
```
Name returns the name of a file
###### func (File) [Path](https://github.com/studio-b12/gowebdav/raw/master/file.go?s=295:322#L21)
```
func (f File) Path() string
```
Path returns the full path of a file
###### func (File) [Size](https://github.com/studio-b12/gowebdav/raw/master/file.go?s=573:599#L36)
```
func (f File) Size() int64
```
Size returns the size of a file
###### func (File) [String](https://github.com/studio-b12/gowebdav/raw/master/file.go?s=1183:1212#L71)
```
func (f File) String() string
```
String lets us see file information
###### func (File) [Sys](https://github.com/studio-b12/gowebdav/raw/master/file.go?s=1095:1126#L66)
```
func (f File) Sys() interface{}
```
Sys ????
##### type [PassportAuth](https://github.com/studio-b12/gowebdav/raw/master/passportAuth.go?s=125:254#L12)
```
type PassportAuth struct {
// contains filtered or unexported fields
}
```
PassportAuth structure holds our credentials
###### func (*PassportAuth) [Authorize](https://github.com/studio-b12/gowebdav/raw/master/passportAuth.go?s=690:775#L32)
```
func (p *PassportAuth) Authorize(c *http.Client, rq *http.Request, path string) error
```
Authorize the current request
###### func (*PassportAuth) [Clone](https://github.com/studio-b12/gowebdav/raw/master/passportAuth.go?s=1701:1745#L69)
```
func (p *PassportAuth) Clone() Authenticator
```
Clone creates a Copy of itself
###### func (*PassportAuth) [Close](https://github.com/studio-b12/gowebdav/raw/master/passportAuth.go?s=1613:1649#L64)
```
func (p *PassportAuth) Close() error
```
Close cleans up all resources
###### func (*PassportAuth) [String](https://github.com/studio-b12/gowebdav/raw/master/passportAuth.go?s=2048:2086#L83)
```
func (p *PassportAuth) String() string
```
String toString
###### func (*PassportAuth) [Verify](https://github.com/studio-b12/gowebdav/raw/master/passportAuth.go?s=1075:1175#L46)
```
func (p *PassportAuth) Verify(c *http.Client, rs *http.Response, path string) (redo bool, err error)
```
Verify verifies if the authentication is good
##### type [StatusError](https://github.com/studio-b12/gowebdav/raw/master/errors.go?s=499:538#L18)
```
type StatusError struct {
Status int
}
```
StatusError implements error and wraps an erroneous status code.
###### func (StatusError) [Error](https://github.com/studio-b12/gowebdav/raw/master/errors.go?s=540:576#L22)
```
func (se StatusError) Error() string
```
---
Generated by [godoc2md](http://godoc.org/github.com/davecheney/godoc2md)
porcelain v2.0.3
API Reference
===
* [Modules](#modules)
* [Exceptions](#exceptions)
Modules
===
[Porcelain](Porcelain.html)
The main module exposing the public API of Porcelain
[Porcelain.Driver.Basic](Porcelain.Driver.Basic.html)
Porcelain driver that offers basic functionality for interacting with external programs
[Porcelain.Driver.Goon](Porcelain.Driver.Goon.html)
Porcelain driver that offers additional features over the basic one
[Porcelain.Process](Porcelain.Process.html)
Module for working with external processes launched with [`Porcelain.spawn/3`](Porcelain.html#spawn/3)
or [`Porcelain.spawn_shell/2`](Porcelain.html#spawn_shell/2)
[Porcelain.Result](Porcelain.Result.html)
A struct containing the result of running a program after it has terminated
Exceptions
===
[Porcelain.UsageError](Porcelain.UsageError.html)
This exception is meant to indicate programmer errors (misuses of the library API) that have to be fixed prior to release
Porcelain
===
The main module exposing the public API of Porcelain.
Basic concepts
---
Functions in this module can either spawn external programs directly
([`exec/3`](#exec/3) and [`spawn/3`](#spawn/3)) or using a system shell ([`shell/2`](#shell/2) and
[`spawn_shell/2`](#spawn_shell/2)).
Functions [`exec/3`](#exec/3) and [`shell/2`](#shell/2) are synchronous (or blocking), meaning they don’t return until the external program terminates.
Functions [`spawn/3`](#spawn/3) and [`spawn_shell/2`](#spawn_shell/2) are non-blocking: they immediately return a [`Porcelain.Process`](Porcelain.Process.html) struct and use one of the available ways to exchange input and output with the external process asynchronously.
Error handling
---
Using undefined options, or passing invalid values to options or function arguments, will fail with a function clause error or a [`Porcelain.UsageError`](Porcelain.UsageError.html)
exception. Those are programmer errors and have to be fixed.
Any other kinds of runtime errors are reported by returning an error tuple:
`{:error, <reason>}` where `<reason>` is a string explaining the error.
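As a hedged sketch of these two conventions (the program name below is deliberately one that should not exist on the system; the exact reason string depends on the driver):
```
# Success yields a %Porcelain.Result{}; runtime failures yield {:error, reason}.
case Porcelain.exec("no-such-program", []) do
  %Porcelain.Result{status: status, out: out} ->
    IO.puts("exited with #{status}: #{out}")
  {:error, reason} ->
    IO.puts("could not run the program: #{inspect(reason)}")
end
```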
Summary
===
[Functions](#functions)
---
[exec(prog, args, options \\ [])](#exec/3)
Execute a program synchronously
[reinit(driver \\ nil)](#reinit/1)
Reruns the initialization and updates application env
[shell(cmd, options \\ [])](#shell/2)
Execute a shell invocation synchronously
[spawn(prog, args, options \\ [])](#spawn/3)
Spawn an external process and return a [`Porcelain.Process`](Porcelain.Process.html) struct to be able to communicate with it
[spawn_shell(cmd, options \\ [])](#spawn_shell/2)
Spawn a system shell and execute the command in it
Functions
===
exec(prog, args, options \\ [])
#### Specs
```
exec(binary, [binary], [Keyword.t](http://elixir-lang.org/docs/stable/elixir/Keyword.html#t:t/0)) :: [Porcelain.Result.t](Porcelain.Result.html#t:t/0)
```
Execute a program synchronously.
Porcelain will look for the program in PATH and launch it directly, passing the `args` list as command-line arguments to it.
Feeds all input into the program (synchronously or concurrently with reading output; see the `:async_in` option below) and waits for it to terminate. Returns a [`Porcelain.Result`](Porcelain.Result.html) struct containing the program’s output and exit status code.
When no options are passed, the following defaults will be used:
```
[in: "", out: :string, err: nil]
```
This will run the program with no input and will capture its standard output.
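For instance, a minimal call relying on these defaults (assuming `echo` is on the PATH):
```
result = Porcelain.exec("echo", ["hello"])
result.out      #=> "hello\n"
result.status   #=> 0
```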
Available options (a combined usage sketch follows this list):
* `:in` – specify the way input will be passed to the program.
Possible values:
+ `<iodata>` – the data is fed into stdin as the sole input for the
program
+ `<stream>` – interprets `<stream>` as a stream of iodata to be fed into
the program
+ `{:path, <string>}` – path to a file to be fed into stdin
+ `{:file, <file>}` – `<file>` is a file descriptor obtained from e.g.
`File.open`; the file will be read from the current position until EOF
* `:async_in` – can be `true` or `false` (default). When enabled, an additional process will be spawned to feed input to the program concurrently with receiving output.
* `:out` – specify the way output will be passed back to Elixir.
Possible values:
+ `nil` – discard the output
+ `:string` (default) – the whole output will be accumulated in memory
and returned as one string to the caller
+ `:iodata` – the whole output will be accumulated in memory and returned
as iodata to the caller
+ `{:path, <string>}` – the file at path will be created (or truncated)
and the output will be written to it
+ `{:append, <string>}` – the output will be appended to the file at
path (it will be created first if needed)
+ `{:file, <file>}` – `<file>` is a file descriptor obtained from e.g.
`File.open`; the file will be written to starting at the current
position
+ `<coll>` – feeds program output (as iodata) into the collectable
`<coll>`. Useful for outputting directly to the console, for example:
```
stream = IO.binstream(:standard_io, :line)
exec("echo", ["hello", "world"], out: stream)
#=> prints "hello\nworld\n" to stdout
```
* `:err` – specify the way stderr will be passed back to Elixir.
Possible values are the same as for `:out`. In addition, it accepts the atom `:out` which denotes redirecting stderr to stdout.
**Caveat**: when using [`Porcelain.Driver.Basic`](Porcelain.Driver.Basic.html), the only supported values are `nil` (stderr will be printed to the terminal) and `:out`.
* `:dir` – takes a path that will be used as the directory in which the program will be launched.
* `:env` – set additional environment variables for the program. The value should be an enumerable with elements of the shape `{<key>, <val>}` where
`<key>` is an atom or a binary and `<val>` is a binary or `false`
(meaning removing the corresponding variable from the environment).
Basically, it accepts any kind of dict, including keyword lists.
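Tying several of the options above together, a hedged sketch (the directory, output path, and environment value are made up for illustration):
```
Porcelain.exec("ls", ["-l"],
  dir: "/tmp",
  env: %{"LC_ALL" => "C"},
  out: {:path, "/tmp/listing.txt"},
  err: :out)
```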
reinit(driver \\ nil)
Reruns the initialization and updates application env.
This function is useful in the following cases (see the sketch after this list):
1. The currently used driver is Goon and the location of the goon
executable has changed.
2. You want to change the driver being used.
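For example, switching to the basic driver at runtime might look like this (a sketch; pass the driver module you want to use):
```
Porcelain.reinit(Porcelain.Driver.Basic)
```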
shell(cmd, options \\ [])
#### Specs
```
shell(binary, [Keyword.t](http://elixir-lang.org/docs/stable/elixir/Keyword.html#t:t/0)) :: [Porcelain.Result.t](Porcelain.Result.html#t:t/0)
```
Execute a shell invocation synchronously.
This function will launch a system shell and pass the invocation to it. This allows using shell features like chaining multiple programs with pipes. The downside is that those advanced features may be unavailable on some platforms.
It is similar to the [`exec/3`](#exec/3) function in all other respects.
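A small sketch (assuming a Unix-like shell):
```
result = Porcelain.shell("echo hello | tr a-z A-Z")
result.out   #=> "HELLO\n"
```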
spawn(prog, args, options \\ [])
#### Specs
```
spawn(binary, [binary], [Keyword.t](http://elixir-lang.org/docs/stable/elixir/Keyword.html#t:t/0)) :: [Porcelain.Process.t](Porcelain.Process.html#t:t/0)
```
Spawn an external process and return a [`Porcelain.Process`](Porcelain.Process.html) struct to be able to communicate with it.
You have to explicitly stop the process after reading its output or when it is no longer needed.
Use the [`Porcelain.Process.await/2`](Porcelain.Process.html#await/2) function to wait for the process to terminate.
Supports all options defined for [`exec/3`](#exec/3) plus some additional ones (an end-to-end sketch follows this list):
* `in: :receive` – input is expected to be sent to the process in chunks using the [`Porcelain.Process.send_input/2`](Porcelain.Process.html#send_input/2) function.
* `:out` and `:err` can choose from a few more values (with the familiar caveat that [`Porcelain.Driver.Basic`](Porcelain.Driver.Basic.html) does not support them for `:err`):
+ `:stream` – the corresponding field of the returned `Process` struct
will contain a stream of iodata.
Note that the underlying port implementation is message based. This
means that the external program will be able to send all of its
output to an Elixir process and terminate. The data will be kept in
the Elixir process’s memory until the stream is consumed.
+ `{:send, <pid>}` – send the output to the process denoted by `<pid>`.
Will send zero or more data messages and will always send one result
message in the end.
The data messages have the following shape:
```
{<from>, :data, :out | :err, <iodata>}
```
where `<from>` will be the same pid as the one contained in the
`Process` struct returned by this function.
The result message has the following shape:
```
{<from>, :result, %Porcelain.Result{} | nil}
```
The result will be `nil` if the `:result` option that is passed to
this function is set to `:discard`.
**Note**: if both `:out` and `:err` are set up to send to the same
pid, only one result message will be sent to that pid in the end.
* `:result` – specify how the result of the external program should be
returned after it has terminated.
This option has a smart default value. If either `:out` or `:err` option is set to `:string` or `:iodata`, `:result` will be set to `:keep`.
Otherwise, it will be set to `:discard`.
Possible values:
+ `:keep` – the result will be kept in memory until requested by calling
[`Porcelain.Process.await/2`](Porcelain.Process.html#await/2) or discarded by calling
[`Porcelain.Process.stop/1`](Porcelain.Process.html#stop/1).
+ `:discard` – discards the result and automatically closes the port
after program termination. Useful in combination with `out: :stream`
and `err: :stream`.
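An end-to-end sketch using `in: :receive` (signalling the end of input with an empty message requires the Goon driver; see the caveats in [`Porcelain.Process`](Porcelain.Process.html)):
```
alias Porcelain.Process, as: Proc

proc = Porcelain.spawn("cat", [], in: :receive, out: :string)
Proc.send_input(proc, "hello\n")
Proc.send_input(proc, "")        # an empty message signals the end of input
{:ok, result} = Proc.await(proc)
result.out                       #=> "hello\n"
```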
spawn_shell(cmd, options \\ [])
#### Specs
```
spawn_shell(binary, [Keyword.t](http://elixir-lang.org/docs/stable/elixir/Keyword.html#t:t/0)) :: [Porcelain.Process.t](Porcelain.Process.html#t:t/0)
```
Spawn a system shell and execute the command in it.
Works similar to [`spawn/3`](#spawn/3).
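For example (a sketch; the output stream is consumed lazily as the command produces it):
```
proc = Porcelain.spawn_shell("for i in 1 2 3; do echo $i; done", out: :stream)
Enum.each(proc.out, &IO.write/1)   # prints 1, 2, 3 as the output arrives
# With out: :stream the :result option defaults to :discard, so the port is
# closed automatically once the program terminates.
```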
Porcelain.Driver.Basic
===
Porcelain driver that offers basic functionality for interacting with external programs.
Users are not supposed to call functions in this module directly. Use functions in [`Porcelain`](Porcelain.html) instead.
This driver has two major limitations compared to [`Porcelain.Driver.Goon`](Porcelain.Driver.Goon.html):
* the `exec` function does not work with programs that read all input until EOF before producing any output. Such programs will hang since Erlang ports don’t provide any mechanism to indicate the end of input.
If a program is continuously consuming input and producing output, it could work with the `spawn` function, but you’ll also have to explicitly close the connection with the external program when you’re done with it.
* sending OS signals to external processes is not supported
Porcelain.Driver.Goon
===
Porcelain driver that offers additional features over the basic one.
Users are not supposed to call functions in this module directly. Use functions in [`Porcelain`](Porcelain.html) instead.
This driver will be used by default if it can locate the external program named `goon` in the executable path. If `goon` is not found, Porcelain will fall back to the basic driver.
The additional functionality provided by this driver is as follows (a short sketch follows the list):
* ability to signal EOF to the external program
* send an OS signal to the program
* (to be implemented) more efficient piping of multiple programs
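For instance, delivering an OS signal to a spawned program (a sketch; assumes the `goon` executable was found so this driver is in use):
```
proc = Porcelain.spawn("sleep", ["60"])
Porcelain.Process.signal(proc, :int)   # deliver SIGINT to the external program
```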
Porcelain.Process
===
Module for working with external processes launched with [`Porcelain.spawn/3`](Porcelain.html#spawn/3)
or [`Porcelain.spawn_shell/2`](Porcelain.html#spawn_shell/2).
Summary
===
[Types](#types)
---
[signal()](#t:signal/0)
[t()](#t:t/0)
[Functions](#functions)
---
[__struct__()](#__struct__/0)
A struct representing a wrapped OS process which provides the ability to exchange data with it
[alive?(process)](#alive?/1)
Check if the process is still running
[await(process, timeout \\ :infinity)](#await/2)
Wait for the external process to terminate
[send_input(process, data)](#send_input/2)
Send iodata to the process’s stdin
[signal(process, sig)](#signal/2)
Send an OS signal to the process
[stop(process)](#stop/1)
Stops the process created with [`Porcelain.spawn/3`](Porcelain.html#spawn/3) or
[`Porcelain.spawn_shell/2`](Porcelain.html#spawn_shell/2). Also closes the underlying Erlang port
Types
===
```
[signal](#t:signal/0) :: :int | :kill | non_neg_integer
```
```
[t](#t:t/0) :: %Porcelain.Process{err: term, out: term, pid: term}
```
Functions
===
__struct__()
A struct representing a wrapped OS process which provides the ability to exchange data with it.
alive?(process)
#### Specs
```
alive?([t](#t:t/0)) :: true | false
```
Check if the process is still running.
await(process, timeout \\ :infinity)
#### Specs
```
await([t](#t:t/0), non_neg_integer | :infinity) ::
{:ok, [Porcelain.Result.t](Porcelain.Result.html#t:t/0)} |
{:error, :noproc | :timeout}
```
Wait for the external process to terminate.
Returns [`Porcelain.Result`](Porcelain.Result.html) struct with the process’s exit status and output.
Automatically closes the underlying port in this case.
If a timeout value is specified and the external process fails to terminate before it runs out, `{:error, :timeout}` is returned.
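A small sketch of handling the timeout case (`proc` is assumed to come from an earlier `Porcelain.spawn/3` call; the 5-second timeout is illustrative):
```
case Porcelain.Process.await(proc, 5_000) do
  {:ok, %Porcelain.Result{status: status}} -> {:done, status}
  {:error, :timeout} -> Porcelain.Process.stop(proc)
end
```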
send_input(process, data)
#### Specs
```
send_input([t](#t:t/0), iodata) :: iodata
```
Send iodata to the process’s stdin.
End of input is indicated by sending an empty message.
**Caveat**: when using [`Porcelain.Driver.Basic`](Porcelain.Driver.Basic.html), it is not possible to indicate the end of input. You should stop the process explicitly using
[`stop/1`](#stop/1).
signal(process, sig)
#### Specs
```
signal([t](#t:t/0), [signal](#t:signal/0)) :: [signal](#t:signal/0)
```
Send an OS signal to the process.
No further communication with the process is possible after sending it a signal.
stop(process)
#### Specs
```
stop([t](#t:t/0)) :: true
```
Stops the process created with [`Porcelain.spawn/3`](Porcelain.html#spawn/3) or
[`Porcelain.spawn_shell/2`](Porcelain.html#spawn_shell/2). Also closes the underlying Erlang port.
May cause a “broken pipe” message to be written to stderr.
Caveats
---
When using [`Porcelain.Driver.Basic`](Porcelain.Driver.Basic.html), Porcelain will merely close the Erlang port connected to that process. This normally causes an external process to terminate provided that it is listening on its `stdin`. If not, the external process will continue running.
See <http://erlang.org/pipermail/erlang-questions/2010-March/050227.html> for some background info.
When using [`Porcelain.Driver.Goon`](Porcelain.Driver.Goon.html), a `SIGTERM` signal will be sent to the external process. If it doesn’t terminate after `:goon_stop_timeout` seconds, a `SIGKILL` will be sent to the process.
Porcelain.Result
===
A struct containing the result of running a program after it has terminated.
Summary
===
[Types](#types)
---
[t()](#t:t/0)
Types
===
```
[t](#t:t/0) :: %Porcelain.Result{err: term, out: term, status: term}
```
Porcelain.UsageError
exception
===
This exception is meant to indicate programmer errors (misuses of the library API) that have to be fixed prior to release.
Summary
===
[Functions](#functions)
---
[exception(msg)](#exception/1)
Callback implementation for `c:Exception.exception/1`
[message(exception)](#message/1)
Callback implementation for `c:Exception.message/1`
Functions
===
exception(msg)
#### Specs
```
exception([String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0)) :: [Exception.t](http://elixir-lang.org/docs/stable/elixir/Exception.html#t:t/0)
```
```
exception([Keyword.t](http://elixir-lang.org/docs/stable/elixir/Keyword.html#t:t/0)) :: [Exception.t](http://elixir-lang.org/docs/stable/elixir/Exception.html#t:t/0)
```
Callback implementation for `c:Exception.exception/1`.
message(exception)
#### Specs
```
message([Exception.t](http://elixir-lang.org/docs/stable/elixir/Exception.html#t:t/0)) :: [String.t](http://elixir-lang.org/docs/stable/elixir/String.html#t:t/0)
```
Callback implementation for `c:Exception.message/1`. |
seesaw | hex | Erlang | [![Build Status](https://secure.travis-ci.org/daveray/seesaw.png?branch=develop)](http://travis-ci.org/daveray/seesaw)
*Note that current development is on the *develop* branch, not master*
There's now a [Google Group](https://groups.google.com/group/seesaw-clj) for discussion and questions.
[Here's a brief tutorial](https://gist.github.com/1441520) that covers some Seesaw basics. It assumes no knowledge of Swing or Java.
[Here are the slides](http://darevay.com/talks/clojurewest2012/) from a Clojure/West 2012 talk on Seesaw. Best viewed in Chrome or Safari.
[Seesaw: Clojure + UI](#seesaw-clojure--ui)
===
**See [the Seesaw Wiki](https://github.com/daveray/seesaw/wiki) and [the Seesaw API Docs](http://daveray.github.com/seesaw/) for more detailed docs. Note that the docs in the code (use the `doc` function!) are always the most up-to-date and trustworthy.**
Seesaw is a library/DSL for constructing user interfaces in Clojure. It happens to be built on Swing, but please don't hold that against it.
[Features](#features)
---
Seesaw is compatible with Clojure 1.4, but will probably work fine with 1.3 and 1.5. Maybe even 1.2.
* Swing knowledge is *not required* for many apps!
* [Construct widgets](https://github.com/daveray/seesaw/wiki/Widgets) with simple functions, e.g. `(listbox :model (range 100))`
* Support for all of Swing's built-in widgets as well as SwingX.
* Support for all of Swing's layout managers as well as MigLayout, and JGoodies Forms
* Convenient shortcuts for most properties. For example, `:background :blue` or `:background "#00f"`, or `:size [640 :by 480]`.
* [CSS-style selectors](https://github.com/daveray/seesaw/wiki/Selectors) with same syntax as [Enlive](https://github.com/cgrand/enlive).
* Unified, extensible [event API](https://github.com/daveray/seesaw/wiki/Handling-events)
* Unified, extensible [selection API](https://github.com/daveray/seesaw/wiki/Handling-selection)
* [Widget binding](http://blog.darevay.com/2011/07/seesaw-widget-binding/), i.e. map changes from one widget into one or more others in a more functional style. Also integrates with Clojure's reference types.
* [Graphics](https://github.com/daveray/seesaw/wiki/Graphics)
* [i18n](https://github.com/daveray/seesaw/wiki/Resource-bundles-and-i18n)
* An extensive [test suite](https://github.com/daveray/seesaw/tree/master/test/seesaw/test)
*There are numerous Seesaw examples in [test/seesaw/test/examples](https://github.com/daveray/seesaw/tree/master/test/seesaw/test/examples).*
[TL;DR](#tldr)
---
Here's how you use Seesaw with [Leiningen](https://github.com/technomancy/leiningen).
Install `lein` as described and then:
```
$ lein new hello-seesaw
$ cd hello-seesaw
```
Add Seesaw to `project.clj`
```
(defproject hello-seesaw "1.0.0-SNAPSHOT"
:description "FIXME: write"
:dependencies [[org.clojure/clojure "1.4.0"]
[seesaw "x.y.z"]])
```
*Replace the Seesaw version with whatever the latest version tag is. See below!*
Now edit the generated `src/hello_seesaw/core.clj` file:
```
(ns hello-seesaw.core
(:use seesaw.core))
(defn -main [& args]
(invoke-later
(-> (frame :title "Hello",
:content "Hello, Seesaw",
:on-close :exit)
pack!
show!)))
```
Now run it:
```
$ lein run -m hello-seesaw.core
```
*NOTE:* Here's how you can run against the bleeding edge of Seesaw:
* Clone Seesaw from github. Fork if you like. *Switch to the "develop" branch.*
* In your Seesaw checkout, run `lein install` to build it. *Note that Seesaw uses Leiningen 2 as of 3 NOV 2012!*
* In your project's `project.clj` file, change the Seesaw version to `X.Y.Z-SNAPSHOT` to match whatever's in Seesaw's `project.clj`.
* Run `lein deps` ... actually you can just start coding. `lein deps` is almost never necessary.
* Move along
[Contributors](#contributors)
---
* <NAME> (kotarak)
* <NAME> (Quantalume)
* <NAME> (harto)
* <NAME>
* <NAME> (odyssomay)
* <NAME> (Raynes)
* <NAME> (MHOOO)
* <NAME> (Domon)
* <NAME> (dpx-infinity)
* <NAME> (rosejn)
* <NAME> (simlun)
* <NAME> (jakemcc)
[License](#license)
---
Copyright (C) 2012 <NAME>
Distributed under the Eclipse Public License, the same as Clojure.
seesaw.action
===
Functions for dealing with Swing Actions. Prefer (seesaw.core/action).
---
#### actionclj
```
(action & opts)
```
Construct a new Action object. Supports the following properties:
:enabled? Whether the action is enabled
:selected? Whether the action is selected (for use with radio buttons,
toggle buttons, etc.)
:name The name of the action, i.e. the text that will be displayed in whatever widget it's associated with
:command The action command key. An arbitrary string identifier associated with the action.
:tip The action's tooltip
:icon The action's icon. See (seesaw.core/icon)
:key A keystroke associated with the action. See (seesaw.keystroke/keystroke).
:mnemonic The mnemonic for the button, either a character or a keycode.
Usually allows the user to activate the button with alt-mnemonic.
See (seesaw.util/to-mnemonic-keycode).
:handler A single-argument function that performs whatever operations are associated with the action. The argument is an ActionEvent instance.
Instances of action can be passed to the :action option of most buttons, menu items,
etc.
Actions can be later configured with the same properties above with (seesaw.core/config!).
Returns an instance of javax.swing.Action.
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/Action.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/action.clj#L58)[raw docstring](#)
seesaw.applet
===
Macros and functions that make creating an applet with Seesaw a little less painful.
---
#### defappletcljmacro
```
(defapplet &
{:keys [name init start stop content]
:or {init (quote (fn [applet]))
start (quote (fn [applet]))
stop (quote (fn [applet]))
content (quote (fn [applet]
(seesaw.core/label "A Seesaw Applet")))}})
```
Define an applet. This macro does all the gen-class business and maps applet lifetime methods to callback functions automatically. Supports the following options:
:name The name of the generated class. Defaults to the current namespace.
:init Function called when the applet is first loaded. Takes a single JApplet argument. This function is called from the UI thread.
:start Function called when the applet is started. Takes a single JApplet argument. This function is called from the UI thread.
:stop Function called when the applet is stopped. Takes a single JApplet argument. This function is called from the UI thread.
:content Function called after :init which should return the content of the applet, for example some kind of panel. It's added to the center of a border pane so it will be resized with the applet.
Note that the namespace containing a call to (defapplet) must be compiled. In Leiningen, this is easiest to do by adding an :aot option to project.clj:
:aot [namespace.with.defapplet]
After that, use "lein uberjar" to build a jar with everything.
Since Seesaw is currently reflection heavy, the resulting jar must be signed:
$ keytool -genkey -alias seesaw -dname "cn=company, c=en"
$ keytool -selfcert -alias seesaw -dname "cn=company, c=en"
$ lein uberjar
$ jarsigner name-of-project-X.X.X-SNAPSHOT-standalone.jar seesaw
Then refer to it from your webpage like this:
<applet archive="name-of-project-X.X.X-standalone.jar"
code="namespace/with/defapplet.class"
width="200"
height="200"Examples:
See examples/applet project.
See:
<http://download.oracle.com/javase/7/docs/api/javax/swing/JApplet.html>
<http://download.oracle.com/javase/tutorial/uiswing/components/applet.html>
<http://download.oracle.com/javase/tutorial/deployment/applet/index.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/applet.clj#L18)[raw docstring](#)
seesaw.behave
===
A collection of basic behaviors that can be dynamically added to widgets. Most cover basic functionality that's missing from Swing or just a pain to implement.
---
#### when-focused-select-allclj
```
(when-focused-select-all w)
```
A helper function which adds a "select all when focus gained" behavior to one or more text widgets or editable comboboxes.
Like (seesaw.core/listen) returns a function which will remove all event handlers when called.
Examples:
(flow-panel :items [
"Enter some text here: "
(doto
(text "All this text will be selected when I get keyboard focus")
when-focused-select-all)])
See:
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/behave.clj#L19)[raw docstring](#)
---
#### when-mouse-draggedclj
```
(when-mouse-dragged w & opts)
```
A helper for handling mouse dragging on a widget. This isn't that complicated,
but the default mouse dragged event provided with Swing doesn't give the delta since the last drag event so you end up having to keep track of it. This function takes three options:
:start event handler called when the drag is started (mouse pressed).
:drag A function that takes a mouse event and a [dx dy] vector which is the change in x and y since the last drag event.
:finish event handler called when the drag is finished (mouse released).
Like (seesaw.core/listen) returns a function which will remove all event handlers when called.
Examples:
See (seesaw.examples.xyz-panel)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/behave.clj#L45)[raw docstring](#)
seesaw.bind
===
Functions for binding the value of one thing to another, for example synchronizing an atom with changes to a slider.
---
#### b-docljmacro
```
(b-do bindings & body)
```
Macro helper for (seesaw.bind/b-do*). Takes a single-argument fn-style binding vector and a body. When a new value is received it is passed to the binding and the body is executed. The result is discarded.
See:
(seesaw.bind/b-do*)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L398)[raw docstring](#)
---
#### b-do*clj
```
(b-do* f & args)
```
Creates a bindable that takes an incoming value v, executes
(f v args) and does nothing further. That is, it's the end of the binding chain.
See:
(seesaw.bind/bind)
(seesaw.bind/b-do)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L383)[raw docstring](#)
---
#### b-sendclj
```
(b-send agent f & args)
```
Creates a bindable that (send)s to an agent using the given function each time its input changes. That is, each time a new value comes in,
(apply send agent f new-value args) is called.
This bindable's value (the current value of the agent) is subscribable.
Example:
; Accumulate list of selections in a vector
(bind (selection my-list) (b-send my-agent conj))
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L221)[raw docstring](#)
---
#### b-send-offclj
```
(b-send-off agent f & args)
```
Creates a bindable that (send-off)s to an agent using the given function each time its input changes. That is, each time a new value comes in,
(apply send-off agent f new-value args) is called.
This bindable's value (the current value of the agent) is subscribable.
Example:
; Accumulate list of selections in a vector
(bind (selection my-list) (b-send-off my-agent conj))
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L236)[raw docstring](#)
---
#### b-swap!clj
```
(b-swap! atom f & args)
```
Creates a bindable that swaps! an atom's value using the given function each time its input changes. That is, each time a new value comes in,
(apply swap! atom f new-value args) is called.
This bindable's value (the current value of the atom) is subscribable.
Example:
; Accumulate list of selections in a vector
(bind (selection my-list) (b-swap! my-atom conj))
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L194)[raw docstring](#)
---
#### bindclj
```
(bind first-source target & more)
```
Chains together two or more bindables into a listening chain.
When the value of source changes it is passed along and updates the value of target and so on.
Note that the return value of this function is itself a composite bindable so it can be subscribed to, or nested in other chains.
The return value, like (seesaw.bind/subscribe) and (seesaw.event/listen)
can also be invoked as a no-arg function to back out all the subscriptions made by bind.
Examples:
; Bind the text of a text box to an atom. As the user types in
; t, the value of a is updated.
(let [t (text)
a (atom nil)]
(bind (.getDocument t) a))
; Bind the value of a slider to an atom, with a transform
; that forces the value to [0, 1]
(let [s (slider :min 0 :max 1)
a (atom 0.0)]
(bind s (transform / 100.0) a))
; Bind the value of an atom to a label
(let [a (atom "hi")
lbl (label)]
(bind a (transform #(.toUpperCase %)) (property lbl :text)))
Notes:
Creating a binding does *not* automatically synchronize the values.
Circular bindings will usually work.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L47)[raw docstring](#)
---
#### Bindablecljprotocol
#### notifyclj
```
(notify this v)
```
Pass a new value to this bindable. Causes all subscribed handlers to be called with the value.
#### subscribeclj
```
(subscribe this handler)
```
Subscribes a handler to changes in this bindable.
handler is a single argument function that takes the new value of the bindable.
Must return a no-arg function that unsubscribes the handler from future changes.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L23)
---
#### compositeclj
```
(composite start end)
```
Create a composite bindable from the start and end of a binding chain
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L40)[raw docstring](#)
---
#### filterclj
```
(filter pred)
```
Executes a predicate on incoming value. If the predicate returns a truthy value, the incoming value is passed on to the next bindable in the chain.
Otherwise, nothing is notified.
Examples:
; Block out of range values
(let [input (text)
output (slider :min 0 :max 100)]
(bind input
(filter #(< 0 % 100))
output))
Notes:
This works a lot like (clojure.core/filter)
See:
(seesaw.bind/some)
(clojure.core/filter)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L409)[raw docstring](#)
---
#### funnelclj
```
(funnel & bindables)
```
Create a binding chain with several input chains. Provides a vector of input values further along the chain.
Example: Only enable a button if there is some text in both fields.
(let [t1 (text)
t2 (text)
b (button)]
(bind
(funnel
(property t1 :text)
(property t2 :text))
(transform #(every? seq %))
(property b :enabled?)))
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L100)[raw docstring](#)
---
#### notify-laterclj
```
(notify-later)
```
Creates a bindable that notifies its subscribers (next in chain) on the swing thread using (seesaw.invoke/invoke-later). You should use this to ensure that things happen on the right thread, e.g. (seesaw.bind/property)
and (seesaw.bind/selection).
See:
(seesaw.invoke/invoke-later)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L511)[raw docstring](#)
---
#### notify-nowclj
```
(notify-now)
```
Creates a bindable that notifies its subscribers (next in chain) on the swing thread using (seesaw.invoke/invoke-now). You should use this to ensure that things happen on the right thread, e.g. (seesaw.bind/property)
and (seesaw.bind/selection).
Note that since invoke-now is used, you're in danger of deadlocks. Be careful.
See:
(seesaw.invoke/invoke-soon)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L535)[raw docstring](#)
---
#### notify-soonclj
```
(notify-soon)
```
Creates a bindable that notifies its subscribers (next in chain) on the swing thread using (seesaw.invoke/invoke-soon). You should use this to ensure that things happen on the right thread, e.g. (seesaw.bind/property)
and (seesaw.bind/selection).
See:
(seesaw.invoke/invoke-soon)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L523)[raw docstring](#)
---
#### notify-when*clj
```
(notify-when* schedule-fn)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L499)
---
#### propertyclj
```
(property target property-name)
```
Returns a bindable (suitable to pass to seesaw.bind/bind) that connects to a property of a widget, e.g. :foreground, :enabled?,
etc.
Examples:
; Map the text in a text box to the foreground color of a label
; Pass the text through Seesaw's color function first to get
; a color value.
(let [t (text :text "white")
lbl (label :text "Color is shown here" :opaque? true)]
(bind (.getDocument t)
(transform #(try (color %) (catch Exception e (color 0 0 0))))
(property lbl :background)))
See:
(seesaw.bind/bind)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L272)[raw docstring](#)
---
#### selectionclj
```
(selection widget)
```
```
(selection widget options)
```
Converts the selection of a widget into a bindable. Applies to listbox,
table, tree, combobox, checkbox, etc, etc. In short, anything to which
(seesaw.core/selection) applies.
options corresponds to the option map passed to (seesaw.core/selection)
and (seesaw.core/selection!)
Examples:
; Bind checkbox state to enabled state of a widget
(let [cb (checkbox :text "Enable")
t (text)]
(bind (selection cb) (property t :enabled?)))
See:
(seesaw.bind/bind)
(seesaw.core/selection)
(seesaw.core/selection!)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L307)[raw docstring](#)
---
#### someclj
```
(some pred)
```
Executes a predicate on incoming value. If the predicate returns a truthy value, that value is passed on to the next bindable in the chain. Otherwise,
nothing is notified.
Examples:
; Try to convert a text string to a number. Do nothing if the conversion
; Fails
(let [input (text)
output (slider :min 0 :max 100)]
(bind input (some #(try (Integer/parseInt %) (catch Exception e nil))) output))
Notes:
This works a lot like (clojure.core/some)
See:
(seesaw.bind/filter)
(clojure.core/some)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L444)[raw docstring](#)
---
#### teeclj
```
(tee & bindables)
```
Create a tee junction in a bindable chain.
Examples:
; Take the content of a text box and show it as upper and lower
; case in two labels
(let [t (text)
upper (label)
lower (label)]
(bind (property t :text)
(tee (bind (transform #(.toUpperCase %)) (property upper :text))
(bind (transform #(.toLowerCase %)) (property lower :text)))))
See:
(seesaw.bind/bind)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L477)[raw docstring](#)
---
#### to-bindableclj
```
(to-bindable target)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L35)
---
#### ToBindablecljprotocol
#### to-bindable*clj
```
(to-bindable* this)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L32)
---
#### transformclj
```
(transform f & args)
```
Creates a bindable that takes an incoming value v, applies
(f v args), and passes the result on. f should be side-effect free.
See:
(seesaw.bind/bind)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L364)[raw docstring](#)
---
#### valueclj
```
(value widget)
```
Converts the value of a widget into a bindable. Applies to listbox,
table, tree, combobox, checkbox, etc, etc. In short, anything to which
(seesaw.core/value) applies. This is a "receive-only" bindable since there is no good way to detect changes in the values of composites.
Examples:
; Map the value of an atom (a map) into the value of a panel.
(let [a (atom nil)
p (border-panel :north (checkbox :id :cb :text "Enable")
:south (text :id :tb))]
(bind a (value p)))
; ... now setting a to {:cb true :tb "Hi"} will check the checkbox
; and change the text field to "Hi"
See:
(seesaw.bind/bind)
(seesaw.core/value!)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/bind.clj#L337)[raw docstring](#)
seesaw.border
===
Functions for creating widget borders.
---
#### compound-borderclj
```
(compound-border b)
```
```
(compound-border b0 b1)
```
```
(compound-border b0 b1 & more)
```
Create a compound border from the given arguments. Order is from inner to outer.
Each argument is passed through (seesaw.border/to-border).
Examples:
```
; Create a 4 pixel empty border, red line border, and title border.
(compound-border 4 (line-border :color :red :thickness 4) "Title")
```
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/BorderFactory.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/border.clj#L73)[raw docstring](#)
---
#### custom-borderclj
```
(custom-border & args)
```
Define a custom border with the following properties:
:paint A function that takes the same arguments as Border.paintBorder:
java.awt.Component c - The target component
java.awt.Graphics g - The graphics context to draw to
int x - x position of border
int y - y position of border
int w - width of border
int h - height of border
:insets Returns the insets of the border. Can be a zero-arg function that returns something that is passed through (seesaw.util/to-insets)
or a constant value passed through the same. Defaults to 0.
:opaque? Whether the border is opaque. A constant truthy value or a zero-arg function that returns a truthy value.
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/border/Border.html>
(seesaw.util/to-insets)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/border.clj#L89)[raw docstring](#)
---
#### empty-borderclj
```
(empty-border & {:keys [thickness top left bottom right]})
```
Create an empty border. The following properties are supported:
:thickness The thickness of the border (all sides) in pixels. This property is only used if :top, :bottom, etc are omitted. Defaults to 1.
:top Thickness of the top border in pixels. Defaults to 0.
:left Thickness of the left border in pixels. Defaults to 0.
:bottom Thickness of the bottom border in pixels. Defaults to 0.
:right Thickness of the right border in pixels. Defaults to 0.
Examples:
```
; Create an empty 10 pixel border
(empty-border :thickness 10)
; Create an empty border 5 pixels on top and left, 0 on other sides
(empty-border :left 5 :top 5)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/border.clj#L25)[raw docstring](#)
---
#### line-borderclj
```
(line-border &
{:keys [color thickness top left bottom right]
:or {thickness 1 color Color/BLACK}})
```
Create a colored border with following properties:
:color The color, passed through (seesaw.color/to-color). Defaults to black.
:thickness The thickness of the border in pixels. This property is only used if :top, :bottom, etc are omitted. Defaults to 1.
:top Thickness of the top border in pixels. Defaults to 0.
:left Thickness of the left border in pixels. Defaults to 0.
:bottom Thickness of the bottom border in pixels. Defaults to 0.
:right Thickness of the right border in pixels. Defaults to 0.
Examples:
```
; Create a green border, 3 pixels on top, 5 pixels on the bottom
(line-border :color "#0f0" :top 3 :bottom 5)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/border.clj#L50)[raw docstring](#)
---
#### to-borderclj
```
(to-border b)
```
```
(to-border b & args)
```
Construct a border. The border returned depends on the input:
nil - returns nil
a Border - returns b
a number - returns an empty border with the given thickness
a vector or list - returns a compound border by applying to-border to each element, inner to outer.
an i18n keyword - returns a titled border using the given resource
a string - returns a titled border using the given string
If given more than one argument, a compound border is created by applying to-border to each argument, inner to outer.
Note:
to-border is used implicitly by the :border option supported by all widgets, so it is rarely necessary to call it directly.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/border.clj#L127)[raw docstring](#)
seesaw.cells
===
Functions for implementing custom cell renderers. Note that on many core functions (listbox, tree, combobox, etc) a render function can be given directly to the :renderer option.
---
#### default-list-cell-rendererclj
```
(default-list-cell-renderer render-fn)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/cells.clj#L20)
---
#### default-tree-cell-rendererclj
```
(default-tree-cell-renderer render-fn)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/cells.clj#L36)
---
#### to-cell-rendererclj
```
(to-cell-renderer target arg)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/cells.clj#L54)
seesaw.chooser
===
File chooser and other common dialogs.
---
#### choose-colorclj
```
(choose-color & args)
```
Choose a color with a color chooser dialog. The optional first argument is the parent component for the dialog. The rest of the args is a list of key/value pairs:
```
:color The initial selected color (see seesaw.color/to-color)
:title The dialog's title
```
Returns the selected color or nil if canceled.
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JColorChooser.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/chooser.clj#L207)[raw docstring](#)
---
#### choose-fileclj
```
(choose-file & args)
```
Choose a file to open or save. The arguments can take two forms. First, with an initial parent component which will act as the parent of the dialog.
```
(choose-file dialog-parent ... options ...)
```
If the first arg is omitted, the desktop is used as the parent of the dialog:
```
(choose-file ... options ...)
```
Options can be one of:
:type The dialog type: :open, :save, or a custom string placed on the Ok button.
Defaults to :open.
:dir The initial working directory. If omitted, the previous directory chosen is remembered and used.
:multi? If true, multi-selection is enabled and a seq of files is returned.
:selection-mode The file selection mode: :files-only, :dirs-only and :files-and-dirs.
Defaults to :files-only
:filters A seq of either:
```
a seq that contains a filter name and a seq of
extensions as strings for that filter;
a seq that contains a filter name and a function
to be used as accept function (see file-filter);
a FileFilter (see file-filter).
The filters appear in the dialog's filter selection in the same
order as in the seq.
```
:all-files? If true, a filter matching all file extensions and files without an extension will appear in the filter selection of the dialog in addition to the filters specified through :filters. The filter usually appears last in the selection. If this is not desired, set this option to false and include an equivalent filter manually at the desired position as shown in the examples below. Defaults to true.
:remember-directory? Flag specifying whether to remember the directory for future file-input invocations in case of successful exit. Default: true.
:success-fn Function which will be called with the JFileChooser and the File which has been selected by the user. Its result will be returned.
Default: return selected File. In the case of MULTI-SELECT? being true,
a seq of File instances will be passed instead of a single File.
:cancel-fn Function which will be called with the JFileChooser on user abort of the dialog.
Its result will be returned. Default: returns nil.
Examples:
; ask & return single file
(choose-file)
; ask & return including a filter for image files and an "all files"
; filter appearing at the beginning
(choose-file :all-files? false
:filters [(file-filter "All files" (constantly true))
["Images" ["png" "jpeg"]]
["Folders" #(.isDirectory %)]])
; ask & return absolute file path as string
(choose-file :success-fn (fn [fc file] (.getAbsolutePath file)))
Returns result of SUCCESS-FN (default: either java.io.File or seq of java.io.File iff multi? set to true)
in case of the user selecting a file, or result of CANCEL-FN otherwise.
See <http://download.oracle.com/javase/6/docs/api/javax/swing/JFileChooser.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/chooser.clj#L106)
---
#### file-filterclj
```
(file-filter description accept)
```
Create a FileFilter.
Arguments:
description - description of this filter, will show up in the filter-selection box when opening a file choosing dialog.
accept - a function taking a java.io.File, returning true if the file should be shown and false otherwise.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/chooser.clj#L21)
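A small illustrative sketch (not from the original docs) combining file-filter with choose-file's :filters option; the predicate keeps directories visible so the user can still navigate:
```
(require '[seesaw.chooser :refer [choose-file file-filter]])

;; Only show directories and Clojure source files.
(choose-file :filters [(file-filter "Clojure sources"
                                    #(or (.isDirectory %)
                                         (.endsWith (.getName %) ".clj")))])
```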
---
#### set-file-filtersclj
```
(set-file-filters chooser filters)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/chooser.clj#L53)
seesaw.clipboard
===
---
#### contentsclj
```
(contents)
```
```
(contents flavor)
```
Retrieve the current content of the system clipboard in the given flavor.
If omitted, flavor defaults to seesaw.dnd/string-flavor. If no content with the given flavor is found, returns nil.
See:
seesaw.dnd
<http://docs.oracle.com/javase/7/docs/api/java/awt/datatransfer/Clipboard.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/clipboard.clj#L8)
---
#### contents!clj
```
(contents! transferable)
```
```
(contents! transferable owner)
```
Set the content of the system clipboard to the given transferable. If transferable is a string, a string transferable is created. Otherwise,
use seesaw.dnd/default-transferable to create one.
Returns the clipboard.
See:
(seesaw.dnd/default-transferable)
<http://docs.oracle.com/javase/7/docs/api/java/awt/datatransfer/Clipboard.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/clipboard.clj#L23)
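As an illustrative sketch (not part of the original docstrings), a string round-trip through the system clipboard:
```
(require '[seesaw.clipboard :as clip])

(clip/contents! "Hello from Seesaw") ; a string becomes a string transferable
(clip/contents)                      ; => "Hello from Seesaw" (string flavor by default)
```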
---
#### systemclj
```
(system)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/clipboard.clj#L4)
seesaw.color
===
Functions for creating Swing colors. Note that these are implicit in the core color options.
---
#### colorclj
```
(color s)
```
```
(color s a)
```
```
(color r g b)
```
```
(color r g b a)
```
Create a java.awt.Color object from args.
Examples:
; Named color with string or keyword
(color "springgreen")
(color :aliceblue)
; CSS-style hex color
(color "#ff0000")
; Named color with alpha
(color :aliceblue 128)
; CSS-style hex color with alpha
(color "#ff0000" 128)
; RGB color
(color 255 128 128)
; RGB color with alpha
(color 255 128 128 224)
See:
<http://download.oracle.com/javase/6/docs/api/java/awt/Color.html>
<http://www.w3.org/TR/css3-color/>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/color.clj#L180)
---
#### default-colorclj
```
(default-color name)
```
Retrieve a default color from the UIManager.
Examples:
; Return the look and feel's label foreground color
(default-color "Label.foreground")
Returns a java.awt.Color instance or nil if not found.
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/UIManager.html#getColor%28java.lang.Object%29>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/color.clj#L216)
---
#### get-rgbaclj
```
(get-rgba c)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/color.clj#L168)
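get-rgba has no docstring here; assuming it simply unpacks a java.awt.Color into its red/green/blue/alpha components, usage would look like this sketch:
```
(require '[seesaw.color :refer [color get-rgba]])

(get-rgba (color 255 128 0 224)) ; => [255 128 0 224] (assumed return shape)
```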
---
#### to-colorclj
```
(to-color c)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/color.clj#L232)
seesaw.config
===
Functions for configuring widgets. Prefer (seesaw.core/config) and friends.
---
#### configclj
```
(config target name)
```
Retrieve the value of an option from target. For example:
(config button1 :text)
=> "I'm a button!"
Target must satisfy the Configurable protocol. In general, it may be a widget,
or convertible to widget with (to-widget). For example, the target can be an event object.
Returns the option value.
Throws IllegalArgumentException if an unknown option is requested.
See:
(seesaw.core/config!)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/config.clj#L28)
---
#### config!clj
```
(config! targets & args)
```
Applies options in the argument list to one or more targets. For example:
(config! button1 :enabled? false :text "I'm disabled")
or:
(config! [button1 button2] :enabled? false :text "We're disabled")
Targets must satisfy the Configurable protocol. In general, they may be widgets,
or convertible to widgets with (to-widget). For example, the target can be an event object.
Returns the input targets.
Throws IllegalArgumentException if an unknown option is encountered.
See:
(seesaw.core/config)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/config.clj#L47)
---
#### Configurablecljprotocol
A protocol for configuring and querying properties of an object. Client code should use (seesaw.core/config!) and (seesaw.core/config) rather than calling protocol methods directly.
See:
(seesaw.core/config)
(seesaw.core/config!)
#### config!*clj
```
(config!* target args)
```
Configure one or more options on target. Args is a list of key/value pairs. See (seesaw.core/config!)
#### config*clj
```
(config* target name)
```
Retrieve the current value for the given named option. See (seesaw.core/config)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/config.clj#L16)
seesaw.core
===
Core functions and macros for Seesaw. Although there are many more Seesaw namespaces, usually what you want is in here. Most functions in other namespaces have a core wrapper which adds additional capability or makes them easier to use.
---
#### abstract-panelclj
```
(abstract-panel layout opts)
```
```
(abstract-panel panel layout opts)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L935)
---
#### actionclj
Alias of seesaw.action/action:
Construct a new Action object. Supports the following properties:
```
:enabled? Whether the action is enabled
:selected? Whether the action is selected (for use with radio buttons,
toggle buttons, etc.)
:name The name of the action, i.e. the text that will be displayed
in whatever widget it's associated with
:command The action command key. An arbitrary string identifier associated
with the action.
:tip The action's tooltip
:icon The action's icon. See (seesaw.core/icon)
:key A keystroke associated with the action. See (seesaw.keystroke/keystroke).
:mnemonic The mnemonic for the button, either a character or a keycode.
Usually allows the user to activate the button with alt-mnemonic.
See (seesaw.util/to-mnemonic-keycode).
:handler A single-argument function that performs whatever operations are
associated with the action. The argument is a ActionEvent instance.
```
Instances of action can be passed to the :action option of most buttons, menu items,
etc.
Actions can be later configured with the same properties above with (seesaw.core/config!).
Returns an instance of javax.swing.Action.
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/Action.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L115)
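An illustrative sketch (not part of the original docstring) showing one action shared by a menu item and a button:
```
(require '[seesaw.core :as ss])

(def save-action
  (ss/action :name "Save"
             :tip "Save the current document"
             :key "menu S" ; see (seesaw.keystroke/keystroke)
             :handler (fn [e] (ss/alert e "Saved!"))))

;; The same action drives both the menu item and the button.
(ss/frame :title "Action demo"
          :menubar (ss/menubar :items [(ss/menu :text "File" :items [save-action])])
          :content (ss/button :action save-action))
```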
---
#### action-optionclj
Default handler for the :action option. Internal use.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L649)
---
#### add!clj
```
(add! container subject & more)
```
Add one or more widgets to a widget container. The container and each widget argument are passed through (to-widget) as usual. Each widget can be a single widget, or a widget/constraint pair with a layout-specific constraint.
The container is properly revalidated and repainted after adding.
Examples:
; Add a label and a checkbox to a panel
(add! (vertical-panel) "Hello" (button ...))
; Add a label and a checkbox to a border panel with layout constraints
(add! (border-panel) ["Hello" :north] [(button ...) :center])
Returns the target container *after* it's been passed through (to-widget).
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3702)
---
#### alertclj
```
(alert & args)
```
Show a simple message alert dialog:
(alert [source] message & options)
source - optional parent component
message - The message to show the user. May be a string, or list of strings, widgets, etc.
options - additional options
Additional options:
:title The dialog title
:type :warning, :error, :info, :plain, or :question
:icon Icon to display (Icon, URL, etc)
Examples:
(alert "Hello!")
(alert e "Hello!")
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JOptionPane.html#showMessageDialog%28java.awt.Component,%20java.lang.Object,%20java.lang.String,%20int%29>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3068)
---
#### all-framesclj
```
(all-frames)
```
Returns a sequence of all of the frames (includes java.awt.Frame) known by the JVM.
This function is really only useful for debugging and repl development, namely:
; Clear out all frames
(dispose! (all-frames))
Otherwise, it is highly unreliable. Frames will hang around after disposal, pile up and generally cause trouble.
You've been warned.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L308)
---
#### assert-ui-threadclj
```
(assert-ui-thread message)
```
Verify that the current thread is the Swing UI thread and throw IllegalStateException if it's not. message is included in the exception message.
Returns nil.
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/SwingUtilities.html#isEventDispatchThread%28%29>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L88)
---
#### base-resource-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L816)
---
#### border-panelclj
```
(border-panel & opts)
```
Create a panel with a border layout. In addition to the usual options,
supports:
:north widget for north position (passed through make-widget)
:south widget for south position (passed through make-widget)
:east widget for east position (passed through make-widget)
:west widget for west position (passed through make-widget)
:center widget for center position (passed through make-widget)
:hgap horizontal gap between widgets
:vgap vertical gap between widgets
The :items option is a list of widget/direction pairs which can be used if you don't want to use the direction options directly. For example, both of these are equivalent:
(border-panel :north "North" :south "South")
is the same as:
(border-panel :items [["North" :north] ["South" :south]])
This is for consistency with other containers.
See:
<http://download.oracle.com/javase/6/docs/api/java/awt/BorderLayout.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L974)
---
#### border-panel-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L972)
---
#### box-panelclj
```
(box-panel dir & opts)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1063)
---
#### box-panel-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1061)
---
#### buttonclj
```
(button & args)
```
Construct a generic button. In addition to default widget options, supports the following:
```
:halign Horizontal alignment. One of :left, :right, :leading, :trailing,
:center
:valign Vertical alignment. One of :top, :center, :bottom
:selected? Whether the button is initially selected. Mostly for checked
and radio buttons/menu-items.
:margin The button margins as insets. See (seesaw.util/to-insets)
:group A button-group that the button should be added to.
:resource A resource prefix (see below).
:mnemonic The mnemonic for the button, either a character or a keycode.
Usually allows the user to activate the button with alt-mnemonic.
See (seesaw.util/to-mnemonic-keycode).
```
Resources and i18n:
A button's base properties can be set from a resource prefix, i.e. a namespace-
qualified keyword that refers to a resource bundle loadable by j18n.
Examples:
; Create a button with text "Next" with alt-N mnemonic shortcut that shows
; an alert when clicked.
(button :text "Next"
:mnemonic \N
:listen [:action #(alert % "NEXT!")])
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JButton.html>
(seesaw.core/button-group)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1264)
---
#### button-groupclj
```
(button-group & opts)
```
Creates a button group, i.e. a group of mutually exclusive toggle buttons,
radio buttons, toggle-able menus, etc. Takes the following options:
:buttons A sequence of buttons to include in the group. They are *not*
passed through (make-widget), i.e. they must be button or menu instances.
The mutual exclusion of the buttons in the group will be maintained automatically.
The currently "selected" button can be retrieved and set with (selection) and
(selection!) as usual.
Note that a button can be added to a group when the button is created using the
:group option of the various button and menu creation functions.
Examples:
(let [bg (button-group)]
(flow-panel :items [(radio :id :a :text "A" :group bg)
(radio :id :b :text "B" :group bg)]))
; now A and B are mutually exclusive
; Check A
(selection! bg (select root [:#a]))
; Listen for selection changes. Note that the selection MAY BE NIL!
; Also note that the event that comes through is from the selected radio button
; *not the button-group itself* since the button-group is a somewhat artificial
; construct. So, you'll have to ask for (selection bg) instead of (selection e) : (
(listen bg :selection
(fn [e]
(if-let [s (selection bg)]
(println "Selected " (text s)))))
Returns an instance of javax.swing.ButtonGroup
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/ButtonGroup.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1204)
---
#### button-group-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1195)
---
#### button-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1247)
---
#### canvasclj
```
(canvas & opts)
```
Creates a paintable canvas, i.e. a JPanel with paintComponent overridden.
Painting is configured with the :paint property which can take the following values:
nil - disables painting. The widget will be filled with its background color unless it is not opaque.
(fn [c g]) - a paint function that takes the widget and a Graphics2D as arguments. Called after super.paintComponent.
{:before fn :after fn :super? bool} - a map with :before and :after functions which are called before and after super.paintComponent respectively. If super?
is false, super.paintComponent is not called.
Notes:
The :paint option is actually supported by *all* Seesaw widgets.
(seesaw.core/config!) can be used to change the :paint property at any time.
Some customizations are also possible and maybe easier with the creative use of borders.
Examples:
(canvas :paint #(.drawString %2 "I'm a canvas" 10 10))
See:
(seesaw.graphics)
(seesaw.examples.canvas)
<http://download.oracle.com/javase/6/docs/api/javax/swing/JComponent.html#paintComponent%28java.awt.Graphics%29>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2562)
---
#### canvas-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2560)
---
#### card-panelclj
```
(card-panel & opts)
```
Create a panel with a card layout. Options:
:items A list of pairs with format [widget, identifier]
where identifier is a string or keyword.
See:
(seesaw.core/show-card!)
<http://download.oracle.com/javase/6/docs/api/java/awt/CardLayout.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1011)
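An illustrative sketch (not from the original docstring) pairing card-panel with (seesaw.core/show-card!):
```
(require '[seesaw.core :as ss])

(def cards
  (ss/card-panel :items [[(ss/label "First card") :one]
                         [(ss/label "Second card") :two]]))

;; A button below the cards flips to the second card.
(ss/frame :title "Cards"
          :content (ss/border-panel
                     :center cards
                     :south (ss/button :text "Next"
                                       :listen [:action (fn [_] (ss/show-card! cards :two))])))
```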
---
#### card-panel-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1009)
---
#### checkboxclj
```
(checkbox & args)
```
Same as (seesaw.core/button), but creates a checkbox. Use :selected? option to set initial state.
See:
(seesaw.core/button)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1312)
---
#### checkbox-menu-itemclj
```
(checkbox-menu-item & args)
```
Create a checked menu item for use in (seesaw.core/menu). Supports same options as
(seesaw.core/button)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2267)
---
#### checkbox-menu-item-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2265)
---
#### checkbox-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1310)
---
#### comboboxclj
```
(combobox & args)
```
Create a combo box (JComboBox). Additional options:
:model Instance of ComboBoxModel, or sequence of values used to construct a default model.
:renderer Cell renderer used for display. See (seesaw.cells/to-cell-renderer).
Note that the current selection can be retrieved and set with the (selection) and
(selection!) functions. Calling (seesaw.core/text) on a combobox will return
(str (selection cb)). (seesaw.core/text!) is not supported.
Notes:
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JComboBox.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1845)
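An illustrative sketch (not part of the original docstring) of building a combobox from a plain seq and working with its selection:
```
(require '[seesaw.core :as ss])

(def size-cb (ss/combobox :model ["small" "medium" "large"]))

(ss/selection size-cb)           ; => "small" (initially selected item)
(ss/selection! size-cb "large")  ; select an item programmatically
(ss/listen size-cb :selection
           (fn [e] (println "Chose:" (ss/selection e))))
```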
---
#### combobox-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1830)
---
#### configclj
Alias of seesaw.config/config:
Retrieve the value of an option from target. For example:
```
(config button1 :text)
=> "I'm a button!"
```
Target must satisfy the Configurable protocol. In general, it may be a widget,
or convertible to widget with (to-widget). For example, the target can be an event object.
Returns the option value.
Throws IllegalArgumentException if an unknown option is requested.
See:
(seesaw.core/config!)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L536)
---
#### config!clj
Alias of seesaw.config/config!:
Applies options in the argument list to one or more targets. For example:
```
(config! button1 :enabled? false :text "I'm disabled")
```
or:
```
(config! [button1 button2] :enabled? false :text "We're disabled")
```
Targets must satisfy the Configurable protocol. In general, they may be widgets,
or convertible to widgets with (to-widget). For example, the target can be an event object.
Returns the input targets.
Throws IllegalArgumentException if an unknown option is encountered.
See:
(seesaw.core/config)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L538)
---
#### ConfigActioncljprotocol
Protocol to hook into :action option
#### get-action*clj
```
(get-action* this)
```
#### set-action*clj
```
(set-action* this v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L633)
---
#### ConfigIconcljprotocol
Protocol to hook into :icon option
#### get-icon*clj
```
(get-icon* this)
```
#### set-icon*clj
```
(set-icon* this v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L549)
---
#### ConfigModelcljprotocol
Protocol to hook into :model option
#### get-model*clj
```
(get-model* this)
```
#### set-model*clj
```
(set-model* this m)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L655)
---
#### ConfigTextcljprotocol
Protocol to hook into :text option
#### get-text*clj
```
(get-text* this)
```
#### set-text*clj
```
(set-text* this v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L573)
---
#### confirmclj
```
(confirm & args)
```
Show a confirmation dialog:
(confirm [source] message & options)
source - optional parent component
message - The message to show the user. May be a string, or list of strings, widgets, etc.
options - additional options
Additional options:
:title The dialog title
:option-type :yes-no, :yes-no-cancel, or :ok-cancel (default)
:type :warning, :error, :info, :plain, or :question
:icon Icon to display (Icon, URL, etc)
Returns true if the user has hit Yes or OK, false if they hit No,
and nil if they hit Cancel.
See:
<http://docs.oracle.com/javase/6/docs/api/javax/swing/JOptionPane.html#showConfirmDialog%28java.awt.Component,%20java.lang.Object,%20java.lang.String,%20int,%20int%29>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3334)
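An illustrative sketch (not part of the original docstring); note the three possible return values:
```
(require '[seesaw.core :as ss])

(case (ss/confirm "Delete this file?"
                  :title "Please confirm"
                  :option-type :yes-no-cancel
                  :type :warning)
  true  (println "deleting...")
  false (println "kept")
  nil   (println "cancelled"))
```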
---
#### constructcljmacro
```
(construct factory-class & opts)
```
*experimental. subject to change.*
A macro that returns a proxied instance of the given class. This is used by Seesaw to construct widgets that can be fiddled with later,
e.g. installing a paint handler, etc.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L213)
---
#### custom-dialogclj
```
(custom-dialog &
{:keys [width height visible? modal? on-close size]
:or {width 100 height 100 visible? false}
:as opts})
```
Create a dialog and display it.
```
(custom-dialog ... options ...)
```
Besides the default & frame options, options can also be one of:
:parent The window which the new dialog should be positioned relatively to.
:modal? A boolean value indicating whether this dialog is to be a modal dialog. If :modal? *and* :visible? are set to true (:visible? is true per default), the function will block with a dialog. The function will return once the user:
a) Closes the window by using the system window manager (e.g. by pressing the "X" icon in many OS's)
b) A function from within an event calls (dispose!) on the dialog
c) A function from within an event calls RETURN-FROM-DIALOG with a return value.
In the case of a) and b), this function returns nil. In the case of c), this function returns the value passed to RETURN-FROM-DIALOG. Default: true.
Returns a JDialog. Use (seesaw.core/show!) to display the dialog.
Notes:
See:
(seesaw.core/show!)
(seesaw.core/return-from-dialog)
<http://download.oracle.com/javase/6/docs/api/javax/swing/JDialog.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2996)
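An illustrative sketch (not part of the original docstring): a modal dialog that returns the entered text via (seesaw.core/return-from-dialog), or nil if the window is simply closed:
```
(require '[seesaw.core :as ss])

(defn ask-name []
  (let [field (ss/text :columns 20)
        dlg (ss/custom-dialog
              :title "Your name" :modal? true :on-close :dispose
              :content (ss/border-panel
                         :center field
                         :south (ss/button
                                  :text "OK"
                                  :listen [:action (fn [e] (ss/return-from-dialog e (ss/text field)))])))]
    ;; show! blocks for modal dialogs and returns the value passed
    ;; to return-from-dialog.
    (-> dlg ss/pack! ss/show!)))
```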
---
#### custom-dialog-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2924)
---
#### default-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L818)
---
#### dialogclj
```
(dialog & {:as opts})
```
Display a JOptionPane. This is a dialog which displays some input/question to the user, which may be answered using several standard button configurations or entirely custom ones.
```
(dialog ... options ...)
```
Options can be any of:
:content May be a string or a component (or a panel with even more components) which is to be displayed.
:option-type In case :options is *not* specified, this may be one of
:default, :yes-no, :yes-no-cancel, :ok-cancel to specify which standard button set is to be used in the dialog.
:type The type of the dialog. One of :warning, :error, :info, :plain, or :question.
:options Custom buttons/options can be provided using this argument.
It must be a seq of "make-widget"'able objects which will be displayed as options the user can choose from. Note that in this case, :success-fn, :cancel-fn & :no-fn will *not* be called.
Use the handlers on those buttons & RETURN-FROM-DIALOG to close the dialog.
:default-option The default option instance which is to be selected. This should be an element from the :options seq.
:success-fn A function taking the JOptionPane as its only argument. It will be called when no :options argument has been specified and the user has pressed any of the "Yes" or "Ok" buttons.
Default: a function returning :success.
:cancel-fn A function taking the JOptionPane as its only argument. It will be called when no :options argument has been specified and the user has pressed the "Cancel" button.
Default: a function returning nil.
:no-fn A function taking the JOptionPane as its only argument. It will be called when no :options argument has been specified and the user has pressed the "No" button.
Default: a function returning :no.
Any remaining options will be passed to dialog.
Examples:
; display a dialog with only an "Ok" button.
(dialog :content "You may now press Ok")
; display a dialog to enter a users name and return the entered name.
(dialog :content
(flow-panel :items ["Enter your name" (text :id :name :text "Your name here")])
:option-type :ok-cancel
:success-fn (fn [p] (text (select (to-root p) [:#name]))))
The dialog is not immediately shown. Use (seesaw.core/show!) to display the dialog.
If the dialog is modal this will return the result of :success-fn, :cancel-fn or
:no-fn depending on what button the user pressed.
Alternatively if :options has been specified, returns the value which has been passed to (seesaw.core/return-from-dialog).
See:
(seesaw.core/show!)
(seesaw.core/return-from-dialog)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3209)
---
#### dispose!clj
```
(dispose! targets)
```
Dispose the given frame, dialog or window. target can be anything that can be converted to a root-level object with (to-root).
Returns its input.
See:
<http://download.oracle.com/javase/6/docs/api/java/awt/Window.html#dispose%28%29>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L294)
---
#### editor-paneclj
```
(editor-pane & opts)
```
Create a JEditorPane. Custom options:
:page A URL (string or java.net.URL) with the contents of the editor
:content-type The content-type, for example "text/html" for some crappy HTML rendering.
:editor-kit The EditorKit. See Javadoc.
Notes:
An editor pane can fire 'hyperlink' events when elements are clicked,
say like a hyperlink in an html doc. You can listen to these with the
:hyperlink event:
```
(listen my-editor :hyperlink (fn [e] ...))
```
where the event is an instance of javax.swing.event.HyperlinkEvent.
From there you can inspect the event, inspect the clicked element,
etc.
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JEditorPane.html>
<http://docs.oracle.com/javase/6/docs/api/javax/swing/event/HyperlinkEvent.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1628)
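An illustrative sketch (not part of the original docstring) rendering a small HTML snippet and reacting to activated hyperlinks:
```
(require '[seesaw.core :as ss])

(def ep (ss/editor-pane
          :content-type "text/html"
          :editable? false
          :text "<html>Visit <a href='https://github.com/daveray/seesaw'>Seesaw</a></html>"))

;; The handler receives a javax.swing.event.HyperlinkEvent.
(ss/listen ep :hyperlink
           (fn [e]
             (when (= (.getEventType e)
                      javax.swing.event.HyperlinkEvent$EventType/ACTIVATED)
               (println "Clicked:" (.getURL e)))))
```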
---
#### editor-pane-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1618)
---
#### flow-panelclj
```
(flow-panel & opts)
```
Create a panel with a flow layout. Options:
:items List of widgets (passed through make-widget)
:hgap horizontal gap between widgets
:vgap vertical gap between widgets
:align :left, :right, :leading, :trailing, :center
:align-on-baseline?
See <http://download.oracle.com/javase/6/docs/api/java/awt/FlowLayout.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1044)
---
#### flow-panel-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1042)
---
#### form-panelclj
```
(form-panel & opts)
```
*Don't use this. GridBagLayout is an abomination* I suggest using Seesaw's MigLayout (seesaw.mig) or JGoodies Forms (seesaw.forms) support instead.
A panel that uses a GridBagLayout. Also aliased as (grid-bag-panel) if you want to be reminded of GridBagLayout. The :items property should be a list of vectors of the form:
```
[widget & options]
```
where widget is something widgetable and options are key/value pairs corresponding to GridBagConstraints fields. For example:
[["Name" :weightx 0]
[(text :id :name) :weightx 1 :fill :horizontal]]
This creates a label/field pair where the field expands.
See <http://download.oracle.com/javase/6/docs/api/java/awt/GridBagLayout.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1117)
---
#### form-panel-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1115)
---
#### frameclj
```
(frame & {:keys [width height visible? size] :as opts})
```
Create a JFrame. Options:
:id id of the window, used by (select).
:title the title of the window
:icon the icon of the frame (varies by platform)
:width initial width. Note that calling (pack!) will negate this setting
:height initial height. Note that calling (pack!) will negate this setting
:size initial size. Note that calling (pack!) will negate this setting
:minimum-size minimum size of frame, e.g. [640 :by 480]
:content passed through (make-widget) and used as the frame's content-pane
:visible? whether frame should be initially visible (default false)
:resizable? whether the frame can be resized (default true)
:on-close default close behavior. One of :exit, :hide, :dispose, :nothing The default value is :hide. Note that the :window-closed event is only fired for values :exit and :dispose
returns the new frame.
Examples:
; Create a frame, pack it and show it.
(-> (frame :title "HI!" :content "I'm a label!")
pack!
show!)
; Create a frame with an initial size (note that pack! isn't called)
(show! (frame :title "HI!" :content "I'm a label!" :width 500 :height 600))
Notes:
Unless :visible? is set to true, the frame will not be displayed until (show!)
is called on it.
Call (pack!) on the frame if you'd like the frame to resize itself to fit its contents. Sometimes this doesn't look like crap.
See:
(seesaw.core/show!)
(seesaw.core/hide!)
(seesaw.core/move!)
<http://download.oracle.com/javase/6/docs/api/javax/swing/JFrame.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2821)
---
#### frame-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2776)
---
#### full-screen!clj
```
(full-screen! window)
```
```
(full-screen! device window)
```
Make the given window/frame full-screen. Pass nil to return all windows to normal size.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2743)
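An illustrative sketch (not part of the original docstring):
```
(require '[seesaw.core :as ss])

(def f (ss/show! (ss/frame :title "Full screen demo" :content "Hello")))

(ss/full-screen! f)   ; put the frame in full-screen mode on the default device
(ss/full-screen? f)   ; => true
(ss/full-screen! nil) ; return all windows to normal size
```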
---
#### full-screen-windowclj
```
(full-screen-window)
```
```
(full-screen-window device)
```
Returns the window/frame that is currently in full-screen mode or nil if none.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2704)
---
#### full-screen?clj
```
(full-screen? window)
```
```
(full-screen? device window)
```
Returns true if the given window/frame is in full-screen mode
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2712)
---
#### get-drag-enabledclj
```
(get-drag-enabled this)
```
---
#### grid-bag-panelclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1140)
---
#### grid-panelclj
```
(grid-panel & {:keys [rows columns] :as opts})
```
Create a panel where widgets are arranged in a grid. Options:
:rows Number of rows, defaults to 0, i.e. unspecified.
:columns Number of columns.
:items List of widgets (passed through make-widget)
:hgap horizontal gap between widgets
:vgap vertical gap between widgets
Note that it's usually sufficient to just give :columns and ignore :rows.
See <http://download.oracle.com/javase/6/docs/api/java/awt/GridLayout.html>
```
Create a panel where widgets are arranged horizontally. Options:
:rows Number of rows, defaults to 0, i.e. unspecified.
:columns Number of columns.
:items List of widgets (passed through make-widget)
:hgap horizontal gap between widgets
:vgap vertical gap between widgets
Note that it's usually sufficient to just give :columns and ignore :rows.
See http://download.oracle.com/javase/6/docs/api/java/awt/GridLayout.html
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1094)[raw docstring](#)
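An illustrative sketch (not part of the original docstring): a two-column grid of label/field pairs:
```
(require '[seesaw.core :as ss])

(ss/grid-panel :columns 2 :hgap 5 :vgap 5
               :items ["Name" (ss/text :columns 15)
                       "Email" (ss/text :columns 15)])
```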
---
#### grid-panel-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1092)
---
#### group-by-idclj
```
(group-by-id root)
```
Group the widgets in a hierarchy starting at some root into a map keyed by :id. Widgets with no id are ignored. If an id appears twice,
the 'later' widget wins.
root is any (to-widget)-able object.
Examples:
Suppose you have a form with widgets with ids :name, :address,
:phone, :city, :state, :zip.
You'd like to quickly grab all those widgets and do something with them from an event handler:
```
(fn [event]
(let [{:keys [name address phone city state zip]} (group-by-id event)
... do something ...))
```
This is functionally equivalent to, but faster than:
```
(let [name (select event [:#name])
address (select event [:#address])
phone (select event [:#phone])
... and so on ...]
... do something ...)
```
See:
(seesaw.core/select)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3627)
---
#### heightclj
```
(height w)
```
Returns the height of the given widget in pixels
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L467)
---
#### hide!clj
```
(hide! targets)
```
Hide a frame, dialog or widget.
Returns its input.
See:
<http://download.oracle.com/javase/6/docs/api/java/awt/Window.html#setVisible%28boolean%29>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L267)
---
#### horizontal-panelclj
```
(horizontal-panel & opts)
```
Create a panel where widgets are arranged horizontally. Options:
:items List of widgets (passed through make-widget)
See <http://download.oracle.com/javase/6/docs/api/javax/swing/BoxLayout.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1069)
---
#### horizontal-panel-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1067)
---
#### iconclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L171)
---
#### id-forclj
Deprecated. See (seesaw.core/id-of)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L487)
---
#### id-ofclj
```
(id-of w)
```
Returns the id of the given widget if the :id property was specified at creation. The widget parameter is passed through (to-widget) first so events and other objects can also be used. The id is returned as a keyword, even if it was originally given as a string.
Returns the id as a keyword, or nil.
See:
(seesaw.core/select).
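A minimal sketch of the round trip described above; the button and its :id are made-up examples:
```
(require '[seesaw.core :as s])
(def ok-button (s/button :id :ok :text "OK"))
(s/id-of ok-button)            ;=> :ok (a keyword, per the docstring)
(s/id-of (s/button :text "?")) ;=> nil when no :id was given
```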
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L473)
---
#### inputclj
```
(input & args)
```
Show an input dialog:
(input [source] message & options)
source - optional parent component
message - The message to show the user. May be a string, or list of strings, widgets, etc.
options - additional options
Additional options:
:title The dialog title
:value The initial, default value to show in the dialog
:choices List of values to choose from rather than freeform entry
:type :warning, :error, :info, :plain, or :question
:icon Icon to display (Icon, URL, etc)
:to-string A function which creates the string representation of the values in :choices. This lets you use arbitrary Clojure data structures while keeping things looking nice. Defaults to str.
Examples:
; Ask for a string input
(input "Bang the keyboard like a monkey")
; Ask for a choice from a set
(input "Pick a color" :choices ["RED" "YELLO" "GREEN"])
; Choose from a list of maps using a custom string function for the display.
; This will display only the city names, but the return value will be one of
; maps in the :choices list. Yay!
(input "Pick a city"
:choices [{ :name "New York" :population 8000000 }
{ :name "<NAME>" :population 100000 }
{ :name "<NAME>" :population 5201 }]
:to-string :name)
Returns the user input or nil if they hit cancel.
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JOptionPane.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3134)
---
#### InputChoiceclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3105)
---
#### invoke-latercljmacro
```
(invoke-later & args)
```
Alias for seesaw.invoke/invoke-later
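A typical use (illustrative only, not from the original docs): building and showing UI on the Swing event-dispatch thread.
```
(require '[seesaw.core :as s])
(s/invoke-later
  (-> (s/frame :title "Hello" :content "Hi there" :on-close :dispose)
      s/pack!
      s/show!))
```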
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L60)
---
#### invoke-nowcljmacro
```
(invoke-now & args)
```
Alias for seesaw.invoke/invoke-now
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L55)
---
#### invoke-sooncljmacro
```
(invoke-soon & args)
```
Alias for seesaw.invoke/invoke-soon
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L65)
---
#### labelclj
```
(label & args)
```
Create a label. Supports all default properties. Can take two forms:
```
(label "My Label") ; Single text argument for the label
```
or with full options:
```
(label :id :my-label :text "My Label" ...)
```
Additional options:
:h-text-position Horizontal text position, :left, :right, :center, etc.
:v-text-position Vertical text position, :top, :center, :bottom, etc.
:resource Namespace-qualified keyword which is a resource prefix for the labels properties
Resources and i18n:
A label's base properties can be set from a resource prefix, i.e. a namespace-
qualified keyword that refers to a resource bundle loadable by j18n.
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JLabel.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1157)
---
#### label-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1145)
---
#### LayoutOrientationConfigcljprotocol
Hook protocol for :layout-orientation option
#### get-layout-orientation*clj
```
(get-layout-orientation* this)
```
#### set-layout-orientation*clj
```
(set-layout-orientation* this v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L733)
---
#### left-right-splitclj
```
(left-right-split left right & args)
```
Create a left/right (horizontal) splitpane with the given widgets. See
(seesaw.core/splitter) for additional options. Options are given after the two widgets.
Notes:
See:
(seesaw.core/splitter)
<http://download.oracle.com/javase/6/docs/api/javax/swing/JSplitPane.html>
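An illustrative sketch (not from the original docstring), assuming the widgets shown and a proportional :divider-location as described in (seesaw.core/splitter):
```
(require '[seesaw.core :as s])
(def split (s/left-right-split (s/scrollable (s/listbox :model ["a" "b" "c"]))
                               (s/scrollable (s/text :multi-line? true))
                               :divider-location 1/3))
(s/show! (s/frame :title "split demo" :content split :width 400 :height 300))
```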
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2197)
---
#### left-right-split-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2195)
---
#### listboxclj
```
(listbox & args)
```
Create a list box (JList). Additional options:
:model A ListModel, or a sequence of values with which a DefaultListModel will be constructed.
:renderer A cell renderer to use. See (seesaw.cells/to-cell-renderer).
Notes:
Retrieving and setting the current selection of the list box is fully supported by the (selection) and (selection!) functions.
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JList.html>
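A short sketch (not from the original docstring) of creating a listbox and working with its selection; the model values are arbitrary:
```
(require '[seesaw.core :as s])
(def lb (s/listbox :model ["Apple" "Banana" "Cherry"]))
(s/selection! lb "Banana")   ; select an item programmatically
(s/selection lb)             ;=> "Banana", or nil if nothing is selected
```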
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1681)
---
#### listbox-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1666)
---
#### listenclj
Alias of seesaw.event/listen:
*note: use seesaw.core/listen rather than calling this directly*
Install listeners for one or more events on the given target. For example:
(listen (button "foo")
:mouse-entered (fn [e] ...)
:focus-gained (fn [e] ...)
:key-pressed (fn [e] ...)
:mouse-wheel-moved (fn [e] ...))
one function can be registered for multiple events by using a set of event names instead of one:
(listen (text)
#{:remove-update :insert-update} (fn [e] ...))
Note in this case that it's smart enough to add a document listener to the JTextFields document.
Similarly, an event can be registered for all events in a particular swing listener interface by just using the keyword-ized prefix of the interface name. For example, to get all callbacks in the MouseListener interface:
(listen my-widget :mouse (fn [e] ...))
Returns a function which, when called, removes all listeners registered with this call.
When the target is a JTable and listener type is :selection, only row selection events are reported. Also note that the source table is
*not* retrievable from the event object.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L112)
---
#### make-widgetclj
```
(make-widget v)
```
Try to create a new widget based on the following rules:
nil -> nil
java.awt.Component -> return argument unchanged (like to-widget)
java.util.EventObject -> return the event source (like to-widget)
java.awt.Dimension -> return Box/createRigidArea
java.swing.Action -> return a button using the action
:separator -> create a horizontal JSeparator
:fill-h -> Box/createHorizontalGlue
:fill-v -> Box/createVerticalGlue
[:fill-h n] -> Box/createHorizontalStrut with width n
[:fill-v n] -> Box/createVerticalStrut with height n
[width :by height] -> create rigid area with given dimensions
java.net.URL -> a label with the image located at the url
Anything else -> a label with the text from passing the object through str
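A few of the conversions above, shown as a sketch (the return types follow the rules listed):
```
(require '[seesaw.core :as s])
(s/make-widget "Hi")        ; a label with the text "Hi"
(s/make-widget [10 :by 20]) ; a rigid area of 10x20 pixels
(s/make-widget :fill-h)     ; horizontal glue
```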
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L177)
---
#### menuclj
```
(menu & opts)
```
Create a new menu. In addition to all options applicable to (seesaw.core/button)
the following additional options are supported:
:items Sequence of menu item-like things (actions, icons, JMenuItems, etc)
Notes:
See:
(seesaw.core/button)
<http://download.oracle.com/javase/6/docs/api/javax/swing/JMenu.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2310)
---
#### menu-itemclj
```
(menu-item & args)
```
Create a menu item for use in (seesaw.core/menu). Supports same options as
(seesaw.core/button)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2259)
---
#### menu-item-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2251)
---
#### menu-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2295)
---
#### menubarclj
```
(menubar & opts)
```
Create a new menu bar, suitable for the :menubar property of (frame).
Additional options:
:items Sequence of menus, see (menu).
Notes:
See:
(seesaw.core/frame)
<http://download.oracle.com/javase/6/docs/api/javax/swing/JMenuBar.html>
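An illustrative sketch (not from the original docstring) tying (menubar), (menu), and (frame) together; the action and labels are assumptions:
```
(require '[seesaw.core :as s])
(def open-action (s/action :name "Open" :handler (fn [e] (println "open..."))))
(def mb (s/menubar :items [(s/menu :text "File" :items [open-action])]))
(s/show! (s/frame :title "menubar demo" :menubar mb :width 300 :height 200))
```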
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2380)
---
#### menubar-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2372)
---
#### model-optionclj
Default handler for the :model option. Delegates to the ConfigModel protocol
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L684)
---
#### move!clj
```
(move! target how & [loc])
```
Move a widget relatively or absolutely. target is a 'to-widget'-able object,
type is :by or :to, and loc is a two-element vector or instance of java.awt.Point. The how type parameter has the following interpretation:
:to The absolute position of the widget is set to the given point
:by The position of the widget is adjusted by the amount in the given point relative to its current position.
:to-front Move the widget to the top of the z-order in its parent.
Returns target.
Examples:
; Move x to the point (42, 43)
(move! x :to [42, 43])
; Move x to y position 43 while keeping x unchanged
(move! x :to [:*, 43])
; Move x relative to its current position. Assume initial position is (42, 43).
(move! x :by [50, -20])
; ... now x's position is [92, 23]
Notes:
For widgets, this function will generally only have an effect on widgets whose container has a nil layout! This function has similar functionality to the :bounds and :location options, but is a little more flexible and readable.
See:
(seesaw.core/xyz-panel)
http://download.oracle.com/javase/6/docs/api/java/awt/Component.html#setLocation(int, int)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L412)
---
#### move-by!clj
```
(move-by! this dx dy)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/#L)
---
#### move-to!clj
```
(move-to! this x y)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/#L)
---
#### move-to-back!clj
```
(move-to-back! this)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/#L)
---
#### move-to-front!clj
```
(move-to-front! this)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/#L)
---
#### native!clj
```
(native!)
```
Set native look and feel and other options to try to make things look right.
This function must be called very early, like before any other Seesaw or Swing calls!
Note that on OSX, you can set the application name in the menu bar (usually displayed as the main class name) by setting the -Xdock:<name-of-your-app>
parameter to the JVM at startup. Sorry, I don't know of a way to do it dynamically.
See:
<http://developer.apple.com/library/mac/#documentation/Java/Conceptual/Java14Development/07-NativePlatformIntegration/NativePlatformIntegration.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L70)
---
#### pack!clj
```
(pack! targets)
```
Pack a frame or window, causing it to resize to accommodate the preferred size of its contents.
Returns its input.
See:
<http://download.oracle.com/javase/6/docs/api/java/awt/Window.html#pack%28%29>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L280)
---
#### paintablecljmacro
```
(paintable cls & opts)
```
*Deprecated. Just use :paint directly on any widget.*
Macro that generates a paintable widget, i.e. a widget that can be drawn on by client code. target is a Swing class literal indicating the type that will be constructed.
All other options will be passed along to the given Seesaw widget as usual and will be applied to the generated class.
Notes:
If you just want a panel to draw on, use (seesaw.core/canvas). This macro is intended for customizing the appearance of existing widget types.
Examples:
; Create a raw JLabel and paint over it.
(paintable javax.swing.JLabel :paint (fn [c g] (.fillRect g 0 0 20 20)))
See:
(seesaw.core/canvas)
(seesaw.graphics)
<http://download.oracle.com/javase/6/docs/api/javax/swing/JComponent.html#paintComponent%28java.awt.Graphics%29>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2533)
---
#### passwordclj
```
(password & opts)
```
Create a password field. Options are the same as single-line text fields with the following additions:
:echo-char The char displayed for the characters in the password field
Returns an instance of JPasswordField.
Example:
(password :echo-char \X)
Notes:
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JPasswordField.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1568)
---
#### password-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1560)
---
#### popupclj
```
(popup & opts)
```
Create a new popup menu. Additional options:
:items Sequence of menu item-like things (actions, icons, JMenuItems, etc)
Note that in many cases, the :popup option is what you want if you want to show a context menu on a widget. It handles all the yucky mouse stuff and fixes various eccentricities of Swing.
Notes:
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JPopupMenu.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2340)
---
#### popup-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2324)
---
#### progress-barclj
```
(progress-bar & {:keys [orientation value min max] :as opts})
```
Show a progress-bar which can be used to display the progress of long running tasks.
```
(progress-bar ... options ...)
```
Besides the default options, options can also be one of:
:orientation The orientation of the progress-bar. One of :horizontal, :vertical. Default: :horizontal.
:value The initial numerical value that is to be set. Default: 0.
:min The minimum numerical value which can be set. Default: 0.
:max The maximum numerical value which can be set. Default: 100.
:paint-string? A boolean value indicating whether to paint a string containing the progress' percentage. Default: false.
:indeterminate? A boolean value indicating whether the progress bar is to be in indeterminate mode (for when the exact state of the task is not yet known). Default: false.
Examples:
; vertical progress bar from 0 to 100 starting with initial value at 15.
(progress-bar :orientation :vertical :min 0 :max 100 :value 15)
Returns a JProgressBar.
Notes:
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JProgressBar.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3451)
---
#### progress-bar-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3437)
---
#### radioclj
```
(radio & args)
```
Same as (seesaw.core/button), but creates a radio button. Use :selected? option to set initial state.
See:
(seesaw.core/button)
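A sketch (not from the original docstring) of radio buttons made mutually exclusive via (seesaw.core/button-group); the labels are made up:
```
(require '[seesaw.core :as s])
(def size-group (s/button-group))
(def choices (s/horizontal-panel
               :items [(s/radio :text "Small" :group size-group :selected? true)
                       (s/radio :text "Large" :group size-group)]))
(s/selection size-group) ; the currently selected radio button, if any
```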
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1323)
---
#### radio-menu-itemclj
```
(radio-menu-item & args)
```
Create a radio menu item for use in (seesaw.core/menu). Supports same options as
(seesaw.core/button).
Notes:
Use (seesaw.core/button-group) or the :group option to enforce mutual exclusion across menu items.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2275)
---
#### radio-menu-item-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2273)
---
#### radio-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1321)
---
#### remove!clj
```
(remove! container subject & more)
```
Remove one or more widgets from a container. container and each widget are passed through (to-widget) as usual, but no new widgets are created.
The container is properly revalidated and repainted after removal.
Examples:
(def lbl (label "HI"))
(def p (border-panel :north lbl))
(remove! p lbl)
Returns the target container *after* it's been passed through (to-widget).
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3723)
---
#### repaint!clj
```
(repaint! targets)
```
Request a repaint of one or a list of widget-able things.
Example:
; Repaint just one widget
(repaint! my-widget)
; Repaint all widgets in a hierarchy
(repaint! (select [:*] root))
Returns targets.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L323)
---
#### replace!clj
```
(replace! container old-widget new-widget)
```
Replace old-widget with new-widget from container. container and old-widget are passed through (to-widget). new-widget is passed through make-widget.
Note that the layout constraints of old-widget are retained for the new widget.
This is different from the behavior you'd get with just remove/add in Swing.
The container is properly revalidated and repainted after replacement.
Examples:
; Replace a label with a new label.
(def lbl (label "HI"))
(def p (border-panel :north lbl))
(replace! p lbl "Goodbye")
Returns the target container *after* it's been passed through (to-widget).
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3741)
---
#### request-focus!clj
```
(request-focus! target)
```
Request focus for the given widget-able thing. This will try to give keyboard focus to the given widget. Returns its input.
The widget must be :focusable? for this to succeed.
Example:
(request-focus! my-widget)
; Move focus on click
(listen my-widget :focus-gained request-focus!)
See:
[http://docs.oracle.com/javase/6/docs/api/javax/swing/JComponent.html#requestFocusInWindow()](http://docs.oracle.com/javase/6/docs/api/javax/swing/JComponent.html#requestFocusInWindow%28%29)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L341)
---
#### return-from-dialogclj
```
(return-from-dialog dlg result)
```
Return from the given dialog with the specified value. dlg may be anything that can be converted into a dialog as with (to-root). For example, an event, or a child widget of the dialog. Result is the value that will be returned from the blocking (dialog), (custom-dialog), or (show!)
call.
Examples:
; A button with an action listener that will cause the dialog to close
; and return :ok to the invoker.
(button
:text "OK"
:listen [:action (fn [e] (return-from-dialog e :ok))])
Notes:
The dialog must be modal and created from within the DIALOG fn with
:modal? set to true.
See:
(seesaw.core/dialog)
(seesaw.core/custom-dialog)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2963)
---
#### scroll!clj
```
(scroll! target modifier arg)
```
Scroll a widget. Obviously, the widget must be contained in a scrollable.
Returns the widget.
The basic format of the function call is:
(scroll! widget modifier argument)
widget is passed through (to-widget) as usual. Currently, the only accepted value for modifier is :to. The interpretation and set of accepted values for argument depends on the type of widget:
All Widgets:
```
:top - Scroll to the top of the widget
:bottom - Scroll to the bottom of the widget
java.awt.Point - Scroll so the given pixel point is visible
java.awt.Rectangle - Scroll so the given rectangle is visible
[:point x y] - Scroll so the given pixel point is visible
[:rect x y w h] - Scroll so the given rectangle is visible
```
listboxes (JList):
```
[:row n] - Scroll so that row n is visible
```
tables (JTable):
```
[:row n] - Scroll so that row n is visible
[:column n] - Scroll so that column n is visible
[:cell row col] - Scroll so that the given cell is visible
```
text widgets:
```
[:line n] - Scroll so that line n is visible
[:position n] - Scroll so that position n (character offset) is visible
Note that for text widgets, the caret will also be moved which in turn causes the selection to change.
```
Examples:
(scroll! w :to :top)
(scroll! w :to :bottom)
(scroll! w :to [:point 99 10])
(scroll! w :to [:rect 99 10 100 100])
(scroll! listbox :to [:row 99])
(scroll! table :to [:row 99])
(scroll! table :to [:column 10])
(scroll! table :to [:cell 99 10])
(scroll! text :to [:line 200])
(scroll! text :to [:position 2000])
See:
(seesaw.scroll/scroll!*)
(seesaw.examples.scroll)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2048)
---
#### scrollableclj
```
(scrollable target & opts)
```
Wrap target in a JScrollPane and return the scroll pane.
The first argument is always the widget that should be scrolled. It's followed by zero or more options *for the scroll pane*.
Additional Options:
:hscroll - Controls appearance of horizontal scroll bar.
One of :as-needed (default), :never, :always
:vscroll - Controls appearance of vertical scroll bar.
One of :as-needed (default), :never, :always
:row-header - Row header widget or viewport
:column-header - Column header widget or viewport
:lower-left - Widget in lower-left corner
:lower-right - Widget in lower-right corner
:upper-left - Widget in upper-left corner
:upper-right - Widget in upper-right corner
Examples:
; Vanilla scrollable
(scrollable (listbox :model ["Foo" "Bar" "Yum"]))
; Scrollable with some options on the JScrollPane
(scrollable (listbox :model ["Foo" "Bar" "Yum"]) :id :#scrollable :border 5)
Notes:
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JScrollPane.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2011)
---
#### scrollable-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1987)
---
#### selectclj
```
(select root selector)
```
Select a widget using the given selector expression. Selectors are *always*
expressed as a vector. root is the root of the widget hierarchy to select from, usually either a (frame) or other container.
(select root [:#id]) Look up widget by id. A single widget is always returned.
(select root [:tag]) Look up widgets by "tag". In Seesaw tag is treated as the exact simple class name of a widget, so :JLabel would match both javax.swing.JLabel *and* com.me.JLabel.
Be careful!
(select root [:<class-name>]) Look up widgets by *fully-qualified* class name.
Matches sub-classes as well. Always returns a sequence of widgets.
(select root [:<class-name!>]) Same as above, but class must match exactly.
(select root [:*]) Root and all the widgets under it
Notes:
This function will return a single widget *only* in the case where the selector is a single identifier, e.g. [:#my-id]. In *all* other cases, a sequence of widgets is returned. This is for convenience. Select-by-id is the common case where a single widget is almost always desired.
Examples:
To find a widget by id from an event handler, use (to-root) on the event to get the root and then select on the id:
```
(fn [e]
(let [my-widget (select (to-root e) [:#my-widget])]
...))
```
Disable all JButtons (excluding subclasses) in a hierarchy:
```
(config! (select root [:<javax.swing.JButton>]) :enabled? false)
```
More:
```
; All JLabels, no sub-classes allowed
(select root [:<javax.swing.JLabel!>])
; All JSliders that are descendants of a JPanel with id foo
(select root [:JPanel#foo :JSlider])
; All JSliders (and sub-classes) that are immediate children of a JPanel with id foo
(select root [:JPanel#foo :> :<javax.swing.JSlider>])
; All widgets with class foo. Set the class of a widget with the :class option
(flow-panel :class :my-class) or (flow-panel :class #{:class1 :class2})
(select root [:.my-class])
(select root [:.class1.class2])
; Select all text components with class input
(select root [:<javax.swing.text.JTextComponent>.input])
; Select all descendants of all panels with class container
(select root [:JPanel.container :*])
```
See:
(seesaw.selector/select)
<https://github.com/cgrand/enlive>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3517)
---
#### select-withclj
```
(select-with target)
```
Returns an object with the following properties:
* Equivalent to (partial seesaw.core/select (to-widget target)), i.e. it returns a function that performs a select on the target.
* Calling (to-widget) on it returns the same value as (to-widget target).
This basically allows you to pack a widget and the select function into a single package for convenience. For example:
(defn make-frame [] (frame ...))
(defn add-behaviors [$]
(let [widget-a ($ [:#widget-a])
buttons ($ [:.button])
...]
...)
$)
(defn -main []
(-> (make-frame) select-with add-behaviors pack! show!))
See:
(seesaw.core/select)
(seesaw.core/to-widget)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3598)
---
#### selectionclj
```
(selection target)
```
```
(selection target options)
```
Gets the selection of a widget. target is passed through (to-widget)
so event objects can also be used. The default behavior is to return a *single* selection value, even if the widget supports multiple selection.
If there is no selection, returns nil.
options is an option map which supports the following flags:
multi? - If true the return value is a seq of selected values rather than a single value.
Examples:
(def t (table))
(listen t :selection
(fn [e]
(let [selected-rows (selection t {:multi? true})]
(println "Currently selected rows: " selected-rows))))
See:
(seesaw.core/selection!)
(seesaw.selection/selection)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L125)
---
#### selection!clj
```
(selection! target new-selection)
```
```
(selection! target opts new-selection)
```
Sets the selection on a widget. target is passed through (to-widget)
so event objects can also be used. The arguments are the same as
(selection). By default, new-selection is a single new selection value.
If new-selection is nil, the selection is cleared.
options is an option map which supports the following flags:
multi? - if true, new-selection is a list of values to select,
the same as the list returned by (selection).
Always returns target.
See:
(seesaw.core/selection)
(seesaw.selection/selection!)
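A sketch (not from the original docstring) of single and multi-selection on a listbox; the model values are arbitrary:
```
(require '[seesaw.core :as s])
(def lb (s/listbox :model (range 10)))
(s/selection! lb 3)                      ; single value
(s/selection! lb {:multi? true} [1 2 3]) ; several values at once
(s/selection lb {:multi? true})          ;=> (1 2 3)
```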
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L151)
---
#### SelectionModeConfigcljprotocol
Hook protocol for :selection-mode option
#### get-selection-mode*clj
```
(get-selection-mode* this)
```
#### set-selection-mode*clj
```
(set-selection-mode* this v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L762)
---
#### SelectWithclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3591)
---
#### separatorclj
```
(separator & opts)
```
Create a separator.
Notes:
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JSeparator.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2237)
---
#### separator-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2229)
---
#### set-drag-enabledclj
```
(set-drag-enabled this v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/#L)
---
#### show!clj
```
(show! targets)
```
Show a frame, dialog or widget.
If target is a modal dialog, the call will block and show! will return the dialog's result. See (seesaw.core/return-from-dialog).
Returns its input.
See:
<http://download.oracle.com/javase/6/docs/api/java/awt/Window.html#setVisible%28boolean%29>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L248)
---
#### show-card!clj
```
(show-card! panel id)
```
Show a particular card in a card layout. id can be a string or keyword.
panel is returned.
See:
(seesaw.core/card-panel)
<http://download.oracle.com/javase/6/docs/api/java/awt/CardLayout.html>
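A sketch (not from the original docstring), assuming a card panel built with [widget id] pairs as described in (seesaw.core/card-panel):
```
(require '[seesaw.core :as s])
(def cards (s/card-panel :items [["First card"  :card1]
                                 ["Second card" :card2]]))
(s/show-card! cards :card2) ; flip to the second card; returns the panel
```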
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1026)
---
#### Showablecljprotocol
#### visible!clj
```
(visible! this v)
```
#### visible?clj
```
(visible? this)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L231)
---
#### sliderclj
```
(slider &
{:keys [orientation value min max minor-tick-spacing major-tick-spacing
snap-to-ticks? paint-ticks? paint-labels? paint-track?
inverted?]
:as kw})
```
Show a slider which can be used to modify a value.
```
(slider ... options ...)
```
Besides the default options, options can also be one of:
:orientation The orientation of the slider. One of :horizontal, :vertical.
:value The initial numerical value that is to be set.
:min The minimum numerical value which can be set.
:max The maximum numerical value which can be set.
:minor-tick-spacing The spacing between minor ticks. If set, will also set :paint-ticks? to true.
:major-tick-spacing The spacing between major ticks. If set, will also set :paint-ticks? to true.
:snap-to-ticks? A boolean value indicating whether the slider should snap to ticks.
:paint-ticks? A boolean value indicating whether to paint ticks.
:paint-labels? A boolean value indicating whether to paint labels for ticks.
:paint-track? A boolean value indicating whether to paint the track.
:inverted? A boolean value indicating whether to invert the slider (to go from high to low).
Returns a JSlider.
Examples:
; a slider ranging from -50 to 50, starting at 10
(slider :value 10 :min -50 :max 50)
Notes:
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JSlider.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3397)
---
#### slider-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3372)
---
#### spinnerclj
```
(spinner & args)
```
Create a spinner (JSpinner). Additional options:
:model Instance of SpinnerModel, or one of the values described below.
Note that the value can be retrieved and set with the (selection) and
(selection!) functions. Listen to :selection to be notified of value changes.
The value of model can be one of the following:
* An instance of javax.swing.SpinnerModel
* A java.util.Date instance in which case the spinner starts at that date,
is unbounded, and moves by day.
* A number giving the initial value for an unbounded number spinner
* A value returned by (seesaw.core/spinner-model)
Notes:
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JSpinner.html>
<http://download.oracle.com/javase/6/docs/api/javax/swing/SpinnerModel.html>
(seesaw.core/spinner-model)
test/seesaw/test/examples/spinner.clj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1935)
---
#### spinner-modelclj
```
(spinner-model v & {:keys [from to by]})
```
A helper function for creating spinner models. Calls take the general form:
```
(spinner-model initial-value
:from start-value :to end-value :by step)
```
Values can be one of:
* java.util.Date where step is one of :day-of-week, etc. See java.util.Calendar constants.
* a number
Any of the options beside the initial value may be omitted.
Note that on some platforms the :by parameter will be ignored for date spinners.
See:
(seesaw.core/spinner)
<http://download.oracle.com/javase/6/docs/api/javax/swing/SpinnerDateModel.html>
<http://download.oracle.com/javase/6/docs/api/javax/swing/SpinnerNumberModel.html>
<http://download.oracle.com/javase/6/docs/api/javax/swing/JSpinner.html>
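A sketch (not from the original docstring) combining (spinner) and (spinner-model) for a bounded numeric value; the bounds are arbitrary:
```
(require '[seesaw.core :as s])
(def sp (s/spinner :model (s/spinner-model 5.0 :from 0.0 :to 10.0 :by 0.5)))
(s/selection! sp 7.5) ; set the value
(s/selection sp)      ;=> 7.5
```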
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1882)
---
#### spinner-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1927)
---
#### splitterclj
```
(splitter dir left right & opts)
```
Create a new JSplitPane. This is a lower-level function. Usually you want
(seesaw.core/top-bottom-split) or (seesaw.core/left-right-split). But here are the additional options any of these three functions can take:
:divider-location The initial divider location. See (seesaw.core/divider-location!).
Notes:
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JSplitPane.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2173)
---
#### splitter-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2162)
---
#### style-text!clj
```
(style-text! target id start length)
```
Style a JTextPane. id identifies a style that has been added to the text pane.
See:
(seesaw.core/text)
<http://download.oracle.com/javase/tutorial/uiswing/components/editorpane.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1542)
---
#### styled-textclj
```
(styled-text & args)
```
Create a text pane.
Supports the following options:
:text text content.
:wrap-lines? If true wraps lines.
This only works if the styled text is wrapped in (seesaw.core/scrollable). Doing so will cause a grey area to appear to the right of the text.
This can be avoided by calling
(.setBackground (.getViewport s) java.awt.Color/white)
on the scrollable s.
:styles Define styles, should be a list of vectors of form:
[identifier & options]
Where identifier is a string or keyword Options supported:
:font A font family name as keyword or string.
:size An integer.
:color See (seesaw.color/to-color)
:background See (seesaw.color/to-color)
:bold bold if true.
:italic italic if true.
:underline underline if true.
See:
(seesaw.core/style-text!)
<http://download.oracle.com/javase/6/docs/api/javax/swing/JTextPane.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1507)
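Editor's sketch combining :styles with (style-text!); the style id :warning, the text offsets, and the java.awt.Color value are illustrative assumptions.
```
(require '[seesaw.core :as s])

; Define a named style, then apply it to the first five characters.
(def pane
  (s/styled-text :text "Hello, styled world"
                 :wrap-lines? true
                 :styles [[:warning :color java.awt.Color/RED :bold true]]))

(s/style-text! pane :warning 0 5)
```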
---
#### styled-text-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1497)
---
#### tabbed-panelclj
```
(tabbed-panel & opts)
```
Create a JTabbedPane. Supports the following properties:
:placement Tab placement, one of :bottom, :top, :left, :right.
:overflow Tab overflow behavior, one of :wrap, :scroll.
:tabs A list of tab descriptors. See below
A tab descriptor is a map with the following properties:
:title Title of the tab or a component to be displayed.
:tip Tab's tooltip text
:icon Tab's icon, passed through (icon)
:content The content of the tab, passed through (make-widget) as usual.
Returns the new JTabbedPane.
Notes:
The currently selected tab can be retrieved with the (selection) function.
It returns a map similar to the tab descriptor with keys :title, :content,
and :index.
Similarly, a tab can be programmatically selected with the
(selection!) function, by passing one of the following values:
* A number - The index of the tab to select
* A string - The title of the tab to select
* A to-widget-able - The content of the tab to select
* A map as returned by (selection) with at least an :index, :title, or
:content key.
Furthermore, you can be notified for when the active tab changes by listening for the :selection event:
(listen my-tabbed-panel :selection (fn [e] ...))
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JTabbedPane.html>
(seesaw.core/selection)
(seesaw.core/selection!)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2463)
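Editor's sketch; the tab titles and contents are illustrative.
```
(require '[seesaw.core :as s])

; Two tabs; the current tab can be read with (selection) and
; changed with (selection!), as described above.
(def tabs
  (s/tabbed-panel :placement :top
                  :tabs [{:title "First"  :tip "The first tab" :content "Hello"}
                         {:title "Second" :content (s/label :text "World")}]))

(s/selection! tabs "Second")   ; select a tab by its title
```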
---
#### tabbed-panel-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2453)
---
#### tableclj
```
(table & args)
```
Create a table (JTable). Additional options:
:model A TableModel, or a vector. If a vector, then it is used as arguments to (seesaw.table/table-model).
:show-grid? Whether to show the grid lines of the table.
:show-horizontal-lines? Whether to show horizontal grid lines
:show-vertical-lines? Whether to show vertical grid lines
:fills-viewport-height?
:auto-resize The behavior of columns when the table is resized. One of:
:off Do nothing to column widths
:next-column When a column is resized, take space from next column
:subsequent-columns Change subsequent columns to preserve total width of table
:last-column Apply adjustments to last column only
:all-columns Proportionally resize all columns
Defaults to :subsequent-columns. If you're wondering where your horizontal scroll bar is, try setting this to :off.
Example:
(table
:model [:columns [:age :height]
:rows [{:age 13 :height 45}
{:age 45 :height 13}]])
Notes:
See:
seesaw.table/table-model seesaw.examples.table
<http://download.oracle.com/javase/6/docs/api/javax/swing/JTable.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1743)
---
#### table-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1718)
---
#### textclj
```
(text & args)
```
Create a text field or area. Given a single argument, creates a JTextField using the argument as the initial text value. Otherwise, supports the following additional properties:
:text Initial text content
:multi-line? If true, a JTextArea is created (default false)
:editable? If false, the text is read-only (default true)
:margin
:caret-color Color of caret (see seesaw.color)
:caret-position Caret position as zero-based integer offset
:disabled-text-color A color value
:selected-text-color A color value
:selection-color A color value
The following properties only apply if :multi-line? is false:
:columns Number of columns of text
:halign Horizontal text alignment (:left, :right, :center, :leading, :trailing)
The following properties only apply if :multi-line? is true:
:wrap-lines? If true (and :multi-line? is true) lines are wrapped.
(default false)
:tab-size Tab size in spaces. Defaults to 8. Only applies if :multi-line?
is true.
:rows Number of rows if :multi-line? is true (default 0).
To listen for document changes, use the :listen option:
(text :listen [:document #(... handler ...)])
or attach a listener later with (listen):
(text :id :my-text ...)
...
(listen (select root [:#my-text]) :document #(... handler ...))
Given a single widget or document (or event) argument, retrieves the text of the argument. For example:
```
user=> (def t (text "HI"))
user=> (text t)
"HI"
```
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JTextArea.html>
<http://download.oracle.com/javase/6/docs/api/javax/swing/JTextField.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1377)
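Editor's sketch of a multi-line, read-only text area; wrapping it in (scrollable) reflects typical usage and is an assumption, not a requirement.
```
(require '[seesaw.core :as s])

; :multi-line? true creates a JTextArea instead of a JTextField.
(def notes
  (s/text :multi-line? true
          :wrap-lines? true
          :rows 10
          :editable? false
          :text "Some longer note text..."))

; Usually shown inside a scroll pane.
(s/scrollable notes)
```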
---
#### text!clj
```
(text! targets value)
```
Set the text of widget(s) or document(s). targets is an object that can be turned into a widget or document, or a list of such things. value is the new text value to be applied. Returns targets.
target may be one of:
A widget
A widget-able thing like an event
A Document
A DocumentEvent
The resulting text in the widget depends on the type of value:
A string - the string
A URL, File, or anything "slurpable" - the slurped value
Anything else - (resource value)
Example:
```
user=> (def t (text "HI"))
user=> (text! t "BYE")
user=> (text t)
"BYE"
; Put the contents of a URL in editor
(text! editor (java.net.URL. "http://google.com"))
```
Notes:
This applies to the :text property of new text widgets and config! as well.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1445)
---
#### text-area-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1362)
---
#### text-field-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1353)
---
#### text-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1334)
---
#### timerclj
Alias of seesaw.timer/timer:
Creates a new Swing timer that periodically executes the single-argument function f. The argument is a "state" of the timer. Each time the function is called its previous return value is passed to it. Kind of like (reduce)
but spread out over time :) The following options are supported:
```
:initial-value The first value passed to the handler function. Defaults to nil.
:initial-delay Delay, in milliseconds, of first call. Defaults to 0.
:delay Delay, in milliseconds, between calls. Defaults to 1000.
:repeats? If true, the timer runs forever, otherwise, it's a
"one-shot" timer. Defaults to true.
:start? Whether to start the timer immediately. Defaults to true.
```
See <http://download.oracle.com/javase/6/docs/api/javax/swing/Timer.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L109)
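Editor's sketch; stopping the timer via the returned object assumes the return value is the underlying javax.swing.Timer.
```
(require '[seesaw.core :as s])

; Counts up once per second; each call receives the previous return value.
(def t
  (s/timer (fn [state]
             (println "tick" state)
             (inc state))
           :initial-value 0
           :delay 1000))

; Assumed: the returned object is the underlying javax.swing.Timer.
(.stop t)
```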
---
#### to-documentclj
```
(to-document v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L925)
---
#### to-frameclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2910)
---
#### to-rootclj
```
(to-root w)
```
Get the frame or window that contains the given widget. Useful for APIs like JDialog that want a JFrame, when all you have is a widget or event.
Note that w is run through (to-widget) first, so you can pass an event object directly to this.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2901)
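Editor's sketch: re-packing the containing frame from an event handler; the button and the handler body are illustrative.
```
(require '[seesaw.core :as s])

(def b (s/button :text "Resize me"))

; The event is passed straight to (to-root), which runs it through
; (to-widget) first; the containing frame is then re-packed.
(s/listen b :action
  (fn [e]
    (s/pack! (s/to-root e))))
```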
---
#### to-widgetclj
```
(to-widget v)
```
Try to convert the input argument to a widget based on the following rules:
nil -> nil
java.awt.Component -> return argument unchanged
java.util.EventObject -> return the event source
See:
(seesaw.to-widget)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L197)
---
#### toggleclj
```
(toggle & args)
```
Same as (seesaw.core/button), but creates a toggle button. Use :selected? option to set initial state.
See:
(seesaw.core/button)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1301)
---
#### toggle-full-screen!clj
```
(toggle-full-screen! window)
```
```
(toggle-full-screen! device window)
```
Toggle the full-screen state of the given window/frame.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2753)
---
#### toggle-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1299)
---
#### toolbarclj
```
(toolbar & opts)
```
Create a JToolBar. The following properties are supported:
:floatable? Whether the toolbar is floatable.
:orientation Toolbar orientation, :horizontal or :vertical
:items Normal list of widgets to add to the toolbar. :separator creates a toolbar separator.
Notes:
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JToolBar.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2416)
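Editor's sketch; the button labels are illustrative.
```
(require '[seesaw.core :as s])

; A fixed toolbar with a separator between two buttons.
(s/toolbar :floatable? false
           :orientation :horizontal
           :items [(s/button :text "Open")
                   :separator
                   (s/button :text "Save")])
```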
---
#### toolbar-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2404)
---
#### top-bottom-splitclj
```
(top-bottom-split top bottom & args)
```
Create a top/bottom (vertical) split pane with the given widgets. See
(seesaw.core/splitter) for additional options. Options are given after the two widgets.
Notes:
See:
(seesaw.core/splitter)
<http://download.oracle.com/javase/6/docs/api/javax/swing/JSplitPane.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2213)
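Editor's sketch; passing a fractional :divider-location is an assumption — see (seesaw.core/divider-location!) for the accepted values.
```
(require '[seesaw.core :as s])

; Options come after the two widgets, as described above.
(s/top-bottom-split (s/scrollable (s/text :multi-line? true))
                    (s/label :text "Status")
                    :divider-location 0.75)   ; assumed fractional form
```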
---
#### top-bottom-split-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2211)
---
#### treeclj
```
(tree & args)
```
Create a tree (JTree). Additional options:
Notes:
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/JTree.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1805)
---
#### tree-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1783)
---
#### user-dataclj
```
(user-data w)
```
Convenience function to retrieve the value of the :user-data option passed to the widget at construction. The widget parameter is passed through (to-widget) first so events and other objects can also be used.
Examples:
(user-data (label :text "HI!" :user-data 99))
;=> 99
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L489)
---
#### valueclj
```
(value target)
```
Return the 'value' of a widget. target is passed through (to-widget) as usual.
Basically, there are two possibilities:
* It's a container: A map of widget values keyed by :id is built recursively from all its children.
* The 'natural' value for the widget is returned, usually the text,
or the current selection of the widget.
See:
(seesaw.core/value!)
(seesaw.core/selection)
(seesaw.core/group-by-id)
This idea is shamelessly borrowed from Clarity <https://github.com/stathissideris/clarity>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3769)
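Editor's sketch of the container case; the ids, the checkbox widget, and the exact shape of each widget's 'natural' value are illustrative assumptions.
```
(require '[seesaw.core :as s])

(def form
  (s/vertical-panel
    :items [(s/text :id :name :text "Dave")
            (s/checkbox :id :admin? :selected? true)]))

; For a container, a map keyed by :id is built recursively.
(s/value form)
;=> {:name "Dave", :admin? true}
```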
---
#### value!clj
```
(value! target v)
```
Set the 'value' of a widget. This is the dual of (seesaw.core/value). target is passed through (to-widget) as usual.
Basically, there are two possibilities:
* target is a container: The value is a map of widget values keyed by :id. These values are applied to all descendants of target.
* otherwise, v is a new 'natural' value for the widget, usually the text,
or the current selection of the widget.
In either case (to-widget target) is returned.
Examples:
Imagine there's widget :foo, :bar, :yum in frame f:
```
(value! f {:foo "new foo text" :bar 99 :yum "new yum text"})
```
See:
(seesaw.core/value)
(seesaw.core/selection)
(seesaw.core/group-by-id)
This idea is shamelessly borrowed from Clarity <https://github.com/stathissideris/clarity>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3789)
---
#### vertical-panelclj
```
(vertical-panel & opts)
```
Create a panel where widgets are arranged vertically. Options:
:items List of widgets (passed through make-widget)
See <http://download.oracle.com/javase/6/docs/api/javax/swing/BoxLayout.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1080)
---
#### vertical-panel-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1078)
---
#### widthclj
```
(width w)
```
Returns the width of the given widget in pixels
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L462)
---
#### windowclj
```
(window & {:keys [width height visible? size] :as opts})
```
Create a JWindow. NOTE: A JWindow is a top-level window with no decorations,
i.e. no title bar, no menu, no nothin'. Usually you want (seesaw.core/frame)
if you're just showing a normal top-level app.
Options:
:id id of the window, used by (select).
:width initial width. Note that calling (pack!) will negate this setting
:height initial height. Note that calling (pack!) will negate this setting
:size initial size. Note that calling (pack!) will negate this setting
:minimum-size minimum size of frame, e.g. [640 :by 480]
:content passed through (make-widget) and used as the frame's content-pane
:visible? whether frame should be initially visible (default false)
returns the new window
Examples:
; Create a window, pack it and show it.
(-> (window :content "I'm a label!")
pack!
show!)
; Create a frame with an initial size (note that pack! isn't called)
(show! (window :content "I'm a label!" :width 500 :height 600))
Notes:
Unless :visible? is set to true, the window will not be displayed until (show!)
is called on it.
Call (pack!) on the frame if you'd like the window to resize itself to fit its contents. Sometimes this doesn't look like crap.
See:
(seesaw.core/show!)
(seesaw.core/hide!)
(seesaw.core/move!)
<http://download.oracle.com/javase/6/docs/api/javax/swing/JWindow.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2648)
---
#### window-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L2644)
---
#### with-password*clj
```
(with-password* field handler)
```
Retrieves the password of a password field and passes it to the given handler function as an array of characters. Upon completion, the array is zero'd out and the value returned by the handler is returned.
This is the 'safe' way to access the password. The (text) function will work too but that method is discouraged, at least by the JPasswordField docs.
Example:
(with-password* my-password-field
(fn [password-chars]
(... do something with chars ...)))
See:
(seesaw.core/password)
(seesaw.core/text)
<http://download.oracle.com/javase/6/docs/api/javax/swing/JPasswordField.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L1589)
---
#### with-widgetscljmacro
```
(with-widgets widgets & body)
```
Macro to ease construction of multiple widgets. The first argument is a vector of widget constructor forms, each with an :id option.
The name of the value of each :id is used to generate a binding in the scope of the macro.
Examples:
(with-widgets [(label :id :foo :text "foo")
(button :id :bar :text "bar")]
...)
; is equivalent to
(let [foo (label :id :foo :text "foo")
bar (button :id :bar :text "bar")]
...)
Notes:
If you're looking for something like this to reduce boilerplate with selectors on multiple widgets, see (seesaw.core/group-by-id).
See:
(seesaw.core/group-by-id)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L3665)
---
#### xyz-panelclj
```
(xyz-panel & opts)
```
Creates a JPanel on which widgets can be positioned arbitrarily by client code. No layout manager is installed.
Initial widget positions can be given with their :bounds property. After construction they can be moved with the (seesaw.core/move!) function.
Examples:
; Create a panel with a label positioned at (10, 10) with width 200 and height 40.
(xyz-panel :items [(label :text "The Black Lodge" :bounds [10 10 200 40])])
; Move a widget up 50 pixels and right 25 pixels
(move! my-label :by [25 -50])
Notes:
See:
(seesaw.core/move!)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/core.clj#L945)
seesaw.cursor
===
Functions for creating Swing cursors.
---
#### cursorclj
```
(cursor type & args)
```
Create a built-in or custom cursor. Takes one of two forms:
(cursor :name-of-built-in-cursor)
Creates a built-in cursor of the given type. Valid types are:
:crosshair :custom :default :hand :move :text :wait
:e-resize :n-resize :ne-resize :nw-resize
:s-resize :se-resize :sw-resize :w-resize
To create custom cursor:
(cursor image-or-icon optional-hotspot)
where image-or-icon is a java.awt.Image (see seesaw.graphics/buffered-image)
or javax.swing.ImageIcon (see seesaw.icon/icon). The hotspot is an optional
[x y] point indicating the click point for the cursor. Defaults to [0 0].
Examples:
; The hand cursor
(cursor :hand)
; Create a custom cursor from a URL:
(cursor (icon "http://path/to/my/cursor.png") [5 5])
Notes:
This function is used implicitly by the :cursor option on most widget constructor functions. So
```
(label :cursor (cursor :hand))
```
is equivalent to:
```
(label :cursor :hand)
```
Same for setting the cursor with (seesaw.core/config!).
Also, the size of a cursor is platform dependent, so some experimentation will be required with creating custom cursors from images.
See:
<http://download.oracle.com/javase/6/docs/api/java/awt/Cursor.html>
<http://download.oracle.com/javase/6/docs/api/java/awt/Toolkit.html#createCustomCursor%28java.awt.Image,%20java.awt.Point,%20java.lang.String%29>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/cursor.clj#L28)
seesaw.dev
===
Functions to aid development of Seesaw apps.
---
#### debug!clj
```
(debug!)
```
```
(debug! f)
```
Install a custom exception handler which displays a window with event and stack trace info whenever an unhandled exception occurs in the UI thread.
This is usually more friendly than the console, especially in a repl.
Calling with no args enables default debugging. Otherwise, pass a two-arg function that takes a java.awt.AWTEvent and a java.lang.Throwable. Passing nil disables debugging.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dev.clj#L73)
---
#### show-eventsclj
```
(show-events v)
```
Given a class or instance, print information about all supported events.
From there, you can look up javadocs, etc.
Examples:
(show-events javax.swing.JButton)
... lots of output ...
(show-events (button))
... lots of output ...
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dev.clj#L108)
---
#### show-optionsclj
```
(show-options v)
```
Given an object, print information about the options it supports. These are all the options you can legally pass to (seesaw.core/config) and friends.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dev.clj#L95)
seesaw.dnd
===
Functions for dealing with drag and drop and data transfer.
---
#### default-transfer-handlerclj
```
(default-transfer-handler & {:keys [import export] :as opts})
```
Create a transfer handler for drag and drop operations. Takes a list of key/value option pairs as usual. The following options are supported:
:import - A vector of flavor/handler pairs used when a drop/paste occurs
(see below)
:export - A map of options used when a drag/copy occurs (see below)
Data Import
The :import option specifies a vector of flavor/handler pairs. A handler is either
```
a function: (fn [data] ...process drop...)
or a map : {:on-drop (fn [data] ...process drop...)
:can-drop? (fn [data] ...check drop is allowed...)}
```
if a map is provided :on-drop is mandatory, :can-drop? is optional,
defaulting to (constantly true)
When a drop/paste occurs, the handler for the first matching flavor is called with a map with the following keys:
```
:target The widget that's the target of the drop
:data The data, type depends on flavor
:drop? true if this is a drop operation, otherwise it's a paste
:drop-location Map of drop location info or nil if drop? is false. See
below.
:support Instance of javax.swing.TransferHandler$TransferSupport
for advanced use.
```
The handler must return truthy if the drop is accepted, falsey otherwise.
If :drop? is true, :drop-location will be non-nil and include the following keys, depending on the type of the drop target:
```
All types:
:point [x y] vector
listbox
:index The index for the drop
:insert? True if it's an insert, i.e. "between" entries
table
:column The column for the drop
:row The row for the drop
:insert-column? True if it's an insert, i.e. "between" columns.
:insert-row? True if it's an insert, i.e. "between" rows
tree
:index The index of the drop point
:path TreePath of the drop point
Text Components
:bias No idea what this is
:index The insertion index
```
Data Export
The :export option specifies the behavior when a drag or copy is started from a widget. It is a map with the following keys:
```
:actions A function that takes a widget and returns a keyword indicating
supported actions. Defaults to :move. Can be any of :move, :copy,
:copy-or-move, :link, or :none.
:start A function that takes a widget and returns a vector of flavor/value
pairs to be exported. Required.
:finish A function that takes a map of values. It's called when the drag/paste
is completed. The map has the following keys:
:source The widget from which the drag started
:action The action, :move, :copy, or :link.
:data A Transferable
```
Examples:
```
(default-transfer-handler
  ; Allow either strings or lists of files to be dropped
  :import [string-flavor    (fn [{:keys [data]}] ... data is a string ...)
           file-list-flavor (fn [{:keys [data]}] ... data is a *list* of files ...)]

  :export {
    :actions (fn [_] :copy)
    :start   (fn [w] [string-flavor (seesaw.core/text w)])
    :finish  (fn [_] ... do something when drag is finished ...) })
```
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/TransferHandler.html>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L194)
---
#### default-transferableclj
```
(default-transferable pairs)
```
Constructs a transferable given a vector of alternating flavor/value pairs.
If a value is a function, i.e. (fn? value) is true, then the function is called with no arguments when the value is requested for its corresponding flavor. This way calculation of the value can be deferred until drop time.
Each flavor must be unique and it's assumed that the flavor and value agree.
Examples:
; A transferable holding String or File data where the file calc is
; deferred
(default-transferable [string-flavor "/home/dave"
file-list-flavor #(vector (java.io.File. "/home/dave"))])
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L95)
---
#### everything-transfer-handlerclj
```
(everything-transfer-handler handler)
```
Handler that accepts all drops. For debugging.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L352)
---
#### file-list-flavorclj
Flavor for a list of java.io.File objects
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L77)
---
#### Flavorfulcljprotocol
Protocol for abstracting DataFlavor including automatic conversion from external/native representations (e.g. uri-list) to friendlier internal representations (e.g. list of java.net.URI).
#### to-localclj
```
(to-local this value)
```
Given an incoming value convert it to the expected local format. For example, a uri-list would return a vector of URI.
#### to-raw-flavorclj
```
(to-raw-flavor this)
```
Return an instance of java.awt.datatransfer.DataFlavor for this.
#### to-remoteclj
```
(to-remote this value)
```
Given an outgoing value, convert it to the appropriate remote format.
For example, a vector of URIs would be serialized as a uri-list.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L23)
---
#### html-flavorclj
Flavor for HTML text
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L91)
---
#### image-flavorclj
Flavor for images as java.awt.Image
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L92)
---
#### local-object-flavorclj
```
(local-object-flavor class-or-value)
```
Creates a flavor for moving raw Java objects between components within a single JVM instance. class-or-value is either the class of data, or an example value from which the class is taken.
Examples:
; Move Clojure vectors
(local-object-flavor [])
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L62)
---
#### make-flavorclj
```
(make-flavor mime-type rep-class)
```
Construct a new data flavor with the given mime-type and representation class.
Notes:
Run seesaw.dnd-explorer to experiment with flavors coming from other apps.
Examples:
; HTML as a reader
(make-flavor "text/html" java.io.Reader)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L43)
---
#### normalise-import-pairsclj
```
(normalise-import-pairs import-pairs)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L187)
---
#### string-flavorclj
Flavor for raw text
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L93)
---
#### to-transfer-handlerclj
```
(to-transfer-handler v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L345)
---
#### uri-list-flavorclj
Flavor for a list of java.net.URI objects. Note it's URI, not URL.
With just java.net.URL it's not possible to drop non-URL links, e.g. "about:config".
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L80)
---
#### validate-import-pairsclj
```
(validate-import-pairs import-pairs)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd.clj#L174)
seesaw.dnd-explorer
===
---
#### -mainclj
```
(-main & args)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd_explorer.clj#L46)
---
#### appclj
```
(app)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd_explorer.clj#L24)
---
#### drop-handlerclj
```
(drop-handler t support)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/dnd_explorer.clj#L15)
seesaw.event
===
Functions for handling events. Do not use these functions directly.
Use (seesaw.core/listen) instead.
---
#### add-action-listenerclj
```
(add-action-listener this v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/#L)
---
#### add-change-listenerclj
```
(add-change-listener this l)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/#L)
---
#### add-list-selection-listenerclj
```
(add-list-selection-listener this v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/#L)
---
#### append-listenerclj
```
(append-listener listeners k l)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/event.clj#L338)
---
#### def-reify-listenercljmacro
```
(def-reify-listener klass events)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/event.clj#L265)
---
#### events-forclj
```
(events-for v)
```
Returns a sequence of event info maps for the given object which can be either a widget instance or class.
Used by (seesaw.dev/show-events).
See:
(seesaw.dev/show-events)
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/event.clj#L530)
---
#### get-handlersclj
```
(get-handlers target event-group-name)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/event.clj#L324)
---
#### listenclj
```
(listen targets & more)
```
*note: use seesaw.core/listen rather than calling this directly*
Install listeners for one or more events on the given target. For example:
(listen (button "foo")
:mouse-entered (fn [e] ...)
:focus-gained (fn [e] ...)
:key-pressed (fn [e] ...)
:mouse-wheel-moved (fn [e] ...))
one function can be registered for multiple events by using a set of event names instead of one:
(listen (text)
#{:remove-update :insert-update} (fn [e] ...))
Note in this case that it's smart enough to add a document listener to the JTextFields document.
Similarly, an event can be registered for all events in a particular swing listener interface by just using the keyword-ized prefix of the interface name. For example, to get all callbacks in the MouseListener interface:
(listen my-widget :mouse (fn [e] ...))
Returns a function which, when called, removes all listeners registered with this call.
When the target is a JTable and listener type is :selection, only row selection events are reported. Also note that the source table is
*not* retrievable from the event object.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/event.clj#L471)
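Editor's sketch of the undo behaviour described above (the returned function removes the registered listeners); the names are illustrative, and per the note the seesaw.core/listen wrapper is used.
```
(require '[seesaw.core :as s])

(def b (s/button :text "Click me"))

; (listen) returns a function that removes everything it registered.
(def unlisten
  (s/listen b :action (fn [e] (println "clicked"))))

; Later, detach the handler again.
(unlisten)
```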
---
#### listen-for-named-eventcljmultimethod
*experimental and subject to change*
A multi-method that allows the set of events in the (listen) to be extended or for an existing event to be extended to a new type. Basically performs double-dispatch on the type of the target and the name of the event.
This multi-method is an extension point, but is not meant to be called directly by client code.
Register the given event handler on this for the given event name which is a keyword like :selection, etc. If the handler is registered, returns a zero-arg function that undoes the listener. Otherwise, must return nil indicating that no listener was registered, i.e. this doesn't support the given event.
TODO try using this to implement all of the event system rather than the mess above.
See:
(seesaw.swingx/color-selection-button) for an example.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/event.clj#L404)
---
#### listen-to-propertyclj
```
(listen-to-property target property event-fn)
```
Listen to propertyChange events on a target for a particular named property.
Like (listen), returns a function that, when called removes the installed listener.
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/event.clj#L512)
---
#### reify-listenercljmultimethod
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/event.clj#L260)
---
#### unappend-listenerclj
```
(unappend-listener listeners k l)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/event.clj#L344)
seesaw.font
===
Functions for handling fonts. Note that most core widget functions use these implicitly through the :font option.
---
#### default-fontclj
```
(default-font name)
```
Look up a default font from the UIManager.
Example:
(default-font "Label.font")
Returns an instance of java.awt.Font
See:
<http://download.oracle.com/javase/6/docs/api/javax/swing/UIManager.html#getFont%28java.lang.Object%29>
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/font.clj#L89)
---
#### fontclj
```
(font & args)
```
Create and return a Font.
```
(font name)
(font ... options ...)
```
Options are:
:name The name of the font. Besides string values, also possible are any of :monospaced, :serif, :sans-serif. See (seesaw.font/font-families)
to get a system-specific list of all valid values.
:style The style. One of :bold, :plain, :italic, or a set of those values to combine them. Default: :plain.
:size The size of the font. Default: 12.
:from A Font from which to derive the new Font.
Returns a java.awt.Font instance.
Examples:
; Create a font from a font-spec (see JavaDocs)
(font "ARIAL-ITALIC-20")
; Create a 12 pt bold and italic monospace
(font :style #{:bold :italic} :name :monospaced)
See:
(seesaw.font/font-families)
<http://download.oracle.com/javase/6/docs/api/java/awt/Font.html>
```
Create and return a Font.
(font name)
(font ... options ...)
Options are:
:name The name of the font. Besides string values, also possible are
any of :monospaced, :serif, :sans-serif. See (seesaw.font/font-families)
to get a system-specific list of all valid values.
:style The style. One of :bold, :plain, :italic, or a set of those values
to combine them. Default: :plain.
:size The size of the font. Default: 12.
:from A Font from which to derive the new Font.
Returns a java.awt.Font instance.
Examples:
; Create a font from a font-spec (see JavaDocs)
(font "ARIAL-ITALIC-20")
; Create a 12 pt bold and italic monospace
(font :style #{:bold :italic} :name :monospaced)
See:
(seesaw.font/font-families)
http://download.oracle.com/javase/6/docs/api/java/awt/Font.html
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/font.clj#L42)[raw docstring](#)
---
#### font-familiesclj
```
(font-families)
```
```
(font-families locale)
```
Returns a seq of strings naming the font families on the system. These are the names that are valid for the :name option of (seesaw.font/font) as well as in font descriptor strings like "Arial-BOLD-20".
See:
(seesaw.core/font)
```
Returns a seq of strings naming the font families on the system. These are the names that are valid for the :name option of (seesaw.font/font) as well as in font descriptor strings like "Arial-BOLD-20".
See:
(seesaw.core/font)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/font.clj#L18)[raw docstring](#)
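For illustration (not from the docstring), a sketch that falls back to a monospaced font when a preferred family is not installed; the family name is made up.
```
(require '[seesaw.font :as f])

(let [families (set (f/font-families))
      wanted   "Fira Code"]          ; hypothetical family name
  (f/font :name (if (families wanted) wanted :monospaced) :size 14))
```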
---
#### to-fontclj
```
(to-font f)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/font.clj#L104)
seesaw.forms
===
---
#### ComponentSpeccljprotocol
#### appendclj
```
(append this builder)
```
Add the given component to the form builder
```
Add the given component to the form builder
```
#####
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/forms.clj#L22)
---
#### forms-panelclj
```
(forms-panel column-spec & opts)
```
Construct a panel with a FormLayout. The column spec is expected to be a FormLayout column spec in string form.
The items are a list of strings, components or any of the combinators. For example:
```
:items ["Login" (text) (next-line)
"Password" (span (text) 3)]
```
Takes the following special properties. They correspond to the DefaultFormBuilder option of the same name.
```
:default-dialog-border?
:default-row-spec
:leading-column-offset
:line-gap-size
:paragraph-gap-size
```
See <http://www.jgoodies.com/freeware/forms/index.html>
```
Construct a panel with a FormLayout. The column spec is expected to be a FormLayout column spec in string form.
The items are a list of strings, components or any of the combinators. For example:
:items ["Login" (text) (next-line)
"Password" (span (text) 3)]
Takes the following special properties. They correspond to the DefaultFormBuilder option of the same name.
:default-dialog-border?
:default-row-spec
:leading-column-offset
:line-gap-size
:paragraph-gap-size
See http://www.jgoodies.com/freeware/forms/index.html
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/forms.clj#L125)[raw docstring](#)
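A small end-to-end sketch (assuming the JGoodies FormLayout dependency used by seesaw.forms is on the classpath); the widgets and the column spec are illustrative.
```
(require '[seesaw.core :as sc]
         '[seesaw.forms :as forms])

;; Two-column login form; the column spec is plain FormLayout syntax.
(def login-panel
  (forms/forms-panel
    "right:pref, 4dlu, fill:pref:grow"
    :default-dialog-border? true
    :items ["User"     (sc/text :columns 15) (forms/next-line)
            "Password" (sc/password)]))

(-> (sc/frame :title "Login" :content login-panel) sc/pack! sc/show!)
```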
---
#### groupclj
```
(group & items)
```
Group the rows of the contained items into a row group.
```
Group the rows of the contained items into a row group.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/forms.clj#L83)[raw docstring](#)
---
#### next-columnclj
```
(next-column)
```
```
(next-column n)
```
Continue with the nth next column in the builder.
```
Continue with the nth next column in the builder.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/forms.clj#L53)[raw docstring](#)
---
#### next-lineclj
```
(next-line)
```
```
(next-line n)
```
Continue with the nth next line in the builder.
```
Continue with the nth next line in the builder.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/forms.clj#L44)[raw docstring](#)
---
#### separatorclj
```
(separator)
```
```
(separator label)
```
Adds a separator with an optional label to the form.
```
Adds a separator with an optional label to the form.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/forms.clj#L70)[raw docstring](#)
---
#### spanclj
```
(span component column-span)
```
Add the given component spanning several columns.
```
Add the given component spanning several columns.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/forms.clj#L36)[raw docstring](#)
---
#### titleclj
```
(title title)
```
Adds the given title to the form.
```
Adds the given title to the form.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/forms.clj#L62)[raw docstring](#)
seesaw.graphics
===
Basic graphics functions to simplify use of Graphics2D.
```
Basic graphics functions to simplify use of Graphics2D.
```
[raw docstring](#)
---
#### anti-aliasclj
```
(anti-alias g2d)
```
Enable anti-aliasing on the given Graphics2D object.
Returns g2d.
```
Enable anti-aliasing on the given Graphics2D object.
Returns g2d.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L20)[raw docstring](#)
---
#### arcclj
```
(arc x y w h start extent)
```
```
(arc x y w h start extent arc-type)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L109)
---
#### buffered-imageclj
```
(buffered-image width height)
```
```
(buffered-image width height t)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L28)
---
#### chordclj
```
(chord x y w h start extent)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L120)
---
#### circleclj
```
(circle x y radius)
```
Create a circle with the given center and radius
```
Create a circle with the given center and radius
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L104)[raw docstring](#)
---
#### drawclj
```
(draw g2d)
```
```
(draw g2d shape style)
```
```
(draw g2d shape style & more)
```
Draw one or more shape/style pairs to the given graphics context.
shape should be an object that implements the Draw protocol (see (rect),
(ellipse), etc.).
style is a style object created with (style). If the style's :foreground is non-nil, the border of the shape is drawn with the given stroke. If the style's :background is non-nil, the shape is filled with that color.
Returns g2d.
```
Draw one or more shape/style pairs to the given graphics context.
shape should be an object that implements the Draw protocol (see (rect),
(ellipse), etc.).
style is a style object created with (style). If the style's :foreground is non-nil, the border of the shape is drawn with the given stroke. If the style's :background is non-nil, the shape is filled with that color.
Returns g2d.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L435)[raw docstring](#)
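A minimal sketch of a paint handler using two shape/style pairs; the canvas, sizes, and colors are illustrative.
```
(require '[seesaw.core :as sc]
         '[seesaw.graphics :as g])

(def red-outline (g/style :foreground :red :stroke 3))
(def blue-fill   (g/style :background :blue))

(defn paint [c g2d]
  (g/anti-alias g2d)
  (g/draw g2d
          (g/rect 10 10 80 40) red-outline    ; outlined only
          (g/circle 140 30 20) blue-fill))    ; filled only

(-> (sc/frame :title "draw demo"
              :content (sc/canvas :paint paint :size [220 :by 80]))
    sc/pack! sc/show!)
```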
---
#### Drawcljprotocol
#### draw*clj
```
(draw* shape g2d style)
```
#####
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L405)
---
#### ellipseclj
```
(ellipse x y w)
```
```
(ellipse x y w h)
```
Create an ellipse that occupies the given rectangular region
```
Create an ellipse that occupies the given rectangular region
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L95)[raw docstring](#)
---
#### image-shapeclj
```
(image-shape x y image)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L162)
---
#### ImageShapeclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L160)
---
#### lineclj
```
(line x1 y1 x2 y2)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L65)
---
#### linear-gradientclj
```
(linear-gradient &
{:keys [start end fractions colors cycle]
:or {start default-start
end default-end
fractions default-fractions
colors default-colors
cycle :none}
:as opts})
```
Creates a linear gradient suitable for use on the :foreground and
:background properties of a (seesaw.graphics/style), or anywhere a java.awt.Paint is required. Has the following options:
:start The start [x y] point, defaults to [0 0]
:end The end [x y] point, defaults to [1 0]
:fractions Sequence of fractional values indicating color transition points in the gradient. Defaults to [0.0 1.0]. Must have same number of entries as :colors.
:colors Sequence of color values corresponding to :fractions. Value is passed through (seesaw.color/to-color). e.g. :blue, "#fff", etc.
:cycle The cycle behavior of the gradient, :none, :repeat, or :reflect.
Defaults to :none.
Examples:
; create a horizontal red, white and blue gradient with three equal parts
(linear-gradient :fractions [0 0.5 1.0] :colors [:red :white :blue])
See:
<http://docs.oracle.com/javase/6/docs/api/java/awt/LinearGradientPaint.html>
```
Creates a linear gradient suitable for use on the :foreground and
:background properties of a (seesaw.graphics/style), or anywhere a java.awt.Paint is required. Has the following options:
:start The start [x y] point, defaults to [0 0]
:end The end [x y] point, defaults to [1 0]
:fractions Sequence of fractional values indicating color transition points
in the gradient. Defaults to [0.0 1.0]. Must have same number
of entries as :colors.
:colors Sequence of color values corresponding to :fractions. Value is passed
through (seesaw.color/to-color). e.g. :blue, "#fff", etc.
:cycle The cycle behavior of the gradient, :none, :repeat, or :reflect.
Defaults to :none.
Examples:
; create a horizontal red, white and blue gradient with three equal parts
(linear-gradient :fractions [0 0.5 1.0] :colors [:red :white :blue])
See:
http://docs.oracle.com/javase/6/docs/api/java/awt/LinearGradientPaint.html
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L201)[raw docstring](#)
---
#### pathcljmacro
```
(path opts & forms)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L147)
---
#### pieclj
```
(pie x y w h start extent)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L124)
---
#### polygonclj
```
(polygon & points)
```
Create a polygonal shape with the given set of vertices.
points is a list of x/y pairs, e.g.:
(polygon [1 2] [3 4] [5 6])
```
Create a polygonal shape with the given set of vertices.
points is a list of x/y pairs, e.g.:
(polygon [1 2] [3 4] [5 6])
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L128)[raw docstring](#)
---
#### pushcljmacro
```
(push g2d & forms)
```
Push a Graphics2D context (Graphics2d/create) and automatically dispose it.
For example, in a paint handler:
(fn [c g2d]
(.setColor g2d java.awt.Color/RED)
(.drawString g2d "This string is RED" 0 20)
(push g2d
(.setColor g2d java.awt.Color/BLUE)
(.drawString g2d "This string is BLUE" 0 40))
(.drawString g2d "This string is RED again" 0 60))
```
Push a Graphics2D context (Graphics2d/create) and automatically dispose it.
For example, in a paint handler:
(fn [c g2d]
(.setColor g2d java.awt.Color/RED)
(.drawString g2d "This string is RED" 0 20)
(push g2d
(.setColor g2d java.awt.Color/BLUE)
(.drawString g2d "This string is BLUE" 0 40))
(.drawString g2d "This string is RED again" 0 60))
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L42)[raw docstring](#)
---
#### radial-gradientclj
```
(radial-gradient &
{:keys [center focus radius fractions colors cycle]
:or {center default-center
radius default-radius
fractions default-fractions
colors default-colors
cycle :none}
:as opts})
```
Creates a radial gradient suitable for use on the :foreground and
:background properties of a (seesaw.graphics/style), or anywhere a java.awt.Paint is required. Has the following options:
:center The center [x y] point, defaults to [0 0]
:focus The focus [x y] point, defaults to :center
:radius The radius. Defaults to 1.0
:fractions Sequence of fractional values indicating color transition points in the gradient. Defaults to [0.0 1.0]. Must have same number of entries as :colors.
:colors Sequence of color values corresponding to :fractions. Value is passed through (seesaw.color/to-color). e.g. :blue, "#fff", etc.
:cycle The cycle behavior of the gradient, :none, :repeat, or :reflect.
Defaults to :none.
Examples:
; create a red, white and blue gradient with three equal parts
(radial-gradient :radius 100.0 :fractions [0 0.5 1.0] :colors [:red :white :blue])
See:
<http://docs.oracle.com/javase/6/docs/api/java/awt/RadialGradientPaint.html>
```
Creates a radial gradient suitable for use on the :foreground and
:background properties of a (seesaw.graphics/style), or anywhere a java.awt.Paint is required. Has the following options:
:center The center [x y] point, defaults to [0 0]
:focus The focus [x y] point, defaults to :center
:radius The radius. Defaults to 1.0
:fractions Sequence of fractional values indicating color transition points
in the gradient. Defaults to [0.0 1.0]. Must have same number
of entries as :colors.
:colors Sequence of color values corresponding to :fractions. Value is passed
through (seesaw.color/to-color). e.g. :blue, "#fff", etc.
:cycle The cycle behavior of the gradient, :none, :repeat, or :reflect.
Defaults to :none.
Examples:
; create a red, white and blue gradient with three equal parts
(radial-gradient :radius 100.0 :fractions [0 0.5 1.0] :colors [:red :white :blue])
See:
http://docs.oracle.com/javase/6/docs/api/java/awt/RadialGradientPaint.html
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L241)[raw docstring](#)
---
#### rectclj
```
(rect x y w)
```
```
(rect x y w h)
```
Create a rectangular shape with the given upper-left corner, width and height.
```
Create a rectangular shape with the given upper-left corner, width and height.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L67)[raw docstring](#)
---
#### rotateclj
```
(rotate g2d degrees)
```
Apply a rotation to the graphics context by degrees
Returns g2d
```
Apply a rotation to the graphics context by degrees
Returns g2d
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L168)[raw docstring](#)
---
#### rounded-rectclj
```
(rounded-rect x y w h)
```
```
(rounded-rect x y w h rx)
```
```
(rounded-rect x y w h rx ry)
```
Create a rectangular shape with the given upper-left corner, width,
height and corner radii.
```
Create a rectangular shape with the given upper-left corner, width,
height and corner radii.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L81)[raw docstring](#)
---
#### scaleclj
```
(scale g2d s)
```
```
(scale g2d sx sy)
```
Apply a scale factor to the graphics context
Returns g2d
```
Apply a scale factor to the graphics context
Returns g2d
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L181)[raw docstring](#)
---
#### string-shapeclj
```
(string-shape x y value)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L158)
---
#### StringShapeclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L157)
---
#### strokeclj
```
(stroke &
{:keys [width cap join miter-limit dashes dash-phase]
:or {width 1
cap :square
join :miter
miter-limit 10.0
dashes nil
dash-phase 0.0}})
```
Create a new stroke with the given properties:
:width Width of the stroke
```
Create a new stroke with the given properties:
:width Width of the stroke
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L295)[raw docstring](#)
---
#### Styleclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L335)
---
#### styleclj
```
(style & {:keys [foreground background stroke font]})
```
Create a new style object for use with (seesaw.graphics/draw). Takes a list of key/value pairs:
:foreground A color value (see seesaw.color) for the foreground (stroke)
:background A color value (see seesaw.color) for the background (fill)
:stroke A stroke value used to draw outlines (see seesaw.graphics/stroke)
:font Font value used for drawing text shapes
The default value for all properties is nil. See (seesaw.graphics/draw) for interpretation of nil values.
Notes:
Style objects are immutable so they can be efficiently "pre-compiled" and used for drawing multiple shapes.
Examples:
; Red on black
(style :foreground :red :background :black :font :monospace)
; Red, 8-pixel line with no fill.
(style :foreground :red :stroke 8)
See:
(seesaw.graphics/update-style)
(seesaw.graphics/draw)
```
Create a new style object for use with (seesaw.graphics/draw). Takes a list of key/value pairs:
:foreground A color value (see seesaw.color) for the foreground (stroke)
:background A color value (see seesaw.color) for the background (fill)
:stroke A stroke value used to draw outlines (see seesaw.graphics/stroke)
:font Font value used for drawing text shapes
The default value for all properties is nil. See (seesaw.graphics/draw) for
interpretation of nil values.
Notes:
Style objects are immutable so they can be efficiently "pre-compiled" and
used for drawing multiple shapes.
Examples:
; Red on black
(style :foreground :red :background :black :font :monospace)
; Red, 8-pixel line with no fill.
(style :foreground :red :stroke 8)
See:
(seesaw.graphics/update-style)
(seesaw.graphics/draw)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L340)[raw docstring](#)
---
#### to-paintclj
```
(to-paint v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L329)
---
#### to-strokeclj
```
(to-stroke v)
```
Convert v to a stroke. As follows depending on v:
nil - returns nil
java.awt.Stroke instance - returns v
Throws IllegalArgumentException if it can't figure out what to do.
```
Convert v to a stroke. As follows depending on v:
nil - returns nil
java.awt.Stroke instance - returns v
Throws IllegalArgumentException if it can't figure out what to do.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L309)[raw docstring](#)
---
#### translateclj
```
(translate g2d dx dy)
```
Apply a translation to the graphics context
Returns g2d
```
Apply a translation to the graphics context
Returns g2d
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L175)[raw docstring](#)
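The transform helpers compose with (push); a sketch of a paint handler (suitable for the :paint option of seesaw.core/canvas) that draws the same square before and after an isolated rotation and scale. The coordinates are illustrative.
```
(require '[seesaw.graphics :as g])

(def outline (g/style :foreground :black :stroke 2))

(defn paint [c g2d]
  (g/translate g2d 60 60)                   ; move the origin
  (g/draw g2d (g/rect -20 -20 40) outline)
  (g/push g2d                               ; extra transform is undone on exit
    (g/rotate g2d 45)
    (g/scale g2d 0.5)
    (g/draw g2d (g/rect -20 -20 40) outline)))
```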
---
#### update-styleclj
```
(update-style s
&
{:keys [foreground background stroke font]
:or {foreground (:foreground s)
background (:background s)
stroke (:stroke s)
font (:font s)}})
```
Update a style with new properties and return a new style. This is basically exactly the same as (clojure.core/assoc) with the exception that color, stroke,
and font values are interpreted by Seesaw.
Examples:
(def start (style :foreground blue :background :white))
(def no-fill (update-style start :background nil))
(def red-line (update-style no-fill :foreground :red))
See:
(seesaw.graphics/style)
(seesaw.graphics/draw)
```
Update a style with new properties and return a new style. This is basically exactly the same as (clojure.core/assoc) with the exception that color, stroke,
and font values are interpreted by Seesaw.
Examples:
(def start (style :foreground blue :background :white))
(def no-fill (update-style start :background nil))
(def red-line (update-style no-fill :foreground :red))
See:
(seesaw.graphics/style)
(seesaw.graphics/draw)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/graphics.clj#L376)[raw docstring](#)
seesaw.icon
===
Functions for loading and creating icons.
```
Functions for loading and creating icons.
```
[raw docstring](#)
---
#### iconclj
```
(icon p)
```
Loads an icon. The parameter p can be any of the following:
nil - returns nil
javax.swing.Icon - returns the icon
java.awt.Image - returns an ImageIcon around the image
java.net.URL - Load the icon from the given URL
an i18n keyword - Load the icon from the resource bundle
classpath path string - Load the icon from the classpath
URL string - Load the icon from the given URL
java.io.File - Load the icon from the File
This is the function used to process the :icon property on most widgets and windows. Thus, any of these values may be used for the :icon property.
```
Loads an icon. The parameter p can be any of the following:
nil - returns nil
javax.swing.Icon - returns the icon
java.awt.Image - returns an ImageIcon around the image
java.net.URL - Load the icon from the given URL
an i18n keyword - Load the icon from the resource bundle
classpath path string - Load the icon from the classpath
URL string - Load the icon from the given URL
java.io.File - Load the icon from the File
This is the function used to process the :icon property on most widgets and windows. Thus, any of these values may be used for the :icon property.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/icon.clj#L21)[raw docstring](#)
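For illustration (the resource path and file below are made up), the same value forms work with (icon) directly and with the :icon property.
```
(require '[seesaw.core :as sc]
         '[seesaw.icon :as icon])

;; From the classpath -- "images/logo.png" is a hypothetical resource.
(def logo (icon/icon "images/logo.png"))

;; Any accepted value type can also be handed straight to :icon.
(sc/label :icon logo)
(sc/label :icon (java.io.File. "/tmp/logo.png"))   ; hypothetical file
```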
seesaw.invoke
===
---
#### invoke-latercljmacro
```
(invoke-later & body)
```
Equivalent to SwingUtilities/invokeLater. Executes the given body sometime in the future on the Swing UI thread. For example,
(invoke-later
(config! my-label :text "New Text"))
Notes:
(seesaw.core/invoke-later) is an alias of this macro.
See:
[http://download.oracle.com/javase/6/docs/api/javax/swing/SwingUtilities.html#invokeLater(java.lang.Runnable)](http://download.oracle.com/javase/6/docs/api/javax/swing/SwingUtilities.html#invokeLater%28java.lang.Runnable%29)
```
Equivalent to SwingUtilities/invokeLater. Executes the given body sometime in the future on the Swing UI thread. For example,
(invoke-later
(config! my-label :text "New Text"))
Notes:
(seesaw.core/invoke-later) is an alias of this macro.
See:
http://download.oracle.com/javase/6/docs/api/javax/swing/SwingUtilities.html#invokeLater(java.lang.Runnable)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/invoke.clj#L30)[raw docstring](#)
---
#### invoke-later*clj
```
(invoke-later* f & args)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/invoke.clj#L14)
---
#### invoke-nowcljmacro
```
(invoke-now & body)
```
Equivalent to SwingUtilities/invokeAndWait. Executes the given body immediately on the Swing UI thread, possibly blocking the current thread if it's not the Swing UI thread. Returns the result of executing body. For example,
(invoke-now
(config! my-label :text "New Text"))
Notes:
Be very careful with this function in the presence of locks and stuff.
(seesaw.core/invoke-now) is an alias of this macro.
See:
[http://download.oracle.com/javase/6/docs/api/javax/swing/SwingUtilities.html#invokeAndWait(java.lang.Runnable)](http://download.oracle.com/javase/6/docs/api/javax/swing/SwingUtilities.html#invokeAndWait%28java.lang.Runnable%29)
```
Equivalent to SwingUtilities/invokeAndWait. Executes the given body immediately on the Swing UI thread, possibly blocking the current thread if it's not the Swing UI thread. Returns the result of executing body. For example,
(invoke-now
(config! my-label :text "New Text"))
Notes:
Be very careful with this function in the presence of locks and stuff.
(seesaw.core/invoke-now) is an alias of this macro.
See:
http://download.oracle.com/javase/6/docs/api/javax/swing/SwingUtilities.html#invokeAndWait(java.lang.Runnable)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/invoke.clj#L47)[raw docstring](#)
---
#### invoke-now*clj
```
(invoke-now* f & args)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/invoke.clj#L16)
---
#### invoke-sooncljmacro
```
(invoke-soon & body)
```
Execute code on the swing event thread (EDT) as soon as possible. That is:
* If the current thread is the EDT, executes body and returns the result
* Otherwise, passes body to (seesaw.core/invoke-later) and returns nil
Notes:
(seesaw.core/invoke-soon) is an alias of this macro.
See:
(seesaw.core/invoke-later)
[http://download.oracle.com/javase/6/docs/api/javax/swing/SwingUtilities.html#invokeLater(java.lang.Runnable)](http://download.oracle.com/javase/6/docs/api/javax/swing/SwingUtilities.html#invokeLater%28java.lang.Runnable%29)
```
Execute code on the swing event thread (EDT) as soon as possible. That is:
* If the current thread is the EDT, executes body and returns the result
* Otherwise, passes body to (seesaw.core/invoke-later) and returns nil
Notes:
(seesaw.core/invoke-soon) is an alias of this macro.
See:
(seesaw.core/invoke-later)
http://download.oracle.com/javase/6/docs/api/javax/swing/SwingUtilities.html#invokeLater(java.lang.Runnable)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/invoke.clj#L65)[raw docstring](#)
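A sketch of the typical use: updating a widget from code that may or may not already be running on the EDT. The label and message are illustrative.
```
(require '[seesaw.core :as sc]
         '[seesaw.invoke :refer [invoke-soon]])

(def status (sc/label :text "idle"))

(defn set-status! [msg]
  ;; Executes immediately on the EDT, otherwise queues via invoke-later.
  (invoke-soon
    (sc/config! status :text msg)))

(set-status! "working...")
```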
---
#### invoke-soon*clj
```
(invoke-soon* f & args)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/invoke.clj#L24)
---
#### signallercljmacro
```
(signaller args & body)
```
Convenience form of (seesaw.invoke/signaller*).
A use of signaller* like this:
(signaller* (fn [x y z] ... body ...))
can be written like this:
(signaller [x y z] ... body ...)
See:
(seesaw.invoke/signaller*)
```
Convenience form of (seesaw.invoke/signaller*).
A use of signaller* like this:
(signaller* (fn [x y z] ... body ...))
can be written like this:
(signaller [x y z] ... body ...)
See:
(seesaw.invoke/signaller*)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/invoke.clj#L132)[raw docstring](#)
---
#### signaller*clj
```
(signaller* f)
```
Returns a function that conditionally queues the given function (+ args) on the UI thread. The call is only queued if there is not already a pending call queued.
Suppose you're performing some computation in the background and want to signal some UI component to update. Normally you'd use (seesaw.core/invoke-later)
but that can easily flood the UI thread with unnecessary updates. That is,
only the "last" queued update really matters since it will overwrite any preceding updates when the event queue is drained. Thus, this function takes care of insuring that only one update call is "in-flight" at any given time.
The returned function returns true if the action was queued, or false if one was already active.
Examples:
; Increment a number in a thread and signal the UI to update a label
; with the current value. Without a signaller, the loop would send
; updates way way way faster than the UI thread could handle them.
(defn counting-text-box []
(let [display (label :text "0")
value (atom 0)
signal (signaller* #(text! display (str @value)))]
(future
(loop []
(swap! value inc)
(signal)
(recur)))
display))
Note:
You probably want to use the (seesaw.invoke/signaller) convenience form.
See:
(seesaw.invoke/invoke-later)
(seesaw.invoke/signaller)
```
Returns a function that conditionally queues the given function (+ args) on
the UI thread. The call is only queued if there is not already a pending call queued.
Suppose you're performing some computation in the background and want to signal some UI component to update. Normally you'd use (seesaw.core/invoke-later)
but that can easily flood the UI thread with unnecessary updates. That is,
only the "last" queued update really matters since it will overwrite any preceding updates when the event queue is drained. Thus, this function takes care of insuring that only one update call is "in-flight" at any given time.
The returned function returns true if the action was queued, or false if one was already active.
Examples:
; Increment a number in a thread and signal the UI to update a label
; with the current value. Without a signaller, the loop would send
; updates way way way faster than the UI thread could handle them.
(defn counting-text-box []
(let [display (label :text "0")
value (atom 0)
signal (signaller* #(text! display (str @value)))]
(future
(loop []
(swap! value inc)
(signal)
(recur)))
display))
Note:
You probably want to use the (seesaw.invoke/signaller) convenience
form.
See:
(seesaw.invoke/invoke-later)
(seesaw.invoke/signaller)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/invoke.clj#L81)[raw docstring](#)
seesaw.keymap
===
Functions for mapping key strokes to actions.
```
Functions for mapping key strokes to actions.
```
[raw docstring](#)
---
#### map-keyclj
```
(map-key target key act & {:keys [scope id] :as opts})
```
Install a key mapping on a widget.
Key mappings are hopelessly entwined with keyboard focus and the widget hierarchy. When a key is pressed in a widget with focus, each widget up the hierarchy gets a chance to handle it. There are three 'scopes' with which a mapping may be registered:
:self
```
The mapping only handles key presses when the widget itself has the keyboard focus. Use this, for example, to install custom key mappings in a text box.
```
:descendants
```
The mapping handles key presses when the widget itself or any of its descendants has keyboard focus.
```
:global
```
The mapping handles key presses as long as the top-level window containing the widget is active. This is what's used for menu shortcuts and should be used for other app-wide mappings.
```
Given this, each mapping is installed on a particular widget along with the desired keystroke and action to perform. The keystroke can be any valid argument to (seesaw.keystroke/keystroke). The action can be one of the following:
* A javax.swing.Action. See (seesaw.core/action)
* A single-argument function. An action will automatically be created around it.
* A button, menu, menuitem, or other button-y thing. An action that programmatically clicks the button will be created.
* nil to disable or remove a mapping
target may be a widget, frame, or something convertible through to-widget.
Returns a function that removes the key mapping.
Examples:
; In frame f, key "K" clicks button b
(map-key f "K" b)
; In text box t, map ctrl+enter to a function
(map-key t "control ENTER"
(fn [e] (alert e "You pressed ctrl+enter!")))
See:
(seesaw.keystroke/keystroke)
<http://download.oracle.com/javase/tutorial/uiswing/misc/keybinding.html>
```
Install a key mapping on a widget.
Key mappings are hopelessly entwined with keyboard focus and the widget
hierarchy. When a key is pressed in a widget with focus, each widget up the hierarchy gets a chance to handle it. There are three 'scopes' with which a mapping may be registered:
:self
The mapping only handles key presses when the widget itself has
the keyboard focus. Use this, for example, to install custom
key mappings in a text box.
:descendants
The mapping handles key presses when the widget itself or any
of its descendants has keyboard focus.
:global
The mapping handles key presses as long as the top-level window
containing the widget is active. This is what's used for menu
shortcuts and should be used for other app-wide mappings.
Given this, each mapping is installed on a particular widget along with the desired keystroke and action to perform. The keystroke can be any valid argument to (seesaw.keystroke/keystroke). The action can be one of the following:
* A javax.swing.Action. See (seesaw.core/action)
* A single-argument function. An action will automatically be
created around it.
* A button, menu, menuitem, or other button-y thing. An action
that programmatically clicks the button will be created.
* nil to disable or remove a mapping
target may be a widget, frame, or something convertible through to-widget.
Returns a function that removes the key mapping.
Examples:
; In frame f, key "K" clicks button b
(map-key f "K" b)
; In text box t, map ctrl+enter to a function
(map-key t "control ENTER"
(fn [e] (alert e "You pressed ctrl+enter!")))
See:
(seesaw.keystroke/keystroke)
http://download.oracle.com/javase/tutorial/uiswing/misc/keybinding.html
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/keymap.clj#L46)[raw docstring](#)
seesaw.keystroke
===
---
#### keystrokeclj
```
(keystroke arg)
```
Convert an argument to a KeyStroke. When the argument is a string, follows the keystroke descriptor syntax for KeyStroke/getKeyStroke (see link below).
For example,
(keystroke "ctrl S")
Note that there is one additional modifier supported, "menu" which will replace the modifier with the appropriate platform-specific modifier key for menus. For example, on Windows it will be "ctrl", while on OSX, it will be the "command" key. Yay!
arg can also be an i18n resource keyword.
See [http://download.oracle.com/javase/6/docs/api/javax/swing/KeyStroke.html#getKeyStroke(java.lang.String)](http://download.oracle.com/javase/6/docs/api/javax/swing/KeyStroke.html#getKeyStroke%28java.lang.String%29)
```
Convert an argument to a KeyStroke. When the argument is a string, follows
the keystroke descriptor syntax for KeyStroke/getKeyStroke (see link below).
For example,
(keystroke "ctrl S")
Note that there is one additional modifier supported, "menu" which will replace the modifier with the appropriate platform-specific modifier key for menus. For example, on Windows it will be "ctrl", while on OSX, it will be the "command" key. Yay!
arg can also be an i18n resource keyword.
See http://download.oracle.com/javase/6/docs/api/javax/swing/KeyStroke.html#getKeyStroke(java.lang.String)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/keystroke.clj#L28)[raw docstring](#)
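A sketch combining the "menu" modifier with (seesaw.keymap/map-key): the one mapping becomes ctrl+S on Windows/Linux and command+S on OSX. The frame and handler are illustrative only.
```
(require '[seesaw.core :as sc]
         '[seesaw.keymap :refer [map-key]])

(def f (sc/frame :title "demo" :content (sc/text :multi-line? true)))

;; "menu S" expands to the platform's menu shortcut key plus S.
(map-key f "menu S"
         (fn [e] (sc/alert f "Save requested"))
         :scope :global)
```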
seesaw.layout
===
Functions for dealing with layouts. Prefer layout specific constructors in seesaw.core, e.g. border-panel.
```
Functions for dealing with layouts. Prefer layout specific constructors in seesaw.core, e.g. border-panel.
```
[raw docstring](#)
---
#### add!-implclj
```
(add!-impl container subject & more)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L290)
---
#### add-widgetclj
```
(add-widget c w)
```
```
(add-widget c w constraint)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L36)
---
#### add-widgetsclj
```
(add-widgets c ws)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L44)
---
#### border-layout-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L79)
---
#### box-layoutclj
```
(box-layout dir)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L164)
---
#### box-layout-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L158)
---
#### card-layout-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L105)
---
#### default-items-optionclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L51)
---
#### flow-layout-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L128)
---
#### form-layout-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L263)
---
#### grid-layoutclj
```
(grid-layout rows columns)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L185)
---
#### grid-layout-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L169)
---
#### handle-structure-changeclj
```
(handle-structure-change container)
```
Helper. Revalidate and repaint a container after structure change
```
Helper. Revalidate and repaint a container after structure change
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L29)[raw docstring](#)
---
#### LayoutManipulationcljprotocol
#### add!*clj
```
(add!* layout target widget constraint)
```
#### get-constraint*clj
```
(get-constraint* layout container widget)
```
#####
#####
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L25)
---
#### nil-layout-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L58)
---
#### realize-grid-bag-constraintsclj
```
(realize-grid-bag-constraints items)
```
*INTERNAL USE ONLY. DO NOT USE.*
Turn item specs into [widget constraint] pairs by successively applying options to GridBagConstraints
```
*INTERNAL USE ONLY. DO NOT USE.*
Turn item specs into [widget constraint] pairs by successively applying options to GridBagConstraints
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L241)[raw docstring](#)
---
#### remove!-implclj
```
(remove!-impl container subject & more)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L300)
---
#### replace!-implclj
```
(replace!-impl container old-widget new-widget)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/layout.clj#L317)
seesaw.make-widget
===
---
#### MakeWidgetcljprotocol
#### make-widget*clj
```
(make-widget* v)
```
#####
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/make_widget.clj#L16)
seesaw.meta
===
Functions for associating metadata with frames and widgets, etc.
```
Functions for associating metadata with frames and widgets, etc.
```
[raw docstring](#)
---
#### Metacljprotocol
#### get-metaclj
```
(get-meta this key)
```
#### put-meta!clj
```
(put-meta! this key value)
```
#####
#####
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/meta.clj#L15)
seesaw.mig
===
MigLayout support for Seesaw
```
MigLayout support for Seesaw
```
[raw docstring](#)
---
#### mig-layout-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/mig.clj#L35)
---
#### mig-panelclj
```
(mig-panel & opts)
```
Construct a panel with a MigLayout. Takes one special property:
```
:constraints ["layout constraints" "column constraints" "row constraints"]
```
These correspond to the three constructor arguments to MigLayout.
A vector of 0, 1, 2, or 3 constraints can be given.
The format of the :items property is a vector of [widget, constraint] pairs.
For example:
:items [[ "Propeller" "split, span, gaptop 10"]]
See:
<http://www.miglayout.com>
(seesaw.core/default-options)
```
Construct a panel with a MigLayout. Takes one special property:
:constraints ["layout constraints" "column constraints" "row constraints"]
These correspond to the three constructor arguments to MigLayout.
A vector of 0, 1, 2, or 3 constraints can be given.
The format of the :items property is a vector of [widget, constraint] pairs.
For example:
:items [[ "Propeller" "split, span, gaptop 10"]]
See:
http://www.miglayout.com
(seesaw.core/default-options)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/mig.clj#L44)[raw docstring](#)
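A small sketch (assuming the MigLayout dependency used by seesaw.mig is on the classpath); the constraint strings and widgets are illustrative.
```
(require '[seesaw.core :as sc]
         '[seesaw.mig :refer [mig-panel]])

(def panel
  (mig-panel
    :constraints ["wrap 2" "[right]10[grow,fill]" ""]
    :items [["Name"     ""]
            [(sc/text)  "growx"]
            ["Comments" "aligny top"]
            [(sc/text :multi-line? true :rows 4) "grow"]]))

(-> (sc/frame :title "MigLayout demo" :content panel) sc/pack! sc/show!)
```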
---
#### mig-panel-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/mig.clj#L42)
seesaw.mouse
===
Functions for dealing with the mouse.
```
Functions for dealing with the mouse.
```
[raw docstring](#)
---
#### buttonclj
```
(button e)
```
Return the affected button in a mouse event.
Returns :left, :center, :right, or nil.
```
Return the affected button in a mouse event.
Returns :left, :center, :right, or nil.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/mouse.clj#L58)[raw docstring](#)
---
#### button-down?clj
```
(button-down? e btn)
```
Returns true if the given button is currently down in the given mouse event.
Examples:
(button-down? event :left)
```
Returns true if the given button is currently down in the given mouse event.
Examples:
(button-down? event :left)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/mouse.clj#L46)[raw docstring](#)
---
#### locationclj
```
(location)
```
```
(location v)
```
Returns the [x y] location of the mouse.
If given no arguments, returns full screen coordinates.
If given a MouseEvent object returns the mouse location from the event.
```
Returns the [x y] location of the mouse.
If given no arguments, returns full screen coordinates.
If given a MouseEvent object returns the mouse location from the event.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/mouse.clj#L18)[raw docstring](#)
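A quick sketch of both arities inside a mouse handler; the widget is illustrative.
```
(require '[seesaw.core :as sc]
         '[seesaw.mouse :as mouse])

(def panel (sc/label :text "click me"))

(sc/listen panel :mouse-clicked
  (fn [e]
    (println "widget-relative:" (mouse/location e)
             "screen:"          (mouse/location))))
```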
seesaw.options
===
Functions for dealing with options.
```
Functions for dealing with options.
```
[raw docstring](#)
---
#### apply-optionsclj
```
(apply-options target opts)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L132)
---
#### around-optionclj
```
(around-option parent-option set-conv get-conv)
```
```
(around-option parent-option set-conv get-conv examples)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L144)
---
#### bean-optioncljmacro
```
(bean-option name-arg target-type & [set-conv get-conv examples])
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L61)
---
#### default-optionclj
```
(default-option name)
```
```
(default-option name setter)
```
```
(default-option name setter getter)
```
```
(default-option name setter getter examples)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L72)
---
#### get-option-mapclj
```
(get-option-map this)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L20)
---
#### get-option-valueclj
```
(get-option-value target name)
```
```
(get-option-value target name handlers)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L160)
---
#### ignore-optionclj
```
(ignore-option name)
```
```
(ignore-option name examples)
```
Might be used to explicitly ignore the default behaviour of options.
```
Might be used to explicitly ignore the default behaviour of options.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L78)[raw docstring](#)
---
#### ignore-optionsclj
```
(ignore-options source-options)
```
Create an ignore-map of options that should be ignored, ready to be merged into default option maps.
```
Create an ignore-map of options that should be ignored, ready to be merged into default option maps.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L138)[raw docstring](#)
---
#### Optionclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L28)
---
#### option-mapclj
```
(option-map & opts)
```
Construct an option map from a list of options.
```
Construct an option map from a list of options.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L155)[raw docstring](#)
---
#### option-providercljmacro
```
(option-provider class options)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L23)
---
#### OptionProvidercljprotocol
#### get-option-maps*clj
```
(get-option-maps* this)
```
#####
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L17)
---
#### resource-optionclj
```
(resource-option option-name keys)
```
Defines an option that takes a j18n namespace-qualified keyword as a value. The keyword is used as a prefix for the set of properties in the given key list. This allows subsets of widget options to be configured from a resource bundle.
Example:
; The :resource property looks in a resource bundle for
; prefix.text, prefix.foreground, etc.
(resource-option :resource [:text :foreground :background])
```
Defines an option that takes a j18n namespace-qualified keyword as a value. The keyword is used as a prefix for the set of properties in the given key list. This allows subsets of widget options to be configured from a resource bundle.
Example:
; The :resource property looks in a resource bundle for
; prefix.text, prefix.foreground, etc.
(resource-option :resource [:text :foreground :background])
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L83)[raw docstring](#)
---
#### set-option-valueclj
```
(set-option-value target name value)
```
```
(set-option-value target name value handlers)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/options.clj#L169)
seesaw.pref
===
---
#### bind-preference-to-atomcljmacro
```
(bind-preference-to-atom key atom)
```
Bind atom to preference by syncing it with (java.util.prefs.Preferences/userRoot) for the current namespace and a given KEY. If no preference has been set yet the atom will stay untouched, otherwise it will be set to the stored preference value. Note that any value of the atom and the preference key must be printable per PRINT-DUP and readable per READ-STRING for it to be used with the preferences store.
```
Bind atom to preference by syncing it with (java.util.prefs.Preferences/userRoot) for the current namespace and a given KEY. If no preference has been set yet the atom will stay untouched, otherwise it will be set to the stored preference value. Note that any value of the atom and the preference key must be printable per PRINT-DUP and readable per READ-STRING for it to be used with the preferences store.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/pref.clj#L50)[raw docstring](#)
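A sketch, assuming the value stays printable/readable as described above; the key name and initial value are arbitrary.
```
(require '[seesaw.pref :as pref])

;; Window size that survives application restarts.
(def window-size (atom [640 480]))
(pref/bind-preference-to-atom "window-size" window-size)

;; Later changes to the atom are synced back to the per-namespace
;; Preferences node under the "window-size" key.
(reset! window-size [800 600])
```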
---
#### bind-preference-to-atom*clj
```
(bind-preference-to-atom* ns key atom)
```
Bind atom to preference by syncing it with (java.util.prefs.Preferences/userRoot) for the specified namespace and a given KEY. If no preference has been set yet the atom will stay untouched, otherwise it will be set to the stored preference value. Note that any value of the atom and the preference key must be printable per PRINT-DUP and readable per READ-STRING for it to be used with the preferences store.
```
Bind atom to preference by syncing it with (java.util.prefs.Preferences/userRoot) for the specified namespace and a given KEY. If no preference has been set yet the atom will stay untouched, otherwise it will be set to the stored preference value. Note that any value of the atom and the preference key must be printable per PRINT-DUP and readable per READ-STRING for it to be used with the preferences store.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/pref.clj#L31)[raw docstring](#)
---
#### preference-atomcljmacro
```
(preference-atom key)
```
```
(preference-atom key initial-value)
```
Create and return an atom which has been bound using bind-preference-to-atom for the current namespace.
```
Create and return an atom which has been bound using bind-preference-to-atom for the current namespace.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/pref.clj#L61)[raw docstring](#)
---
#### preferences-nodecljmacro
```
(preferences-node)
```
```
(preferences-node ns)
```
Return the java.util.prefs.Preferences/userRoot for the current or the specified namespace.
```
Return the java.util.prefs.Preferences/userRoot for the current or the specified namespace.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/pref.clj#L20)[raw docstring](#)
---
#### preferences-node*clj
```
(preferences-node* ns)
```
Return the java.util.prefs.Preferences/userRoot for the specified namespace.
```
Return the java.util.prefs.Preferences/userRoot for the specified namespace.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/pref.clj#L13)[raw docstring](#)
seesaw.rsyntax
===
Support for RSyntaxTextArea: <http://fifesoft.com/rsyntaxtextarea/index.php>
```
Support for RSyntaxTextArea: http://fifesoft.com/rsyntaxtextarea/index.php
```
[raw docstring](#)
---
#### text-areaclj
```
(text-area & opts)
```
Create a new RSyntaxTextArea.
In addition to normal seesaw.core/text stuff, supports the following:
:syntax The syntax highlighting. Defaults to :none. Use seesaw.dev/show-options to get full list.
See:
(seesaw.core/text)
<http://javadoc.fifesoft.com/rsyntaxtextarea/>
```
Create a new RSyntaxTextArea.
In addition to normal seesaw.core/text stuff, supports the following:
:syntax The syntax highlighting. Defaults to :none. Use
seesaw.dev/show-options to get full list.
See:
(seesaw.core/text)
http://javadoc.fifesoft.com/rsyntaxtextarea/
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/rsyntax.clj#L44)[raw docstring](#)
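A sketch, assuming the optional RSyntaxTextArea dependency is on the classpath; the syntax keyword and content are illustrative.
```
(require '[seesaw.core :as sc]
         '[seesaw.rsyntax :as rsyntax])

(def editor
  (rsyntax/text-area
    :syntax :clojure
    :text   "(println \"hello\")"))

(-> (sc/frame :title "editor"
              :content (sc/scrollable editor)
              :size [400 :by 300])
    sc/show!)
```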
---
#### text-area-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/rsyntax.clj#L29)
seesaw.scroll
===
Functions for dealing with scrolling. Prefer (seesaw.core/scroll!).
```
Functions for dealing with scrolling. Prefer (seesaw.core/scroll!).
```
[raw docstring](#)
---
#### scroll!*clj
```
(scroll!* target modifier arg)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/scroll.clj#L139)
---
#### scroll-toclj
```
(scroll-to this arg)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/#L)
seesaw.selection
===
---
#### Selectioncljprotocol
#### get-selectionclj
```
(get-selection target)
```
#### set-selectionclj
```
(set-selection target args)
```
#####
#####
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selection.clj#L24)
---
#### selectionclj
```
(selection target)
```
```
(selection target opts)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selection.clj#L146)
---
#### selection!clj
```
(selection! target values)
```
```
(selection! target opts values)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selection.clj#L154)
---
#### ViewModelIndexConversioncljprotocol
#### index-to-modelclj
```
(index-to-model this index)
```
#### index-to-viewclj
```
(index-to-view this index)
```
#####
#####
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selection.clj#L20)
seesaw.selector
===
Seesaw selector support, based largely upon enlive-html.
<https://github.com/cgrand/enlive>
There's no need to ever directly require this namespace. Use (seesaw.core/select)!
```
Seesaw selector support, based largely upon enlive-html.
https://github.com/cgrand/enlive
There's no need to ever directly require this namespace. Use (seesaw.core/select)!
```
[raw docstring](#)
---
#### cacheableclj
```
(cacheable selector)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selector.clj#L278)
---
#### cacheable?clj
```
(cacheable? selector)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selector.clj#L279)
---
#### class-ofclj
```
(class-of w)
```
Retrieve the classes of a widget as a set of strings
```
Retrieve the classes of a widget as a set of strings
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selector.clj#L46)[raw docstring](#)
---
#### class-of!clj
```
(class-of! w classes)
```
INTERNAL USE ONLY.
```
INTERNAL USE ONLY.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selector.clj#L51)[raw docstring](#)
---
#### id-ofclj
```
(id-of w)
```
Retrieve the id of a widget. Use (seesaw.core/id-of).
```
Retrieve the id of a widget. Use (seesaw.core/id-of).
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selector.clj#L33)[raw docstring](#)
---
#### id-of!clj
```
(id-of! w id)
```
INTERNAL USE ONLY.
```
INTERNAL USE ONLY.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selector.clj#L38)[raw docstring](#)
---
#### id-selector?clj
```
(id-selector? s)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selector.clj#L56)
---
#### selectclj
```
(select node-or-nodes selector)
```
*USE seesaw.core/select*
Returns the seq of nodes or fragments matched by the specified selector.
```
*USE seesaw.core/select*
Returns the seq of nodes or fragments matched by the specified selector.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selector.clj#L358)[raw docstring](#)
---
#### Selectablecljprotocol
#### class-of!*clj
```
(class-of!* this classes)
```
#### class-of*clj
```
(class-of* this)
```
#### id-of!*clj
```
(id-of!* this id)
```
#### id-of*clj
```
(id-of* this)
```
#####
#####
#####
#####
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selector.clj#L24)
---
#### Tagcljprotocol
#### tag-nameclj
```
(tag-name this)
```
#####
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/selector.clj#L30)
seesaw.style
===
Functions for styling apps. Prefer (seesaw.core/stylesheet) and friends.
```
Functions for styling apps. Prefer (seesaw.core/stylesheet) and friends.
```
[raw docstring](#)
---
#### apply-stylesheetclj
```
(apply-stylesheet root stylesheet)
```
ALPHA - EXPERIMENTAL AND GUARANTEED TO CHANGE
Apply a stylesheet to a widget hierarchy. A stylesheet is simply a map where the keys are selectors and the values are maps from widget properties to values. For example,
(apply-stylesheet frame {
[:#foo] { :text "hi" }
[:.important] { :background :red } })
Applying a stylesheet is a one-time operation. It does not set up any kind of monitoring. Thus, if you make a change to a widget that would affect the rules that apply to it (say, by changing its :class) you'll need to reapply the stylesheet.
See:
(seesaw.core/config!)
(seesaw.core/select)
```
ALPHA - EXPERIMENTAL AND GUARANTEED TO CHANGE
Apply a stylesheet to a widget hierarchy. A stylesheet is simply a map where the keys are selectors and the values are maps from widget properties to values. For example,
(apply-stylesheet frame {
[:#foo] { :text "hi" }
[:.important] { :background :red } })
Applying a stylesheet is a one-time operation. It does not set up any kind of monitoring. Thus, if you make a change to a widget that would affect the rules that apply to it (say, by changing its :class) you'll need to reapply the stylesheet.
See:
(seesaw.core/config!)
(seesaw.core/select)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/style.clj#L17)[raw docstring](#)
seesaw.swingx
===
SwingX integration. Unfortunately, SwingX is hosted on java.net which means it looks abandoned most of the time. Downloads are here
<http://java.net/downloads/swingx/releases/1.6/>
This is an incomplete wrapper. If something's missing that you want, just ask.
```
SwingX integration. Unfortunately, SwingX is hosted on java.net which means it looks abandoned most of the time. Downloads are here http://java.net/downloads/swingx/releases/1.6/
This is an incomplete wrapper. If something's missing that you want, just ask.
```
[raw docstring](#)
---
#### add-highlighterclj
```
(add-highlighter target hl)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L197)
---
#### border-panel-xclj
```
(border-panel-x & opts)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L737)
---
#### busy-labelclj
```
(busy-label & args)
```
Creates an org.jdesktop.swingx.JXBusyLabel which is a label that shows
'busy' status with a spinner, kind of like an indeterminate progress bar.
Additional options:
:busy? Whether busy status should be shown or not. Defaults to false.
Examples:
(busy-label :text "Processing ..."
:busy? true)
See:
(seesaw.core/label)
(seesaw.core/label-options)
(seesaw.swingx/busy-label-options)
```
Creates an org.jdesktop.swingx.JXBusyLabel which is a label that shows
'busy' status with a spinner, kind of like an indeterminate progress bar.
Additional options:
:busy? Whether busy status should be shown or not. Defaults to false.
Examples:
(busy-label :text "Processing ..."
:busy? true)
See:
(seesaw.core/label)
(seesaw.core/label-options)
(seesaw.swingx/busy-label-options)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L290)[raw docstring](#)
---
#### busy-label-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L281)
---
#### button-xclj
```
(button-x & args)
```
Creates an org.jdesktop.swingx.JXButton which is an improved (button) that supports painters. Supports these additional options:
:foreground-painter The foreground painter
:background-painter The background painter
:paint-border-insets? Defaults to true. If false, the painter paints the entire background.
Examples:
See:
(seesaw.core/button)
(seesaw.core/button-options)
(seesaw.swingx/button-x-options)
```
Creates an org.jdesktop.swingx.JXButton which is an improved (button) that supports painters. Supports these additional options:
:foreground-painter The foreground painter
:background-painter The background painter
:paint-border-insets? Defaults to true. If false, the painter paints the
entire background.
Examples:
See:
(seesaw.core/button)
(seesaw.core/button-options)
(seesaw.swingx/button-x-options)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L222)[raw docstring](#)
---
#### button-x-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L212)
---
#### card-panel-xclj
```
(card-panel-x & opts)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L753)
---
#### color-selection-buttonclj
```
(color-selection-button & args)
```
Creates a color selection button. In addition to normal button options,
supports:
:selection A color value. See (seesaw.color/to-color)
The currently selected color can be retrieved with (seesaw.core/selection).
Examples:
(def b (color-selection-button :selection :aliceblue))
(selection! b java.awt.Color/RED)
(listen b :selection
(fn [e]
(println "Selected color changed to ")))
See:
(seesaw.swingx/color-selection-button-options)
(seesaw.color/color)
```
Creates a color selection button. In addition to normal button options,
supports:
:selection A color value. See (seesaw.color/to-color)
The currently selected color can be retrieved with (seesaw.core/selection).
Examples:
(def b (color-selection-button :selection :aliceblue))
(selection! b java.awt.Color/RED)
(listen b :selection
(fn [e]
(println "Selected color changed to ")))
See:
(seesaw.swingx/color-selection-button-options)
(seesaw.color/color)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L449)[raw docstring](#)
---
#### color-selection-button-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L440)
---
#### default-highlighter-hostcljmacro
```
(default-highlighter-host class)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L176)
---
#### flow-panel-xclj
```
(flow-panel-x & opts)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L740)
---
#### get-highlightersclj
```
(get-highlighters target)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L189)
---
#### grid-panel-xclj
```
(grid-panel-x & {:keys [rows columns] :as opts})
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L749)
---
#### headerclj
```
(header & args)
```
Creates a header which consists of a title, description (supports basic HTML)
and an icon. Additional options:
:title The title. May be a resource.
:description The description. Supports basic HTML (3.2). May be a resource.
:icon The icon. May be a resource.
Examples:
(header :title "This is a title"
:description "<html>A <b>description</b> with some
<i>italics</i></html>"
:icon "<http://url/to/icon.png>")
See:
(seesaw.swingx/header-options)
```
Creates a header which consists of a title, description (supports basic HTML)
and an icon. Additional options:
:title The title. May be a resource.
:description The description. Supports basic HTML (3.2). May be a resource.
:icon The icon. May be a resource.
Examples:
(header :title "This is a title"
:description "<html>A <b>description</b> with some
<i>italics</i></html>"
:icon "http://url/to/icon.png")
See:
(seesaw.swingx/header-options)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L506)[raw docstring](#)
---
#### header-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L496)
---
#### highlighter-host-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L205)
---
#### HighlighterHostcljprotocol
#### add-highlighter*clj
```
(add-highlighter* this h)
```
#### get-highlighters*clj
```
(get-highlighters* this)
```
#### remove-highlighter*clj
```
(remove-highlighter* this h)
```
#### set-highlighters*clj
```
(set-highlighters* this hs)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L170)
---
#### hl-colorclj
```
(hl-color &
{:keys [foreground background selected-background
selected-foreground]})
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L119)
---
#### hl-iconclj
```
(hl-icon i)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L132)
---
#### hl-shadeclj
```
(hl-shade)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L141)
---
#### hl-simple-stripingclj
```
(hl-simple-striping & {:keys [background lines-per-stripe]})
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L149)
---
#### horizontal-panel-xclj
```
(horizontal-panel-x & opts)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L743)
---
#### hyperlinkclj
```
(hyperlink & args)
```
Construct an org.jdesktop.swingx.JXHyperlink which is a button that looks like a link and opens its URI in the system browser. In addition to all the options of a button, supports:
:uri A string, java.net.URL, or java.net.URI with the URI to open
Examples:
(hyperlink :text "Click Me" :uri "<http://google.com>")
See:
(seesaw.core/button)
(seesaw.core/button-options)
```
Construct an org.jdesktop.swingx.JXHyperlink which is a button that looks like a link and opens its URI in the system browser. In addition to all the options of a button, supports:
:uri A string, java.net.URL, or java.net.URI with the URI to open
Examples:
(hyperlink :text "Click Me" :uri "http://google.com")
See:
(seesaw.core/button)
(seesaw.core/button-options)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L320)[raw docstring](#)
---
#### hyperlink-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L312)
---
#### label-xclj
```
(label-x & args)
```
Creates a org.jdesktop.swingx.JXLabel which is an improved (label) that supports wrapped text, rotation, etc. Additional options:
:wrap-lines? When true, text is wrapped to fit
:text-rotation Rotation of text in radians
Examples:
(label-x :text "This is really a very very very very very very long label"
:wrap-lines? true
:rotation (Math/toRadians 90.0))
See:
(seesaw.core/label)
(seesaw.core/label-options)
(seesaw.swingx/label-x-options)
```
Creates a org.jdesktop.swingx.JXLabel which is an improved (label) that supports wrapped text, rotation, etc. Additional options:
:wrap-lines? When true, text is wrapped to fit
:text-rotation Rotation of text in radians
Examples:
(label-x :text "This is really a very very very very very very long label"
:wrap-lines? true
:rotation (Math/toRadians 90.0))
See:
(seesaw.core/label)
(seesaw.core/label-options)
(seesaw.swingx/label-x-options)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L257)[raw docstring](#)
---
#### label-x-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L245)
---
#### listbox-xclj
```
(listbox-x & args)
```
Create a JXList which is basically an improved (seesaw.core/listbox).
Additional capabilities include sorting, searching, and highlighting.
Beyond listbox, has the following additional options:
:sort-with A comparator (like <, >, etc) used to sort the items in the model.
:sort-order :ascending or :descending
:highlighters A list of highlighters
By default, ctrl/cmd-F is bound to the search function.
Examples:
See:
(seesaw.core/listbox)
```
Create a JXList which is basically an improved (seesaw.core/listbox).
Additional capabilities include sorting, searching, and highlighting.
Beyond listbox, has the following additional options:
:sort-with A comparator (like <, >, etc) used to sort the items in the
model.
:sort-order :ascending or :descending
:highlighters A list of highlighters
By default, ctrl/cmd-F is bound to the search function.
Examples:
See:
(seesaw.core/listbox)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L571)[raw docstring](#)
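The Examples section above is empty; here is a minimal, untested sketch of a sorted, striped list using the options described in the docstring. The fruit data is made up.
```
(require '[seesaw.core :as sc]
         '[seesaw.swingx :as sx])

;; Sorted JXList with row striping; ctrl/cmd-F opens the built-in search field.
(def lb (sx/listbox-x :model ["banana" "apple" "cherry"]
                      :sort-with compare
                      :sort-order :ascending
                      :highlighters [(sx/hl-simple-striping)]))
(sc/show! (sc/frame :content (sc/scrollable lb)))
```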
---
#### listbox-x-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L545)
---
#### p-andclj
```
(p-and & args)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L81)
---
#### p-built-inclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L50)
---
#### p-column-indexesclj
```
(p-column-indexes & indexes)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L101)
---
#### p-column-namesclj
```
(p-column-names & names)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L98)
---
#### p-depthsclj
```
(p-depths & depths)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L107)
---
#### p-eqclj
```
(p-eq value)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L95)
---
#### p-fnclj
```
(p-fn f)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L75)
---
#### p-notclj
```
(p-not p)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L89)
---
#### p-orclj
```
(p-or & args)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L85)
---
#### p-patternclj
```
(p-pattern pattern & {:keys [test-column highlight-column]})
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L110)
---
#### p-row-groupclj
```
(p-row-group lines-per-group)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L104)
---
#### p-typeclj
```
(p-type class)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L92)
---
#### panel-x-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L721)
---
#### remove-highlighterclj
```
(remove-highlighter target hl)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L201)
---
#### set-highlightersclj
```
(set-highlighters target hs)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L192)
---
#### table-xclj
```
(table-x & args)
```
Create a JXTable which is basically an improved (seesaw.core/table).
Additional capabilities include searching, sorting and highlighting.
Beyond table, has the following additional options:
:column-control-visible? Show column visibility control in upper right corner.
Defaults to true.
:column-margin Set margin between cells in pixels
:highlighters A list of highlighters
:horizontal-scroll-enabled? Allow horizontal scrollbars. Defaults to false.
By default, ctrl/cmd-F is bound to the search function.
Examples:
See:
(seesaw.core/table-options)
(seesaw.core/table)
```
Create a JXTable which is basically an improved (seesaw.core/table).
Additional capabilities include searching, sorting and highlighting.
Beyond table, has the following additional options:
:column-control-visible? Show column visibility control in upper right corner.
Defaults to true.
:column-margin Set margin between cells in pixels
:highlighters A list of highlighters
:horizontal-scroll-enabled? Allow horizontal scrollbars. Defaults to false.
By default, ctrl/cmd-F is bound to the search function.
Examples:
See:
(seesaw.core/table-options)
(seesaw.core/table)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L691)[raw docstring](#)
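The Examples section above is empty; here is a small, untested sketch combining table-x with (seesaw.table/table-model). The column spec and rows are invented sample data.
```
(require '[seesaw.core :as sc]
         '[seesaw.swingx :as sx]
         '[seesaw.table :refer [table-model]])

;; JXTable with row striping and the column-control button in its corner.
(def t (sx/table-x :model (table-model :columns [:name {:key :age :text "Age"}]
                                       :rows [{:name "Jim" :age 65}
                                              {:name "Doris" :age 75}])
                   :highlighters [(sx/hl-simple-striping)]))
(sc/show! (sc/frame :content (sc/scrollable t)))
```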
---
#### table-x-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L680)
---
#### task-paneclj
```
(task-pane & args)
```
Create a org.jdesktop.swingx.JXTaskPane which is a collapsible component with a title and icon. It is generally used as an item inside a task-pane-container. Supports the following additional options:
:resource Get icon and title from a resource
:icon The icon
:title The pane's title
:animated? True if collapse is animated
:collapsed? True if the pane should be collapsed
:scroll-on-expand? If true, when expanded, its container will scroll the pane into view
:special? If true, the pane will be displayed in a 'special' way depending on look and feel
The pane can be populated with the standard :items option, which just takes a sequence of widgets. Additionally, the :actions option takes a sequence of action objects and makes hyper-links out of them.
See:
(seesaw.swingx/task-pane-options)
(seesaw.swingx/task-pane-container)
```
Create a org.jdesktop.swingx.JXTaskPane which is a collapsible component with a title and icon. It is generally used as an item inside a task-pane-container. Supports the following additional options:
:resource Get icon and title from a resource
:icon The icon
:title The pane's title
:animated? True if collapse is animated
:collapsed? True if the pane should be collapsed
:scroll-on-expand? If true, when expanded, its container will scroll the pane into
view
:special? If true, the pane will be displayed in a 'special' way depending on
look and feel
The pane can be populated with the standard :items option, which just takes a sequence of widgets. Additionally, the :actions option takes a sequence of action objects and makes hyper-links out of them.
See:
(seesaw.swingx/task-pane-options)
(seesaw.swingx/task-pane-container)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L370)[raw docstring](#)
---
#### task-pane-containerclj
```
(task-pane-container & args)
```
Creates a container for task panes. Supports the following additional options:
:items Sequence of task-panes to display
Examples:
(task-pane-container
:items [(task-pane :title "First"
:actions [(action :name "HI")
(action :name "BYE")])
(task-pane :title "Second"
:actions [(action :name "HI")
(action :name "BYE")])
(task-pane :title "Third" :special? true :collapsed? true
:items [(button :text "YEP")])])
See:
(seesaw.swingx/task-pane-container-options)
(seesaw.swingx/task-pane)
```
Creates a container for task panes. Supports the following additional options:
:items Sequence of task-panes to display
Examples:
(task-pane-container
:items [(task-pane :title "First"
:actions [(action :name "HI")
(action :name "BYE")])
(task-pane :title "Second"
:actions [(action :name "HI")
(action :name "BYE")])
(task-pane :title "Third" :special? true :collapsed? true
:items [(button :text "YEP")])])
See:
(seesaw.swingx/task-pane-container-options)
(seesaw.swingx/task-pane)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L411)[raw docstring](#)
---
#### task-pane-container-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L398)
---
#### task-pane-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L347)
---
#### titled-panelclj
```
(titled-panel & args)
```
Creates a panel with a title and content. Has the following properties:
:content The content widget. Passed through (seesaw.core/to-widget)
:title The text of the title. May be a resource.
:title-color Text color. Passed through (seesaw.color/to-color). May be resource.
:left-decoration Decoration widget on left of title.
:right-decoration Decoration widget on right of title.
:resource Set :title and :title-color from a resource bundle
:painter Painter used on the title
Examples:
(titled-panel :title "Error"
:title-color :red
:content (label-x :wrap-lines? true
:text "An error occurred!"))
See:
(seesaw.core/listbox)
```
Creates a panel with a title and content. Has the following properties:
:content The content widget. Passed through (seesaw.core/to-widget)
:title The text of the title. May be a resource.
:title-color Text color. Passed through (seesaw.color/to-color). May
be resource.
:left-decoration Decoration widget on left of title.
:right-decoration Decoration widget on right of title.
:resource Set :title and :title-color from a resource bundle
:painter Painter used on the title
Examples:
(titled-panel :title "Error"
:title-color :red
:content (label-x :wrap-lines? true
:text "An error occurred!"))
See:
(seesaw.core/listbox)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L614)[raw docstring](#)
---
#### titled-panel-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L598)
---
#### to-highlighterclj
```
(to-highlighter v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L162)
---
#### tree-xclj
```
(tree-x & args)
```
Create a JXTree which is basically an improved (seesaw.core/tree).
Additional capabilities include searching, and highlighting.
Beyond tree, has the following additional options:
:highlighters A list of highlighters
By default, ctrl/cmd-F is bound to the search function.
Examples:
See:
(seesaw.core/tree-options)
(seesaw.core/tree)
```
Create a JXTree which is basically an improved (seesaw.core/tree).
Additional capabilities include searching, and highlighting.
Beyond tree, has the following additional options:
:highlighters A list of highlighters
By default, ctrl/cmd-F is bound to the search function.
Examples:
See:
(seesaw.core/tree-options)
(seesaw.core/tree)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L654)[raw docstring](#)
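The Examples section above is empty; a minimal, untested sketch that browses the current directory. The File-based model is only one way to build a TreeModel with (seesaw.tree/simple-tree-model).
```
(require '[seesaw.core :as sc]
         '[seesaw.swingx :as sx]
         '[seesaw.tree :refer [simple-tree-model]])

;; JXTree over the file system, with striping; ctrl/cmd-F opens the search field.
(def model (simple-tree-model
             #(.isDirectory %)          ; branch?
             #(vec (.listFiles %))      ; children
             (java.io.File. ".")))
(def t (sx/tree-x :model model
                  :highlighters [(sx/hl-simple-striping)]))
(sc/show! (sc/frame :content (sc/scrollable t)))
```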
---
#### tree-x-optionsclj
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L646)
---
#### vertical-panel-xclj
```
(vertical-panel-x & opts)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L746)
---
#### xyz-panel-xclj
```
(xyz-panel-x & opts)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/swingx.clj#L734)
seesaw.table
===
---
#### clear!clj
```
(clear! target)
```
Clear all rows from a table model or JTable.
Returns target.
```
Clear all rows from a table model or JTable.
Returns target.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/table.clj#L298)[raw docstring](#)
---
#### column-countclj
```
(column-count target)
```
Return number of columns in a table model or JTable.
```
Return number of columns in a table model or JTable.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/table.clj#L312)[raw docstring](#)
---
#### insert-at!clj
```
(insert-at! target row value)
```
```
(insert-at! target row value & more)
```
Inserts one or more rows into a table. The arguments are one or more row-index/value pairs where value is either a map or a vector with the right number of columns. Each row index indicates the position before which the new row will be inserted. All indices are relative to the starting state of the table, i.e. they shouldn't take into account any shifting of rows that takes place during the insert. The indices *must* be in ascending sorted order!!
Returns target.
Examples:
; Insert a row at the front of the table
(insert-at! 0 {:name "<NAME>" :likes "Cherry pie and coffee"})
; Insert two rows, one at the front, one before row 3
(insert-at! 0 {:name "<NAME>" :likes "Cherry pie and coffee"}
3 {:name "<NAME>" :likes "Norma"})
```
Inserts one or more rows into a table. The arguments are one or more row-index/value pairs where value is either a map or a vector with the right number of columns. Each row index indicates the position before which the new row will be inserted. All indices are relative to the starting state of the table, i.e. they shouldn't take into account any shifting of rows that takes place during the insert. The indices *must* be in ascending sorted order!!
Returns target.
Examples:
; Insert a row at the front of the table
(insert-at! 0 {:name "<NAME>" :likes "Cherry pie and coffee"})
; Insert two rows, one at the front, one before row 3
(insert-at! 0 {:name "<NAME>" :likes "Cherry pie and coffee"}
3 {:name "<NAME>" :likes "Norma"})
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/table.clj#L245)[raw docstring](#)
---
#### remove-at!clj
```
(remove-at! target row)
```
```
(remove-at! target row & more)
```
Remove one or more rows from a table or table model by index. Args are a list of row indices at the start of the operation. The indices *must* be in ascending sorted order!
Returns target.
Examples:
; Remove first row
(remove-at! t 0)
; Remove first and third row
(remove-at! t 0 3)
```
Remove one or more rows from a table or table model by index. Args are a list of row indices at the start of the operation. The indices *must* be in ascending sorted order!
Returns target.
Examples:
; Remove first row
(remove-at! t 0)
; Remove first and third row
(remove-at! t 0 3)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/table.clj#L276)[raw docstring](#)
---
#### row-countclj
```
(row-count target)
```
Return number of rows in a table model or JTable.
```
Return number of rows in a table model or JTable.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/table.clj#L307)[raw docstring](#)
---
#### table-modelclj
```
(table-model & {:keys [columns rows] :as opts})
```
Creates a TableModel from column and row data. Takes two options:
:columns - a list of keys, or maps. If a key, then (name key) is used as the column name. If a map, it can be in the form
{:key key :text text :class class} where key is used to index the row data, text (optional) is used as the column name, and class (optional) specifies the object class of the column data returned by getColumnClass. The order establishes the order of the columns in the table.
:rows - a sequence of maps or vectors, possibly mixed. If a map, must contain row data indexed by keys in :columns. Any additional keys will be remembered and retrievable with (value-at). If a vector, data is indexed by position in the vector.
Example:
(table-model :columns [:name
{:key :age :text "Age" :class java.lang.Integer}]
:rows [ ["Jim" 65]
{:age 75 :name "Doris"}])
This creates a two column table model with columns "name" and "Age"
and two rows.
See:
(seesaw.core/table)
<http://download.oracle.com/javase/6/docs/api/javax/swing/table/TableModel.html>
```
Creates a TableModel from column and row data. Takes two options:
:columns - a list of keys, or maps. If a key, then (name key) is used as the
column name. If a map, it can be in the form
{:key key :text text :class class} where key is used to index the
row data, text (optional) is used as the column name, and
class (optional) specifies the object class of the column data
returned by getColumnClass. The order establishes the order of the
columns in the table.
:rows - a sequence of maps or vectors, possibly mixed. If a map, must contain
row data indexed by keys in :columns. Any additional keys will
be remembered and retrievable with (value-at). If a vector, data
is indexed by position in the vector.
Example:
(table-model :columns [:name
{:key :age :text "Age" :class java.lang.Integer}]
:rows [ ["Jim" 65]
{:age 75 :name "Doris"}])
This creates a two column table model with columns "name" and "Age"
and two rows.
See:
(seesaw.core/table)
http://download.oracle.com/javase/6/docs/api/javax/swing/table/TableModel.html
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/table.clj#L104)[raw docstring](#)
---
#### update-at!clj
```
(update-at! target row value)
```
```
(update-at! target row value & more)
```
Update a row in a table model or JTable. Accepts an arbitrary number of row/value pairs where row is an integer row index and value is a map or vector of values just like the :rows property of (table-model).
Notes:
Any non-column keys, i.e. keys that weren't present in the original column spec when the table-model was constructed will be remembered and retrievable later with (value-at).
Examples:
; Given a table created with column keys :a and :b, update row 3 and 5
(update-at! t 3 ["Col0 Value" "Col1 Value"]
5 { :a "A value" :b "B value" })
See:
(seesaw.core/table)
(seesaw.table/table-model)
<http://download.oracle.com/javase/6/docs/api/javax/swing/table/TableModel.html>
```
Update a row in a table model or JTable. Accepts an arbitrary number of row/value pairs where row is an integer row index and value is a map or vector of values just like the :rows property of (table-model).
Notes:
Any non-column keys, i.e. keys that weren't present in the original column
spec when the table-model was constructed will be remembered and retrievable
later with (value-at).
Examples:
; Given a table created with column keys :a and :b, update row 3 and 5
(update-at! t 3 ["Col0 Value" "Col1 Value"]
5 { :a "A value" :b "B value" })
See:
(seesaw.core/table)
(seesaw.table/table-model)
http://download.oracle.com/javase/6/docs/api/javax/swing/table/TableModel.html
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/table.clj#L204)[raw docstring](#)
---
#### value-atclj
```
(value-at target rows)
```
Retrieve one or more rows from a table or table model. target is a JTable or TableModel.
rows is either a single integer row index, or a sequence of row indices. In the first case a single map of row values is returned. Otherwise, returns a sequence of maps.
If a row index is out of bounds, returns nil.
Notes:
If target was not created with (table-model), the returned map(s) are indexed by column name.
Any non-column keys passed to (update-at!) or the initial rows of (table-model)
are *remembered* and returned in the map.
Examples:
; Retrieve row 3
(value-at t 3)
; Retrieve rows 1, 3, and 5
(value-at t [1 3 5])
; Print values of selected rows
(listen t :selection
(fn [e]
(println (value-at t (selection t {:multi? true})))))
See:
(seesaw.core/table)
(seesaw.table/table-model)
<http://download.oracle.com/javase/6/docs/api/javax/swing/table/TableModel.html>
```
Retrieve one or more rows from a table or table model. target is a JTable or TableModel.
rows is either a single integer row index, or a sequence of row indices. In the first case a single map of row values is returned. Otherwise, returns a sequence of maps.
If a row index is out of bounds, returns nil.
Notes:
If target was not created with (table-model), the returned map(s) are indexed by column name.
Any non-column keys passed to (update-at!) or the initial rows of (table-model)
are *remembered* and returned in the map.
Examples:
; Retrieve row 3
(value-at t 3)
; Retrieve rows 1, 3, and 5
(value-at t [1 3 5])
; Print values of selected rows
(listen t :selection
(fn [e]
(println (value-at t (selection t {:multi? true})))))
See:
(seesaw.core/table)
(seesaw.table/table-model)
http://download.oracle.com/javase/6/docs/api/javax/swing/table/TableModel.html
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/table.clj#L164)[raw docstring](#)
seesaw.timer
===
---
#### timerclj
```
(timer f & {:keys [start? initial-value] :or {start? true} :as opts})
```
Creates a new Swing timer that periodically executes the single-argument function f. The argument is a "state" of the timer. Each time the function is called its previous return value is passed to it. Kind of like (reduce)
but spread out over time :) The following options are supported:
:initial-value The first value passed to the handler function. Defaults to nil.
:initial-delay Delay, in milliseconds, of first call. Defaults to 0.
:delay Delay, in milliseconds, between calls. Defaults to 1000.
:repeats? If true, the timer runs forever, otherwise, it's a
"one-shot" timer. Defaults to true.
:start? Whether to start the timer immediately. Defaults to true.
See <http://download.oracle.com/javase/6/docs/api/javax/swing/Timer.html>
```
Creates a new Swing timer that periodically executes the single-argument function f. The argument is a "state" of the timer. Each time the function is called its previous return value is passed to it. Kind of like (reduce)
but spread out over time :) The following options are supported:
:initial-value The first value passed to the handler function. Defaults to nil.
:initial-delay Delay, in milliseconds, of first call. Defaults to 0.
:delay Delay, in milliseconds, between calls. Defaults to 1000.
:repeats? If true, the timer runs forever, otherwise, it's a
"one-shot" timer. Defaults to true.
:start? Whether to start the timer immediately. Defaults to true.
See http://download.oracle.com/javase/6/docs/api/javax/swing/Timer.html
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/timer.clj#L28)[raw docstring](#)
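A small, untested sketch of the state-threading behaviour described above: the handler's return value becomes the argument of the next call. The label and counter are illustrative, and it is assumed that timer returns the underlying javax.swing.Timer so it can be stopped later.
```
(require '[seesaw.core :as sc]
         '[seesaw.timer :refer [timer]])

;; Tick once a second, feeding each call the previous return value.
(def lbl (sc/label "0"))
(def t (timer (fn [n]
                (sc/text! lbl (str n))
                (inc n))                 ; next state
              :initial-value 0
              :delay 1000))
(sc/show! (sc/frame :content lbl))
;; (.stop t) ; stop the underlying javax.swing.Timer when done
```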
seesaw.to-widget
===
---
#### ToWidgetcljprotocol
#### to-widget*clj
```
(to-widget* v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/to_widget.clj#L16)
seesaw.tree
===
---
#### node-changedclj
```
(node-changed tree-model node-path)
```
Fire a node changed event. parent-path is the path to the parent of the changed node. child is the changed node.
Fire this event if the appearance of a node has changed in any way.
See:
(seesaw.tree/nodes-changed)
(seesaw.tree/simple-tree-model)
```
Fire a node changed event. parent-path is the path to the parent of the changed node. child is the changed node.
Fire this event if the appearance of a node has changed in any way.
See:
(seesaw.tree/nodes-changed)
(seesaw.tree/simple-tree-model)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/tree.clj#L123)[raw docstring](#)
---
#### node-insertedclj
```
(node-inserted tree-model node-path)
```
Fire a node insertion event. parent-path is the path to the parent of the newly inserted child. child is the newly inserted node.
See:
(seesaw.tree/nodes-inserted)
(seesaw.tree/simple-tree-model)
```
Fire a node insertion event. parent-path is the path to the parent of the newly inserted child. child is the newly inserted node.
See:
(seesaw.tree/nodes-inserted)
(seesaw.tree/simple-tree-model)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/tree.clj#L93)[raw docstring](#)
---
#### node-removedclj
```
(node-removed tree-model parent-path index child)
```
Fire a node removed event on a tree model created with
(simple-tree-model). parent-path is the path to the parent node,
index is the index of the removed node and child is the removed node.
See:
(seesaw.tree/nodes-removed)
(seesaw.tree/simple-tree-model)
```
Fire a node removed event on a tree model created with
(simple-tree-model). parent-path is the path to the parent node,
index is the index of the removed node and child is the removed node.
See:
(seesaw.tree/nodes-removed)
(seesaw.tree/simple-tree-model)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/tree.clj#L60)[raw docstring](#)
---
#### node-structure-changedclj
```
(node-structure-changed tree-model node-path)
```
Fire a node structure changed event on a tree model created with
(simple-tree-model). node-path is the sequence of nodes from the model root to the node whose structure changed.
Call this when the entire structure under a node has changed.
See:
(seesaw.tree/simple-tree-model)
```
Fire a node structure changed event on a tree model created with
(simple-tree-model). node-path is the sequence of nodes from the model root to the node whose structure changed.
Call this when the entire structure under a node has changed.
See:
(seesaw.tree/simple-tree-model)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/tree.clj#L24)[raw docstring](#)
---
#### nodes-changedclj
```
(nodes-changed tree-model parent-path children)
```
Fire a node changed event. parent-path is the path to the parent of the changed children. children is the changed nodes.
Fire this event if the appearance of a node has changed in any way.
See:
(seesaw.tree/node-changed)
(seesaw.tree/simple-tree-model)
```
Fire a node changed event. parent-path is the path to the parent of the changed children. children is the changed nodes.
Fire this event if the appearance of a node has changed in any way.
See:
(seesaw.tree/node-changed)
(seesaw.tree/simple-tree-model)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/tree.clj#L108)[raw docstring](#)
---
#### nodes-insertedclj
```
(nodes-inserted tree-model parent-path children)
```
Fire a node insertion event. parent-path is the path to the parent of the newly inserted children. children is the newly inserted nodes.
See:
(seesaw.tree/node-inserted)
(seesaw.tree/simple-tree-model)
```
Fire a node insertion event. parent-path is the path to the parent of the newly inserted children. children is the newly inserted nodes.
See:
(seesaw.tree/node-inserted)
(seesaw.tree/simple-tree-model)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/tree.clj#L80)[raw docstring](#)
---
#### nodes-removedclj
```
(nodes-removed tree-model parent-path indices children)
```
Fire a node removed event on a tree model created with
(simple-tree-model). parent-path is the path to the parent node,
indices is a seq of the indices of the removed nodes and children is a seq of the removed nodes.
See:
(seesaw.tree/simple-tree-model)
(seesaw.tree/node-removed)
```
Fire a node removed event on a tree model created with
(simple-tree-model). parent-path is the path to the parent node,
indices is a seq of the indices of the removed nodes and children is a seq of the removed nodes.
See:
(seesaw.tree/simple-tree-model)
(seesaw.tree/node-removed)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/tree.clj#L42)[raw docstring](#)
---
#### simple-tree-modelclj
```
(simple-tree-model branch? children root)
```
Create a simple, read-only TreeModel for use with seesaw.core/tree.
The arguments are the same as clojure.core/tree-seq. Changes to the underlying model can be reported with the various node-xxx event functions in seesaw.tree.
See:
<http://docs.oracle.com/javase/6/docs/api/javax/swing/tree/TreeModel.html>
```
Create a simple, read-only TreeModel for use with seesaw.core/tree.
The arguments are the same as clojure.core/tree-seq. Changes to the underlying model can be reported with the various node-xxx event functions in seesaw.tree.
See:
http://docs.oracle.com/javase/6/docs/api/javax/swing/tree/TreeModel.html
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/tree.clj#L140)[raw docstring](#)
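A minimal, untested sketch of the tree-seq-style arguments: branch?, children, and a root. The vector-of-[label & children] encoding is just one convenient choice for illustration.
```
(require '[seesaw.core :as sc]
         '[seesaw.tree :refer [simple-tree-model]])

;; Branch nodes are vectors like ["label" child ...]; leaves are plain strings.
(def model (simple-tree-model
             vector?   ; branch?  - same contract as clojure.core/tree-seq
             rest      ; children
             ["root" ["a" "a1" "a2"] ["b" "b1"]]))
(sc/show! (sc/frame :content (sc/scrollable (sc/tree :model model))))
```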
---
#### TreeModelEventSourcecljprotocol
#### fire-event*clj
```
(fire-event* this event-type event)
```
Dispatches a TreeModelEvent to all model listeners. event-type is one of
:tree-nodes-changed, :tree-nodes-inserted, :tree-nodes-removed or
:tree-structure-changed. Note, do not use this function directly.
Instead use one of the helper functions in (seesaw.tree).
```
Dispatches a TreeModelEvent to all model listeners. event-type is one of
:tree-nodes-changed, :tree-nodes-inserted, :tree-nodes-removed or
:tree-structure-changed. Note, do not use this function directly.
Instead use one of the helper functions in (seesaw.tree).
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/tree.clj#L13)
seesaw.util
===
---
#### atom?clj
```
(atom? a)
```
Return true if a is an atom
```
Return true if a is an atom
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L102)[raw docstring](#)
---
#### boolean?clj
```
(boolean? b)
```
Return true if b is exactly true or false. Useful for handling optional boolean properties where we want to do nothing if the property isn't provided.
```
Return true if b is exactly true or false. Useful for handling optional boolean properties where we want to do nothing if the property isn't provided.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L95)[raw docstring](#)
---
#### camelizeclj
```
(camelize s)
```
Convert input string to camelCase from hyphen-case
```
Convert input string to camelCase from hyphen-case
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L90)[raw docstring](#)
---
#### check-argsclj
```
(check-args condition message)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L22)
---
#### Childrencljprotocol
A protocol for retrieving the children of a widget as a seq.
This takes care of idiosyncrasies of frame vs. menus, etc.
```
A protocol for retrieving the children of a widget as a seq.
This takes care of idiosyncrasies of frame vs. menus, etc.
```
#### childrenclj
```
(children c)
```
Returns a seq of the children of the given widget
```
Returns a seq of the children of the given widget
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L151)[raw docstring](#)
---
#### collectclj
```
(collect root)
```
Given a root widget or frame, returns a depth-first seq of all the widgets in the hierarchy. For example to disable everything:
(config (collect (.getContentPane my-frame)) :enabled? false)
```
Given a root widget or frame, returns a depth-first seq of all the widgets in the hierarchy. For example to disable everything:
(config (collect (.getContentPane my-frame)) :enabled? false)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L165)[raw docstring](#)
---
#### cond-dotocljmacro
```
(cond-doto x & forms)
```
Spawn of (cond) and (doto). Works like (doto), but each form has a condition which controls whether it is executed. Returns x.
(doto (new java.util.HashMap)
true (.put "a" 1)
(< 2 1) (.put "b" 2))
Here, only (.put "a" 1) is executed.
```
Spawn of (cond) and (doto). Works like (doto), but each form has a condition
which controls whether it is executed. Returns x.
(doto (new java.util.HashMap)
true (.put "a" 1)
(< 2 1) (.put "b" 2))
Here, only (.put "a" 1) is executed.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L34)[raw docstring](#)
---
#### constant-mapclj
```
(constant-map klass & fields)
```
Given a class and a list of keywordized constant names returns the values of those fields in a map. The name mapping upper-cases and replaces hyphens with underscore, e.g.
:above-baseline --> ABOVE_BASELINE
Note that the fields must be static and declared *in* the class, not a supertype.
```
Given a class and a list of keywordized constant names returns the values of those fields in a map. The name mapping upper-cases and replaces hyphens with underscore, e.g.
:above-baseline --> ABOVE_BASELINE
Note that the fields must be static and declared *in* the class, not a supertype.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L67)[raw docstring](#)
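A small, untested sketch using javax.swing.SwingConstants, whose TOP/BOTTOM/CENTER fields are declared directly in that interface as the docstring requires; the exact integer values are whatever those static fields hold.
```
(require '[seesaw.util :refer [constant-map]])

;; Returns a map from each keyword to the corresponding static field value,
;; e.g. :top -> SwingConstants/TOP.
(constant-map javax.swing.SwingConstants :top :bottom :center)
```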
---
#### illegal-argumentclj
```
(illegal-argument fmt & args)
```
Throw an illegal argument exception formatted as with (clojure.core/format)
```
Throw an illegal argument exception formatted as with (clojure.core/format)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L17)[raw docstring](#)
---
#### resourceclj
```
(resource message)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L182)
---
#### resource-key?clj
```
(resource-key? v)
```
Returns true if v is an i18n resource key, i.e. a namespaced keyword
```
Returns true if v is an i18n resource key, i.e. a namespaced keyword
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L177)[raw docstring](#)
---
#### root-causeclj
```
(root-cause e)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L28)
---
#### to-dimensionclj
```
(to-dimension v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L134)
---
#### to-insetsclj
```
(to-insets v)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L142)
---
#### to-mnemonic-keycodeclj
```
(to-mnemonic-keycode v)
```
Convert a character or integer to a mnemonic keycode. In the case of char input, generates the correct keycode even if it's lower case. Input argument can be:
* i18n resource keyword - only first char is used
* string - only first char is used
* char - lower or upper case
* int - key event code
See:
java.awt.event.KeyEvent for list of keycodes
<http://download.oracle.com/javase/6/docs/api/java/awt/event/KeyEvent.html>
```
Convert a character or integer to a mnemonic keycode. In the case of char input, generates the correct keycode even if it's lower case. Input argument can be:
* i18n resource keyword - only first char is used
* string - only first char is used
* char - lower or upper case
* int - key event code
See:
java.awt.event.KeyEvent for list of keycodes
http://download.oracle.com/javase/6/docs/api/java/awt/event/KeyEvent.html
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L188)[raw docstring](#)
---
#### to-seqclj
```
(to-seq v)
```
Stupid helper to turn possibly single values into seqs
```
Stupid helper to turn possibly single values into seqs
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L54)[raw docstring](#)
---
#### to-uriclj
```
(to-uri s)
```
Try to make a java.net.URI from s
```
Try to make a java.net.URI from s
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L124)[raw docstring](#)
---
#### to-urlclj
```
(to-url s)
```
Try to parse (str s) as a URL. Returns new java.net.URL on success, nil otherwise. This is different from clojure.java.io/as-url in that it doesn't throw an exception and it uses (str) on the input.
```
Try to parse (str s) as a URL. Returns new java.net.URL on success, nil otherwise. This is different from clojure.java.io/as-url in that it doesn't throw an exception and it uses (str) on the input.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L114)[raw docstring](#)
---
#### try-castclj
```
(try-cast c x)
```
Just like clojure.core/cast, but returns nil on failure rather than throwing ClassCastException
```
Just like clojure.core/cast, but returns nil on failure rather than throwing ClassCastException
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/util.clj#L107)[raw docstring](#)
seesaw.value
===
Functions for dealing with widget value. Prefer (seesaw.core/value).
```
Functions for dealing with widget value. Prefer (seesaw.core/value).
```
[raw docstring](#)
---
#### Valuecljprotocol
#### container?*clj
```
(container?* this)
```
#### value!*clj
```
(value!* this v)
```
#### value*clj
```
(value* this)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/value.clj#L19)
seesaw.widget-options
===
Functions and protocol for dealing with widget options.
```
Functions and protocol for dealing with widget options.
```
[raw docstring](#)
---
#### widget-option-providercljmacro
```
(widget-option-provider class options & [nil-layout-options])
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/widget_options.clj#L30)
---
#### WidgetOptionProvidercljprotocol
#### get-layout-option-map*clj
```
(get-layout-option-map* this)
```
#### get-widget-option-map*clj
```
(get-widget-option-map* this)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/widget_options.clj#L16)
seesaw.widgets.log-window
===
---
#### log-windowclj
```
(log-window & opts)
```
An auto-scrolling log window.
The returned widget implements the LogWindow protocol with which you can clear it, or append messages. It is thread-safe,
i.e. messages logged from multiple threads won't be interleaved.
It must be wrapped in (seesaw.core/scrollable) for scrolling.
Includes a context menu with options for clearing the window and scroll lock.
Returns a sub-class of javax.swing.JTextArea so any of the options that apply to multi-line (seesaw.core/text) apply. Also supports the following additional options:
:limit Maximum number of chars to keep in the log. When this limit is reached, chars will be removed from the beginning.
:auto-scroll? Whether the window should auto-scroll. This is the programmatic hook for the context menu entry.
See:
(seesaw.core/text)
```
An auto-scrolling log window.
The returned widget implements the LogWindow protocol with
which you can clear it, or append messages. It is thread-safe,
i.e. messages logged from multiple threads won't be interleaved.
It must be wrapped in (seesaw.core/scrollable) for scrolling.
Includes a context menu with options for clearing the window and scroll lock.
Returns a sub-class of javax.swing.JTextArea so any of the options that apply to multi-line (seesaw.core/text) apply. Also supports the following additional options:
:limit Maximum number of chars to keep in the log. When this limit
is reached, chars will be removed from the beginning.
:auto-scroll? Whether the window should auto-scroll. This is the
programmatic hook for the context menu entry.
See:
(seesaw.core/text)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/widgets/log_window.clj#L38)[raw docstring](#)
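A short, untested sketch of the options and protocol described above; the :limit value and log messages are arbitrary.
```
(require '[seesaw.core :as sc]
         '[seesaw.widgets.log-window :as lw])

;; The log window must be wrapped in (scrollable) for scrolling to work.
(def log-win (lw/log-window :limit 10000))
(sc/show! (sc/frame :title "Log" :content (sc/scrollable log-win)))
(lw/log  log-win "Starting up...\n")
(lw/logf log-win "Processed %d items\n" 42)
```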
---
#### logfclj
```
(logf this fmt & args)
```
Log a formatted message to the given log-window.
```
Log a formatted message to the given log-window.
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/widgets/log_window.clj#L33)[raw docstring](#)
---
#### LogWindowcljprotocol
#### clearclj
```
(clear this)
```
Clear the contents of the log-window
```
Clear the contents of the log-window
```
#### logclj
```
(log this message)
```
Log a message to the given log-window
```
Log a message to the given log-window
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/widgets/log_window.clj#L29)
seesaw.widgets.rounded-label
===
Function to create a label with a rounded border and background.
```
Function to create a label with a rounded border and background.
```
[raw docstring](#)
---
#### rounded-labelclj
```
(rounded-label & opts)
```
Create a label whose background is a rounded rectangle
Supports all the same options as (seesaw.core/label).
See:
(seesaw.core/label)
```
Create a label whose background is a rounded rectangle
Supports all the same options as (seesaw.core/label).
See:
(seesaw.core/label)
```
[source](https://github.com/daveray/seesaw/blob/1.5.0/src/seesaw/widgets/rounded_label.clj#L31)[raw docstring](#)
ggBubbles | cran | R | Package ‘ggBubbles’
October 13, 2022
Type Package
Title Mini Bubble Plots for Comparison of Discrete Data with 'ggplot2'
Version 0.1.4
VignetteBuilder knitr
Depends R (>= 3.5.0)
Imports dplyr, ggplot2
Suggests BiocStyle, knitr, rmarkdown, tibble
Description When comparing discrete data, mini bubble plots allow displaying
more information than traditional bubble plots via colour, shape or labels.
Exact overlapping coordinates will be transformed so they surround the
original point circularly without overlapping. This is implemented as a
position_surround() function for 'ggplot2'.
License LGPL (>= 3)
Encoding UTF-8
LazyData true
biocViews
RoxygenNote 6.1.1
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0001-7697-7000>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2019-09-04 08:20:06 UTC
R topics documented:
calc_offset
get_offset_table
MusicianInterests
MusicianInterestsSmall
PositionSurround
position_surround
calc_offset Calculate offsets for a specific point, in a layer, position
Description
Each side has several layers, with a number of positions in each layer.
Usage
calc_offset(position, layer, side, offset_x = 0.1, offset_y = 0.1)
Arguments
position number for position at the particular side on the layer
layer number of layer
side side for offset 1 - top 2 - right 3 - bottom 4 - left
offset_x offset for x axis
offset_y offset for y axis
Value
Integer vector of length 2: position 1 is the new x value, position 2 is the new y value.
get_offset_table Calculates offset table for number of maximum overlapping positions
Description
Calculates offset table for number of maximum overlapping positions
Usage
get_offset_table(max_positions, offset_x, offset_y)
Arguments
max_positions number of maximal exact overlaps
offset_x offset for position distance
offset_y offset for in-between layer distance
Value
data frame with position, offsets_x and offsets_y
MusicianInterests Survey about genre interests of some hobby musicians
Description
Tibble of what genre they are interested in, what instrument they play and what level they play their
instrument at (1 = beginner, 2 = intermediate, 3 = experienced, 4 = very experienced, 5 = pro). Also
there is an ID for the musician.
Usage
data(MusicianInterests)
Format
An object of class "data.frame";
Examples
library(ggBubbles)
data(MusicianInterests)
head(MusicianInterests)
MusicianInterestsSmall
Small test data of musician, interest and experience study
Description
Data.frame of what genre they are interested in, what instrument they play and what level they play
their instrument at.
Usage
data(MusicianInterestsSmall)
Format
An object of class "data.frame";
Examples
library(ggBubbles)
data(MusicianInterestsSmall)
head(MusicianInterestsSmall)
PositionSurround ggproto for position_surround()
Description
ggproto for position_surround()
position_surround Surrounds exact overlapping points around the center
Description
Bubble plots sometimes can be hard to interpret, especially if you want to overlay an additional
feature. Instead of having to colour one blob, with this function you can plot the individuals
contributing to the bubble and colour them accordingly.
Usage
position_surround(offset = 0.1)
Arguments
offset setting offset for x and y axis added to the points surrounding the exact position.
Default is 0.1
Value
ggproto
Examples
library(ggplot2)
library(ggBubbles)
data(MusicianInterestsSmall)
ggplot(data = MusicianInterestsSmall, aes(x = Instrument, y = Genre, col = Level)) +
geom_point(position = position_surround(), size = 4) +
scale_colour_manual(values = c("#333333", "#666666", "#999999", "#CCCCCC")) + theme_bw()
deep_clean | hex | Erlang | deep_clean v0.1.1
API Reference
===
Modules
---
[DeepClean](DeepClean.html)
Provides functionality to remove elements from nested
[`Map`](https://hexdocs.pm/elixir/Map.html) or [`List`](https://hexdocs.pm/elixir/List.html) elements
deep_clean v0.1.1
DeepClean
===
Provides functionality to remove elements from nested
[`Map`](https://hexdocs.pm/elixir/Map.html) or [`List`](https://hexdocs.pm/elixir/List.html) elements.
Utility to remove JSON attributes in responses
Summary
===
[Functions](#functions)
---
[exclude_in(deep_elem, clean_list)](#exclude_in/2)
Cleans nested maps elements provided in a list
Functions
===
exclude_in(deep_elem, clean_list)
```
exclude_in(map | list, [[String.t](https://hexdocs.pm/elixir/String.html#t:t/0), ...]) :: map | list
```
Cleans nested maps elements provided in a list.
Examples
---
```
iex> DeepClean.exclude_in(%{a: %{aa: 1, ab: 2}, b: %{ba: 3, bb: 4}}, ["a.ab", "b.bb"])
%{a: %{aa: 1}, b: %{ba: 3}}
iex> DeepClean.exclude_in(%{a: [%{aa: 1, ab: 2}, %{aa: 11, ab: 22},], b: [%{ba: 3, bb: 4}, %{ba: 33, bb: 44}]}, ["a.ab", "b.bb"])
%{a: [%{aa: 1}, %{aa: 11}], b: [%{ba: 3}, %{ba: 33}]}
```
mscstexta4r | cran | R | Package ‘mscstexta4r’
October 13, 2022
Type Package
Title R Client for the Microsoft Cognitive Services Text Analytics
REST API
Version 0.1.2
Maintainer <NAME> <<EMAIL>>
Description R Client for the Microsoft Cognitive Services Text Analytics
REST API, including Sentiment Analysis, Topic Detection, Language Detection,
and Key Phrase Extraction. An account MUST be registered at the Microsoft
Cognitive Services website <https://www.microsoft.com/cognitive-services/>
in order to obtain a (free) API key. Without an API key, this package will
not work properly.
License MIT + file LICENSE
URL https://github.com/philferriere/mscstexta4r
BugReports http://www.github.com/philferriere/mscstexta4r/issues
VignetteBuilder knitr
Imports methods, httr, jsonlite, pander, stringi, dplyr, utils
Suggests knitr, rmarkdown, testthat, mscsweblm4r
SystemRequirements A valid account MUST be registered with Microsoft's
Cognitive Services website
<https://www.microsoft.com/cognitive-services/> in order to
obtain a (free) API key. Without an API key, this package will
not work properly.
NeedsCompilation no
RoxygenNote 5.0.1
Author <NAME> [aut, cre]
Repository CRAN
Date/Publication 2016-06-23 00:52:35
R topics documented:
mscstexta4r
texta
textaDetectLanguages
textaDetectTopics
textaDetectTopicsStatus
textaInit
textaKeyPhrases
textaSentiment
textatopics
mscstexta4r R Client for the Microsoft Cognitive Services Text Analytics REST API
Description
mscstexta4r is a client/wrapper/interface for the Microsoft Cognitive Services (MSCS) Text Analytics
(Text Analytics) REST API. To use this package, you MUST have a valid account with
https://www.microsoft.com/cognitive-services. Once you have an account, Microsoft will
provide you with a (free) API key you can use with this package.
The MSCS Text Analytics REST API
Microsoft Cognitive Services – formerly known as Project Oxford – are a set of APIs, SDKs and
services that developers can use to add AI features to their apps. Those features include emotion
and video detection; facial, speech and vision recognition; as well as speech and NLP.
The Text Analytics REST API provides tools for NLP and is documented at
https://www.microsoft.com/cognitive-services/en-us/text-analytics/documentation. This API supports
the following operations:
• Sentiment analysis - Is a sentence or document generally positive or negative?
• Topic detection - What’s being discussed across a list of documents/reviews/articles?
• Language detection - What language is a document written in?
• Key talking points extraction - What’s being discussed in a single document?
mscstexta4r Functions
The following mscstexta4r core functions are used to wrap the MSCS Text Analytics REST API:
• Sentiment analysis - textaSentiment function
• Topic detection - textaDetectTopics and textaDetectTopicsStatus functions
• Language detection - textaDetectLanguages function
• Extraction of key talking points - textaKeyPhrases function
The textaInit configuration function is used to set the REST API URL and the private API key.
It needs to be called only once, after package load, or the core functions will not work properly.
Prerequisites
To use the mscstexta4r R package, you MUST have a valid account with Microsoft Cognitive
Services (see https://www.microsoft.com/cognitive-services/en-us/pricing for details).
Once you have an account, Microsoft will provide you with an API key listed under your subscriptions.
After you've configured mscstexta4r with your API key (as explained in the next section),
you will be able to call the Text Analytics REST API from R, up to your maximum number of
transactions per month and per minute.
Package Loading and Configuration
After loading the mscstexta4r package with the library() function, you must call the textaInit
before you can call any of the core mscstexta4r functions.
The textaInit configuration function will first check to see if the variable MSCS_TEXTANALYTICS_CONFIG_FILE
exists in the system environment. If it does, the package will use that as the path to the configuration
file.
If MSCS_TEXTANALYTICS_CONFIG_FILE doesn’t exist, it will look for the file .mscskeys.json
in the current user’s home directory (that’s ~/.mscskeys.json on Linux, and something like
C:/Users/Phil/Documents/.mscskeys.json on Windows). If the file is found, the package will
load the API key and URL from it.
If using a file, please make sure it has the following structure:
{
"textanalyticsurl": "https://westus.api.cognitive.microsoft.com/texta/analytics/v2.0/",
"textanalyticskey": "...MSCS Text Analytics API key goes here..."
}
If no configuration file is found, textaInit will attempt to pick up its configuration information
from two Sys env variables instead:
MSCS_TEXTANALYTICS_URL - the URL for the Text Analytics REST API.
MSCS_TEXTANALYTICS_KEY - your personal Text Analytics REST API key.
Synchronous vs Asynchronous Execution
All but ONE of the core text analytics functions execute exclusively in synchronous mode: textaDetectTopics
is the only function that can be executed either synchronously or asynchronously. Why? Because
topic detection is typically a "batch" operation meant to be performed on thousands of related
documents (product reviews, research articles, etc.).
What’s the difference?
When textaDetectTopics executes synchronously, you must wait for it to finish before you can
move on to the next task. When textaDetectTopics executes asynchronously, you can move
on to something else before topic detection has completed. In the latter case, you will need to
call textaDetectTopicsStatus periodically yourself until the Microsoft Cognitive Services servers
complete topic detection and results become available.
When to run which mode?
If you’re performing topic detection in batch mode (from an R script), we recommend using the
textaDetectTopics function in synchronous mode, in which case it will return only after topic
detection has completed.
IMPORTANT NOTE: If you’re calling textaDetectTopics in synchronous mode within the
R console REPL (interactive mode), it will appear as if the console has hanged. This is EX-
PECTED. The function hasn’t crashed. It is simply in "sleep mode", activating itself period-
ically and then going back to sleep, until the results have become available. In sleep mode,
even though it appears "stuck", textaDetectTopics doesn’t use any CPU resources. While
the function is operating in sleep mode, you WILL NOT be able to use the console before
the function completes. If you need to operate the console while topic detection is being per-
formed by the Microsoft Cognitive services servers, you should call textaDetectTopics in
asynchronous mode and then call textaDetectTopicsStatus yourself repeteadly afterwards,
until results are available.
S3 Objects of the Classes texta and textatopics
The sentiment analysis, language detection, and key talking points extraction functions of the
mscstexta4r package return S3 objects of the class texta. The texta object exposes results collected
in a single dataframe, the REST API JSON response, and the original HTTP request.
The function textaDetectTopics returns an S3 object of the class textatopics. The textatopics
object exposes formatted results using several dataframes (documents and their IDs, topics and their
IDs, which topics are assigned to which documents), the REST API JSON response (should you
care), and the HTTP request (mostly for debugging purposes).
Error Handling
The MSCS Text Analytics API is a REST API. HTTP requests over a network and the Internet
can fail: because of congestion, because the web site is down for maintenance, because of firewall
configuration issues, etc. There are many possible points of failure.
The API can also fail if you’ve exhausted your call volume quota or are exceeding the API calls
rate limit. Unfortunately, MSCS does not expose an API you can query to check, for instance,
whether you’re about to exceed your quota. The only way you’ll know for sure is by looking at the
error code returned after an API call has failed.
To help with error handling, we recommend the systematic use of tryCatch() when calling
mscstexta4r’s core functions. Its mechanism may appear a bit daunting at first, but it is well
documented at http://www.inside-r.org/r-doc/base/signalCondition. We use it in many of the code
examples.
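A minimal sketch of that pattern, assuming textaInit() has already been called (textaSentiment is
used here purely for illustration; any core function can be wrapped the same way):

docsSentiment <- tryCatch({
  textaSentiment(documents = c("Loved the food, service and atmosphere!"))
}, error = function(err) {
  # Inspect the error message (e.g., quota or rate-limit failures) and react accordingly
  geterrmessage()
})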
Author(s)
<NAME> <<EMAIL>>
texta The texta object
Description
The texta object exposes formatted results, the REST API JSON response, and the HTTP request:
• results the results in data.frame format
• json the REST API JSON response
• request the HTTP request
Author(s)
<NAME> <<EMAIL>>
See Also
Other res: textatopics
textaDetectLanguages Detects the languages used in documents.
Description
This function returns the language detected in a sentence or documents, along with a confidence
score between 0 and 1. A score equal to 1 indicates 100% certainty.
Internally, this function invokes the Microsoft Cognitive Services Text Analytics REST API documented
at https://www.microsoft.com/cognitive-services/en-us/text-analytics/documentation.
You MUST have a valid Microsoft Cognitive Services account and an API key for this function
to work properly. See https://www.microsoft.com/cognitive-services/en-us/pricing for
details.
Usage
textaDetectLanguages(documents, numberOfLanguagesToDetect = 1L)
Arguments
documents (character vector) Vector of sentences or documents on which to perform lan-
guage detection.
numberOfLanguagesToDetect
(integer) Number of languages to detect. Set to 1 by default. Use a higher value
if individual documents contain a mix of languages.
Value
An S3 object of the class texta. The results are stored in the results dataframe inside this object.
The dataframe contains the original sentences or documents, the name of the detected language,
the ISO 639-1 code of the detected language, and a confidence score. If an error occurred during
processing, the dataframe will also have an error column that describes the error.
Author(s)
<NAME> <<EMAIL>>
Examples
## Not run:
docsText <- c(
"The Louvre or the Louvre Museum is the world's largest museum.",
"Le musee du Louvre est un musee d'art et d'antiquites situe au centre de Paris.",
"El Museo del Louvre es el museo nacional de Francia.",
"Il Museo del Louvre a Parigi, in Francia, e uno dei piu celebri musei del mondo.",
"Der Louvre ist ein Museum in Paris."
)
tryCatch({
# Detect languages used in documents
docsLanguage <- textaDetectLanguages(
documents = docsText, # Input sentences or documents
numberOfLanguagesToDetect = 1L # Number of languages to detect
)
# Class and structure of docsLanguage
class(docsLanguage)
#> [1] "texta"
str(docsLanguage, max.level = 1)
#> List of 3
#> $ results:'data.frame': 5 obs. of 4 variables:
#> $ json : chr "{\"documents\":[{\"id\":\"B6e4C\",\"detectedLanguages\": __truncated__ }]}
#> $ request:List of 7
#> ..- attr(*, "class")= chr "request"
#> - attr(*, "class")= chr "texta"
# Print results
docsLanguage
#> texta [https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/lan __truncated__ ]
#>
#> -----------------------------------------------------------
#> text name iso6391Name score
#> ----------------------------- ------- ------------- -------
#> The Louvre or the Louvre English en 1
#> Museum is the world's largest
#> museum.
#>
#> Le musee du Louvre est un French fr 1
#> musee d'art et d'antiquites
#> situe au centre de Paris.
#>
#> El Museo del Louvre es el Spanish es 1
#> museo nacional de Francia.
#>
#> Il Museo del Louvre a Parigi, Italian it 1
#> in Francia, e uno dei piu
#> celebri musei del mondo.
#>
#> Der Louvre ist ein Museum in German de 1
#> Paris.
#> -----------------------------------------------------------
}, error = function(err) {
# Print error
geterrmessage()
})
## End(Not run)
textaDetectTopics Detects the top topics in a group of text documents.
Description
This function returns the top detected topics for a list of submitted text documents. A topic is
identified with a key phrase, which can be one or more related words. At least 100 text documents
must be submitted; however, this API is designed to detect topics across hundreds to thousands of
documents. For best performance, limit each document to a short, human-written text paragraph
such as a review, conversation, or user feedback.
English is the only language supported at this time.
You can provide a list of stop words to control which words or documents are filtered out. You can
also supply a list of topics to exclude from the response. Finally, you can also provide min/max
word frequency count thresholds to exclude rare/ubiquitous document topics.
We recommend using the textaDetectTopics function in synchronous mode, in which case it will
return only after topic detection has completed. If you decide to call this function in asynchronous
mode, you will need to call the textaDetectTopicsStatus function periodically yourself until the
Microsoft Cognitive Services servers complete topic detection and results become available.
IMPORTANT NOTE: If you’re calling textaDetectTopics in synchronous mode within the
R console REPL (interactive mode), it will appear as if the console has hung. This is
EXPECTED. The function hasn’t crashed. It is simply in "sleep mode", activating itself
periodically and then going back to sleep, until the results have become available. In sleep
mode, even though it appears "stuck", textaDetectTopics doesn’t use any CPU resources.
While the function is operating in sleep mode, you WILL NOT be able to use the console
until the function completes. If you need to operate the console while topic detection is being
performed by the Microsoft Cognitive Services servers, you should call textaDetectTopics in
asynchronous mode and then call textaDetectTopicsStatus yourself repeatedly afterwards,
until results are available.
Note that one transaction is charged per text document submitted.
Internally, this function invokes the Microsoft Cognitive Services Text Analytics REST API documented
at https://www.microsoft.com/cognitive-services/en-us/text-analytics/documentation.
You MUST have a valid Microsoft Cognitive Services account and an API key for this function
to work properly. See https://www.microsoft.com/cognitive-services/en-us/pricing for
details.
Usage
textaDetectTopics(documents, stopWords = NULL, topicsToExclude = NULL,
minDocumentsPerWord = NULL, maxDocumentsPerWord = NULL,
resultsPollInterval = 30L, resultsTimeout = 1200L, verbose = FALSE)
Arguments
documents (character vector) Vector of sentences or documents on which to perform topic
detection. At least 100 text documents must be submitted. English is the only
language supported at this time.
stopWords (character vector) Vector of stop words to ignore while performing topic detec-
tion (optional)
topicsToExclude
(character vector) Vector of topics to exclude from the response (optional)
minDocumentsPerWord
(integer) Words that occur in fewer than this many documents are ignored. Use
this parameter to help exclude rare document topics. Omit to let the service
choose an appropriate value. (optional)
maxDocumentsPerWord
(integer) Words that occur in more than this many documents are ignored. Use
this parameter to help exclude ubiquitous document topics. Omit to let the
service choose an appropriate value. (optional)
resultsPollInterval
(integer) Interval (in seconds) at which this function will query the Microsoft
Cognitive Services servers for results (optional, default: 30L). If set to 0L, this
function will return immediately and you will have to call textaDetectTopicsStatus
periodically to collect results. If set to a non-zero integer value, this function will
only return after all results have been collected. It does so by repeatedly calling
textaDetectTopicsStatus on its own until topic detection has completed. In
the latter case, you do not need to call textaDetectTopicsStatus.
resultsTimeout (integer) Interval (in seconds) after which this function will give up and stop
querying the Microsoft Cognitive Services servers for results (optional, default:
1200L). As soon as all results are available, this function will return them to
the caller. If the Microsoft Cognitive Services servers have not completed topic
detection within resultsTimeout seconds, this function will stop polling the
servers and return the most current results.
verbose (logical) If set to TRUE, print every poll status to stdout.
Value
An S3 object of the class textatopics. The results are stored in the results dataframes inside this
object. See textatopics for details. In the synchronous case (i.e., the function only returns after
completion), the dataframes contain the documents, the topics, and which topics are assigned to
which documents. In the asynchronous case (i.e., the function returns immediately), the dataframes
contain the documents, their unique identifiers, their current operation status code, but they don’t
contain the topics yet, nor their assignments. To get the topics and their assignments, you must call
textaDetectTopicsStatus until the Microsoft Cognitive Services servers have completed topic detection.
Author(s)
<NAME> <<EMAIL>>
Examples
## Not run:
load("./data/yelpChineseRestaurantReviews.rda")
set.seed(1234)
documents <- sample(yelpChReviews$text, 1000)
tryCatch({
# Detect top topics in group of documents
topics <- textaDetectTopics(
documents, # At least 100 documents (English only)
stopWords = NULL, # Stop word list (optional)
topicsToExclude = NULL, # Topics to exclude (optional)
minDocumentsPerWord = NULL, # Threshold to exclude rare topics (optional)
maxDocumentsPerWord = NULL, # Threshold to exclude ubiquitous topics (optional)
resultsPollInterval = 30L, # Poll interval (in s, default:30s, use 0L for async)
resultsTimeout = 1200L, # Give up timeout (in s, default: 1200s = 20mn)
verbose = TRUE # If set to TRUE, print every poll status to stdout
)
# Class and structure of topics
class(topics)
#> [1] "textatopics"
str(topics, max.level = 1)
#> List of 8
#> $ status : chr "Succeeded"
#> $ operationId : chr "30334a3e1e28406a80566bb76ff04884"
#> $ operationType : chr "topics"
#> $ documents :'data.frame': 1000 obs. of 2 variables:
#> $ topics :'data.frame': 71 obs. of 3 variables:
#> $ topicAssignments:'data.frame': 502 obs. of 3 variables:
#> $ json : chr "{\"status\":\"Succeeded\",\"createdDateTime\": __truncated__ }
#> $ request :List of 7
#> ..- attr(*, "class")= chr "request"
#> - attr(*, "class")= chr "textatopics"
# Print results
topics
#> textatopics [https://westus.api.cognitive.microsoft.com/text/analytics/ __truncated__ ]
#> status: Succeeded
#> operationId: 30334a3e1e28406a80566bb76ff04884
#> operationType: topics
#> topics (first 20):
#> ------------------------
#> keyPhrase score
#> ---------------- -------
#> portions 35
#> noodle soup 30
#> vegetables 20
#> tofu 19
#> garlic 17
#> Eggplant 15
#> Pad 15
#> combo 13
#> Beef Noodle Soup 13
#> House 12
#> entree 12
#> wontons 12
#> Pei Wei 12
#> mongolian beef 11
#> crab 11
#> Panda 11
#> bean 10
#> dumplings 9
#> veggies 9
#> decor 9
#> ------------------------
}, error = function(err) {
# Print error
geterrmessage()
})
## End(Not run)
textaDetectTopicsStatus
Retrieves the status of a topic detection operation submitted for pro-
cessing.
Description
This function retrieves the status of an asynchronous topic detection operation previously submitted
for processing. If the operation has reached a ’Succeeded’ state, this function will also return the
results.
Internally, this function invokes the Microsoft Cognitive Services Text Analytics REST API documented
at https://www.microsoft.com/cognitive-services/en-us/text-analytics/documentation.
You MUST have a valid Microsoft Cognitive Services account and an API key for this function
to work properly. See https://www.microsoft.com/cognitive-services/en-us/pricing for
details.
Usage
textaDetectTopicsStatus(operation, verbose = FALSE)
Arguments
operation (textatopics) textatopics S3 object returned by the original call to textaDetectTopics.
verbose (logical) If set to TRUE, print poll status to stdout.
Value
An S3 object of the class textatopics with the results of the topic detection operation. See
textatopics for details.
Author(s)
<NAME> <<EMAIL>>
Examples
## Not run:
load("./data/yelpChineseRestaurantReviews.rda")
set.seed(1234)
documents <- sample(yelpChReviews$text, 1000)
tryCatch({
# Start async topic detection
operation <- textaDetectTopics(
documents, # At least 100 docs/sentences
stopWords = NULL, # Stop word list (optional)
topicsToExclude = NULL, # Topics to exclude (optional)
minDocumentsPerWord = NULL, # Threshold to exclude rare topics (optional)
maxDocumentsPerWord = NULL, # Threshold to exclude ubiquitous topics (optional)
resultsPollInterval = 0L # Poll interval (in s, default: 30s, use 0L for async)
)
# Poll the servers until the work completes or until we time out
resultsPollInterval <- 60L
resultsTimeout <- 1200L
startTime <- Sys.time()
endTime <- startTime + resultsTimeout
while (Sys.time() <= endTime) {
sleepTime <- startTime + resultsPollInterval - Sys.time()
if (sleepTime > 0)
Sys.sleep(sleepTime)
startTime <- Sys.time()
# Poll for results
topics <- textaDetectTopicsStatus(operation)
if (topics$status != "NotStarted" && topics$status != "Running")
break;
}
# Class and structure of topics
class(topics)
#> [1] "textatopics"
str(topics, max.level = 1)
#> List of 8
#> $ status : chr "Succeeded"
#> $ operationId : chr "30334a3e1e28406a80566bb76ff04884"
#> $ operationType : chr "topics"
#> $ documents :'data.frame': 1000 obs. of 2 variables:
#> $ topics :'data.frame': 71 obs. of 3 variables:
#> $ topicAssignments:'data.frame': 502 obs. of 3 variables:
#> $ json : chr "{\"status\":\"Succeeded\",\"createdDateTime\": __truncated__ }
#> $ request :List of 7
#> ..- attr(*, "class")= chr "request"
#> - attr(*, "class")= chr "textatopics"
# Print results
topics
#> textatopics [https://westus.api.cognitive.microsoft.com/text/analytics/ __truncated__ ]
#> status: Succeeded
#> operationId: 30334a3e1e28406a80566bb76ff04884
#> operationType: topics
#> topics (first 20):
#> ------------------------
#> keyPhrase score
#> ---------------- -------
#> portions 35
#> noodle soup 30
#> vegetables 20
#> tofu 19
#> garlic 17
#> Eggplant 15
#> Pad 15
#> combo 13
#> Beef Noodle Soup 13
#> House 12
#> entree 12
#> wontons 12
#> Pei Wei 12
#> mongolian beef 11
#> crab 11
#> Panda 11
#> bean 10
#> dumplings 9
#> veggies 9
#> decor 9
#> ------------------------
}, error = function(err) {
# Print error
geterrmessage()
})
## End(Not run)
textaInit Initializes the mscstexta4r package.
Description
This function initializes the Microsoft Cognitive Services Text Analytics REST API key and URL
by reading them either from a configuration file or environment variables.
This function MUST be called right after package load and before calling any mscstexta4r core
functions, or these functions will fail.
The textaInit configuration function will first check to see if the variable MSCS_TEXTANALYTICS_CONFIG_FILE
exists in the system environment. If it does, the package will use that as the path to the configuration
file.
If MSCS_TEXTANALYTICS_CONFIG_FILE doesn’t exist, it will look for the file .mscskeys.json
in the current user’s home directory (that’s ~/.mscskeys.json on Linux, and something like
C:/Users/Phil/Documents/.mscskeys.json on Windows). If the file is found, the package will
load the API key and URL from it.
If using a file, please make sure it has the following structure:
{
"textanalyticsurl": "https://westus.api.cognitive.microsoft.com/texta/analytics/v2.0/",
"textanalyticskey": "...MSCS Text Analytics API key goes here..."
}
If no configuration file is found, textaInit will attempt to pick up its configuration information
from two system environment variables instead:
MSCS_TEXTANALYTICS_URL - the URL for the Text Analytics REST API.
MSCS_TEXTANALYTICS_KEY - your personal Text Analytics REST API key.
textaInit needs to be called only once, after package load.
Usage
textaInit()
Author(s)
<NAME> <<EMAIL>>
Examples
## Not run:
textaInit()
## End(Not run)
textaKeyPhrases Returns the key talking points in sentences or documents.
Description
This function returns the key talking points in a list of sentences or documents. The following
languages are currently supported: English, German, Spanish and Japanese.
Internally, this function invokes the Microsoft Cognitive Services Text Analytics REST API documented
at https://www.microsoft.com/cognitive-services/en-us/text-analytics/documentation.
You MUST have a valid Microsoft Cognitive Services account and an API key for this function
to work properly. See https://www.microsoft.com/cognitive-services/en-us/pricing for
details.
Usage
textaKeyPhrases(documents, languages = rep("en", length(documents)))
Arguments
documents (character vector) Vector of sentences or documents for which to extract key
talking points.
languages (character vector) Languages of the sentences or documents, supported values:
"en"(English, default), "de"(German), "es"(Spanish), "fr"(French), "ja"(Japanese)
Value
An S3 object of the class texta. The results are stored in the results dataframe inside this object.
The dataframe contains the original sentences or documents and their key talking points. If an error
occurred during processing, the dataframe will also have an error column that describes the error.
Author(s)
<NAME> <<EMAIL>>
Examples
## Not run:
docsText <- c(
"Loved the food, service and atmosphere! We'll definitely be back.",
"Very good food, reasonable prices, excellent service.",
"It was a great restaurant.",
"If steak is what you want, this is the place.",
"The atmosphere is pretty bad but the food is quite good.",
"The food is quite good but the atmosphere is pretty bad.",
"I'm not sure I would come back to this restaurant.",
"The food wasn't very good.",
"While the food was good the service was a disappointment.",
"I was very disappointed with both the service and my entree."
)
docsLanguage <- rep("en", length(docsText))
tryCatch({
# Get key talking points in documents
docsKeyPhrases <- textaKeyPhrases(
documents = docsText, # Input sentences or documents
languages = docsLanguage
# "en"(English, default)|"de"(German)|"es"(Spanish)|"fr"(French)|"ja"(Japanese)
)
# Class and structure of docsKeyPhrases
class(docsKeyPhrases)
#> [1] "texta"
str(docsKeyPhrases, max.level = 1)
#> List of 3
#> $ results:'data.frame': 10 obs. of 2 variables:
#> $ json : chr "{\"documents\":[{\"keyPhrases\":[\"atmosphere\",\"food\", __truncated__ ]}]}
#> $ request:List of 7
#> ..- attr(*, "class")= chr "request"
#> - attr(*, "class")= chr "texta"
# Print results
docsKeyPhrases
#> texta [https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases]
#>
#> -----------------------------------------------------------
#> text keyPhrases
#> ------------------------------ ----------------------------
#> Loved the food, service and atmosphere, food, service
#> atmosphere! We'll definitely
#> be back.
#>
#> Very good food, reasonable reasonable prices, good food
#> prices, excellent service.
#>
#> It was a great restaurant. great restaurant
#>
#> If steak is what you want, steak, place
#> this is the place.
#>
#> The atmosphere is pretty bad atmosphere, food
#> but the food is quite good.
#>
#> The food is quite good but the food, atmosphere
#> atmosphere is pretty bad.
#>
#> I'm not sure I would come back restaurant
#> to this restaurant.
#>
#> The food wasn't very good. food
#>
#> While the food was good the service, food
#> service was a disappointment.
#>
#> I was very disappointed with service, entree
#> both the service and my
#> entree.
#> -----------------------------------------------------------
}, error = function(err) {
# Print error
geterrmessage()
})
## End(Not run)
textaSentiment Assesses the sentiment of sentences or documents.
Description
This function returns a numeric score between 0 and 1 with scores close to 1 indicating positive
sentiment and scores close to 0 indicating negative sentiment.
Sentiment score is generated using classification techniques. The input features of the classifier
include n-grams, features generated from part-of-speech tags, and word embeddings. English, French,
Spanish and Portuguese text are supported.
Internally, this function invokes the Microsoft Cognitive Services Text Analytics REST API documented
at https://www.microsoft.com/cognitive-services/en-us/text-analytics/documentation.
You MUST have a valid Microsoft Cognitive Services account and an API key for this function
to work properly. See https://www.microsoft.com/cognitive-services/en-us/pricing for
details.
Usage
textaSentiment(documents, languages = rep("en", length(documents)))
Arguments
documents (character vector) Vector of sentences or documents for which to assess senti-
ment.
languages (character vector) Languages of the sentences or documents, supported values:
"en"(English, default), "es"(Spanish), "fr"(French), "pt"(Portuguese)
Value
An S3 object of the class texta. The results are stored in the results dataframe inside this object.
The dataframe contains the original sentences or documents and their sentiment score. If an error
occurred during processing, the dataframe will also have an error column that describes the error.
Author(s)
<NAME> <<EMAIL>>
Examples
## Not run:
docsText <- c(
"Loved the food, service and atmosphere! We'll definitely be back.",
"Very good food, reasonable prices, excellent service.",
"It was a great restaurant.",
"If steak is what you want, this is the place.",
"The atmosphere is pretty bad but the food is quite good.",
"The food is quite good but the atmosphere is pretty bad.",
"I'm not sure I would come back to this restaurant.",
"The food wasn't very good.",
"While the food was good the service was a disappointment.",
"I was very disappointed with both the service and my entree."
)
docsLanguage <- rep("en", length(docsText))
tryCatch({
# Perform sentiment analysis
docsSentiment <- textaSentiment(
documents = docsText, # Input sentences or documents
languages = docsLanguage
# "en"(English, default)|"es"(Spanish)|"fr"(French)|"pt"(Portuguese)
)
# Class and structure of docsSentiment
class(docsSentiment)
#> [1] "texta"
str(docsSentiment, max.level = 1)
#> List of 3
#> $ results:'data.frame': 10 obs. of 2 variables:
#> $ json : chr "{\"documents\":[{\"score\":0.9903013,\"id\":\"hDgKc\", __truncated__ }]}
#> $ request:List of 7
#> ..- attr(*, "class")= chr "request"
#> - attr(*, "class")= chr "texta"
# Print results
docsSentiment
#> texta [https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment]
#>
#> --------------------------------------
#> text score
#> ------------------------------ -------
#> Loved the food, service and 0.9847
#> atmosphere! We'll definitely
#> be back.
#>
#> Very good food, reasonable 0.9831
#> prices, excellent service.
#>
#> It was a great restaurant. 0.9306
#>
#> If steak is what you want, 0.8014
#> this is the place.
#>
#> The atmosphere is pretty bad 0.4998
#> but the food is quite good.
#>
#> The food is quite good but the 0.475
#> atmosphere is pretty bad.
#>
#> I'm not sure I would come back 0.2857
#> to this restaurant.
#>
#> The food wasn't very good. 0.1877
#>
#> While the food was good the 0.08727
#> service was a disappointment.
#>
#> I was very disappointed with 0.01877
#> both the service and my
#> entree.
#> --------------------------------------
}, error = function(err) {
# Print error
geterrmessage()
})
## End(Not run)
textatopics The textatopics object
Description
The textatopics object exposes formatted results for the textaDetectTopics API, this REST
API’s JSON response, and the HTTP request:
• status the operation’s current status ("NotStarted"|"Running"|"Succeeded"|"Failed")
• documents a data.frame with the documents and a unique string ID for each
• topics a data.frame with the identified topics, a unique string ID for each, and a prevalence
score for each topic (count of documents assigned to topic)
• topicAssignments a data.frame with all the topics (identified by their topic ID) assigned to
each document (identified by their document ID), and a distance score for each topic assign-
ment (between 0 and 1; the lower the distance score the stronger the topic affiliation)
• json the REST API JSON response
• request the HTTP request
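As an illustration only, a hedged sketch of how these dataframes could be joined to see which key
phrases were assigned to each document. The column names used below (topicId, documentId,
keyPhrase, distance, text) are assumptions, not taken from the package documentation; inspect
str(topics) for the actual names:

# topics: a textatopics object returned by textaDetectTopics()/textaDetectTopicsStatus()
# NOTE: the join columns below are assumed for illustration purposes only
assignments <- merge(topics$topicAssignments, topics$topics, by = "topicId")
assignments <- merge(assignments, topics$documents, by = "documentId")
head(assignments[, c("text", "keyPhrase", "distance")])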
Author(s)
<NAME> <<EMAIL>>
See Also
Other res: texta
nomex v0.0.4
API Reference
===
Modules
---
[Nomex](Nomex.html)
Base module for Nomex, used to access Nomad settings
[Nomex.ACL](Nomex.ACL.html)
Methods in this module are used to interact with Nomad’s ACL HTTP API. More information here
[Nomex.Agent](Nomex.Agent.html)
Methods in this module are used to interact with Nomad’s Agent HTTP API. More information here
[Nomex.Allocations](Nomex.Allocations.html)
Methods in this module are used to interact with Nomad’s Allocations HTTP API. More information here
[Nomex.Client](Nomex.Client.html)
Methods in this module are used to interact with Nomad’s Client HTTP API. More information here
[Nomex.Deployments](Nomex.Deployments.html)
Methods in this module are used to interact with Nomad’s Deployments HTTP API. More information here
[Nomex.Evaluations](Nomex.Evaluations.html)
Methods in this module are used to interact with Nomad’s Evaluations HTTP API. More information here
[Nomex.Jobs](Nomex.Jobs.html)
Methods in this module are used to interact with Nomad’s Jobs HTTP API. More information here
[Nomex.Metrics](Nomex.Metrics.html)
Methods in this module are used to interact with Nomad’s Metrics HTTP API. More information here
[Nomex.Namespaces](Nomex.Namespaces.html)
Methods in this module are used to interact with Nomad’s Namespaces HTTP API. More information here
[Nomex.Nodes](Nomex.Nodes.html)
Methods in this module are used to interact with Nomad’s Nodes HTTP API. More information here
[Nomex.Operator](Nomex.Operator.html)
Methods in this module are used to interact with Nomad’s Operator HTTP API. More information here
[Nomex.Quotas](Nomex.Quotas.html)
Methods in this module are used to interact with Nomad’s Quotas HTTP API. More information here
[Nomex.Regions](Nomex.Regions.html)
Methods in this module are used to interact with Nomad’s Regions HTTP API. More information here
[Nomex.Request](Nomex.Request.html)
Wrapper module for [`HTTPoison.Base`](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Base.html) and contains some convenience defmacro functions to keep other modules DRY
[Nomex.Response](Nomex.Response.html)
[Nomex.Sentinel](Nomex.Sentinel.html)
Methods in this module are used to interact with Nomad’s Sentinel Policies HTTP API. More information here
[Nomex.Status](Nomex.Status.html)
Methods in this module are used to interact with Nomad’s Status HTTP API. More information here
[Nomex.System](Nomex.System.html)
Methods in this module are used to interact with Nomad’s System HTTP API. More information here
nomex v0.0.4
Nomex
===
Base module for Nomex, used to access Nomad settings
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[host()](#host/0)
returns Nomad host, configurable in `config/config.exs`
[meta_moduledoc(name, urls \\ [])](#meta_moduledoc/2)
[token()](#token/0)
returns Nomad ACL token if it is specified in `config/config.exs`
Example
---
[version()](#version/0)
returns Nomad API version
Example
---
[Link to this section](#functions)
Functions
===
[Link to this function](#host/0 "Link to this function")
host()
```
host() :: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()
```
returns Nomad host, configurable in `config/config.exs`
Example
---
```
iex> Nomex.host
"http://127.0.0.1:4646"
```
[Link to this macro](#meta_moduledoc/2 "Link to this macro")
meta_moduledoc(name, urls \\ [])
(macro)
[Link to this function](#token/0 "Link to this function")
token()
```
token() :: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()
```
returns Nomad ACL token if it is specified in `config/config.exs`
Example
---
```
iex> Nomex.token
"936a095f-68da-c19a-0a65-4794b0ea74e5"
```
[Link to this function](#version/0 "Link to this function")
version()
```
version() :: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()
```
returns Nomad API version
Example
---
```
iex> Nomex.version
"v1"
```
nomex v0.0.4
Nomex.ACL
===
Methods in this module are used to interact with Nomad’s ACL HTTP API. More information here:
<https://www.nomadproject.io/api/acl-policies.html>
<https://www.nomadproject.io/api/acl-tokens.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[bootstrap()](#bootstrap/0)
[bootstrap!()](#bootstrap!/0)
[create_or_update(name, description \\ "", rules)](#create_or_update/3)
[create_or_update!(name, description \\ "", rules)](#create_or_update!/3)
[policies()](#policies/0)
issues a GET request to `<NOMAD_HOST>/v1/acl/policies`
[policies!()](#policies!/0)
issues a GET request to `<NOMAD_HOST>/v1/acl/policies`
[policy(param_id)](#policy/1)
issues a GET request to `<NOMAD_HOST>/v1/acl/policy/<param_id>`
[policy!(param_id)](#policy!/1)
issues a GET request to `<NOMAD_HOST>/v1/acl/policy/<param_id>`
[token(param_id)](#token/1)
issues a GET request to `<NOMAD_HOST>/v1/acl/token/<param_id>`
[token!(param_id)](#token!/1)
issues a GET request to `<NOMAD_HOST>/v1/acl/token/<param_id>`
[token_self()](#token_self/0)
issues a GET request to `<NOMAD_HOST>/v1/acl/token/self`
[token_self!()](#token_self!/0)
issues a GET request to `<NOMAD_HOST>/v1/acl/token/self`
[tokens()](#tokens/0)
issues a GET request to `<NOMAD_HOST>/v1/acl/tokens`
[tokens!()](#tokens!/0)
issues a GET request to `<NOMAD_HOST>/v1/acl/tokens`
[Link to this section](#functions)
Functions
===
[Link to this function](#bootstrap/0 "Link to this function")
bootstrap()
[Link to this function](#bootstrap!/0 "Link to this function")
bootstrap!()
[Link to this function](#create_or_update/3 "Link to this function")
create_or_update(name, description \\ "", rules)
[Link to this function](#create_or_update!/3 "Link to this function")
create_or_update!(name, description \\ "", rules)
[Link to this function](#policies/0 "Link to this function")
policies()
```
policies() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/acl/policies`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#policies!/0 "Link to this function")
policies!()
```
policies!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/acl/policies`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#policy/1 "Link to this function")
policy(param_id)
```
policy([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/acl/policy/<param_id>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#policy!/1 "Link to this function")
policy!(param_id)
```
policy!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/acl/policy/<param_id>`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#token/1 "Link to this function")
token(param_id)
```
token([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/acl/token/<param_id>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#token!/1 "Link to this function")
token!(param_id)
```
token!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/acl/token/<param_id>`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#token_self/0 "Link to this function")
token_self()
```
token_self() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/acl/token/self`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#token_self!/0 "Link to this function")
token_self!()
```
token_self!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/acl/token/self`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#tokens/0 "Link to this function")
tokens()
```
tokens() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/acl/tokens`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#tokens!/0 "Link to this function")
tokens!()
```
tokens!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/acl/tokens`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Agent
===
Methods in this module are used to interact with Nomad’s Agent HTTP API. More information here:
<https://www.nomadproject.io/api/agent.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[health()](#health/0)
issues a GET request to `<NOMAD_HOST>/v1/agent/health`
[health!()](#health!/0)
issues a GET request to `<NOMAD_HOST>/v1/agent/health`
[members()](#members/0)
issues a GET request to `<NOMAD_HOST>/v1/agent/members`
[members!()](#members!/0)
issues a GET request to `<NOMAD_HOST>/v1/agent/members`
[self()](#self/0)
issues a GET request to `<NOMAD_HOST>/v1/agent/self`
[self!()](#self!/0)
issues a GET request to `<NOMAD_HOST>/v1/agent/self`
[servers()](#servers/0)
issues a GET request to `<NOMAD_HOST>/v1/agent/servers`
[servers!()](#servers!/0)
issues a GET request to `<NOMAD_HOST>/v1/agent/servers`
[Link to this section](#functions)
Functions
===
[Link to this function](#health/0 "Link to this function")
health()
```
health() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/agent/health`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#health!/0 "Link to this function")
health!()
```
health!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/agent/health`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#members/0 "Link to this function")
members()
```
members() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/agent/members`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#members!/0 "Link to this function")
members!()
```
members!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/agent/members`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#self/0 "Link to this function")
self()
```
self() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/agent/self`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#self!/0 "Link to this function")
self!()
```
self!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/agent/self`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#servers/0 "Link to this function")
servers()
```
servers() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/agent/servers`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#servers!/0 "Link to this function")
servers!()
```
servers!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/agent/servers`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Allocations
===
Methods in this module are used to interact with Nomad’s Allocations HTTP API. More information here:
<https://www.nomadproject.io/api/allocations.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[allocation(param_id)](#allocation/1)
issues a GET request to `<NOMAD_HOST>/v1/allocation/<param_id>`
[allocation!(param_id)](#allocation!/1)
issues a GET request to `<NOMAD_HOST>/v1/allocation/<param_id>`
[allocations()](#allocations/0)
issues a GET request to `<NOMAD_HOST>/v1/allocations`
[allocations(prefix)](#allocations/1)
issues a GET request to `<NOMAD_HOST>/v1/allocations?prefix=<prefix>`
[allocations!()](#allocations!/0)
issues a GET request to `<NOMAD_HOST>/v1/allocations`
[allocations!(prefix)](#allocations!/1)
issues a GET request to `<NOMAD_HOST>/v1/allocations?prefix=<prefix>`
[Link to this section](#functions)
Functions
===
[Link to this function](#allocation/1 "Link to this function")
allocation(param_id)
```
allocation([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/allocation/<param_id>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#allocation!/1 "Link to this function")
allocation!(param_id)
```
allocation!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/allocation/<param_id>`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#allocations/0 "Link to this function")
allocations()
```
allocations() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/allocations`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#allocations/1 "Link to this function")
allocations(prefix)
```
allocations([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/allocations?prefix=<prefix>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#allocations!/0 "Link to this function")
allocations!()
```
allocations!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/allocations`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#allocations!/1 "Link to this function")
allocations!(prefix)
```
allocations!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/allocations?prefix=<prefix>`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Client
===
Methods in this module are used to interact with Nomad’s Client HTTP API. More information here:
<https://www.nomadproject.io/api/client.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[allocation_stats(allocation_id)](#allocation_stats/1)
[allocation_stats!(allocation_id)](#allocation_stats!/1)
[stats()](#stats/0)
issues a GET request to `<NOMAD_HOST>/v1/client/stats`
[stats!()](#stats!/0)
issues a GET request to `<NOMAD_HOST>/v1/client/stats`
[Link to this section](#functions)
Functions
===
[Link to this function](#allocation_stats/1 "Link to this function")
allocation_stats(allocation_id)
[Link to this function](#allocation_stats!/1 "Link to this function")
allocation_stats!(allocation_id)
[Link to this function](#stats/0 "Link to this function")
stats()
```
stats() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/client/stats`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#stats!/0 "Link to this function")
stats!()
```
stats!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/client/stats`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Deployments
===
Methods in this module are used to interact with Nomad’s Deployments HTTP API. More information here:
<https://www.nomadproject.io/api/deployments.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[allocations(param_id)](#allocations/1)
issues a GET request to `<NOMAD_HOST>/v1/deployment/allocations/<param_id>`
[allocations!(param_id)](#allocations!/1)
issues a GET request to `<NOMAD_HOST>/v1/deployment/allocations/<param_id>`
[deployment(param_id)](#deployment/1)
issues a GET request to `<NOMAD_HOST>/v1/deployment/<param_id>`
[deployment!(param_id)](#deployment!/1)
issues a GET request to `<NOMAD_HOST>/v1/deployment/<param_id>`
[deployments()](#deployments/0)
issues a GET request to `<NOMAD_HOST>/v1/deployments`
[deployments(prefix)](#deployments/1)
issues a GET request to `<NOMAD_HOST>/v1/deployments?prefix=<prefix>`
[deployments!()](#deployments!/0)
issues a GET request to `<NOMAD_HOST>/v1/deployments`
[deployments!(prefix)](#deployments!/1)
issues a GET request to `<NOMAD_HOST>/v1/deployments?prefix=<prefix>`
[Link to this section](#functions)
Functions
===
[Link to this function](#allocations/1 "Link to this function")
allocations(param_id)
```
allocations([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/deployment/allocations/<param_id>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#allocations!/1 "Link to this function")
allocations!(param_id)
```
allocations!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/deployment/allocations/<param_id>`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#deployment/1 "Link to this function")
deployment(param_id)
```
deployment([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/deployment/<param_id>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#deployment!/1 "Link to this function")
deployment!(param_id)
```
deployment!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/deployment/<param_id>`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#deployments/0 "Link to this function")
deployments()
```
deployments() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/deployments`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#deployments/1 "Link to this function")
deployments(prefix)
```
deployments([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/deployments?prefix=<prefix>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#deployments!/0 "Link to this function")
deployments!()
```
deployments!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/deployments`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#deployments!/1 "Link to this function")
deployments!(prefix)
```
deployments!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/deployments?prefix=<prefix>`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Evaluations
===
Methods in this module are used to interact with Nomad’s Evaluations HTTP API. More information here:
<https://www.nomadproject.io/api/evaluations.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[allocations(evaluation_id)](#allocations/1)
[allocations!(evaluation_id)](#allocations!/1)
[evaluation(param_id)](#evaluation/1)
issues a GET request to `<NOMAD_HOST>/v1/evaluation/<param_id>`
[evaluation!(param_id)](#evaluation!/1)
issues a GET request to `<NOMAD_HOST>/v1/evaluation/<param_id>`
[evaluations()](#evaluations/0)
issues a GET request to `<NOMAD_HOST>/v1/evaluations`
[evaluations(prefix)](#evaluations/1)
issues a GET request to `<NOMAD_HOST>/v1/evaluations?prefix=<prefix>`
[evaluations!()](#evaluations!/0)
issues a GET request to `<NOMAD_HOST>/v1/evaluations`
[evaluations!(prefix)](#evaluations!/1)
issues a GET request to `<NOMAD_HOST>/v1/evaluations?prefix=<prefix>`
[Link to this section](#functions)
Functions
===
[Link to this function](#allocations/1 "Link to this function")
allocations(evaluation_id)
[Link to this function](#allocations!/1 "Link to this function")
allocations!(evaluation_id)
[Link to this function](#evaluation/1 "Link to this function")
evaluation(param_id)
```
evaluation([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/evaluation/<param_id>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#evaluation!/1 "Link to this function")
evaluation!(param_id)
```
evaluation!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/evaluation/<param_id>`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#evaluations/0 "Link to this function")
evaluations()
```
evaluations() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/evaluations`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#evaluations/1 "Link to this function")
evaluations(prefix)
```
evaluations([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/evaluations?prefix=<prefix>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#evaluations!/0 "Link to this function")
evaluations!()
```
evaluations!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/evaluations`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#evaluations!/1 "Link to this function")
evaluations!(prefix)
```
evaluations!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/evaluations?prefix=<prefix>`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Jobs
===
Methods in this module are used to interact with Nomad’s Jobs HTTP API. More information here:
<https://www.nomadproject.io/api/jobs.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[job(param_id)](#job/1)
issues a GET request to `<NOMAD_HOST>/v1/job/<param_id>`
[job!(param_id)](#job!/1)
issues a GET request to `<NOMAD_HOST>/v1/job/<param_id>`
[job_allocations(job_id)](#job_allocations/1)
[job_allocations!(job_id)](#job_allocations!/1)
[job_deployment(job_id)](#job_deployment/1)
[job_deployment!(job_id)](#job_deployment!/1)
[job_deployments(job_id)](#job_deployments/1)
[job_deployments!(job_id)](#job_deployments!/1)
[job_evaluations(job_id)](#job_evaluations/1)
[job_evaluations!(job_id)](#job_evaluations!/1)
[job_summary(job_id)](#job_summary/1)
[job_summary!(job_id)](#job_summary!/1)
[job_versions(job_id)](#job_versions/1)
[job_versions!(job_id)](#job_versions!/1)
[jobs()](#jobs/0)
issues a GET request to `<NOMAD_HOST>/v1/jobs`
[jobs(prefix)](#jobs/1)
issues a GET request to `<NOMAD_HOST>/v1/jobs?prefix=<prefix>`
[jobs!()](#jobs!/0)
issues a GET request to `<NOMAD_HOST>/v1/jobs`
[jobs!(prefix)](#jobs!/1)
issues a GET request to `<NOMAD_HOST>/v1/jobs?prefix=<prefix>`
[Link to this section](#functions)
Functions
===
[Link to this function](#job/1 "Link to this function")
job(param_id)
```
job([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/job/<param_id>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#job!/1 "Link to this function")
job!(param_id)
```
job!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/job/<param_id>`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#job_allocations/1 "Link to this function")
job_allocations(job_id)
[Link to this function](#job_allocations!/1 "Link to this function")
job_allocations!(job_id)
[Link to this function](#job_deployment/1 "Link to this function")
job_deployment(job_id)
[Link to this function](#job_deployment!/1 "Link to this function")
job_deployment!(job_id)
[Link to this function](#job_deployments/1 "Link to this function")
job_deployments(job_id)
[Link to this function](#job_deployments!/1 "Link to this function")
job_deployments!(job_id)
[Link to this function](#job_evaluations/1 "Link to this function")
job_evaluations(job_id)
[Link to this function](#job_evaluations!/1 "Link to this function")
job_evaluations!(job_id)
[Link to this function](#job_summary/1 "Link to this function")
job_summary(job_id)
[Link to this function](#job_summary!/1 "Link to this function")
job_summary!(job_id)
[Link to this function](#job_versions/1 "Link to this function")
job_versions(job_id)
[Link to this function](#job_versions!/1 "Link to this function")
job_versions!(job_id)
[Link to this function](#jobs/0 "Link to this function")
jobs()
```
jobs() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/jobs`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#jobs/1 "Link to this function")
jobs(prefix)
```
jobs([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/jobs?prefix=<prefix>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#jobs!/0 "Link to this function")
jobs!()
```
jobs!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/jobs`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#jobs!/1 "Link to this function")
jobs!(prefix)
```
jobs!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/jobs?prefix=<prefix>`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Metrics
===
Methods in this module are used to interact with Nomad’s Metrics HTTP API. More information here:
<https://www.nomadproject.io/api/metrics.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[metrics()](#metrics/0)
issues a GET request to `<NOMAD_HOST>/v1/metrics`
[metrics!()](#metrics!/0)
issues a GET request to `<NOMAD_HOST>/v1/metrics`
[Link to this section](#functions)
Functions
===
[Link to this function](#metrics/0 "Link to this function")
metrics()
```
metrics() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/metrics`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#metrics!/0 "Link to this function")
metrics!()
```
metrics!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/metrics`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Namespaces
===
Methods in this module are used to interact with Nomad’s Namespaces HTTP API. More information here:
<https://www.nomadproject.io/api/namespaces.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[namespace(param_id)](#namespace/1)
issues a GET request to `<NOMAD_HOST>/v1/namespace/<param_id>`
[namespace!(param_id)](#namespace!/1)
issues a GET request to `<NOMAD_HOST>/v1/namespace/<param_id>`
[namespaces()](#namespaces/0)
issues a GET request to `<NOMAD_HOST>/v1/namespaces`
[namespaces(prefix)](#namespaces/1)
issues a GET request to `<NOMAD_HOST>/v1/namespaces?prefix=<prefix>`
[namespaces!()](#namespaces!/0)
issues a GET request to `<NOMAD_HOST>/v1/namespaces`
[namespaces!(prefix)](#namespaces!/1)
issues a GET request to `<NOMAD_HOST>/v1/namespaces?prefix=<prefix>`
[Link to this section](#functions)
Functions
===
[Link to this function](#namespace/1 "Link to this function")
namespace(param_id)
```
namespace([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/namespace/<param_id>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#namespace!/1 "Link to this function")
namespace!(param_id)
```
namespace!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/namespace/<param_id>`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#namespaces/0 "Link to this function")
namespaces()
```
namespaces() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/namespaces`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#namespaces/1 "Link to this function")
namespaces(prefix)
```
namespaces([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/namespaces?prefix=<prefix>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#namespaces!/0 "Link to this function")
namespaces!()
```
namespaces!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/namespaces`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#namespaces!/1 "Link to this function")
namespaces!(prefix)
```
namespaces!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/namespaces?prefix=<prefix>`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Nodes
===
Methods in this module are used to interact with Nomad’s Nodes HTTP API. More information here:
<https://www.nomadproject.io/api/nodes.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[allocations(node_id)](#allocations/1)
[allocations!(node_id)](#allocations!/1)
[node()](#node/0)
issues a GET request to `<NOMAD_HOST>/v1/node`
[node!()](#node!/0)
issues a GET request to `<NOMAD_HOST>/v1/node`
[nodes()](#nodes/0)
issues a GET request to `<NOMAD_HOST>/v1/nodes`
[nodes(prefix)](#nodes/1)
issues a GET request to `<NOMAD_HOST>/v1/nodes?prefix=<prefix>`
[nodes!()](#nodes!/0)
issues a GET request to `<NOMAD_HOST>/v1/nodes`
[nodes!(prefix)](#nodes!/1)
issues a GET request to `<NOMAD_HOST>/v1/nodes?prefix=<prefix>`
[Link to this section](#functions)
Functions
===
[Link to this function](#allocations/1 "Link to this function")
allocations(node_id)
[Link to this function](#allocations!/1 "Link to this function")
allocations!(node_id)
[Link to this function](#node/0 "Link to this function")
node()
```
node() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/node`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#node!/0 "Link to this function")
node!()
```
node!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/node`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#nodes/0 "Link to this function")
nodes()
```
nodes() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/nodes`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#nodes/1 "Link to this function")
nodes(prefix)
```
nodes([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/nodes?prefix=<prefix>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#nodes!/0 "Link to this function")
nodes!()
```
nodes!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/nodes`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#nodes!/1 "Link to this function")
nodes!(prefix)
```
nodes!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/nodes?prefix=<prefix>`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Operator
===
Methods in this module are used to interact with Nomad’s Operator HTTP API. More information here:
<https://www.nomadproject.io/api/operator.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[raft_configuration()](#raft_configuration/0)
issues a GET request to `<NOMAD_HOST>/v1/operator/raft/configuration`
[raft_configuration!()](#raft_configuration!/0)
issues a GET request to `<NOMAD_HOST>/v1/operator/raft/configuration`
[Link to this section](#functions)
Functions
===
[Link to this function](#raft_configuration/0 "Link to this function")
raft_configuration()
```
raft_configuration() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/operator/raft/configuration`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#raft_configuration!/0 "Link to this function")
raft_configuration!()
```
raft_configuration!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/operator/raft/configuration`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Quotas
===
Methods in this module are used to interact with Nomad’s Quotas HTTP API. More information here:
<https://www.nomadproject.io/api/quotas.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[quota(param_id)](#quota/1)
issues a GET request to `<NOMAD_HOST>/v1/quota/<param_id>`
[quota!(param_id)](#quota!/1)
issues a GET request to `<NOMAD_HOST>/v1/quota/<param_id>`
[quota_usage(param_id)](#quota_usage/1)
issues a GET request to `<NOMAD_HOST>/v1/quota/usage/<param_id>`
[quota_usage!(param_id)](#quota_usage!/1)
issues a GET request to `<NOMAD_HOST>/v1/quota/usage/<param_id>`
[quota_usages()](#quota_usages/0)
issues a GET request to `<NOMAD_HOST>/v1/quota-usages`
[quota_usages(prefix)](#quota_usages/1)
issues a GET request to `<NOMAD_HOST>/v1/quota-usages?prefix=<prefix>`
[quota_usages!()](#quota_usages!/0)
issues a GET request to `<NOMAD_HOST>/v1/quota-usages`
[quota_usages!(prefix)](#quota_usages!/1)
issues a GET request to `<NOMAD_HOST>/v1/quota-usages?prefix=<prefix>`
[quotas()](#quotas/0)
issues a GET request to `<NOMAD_HOST>/v1/quotas`
[quotas(prefix)](#quotas/1)
issues a GET request to `<NOMAD_HOST>/v1/quotas?prefix=<prefix>`
[quotas!()](#quotas!/0)
issues a GET request to `<NOMAD_HOST>/v1/quotas`
[quotas!(prefix)](#quotas!/1)
issues a GET request to `<NOMAD_HOST>/v1/quotas?prefix=<prefix>`
[Link to this section](#functions)
Functions
===
[Link to this function](#quota/1 "Link to this function")
quota(param_id)
```
quota([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/quota/<param_id>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#quota!/1 "Link to this function")
quota!(param_id)
```
quota!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/quota/<param_id>`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#quota_usage/1 "Link to this function")
quota_usage(param_id)
```
quota_usage([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/quota/usage/<param_id>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#quota_usage!/1 "Link to this function")
quota_usage!(param_id)
```
quota_usage!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/quota/usage/<param_id>`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#quota_usages/0 "Link to this function")
quota_usages()
```
quota_usages() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/quota-usages`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#quota_usages/1 "Link to this function")
quota_usages(prefix)
```
quota_usages([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/quota-usages?prefix=<prefix>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#quota_usages!/0 "Link to this function")
quota_usages!()
```
quota_usages!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/quota-usages`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#quota_usages!/1 "Link to this function")
quota_usages!(prefix)
```
quota_usages!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/quota-usages?prefix=<prefix>`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#quotas/0 "Link to this function")
quotas()
```
quotas() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/quotas`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#quotas/1 "Link to this function")
quotas(prefix)
```
quotas([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/quotas?prefix=<prefix>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#quotas!/0 "Link to this function")
quotas!()
```
quotas!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/quotas`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#quotas!/1 "Link to this function")
quotas!(prefix)
```
quotas!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/quotas?prefix=<prefix>`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Regions
===
Methods in this module are used to interact with Nomad’s Regions HTTP API. More information here:
<https://www.nomadproject.io/api/regions.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[regions()](#regions/0)
issues a GET request to `<NOMAD_HOST>/v1/regions`
[regions!()](#regions!/0)
issues a GET request to `<NOMAD_HOST>/v1/regions`
[Link to this section](#functions)
Functions
===
[Link to this function](#regions/0 "Link to this function")
regions()
```
regions() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/regions`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#regions!/0 "Link to this function")
regions!()
```
regions!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/regions`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Request
===
Wrapper module for [`HTTPoison.Base`](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Base.html) and contains some convenience defmacro functions to keep other modules DRY
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[body()](#t:body/0)
[headers()](#t:headers/0)
[Functions](#functions)
---
[base()](#base/0)
[delete(url, headers \\ [], options \\ [])](#delete/3)
Issues a DELETE request to the given url
[delete!(url, headers \\ [], options \\ [])](#delete!/3)
Issues a DELETE request to the given url, raising an exception in case of failure
[get(url, headers \\ [], options \\ [])](#get/3)
Issues a GET request to the given url
[get!(url, headers \\ [], options \\ [])](#get!/3)
Issues a GET request to the given url, raising an exception in case of failure
[head(url, headers \\ [], options \\ [])](#head/3)
Issues a HEAD request to the given url
[head!(url, headers \\ [], options \\ [])](#head!/3)
Issues a HEAD request to the given url, raising an exception in case of failure
[meta_get(function_name, path)](#meta_get/2)
Creates 2 functions with the following names
[meta_get_id(function_name, path)](#meta_get_id/2)
Creates 2 functions with the following names
[meta_get_prefix(function_name, path)](#meta_get_prefix/2)
Creates 2 functions with the following names
[options(url, headers \\ [], options \\ [])](#options/3)
Issues an OPTIONS request to the given url
[options!(url, headers \\ [], options \\ [])](#options!/3)
Issues a OPTIONS request to the given url, raising an exception in case of failure
[patch(url, body, headers \\ [], options \\ [])](#patch/4)
Issues a PATCH request to the given url
[patch!(url, body, headers \\ [], options \\ [])](#patch!/4)
Issues a PATCH request to the given url, raising an exception in case of failure
[post(url, body, headers \\ [], options \\ [])](#post/4)
Issues a POST request to the given url
[post!(url, body, headers \\ [], options \\ [])](#post!/4)
Issues a POST request to the given url, raising an exception in case of failure
[process_headers(headers)](#process_headers/1)
[process_request_body(body)](#process_request_body/1)
[process_request_headers(headers)](#process_request_headers/1)
[process_request_options(options)](#process_request_options/1)
[process_response_body(body)](#process_response_body/1)
[process_response_chunk(chunk)](#process_response_chunk/1)
[process_status_code(status_code)](#process_status_code/1)
[process_url(url)](#process_url/1)
[put(url, body \\ "", headers \\ [], options \\ [])](#put/4)
Issues a PUT request to the given url
[put!(url, body \\ "", headers \\ [], options \\ [])](#put!/4)
Issues a PUT request to the given url, raising an exception in case of failure
[request(method, params)](#request/2)
[request(method, url, body \\ "", headers \\ [], options \\ [])](#request/5)
Issues an HTTP request with the given method to the given url
[request!(method, params)](#request!/2)
[request!(method, url, body \\ "", headers \\ [], options \\ [])](#request!/5)
Issues an HTTP request with the given method to the given url, raising an exception in case of failure
[start()](#start/0)
Starts HTTPoison and its dependencies
[stream_next(resp)](#stream_next/1)
Requests the next message to be streamed for a given [`HTTPoison.AsyncResponse`](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html)
[Link to this section](#types)
Types
===
[Link to this type](#t:body/0 "Link to this type")
body()
```
body() :: [HTTPoison.Base.body](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Base.html#t:body/0)()
```
[Link to this type](#t:headers/0 "Link to this type")
headers()
```
headers() :: [HTTPoison.Base.headers](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Base.html#t:headers/0)()
```
[Link to this section](#functions)
Functions
===
[Link to this function](#base/0 "Link to this function")
base()
[Link to this function](#delete/3 "Link to this function")
delete(url, headers \\ [], options \\ [])
```
delete(binary(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
{:ok, [HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() | [HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()} |
{:error, [HTTPoison.Error.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Error.html#t:t/0)()}
```
Issues a DELETE request to the given url.
Returns `{:ok, response}` if the request is successful, `{:error, reason}`
otherwise.
See [`request/5`](#request/5) for more detailed information.
[Link to this function](#delete!/3 "Link to this function")
delete!(url, headers \\ [], options \\ [])
```
delete!(binary(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
[HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() |
[HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()
```
Issues a DELETE request to the given url, raising an exception in case of failure.
If the request does not fail, the response is returned.
See [`request!/5`](#request!/5) for more detailed information.
[Link to this function](#get/3 "Link to this function")
get(url, headers \\ [], options \\ [])
```
get(binary(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
{:ok, [HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() | [HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()} |
{:error, [HTTPoison.Error.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Error.html#t:t/0)()}
```
Issues a GET request to the given url.
Returns `{:ok, response}` if the request is successful, `{:error, reason}`
otherwise.
See [`request/5`](#request/5) for more detailed information.
[Link to this function](#get!/3 "Link to this function")
get!(url, headers \\ [], options \\ [])
```
get!(binary(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
[HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() |
[HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()
```
Issues a GET request to the given url, raising an exception in case of failure.
If the request does not fail, the response is returned.
See [`request!/5`](#request!/5) for more detailed information.
[Link to this function](#head/3 "Link to this function")
head(url, headers \\ [], options \\ [])
```
head(binary(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
{:ok, [HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() | [HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()} |
{:error, [HTTPoison.Error.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Error.html#t:t/0)()}
```
Issues a HEAD request to the given url.
Returns `{:ok, response}` if the request is successful, `{:error, reason}`
otherwise.
See [`request/5`](#request/5) for more detailed information.
[Link to this function](#head!/3 "Link to this function")
head!(url, headers \\ [], options \\ [])
```
head!(binary(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
[HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() |
[HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()
```
Issues a HEAD request to the given url, raising an exception in case of failure.
If the request does not fail, the response is returned.
See [`request!/5`](#request!/5) for more detailed information.
[Link to this macro](#meta_get/2 "Link to this macro")
meta_get(function_name, path)
(macro)
Creates 2 functions with the following names:
```
function_name
function_name!
```
Both functions will issue a GET request for the `path` specified.
The first function will return a tuple.
The second function will return a `%Nomex.Response{}` or raise an exception.
[Link to this macro](#meta_get_id/2 "Link to this macro")
meta_get_id(function_name, path)
(macro)
Creates 2 functions with the following names:
```
function_name(param_id)
function_name!(param_id)
```
Both functions will issue a GET request for the `path` specified, but append `/param_id` at the end of the `path`.
The first function will return a tuple.
The second function will return a [`Nomex.Response`](Nomex.Response.html) or raise an exception.
[Link to this macro](#meta_get_prefix/2 "Link to this macro")
meta_get_prefix(function_name, path)
(macro)
Creates 2 functions with the following names:
```
function_name(prefix)
function_name!(prefix)
```
Both functions will issue a GET request for the `path` specified, and add a querystring parameter `?prefix=#{prefix}`.
The first function will return a tuple.
The second function will return a [`Nomex.Response`](Nomex.Response.html) or raise an exception.
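As a rough sketch of how these three macros could be combined in an endpoint module (the invocation syntax and paths below are assumptions for illustration; they are not taken from this documentation):
```
defmodule MyApp.Nomad.Namespaces do
  # hypothetical module; assumes the macros are brought in from Nomex.Request
  import Nomex.Request

  # assumed to define namespaces/0 and namespaces!/0 (GET <path>)
  meta_get :namespaces, "/namespaces"

  # assumed to define namespace(param_id) and namespace!(param_id) (GET <path>/<param_id>)
  meta_get_id :namespace, "/namespace"

  # assumed to define namespaces(prefix) and namespaces!(prefix) (GET <path>?prefix=<prefix>)
  meta_get_prefix :namespaces, "/namespaces"
end
```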
[Link to this function](#options/3 "Link to this function")
options(url, headers \\ [], options \\ [])
```
options(binary(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
{:ok, [HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() | [HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()} |
{:error, [HTTPoison.Error.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Error.html#t:t/0)()}
```
Issues an OPTIONS request to the given url.
Returns `{:ok, response}` if the request is successful, `{:error, reason}`
otherwise.
See [`request/5`](#request/5) for more detailed information.
[Link to this function](#options!/3 "Link to this function")
options!(url, headers \\ [], options \\ [])
```
options!(binary(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
[HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() |
[HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()
```
Issues a OPTIONS request to the given url, raising an exception in case of failure.
If the request does not fail, the response is returned.
See [`request!/5`](#request!/5) for more detailed information.
[Link to this function](#patch/4 "Link to this function")
patch(url, body, headers \\ [], options \\ [])
```
patch(binary(), any(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
{:ok, [HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() | [HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()} |
{:error, [HTTPoison.Error.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Error.html#t:t/0)()}
```
Issues a PATCH request to the given url.
Returns `{:ok, response}` if the request is successful, `{:error, reason}`
otherwise.
See [`request/5`](#request/5) for more detailed information.
[Link to this function](#patch!/4 "Link to this function")
patch!(url, body, headers \\ [], options \\ [])
```
patch!(binary(), any(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
[HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() |
[HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()
```
Issues a PATCH request to the given url, raising an exception in case of failure.
If the request does not fail, the response is returned.
See [`request!/5`](#request!/5) for more detailed information.
[Link to this function](#post/4 "Link to this function")
post(url, body, headers \\ [], options \\ [])
```
post(binary(), any(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
{:ok, [HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() | [HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()} |
{:error, [HTTPoison.Error.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Error.html#t:t/0)()}
```
Issues a POST request to the given url.
Returns `{:ok, response}` if the request is successful, `{:error, reason}`
otherwise.
See [`request/5`](#request/5) for more detailed information.
[Link to this function](#post!/4 "Link to this function")
post!(url, body, headers \\ [], options \\ [])
```
post!(binary(), any(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
[HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() |
[HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()
```
Issues a POST request to the given url, raising an exception in case of failure.
If the request does not fail, the response is returned.
See [`request!/5`](#request!/5) for more detailed information.
[Link to this function](#process_headers/1 "Link to this function")
process_headers(headers)
[Link to this function](#process_request_body/1 "Link to this function")
process_request_body(body)
```
process_request_body(any()) :: [body](#t:body/0)()
```
[Link to this function](#process_request_headers/1 "Link to this function")
process_request_headers(headers)
```
process_request_headers([headers](#t:headers/0)()) :: [headers](#t:headers/0)()
```
[Link to this function](#process_request_options/1 "Link to this function")
process_request_options(options)
[Link to this function](#process_response_body/1 "Link to this function")
process_response_body(body)
```
process_response_body(binary()) :: any()
```
[Link to this function](#process_response_chunk/1 "Link to this function")
process_response_chunk(chunk)
[Link to this function](#process_status_code/1 "Link to this function")
process_status_code(status_code)
[Link to this function](#process_url/1 "Link to this function")
process_url(url)
[Link to this function](#put/4 "Link to this function")
put(url, body \\ "", headers \\ [], options \\ [])
```
put(binary(), any(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
{:ok, [HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() | [HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()} |
{:error, [HTTPoison.Error.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Error.html#t:t/0)()}
```
Issues a PUT request to the given url.
Returns `{:ok, response}` if the request is successful, `{:error, reason}`
otherwise.
See [`request/5`](#request/5) for more detailed information.
[Link to this function](#put!/4 "Link to this function")
put!(url, body \\ "", headers \\ [], options \\ [])
Issues a PUT request to the given url, raising an exception in case of failure.
If the request does not fail, the response is returned.
See [`request!/5`](#request!/5) for more detailed information.
[Link to this function](#request/2 "Link to this function")
request(method, params)
[Link to this function](#request/5 "Link to this function")
request(method, url, body \\ "", headers \\ [], options \\ [])
```
request(atom(), binary(), any(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) ::
{:ok, [HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)() | [HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()} |
{:error, [HTTPoison.Error.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Error.html#t:t/0)()}
```
Issues an HTTP request with the given method to the given url.
This function is usually used indirectly by [`get/3`](#get/3), [`post/4`](#post/4), [`put/4`](#put/4), etc
Args:
* `method` - HTTP method as an atom (`:get`, `:head`, `:post`, `:put`,
`:delete`, etc.)
* `url` - target url as a binary string or char list
* `body` - request body. See more below
* `headers` - HTTP headers as an orddict (e.g., `[{"Accept", "application/json"}]`)
* `options` - Keyword list of options
Body:
* binary, char list or an iolist
* `{:form, [{K, V}, ...]}` - send a form url encoded
* `{:file, "/path/to/file"}` - send a file
* `{:stream, enumerable}` - lazily send a stream of binaries/charlists
Options:
* `:timeout` - timeout to establish a connection, in milliseconds. Default is 8000
* `:recv_timeout` - timeout used when receiving a connection. Default is 5000
* `:stream_to` - a PID to stream the response to
* `:async` - if given `:once`, will only stream one message at a time, requires call to `stream_next`
* `:proxy` - a proxy to be used for the request; it can be a regular url or a `{Host, Port}` tuple
* `:proxy_auth` - proxy authentication `{User, Password}` tuple
* `:ssl` - SSL options supported by the `ssl` erlang module
* `:follow_redirect` - a boolean that causes redirects to be followed
* `:max_redirect` - an integer denoting the maximum number of redirects to follow
* `:params` - an enumerable consisting of two-item tuples that will be appended to the url as query string parameters
Timeouts can be an integer or `:infinity`
This function returns `{:ok, response}` or `{:ok, async_response}` if the request is successful, `{:error, reason}` otherwise.
Examples
---
```
request(:post, "https://my.website.com", "{\"foo\": 3}", [{"Accept", "application/json"}])
```
[Link to this function](#request!/2 "Link to this function")
request!(method, params)
[Link to this function](#request!/5 "Link to this function")
request!(method, url, body \\ "", headers \\ [], options \\ [])
```
request!(atom(), binary(), any(), [headers](#t:headers/0)(), [Keyword.t](https://hexdocs.pm/elixir/Keyword.html#t:t/0)()) :: [HTTPoison.Response.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Response.html#t:t/0)()
```
Issues an HTTP request with the given method to the given url, raising an exception in case of failure.
[`request!/5`](#request!/5) works exactly like [`request/5`](#request/5) but it returns just the response in case of a successful request, raising an exception in case the request fails.
[Link to this function](#start/0 "Link to this function")
start()
Starts HTTPoison and its dependencies.
[Link to this function](#stream_next/1 "Link to this function")
stream_next(resp)
```
stream_next([HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()) ::
{:ok, [HTTPoison.AsyncResponse.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html#t:t/0)()} |
{:error, [HTTPoison.Error.t](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.Error.html#t:t/0)()}
```
Requests the next message to be streamed for a given [`HTTPoison.AsyncResponse`](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.AsyncResponse.html).
See [`request!/5`](#request!/5) for more detailed information.
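Putting the streaming options above together, a minimal sketch (the relative path and the assumption that `process_url/1` prepends the Nomad base URL are illustrative, not confirmed by this documentation):
```
{:ok, resp} = Nomex.Request.get("/v1/jobs", [], stream_to: self(), async: :once)

receive do
  %HTTPoison.AsyncStatus{code: code} ->
    IO.puts("status: #{code}")
    # request the next streamed message (headers, then chunks, then the end marker)
    Nomex.Request.stream_next(resp)
end
```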
nomex v0.0.4
Nomex.Response
===
[Link to this section](#summary)
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[tuple_t()](#t:tuple_t/0)
tuple that wraps response from [`HTTPoison`](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.html).
Returns the status of the request made, along with the `Response`
[Functions](#functions)
---
[parse(response)](#parse/1)
[Link to this section](#types)
Types
===
[Link to this type](#t:t/0 "Link to this type")
t()
```
t() :: %Nomex.Response{body: map(), headers: list(), request_url: [String.t](https://hexdocs.pm/elixir/String.html#t:t/0)(), status_code: integer()}
```
[Link to this type](#t:tuple_t/0 "Link to this type")
tuple_t()
```
tuple_t() :: {:ok | :error, [Nomex.Response.t](Nomex.Response.html#t:t/0)()}
```
tuple that wraps response from [`HTTPoison`](https://hexdocs.pm/httpoison/0.13.0/HTTPoison.html).
Returns the status of the request made, along with the `Response`
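Since both elements of the tuple have a known shape, `tuple_t` results compose naturally with `with` (a small sketch using the `Nomex.Status` endpoints documented elsewhere in this reference):
```
with {:ok, %Nomex.Response{body: leader}} <- Nomex.Status.leader(),
     {:ok, %Nomex.Response{body: peers}} <- Nomex.Status.peers() do
  IO.inspect({leader, peers}, label: "cluster status")
else
  {:error, %Nomex.Response{status_code: code}} ->
    IO.puts("Nomad returned #{code}")
end
```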
[Link to this section](#functions)
Functions
===
[Link to this function](#parse/1 "Link to this function")
parse(response)
nomex v0.0.4
Nomex.Sentinel
===
Methods in this module are used to interact with Nomad’s Sentinel Policies HTTP API. More information here:
<https://www.nomadproject.io/api/sentinel-policies.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[policies()](#policies/0)
issues a GET request to `<NOMAD_HOST>/v1/sentinel/policies`
[policies!()](#policies!/0)
issues a GET request to `<NOMAD_HOST>/v1/sentinel/policies`
[policy(param_id)](#policy/1)
issues a GET request to `<NOMAD_HOST>/v1/sentinel/policy/<param_id>`
[policy!(param_id)](#policy!/1)
issues a GET request to `<NOMAD_HOST>/v1/sentinel/policy/<param_id>`
[Link to this section](#functions)
Functions
===
[Link to this function](#policies/0 "Link to this function")
policies()
```
policies() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/sentinel/policies`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#policies!/0 "Link to this function")
policies!()
```
policies!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/sentinel/policies`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#policy/1 "Link to this function")
policy(param_id)
```
policy([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/sentinel/policy/<param_id>`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#policy!/1 "Link to this function")
policy!(param_id)
```
policy!([String.t](https://hexdocs.pm/elixir/String.html#t:t/0)()) :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/sentinel/policy/<param_id>`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.Status
===
Methods in this module are used to interact with Nomad’s Status HTTP API. More information here:
<https://www.nomadproject.io/api/status.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[leader()](#leader/0)
issues a GET request to `<NOMAD_HOST>/v1/status/leader`
[leader!()](#leader!/0)
issues a GET request to `<NOMAD_HOST>/v1/status/leader`
[peers()](#peers/0)
issues a GET request to `<NOMAD_HOST>/v1/status/peers`
[peers!()](#peers!/0)
issues a GET request to `<NOMAD_HOST>/v1/status/peers`
[Link to this section](#functions)
Functions
===
[Link to this function](#leader/0 "Link to this function")
leader()
```
leader() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/status/leader`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#leader!/0 "Link to this function")
leader!()
```
leader!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/status/leader`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#peers/0 "Link to this function")
peers()
```
peers() :: [Nomex.Response.tuple_t](Nomex.Response.html#t:tuple_t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/status/peers`
returns a tuple with status (`:ok, :error`) and the `%Nomex.Response{}`
[Link to this function](#peers!/0 "Link to this function")
peers!()
```
peers!() :: [Nomex.Response.t](Nomex.Response.html#t:t/0)()
```
issues a GET request to `<NOMAD_HOST>/v1/status/peers`
returns a `%Nomex.Response{}` or raises exception
nomex v0.0.4
Nomex.System
===
Methods in this module are used to interact with Nomad’s System HTTP API. More information here:
<https://www.nomadproject.io/api/system.html>
[Link to this section](#summary)
Summary
===
[Functions](#functions)
---
[gc()](#gc/0)
issues a PUT request to `<NOMAD_HOST>/v1/system/gc`
[gc!()](#gc!/0)
issues a PUT request to `<NOMAD_HOST>/v1/system/gc`
[reconcile_summaries()](#reconcile_summaries/0)
issues a PUT request to `<NOMAD_HOST>/v1/system/reconcile/summaries`
[reconcile_summaries!()](#reconcile_summaries!/0)
issues a PUT request to `<NOMAD_HOST>/v1/system/reconcile/summaries`
[Link to this section](#functions)
Functions
===
[Link to this function](#gc/0 "Link to this function")
gc()
issues a PUT request to `<NOMAD_HOST>/v1/system/gc`
returns a tuple with status (`:ok or :error`) and the `%Nomex.Response{}`
[Link to this function](#gc!/0 "Link to this function")
gc!()
issues a PUT request to `<NOMAD_HOST>/v1/system/gc`
returns a `%Nomex.Response{}` or raises exception
[Link to this function](#reconcile_summaries/0 "Link to this function")
reconcile_summaries()
issues a PUT request to `<NOMAD_HOST>/v1/system/reconcile/summaries`
returns a tuple with status (`:ok or :error`) and the `%Nomex.Response{}`
[Link to this function](#reconcile_summaries!/0 "Link to this function")
reconcile_summaries!()
issues a PUT request to `<NOMAD_HOST>/v1/system/reconcile/summaries`
returns a `%Nomex.Response{}` or raises exception |
opsb-git | ruby | Ruby | Git Library for Ruby
---
Library for using Git in Ruby. Test.
Homepage
===
Git public hosting of the project source code is at:
[github.com/schacon/ruby-git](https://github.com/schacon/ruby-git)
Install
===
You can install Ruby/Git like this:
$ sudo gem install git
Major Objects
===
Git::Base - this is the object returned from a Git.open or Git.clone.
Most major actions are called from this object.
Git::Object - this is the base object for your tree, blob and commit objects, returned from @git.gtree or @git.object calls. The Git::AbstractObject will have most of the calls in common for all those objects.
Git::Diff - returns from a @git.diff command. It is an Enumerable that returns Git::Diff::DiffFile objects from which you can get per-file patches and insertion/deletion statistics. You can also get total statistics from the Git::Diff object directly.
Git::Status - returns from a @git.status command. It is an Enumerable that returns Git:Status::StatusFile objects for each object in git, which includes files in the working directory, in the index and in the repository. Similar to running ‘git status’ on the command line to determine untracked and changed files.
Git::Branches - Enumerable object that holds Git::Branch objects. You can call .local or .remote on it to filter to just your local or remote branches.
Git::Remote - A reference to a remote repository that is tracked by this repository.
Git::Log - An Enumerable object that references all the Git::Object::Commit objects that encompass your log query, which can be constructed through methods on the Git::Log object, like:
```
@git.log(20).object("some_file").since("2 weeks ago").between('v2.6', 'v2.7').each { |commit| [block] }
```
Examples
===
Here are a bunch of examples of how to use the Ruby/Git package.
First you have to remember to require rubygems if it's not already loaded. Then require the 'git' gem.
```
require 'rubygems'
require 'git'
```
Here are the operations that need read permission only.
```
g = Git.open (working_dir, :log => Logger.new(STDOUT))
g.index
g.index.readable?
g.index.writable?
g.repo
g.dir
g.log # returns array of Git::Commit objects
g.log.since('2 weeks ago')
g.log.between('v2.5', 'v2.6')
g.log.each {|l| puts l.sha }
g.gblob('v2.5:Makefile').log.since('2 weeks ago')
g.object('HEAD^').to_s # git show / git rev-parse
g.object('HEAD^').contents
g.object('v2.5:Makefile').size
g.object('v2.5:Makefile').sha
g.gtree(treeish)
g.gblob(treeish)
g.gcommit(treeish)
commit = g.gcommit('1cc8667014381')
commit.gtree
commit.parent.sha
commit.parents.size
commit.author.name
commit.author.email
commit.author.date.strftime("%m-%d-%y")
commit.committer.name
commit.date.strftime("%m-%d-%y")
commit.message
tree = g.gtree("HEAD^{tree}")
tree.blobs
tree.subtrees
tree.children # blobs and subtrees
g.revparse('v2.5:Makefile')
g.branches # returns Git::Branch objects
g.branches.local
g.branches.remote
g.branches[:master].gcommit
g.branches['origin/master'].gcommit
g.grep('hello') # implies HEAD
g.blob('v2.5:Makefile').grep('hello')
g.tag('v2.5').grep('hello', 'docs/')
g.diff(commit1, commit2).size
g.diff(commit1, commit2).stats
g.gtree('v2.5').diff('v2.6').insertions
g.diff('gitsearch1', 'v2.5').path('lib/')
g.diff('gitsearch1', @git.gtree('v2.5'))
g.diff('gitsearch1', 'v2.5').path('docs/').patch
g.gtree('v2.5').diff('v2.6').patch
g.gtree('v2.5').diff('v2.6').each do |file_diff|
puts file_diff.path
puts file_diff.patch
puts file_diff.blob(:src).contents
end
g.config('user.name') # returns '<NAME>'
g.config # returns whole config hash
g.tag # returns array of Git::Tag objects
```
And here are the operations that will need to write to your git repository.
```
g = Git.init
Git.init('project')
Git.init('/home/schacon/proj',
  { :git_dir => '/opt/git/proj.git', :index_file => '/tmp/index' })

g = Git.clone(URI, :name => 'name', :path => '/tmp/checkout')
g.config('user.name', '<NAME>')
g.config('user.email', 'email@example.com')
g.add('.')
g.add([file1, file2])
g.remove('file.txt')
g.remove(['file.txt', 'file2.txt'])

g.commit('message')
g.commit_all('message')
g = Git.clone(repo, 'myrepo')
g.chdir do
new_file('test-file', 'blahblahblah')
g.status.changed.each do |file|
puts file.blob(:index).contents
end
end

g.reset # defaults to HEAD
g.reset_hard(Git::Commit)

g.branch('new_branch') # creates new or fetches existing
g.branch('new_branch').checkout
g.branch('new_branch').delete
g.branch('existing_branch').checkout

g.checkout('new_branch')
g.checkout(g.branch('new_branch'))

g.branch(name).merge(branch2)
g.branch(branch2).merge # merges HEAD with branch2

g.branch(name).in_branch(message) { # add files } # auto-commits
g.merge('new_branch')
g.merge('origin/remote_branch')
g.merge(b.branch('master'))
g.merge([branch1, branch2])

r = g.add_remote(name, uri) # Git::Remote
r = g.add_remote(name, Git::Base) # Git::Remote

g.remotes # array of Git::Remotes
g.remote(name).fetch
g.remote(name).remove
g.remote(name).merge
g.remote(name).merge(branch)

g.fetch
g.fetch(g.remotes.first)

g.pull
g.pull(Git::Repo, Git::Branch) # fetch and a merge

g.add_tag('tag_name') # returns Git::Tag

g.repack

g.push
g.push(g.remote('name'))
```
Some examples of more low-level index and tree operations
```
g.with_temp_index do
g.read_tree(tree3) # calls self.index.read_tree
g.read_tree(tree1, :prefix => 'hi/')
c = g.commit_tree('message')
# or #
t = g.write_tree
c = g.commit_tree(t, :message => 'message', :parents => [sha1, sha2])
g.branch('branch_name').update_ref(c)
g.update_ref(branch, c)
g.with_temp_working do # new blank working directory
g.checkout
g.checkout(another_index)
g.commit # commits to temp_index
end
end
g.set_index('/path/to/index')
g.with_index(path) do
  # calls set_index, then switches back after end
end

g.with_working(dir) do
  # calls set_working, then switches back after end
end

g.with_temp_working(dir) do
  g.checkout_index(:prefix => dir, :path_limiter => path)
  # do file work
  g.commit # commits to index
end
``` |
kelvin-context-async-hooks | npm | JavaScript | OpenTelemetry AsyncHooks-based Context Manager
===
This package provides [async-hooks](http://nodejs.org/dist/latest/docs/api/async_hooks.html) based context manager which is used internally by OpenTelemetry plugins to propagate specific context between function calls and async operations. It only targets NodeJS since async-hooks is only available there.
What is a ContextManager
---
The definition and why they exist is available on [the readme of the context-base package](https://github.com/open-telemetry/opentelemetry-js/blob/master/packages/opentelemetry-context-base/README.md).
###
Implementation in NodeJS
NodeJS has a specific API to track async context: [async-hooks](http://nodejs.org/dist/latest/docs/api/async_hooks.html). It allows tracking the creation of new async operations and their respective parents.
This package only handles storing a specific object for a given async hooks context.
###
Limitations
Even if the API is native to NodeJS, it doesn't cover all possible cases of context propagation but there is a big effort from the NodeJS team to fix those. That's why we generally advise to be on the latest LTS to benefit from performance and bug fixes.
There are known modules that break context propagation ([some of them are listed there](https://github.com/nodejs/diagnostics/blob/master/tracing/AsyncHooks/problematic-modules.md)), so it's possible that the context manager doesn't work with them.
###
Prior arts
Context propagation is a big subject when talking about tracing in NodeJS, if you want more information about that here are some resources:
* <https://www.npmjs.com/package/continuation-local-storage> (which was the old way of doing context propagation)
* Datadog's own implementation for their Javascript tracer: [here](https://github.com/DataDog/dd-trace-js/tree/master/packages/dd-trace/src/scope)
* OpenTracing implementation: [here](https://github.com/opentracing/opentracing-javascript/pull/113)
* Discussion about context propagation by the NodeJS diagnostics working group: [here](https://github.com/nodejs/diagnostics/issues/300)
Useful links
---
* For more information on OpenTelemetry, visit: <https://opentelemetry.io/>
* For more about OpenTelemetry JavaScript: <https://github.com/open-telemetry/opentelemetry-js>
* For help or feedback on this project, join us on [gitter](https://gitter.im/open-telemetry/opentelemetry-node?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
License
---
Apache 2.0 - See [LICENSE](https://github.com/open-telemetry/opentelemetry-js/blob/master/LICENSE) for more information.
Readme
---
### Keywords
* opentelemetry
* nodejs
* tracing
* profiling
* metrics
* stats |
sycamore-macro | rust | Rust | Crate sycamore_macro
===
Proc-macros used in Sycamore.
Macros
---
* `node`: Like `view!` but only creates a single raw node instead.
* `view`: A macro for ergonomically creating complex UI structures.
Attribute Macros
---
* `component`: A macro for creating components from functions.
Derive Macros
---
* `Prop`: The derive macro for `Prop`. The macro creates a builder-like API used in the `view!` macro.
Macro sycamore_macro::node
===
```
node!() { /* proc-macro */ }
```
Like `view!` but only creates a single raw node instead.
Example
---
```
use sycamore::prelude::*;
#[component]
pub fn MyComponent<G: Html>(cx: Scope) -> View<G> {
let cool_button: G = node! { cx, button { "The coolest 😎" } };
cool_button.set_property("myProperty", &"Epic!".into());
View::new_node(cool_button)
}
```
Macro sycamore_macro::view
===
```
view!() { /* proc-macro */ }
```
A macro for ergonomically creating complex UI structures.
To learn more about the template syntax, see the chapter on the `view!` macro in the Sycamore Book.
Attribute Macro sycamore_macro::component
===
```
#[component]
```
A macro for creating components from functions.
Add this attribute to a `fn` to create a component from that function.
To learn more about components, see the chapter on components in the Sycamore Book.
Derive Macro sycamore_macro::Prop
===
```
#[derive(Prop)]
{
// Attributes available to this derive:
#[builder]
}
```
The derive macro for `Prop`. The macro creates a builder-like API used in the `view!` macro. |
@foal/ejs | npm | JavaScript | *A Node.js and TypeScript framework, all-inclusive.*
[Github](https://github.com/FoalTS/foal) - [Twitter](https://twitter.com/FoalTs) - [Website](https://foalts.org/) - [Documentation](https://foalts.gitbook.io/docs/) - [YouTube](https://www.youtube.com/channel/UCQFojM334E0YdoDq56MjfOQ)
FoalTS is a Node.js framework for building HTTP APIs and Web applications with a rich interface (Angular / React / Vue). It is written in TypeScript and offers many built-in dev tools and components to handle extremely common scenarios. Simple, testable and progressive, Foal accelerates development while leaving you in control of your code.
Get started
---
First install [Node.Js and npm](https://nodejs.org/en/download/).
###
Create a new app
```
$ npm install -g @foal/cli
$ foal createapp my-app
$ cd my-app
$ npm run develop
```
The development server is started! Go to `http://localhost:3001` and find our welcoming page!
[=> Continue with the tutorial](https://foalts.gitbook.io/docs/)
Why?
---
In recent years Node.js has become one of the most popular servers on the web. And for good reason, it is fast, simple while being powerful and flexible. Creating a server with only a few lines of code has never been easier.
But when it comes to setting up a complete and scalable project, things get harder. You have to put everything in place. The authorization system, database migrations, development tools or even hashing of passwords are just the tip of the iceberg. Working on this is time consuming and may slow down the release frequency or even lead to undesired bugs. As the codebase grows up and the complexity increases, it becomes harder and harder to develop new features and maintain the app.
This is where FoalTS comes in. Based on express, this lightweight framework provides everything needed to create enterprise-grade applications. From the support of TypeScript to the integration of security tools, it offers the basic bricks to build robust webapps. But FoalTS does not pretend to be a closed framework. You can still import and use your favorite libraries from the rich ecosystem of Node.js.
Readme
---
### Keywords
* FoalTS
* foal
* template
* templates
* ejs |
sevabot-skype-bot | readthedoc | SQL | Sevabot - Skype bot 1.0 documentation
[Sevabot - Skype bot 1.0 documentation](index.html#document-index)
===
Sevabot - Skype bot 1.0 documentation
---
[Contents](index.html#document-index)
Sevabot - Friendly Skype robot documentation[¶](#sevabot-friendly-skype-robot-documentation)
===
Sevabot is a generic purpose hack-it-together Skype bot
* Has extensible command system based on UNIX scripts
* Send chat message notifications from any system using HTTP requests
* Built-in support for Github commit notifications and other popular services
It is based on [Skype4Py framework](https://github.com/awahlig/skype4py)
The bot is written in Python 2.7.x programming language, but can be integrated with any programming languages over UNIX command piping and HTTP interface.
The underlying Skype4Py API is free - you do not need to enlist and pay Skype development program fee.
Contents:
Installing and running on Ubuntu[¶](#installing-and-running-on-ubuntu)
---
* [Introduction](#introduction)
* [Installing Skype and xvfb](#installing-skype-and-xvfb)
* [Setting up Skype and remote VNC](#setting-up-skype-and-remote-vnc)
* [Installing Sevabot](#installing-sevabot)
* [Running sevabot](#running-sevabot)
* [Test it](#test-it)
* [Testing HTTP interface](#testing-http-interface)
* [Running sevabot as service](#running-sevabot-as-service)
* [Setting avatar image](#setting-avatar-image)
* [Installing on Ubuntu desktop](#installing-on-ubuntu-desktop)
### [Introduction](#id3)[¶](#introduction)
These instructions are for setting up a headless (no monitor attached) Sevabot running in Skype on Ubuntu Server. The instructions have been tested on Ubuntu version **12.04.1** unless mentioned otherwise.
Note
For desktop installation instructions see below.
### [Installing Skype and xvfb](#id4)[¶](#installing-skype-and-xvfb)
Install Ubuntu dependencies needed to run headless Skype.
SSH into your server as a root or do `sudo -i`.
Then install necessary software:
```
apt-get update
apt-get install -y xvfb fluxbox x11vnc dbus libasound2 libqt4-dbus libqt4-network libqtcore4 libqtgui4 libxss1 libpython2.7 libqt4-xml libaudio2 libmng1 fontconfig liblcms1 lib32stdc++6 lib32asound2 ia32-libs libc6-i386 lib32gcc1 nano
wget http://www.skype.com/go/getskype-linux-beta-ubuntu-64 -O skype-linux-beta.deb
# if there are other unresolved dependencies install missing packages using apt-get install and then install the skype deb package again
dpkg -i skype-linux-beta.deb
```
More packages and Python modules needed to:
```
apt-get install -y python-gobject-2
apt-get install -y curl git
```
### [Setting up Skype and remote VNC](#id5)[¶](#setting-up-skype-and-remote-vnc)
Now we will create the UNIX user `skype` running Sevabot and Skype the client application.
Note
In this phase of installation you will need a VNC remote desktop viewer software on your local computer. On Linux you have XVNCViewer, on OSX you have Chicken of VNC and on Windows you have TinyVNC.
Under `sudo -i`:
```
# Create a random password
openssl rand -base64 32
# Copy this output, write down and use in the input of the following command
adduser skype # We must run Skype under non-root user
```
Exit from the current (root) terminal session.
Login to your server:
```
ssh skype@yourserver.example.com
```
Get Sevabot:
```
git clone git://github.com/opensourcehacker/sevabot.git
```
Note
If you want to live dangerously you can use the git dev branch where all the development happens. You can switch to this branch with the “git checkout dev”
command in the sevabot folder.
Start xvfb, fluxbox and Skype:
```
# This will output some Xvfb warnings to the terminal for a while
SERVICES="xvfb fluxbox skype" ~/sevabot/scripts/start-server.sh start
```
Start VNC server:
```
# This will ask you for the password of VNC remote desktop session.
# Give a password and let it write the password file.
# Delete file ~/.x11vnc/password to reset the password
~/sevabot/scripts/start-vnc.sh start
```
On your **local computer** start the VNC viewing softare and connect the server:
```
vncviewer yourserver.example.com # Password as you give it above
```
You see the remote desktop. Login to Skype for the first time.
Make Skype save your username and password. Create a Skype account at this point if you don’t have one for Sevabot.
Now, in your **local** Skype, invite the bot as your friend. Then accept the friend request.
Note
It is important to add one Skype buddy for your Sevabot instance at this point,
so don’t forget to do this step.
Now, in Sevabot, go to Skype’s settings and set the following
* No chat history
* Only people on my list can write me
* Only people on my list can call me
### [Installing Sevabot](#id6)[¶](#installing-sevabot)
When Skype is up and running on your server, you can attach Sevabot into it.
Sevabot is deployed as [Python virtualenv installation](http://opensourcehacker.com/2012/09/16/recommended-way-for-sudo-free-installation-of-python-software-with-virtualenv/).
Login to your server as `skype` user over SSH:
```
ssh skype@yourserver.example.com
```
Deploy `sevabot`, as checked out from Github earlier, using [Python virtualenv](http://pypi.python.org/pypi/virtualenv/):
```
cd sevabot
curl -L -o virtualenv.py https://raw.github.com/pypa/virtualenv/master/virtualenv.py
python virtualenv.py venv
. venv/bin/activate
python setup.py develop
```
This will
* Pull all Python package dependencies from [pypi.python.org](http://pypi.python.org) package service
* Create Sevabot launch scripts under `~/sevabot/venv/bin/`
Set password and customize other Sevabot settings by creating and editing editing `settings.py`:
```
# Create a copy of settings.py
cd ~/sevabot
cp settings.py.example settings.py
nano settings.py
```
In `settings.py` set
* `SHARED_SECRET`: web interface password
* `HTTP_HOST`: Public IP address you want Sevabot’s web interface to listen on (on Ubuntu you can figure this out using the `ifconfig` command)
We need one more thing and that’s accepting Skype dialog for Sevabot control in VNC session.
Make sure Xvfb, Fluxbox, Skype and VNC is running as instructed above. Do:
```
# Start Sevabot and make initial connect attempt to Skype
SERVICES=sevabot ~/sevabot/scripts/start-server.sh start
```
Authorize the connection and tick *Remember* in VNC session
### [Running sevabot](#id7)[¶](#running-sevabot)
To start the Sevabot do:
```
# Following will restart Xvfb, Fluxbox, Skype and Sevabot
~/sevabot/scripts/start-server.sh restart
```
The last line you see should be something like:
```
2013-03-17 18:45:16,270 - werkzeug - INFO - * Running on http://123.123.123.123:5000/
```
Note
Make sure your IP address is right in above
From the log files see that Sevabot starts up:
```
tail -f ~/sevabot/logs/sevabot.log
```
It should end up reading like this:
```
Started Sevabot web server process
```
### [Test it](#id8)[¶](#test-it)
Start chatting with your Sevabot instance with your *local* Skype.
In Skype chat, type:
```
!ping
```
Sevabot should respond to this message with Skype message:
```
pong
```
Note
Sometimes Skype starts up slowly on the server and the initial messages are eaten by something.
If you don’t get instant reply, wait one minute and type !ping again.
### [Testing HTTP interface](#id9)[¶](#testing-http-interface)
Sevabot server interface is listening to port 5000.
This interface offers
* Chat list (you need to know group chat id before you can send message into it)
* [*Webhooks*](index.html#document-webhooks) for integrating external services
Just access the Sevabot server by going with your web browser to:
```
http://yourserver.example.com:5000
```
### [Running sevabot as service](#id10)[¶](#running-sevabot-as-service)
Sevabot and all related services can be controller with `scripts/start-server.sh`
helper script. Services include
* Xvfb
* Fluxbox
* Skype
* Sevabot itself
Example:
```
scripts/start-server.sh stop
...
scripts/start-server.sh start
...
scripts/start-server.sh status
Xvfb is running
fluxbox is running
skype is running
Sevabot running
OVERALL STATUS: OK
```
To run sevabot from the server from reboot or do a full bot restart there is an example script [reboot-seva.sh](https://github.com/opensourcehacker/sevabot/blob/master/scripts/reboot-seva.sh) provided.
It also does optionally manual SSH key authorization so that the bot can execute remote commands over SSH.
To make your Sevabot bullet-proof add [a cron job to check](https://github.com/opensourcehacker/sevabot/blob/master/scripts/check-service.sh)
that Sevabot is running correctly and reboot if necessary.
### [Setting avatar image](#id11)[¶](#setting-avatar-image)
Sevabot has a cute logo which you want to set as Sevabot’s Skype avatar image.
Here are short instructions.
Login as your sevabot user, tunnel VNC:
```
ssh -L 5900:localhost:5900 <EMAIL>
```
Start VNC:
```
sevabot/scripts/start-vnc.sh start
```
On your local VNC client, connect to `localhost:5900`.
Set the avatar image through Skype UI.
### [Installing on Ubuntu desktop](#id12)[¶](#installing-on-ubuntu-desktop)
You don’t need Xvfb, VNC or fluxbox.
These instructions were written for Ubuntu 12.04 64-bit.
Note
These instructions were written for running 32-bit Skype client application in 64-bit Ubuntu.
Since writing the instructions the situation have changed and Skype has 64-bit application too.
If you have insight of how to install these packages correctly please open an issue on Github and submit an updated recipe.
Install requirements and Skype:
```
sudo -i
apt-get install xvfb fluxbox x11vnc dbus libasound2 libqt4-dbus libqt4-network libqtcore4 libqtgui4 libxss1 libpython2.7 libqt4-xml libaudio2 libmng1 fontconfig liblcms1 lib32stdc++6 lib32asound2 ia32-libs libc6-i386 lib32gcc1
apt-get install python-gobject-2 curl git
wget http://www.skype.com/go/getskype-linux-beta-ubuntu-64 -O skype-linux-beta.deb
# if there are other unresolved dependencies install missing packages using apt-get install and then install the skype deb package again
dpkg -i skype-linux-beta.deb
exit
```
Start Skype normally and register a new user, or use your own Skype account for testing.
Install Sevabot:
```
git clone git://github.com/opensourcehacker/sevabot.git
cd sevabot
curl -L -o virtualenv.py https://raw.github.com/pypa/virtualenv/master/virtualenv.py
python virtualenv.py venv
. venv/bin/activate
python setup.py develop
```
Customize Sevabot settings:
```
cp settings.py.example settings.py
```
Use your text editor to open `settings.py` and set your own password there.
Start sevabot:
```
. venv/bin/activate
sevabot
```
You should now see in your terminal:
```
Skype API connection established
getChats()
* Running on http://localhost:5000/
```
Now enter with your browser to: <http://localhost:5000/>.
Installing and running on OSX[¶](#installing-and-running-on-osx)
---
* [Introduction](#introduction)
* [Installing Skype](#installing-skype)
* [Installing sevabot](#installing-sevabot)
* [Set password and other settings](#set-password-and-other-settings)
* [Running sevabot](#running-sevabot)
* [Test it](#test-it)
* [Testing HTTP interface](#testing-http-interface)
### [Introduction](#id1)[¶](#introduction)
These instructions are for setting up Sevabot to run on an OSX desktop.
These instructions are mostly useful for Sevabot development and testing and not for actual production deployments.
### [Installing Skype](#id2)[¶](#installing-skype)
Install Skype for OSX normally. Create your Skype user.
### [Installing sevabot](#id3)[¶](#installing-sevabot)
Sevabot is deployed as [Python virtualenv installation](http://opensourcehacker.com/2012/09/16/recommended-way-for-sudo-free-installation-of-python-software-with-virtualenv/).
Install `sevabot` using [virtualenv](http://pypi.python.org/pypi/virtualenv/):
```
git clone git://github.com/opensourcehacker/sevabot.git
cd sevabot
curl -L -o virtualenv.py https://raw.github.com/pypa/virtualenv/master/virtualenv.py
arch -i386 python virtualenv.py venv
source venv/bin/activate
arch -i386 python setup.py develop
```
This will
* Pull all Python package dependencies from *pypi.python.org*
* Create scripts under `venv/bin/` to run Sevabot
Note
If you want to live dangerously you can use the git dev branch where all the development happens.
### [Set password and other settings](#id4)[¶](#set-password-and-other-settings)
Customize Sevabot settings:
```
# Create a copy of settings.py
cd ~/sevabot
cp settings.py.example settings.py
```
Setup your Skype admin username and HTTP interface password by editing `settings.py`.
### [Running sevabot](#id5)[¶](#running-sevabot)
Type:
```
arch -i386 sevabot
```
When you launch it for the first time you need to accept the confirmation dialog in the desktop environment (over VNC on the server, or whichever display you’re running your Skype on).
Note
There might be a lot of logging and stdout output when the bot starts and scans all the chats of running Skype instance.
Eventually you see in the console:
```
Running on http://127.0.0.1:5000/
```
### [Test it](#id6)[¶](#test-it)
In Skype chat, type:
```
!ping
```
Sevabot should respond to this message with Skype message:
```
pong
```
### [Testing HTTP interface](#id7)[¶](#testing-http-interface)
Sevabot server interface is listening to port 5000.
This interface offers
* Chat list (you need to know group chat id before you can send message into it)
* [*Webhooks*](index.html#document-webhooks) for integrating external services
Just access the Sevabot server by going with your web browser to:
```
http://localhost:5000
```
Installing and running using Vagrant[¶](#installing-and-running-using-vagrant)
---
* [Introduction](#introduction)
* [Vagrant it](#vagrant-it)
### [Introduction](#id1)[¶](#introduction)
[Vagrant](http://vagrantup.com/) is a tool to setup and deploy local virtual machines easily. Sevabot has a script for creating Vagrant deployments.
### [Vagrant it](#id2)[¶](#vagrant-it)
Here are the instructions for deployment and automatic virtual machine configuration:
```
git clone https://github.com/opensourcehacker/sevabot.git
cd sevabot
vagrant box add precise64 http://files.vagrantup.com/precise64.box
vagrant up
```
Now you should have a virtual machine running with a running Sevabot in it.
TODO (these instructions might need someone to have a look at them as I don’t use Vagrant myself -Mikko)
Chat commands[¶](#chat-commands)
---
* [Introduction](#introduction)
* [Out of the box commands](#out-of-the-box-commands)
* [Creating custom commands](#creating-custom-commands)
* [Stateful modules](#stateful-modules)
* [Running commands on remote servers](#running-commands-on-remote-servers)
### [Introduction](#id3)[¶](#introduction)
Sevabot supports commands you can type into group chat.
All commands begin with !.
You can create your own commands easily as Sevabot happily executes any UNIX executable script.
### [Out of the box commands](#id4)[¶](#out-of-the-box-commands)
Here are commands sevabot honours out of the box.
You can type them into the sevabot group chat.
* !reload: Reload current command scripts and print the list of available commands
* !ping: Check the bot is alive
* !sad: No woman, no cry
* !weather: Get weather by a city from [openweathermap.org](http://openweathermap.org/). Example: `!weather Toholampi`
* !timeout: Test timeouting commands
* !soundcloud: Get your soundcloud playlist (edit soundcloud.rb to make it work)
* !dice: Throw a dice
* !tasks: A simple ad-hoc group task manager for virtual team sprints
* !call: Conference call manager. Type `!call help` for more info.
### [Creating custom commands](#id5)[¶](#creating-custom-commands)
The bot can use any UNIX executables printing to stdout as commands
* Shell scripts
* Python scripts, Ruby scripts, etc.
All commands must be in one of the *modules* folders of the bot. The bot comes with some built-in commands like `ping`, but you can add your own custom commands as follows:
* There is a `custom/` folder where you can place your own modules
* Enable `custom` folder in settings.py
* Create a script in the `custom` folder. Example `myscript.sh`:
```
#!/bin/sh
echo "Hello world from my sevabot command"
```
* Add UNIX execution bit on the script using `chmod u+x myscript.sh`
* In Sevabot chat, type the command `!reload` to reload all scripts
* Now you should see command `!myscript` in the command list
* The environment variables `SKYPE_USERNAME` and `SKYPE_FULLNAME` of the person who executed the command are exposed to the scripts (see the sketch below)
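For example, a hypothetical `custom/whoami.sh` (not shipped with Sevabot) could use these variables:
```
#!/bin/sh
# Reply with the Skype identity of whoever invoked the command
echo "You are $SKYPE_FULLNAME ($SKYPE_USERNAME)"
```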
### [Stateful modules](#id6)[¶](#stateful-modules)
You can have Python modules which maintain their state and have full access to Skype4Py instance. These modules can e.g.
* Perform timed background tasks with Skype
* Parse full Skype chat text, not just !commands
* React to calls, initiate calls
* Send SMS, etc.
Further info
* [Stateful module interface is described here](https://github.com/opensourcehacker/sevabot/blob/dev/sevabot/bot/stateful.py)
* [Example task manager module is here](https://github.com/opensourcehacker/sevabot/blob/dev/modules/tasks.py)
* [Example conference call module is here](https://github.com/opensourcehacker/sevabot/blob/dev/modules/call.py)
### [Running commands on remote servers](#id7)[¶](#running-commands-on-remote-servers)
The best way to execute commands on remote servers on UNIX is over SSH.
Please first read the [basics of how to set up SSH keys for the bot](http://opensourcehacker.com/2012/10/24/ssh-key-and-passwordless-login-basics-for-developers/).
Below is an example `backup.sh` which checks
* disk space usage
* the timestamp
of backup folders on a backup server over SSH.
`backup.sh`:
```
#!/bin/sh
ssh <EMAIL> '
LOCATION="/srv/backup/backup/duply"
for l in $LOCATION/*; do
S=`du -sh $l`
TIME=`stat -c %y $l | cut -d " " -f1`
BPATH=`echo $S | cut -f2`
SIZE=`echo $S | cut -f1`
echo -e "$SIZE\t$TIME\t$BPATH"
done
'
```
You need to install SSH keys for the `skype` user to contact these servers:
```
ssh -a [email protected]
# Create key for the bot if one doesn't exist in .ssh/id_rsa
# Note: For safety reasons set a passphrase. See the reboot-seva script
# for how a passphrase enabled key is handled
ssh-keygen
# Copy the key to the remote server where you intend to run SSH commands
ssh-copy-id [email protected]
```
Sending Skype messages via webhooks[¶](#sending-skype-messages-via-webhooks)
---
* [Introduction](#introduction)
* [Supported services and examples](#supported-services-and-examples)
* [Getting chat list](#getting-chat-list)
* [Sending a message over HTTP interface](#sending-a-message-over-http-interface)
* [Timed messages](#timed-messages)
### [Introduction](#id1)[¶](#introduction)
Sevabot webhooks is a way to send Skype messages from external services using HTTP GET and POST requests.
Because there is no “webhook” standard Sevabot supports different ways to parse HTTP message payloads
* Signed and unsigned messages: shared secret MD5 signature prevents sending messages from hostile services
* HTTP GET and HTTP POST requests
* Service specific JSON payloads
To send a message to a chat you must first know the id of the group chat. The Sevabot server HTTP interface has a page to show this list (see below).
### [Supported services and examples](#id2)[¶](#supported-services-and-examples)
Here are some services and examples of how to integrate Sevabot
#### Sending Skype messages from shell scripts[¶](#sending-skype-messages-from-shell-scripts)
* [Introduction](#introduction)
##### [Introduction](#id1)[¶](#introduction)
These examples use an outdated web API. Until the documentation is properly updated, you can post a message with the following command line:
> curl --data-urlencode chat_id="..." --data-urlencode message="..." --data-urlencode shared_secret="..." <http://localhost:5000/message/>
See the examples (bash specific):
* [send.sh](https://github.com/opensourcehacker/sevabot/blob/master/examples/send.sh)
* [ci-loop.bash](https://github.com/opensourcehacker/sevabot/blob/master/examples/ci-loop.bash)
#### Sending Skype messages from Python[¶](#sending-skype-messages-from-python)
* [Introduction](#introduction)
* [Sending messages from separate URL thread](#sending-messages-from-separate-url-thread)
##### [Introduction](#id1)[¶](#introduction)
Here is an example of how to send messages to a Skype chat from external Python scripts and services.
They do not need to be Sevabot commands; messages are sent over the HTTP interface.
##### [Sending messages from separate URL thread](#id2)[¶](#sending-messages-from-separate-url-thread)
Here is an example ([original code](https://github.com/miohtama/collective.logbook/blob/master/collective/logbook/browser/webhook.py#L49)) of how to send a message asynchronously, so that the HTTP request does not block the calling code.
Example:
```
import socket
import threading
import urllib
import urllib2
import logging

# Write errors to Python logging output
logger = logging.getLogger(__name__)

# Seconds of web service timeout
WEBHOOK_HTTP_TIMEOUT = 30

# Get Skype chat id from Sevabot web interface
CHAT_ID = "xxx"

class UrlThread(threading.Thread):
    """
    A separate thread doing HTTP POST so we won't block when calling the webhook.
    """
    def __init__(self, url, data):
        threading.Thread.__init__(self)
        self.url = url
        self.data = data

    def run(self):
        original_timeout = socket.getdefaulttimeout()
        try:
            self.data = urllib.urlencode(self.data)
            socket.setdefaulttimeout(WEBHOOK_HTTP_TIMEOUT)
            r = urllib2.urlopen(self.url, self.data)
            r.read()
        except Exception as e:
            logger.error(e)
            logger.exception(e)
        finally:
            socket.setdefaulttimeout(original_timeout)

message = "Hello world"
t = UrlThread("http://sevabot.something.example.com:5000/message_unsigned/", {'message': message, 'chat_id': CHAT_ID})
t.start()
```
#### Zapier webhook support[¶](#zapier-webhook-support)
* [Introduction](#introduction)
* [Zapier Web hooks (raw HTTP POSTs)](#zapier-web-hooks-raw-http-posts)
+ [Testing Zapier hook](#testing-zapier-hook)
##### [Introduction](#id1)[¶](#introduction)
[zapier.com](https://zapier.com/) offers free mixing and matching of different event sources to different triggers. The event sources include popular services like Github, Dropbox, Salesforce, etc.
##### [Zapier Web hooks (raw HTTP POSTs)](#id2)[¶](#zapier-web-hooks-raw-http-posts)
The Zapier hook reads the HTTP POST `data` variable payload into the chat message as is.
It is useful for other integrations as well.
* You need to register your *zap* in zapier.com
* *Sevabot* offers support for Zapier web hook HTTP POST requests
* Create a zap in zapier.com. Register. Add Webhooks *URL* with your bot info:
```
http://yourserver.com:5000/message_unsigned/
```
* Go to the Sevabot web interface <http://yourserver.com:5000/> and get the chat id of your Skype chat
* The following Zapier settings must be used: *Send as JSON: no*
* You need to fill in the HTTP POST fields *message* and *chat_id*
Example of Zapier *Data* field for Github issues:
```
message|New issue 〜 {{title}} 〜 by {{user__login}} - {{html_url}}
chat_id|YOURCHATIDHERE
```
###### [Testing Zapier hook](#id3)[¶](#testing-zapier-hook)
You can use `curl` to test the hook from your server, for firewall issues and such:
```
curl --data-binary "msg=Hello world" --data-binary "chat=YOURCHATID" http://localhost:5000/message_unsigned/
```
Note
You need a new enough curl version for --data-binary.
#### Github notifications to Skype[¶](#github-notifications-to-skype)
* [Introduction](#introduction)
* [Commit notifications](#commit-notifications)
* [Issue notifications](#issue-notifications)
##### [Introduction](#id1)[¶](#introduction)
Github notifications are provided natively through Github and via the [*Zapier middleman service*](index.html#document-zapier).
##### [Commit notifications](#id2)[¶](#commit-notifications)
Sevabot has built-in support for Github post-receive hook a.k.a. commit notifications.
To add one
* You need to be the repository admin
* Go to *Admin* > *Service hooks* on Github
* Add Webhooks URL with your bot info:
```
http://yourserver.com:5000/github-post-commit/CHATID/SHAREDSECRET/
```
* Save
* Now you can use *Test hook* button to send a test message to the chat
* Following commits should come automatically to the chat
##### [Issue notifications](#id3)[¶](#issue-notifications)
Use *Zapier* webhook as described below.
This applies for
* New Github issues
* New Github comments
[*See generic Zapier instructions how to set-up the hook*](index.html#document-zapier).
#### Subversion commit notifications[¶](#subversion-commit-notifications)
* [Introduction](#introduction)
##### [Introduction](#id1)[¶](#introduction)
[Use the provided shell script example](https://github.com/opensourcehacker/sevabot/blob/master/examples/svn-post-commit.sh) for how to install a post-commit hook on your SVN server to send commit notifications to Skype.
#### Jenkins continuous integration notifications[¶](#jenkins-continuous-integration-notifications)
* [Introduction](#introduction)
* [Setting up a webhook](#setting-up-a-webhook)
##### [Introduction](#id1)[¶](#introduction)
[Jenkins](http://jenkins-ci.org/) is a popular open source continuous integration server.
Jenkins supports webhook notifications by using the Notification plugin:
<https://wiki.jenkins-ci.org/display/JENKINS/Notification+Plugin>
The Jenkins notifier will emit build status through Skype.
##### [Setting up a webhook](#id2)[¶](#setting-up-a-webhook)
Install the plugin as directed in the above wiki link.
In Jenkins, for each build you want to send notifications for, under the ‘Job Notifications’ section, click ‘Add Endpoint’.
Enter your sevabot jenkins-notification endpoint, for example:
<http://sevabot.example.com:5000/jenkins-notifier/{your-channel-id}/{your-shared-secret}/>
Trailing slash is important.
When a build completes, you should see the bot emit a message with the build status.
#### Zabbix alert messages from monitoring[¶](#zabbix-alert-messages-from-monitoring)
* [Introduction](#introduction)
* [Setting up a webhook](#setting-up-a-webhook)
* [Doing an agent alive check](#doing-a-agent-alive-check)
##### [Introduction](#id1)[¶](#introduction)
[Zabbix](http://www.zabbix.com/) is a popular open source monitoring solution.
You can get Zabbix monitoring alerts like server down, disk near full, etc.
to Skype with *Sevabot*.
##### [Setting up a webhook](#id2)[¶](#setting-up-a-webhook)
First you need to configure *Media* for your Zabbix user. The default user is called *Admin*.
Go to *Administrator* > *Media types*.
Add new media *Skype* with *Script name* **send.sh**.
Go to *Administrator* > *Users* > *Admin*. Open *Media* tab. Enable media *Skype* for this user.
In the *Send to* parameter put in your *chat id* (see instructions above).
On the server running the Zabbix server process create a file `/usr/local/share/zabbix/alertscripts/send.sh`:
```
#!/bin/bash
#
# Example shell script for sending a message into sevabot
#
# Give command line parameters [chat id] and [message].
# The message is md5 signed with a shared secret specified in settings.py
# Then we use curl do to the request to sevabot HTTP interface.
#
#
# Chat id comes as Send To parameter from Zabbix
chat=$1

# Message is the second parameter
msg=$2

# Our Skype bot shared secret
secret="xxx"

# The Skype bot HTTP msg interface
msgaddress="http://yourserver.com:5000/msg/"
md5=`echo -n "$chat$msg$secret" | md5sum`
#md5sum prints a '-' to the end. Let's get rid of that.
for m in $md5; do
    break
done
curl $msgaddress -d "chat=$chat&msg=$msg&md5=$m"
```
##### [Doing an agent alive check](#id3)[¶](#doing-a-agent-alive-check)
Below is a sample Sevabot script which will do a Zabbix agent daemon check on all the servers.
See [*commands*](index.html#document-commands) for how to configure SSH access for Sevabot to perform this functionality.
* Make a fake alert on all monitor servers, listed in ~/.ssh/config of Sevabot UNIX user
* Zabbix alert script will report back this alert from all servers where Zabbix agent is correctly running
* You need to add a special trigger in Zabbix which checks a timestamp of `/home/zabbix/zabbix_test`
file, as touched by `agents.sh` run by Sevabot
Example monitoring item which keeps track of the file: see the `zabbix-item.png` screenshot in the original documentation.
Note
Depending on the UNIX user home, the touch file may be `/var/run/zabbix/zabbix_test` or `/home/zabbix/zabbix_test`. You might need to manually switch the Item state to Enabled after fixing this.
Example trigger:
Then we give Sevabot the following script to poke the file over SSH, generating an Information notification in Zabbix; getting this notification back in our Zabbix monitoring Skype chat confirms the agent is alive and well.
`agents.sh`:
```
#!/bin/bash
#
# Detect if we have a public key available
ssh-add -L > /dev/null
if [[ $? != "0" ]] ; then
    echo "Log-in as sevabot UNIX user and authorize SSH key"
    exit 1
fi

# Get list of hosts from SSH config file
HOSTS=`grep "Host " ~/.ssh/config | awk '{print $2}'`

# If some hosts don't have zabbix agents running, there's no need to use this script for them.
# Add this line to ~/.ssh/config:
# #NoAgents host1 host2
NOAGENT=`grep "#NoAgents " ~/.ssh/config | cut -d' ' -f2- | tr ' ' '\n'`

if [ -n "$NOAGENT" ]; then
    HOSTS=`echo -e "$HOSTS\n$NOAGENT" | sort | uniq -u`
fi

# Tell Sevabot what agents we are going to call
echo "Agents: $HOSTS" | tr '\n' ' '
echo

errors=0

# On each server touch a file to change its timestamp
# Zabbix monitoring system will detect this and
# report the alert back to Skype chat via a hook
for h in $HOSTS; do
    ssh -o "PasswordAuthentication no" $h "touch -m zabbix_test"
    if [[ $? != "0" ]] ; then
        echo "Failed to SSH to $h as sevabot UNIX user"
        errors=1
    fi
done

if [[ $errors == "0" ]] ; then
    echo "Successfully generated zabbix_test ping on all servers"
fi
```
Example `~/.ssh/config`:
```
Host xxx
    User zabbix
    Hostname xxx.twinapex.fi

Host yyy
    User zabbix
    Hostname yyy.twinapex.fi
```
Please note that you need to set up bot [SSH keys](http://opensourcehacker.com/2012/10/24/ssh-key-and-passwordless-login-basics-for-developers/) for this.
Diagnosing
* If none of the agents is replying, your Zabbix host is probably messed up;
reboot it: `/etc/init.d/zabbix-server restart`
* If only some of the agents are replying, manually restart the non-replying agents
### [Getting chat list](#id3)[¶](#getting-chat-list)
To send messages through the bot you need to know
* Skype chat id - we use MD5 encoded ids to conveniently pass them in URLs.
* Sevabot shared secret in `settings.py` (only if your service supports MD5 signing, like your own custom shell script)
To get the list of chat ids, visit the address hosted by the Sevabot server:
```
http://localhost:5000/
```
It will return an HTTP page containing a list of Sevabot internal chat ids.
### [Sending a message over HTTP interface](#id4)[¶](#sending-a-message-over-http-interface)
One can send MD5-signed messages (safer) or unsigned messages (the unsigned option exists due to constraints in external services).
We provide
* signed endpoint <http://localhost:5000/msg/YOURCHATID/> - see the Bash example and the sketch below for more info
* unsigned endpoint <http://localhost:5000/message_unsigned/> - takes in HTTP POST data parameters *chat_id* and *message*
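Below is a minimal Python sketch for the signed endpoint. It assumes the same endpoint and signature scheme as the Zabbix `send.sh` example above (MD5 hex digest of chat id + message + shared secret) and follows the Python 2 style of the earlier webhook example; the server URL and ids are placeholders.
```
import hashlib
import urllib
import urllib2

MSG_URL = "http://sevabot.example.com:5000/msg/"   # signed endpoint, adjust to your server
CHAT_ID = "xxx"                                    # from the chat list page
SHARED_SECRET = "xxx"                              # same value as in settings.py

msg = "Hello world"
# signature: MD5 hex digest of chat id + message + shared secret
md5 = hashlib.md5(CHAT_ID + msg + SHARED_SECRET).hexdigest()
urllib2.urlopen(MSG_URL, urllib.urlencode({"chat": CHAT_ID, "msg": msg, "md5": md5})).read()
```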
### [Timed messages](#id5)[¶](#timed-messages)
Use external clocking service like [UNIX cron](https://help.ubuntu.com/community/CronHowto) to send regular or timed messages to Sevabot Skype chat over HTTP webhooks interface.
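For example, a crontab entry along these lines (hypothetical chat id and schedule) would post a reminder every weekday morning through the unsigned endpoint:
```
# m h dom mon dow  command
0 9 * * 1-5 curl --data-urlencode "chat_id=YOURCHATID" --data-urlencode "message=Daily stand-up in 15 minutes" http://localhost:5000/message_unsigned/
```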
Troubleshooting[¶](#troubleshooting)
---
* [Logging](#logging)
* [Double messages](#double-messages)
* [Segfaults](#segfaults)
* [Skype4Py distribution for OSX](#skype4py-distribution-for-osx)
* [Skype messages not coming through to bot interface](#skype-messages-not-coming-through-to-bot-interface)
* [Crashing on a startup on Ubuntu server](#crashing-on-a-startup-on-ubuntu-server)
* [Sevabot ignores commands and logs hang in sevabot - DEBUG - Attaching to Skype](#sevabot-ignores-commands-and-logs-hang-in-sevabot-debug-attaching-to-skype)
### [Logging](#id1)[¶](#logging)
By default, Sevabot writes logging output to file `logs/sevabot.log`.
You can watch this log in real time with UNIX command:
```
tail -f logs/sevabot.log
```
To increase log level to max, edit `settings.py` and set:
```
LOG_LEVEL = "DEBUG"
DEBUG_HTTP = True
```
This will dump everything + HTTP request to the log.
### [Double messages](#id2)[¶](#double-messages)
Sevabot replies to all commands twice.
Still no idea what could be causing this. Restarting everything helps.
### [Segfaults](#id3)[¶](#segfaults)
If you get segfault on OSX make sure you are using [32-bit Python](http://stackoverflow.com/questions/2088569/how-do-i-force-python-to-be-32-bit-on-snow-leopard-and-other-32-bit-64-bit-quest).
[Debugging segmentation faults with Python](http://wiki.python.org/moin/DebuggingWithGdb).
Related gdb dump:
```
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x0000000001243b68 0x00007fff8c12d878 in CFRetain ()
(gdb) bt
#0 0x00007fff8c12d878 in CFRetain ()
#1 0x00000001007e07ec in ffi_call_unix64 ()
#2 0x00007fff5fbfbb50 in ?? ()
(gdb) c Continuing.
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x0000000001243b68 0x00007fff8c12d878 in CFRetain ()
```
### [Skype4Py distribution for OSX](#id4)[¶](#skype4py-distribution-for-osx)
Currently Skype4Py distribution is broken.
To fix this do:
```
source venv/bin/activate
git clone git://github.com/stigkj/Skype4Py.git
cd Skype4Py
arch -i386 python setup.py install
```
### [Skype messages not coming through to bot interface](#id5)[¶](#skype-messages-not-coming-through-to-bot-interface)
Symptoms
* Skype is running in Xvfb
* Sevabot logs in screen don’t show incoming chat messages
This seems to happen if you reboot the bot in too fast a cycle.
Maybe Skype login has something that makes it stop working if you log in several times in a row.
It looks like it fixes itself if you just wait a bit before sending messages to the chat.
### [Crashing on a startup on Ubuntu server](#id6)[¶](#crashing-on-a-startup-on-ubuntu-server)
Segfault when starting up the bot:
```
File "build/bdist.linux-i686/egg/Skype4Py/skype.py", line 250, in __init__
File "build/bdist.linux-i686/egg/Skype4Py/api/posix.py", line 40, in SkypeAPI
File "build/bdist.linux-i686/egg/Skype4Py/api/posix_x11.py", line 254, in __in it__ Skype4Py.errors.SkypeAPIError: Could not open XDisplay Segmentation fault (core dumped)
```
This usually means that your DISPLAY environment variable is wrong.
Try:
```
export DISPLAY=:1
```
or:
```
export DISPLAY=:0
```
depending on your configuration before running Sevabot.
### [Sevabot ignores commands and logs hang in sevabot - DEBUG - Attaching to Skype](#id7)[¶](#sevabot-ignores-commands-and-logs-hang-in-sevabot-debug-attaching-to-skype)
This concerns only Ubuntu headless server deployments.
Your fluxbox might have hung. Kill it with fire:
```
killall -SIGKILL fluxbox
```
Restart.
Community and development[¶](#community-and-development)
---
* [Introduction](#introduction)
* [IRC](#irc)
* [Support tickets and issues](#support-tickets-and-issues)
* [Installing development version](#installing-development-version)
* [Debugging](#debugging)
* [Contributions](#contributions)
* [Releases](#releases)
### [Introduction](#id1)[¶](#introduction)
How to participate to the spectacular future of Sevabot.
You can make the life of Sevabot better - and yours too!
### [IRC](#id2)[¶](#irc)
For chatting
/join #opensourcehacker @ irc.freenode.net
Note: due to low activity of the channel prepare to idle there for 24 hours to wait for the answer.
### [Support tickets and issues](#id3)[¶](#support-tickets-and-issues)
[Use Github issue tracker](https://github.com/opensourcehacker/sevabot/issues)
### [Installing development version](#id4)[¶](#installing-development-version)
All development happens in `dev` branch.
How to install and run the development version (trunk) of Sevabot:
```
git clone git://github.com/opensourcehacker/sevabot.git
cd sevabot
git checkout dev
curl -L -o virtualenv.py https://raw.github.com/pypa/virtualenv/master/virtualenv.py
python virtualenv.py venv        # prefix with arch -i386 on OSX
source venv/bin/activate
python setup.py develop          # prefix with arch -i386 on OSX
```
### [Debugging](#id5)[¶](#debugging)
You might want to turn on `DEBUG_HTTP` setting to dump out incoming HTTP requests if you are testing / developing your own hooks.
### [Contributions](#id6)[¶](#contributions)
All contributions must come with accompanying documentation updates.
All Python files must follow PEP-8 coding conventions and be [flake8 valid](http://pypi.python.org/pypi/flake8/).
Submit pull request at Github.
For any changes update [CHANGES.rst](https://github.com/opensourcehacker/sevabot/blob/master/CHANGES.rst).
### [Releases](#id7)[¶](#releases)
[Use zest.releaser](http://opensourcehacker.com/2012/08/14/high-quality-automated-package-releases-for-python-with-zest-releaser/)
See [Github](https://github.com/opensourcehacker/sevabot) for more project information.
Trademark notice[¶](#trademark-notice)
---
The Skype name, associated trade marks and logos and the “S” logo are trade marks of Skype or related entities.
Sevabot is an open source project and is not associated with Microsoft Corporation or Skype.
[@webiny/form](#webinyform)
===
A simple React library for creating forms.
[Install](#install)
---
```
npm install --save @webiny/form
```
Or if you prefer yarn:
```
yarn add @webiny/form
```
[Quick Example](#quick-example)
---
```
import React, { useCallback } from "react";
import { Form } from "@webiny/form";
import { Input } from "@webiny/ui/Input";
import { ButtonPrimary } from "@webiny/ui/Button";
import { validation } from "@webiny/validation";
const CarManufacturersForm = () => {
const onSubmit = useCallback(formData => console.log(formData), []);
return (
<Form data={{ title: "Untitled" }} onSubmit={onSubmit}>
{({ form, Bind }) => (
<React.Fragment>
<Bind name="title" validators={validation.create("required")}>
<Input label={"Title"} />
</Bind>
<Bind name="description" validators={validation.create("maxLength:500")}>
<Input
label={"Description"}
description={"Provide a short description here."}
rows={4}
/>
</Bind>
<ButtonPrimary onClick={form.submit}>Submit</ButtonPrimary>
</React.Fragment>
)}
</Form>
);
};
export default CarManufacturersForm;
```
Readme
---
### Keywords
none
Package ‘DysPIA’
October 12, 2022
Type Package
Title Dysregulated Pathway Identification Analysis
Version 1.3
Date 2020-06-26
Maintainer <NAME> <<EMAIL>>
Description It is used to identify dysregulated pathways based on a pre-ranked gene pair list. A fast
algorithm is used to make the computation really fast. The data in package 'DysPIAData' is needed.
License GPL (>= 2)
Depends R (>= 3.5.0), DysPIAData
Imports Rcpp (>= 1.0.4), BiocParallel, fastmatch, data.table,
stats, parmigene
LinkingTo Rcpp
RoxygenNote 7.1.0
Encoding UTF-8
LazyData true
NeedsCompilation yes
Author <NAME> [aut, cre],
<NAME> [aut, ctb]
Repository CRAN
Date/Publication 2020-07-10 05:10:03 UTC
R topics documented:
calcDyspiaSta... 2
calcDyspiaStatCumulativ... 3
calcDyspiaStatCumulativeBatc... 3
calEdgeCorScore_ESE... 4
class.labels_p5... 5
DysGP... 5
DysGPS_p5... 6
DysPI... 6
DyspiaRes_p5... 8
DyspiaSi... 8
DyspiaSimpleImp... 9
gene_expression_p5... 10
sample_backgroun... 10
setUpBPPARA... 11
calcDyspiaStat calcDyspiaStat: Calculates DysPIA statistics
Description
Calculates DysPIA statistics for a given query gene pair set.
Usage
calcDyspiaStat(
stats,
selectedStats,
DyspiaParam = 1,
returnAllExtremes = FALSE,
returnLeadingEdge = FALSE
)
Arguments
stats Named numeric vector with gene pair-level statistics sorted in decreasing order
(order is not checked).
selectedStats Indexes of selected gene pairs in the ‘stats‘ array.
DyspiaParam DysPIA weight parameter (0 is unweighted, suggested value is 1).
returnAllExtremes
If TRUE return not only the most extreme point, but all of them. Can be used
for enrichment plot.
returnLeadingEdge
If TRUE return also leading edge gene pairs.
Value
Value of DysPIA statistic if both returnAllExtremes and returnLeadingEdge are FALSE. Otherwise
returns a list with the following elements:
• res – value of DysPIA statistic
• tops – vector of top peak values of cumulative enrichment statistic for each gene pair;
• bottoms – vector of bottom peak values of cumulative enrichment statistic for each gene pair;
• leadingEdge – vector with indexes of leading edge gene pairs that drive the enrichment.
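Examples (a minimal sketch with toy data, not from the package manual; real gene pair statistics would come from DysGPS or calEdgeCorScore_ESEA, and must be sorted in decreasing order):
stats <- sort(setNames(rnorm(1000), paste0("pair", 1:1000)), decreasing = TRUE)
selected <- sample(seq_along(stats), 50) # toy indexes of one gene pair set
calcDyspiaStat(stats, selectedStats = selected, DyspiaParam = 1,
returnLeadingEdge = TRUE)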
calcDyspiaStatCumulative
Calculates DysPIA statistic values for all the prefixes of a gene pair
set
Description
Calculates DysPIA statistic values for all the prefixes of a gene pair set
Usage
calcDyspiaStatCumulative(stats, selectedStats, DyspiaParam)
Arguments
stats Named numeric vector with gene pair-level statistics sorted in decreasing order
(order is not checked)
selectedStats indexes of selected gene pairs in a ‘stats‘ array
DyspiaParam DysPIA weight parameter (0 is unweighted, suggested value is 1)
Value
Numeric vector of DysPIA statistics for all prefixes of selectedStats.
calcDyspiaStatCumulativeBatch
Calculates DysPIA statistic values for the gene pair sets
Description
Calculates DysPIA statistic values for the gene pair sets
Usage
calcDyspiaStatCumulativeBatch(
stats,
DyspiaParam,
pathwayScores,
pathwaysSizes,
iterations,
seed
)
Arguments
stats Named numeric vector with gene pair-level statistics sorted in decreasing order
(order is not checked).
DyspiaParam DysPIA weight parameter (0 is unweighted, suggested value is 1).
pathwayScores Vector with enrichment scores for the pathways in the database.
pathwaysSizes Vector of pathway sizes.
iterations Number of iterations.
seed Seed vector
Value
List of DysPIA statistics for gene pair sets.
calEdgeCorScore_ESEA calEdgeCorScore_ESE
Description
Calculates differential Mutual information.
Usage
calEdgeCorScore_ESEA(
dataset,
class.labels,
controlcharacter,
casecharacter,
background
)
Arguments
dataset Matrix of gene expression values (rownames are genes, columnnames are samples).
class.labels Vector of binary labels.
controlcharacter
Character of control in the class labels.
casecharacter Character of case in the class labels.
background Matrix of the edges’ background.
Value
A vector of the aberrant correlation in phenotype P based on mutual information (MI) for each edge.
Examples
data(gene_expression_p53, class.labels_p53,sample_background)
ESEAscore_p53<-calEdgeCorScore_ESEA(gene_expression_p53, class.labels_p53,
"WT", "MUT", sample_background)
class.labels_p53 Example vector of category labels.
Description
The labels for the 50 cell lines in p53 data. Control group’s label is ’WT’, case group’s label is
’MUT’.
Usage
data(class.labels_p53)
DysGPS DysGPS: Calculates Dysregulated gene pair score (DysGPS) for each
gene pair
Description
Calculates Dysregulated gene pair score (DysGPS) for each gene pair. Two-sample Welch’s T
test of gene pairs between case and control samples. The package ’DysPIAData’ including the
background data needs to be loaded.
Usage
DysGPS(
dataset,
class.labels,
controlcharacter,
casecharacter,
background = combined_background
)
Arguments
dataset Matrix of gene expression values (rownames are genes, columnnames are samples).
class.labels Vector of category labels.
controlcharacter
Character of control group in the class labels.
casecharacter Character of case group in the class labels.
background Matrix of the gene pairs’ background. The default is ‘combined_background‘,
which includes real pathway gene pairs and randomly produced gene pairs. The
’combined_background’ was included in ’DysPIAData’.
Value
A vector of DysGPS for each gene pair.
Examples
data(gene_expression_p53, class.labels_p53,sample_background)
DysGPS_sample<-DysGPS(gene_expression_p53, class.labels_p53,
"WT", "MUT", sample_background)
DysGPS_p53 Example vector of DysGPS in p53 data.
Description
The score vector of 164923 gene pairs from p53 dataset. It can be loaded from the example datasets
of R-package ’DysPIA’, and also can be obtained by running DysGPS(), details see DysGPS.R
Usage
data(DysGPS_p53)
DysPIA DysPIA: Dysregulated Pathway Identification Analysis
Description
Runs Dysregulated Pathway Identification Analysis (DysPIA). The package ’DysPIAData’ including
the background data needs to be loaded.
Usage
DysPIA(
pathwayDB = "kegg",
stats,
nperm = 10000,
minSize = 15,
maxSize = 1000,
nproc = 0,
DyspiaParam = 1,
BPPARAM = NULL
)
Arguments
pathwayDB Name of the pathway database (8 databases:reactome,kegg,biocarta,panther,pathbank,nci,smpdb,pharmgk
The default value is "kegg".
stats Named vector of CILP scores for each gene pair. Names should be the same as
in pathways.
nperm Number of permutations to do. Minimial possible nominal p-value is about
1/nperm. The default value is 10000.
minSize Minimal size of a gene pair set to test. All pathways below the threshold are
excluded. The default value is 15.
maxSize Maximal size of a gene pair set to test. All pathways above the threshold are
excluded. The default value is 1000.
nproc If not equal to zero sets BPPARAM to use nproc workers (default = 0).
DyspiaParam DysPIA parameter value, all gene pair-level status are raised to the power of
‘DyspiaParam‘ before calculation of DysPIA enrichment scores.
BPPARAM Parallelization parameter used in bplapply. Can be used to specify cluster to
run. If not initialized explicitly or by setting ‘nproc‘ default value ‘bpparam()‘
is used.
Value
A table with DysPIA results. Each row corresponds to a tested pathway. The columns are the
following:
• pathway – name of the pathway as in ‘names(pathway)‘;
• pval – an enrichment p-value;
• padj – a BH-adjusted p-value;
• DysPS – enrichment score, same as in Broad DysPIA implementation;
• NDysPS – enrichment score normalized to mean enrichment of random samples of the same
size;
• nMoreExtreme‘ – a number of times a random gene pair set had a more extreme enrichment
score value;
• size – size of the pathway after removing gene pairs not present in ‘names(stats)‘;
• leadingEdge – vector with indexes of leading edge gene pairs that drive the enrichment.
Examples
data(pathway_list,package="DysPIAData")
data(DysGPS_p53)
DyspiaRes_p53 <- DysPIA("kegg", DysGPS_p53, nperm = 100, minSize = 20, maxSize = 100)
DyspiaRes_p53 Example list of DysPIA result in p53 data.
Description
The list includes 81 pathway results from ’DysPIA.R’ as an example used in ’DyspiaSig.R’.
Usage
data(DyspiaRes_p53)
DyspiaSig DyspiaSig
Description
Returns the significant summary of DysPIA results.
Usage
DyspiaSig(DyspiaRes, fdr)
Arguments
DyspiaRes Table with results of running DysPIA().
fdr Significant threshold of ‘padj‘ (a BH-adjusted p-value).
Value
A list of significant DysPIA results, including correlation gain and correlation loss.
Examples
data(pathway_list,package="DysPIAData")
data(DyspiaRes_p53)
summary_p53 <- DyspiaSig(DyspiaRes_p53, 0.05) # filter with padj<0.05
DyspiaSimpleImpl DyspiaSimpleImpl
Description
Runs dysregulated pathway identification analysis for preprocessed input data.
Usage
DyspiaSimpleImpl(
pathwayScores,
pathwaysSizes,
pathwaysFiltered,
leadingEdges,
permPerProc,
seeds,
toKeepLength,
stats,
BPPARAM
)
Arguments
pathwayScores Vector with enrichment scores for the pathways in the database.
pathwaysSizes Vector of pathway sizes.
pathwaysFiltered
Filtered pathways.
leadingEdges Leading edge gene pairs.
permPerProc Parallelization parameter for permutations.
seeds Seed vector
toKeepLength Number of ‘pathways‘ that meet the condition for ‘minSize‘ and ‘maxSize‘.
stats Named vector of gene pair-level scores. Names should be the same as in pathways
of ‘pathwayDB‘.
BPPARAM Parallelization parameter used in bplapply. Can be used to specify cluster to
run. If not initialized explicitly or by setting ‘nproc‘ default value ‘bpparam()‘
is used.
Value
A table with DysPIA results. Each row corresponds to a tested pathway. The columns are the
following:
• pathway – name of the pathway as in ‘names(pathway)‘;
• pval – an enrichment p-value;
• padj – a BH-adjusted p-value;
• DysPS – enrichment score, same as in Broad DysPIA implementation;
• NDysPS – enrichment score normalized to mean enrichment of random samples of the same
size;
• nMoreExtreme‘ – a number of times a random gene pair set had a more extreme enrichment
score value;
• size – size of the pathway after removing gene pairs not present in ‘names(stats)‘;
• leadingEdge – vector with indexes of leading edge gene pairs that drive the enrichment.
gene_expression_p53 Example matrix of gene expression value.
Description
A dataset of transcriptional profiles from p53+ and p53 mutant cancer cell lines. It includes the
normalized gene expression for 6385 genes in 50 samples. Rownames are genes, columnnames are
samples.
Usage
data(gene_expression_p53)
sample_background Example list of gene pair background.
Description
The list of background gene pairs used in ’DysGPS.R’ and ’calEdgeCorScore_ESEA.R’, which is a part
of the ’combined_background’ in ’DysPIAData’.
Usage
data(sample_background)
setUpBPPARAM setUpBPPARAM
Description
Sets up parameter BPPARAM value.
Usage
setUpBPPARAM(nproc = 0, BPPARAM = NULL)
Arguments
nproc If not equal to zero sets BPPARAM to use nproc workers (default = 0).
BPPARAM Parallelization parameter used in bplapply. Can be used to specify cluster to
run. If not initialized explicitly or by setting ‘nproc‘ default value ‘bpparam()‘
is used.
Value
parameter BPPARAM value
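Examples (a minimal sketch, not from the package manual):
bp <- setUpBPPARAM(nproc = 2) # BiocParallel backend with two workers
# the returned object can then be passed on, e.g.
# DyspiaRes <- DysPIA("kegg", DysGPS_p53, nperm = 1000, BPPARAM = bp)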
Package ‘GenAlgo’
October 12, 2022
Version 2.2.0
Date 2020-10-13
Title Classes and Methods to Use Genetic Algorithms for Feature
Selection
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Depends R (>= 3.0)
Imports methods, stats, MASS, oompaBase (>= 3.0.1), ClassDiscovery
Suggests Biobase, xtable
Description Defines classes and methods that can be used
to implement genetic algorithms for feature selection. The idea is
that we want to select a fixed number of features to combine into a
linear classifier that can predict a binary outcome, and can use a
genetic algorithm heuristically to select an optimal set of features.
License Apache License (== 2.0)
LazyLoad yes
biocViews Microarray, Clustering
URL http://oompa.r-forge.r-project.org/
NeedsCompilation no
Repository CRAN
Date/Publication 2020-10-15 17:40:03 UTC
R topics documented:
gaTourResult... 2
GenAl... 2
GenAlg-clas... 4
GenAlg-tool... 6
mah... 7
tourData0... 9
gaTourResults Results of a Genetic Algorithm
Description
We ran a genetic algorithm to find the optimal ’fantasy’ team for the competition run by the Versus
broadcasting network for the 2009 Tour de France. In order to make the vignette run in a timely
fashion, we saved the results in this data object.
Usage
data(gaTourResults)
Format
There are four objects in the data file. The first is recurse, which is an object of the GenAlg-class
representing the final generation. The other three objects are all numeric vector of length 1100:
diversity contains the average population diversity at each generation, fitter contains the
maximum fitness, and meanfit contains the mean fitness.
Source
Kevin R. Coombes
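Examples (a short sketch, not from the package manual, showing how the stored objects can be inspected):
data(gaTourResults)
summary(recurse) # final generation, a GenAlg object
plot(fitter, type = "l", xlab = "Generation", ylab = "Maximum fitness")
lines(meanfit, col = "blue") # mean fitness per generation
plot(diversity, type = "l", xlab = "Generation", ylab = "Average diversity")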
GenAlg A generic Genetic Algorithm for feature selection
Description
These functions allow you to initialize (GenAlg) and iterate (newGeneration) a genetic algorithm to
perform feature selection for binary class prediction in the context of gene expression microarrays
or other high-throughput technologies.
Usage
GenAlg(data, fitfun, mutfun, context, pm=0.001, pc=0.5, gen=1)
newGeneration(ga)
popDiversity(ga)
Arguments
data The initial population of potential solutions, in the form of a data matrix with
one individual per row.
fitfun A function to compute the fitness of an individual solution. Must take two input
arguments: a vector of indices into rows of the population matrix, and a context
list within which any other items required by the function can be resolved. Must
return a real number; higher values indicate better fitness, with the maximum
fitness occurring at the optimal solution to the underlying numerical problem.
mutfun A function to mutate individual alleles in the population. Must take two argu-
ments: the starting allele and a context list as in the fitness function.
context A list of additional data required to perform mutation or to compute fitness. This
list is passed along as the second argument when fitfun and mutfun are called.
pm A real value between 0 and 1, representing the probability that an individual
allele will be mutated.
pc A real value between 0 and 1, representing the probability that crossover will
occur during reproduction.
gen An integer identifying the current generation.
ga An object of class GenAlg
Value
Both the GenAlg generator and the newGeneration functions return a GenAlg-class object. The
popDiversity function returns a real number representing the average diversity of the population.
Here diversity is defined by the number of alleles (selected features) that differ in two individuals.
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
See Also
GenAlg-class, GenAlg-tools, maha.
Examples
# generate some fake data
nFeatures <- 1000
nSamples <- 50
fakeData <- matrix(rnorm(nFeatures*nSamples), nrow=nFeatures, ncol=nSamples)
fakeGroups <- sample(c(0,1), nSamples, replace=TRUE)
myContext <- list(dataset=fakeData, gps=fakeGroups)
# initialize population
n.individuals <- 200
n.features <- 9
y <- matrix(0, n.individuals, n.features)
for (i in 1:n.individuals) {
y[i,] <- sample(1:nrow(fakeData), n.features)
}
# set up the genetic algorithm
my.ga <- GenAlg(y, selectionFitness, selectionMutate, myContext, 0.001, 0.75)
# advance one generation
my.ga <- newGeneration(my.ga)
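# A sketch continuing the example above (not from the original manual):
# inspect the new generation
popDiversity(my.ga) # average number of alleles differing between individuals
summary(my.ga)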
GenAlg-class Class "GenAlg"
Description
Objects of the GenAlg class represent one step (population) in the evolution of a genetic algorithm.
This algorithm has been customized to perform feature selection for the class prediction problem.
Usage
## S4 method for signature 'GenAlg'
as.data.frame(x, row.names=NULL, optional=FALSE, ...)
## S4 method for signature 'GenAlg'
as.matrix(x, ...)
## S4 method for signature 'GenAlg'
summary(object, ...)
Arguments
object object of class GenAlg
x object of class GenAlg
row.names character vector giving the row names for the data frame, or NULL
optional logical scalar. If TRUE, setting row names and converting column names to syn-
tactic names is optional.
... extra arguments for generic routines
Objects from the Class
Objects should be created by calls to the GenAlg generator; they will also be created automatically
as a result of applying the function newGeneration to an existing GenAlg object.
Slots
data: The initial population of potential solutions, in the form of a data matrix with one individual
per row.
fitfun: A function to compute the fitness of an individual solution. Must take two input argu-
ments: a vector of indices into the rows of the population matrix, and a context list within
which any other items required by the function can be resolved. Must return a real num-
ber; higher values indicate better fitness, with the maximum fitness occurring at the optimal
solution to the underlying numerical problem.
mutfun: A function to mutate individual alleles in the population. Must take two arguments: the
starting allele and a context list as in the fitness function.
p.mutation: numeric scalar between 0 and 1, representing the probability that an individual allele
will be mutated.
p.crossover: numeric scalar between 0 and 1, representing the probability that crossover will
occur during reproduction.
generation: integer scalar identifying the current generation.
fitness: numeric vector containing the fitness of all individuals in the population.
best.fit: A numeric value; the maximum fitness.
best.individual: A matrix (often with one row) containing the individual(s) achieving the
maximum fitness.
context: A list of additional data required to perform mutation or to compute fitness. This list is
passed along as the second argument when fitfun and mutfun are called.
Methods
as.data.frame signature(x = "GenAlg"): Converts the GenAlg object into a data frame. The
first column contains the fitness; remaining columns contain the selected features, given as
integer indices into the rows of the original data matrix.
as.matrix signature(x = "GenAlg"): Converts the GenAlg object into a matrix, following the
conventions of as.data.frame.
summary signature(object = "GenAlg"): Print a summary of the GenAlg object.
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
References
David Goldberg.
"Genetic Algorithms in Search, Optimization and Machine Learning."
Addison-Wesley, 1989.
See Also
GenAlg, GenAlg-tools, maha.
Examples
showClass("GenAlg")
GenAlg-tools Utility functions for selection and mutation in genetic algorithms
Description
These functions implement specific forms of mutation and fitness that can be used in genetic algo-
rithms for feature selection.
Usage
simpleMutate(allele, context)
selectionMutate(allele, context)
selectionFitness(arow, context)
Arguments
allele In the simpleMutate function, allele is a binary vector filled with 0’s and
1’s. In the selectionMutate function, allele is an integer (which is silently
ignored; see Details).
arow A vector of integer indices identifying the rows (features) to be selected from
the context$dataset matrix.
context A list or data frame containing auxiliary information that is needed to resolve
references from the mutation or fitness code. In both selectionMutate and
selectionFitness, context must contain a dataset component that is either
a matrix or a data frame. In selectionFitness, the context must also include
a grouping factor (with two levels) called gps.
Details
These functions represent ’callbacks’. They can be used in the function GenAlg, which creates
objects. They will then be called repeatedly (for each individual in the population) each time the
genetic algorithm is updated to the next generation.
The simpleMutate function assumes that chromosomes are binary vectors, so alleles simply take
on the value 0 or 1. A mutation of an allele, therefore, flips its state between those two possibilities.
The selectionMutate and selectionFitness functions, by contrast, are specialized to perform
feature selection assuming a fixed number K of features, with a goal of learning how to distinguish
between two different groups of samples. We assume that the underlying data consists of a data
frame (or matrix), with the rows representing features (such as genes) and the columns representing
samples. In addition, there must be a grouping vector (or factor) that assigns all of the sample
columns to one of two possible groups. These data are collected into a list, context, containing
a dataset matrix and a gps factor. An individual member of the population of potential solutions
is encoded as a length K vector of indices into the rows of the dataset. An individual allele,
therefore, is a single index identifying a row of the dataset. When mutating it, we assume that it
can be changed into any other possible allele; i.e., any other row number. To compute the fitness,
we use the Mahalanobis distance between the centers of the two groups defined by the gps factor.
Value
Both selectionMutate and simpleMutate return an integer value; in the simpler case, the value
is guaranteed to be a 0 or 1. The selectionFitness function returns a real number.
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>.org>
See Also
GenAlg, GenAlg-class, maha.
Examples
# generate some fake data
nFeatures <- 1000
nSamples <- 50
fakeData <- matrix(rnorm(nFeatures*nSamples), nrow=nFeatures, ncol=nSamples)
fakeGroups <- sample(c(0,1), nSamples, replace=TRUE)
myContext <- list(dataset=fakeData, gps=fakeGroups)
# initialize population
n.individuals <- 200
n.features <- 9
y <- matrix(0, n.individuals, n.features)
for (i in 1:n.individuals) {
y[i,] <- sample(1:nrow(fakeData), n.features)
}
# set up the genetic algorithm
my.ga <- GenAlg(y, selectionFitness, selectionMutate, myContext, 0.001, 0.75)
# advance one generation
my.ga <- newGeneration(my.ga)
maha Compute the (squared) Mahalanobis distance between two groups of
vectors
Description
The Mahalanobis distance between two groups of vectors
Usage
maha(data, groups, method = "mve")
Arguments
data A matrix with columns representing features (or variables) and rows represent-
ing independent samples
groups A factor or logical vector with length equal to the number of rows (samples) in
the data matrix
method A character string determining the method that should be used to estimate the
covariance matrix. The default value of "mve" uses the cov.mve function from
the MASS package. The other valid option is "var", which uses the var function
from the standard stats package.
Details
The Mahalanobis distance between two groups of vectors is the distance between their centers,
computed in the equivalent of a principal component space that accounts for different variances.
Value
Returns a numeric vector of length 1.
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
References
<NAME>. and <NAME>. and <NAME>.
Multivariate Analysis.
Academic Press, Reading, MA 1979, pp. 213–254.
See Also
cov.mve, var
Examples
nFeatures <- 40
nSamples <- 2*10
dataset <- matrix(rnorm(nSamples*nFeatures), ncol=nSamples)
groups <- factor(rep(c("A", "B"), each=10))
maha(dataset, groups)
tourData09 Tour de France 2009
Description
Each row represents the performance of a rider in the 2009 Tour de France; the name and team of
the rider are used as the row names. The four columns are the Cost (to include on a team in the
Versus fantasy challenge), Scores (based on daily finishing position), JerseyBonus (for any days
spent in one of the three main leader jerseys), and Total (the sum of Scores and JerseyBonus).
Usage
data(tourData09)
Format
A data frame with 102 rows and 4 columns.
Source
The data were collected in 2009 from the web site http://www.versus.com/tdfgames, which
appears to no longer exist.
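Examples (a quick sketch, not from the package manual):
data(tourData09)
head(tourData09) # Cost, Scores, JerseyBonus, Total for each rider
summary(tourData09$Cost)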
Package ‘BayesfMRI’
June 8, 2023
Type Package
Title Spatial Bayesian Methods for Task Functional MRI Studies
Version 0.3.5
Maintainer <NAME> <<EMAIL>>
Description Performs a spatial Bayesian general linear model (GLM) for task
functional magnetic resonance imaging (fMRI) data on the cortical surface.
Additional models include group analysis and inference to detect thresholded
areas of activation. Includes direct support for the 'CIFTI' neuroimaging
file format. For more information see <NAME>, <NAME>, <NAME>, F.
Lindgren, <NAME> (2020) <doi:10.1080/01621459.2019.1611582> and D.
Spencer, <NAME>, <NAME>, <NAME>, <NAME> (2022)
<doi:10.1016/j.neuroimage.2022.118908>.
Depends R (>= 3.6.0)
License GPL-3
Additional_repositories https://inla.r-inla-download.org/R/testing
Encoding UTF-8
Imports ciftiTools (>= 0.8.0), excursions, foreach, fMRItools, MASS,
Matrix, matrixStats, methods, Rcpp, stats, sp, utils
Suggests covr, abind, dplyr, geometry, ggplot2, grDevices, INLA (>=
0.0-1468840039), knitr, MatrixModels, parallel, purrr, rdist,
rmarkdown, SQUAREM, testthat (>= 3.0.0)
RoxygenNote 7.2.3
URL https://github.com/mandymejia/BayesfMRI
BugReports https://github.com/mandymejia/BayesfMRI/issues
LinkingTo RcppEigen, Rcpp
NeedsCompilation yes
Author <NAME> [aut, cre],
<NAME> [aut] (<https://orcid.org/0000-0002-9705-3605>),
<NAME> [ctb] (<https://orcid.org/0000-0001-7563-4727>),
<NAME> [ctb],
<NAME> [ctb],
Yu (Ryan) Yue [ctb]
Repository CRAN
Date/Publication 2023-06-08 17:52:54 UTC
R topics documented:
.findThet... 3
.getSqrtInvCp... 4
.initialK... 4
.logDetQ... 5
aic_Para... 5
ar_order_Para... 5
ar_smooth_Para... 6
BayesGL... 6
BayesGLM... 9
BayesGLM_cift... 11
Bayes_Para... 15
cderi... 15
combine_sessions_Para... 16
contrasts_Para... 16
emTol_Para... 17
EM_Para... 17
faces_Para... 17
HR... 18
id_activation... 18
INLA_Descriptio... 20
is.BfMRI.ses... 20
make_HRF... 21
make_mas... 22
make_mes... 23
mask_Param_vertice... 24
max.threads_Para... 24
mesh_Param_eithe... 24
mesh_Param_inl... 24
num.threads_Para... 25
plot.act_BayesGLM_cift... 25
plot.BayesGLM2_cift... 26
plot.BayesGLM_cift... 26
pw_estimat... 27
pw_smoot... 28
return_INLA_Para... 28
scale_BOLD_Para... 29
scale_design_Para... 29
seed_Para... 29
session_names_Para... 30
summary.act_BayesGL... 30
summary.act_BayesGLM_cift... 31
summary.BayesGL... 31
summary.BayesGLM... 32
summary.BayesGLM2_cift... 33
summary.BayesGLM_cift... 34
task_names_Para... 34
trim_INLA_Para... 35
verbose_Para... 35
vertex_area... 35
vertices_Para... 36
.findTheta Perform the EM algorithm of the Bayesian GLM fitting
Description
Perform the EM algorithm of the Bayesian GLM fitting
Usage
.findTheta(theta, spde, y, X, QK, Psi, A, Ns, tol, verbose = FALSE)
Arguments
theta the vector of initial values for theta
spde a list containing the sparse matrix elements Cmat, Gmat, and GtCinvG
y the vector of response values
X the sparse matrix of the data values
QK a sparse matrix of the prior precision found using the initial values of the hyper-
parameters
Psi a sparse matrix representation of the basis function mapping the data locations
to the mesh vertices
A a precomputed matrix crossprod(X%*%Psi)
Ns the number of columns for the random matrix used in the Hutchinson estimator
tol a value for the tolerance used for a stopping rule (compared to the squared norm
of the differences between theta(s) and theta(s-1))
verbose (logical) Should intermediate output be displayed?
.getSqrtInvCpp Get the prewhitening matrix for a single data location
Description
Get the prewhitening matrix for a single data location
Usage
.getSqrtInvCpp(AR_coeffs, nTime, avg_var)
Arguments
AR_coeffs a length-p vector where p is the AR order
nTime (integer) the length of the time series that is being prewhitened
avg_var a scalar value of the residual variances of the AR model
.initialKP Find the initial values of kappa2 and phi
Description
Find the initial values of kappa2 and phi
Usage
.initialKP(theta, spde, w, n_sess, tol, verbose)
Arguments
theta a vector of length two containing the range and scale parameters kappa2 and
phi, in that order
spde a list containing the sparse matrix elements Cmat, Gmat, and GtCinvG
w the beta_hat estimates for a single task
n_sess the number of sessions
tol the stopping rule tolerance
verbose (logical) Should intermediate output be displayed?
.logDetQt Find the log of the determinant of Q_tilde
Description
Find the log of the determinant of Q_tilde
Usage
.logDetQt(kappa2, in_list, n_sess)
Arguments
kappa2 a scalar
in_list a list with elements Cmat, Gmat, and GtCinvG
n_sess the integer number of sessions
aic_Param aic
Description
aic
Arguments
aic Use the AIC to select AR model order between 0 and ar_order? Default:
FALSE.
ar_order_Param ar_order
Description
ar_order
Arguments
ar_order (numeric) Controls prewhitening. If greater than zero, this should be a number
indicating the order of the autoregressive model to use for prewhitening. If zero,
do not prewhiten. Default: 6. For multi-session models, note that a single AR
model is used; the parameters are estimated by averaging the estimates from
each session.
ar_smooth_Param ar_smooth
Description
ar_smooth
Arguments
ar_smooth (numeric) FWHM parameter for smoothing the AR model coefficient estimates
for prewhitening. Remember that σ = FWHM / (2*sqrt(2*log(2))). Set to 0 or NULL to not do
any smoothing. Default: 5.
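As a quick check of this relationship, the Gaussian standard deviation implied by the
default FWHM of 5 can be computed directly in R (illustrative arithmetic only):

5 / (2 * sqrt(2 * log(2)))   # approximately 2.12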
BayesGLM BayesGLM
Description
Performs spatial Bayesian GLM for fMRI task activation
Usage
BayesGLM(
data,
vertices = NULL,
faces = NULL,
mesh = NULL,
mask = NULL,
task_names = NULL,
session_names = NULL,
combine_sessions = TRUE,
scale_BOLD = c("auto", "mean", "sd", "none"),
scale_design = TRUE,
Bayes = TRUE,
ar_order = 6,
ar_smooth = 5,
aic = FALSE,
num.threads = 4,
return_INLA = c("trimmed", "full", "minimal"),
verbose = 1,
meanTol = 1e-06,
varTol = 1e-06
)
Arguments
data A list of sessions in the "BfMRI.sess" object format. Each session is a list with
elements "BOLD", "design", and optionally "nuisance". Each element should
be a numeric matrix with T rows. The name of each element in data is the name
of that session. See ?is.BfMRI.sess for details.
Note that the argument session_names can be used instead of providing the
session names as the names of the elements in data.
vertices, faces
If Bayes, the geometry data can be provided with either both the vertices and
faces arguments, or with the mesh argument.
vertices is a V × 3 matrix, where each row contains the Euclidean coordinates
at which a given vertex in the mesh is located. V is the number of vertices in the
mesh.
faces is a F × 3 matrix, where each row contains the vertex indices for a given
triangular face in the mesh. F is the number of faces in the mesh.
mesh If Bayes, the geometry data can be provided with either both the vertices and
faces arguments, or with the mesh argument.
mesh is an "inla.mesh" object. This can be created for surface data using
make_mesh.
mask (Optional) A length V logical vector indicating the vertices to include.
task_names (Optional) Names of tasks represented in design matrix.
session_names (Optional, and only relevant for multi-session modeling) Names of each session.
Default: NULL. In BayesGLM this argument will overwrite the names of the list
entries in data, if both exist.
combine_sessions
If multiple sessions are provided, should their data be combined and analyzed
as a single session?
If TRUE (default), the multiple sessions will be concatenated along time after
scaling and nuisance regression, but before prewhitening. If FALSE, each ses-
sion will be analyzed separately, except that a single estimate of the AR model
coefficients for prewhitening is used, estimated across all sessions.
scale_BOLD Option for scaling the BOLD response.
"auto" (default) will use "mean" scaling except if demeaned data is detected (if
any mean is less than one), in which case "sd" scaling will be used instead.
"mean" scaling will scale the data to percent local signal change.
"sd" scaling will scale the data by local standard deviation.
"none" will only center the data, not scale it.
scale_design Scale the design matrix by dividing each column by its maximum and then sub-
tracting the mean? Default: TRUE. If FALSE, the design matrix is centered but
not scaled.
Bayes If TRUE (default), will fit a spatial Bayesian GLM in addition to the classical
GLM. (The classical GLM is always returned.)
ar_order (numeric) Controls prewhitening. If greater than zero, this should be a number
indicating the order of the autoregressive model to use for prewhitening. If zero,
do not prewhiten. Default: 6. For multi-session models, note that a single AR
model is used; the parameters are estimated by averaging the estimates from
each session.
ar_smooth (numeric) FWHM parameter for smoothing the AR model coefficient estimates
for prewhitening. Remember that σ = FWHM / (2*sqrt(2*log(2))). Set to 0 or NULL to not do
any smoothing. Default: 5.
aic Use the AIC to select AR model order between 0 and ar_order? Default:
FALSE.
num.threads The maximum number of threads to use for parallel computations: prewhitening
parameter estimation, and the inla-program model estimation. Default: 4. Note
that parallel prewhitening requires the parallel package.
return_INLA Return the INLA model object? (It can be large.) Use "trimmed" (default) to
return only the more relevant results, which is enough for both id_activations
and BayesGLM2, "minimal" to return just enough for BayesGLM2 but not id_activations,
or "full" to return the full output of inla.
verbose Should updates be printed? Use 1 (default) for occasional updates, 2 for occa-
sional updates as well as running INLA in verbose mode (if applicable), or 0 for
no updates.
meanTol, varTol
Tolerance for mean and variance of each data location. Locations which do not
meet these thresholds are masked out of the analysis. Default: 1e-6 for both.
Value
A "BayesGLM" object: a list with elements
INLA_model_obj The full result of the call to INLA::inla.
task_estimates The task coefficients for the Bayesian model.
result_classical Results from the classical model: task estimates, task standard error estimates,
residuals, degrees of freedom, and the mask.
mesh The model mesh including only the locations analyzed, i.e. within mask, without missing
values, and meeting meanTol and varTol.
mesh_orig The original mesh provided.
mask A mask of mesh_orig indicating the locations inside mesh.
design The design matrix, after centering and scaling, but before any nuisance regression or prewhiten-
ing.
task_names The names of the tasks.
session_names The names of the sessions.
hyperpar_posteriors Hyperparameter posterior densities.
theta_estimates Theta estimates from the Bayesian model.
posterior_Sig_inv For joint group modelling.
mu_theta For joint group modelling.
Q_theta For joint group modelling.
y For joint group modelling: The BOLD data after any centering, scaling, nuisance regression, or
prewhitening.
X For joint group modelling: The design matrix after any centering, scaling, nuisance regression,
or prewhitening.
prewhiten_info Vectors of values across locations: phi (AR coefficients averaged across sessions),
sigma_sq (residual variance averaged across sessions), and AIC (the maximum across ses-
sions).
call match.call() for this function call.
INLA Requirement
This function requires the INLA package, which is not a CRAN package. See https://www.
r-inla.org/download-install for easy installation instructions.
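The following is a minimal, illustrative sketch of a call (not taken from the package
manual). It simulates one session in the "BfMRI.sess" format and sets Bayes = FALSE so
that only the classical GLM is fit; it assumes that in this case the mesh arguments may
be omitted (based on the vertices/faces description above) and that INLA is not needed.

nT <- 120
nV <- 150
sess <- list(
  sess1 = list(
    BOLD   = matrix(rnorm(nT * nV), nrow = nT),
    design = cbind(taskA = rep(rep(c(0, 1), each = 10), length.out = nT))
  )
)
fit <- BayesGLM(
  data = sess,
  Bayes = FALSE,       # classical GLM only (assumption: no mesh or INLA required)
  ar_order = 0,        # skip prewhitening for speed in this sketch
  scale_BOLD = "none"
)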
BayesGLM2 Group-level Bayesian GLM
Description
Performs group-level Bayesian GLM estimation and inference using the joint approach described
in Mejia et al. (2020).
Usage
BayesGLM2(
results,
contrasts = NULL,
quantiles = NULL,
excursion_type = NULL,
contrast_names = NULL,
gamma = 0,
alpha = 0.05,
nsamp_theta = 50,
nsamp_beta = 100,
num_cores = NULL,
verbose = 1
)
BayesGLM_group(
results,
contrasts = NULL,
quantiles = NULL,
excursion_type = NULL,
gamma = 0,
alpha = 0.05,
nsamp_theta = 50,
nsamp_beta = 100,
num_cores = NULL,
verbose = 1
)
Arguments
results Either (1) a length N list of "BayesGLM" objects, or (2) a length N character
vector of files storing "BayesGLM" objects saved with saveRDS.
contrasts (Optional) A list of contrast vectors that specify the group-level summaries of
interest. If NULL, use contrasts that compute the average of each field (task HRF)
across subjects and sessions.
Each contrast vector is a length K*S*N vector specifying a group-level summary
of interest, where K is the number of fields (task HRFs), S is the number of
sessions, and N is the number of subjects. For a single subject-session the
contrast for the first field would be:
contrast1 <- c(1, rep(0, K-1))
and so the full contrast vector representing the group average across sessions
and subjects for the first task would be:
rep(rep(contrast1, S), N) /S /N.
To obtain the group average for the first task, for just the first sessions from each
subject:
rep(c(contrast1, rep(0, K*(S-1))), N) /N.
To obtain the mean difference between the first and second sessions, for the first
task:
rep(c(contrast1, -contrast1, rep(0, K*(S-2))), N) /N.
To obtain the mean across sessions of the first task, just for the first subject:
c(rep(contrast1, S), rep(0, K*S*(N-1))) /S.
quantiles (Optional) Vector of posterior quantiles to return in addition to the posterior
mean.
excursion_type (For inference only) The type of excursion function for the contrast (">", "<",
"!="), or a vector thereof (each element corresponding to one contrast). If NULL,
no inference performed.
contrast_names (Optional) Names of contrasts.
gamma (For inference only) Activation threshold for the excursion set, or a vector thereof
(each element corresponding to one contrast). Default: 0.
alpha (For inference only) Significance level for activation for the excursion set, or a
vector thereof (each element corresponding to one contrast). Default: .05.
nsamp_theta Number of theta values to sample from posterior. Default: 50.
nsamp_beta Number of beta vectors to sample conditional on each theta value sampled. De-
fault: 100.
num_cores The number of cores to use for sampling betas in parallel. If NULL (default), do
not run in parallel.
verbose Should updates be printed? Use 1 (default) for occasional updates, 2 for occa-
sional updates as well as running INLA in verbose mode (if applicable), or 0 for
no updates.
Value
A list containing the estimates, PPMs and areas of activation for each contrast.
INLA Requirement
This function requires the INLA package, which is not a CRAN package. See https://www.
r-inla.org/download-install for easy installation instructions.
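To make the contrast specification above concrete, here is a small illustrative
computation of the group-average contrast for the first field; the values of K, S,
and N are arbitrary.

K <- 3; S <- 2; N <- 10
contrast1 <- c(1, rep(0, K - 1))
group_avg_task1 <- rep(rep(contrast1, S), N) / S / N
length(group_avg_task1) == K * S * N   # TRUE: one entry per field-session-subject
sum(group_avg_task1)                   # 1: the weights form an average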
BayesGLM_cifti BayesGLM for CIFTI
Description
Performs spatial Bayesian GLM on the cortical surface for fMRI task activation
Usage
BayesGLM_cifti(
cifti_fname,
surfL_fname = NULL,
surfR_fname = NULL,
brainstructures = c("left", "right"),
design = NULL,
onsets = NULL,
TR = NULL,
nuisance = NULL,
dHRF = c(0, 1, 2),
dHRF_as = c("auto", "nuisance", "task"),
hpf = NULL,
DCT = if (is.null(hpf)) {
4
} else {
NULL
},
resamp_res = 10000,
task_names = NULL,
session_names = NULL,
combine_sessions = TRUE,
scale_BOLD = c("auto", "mean", "sd", "none"),
scale_design = TRUE,
Bayes = TRUE,
ar_order = 6,
ar_smooth = 5,
aic = FALSE,
num.threads = 4,
return_INLA = c("trimmed", "full", "minimal"),
verbose = 1,
meanTol = 1e-06,
varTol = 1e-06
)
Arguments
cifti_fname fMRI timeseries data in CIFTI format ("*.dtseries.nii"). For single-session anal-
ysis this can be a file path to a CIFTI file or a "xifti" object from the ciftiTools
package. For multi-session analysis this can be a vector of file paths or a list of
"xifti" objects.
surfL_fname Left cortex surface geometry in GIFTI format ("*.surf.gii"). This can be a file
path to a GIFTI file or a "surf" object from the ciftiTools package. This
argument is only used if brainstructures includes "left" and Bayes==TRUE.
If it’s not provided, the HCP group-average inflated surface included in the
ciftiTools package will be used.
surfR_fname Right cortex surface geometry in GIFTI format ("*.surf.gii"). This can be a file
path to a GIFTI file or a "surf" object from the ciftiTools package. This ar-
gument is only used if brainstructures includes "right" and Bayes==TRUE.
If it’s not provided, the HCP group-average inflated surface included in the
ciftiTools package will be used.
brainstructures
Character vector indicating which brain structure(s) to analyze: "left" (left cor-
tical surface) and/or "right" (right cortical surface). Default: c("left","right")
(both hemispheres). Note that the subcortical models have not yet been imple-
mented.
design, onsets, TR
Either provide design directly, or provide both onsets and TR from which the
design matrix or matrices will be constructed.
design is a T × K task design matrix. Each column represents the expected
BOLD response due to each task, a convolution of the hemodynamic response
function (HRF) and the task stimulus. Note that the scale of the regressors
will affect the scale and interpretation of the beta coefficients, so imposing a
proper scale is recommended; see the scale_design argument, which by de-
fault is TRUE. Task names should be the column names, if not provided by the
task_names argument. For multi-session modeling, this argument should be a
list of such matrices. To model HRF derivatives, calculate the derivatives of
the task columns beforehand (see the helper function cderiv which computes
the discrete central derivative) and either add them to design to model them as
tasks, or nuisance to model them as nuisance signals; it’s recommended to then
drop the first and last timepoints because the discrete central derivative doesn’t
exist at the time series boundaries. Do note that INLA computation times in-
crease greatly if the design matrix has more than five columns, so it might be
required to add these derivatives to nuisance rather than design.
onsets is an L-length list in which the name of each element is the name of
the corresponding task, and the value of each element is a matrix of onsets (first
column) and durations (second column) for each stimuli (each row) of the cor-
responding task. The units of both columns is seconds. For multi-session mod-
eling, this argument should be a list of such lists. To model HRF derivatives, use
the arguments dHRF and dHRF_as. If dHRF==0 or dHRF_as=="nuisance", the to-
tal number of columns in the design matrix, K, will equal L. If dHRF_as=="task",
K will equal L times dHRF+1.
TR is the temporal resolution of the data, in seconds.
nuisance (Optional) A T × J matrix of nuisance signals. These are regressed from the
fMRI data and the design matrix prior to the GLM computation. For multi-
session modeling, this argument should be a list of such matrices.
dHRF, dHRF_as Only applicable if onsets and TR are provided. These arguments enable the
modeling of HRF derivatives.
Set dHRF to 1 to model the temporal derivatives of each task, 2 to add the second
derivatives too, or 0 to not model the derivatives. Default: 1.
If dHRF > 0, dHRF_as controls whether the derivatives are modeled as "nuisance"
signals to regress out, "tasks", or "auto" (default) to treat them as tasks unless
the total number of columns in the design matrix would exceed five.
hpf, DCT Add DCT bases to nuisance to apply a temporal high-pass filter to the data?
Only one of these arguments should be provided. hpf should be the filter fre-
quency; if it is provided, TR must be provided too. The number of DCT bases
to include will be computed to yield a filter with as close a frequency to hpf as
possible. Alternatively, DCT can be provided to directly specify the number of
DCT bases to include.
Default: DCT=4. For typical TR, four DCT bases amounts to a lower frequency
cutoff than the approximately .01 Hz used in most studies. We selected this
default to err on the side of retaining more low-frequency information, but we
recommend setting these arguments to values most appropriate for the data anal-
ysis at hand.
Using at least two DCT bases is at least as effective as including linear and quadratic
drift terms in the design matrix. So if DCT detrending is being used, there is no need
to add linear and quadratic drift terms to nuisance.
resamp_res The number of vertices to which each cortical surface should be resampled, or
NULL to not resample. For computational feasibility, a value of 10000 or lower
is recommended.
task_names (Optional) Names of tasks represented in design matrix.
session_names (Optional, and only relevant for multi-session modeling) Names of each session.
Default: NULL. In BayesGLM this argument will overwrite the names of the list
entries in data, if both exist.
combine_sessions
If multiple sessions are provided, should their data be combined and analyzed
as a single session?
If TRUE (default), the multiple sessions will be concatenated along time after
scaling and nuisance regression, but before prewhitening. If FALSE, each ses-
sion will be analyzed separately, except that a single estimate of the AR model
coefficients for prewhitening is used, estimated across all sessions.
scale_BOLD Option for scaling the BOLD response.
"auto" (default) will use "mean" scaling except if demeaned data is detected (if
any mean is less than one), in which case "sd" scaling will be used instead.
"mean" scaling will scale the data to percent local signal change.
"sd" scaling will scale the data by local standard deviation.
"none" will only center the data, not scale it.
scale_design Scale the design matrix by dividing each column by its maximum and then sub-
tracting the mean? Default: TRUE. If FALSE, the design matrix is centered but
not scaled.
Bayes If TRUE (default), will fit a spatial Bayesian GLM in addition to the classical
GLM. (The classical GLM is always returned.)
ar_order (numeric) Controls prewhitening. If greater than zero, this should be a number
indicating the order of the autoregressive model to use for prewhitening. If zero,
do not prewhiten. Default: 6. For multi-session models, note that a single AR
model is used; the parameters are estimated by averaging the estimates from
each session.
ar_smooth (numeric) FWHM parameter for smoothing the AR model coefficient estimates
for prewhitening. Remember that σ = FWHM / (2*sqrt(2*log(2))). Set to 0 or NULL to not do
any smoothing. Default: 5.
aic Use the AIC to select AR model order between 0 and ar_order? Default:
FALSE.
num.threads The maximum number of threads to use for parallel computations: prewhitening
parameter estimation, and the inla-program model estimation. Default: 4. Note
that parallel prewhitening requires the parallel package.
return_INLA Return the INLA model object? (It can be large.) Use "trimmed" (default) to
return only the more relevant results, which is enough for both id_activations
and BayesGLM2, "minimal" to return just enough for BayesGLM2 but not id_activations,
or "full" to return the full output of inla.
verbose Should updates be printed? Use 1 (default) for occasional updates, 2 for occa-
sional updates as well as running INLA in verbose mode (if applicable), or 0 for
no updates.
meanTol, varTol
Tolerance for mean and variance of each data location. Locations which do not
meet these thresholds are masked out of the analysis. Default: 1e-6 for both.
Value
An object of class "BayesGLM_cifti": a list with elements
betas_Bayesian The task coefficients for the Bayesian model.
betas_classical The task coefficients for the classical model.
GLMs_Bayesian The entire list of GLM results, except for parameters estimated for the classical
model.
GLMs_classical Parameters estimated for the classical model from the GLM.
session_names The names of the sessions.
n_sess_orig The number of sessions (before averaging, if applicable).
task_names The task part of the design matrix, after centering and scaling, but before any nuisance
regression or prewhitening.
INLA latent fields limit
INLA computation times increase greatly when the number of columns in the design matrix exceeds
five. So if there are more than five tasks, or three or more tasks each with its temporal derivative
being modeled as a task, BayesGLM will raise a warning. In cases like the latter, we recommend
modeling the temporal derivatives as nuisance signals using the nuisance argument, rather than
modeling them as tasks.
Connectome Workbench Requirement
This function uses a system wrapper for the ’wb_command’ executable. The user must first
download and install the Connectome Workbench, available from
https://www.humanconnectome.org/software/get-connectome-workbench.
INLA Requirement
This function requires the INLA package, which is not a CRAN package. See https://www.
r-inla.org/download-install for easy installation instructions.
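A minimal sketch of a single-session call (the file names are hypothetical and not from
the package manual), constructing the design from onsets and TR:

onsets <- list(
  taskA = cbind(c(2, 17, 23), 4),   # onset times (sec) and 4-sec durations
  taskB = cbind(c(8, 30), 6)
)
fit <- BayesGLM_cifti(
  cifti_fname = "subj01_task.dtseries.nii",            # hypothetical file
  surfL_fname = "subj01.L.midthickness.surf.gii",      # hypothetical file
  surfR_fname = "subj01.R.midthickness.surf.gii",      # hypothetical file
  onsets      = onsets,
  TR          = 0.72,
  resamp_res  = 10000
)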
Bayes_Param Bayes
Description
Bayes
Arguments
Bayes If TRUE (default), will fit a spatial Bayesian GLM in addition to the classical
GLM. (The classical GLM is always returned.)
cderiv Central derivative
Description
Take the central derivative of numeric vectors by averaging the forward and backward differences.
Usage
cderiv(x)
Arguments
x A numeric matrix, or a vector which will be converted to a single-column matrix.
Value
A matrix or vector the same dimensions as x, with the derivative taken for each column of x. The
first and last rows may need to be deleted, depending on the application.
Examples
x <- cderiv(seq(5))
stopifnot(all(x == c(.5, 1, 1, 1, .5)))
combine_sessions_Param
combine_sessions
Description
combine_sessions
Arguments
combine_sessions
If multiple sessions are provided, should their data be combined and analyzed
as a single session?
If TRUE (default), the multiple sessions will be concatenated along time after
scaling and nuisance regression, but before prewhitening. If FALSE, each ses-
sion will be analyzed separately, except that a single estimate of the AR model
coefficients for prewhitening is used, estimated across all sessions.
contrasts_Param contrasts
Description
contrasts
Arguments
contrasts List of contrast vectors to be passed to inla::inla.
emTol_Param emTol
Description
emTol
Arguments
emTol The stopping tolerance for the EM algorithm. Default: 1e-3.
EM_Param EM
Description
EM
Arguments
EM (logical) Should the EM implementation of the Bayesian GLM be used? De-
fault: FALSE. This method is still in development.
faces_Param faces
Description
faces
Arguments
faces An F × 3 matrix, where each row contains the vertex indices for a given trian-
gular face in the mesh. F is the number of faces in the mesh.
HRF Canonical (double-gamma) HRF
Description
Calculate the HRF from a time vector and parameters. Optionally compute the first or second
derivative of the HRF instead.
Usage
HRF(t, deriv = 0, a1 = 6, b1 = 0.9, a2 = 12, b2 = 0.9, c = 0.35)
Arguments
t time vector
deriv 0 (default) for the HRF, 1 for the first derivative of the HRF, or 2 for the second
derivative of the HRF.
a1 delay of response. Default: 6
b1 response dispersion. Default: 0.9
a2 delay of undershoot. Default: 12
b2 dispersion of undershoot. Default: 0.9
c scale of undershoot. Default: 0.35
Value
HRF vector (or dHRF, or d2HRF) corresponding to time
Examples
downsample <- 100
HRF(seq(0, 30, by=1/downsample))
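# Illustrative addition (not from the package manual): plot the canonical HRF
# together with its first and second derivatives using the deriv argument.
t <- seq(0, 30, by = 0.01)
plot(t, HRF(t), type = "l", xlab = "time (s)", ylab = "HRF")
lines(t, HRF(t, deriv = 1), lty = 2)   # first derivative
lines(t, HRF(t, deriv = 2), lty = 3)   # second derivative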
id_activations Identify task activations
Description
Identify areas of activation for each task from the result of BayesGLM or BayesGLM_cifti.
Usage
id_activations(
model_obj,
tasks = NULL,
sessions = NULL,
method = c("Bayesian", "classical"),
alpha = 0.05,
threshold = NULL,
correction = c("FWER", "FDR", "none"),
verbose = 1
)
Arguments
model_obj Result of BayesGLM or BayesGLM_cifti model call, of class "BayesGLM" or
"BayesGLM_cifti".
tasks The task(s) to identify activations for. Give either the name(s) as a character
vector, or the numerical indices. If NULL (default), analyze all tasks.
sessions The session(s) to identify activations for. Give either the name(s) as a character
vector, or the numerical indices. If NULL (default), analyze the first session.
Currently, if multiple sessions are provided, activations are identified separately
for each session. (Information is not combined between the different sessions.)
method "Bayesian" (default) or "classical". If model_obj does not have Bayesian
results because Bayes was set to FALSE, only the "classical" method can be
used.
alpha Significance level. Default: 0.05.
threshold Activation threshold, for example 1 for 1% change if scale_BOLD=="mean" during
model estimation. Setting a threshold is required for the Bayesian method; NULL
(default) will use a threshold of zero for the classical method.
correction For the classical method only: Type of multiple comparisons correction: "FWER"
(Bonferroni correction, the default), "FDR" (Benjamini Hochberg), or "none".
verbose Should updates be printed? Use 1 (default) for occasional updates, 2 for occa-
sional updates as well as running INLA in verbose mode (if applicable), or 0 for
no updates.
Value
An "act_BayesGLM" or "act_BayesGLM_cifti" object, a list which indicates the activated loca-
tions along with related information.
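A brief sketch of typical use (assuming fit is the result of the BayesGLM_cifti sketch
shown earlier; the task name is hypothetical):

act <- id_activations(
  fit,
  tasks      = "taskA",
  method     = "classical",   # the classical method does not require a threshold
  alpha      = 0.05,
  correction = "FDR"
)
summary(act)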
INLA_Description INLA
Description
INLA
INLA Requirement
This function requires the INLA package, which is not a CRAN package. See https://www.
r-inla.org/download-install for easy installation instructions.
is.BfMRI.sess Validate a "BfMRI.sess" object.
Description
Check if object is valid for a "BfMRI.sess" object.
Usage
is.BfMRI.sess(x)
Arguments
x The putative "BfMRI.sess" object.
Details
A "BfMRI.sess" object is a list of length S, where S is the number of sessions in the analysis.
Each list entry corresponds to a separate session, and should itself be a list with these named fields:
• "BOLD": a T × V BOLD matrix. Rows are time points; columns are data locations (vertices/voxels).
• "design": a T × K matrix containing the K task regressors.
• "nuisance": optional. A T × J matrix containing the J nuisance regressors.
In addition, all sessions must have the same number of data locations, V, and tasks, K.
Value
Logical. Is x a valid "BfMRI.sess" object?
Examples
nT <- 180
nV <- 700
BOLD1 <- matrix(rnorm(nT*nV), nrow=nT)
BOLD2 <- matrix(rnorm(nT*nV), nrow=nT)
onsets1 <- list(taskA=cbind(c(2,17,23),4)) # one task, 3 four sec-long stimuli
onsets2 <- list(taskA=cbind(c(1,18,25),4))
TR <- .72 # .72 seconds per volume, or (1/.72) Hz
duration <- nT # session is 180 volumes long (180*.72 seconds long)
design1 <- make_HRFs(onsets1, TR, duration)$design
design2 <- make_HRFs(onsets2, TR, duration)$design
x <- list(
sessionOne = list(BOLD=BOLD1, design=design1),
sessionTwo = list(BOLD=BOLD2, design=design2)
)
stopifnot(is.BfMRI.sess(x))
make_HRFs Make HRFs
Description
Create HRF design matrix columns from onsets and durations
Usage
make_HRFs(
onsets,
TR,
duration,
dHRF = c(0, 1, 2),
dHRF_as = c("auto", "nuisance", "task"),
downsample = 100,
verbose = FALSE
)
Arguments
onsets L-length list in which the name of each element is the name of the corresponding
task, and the value of each element is a matrix of onsets (first column) and
durations (second column) for each stimuli (each row) of the corresponding task.
TR Temporal resolution of the data, in seconds.
duration The number of volumes in the fMRI data.
dHRF Set to 1 to add the temporal derivative of each column in the design matrix, 2 to
add the second derivatives too, or 0 to not add any columns. Default: 1.
dHRF_as Only applies if dHRF > 0. Model the temporal derivatives as "nuisance" signals
to regress out, "tasks", or "auto" to treat them as tasks unless the total number
of columns in the design matrix (i.e. the total number of tasks, times dHRF+1),
would be >=10, the limit for INLA.
downsample Downsample factor for convolving stimulus boxcar or stick function with canon-
ical HRF. Default: 100.
verbose If applicable, print a message saying how the HRF derivatives will be modeled?
Default: FALSE.
Value
List with the design matrix and/or the nuisance matrix containing the HRF-convolved stimuli as
columns, depending on dHRF_as.
Examples
onsets <- list(taskA=cbind(c(2,17,23),4)) # one task, 3 four sec-long stimuli
TR <- .72 # .72 seconds per volume, or (1/.72) Hz
duration <- 300 # session is 300 volumes long (300*.72 seconds long)
make_HRFs(onsets, TR, duration)
make_mask Mask out invalid data
Description
Mask out data locations that are invalid (missing data, low mean, or low variance) for any session.
Usage
make_mask(data, meanTol = 1e-06, varTol = 1e-06, verbose = TRUE)
Arguments
data A list of sessions, where each session is a list with elements BOLD, design, and
optionally nuisance. See ?is.BfMRI.sess for details.
meanTol, varTol
Tolerance for mean and variance of each data location. Locations which do not
meet these thresholds are masked out of the analysis. Defaults: 1e-6.
verbose Print messages counting how many locations are removed? Default: TRUE.
Value
A logical vector indicating locations that are valid across all sessions.
Examples
nT <- 30
nV <- 400
BOLD1 <- matrix(rnorm(nT*nV), nrow=nT)
BOLD1[,seq(30,50)] <- NA
BOLD2 <- matrix(rnorm(nT*nV), nrow=nT)
BOLD2[,65] <- BOLD2[,65] / 1e10
data <- list(sess1=list(BOLD=BOLD1, design=NULL), sess2=list(BOLD=BOLD2, design=NULL))
make_mask(data)
make_mesh Make Mesh
Description
Make INLA triangular mesh from faces and vertices
Usage
make_mesh(vertices, faces, use_INLA = FALSE)
Arguments
vertices A V × 3 matrix, where each row contains the Euclidean coordinates at which a
given vertex in the mesh is located. V is the number of vertices in the mesh
faces An F × 3 matrix, where each row contains the vertex indices for a given trian-
gular face in the mesh. F is the number of faces in the mesh.
use_INLA (logical) Use the INLA package to make the mesh? Default: FALSE. Otherwise,
mesh construction is based on an internal function, galerkin_db.
Value
INLA triangular mesh
INLA Requirement
This function requires the INLA package, which is not a CRAN package. See https://www.
r-inla.org/download-install for easy installation instructions.
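A toy sketch (not from the package manual) using a tetrahedron with 4 vertices and 4
triangular faces; with the default use_INLA = FALSE the internal constructor is used,
so this particular call is assumed not to need INLA (based on the argument description
above).

vertices <- rbind(c(0, 0, 0), c(1, 0, 0), c(0, 1, 0), c(0, 0, 1))
faces <- rbind(c(1, 2, 3), c(1, 2, 4), c(1, 3, 4), c(2, 3, 4))
mesh <- make_mesh(vertices, faces, use_INLA = FALSE)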
mask_Param_vertices mask: vertices
Description
mask: vertices
Arguments
mask A length V logical vector indicating if each vertex is within the input mask.
max.threads_Param max.threads
Description
max.threads
Arguments
max.threads The maximum number of threads to use in the inla-program for model estima-
tion. 0 (default) will use the maximum number of threads allowed by the system.
mesh_Param_either mesh: either
Description
mesh: either
Arguments
mesh An "inla.mesh" object (see make_mesh for surface data)
mesh_Param_inla mesh: INLA only
Description
mesh: INLA only
Arguments
mesh An "inla.mesh" object (see make_mesh for surface data).
num.threads_Param num.threads
Description
num.threads
Arguments
num.threads The maximum number of threads to use for parallel computations: prewhitening
parameter estimation, and the inla-program model estimation. Default: 4. Note
that parallel prewhitening requires the parallel package.
plot.act_BayesGLM_cifti
S3 method: use view_xifti_surface to plot an
"act_BayesGLM_cifti" object
Description
S3 method: use view_xifti_surface to plot an "act_BayesGLM_cifti" object
Usage
## S3 method for class 'act_BayesGLM_cifti'
plot(x, idx = NULL, session = NULL, ...)
Arguments
x An object of class "act_BayesGLM_cifti"
idx Which task should be plotted? Give the numeric indices or the names. NULL
(default) will show all tasks. This argument overrides the idx argument to
view_xifti_surface.
session Which session should be plotted? NULL (default) will use the first.
... Additional arguments to view_xifti_surface
Value
Result of the call to ciftiTools::view_cifti_surface.
plot.BayesGLM2_cifti S3 method: use view_xifti_surface to plot a "BayesGLM2_cifti"
object
Description
S3 method: use view_xifti_surface to plot a "BayesGLM2_cifti" object
Usage
## S3 method for class 'BayesGLM2_cifti'
plot(x, idx = NULL, what = c("contrasts", "activations"), ...)
Arguments
x An object of class "BayesGLM2_cifti"
idx Which contrast should be plotted? Give the numeric index. NULL (default) will
show all contrasts. This argument overrides the idx argument to view_xifti_surface.
what Estimates of the "contrasts" (default), or their thresholded "activations".
... Additional arguments to view_xifti_surface
Value
Result of the call to ciftiTools::view_cifti_surface.
plot.BayesGLM_cifti S3 method: use view_xifti_surface to plot a "BayesGLM_cifti"
object
Description
S3 method: use view_xifti_surface to plot a "BayesGLM_cifti" object
Usage
## S3 method for class 'BayesGLM_cifti'
plot(x, idx = NULL, session = NULL, method = NULL, zlim = c(-1, 1), ...)
Arguments
x An object of class "BayesGLM_cifti"
idx Which task should be plotted? Give the numeric indices or the names. NULL
(default) will show all tasks. This argument overrides the idx argument to
view_xifti_surface.
session Which session should be plotted? NULL (default) will use the first.
method "Bayes" or "classical". NULL (default) will use the Bayesian results if available,
and the classical results if not.
zlim Overrides the zlim argument for view_xifti_surface. Default: c(-1, 1).
... Additional arguments to view_xifti_surface
Value
Result of the call to ciftiTools::view_cifti_surface.
pw_estimate Estimate residual autocorrelation for prewhitening
Description
Estimate residual autocorrelation for prewhitening
Usage
pw_estimate(resids, ar_order, aic = FALSE)
Arguments
resids Estimated residuals
ar_order, aic Order of the AR model used to prewhiten the data at each location. If !aic
(default), the order will be exactly ar_order. If aic, the order will be between
zero and ar_order, as determined by the AIC.
Value
Estimated AR coefficients and residual variance at every vertex
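A rough sketch of the intended input and output shapes (this is an internal helper, so
the example below is illustrative only and assumes resids is a T × V matrix of
residuals):

resids <- matrix(rnorm(200 * 50), nrow = 200)   # T x V matrix of model residuals
pw <- pw_estimate(resids, ar_order = 6)
str(pw)   # AR coefficients and residual variance per data location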
pw_smooth Smooth AR coefficients and white noise variance
Description
Smooth AR coefficients and white noise variance
Usage
pw_smooth(vertices, faces, mask = NULL, AR, var, FWHM = 5)
Arguments
vertices A V × 3 matrix, where each row contains the Euclidean coordinates at which a
given vertex in the mesh is located. V is the number of vertices in the mesh
faces An F × 3 matrix, where each row contains the vertex indices for a given trian-
gular face in the mesh. F is the number of faces in the mesh.
mask A logical vector indicating, for each vertex, whether to include it in smoothing.
NULL (default) will use a vector of all TRUE, meaning that no vertex is masked
out; all are used for smoothing.
AR A V × p matrix of estimated AR coefficients, where V is the number of vertices
and p is the AR model order
var A vector of length V containing the white noise variance estimates from the AR
model
FWHM FWHM parameter for smoothing. Remember that σ = FWHM / (2*sqrt(2*log(2))). Set to 0
or NULL to not do any smoothing. Default: 5.
Value
Smoothed AR coefficients and residual variance at every vertex
return_INLA_Param return_INLA
Description
return_INLA
Arguments
return_INLA Return the INLA model object? (It can be large.) Use "trimmed" (default) to
return only the more relevant results, which is enough for both id_activations
and BayesGLM2, "minimal" to return just enough for BayesGLM2 but not id_activations,
or "full" to return the full output of inla.
scale_BOLD_Param scale_BOLD
Description
scale_BOLD
Arguments
scale_BOLD Option for scaling the BOLD response.
"auto" (default) will use "mean" scaling except if demeaned data is detected (if
any mean is less than one), in which case "sd" scaling will be used instead.
"mean" scaling will scale the data to percent local signal change.
"sd" scaling will scale the data by local standard deviation.
"none" will only center the data, not scale it.
scale_design_Param scale_design
Description
scale_design
Arguments
scale_design Scale the design matrix by dividing each column by its maximum and then sub-
tracting the mean? Default: TRUE. If FALSE, the design matrix is centered but
not scaled.
seed_Param seed
Description
seed
Arguments
seed Random seed (optional). Default: NULL.
session_names_Param session_names
Description
session_names
Arguments
session_names (Optional, and only relevant for multi-session modeling) Names of each session.
Default: NULL. In BayesGLM this argument will overwrite the names of the list
entries in data, if both exist.
summary.act_BayesGLM Summarize an "act_BayesGLM" object
Description
Summary method for class "act_BayesGLM"
Usage
## S3 method for class 'act_BayesGLM'
summary(object, ...)
## S3 method for class 'summary.act_BayesGLM'
print(x, ...)
## S3 method for class 'act_BayesGLM'
print(x, ...)
Arguments
object Object of class "act_BayesGLM".
... further arguments passed to or from other methods.
x Object of class "summary.act_BayesGLM".
Value
A "summary.act_BayesGLM" object, a list summarizing the properties of object.
NULL, invisibly.
NULL, invisibly.
summary.act_BayesGLM_cifti
Summarize an "act_BayesGLM_cifti" object
Description
Summary method for class "act_BayesGLM_cifti"
Usage
## S3 method for class 'act_BayesGLM_cifti'
summary(object, ...)
## S3 method for class 'summary.act_BayesGLM_cifti'
print(x, ...)
## S3 method for class 'act_BayesGLM_cifti'
print(x, ...)
Arguments
object Object of class "act_BayesGLM_cifti".
... further arguments passed to or from other methods.
x Object of class "summary.act_BayesGLM_cifti".
Value
A "summary.act_BayesGLM_cifti" object, a list summarizing the properties of object.
NULL, invisibly.
NULL, invisibly.
summary.BayesGLM Summarize a "BayesGLM" object
Description
Summary method for class "BayesGLM"
Usage
## S3 method for class 'BayesGLM'
summary(object, ...)
## S3 method for class 'summary.BayesGLM'
print(x, ...)
## S3 method for class 'BayesGLM'
print(x, ...)
Arguments
object Object of class "BayesGLM".
... further arguments passed to or from other methods.
x Object of class "summary.BayesGLM".
Value
A "summary.BayesGLM" object, a list summarizing the properties of object.
NULL, invisibly.
NULL, invisibly.
summary.BayesGLM2 Summarize a "BayesGLM2" object
Description
Summary method for class "BayesGLM2"
Usage
## S3 method for class 'BayesGLM2'
summary(object, ...)
## S3 method for class 'summary.BayesGLM2'
print(x, ...)
## S3 method for class 'BayesGLM2'
print(x, ...)
Arguments
object Object of class "BayesGLM2".
... further arguments passed to or from other methods.
x Object of class "summary.BayesGLM2".
Value
A "summary.BayesGLM2" object, a list summarizing the properties of object.
NULL, invisibly.
NULL, invisibly.
summary.BayesGLM2_cifti
Summarize a "BayesGLM2_cifti" object
Description
Summary method for class "BayesGLM2_cifti"
Usage
## S3 method for class 'BayesGLM2_cifti'
summary(object, ...)
## S3 method for class 'summary.BayesGLM2_cifti'
print(x, ...)
## S3 method for class 'BayesGLM2_cifti'
print(x, ...)
Arguments
object Object of class "BayesGLM2_cifti".
... further arguments passed to or from other methods.
x Object of class "summary.BayesGLM2_cifti".
Value
A "summary.BayesGLM2_cifti" object, a list summarizing the properties of object.
NULL, invisibly.
NULL, invisibly.
summary.BayesGLM_cifti
Summarize a "BayesGLM_cifti" object
Description
Summary method for class "BayesGLM_cifti"
Usage
## S3 method for class 'BayesGLM_cifti'
summary(object, ...)
## S3 method for class 'summary.BayesGLM_cifti'
print(x, ...)
## S3 method for class 'BayesGLM_cifti'
print(x, ...)
Arguments
object Object of class "BayesGLM_cifti".
... further arguments passed to or from other methods.
x Object of class "summary.BayesGLM_cifti".
Value
A "summary.BayesGLM_cifti" object, a list summarizing the properties of object.
NULL, invisibly.
NULL, invisibly.
task_names_Param task_names
Description
task_names
Arguments
task_names (Optional) Names of tasks represented in design matrix.
trim_INLA_Param trim_INLA
Description
trim_INLA
Arguments
trim_INLA (logical) should the INLA_model_obj within the result be trimmed to only what
is necessary to use id_activations? Default: TRUE.
verbose_Param verbose
Description
verbose
Arguments
verbose Should updates be printed? Use 1 (default) for occasional updates, 2 for occa-
sional updates as well as running INLA in verbose mode (if applicable), or 0 for
no updates.
vertex_areas Surface area of each vertex
Description
Compute surface areas of each vertex in a triangular mesh.
Usage
vertex_areas(mesh)
Arguments
mesh An "inla.mesh" object (see make_mesh for surface data).
Value
Vector of areas
INLA Requirement
This function requires the INLA package, which is not a CRAN package. See https://www.
r-inla.org/download-install for easy installation instructions.
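A short sketch reusing the tetrahedron from the make_mesh example; note that, per the
requirement above, INLA must be installed for vertex_areas itself.

vertices <- rbind(c(0, 0, 0), c(1, 0, 0), c(0, 1, 0), c(0, 0, 1))
faces <- rbind(c(1, 2, 3), c(1, 2, 4), c(1, 3, 4), c(2, 3, 4))
mesh <- make_mesh(vertices, faces)
vertex_areas(mesh)   # one area value per vertex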
vertices_Param vertices
Description
vertices
Arguments
vertices A V × 3 matrix, where each row contains the Euclidean coordinates at which a
given vertex in the mesh is located. V is the number of vertices in the mesh.
Package ‘eda4treeR’
May 1, 2023
Type Package
Title Experimental Design and Analysis for Tree Improvement
Version 0.6.0
Maintainer <NAME> <<EMAIL>>
Description Provides data sets and R Codes for <NAME>, <NAME> and <NAME>-
son (2023). Experimental Design and Analysis for Tree Improvement, CSIRO Publishing.
Depends R (>= 4.1.0)
Imports car, dae, dplyr, emmeans, ggplot2, lmerTest, magrittr,
predictmeans, stats, supernova
License GPL-3
URL https://github.com/MYaseen208/eda4treeR
https://CRAN.R-project.org/package=eda4treeR
https://myaseen208.com/eda4treeR/ https://myaseen208.com/EDATR/
BugReports https://github.com/myaseen208/eda4treeR/issues
LazyData TRUE
RoxygenNote 7.2.3
Suggests testthat
Note 1. Asian Development Bank (ADB), Islamabad, Pakistan. 2. Benazir
Income Support Programme (BISP), Islamabad, Pakistan. 3.
Department of Mathematics and Statistics, University of
Agriculture Faisalabad, Pakistan.
NeedsCompilation no
Author <NAME> [aut, cre, cph]
(<https://orcid.org/0000-0002-5923-1714>),
<NAME> [aut, ctb],
<NAME> [aut, ctb],
<NAME> [aut, ctb]
Repository CRAN
Date/Publication 2023-05-01 04:40:02 UTC
R topics documented:
DataExam2.1 2
DataExam2.2 3
DataExam3.1 4
DataExam3.1.1 5
DataExam4.3 6
DataExam4.3.1 7
DataExam4.4 8
DataExam5.1 9
DataExam5.2 10
DataExam6.2 11
DataExam8.1 12
DataExam8.2 13
Exam2.1 14
Exam2.2 15
Exam3.1 16
Exam3.1.1 17
Exam4.3 19
Exam4.3.1 20
Exam4.4 21
Exam5.1 22
Exam5.2 25
Exam6.2 27
Exam8.1 30
Exam8.1.1 31
Exam8.1.2 33
Exam8.2 34
DataExam2.1 Data for Example 2.1 from Experimental Design and Analysis for Tree
Improvement
Description
Exam2.1 is used to compare two seed lots by using single factor ANOVA.
Usage
data(DataExam2.1)
Format
A data.frame with 16 rows and 2 variables.
Seedlot Two Seedlots: Seed Orchard (SO) and routine plantation (P)
dbh Diameter at breast height
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
Exam2.1
Examples
data(DataExam2.1)
DataExam2.2 Data for Example 2.2 from Experimental Design and Analysis for Tree
Improvement
Description
Exam2.2 is used to compare two seed lots by using ANOVA under RCB Design.
Usage
data(DataExam2.2)
Format
A data.frame with 16 rows and 2 variables.
Seedlot Two Seedlots: Seed Orchard (SO) and routine plantation (P)
dbh Diameter at breast height
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
Exam2.2
Examples
data(DataExam2.2)
DataExam3.1 Data for Example 3.1 from Experimental Design and Analysis for Tree
Improvement
Description
Exam3.1 is part of data from Australian Centre for Agricultural Research (ACIAR) in Queensland,
Australia (Experiment 309).
Usage
data(DataExam3.1)
Format
A data.frame with 80 rows and 6 variables.
Repl Replication number of different Seedlots
PlotNo Plot number of different Trees
SeedLot Seed Lot number
TreeNo Tree number of Seedlots
Ht Height in meter
Dgl Diameter at ground level
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
Exam3.1
Examples
data(DataExam3.1)
DataExam3.1.1 Data for Example 3.1.1 from Experimental Design and Analysis for
Tree Improvement
Description
Exam3.1.1 is part of data from Australian Centre for Agricultural Research (ACIAR) in Queensland,
Australia (Experiment 309).
Usage
data(DataExam3.1.1)
Format
A data.frame with 10 rows and 6 variables.
Repl Replication number of different Seedlots
PlotNo Plot number of different Trees
SeedLot Seed Lot number
TreeNo Tree number of Seedlots
Ht Height in meter
Dgl Diameter at ground level
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
Exam3.1.1
Examples
data(DataExam3.1.1)
DataExam4.3 Data for Example 4.3 from Experimental Design and Analysis for Tree
Improvement
Description
Exam4.3 presents the germination count data for 4 Pre-Treatments and 6 Seedlots.
Usage
data(DataExam4.3)
Format
A data.frame with 72 rows and 8 variables.
Row Row number of different Seedlots
Column Column number of different Trees
Replication Replication number of Treatment
Contcomp Control or Treated Plot
Pretreatment Treatment types
SeedLot Seed lot number
GerminationCount Number of germinated seeds out of 25
Percent Germination Percentage
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
Exam4.3
Examples
data(DataExam4.3)
DataExam4.3.1 Data for Example 4.3.1 from Experimental Design and Analysis for
Tree Improvement
Description
Exam4.3.1 presents the germination count data for 4 Pre-Treatments and 6 Seedlots.
Usage
data(DataExam4.3.1)
Format
A data.frame with 72 rows and 8 variables.
Row Row number of different Seedlots
Column Column number of different Trees
Replication Replication number of Treatment
Contcomp Control or Treated Plot
Pretreatment Treatment types
SeedLot Seed lot number
GerminationCount Number of germinated seeds out of 25
Percent Germination Percentage
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
Exam4.3.1
Examples
data(DataExam4.3.1)
DataExam4.4 Data for Example 4.4 from Experimental Design and Analysis for Tree
Improvement
Description
Exam4.4 presents the height means for 4 seedlots under factorial arrangement for two levels of
Fertilizer and two levels of Irrigation.
Usage
data(DataExam4.4)
Format
A data.frame with 32 rows and 5 variables.
Rep Replication number
Irrig Irrigation type
Ferti Fertilizer type
SeedDLot Seed Lot number
Height Height of the plants
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
Exam4.4
Examples
data(DataExam4.4)
DataExam5.1 Data for Example 5.1 from Experimental Design and Analysis for Tree
Improvement
Description
Exam5.1 presents the height of 27 seedlots from 4 sites.
Usage
data(DataExam5.1)
Format
A data.frame with 108 rows and 4 variables.
Site Sites for the experiment
SeedLot Seed lot number
Ht Height of the plants
SiteMean Mean Height of Each Site
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
Exam5.1
Examples
data(DataExam5.1)
DataExam5.2 Data for Example 5.2 from Experimental Design and Analysis for Tree
Improvement
Description
Exam5.2 presents the height of 37 seedlots from 6 sites.
Usage
data(DataExam5.2)
Format
A data.frame with 108 rows and 4 variables.
Site Sites for the experiment
SeedLot Seed lot number
Ht Height of the plants
SiteMean Mean Height of Each Site
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
Exam5.2
Examples
data(DataExam5.2)
DataExam6.2 Data for Example 6.2 from Experimental Design and Analysis for Tree
Improvement
Description
Exam6.2 presents the Dbh mean, Dbh variance and number of trees per plot from 3 provinces ("PNG", "Sabah",
"Queensland") with 4 replications of 48 families.
Usage
data(DataExam6.2)
Format
A data.frame with 192 rows and 7 variables.
Replication Replication number of different Families
Plot.number Plot number of different Trees
Family Family Number
Province Province of family
Dbh.mean Average Diameter at breast height of trees within plot
Dbh.variance Variance of Diameter at breast height of trees within plot
Dbh.count Number of trees within plot
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
Examples
data(DataExam6.2)
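# Illustrative addition (not from the book): a quick summary of the plot-level
# mean Dbh by province, using the variables documented above.
library(dplyr)
data(DataExam6.2)
DataExam6.2 %>%
  group_by(Province) %>%
  summarise(MeanDbh = mean(Dbh.mean), Plots = n())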
DataExam8.1 Data for Example 8.1 from Experimental Design and Analysis for Tree
Improvement
Description
Exam8.1 presents the Diameter at breast height (Dbh) of 60 SeedLots under layout of row column
design with 6 rows and 10 columns in 18 countries and 59 provinces of 18 selected countries.
Usage
data(DataExam8.1)
Format
A data.frame with 236 rows and 8 variables.
repl There are 4 replications for the design
row Experiment is conducted under 6 rows
col Experiment is conducted under 4 columns
inoc Seedlings were inoculated for 2 different time periods: half for one week and half for seven
weeks
prov provenance
Country Data for different seedlots was collected from 18 countries
Dbh Diameter at breast height
Country.1 Recoded Country lables
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
Exam8.1
Examples
data(DataExam8.1)
DataExam8.2 Data for Example 8.2 from Experimental Design and Analysis for Tree
Improvement
Description
Exam8.2 presents the Diameter at breast height (Dbh) of 60 SeedLots under layout of row column
design with 6 rows and 10 columns in 18 countries and 59 provinces of 18 selected countries.
Usage
data(DataExam8.2)
Format
A data.frame with 236 rows and 8 variables.
Repl There are 4 replications for the design
Row Experiment is conducted under 6 rows
Column Experiment is conducted under 4 columns
Clonenum Clonenum
Contcompf Contcompf
Standard Standard
Clone Clone
dbhmean dbhmean
dbhvariance dbhvariance
htmean htmean
htvariance htvariance
count count
Contcompv Contcompv
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
Exam8.2
Examples
data(DataExam8.2)
Exam2.1 Example 2.1 from Experimental Design and Analysis for Tree Im-
provement
Description
Exam2.1 is used to compare two seed lots by using single factor ANOVA.
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam2.1
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam2.1)
# Pg. 22
fmtab2.3 <- lm(formula = dbh ~ SeedLot, data = DataExam2.1)
# Pg. 23
anova(fmtab2.3)
supernova(fmtab2.3, type = 1)
# Pg. 23
emmeans(object = fmtab2.3, specs = ~ SeedLot)
emmip(object = fmtab2.3, formula = ~ SeedLot) +
theme_classic()
Exam2.2 Example 2.2 from Experimental Design and Analysis for Tree Im-
provement
Description
Exam2.2 is used to compare two seed lots by using ANOVA under RCB Design.
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam2.2
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam2.2)
# Pg. 24
fmtab2.5 <- lm(formula = dbh ~ Blk + SeedLot, data = DataExam2.2)
# Pg. 26
anova(fmtab2.5)
supernova(fmtab2.5, type = 1)
# Pg. 26
emmeans(object = fmtab2.5, specs = ~ SeedLot)
emmip(object = fmtab2.5, formula = ~ SeedLot) +
theme_classic()
Exam3.1 Data for Example 3.1 from Experimental Design and Analysis for Tree
Improvement
Description
Exam3.1 is part of data from Australian Centre for Agricultural Research (ACIAR) in Queensland,
Australia (Experiment 309).
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam3.1
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam3.1)
# Pg. 28
fmtab3.3 <- lm(formula = Ht ~ Repl*SeedLot, data = DataExam3.1)
fmtab3.3ANOVA1 <-
anova(fmtab3.3) %>%
mutate(
"F value" = c(anova(fmtab3.3)[1:2, 3]/anova(fmtab3.3)[3, 3]
, anova(fmtab3.3)[4, 3]
, NA)
)
# Pg. 33 (Table 3.3)
fmtab3.3ANOVA1 %>%
mutate(
"Pr(>F)" = pf(q = fmtab3.3ANOVA1[ ,4]
, df1 = fmtab3.3ANOVA1[ ,1]
, df2 = fmtab3.3ANOVA1[4,1], lower.tail = FALSE)
)
# Pg. 33 (Table 3.3)
emmeans(object = fmtab3.3, specs = ~ SeedLot)
# Pg. 34 (Figure 3.2)
ggplot(mapping = aes(x = fitted.values(fmtab3.3), y = residuals(fmtab3.3)))+
geom_point(size = 2) +
labs(
x = "Fitted Values"
, y = "Residual"
) +
theme_classic()
# Pg. 33 (Table 3.4)
DataExam3.1m <- DataExam3.1
DataExam3.1m[c(28, 51, 76), 5] <- NA
DataExam3.1m[c(28, 51, 76), 6] <- NA
fmtab3.4 <- lm(formula = Ht ~ Repl*SeedLot, data = DataExam3.1m)
fmtab3.4ANOVA1 <-
anova(fmtab3.4) %>%
mutate(
"F value" = c(anova(fmtab3.4)[1:2, 3]/anova(fmtab3.4)[3, 3]
, anova(fmtab3.4)[4, 3], NA))
# Pg. 33 (Table 3.4)
fmtab3.4ANOVA1 %>%
mutate(
"Pr(>F)" = pf(q = fmtab3.4ANOVA1[ ,4]
, df1 = fmtab3.4ANOVA1[ ,1]
, df2 = fmtab3.4ANOVA1[4,1], lower.tail = FALSE)
)
# Pg. 33 (Table 3.4)
emmeans(object = fmtab3.4, specs = ~ SeedLot)
Exam3.1.1 Example 3.1.1 from Experimental Design and Analysis for Tree Im-
provement
Description
Exam3.1.1 is part of data from Australian Centre for Agricultural Research (ACIAR) in Queensland,
Australia (Experiment 309).
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam3.1.1
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam3.1.1)
# Pg. 36
fm3.8 <- lm(formula = Mean ~ Repl + SeedLot, data = DataExam3.1.1)
# Pg. 40
anova(fm3.8)
# Pg. 40
emmeans(object = fm3.8, specs = ~ SeedLot)
emmip(object = fm3.8, formula = ~ SeedLot) +
theme_classic()
Exam4.3 Example 4.3 from Experimental Design and Analysis for Tree Im-
provement
Description
Exam4.3 presents the germination count data for 4 Pre-Treatments and 6 Seedlots.
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam4.3
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam4.3)
# Pg. 50
fm4.2 <-
aov(
formula = Percent ~ Repl + Contcomp + SeedLot +
Treat/Contcomp + Contcomp /SeedLot +
Treat/ Contcomp/SeedLot
, data = DataExam4.3
)
# Pg. 54
anova(fm4.2)
# Pg. 54
model.tables(x = fm4.2, type = "means")
emmeans(object = fm4.2, specs = ~ Contcomp)
emmeans(object = fm4.2, specs = ~ SeedLot)
emmeans(object = fm4.2, specs = ~ Contcomp + Treat)
emmeans(object = fm4.2, specs = ~ Contcomp + SeedLot)
emmeans(object = fm4.2, specs = ~ Contcomp + Treat + SeedLot)
DataExam4.3 %>%
dplyr::group_by(Treat, Contcomp, SeedLot) %>%
dplyr::summarize(Mean=mean(Percent))
RESFIT <- data.frame(residualvalue=residuals(fm4.2),fittedvalue=fitted.values(fm4.2))
ggplot(mapping = aes(x = fitted.values(fm4.2), y = residuals(fm4.2)))+
geom_point(size = 2)+
labs(
x = "Fitted Values"
, y = "Residuals"
) +
theme_classic()
Exam4.3.1 Example 4.3.1 from Experimental Design and Analysis for Tree Im-
provement
Description
Exam4.3.1 presents the germination count data for 4 Pre-Treatments and 6 Seedlots.
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam4.3.1
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam4.3)
# Pg. 57
fm4.4 <-
aov(
formula = Percent ~ Repl + Treat*SeedLot
, data = DataExam4.3 %>%
filter(Treat != "control")
)
# Pg. 57
anova(fm4.4)
model.tables(x = fm4.4, type = "means", se = TRUE)
emmeans(object = fm4.4, specs = ~ Treat)
emmeans(object = fm4.4, specs = ~ SeedLot)
emmeans(object = fm4.4, specs = ~ Treat * SeedLot)
Exam4.4 Example 4.4 from Experimental Design and Analysis for Tree Im-
provement
Description
Exam4.4 presents the height means for 4 seedlots under factorial arrangement for two levels of
Fertilizer and two levels of Irrigation.
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam4.4
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam4.4)
# Pg. 58
fm4.6 <-
aov(
formula = Height ~ Rep + Irrig*Ferti*SeedDLot +
Error(Rep/Irrig:Ferti)
, data = DataExam4.4
)
# Pg. 61
summary(fm4.6)
# Pg. 61
model.tables(x = fm4.6, type = "means")
# Pg. 61
emmeans(object = fm4.6, specs = ~ Irrig)
emmip(object = fm4.6, formula = ~ Irrig) +
theme_classic()
Exam5.1 Example 5.1 from Experimental Design and Analysis for Tree Improvement
Description
Exam5.1 presents the height of 27 seedlots from 4 sites.
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam5.1
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam5.1)
# Pg.68
fm5.4 <- lm(formula = Ht ~ Site*SeedLot, data = DataExam5.1)
# Pg. 73
anova(fm5.4)
# Pg. 73
emmeans(object = fm5.4, specs = ~ Site)
emmeans(object = fm5.4, specs = ~ SeedLot)
ANOVAfm5.4 <- anova(fm5.4)
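# Replace the residual row (Df, SS, MS) with the plot-level error term quoted in the book (values
# assumed from Pg. 73) and recompute the F statistic and p-value for Site:SeedLot against this error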
ANOVAfm5.4[4, 1:3] <- c(208, 208*1040, 1040)
ANOVAfm5.4[3, 4] <- ANOVAfm5.4[3, 3]/ANOVAfm5.4[4, 3]
ANOVAfm5.4[3, 5] <- pf(
q = ANOVAfm5.4[3, 4]
, df1 = ANOVAfm5.4[3, 1]
, df2 = ANOVAfm5.4[4, 1]
, lower.tail = FALSE
)
# Pg. 73
ANOVAfm5.4
# Pg. 80
DataExam5.1 %>%
filter(SeedLot %in% c("13653", "13871")) %>%
ggplot(
data = .
, mapping = aes(x = SiteMean, y = Ht, color = SeedLot, shape = SeedLot)
) +
geom_point() +
geom_smooth(method = lm, se = FALSE, fullrange = TRUE)+
theme_classic() +
labs(
x = "SiteMean"
, y = "SeedLot Mean"
)
Tab5.10 <-
DataExam5.1 %>%
summarise(Mean = mean(Ht), .by = SeedLot) %>%
left_join(
DataExam5.1 %>%
nest_by(SeedLot) %>%
mutate(fm1 = list(lm(Ht ~ SiteMean, data = data))) %>%
summarise(Slope = coef(fm1)[2])
, by = "SeedLot"
)
# Pg. 81
Tab5.10
ggplot(data = Tab5.10, mapping = aes(x = Mean, y = Slope))+
geom_point(size = 2) +
theme_bw() +
labs(
x = "SeedLot Mean"
, y = "Regression Coefficient"
)
DevSS1 <-
DataExam5.1 %>%
nest_by(SeedLot) %>%
mutate(fm1 = list(lm(Ht ~ SiteMean, data = data))) %>%
summarise(SSE = anova(fm1)[2, 2]) %>%
ungroup() %>%
summarise(Dev = sum(SSE)) %>%
as.numeric()
ANOVAfm5.4[2, 2]
length(levels(DataExam5.1$SeedLot))
ANOVAfm5.4.1 <-
rbind(
ANOVAfm5.4[1:3, ]
, c(
ANOVAfm5.4[2, 1]
, ANOVAfm5.4[3, 2] - DevSS1
, (ANOVAfm5.4[3, 2] - DevSS1)/ANOVAfm5.4[2, 1]
, NA
, NA
)
, c(
ANOVAfm5.4[3, 1]-ANOVAfm5.4[2, 1]
, DevSS1
, DevSS1/(ANOVAfm5.4[3, 1]-ANOVAfm5.4[2, 1])
, DevSS1/(ANOVAfm5.4[3, 1]-ANOVAfm5.4[2, 1])/ANOVAfm5.4[4, 3]
, pf(
q = DevSS1/(ANOVAfm5.4[3, 1]-ANOVAfm5.4[2, 1])/ANOVAfm5.4[4, 3]
, df1 = ANOVAfm5.4[3, 1]-ANOVAfm5.4[2, 1]
, df2 = ANOVAfm5.4[4, 1]
, lower.tail = FALSE
)
)
, ANOVAfm5.4[4, ]
)
rownames(ANOVAfm5.4.1) <-
c("Site", "SeedLot", "Site:SeedLot", " regressions", " deviations", "Residuals")
# Pg. 82
ANOVAfm5.4.1
Exam5.2 Example 5.2 from Experimental Design and Analysis for Tree Improvement
Description
Exam5.2 presents the height of 37 seedlots from 6 sites.
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam5.2
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam5.2)
fm5.7 <- lm(formula = Ht ~ Site*SeedLot, data = DataExam5.2)
# Pg. 77
anova(fm5.7)
fm5.9 <- lm(formula = Ht ~ Site*SeedLot, data = DataExam5.2)
# Pg. 77
anova(fm5.9)
ANOVAfm5.9 <- anova(fm5.9)
ANOVAfm5.9[4, 1:3] <- c(384, 384*964, 964)
ANOVAfm5.9[3, 4] <- ANOVAfm5.9[3, 3]/ANOVAfm5.9[4, 3]
ANOVAfm5.9[3, 5] <- pf(
q = ANOVAfm5.9[3, 4]
, df1 = ANOVAfm5.9[3, 1]
, df2 = ANOVAfm5.9[4, 1]
, lower.tail = FALSE
)
# Pg. 77
ANOVAfm5.9
Tab5.14 <-
DataExam5.2 %>%
summarise(Mean = mean(Ht, na.rm = TRUE), .by = SeedLot) %>%
left_join(
DataExam5.2 %>%
nest_by(SeedLot) %>%
mutate(fm2 = list(lm(Ht ~ SiteMean, data = data))) %>%
summarise(Slope = coef(fm2)[2])
, by = "SeedLot"
)
# Pg. 81
Tab5.14
DevSS2 <-
DataExam5.2 %>%
nest_by(SeedLot) %>%
mutate(fm2 = list(lm(Ht ~ SiteMean, data = data))) %>%
summarise(SSE = anova(fm2)[2, 2]) %>%
ungroup() %>%
summarise(Dev = sum(SSE)) %>%
as.numeric()
ANOVAfm5.9.1 <-
rbind(
ANOVAfm5.9[1:3, ]
, c(
ANOVAfm5.9[2, 1]
, ANOVAfm5.9[3, 2] - DevSS2
, (ANOVAfm5.9[3, 2] - DevSS2)/ANOVAfm5.9[2, 1]
, NA
, NA
)
, c(
ANOVAfm5.9[3, 1]-ANOVAfm5.9[2, 1]
, DevSS2
, DevSS2/(ANOVAfm5.9[3, 1]-ANOVAfm5.9[2, 1])
, DevSS2/(ANOVAfm5.9[3, 1]-ANOVAfm5.9[2, 1])/ANOVAfm5.9[4, 3]
, pf(
q = DevSS2/(ANOVAfm5.9[3, 1]-ANOVAfm5.9[2, 1])/ANOVAfm5.9[4, 3]
, df1 = ANOVAfm5.9[3, 1]-ANOVAfm5.9[2, 1]
, df2 = ANOVAfm5.9[4, 1]
, lower.tail = FALSE
)
)
, ANOVAfm5.9[4, ]
)
rownames(ANOVAfm5.9.1) <-
c("Site", "SeedLot", "Site:SeedLot", " regressions", " deviations", "Residuals")
# Pg. 82
ANOVAfm5.9.1
Code <- c("a","a","a","a","b","b","b","b","c","d","d","d","d","e","f","g",
"h","h","i","i","j","k","l","m","n","n","n","o","p","p","q","r",
"s","t","t","u","v")
Tab5.14$Code <- Code
ggplot(data = Tab5.14, mapping = aes(x = Mean, y = Slope))+
geom_point(size = 2) +
geom_text(aes(label = Code), hjust = -0.5, vjust = -0.5)+
theme_bw() +
labs(
x = "SeedLot Mean"
, y = "Regression Coefficient"
)
Exam6.2 Example 6.2 from Experimental Design and Analysis for Tree Improvement
Description
Exam6.2 presents the Dbh mean, Dbh variance and number of trees per plot from 3 provinces ("PNG", "Sabah",
"Queensland") with 4 replications of 48 families.
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam6.2
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam6.2)
DataExam6.2.1 <-
DataExam6.2 %>%
filter(Province == "PNG")
# Pg. 94
fm6.3 <-
lm(
formula = Dbh.mean ~ Replication + Family
, data = DataExam6.2.1
)
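# Variance components from the plot-mean analysis:
# w: harmonic mean of the number of trees per plot
# S2: residual mean square of the plot-mean model
# Sigma2t: average within-plot variance
# sigma2m: plot-level variance component, obtained as S2 - Sigma2t/w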
b <- anova(fm6.3)
HM <- function(x){length(x)/sum(1/x)}
w <- HM(DataExam6.2.1$Dbh.count)
S2 <- b[["Mean Sq"]][length(b[["Mean Sq"]])]
Sigma2t <- mean(DataExam6.2.1$Dbh.variance)
sigma2m <- S2-(Sigma2t/w)
fm6.3.1 <-
lmer(
formula = Dbh.mean ~ 1 + Replication + (1|Family)
, data = DataExam6.2.1
, REML = TRUE
)
# Pg. 104
# summary(fm6.3.1)
varcomp(fm6.3.1)
sigma2f <- 0.2584
h2 <- (sigma2f/(0.3))/(Sigma2t + sigma2m + sigma2f)
cbind(hmean = w, Sigma2t, sigma2m, sigma2f, h2)
fm6.4 <-
lm(
formula = Dbh.mean ~ Replication+Family
, data = DataExam6.2
)
b <- anova(fm6.4)
HM <- function(x){length(x)/sum(1/x)}
w <- HM(DataExam6.2$Dbh.count)
S2 <- b[["Mean Sq"]][length(b[["Mean Sq"]])]
Sigma2t <- mean(DataExam6.2$Dbh.variance)
sigma2m <- S2-(Sigma2t/w)
fm6.4.1 <-
lmer(
formula = Dbh.mean ~ 1 + Replication + Province + (1|Family)
, data = DataExam6.2
, REML = TRUE
)
# Pg. 107
varcomp(fm6.4.1)
sigma2f <- 0.3514
h2 <- (sigma2f/(0.3))/(Sigma2t+sigma2m+sigma2f)
cbind(hmean = w, Sigma2t, sigma2m, sigma2f, h2)
fm6.7.1 <-
lmer(
formula = Dbh.mean ~ 1+Replication+(1|Family)
, data = DataExam6.2.1
, REML = TRUE
)
# Pg. 116
varcomp(fm6.7.1)
sigma2f[1] <- 0.2584
fm6.7.2<-
lmer(
formula = Ht.mean ~ 1 + Replication + (1|Family)
, data = DataExam6.2.1
, REML = TRUE
)
# Pg. 116
varcomp(fm6.7.2)
sigma2f[2] <- 0.2711
fm6.7.3 <-
lmer(
formula = Sum.means ~ 1 + Replication + (1|Family)
, data = DataExam6.2.1
, REML = TRUE
, control = lmerControl()
)
# Pg. 116
varcomp(fm6.7.3)
sigma2f[3] <- 0.873
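# sigma2f[1], sigma2f[2], sigma2f[3] hold the family variance components of Dbh.mean, Ht.mean and Sum.means.
# The genetic covariance follows from Var(x + y) = Var(x) + Var(y) + 2*Cov(x, y), and the genetic
# correlation is Cov(x, y)/sqrt(Var(x)*Var(y))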
sigma2xy <- 0.5*(sigma2f[3]-sigma2f[1]-sigma2f[2])
GenCorr <- sigma2xy/sqrt(sigma2f[1]*sigma2f[2])
cbind(S2x = sigma2f[1], S2y = sigma2f[2], S2.x.plus.y = sigma2f[3], GenCorr)
Exam8.1 Example 8.1 from Experimental Design and Analysis for Tree Improvement
Description
Exam8.1 presents the Diameter at breast height (Dbh) of 60 SeedLots under the layout of a row-column design
with 6 rows and 10 columns, from 59 provinces of 18 selected countries.
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam8.1
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam8.1)
# Pg. 141
fm8.4 <-
aov(
formula = dbh ~ inoc + Error(repl/inoc) + inoc*country*prov
, data = DataExam8.1
)
# Pg. 150
summary(fm8.4)
# Pg. 150
model.tables(x = fm8.4, type = "means")
RESFit <-
data.frame(
fittedvalue = fitted.aovlist(fm8.4)
, residualvalue = proj(fm8.4)$Within[,"Residuals"]
)
ggplot(RESFit,aes(x=fittedvalue,y=residualvalue))+
geom_point(size=2)+
labs(x="Residuals vs Fitted Values", y="")+
theme_bw()
# Pg. 153
fm8.6 <-
aov(
formula = terms(dbh ~ inoc + repl + col + repl:row + repl:col +
prov + inoc:prov, keep.order = TRUE)
, data = DataExam8.1
)
summary(fm8.6)
Exam8.1.1 Example 8.1.1 from Experimental Design and Analysis for Tree Improvement
Description
Exam8.1.1 presents the Mixed Effects Analysis of the Diameter at breast height (Dbh) of 60 SeedLots under
the layout of a row-column design with 6 rows and 10 columns, from 59 provinces of 18 selected countries,
as given in Example 8.1.
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam8.1
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam8.1)
# Pg. 155
fm8.8 <-
lmerTest::lmer(
formula = dbh ~ 1 + repl + col + prov + (1|repl:row) + (1|repl:col)
, data = DataExam8.1
, REML = TRUE
)
# Pg. 157
## Not run:
varcomp(fm8.8)
## End(Not run)
anova(fm8.8)
anova(fm8.8, ddf = "Kenward-Roger")
predictmeans(model = fm8.8, modelterm = "repl")
predictmeans(model = fm8.8, modelterm = "col")
predictmeans(model = fm8.8, modelterm = "prov")
# Pg. 161
RCB1 <- aov(dbh ~ prov + repl, data = DataExam8.1)
RCB <- emmeans(RCB1, specs = "prov") %>% as_tibble()
Mixed <- emmeans(fm8.8, specs = "prov") %>% as_tibble()
table8.9 <- left_join(RCB, Mixed, by = "prov", suffix = c(".RCBD", ".Mixed"))
print(table8.9)
Exam8.1.2 Example 8.1.2 from Experimental Design & Analysis for Tree Improvement
Description
Exam8.1.2 presents the Analysis of the Nested Seedlot Structure of the Diameter at breast height (Dbh) of
60 SeedLots under the layout of a row-column design with 6 rows and 10 columns, from 59 provinces of 18
selected countries, as given in Example 8.1.
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam8.1
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam8.1)
# Pg. 167
fm8.11 <- aov(formula = dbh ~ country + country:prov, data = DataExam8.1)
b <- anova(fm8.11)
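# Overwrite the residual row of the ANOVA table with an externally supplied error term (Df and mean
# square assumed to be taken from the book) and recompute the F tests and the country:prov p-value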
Res <- length(b[["Sum Sq"]])
df <- 119
MSS <- 0.1951
b[["Df"]][Res] <- df
b[["Sum Sq"]][Res] <- MSS*df
b[["Mean Sq"]][Res] <- b[["Sum Sq"]][Res]/b[["Df"]][Res]
b[["F value"]][1:Res-1] <- b[["Mean Sq"]][1:Res-1]/b[["Mean Sq"]][Res]
b[["Pr(>F)"]][Res-1] <- df(b[["F value"]][Res-1],b[["Df"]][Res-1],b[["Df"]][Res])
b
emmeans(fm8.11, specs = "country")
Exam8.2 Example 8.2 from Experimental Design and Analysis for Tree Improvement
Description
Exam8.2 presents the Diameter at breast height (Dbh) of 60 SeedLots under the layout of a row-column design
with 6 rows and 10 columns, from 59 provinces of 18 selected countries.
Author(s)
1. <NAME> (<<EMAIL>>)
2. <NAME> (<<EMAIL>>)
References
1. <NAME>, <NAME> and <NAME> (2023). Experimental Design and Analysis
for Tree Improvement. CSIRO Publishing (https://www.publish.csiro.au/book/3145/).
See Also
DataExam8.2
Examples
library(car)
library(dae)
library(dplyr)
library(emmeans)
library(ggplot2)
library(lmerTest)
library(magrittr)
library(predictmeans)
library(supernova)
data(DataExam8.2)
# Pg.
fm8.2 <-
lmer(
formula = dbhmean ~ Repl + Column + Contcompf + Contcompf:Standard +
(1|Repl:Row) + (1|Repl:Column) + (1|Contcompv:Clone)
, data = DataExam8.2
)
## Not run:
varcomp(fm8.2)
## End(Not run)
anova(fm8.2)
Anova(fm8.2, type = "II", test.statistic = "Chisq")
predictmeans(model = fm8.2, modelterm = "Repl")
predictmeans(model = fm8.2, modelterm = "Column")
library(emmeans)
emmeans(object = fm8.2, specs = ~ Contcompf | Standard)
EC3 Documentation 1.0
---
Welcome to EC3’s documentation![¶](#welcome-to-ec3-s-documentation)
===
Contents:
Introduction[¶](#introduction)
---
Elastic Cloud Computing Cluster (EC3) is a tool to create elastic virtual clusters on top of Infrastructure as a Service (IaaS) providers, either public (such as [Amazon Web Services](https://aws.amazon.com/),
[Google Cloud](http://cloud.google.com/) or [Microsoft Azure](http://azure.microsoft.com/))
or on-premises (such as [OpenNebula](http://www.opennebula.org/) and [OpenStack](http://www.openstack.org/)). We offer recipes to deploy [TORQUE](http://www.adaptivecomputing.com/products/open-source/torque)
(optionally with [MAUI](http://www.adaptivecomputing.com/products/open-source/maui/)), [SLURM](http://slurm.schedmd.com/), [SGE](http://gridscheduler.sourceforge.net/), [HTCondor](https://research.cs.wisc.edu/htcondor/), [Mesos](http://mesos.apache.org/), [Nomad](https://www.nomadproject.io/) and [Kubernetes](https://kubernetes.io/) clusters that can be self-managed with [CLUES](http://www.grycap.upv.es/clues/):
it starts with a single-node cluster and working nodes will be dynamically deployed and provisioned to fit increasing load (number of jobs at the LRMS). Working nodes will be undeployed when they are idle.
This introduces a cost-efficient approach for Cluster-based computing.
### Installation[¶](#installation)
#### Requisites[¶](#requisites)
The program ec3 requires Python 2.6+, [PLY](http://www.dabeaz.com/ply/), [PyYAML](http://pyyaml.org/wiki/PyYAML), [Requests](http://docs.python-requests.org/), [jsonschema](https://github.com/Julian/jsonschema) and an [IM](https://github.com/grycap/im) server,
which is used to launch the virtual machines.
[PyYAML](http://pyyaml.org/wiki/PyYAML) is usually available in distribution repositories (`python-yaml` in Debian;
`PyYAML` in Red Hat; and `PyYAML` in pip).
[PLY](http://www.dabeaz.com/ply/) is usually available in distribution repositories (`python-ply` and `ply` in pip).
[Requests](http://docs.python-requests.org/) is usually available in distribution repositories (`python-requests` and `requests` in pip).
[jsonschema](https://github.com/Julian/jsonschema) is usually available in distribution repositories (`python-jsonschema` and `jsonschema` in pip).
By default ec3 uses our public [IM](https://github.com/grycap/im) server in appsgrycap.i3m.upv.es. *Optionally* you can deploy a local [IM](https://github.com/grycap/im) server following the instructions of the IM manual.
Also the `sshpass` command is required to provide the user with ssh access to the cluster.
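On Debian and Ubuntu based distributions, for example, it can be installed from the distribution repositories:
```
sudo apt install sshpass
```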
#### Installing[¶](#installing)
First you need to install the pip tool. To install it in Debian and Ubuntu based distributions, do:
```
sudo apt update
sudo apt install python-pip
```
In Red Hat based distributions (RHEL, CentOS, Amazon Linux, Oracle Linux, Fedora, etc.), do:
```
sudo yum install epel-release
sudo yum install which python-pip
```
Then you only have to call the install command of the pip tool with the ec3-cli package:
```
sudo pip install ec3-cli
```
You can also download the latest ec3 version from [this](https://github.com/grycap/ec3) git repository:
```
git clone https://github.com/grycap/ec3
```
Then you can install it calling the pip tool with the current ec3 directory:
```
sudo pip install ./ec3
```
### Basic example with Amazon EC2[¶](#basic-example-with-amazon-ec2)
First create a file `auth.txt` with a single line like this:
```
id = provider ; type = EC2 ; username = <<Access Key ID>> ; password = <<Secret Access Key>>
```
Replace `<<Access Key ID>>` and `<<Secret Access Key>>` with the corresponding values for the AWS account where the cluster will be deployed. It is safer to use the credentials of an IAM user created within your AWS account.
This file is the authorization file (see [Authorization file](http://ec3.readthedocs.org/en/devel/ec3.html#authorization-file)), and can have more than one set of credentials.
Now we are going to deploy a cluster in Amazon EC2 with a limit number of nodes = 10. The parameter to indicate the maximum size of the cluster is called `ec3_max_instances` and it has to be indicated in the RADL file that describes the infrastructure to deploy. In our case, we are going to use the [ubuntu-ec2](https://github.com/grycap/ec3/blob/devel/templates/ubuntu-ec2.radl) recipe, available in our github repo. The next command deploys a [TORQUE](http://www.adaptivecomputing.com/products/open-source/torque) cluster based on an [Ubuntu](http://www.ubuntu.com/) image:
```
$ ec3 launch mycluster torque ubuntu-ec2 -a auth.txt -y
WARNING: you are not using a secure connection and this can compromise the secrecy of the passwords and private keys available in the authorization file.
Creating infrastructure
Infrastructure successfully created with ID: 60
Front-end state: running, IP: 132.43.105.28
```
If you deployed a local [IM](https://github.com/grycap/im) server, use the next command instead:
```
$ ec3 launch mycluster torque ubuntu-ec2 -a auth.txt -u http://localhost:8899
```
This can take several minutes. After that, open a ssh session to the front-end:
```
$ ec3 ssh mycluster
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-24-generic x86_64)
* Documentation: https://help.ubuntu.com/
ubuntu@torqueserver:~$
```
Also you can show basic information about the deployed clusters by executing:
```
$ ec3 list
name state IP nodes
---
mycluster configured 132.43.105.28 0
```
### EC3 in Docker Hub[¶](#ec3-in-docker-hub)
EC3 has an official Docker container image available in [Docker Hub](https://hub.docker.com/r/grycap/ec3/) that can be used instead of installing the CLI. You can download it by typing:
```
$ sudo docker pull grycap/ec3
```
You can exploit all the potential of EC3 as if you had downloaded the CLI and run it on your computer:
```
$ sudo docker run grycap/ec3 list
$ sudo docker run grycap/ec3 templates
```
To launch a cluster, you can use the recipes that you have locally by mounting the folder as a volume. It is also recommended to maintain the data of active clusters locally, by mounting a volume as follows:
```
$ sudo docker run -v /home/user/:/tmp/ -v /home/user/ec3/templates/:/etc/ec3/templates -v /tmp/.ec3/clusters:/root/.ec3/clusters grycap/ec3 launch mycluster torque ubuntu16 -a /tmp/auth.dat
```
Notice that you need to change the local paths to the paths where you store the auth file, the templates folder and the .ec3/clusters folder. So, once the front-end is deployed and configured you can connect to it by using:
```
$ sudo docker run -ti -v /tmp/.ec3/clusters:/root/.ec3/clusters grycap/ec3 ssh mycluster
```
Later on, when you need to destroy the cluster, you can type:
```
$ sudo docker run -ti -v /tmp/.ec3/clusters:/root/.ec3/clusters grycap/ec3 destroy mycluster
```
### Additional information[¶](#additional-information)
* [EC3 Command-line Interface](http://ec3.readthedocs.org/en/devel/ec3.html).
* [Templates](http://ec3.readthedocs.org/en/devel/templates.html).
* Information about available templates: `ec3 templates [--search <topic>] [--full-description]`.
Architecture[¶](#architecture)
---
### Overview[¶](#overview)
EC3 proposes the combination of Green computing, Cloud computing and HPC techniques to create a tool that deploys elastic virtual clusters on top of IaaS Clouds. EC3 creates elastic cluster-like infrastructures that automatically scale out to a larger number of nodes on demand up to a maximum size specified by the user. Whenever idle resources are detected, the cluster dynamically and automatically scales in, according to some predefined policies, in order to cut down the costs in the case of using a public Cloud provider.
This creates the illusion of a real cluster without requiring an investment beyond the actual usage. Therefore, this approach aims at delivering cost-effective elastic Cluster as a Service on top of an IaaS Cloud.
### General Architecture[¶](#general-architecture)
[Fig. 1](#figure-arch) summarizes the main architecture of EC3. The deployment of the virtual elastic cluster consists of two phases. The first one involves starting a VM in the Cloud to act as the cluster front-end while the second one involves the automatic management of the cluster size,
depending on the workload and the specified policies. For the first step, a launcher (EC3 Launcher)
has been developed that deploys the front-end on the Cloud using the infrastructure deployment services described in Section 3.1. The sysadmin will run this tool, providing it with the following information:
Fig 1. EC3 Architecture.
* Maximum cluster size. This serves to establish a cost limit in case of a workload peak. The maximum cluster size can be modified at any time once the virtual cluster is operating. Thus,
the sysadmins can adapt the maximum cluster size to the dynamic requirements of their users.
In this case the LRMS must be reconfigured to add the new set of virtual nodes and in some cases this may imply an LRMS service restart.
* [RADL](#radl) document specifying the desired features of the cluster front-end, regarding both hardware and software (OS, LRMS, additional libraries, etc.). These requirements are taken by the launcher and extended to include additional ones (such as installing CLUES and its requirements together with the libraries employed to interact with the IaaS Cloud provider, etc.) in order to manage elasticity.
The launcher starts an IM that becomes responsible of deploying the cluster front-end. This is done by means of the following steps:
1. Selecting the VMI for the front-end. The IM can take a particular user-specified VMI, or it can contact the [VMRC](http://www.grycap.upv.es/vmrc) to choose the most appropriate VMI available,
considering the requirements specified in the RADL.
2. Choosing the Cloud deployment according to the specification of the user (if there are different providers).
3. Submitting an instance of the corresponding VMI and, once it is available, installing and configuring all the required software that is not already preinstalled in the VM.
One of the main LRMS configuration steps is to set up the names of the cluster nodes. This is done using a sysadmin-specified name pattern (e.g. vnode-*) so that the LRMS considers a set of nodes such as vnode-1,
vnode-2, … , vnode-n, where n is the maximum cluster size. This procedure results in a fully operational elastic cluster. [Fig. 2](#figure-deployment) represents the sequence diagram and the interaction of the main components and actors during the deployment of the frontend of the cluster using EC3.
Fig 2. Sequence diagram for the deployment of the frontend.
Once the front-end and the elasticity manager (CLUES) have been deployed, the virtual cluster becomes totally autonomous and every user will be able to submit jobs to the LRMS, either from the cluster front-end or from an external node that provides job submission capabilities. The user will have the perception of a cluster with the number of nodes specified as maximum size. CLUES will monitor the working nodes and intercept the job submissions before they arrive to the LRMS, enabling the system to dynamically manage the cluster size transparently to the LRMS and the user, scaling in and out on demand.
Just like in the deployment of the front-end, CLUES internally uses an IM to submit the VMs that will be used as working nodes for the cluster. For that, it uses a RADL document defined by the sysadmin, where the features of the working nodes are specified. Once these nodes are available, they are automatically integrated in the cluster as new available nodes for the LRMS. Thus, the process to deploy the working nodes is similar to the one employed to deploy the front-end.
[Fig. 3](#figure-jobs) represents the sequence diagram and the interaction when a new job arrives to the LRMS and no nodes are available for the execution of the job.
Fig 3. Sequence diagram that represents when a new job arrives to the cluster.
Note that the EC3-L tool can be executed on any machine that has a connection with the Cloud system and it is only employed to bootstrap the cluster. Once deployed, the cluster becomes autonomous and self-managed,
and the machine from which the EC3-L tool was used (the dashed rectangle in Fig. 1) is no longer required.
The expansion of the cluster while it is operating is carried out by the front-end node, by means of CLUES, as explained above.
### Infrastructure Manager[¶](#infrastructure-manager)
The [Infrastructure Manager (IM)](http://www.grycap.upv.es/im) is a tool that eases the access and the usability of IaaS clouds by automating the VMI selection, deployment, configuration, software installation, monitoring and update of Virtual Appliances.
It supports APIs from a large number of virtual platforms, making user applications cloud-agnostic. In addition it integrates a contextualization system to enable the installation and configuration of all the user required applications providing the user with a fully functional infrastructure.
### RADL[¶](#radl)
The main purpose of the [Resource and Application description Language (RADL)](http://imdocs.readthedocs.org/en/devel/radl.html)
is to specify the requirements of the resources where the scientific applications will be executed.
It must address not only hardware (CPU number, CPU architecture, RAM size, etc.) but also software requirements (applications, libraries, data base systems, etc.).
It should include all the configuration details needed to get a fully functional and configured VM (a Virtual Appliance or VA). It merges the definitions of specifications, such as OVF, but using a declarative scheme, with contextualization languages such as Ansible. It also allows describing the underlying network capabilities required.
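For illustration, a minimal RADL `system` block combining hardware and software requirements (the identifier and values below are only an example) could look like:
```
system node (
cpu.arch = 'x86_64' and
cpu.count >= 2 and
memory.size >= 2048M and
disk.0.os.name = 'linux' and
net_interface.0.connection = 'net'
)
```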
### CLUES[¶](#clues)
[CLUES](http://www.grycap.upv.es/clues) is an energy management system for High Performance Computing (HPC) Clusters and Cloud infrastructures.
The main function of the system is to power off internal cluster nodes when they are not being used, and conversely to power them on when they are needed. CLUES system integrates with the cluster management middleware, such as a batch-queuing system or a cloud infrastructure management system, by means of different connectors.
Deployment Models[¶](#deployment-models)
---
EC3 supports a wide variety of deployment models (i.e. cluster behaviour).
In this section, we provide information about all of them and an example of configuration for each deployment model.
For more details, you can follow reading [ec3_variables](http://ec3.readthedocs.io/en/devel/templates.html#special-ec3-features), which provides more information regarding EC3 special variables that support the specification of the deployment model in the templates.
### Basic structure (homogeneous cluster)[¶](#basic-structure-homogeneous-cluster)
A homogeneous cluster is composed of working nodes that have the same characteristics (hardware and software).
This is the basic deployment model of EC3, where we only have one type of `system` for the working nodes.
Fig 1. EC3 Deployment Model for an homogeneous cluster.
In EC3, a template specifying this model would be, for instance:
```
system wn (
ec3_max_instances = 6 and
ec3_node_type = 'wn' and
cpu.count = 4 and
memory.size >= 2048M and
disk.0.os.name = 'linux' and
net_interface.0.connection = 'net'
)
```
This RADL defines a *system* with the feature `cpu.count` equal to four, the feature
`memory.size` greater than or equal to `2048M`, an operating system based on `linux`
and with the feature `net_interface.0.connection` bounded to `'net'`.
It also fixes the maximum number of working nodes to `6` with the EC3 special variable
`ec3_max_instances`, and indicates that this *system* is of type `wn` through `ec3_node_type`.
### Heterogeneous cluster[¶](#heterogeneous-cluster)
This model allows the working nodes comprising the cluster to have different characteristics (hardware and software).
This is of special interest when you need nodes with different configuration or hardware specifications but all working together in the same cluster.
It also allows you to configure several queues and to specify the queue to which each working node belongs.
Fig 2. EC3 Deployment Model for an heterogeneous cluster.
In EC3, a template specifying this model would be, for instance:
```
system wn (
ec3_max_instances = 6 and
ec3_node_type = 'wn' and
ec3_node_queues_list = 'smalljobs' and
ec3_node_pattern = 'wn[1,2,3]' and
cpu.count = 4 and
memory.size >= 2048M and
disk.0.os.name = 'linux' and
net_interface.0.connection = 'net'
)
system largewn (
ec3_inherit_from = system wn and
ec3_node_queues_list = 'largejobs' and
ec3_node_pattern = 'wn[4,5,6]' and
cpu.count = 8 and
memory.size >= 4096M
)
```
This RADL defines two different *system*. The first one defines the `wn` with the feature `cpu.count`
equal to four, the feature `memory.size` greater than or equal to `2048M`, and with the feature
`net_interface.0.connection` bounded to `'net'`.
Again, it also fixes the maximum number of working nodes to `6` with the EC3 special variable
`ec3_max_instances`, and indicates that this *system* is of type `wn` through `ec3_node_type`.
More systems can be defined; a cluster is not limited to two types of working nodes, this is only an example.
The second defined *system*, called `largewn`, inherits the already defined characteristics of `system wn`,
by using the EC3 special feature `ec3_inherit_from`, but it changes the values for `cpu.count` and `memory.size`.
Regarding queue management, the RADL defines two queues by using `ec3_node_queues_list` and determines which nodes belong to them. The pattern used to construct the node names is also defined through the `ec3_node_pattern` variable.
### Cloud Bursting (Hybrid clusters)[¶](#cloud-bursting-hybrid-clusters)
The third model supported by EC3 is Cloud Bursting. It consists of launching nodes in two or more different Cloud providers.
This is done to manage user quotas or saturated resources. When a limit is reached and no more nodes can be deployed inside the first Cloud Provider, EC3 will launch new nodes in the second defined Cloud provider. This is also called a hybrid cluster.
The nodes deployed in different Cloud providers can also differ, so heterogeneous clusters with cloud bursting capabilities can be deployed and automatically managed with EC3. The nodes are automatically interconnected by using VPN or SSH tunneling techniques.
Fig 3. EC3 Deployment Model for an hybrid cluster.
In EC3, a template specifying this model would be, for instance:
```
system wn (
disk.0.os.name = 'linux' and
disk.0.image.url = 'one://mymachine.es/1' and
disk.0.os.credentials.username = 'ubuntu' and
ec3_max_instances = 6 and # maximum instances of this kind
cpu.count = 4 and
memory.size >= 2048M and
ec3_if_fail = 'wn_aws'
)
system wn_aws (
ec3_inherit_from = system wn and # Copy features from system 'wn'
disk.0.image.url = 'aws://us-east-1/ami-30519058' and # Ubuntu 14.04
disk.0.os.credentials.username = 'ubuntu' and
ec3_max_instances = 8 and # maximum instances of this kind
ec3_if_fail = ''
)
```
This RADL is similar to the previous ones. It also defines two different *system* blocks, but the important detail here is the EC3 variable `ec3_if_fail`: it defines the next *system* type to be used when no more instances of *system wn* can be launched.
Command-line Interface[¶](#command-line-interface)
---
The program is called like this:
```
$ ec3 [-l <file>] [-ll <level>] [-q] launch|list|show|templates|ssh|reconfigure|destroy [args...]
```
`-l` `<file>``,` `--log-file` `<file>`[¶](#cmdoption-ec3-l)
Path to file where logs are written. Default value is standard output error.
`-ll` `<level>``,` `--log-level` `<level>`[¶](#cmdoption-ec3-ll)
Only write in the log file messages with level more severe than the indicated:
`1` for debug, `2` for info, `3` for warning and `4` for error.
`-q``,` `--quiet`[¶](#cmdoption-ec3-q)
Don’t show any message in console except the front-end IP.
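For example, to run a subcommand with debug-level logging written to a file (the file name is only illustrative):
```
$ ec3 -l ec3.log -ll 1 list
```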
### Command `launch`[¶](#command-launch)
To deploy a cluster issue this command:
```
ec3 launch <clustername> <template_0> [<template_1> ...] [-a <file>] [-u <url>] [-y]
```
`clustername`[¶](#cmdoption-ec3-launch-arg-clustername)
Name of the new cluster.
`template_0` `...`[¶](#cmdoption-ec3-launch-arg-template-0)
Template names that will be used to deploy the cluster. ec3 tries to find files with these names and extension `.radl` in `~/.ec3/templates` and
`/etc/ec3/templates`. Templates are [RADL](http://imdocs.readthedocs.org/en/devel/radl.html) descriptions of the virtual machines
(e.g., instance type, disk images, networks, etc.) and contextualization scripts.
See [Command templates](#cmd-templates) to list all available templates.
`--add`[¶](#cmdoption-ec3-launch-add)
Add a piece of RADL. This option is useful to set some features. The following example deploys a cluster with the Torque LRMS with up to four working nodes:
```
./ec3 launch mycluster torque ubuntu-ec2 --add "system wn ( ec3_max_instances = 4 )"
```
`-u` `<url>``,` `--restapi-url` `<url>`[¶](#cmdoption-ec3-launch-u)
URL to the IM REST API service.
`-a` `<file>``,` `--auth-file` `<file>`[¶](#cmdoption-ec3-launch-a)
Path to the authorization file, see [Authorization file](#auth-file). This option is compulsory.
`--dry-run`[¶](#cmdoption-ec3-launch-dry-run)
Validate options but do not launch the cluster.
`-n``,` `--not-store`[¶](#cmdoption-ec3-launch-n)
The new cluster will not be stored in the local database.
`-p``,` `--print`[¶](#cmdoption-ec3-launch-p)
Print the final RADL description of the cluster after it has been successfully configured.
`--json`[¶](#cmdoption-ec3-launch-json)
If option -p is indicated, print the RADL in JSON format instead.
`--on-error-destroy`[¶](#cmdoption-ec3-launch-on-error-destroy)
If the cluster deployment fails, try to destroy the infrastructure (and relinquish the resources).
`-y``,` `--yes`[¶](#cmdoption-ec3-launch-y)
Do not ask for confirmation when the connection to IM is not secure. Proceed anyway.
`-g``,` `--golden-images`[¶](#cmdoption-ec3-launch-g)
Generate a VMI from the first deployed node, to accelerate the contextualization process of next node deployments.
### Command `reconfigure`[¶](#command-reconfigure)
The command reconfigures a previously deployed cluster. It can be called after a failed deployment (provisioned resources will be maintained and a new attempt to configure them will take place).
It can also be used to apply a new configuration to a running cluster:
```
ec3 reconfigure <clustername>
```
`-a` `<file>``,` `--auth-file` `<file>`[¶](#cmdoption-ec3-reconfigure-a)
Append authorization entries in the provided file. See [Authorization file](#auth-file).
`--add`[¶](#cmdoption-ec3-reconfigure-add)
Add a piece of RADL. This option is useful to include additional features to a running cluster.
The following example updates the maximum number of working nodes to four:
```
./ec3 reconfigure mycluster --add "system wn ( ec3_max_instances = 4 )"
```
`-r``,` `--reload`[¶](#cmdoption-ec3-reconfigure-r)
Reload templates used to launch the cluster and reconfigure it with them
(useful if some templates were modified).
`--template``,` `-t`[¶](#cmdoption-ec3-reconfigure-template)
Add a new template/recipe. This option is useful to add new templates to a running cluster.
The following example adds the docker recipe to the configuration of the cluster (i.e. installs Docker):
```
./ec3 reconfigure mycluster -r -t docker
```
### Command `ssh`[¶](#command-ssh)
The command opens a SSH session to the infrastructure front-end:
```
ec3 ssh <clustername>
```
`--show-only`[¶](#cmdoption-ec3-ssh-show-only)
Print the command line to invoke SSH and exit.
### Command `destroy`[¶](#command-destroy)
The command undeploys the cluster and removes the associated information from the local database:
```
ec3 destroy <clustername> [--force]
```
`--force`[¶](#cmdoption-ec3-destroy-force)
Removes local information of the cluster even when the cluster could not be undeployed successfully.
### Command `show`[¶](#command-show)
The command prints the RADL description of the cluster stored in the local database:
```
ec3 show <clustername> [-r] [--json]
```
`-r``,` `--refresh`[¶](#cmdoption-ec3-show-r)
Get the current state of the cluster before printing the information.
`--json`[¶](#cmdoption-ec3-show-json)
Print RADL description in JSON format.
### Command `list`[¶](#command-list)
The command prints a table with information about the clusters that have been launched:
```
ec3 list [-r] [--json]
```
`-r``,` `--refresh`[¶](#cmdoption-ec3-list-r)
Get the current state of the cluster before printing the information.
`--json`[¶](#cmdoption-ec3-list-json)
Print the information in JSON format.
### Command `templates`[¶](#command-templates)
The command displays basic information about the available templates like *name*,
*kind* and a *summary* description:
```
ec3 templates [-s/--search <pattern>] [-f/--full-description] [--json]
```
`-s``,` `--search`[¶](#cmdoption-ec3-templates-s)
Show only templates in which the `<pattern>` appears in the description.
`-n``,` `--name`[¶](#cmdoption-ec3-templates-n)
Show only the template with that name.
`-f``,` `--full-description`[¶](#cmdoption-ec3-templates-f)
Instead of the table, it shows all the information about the templates.
`--json`[¶](#cmdoption-ec3-templates-json)
Print the information in JSON format.
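For instance, to show the full description of the templates whose description mentions SLURM:
```
$ ec3 templates -s slurm -f
```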
If you want to see more information about templates and their kinds in EC3, visit [Templates](http://ec3.readthedocs.org/en/latest/templates.html).
### Command `clone`[¶](#command-clone)
The command clones an infrastructure front-end previously deployed from one provider to another:
```
ec3 clone <clustername> [-a/--auth-file <file>] [-u <url>] [-d/--destination <provider>] [-e]
```
`-a` `<file>``,` `--auth-file` `<file>`[¶](#cmdoption-ec3-clone-a)
New authorization file to use to deploy the cloned cluster. See [Authorization file](#auth-file).
`-d` `<provider>``,` `--destination` `<provider>`[¶](#cmdoption-ec3-clone-d)
Provider ID, it must match with the id provided in the auth file. See [Authorization file](#auth-file).
`-u` `<url>``,` `--restapi-url` `<url>`[¶](#cmdoption-ec3-clone-u)
URL to the IM REST API service. If not indicated, EC3 uses the default value.
`-e``,` `--eliminate`[¶](#cmdoption-ec3-clone-e)
Indicate to destroy the original cluster at the end of the clone process. If not indicated, EC3 leaves running the original cluster.
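For instance, to clone a cluster to the provider identified as `one` in a new authorization file, keeping the original running (file and provider names are only illustrative):
```
$ ec3 clone mycluster -a auth_new.txt -d one
```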
### Command `migrate`[¶](#command-migrate)
The command migrates a previously deployed cluster and its running tasks from one provider to another. It is mandatory that the original cluster to migrate has been deployed with SLURM and BLCR; otherwise, the migration process can't be performed. Also, this operation only works with clusters whose images are selected by the VMRC; it does not work if the URL of the VMI/AMI is explicitly written in the system RADL:
```
ec3 migrate <clustername> [-b/--bucket <bucket_name>] [-a/--auth-file <file>] [-u <url>] [-d/--destination <provider>] [-e]
```
`-b` `<bucket_name>``,` `--bucket` `<bucket_name>`[¶](#cmdoption-ec3-migrate-b)
Bucket name of an already created bucket in the S3 account displayed in the auth file.
`-a` `<file>``,` `--auth-file` `<file>`[¶](#cmdoption-ec3-migrate-a)
New authorization file to use to deploy the cloned cluster. It is mandatory to have valid AWS credentials in this file to perform the migration operation, since it uses Amazon S3 to store checkpoint files from jobs running in the cluster. See [Authorization file](#auth-file).
`-d` `<provider>``,` `--destination` `<provider>`[¶](#cmdoption-ec3-migrate-d)
Provider ID, it must match with the id provided in the auth file. See [Authorization file](#auth-file).
`-u` `<url>``,` `--restapi-url` `<url>`[¶](#cmdoption-ec3-migrate-u)
URL to the IM REST API service. If not indicated, EC3 uses the default value.
`-e``,` `--eliminate`[¶](#cmdoption-ec3-migrate-e)
Indicate to destroy the original cluster at the end of the migration process. If not indicated, EC3 leaves running the original cluster.
### Command `stop`[¶](#command-stop)
To stop a cluster to later continue using it, issue this command:
```
ec3 stop <clustername> [-a <file>] [-u <url>] [-y]
```
`clustername`[¶](#cmdoption-ec3-stop-arg-clustername)
Name of the cluster to stop.
`-a` `<file>``,` `--auth-file` `<file>`[¶](#cmdoption-ec3-stop-a)
Path to the authorization file, see [Authorization file](#auth-file).
`-u` `<url>``,` `--restapi-url` `<url>`[¶](#cmdoption-ec3-stop-u)
URL to the IM REST API external service.
`-y``,` `--yes`[¶](#cmdoption-ec3-stop-y)
Do not ask for confirmation to stop the cluster. Proceed anyway.
### Command `restart`[¶](#command-restart)
To restart an already stopped cluster, use this command:
```
ec3 restart <clustername> [-a <file>] [-u <url>]
```
`clustername`[¶](#cmdoption-ec3-restart-arg-clustername)
Name of the cluster to restart.
`-a` `<file>``,` `--auth-file` `<file>`[¶](#cmdoption-ec3-restart-a)
Path to the authorization file, see [Authorization file](#auth-file).
`-u` `<url>``,` `--restapi-url` `<url>`[¶](#cmdoption-ec3-restart-u)
URL to the IM REST API external service.
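For instance, to stop the cluster from the earlier examples and bring it back later:
```
$ ec3 stop mycluster -a auth.txt
$ ec3 restart mycluster -a auth.txt
```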
### Command `transfer`[¶](#command-transfer)
To transfer an already launched cluster that has not yet been transferred to the internal IM, use this command:
```
ec3 transfer <clustername> [-a <file>] [-u <url>]
```
`clustername`[¶](#cmdoption-ec3-transfer-arg-clustername)
Name of the cluster to transfer.
`-a` `<file>``,` `--auth-file` `<file>`[¶](#cmdoption-ec3-transfer-a)
Path to the authorization file, see [Authorization file](#auth-file).
`-u` `<url>``,` `--restapi-url` `<url>`[¶](#cmdoption-ec3-transfer-u)
URL to the IM REST API external service.
### Configuration file[¶](#configuration-file)
Default configuration values are read from `~/.ec3/config.yml`.
If this file doesn’t exist, it is generated with all the available options and their default values.
The file is formatted in [YAML](http://yaml.org/). The options that are related to files accept the following values:
* a scalar: it will be treated as the content of the file, e.g.:
```
auth_file: |
type = OpenNebula; host = myone.com:9999; username = user; password = 1234
type = EC2; username = AKIAAAAAAAAAAAAAAAAA; password = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
```
* a mapping with the key `filename`: it will be treated as the file path, e.g.:
```
auth_file:
filename: /home/user/auth.txt
```
* a mapping with the key `stream`: it will select either standard output (`stdout`)
or standard error (`stderr`), e.g.:
```
log_file:
stream: stdout
```
### Authorization file[¶](#authorization-file)
The authorization file stores in plain text the credentials to access the cloud providers,
the [IM](http://www.grycap.upv.es/im) service and the [VMRC](http://www.grycap.upv.es/vmrc) service. Each line of the file is composed of key-value pairs separated by semicolons, and refers to a single credential. The key and value should be separated by " = ", that is, **an equals sign preceded and followed by at least one white space**, like this:
```
id = id_value ; type = value_of_type ; username = value_of_username ; password = value_of_password
```
Values can contain “=”, and “\n” is replaced by carriage return. The available keys are:
* `type` indicates the service that refers the credential. The services supported are `InfrastructureManager`, `VMRC`, `OpenNebula`, `EC2`,
`OpenStack`, `OCCI`, `LibCloud`, `Docker`, `GCE`, `Azure`, and `LibVirt`.
* `username` indicates the user name associated to the credential. In EC2 it refers to the *Access Key ID*. In Azure it refers to the user Subscription ID. In GCE it refers to *Service Account’s Email Address*.
* `password` indicates the password associated to the credential. In EC2 it refers to the *Secret Access Key*. In GCE it refers to *Service Private Key*. See how to get it and how to extract the private key file from
[here info](https://cloud.google.com/storage/docs/authentication#service_accounts)).
* `tenant` indicates the tenant associated to the credential.
This field is only used in the OpenStack plugin.
* `host` indicates the address of the access point to the cloud provider.
This field is not used in IM and EC2 credentials.
* `proxy` indicates the content of the proxy file associated to the credential.
To refer to a file you must use the function “file(/tmp/proxyfile.pem)” as shown in the example.
This field is only used in the OCCI plugin.
* `project` indicates the project name associated to the credential.
This field is only used in the GCE plugin.
* `public_key` indicates the content of the public key file associated to the credential.
To refer to a file you must use the function “file(cert.pem)” as shown in the example.
This field is only used in the Azure plugin. See how to get it
[here](https://msdn.microsoft.com/en-us/library/azure/gg551722.aspx)
* `private_key` indicates the content of the private key file associated to the credential.
To refer to a file you must use the function “file(key.pem)” as shown in the example.
This field is only used in the Azure plugin. See how to get it
[here](https://msdn.microsoft.com/en-us/library/azure/gg551722.aspx)
* `id` associates an identifier to the credential. The identifier should be used as the label in the *deploy* section in the RADL.
An example of the auth file:
```
id = one; type = OpenNebula; host = oneserver:2633; username = user; password = pass
id = ost; type = OpenStack; host = ostserver:5000; username = user; password = pass; tenant = tenant
type = InfrastructureManager; username = user; password = pass
type = VMRC; host = http://server:8080/vmrc; username = user; password = pass
id = ec2; type = EC2; username = ACCESS_KEY; password = SECRET_KEY
id = gce; type = GCE; username = username.apps.googleusercontent.com; password = pass; project = projectname
id = docker; type = Docker; host = http://host:2375
id = occi; type = OCCI; proxy = file(/tmp/proxy.pem); host = https://fc-one.i3m.upv.es:11443
id = azure; type = Azure; username = subscription-id; public_key = file(cert.pem); private_key = file(key.pem)
id = kub; type = Kubernetes; host = http://server:8080; username = user; password = pass
```
Notice that the user credentials that you specify are *only* employed to provision the resources
(Virtual Machines, security groups, keypairs, etc.) on your behalf.
No other resources will be accessed/deleted.
However, if you are concerned about specifying your credentials to EC3, note that you can (and should)
create an additional set of credentials, perhaps with limited privileges, so that EC3 can access the Cloud on your behalf.
In particular, if you are using Amazon Web Services, we suggest you use the Identity and Access Management ([IAM](http://aws.amazon.com/iam/))
service to create a user with a new set of credentials. This way, you can rest assured that these credentials can be cancelled at anytime.
### Usage of Golden Images[¶](#usage-of-golden-images)
Golden images are a mechanism to accelerate the contextualization process of working nodes in the cluster. They are created when the first node of the cluster is deployed and configured. It provides a preconfigured AMI specially created for the cluster, with no interaction with the user required. Each golden image has a unique id that relates it with the infrastructure. Golden images are also deleted when the cluster is destroyed.
There are two ways to indicate to EC3 the usage of this strategy:
* Command option in the CLI interface: as explained before, the `launch` command offers the option `-g`, `--golden-images` to indicate to EC3 the usage of golden images, e.g.:
```
./ec3 launch mycluster slurm ubuntu -a auth.dat --golden-images
```
* In the [RADL](http://imdocs.readthedocs.org/en/devel/radl.html): as an advanced mode, the user can also specify the usage of golden images in the RADL file that describes the `system` architecture of the working nodes, e.g.:
```
system wn (
cpu.arch = 'x86_64' and
cpu.count >= 1 and
memory.size >= 1024m and
disk.0.os.name = 'linux' and
disk.0.os.credentials.username = 'ubuntu' and
disk.0.os.credentials.password = 'dsatrv' and
ec3_golden_images = 'true'
)
```
Currently this feature is only available in the command-line interface for [OpenNebula](http://www.opennebula.org/) and [Amazon Web Services](https://aws.amazon.com/) providers. The list of supported providers will be updated soon.
Web Interface[¶](#web-interface)
---
### Overview[¶](#overview)
EC3 as a Service (EC3aaS) is a web service offered to the community to facilitate the usage of EC3 by non-experienced users. The EC3 portal integrated in the
[EGI Applications on Demand](https://marketplace.egi.eu/42-applications-on-demand) can be accessed by users of the vo.access.egi.eu VO
(read [EGI AoD documentation](https://egi-federated-cloud.readthedocs.io/en/latest/aod.html) to get more information). The users are enabled to try the tool by using the user-friendly wizard to easily configure and deploy Virtual Elastic Clusters on [EGI Cloud Compute](https://www.egi.eu/services/cloud-compute/) or [HelixNebula](https://www.helix-nebula.eu/) Cloud (powered by [Exoscale](https://www.exoscale.com/)) resources.
The user only needs to choose the Cloud provider and allow EC3 to provision VMs on behalf of the user.
### Initial Steps[¶](#initial-steps)
The first step to access the EC3 portal is to authenticate with your [EGI CheckIn](https://www.egi.eu/services/check-in/)
credentials. Once logged in, you will see the obtained full name on the top-right corner.
These credentials will be used to interact with EGI Cloud Compute providers.
Then the user, in order to configure and deploy a Virtual Elastic Cluster using EC3aaS,
accesses the homepage and selects “Deploy your cluster!” ([Fig. 1](#figure-home)).
With this action, the web page will show the different Cloud providers supported by the AoD web interface version of EC3: EGI Cloud Compute or HelixNebula Cloud.
Fig 1. EC3aaS homepage.
The next step, then, is to choose the Cloud provider where the cluster will be deployed ([Fig. 2](#figure-providers)).
Fig 2. List of Cloud providers supported by EC3aaS.
### Configuration and Deployment of a Cluster in EGI Cloud Compute[¶](#configuration-and-deployment-of-a-cluster-in-egi-cloud-compute)
When the user chooses the EGI Cloud Compute provider a wizard pops up
([Fig. 3](#figure-wizard)). This wizard will guide the user during the configuration process of the cluster, allowing the selection of the Cloud site where the VMs will be deployed, the operating system, the type of LRMS system to use,
the characteristics of the nodes, the maximum number of cluster nodes or the software packages to be installed.
Fig 3. Wizard to configure and deploy a virtual cluster in EGI Cloud Compute.
Specifically, the wizard steps are:
1. **Cluster Configuration**: the user can choose the Local Resource Management System preferred to be automatically installed and configured by EC3. Currently,
SLURM, Torque, Grid Engine, Mesos (+ Marathon + Chronos), Kubernetes, [ECAS](https://portal.enes.org/data/data-metadata-service/processing/ecas),
Nomad and [OSCAR](https://github.com/grycap/oscar) are supported. Also a set of common software packages is available to be installed in the cluster: Spark, Galaxy (only in the case of SLURM clusters), GNUPlot or Octave. EC3 will install and configure them automatically in the contextualization process. If the user needs other software to be installed in the cluster, a new Ansible recipe can be developed and added to EC3 by using the CLI interface.
2. **Endpoint**: the user has to choose one of the EGI Cloud Compute sites that provides support to the vo.access.egi.eu VO. The list of sites is automatically obtained from the [EGI AppDB](https://appdb.egi.eu/) information system. If the site has errors in the Argo Monitoring System, a message (CRITICAL state!) will be added to its name.
You can still use this site, but it may fail due to these errors.
3. **Operating System**: the user chooses the OS of the cluster from the list of available Operating Systems that are provided by the selected Cloud site (also obtained from AppDB).
4. **Instance details**: the user must indicate the instance details, like the number of CPUs or the RAM memory, for the front-end and also the working nodes of the cluster (also obtained from AppDB).
5. **Cluster’s size & Name**: here, the user has to select the maximum number of nodes of the cluster (from 1 to 10), without including the front-end node. This value indicates the maximum number of working nodes to which the cluster can scale. Remember that, initially, the cluster is created with only the front-end, and the nodes are powered on on demand.
Also a name for the cluster (that must be unique) is required to identify the cluster.
6. **Resume and Launch**: a summary of the chosen configuration of the cluster is shown to the user at the last step of the wizard, and the deployment process can start by clicking the Submit button (Fig. 4).
Fig 4. Resume and Launch: final Wizard step.
Finally, when all the steps of the wizard are fulfilled correctly, the submit button starts the deployment process of the cluster. Only the front-end will be deployed,
because the working nodes will be automatically provisioned by EC3 when the workload of the cluster requires them. When the virtual machine of the front-end is running, EC3aaS provides the user with the necessary data to connect to the cluster ([Fig. 5](#figure-data)) which is composed by the username and SSH private key to connect to the cluster, the front-end IP and the name of the cluster.
Fig 5. Information received by the user when a deployment succeeds.
The cluster may not be configured when the IP of the front-end is returned by the web page, because the process of configuring the cluster is a batch process that takes several minutes, depending on the chosen configuration. However, the user is allowed to log in to the front-end machine of the cluster from the moment it is deployed. To know if the cluster is configured, the command is_cluster_ready can be used. It will check whether the configuration process of the cluster has finished:
```
user@local:~$ ssh -i key.pem <username>@<front_ip>
ubuntu@kubeserverpublic:~$ is_cluster_ready
Cluster configured!
```
If the command is_cluster_ready is not found, it means that the cluster is still being configured.
Notice that EC3aaS does not offer all the capabilities of EC3, like hybrid clusters or the usage of spot instances. Those capabilities are considered advanced aspects of the tool and are only available via the [EC3 Command-line Interface](http://ec3.readthedocs.org/en/latest/ec3.html).
### Configuration and Deployment of a Cluster in HelixNebula Cloud[¶](#configuration-and-deployment-of-a-cluster-in-helixnebula-cloud)
In the case of HelixNebula Cloud, the wizard is the same as the one shown for EGI Cloud Compute, but it has an additional step after “Cluster Configuration”.
In the “Provider Account” step ([Fig. 6](#figure-helix)) the user must provide the API key and Secret Key of the Exoscale cloud. To get them, follow the steps described in the
[Exoscale Vouchers for AoD](https://egi-federated-cloud.readthedocs.io/en/latest/aod/exoscale-vouchers.html) documentation.
Fig 6. HelixNebula Provider Account wizard step.
### Management of deployed clusters[¶](#management-of-deployed-clusters)
You can get a list of all your deployed clusters by choosing the “Manage your deployed clusters”
option (right in [Fig. 2](#figure-providers)). It will show a list with the details of the clusters launched by the user. The list will show the following information: Cluster name (specified by the user on creation), the state, front-end public IP, number of working nodes deployed. It will also enable the user to download the SSH private key needed to access the front-end node and the contextualization log to see all the configuration steps performed.
This log will enable the user to verify the currect status of the configuration of the cluster,
and check for errors in case that the cluster is not correctily configured (unconfigured state).
Finally it also offers a button to delete the cluster.
When the deletion process finishes successfully, the front-end of the cluster and all the working nodes had been destroyed and a message is shown to the user informing the success of the operation. If an error occurs during the deleting process,
an error message is returned to the user.
Fig 7. List of Clusters deployed by the active user.
Templates[¶](#templates)
---
EC3 recipes are described in a superset of [RADL](http://imdocs.readthedocs.org/en/devel/radl.html), which is a specification of virtual machines (e.g., instance type, disk images, networks, etc.) and contextualization scripts.
### Basic structure[¶](#basic-structure)
An RADL document has the following general structure:
```
network <network_id> (<features>)
system <system_id> (<features>)
configure <configure_id> (<Ansible recipes>)
deploy <system_id> <num> [<cloud_id>]
```
The keywords `network`, `system` and `configure` assign some *features*
or *recipes* to an identity `<id>`. The features are a list of constraints separated by `and`, and a constraint is formed by
`<feature name> <operator> <value>`. For instance:
```
system tomcat_node (
cpu.count = 4 and
memory.size >= 1024M and
net_interface.0.connection = 'net'
)
```
This RADL defines a *system* with the feature `cpu.count` equal to four, the feature
`memory.size` greater than or equal to `1024M` and with the feature
`net_interface.0.connection` bounded to `'net'`.
The `deploy` keyword is a request to deploy a number of virtual machines.
Optionally, the identifier of a cloud provider can be specified to deploy the virtual machines on a particular cloud.
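For instance, the following lines (an illustrative sketch reusing the `tomcat_node` system defined above; the cloud identifier `one_cloud` is hypothetical) request one instance of that system anywhere and two more on a particular cloud:
```
deploy tomcat_node 1
deploy tomcat_node 2 one_cloud
```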
### EC3 types of Templates[¶](#ec3-types-of-templates)
In EC3, there are three types of templates:
* `images`, which includes the `system` section of the basic template. It describes the main features of the machines that will compose the cluster, like the operating system or the CPU and RAM required;
* `main`, which includes the `deploy` section of the front-end. These templates also include the configuration of the chosen LRMS.
* `component`, for all the recipes that install and configure software packages that can be useful for the cluster.
In order to deploy a cluster with EC3, it is mandatory to indicate in the `ec3 launch` command *one* recipe of kind `main` and *one* recipe of kind `image`; a hypothetical invocation is sketched below.
The `component` recipes are optional, and you can include as many as you need.
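A minimal sketch of such a command, assuming the bundled `slurm` (main), `ubuntu-ec2` (image) and `nfs` (component) templates listed below, and an authorization file called `auth.dat` passed with the `-a` option (the file name is an assumption used here only for illustration):
```
$ ./ec3 launch mycluster slurm nfs ubuntu-ec2 -a auth.dat
```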
To consult the type (*kind*) of the templates offered with EC3,
simply use the `ec3 templates` command, as in the example below:
```
$ ./ec3 templates
name kind summary
---
blcr component Tool for checkpoint the applications.
centos-ec2 images CentOS 6.5 amd64 on EC2.
ckptman component Tool to automatically checkpoint applications running on Spot instances.
docker component An open-source tool to deploy applications inside software containers.
gnuplot component A program to generate two- and three-dimensional plots.
nfs component Tool to configure shared directories inside a network.
octave component A high-level programming language, primarily intended for numerical computations
openvpn component Tool to create a VPN network.
sge main Install and configure a cluster SGE from distribution repositories.
slurm main Install and configure a cluster SLURM 14.11 from source code.
torque main Install and configure a cluster TORQUE from distribution repositories.
ubuntu-azure images Ubuntu 12.04 amd64 on Azure.
ubuntu-ec2 images Ubuntu 14.04 amd64 on EC2.
```
### Network Features[¶](#network-features)
Under the keyword `network` there are the features describing a Local Area Network (LAN) that some virtual machines can share in order to communicate among themselves and with other external networks.
The supported features are:
`outbound = yes|no`
Indicate whether the virtual machines in this network will have public IPs (accessible from any external network) or private ones.
If `yes`, IPs will be public, and if `no`, they will be private.
The default value is `no`.
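For example, a typical cluster declares a public network for the front-end and a private one for the working nodes (a minimal sketch; the network identifiers are arbitrary):
```
network public (outbound = 'yes')
network private ( )
```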
### System Features[¶](#system-features)
Under the keyword `system` there are the features describing a virtual machine (a combined example is shown after the list). The supported features are:
`image_type = vmdk|qcow|qcow2|raw`
Constrain the virtual machine image disk format.
`virtual_system_type = '<hypervisor>-<version>'`
Constrain the hypervisor and the version used to deploy the virtual machine.
`price <=|=|>= <positive float value>`
Constrain the price per hour that will be paid, if the virtual machine is deployed in a public cloud.
`cpu.count <=|=|>= <positive integer value>`
Constrain the number of virtual CPUs in the virtual machine.
`cpu.arch = i686|x86_64`
Constrain the CPU architecture.
`cpu.performance <=|=|>= <positive float value>ECU|GCEU`
Constrain the total computational performance of the virtual machine.
`memory.size <=|=|>= <positive integer value>B|K|M|G`
Constrain the amount of *RAM* memory (main memory) in the virtual machine.
`net_interface.<netId>`
Features under this prefix refer to a virtual network interface attached to the virtual machine.
`net_interface.<netId>.connection = <network id>`
Set the LAN with ID `<network id>` to which the virtual network interface is
connected.
`net_interface.<netId>.ip = <IP>`
Set a static IP to the interface, if it is supported by the cloud provider.
`net_interface.<netId>.dns_name = <string>`
Set the string as the DNS name for the IP assigned to this interface. If the string contains `#N#`, it is replaced by a number that is distinct for every virtual machine deployed with this `system` description.
`instance_type = <string>`
Set the instance type name of this VM.
`disk.<diskId>.<feature>`
Features under this prefix refer to virtual storage devices attached to the virtual machine. `disk.0` refers to system boot device.
`disk.<diskId>.image.url = <url>`
Set the source of the disk image. The URI designates the cloud provider:
* `one://<server>:<port>/<image-id>`, for OpenNebula;
* `ost://<server>:<port>/<ami-id>`, for OpenStack;
* `aws://<region>/<ami-id>`, for Amazon Web Service;
* `gce://<region>/<image-id>`, for Google Cloud;
* `azr://<image-id>`, for Microsoft Azure Classic;
* `azr://<publisher>/<offer>/<sku>/<version>`, for Microsoft Azure;
* `<fedcloud_endpoint_url>/<image_id>`, for FedCloud OCCI connector.
* `appdb://<site_name>/<apc_name>?<vo_name>`, for FedCloud OCCI connector using AppDB info (from ver. 1.6.0).
* `docker://<docker_image>`, for Docker images.
* `fbw://<fogbow_image>`, for FogBow images.
Either `disk.0.image.url` or `disk.0.image.name` must be set.
`disk.<diskId>.image.name = <string>`
Set the source of the disk image by its name in the VMRC server.
Either `disk.0.image.url` or `disk.0.image.name` must be set.
`disk.<diskId>.type = swap|iso|filesystem`
Set the type of the image.
`disk.<diskId>.device = <string>`
Set the device name, if it is a disk with no source set.
`disk.<diskId>.size = <positive integer value>B|K|M|G`
Set the size of the disk, if it is a disk with no source set.
`disk.0.free_size = <positive integer value>B|K|M|G`
Set the free space available in boot disk.
`disk.<diskId>.os.name = linux|windows|mac os x`
Set the operating system associated to the content of the disk.
`disk.<diskId>.os.flavour = <string>`
Set the operating system distribution, like `ubuntu`, `centos`,
`windows xp` and `windows 7`.
`disk.<diskId>.os.version = <string>`
Set the version of the operating system distribution, like `12.04` or
`7.1.2`.
`disk.0.os.credentials.username = <string>` and `disk.0.os.credentials.password = <string>`
Set a valid username and password to access the operating system.
`disk.0.os.credentials.public_key = <string>` and `disk.0.os.credentials.private_key = <string>`
Set a valid public-private keypair to access the operating system.
`disk.<diskId>.applications contains (name=<string>, version=<string>, preinstalled=yes|no)`
Set that the disk must have the application with name `name` installed.
Optionally, a version can be specified. Also, if `preinstalled` is `yes`,
the application must already be installed; if `no`, the application can be installed during the contextualization of the virtual machine if it is not already installed.
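The following sketch combines several of the features above into a single `system` definition (the concrete values, image URL, DNS name and credentials are made-up examples):
```
system front (
    cpu.count >= 2 and
    cpu.arch = 'x86_64' and
    memory.size >= 2048M and
    net_interface.0.connection = 'public' and
    net_interface.0.dns_name = 'myserver' and
    disk.0.image.url = 'one://myopennebula.com/999' and
    disk.0.os.flavour = 'ubuntu' and
    disk.0.os.version = '16.04' and
    disk.0.os.credentials.username = 'ubuntu'
)
```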
### Special EC3 Features[¶](#special-ec3-features)
There are also other special features related to EC3. These features enable customizing the behaviour of EC3 (a combined example is shown after the list):
`ec3_max_instances = <integer value>`
Set the maximum number of nodes with this system configuration; a negative value means no constraint.
The default value is -1. This parameter is used to set the maximum size of the cluster.
`ec3_destroy_interval = <positive integer value>`
Some cloud providers require paying in advance by the hour, like AWS. Therefore, the node will be destroyed only when it is idle and at the end of the interval expressed by this option (in seconds).
The default value is 0.
`ec3_destroy_safe = <positive integer value>`
This value (in seconds) stands for a security margin to avoid incurring a new charge for the next hour.
The instance will be destroyed (if idle) in up to (`ec3_destroy_interval` - `ec3_destroy_safe`) seconds.
The default value is 0.
`ec3_if_fail = <string>`
Set the name of the next system configuration to try when no more instances can be allocated from a cloud provider.
Used for hybrid clusters.
The default value is ‘’.
`ec3_inherit_from = <string>`
Name of an already defined `system` from which to inherit its characteristics. For example, if we have already defined a `system wn` where we have specified cpu and os, and we only want to change memory for a new system, instead of writing the values for cpu and os again, we inherit these values from the specified system with `ec3_inherit_from = system wn`.
The default value is ‘None’.
`ec3_reuse_nodes = <boolean>`
Indicates that you want to stop/start working nodes instead of powering off/on them.
The default value is ‘false’.
`ec3_golden_images = <boolean>`
Indicates that you want to use the golden images feature. See [golden images](http://ec3.readthedocs.io/en/devel/ec3.html#usage-of-golden-images) for more info.
The default value is ‘false’.
`ec3_additional_vm = <boolean>`
Indicates that you want this VM to be treated as an additional VM of the cluster, for example, to install server services that you do not want to put in the front machine.
The default value is ‘false’.
`ec3_node_type = <string>`
Indicates the type of the node. Currently the only supported value is `wn`. It enables distinguishing the WNs from the rest of the nodes.
The default value is ‘None’.
`ec3_node_keywords = <string>`
Comma separated list of key=value pairs that specify some specific features supported by this type of node
(e.g. gpu=1,infiniband=1).
The default value is ‘None’.
`ec3_node_queues_list = <string>`
Comma separated list of queues this type of node belongs to.
The default value is ‘None’.
`ec3_node_pattern = <string>`
A pattern (as a Python regular expression) to match the names of the virtual nodes of the current node type. The value of this variable must be set according to the value of the variable `ec3_max_instances`.
For example, if `ec3_max_instances` is set to 5, a valid value can be: ‘wn[1-5]’.
This variable takes precedence over `ec3_if_fail`, so if a virtual node to be switched on matches the specified pattern, the `ec3_if_fail` variable will be ignored.
The default value is ‘None’.
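As an illustration, the next sketch (with made-up values and image URL) defines a working-node system limited to 8 instances, destroyed only at the end of each paid hour and falling back to another system definition when no more instances can be allocated:
```
system wn (
    ec3_max_instances = 8 and
    ec3_destroy_interval = 3600 and
    ec3_destroy_safe = 60 and
    ec3_if_fail = 'wn_ec2' and
    memory.size >= 1024M and
    disk.0.image.url = 'one://myopennebula.com/999'
)
```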
### System and network inheritance[¶](#system-and-network-inheritance)
It is possible to create a copy of a system or a network and to change and add some features. If the feature `ec3_inherit_from` is present, ec3 replaces that object by a copy of the object pointed to in `ec3_inherit_from` and appends the rest of the features.
The next example shows a system `wn_ec2` that inherits features from system `wn`:
```
system wn (
ec3_if_fail = 'wn_ec2' and
disk.0.image.url = 'one://myopennebula.com/999' and
net_interface.0.connection='public'
)
system wn_ec2 (
ec3_inherit_from = system wn and
disk.0.image.url = 'aws://us-east-1/ami-e50e888c' and
spot = 'yes' and
ec3_if_fail = ''
)
```
The system `wn_ec2` that ec3 finally sends to IM is:
```
system wn_ec2 (
net_interface.0.connection='public' and
disk.0.image.url = 'aws://us-east-1/ami-e50e888c' and
spot = 'yes' and
ec3_if_fail = ''
)
```
In case of systems, if system *A* inherits features from system *B*, the new configure section is composed of the one of system *B* followed by the one of system *A*.
Following the previous example, these are the configure sections named after the systems:
```
configure wn (
@begin
- tasks:
- user: name=user1 password=1234
@end
)
configure wn_ec2 (
@begin
- tasks:
- apt: name=pkg
@end
)
```
Then the configure `wn_ec2` that ec3 finally sends to IM is:
```
configure wn_ec2 (
@begin
- tasks:
- user: name=user1 password=1234
- tasks:
- apt: name=pkg
@end
)
```
### Configure Recipes[¶](#configure-recipes)
Contextualization recipes are specified under the keyword `configure`.
Only Ansible recipes are supported currently. They are enclosed between the tags `@begin` and `@end`, like this:
```
configure add_user1 (
@begin
---
- tasks:
- user: name=user1 password=1234
@end
)
```
#### Exported variables from IM[¶](#exported-variables-from-im)
To ease some contextualization tasks, IM publishes a set of variables that can be accessed by the recipes and that contain information about the virtual machine (a short usage sketch is shown after the list).
`IM_NODE_HOSTNAME`
Hostname of the virtual machine (without the domain).
`IM_NODE_DOMAIN`
Domain name of the virtual machine.
`IM_NODE_FQDN`
Complete FQDN of the virtual machine.
`IM_NODE_NUM`
The value of the substitution `#N#` in the virtual machine.
`IM_MASTER_HOSTNAME`
Hostname (without the domain) of the virtual machine doing the *master*
role.
`IM_MASTER_DOMAIN`
Domain name of the virtual machine doing the *master* role.
`IM_MASTER_FQDN`
Complete FQDN of the virtual machine doing the *master* role.
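These variables can be used directly in the Ansible tasks of a `configure` recipe. The following minimal sketch (the recipe name is arbitrary) only prints some of them with the standard Ansible `debug` module:
```
configure show_im_vars (
@begin
---
- tasks:
  - debug:
      msg: "Configuring {{ IM_NODE_FQDN }} (node number {{ IM_NODE_NUM }}); master is {{ IM_MASTER_FQDN }}"
@end
)
```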
#### Including a recipe from another[¶](#including-a-recipe-from-another)
The next RADL defines two recipes and one of them (`add_user1`) is called by the other (`add_torque`):
```
configure add_user1 (
@begin
---
- tasks:
- user: name=user1 password=1234
@end
)
configure add_torque (
@begin
---
- tasks:
- include: add_user1.yml
- yum: name=torque-client,torque-server state=installed
@end
)
```
#### Including file content[¶](#including-file-content)
If in a `vars` map a variable has a map with key `ec3_file`, ec3 replaces the map by the content of the file named in the value.
For instance, there is a file `slurm.conf` with content:
```
ControlMachine=slurmserver AuthType=auth/munge CacheGroups=0
```
The next Ansible recipe will copy the content of `slurm.conf` into
`/etc/slurm-llnl/slurm.conf`:
```
configure front (
@begin
- vars:
SLURM_CONF_FILE:
ec3_file: slurm.conf
tasks:
- copy:
dest: /etc/slurm-llnl/slurm.conf
content: "{{SLURM_CONF_FILE}}"
@end
)
```
Warning
Avoid using variables with file content in compact expressions like this:
```
- copy: dest=/etc/slurm-llnl/slurm.conf content={{SLURM_CONF_FILE}}
```
#### Include RADL content[¶](#include-radl-content)
Maps with keys `ec3_xpath` and `ec3_jpath` are useful to refer to RADL objects and features from Ansible vars. The difference is that `ec3_xpath` prints the object in RADL format as a string, and `ec3_jpath` prints objects as YAML maps. Both keys support the following paths:
* `/<class>/*`: refers to all objects with that `<class>` and its references; e.g.,
`/system/*` and `/network/*`.
* `/<class>/<id>`: refers to an object of class `<class>` with id `<id>`, including its references; e.g., `/system/front`, `/network/public`.
* `/<class>/<id>/*`: refers to an object of class `<class>` with id `<id>`, without references; e.g., `/system/front/*`, `/network/public/*`.
Consider the next example:
```
network public ( )
system front (
net_interface.0.connection = 'public' and
net_interface.0.dns_name = 'slurmserver' and
queue_system = 'slurm'
)
system wn (
net_interface.0.connection='public'
)
configure slum_rocks (
@begin
- vars:
JFRONT_AST:
ec3_jpath: /system/front/*
XFRONT:
ec3_xpath: /system/front
tasks:
- copy: dest=/tmp/front.radl
content: "{{XFRONT}}"
when: JFRONT_AST.queue_system == "slurm"
@end
)
```
RADL configure `slum_rocks` is transformed into:
```
configure slum_rocks (
@begin
- vars:
JFRONT_AST:
class: system
id: front
net_interface.0.connection:
class: network
id: public
reference: true
net_interface.0.dns_name: slurmserver
queue_system: slurm
XFRONT: |
network public ()
system front (
net_interface.0.connection = 'public' and
net_interface.0.dns_name = 'slurmserver' and
queue_system = 'slurm'
)
tasks:
- content: '{{XFRONT}}'
copy: dest=/tmp/front.radl
when: JFRONT_AST.queue_system == "slurm"
@end
)
```
### Adding your own templates[¶](#adding-your-own-templates)
If you want to add your own customized templates to EC3, you need to consider some aspects:
* For `image` templates, respect the frontend and working nodes nomenclatures. The system section for the frontend *must* receive the name `front`, while at least one type of working node *must* receive the name `wn`.
* For `component` templates, add a `configure` section with the name of the component. You also need to add an `include` statement to import the configure in the system that you want. See [Including a recipe from another](http://ec3.readthedocs.org/en/latest/templates.html#including-a-recipe-from-another) for more details.
Also, it is important to provide a `description` section in each new template, to be considered by the `ec3 templates` command. A minimal sketch of a custom `images` template is shown below.
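In the following sketch, the image URL, the credentials and the exact fields of the `description` section (`kind` and `short`) are illustrative assumptions; the important part is that the front-end system is named `front` and the working-node system is named `wn`:
```
description my-site-image (
    kind = 'images' and
    short = 'Ubuntu 16.04 image on my on-premises OpenNebula.'
)
system front (
    disk.0.image.url = 'one://myopennebula.com/999' and
    disk.0.os.credentials.username = 'ubuntu'
)
system wn (
    ec3_inherit_from = system front
)
```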
Frequently Asked Questions[¶](#frequently-asked-questions)
---
These are some frequently asked questions that might solve your doubts when using EC3.
### General FAQs[¶](#general-faqs)
**What Cloud Providers are supported by EC3 (Elastic Cloud Computing Cluster)?**
Currently, EC3 supports [OpenNebula](http://www.opennebula.org/), [Amazon EC2](https://aws.amazon.com/en/ec2), [OpenStack](http://www.openstack.org/), [OCCI](http://occi-wg.org/), [LibCloud](https://libcloud.apache.org/), [Docker](https://www.docker.com/), [Microsoft Azure](http://azure.microsoft.com/), [Google Cloud Engine](https://cloud.google.com/compute/) and [LibVirt](http://libvirt.org/).
All providers and interfaces are supported by the [CLI](http://ec3.readthedocs.org/en/latest/ec3.html) interface.
However, from the [EC3aaS](http://servproject.i3m.upv.es/ec3/) interface, only support for Amazon EC2, OpenStack, OpenNebula and [EGI FedCloud](https://www.egi.eu/infrastructure/cloud/) is provided. More providers will be added soon, stay tuned!
**What Local Resource Management Systems (LRMS) are supported by EC3?**
Currently, EC3 supports [SLURM](http://www.schedmd.com/slurmdocs/slurm.html), [Torque](http://www.adaptivecomputing.com/products/open-source/torque/), [Apache Mesos](http://mesos.apache.org/), [SGE](http://sourceforge.net/projects/gridscheduler/), [HTCondor](https://research.cs.wisc.edu/htcondor/) and [Kubernetes](https://kubernetes.io/).
**Is it necessary to indicate a LRMS recipe in the deployment?**
Yes, it is *mandatory*, because the cluster needs to have an LRMS installed.
This is why the LRMS recipes are considered *main* recipes, needed to perform a deployment with EC3.
**Is it secure to provide my credentials to EC3?**
The user credentials that you specify are *only* employed to provision the resources
(Virtual Machines, security groups, keypairs, etc.) on your behalf.
No other resources will be accessed/deleted.
However, if you are concerned about specifying your credentials to EC3, note that you can (and should)
create an additional set of credentials, perhaps with limited privileges, so that EC3 can access the Cloud on your behalf.
In particular, if you are using Amazon Web Services, we suggest you use the Identity and Access Management ([IAM](http://aws.amazon.com/iam/))
service to create a user with a new set of credentials. This way, you can rest assured that these credentials can be cancelled at anytime.
**Can I configure different software packages than the ones provided with EC3 in my cluster?**
Yes, you can configure them by using the EC3 [CLI](http://ec3.readthedocs.org/en/latest/ec3.html) interface. To do so, you will need to provide a valid Ansible recipe to automatically install the dependency. You can also contact us by using the contact section, and we will try to add the software package you need.
**Why am I experiencing problems with CentOS 6 when trying to deploy a Mesos cluster?**
Because the Mesos recipe provided with EC3 is optimized for CentOS 7 as well as Ubuntu 14.04. If you want to deploy a Mesos cluster, we encourage you to use one of these operating systems.
**Which is the best combination to deploy a Galaxy cluster?**
The best configuration for an elastic Galaxy cluster is to select Torque as the LRMS and install the NFS package. Support for Galaxy in SGE is not provided. Moreover, we have detected problems when using Galaxy with SLURM. So, we encourage you to use Torque and NFS in the EC3aaS and also with the EC3 CLI.
### EC3aaS Webpage[¶](#ec3aas-webpage)
**Is my cluster ready when I receive its IP using the EC3aaS webpage?**
Probably not, because the process of configuring the cluster is a batch process that takes several minutes, depending on the chosen configuration.
However, you can log in to the front-end machine of the cluster from the moment it is deployed. To know whether the cluster is configured, you can use the command *is_cluster_ready*. It will check whether the cluster has been configured or whether the configuration process is still in progress. If the command *is_cluster_ready* is not recognised, wait a few seconds and try again, because this command is also installed during the configuration process.
**Why can’t I deploy a hybrid cluster using the EC3aaS webpage?**
Because no support is provided yet by the EC3aaS service.
If you want to deploy a hybrid cluster, we encourage you to use the [CLI](http://ec3.readthedocs.org/en/latest/ec3.html) interface.
**Why can I only access the Amazon EC2, OpenStack, OpenNebula and EGI FedCloud Cloud providers while other Cloud providers are supported by EC3?**
Because no support is provided yet by the EC3aaS service.
If you want to use another supported Cloud provider, like [Microsoft Azure](http://azure.microsoft.com/) or [Google Cloud Engine](https://cloud.google.com/compute/), we encourage you to use the [CLI](http://ec3.readthedocs.org/en/latest/ec3.html) interface.
**What is the correct format for the “endpoint” in the OpenNebula and Openstack wizards?**
The user needs to provide EC3 the endpoint of the on-premises Cloud provider. The correct format is *name_of_the_server:port*.
For example, for Openstack *ostserver:5000*, or for OpenNebula *oneserver:2633*.
The same format is employed in the authorization file required to use the [CLI](http://ec3.readthedocs.org/en/latest/ec3.html) interface of EC3.
**Why am I receiving this error “InvalidParameterCombination - Non-Windows instances with a virtualization type of ‘hvm’ are currently not supported for this instance type” when I deploy a cluster in Amazon EC2?**
This error is shown by the Cloud provider, because the instance type and the Amazon Machine Image selected are incompatible.
The Linux AMI with HVM virtualization cannot be used to launch a non-cluster compute instance.
Select another AMI with a virtualization type of paravirtual and try again.
**Why am I receiving this error “VPCResourceNotSpecified - The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request.” when I deploy a cluster in Amazon EC2?**
This error is shown by the Cloud provider, because the instance type selected can only be used in a VPC.
To use a VPC, please, employ the CLI interface of EC3. You can specify the name of an existent VPC in the RADL file.
More info about [Amazon VPC](http://aws.amazon.com/vpc/).
**Why can’t I download the private key of my cluster?**
If you are experiencing problems downloading the private key of your cluster (deployed in Amazon EC2),
please try another browser. The website is currently optimized for Google Chrome.
**Where can I get the endpoint and VMI identifier for the EGI FedCloud wizard?**
In the EGI FedCloud case, the endpoint and VMI identifier can be obtained from the [AppDB portal](https://appdb.egi.eu). In the cloud marketplace, select the desired VMI, then select the site to launch it (considering your VO) and click the “get IDs” button. The field “Site endpoint” shows the value of the endpoint to specify in the wizard (without a “/” character after the port), and the value after the “#” char of the OCCI ID field shows the VMI identifier. Finally, the value after the “#” char of the Template ID field shows the instance type (in some OpenStack sites you must replace the “.” char with a “-“, e.g. m1.small to m1-small).
**Can I configure software packages in my cluster that are not available in the wizard?**
You can configure them by using the EC3 [CLI](http://ec3.readthedocs.org/en/latest/ec3.html) interface. To do so, you will need to provide a valid Ansible recipe to automatically install the dependency. You can also contact us by using the contact section, and we will try to add the software package you need.
**What is the OSCAR option that appears as a LRMS?**
In OpenNebula and EGI FedCloud there is an option to deploy as the LRMS the [OSCAR](https://github.com/grycap/oscar) (Open Source Serverless Computing for Data-Processing Applications) framework, which is an open-source platform to support the Functions as a Service (FaaS) computing model for file-processing applications. This option deploys a Kubernetes cluster with the OSCAR framework and all its dependencies.
About[¶](#about)
---
EC3 has been developed by the [Grid and High Performance Computing Group (GRyCAP)](http://www.grycap.upv.es) at the [Instituto de Instrumentación para Imagen Molecular (I3M)](http://www.i3m.upv.es)
from the [Universitat Politècnica de València (UPV)](http://www.upv.es).
This development has been supported by the following research projects:
* Advanced Services for the Deployment and Contextualisation of Virtual Appliances to Support Programming Models in Cloud Environments (TIN2010-17804), Ministerio de Ciencia e Innovación
* Migrable Elastic Virtual Clusters on Hybrid Cloud Infrastructures (TIN2013-44390-R),
Ministerio de Economía y Competitividad
* Ayudas para la contratación de personal investigador en formación de carácter predoctoral,
programa VALi+d (grant number ACIF/2013/003), Conselleria d’Educació of the Generalitat Valenciana.
The following publications summarise both the development and the integration in larger architectures. Please acknowledge the usage of this software by citing the last reference:
* <NAME>.; <NAME>.; <NAME>. and <NAME>.; “EC3: Elastic Cloud Computing Cluster”. Journal of Computer and System Sciences, Volume 78, Issue 8, December 2013, Pages 1341-1351, ISSN 0022-0000, 10.1016/j.jcss.2013.06.005.
* <NAME>.; <NAME>.; <NAME>.; and <NAME>.; “Virtual Hybrid Elastic Clusters in the Cloud”. Proceedings of 8th IBERIAN GRID INFRASTRUCTURE CONFERENCE (Ibergrid), pp. 103 - 114 ,2014.
* “Custom elastic clusters to manage Galaxy environments”. In: EGI Inspired Newsletter (Issue 22), pp 2, January 2016. Available [here](http://www.egi.eu/news-and-media/newsletters/Inspired_Issue_22/Custom_elastic_clusters_to_manage_Galaxy_environments.html).
* <NAME>,; <NAME>.; <NAME>.; <NAME>.; and <NAME>.; “Self-managed cost-efficient virtual elastic clusters on hybrid Cloud infrastructures”. Future Generation Computer Systems, 2016. doi:10.1016/j.future.2016.01.018.
Preprints are available [here](http://www.grycap.upv.es/gmolto/publications.php).
Also, EC3 has been integrated in the EGI Platform for the long-tail of science (access available through [here](https://marketplace.egi.eu/42-applications-on-demand-beta)), and it is available as one of the services of the European Open Science Cloud [Marketplace](https://marketplace.eosc-portal.eu/services/elastic-cloud-compute-cluster-ec3)
Indices and tables[¶](#indices-and-tables)
===
* [Index](genindex.html)
* [Search Page](search.html) |
ProfileLikelihood | cran | R | Package ‘ProfileLikelihood’
August 25, 2023
Version 1.3
Date 2023-08-24
Title Profile Likelihood for a Parameter in Commonly Used Statistical
Models
Maintainer <NAME> <<EMAIL>>
Description Provides profile likelihoods for a parameter of interest in commonly used statistical
models. The models include linear models, generalized linear models, proportional odds models,
linear mixed-effects models, and linear models for longitudinal responses fitted by generalized
least squares. The package also provides plots for normalized profile likelihoods as well as the
maximum profile likelihood estimates and the kth likelihood support intervals.
License GPL (>= 3)
Imports nlme, MASS
LazyLoad yes
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-2544-7090>)
Repository CRAN
Date/Publication 2023-08-25 10:40:06 UTC
R topics documented:
ProfileLikelihood-packag... 2
datagl... 3
datapol... 3
LR.pvalu... 4
profilelike.gl... 6
profilelike.gl... 7
profilelike.l... 9
profilelike.lm... 11
profilelike.plo... 12
profilelike.pol... 14
profilelike.summar... 15
ProfileLikelihood-package
Profile Likelihood for a Parameter in Commonly Used Statistical
Models
Description
This package provides profile likelihoods for a parameter of interest in commonly used statistical
models. The models include linear models, generalized linear models, proportional odds models,
linear mixed-effects models, and linear models for longitudinal responses fitted by generalized least
squares. The package also provides plots for normalized profile likelihoods as well as the maximum
profile likelihood estimates and the kth likelihood support intervals (Royall, 1997).
Details
Use profilelike.lm, profilelike.glm, profilelike.polr, profilelike.gls and profilelike.lme
to obtain profile likelihoods and normalized profile likelihoods, and plot the normalized profile
likelihoods using profilelike.plot. Use profilelike.summary to obtain the maximum profile
likelihood estimate and the kth likelihood support intervals.
Author(s)
<NAME> <<EMAIL>>
Maintainer: <NAME> <<EMAIL>>
References
Royall, <NAME>. (1997). Statistical Evidence: A Likelihood Paradigm. Chapman & Hall/CRC.
Pawitan, Yudi (2001). In All Likelihood: Statistical Modelling and Inference Using Likelihood.
Oxford University Press.
See Also
profilelike.lm, profilelike.glm, profilelike.polr, profilelike.gls, profilelike.lme,
profilelike.plot, profilelike.summary
Examples
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- c(rep(0,10), rep(1,10))
weight <- c(ctl, trt)
dd <- data.frame(group=group, weight=weight)
xx <- profilelike.lm(formula = weight ~ 1, data=dd, profile.theta="group",
lo.theta=-2, hi.theta=1, length=500)
profilelike.plot(theta=xx$theta, profile.lik.norm=xx$profile.lik.norm, round=2)
dataglm Example Data for a Profile Likelihood in Generalized Linear Models
Description
This data is used to illustrate how to obtain values for a profile likelihood of a parameter of interest
in a generalized linear model.
Usage
data(dataglm)
Format
A data frame with 100 observations on the following 5 variables.
id a numeric vector; unique identification number
y a numeric vector; binary outcome variable
x1 a numeric vector; covariate
x2 a numeric vector; covariate
group a numeric vector; covariate and a parameter of interest
Details
This data is used to illustrate how to obtain values for a profile likelihood of a parameter of interest
in a logistic regression model. The parameter of interest is the group indicator variable, y is a binary
outcome, and x1 and x2 are covariates in a logistic regression model.
Examples
data(dataglm)
xx <- profilelike.glm(y ~ x1 + x2, data=dataglm, profile.theta="group",
family=binomial(link="logit"), length=500, round=2)
profilelike.plot(theta=xx$theta, profile.lik.norm=xx$profile.lik.norm, round=2)
datapolr Example Data for a Profile Likelihood in Proportional Odds Models
Description
This data is used to illustrate how to obtain values for a profile likelihood of a parameter of interest
in a proportional odds model.
Usage
data(datapolr)
Format
A data frame with 66 observations on the following 5 variables.
id a numeric vector; unique identification number
y a numeric vector; ordinal outcome variable; should be defined as a factor
x1 a numeric vector; covariate
x2 a numeric vector; covariate
group a numeric vector; covariate and a parameter of interest
Details
This data is used to illustrate how to obtain values for a profile likelihood of a parameter of interest
in a proportional odds model. The parameter of interest is the group indicator variable, y is an ordinal
outcome, and x1 and x2 are covariates in a proportional odds model.
Examples
data(datapolr)
datapolr$y <- as.factor(datapolr$y)
xx <- profilelike.polr(y ~ x1 + x2, data=datapolr, profile.theta="group",
method="logistic", lo.theta=-2, hi.theta=2.5, length=500)
profilelike.plot(theta=xx$theta, profile.lik.norm=xx$profile.lik.norm, round=2)
LR.pvalue P-values based on LR statistics for 2 x 2 Tables
Description
This function provides p-values based on likelihood ratio (LR) statistics for 2 x 2 tables.
Usage
LR.pvalue(y1, y2, n1, n2, interval=0.01)
Arguments
y1 the number of success for treatment 1.
y2 the number of success for treatment 2.
n1 the sample size for treatment 1.
n2 the sample size for treatment 2.
interval grid for evaluating a parameter of interest to obtain values for likelihoods. The
default is 0.01.
Details
This function provides p-values based on the profile and conditional likelihood ratio (LR) statistics
for 2 x 2 tables. The function also provides the profile and conditional likelihood support intervals
(k=6.8) corresponding to a 95% confidence interval based on a normal approximation. For
comparison purposes, p-values from Pearson’s Chi-squared test, Fisher’s exact test and Pearson’s
Chi-squared test with continuity correction are also provided.
Value
mle.lor.uncond
the maximum likelihood estimate for log odds ratio.
mle.lor.cond the maximum conditional likelihood estimate for log odds ratio.
LI.norm.profile
profile likelihood support interval (k=6.8) corresponding to a 95% confidence
interval based on a normal approximation.
LI.norm.cond conditional likelihood support interval (k=6.8) corresponding to a 95% confidence
interval based on a normal approximation.
LR.profile profile likelihood ratio.
LR.cond conditional likelihood ratio.
Pvalue.LR.profile
p-value based on the profile LR statistic.
Pvalue.LR.cond
p-value based on the conditional LR statistic.
Pvalue.chisq.test
p-value from Pearson’s Chi-squared test.
Pvalue.fisher.test
p-value from Fisher’s exact test.
Pvalue.chisq.cont.correction
p-value from Pearson’s Chi-squared test with continuity correction.
Warning
Likelihood intervals, LRs and the corresponding p-values are not reliable with empty cells (y1=0 or
y2=0) in 2 x 2 tables.
P-values from Pearson’s Chi-squared test, Fisher’s exact test and Pearson’s Chi-squared test with
continuity correction are provided only for comparison purposes. For more options, use chisq.test
and fisher.test for these tests.
Author(s)
<NAME> <<EMAIL>>
See Also
profilelike.plot, profilelike.summary, profilelike.glm
Examples
(fit <- LR.pvalue(y1=20, y2=30, n1=50, n2=50, interval=0.01))
profilelike.glm Profile Likelihood for Generalized Linear Models
Description
This function provides values for a profile likelihood and a normalized profile likelihood for a
parameter of interest in a generalized linear model.
Usage
profilelike.glm(formula, data, profile.theta, family = stats::gaussian,
offset.glm = NULL, lo.theta = NULL, hi.theta = NULL, length = 300,
round = 2, subset = NULL, weights = NULL, offset = NULL, ...)
Arguments
formula see corresponding documentation in glm.
data a data frame. See corresponding documentation in glm.
profile.theta a parameter of interest, theta; must be a numeric variable.
family see corresponding documentation in glm.
offset.glm same usage as offset in glm. See corresponding documentation for offset in glm.
lo.theta lower bound for a parameter of interest to obtain values for a profile likelihood.
hi.theta upper bound for a parameter of interest to obtain values for a profile likelihood.
length length of numerical grid values for a parameter of interest to obtain values for a
profile likelihood.
round the number of decimal places for round function to automatically define lower
and upper bounds of numerical grid for a parameter of interest. If an automatically
defined parameter range is not appropriate, increase the number or specify
lo.theta and hi.theta.
subset should not be provided.
weights should not be provided.
offset should not be provided. Instead use offset.glm.
... further arguments passed to or from other methods.
Details
This function provides values for a profile likelihood and a normalized profile likelihood for a
parameter of interest in a generalized linear model. Users must define a parameter of interest in
a generalized linear model. This function can be used for generalized linear models comparable
with the glm function. However, arguments weights, subset, and offset should not be provided. An
argument offset in glm function can be provided using offset.glm. A normalized profile likelihood
is obtained by a profile likelihood being divided by the maximum value of the profile likelihood so
that a normalized profile likelihood ranges from 0 to 1.
Value
theta numerical grid values for a parameter of interest in a specified range (between
lower and upper bounds).
profile.lik numerical values for a profile likelihood corresponding to theta in a specified
range (between lower and upper bounds).
profile.lik.norm
numerical values for a normalized profile likelihood ranging from 0 to 1.
Warning
Arguments weights, subset, and offset in the glm function are not comparable.
Missing values should be removed.
Author(s)
<NAME> <<EMAIL>>
See Also
profilelike.plot, profilelike.summary, profilelike.lm, profilelike.polr, profilelike.gls,
profilelike.lme, glm
Examples
data(dataglm)
xx <- profilelike.glm(y ~ x1 + x2, data=dataglm, profile.theta="group",
family=binomial(link="logit"), length=500, round=2)
profilelike.plot(theta=xx$theta, profile.lik.norm=xx$profile.lik.norm, round=2)
profilelike.gls Profile Likelihood for Linear Models for Longitudinal Responses Fitted
by Generalized Least Squares
Description
This function provides values for a profile likelihood and a normalized profile likelihood for a
parameter of interest in a linear model for longitudinal responses fitted by generalized least squares.
Usage
profilelike.gls(formula, data, correlation = NULL, subject, profile.theta,
method = "ML", lo.theta, hi.theta, length = 300, round = 2,
subset = NULL, weights = NULL, ...)
Arguments
formula see corresponding documentation in gls.
data a data frame. See corresponding documentation in gls.
correlation see corresponding documentation in gls.
subject see corresponding documentation in gls.
profile.theta a parameter of interest, theta; must be a numeric variable.
method see corresponding documentation in gls.
lo.theta lower bound for a parameter of interest to obtain values for a profile likelihood.
hi.theta upper bound for a parameter of interest to obtain values for a profile likelihood.
length length of numerical grid values for a parameter of interest to obtain values for a
profile likelihood.
round the number of decimal places for round function to automatically define lower
and upper bounds of numerical grid for a parameter of interest. If an automatically
defined parameter range is not appropriate, increase the number or specify
lo.theta and hi.theta.
subset should not be provided.
weights should not be provided.
... further arguments passed to or from other methods.
Details
This function provides values for a profile likelihood and a normalized profile likelihood for a
parameter of interest in a linear model for longitudinal responses fitted by generalized least squares.
Users must define a parameter of interest in the model. This function can be used for models for
longitudinal responses comparable with the gls function. However, arguments weights and subset
should not be provided. A normalized profile likelihood is obtained by a profile likelihood being
divided by the maximum value of the profile likelihood so that a normalized profile likelihood
ranges from 0 to 1.
Value
theta numerical grid values for a parameter of interest in a specified range (between
lower and upper bounds).
profile.lik numerical values for a profile likelihood corresponding to theta in a specified
range (between lower and upper bounds).
profile.lik.norm
numerical values for a normalized profile likelihood ranging from 0 to 1.
Warning
Arguments weights and subset in the gls function are not comparable.
Missing values should be removed.
Author(s)
<NAME> <<EMAIL>>
See Also
profilelike.plot, profilelike.summary, profilelike.lm, profilelike.glm, profilelike.polr,
profilelike.lme, gls
Examples
data(Gasoline, package = "nlme")
xx <- profilelike.gls(formula=yield ~ endpoint, correlation=nlme::corAR1(form = ~ 1 | id),
data=Gasoline, subject="Sample", profile.theta="vapor", method="ML",
lo.theta=1, hi.theta=5, length=500, round=2)
profilelike.plot(theta=xx$theta, profile.lik.norm=xx$profile.lik.norm, round=4)
profilelike.lm Profile Likelihood for Linear Models
Description
This function provides values for a profile likelihood and a normalized profile likelihood for a
parameter of interest in a linear model.
Usage
profilelike.lm(formula, data, profile.theta, lo.theta = NULL, hi.theta = NULL,
length = 300, round = 2, subset = NULL, weights = NULL, offset = NULL, ...)
Arguments
formula see corresponding documentation in lm.
data a data frame. See corresponding documentation in lm.
profile.theta a parameter of interest, theta; must be a numeric variable.
lo.theta lower bound for a parameter of interest to obtain values for a profile likelihood.
hi.theta upper bound for a parameter of interest to obtain values for a profile likelihood.
length length of numerical grid values for a parameter of interest to obtain values for a
profile likelihood.
round the number of decimal places for round function to automatically define lower
and upper bounds of numerical grid for a parameter of interest. If an automatically
defined parameter range is not appropriate, increase the number or specify
lo.theta and hi.theta.
subset should not be provided.
weights should not be provided.
offset should not be provided.
... further arguments passed to or from other methods.
Details
This function provides values for a profile likelihood and a normalized profile likelihood for a
parameter of interest in a linear model. Users must define a parameter of interest in a linear model.
This function can be used for linear models comparable with the lm function. However, arguments
weights, subset, and offset should not be provided. A normalized profile likelihood is obtained by a
profile likelihood being divided by the maximum value of the profile likelihood so that a normalized
profile likelihood ranges from 0 to 1.
Value
theta numerical grid values for a parameter of interest in a specified range (between
lower and upper bounds).
profile.lik numerical values for a profile likelihood corresponding to theta in a specified
range (between lower and upper bounds).
profile.lik.norm
numerical values for a normalized profile likelihood ranging from 0 to 1.
Warning
Arguments weights, subset, and offset in the lm function are not comparable.
Missing values should be removed.
Author(s)
<NAME> <<EMAIL>>
See Also
profilelike.plot, profilelike.summary, profilelike.glm, profilelike.polr, profilelike.gls,
profilelike.lme, lm
Examples
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- c(rep(0,10), rep(1,10))
weight <- c(ctl, trt)
dd <- data.frame(group=group, weight=weight)
xx <- profilelike.lm(formula = weight ~ 1, data=dd, profile.theta="group",
lo.theta=-2, hi.theta=1, length=500)
profilelike.plot(theta=xx$theta, profile.lik.norm=xx$profile.lik.norm, round=2)
profilelike.lme Profile Likelihood for Linear Mixed-Effects Models
Description
This function provides values for a profile likelihood and a normalized profile likelihood for a
parameter of interest in a linear mixed-effects model.
Usage
profilelike.lme(formula, data, subject, random, correlation = NULL,
profile.theta, method = "ML", lo.theta, hi.theta, length = 300,
round = 2, subset = NULL, weights = NULL, ...)
Arguments
formula see corresponding documentation in lme.
data a data frame. See corresponding documentation in lme.
subject see corresponding documentation in lme.
random see corresponding documentation in lme.
correlation see corresponding documentation in lme.
profile.theta a parameter of interest, theta; must be a numeric variable.
method see corresponding documentation in lme.
lo.theta lower bound for a parameter of interest to obtain values for a profile likelihood.
hi.theta upper bound for a parameter of interest to obtain values for a profile likelihood.
length length of numerical grid values for a parameter of interest to obtain values for a
profile likelihood.
round the number of decimal places for round function to automatically define lower
and upper bounds of numerical grid for a parameter of interest. If an automatically
defined parameter range is not appropriate, increase the number or specify
lo.theta and hi.theta.
subset should not be provided.
weights should not be provided.
... further arguments passed to or from other methods.
Details
This function provides values for a profile likelihood and a normalized profile likelihood for a
parameter of interest in a linear mixed-effects model. Users must define a parameter of interest in a
linear mixed-effects model. This function can be used for models comparable with the lme function.
However, arguments weights and subset should not be provided. A normalized profile likelihood
is obtained by a profile likelihood being divided by the maximum value of the profile likelihood so
that a normalized profile likelihood ranges from 0 to 1.
Value
theta numerical grid values for a parameter of interest in a specified range (between
lower and upper bounds).
profile.lik numerical values for a profile likelihood corresponding to theta in a specified
range (between lower and upper bounds).
profile.lik.norm
numerical values for a normalized profile likelihood ranging from 0 to 1.
Warning
Arguments weights and subset in the lme function are not comparable.
Missing values should be removed.
Author(s)
<NAME> <<EMAIL>>
See Also
profilelike.plot, profilelike.summary, profilelike.lm, profilelike.glm, profilelike.polr,
profilelike.gls, lme
Examples
## Not run:
xx <- profilelike.lme(formula = yield ~ endpoint, random = ~ 1 | id,
correlation=corAR1(form = ~ 1 | id), data=Gasoline, subject="Sample",
profile.theta="vapor", method="ML", lo.theta=1, hi.theta=5, length=500, round=2)
profilelike.plot(theta=xx$theta, profile.lik.norm=xx$profile.lik.norm, round=4)
## End(Not run)
profilelike.plot Profile Likelihood Plot
Description
The function provides a plot for a normalized profile likelihood as well as the maximum profile
likelihood estimate and the kth likelihood support intervals (Royall, 1997).
Usage
profilelike.plot(theta = theta, profile.lik.norm = profile.lik.norm, round = 2)
Arguments
theta numerical grid values for a parameter of interest in a specified range.
profile.lik.norm
numerical values for a normalized profile likelihood ranging from 0 to 1.
round the number of decimal places for round function for presentation of the maximum
profile likelihood estimate and the kth likelihood support intervals.
Details
The function provides a plot for a normalized profile likelihood obtained from profilelike.lm,
profilelike.glm, profilelike.polr, profilelike.gls and profilelike.lme. The maximum
profile likelihood estimate, the kth likelihood support interval (k=8, k=20, and k=32), and
the likelihood support interval (k=6.8) corresponding to a 95% confidence interval based on a normal
approximation are also presented.
Value
A normalized profile likelihood plot with the maximum profile likelihood estimate and the kth
likelihood support intervals.
Author(s)
<NAME> <<EMAIL>>
References
Royall, <NAME>. (1997). Statistical Evidence: A Likelihood Paradigm. Chapman & Hall/CRC.
<NAME> (2001). In All Likelihood: Statistical Modelling and Inference Using Likelihood.
Oxford University Press.
See Also
profilelike.summary, profilelike.lm, profilelike.glm, profilelike.polr, profilelike.gls,
profilelike.lme
Examples
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- c(rep(0,10), rep(1,10))
weight <- c(ctl, trt)
dd <- data.frame(group=group, weight=weight)
xx <- profilelike.lm(formula = weight ~ 1, data=dd, profile.theta="group",
lo.theta=-2, hi.theta=1, length=500)
profilelike.plot(theta=xx$theta, profile.lik.norm=xx$profile.lik.norm, round=2)
profilelike.summary(k=8, theta=xx$theta, profile.lik.norm=xx$profile.lik.norm, round=3)
profilelike.polr Profile Likelihood for Proportional Odds Models
Description
This function provides values for a profile likelihood and a normalized profile likelihood for a
parameter of interest in a proportional odds model.
Usage
profilelike.polr(formula, data, profile.theta, method = "logistic",
lo.theta = NULL, hi.theta = NULL, length = 300, round = 2,
subset = NULL, weights = NULL, offset = NULL, ...)
Arguments
formula see corresponding documentation in polr.
data a data frame. See corresponding documentation in polr.
profile.theta a parameter of interest, theta; must be a numeric variable.
method see corresponding documentation in polr.
lo.theta lower bound for a parameter of interest to obtain values for a profile likelihood.
hi.theta upper bound for a parameter of interest to obtain values for a profile likelihood.
length length of numerical grid values for a parameter of interest to obtain values for a
profile likelihood.
round the number of decimal places for round function to automatically define lower
and upper bounds of numerical grid for a parameter of interest. If an automatically
defined parameter range is not appropriate, increase the number or specify
lo.theta and hi.theta.
subset should not be provided.
weights should not be provided.
offset should not be provided.
... further arguments passed to or from other methods.
Details
This function provides values for a profile likelihood and a normalized profile likelihood for a
parameter of interest in a proportional odds model. Users must define a parameter of interest in
a proportional odds model. This function can be used for proportional odds models comparable
with the polr function. However, arguments weights, subset, and offset should not be provided.
A normalized profile likelihood is obtained by a profile likelihood being divided by the maximum
value of the profile likelihood so that a normalized profile likelihood ranges from 0 to 1.
Value
theta numerical grid values for a parameter of interest in a specified range (between
lower and upper bounds).
profile.lik numerical values for a profile likelihood corresponding to theta in a specified
range (between lower and upper bounds).
profile.lik.norm
numerical values for a normalized profile likelihood ranging from 0 to 1.
Warning
Arguments weights, subset, and offset in the polr function are not comparable.
Missing values should be removed.
Author(s)
<NAME> <<EMAIL>>
See Also
profilelike.plot, profilelike.summary, profilelike.lm, profilelike.glm, profilelike.gls,
profilelike.lme, polr
Examples
data(datapolr)
datapolr$y <- as.factor(datapolr$y)
xx <- profilelike.polr(y ~ x1 + x2, data=datapolr, profile.theta="group",
method="logistic", lo.theta=-2, hi.theta=2.5, length=500)
profilelike.plot(theta=xx$theta, profile.lik.norm=xx$profile.lik.norm, round=2)
profilelike.summary Summary for the Maximum Profile Likelihood Estimate and Likelihood
Support Intervals
Description
The function provides the maximum profile likelihood estimate and likelihood support intervals
(Royall, 1997).
Usage
profilelike.summary(k, theta = theta, profile.lik.norm = profile.lik.norm,
round = 2)
Arguments
k strength of evidence for the kth likelihood support interval.
theta numerical grid values for a parameter of interest in a specified range.
profile.lik.norm
numerical values for a normalized profile likelihood ranging from 0 to 1.
round the number of decimal places for round function for presentation of the maximum
profile likelihood estimate and the kth likelihood support intervals.
Details
The function provides the maximum profile likelihood estimate and likelihood support intervals
for a profile likelihood obtained from profilelike.lm, profilelike.glm, profilelike.polr,
profilelike.gls and profilelike.lme. The kth likelihood support interval and the likelihood
support interval (k=6.8) corresponding to a 95% confidence interval based on a normal approximation
are provided.
Value
k strength of evidence for the kth likelihood support interval.
mle the maximum profile likelihood estimate.
LI.k the kth likelihood support interval.
LI.norm likelihood support interval (k=6.8) corresponding to a 95% confidence interval
based on a normal approximation.
Author(s)
<NAME> <<EMAIL>>
References
Royall, <NAME>. (1997). Statistical Evidence: A Likelihood Paradigm. Chapman & Hall/CRC.
Pawitan, Yudi (2001). In All Likelihood: Statistical Modelling and Inference Using Likelihood.
Oxford University Press.
See Also
profilelike.plot, profilelike.lm, profilelike.glm, profilelike.polr, profilelike.gls,
profilelike.lme
Examples
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- c(rep(0,10), rep(1,10))
weight <- c(ctl, trt)
dd <- data.frame(group=group, weight=weight)
xx <- profilelike.lm(formula = weight ~ 1, data=dd, profile.theta="group",
lo.theta=-2, hi.theta=1, length=500)
profilelike.plot(theta=xx$theta, profile.lik.norm=xx$profile.lik.norm, round=2)
profilelike.summary(k=8, theta=xx$theta, profile.lik.norm=xx$profile.lik.norm, round=3) |
fklearn | readthedoc | CSS | fklearn 2.3.1 documentation
fklearn[¶](#fklearn)
===
**fklearn** uses functional programming principles to make it easier to solve real problems with Machine Learning.
The name is a reference to the widely known [scikit-learn](https://scikit-learn.org/stable/) library.
**fklearn Principles**
1. Validation should reflect real-life situations.
2. Production models should match validated models.
3. Models should be production-ready with few extra steps.
4. Reproducibility and in-depth analysis of model results should be easy to achieve.
Contents[¶](#contents)
---
### Getting started[¶](#getting-started)
#### Installation[¶](#installation)
The fklearn library is compatible only with Python 3.6.2+.
In order to install it using pip, run:
```
pip install fklearn
```
You can also install it from the source:
```
# clone the repository
git clone -b master https://github.com/nubank/fklearn.git --depth=1
# open the folder
cd fklearn
# install the dependencies
pip install -e .
```
If you are a macOS user, you may need to install some dependencies in order to use LGBM. If you have brew installed,
run the following command from the root dir:
```
brew bundle
```
#### Basics[¶](#basics)
##### Learners[¶](#learners)
While in scikit-learn the main abstraction for a model is a class with the methods `fit` and `transform`,
in fklearn we use what we call a **learner function**. A learner function takes in some training data (plus other parameters),
learns something from it and returns three things: a *prediction function*, the *transformed training data*, and a *log*.
The **prediction function** always has the same signature: it takes in a Pandas dataframe and returns a Pandas dataframe.
It should be able to take in any new dataframe, as long as it contains the required columns, and transform it. The transform in the fklearn library is equivalent to the transform method of scikit-learn.
In the case of a linear regression learner, for example, the prediction function simply creates a new column with the predictions of the linear regression model that was trained.
The **transformed training data** is usually just the prediction function applied to the training data. It is useful when you want predictions on your training set, or for building pipelines, as we’ll see later.
The **log** is a dictionary, and can include any information that is relevant for inspecting or debugging the learner, e.g., what features were used, how many samples there were in the training set, feature importance or coefficients.
Learner functions are usually partially initialized (curried) before being passed to pipelines or applied to data:
```
from fklearn.training.regression import linear_regression_learner
from fklearn.training.transformation import capper, floorer, prediction_ranger

# initialize several learner functions
capper_fn = capper(columns_to_cap=["income"], precomputed_caps={"income": 50000})
regression_fn = linear_regression_learner(features=["income", "bill_amount"], target="spend")
ranger_fn = prediction_ranger(prediction_min=0.0, prediction_max=20000.0)

# apply one individually to some data
p, df, log = regression_fn(training_data)
```
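The three returned values can then be used directly. A minimal sketch, assuming a `new_data` dataframe with the same `income` and `bill_amount` columns as `training_data` (the prediction column name in the comment is an assumption, not part of the example above):
```
# p scores any dataframe with the required columns
scored = p(new_data)   # new_data plus a column with the model's predictions
# df is the training data with the same predictions already appended
# log is a plain dict with metadata about the fit (features used, coefficients, etc.)
print(log)
```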
Available learner functions in fklearn can be found inside the `fklearn.training` module.
##### Pipelines[¶](#pipelines)
Learner functions are usually composed into pipelines that apply them in order to data:
```
from fklearn.training.pipeline import build_pipeline
learner = build_pipeline(capper_fn, regression_fn, ranger_fn)
predict_fn, training_predictions, logs = learner(train_data)
```
Pipelines behave exactly as individual learner functions. They guarantee that all steps are applied consistently to both training and testing/production data.
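For example, the returned `predict_fn` can be applied to hold-out or production data exactly as it was to the training set. A sketch, assuming a `test_data` dataframe with the same columns used for training:
```
# the prediction function encapsulates capping, regression and ranging in order
test_predictions = predict_fn(test_data)
```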
##### Validation[¶](#validation)
Once we have our pipeline defined, we can use fklearn’s validation tools to evaluate the performance of our model in different scenarios and using multiple metrics:
```
from fklearn.validation.evaluators import r2_evaluator, spearman_evaluator, combined_evaluators
from fklearn.validation.validator import validator
from fklearn.validation.splitters import k_fold_splitter, stability_curve_time_splitter
evaluation_fn = combined_evaluators(evaluators=[r2_evaluator(target_column="spend"),
spearman_evaluator(target_column="spend")])
cv_split_fn = k_fold_splitter(n_splits=3, random_state=42)
stability_split_fn = stability_curve_time_splitter(training_time_limit=pd.to_datetime("2018-01-01"),
time_column="timestamp")
cross_validation_results = validator(train_data=train_data,
split_fn=cv_split_fn,
train_fn=learner,
eval_fn=evaluation_fn)
stability_validation_results = validator(train_data=train_data,
split_fn=stability_split_fn,
train_fn=learner,
eval_fn=evaluation_fn)
```
The `validator` function receives some data, the learner function with our model plus the following:
1. A *splitting function*: these can be found inside the `fklearn.validation.splitters` module. They split the data into training and evaluation folds in different ways, simulating situations where training and testing data differ.
2. An *evaluation function*: these can be found inside the `fklearn.validation.evaluators` module. They compute various performance metrics of interest on our model’s predictions. They can be composed by using `combined_evaluators`, for example. A minimal custom evaluator is sketched below.
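Evaluators follow a simple convention: they take a scored dataframe and return a log-like dictionary keyed by the evaluator name. A minimal sketch under that convention (the column names, metric and `eval_name` below are illustrative, not part of fklearn):
```
import pandas as pd

def mae_evaluator(test_data: pd.DataFrame,
                  target_column: str = "spend",
                  prediction_column: str = "prediction",
                  eval_name: str = "mae_evaluator__spend") -> dict:
    # mean absolute error between the observed target and the model predictions
    error = (test_data[target_column] - test_data[prediction_column]).abs().mean()
    return {eval_name: error}
```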
#### Learn More[¶](#learn-more)
* Check this [jupyter notebook](https://github.com/nubank/fklearn/blob/master/docs/source/examples/regression.ipynb) for some additional examples.
* Our [blog post](https://medium.com/building-nubank/introducing-fklearn-nubanks-machine-learning-library-part-i-2a1c781035d0) (Part I) gives an overview of the library and motivation behind it.
### Examples[¶](#examples)
In this section we present practical examples to demonstrate various fklearn features.
#### List of examples[¶](#list-of-examples)
* [Learning Curves](index.html#document-examples/learning_curves)
* [NLP Classification](index.html#document-examples/nlp_classification)
* [Training and Evaluating Simple Regression Model](index.html#document-examples/regression)
* [Causal Inference](index.html#document-examples/causal_inference)
* [Feature transformations](index.html#document-examples/feature_transformation)
* [FKLearn Tutorial:](index.html#document-examples/fklearn_overview)
* [This is the notebook used to generate the dataset used on the FKLearn Tutorial.ipynb](index.html#document-examples/fklearn_overview_dataset_generation)
### fklearn[¶](#fklearn)
#### fklearn package[¶](#fklearn-package)
##### Subpackages[¶](#subpackages)
###### fklearn.causal package[¶](#fklearn-causal-package)
####### Subpackages[¶](#subpackages)
######## fklearn.causal.validation package[¶](#fklearn-causal-validation-package)
######### Submodules[¶](#submodules)
######### fklearn.causal.validation.auc module[¶](#module-fklearn.causal.validation.auc)
`fklearn.causal.validation.auc.``area_under_the_cumulative_effect_curve`[[source]](_modules/fklearn/causal/validation/auc.html#area_under_the_cumulative_effect_curve)[¶](#fklearn.causal.validation.auc.area_under_the_cumulative_effect_curve)
Orders the dataset by prediction and computes the area under the cumulative effect curve, according to that ordering.
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment** (*str*) – The name of the treatment column in df.
* **outcome** (*Strings*) – The name of the outcome column in df.
* **prediction** (*Strings*) – The name of the prediction column in df.
* **min_rows** (*int*) – Minimum number of observations needed to have a valid result.
* **steps** (*Integer*) – The number of cumulative steps to iterate when accumulating the effect
* **effect_fn** (*function* *(**df: pandas.DataFrame**,* *treatment: str**,* *outcome: str**)* *-> int* *or* *Array of int*) – A function that computes the treatment effect given a dataframe, the name of the treatment column and the name of the outcome column.
|
| Returns: | **area_under_the_cumulative_effect_curve** – The area under the cumulative effect curve according to the predictions ordering. |
| Return type: | float |
`fklearn.causal.validation.auc.``area_under_the_cumulative_gain_curve`[[source]](_modules/fklearn/causal/validation/auc.html#area_under_the_cumulative_gain_curve)[¶](#fklearn.causal.validation.auc.area_under_the_cumulative_gain_curve)
Orders the dataset by prediction and computes the area under the cumulative gain curve, according to that ordering.
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment** (*Strings*) – The name of the treatment column in df.
* **outcome** (*Strings*) – The name of the outcome column in df.
* **prediction** (*Strings*) – The name of the prediction column in df.
* **min_rows** (*Integer*) – Minimum number of observations needed to have a valid result.
* **steps** (*Integer*) – The number of cumulative steps to iterate when accumulating the effect
* **effect_fn** (*function* *(**df: pandas.DataFrame**,* *treatment: str**,* *outcome: str**)* *-> int* *or* *Array of int*) – A function that computes the treatment effect given a dataframe, the name of the treatment column and the name of the outcome column.
|
| Returns: | **area_under_the_cumulative_gain_curve** – The area under the cumulative gain curve according to the predictions ordering. |
| Return type: | float |
`fklearn.causal.validation.auc.``area_under_the_relative_cumulative_gain_curve`[[source]](_modules/fklearn/causal/validation/auc.html#area_under_the_relative_cumulative_gain_curve)[¶](#fklearn.causal.validation.auc.area_under_the_relative_cumulative_gain_curve)
Orders the dataset by prediction and computes the area under the relative cumulative gain curve, according to that ordering.
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment** (*Strings*) – The name of the treatment column in df.
* **outcome** (*Strings*) – The name of the outcome column in df.
* **prediction** (*Strings*) – The name of the prediction column in df.
* **min_rows** (*Integer*) – Minimum number of observations needed to have a valid result.
* **steps** (*Integer*) – The number of cumulative steps to iterate when accumulating the effect
* **effect_fn** (*function* *(**df: pandas.DataFrame**,* *treatment: str**,* *outcome: str**)* *-> int* *or* *Array of int*) – A function that computes the treatment effect given a dataframe, the name of the treatment column and the name of the outcome column.
|
| Returns: | **area under the relative cumulative gain curve** – The area under the relative cumulative gain curve according to the predictions ordering. |
| Return type: | float |
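A usage sketch covering these three functions, assuming a scored dataframe `df` with hypothetical "treatment", "converted" and "prediction" columns (the `min_rows` and `steps` values are illustrative):
```
from fklearn.causal.effects import linear_effect
from fklearn.causal.validation.auc import (area_under_the_cumulative_effect_curve,
                                            area_under_the_cumulative_gain_curve,
                                            area_under_the_relative_cumulative_gain_curve)

# shared arguments; df is an assumed, already scored dataframe
common = dict(treatment="treatment", outcome="converted", prediction="prediction",
              min_rows=30, steps=100, effect_fn=linear_effect)
auc_effect = area_under_the_cumulative_effect_curve(df, **common)
auc_gain = area_under_the_cumulative_gain_curve(df, **common)
auc_rel_gain = area_under_the_relative_cumulative_gain_curve(df, **common)
```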
######### fklearn.causal.validation.cate module[¶](#module-fklearn.causal.validation.cate)
`fklearn.causal.validation.cate.``cate_mean_by_bin`(*test_data: pandas.core.frame.DataFrame*, *group_column: str*, *control_group_name: str*, *bin_column: str*, *n_bins: int*, *allow_dropped_bins: bool*, *prediction_column: str*, *target_column: str*) → pandas.core.frame.DataFrame[[source]](_modules/fklearn/causal/validation/cate.html#cate_mean_by_bin)[¶](#fklearn.causal.validation.cate.cate_mean_by_bin)
Computes a dataframe with predicted and actual CATEs by bins of a given column.
This is primarily an auxiliary function, but can be used to visualize the CATEs.
| Parameters: | * **test_data** (*DataFrame*) – A Pandas’ DataFrame with group_column as a column.
* **group_column** (*str*) – The name of the column that tells whether rows belong to the test or control group.
* **control_group_name** (*str*) – The name of the control group.
* **bin_column** (*str*) – The name of the column from which the quantiles will be created.
* **n_bins** (*int*) – The number of bins to be created.
* **allow_dropped_bins** (*bool*) – Whether to allow the function to drop duplicated quantiles.
* **prediction_column** (*str*) – The name of the column containing the predictions from the model being evaluated.
* **target_column** (*str*) – The name of the column containing the actual outcomes of the treatment.
|
| Returns: | **gb** – The grouped dataframe with actual and predicted CATEs by bin. |
| Return type: | DataFrame |
`fklearn.causal.validation.cate.``cate_mean_by_bin_meta_evaluator`[[source]](_modules/fklearn/causal/validation/cate.html#cate_mean_by_bin_meta_evaluator)[¶](#fklearn.causal.validation.cate.cate_mean_by_bin_meta_evaluator)
Evaluates the predictions of a causal model that outputs treatment outcomes w.r.t. its capabilities to predict the CATE.
Due to the fundamental lack of counterfactual data, the CATEs are computed for bins of a given column. This function then applies a fklearn-like evaluator on top of the aggregated dataframe.
| Parameters: | * **test_data** (*DataFrame*) – A Pandas’ DataFrame with group_column as a column.
* **group_column** (*str*) – The name of the column that tells whether rows belong to the test or control group.
* **control_group_name** (*str*) – The name of the control group.
* **bin_column** (*str*) – The name of the column from which the quantiles will be created.
* **n_bins** (*int*) – The number of bins to be created.
* **allow_dropped_bins** (*bool**,* *optional* *(**default=False**)*) – Whether to allow the function to drop duplicated quantiles.
* **inner_evaluator** (*UncurriedEvalFnType**,* *optional* *(**default=r2_evaluator**)*) – An instance of a fklearn-like evaluator, which will be applied to the aggregated dataframe of predicted and actual CATEs by bin.
* **eval_name** (*str**,* *optional* *(**default=None**)*) – The name of the evaluator as it will appear in the logs.
* **prediction_column** (*str**,* *optional* *(**default=None**)*) – The name of the column containing the predictions from the model being evaluated.
* **target_column** (*str**,* *optional* *(**default=None**)*) – The name of the column containing the actual outcomes of the treatment.
|
| Returns: | **log** – A log-like dictionary with the evaluation by inner_evaluator |
| Return type: | dict |
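A usage sketch, assuming an A/B-test dataframe `ab_test_df` with a "group" column (values "control"/"treatment"), model scores in "prediction" and observed outcomes in "converted" (all names and the bin count are illustrative):
```
from fklearn.causal.validation.cate import cate_mean_by_bin_meta_evaluator

cate_log = cate_mean_by_bin_meta_evaluator(test_data=ab_test_df,
                                           group_column="group",
                                           control_group_name="control",
                                           bin_column="prediction",
                                           n_bins=10,
                                           allow_dropped_bins=True,
                                           prediction_column="prediction",
                                           target_column="converted")
```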
######### fklearn.causal.validation.curves module[¶](#module-fklearn.causal.validation.curves)
`fklearn.causal.validation.curves.``cumulative_effect_curve`[[source]](_modules/fklearn/causal/validation/curves.html#cumulative_effect_curve)[¶](#fklearn.causal.validation.curves.cumulative_effect_curve)
Orders the dataset by prediction and computes the cumulative effect curve according to that ordering
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment** (*Strings*) – The name of the treatment column in df.
* **outcome** (*Strings*) – The name of the outcome column in df.
* **prediction** (*Strings*) – The name of the prediction column in df.
* **min_rows** (*Integer*) – Minimum number of observations needed to have a valid result.
* **steps** (*Integer*) – The number of cumulative steps to iterate when accumulating the effect
* **effect_fn** (*function* *(**df: pandas.DataFrame**,* *treatment: str**,* *outcome: str**)* *-> int* *or* *Array of int*) – A function that computes the treatment effect given a dataframe, the name of the treatment column and the name of the outcome column.
|
| Returns: | **cumulative effect curve** – The cumulative treatment effect according to the predictions ordering. |
| Return type: | Numpy’s Array |
`fklearn.causal.validation.curves.``cumulative_gain_curve`[[source]](_modules/fklearn/causal/validation/curves.html#cumulative_gain_curve)[¶](#fklearn.causal.validation.curves.cumulative_gain_curve)
Orders the dataset by prediction and computes the cumulative gain (effect * proportional sample size) curve according to that ordering.
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment** (*Strings*) – The name of the treatment column in df.
* **outcome** (*Strings*) – The name of the outcome column in df.
* **prediction** (*Strings*) – The name of the prediction column in df.
* **min_rows** (*Integer*) – Minimum number of observations needed to have a valid result.
* **steps** (*Integer*) – The number of cumulative steps to iterate when accumulating the effect
* **effect_fn** (*function* *(**df: pandas.DataFrame**,* *treatment: str**,* *outcome: str**)* *-> int* *or* *Array of int*) – A function that computes the treatment effect given a dataframe, the name of the treatment column and the name of the outcome column.
|
| Returns: | **cumulative gain curve** – The cumulative gain according to the predictions ordering. |
| Return type: | Numpy’s Array |
`fklearn.causal.validation.curves.``effect_by_segment`[[source]](_modules/fklearn/causal/validation/curves.html#effect_by_segment)[¶](#fklearn.causal.validation.curves.effect_by_segment)
Segments the dataset by a prediction’s quantile and estimates the treatment effect by segment.
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment** (*Strings*) – The name of the treatment column in df.
* **outcome** (*Strings*) – The name of the outcome column in df.
* **prediction** (*Strings*) – The name of the prediction column in df.
* **segments** (*Integer*) – The number of the segments to create. Uses Pandas’ qcut under the hood.
* **effect_fn** (*function* *(**df: pandas.DataFrame**,* *treatment: str**,* *outcome: str**)* *-> int* *or* *Array of int*) – A function that computes the treatment effect given a dataframe, the name of the treatment column and the name of the outcome column.
|
| Returns: | **effect by band** – The effect stored in a Pandas’ Series where the indexes are the segments |
| Return type: | Pandas’ Series |
`fklearn.causal.validation.curves.``effect_curves`[[source]](_modules/fklearn/causal/validation/curves.html#effect_curves)[¶](#fklearn.causal.validation.curves.effect_curves)
Creates a dataset summarizing the effect curves: cumulative effect, cumulative gain and relative cumulative gain. The dataset also contains two columns referencing the data used to compute the curves at each step: number of samples and fraction of samples used.
Moreover, one column indicating the cumulative gain for a corresponding random model is also included as a benchmark.
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment** (*Strings*) – The name of the treatment column in df.
* **outcome** (*Strings*) – The name of the outcome column in df.
* **prediction** (*Strings*) – The name of the prediction column in df.
* **min_rows** (*Integer*) – Minimum number of observations needed to have a valid result.
* **steps** (*Integer*) – The number of cumulative steps to iterate when accumulating the effect
* **effect_fn** (*function* *(**df: pandas.DataFrame**,* *treatment: str**,* *outcome: str**)* *-> int* *or* *Array of int*) – A function that computes the treatment effect given a dataframe, the name of the treatment column and the name of the outcome column.
|
| Returns: | **summary curves dataset** – The dataset with the results for multiple validation causal curves according to the predictions ordering. |
| Return type: | pd.DataFrame |
`fklearn.causal.validation.curves.``relative_cumulative_gain_curve`[[source]](_modules/fklearn/causal/validation/curves.html#relative_cumulative_gain_curve)[¶](#fklearn.causal.validation.curves.relative_cumulative_gain_curve)
Orders the dataset by prediction and computes the relative cumulative gain curve according to that ordering.
The relative gain is simply the cumulative effect minus the Average Treatment Effect (ATE) times the relative sample size.
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment** (*Strings*) – The name of the treatment column in df.
* **outcome** (*Strings*) – The name of the outcome column in df.
* **prediction** (*Strings*) – The name of the prediction column in df.
* **min_rows** (*Integer*) – Minimum number of observations needed to have a valid result.
* **steps** (*Integer*) – The number of cumulative steps to iterate when accumulating the effect
* **effect_fn** (*function* *(**df: pandas.DataFrame**,* *treatment: str**,* *outcome: str**)* *-> int* *or* *Array of int*) – A function that computes the treatment effect given a dataframe, the name of the treatment column and the name of the outcome column.
|
| Returns: | **relative cumulative gain curve** – The relative cumulative gain according to the predictions ordering. |
| Return type: | Numpy’s Array |
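A usage sketch for the curve functions above, with a hypothetical scored dataframe `df`, illustrative column names and `linear_effect` as the effect function:
```
from fklearn.causal.effects import linear_effect
from fklearn.causal.validation.curves import cumulative_effect_curve, effect_curves

# df and column names are illustrative; both calls share the same arguments
effect_points = cumulative_effect_curve(df, treatment="treatment", outcome="converted",
                                        prediction="prediction", min_rows=30, steps=100,
                                        effect_fn=linear_effect)
curves_summary = effect_curves(df, treatment="treatment", outcome="converted",
                               prediction="prediction", min_rows=30, steps=100,
                               effect_fn=linear_effect)
```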
######### Module contents[¶](#module-fklearn.causal.validation)
####### Submodules[¶](#submodules)
####### fklearn.causal.debias module[¶](#module-fklearn.causal.debias)
`fklearn.causal.debias.``debias_with_double_ml`[[source]](_modules/fklearn/causal/debias.html#debias_with_double_ml)[¶](#fklearn.causal.debias.debias_with_double_ml)
Frisch-Waugh-Lovell style debiasing with ML model.
To debias, we
> 1. fit a regression ML model to predict the treatment from the confounders and take the out-of-fold residuals from this fit (debias step);
> 2. fit a regression ML model to predict the outcome from the confounders and take the out-of-fold residuals from this fit (denoise step).
We then add back the average outcome and treatment so that their levels remain unchanged.
Returns a dataframe with the debiased columns with suffix appended to the name
| Parameters: | * **df** (*Pandas DataFrame*) – A Pandas’ DataFrame with treatment, outcome and confounder columns
* **treatment_column** (*str*) – The name of the column in df with the treatment.
* **outcome_column** (*str*) – The name of the column in df with the outcome.
* **confounder_columns** (*list of str*) – A list of confounders present in df
* **ml_regressor** (*Sklearn's RegressorMixin*) – A regressor model that implements a fit and a predict method
* **extra_params** (*dict*) – The hyper-parameters for the model
* **cv** (*int*) – The number of folds to cross predict
* **suffix** (*str*) – A suffix to append to the returning debiased column names.
* **denoise** (*bool* *(**Default=True**)*) – If it should denoise the outcome using the confounders or not
* **seed** (*int*) – A seed for consistency in random computation
|
| Returns: | **debiased_df** – The original df dataframe with debiased columns added. |
| Return type: | Pandas DataFrame |
`fklearn.causal.debias.``debias_with_fixed_effects`[[source]](_modules/fklearn/causal/debias.html#debias_with_fixed_effects)[¶](#fklearn.causal.debias.debias_with_fixed_effects)
Returns a dataframe with the debiased columns with suffix appended to the name
This is equivalent to debiasing with regression where the formula is “C(x1) + C(x2) + …”.
However, it is much more efficient than running such a dummy-variable regression.
| Parameters: | * **df** (*Pandas DataFrame*) – A Pandas’ DataFrame with treatment, outcome and confounder columns
* **treatment_column** (*str*) – The name of the column in df with the treatment.
* **outcome_column** (*str*) – The name of the column in df with the outcome.
* **confounder_columns** (*list of str*) – Confounders are categorical groups we wish to explain away. Some examples are units (ex: customers),
and time (day, months…). We perform a group by on these columns, so they should not be continuous variables.
* **suffix** (*str*) – A suffix to append to the returning debiased column names.
* **denoise** (*bool* *(**Default=True**)*) – If it should denoise the outcome using the confounders or not
|
| Returns: | **debiased_df** – The original df dataframe with debiased columns added. |
| Return type: | Pandas DataFrame |
`fklearn.causal.debias.``debias_with_regression`[[source]](_modules/fklearn/causal/debias.html#debias_with_regression)[¶](#fklearn.causal.debias.debias_with_regression)
Frisch-Waugh-Lovell style debiasing with linear regression.
To debias, we
> 1) fit a linear model to predict the treatment from the confounders and take the residuals from this fit
> (debias step)
> 2) fit a linear model to predict the outcome from the confounders and take the residuals from this fit
> (denoise step).
We then add back the average outcome and treatment so that their levels remain unchanged.
Returns a dataframe with the debiased columns with suffix appended to the name
| Parameters: | * **df** (*Pandas DataFrame*) – A Pandas’ DataFrame with treatment, outcome and confounder columns
* **treatment_column** (*str*) – The name of the column in df with the treatment.
* **outcome_column** (*str*) – The name of the column in df with the outcome.
* **confounder_columns** (*list of str*) – A list of confounders present in df
* **suffix** (*str*) – A suffix to append to the returning debiased column names.
* **denoise** (*bool* *(**Default=True**)*) – If it should denoise the outcome using the confounders or not
|
| Returns: | **debiased_df** – The original df dataframe with debiased columns added. |
| Return type: | Pandas DataFrame |
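A usage sketch, assuming a dataframe `df` with hypothetical "price" (treatment), "sales" (outcome) and confounder columns:
```
from fklearn.causal.debias import debias_with_regression

debiased_df = debias_with_regression(df,
                                     treatment_column="price",
                                     outcome_column="sales",
                                     confounder_columns=["region", "weekday"],
                                     suffix="_debiased")
# the effect can then be estimated on the debiased columns,
# e.g. "price_debiased" vs "sales_debiased"
```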
`fklearn.causal.debias.``debias_with_regression_formula`[[source]](_modules/fklearn/causal/debias.html#debias_with_regression_formula)[¶](#fklearn.causal.debias.debias_with_regression_formula)
Frisch-Waugh-Lovell style debiasing with linear regression. With R formula to define confounders.
To debias, we
> 1) fit a linear model to predict the treatment from the confounders and take the residuals from this fit
> (debias step)
> 2) fit a linear model to predict the outcome from the confounders and take the residuals from this fit
> (denoise step).
We then add back the average outcome and treatment so that their levels remain unchanged.
Returns a dataframe with the debiased columns with suffix appended to the name
| Parameters: | * **df** (*Pandas DataFrame*) – A Pandas’ DataFrame with treatment, outcome and confounder columns
* **treatment_column** (*str*) – The name of the column in df with the treatment.
* **outcome_column** (*str*) – The name of the column in df with the outcome.
* **confounder_formula** (*str*) – An R formula modeling the confounders. Check <https://www.statsmodels.org/dev/example_formulas.html> for examples.
* **suffix** (*str*) – A suffix to append to the returning debiased column names.
* **denoise** (*bool* *(**Default=True**)*) – If it should denoise the outcome using the confounders or not
|
| Returns: | **debiased_df** – The original df dataframe with debiased columns added. |
| Return type: | Pandas DataFrame |
####### fklearn.causal.effects module[¶](#module-fklearn.causal.effects)
`fklearn.causal.effects.``exponential_coefficient_effect`[[source]](_modules/fklearn/causal/effects.html#exponential_coefficient_effect)[¶](#fklearn.causal.effects.exponential_coefficient_effect)
Computes the exponential coefficient between the treatment and the outcome. Finds a1 in the following equation outcome = exp(a0 + a1 treatment) + error
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment_column** (*str*) – The name of the treatment column in df.
* **outcome_column** (*str*) – The name of the outcome column in df.
|
| Returns: | **effect** – The exponential coefficient between the treatment and the outcome |
| Return type: | float |
`fklearn.causal.effects.``linear_effect`[[source]](_modules/fklearn/causal/effects.html#linear_effect)[¶](#fklearn.causal.effects.linear_effect)
Computes the linear coefficient from regressing the outcome on the treatment: cov(outcome, treatment)/var(treatment).
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment_column** (*str*) – The name of the treatment column in df.
* **outcome_column** (*str*) – The name of the outcome column in df.
|
| Returns: | **effect** – The linear coefficient from regressing the outcome on the treatment: cov(outcome, treatment)/var(treatment) |
| Return type: | float |
`fklearn.causal.effects.``logistic_coefficient_effect`[[source]](_modules/fklearn/causal/effects.html#logistic_coefficient_effect)[¶](#fklearn.causal.effects.logistic_coefficient_effect)
Computes the logistic coefficient between the treatment and the outcome. Finds a1 in the following equation outcome = logistic(a0 + a1 treatment)
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment_column** (*str*) – The name of the treatment column in df.
* **outcome_column** (*str*) – The name of the outcome column in df.
|
| Returns: | **effect** – The logistic coefficient between the treatment and the outcome |
| Return type: | float |
`fklearn.causal.effects.``pearson_effect`[[source]](_modules/fklearn/causal/effects.html#pearson_effect)[¶](#fklearn.causal.effects.pearson_effect)
Computes the Pearson correlation between the treatment and the outcome
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment_column** (*str*) – The name of the treatment column in df.
* **outcome_column** (*str*) – The name of the outcome column in df.
|
| Returns: | **effect** – The Pearson correlation between the treatment and the outcome |
| Return type: | float |
`fklearn.causal.effects.``spearman_effect`[[source]](_modules/fklearn/causal/effects.html#spearman_effect)[¶](#fklearn.causal.effects.spearman_effect)
Computes the Spearman correlation between the treatment and the outcome
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **treatment_column** (*str*) – The name of the treatment column in df.
* **outcome_column** (*str*) – The name of the outcome column in df.
|
| Returns: | **effect** – The Spearman correlation between the treatment and the outcome |
| Return type: | float |
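Since the documented formulas are simple, they can be reproduced by hand on a toy dataframe to sanity-check what each effect function measures (the toy data below is illustrative):
```
import pandas as pd

toy = pd.DataFrame({"treatment": [0, 0, 1, 1, 1],
                    "outcome": [10.0, 9.0, 14.0, 15.0, 13.0]})
# linear effect: cov(outcome, treatment) / var(treatment)
linear = toy["outcome"].cov(toy["treatment"]) / toy["treatment"].var()
# pearson / spearman effects: plain correlations between treatment and outcome
pearson = toy["outcome"].corr(toy["treatment"], method="pearson")
spearman = toy["outcome"].corr(toy["treatment"], method="spearman")
```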
####### Module contents[¶](#module-fklearn.causal)
###### fklearn.data package[¶](#fklearn-data-package)
####### Submodules[¶](#submodules)
####### fklearn.data.datasets module[¶](#module-fklearn.data.datasets)
`fklearn.data.datasets.``make_confounded_data`(*n: int*) → Tuple[pandas.core.frame.DataFrame, pandas.core.frame.DataFrame, pandas.core.frame.DataFrame][[source]](_modules/fklearn/data/datasets.html#make_confounded_data)[¶](#fklearn.data.datasets.make_confounded_data)
Generates fake data for counterfactual experimentation. The covariates are sex, age and severity; the treatment is a binary variable (medication); and the response is days until recovery.
| Parameters: | **n** (*int*) – The number of samples to generate |
| Returns: | * **df_rnd** (*pd.DataFrame*) – A dataframe where the treatment is randomly assigned.
* **df_obs** (*pd.DataFrame*) – A dataframe with confounding.
* **df_df** (*pd.DataFrame*) – A counter factual dataframe with confounding. Same as df_obs, but with the treatment flipped.
|
`fklearn.data.datasets.``make_tutorial_data`(*n: int*) → pandas.core.frame.DataFrame[[source]](_modules/fklearn/data/datasets.html#make_tutorial_data)[¶](#fklearn.data.datasets.make_tutorial_data)
Generates fake data for a tutorial. There are 3 numerical features (“num1”, “num3” and “num3”)
and two categorical features (“cat1” and “cat2”).
| Parameters: | **n** (*int*) – The number of samples to generate |
| Returns: | **df** – A tutorial dataset |
| Return type: | pd.DataFrame |
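A usage sketch for the two generators (the sample sizes are arbitrary):
```
from fklearn.data.datasets import make_confounded_data, make_tutorial_data

df_rnd, df_obs, df_cf = make_confounded_data(n=1000)  # random, observational, counterfactual
tutorial_df = make_tutorial_data(n=500)
```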
####### Module contents[¶](#module-fklearn.data)
###### fklearn.metrics package[¶](#fklearn-metrics-package)
####### Submodules[¶](#submodules)
####### fklearn.metrics.pd_extractors module[¶](#module-fklearn.metrics.pd_extractors)
`fklearn.metrics.pd_extractors.``combined_evaluator_extractor`[[source]](_modules/fklearn/metrics/pd_extractors.html#combined_evaluator_extractor)[¶](#fklearn.metrics.pd_extractors.combined_evaluator_extractor)
`fklearn.metrics.pd_extractors.``evaluator_extractor`[[source]](_modules/fklearn/metrics/pd_extractors.html#evaluator_extractor)[¶](#fklearn.metrics.pd_extractors.evaluator_extractor)
`fklearn.metrics.pd_extractors.``extract`[[source]](_modules/fklearn/metrics/pd_extractors.html#extract)[¶](#fklearn.metrics.pd_extractors.extract)
`fklearn.metrics.pd_extractors.``extract_base_iteration`[[source]](_modules/fklearn/metrics/pd_extractors.html#extract_base_iteration)[¶](#fklearn.metrics.pd_extractors.extract_base_iteration)
`fklearn.metrics.pd_extractors.``extract_lc`[[source]](_modules/fklearn/metrics/pd_extractors.html#extract_lc)[¶](#fklearn.metrics.pd_extractors.extract_lc)
`fklearn.metrics.pd_extractors.``extract_param_tuning_iteration`[[source]](_modules/fklearn/metrics/pd_extractors.html#extract_param_tuning_iteration)[¶](#fklearn.metrics.pd_extractors.extract_param_tuning_iteration)
`fklearn.metrics.pd_extractors.``extract_reverse_lc`[[source]](_modules/fklearn/metrics/pd_extractors.html#extract_reverse_lc)[¶](#fklearn.metrics.pd_extractors.extract_reverse_lc)
`fklearn.metrics.pd_extractors.``extract_sc`[[source]](_modules/fklearn/metrics/pd_extractors.html#extract_sc)[¶](#fklearn.metrics.pd_extractors.extract_sc)
`fklearn.metrics.pd_extractors.``extract_tuning`[[source]](_modules/fklearn/metrics/pd_extractors.html#extract_tuning)[¶](#fklearn.metrics.pd_extractors.extract_tuning)
`fklearn.metrics.pd_extractors.``learning_curve_evaluator_extractor`[[source]](_modules/fklearn/metrics/pd_extractors.html#learning_curve_evaluator_extractor)[¶](#fklearn.metrics.pd_extractors.learning_curve_evaluator_extractor)
`fklearn.metrics.pd_extractors.``permutation_extractor`[[source]](_modules/fklearn/metrics/pd_extractors.html#permutation_extractor)[¶](#fklearn.metrics.pd_extractors.permutation_extractor)
`fklearn.metrics.pd_extractors.``repeat_split_log`[[source]](_modules/fklearn/metrics/pd_extractors.html#repeat_split_log)[¶](#fklearn.metrics.pd_extractors.repeat_split_log)
`fklearn.metrics.pd_extractors.``reverse_learning_curve_evaluator_extractor`[[source]](_modules/fklearn/metrics/pd_extractors.html#reverse_learning_curve_evaluator_extractor)[¶](#fklearn.metrics.pd_extractors.reverse_learning_curve_evaluator_extractor)
`fklearn.metrics.pd_extractors.``split_evaluator_extractor`[[source]](_modules/fklearn/metrics/pd_extractors.html#split_evaluator_extractor)[¶](#fklearn.metrics.pd_extractors.split_evaluator_extractor)
`fklearn.metrics.pd_extractors.``split_evaluator_extractor_iteration`[[source]](_modules/fklearn/metrics/pd_extractors.html#split_evaluator_extractor_iteration)[¶](#fklearn.metrics.pd_extractors.split_evaluator_extractor_iteration)
`fklearn.metrics.pd_extractors.``stability_curve_evaluator_extractor`[[source]](_modules/fklearn/metrics/pd_extractors.html#stability_curve_evaluator_extractor)[¶](#fklearn.metrics.pd_extractors.stability_curve_evaluator_extractor)
`fklearn.metrics.pd_extractors.``temporal_split_evaluator_extractor`[[source]](_modules/fklearn/metrics/pd_extractors.html#temporal_split_evaluator_extractor)[¶](#fklearn.metrics.pd_extractors.temporal_split_evaluator_extractor)
####### Module contents[¶](#module-fklearn.metrics)
###### fklearn.preprocessing package[¶](#fklearn-preprocessing-package)
####### Submodules[¶](#submodules)
####### fklearn.preprocessing.rebalancing module[¶](#module-fklearn.preprocessing.rebalancing)
`fklearn.preprocessing.rebalancing.``rebalance_by_categorical`[[source]](_modules/fklearn/preprocessing/rebalancing.html#rebalance_by_categorical)[¶](#fklearn.preprocessing.rebalancing.rebalance_by_categorical)
Resample dataset so that the result contains the same number of lines per category in categ_column.
| Parameters: | * **dataset** (*pandas.DataFrame*) – A Pandas’ DataFrame with a categ_column column
* **categ_column** (*str*) – The name of the categorical column
* **max_lines_by_categ** (*int* *(**default None**)*) – The maximum number of lines by category. If None it will be set to the number of lines for the smallest category
* **seed** (*int* *(**default 1**)*) – Random state for consistency.
|
| Returns: | **rebalanced_dataset** – A dataset with fewer lines than dataset, but with the same number of lines per category in categ_column |
| Return type: | pandas.DataFrame |
`fklearn.preprocessing.rebalancing.``rebalance_by_continuous`[[source]](_modules/fklearn/preprocessing/rebalancing.html#rebalance_by_continuous)[¶](#fklearn.preprocessing.rebalancing.rebalance_by_continuous)
Resample dataset so that the result contains the same number of lines per bucket in a continuous column.
| Parameters: | * **dataset** (*pandas.DataFrame*) – A Pandas’ DataFrame with a continuous_column column
* **continuous_column** (*str*) – The name of the continuous column
* **buckets** (*int*) – The number of buckets to split the continuous column into
* **max_lines_by_categ** (*int* *(**default None**)*) – The maximum number of lines by category. If None it will be set to the number of lines for the smallest category
* **by_quantile** (*bool* *(**default False**)*) – If True, uses pd.qcut instead of pd.cut to get the buckets from the continuous column
* **seed** (*int* *(**default 1**)*) – Random state for consistency.
|
| Returns: | **rebalanced_dataset** – A dataset with fewer lines than dataset, but with the same number of lines per bucket of continuous_column |
| Return type: | pandas.DataFrame |
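A usage sketch for both rebalancers, assuming a `dataset` dataframe with hypothetical "state" and "income" columns (all parameter values are illustrative):
```
from fklearn.preprocessing.rebalancing import rebalance_by_categorical, rebalance_by_continuous

balanced_cat = rebalance_by_categorical(dataset, categ_column="state", seed=1)
balanced_num = rebalance_by_continuous(dataset, continuous_column="income",
                                       buckets=10, by_quantile=True, seed=1)
```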
####### fklearn.preprocessing.schema module[¶](#module-fklearn.preprocessing.schema)
`fklearn.preprocessing.schema.``column_duplicatable`(*columns_to_bind: str*) → Callable[[source]](_modules/fklearn/preprocessing/schema.html#column_duplicatable)[¶](#fklearn.preprocessing.schema.column_duplicatable)
Decorator to prepend the feature_duplicator learner.
Identifies the columns to be duplicated and applies duplicator.
| Parameters: | **columns_to_bind** (*str*) – Sets feature_duplicator’s “columns_to_duplicate” parameter equal to the columns_to_bind parameter from the decorated learner |
`fklearn.preprocessing.schema.``feature_duplicator`[[source]](_modules/fklearn/preprocessing/schema.html#feature_duplicator)[¶](#fklearn.preprocessing.schema.feature_duplicator)
Duplicates some columns in the dataframe.
When encoding features, a good practice is to save the encoded version in a different column rather than replacing the original values. The purpose of this function is to duplicate the column to be encoded, to be later replaced by the encoded values.
The duplication method is used to preserve the original behaviour (replace).
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with columns_to_duplicate columns
* **columns_to_duplicate** (*list of str*) – List of columns names
* **columns_mapping** (*dict* *(**default None**)*) – Mapping of source columns to destination columns
* **prefix** (*str* *(**default None**)*) – Prefix to add to the columns to duplicate
* **suffix** (*str* *(**default None**)*) – Suffix to add to the columns to duplicate
|
| Returns: | **increased_dataset** – A dataset with repeated columns |
| Return type: | pandas.DataFrame |
####### fklearn.preprocessing.splitting module[¶](#module-fklearn.preprocessing.splitting)
`fklearn.preprocessing.splitting.``space_time_split_dataset`[[source]](_modules/fklearn/preprocessing/splitting.html#space_time_split_dataset)[¶](#fklearn.preprocessing.splitting.space_time_split_dataset)
Splits panel data using both ID and Time columns, resulting in four datasets
1. A training set;
2. An in-training-time, but out-of-sample-ID hold out dataset;
3. An out-of-training-time, but in-sample-ID hold out dataset;
4. An out-of-training-time and out-of-sample-ID hold out dataset.
| Parameters: | * **dataset** (*pandas.DataFrame*) – A Pandas’ DataFrame with an Identifier Column and a Date Column.
The model will be trained to predict the target column from the features.
* **train_start_date** (*str*) – A date string representing the starting time of the training data.
It should be in the same format as the Date Column in dataset.
* **train_end_date** (*str*) – A date string representing the ending time of the training data.
This will also be used as the start date of the holdout period if no holdout_start_date is given.
It should be in the same format as the Date Column in dataset.
* **holdout_end_date** (*str*) – A date string representing the ending time of the holdout data.
It should be in the same format as the Date Column in dataset.
* **split_seed** (*int*) – A seed used by the random number generator.
* **space_holdout_percentage** (*float*) – The out of id holdout size as a proportion of the in id training size.
* **space_column** (*str*) – The name of the Identifier column of dataset.
* **time_column** (*str*) – The name of the Date column of dataset.
* **holdout_space** (*np.array*) – An array containing the hold out IDs. If not specified,
A random subset of IDs will be selected for holdout.
* **holdout_start_date** (*str*) – A date string representing the starting time of the holdout data.
If None is given it will be equal to train_end_date.
It should be in the same format as the Date Column in dataset.
|
| Returns: | * **train_set** (*pandas.DataFrame*) – The in ID sample and in time training set.
* **intime_outspace_hdout** (*pandas.DataFrame*) – The out of ID sample and in time hold out set.
* **outime_inspace_hdout** (*pandas.DataFrame*) – The in ID sample and out of time hold out set.
* **outime_outspace_hdout** (*pandas.DataFrame*) – The out of ID sample and out of time hold out set.
|
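A usage sketch, assuming a panel dataframe `df` with hypothetical "customer_id" and "date" columns (the dates, seed and holdout percentage are illustrative):
```
from fklearn.preprocessing.splitting import space_time_split_dataset

train_set, intime_outspace, outtime_inspace, outtime_outspace = space_time_split_dataset(
    dataset=df,                          # assumed panel dataframe
    train_start_date="2021-01-01",
    train_end_date="2021-06-01",
    holdout_end_date="2021-09-01",
    split_seed=42,
    space_holdout_percentage=0.2,
    space_column="customer_id",
    time_column="date")
```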
`fklearn.preprocessing.splitting.``stratified_split_dataset`[[source]](_modules/fklearn/preprocessing/splitting.html#stratified_split_dataset)[¶](#fklearn.preprocessing.splitting.stratified_split_dataset)
Splits data into a training and testing datasets such that they maintain the same class ratio of the original dataset.
| Parameters: | * **dataset** (*pandas.DataFrame*) – A Pandas’ DataFrame with the target column.
The model will be trained to predict the target column from the features.
* **target_column** (*str*) – The name of the target column of dataset.
* **test_size** (*float*) – Represents the proportion of the dataset to include in the test split.
It should be between 0.0 and 1.0.
* **random_state** (*int* *or* *None**,* *optional* *(**default=None**)*) – If int, random_state is the seed used by the random number generator;
If None, the random number generator is the RandomState instance used by np.random.
|
| Returns: | * **train_set** (*pandas.DataFrame*) – The train dataset sampled from the full dataset.
* **test_set** (*pandas.DataFrame*) – The test dataset sampled from the full dataset.
|
`fklearn.preprocessing.splitting.``time_split_dataset`[[source]](_modules/fklearn/preprocessing/splitting.html#time_split_dataset)[¶](#fklearn.preprocessing.splitting.time_split_dataset)
Splits temporal data into training and testing datasets such that all training data comes before the testing data.
| Parameters: | * **dataset** (*pandas.DataFrame*) – A Pandas’ DataFrame with an Identifier Column and a Date Column.
The model will be trained to predict the target column from the features.
* **train_start_date** (*str*) – A date string representing the starting time of the training data.
It should be in the same format as the Date Column in dataset.
* **train_end_date** (*str*) – A date string representing the ending time of the training data.
This will also be used as the start date of the holdout period if no holdout_start_date is given.
It should be in the same format as the Date Column in dataset.
* **holdout_end_date** (*str*) – A date string representing the ending time of the holdout data.
It should be in the same format as the Date Column in dataset.
* **time_column** (*str*) – The name of the Date column of dataset.
* **holdout_start_date** (*str*) – A date string representing the starting time of the holdout data.
If None is given it will be equal to train_end_date.
It should be in the same format as the Date Column in dataset.
|
| Returns: | * **train_set** (*pandas.DataFrame*) – The in-time training set.
* **test_set** (*pandas.DataFrame*) – The out-of-time test set.
|
####### Module contents[¶](#module-fklearn.preprocessing)
###### fklearn.training package[¶](#fklearn-training-package)
####### Submodules[¶](#submodules)
####### fklearn.training.calibration module[¶](#module-fklearn.training.calibration)
`fklearn.training.calibration.``find_thresholds_with_same_risk`[[source]](_modules/fklearn/training/calibration.html#find_thresholds_with_same_risk)[¶](#fklearn.training.calibration.find_thresholds_with_same_risk)
Calculate fair calibration, where for each band any sensitive factor group has the same target mean.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **sensitive_factor** (*str*) – Column where we have the different group classifications that we want to have the same target mean
* **unfair_band_column** (*str*) – Column with the original bands
* **model_prediction_output** (*str*) – Risk model’s output
* **target_column** (*str*) – The name of the column in df that should be used as target for the model.
This column should be binary, since this is a classification model.
* **output_column_name** (*str*) – The name of the column with the fair bins.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the find_thresholds_with_same_risk model.
|
`fklearn.training.calibration.``isotonic_calibration_learner`[[source]](_modules/fklearn/training/calibration.html#isotonic_calibration_learner)[¶](#fklearn.training.calibration.isotonic_calibration_learner)
Fits a single feature isotonic regression to the dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **target_column** (*str*) – The name of the column in df that should be used as target for the model.
This column should be binary, since this is a classification model.
* **prediction_column** (*str*) – The name of the column with the uncalibrated predictions from the model.
* **output_column** (*str*) – The name of the column with the calibrated predictions from the model.
* **y_min** (*float*) – Lower bound of Isotonic Regression
* **y_max** (*float*) – Upper bound of Isotonic Regression
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Isotonic Calibration model.
|
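A usage sketch, assuming a dataframe `scored_df` that already contains uncalibrated predictions and a binary target (the column names are illustrative):
```
from fklearn.training.calibration import isotonic_calibration_learner

p, calibrated_df, log = isotonic_calibration_learner(scored_df,
                                                     target_column="converted",
                                                     prediction_column="prediction",
                                                     output_column="calibrated_prediction")
```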
####### fklearn.training.classification module[¶](#module-fklearn.training.classification)
`fklearn.training.classification.``catboost_classification_learner`[[source]](_modules/fklearn/training/classification.html#catboost_classification_learner)[¶](#fklearn.training.classification.catboost_classification_learner)
Fits a CatBoost classifier to the dataset. It first generates a DMatrix with the specified features and labels from df. Then, it fits a CatBoost model to this DMatrix. Returns the predict function for the model and the predictions for the input dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **features** (*list of str*) – A list of column names that are used as features for the model. All these names should be in df.
* **target** (*str*) – The name of the column in df that should be used as target for the model.
This column should be discrete, since this is a classification model.
* **learning_rate** (*float*) – Float in the range (0, 1].
Step size shrinkage used in updates to prevent overfitting. After each boosting step,
we can directly get the weights of new features, and eta actually shrinks the feature weights to make the boosting process more conservative.
See the eta hyper-parameter in:
<https://catboost.ai/docs/concepts/python-reference_parameters-list.html>
* **num_estimators** (*int*) – Int in the range (0, inf)
Number of boosted trees to fit.
See the n_estimators hyper-parameter in:
<https://catboost.ai/docs/concepts/python-reference_parameters-list.html>
* **extra_params** (*dict**,* *optional*) – Dictionary in the format {“hyperparameter_name” : hyperparameter_value}.
Other parameters for the CatBoost model. See the list in:
<https://catboost.ai/docs/concepts/python-reference_catboostregressor.html>
If not passed, the default will be used.
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
If a multiclass problem, additional prediction_column_i columns will be added for i in range(0,n_classes).
* **weight_column** (*str**,* *optional*) – The name of the column with scores to weight the data.
* **encode_extra_cols** (*bool* *(**default: True**)*) – If True, treats all columns in df with the name pattern `fklearn_feat__col==val` as feature columns.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the catboost_classification_learner model.
|
`fklearn.training.classification.``lgbm_classification_learner`[[source]](_modules/fklearn/training/classification.html#lgbm_classification_learner)[¶](#fklearn.training.classification.lgbm_classification_learner)
Fits an LGBM classifier to the dataset.
It first generates a Dataset with the specified features and labels from df. Then, it fits an LGBM model to this Dataset. Returns the predict function for the model and the predictions for the input dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A pandas DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **features** (*list of str*) – A list of column names that are used as features for the model. All these names should be in df.
* **target** (*str*) – The name of the column in df that should be used as target for the model.
This column should be discrete, since this is a classification model.
* **learning_rate** (*float*) – Float in the range (0, 1].
Step size shrinkage used in updates to prevent overfitting. After each boosting step,
we can directly get the weights of new features, and eta actually shrinks the feature weights to make the boosting process more conservative.
See the learning_rate hyper-parameter in:
<https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst>
* **num_estimators** (*int*) – Int in the range (0, inf)
Number of boosted trees to fit.
See the num_iterations hyper-parameter in:
<https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst>
* **extra_params** (*dict**,* *optional*) – Dictionary in the format {“hyperparameter_name” : hyperparameter_value}.
Other parameters for the LGBM model. See the list in:
<https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst>
If not passed, the default will be used.
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
* **weight_column** (*str**,* *optional*) – The name of the column with scores to weight the data.
* **encode_extra_cols** (*bool* *(**default: True**)*) – If True, treats all columns in df with the name pattern `fklearn_feat__col==val` as feature columns.
* **valid_sets** (*list of pandas.DataFrame**,* *optional* *(**default=None**)*) – A list of datasets to be used for early-stopping during training.
* **valid_names** (*list of strings**,* *optional* *(**default=None**)*) – A list of dataset names matching the list of datasets provided through the `valid_sets` parameter.
* **feval** (*callable**,* *list of callable**, or* *None**,* *optional* *(**default=None**)*) – Customized evaluation function. Each evaluation function should accept two parameters: preds, eval_data, and return (eval_name, eval_result, is_higher_better) or list of such tuples.
* **init_model** (*str**,* *pathlib.Path**,* *Booster* *or* *None**,* *optional* *(**default=None**)*) – Filename of LightGBM model or Booster instance used for continue training.
* **feature_name** (*list of str**, or* *'auto'**,* *optional* *(**default="auto"**)*) – Feature names. If ‘auto’ and data is pandas DataFrame, data columns names are used.
* **categorical_feature** (*list of str* *or* *int**, or* *'auto'**,* *optional* *(**default="auto"**)*) – Categorical features. If list of int, interpreted as indices. If list of str, interpreted as feature names (need to specify feature_name as well). If ‘auto’ and data is pandas DataFrame, pandas unordered categorical columns are used. All values in categorical features will be cast to int32 and thus should be less than int32 max value
(2147483647). Large values could be memory consuming. Consider using consecutive integers starting from zero.
All negative values in categorical features will be treated as missing values. The output cannot be monotonically constrained with respect to a categorical feature. Floating point numbers in categorical features will be rounded towards 0.
* **keep_training_booster** (*bool**,* *optional* *(**default=False**)*) – Whether the returned Booster will be used to keep training. If False, the returned value will be converted into
_InnerPredictor before returning. This means you won’t be able to use eval, eval_train or eval_valid methods of the returned Booster. When your model is very large and cause the memory error, you can try to set this param to True to avoid the model conversion performed during the internal call of model_to_string. You can still use
_InnerPredictor as init_model for future continue training.
* **callbacks** (*list of callable**, or* *None**,* *optional* *(**default=None**)*) – List of callback functions that are applied at each iteration. See Callbacks in LightGBM Python API for more information.
* **dataset_init_score** (*list**,* *list of lists* *(**for multi-class task**)**,* *numpy array**,* *pandas Series**,* *pandas DataFrame* *(**for*) – multi-class task), or None, optional (default=None)
Init score for Dataset. It could be the prediction of the majority class or a prediction from any other model.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the LGBM Classifier model.
|
`fklearn.training.classification.``logistic_classification_learner`[[source]](_modules/fklearn/training/classification.html#logistic_classification_learner)[¶](#fklearn.training.classification.logistic_classification_learner)
Fits a logistic regression classifier to the dataset. Returns the predict function for the model and the predictions for the input dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **features** (*list of str*) – A list of column names that are used as features for the model. All these names should be in df.
* **target** (*str*) – The name of the column in df that should be used as target for the model.
This column should be discrete, since this is a classification model.
* **params** (*dict*) – The LogisticRegression parameters in the format {“par_name”: param}. See:
<http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html>
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
If a multiclass problem, additional prediction_column_i columns will be added for i in range(0,n_classes).
* **weight_column** (*str**,* *optional*) – The name of the column with scores to weight the data.
* **encode_extra_cols** (*bool* *(**default: True**)*) – If True, treats all columns in df with the name pattern `fklearn_feat__col==val` as feature columns.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Logistic Regression model.
|
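A usage sketch, assuming a training dataframe `train_df` with hypothetical feature and target columns; the `params` values are illustrative and are passed straight to scikit-learn’s LogisticRegression:
```
from fklearn.training.classification import logistic_classification_learner

predict_fn, scored_train, log = logistic_classification_learner(train_df,
                                                                features=["age", "income"],
                                                                target="churned",
                                                                params={"C": 0.1, "max_iter": 500})
```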
`fklearn.training.classification.``nlp_logistic_classification_learner`[[source]](_modules/fklearn/training/classification.html#nlp_logistic_classification_learner)[¶](#fklearn.training.classification.nlp_logistic_classification_learner)
Fits a text vectorizer (TfidfVectorizer) followed by a logistic regression (LogisticRegression).
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **text_feature_cols** (*list of str*) – A list of column names of the text features used for the model. All these names should be in df.
* **target** (*str*) – The name of the column in df that should be used as target for the model.
This column should be discrete, since this is a classification model.
* **vectorizer_params** (*dict*) – The TfidfVectorizer parameters in the format {“par_name”: param}. See:
<http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html>
* **logistic_params** (*dict*) – The LogisticRegression parameters in the format {“par_name”: param}. See:
<http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html>
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the NLP Logistic Regression model.
|
`fklearn.training.classification.``xgb_classification_learner`[[source]](_modules/fklearn/training/classification.html#xgb_classification_learner)[¶](#fklearn.training.classification.xgb_classification_learner)
Fits an XGBoost classifier to the dataset. It first generates a DMatrix with the specified features and labels from df. Then, it fits an XGBoost model to this DMatrix. Returns the predict function for the model and the predictions for the input dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **features** (*list of str*) – A list of column names that are used as features for the model. All these names should be in df.
* **target** (*str*) – The name of the column in df that should be used as target for the model.
This column should be discrete, since this is a classification model.
* **learning_rate** (*float*) – Float in the range (0, 1]
Step size shrinkage used in updates to prevent overfitting. After each boosting step,
we can directly get the weights of new features, and eta actually shrinks the feature weights to make the boosting process more conservative.
See the eta hyper-parameter in:
<http://xgboost.readthedocs.io/en/latest/parameter.html>
* **num_estimators** (*int*) – Int in the range (0, inf)
Number of boosted trees to fit.
See the n_estimators hyper-parameter in:
<http://xgboost.readthedocs.io/en/latest/python/python_api.html>
* **extra_params** (*dict**,* *optional*) – Dictionary in the format {“hyperparameter_name” : hyperparameter_value}.
Other parameters for the XGBoost model. See the list in:
<http://xgboost.readthedocs.io/en/latest/parameter.html>
If not passed, the default will be used.
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
If a multiclass problem, additional prediction_column_i columns will be added for i in range(0,n_classes).
* **weight_column** (*str**,* *optional*) – The name of the column with scores to weight the data.
* **encode_extra_cols** (*bool* *(**default: True**)*) – If True, treats all columns in df with name pattern fklearn_feat__col==val as feature columns.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the XGboost Classifier model.
|
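A minimal usage sketch based on the parameters above; the toy DataFrame, the feature names and the hyperparameter values are illustrative assumptions (xgboost must be installed):
```
import pandas as pd
from fklearn.training.classification import xgb_classification_learner
train_df = pd.DataFrame({"x1": [1.0, 2.0, 3.0, 4.0],
"x2": [0.1, 0.4, 0.6, 0.9],
"converted": [0, 0, 1, 1]})
learner = xgb_classification_learner(features=["x1", "x2"],
target="converted",
learning_rate=0.1,
num_estimators=50,
extra_params={"max_depth": 2})
# Returns the predict function, the scored training set and the training log.
predict_fn, scored_df, log = learner(train_df)
```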
####### fklearn.training.ensemble module[¶](#module-fklearn.training.ensemble)
`fklearn.training.ensemble.``xgb_octopus_classification_learner`[[source]](_modules/fklearn/training/ensemble.html#xgb_octopus_classification_learner)[¶](#fklearn.training.ensemble.xgb_octopus_classification_learner)
Octopus ensemble allows you to inject domain-specific knowledge to force a split on an initial feature, instead of assuming the tree model will do that intelligent split on its own. It works by first defining a split on your dataset and then training one individual model on each resulting subset.
| Parameters: | * **train_set** (*pd.DataFrame*) – A Pandas’ DataFrame with features, target columns and a splitting column that must be categorical.
* **learning_rate_by_bin** (*dict*) – A dictionary of learning rates for the XGBoost model to use in each model split. Ex: if you want to split your training by tenure and you have a tenure column with integer values [1,2,3,…,12], you have to specify a learning rate for each split:
```
{
1: 0.08,
2: 0.08,
...
12: 0.1
}
```
* **num_estimators_by_bin** (*dict*) – A dictionary of the number of tree estimators for the XGBoost model to use in each model split. Ex: if you want to split your training by tenure and you have a tenure column with integer values [1,2,3,…,12], you have to specify a number of estimators for each split:
```
{
1: 300,
2: 250,
...
12: 300
}
```
* **extra_params_by_bin** (*dict*) – A dictionary of extra-parameter dictionaries for the XGBoost model to use in each model split. Ex: if you want to split your training by tenure and you have a tenure column with integer values [1,2,3,…,12], you have to specify a dictionary of extra parameters for each split:
```
{
1: {
'reg_alpha': 0.0,
'colsample_bytree': 0.4,
...
'colsample_bylevel': 0.8
}
2: {
'reg_alpha': 0.1,
'colsample_bytree': 0.6,
...
'colsample_bylevel': 0.4
}
...
12: {
'reg_alpha': 0.0,
'colsample_bytree': 0.7,
...
'colsample_bylevel': 1.0
}
}
```
* **features_by_bin** (*dict*) – A dictionary of features to use in each model split. Ex: if you want to split your training by tenure and you have a tenure column with integer values [1,2,3,…,12], you have to specify a list of features for each split:
```
{
1: [feature-1, feature-2, feature-3, ...],
2: [feature-1, feature-3, feature-5, ...],
...
12: [feature-2, feature-4, feature-8, ...]
}
```
* **train_split_col** (*str*) – The name of the categorical column where the model will make the splits. Ex: if you want to split your training by tenure, you can have a categorical column called “tenure”.
* **train_split_bins** (*list*) – A list with the actual values of the categories from the train_split_col. Ex: if you want to split your training by tenure and you have a tenure column with integer values [1,2,3,…,12] you can pass this list and you will split your training into 12 different models.
* **nthread** (*int*) – Number of threads for the XGBoost learners.
* **target_column** (*str*) – The name of the target column.
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Octopus XGB Classifier model.
|
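A minimal usage sketch based on the parameters above; the toy DataFrame, the split values and the per-bin hyperparameters are illustrative assumptions:
```
import pandas as pd
from fklearn.training.ensemble import xgb_octopus_classification_learner
bins = [1, 2]  # the values of the splitting column
train_df = pd.DataFrame({"tenure": [1, 1, 1, 2, 2, 2],
                         "x1": [0.1, 0.5, 0.9, 0.2, 0.6, 0.8],
                         "converted": [0, 1, 1, 0, 0, 1]})
learner = xgb_octopus_classification_learner(
    train_split_col="tenure",
    train_split_bins=bins,
    nthread=1,
    target_column="converted",
    prediction_column="prediction",
    learning_rate_by_bin={b: 0.1 for b in bins},
    num_estimators_by_bin={b: 20 for b in bins},
    extra_params_by_bin={b: {"max_depth": 2} for b in bins},
    features_by_bin={b: ["x1"] for b in bins})
# One XGBoost model is trained per tenure value; predictions are stitched back together.
predict_fn, scored_df, log = learner(train_df)
```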
####### fklearn.training.imputation module[¶](#module-fklearn.training.imputation)
`fklearn.training.imputation.``imputer`[[source]](_modules/fklearn/training/imputation.html#imputer)[¶](#fklearn.training.imputation.imputer)
Fits a missing value imputer to the dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with columns to impute missing values.
It must contain all columns listed in columns_to_impute
* **columns_to_impute** (*List of strings*) – A list of names of the columns for missing value imputation.
* **impute_strategy** (*String**,* *(**default="median"**)*) – The imputation strategy.
- If “mean”, then replace missing values using the mean along the axis.
- If “median”, then replace missing values using the median along the axis.
- If “most_frequent”, then replace missing using the most frequent value along the axis.
* **placeholder_value** (*Any**,* *(**default=None**)*) – If not None, use this as the default value when some features contain only NA values in training. At transformation time, NA values in those features will be replaced by placeholder_value.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the SimpleImputer model.
|
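A minimal usage sketch based on the parameters above; the toy DataFrame and its column names are illustrative assumptions:
```
import numpy as np
import pandas as pd
from fklearn.training.imputation import imputer
df = pd.DataFrame({"age": [20.0, np.nan, 40.0],
                   "income": [1000.0, 2000.0, np.nan]})
# Learns the median of each column and fills missing values with it.
impute_fn, imputed_df, log = imputer(df,
                                     columns_to_impute=["age", "income"],
                                     impute_strategy="median")
```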
`fklearn.training.imputation.``placeholder_imputer`[[source]](_modules/fklearn/training/imputation.html#placeholder_imputer)[¶](#fklearn.training.imputation.placeholder_imputer)
Fills missing values with a fixed value.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with columns to fill missing values.
It must contain all columns listed in columns_to_impute
* **columns_to_impute** (*List of strings*) – A list of names of the columns for filling missing values.
* **placeholder_value** (*Any**,* *(**default=-999**)*) – The value used to fill in missing values.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Placeholder SimpleImputer model.
|
####### fklearn.training.pipeline module[¶](#module-fklearn.training.pipeline)
`fklearn.training.pipeline.``build_pipeline`(**learners*, *has_repeated_learners: bool = False*) → Callable[[pandas.core.frame.DataFrame], Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/pipeline.html#build_pipeline)[¶](#fklearn.training.pipeline.build_pipeline)
Builds a pipeline of different chained learners functions with the possibility of using keyword arguments in the predict functions of the pipeline.
Say you have two learners, you create a pipeline with pipeline = build_pipeline(learner1, learner2).
Those learners must be functions with just one unfilled argument (the dataset itself).
Then, you train the pipeline with predict_fn, transformed_df, logs = pipeline(df),
which will be like applying the learners in the following order: learner2(learner1(df)).
Finally, you predict on different datasets with pred_df = predict_fn(new_df), with optional kwargs.
For example, if you have XGBoost or LightGBM, you can get SHAP values with predict_fn(new_df, apply_shap=True).
| Parameters: | * **learners** (*partially-applied learner functions.*) –
* **has_repeated_learners** (*bool*) – Boolean value indicating whether the pipeline contains learners with the same name or not.
|
| Returns: | * **p** (*function pandas.DataFrame, **kwargs -> pandas.DataFrame*) – A function that when applied to a DataFrame will apply all learner functions in sequence, with optional kwargs.
* **new_df** (*pandas.DataFrame*) – A DataFrame that is the result of applying all learner function in sequence.
* **log** (*dict*) – A log-like Dict that stores information of all learner functions.
|
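A minimal usage sketch combining learners documented in this section; train_df and new_df are hypothetical DataFrames containing the age and converted columns assumed here:
```
from fklearn.training.imputation import imputer
from fklearn.training.classification import logistic_classification_learner
from fklearn.training.pipeline import build_pipeline
# Chain an imputer and a classifier; each learner is partially applied,
# leaving only the dataset argument unfilled.
pipeline = build_pipeline(
    imputer(columns_to_impute=["age"], impute_strategy="median"),
    logistic_classification_learner(features=["age"], target="converted"),
)
# Training applies the learners in sequence and returns the combined predict function.
predict_fn, transformed_df, logs = pipeline(train_df)
scored = predict_fn(new_df)
```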
####### fklearn.training.regression module[¶](#module-fklearn.training.regression)
`fklearn.training.regression.``catboost_regressor_learner`[[source]](_modules/fklearn/training/regression.html#catboost_regressor_learner)[¶](#fklearn.training.regression.catboost_regressor_learner)
Fits a CatBoost regressor to the dataset. It first generates a Pool with the specified features and labels from df. Then it fits a CatBoost model to this Pool. Returns the predict function for the model and the predictions for the input dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **features** (*list of str*) – A list of column names that are used as features for the model. All these names should be in df.
* **target** (*str*) – The name of the column in df that should be used as target for the model.
This column should be numerical and continuous, since this is a regression model.
* **learning_rate** (*float*) – Float in range [0,1].
Step size shrinkage used in updates to prevent overfitting. After each boosting step,
we can directly get the weights of new features, and eta actually shrinks the feature weights to make the boosting process more conservative.
See the eta hyper-parameter in:
<https://catboost.ai/docs/concepts/python-reference_parameters-list.html>
* **num_estimators** (*int*) – Int in range [0, inf]
Number of boosted trees to fit.
See the n_estimators hyper-parameter in:
<https://catboost.ai/docs/concepts/python-reference_parameters-list.html>
* **extra_params** (*dict**,* *optional*) – Dictionary in the format {“hyperparameter_name” : hyperparameter_value}.
Other parameters for the CatBoost model. See the list in:
<https://catboost.ai/docs/concepts/python-reference_catboostregressor.html>
If not passed, the default will be used.
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
* **weight_column** (*str**,* *optional*) – The name of the column with scores to weight the data.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the CatBoostRegressor model.
|
`fklearn.training.regression.``custom_supervised_model_learner`[[source]](_modules/fklearn/training/regression.html#custom_supervised_model_learner)[¶](#fklearn.training.regression.custom_supervised_model_learner)
Fits a custom model to the dataset.
Return the predict function, the predictions for the input dataset and a log describing the model.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from features.
* **features** (*list of str*) – A list of column names that are used as features for the model. All these names should be in df.
* **target** (*str*) – The name of the column in df that should be used as target for the model.
* **model** (*Object*) – Machine learning model to be used for regression or classification.
model object must have “.fit” attribute to train the data.
For classification problems, it also needs “.predict_proba” attribute.
For regression problems, it needs “.predict” attribute.
* **supervised_type** (*str*) – Type of supervised learning to be used. The options are: ‘classification’ or ‘regression’.
* **log** (*Dict**[**str**,* *Dict**]*) – Log with additional information of the custom model used.
It must start with just one element with the model name.
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
For classification problems, all probabilities will be added: for i in range(0,n_classes).
For regression just prediction_column will be added.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Custom Supervised Model Learner model.
|
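A minimal usage sketch based on the parameters above; the toy DataFrame, the scikit-learn model choice and the log contents are illustrative assumptions:
```
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from fklearn.training.regression import custom_supervised_model_learner
train_df = pd.DataFrame({"x1": [0.1, 0.4, 0.6, 0.9],
                         "y": [0, 0, 1, 1]})
# Any model exposing .fit (and .predict_proba for classification) can be wrapped.
predict_fn, scored_df, log = custom_supervised_model_learner(
    train_df,
    features=["x1"],
    target="y",
    model=RandomForestClassifier(n_estimators=10),
    supervised_type="classification",
    log={"RandomForestClassifier": {"n_estimators": 10}})
```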
`fklearn.training.regression.``elasticnet_regression_learner`[[source]](_modules/fklearn/training/regression.html#elasticnet_regression_learner)[¶](#fklearn.training.regression.elasticnet_regression_learner)
Fits an elastic net regressor to the dataset. Returns the predict function for the model and the predictions for the input dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **features** (*list of str*) – A list of column names that are used as features for the model. All these names should be in df.
* **target** (*str*) – The name of the column in df that should be used as target for the model.
This column should be continuous, since this is a regression model.
* **params** (*dict*) – The ElasticNet parameters in the format {“par_name”: param}. See:
<https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html>
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
* **weight_column** (*str**,* *optional*) – The name of the column with scores to weight the data.
* **encode_extra_cols** (*bool* *(**default: True**)*) – If True, treats all columns in df with name pattern fklearn_feat__col==val as feature columns.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the ElasticNet Regression model.
|
`fklearn.training.regression.``gp_regression_learner`[[source]](_modules/fklearn/training/regression.html#gp_regression_learner)[¶](#fklearn.training.regression.gp_regression_learner)
Fits a Gaussian process regressor to the dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **features** (*list of str*) – A list of column names that are used as features for the model. All these names should be in df.
* **target** (*str*) – The name of the column in df that should be used as target for the model.
This column should be numerical and continuous, since this is a regression model.
* **kernel** (*sklearn.gaussian_process.kernels*) – The kernel specifying the covariance function of the GP. If None is passed,
the kernel “1.0 * RBF(1.0)” is used as default. Note that the kernel’s hyperparameters are optimized during fitting.
* **alpha** (*float*) – Value added to the diagonal of the kernel matrix during fitting. Larger values correspond to increased noise level in the observations. This can also prevent a potential numerical issue during fitting,
by ensuring that the calculated values form a positive definite matrix.
* **extra_variance** (*float*) – The amount of extra variance to scale to the predictions in standard deviations. If left as the default “fit”,
uses the standard deviation of the target.
* **return_std** (*bool*) – If True, the standard-deviation of the predictive distribution at the query points is returned along with the mean.
* **extra_params** (*dict {"hyperparameter_name" : hyperparameter_value}**,* *optional*) – Other parameters for the GaussianProcessRegressor model. See the list in:
<http://scikit-learn.org/stable/modules/generated/sklearn.gaussian_process.GaussianProcessRegressor.html>
If not passed, the default will be used.
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
* **encode_extra_cols** (*bool* *(**default: True**)*) – If True, treats all columns in df with name pattern fklearn_feat__col==val as feature columns.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Gaussian Process Regressor model.
|
`fklearn.training.regression.``lgbm_regression_learner`[[source]](_modules/fklearn/training/regression.html#lgbm_regression_learner)[¶](#fklearn.training.regression.lgbm_regression_learner)
Fits an LGBM regressor to the dataset.
It first generates a Dataset with the specified features and labels from df. Then, it fits an LGBM model to this Dataset. Returns the predict function for the model and the predictions for the input dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **features** (*list of str*) – A list of column names that are used as features for the model. All these names should be in df.
* **target** (*str*) – The name of the column in df that should be used as target for the model.
This column should be numerical and continuous, since this is a regression model.
* **learning_rate** (*float*) – Float in the range (0, 1]
Step size shrinkage used in updates to prevent overfitting. After each boosting step,
we can directly get the weights of new features, and eta actually shrinks the feature weights to make the boosting process more conservative.
See the learning_rate hyper-parameter in:
<https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst>
* **num_estimators** (*int*) – Int in the range (0, inf)
Number of boosted trees to fit.
See the num_iterations hyper-parameter in:
<https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst>
* **extra_params** (*dict**,* *optional*) – Dictionary in the format {“hyperparameter_name” : hyperparameter_value}.
Other parameters for the LGBM model. See the list in:
<https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst>
If not passed, the default will be used.
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
* **weight_column** (*str**,* *optional*) – The name of the column with scores to weight the data.
* **encode_extra_cols** (*bool* *(**default: True**)*) – If True, treats all columns in df with name pattern fklearn_feat__col==val as feature columns.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the LGBM Regressor model.
|
`fklearn.training.regression.``linear_regression_learner`[[source]](_modules/fklearn/training/regression.html#linear_regression_learner)[¶](#fklearn.training.regression.linear_regression_learner)
Fits a linear regressor to the dataset. Returns the predict function for the model and the predictions for the input dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **features** (*list of str*) – A list of column names that are used as features for the model. All these names should be in df.
* **target** (*str*) – The name of the column in df that should be used as target for the model.
This column should be continuous, since this is a regression model.
* **params** (*dict*) – The LinearRegression parameters in the format {“par_name”: param}. See:
<http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html>
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
* **weight_column** (*str**,* *optional*) – The name of the column with scores to weight the data.
* **encode_extra_cols** (*bool* *(**default: True**)*) – If True, treats all columns in df with name pattern fklearn_feat__col==val as feature columns.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Linear Regression model.
|
`fklearn.training.regression.``xgb_regression_learner`[[source]](_modules/fklearn/training/regression.html#xgb_regression_learner)[¶](#fklearn.training.regression.xgb_regression_learner)
Fits an XGBoost regressor to the dataset. It first generates a DMatrix with the specified features and labels from df. Then it fits an XGBoost model to this DMatrix. Returns the predict function for the model and the predictions for the input dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **features** (*list of str*) – A list of column names that are used as features for the model. All these names should be in df.
* **target** (*str*) – The name of the column in df that should be used as target for the model.
This column should be numerical and continuous, since this is a regression model.
* **learning_rate** (*float*) – Float in range [0,1].
Step size shrinkage used in updates to prevent overfitting. After each boosting step,
we can directly get the weights of new features, and eta actually shrinks the feature weights to make the boosting process more conservative.
See the eta hyper-parameter in:
<http://xgboost.readthedocs.io/en/latest/parameter.html>
* **num_estimators** (*int*) – Int in range [0, inf]
Number of boosted trees to fit.
See the n_estimators hyper-parameter in:
<http://xgboost.readthedocs.io/en/latest/python/python_api.html>
* **extra_params** (*dict**,* *optional*) – Dictionary in the format {“hyperparameter_name” : hyperparameter_value}.
Other parameters for the XGBoost model. See the list in:
<http://xgboost.readthedocs.io/en/latest/parameter.html>
If not passed, the default will be used.
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
* **weight_column** (*str**,* *optional*) – The name of the column with scores to weight the data.
* **encode_extra_cols** (*bool* *(**default: True**)*) – If True, treats all columns in df with name pattern fklearn_feat__col==val as feature columns.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the XGboost Regressor model.
|
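A minimal usage sketch based on the parameters above; the toy DataFrame, the feature names and the hyperparameter values are illustrative assumptions:
```
import pandas as pd
from fklearn.training.regression import xgb_regression_learner
train_df = pd.DataFrame({"size": [50, 70, 90, 110],
                         "rooms": [1, 2, 3, 4],
                         "price": [100.0, 150.0, 200.0, 260.0]})
predict_fn, scored_df, log = xgb_regression_learner(
    train_df,
    features=["size", "rooms"],
    target="price",
    learning_rate=0.1,
    num_estimators=50,
    extra_params={"max_depth": 2})
```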
####### fklearn.training.transformation module[¶](#module-fklearn.training.transformation)
`fklearn.training.transformation.``apply_replacements`(*df: pandas.core.frame.DataFrame, columns: List[str], vec: Dict[str, Dict], replace_unseen: Any*) → pandas.core.frame.DataFrame[[source]](_modules/fklearn/training/transformation.html#apply_replacements)[¶](#fklearn.training.transformation.apply_replacements)
Base function to apply the replacement values found in the
“vec” dictionaries to the df DataFrame.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas DataFrame containing the data to be replaced.
* **columns** (*list of str*) – The df columns names to perform the replacements.
* **vec** (*dict*) – A dict mapping a col to dict mapping a value to its replacement. For example:
vec = {“feature1”: {1: 2, 3: 5, 6: 8}}
* **replace_unseen** (*Any*) – Default value to replace when original value is not present in the vec dict for the feature
|
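A minimal usage sketch based on the signature and the example vec above; the toy DataFrame is an illustrative assumption:
```
import pandas as pd
from fklearn.training.transformation import apply_replacements
df = pd.DataFrame({"feature1": [1, 3, 6, 7]})
vec = {"feature1": {1: 2, 3: 5, 6: 8}}
# The value 7 is not present in vec, so it is replaced by replace_unseen (-1).
replaced = apply_replacements(df, ["feature1"], vec, replace_unseen=-1)
```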
`fklearn.training.transformation.``capper`(*df: pandas.core.frame.DataFrame = '__no__default__'*, *columns_to_cap: List[str] = '__no__default__'*, *precomputed_caps: Dict[str*, *float] = None*) → Union[Callable, Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/transformation.html#capper)[¶](#fklearn.training.transformation.capper)
Learns the maximum value for each of the columns_to_cap and uses that as the cap for those columns. If precomputed caps are passed, the function uses those as the cap values instead of computing the maximum.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame that must contain columns_to_cap columns.
* **columns_to_cap** (*list of str*) – A list of column names that should be capped.
* **precomputed_caps** (*dict*) – A dictionary in the format {“column_name” : cap_value} that maps column names to precomputed cap values.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Capper model.
|
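A minimal usage sketch based on the parameters above; the toy DataFrame and the cap value are illustrative assumptions:
```
import pandas as pd
from fklearn.training.transformation import capper
df = pd.DataFrame({"income": [1000.0, 2500.0, 9000.0]})
# Learn the cap from the data (the column maximum)...
cap_fn, capped_df, log = capper(df, columns_to_cap=["income"])
# ...or provide a precomputed cap instead of computing it from df.
cap_fn, capped_df, log = capper(df, columns_to_cap=["income"],
                                precomputed_caps={"income": 5000.0})
```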
`fklearn.training.transformation.``count_categorizer`(*df: pandas.core.frame.DataFrame = '__no__default__'*, *columns_to_categorize: List[str] = '__no__default__'*, *replace_unseen: int = -1*, *store_mapping: bool = False*) → Union[Callable, Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/transformation.html#count_categorizer)[¶](#fklearn.training.transformation.count_categorizer)
Replaces categorical variables by count.
The default behaviour is to replace the original values. To store the original values in a new column, specify prefix or suffix in the parameters, or specify a dictionary with the desired column mapping using the columns_mapping parameter.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame that must contain columns_to_categorize columns.
* **columns_to_categorize** (*list of str*) – A list of categorical column names.
* **replace_unseen** (*int*) – The value to impute unseen categories.
* **store_mapping** (*bool* *(**default: False**)*) – Whether to store the feature value -> integer dictionary in the log
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Count Categorizer model.
|
`fklearn.training.transformation.``custom_transformer`(*df: pandas.core.frame.DataFrame = '__no__default__', columns_to_transform: List[str] = '__no__default__', transformation_function: Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame] = '__no__default__', is_vectorized: bool = False*) → Union[Callable, Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/transformation.html#custom_transformer)[¶](#fklearn.training.transformation.custom_transformer)
Applies a custom function to the desired columns.
The default behaviour is to replace the original values. To store the original values in a new column, specify prefix or suffix in the parameters, or specify a dictionary with the desired column mapping using the columns_mapping parameter.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame that must contain columns
* **columns_to_transform** (*list of str*) – A list of column names to which the transformation function will be applied.
* **transformation_function** (*function**(**pandas.DataFrame**)* *-> pandas.DataFrame*) – A function that receives a DataFrame as input, performs a transformation on its columns and returns another DataFrame.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Custom Transformer model.
|
`fklearn.training.transformation.``discrete_ecdfer`[[source]](_modules/fklearn/training/transformation.html#discrete_ecdfer)[¶](#fklearn.training.transformation.discrete_ecdfer)
Learns an Empirical Cumulative Distribution Function from the specified column in the input DataFrame. It is usually used in the prediction column to convert a predicted probability into a score from 0 to 1000.
| Parameters: | * **df** (*Pandas' pandas.DataFrame*) – A Pandas’ DataFrame that must contain a prediction_column column.
* **ascending** (*bool*) – Whether to compute an ascending ECDF or a descending one.
* **prediction_column** (*str*) – The name of the column in df to learn the ECDF from.
* **ecdf_column** (*str*) – The name of the new ECDF column added by this function.
* **max_range** (*int*) –
The maximum value for the ECDF. It will go from 0 to max_range.
* **round_method** (*Callable*) – A function to perform the rounding of transformed values, e.g. int, ceil, floor, round.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Discrete ECDFer model.
|
`fklearn.training.transformation.``ecdfer`[[source]](_modules/fklearn/training/transformation.html#ecdfer)[¶](#fklearn.training.transformation.ecdfer)
Learns an Empirical Cumulative Distribution Function from the specified column in the input DataFrame. It is usually used in the prediction column to convert a predicted probability into a score from 0 to 1000.
| Parameters: | * **df** (*Pandas' pandas.DataFrame*) – A Pandas’ DataFrame that must contain a prediction_column column.
* **ascending** (*bool*) – Whether to compute an ascending ECDF or a descending one.
* **prediction_column** (*str*) – The name of the column in df to learn the ECDF from.
* **ecdf_column** (*str*) – The name of the new ECDF column added by this function
* **max_range** (*int*) –
The maximum value for the ECDF. It will go from 0 to max_range.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the ECDFer model.
|
`fklearn.training.transformation.``floorer`(*df: pandas.core.frame.DataFrame = '__no__default__'*, *columns_to_floor: List[str] = '__no__default__'*, *precomputed_floors: Dict[str*, *float] = None*) → Union[Callable, Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/transformation.html#floorer)[¶](#fklearn.training.transformation.floorer)
Learns the minimum value for each of the columns_to_floor and uses that as the floor for those columns. If precomputed floors are passed, the function uses those as the floor values instead of computing the minimum.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame that must contain columns_to_floor columns.
* **columns_to_floor** (*list of str*) – A list of column names that should be floored.
* **precomputed_floors** (*dict*) – A dictionary in the format {“column_name” : floor_value} that maps column names to precomputed floor values.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Floorer model.
|
`fklearn.training.transformation.``label_categorizer`(*df: pandas.core.frame.DataFrame = '__no__default__'*, *columns_to_categorize: List[str] = '__no__default__'*, *replace_unseen: Union[str*, *float] = nan*, *store_mapping: bool = False*) → Union[Callable, Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/transformation.html#label_categorizer)[¶](#fklearn.training.transformation.label_categorizer)
Replaces categorical variables with a numeric identifier.
The default behaviour is to replace the original values. To store the original values in a new column, specify prefix or suffix in the parameters, or specify a dictionary with the desired column mapping using the columns_mapping parameter.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame that must contain columns_to_categorize columns.
* **columns_to_categorize** (*list of str*) – A list of categorical column names.
* **replace_unseen** (*int**,* *str**,* *float**, or* *nan*) – The value to impute unseen categories.
* **store_mapping** (*bool* *(**default: False**)*) – Whether to store the feature value -> integer dictionary in the log
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Label Categorizer model.
|
`fklearn.training.transformation.``missing_warner`[[source]](_modules/fklearn/training/transformation.html#missing_warner)[¶](#fklearn.training.transformation.missing_warner)
Creates a new column to warn about rows whose columns had no missing values in the training set but have missing values at scoring time.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame.
* **cols_list** (*list of str*) – List of columns to consider when evaluating missingness
* **new_column_name** (*str*) – Name of the column created to alert the existence of missing values
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Missing Alerter model.
|
`fklearn.training.transformation.``null_injector`[[source]](_modules/fklearn/training/transformation.html#null_injector)[¶](#fklearn.training.transformation.null_injector)
Injects nulls into columns.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame that must contain columns_to_inject as columns
* **columns_to_inject** (*list of str*) – A list of features to inject nulls. If groups is not None it will be ignored.
* **proportion** (*float*) – Proportion of nulls to inject in the columns.
* **groups** (*list of list of str* *(**default = None**)*) – A list of groups of features. If not None, features in the same group will be set to NaN together.
* **seed** (*int*) – Random seed for consistency.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Null Injector model.
|
`fklearn.training.transformation.``onehot_categorizer`(*df: pandas.core.frame.DataFrame = '__no__default__'*, *columns_to_categorize: List[str] = '__no__default__'*, *hardcode_nans: bool = False*, *drop_first_column: bool = False*, *store_mapping: bool = False*) → Union[Callable, Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/transformation.html#onehot_categorizer)[¶](#fklearn.training.transformation.onehot_categorizer)
Onehot encoding on categorical columns.
Encoded columns are removed and substituted by columns named fklearn_feat__col==val, where col is the name of the column and val is one of the values the feature can assume.
The default behaviour is to replace the original values. To store the original values in a new column, specify prefix or suffix in the parameters, or specify a dictionary with the desired column mapping using the columns_mapping parameter.
| Parameters: | * **df** (*pd.DataFrame*) – A Pandas’ DataFrame that must contain columns_to_categorize columns.
* **columns_to_categorize** (*list of str*) – A list of categorical column names. Must be non-empty.
* **hardcode_nans** (*bool*) – Hardcodes an extra column with: 1 if nan or unseen else 0.
* **drop_first_column** (*bool*) – Drops the first encoded column to create (k-1)-sized one-hot arrays for the k distinct values of each categorical column. Can be used to avoid collinearity.
* **store_mapping** (*bool* *(**default: False**)*) – Whether to store the feature value -> integer dictionary in the log
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Onehot Categorizer model.
|
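A minimal usage sketch based on the parameters above; the toy DataFrame and the category values are illustrative assumptions:
```
import pandas as pd
from fklearn.training.transformation import onehot_categorizer
df = pd.DataFrame({"state": ["SP", "RJ", "SP"], "y": [1, 0, 1]})
encode_fn, encoded_df, log = onehot_categorizer(df,
                                                columns_to_categorize=["state"],
                                                hardcode_nans=True)
# encoded_df now contains columns such as fklearn_feat__state==SP and
# fklearn_feat__state==RJ (plus a nan/unseen indicator column) instead of state.
```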
`fklearn.training.transformation.``prediction_ranger`[[source]](_modules/fklearn/training/transformation.html#prediction_ranger)[¶](#fklearn.training.transformation.prediction_ranger)
Caps and floors the specified prediction column to a set range.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame that must contain a prediction_column column.
* **prediction_min** (*float*) – The floor for the prediction.
* **prediction_max** (*float*) – The cap for the prediction.
* **prediction_column** (*str*) – The name of the column in df to cap and floor
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Prediction Ranger model.
|
`fklearn.training.transformation.``quantile_biner`(*df: pandas.core.frame.DataFrame = '__no__default__'*, *columns_to_bin: List[str] = '__no__default__'*, *q: int = 4*, *right: bool = False*) → Union[Callable, Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/transformation.html#quantile_biner)[¶](#fklearn.training.transformation.quantile_biner)
Discretize continuous numerical columns into their quantiles. Uses pandas.qcut to find the bins and then numpy.digitize to fit the columns into bins.
The default behaviour is to replace the original values. To store the original values in a new column, specify prefix or suffix in the parameters, or specify a dictionary with the desired column mapping using the columns_mapping parameter.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame that must contain the columns_to_bin columns.
* **columns_to_bin** (*list of str*) – A list of numerical column names.
* **q** (*int*) – Number of quantiles. 10 for deciles, 4 for quartiles, etc.
Alternatively, an array of quantiles, e.g. [0, .25, .5, .75, 1.] for quartiles.
See <https://pandas.pydata.org/pandas-docs/stable/generated/pandas.qcut.html>
* **right** (*bool*) – Indicating whether the intervals include the right or the left bin edge.
Default behavior is (right==False), indicating that the interval does not include the right edge; i.e., bins[i-1] <= x < bins[i] is the default behavior for monotonically increasing bins.
See <https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.digitize.html>
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Quantile Biner model.
|
`fklearn.training.transformation.``rank_categorical`(*df: pandas.core.frame.DataFrame = '__no__default__'*, *columns_to_rank: List[str] = '__no__default__'*, *replace_unseen: Union[str*, *float] = nan*, *store_mapping: bool = False*) → Union[Callable, Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/transformation.html#rank_categorical)[¶](#fklearn.training.transformation.rank_categorical)
Rank categorical features by their frequency in the train set.
The default behaviour is to replace the original values. To store the original values in a new column, specify prefix or suffix in the parameters, or specify a dictionary with the desired column mapping using the columns_mapping parameter.
| Parameters: | * **df** (*Pandas' DataFrame*) – A Pandas’ DataFrame that must contain the columns_to_rank columns.
* **columns_to_rank** (*list of str*) – The df column names on which to perform the ranking.
* **replace_unseen** (*int**,* *str**,* *float**, or* *nan*) – The value to impute unseen categories.
* **store_mapping** (*bool* *(**default: False**)*) – Whether to store the feature value -> integer dictionary in the log
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Rank Categorical model.
|
`fklearn.training.transformation.``selector`[[source]](_modules/fklearn/training/transformation.html#selector)[¶](#fklearn.training.transformation.selector)
Filters a DataFrame by selecting only the desired columns.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame that must contain columns
* **training_columns** (*list of str*) – A list of column names that will remain in the dataframe during training time (fit)
* **predict_columns** (*list of str*) – A list of column names that will remain in the dataframe during prediction time (transform)
If None, it defaults to training_columns.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Selector model.
|
`fklearn.training.transformation.``standard_scaler`(*df: pandas.core.frame.DataFrame = '__no__default__'*, *columns_to_scale: List[str] = '__no__default__'*) → Union[Callable, Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/transformation.html#standard_scaler)[¶](#fklearn.training.transformation.standard_scaler)
Fits a standard scaler to the dataset.
The default behaviour is to replace the original values. To store the original values in a new column, specify prefix or suffix in the parameters, or specify a dictionary with the desired column mapping using the columns_mapping parameter.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with columns to scale.
It must contain all columns listed in columns_to_scale.
* **columns_to_scale** (*list of str*) – A list of names of the columns for standard scaling.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Standard Scaler model.
|
`fklearn.training.transformation.``target_categorizer`(*df: pandas.core.frame.DataFrame = '__no__default__'*, *columns_to_categorize: List[str] = '__no__default__'*, *target_column: str = '__no__default__'*, *smoothing: float = 1.0*, *ignore_unseen: bool = True*, *store_mapping: bool = False*) → Union[Callable, Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/transformation.html#target_categorizer)[¶](#fklearn.training.transformation.target_categorizer)
Replaces categorical variables with the smoothed mean of the target variable by category.
Uses a weighted average with the overall mean of the target variable for smoothing.
The default behaviour is to replace the original values. To store the original values in a new column, specify prefix or suffix in the parameters, or specify a dictionary with the desired column mapping using the columns_mapping parameter.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame that must contain columns_to_categorize and target_column columns.
* **columns_to_categorize** (*list of str*) – A list of categorical column names.
* **target_column** (*str*) – Target column name. Target can be binary or continuous.
* **smoothing** (*float* *(**default: 1.0**)*) – Weight given to overall target mean against target mean by category.
The value must be greater than or equal to 0
* **ignore_unseen** (*bool* *(**default: True**)*) – If True, unseen values will be encoded as nan. If False, they will be replaced by the target mean.
* **store_mapping** (*bool* *(**default: False**)*) – Whether to store the feature value -> float dictionary in the log.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Target Categorizer model.
|
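A minimal usage sketch based on the parameters above; the toy DataFrame and the category values are illustrative assumptions:
```
import pandas as pd
from fklearn.training.transformation import target_categorizer
df = pd.DataFrame({"city": ["a", "a", "b", "b"],
                   "y": [1, 0, 1, 1]})
# Each category in "city" is replaced by the smoothed mean of "y" within that category.
categorize_fn, categorized_df, log = target_categorizer(df,
                                                        columns_to_categorize=["city"],
                                                        target_column="y",
                                                        smoothing=1.0,
                                                        store_mapping=True)
```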
`fklearn.training.transformation.``truncate_categorical`(*df: pandas.core.frame.DataFrame = '__no__default__'*, *columns_to_truncate: List[str] = '__no__default__'*, *percentile: float = '__no__default__'*, *replacement: Union[str*, *float] = -9999*, *replace_unseen: Union[str*, *float] = -9999*, *store_mapping: bool = False*) → Union[Callable, Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/transformation.html#truncate_categorical)[¶](#fklearn.training.transformation.truncate_categorical)
Truncate infrequent categories and replace them by a single one.
You can think of it like “others” category.
The default behaviour is to replace the original values. To store the original values in a new column, specify prefix or suffix in the parameters, or specify a dictionary with the desired column mapping using the columns_mapping parameter.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame that must contain the columns_to_truncate columns.
* **columns_to_truncate** (*list of str*) – The df column names on which to perform the truncation.
* **percentile** (*float*) – Categories less frequent than the percentile will be replaced by the same one.
* **replacement** (*int**,* *str**,* *float* *or* *nan*) – The value to use when a category is less frequent than the percentile variable.
* **replace_unseen** (*int**,* *str**,* *float**, or* *nan*) – The value to impute unseen categories.
* **store_mapping** (*bool* *(**default: False**)*) – Whether to store the feature value -> integer dictionary in the log.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Truncate Categorical model.
|
`fklearn.training.transformation.``value_mapper`(*df: pandas.core.frame.DataFrame = '__no__default__'*, *value_maps: Dict[str*, *Dict] = '__no__default__'*, *ignore_unseen: bool = True*, *replace_unseen_to: Any = nan*) → Union[Callable, Tuple[Callable[[pandas.core.frame.DataFrame], pandas.core.frame.DataFrame], pandas.core.frame.DataFrame, Dict[str, Dict[str, Any]]]][[source]](_modules/fklearn/training/transformation.html#value_mapper)[¶](#fklearn.training.transformation.value_mapper)
Map values in selected columns in the DataFrame according to dictionaries of replacements.
Learner wrapper for apply_replacements
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas DataFrame containing the data to be replaced.
* **value_maps** (*dict of dicts*) – A dict mapping a col to dict mapping a value to its replacement. For example:
value_maps = {“feature1”: {1: 2, 3: 5, 6: 8}}
* **ignore_unseen** (*bool*) – If True, values not explicitly declared in value_maps will be left as is.
If False, these will be replaced by replace_unseen_to.
* **replace_unseen_to** (*Any*) – Default value to use when the original value is not present in the value_maps dict for the feature.
|
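A minimal usage sketch based on the parameters above; the toy DataFrame and its values are illustrative assumptions:
```
import pandas as pd
from fklearn.training.transformation import value_mapper
df = pd.DataFrame({"feature1": [1, 3, 10]})
map_fn, mapped_df, log = value_mapper(df,
                                      value_maps={"feature1": {1: 2, 3: 5}},
                                      ignore_unseen=True)
# With ignore_unseen=True the unmapped value 10 is kept as is.
```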
####### fklearn.training.unsupervised module[¶](#module-fklearn.training.unsupervised)
`fklearn.training.unsupervised.``isolation_forest_learner`[[source]](_modules/fklearn/training/unsupervised.html#isolation_forest_learner)[¶](#fklearn.training.unsupervised.isolation_forest_learner)
Fits an anomaly detection algorithm (Isolation Forest) to the dataset.
| Parameters: | * **df** (*pandas.DataFrame*) – A Pandas’ DataFrame with features and target columns.
The model will be trained to predict the target column from the features.
* **features** (*list of str*) – A list of column names that are used as features for the model. All these names should be in df.
* **params** (*dict*) – The IsolationForest parameters in the format {“par_name”: param}. See:
<http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html>
* **prediction_column** (*str*) – The name of the column with the predictions from the model.
* **encode_extra_cols** (*bool* *(**default: True**)*) – If True, treats all columns in df with name pattern fklearn_feat__col==val as feature columns.
|
| Returns: | * **p** (*function pandas.DataFrame -> pandas.DataFrame*) – A function that when applied to a DataFrame with the same columns as df returns a new DataFrame with a new column with predictions from the model.
* **new_df** (*pandas.DataFrame*) – A df-like DataFrame with the same columns as the input df plus a column with predictions from the model.
* **log** (*dict*) – A log-like Dict that stores information of the Isolation Forest model.
|
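A minimal usage sketch based on the parameters above; the toy DataFrame and the IsolationForest parameter values are illustrative assumptions:
```
import pandas as pd
from fklearn.training.unsupervised import isolation_forest_learner
df = pd.DataFrame({"x1": [0.1, 0.2, 0.15, 9.0],
                   "x2": [1.0, 1.1, 0.9, 12.0]})
# Scores each row with an anomaly score; the last row is an obvious outlier here.
predict_fn, scored_df, log = isolation_forest_learner(df,
                                                      features=["x1", "x2"],
                                                      params={"n_estimators": 50})
```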
####### fklearn.training.utils module[¶](#module-fklearn.training.utils)
`fklearn.training.utils.``expand_features_encoded`(*df: pandas.core.frame.DataFrame, features: List[str]*) → List[str][[source]](_modules/fklearn/training/utils.html#expand_features_encoded)[¶](#fklearn.training.utils.expand_features_encoded)
Expand the list of features to include features created automatically by fklearn in encoders such as Onehot-encoder.
All features created by fklearn have the naming pattern fklearn_feat__col==val.
This function looks for these names in the DataFrame columns, checks if they can be derived from any of the features listed in features, adds them to the new list of features and removes the original names from the list.
E.g. df has columns col1 with values 0 and 1 and col2. After Onehot-encoding col1, df will have columns fklearn_feat__col1==0, fklearn_feat__col1==1, col2.
This function will then add fklearn_feat__col1==0 and fklearn_feat__col1==1 to the list of features and remove col1. If for some reason df also has another column fklearn_feat__col3==x but col3 is not in the list of features, this column will not be added.
| Parameters: | * **df** (*pd.DataFrame*) – A Pandas’ DataFrame with all features.
* **features** (*list of str*) – The original list of features.
|
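A small sketch of the expansion described above, assuming col1 was one-hot encoded by an fklearn encoder (the toy data is hypothetical; the column names follow the fklearn_feat__col==val pattern):
```
import pandas as pd
from fklearn.training.utils import expand_features_encoded

df = pd.DataFrame({
    "fklearn_feat__col1==0": [1, 0],
    "fklearn_feat__col1==1": [0, 1],
    "col2": [3.0, 4.0],
})

# "col1" is replaced by its encoded columns; "col2" stays in the list untouched
features = expand_features_encoded(df, features=["col1", "col2"])
```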
`fklearn.training.utils.``log_learner_time`[[source]](_modules/fklearn/training/utils.html#log_learner_time)[¶](#fklearn.training.utils.log_learner_time)
`fklearn.training.utils.``print_learner_run`[[source]](_modules/fklearn/training/utils.html#print_learner_run)[¶](#fklearn.training.utils.print_learner_run)
####### Module contents[¶](#module-fklearn.training)
###### fklearn.tuning package[¶](#fklearn-tuning-package)
####### Submodules[¶](#submodules)
####### fklearn.tuning.model_agnostic_fc module[¶](#module-fklearn.tuning.model_agnostic_fc)
`fklearn.tuning.model_agnostic_fc.``correlation_feature_selection`[[source]](_modules/fklearn/tuning/model_agnostic_fc.html#correlation_feature_selection)[¶](#fklearn.tuning.model_agnostic_fc.correlation_feature_selection)
Feature selection based on correlation
| Parameters: | * **train_set** (*pd.DataFrame*) – A Pandas’ DataFrame with the training data
* **features** (*list of str*) – The list of features to consider when dropping with correlation
* **threshold** (*float*) – The correlation threshold. Will drop features with correlation equal to or above this threshold
|
| Returns: | |
| Return type: | log with feature correlation, features to drop and final features |
`fklearn.tuning.model_agnostic_fc.``variance_feature_selection`[[source]](_modules/fklearn/tuning/model_agnostic_fc.html#variance_feature_selection)[¶](#fklearn.tuning.model_agnostic_fc.variance_feature_selection)
Feature selection based on variance
| Parameters: | * **train_set** (*pd.DataFrame*) – A Pandas’ DataFrame with the training data
* **features** (*list of str*) – The list of features to consider when dropping with variance
* **threshold** (*float*) – The variance threshold. Will drop features with variance equal to or below this threshold
|
| Returns: | |
| Return type: | log with feature variance, features to drop and final features |
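A minimal sketch of both selectors above (the toy data is hypothetical; the exact structure of the returned log is not spelled out here, so treat it as a dict to inspect):
```
import pandas as pd
from fklearn.tuning.model_agnostic_fc import (
    correlation_feature_selection,
    variance_feature_selection,
)

train_set = pd.DataFrame({"a": [1, 2, 3, 4], "b": [2, 4, 6, 8], "c": [1, 1, 1, 1]})

# "a" and "b" are perfectly correlated, so one of them should be flagged for dropping;
# "c" has zero variance, so it should be flagged by the variance selector
corr_log = correlation_feature_selection(train_set, features=["a", "b", "c"], threshold=0.9)
var_log = variance_feature_selection(train_set, features=["a", "b", "c"], threshold=0.0)
```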
####### fklearn.tuning.parameter_tuners module[¶](#fklearn-tuning-parameter-tuners-module)
####### fklearn.tuning.samplers module[¶](#module-fklearn.tuning.samplers)
`fklearn.tuning.samplers.``remove_by_feature_importance`[[source]](_modules/fklearn/tuning/samplers.html#remove_by_feature_importance)[¶](#fklearn.tuning.samplers.remove_by_feature_importance)
Performs feature selection based on feature importance
| Parameters: | * **log** (*dict*) – A log-like dictionary of evaluations.
* **num_removed_by_step** (*int* *(**default 5**)*) – The number of features to remove
|
| Returns: | **features** – The remaining features after removing based on feature importance |
| Return type: | list of str |
`fklearn.tuning.samplers.``remove_by_feature_shuffling`[[source]](_modules/fklearn/tuning/samplers.html#remove_by_feature_shuffling)[¶](#fklearn.tuning.samplers.remove_by_feature_shuffling)
Performs feature selection based on the evaluation of the test set vs the evaluation of the test set with randomly shuffled features
| Parameters: | * **log** (*LogType*) – A log-like dictionary of evaluations.
* **predict_fn** (*function pandas.DataFrame -> pandas.DataFrame*) – A partially defined predictor that takes a DataFrame and returns the predicted score for this dataframe
* **eval_fn** (*function DataFrame -> log dict*) – A partially defined evaluation function that takes a dataset with prediction and returns the evaluation logs.
* **eval_data** (*pandas.DataFrame*) – Data used to evaluate the model after shuffling
* **extractor** (*function str -> float*) – An extractor that takes a string and returns the value of that string in a dict
* **metric_name** (*str*) – String with the name of the column that refers to the metric column to be extracted
* **max_removed_by_step** (*int* *(**default 5**)*) – The maximum number of features to remove. Only the max_removed_by_step least important features are considered. If speed_up_by_importance=True it will first filter the least relevant features and shuffle only those. If speed_up_by_importance=False it will shuffle all features and drop the last max_removed_by_step in terms of PIMP. In both cases, features are only removed if the drop in performance stays within the defined threshold.
* **threshold** (*float* *(**default 0.005**)*) – Threshold for model performance comparison
* **speed_up_by_importance** (*bool* *(**default True**)*) – Whether to narrow the search by looking at feature importance first before computing PIMP importance. If True, it will only shuffle the top max_removed_by_step features in terms of feature importance.
* **parallel** (*bool* *(**default False**)*) –
* **nthread** (*int* *(**default 1**)*) –
* **seed** (*int* *(**default 7**)*) – Random seed
|
| Returns: | **features** – The remaining features after removing based on feature importance |
| Return type: | list of str |
`fklearn.tuning.samplers.``remove_features_subsets`[[source]](_modules/fklearn/tuning/samplers.html#remove_features_subsets)[¶](#fklearn.tuning.samplers.remove_features_subsets)
Performs feature selection based on the best performing model out of several trained models
| Parameters: | * **log_list** (*list of dict*) – A list of log-like lists of evaluation dictionaries.
* **extractor** (*function string -> float*) – An extractor that takes a string and returns the value of that string in a dict
* **metric_name** (*str*) – String with the name of the column that refers to the metric column to be extracted
* **num_removed_by_step** (*int* *(**default 1**)*) – The number of features to remove
|
| Returns: | **keys** – The remaining keys of feature sets after choosing the current best subset |
| Return type: | list of str |
####### fklearn.tuning.selectors module[¶](#fklearn-tuning-selectors-module)
####### fklearn.tuning.stoppers module[¶](#module-fklearn.tuning.stoppers)
`fklearn.tuning.stoppers.``aggregate_stop_funcs`(**stop_funcs*) → Callable[[List[List[Dict[str, Any]]]], bool][[source]](_modules/fklearn/tuning/stoppers.html#aggregate_stop_funcs)[¶](#fklearn.tuning.stoppers.aggregate_stop_funcs)
Aggregate stop functions
| Parameters: | **stop_funcs** (*list of function list of dict -> bool*) – |
| Returns: | **l** – Function that performs the OR logic of all stop functions applied to the logs |
| Return type: | function logs -> bool |
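A hedged sketch of combining stoppers, assuming the stoppers below are curried so that passing only their keyword arguments yields a logs -> bool function:
```
from fklearn.tuning.stoppers import (
    aggregate_stop_funcs,
    stop_by_iter_num,
    stop_by_num_features,
)

# Stop when either 10 iterations have run OR fewer than 20 features remain
stop_fn = aggregate_stop_funcs(
    stop_by_iter_num(iter_limit=10),
    stop_by_num_features(min_num_features=20),
)
# stop_fn(logs) -> bool, where logs is a list of log-like lists of dictionaries
```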
`fklearn.tuning.stoppers.``stop_by_iter_num`[[source]](_modules/fklearn/tuning/stoppers.html#stop_by_iter_num)[¶](#fklearn.tuning.stoppers.stop_by_iter_num)
Checks the logs to see if feature selection should stop
| Parameters: | * **logs** (*list of list of dict*) – A list of log-like lists of evaluation dictionaries.
* **iter_limit** (*int* *(**default 50**)*) – Limit of Iterations
|
| Returns: | **stop** – A boolean whether to stop recursion or not |
| Return type: | bool |
`fklearn.tuning.stoppers.``stop_by_no_improvement`[[source]](_modules/fklearn/tuning/stoppers.html#stop_by_no_improvement)[¶](#fklearn.tuning.stoppers.stop_by_no_improvement)
Checks the logs to see if feature selection should stop
| Parameters: | * **logs** (*list of list of dict*) – A list of log-like lists of evaluation dictionaries.
* **extractor** (*function str -> float*) – An extractor that takes a string and returns the value of that string in a dict
* **metric_name** (*str*) – String with the name of the column that refers to the metric column to be extracted
* **early_stop** (*int* *(**default 3**)*) – Number of iterations without improvement before stopping
* **threshold** (*float* *(**default 0.001**)*) – Threshold for model performance comparison
|
| Returns: | **stop** – A boolean whether to stop recursion or not |
| Return type: | bool |
`fklearn.tuning.stoppers.``stop_by_no_improvement_parallel`[[source]](_modules/fklearn/tuning/stoppers.html#stop_by_no_improvement_parallel)[¶](#fklearn.tuning.stoppers.stop_by_no_improvement_parallel)
Checks the logs to see if feature selection should stop
| Parameters: | * **logs** (*list of list of dict*) – A list of log-like lists of evaluation dictionaries.
* **extractor** (*function str -> float*) – An extractor that takes a string and returns the value of that string in a dict
* **metric_name** (*str*) – String with the name of the column that refers to the metric column to be extracted
* **early_stop** (*int* *(**default 3**)*) – Number of iterations without improvements before stopping
* **threshold** (*float* *(**default 0.001**)*) – Threshold for model performance comparison
|
| Returns: | **stop** – A boolean whether to stop recursion or not |
| Return type: | bool |
`fklearn.tuning.stoppers.``stop_by_num_features`[[source]](_modules/fklearn/tuning/stoppers.html#stop_by_num_features)[¶](#fklearn.tuning.stoppers.stop_by_num_features)
Checks the logs to see if feature selection should stop
| Parameters: | * **logs** (*list of list of dict*) – A list of log-like lists of evaluation dictionaries.
* **min_num_features** (*int* *(**default 50**)*) – The minimum number of features the model can have before stopping
|
| Returns: | **stop** – A boolean whether to stop recursion or not |
| Return type: | bool |
`fklearn.tuning.stoppers.``stop_by_num_features_parallel`[[source]](_modules/fklearn/tuning/stoppers.html#stop_by_num_features_parallel)[¶](#fklearn.tuning.stoppers.stop_by_num_features_parallel)
Selects the best log out of a list to see if feature selection should stop
| Parameters: | * **logs** (*list of list of list of dict*) – A list of log-like lists of evaluation dictionaries.
* **extractor** (*function str -> float*) – An extractor that takes a string and returns the value of that string in a dict
* **metric_name** (*str*) – String with the name of the column that refers to the metric column to be extracted
* **min_num_features** (*int* *(**default 50**)*) – The minimum number of features the model can have before stopping
|
| Returns: | **stop** – A boolean whether to stop recursion or not |
| Return type: | bool |
####### fklearn.tuning.utils module[¶](#module-fklearn.tuning.utils)
`fklearn.tuning.utils.``gen_dict_extract`(*key: str*, *obj: Dict*) → Generator[Any, None, None][[source]](_modules/fklearn/tuning/utils.html#gen_dict_extract)[¶](#fklearn.tuning.utils.gen_dict_extract)
`fklearn.tuning.utils.``gen_key_avgs_from_dicts`(*obj: List*) → Dict[str, float][[source]](_modules/fklearn/tuning/utils.html#gen_key_avgs_from_dicts)[¶](#fklearn.tuning.utils.gen_key_avgs_from_dicts)
`fklearn.tuning.utils.``gen_key_avgs_from_iteration`(*key: str*, *log: Dict*) → Any[[source]](_modules/fklearn/tuning/utils.html#gen_key_avgs_from_iteration)[¶](#fklearn.tuning.utils.gen_key_avgs_from_iteration)
`fklearn.tuning.utils.``gen_key_avgs_from_logs`(*key: str, logs: List[Dict]*) → Dict[str, float][[source]](_modules/fklearn/tuning/utils.html#gen_key_avgs_from_logs)[¶](#fklearn.tuning.utils.gen_key_avgs_from_logs)
`fklearn.tuning.utils.``gen_validator_log`[[source]](_modules/fklearn/tuning/utils.html#gen_validator_log)[¶](#fklearn.tuning.utils.gen_validator_log)
`fklearn.tuning.utils.``get_avg_metric_from_extractor`[[source]](_modules/fklearn/tuning/utils.html#get_avg_metric_from_extractor)[¶](#fklearn.tuning.utils.get_avg_metric_from_extractor)
`fklearn.tuning.utils.``get_best_performing_log`(*log_list: List[Dict[str, Any]], extractor: Callable[[str], float], metric_name: str*) → Dict[[source]](_modules/fklearn/tuning/utils.html#get_best_performing_log)[¶](#fklearn.tuning.utils.get_best_performing_log)
`fklearn.tuning.utils.``get_used_features`(*log: Dict*) → List[str][[source]](_modules/fklearn/tuning/utils.html#get_used_features)[¶](#fklearn.tuning.utils.get_used_features)
`fklearn.tuning.utils.``order_feature_importance_avg_from_logs`(*log: Dict*) → List[str][[source]](_modules/fklearn/tuning/utils.html#order_feature_importance_avg_from_logs)[¶](#fklearn.tuning.utils.order_feature_importance_avg_from_logs)
####### Module contents[¶](#module-fklearn.tuning)
###### fklearn.types package[¶](#fklearn-types-package)
####### Submodules[¶](#submodules)
####### fklearn.types.types module[¶](#module-fklearn.types.types)
####### Module contents[¶](#module-fklearn.types)
###### fklearn.validation package[¶](#fklearn-validation-package)
####### Submodules[¶](#submodules)
####### fklearn.validation.evaluators module[¶](#module-fklearn.validation.evaluators)
`fklearn.validation.evaluators.``auc_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#auc_evaluator)[¶](#fklearn.validation.evaluators.auc_evaluator)
Computes the ROC AUC score, given true label and prediction scores.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **prediction_column** (*Strings*) – The name of the column in test_data with the prediction scores.
* **target_column** (*String*) – The name of the column in test_data with the binary target.
* **weight_column** (*String* *(**default=None**)*) – The name of the column in test_data with the sample weights.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the ROC AUC Score |
| Return type: | dict |
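A minimal usage sketch (the toy data and column names are hypothetical, and the exact log key is an assumption; inspect the returned dict):
```
import pandas as pd
from fklearn.validation.evaluators import auc_evaluator

test_data = pd.DataFrame({"target": [0, 1, 0, 1], "score": [0.1, 0.8, 0.4, 0.9]})

# Returns a log-like dict with the ROC AUC, e.g. {"auc_evaluator__target": 1.0}
log = auc_evaluator(test_data, prediction_column="score", target_column="target")
```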
`fklearn.validation.evaluators.``brier_score_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#brier_score_evaluator)[¶](#fklearn.validation.evaluators.brier_score_evaluator)
Computes the Brier score, given true label and prediction scores.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **prediction_column** (*Strings*) – The name of the column in test_data with the prediction scores.
* **target_column** (*String*) – The name of the column in test_data with the binary target.
* **weight_column** (*String* *(**default=None**)*) – The name of the column in test_data with the sample weights.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – The name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the Brier score. |
| Return type: | dict |
`fklearn.validation.evaluators.``combined_evaluators`[[source]](_modules/fklearn/validation/evaluators.html#combined_evaluators)[¶](#fklearn.validation.evaluators.combined_evaluators)
Combines partially applied evaluation functions.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame to apply the evaluators on
* **evaluators** (*List*) – List of evaluator functions
|
| Returns: | **log** – A log-like dictionary with the results of all evaluators |
| Return type: | dict |
`fklearn.validation.evaluators.``correlation_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#correlation_evaluator)[¶](#fklearn.validation.evaluators.correlation_evaluator)
Computes the Pearson correlation between prediction and target.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction.
* **prediction_column** (*Strings*) – The name of the column in test_data with the prediction.
* **target_column** (*String*) – The name of the column in test_data with the continuous target.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the Pearson correlation |
| Return type: | dict |
`fklearn.validation.evaluators.``expected_calibration_error_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#expected_calibration_error_evaluator)[¶](#fklearn.validation.evaluators.expected_calibration_error_evaluator)
Computes the expected calibration error (ECE), given true label and prediction scores.
See “On Calibration of Modern Neural Networks”(<https://arxiv.org/abs/1706.04599>) for more information.
The ECE is the distance between the observed (empirical) frequency of the actuals and the predicted probabilities,
for a given choice of bins.
Perfect calibration results in a score of 0.
For example, if for the bin [0, 0.1] we have the three data points:
1. prediction: 0.1, actual: 0
2. prediction: 0.05, actual: 1
3. prediction: 0.0, actual: 0
Then the predicted average is (0.1 + 0.05 + 0.00)/3 = 0.05, and the empirical frequency is (0 + 1 + 0)/3 = 1/3.
Therefore, the distance for this bin is:
```
|1/3 - 0.05| ~= 0.28.
```
Graphical intuition:
```
Actuals (empirical frequency between 0 and 1)
| *
| *
| *
______ Predictions (probabilities between 0 and 1)
```
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **prediction_column** (*Strings*) – The name of the column in test_data with the prediction scores.
* **target_column** (*String*) – The name of the column in test_data with the binary target.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – The name of the evaluator as it will appear in the logs.
* **n_bins** (*Int* *(**default=100**)*) – The number of bins.
This is a trade-off between the number of points in each bin and the probability range they span.
You want a small enough range that still contains a significant number of points for the distance to work.
* **bin_choice** (*String* *(**default="count"**)*) – Two possibilities:
“count” for equally populated bins (e.g. uses pandas.qcut for the bins)
“prob” for equally spaced probabilities (e.g. uses pandas.cut for the bins),
with the distance weighted by the number of samples in each bin.
|
| Returns: | **log** – A log-like dictionary with the expected calibration error. |
| Return type: | dict |
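Reproducing the single-bin arithmetic from the example above as a quick sanity check (plain Python, not the evaluator itself):
```
# Three points falling in the [0, 0.1] bin
predictions = [0.1, 0.05, 0.0]
actuals = [0, 1, 0]

predicted_avg = sum(predictions) / len(predictions)   # 0.05
empirical_freq = sum(actuals) / len(actuals)          # 1/3
bin_distance = abs(empirical_freq - predicted_avg)    # ~0.28
```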
`fklearn.validation.evaluators.``exponential_coefficient_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#exponential_coefficient_evaluator)[¶](#fklearn.validation.evaluators.exponential_coefficient_evaluator)
Computes the exponential coefficient between prediction and target. Finds a1 in the following equation: target = exp(a0 + a1 * prediction)
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction.
* **prediction_column** (*Strings*) – The name of the column in test_data with the prediction.
* **target_column** (*String*) – The name of the column in test_data with the continuous target.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the exponential coefficient |
| Return type: | dict |
`fklearn.validation.evaluators.``fbeta_score_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#fbeta_score_evaluator)[¶](#fklearn.validation.evaluators.fbeta_score_evaluator)
Computes the F-beta score, given true label and prediction scores.
| Parameters: | * **test_data** (*pandas.DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **threshold** (*float*) –
A threshold for the prediction column above which samples will be classified as 1
* **beta** (*float*) – The beta parameter determines the weight of precision in the combined score.
beta < 1 lends more weight to precision, while beta > 1 favors recall
(beta -> 0 considers only precision, beta -> inf only recall).
* **prediction_column** (*str*) – The name of the column in test_data with the prediction scores.
* **target_column** (*str*) – The name of the column in test_data with the binary target.
* **weight_column** (*String* *(**default=None**)*) – The name of the column in test_data with the sample weights.
* **eval_name** (*str**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the F-beta score |
| Return type: | dict |
`fklearn.validation.evaluators.``generic_sklearn_evaluator`(*name_prefix: str, sklearn_metric: Callable[[...], float]*) → Callable[[...], Dict[str, Union[float, Dict]]][[source]](_modules/fklearn/validation/evaluators.html#generic_sklearn_evaluator)[¶](#fklearn.validation.evaluators.generic_sklearn_evaluator)
Returns an evaluator built from a metric from sklearn.metrics
| Parameters: | * **name_prefix** (*str*) – The default name of the evaluator will be name_prefix + target_column.
* **sklearn_metric** (*Callable*) – Metric function from sklearn.metrics. It should take as parameters y_true, y_score, kwargs.
|
| Returns: | **eval_fn** – An evaluator function that uses the provided metric |
| Return type: | Callable |
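A hedged sketch of building an evaluator from an sklearn metric; the returned callable is assumed to follow the same calling convention as the other evaluators in this module:
```
from sklearn.metrics import roc_auc_score
from fklearn.validation.evaluators import generic_sklearn_evaluator

# The log key of the resulting evaluator will be name_prefix + target_column
my_auc_evaluator = generic_sklearn_evaluator("my_auc_evaluator__", roc_auc_score)

# log = my_auc_evaluator(test_data, prediction_column="score", target_column="target")
```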
`fklearn.validation.evaluators.``hash_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#hash_evaluator)[¶](#fklearn.validation.evaluators.hash_evaluator)
Computes the hash of a pandas dataframe, filtered by hash columns. The purpose is to uniquely identify a dataframe, to be able to check if two dataframes are equal or not.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame to be hashed.
* **hash_columns** (*List**[**str**]**,* *optional* *(**default=None**)*) – A list of column names to filter the dataframe before hashing. If None,
it will hash the dataframe with all the columns
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
* **consider_index** (*bool**,* *optional* *(**default=False**)*) – If true, will consider the index of the dataframe to calculate the hash.
The default behaviour will ignore the index and just hash the content of the features.
|
| Returns: | **log** – A log-like dictionary with the hash of the dataframe |
| Return type: | dict |
`fklearn.validation.evaluators.``linear_coefficient_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#linear_coefficient_evaluator)[¶](#fklearn.validation.evaluators.linear_coefficient_evaluator)
Computes the linear coefficient from regressing the outcome on the prediction
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction.
* **prediction_column** (*Strings*) – The name of the column in test_data with the prediction.
* **target_column** (*String*) – The name of the column in test_data with the continuous target.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the linear coefficient from regressing the outcome on the prediction |
| Return type: | dict |
`fklearn.validation.evaluators.``logistic_coefficient_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#logistic_coefficient_evaluator)[¶](#fklearn.validation.evaluators.logistic_coefficient_evaluator)
Computes the logistic coefficient between prediction and target. Finds a1 in the following equation: target = logistic(a0 + a1 * prediction)
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction.
* **prediction_column** (*Strings*) – The name of the column in test_data with the prediction.
* **target_column** (*String*) – The name of the column in test_data with the continuous target.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the logistic coefficient |
| Return type: | dict |
`fklearn.validation.evaluators.``logloss_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#logloss_evaluator)[¶](#fklearn.validation.evaluators.logloss_evaluator)
Computes the logloss score, given true label and prediction scores.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **prediction_column** (*Strings*) – The name of the column in test_data with the prediction scores.
* **target_column** (*String*) – The name of the column in test_data with the binary target.
* **weight_column** (*String* *(**default=None**)*) – The name of the column in test_data with the sample weights.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the logloss score. |
| Return type: | dict |
`fklearn.validation.evaluators.``mean_prediction_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#mean_prediction_evaluator)[¶](#fklearn.validation.evaluators.mean_prediction_evaluator)
Computes mean for the specified column.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with a column to compute the mean
* **prediction_column** (*Strings*) – The name of the column in test_data to compute the mean.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the column mean |
| Return type: | dict |
`fklearn.validation.evaluators.``mse_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#mse_evaluator)[¶](#fklearn.validation.evaluators.mse_evaluator)
Computes the Mean Squared Error, given true label and predictions.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and predictions.
* **prediction_column** (*Strings*) – The name of the column in test_data with the predictions.
* **target_column** (*String*) – The name of the column in test_data with the continuous target.
* **weight_column** (*String* *(**default=None**)*) – The name of the column in test_data with the sample weights.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the MSE Score |
| Return type: | dict |
`fklearn.validation.evaluators.``ndcg_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#ndcg_evaluator)[¶](#fklearn.validation.evaluators.ndcg_evaluator)
Computes the Normalized Discounted Cumulative Gain (NDCG) between the original and predicted rankings:
<https://en.wikipedia.org/wiki/Discounted_cumulative_gain>
| Parameters: | * **test_data** (*Pandas DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **prediction_column** (*String*) – The name of the column in test_data with the prediction scores.
* **target_column** (*String*) – The name of the column in test_data with the target.
* **k** (*int**,* *optional* *(**default=None**)*) – The size of the rank used to compute the NDCG score (the highest k scores). If None, use all outputs.
Otherwise, this value must be in the range [1, len(test_data[prediction_column])].
* **exponential_gain** (*bool* *(**default=True**)*) – If False, use the linear gain. The exponential gain places a stronger emphasis on retrieving relevant items. If the relevance of these items takes binary values in {0,1}, the two approaches are equivalent and reduce to the linear case.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – The name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the NDCG score, float in [0,1]. |
| Return type: | dict |
`fklearn.validation.evaluators.``permutation_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#permutation_evaluator)[¶](#fklearn.validation.evaluators.permutation_evaluator)
Permutation importance evaluator.
It works by shuffling one or more features in the test_data dataframe,
getting the predictions with predict_fn, and evaluating the results with eval_fn.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target, predictions and features.
* **predict_fn** (*function DataFrame -> DataFrame*) – Function that receives the input dataframe and returns a dataframe with the pipeline predictions.
* **eval_fn** (*function DataFrame -> Log Dict*) – A partially applied evaluation function.
* **baseline** (*bool*) – Also evaluates the predict_fn on an unshuffled baseline.
* **features** (*List of strings*) – The features to shuffle and then evaluate eval_fn on the shuffled results.
The default case shuffles all dataframe columns.
* **shuffle_all_at_once** (*bool*) – Shuffle all features at once instead of one per turn.
* **random_state** (*int*) – Seed to be used by the random number generator.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with evaluation results by feature shuffle.
Use the permutation_extractor for better visualization of the results. |
| Return type: | dict |
`fklearn.validation.evaluators.``pr_auc_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#pr_auc_evaluator)[¶](#fklearn.validation.evaluators.pr_auc_evaluator)
Computes the PR AUC score, given true label and prediction scores.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **prediction_column** (*Strings*) – The name of the column in test_data with the prediction scores.
* **target_column** (*String*) – The name of the column in test_data with the binary target.
* **weight_column** (*String* *(**default=None**)*) – The name of the column in test_data with the sample weights.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | |
| Return type: | A log-like dictionary with the PR AUC Score |
`fklearn.validation.evaluators.``precision_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#precision_evaluator)[¶](#fklearn.validation.evaluators.precision_evaluator)
Computes the precision score, given true label and prediction scores.
| Parameters: | * **test_data** (*pandas.DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **threshold** (*float*) –
A threshold for the prediction column above which samples will be classified as 1
* **prediction_column** (*str*) – The name of the column in test_data with the prediction scores.
* **target_column** (*str*) – The name of the column in test_data with the binary target.
* **weight_column** (*String* *(**default=None**)*) – The name of the column in test_data with the sample weights.
* **eval_name** (*str**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the Precision Score |
| Return type: | dict |
`fklearn.validation.evaluators.``r2_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#r2_evaluator)[¶](#fklearn.validation.evaluators.r2_evaluator)
Computes the R2 score, given true label and predictions.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction.
* **prediction_column** (*Strings*) – The name of the column in test_data with the prediction.
* **target_column** (*String*) – The name of the column in test_data with the continuous target.
* **weight_column** (*String* *(**default=None**)*) – The name of the column in test_data with the sample weights.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the R2 Score |
| Return type: | dict |
`fklearn.validation.evaluators.``recall_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#recall_evaluator)[¶](#fklearn.validation.evaluators.recall_evaluator)
Computes the recall score, given true label and prediction scores.
| Parameters: | * **test_data** (*pandas.DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **threshold** (*float*) –
A threshold for the prediction column above which samples will be classified as 1
* **prediction_column** (*str*) – The name of the column in test_data with the prediction scores.
* **target_column** (*str*) – The name of the column in test_data with the binary target.
* **weight_column** (*String* *(**default=None**)*) – The name of the column in test_data with the sample weights.
* **eval_name** (*str**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the Recall score |
| Return type: | dict |
`fklearn.validation.evaluators.``roc_auc_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#roc_auc_evaluator)[¶](#fklearn.validation.evaluators.roc_auc_evaluator)
Computes the ROC AUC score, given true label and prediction scores.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction scores.
* **prediction_column** (*Strings*) – The name of the column in test_data with the prediction scores.
* **target_column** (*String*) – The name of the column in test_data with the binary target.
* **weight_column** (*String* *(**default=None**)*) – The name of the column in test_data with the sample weights.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the ROC AUC Score |
| Return type: | dict |
`fklearn.validation.evaluators.``spearman_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#spearman_evaluator)[¶](#fklearn.validation.evaluators.spearman_evaluator)
Computes the Spearman correlation between prediction and target.
The Spearman correlation evaluates the rank order between two variables:
<https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient>
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and prediction.
* **prediction_column** (*Strings*) – The name of the column in test_data with the prediction.
* **target_column** (*String*) – The name of the column in test_data with the continuous target.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with the Spearman correlation |
| Return type: | dict |
`fklearn.validation.evaluators.``split_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#split_evaluator)[¶](#fklearn.validation.evaluators.split_evaluator)
Splits the dataset into the categories in split_col and evaluates model performance in each split. Useful when you believe the model performance differs in a sub-population defined by split_col.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and predictions.
* **eval_fn** (*function DataFrame -> Log Dict*) – A partially applied evaluation function.
* **split_col** (*String*) – The name of the column in test_data to split by.
* **split_values** (*Array**,* *optional* *(**default=None**)*) – An Array to split by. If not provided, test_data[split_col].unique()
will be used.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with evaluation results by split. |
| Return type: | dict |
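A sketch of evaluating per split, assuming r2_evaluator's default prediction and target column names ("prediction" and "target"); the toy data is hypothetical:
```
import pandas as pd
from fklearn.validation.evaluators import r2_evaluator, split_evaluator

test_data = pd.DataFrame({
    "region": ["north", "north", "south", "south"],
    "target": [1.0, 2.0, 3.0, 4.0],
    "prediction": [1.1, 1.9, 2.8, 4.2],
})

# Compute the R2 score separately for each value of "region"
log = split_evaluator(test_data, eval_fn=r2_evaluator, split_col="region")
```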
`fklearn.validation.evaluators.``temporal_split_evaluator`[[source]](_modules/fklearn/validation/evaluators.html#temporal_split_evaluator)[¶](#fklearn.validation.evaluators.temporal_split_evaluator)
Splits the dataset into temporal categories by time_col and evaluates model performance in each split.
The splits are implicitly defined by the time_format.
For example, for the default time format (“%Y-%m”), we will split by year and month.
| Parameters: | * **test_data** (*Pandas' DataFrame*) – A Pandas’ DataFrame with target and predictions.
* **eval_fn** (*function DataFrame -> Log Dict*) – A partially applied evaluation function.
* **time_col** (*string*) – The name of the column in test_data to split by.
* **time_format** (*string*) – The way to format the time_col into temporal categories.
* **split_values** (*Array of string**,* *optional* *(**default=None**)*) – An array of date formatted strings to split the evaluation by.
If not provided, all unique formatted dates will be used.
* **eval_name** (*String**,* *optional* *(**default=None**)*) – the name of the evaluator as it will appear in the logs.
|
| Returns: | **log** – A log-like dictionary with evaluation results by split. |
| Return type: | dict |
####### fklearn.validation.perturbators module[¶](#module-fklearn.validation.perturbators)
`fklearn.validation.perturbators.``nullify`[[source]](_modules/fklearn/validation/perturbators.html#nullify)[¶](#fklearn.validation.perturbators.nullify)
Replaces a percentage of values in the input Series with np.nan
| Parameters: | * **col** (*pd.Series*) – A Pandas’ Series
* **perc** (*float*) – Percentage of values to be replaced by np.nan
|
| Returns: | |
| Return type: | A transformed pd.Series |
`fklearn.validation.perturbators.``perturbator`[[source]](_modules/fklearn/validation/perturbators.html#perturbator)[¶](#fklearn.validation.perturbators.perturbator)
Transforms specific columns of a dataset according to an artificial corruption function.
| Parameters: | * **data** (*pandas.DataFrame*) – A Pandas’ DataFrame
* **cols** (*List**[**str**]*) – A list of columns to apply the corruption function
* **corruption_fn** (*function pandas.Series -> pandas.Series*) – An arbitrary corruption function
|
| Returns: | |
| Return type: | A transformed dataset |
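A minimal sketch chaining a perturbator with the nullify corruption above (nullify is assumed to be curried, so fixing perc yields a Series -> Series function; the toy data is hypothetical):
```
import pandas as pd
from fklearn.validation.perturbators import nullify, perturbator

data = pd.DataFrame({"x1": [1.0, 2.0, 3.0, 4.0], "x2": [10.0, 20.0, 30.0, 40.0]})

# Replace roughly half of the values of x1 with np.nan, leaving x2 untouched
corrupted = perturbator(data, cols=["x1"], corruption_fn=nullify(perc=0.5))
```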
`fklearn.validation.perturbators.``random_noise`[[source]](_modules/fklearn/validation/perturbators.html#random_noise)[¶](#fklearn.validation.perturbators.random_noise)
Fits a Gaussian to the column, then samples from it and adds the noise to each entry, scaled by a magnification parameter
| Parameters: | * **col** (*pd.Series*) – A Pandas’ Series
* **mag** (*float*) – Multiplies the noise to control scaling
|
| Returns: | |
| Return type: | A transformed pd.Series |
`fklearn.validation.perturbators.``sample_columns`[[source]](_modules/fklearn/validation/perturbators.html#sample_columns)[¶](#fklearn.validation.perturbators.sample_columns)
Helper function that randomly picks a percentage of the columns
| Parameters: | * **data** (*pd.DataFrame*) – A Pandas’ DataFrame
* **perc** (*float*) – Percentage of columns to be sampled
|
| Returns: | |
| Return type: | A list of column names |
`fklearn.validation.perturbators.``shift_mu`[[source]](_modules/fklearn/validation/perturbators.html#shift_mu)[¶](#fklearn.validation.perturbators.shift_mu)
Shifts the mean of a column by a given percentage
| Parameters: | * **col** (*pd.Series*) – A Pandas’ Series
* **perc** (*float*) – How much to shift the mean, as a percentage (can be negative)
|
| Returns: | |
| Return type: | A transformed pd.Series |
####### fklearn.validation.splitters module[¶](#module-fklearn.validation.splitters)
`fklearn.validation.splitters.``forward_stability_curve_time_splitter`[[source]](_modules/fklearn/validation/splitters.html#forward_stability_curve_time_splitter)[¶](#fklearn.validation.splitters.forward_stability_curve_time_splitter)
Splits the data into temporal buckets with both the training and testing folds moving forward.
The folds move forward by a fixed timedelta step.
Optionally, there can be a gap between the end of the training period and the start of the holdout period.
Similar to the stability curve time splitter, with the difference that the training period also moves forward with each fold.
The clearest use case is to evaluate a periodic re-training framework.
| Parameters: | * **train_data** (*pandas.DataFrame*) – A Pandas’ DataFrame that will be split for stability curve estimation.
* **training_time_start** (*datetime.datetime* *or* *str*) – Date for the start of the training period.
If move_training_start_with_steps is True, each step will increase this date by step.
* **training_time_end** (*datetime.datetime* *or* *str*) – Date for the end of the training period.
Each step increases this date by step.
* **time_column** (*str*) – The name of the Date column of train_data.
* **holdout_gap** (*datetime.timedelta*) – Timedelta of the gap between the end of the training period and the start of the validation period.
* **holdout_size** (*datetime.timedelta*) – Timedelta of the range between the start and the end of the holdout period.
* **step** (*datetime.timedelta*) – Timedelta that shifts both the training period and the holdout period by this value.
* **move_training_start_with_steps** (*bool*) – If True, the training start date will increase by step for each fold.
If False, the training start date remains fixed at the training_time_start value.
|
| Returns: | * **Folds** (*list of tuples*) – A list of folds. Each fold is a Tuple of arrays.
The first array in each tuple contains training indexes while the second array contains validation indexes.
* **logs** (*list of dict*) – A list of logs, one for each fold
|
`fklearn.validation.splitters.``k_fold_splitter`[[source]](_modules/fklearn/validation/splitters.html#k_fold_splitter)[¶](#fklearn.validation.splitters.k_fold_splitter)
Makes K random train/test split folds for cross validation.
The folds are made so that every sample is used at least once for evaluating and K-1 times for training.
If stratified is set to True, the split preserves the distribution of stratify_column
| Parameters: | * **train_data** (*pandas.DataFrame*) – A Pandas’ DataFrame that will be split into K-Folds for cross validation.
* **n_splits** (*int*) – The number of folds K for the K-Fold cross validation strategy.
* **random_state** (*int*) – Seed to be used by the random number generator.
* **stratify_column** (*string*) – Column name in train_data to be used for stratified split.
|
| Returns: | * **Folds** (*list of tuples*) – A list of folds. Each fold is a Tuple of arrays.
The first array in each tuple contains training indexes while the second array contains validation indexes.
* **logs** (*list of dict*) – A list of logs, one for each fold
|
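A minimal sketch of the K-fold splitter (the toy DataFrame is hypothetical; each fold pairs training indexes with validation indexes as described above):
```
import pandas as pd
from fklearn.validation.splitters import k_fold_splitter

train_data = pd.DataFrame({"x": range(10), "y": [0, 1] * 5})

# Three random folds; folds[i] holds the training and validation indexes of fold i
folds, logs = k_fold_splitter(train_data, n_splits=3, random_state=42)
```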
`fklearn.validation.splitters.``out_of_time_and_space_splitter`[[source]](_modules/fklearn/validation/splitters.html#out_of_time_and_space_splitter)[¶](#fklearn.validation.splitters.out_of_time_and_space_splitter)
Makes K grouped train/test split folds for cross validation.
The folds are made so that every ID is used at least once for evaluating and K-1 times for training. Also, for each fold, evaluation will always be out-of-ID and out-of-time.
| Parameters: | * **train_data** (*pandas.DataFrame*) – A Pandas’ DataFrame that will be split into K out-of-time and ID folds for cross validation.
* **n_splits** (*int*) – The number of folds K for the K-Fold cross validation strategy.
* **in_time_limit** (*str* *or* *datetime.datetime*) – A String representing the end time of the training data.
It should be in the same format as the Date column in train_data.
* **time_column** (*str*) – The name of the Date column of train_data.
* **space_column** (*str*) – The name of the ID column of train_data.
* **holdout_gap** (*datetime.timedelta*) – Timedelta of the gap between the end of the training period and the start of the validation period.
|
| Returns: | * **Folds** (*list of tuples*) – A list of folds. Each fold is a Tuple of arrays.
The first array in each tuple contains training indexes while the second array contains validation indexes.
* **logs** (*list of dict*) – A list of logs, one for each fold
|
`fklearn.validation.splitters.``reverse_time_learning_curve_splitter`[[source]](_modules/fklearn/validation/splitters.html#reverse_time_learning_curve_splitter)[¶](#fklearn.validation.splitters.reverse_time_learning_curve_splitter)
Splits the data into temporal buckets given by the specified frequency.
Uses a fixed out-of-ID and time hold out set for every fold.
Training size increases per fold, with less recent data being added in each fold.
Useful for inverse learning curve validation, that is, for seeing how hold out performance increases as the training size increases with less recent data.
| Parameters: | * **train_data** (*pandas.DataFrame*) – A Pandas’ DataFrame that will be split for inverse learning curve estimation.
* **time_column** (*str*) – The name of the Date column of train_data.
* **training_time_limit** (*str*) – The Date String for the end of the training period. Should be of the same format as time_column.
* **lower_time_limit** (*str*) – A Date String for the beginning of the training period. This allows limiting the learning curve from below, avoiding heavy computation with very old data.
* **freq** (*str*) – The temporal frequency.
See: <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>
* **holdout_gap** (*datetime.timedelta*) – Timedelta of the gap between the end of the training period and the start of the validation period.
* **min_samples** (*int*) – The minimum number of samples required in the split to keep the split.
|
| Returns: | * **Folds** (*list of tuples*) – A list of folds. Each fold is a Tuple of arrays.
The first array in each tuple contains training indexes while the second array contains validation indexes.
* **logs** (*list of dict*) – A list of logs, one for each fold
|
`fklearn.validation.splitters.``spatial_learning_curve_splitter`[[source]](_modules/fklearn/validation/splitters.html#spatial_learning_curve_splitter)[¶](#fklearn.validation.splitters.spatial_learning_curve_splitter)
Splits the data for a spatial learning curve. Progressively adds more and more examples to the training in order to verify the impact of having more data available on a validation set.
The validation set starts after the training set, with an optional time gap.
Similar to the temporal learning curves, but with spatial increases in the training set.
| Parameters: | * **train_data** (*pandas.DataFrame*) – A Pandas’ DataFrame that will be split for learning curve estimation.
* **space_column** (*str*) – The name of the ID column of train_data.
* **time_column** (*str*) – The name of the temporal column of train_data.
* **training_limit** (*datetime* *or* *str*) – The date limiting the training (after which the holdout begins).
* **holdout_gap** (*timedelta*) – The gap between the end of training and the start of the holdout.
If you have censored data, use a gap similar to the censor time.
* **train_percentages** (*list* *or* *tuple of floats*) – A list containing the percentages of IDs to use in the training.
Defaults to (0.25, 0.5, 0.75, 1.0). For example: For the default value,
there would be four model trainings, containing respectively 25%, 50%,
75%, and 100% of the IDs that are not part of the held out set.
* **random_state** (*int*) – A seed for the random number generator that shuffles the IDs.
|
| Returns: | * **Folds** (*list of tuples*) – A list of folds. Each fold is a Tuple of arrays.
The first array in each tuple contains training indexes while the second array contains validation indexes.
* **logs** (*list of dict*) – A list of logs, one for each fold
|
`fklearn.validation.splitters.``stability_curve_time_in_space_splitter`[[source]](_modules/fklearn/validation/splitters.html#stability_curve_time_in_space_splitter)[¶](#fklearn.validation.splitters.stability_curve_time_in_space_splitter)
Splits the data into temporal buckets given by the specified frequency.
Training set is fixed before hold out and uses a rolling window hold out set.
Each fold moves the hold out further into the future.
Useful to see how model performance degrades as the training data gets more outdated. Folds are made so that ALL IDs in the holdout also appear in the training set.
| Parameters: | * **train_data** (*pandas.DataFrame*) – A Pandas’ DataFrame that will be split for stability curve estimation.
* **training_time_limit** (*str*) – The Date String for the end of the testing period. Should be of the same format as time_column.
* **space_column** (*str*) – The name of the ID column of train_data.
* **time_column** (*str*) – The name of the Date column of train_data.
* **freq** (*str*) – The temporal frequency.
See: <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>
* **space_hold_percentage** (*float* *(**default=0.5**)*) – The proportion of hold out IDs.
* **random_state** (*int*) – A seed for the random number generator for ID sampling across train and hold out sets.
* **min_samples** (*int*) – The minimum number of samples required in the split to keep the split.
|
| Returns: | * **Folds** (*list of tuples*) – A list of folds. Each fold is a Tuple of arrays.
The first array in each tuple contains training indexes while the second array contains validation indexes.
* **logs** (*list of dict*) – A list of logs, one for each fold
|
`fklearn.validation.splitters.``stability_curve_time_space_splitter`[[source]](_modules/fklearn/validation/splitters.html#stability_curve_time_space_splitter)[¶](#fklearn.validation.splitters.stability_curve_time_space_splitter)
Splits the data into temporal buckets given by the specified frequency.
Training set is fixed before hold out and uses a rolling window hold out set.
Each fold moves the hold out further into the future.
Useful to see how model performance degrades as the training data gets more outdated. Folds are made so that NONE of the IDs in the holdout appears in the training set.
| Parameters: | * **train_data** (*pandas.DataFrame*) – A Pandas’ DataFrame that will be split for stability curve estimation.
* **training_time_limit** (*str*) – The Date String for the end of the testing period. Should be of the same format as time_column
* **space_column** (*str*) – The name of the ID column of train_data
* **time_column** (*str*) – The name of the Date column of train_data
* **freq** (*str*) – The temporal frequency.
See: <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>
* **space_hold_percentage** (*float*) – The proportion of hold out IDs
* **random_state** (*int*) – A seed for the random number generator for ID sampling across train and hold out sets.
* **min_samples** (*int*) – The minimum number of samples required in the split to keep the split.
|
| Returns: | * **Folds** (*list of tuples*) – A list of folds. Each fold is a Tuple of arrays.
The first array in each tuple contains training indexes while the second array contains validation indexes.
* **logs** (*list of dict*) – A list of logs, one for each fold
|
`fklearn.validation.splitters.``stability_curve_time_splitter`[[source]](_modules/fklearn/validation/splitters.html#stability_curve_time_splitter)[¶](#fklearn.validation.splitters.stability_curve_time_splitter)
Splits the data into temporal buckets given by the specified frequency.
Training set is fixed before hold out and uses a rolling window hold out set.
Each fold moves the hold out further into the future.
Useful to see how model performance degrades as the training data gets more outdated. Training and holdout sets can have the same IDs.
| Parameters: | * **train_data** (*pandas.DataFrame*) – A Pandas’ DataFrame that will be split for stability curve estimation.
* **training_time_limit** (*str*) – The Date String for the end of the testing period. Should be of the same format as time_column.
* **time_column** (*str*) – The name of the Date column of train_data.
* **freq** (*str*) – The temporal frequency.
See: <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>
* **min_samples** (*int*) – The minimum number of samples required in a split to keep it.
|
| Returns: | * **Folds** (*list of tuples*) – A list of folds. Each fold is a Tuple of arrays.
The first array in each tuple contains training indexes while the second array contains validation indexes.
* **logs** (*list of dict*) – A list of logs, one for each fold
|
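A hedged sketch of the stability curve splitter above (dates, column names, frequency, and min_samples are illustrative only):
```
import pandas as pd
from fklearn.validation.splitters import stability_curve_time_splitter

train_data = pd.DataFrame({
    "date": pd.date_range("2021-01-01", periods=120, freq="D"),
    "x": range(120),
})

# Train on data up to 2021-03-01 and build monthly holdout folds that move into the future;
# min_samples=1 keeps even small holdout folds in this toy example
folds, logs = stability_curve_time_splitter(
    train_data, training_time_limit="2021-03-01", time_column="date", freq="M", min_samples=1
)
```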
`fklearn.validation.splitters.``time_and_space_learning_curve_splitter`[[source]](_modules/fklearn/validation/splitters.html#time_and_space_learning_curve_splitter)[¶](#fklearn.validation.splitters.time_and_space_learning_curve_splitter)
Splits the data into temporal buckets given by the specified frequency.
Uses a fixed out-of-ID and time hold out set for every fold.
Training size increases per fold, with more recent data being added in each fold.
Useful for learning curve validation, that is, for seeing how hold out performance increases as the training size increases with more recent data.
| Parameters: | * **train_data** (*pandas.DataFrame*) – A Pandas’ DataFrame that will be split for learning curve estimation.
* **training_time_limit** (*str*) – The Date String for the end of the testing period. Should be of the same format as time_column.
* **space_column** (*str*) – The name of the ID column of train_data.
* **time_column** (*str*) – The name of the Date column of train_data.
* **freq** (*str*) – The temporal frequency.
See: <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>
* **space_hold_percentage** (*float*) – The proportion of hold out IDs.
* **holdout_gap** (*datetime.timedelta*) – Timedelta of the gap between the end of the training period and the start of the validation period.
* **random_state** (*int*) – A seed for the random number generator for ID sampling across train and hold out sets.
* **min_samples** (*int*) – The minimum number of samples required in the split to keep the split.
|
| Returns: | * **Folds** (*list of tuples*) – A list of folds. Each fold is a Tuple of arrays.
The first array in each tuple contains training indexes while the second array contains validation indexes.
* **logs** (*list of dict*) – A list of logs, one for each fold
|
`fklearn.validation.splitters.``time_learning_curve_splitter`[[source]](_modules/fklearn/validation/splitters.html#time_learning_curve_splitter)[¶](#fklearn.validation.splitters.time_learning_curve_splitter)
Splits the data into temporal buckets given by the specified frequency.
Uses a fixed out-of-ID and time hold out set for every fold.
Training size increases per fold, with more recent data being added in each fold.
Useful for learning curve validation, that is, for seeing how hold out performance increases as the training size increases with more recent data.
| Parameters: | * **train_data** (*pandas.DataFrame*) – A Pandas’ DataFrame that will be split for learning curve estimation.
* **training_time_limit** (*str*) – The Date String for the end of the testing period. Should be of the same format as time_column.
* **time_column** (*str*) – The name of the Date column of train_data.
* **freq** (*str*) – The temporal frequency.
See: <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>
* **holdout_gap** (*datetime.timedelta*) – Timedelta of the gap between the end of the training period and the start of the validation period.
* **min_samples** (*int*) – The minimum number of samples required in the split to keep the split.
|
| Returns: | * **Folds** (*list of tuples*) – A list of folds. Each fold is a Tuple of arrays.
The first array in each tuple contains training indexes while the second array contains validation indexes.
* **logs** (*list of dict*) – A list of logs, one for each fold
|
####### fklearn.validation.validator module[¶](#fklearn-validation-validator-module)
####### Module contents[¶](#module-fklearn.validation)
##### Submodules[¶](#submodules)
##### fklearn.common_docstrings module[¶](#module-fklearn.common_docstrings)
`fklearn.common_docstrings.``learner_pred_fn_docstring`(*f_name: str*, *shap: bool = False*) → str[[source]](_modules/fklearn/common_docstrings.html#learner_pred_fn_docstring)[¶](#fklearn.common_docstrings.learner_pred_fn_docstring)
`fklearn.common_docstrings.``learner_return_docstring`(*model_name: str*) → str[[source]](_modules/fklearn/common_docstrings.html#learner_return_docstring)[¶](#fklearn.common_docstrings.learner_return_docstring)
##### fklearn.version module[¶](#module-fklearn.version)
`fklearn.version.``version`() → str[[source]](_modules/fklearn/version.html#version)[¶](#fklearn.version.version)
Get package version
| Returns: | **version** |
| Return type: | str |
##### Module contents[¶](#module-fklearn)
### Contributing[¶](#contributing)
Table of contents:
* [Where to start?](#where-to-start)
* [Getting Help](#getting-help)
* [Working with the code](#working-with-the-code)
+ [Version control](#version-control)
+ [Fork](#fork)
+ [Development environment](#development-environment)
- [Creating the virtual environment](#creating-the-virtual-environment)
- [Install the requirements](#install-the-requirements)
- [First testing](#first-testing)
- [Creating a development branch](#creating-a-development-branch)
* [Contribute with code](#contribute-with-code)
+ [Code standards](#code-standards)
+ [Run tests](#run-tests)
+ [Document your code](#document-your-code)
* [Contribute with documentation](#contribute-with-documentation)
+ [Docstrings](#docstrings)
+ [Documentation](#documentation)
+ [Build documentation](#build-documentation)
* [Send your changes to Fklearn repo](#send-your-changes-to-fklearn-repo)
+ [Commit your changes](#commit-your-changes)
+ [Push the changes](#push-the-changes)
+ [Create a pull request](#create-a-pull-request)
+ [When will my code be merged?](#when-my-code-will-be-merged)
* [Versioning](#versioning)
#### [Where to start?](#id1)[¶](#where-to-start)
We love pull requests (and issues) from everyone.
We recommend that you take a look at the project and follow the examples before contributing code.
By participating in this project, you agree to abide by our code of conduct.
#### [Getting Help](#id2)[¶](#getting-help)
If you found a bug or need a new feature, you can submit an [issue](https://github.com/nubank/fklearn/issues).
If you would like to chat with other contributors to fklearn, consider joining the [Gitter](https://gitter.im/fklearn-python).
#### [Working with the code](#id3)[¶](#working-with-the-code)
Now that you understand how the project works, maybe it’s time to fix something, add an enhancement, or write new documentation.
It’s time to understand how to send us your contributions.
##### [Version control](#id4)[¶](#version-control)
This project is hosted on [Github](https://github.com/nubank/fklearn), so to start contributing you will need an account; you can create one for free at [Github Signup](https://github.com/signup).
We use git for version control, so it’s good to understand the basics of git workflows before sending new code. You can follow [Github Help](https://docs.github.com/en) to understand how to work with git.
##### [Fork](#id5)[¶](#fork)
To write new code, you will interact with your own fork, so go to the [fklearn repo page](https://github.com/nubank/fklearn) and hit the `Fork` button. This will create a copy of our repository in your account. To clone the repository to your machine, use the following commands:
```
git clone git@github.com:your-username/fklearn.git
git remote add upstream https://github.com/nubank/fklearn.git
```
This will create a folder called `fklearn` and connect it to the upstream (main) repo.
##### [Development environment](#id6)[¶](#development-environment)
We recommend creating a virtual environment before starting to work with the code. After that, you can ensure everything is working by running all tests locally before writing any new code.
###### [Creating the virtual environment](#id7)[¶](#creating-the-virtual-environment)
```
# Use an ENV_DIR of your choice. We are using ~/venvs
python3 -m venv ~/venvs/fklearn-dev
source ~/venvs/fklearn-dev/bin/activate
```
###### [Install the requirements](#id8)[¶](#install-the-requirements)
This command will install all the test dependencies. To install the package you can follow the [installation instructions](https://fklearn.readthedocs.io/en/latest/getting_started.html#installation).
```
python3 -m pip install -qe .[devel]
```
###### [First testing](#id9)[¶](#first-testing)
The following command should run all tests. If every test passes, you are ready to start developing new features.
```
python3 -m pytest tests/
```
###### [Creating a development branch](#id10)[¶](#creating-a-development-branch)
First you should check that your master branch is up to date with the latest version of the upstream repository.
```
git checkout master
git pull upstream master --ff-only
```
Then create a new branch for your work:
```
git checkout -b name-of-your-bugfix-or-feature
```
If you already have a branch and want to update it with the upstream master:
```
git checkout name-of-your-bugfix-or-feature
git fetch upstream
git merge upstream/master
```
#### [Contribute with code](#id11)[¶](#contribute-with-code)
In this section we’ll explain how to contribute code, whether you want to fix an issue or implement a new feature.
##### [Code standards](#id12)[¶](#code-standards)
This project is compatible only with Python 3.6 to 3.9 and follows the [pep8 style](https://www.python.org/dev/peps/pep-0008/).
We also use this [import formatting](https://google.github.io/styleguide/pyguide.html?showone=Imports_formatting#313-imports-formatting).
To check whether your code follows our code style, run the following commands from the root directory of the repo:
```
python3 -m pip install -q flake8
python3 -m flake8 \
--ignore=E731,W503 \
--filename=\*.py \
--exclude=__init__.py \
--show-source \
--statistics \
--max-line-length=120 \
src/ tests/
```
We also use mypy for type checking, which you can run with:
```
python3 -m mypy src tests --config mypy.ini
```
##### [Run tests](#id13)[¶](#run-tests)
After you finish your feature development or bug fix, run the tests with:
```
python3 -m pytest tests/
```
Or if you want to run only one test:
```
python3 -m pytest tests/test-file-name.py::test_method_name
```
You must **always** write tests for every feature; look at the existing tests to get a better idea of how we implement them.
We use [pytest](https://docs.pytest.org/en/latest/) as our test framework.
##### [Document your code](#id14)[¶](#document-your-code)
All methods should have type annotations; these allow us to know what the method expects as parameters and what the expected output is.
You can learn more about it in the [typing docs](https://docs.python.org/3.6/library/typing.html)
To document your code you should add docstrings; all methods with docstrings will appear in this documentation’s API file.
If you created a new file, you may need to add it to `api.rst`, following this structure:
```
Folder Name
---
File name (fklearn.folder_name.file_name)
#########################################
.. currentmodule:: fklearn.folder_name.file_name

.. autosummary::
    method_name
```
The docstrings should follow this format
```
"""
Brief introduction of method
More info about it
Parameters
---
parameter_1 : type
Parameter description
Returns
---
value_1 : type
Value description
"""
```
#### [Contribute with documentation](#id15)[¶](#contribute-with-documentation)
You can add or fix documentation in two places: the code (docstrings) or these documentation files.
##### [Docstrings](#id16)[¶](#docstrings)
Follow the same structure we explained in [code contribution](https://fklearn.readthedocs.io/en/latest/contributing.html#document-your-code)
##### [Documentation](#id17)[¶](#documentation)
This documentation is written using rst (`reStructuredText`); you can learn more about it in the [rst docs](http://docutils.sourceforge.net/rst.html)
When you make changes to the docs, please make sure we are still able to build them without any issues.
##### [Build documentation](#id18)[¶](#build-documentation)
From the `docs/` folder, install the requirements in `requirements.txt` and run:
```
make html
```
This command builds the documentation inside `docs/build/html`, so you can check locally how it looks and whether everything worked.
#### [Send your changes to Fklearn repo](#id19)[¶](#send-your-changes-to-fklearn-repo)
##### [Commit your changes](#id20)[¶](#commit-your-changes)
You should think of a commit as a unit of change: it should describe a small change you made to the project.
The following command will list all files you changed:
```
git status
```
To choose which files will be added to the commit:
```
git add path/to/the/file/name.extension
```
To write a commit message, run the following command; it will open your text editor:
```
git commit
```
This will add a commit with only a subject line:
```
git commit -m "My commit message"
```
We recommend this [guide to write better commit messages](https://chris.beams.io/posts/git-commit/)
##### [Push the changes](#id21)[¶](#push-the-changes)
After you have written all your commit messages describing what you did, it’s time to push the changes to your remote repo.
```
git push origin name-of-your-bugfix-or-feature
```
##### [Create a pull request](#id22)[¶](#create-a-pull-request)
Now that you have finished your work, you should:
- Go to your repo’s Github page
- Click `New pull request`
- Choose the branch you want to merge
- Review the files that will be merged
- Click `Create pull request`
- Fill the template
- Tag your PR: add a category label (bug, enhancement, documentation…) and a review-request label
##### [When will my code be merged?](#id23)[¶](#when-my-code-will-be-merged)
All code will be reviewed; we require at least one review from a code owner and one from any other person.
We usually do weekly releases of the package when there are new features that have already been reviewed.
#### [Versioning](#id24)[¶](#versioning)
Use Semantic Versioning to set library versions; more info at [semver.org](https://semver.org/). Basically this means:
1. MAJOR version when you make incompatible API changes,
2. MINOR version when you add functionality in a backwards-compatible manner, and
3. PATCH version when you make backwards-compatible bug fixes.
(from the semver.org summary)
You don’t need to set the version in your PR; we’ll take care of this when we decide to release a new version.
Today the process is:
* Create a new `milestone` X.Y.Z (maintainers only)
* Some PR/issues are attributed to this new milestone
* Merge all the related PRs (maintainers only)
* Create a new PR: `Bump package to X.Y.Z`. This PR updates the version and the changelog (maintainers only)
* Create a tag `X.Y.Z` (maintainers only)
This last step will trigger the CI to build the package and publish the version to PyPI.
When we add new functionality, the previous version is moved to another branch. For example, if we’re at version `1.13.7` and new functionality is implemented,
we create a new branch `1.13.x` and protect it (so it can’t be deleted), the new code is merged into the master branch, and then we create the tag `1.14.0`.
This way we can always fix a past version by opening PRs from the `1.13.x` branch.
[Commander.js](#commanderjs)
===
The complete solution for [node.js](http://nodejs.org) command-line interfaces.
Read this in other languages: English | [简体中文](https://github.com/tj/commander.js/blob/HEAD/Readme_zh-CN.md)
* [Commander.js](#commanderjs)
+ [Installation](#installation)
+ [Quick Start](#quick-start)
+ [Declaring *program* variable](#declaring-program-variable)
+ [Options](#options)
- [Common option types, boolean and value](#common-option-types-boolean-and-value)
- [Default option value](#default-option-value)
- [Other option types, negatable boolean and boolean|value](#other-option-types-negatable-boolean-and-booleanvalue)
- [Required option](#required-option)
- [Variadic option](#variadic-option)
- [Version option](#version-option)
- [More configuration](#more-configuration)
- [Custom option processing](#custom-option-processing)
+ [Commands](#commands)
- [Command-arguments](#command-arguments)
* [More configuration](#more-configuration-1)
* [Custom argument processing](#custom-argument-processing)
- [Action handler](#action-handler)
- [Stand-alone executable (sub)commands](#stand-alone-executable-subcommands)
- [Life cycle hooks](#life-cycle-hooks)
+ [Automated help](#automated-help)
- [Custom help](#custom-help)
- [Display help after errors](#display-help-after-errors)
- [Display help from code](#display-help-from-code)
- [.name](#name)
- [.usage](#usage)
- [.description and .summary](#description-and-summary)
- [.helpOption(flags, description)](#helpoptionflags-description)
- [.addHelpCommand()](#addhelpcommand)
- [More configuration](#more-configuration-2)
+ [Custom event listeners](#custom-event-listeners)
+ [Bits and pieces](#bits-and-pieces)
- [.parse() and .parseAsync()](#parse-and-parseasync)
- [Parsing Configuration](#parsing-configuration)
- [Legacy options as properties](#legacy-options-as-properties)
- [TypeScript](#typescript)
- [createCommand()](#createcommand)
- [Node options such as `--harmony`](#node-options-such-as---harmony)
- [Debugging stand-alone executable subcommands](#debugging-stand-alone-executable-subcommands)
- [npm run-script](#npm-run-script)
- [Display error](#display-error)
- [Override exit and output handling](#override-exit-and-output-handling)
- [Additional documentation](#additional-documentation)
+ [Support](#support)
- [Commander for enterprise](#commander-for-enterprise)
For information about terms used in this document see: [terminology](https://github.com/tj/commander.js/blob/HEAD/docs/terminology.md)
[Installation](#installation)
---
```
npm install commander
```
[Quick Start](#quick-start)
---
You write code to describe your command line interface.
Commander looks after parsing the arguments into options and command-arguments,
displays usage errors for problems, and implements a help system.
Commander is strict and displays an error for unrecognised options.
The two most used option types are a boolean option, and an option which takes its value from the following argument.
Example file: [split.js](https://github.com/tj/commander.js/blob/HEAD/examples/split.js)
```
const { program } = require('commander');
program
.option('--first')
.option('-s, --separator <char>');
program.parse();
const options = program.opts();
const limit = options.first ? 1 : undefined;
console.log(program.args[0].split(options.separator, limit));
```
```
$ node split.js -s / --fits a/b/c
error: unknown option '--fits'
(Did you mean --first?)
$ node split.js -s / --first a/b/c
[ 'a' ]
```
Here is a more complete program using a subcommand and with descriptions for the help. In a multi-command program, you have an action handler for each command (or stand-alone executables for the commands).
Example file: [string-util.js](https://github.com/tj/commander.js/blob/HEAD/examples/string-util.js)
```
const { Command } = require('commander');
const program = new Command();
program
.name('string-util')
.description('CLI to some JavaScript string utilities')
.version('0.8.0');
program.command('split')
.description('Split a string into substrings and display as an array')
.argument('<string>', 'string to split')
.option('--first', 'display just the first substring')
.option('-s, --separator <char>', 'separator character', ',')
.action((str, options) => {
const limit = options.first ? 1 : undefined;
console.log(str.split(options.separator, limit));
});
program.parse();
```
```
$ node string-util.js help split
Usage: string-util split [options] <string>

Split a string into substrings and display as an array.
Arguments:
string string to split
Options:
--first display just the first substring
-s, --separator <char> separator character (default: ",")
-h, --help display help for command
$ node string-util.js split --separator=/ a/b/c
[ 'a', 'b', 'c' ]
```
More samples can be found in the [examples](https://github.com/tj/commander.js/tree/master/examples) directory.
[Declaring *program* variable](#declaring-program-variable)
---
Commander exports a global object which is convenient for quick programs.
This is used in the examples in this README for brevity.
```
// CommonJS (.cjs)
const { program } = require('commander');
```
For larger programs which may use commander in multiple ways, including unit testing, it is better to create a local Command object to use.
```
// CommonJS (.cjs)
const { Command } = require('commander');
const program = new Command();
```
```
// ECMAScript (.mjs)
import { Command } from 'commander';
const program = new Command();
```
```
// TypeScript (.ts)
import { Command } from 'commander';
const program = new Command();
```
[Options](#options)
---
Options are defined with the `.option()` method, also serving as documentation for the options. Each option can have a short flag (single character) and a long name, separated by a comma or space or vertical bar ('|').
The parsed options can be accessed by calling `.opts()` on a `Command` object, and are passed to the action handler.
Multi-word options such as "--template-engine" are camel-cased, becoming `program.opts().templateEngine` etc.
An option and its option-argument can be separated by a space, or combined into the same argument. The option-argument can follow the short option directly or follow an `=` for a long option.
```
serve -p 80
serve -p80
serve --port 80
serve --port=80
```
You can use `--` to indicate the end of the options, and any remaining arguments will be used without being interpreted.
By default, options on the command line are not positional, and can be specified before or after other arguments.
There are additional related routines for when `.opts()` is not enough:
* `.optsWithGlobals()` returns merged local and global option values
* `.getOptionValue()` and `.setOptionValue()` work with a single option value
* `.getOptionValueSource()` and `.setOptionValueWithSource()` include where the option value came from
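To make the difference concrete, here is a minimal sketch of `.opts()` versus `.optsWithGlobals()`; the `deploy` subcommand and its `--dry-run` flag are invented for illustration only:

```
const { Command } = require('commander');

const program = new Command();
program.option('-v, --verbose', 'global verbosity flag');

program
  .command('deploy')
  .option('--dry-run', 'do not apply changes')
  .action((options, command) => {
    // .opts() / the options parameter: local options only ({ dryRun: true })
    console.log('local:', options);
    // .optsWithGlobals(): local options merged with parent command options ({ verbose: true, dryRun: true })
    console.log('merged:', command.optsWithGlobals());
  });

program.parse(['-v', 'deploy', '--dry-run'], { from: 'user' });
```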
### [Common option types, boolean and value](#common-option-types-boolean-and-value)
The two most used option types are a boolean option, and an option which takes its value from the following argument (declared with angle brackets like `--expect <value>`). Both are `undefined` unless specified on command line.
Example file: [options-common.js](https://github.com/tj/commander.js/blob/HEAD/examples/options-common.js)
```
program
.option('-d, --debug', 'output extra debugging')
.option('-s, --small', 'small pizza size')
.option('-p, --pizza-type <type>', 'flavour of pizza');
program.parse(process.argv);
const options = program.opts();
if (options.debug) console.log(options);
console.log('pizza details:');
if (options.small) console.log('- small pizza size');
if (options.pizzaType) console.log(`- ${options.pizzaType}`);
```
```
$ pizza-options -p
error: option '-p, --pizza-type <type>' argument missing
$ pizza-options -d -s -p vegetarian
{ debug: true, small: true, pizzaType: 'vegetarian' }
pizza details:
- small pizza size
- vegetarian
$ pizza-options --pizza-type=cheese
pizza details:
- cheese
```
Multiple boolean short options may be combined following the dash, and may be followed by a single short option taking a value.
For example `-d -s -p cheese` may be written as `-ds -p cheese` or even `-dsp cheese`.
Options with an expected option-argument are greedy and will consume the following argument whatever the value.
So `--id -xyz` reads `-xyz` as the option-argument.
`program.parse(arguments)` processes the arguments, leaving any args not consumed by the program options in the `program.args` array. The parameter is optional and defaults to `process.argv`.
### [Default option value](#default-option-value)
You can specify a default value for an option.
Example file: [options-defaults.js](https://github.com/tj/commander.js/blob/HEAD/examples/options-defaults.js)
```
program
.option('-c, --cheese <type>', 'add the specified type of cheese', 'blue');
program.parse();
console.log(`cheese: ${program.opts().cheese}`);
```
```
$ pizza-options
cheese: blue
$ pizza-options --cheese stilton
cheese: stilton
```
### [Other option types, negatable boolean and boolean|value](#other-option-types-negatable-boolean-and-booleanvalue)
You can define a boolean option long name with a leading `no-` to set the option value to false when used.
Defined alone this also makes the option true by default.
If you define `--foo` first, adding `--no-foo` does not change the default value from what it would otherwise be.
Example file: [options-negatable.js](https://github.com/tj/commander.js/blob/HEAD/examples/options-negatable.js)
```
program
.option('--no-sauce', 'Remove sauce')
.option('--cheese <flavour>', 'cheese flavour', 'mozzarella')
.option('--no-cheese', 'plain with no cheese')
.parse();
const options = program.opts();
const sauceStr = options.sauce ? 'sauce' : 'no sauce';
const cheeseStr = (options.cheese === false) ? 'no cheese' : `${options.cheese} cheese`;
console.log(`You ordered a pizza with ${sauceStr} and ${cheeseStr}`);
```
```
$ pizza-options
You ordered a pizza with sauce and mozzarella cheese
$ pizza-options --sauce
error: unknown option '--sauce'
$ pizza-options --cheese=blue
You ordered a pizza with sauce and blue cheese
$ pizza-options --no-sauce --no-cheese
You ordered a pizza with no sauce and no cheese
```
You can specify an option which may be used as a boolean option but may optionally take an option-argument
(declared with square brackets like `--optional [value]`).
Example file: [options-boolean-or-value.js](https://github.com/tj/commander.js/blob/HEAD/examples/options-boolean-or-value.js)
```
program
.option('-c, --cheese [type]', 'Add cheese with optional type');
program.parse(process.argv);
const options = program.opts();
if (options.cheese === undefined) console.log('no cheese');
else if (options.cheese === true) console.log('add cheese');
else console.log(`add cheese type ${options.cheese}`);
```
```
$ pizza-options
no cheese
$ pizza-options --cheese
add cheese
$ pizza-options --cheese mozzarella
add cheese type mozzarella
```
Options with an optional option-argument are not greedy and will ignore arguments starting with a dash.
So `--id` behaves as a boolean option for `--id -5`, but you can use a combined form if needed like `--id=-5`.
For information about possible ambiguous cases, see [options taking varying arguments](https://github.com/tj/commander.js/blob/HEAD/docs/options-in-depth.md).
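As a small, hedged sketch of the combined form mentioned above (the `--id` option is declared only for this example):

```
const { program } = require('commander');

program.option('--id [value]', 'id with an optional value');

// The combined form always supplies the value, even when it starts with a dash.
program.parse(['--id=-5'], { from: 'user' });
console.log(program.opts().id); // '-5'
```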
### [Required option](#required-option)
You may specify a required (mandatory) option using `.requiredOption()`. The option must have a value after parsing, usually specified on the command line, or perhaps from a default value (say from environment). The method is otherwise the same as `.option()` in format, taking flags and description, and optional default value or custom processing.
Example file: [options-required.js](https://github.com/tj/commander.js/blob/HEAD/examples/options-required.js)
```
program
.requiredOption('-c, --cheese <type>', 'pizza must have cheese');
program.parse();
```
```
$ pizza
error: required option '-c, --cheese <type>' not specified
```
### [Variadic option](#variadic-option)
You may make an option variadic by appending `...` to the value placeholder when declaring the option. On the command line you can then specify multiple option-arguments, and the parsed option value will be an array. The extra arguments are read until the first argument starting with a dash. The special argument `--` stops option processing entirely. If a value is specified in the same argument as the option then no further values are read.
Example file: [options-variadic.js](https://github.com/tj/commander.js/blob/HEAD/examples/options-variadic.js)
```
program
.option('-n, --number <numbers...>', 'specify numbers')
.option('-l, --letter [letters...]', 'specify letters');
program.parse();
console.log('Options: ', program.opts());
console.log('Remaining arguments: ', program.args);
```
```
$ collect -n 1 2 3 --letter a b c
Options: { number: [ '1', '2', '3' ], letter: [ 'a', 'b', 'c' ] }
Remaining arguments: []
$ collect --letter=A -n80 operand
Options: { number: [ '80' ], letter: [ 'A' ] }
Remaining arguments: [ 'operand' ]
$ collect --letter -n 1 -n 2 3 -- operand
Options: { number: [ '1', '2', '3' ], letter: true }
Remaining arguments: [ 'operand' ]
```
For information about possible ambiguous cases, see [options taking varying arguments](https://github.com/tj/commander.js/blob/HEAD/docs/options-in-depth.md).
### [Version option](#version-option)
The optional `version` method adds handling for displaying the command version. The default option flags are `-V` and `--version`, and when present the command prints the version number and exits.
```
program.version('0.0.1');
```
```
$ ./examples/pizza -V
0.0.1
```
You may change the flags and description by passing additional parameters to the `version` method, using the same syntax for flags as the `option` method.
```
program.version('0.0.1', '-v, --vers', 'output the current version');
```
### [More configuration](#more-configuration)
You can add most options using the `.option()` method, but there are some additional features available by constructing an `Option` explicitly for less common cases.
Example files: [options-extra.js](https://github.com/tj/commander.js/blob/HEAD/examples/options-extra.js), [options-env.js](https://github.com/tj/commander.js/blob/HEAD/examples/options-env.js), [options-conflicts.js](https://github.com/tj/commander.js/blob/HEAD/examples/options-conflicts.js), [options-implies.js](https://github.com/tj/commander.js/blob/HEAD/examples/options-implies.js)
```
program
.addOption(new Option('-s, --secret').hideHelp())
.addOption(new Option('-t, --timeout <delay>', 'timeout in seconds').default(60, 'one minute'))
.addOption(new Option('-d, --drink <size>', 'drink size').choices(['small', 'medium', 'large']))
.addOption(new Option('-p, --port <number>', 'port number').env('PORT'))
.addOption(new Option('--donate [amount]', 'optional donation in dollars').preset('20').argParser(parseFloat))
.addOption(new Option('--disable-server', 'disables the server').conflicts('port'))
.addOption(new Option('--free-drink', 'small drink included free ').implies({ drink: 'small' }));
```
```
$ extra --help
Usage: help [options]
Options:
-t, --timeout <delay> timeout in seconds (default: one minute)
-d, --drink <size> drink cup size (choices: "small", "medium", "large")
-p, --port <number> port number (env: PORT)
--donate [amount] optional donation in dollars (preset: "20")
--disable-server disables the server
--free-drink small drink included free
-h, --help display help for command
$ extra --drink huge
error: option '-d, --drink <size>' argument 'huge' is invalid. Allowed choices are small, medium, large.
$ PORT=80 extra --donate --free-drink
Options: { timeout: 60, donate: 20, port: '80', freeDrink: true, drink: 'small' }
$ extra --disable-server --port 8000
error: option '--disable-server' cannot be used with option '-p, --port <number>'
```
Specify a required (mandatory) option using the `Option` method `.makeOptionMandatory()`. This matches the `Command` method [.requiredOption()](#required-option).
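For example, a minimal sketch using `.makeOptionMandatory()`; the cheese option simply mirrors the earlier `.requiredOption()` example:

```
const { Command, Option } = require('commander');

const program = new Command();
program.addOption(
  new Option('-c, --cheese <type>', 'pizza must have cheese').makeOptionMandatory()
);
program.parse();

// Running without -c/--cheese exits with:
// error: required option '-c, --cheese <type>' not specified
```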
### [Custom option processing](#custom-option-processing)
You may specify a function to do custom processing of option-arguments. The callback function receives two parameters,
the user specified option-argument and the previous value for the option. It returns the new value for the option.
This allows you to coerce the option-argument to the desired type, or accumulate values, or do entirely custom processing.
You can optionally specify the default/starting value for the option after the function parameter.
Example file: [options-custom-processing.js](https://github.com/tj/commander.js/blob/HEAD/examples/options-custom-processing.js)
```
function myParseInt(value, dummyPrevious) {
// parseInt takes a string and a radix
const parsedValue = parseInt(value, 10);
if (isNaN(parsedValue)) {
throw new commander.InvalidArgumentError('Not a number.');
}
return parsedValue;
}
function increaseVerbosity(dummyValue, previous) {
return previous + 1;
}
function collect(value, previous) {
return previous.concat([value]);
}
function commaSeparatedList(value, dummyPrevious) {
return value.split(',');
}
program
.option('-f, --float <number>', 'float argument', parseFloat)
.option('-i, --integer <number>', 'integer argument', myParseInt)
.option('-v, --verbose', 'verbosity that can be increased', increaseVerbosity, 0)
.option('-c, --collect <value>', 'repeatable value', collect, [])
.option('-l, --list <items>', 'comma separated list', commaSeparatedList)
;
program.parse();
const options = program.opts();
if (options.float !== undefined) console.log(`float: ${options.float}`);
if (options.integer !== undefined) console.log(`integer: ${options.integer}`);
if (options.verbose > 0) console.log(`verbosity: ${options.verbose}`);
if (options.collect.length > 0) console.log(options.collect);
if (options.list !== undefined) console.log(options.list);
```
```
$ custom -f 1e2
float: 100
$ custom --integer 2
integer: 2
$ custom -v -v -v
verbosity: 3
$ custom -c a -c b -c c
[ 'a', 'b', 'c' ]
$ custom --list x,y,z
[ 'x', 'y', 'z' ]
```
[Commands](#commands)
---
You can specify (sub)commands using `.command()` or `.addCommand()`. There are two ways these can be implemented: using an action handler attached to the command, or as a stand-alone executable file (described in more detail later). The subcommands may be nested ([example](https://github.com/tj/commander.js/blob/HEAD/examples/nestedCommands.js)).
In the first parameter to `.command()` you specify the command name. You may append the command-arguments after the command name, or specify them separately using `.argument()`. The arguments may be `<required>` or `[optional]`, and the last argument may also be `variadic...`.
You can use `.addCommand()` to add an already configured subcommand to the program.
For example:
```
// Command implemented using action handler (description is supplied separately to `.command`)
// Returns new command for configuring.
program
.command('clone <source> [destination]')
.description('clone a repository into a newly created directory')
.action((source, destination) => {
console.log('clone command called');
});
// Command implemented using stand-alone executable file, indicated by adding description as second parameter to `.command`.
// Returns `this` for adding more commands.
program
.command('start <service>', 'start named service')
.command('stop [service]', 'stop named service, or all if no name supplied');
// Command prepared separately.
// Returns `this` for adding more commands.
program
.addCommand(build.makeBuildCommand());
```
Configuration options can be passed with the call to `.command()` and `.addCommand()`. Specifying `hidden: true` will remove the command from the generated help output. Specifying `isDefault: true` will run the subcommand if no other subcommand is specified ([example](https://github.com/tj/commander.js/blob/HEAD/examples/defaultCommand.js)).
You can add alternative names for a command with `.alias()`. ([example](https://github.com/tj/commander.js/blob/HEAD/examples/alias.js))
`.command()` automatically copies the inherited settings from the parent command to the newly created subcommand. This is only done during creation, any later setting changes to the parent are not inherited.
For safety, `.addCommand()` does not automatically copy the inherited settings from the parent command. There is a helper routine `.copyInheritedSettings()` for copying the settings when they are wanted.
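Here is a brief sketch of these configuration options together with `.alias()`; the command names are invented for illustration:

```
const { Command } = require('commander');

const program = new Command('pm');

// Action-handler subcommand with an alias, used as the default when no subcommand is given.
program
  .command('list', { isDefault: true })
  .alias('ls')
  .description('list installed packages')
  .action(() => console.log('listing packages'));

// Hidden subcommand: still callable, but omitted from the generated help.
program
  .command('debug-dump', { hidden: true })
  .action(() => console.log('internal diagnostics'));

program.parse();
```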
### [Command-arguments](#command-arguments)
For subcommands, you can specify the argument syntax in the call to `.command()` (as shown above). This is the only method usable for subcommands implemented using a stand-alone executable, but for other subcommands you can instead use the following method.
To configure a command, you can use `.argument()` to specify each expected command-argument.
You supply the argument name and an optional description. The argument may be `<required>` or `[optional]`.
You can specify a default value for an optional command-argument.
Example file: [argument.js](https://github.com/tj/commander.js/blob/HEAD/examples/argument.js)
```
program
.version('0.1.0')
.argument('<username>', 'user to login')
.argument('[password]', 'password for user, if required', 'no password given')
.action((username, password) => {
console.log('username:', username);
console.log('password:', password);
});
```
The last argument of a command can be variadic, and only the last argument. To make an argument variadic you append `...` to the argument name. A variadic argument is passed to the action handler as an array. For example:
```
program
.version('0.1.0')
.command('rmdir')
.argument('<dirs...>')
.action(function (dirs) {
dirs.forEach((dir) => {
console.log('rmdir %s', dir);
});
});
```
There is a convenience method to add multiple arguments at once, but without descriptions:
```
program
.arguments('<username> <password>');
```
#### [More configuration](#more-configuration-1)
There are some additional features available by constructing an `Argument` explicitly for less common cases.
Example file: [arguments-extra.js](https://github.com/tj/commander.js/blob/HEAD/examples/arguments-extra.js)
```
program
.addArgument(new commander.Argument('<drink-size>', 'drink cup size').choices(['small', 'medium', 'large']))
.addArgument(new commander.Argument('[timeout]', 'timeout in seconds').default(60, 'one minute'))
```
#### [Custom argument processing](#custom-argument-processing)
You may specify a function to do custom processing of command-arguments (like for option-arguments).
The callback function receives two parameters, the user specified command-argument and the previous value for the argument.
It returns the new value for the argument.
The processed argument values are passed to the action handler, and saved as `.processedArgs`.
You can optionally specify the default/starting value for the argument after the function parameter.
Example file: [arguments-custom-processing.js](https://github.com/tj/commander.js/blob/HEAD/examples/arguments-custom-processing.js)
```
program
.command('add')
.argument('<first>', 'integer argument', myParseInt)
.argument('[second]', 'integer argument', myParseInt, 1000)
.action((first, second) => {
console.log(`${first} + ${second} = ${first + second}`);
})
;
```
### [Action handler](#action-handler)
The action handler gets passed a parameter for each command-argument you declared, and two additional parameters which are the parsed options and the command object itself.
Example file: [thank.js](https://github.com/tj/commander.js/blob/HEAD/examples/thank.js)
```
program
.argument('<name>')
.option('-t, --title <honorific>', 'title to use before name')
.option('-d, --debug', 'display some debugging')
.action((name, options, command) => {
if (options.debug) {
console.error('Called %s with options %o', command.name(), options);
}
const title = options.title ? `${options.title} ` : '';
console.log(`Thank-you ${title}${name}`);
});
```
If you prefer, you can work with the command directly and skip declaring the parameters for the action handler. The `this` keyword is set to the running command and can be used from a function expression (but not from an arrow function).
Example file: [action-this.js](https://github.com/tj/commander.js/blob/HEAD/examples/action-this.js)
```
program
.command('serve')
.argument('<script>')
.option('-p, --port <number>', 'port number', 80)
.action(function() {
console.error('Run script %s on port %s', this.args[0], this.opts().port);
});
```
You may supply an `async` action handler, in which case you call `.parseAsync` rather than `.parse`.
```
async function run() { /* code goes here */ }
async function main() {
program
.command('run')
.action(run);
await program.parseAsync(process.argv);
}
```
A command's options and arguments on the command line are validated when the command is used. Any unknown options or missing arguments will be reported as an error. You can suppress the unknown option checks with `.allowUnknownOption()`. By default, it is not an error to pass more arguments than declared, but you can make this an error with `.allowExcessArguments(false)`.
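For instance, a minimal sketch that turns excess arguments into an error (the `<file>` argument is illustrative; `.allowUnknownOption()` can be toggled in the same chained style):

```
const { Command } = require('commander');

const program = new Command();
program
  .argument('<file>')
  .allowExcessArguments(false)
  .action((file) => console.log('file:', file));

// `node app.js one` prints "file: one";
// `node app.js one two` now reports an error instead of silently ignoring "two".
program.parse();
```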
### [Stand-alone executable (sub)commands](#stand-alone-executable-subcommands)
When `.command()` is invoked with a description argument, this tells Commander that you're going to use stand-alone executables for subcommands.
Commander will search the files in the directory of the entry script for a file with the name combination `command-subcommand`, like `pm-install` or `pm-search` in the example below. The search includes trying common file extensions, like `.js`.
You may specify a custom name (and path) with the `executableFile` configuration option.
You may specify a custom search directory for subcommands with `.executableDir()`.
You handle the options for an executable (sub)command in the executable, and don't declare them at the top-level.
Example file: [pm](https://github.com/tj/commander.js/blob/HEAD/examples/pm)
```
program
.name('pm')
.version('0.1.0')
.command('install [name]', 'install one or more packages')
.command('search [query]', 'search with optional query')
.command('update', 'update installed packages', { executableFile: 'myUpdateSubCommand' })
.command('list', 'list packages installed', { isDefault: true });
program.parse(process.argv);
```
If the program is designed to be installed globally, make sure the executables have proper modes, like `755`.
### [Life cycle hooks](#life-cycle-hooks)
You can add callback hooks to a command for life cycle events.
Example file: [hook.js](https://github.com/tj/commander.js/blob/HEAD/examples/hook.js)
```
program
.option('-t, --trace', 'display trace statements for commands')
.hook('preAction', (thisCommand, actionCommand) => {
if (thisCommand.opts().trace) {
console.log(`About to call action handler for subcommand: ${actionCommand.name()}`);
console.log('arguments: %O', actionCommand.args);
console.log('options: %o', actionCommand.opts());
}
});
```
The callback hook can be `async`, in which case you call `.parseAsync` rather than `.parse`. You can add multiple hooks per event.
The supported events are:
| event name | when hook called | callback parameters |
| --- | --- | --- |
| `preAction`, `postAction` | before/after action handler for this command and its nested subcommands | `(thisCommand, actionCommand)` |
| `preSubcommand` | before parsing direct subcommand | `(thisCommand, subcommand)` |
For an overview of the life cycle events see [parsing life cycle and hooks](https://github.com/tj/commander.js/blob/HEAD/docs/parsing-and-hooks.md).
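As an additional illustration of the hook pairs, here is a hedged sketch that times a subcommand by stashing a value in `preAction` and reading it back in `postAction`; storing the timestamp via `setOptionValue` is just one convenient way to share state between the hooks:

```
const { Command } = require('commander');

const program = new Command();
program
  .command('build')
  .action(() => { /* do the work */ });

program
  .hook('preAction', (thisCommand, actionCommand) => {
    actionCommand.setOptionValue('startedAt', Date.now());
  })
  .hook('postAction', (thisCommand, actionCommand) => {
    const startedAt = actionCommand.getOptionValue('startedAt');
    console.log(`${actionCommand.name()} finished in ${Date.now() - startedAt}ms`);
  });

program.parse();
```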
[Automated help](#automated-help)
---
The help information is auto-generated based on the information commander already knows about your program. The default help option is `-h,--help`.
Example file: [pizza](https://github.com/tj/commander.js/blob/HEAD/examples/pizza)
```
$ node ./examples/pizza --help
Usage: pizza [options]
An application for pizza ordering
Options:
-p, --peppers Add peppers
-c, --cheese <type> Add the specified type of cheese (default: "marble")
-C, --no-cheese You do not want any cheese
-h, --help display help for command
```
A `help` command is added by default if your command has subcommands. It can be used alone, or with a subcommand name to show further help for the subcommand. These are effectively the same if the `shell` program has implicit help:
```
shell help
shell --help
shell help spawn
shell spawn --help
```
Long descriptions are wrapped to fit the available width. (However, a description that includes a line-break followed by whitespace is assumed to be pre-formatted and not wrapped.)
### [Custom help](#custom-help)
You can add extra text to be displayed along with the built-in help.
Example file: [custom-help](https://github.com/tj/commander.js/blob/HEAD/examples/custom-help)
```
program
.option('-f, --foo', 'enable some foo');
program.addHelpText('after', `
Example call:
$ custom-help --help`);
```
Yields the following help output:
```
Usage: custom-help [options]
Options:
-f, --foo enable some foo
-h, --help display help for command
Example call:
$ custom-help --help
```
The positions in order displayed are:
* `beforeAll`: add to the program for a global banner or header
* `before`: display extra information before built-in help
* `after`: display extra information after built-in help
* `afterAll`: add to the program for a global footer (epilog)
The positions "beforeAll" and "afterAll" apply to the command and all its subcommands.
The second parameter can be a string, or a function returning a string. The function is passed a context object for your convenience. The properties are:
* error: a boolean for whether the help is being displayed due to a usage error
* command: the Command which is displaying the help
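A small sketch of the function form using the context object (the wording of the extra note is arbitrary):

```
const { Command } = require('commander');

const program = new Command('pizza');
program.addHelpText('afterAll', (context) => {
  // context.error is true when help is shown because of a usage error.
  const note = context.error ? 'Help shown after a usage error.' : 'See the examples directory for more.';
  return `\n${note} (command: ${context.command.name()})`;
});

program.parse();
```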
### [Display help after errors](#display-help-after-errors)
The default behaviour for usage errors is to just display a short error message.
You can change the behaviour to show the full help or a custom help message after an error.
```
program.showHelpAfterError();
// or program.showHelpAfterError('(add --help for additional information)');
```
```
$ pizza --unknown
error: unknown option '--unknown'
(add --help for additional information)
```
The default behaviour is to suggest correct spelling after an error for an unknown command or option. You can disable this.
```
program.showSuggestionAfterError(false);
```
```
$ pizza --hepl
error: unknown option '--hepl'
(Did you mean --help?)
```
### [Display help from code](#display-help-from-code)
`.help()`: display help information and exit immediately. You can optionally pass `{ error: true }` to display on stderr and exit with an error status.
`.outputHelp()`: output help information without exiting. You can optionally pass `{ error: true }` to display on stderr.
`.helpInformation()`: get the built-in command help information as a string for processing or displaying yourself.
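Putting these together in a minimal sketch (the `pizza` program and its option are placeholders):

```
const { Command } = require('commander');

const program = new Command('pizza');
program.option('-p, --peppers', 'Add peppers');

// Print the help to stdout without exiting.
program.outputHelp();

// Or grab the help text as a string for your own processing.
const text = program.helpInformation();
console.log(`help is ${text.length} characters long`);

// Display the help on stderr and exit with an error status:
// program.help({ error: true });
```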
### [.name](#name)
The command name appears in the help, and is also used for locating stand-alone executable subcommands.
You may specify the program name using `.name()` or in the Command constructor. For the program, Commander will fall back to using the script name from the full arguments passed into `.parse()`. However, the script name varies depending on how your program is launched, so you may wish to specify it explicitly.
```
program.name('pizza');
const pm = new Command('pm');
```
Subcommands get a name when specified using `.command()`. If you create the subcommand yourself to use with `.addCommand()`,
then set the name using `.name()` or in the Command constructor.
### [.usage](#usage)
This allows you to customise the usage description in the first line of the help. Given:
```
program
.name("my-command")
.usage("[global options] command")
```
The help will start with:
```
Usage: my-command [global options] command
```
### [.description and .summary](#description-and-summary)
The description appears in the help for the command. You can optionally supply a shorter summary to use when listed as a subcommand of the program.
```
program
.command("duplicate")
.summary("make a copy")
.description(`Make a copy of the current project.
This may require additional disk space.
`);
```
### [.helpOption(flags, description)](#helpoptionflags-description)
By default, every command has a help option. You may change the default help flags and description. Pass false to disable the built-in help option.
```
program
.helpOption('-e, --HELP', 'read more information');
```
### [.addHelpCommand()](#addhelpcommand)
A help command is added by default if your command has subcommands. You can explicitly turn on or off the implicit help command with `.addHelpCommand()` and `.addHelpCommand(false)`.
You can both turn on and customise the help command by supplying the name and description:
```
program.addHelpCommand('assist [command]', 'show assistance');
```
### [More configuration](#more-configuration-2)
The built-in help is formatted using the Help class.
You can configure the Help behaviour by modifying data properties and methods using `.configureHelp()`, or by subclassing using `.createHelp()` if you prefer.
The data properties are:
* `helpWidth`: specify the wrap width, useful for unit tests
* `sortSubcommands`: sort the subcommands alphabetically
* `sortOptions`: sort the options alphabetically
* `showGlobalOptions`: show a section with the global options from the parent command(s)
You can override any method on the [Help](https://github.com/tj/commander.js/blob/HEAD/lib/help.js) class. There are methods getting the visible lists of arguments, options, and subcommands. There are methods for formatting the items in the lists, with each item having a *term* and *description*. Take a look at `.formatHelp()` to see how they are used.
Example file: [configure-help.js](https://github.com/tj/commander.js/blob/HEAD/examples/configure-help.js)
```
program.configureHelp({
sortSubcommands: true,
subcommandTerm: (cmd) => cmd.name() // Just show the name, instead of short usage.
});
```
[Custom event listeners](#custom-event-listeners)
---
You can execute custom actions by listening to command and option events.
```
program.on('option:verbose', function () {
process.env.VERBOSE = this.opts().verbose;
});
```
[Bits and pieces](#bits-and-pieces)
---
### [.parse() and .parseAsync()](#parse-and-parseasync)
The first argument to `.parse` is the array of strings to parse. You may omit the parameter to implicitly use `process.argv`.
If the arguments follow different conventions than node you can pass a `from` option in the second parameter:
* 'node': default, `argv[0]` is the application and `argv[1]` is the script being run, with user parameters after that
* 'electron': `argv[1]` varies depending on whether the electron application is packaged
* 'user': all of the arguments from the user
For example:
```
program.parse(process.argv); // Explicit, node conventions
program.parse(); // Implicit, and auto-detect electron
program.parse(['-f', 'filename'], { from: 'user' });
```
### [Parsing Configuration](#parsing-configuration)
If the default parsing does not suit your needs, there are some behaviours to support other usage patterns.
By default, program options are recognised before and after subcommands. To only look for program options before subcommands, use `.enablePositionalOptions()`. This lets you use an option for a different purpose in subcommands.
Example file: [positional-options.js](https://github.com/tj/commander.js/blob/HEAD/examples/positional-options.js)
With positional options, the `-b` is a program option in the first line and a subcommand option in the second line:
```
program -b subcommand
program subcommand -b
```
By default, options are recognised before and after command-arguments. To only process options that come before the command-arguments, use `.passThroughOptions()`. This lets you pass the arguments and following options through to another program without needing to use `--` to end the option processing.
To use pass through options in a subcommand, the program needs to enable positional options.
Example file: [pass-through-options.js](https://github.com/tj/commander.js/blob/HEAD/examples/pass-through-options.js)
With pass through options, the `--port=80` is a program option in the first line and passed through as a command-argument in the second line:
```
program --port=80 arg
program arg --port=80
```
By default, the option processing shows an error for an unknown option. To have an unknown option treated as an ordinary command-argument and continue looking for options, use `.allowUnknownOption()`. This lets you mix known and unknown options.
By default, the argument processing does not display an error for more command-arguments than expected.
To display an error for excess arguments, use `.allowExcessArguments(false)`.
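The sketch below is modelled loosely on the pass-through-options example: the program consumes `--port` only when it appears before the first command-argument, and everything after that is forwarded untouched.

```
const { program } = require('commander');

program
  .argument('<utility>')
  .argument('[args...]')
  .passThroughOptions()
  .option('-p, --port <number>', 'port number', '80')
  .action((utility, args) => {
    console.log(`run ${utility} on port ${program.opts().port}, forwarding:`, args);
  });

// program --port=3000 serve   -> port is 3000, nothing forwarded
// program serve --port=3000   -> port stays '80', ['--port=3000'] is forwarded
program.parse();
```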
### [Legacy options as properties](#legacy-options-as-properties)
Before Commander 7, the option values were stored as properties on the command.
This was convenient to code, but the downside was possible clashes with existing properties of `Command`. You can revert to the old behaviour to run unmodified legacy code by using `.storeOptionsAsProperties()`.
```
program
.storeOptionsAsProperties()
.option('-d, --debug')
.action((commandAndOptions) => {
if (commandAndOptions.debug) {
console.error(`Called ${commandAndOptions.name()}`);
}
});
```
### [TypeScript](#typescript)
extra-typings: There is an optional project to infer extra type information from the option and argument definitions.
This adds strong typing to the options returned by `.opts()` and the parameters to `.action()`.
See [commander-js/extra-typings](https://github.com/commander-js/extra-typings) for more.
```
import { Command } from '@commander-js/extra-typings';
```
ts-node: If you use `ts-node` and stand-alone executable subcommands written as `.ts` files, you need to call your program through node to get the subcommands called correctly. e.g.
```
node -r ts-node/register pm.ts
```
### [createCommand()](#createcommand)
This factory function creates a new command. It is exported and may be used instead of using `new`, like:
```
const { createCommand } = require('commander');
const program = createCommand();
```
`createCommand` is also a method of the Command object, and creates a new command rather than a subcommand. This gets used internally when creating subcommands using `.command()`, and you may override it to customise the new subcommand (example file [custom-command-class.js](https://github.com/tj/commander.js/blob/HEAD/examples/custom-command-class.js)).
### [Node options such as `--harmony`](#node-options-such-as---harmony)
You can enable `--harmony` option in two ways:
* Use `#! /usr/bin/env node --harmony` in the subcommand scripts. (Note Windows does not support this pattern.)
* Use the `--harmony` option when calling the command, like `node --harmony examples/pm publish`. The `--harmony` option will be preserved when spawning the subcommand process.
### [Debugging stand-alone executable subcommands](#debugging-stand-alone-executable-subcommands)
An executable subcommand is launched as a separate child process.
If you are using the node inspector for [debugging](https://nodejs.org/en/docs/guides/debugging-getting-started/) executable subcommands using `node --inspect` et al.,
the inspector port is incremented by 1 for the spawned subcommand.
If you are using VSCode to debug executable subcommands you need to set the `"autoAttachChildProcesses": true` flag in your launch.json configuration.
### [npm run-script](#npm-run-script)
By default, when you call your program using run-script, `npm` will parse any options on the command-line and they will not reach your program. Use
`--` to stop the npm option parsing and pass through all the arguments.
The synopsis for [npm run-script](https://docs.npmjs.com/cli/v9/commands/npm-run-script) explicitly shows the `--` for this reason:
```
npm run-script <command> [-- <args>]
```
### [Display error](#display-error)
This routine is available to invoke the Commander error handling for your own error conditions. (See also the next section about exit handling.)
As well as the error message, you can optionally specify the `exitCode` (used with `process.exit`)
and `code` (used with `CommanderError`).
```
program.error('Password must be longer than four characters');
program.error('Custom processing has failed', { exitCode: 2, code: 'my.custom.error' });
```
### [Override exit and output handling](#override-exit-and-output-handling)
By default, Commander calls `process.exit` when it detects errors, or after displaying the help or version. You can override this behaviour and optionally supply a callback. The default override throws a `CommanderError`.
The override callback is passed a `CommanderError` with properties `exitCode` number, `code` string, and `message`. The default override behaviour is to throw the error, except for async handling of executable subcommand completion which carries on. The normal display of error messages or version or help is not affected by the override which is called after the display.
```
program.exitOverride();
try {
program.parse(process.argv);
} catch (err) {
// custom processing...
}
```
By default, Commander is configured for a command-line application and writes to stdout and stderr.
You can modify this behaviour for custom applications. In addition, you can modify the display of error messages.
Example file: [configure-output.js](https://github.com/tj/commander.js/blob/HEAD/examples/configure-output.js)
```
function errorColor(str) {
// Add ANSI escape codes to display text in red.
return `\x1b[31m${str}\x1b[0m`;
}
program
.configureOutput({
// Visibly override write routines as example!
writeOut: (str) => process.stdout.write(`[OUT] ${str}`),
writeErr: (str) => process.stdout.write(`[ERR] ${str}`),
// Highlight errors in color.
outputError: (str, write) => write(errorColor(str))
});
```
### [Additional documentation](#additional-documentation)
There is more information available about:
* [deprecated](https://github.com/tj/commander.js/blob/HEAD/docs/deprecated.md) features still supported for backwards compatibility
* [options taking varying arguments](https://github.com/tj/commander.js/blob/HEAD/docs/options-in-depth.md)
* [parsing life cycle and hooks](https://github.com/tj/commander.js/blob/HEAD/docs/parsing-and-hooks.md)
[Support](#support)
---
The current version of Commander is fully supported on Long Term Support versions of Node.js, and requires at least v16.
(For older versions of Node.js, use an older version of Commander.)
The main forum for free and community support is the project [Issues](https://github.com/tj/commander.js/issues) on GitHub.
### [Commander for enterprise](#commander-for-enterprise)
Available as part of the Tidelift Subscription
The maintainers of Commander and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. [Learn more.](https://tidelift.com/subscription/pkg/npm-commander?utm_source=npm-commander&utm_medium=referral&utm_campaign=enterprise&utm_term=repo)
Readme
---
### Keywords
* commander
* command
* option
* parser
* cli
* argument
* args
* argv
[AWS AppConfig Construct Library](#aws-appconfig-construct-library)
===
---
> The APIs of higher level constructs in this module are experimental and under active development.
> They are subject to non-backward compatible changes or removal in any future version. These are
> not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be
> announced in the release notes. This means that while you may use them, you may need to update
> your source code when upgrading to a newer version of this package.
---
This module is part of the [AWS Cloud Development Kit](https://github.com/aws/aws-cdk) project.
Use AWS AppConfig, a capability of AWS Systems Manager, to create, manage, and quickly deploy application configurations. A configuration is a collection of settings that influence the behavior of your application. You can use AWS AppConfig with applications hosted on Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS Lambda, containers, mobile applications, or IoT devices. To view examples of the types of configurations you can manage by using AWS AppConfig, see [Example configurations](https://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-creating-configuration-and-profile.html#appconfig-creating-configuration-and-profile-examples).
[Application](#application)
---
In AWS AppConfig, an application is simply an organizational construct like a folder. This organizational construct has a relationship with some unit of executable code. For example, you could create an application called MyMobileApp to organize and manage configuration data for a mobile application installed by your users. Configurations and environments are associated with the application.
The name and description of an application are optional.
Create a simple application:
```
new appconfig.Application(this, 'MyApplication');
```
Create an application with a name and description:
```
new appconfig.Application(this, 'MyApplication', {
name: 'App1',
description: 'This is my application created through CDK.',
});
```
[Deployment Strategy](#deployment-strategy)
---
A deployment strategy defines how a configuration will roll out. The roll out is defined by four parameters: deployment type, step percentage, deployment time, and bake time.
See: <https://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-creating-deployment-strategy.html>
Deployment strategy with predefined values:
```
new appconfig.DeploymentStrategy(this, 'MyDeploymentStrategy', {
rolloutStrategy: appconfig.RolloutStrategy.CANARY_10_PERCENT_20_MINUTES,
});
```
Deployment strategy with custom values:
```
new appconfig.DeploymentStrategy(this, 'MyDeploymentStrategy', {
rolloutStrategy: appconfig.RolloutStrategy.linear({
growthFactor: 20,
deploymentDuration: Duration.minutes(30),
finalBakeTime: Duration.minutes(30),
}),
});
```
[Configuration](#configuration)
---
A configuration is a higher-level construct that can either be a `HostedConfiguration` (stored internally through AWS AppConfig) or a `SourcedConfiguration` (stored in an Amazon S3 bucket, AWS Secrets Manager secrets, Systems Manager (SSM) Parameter Store parameters, SSM documents, or AWS CodePipeline). This construct manages deployments on creation.
### [HostedConfiguration](#hostedconfiguration)
A hosted configuration represents configuration stored in the AWS AppConfig hosted configuration store. A hosted configuration takes in the configuration content and associated AWS AppConfig application. On construction of a hosted configuration, the configuration is deployed.
```
declare const application: appconfig.Application;
new appconfig.HostedConfiguration(this, 'MyHostedConfiguration', {
application,
content: appconfig.ConfigurationContent.fromInlineText('This is my configuration content.'),
});
```
AWS AppConfig supports the following types of configuration profiles.
* **Feature flag**: Use a feature flag configuration to turn on new features that require a timely deployment, such as a product launch or announcement.
* **Freeform**: Use a freeform configuration to carefully introduce changes to your application.
A hosted configuration with type:
```
declare const application: appconfig.Application;
new appconfig.HostedConfiguration(this, 'MyHostedConfiguration', {
application,
content: appconfig.ConfigurationContent.fromInlineText('This is my configuration content.'),
type: appconfig.ConfigurationType.FEATURE_FLAGS,
});
```
When you create a configuration and configuration profile, you can specify up to two validators. A validator ensures that your configuration data is syntactically and semantically correct. You can create validators in either JSON Schema or as an AWS Lambda function.
See [About validators](https://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-creating-configuration-and-profile.html#appconfig-creating-configuration-and-profile-validators) for more information.
A hosted configuration with validators:
```
declare const application: appconfig.Application;
declare const fn: lambda.Function;
new appconfig.HostedConfiguration(this, 'MyHostedConfiguration', {
application,
content: appconfig.ConfigurationContent.fromInlineText('This is my configuration content.'),
validators: [
appconfig.JsonSchemaValidator.fromFile('schema.json'),
appconfig.LambdaValidator.fromFunction(fn),
],
});
```
You can attach a deployment strategy (as described in the previous section) to your configuration to specify how you want your configuration to roll out.
A hosted configuration with a deployment strategy:
```
declare const application: appconfig.Application;
new appconfig.HostedConfiguration(this, 'MyHostedConfiguration', {
application,
content: appconfig.ConfigurationContent.fromInlineText('This is my configuration content.'),
deploymentStrategy: new appconfig.DeploymentStrategy(this, 'MyDeploymentStrategy', {
rolloutStrategy: appconfig.RolloutStrategy.linear({
growthFactor: 15,
deploymentDuration: Duration.minutes(30),
finalBakeTime: Duration.minutes(15),
}),
}),
});
```
The `deployTo` parameter is used to specify which environments to deploy the configuration to. If this parameter is not specified, there will not be a deployment.
A hosted configuration with `deployTo`:
```
declare const application: appconfig.Application;
declare const env: appconfig.Environment;
new appconfig.HostedConfiguration(this, 'MyHostedConfiguration', {
application,
content: appconfig.ConfigurationContent.fromInlineText('This is my configuration content.'),
deployTo: [env],
});
```
### [SourcedConfiguration](#sourcedconfiguration)
A sourced configuration represents configuration stored in an Amazon S3 bucket, AWS Secrets Manager secret, Systems Manager (SSM) Parameter Store parameter, SSM document, or AWS CodePipeline. A sourced configuration takes in the location source construct and optionally a version number to deploy. On construction of a sourced configuration, the configuration is deployed only if a version number is specified.
### [S3](#s3)
Use an Amazon S3 bucket to store a configuration.
```
declare const application: appconfig.Application;
const bucket = new s3.Bucket(this, 'MyBucket', {
versioned: true,
});
new appconfig.SourcedConfiguration(this, 'MySourcedConfiguration', {
application,
location: appconfig.ConfigurationSource.fromBucket(bucket, 'path/to/file.json'),
});
```
Use an encrypted bucket:
```
declare const application: appconfig.Application;
const bucket = new s3.Bucket(this, 'MyBucket', {
versioned: true,
encryption: s3.BucketEncryption.KMS,
});
new appconfig.SourcedConfiguration(this, 'MySourcedConfiguration', {
application,
location: appconfig.ConfigurationSource.fromBucket(bucket, 'path/to/file.json'),
});
```
### [AWS Secrets Manager secret](#aws-secrets-manager-secret)
Use a Secrets Manager secret to store a configuration.
```
declare const application: appconfig.Application;
declare const secret: secrets.Secret;
new appconfig.SourcedConfiguration(this, 'MySourcedConfiguration', {
application,
location: appconfig.ConfigurationSource.fromSecret(secret),
});
```
### [SSM Parameter Store parameter](#ssm-parameter-store-parameter)
Use an SSM parameter to store a configuration.
```
declare const application: appconfig.Application;
declare const parameter: ssm.StringParameter;
new appconfig.SourcedConfiguration(this, 'MySourcedConfiguration', {
application,
location: appconfig.ConfigurationSource.fromParameter(parameter),
versionNumber: '1',
});
```
### [SSM document](#ssm-document)
Use an SSM document to store a configuration.
```
declare const application: appconfig.Application;
declare const document: ssm.CfnDocument;
new appconfig.SourcedConfiguration(this, 'MySourcedConfiguration', {
application,
location: appconfig.ConfigurationSource.fromCfnDocument(document),
});
```
### [AWS CodePipeline](#aws-codepipeline)
Use an AWS CodePipeline pipeline to store a configuration.
```
declare const application: appconfig.Application;
declare const pipeline: codepipeline.Pipeline;
new appconfig.SourcedConfiguration(this, 'MySourcedConfiguration', {
application,
location: appconfig.ConfigurationSource.fromPipeline(pipeline),
});
```
Similar to a hosted configuration, a sourced configuration can optionally take in a type, validators, a `deployTo` parameter, and a deployment strategy.
A sourced configuration with type:
```
declare const application: appconfig.Application;
declare const bucket: s3.Bucket;
new appconfig.SourcedConfiguration(this, 'MySourcedConfiguration', {
application,
location: appconfig.ConfigurationSource.fromBucket(bucket, 'path/to/file.json'),
type: appconfig.ConfigurationType.FEATURE_FLAGS,
name: 'MyConfig',
description: 'This is my sourced configuration from CDK.',
});
```
A sourced configuration with validators:
```
declare const application: appconfig.Application;
declare const bucket: s3.Bucket;
declare const fn: lambda.Function;
new appconfig.SourcedConfiguration(this, 'MySourcedConfiguration', {
application,
location: appconfig.ConfigurationSource.fromBucket(bucket, 'path/to/file.json'),
validators: [
appconfig.JsonSchemaValidator.fromFile('schema.json'),
appconfig.LambdaValidator.fromFunction(fn),
],
});
```
A sourced configuration with a deployment strategy:
```
declare const application: appconfig.Application;
declare const bucket: s3.Bucket;
new appconfig.SourcedConfiguration(this, 'MySourcedConfiguration', {
application,
location: appconfig.ConfigurationSource.fromBucket(bucket, 'path/to/file.json'),
deploymentStrategy: new appconfig.DeploymentStrategy(this, 'MyDeploymentStrategy', {
rolloutStrategy: appconfig.RolloutStrategy.linear({
growthFactor: 15,
deploymentDuration: Duration.minutes(30),
finalBakeTime: Duration.minutes(15),
}),
}),
});
```
The `deployTo` parameter is used to specify which environments to deploy the configuration to. If this parameter is not specified, there will not be a deployment.
A sourced configuration with `deployTo`:
```
declare const application: appconfig.Application;
declare const bucket: s3.Bucket;
declare const env: appconfig.Environment;
new appconfig.SourcedConfiguration(this, 'MySourcedConfiguration', {
application,
location: appconfig.ConfigurationSource.fromBucket(bucket, 'path/to/file.json'),
deployTo: [env],
});
```
[Environment](#environment)
---
For each AWS AppConfig application, you define one or more environments. An environment is a logical deployment group of AWS AppConfig targets, such as applications in a Beta or Production environment. You can also define environments for application subcomponents such as the Web, Mobile, and Back-end components for your application. You can configure Amazon CloudWatch alarms for each environment. The system monitors alarms during a configuration deployment. If an alarm is triggered, the system rolls back the configuration.
Basic environment with monitors:
```
declare const application: appconfig.Application;
declare const alarm: cloudwatch.Alarm;
new appconfig.Environment(this, 'MyEnvironment', {
application,
monitors: [
{alarm},
]
});
```
[Extension](#extension)
---
An extension augments your ability to inject logic or behavior at different points during the AWS AppConfig workflow of creating or deploying a configuration.
See: <https://docs.aws.amazon.com/appconfig/latest/userguide/working-with-appconfig-extensions.html>
### [AWS Lambda destination](#aws-lambda-destination)
Use an AWS Lambda as the event destination for an extension.
```
declare const fn: lambda.Function;
new appconfig.Extension(this, 'MyExtension', {
actions: [
new appconfig.Action({
actionPoints: [appconfig.ActionPoint.ON_DEPLOYMENT_START],
eventDestination: new appconfig.LambdaDestination(fn),
}),
],
});
```
Lambda extension with parameters:
```
declare const fn: lambda.Function;
new appconfig.Extension(this, 'MyExtension', {
actions: [
new appconfig.Action({
actionPoints: [appconfig.ActionPoint.ON_DEPLOYMENT_START],
eventDestination: new appconfig.LambdaDestination(fn),
}),
],
parameters: [
appconfig.Parameter.required('testParam', 'true'),
appconfig.Parameter.notRequired('testNotRequiredParam'),
]
});
```
### [Amazon Simple Queue Service (SQS) destination](#amazon-simple-queue-service-sqs-destination)
Use a queue as the event destination for an extension.
```
declare const queue: sqs.Queue;
new appconfig.Extension(this, 'MyExtension', {
actions: [
new appconfig.Action({
actionPoints: [appconfig.ActionPoint.ON_DEPLOYMENT_START],
eventDestination: new appconfig.SqsDestination(queue),
}),
],
});
```
### [Amazon Simple Notification Service (SNS) destination](#amazon-simple-notification-service-sns-destination)
Use an SNS topic as the event destination for an extension.
```
declare const topic: sns.Topic;
new appconfig.Extension(this, 'MyExtension', {
actions: [
new appconfig.Action({
actionPoints: [appconfig.ActionPoint.ON_DEPLOYMENT_START],
eventDestination: new appconfig.SnsDestination(topic),
}),
],
});
```
### [Amazon EventBridge destination](#amazon-eventbridge-destination)
Use the default event bus as the event destination for an extension.
```
const bus = events.EventBus.fromEventBusName(this, 'MyEventBus', 'default');
new appconfig.Extension(this, 'MyExtension', {
actions: [
new appconfig.Action({
actionPoints: [appconfig.ActionPoint.ON_DEPLOYMENT_START],
eventDestination: new appconfig.EventBridgeDestination(bus),
}),
],
});
```
You can also add extensions and their associations directly by calling `onDeploymentComplete()` or any other action point method on the AWS AppConfig application, configuration, or environment resource. To add an association to an existing extension, you can call `addExtension()` on the resource.
Adding an association to an AWS AppConfig application:
```
declare const application: appconfig.Application;
declare const extension: appconfig.Extension;
declare const lambdaDestination: appconfig.LambdaDestination;
application.addExtension(extension);
application.onDeploymentComplete(lambdaDestination);
```
Readme
---
### Keywords
* aws
* cdk
* constructs
* appconfig |
sensu-plugins-librato | ruby | Ruby | Sensu-Plugins-librato
---
[![Build Status](https://travis-ci.org/sensu-plugins/sensu-plugins-librato.svg?branch=master)](https://travis-ci.org/sensu-plugins/sensu-plugins-librato)
[![Gem Version](https://badge.fury.io/rb/sensu-plugins-librato.svg)](http://badge.fury.io/rb/sensu-plugins-librato)
[![Code Climate](https://codeclimate.com/github/sensu-plugins/sensu-plugins-librato/badges/gpa.svg)](https://codeclimate.com/github/sensu-plugins/sensu-plugins-librato)
[![Test Coverage](https://codeclimate.com/github/sensu-plugins/sensu-plugins-librato/badges/coverage.svg)](https://codeclimate.com/github/sensu-plugins/sensu-plugins-librato)
[![Dependency Status](https://gemnasium.com/sensu-plugins/sensu-plugins-librato.svg)](https://gemnasium.com/sensu-plugins/sensu-plugins-librato)
Functionality
---
Files
---
* bin/handler-librato-occurrences.rb
* bin/handler-metrics-librato.rb
Usage
---
**handler-metrics-librato**
```
{
"librato": {
"email": "[[email protected]](/cdn-cgi/l/email-protection)",
"api_key": "12345",
"use_sensu_client_hostname_as_source": false
}
}
```
**handler-librato-occurrences**
```
{
"librato": {
"email": "[[email protected]](/cdn-cgi/l/email-protection)",
"api_key": "12345"
}
}
```
Installation
---
[Installation and Setup](http://sensu-plugins.io/docs/installation_instructions.html)
Notes
--- |
django-ttag | readthedoc | Markdown | Date:
`Tag` and the various `Arg` classes are consciously modelled after Django's `Model`, `Form`, and respective `Field` classes. `Arg` properties are set on a `Tag` in the same way `Field` properties are set on a `Model` or `Form`.
## Example
Following is a minimal example of a template tag:
```
class Welcome(ttag.Tag):

    def output(self, data):
        return "Hi there!"
```
This would create a tag `{% welcome %}` which takes no arguments and outputs `Hi there!`.
### Registering your tag
TTag `Tag` classes are registered just like a standard tag:
```
from django import template
import ttag
register = template.Library()
class Welcome(ttag.Tag):

    def output(self, data):
        return "Hi there!"

register.tag(Welcome)
```
## Defining arguments
By default, arguments are positional, meaning that they must appear in the tag in the order they are defined in your tag class.
Here is an example of using arguments to extend the basic `{% welcome %}` example tag above so we can greet the user personally:
```
class Welcome(ttag.Tag):
    user = ttag.Arg()

    def output(self, data):
        name = data['user'].get_full_name()
        return "Hi, %s!" % name
```
The tag would then be used like: `{% welcome user %}`.

Arguments are usually resolved against the template context. For simpler cases where you don't want this behaviour, use `ttag.BasicArg`.

Sometimes, the argument name you want to use is a Python keyword and can't be used as a class attribute (such as `with`, `as`, `and`, etc.). In these
cases append an underscore:
```
class Format(ttag.Tag):
as_ = ttag.Arg()
```
This is only used during the definition; the argument name is stored without (and therefore should be referenced without) this trailing underscore.
### Named arguments
Arguments can alternatively be marked as a named argument. In these cases the argument name is part of the tag definition in the template.
Named arguments can be defined in the template tag in any order.
Here are a few examples of named arguments:
* `{% slap with fish %}` has an argument named `with`.
* `{% show_people country "NZ" limit 10 %}` has two named arguments, `country` and `limit`. They could potentially be marked as optional and can be listed in any order.
* `{% show_countries populated_only %}` has a boolean argument, demonstrating that an argument may not always take a single value. Boolean arguments take no values, and a special argument type could take more than one value (for example, `ttag.KeywordsArg`).
#### Space separated arguments

The first named argument format looks like `[argument name] [value]`.
Here's an example of what the `{% slap %}` tag above may look like:
```
class Slap(ttag.Tag):
with_ = ttag.Arg(named=True)
def output(self, data):
return "You have been slapped with a %s" % data['with']
```
#### Keyword arguments
An alternate named argument format is to use keyword arguments:
```
class Output(ttag.Tag):
    list_ = ttag.Arg()
    limit = ttag.Arg(keyword=True)
    offset = ttag.Arg(keyword=True)
```
This would result in a tag which can be used like this:
```
{% output people limit=10 offset=report.offset %}
```
Note
If your tag should define a list of arbitrary keywords, you may benefit from `ttag.KeywordsArg` instead.
### Validation arguments
Some default classes are included to assist with validation of template arguments.
## Using context

The `output()` method which we have used so far is just a shortcut to `render()`. The shortcut method doesn't provide direct access to the context, so if you need to alter the context, or check other context variables, you can use `render()` directly.
Note
The `ttag.helpers.AsTag` class is available for the common
case of tags that end in `... as something %}` .
For example:
```
class GetHost(ttag.Tag):
"""
Returns the current host. Requires that ``request`` is on the template
context.
"""
    def render(self, context):
        return context['request'].get_host()
```
Use `resolve()` to resolve the tag’s arguments into a data
dictionary:
```
class Welcome(ttag.Tag):
    user = ttag.Arg()

    def render(self, context):
        context['welcomed'] = True
        data = self.resolve(context)
        name = data['user'].get_full_name()
        return "Hi, %s!" % name
```
## Cleaning arguments

You can validate / clean arguments similar to Django's forms.
To clean an individual argument, use a `clean_[argname](value)` method. Ensure that your method returns the cleaned value.
After the individual arguments are cleaned, a `clean(data, context)` method is run. This method must return the cleaned data dictionary. Use the `ttag.TagValidationError` exception to raise validation errors.
## Writing a block tag

For simple block tags, use the `block` option:

```
class Repeat(ttag.Tag):
    count = ttag.IntegerArg()

    class Meta:
        block = True
        end_block = 'done'

    def render(self, context):
        data = self.resolve(context)
        output = []
        for i in range(data['count']):
            context.push()
            output.append(self.nodelist.render(context))
            context.pop()
        return ''.join(output)
```
As you can see, using the block option will add a `nodelist` attribute to the
tag, which can then be rendered using the context. The optional `end_block` option allows for an alternate ending block. The
default value is `'end%(name)s'` , so it would be `{% endrepeat %}` for the
above tag if the option hadn’t been provided.
### Working with multiple blocks
Say we wanted to expand on our repeat tag to look for an `{% empty %}` alternative section for when a zero-value count is received. Rather than setting the `block` option to `True` , we set it to a dictionary
where the keys are the section tags to look for and the values are whether the
section is required:
```
class Repeat(ttag.Tag):
    count = ttag.IntegerArg()

    class Meta:
        block = {'empty': False}

    def render(self, context):
        data = self.resolve(context)
        if not data['count']:
            return self.nodelist_empty.render(context)
        output = []
        for i in range(data['count']):
            context.push()
            output.append(self.nodelist.render(context))
            context.pop()
        return ''.join(output)
```
This will cause two attributes to be added to the tag: `nodelist` will
contain everything collected up to the `{% empty %}` section tag, and `nodelist_empty` will contain everything up until the end tag. If no matching section tag is found when parsing the template, either a `TemplateSyntaxError` will be raised (if it’s a required section)
or an empty node list will be used. More advanced cases can be handled using Django’s standard parser in the `__init__` method of your tag:
```
class AdvancedTag(ttag.Tag):
def __init__(self, parser, token):
super(AdvancedTag, self).__init__(parser, token)
# Do whatever fancy parser modification you like.
```
## Full Example
This example provides a template tag which outputs a tweaked version of the instance name passed in. It demonstrates using the various `Arg` types:
```
class TweakName(ttag.Tag):
"""
Provides the tweak_name template tag, which outputs a
slightly modified version of the NamedModel instance passed in.
{% tweak_name instance [offset=0] [limit=10] [reverse] %}
"""
instance = ttag.ModelInstanceArg(model=NamedModel)
offset = ttag.IntegerArg(default=0, keyword=True)
limit = ttag.IntegerArg(default=10, keyword=True)
reverse = ttag.BooleanArg()
def clean_limit(self, value):
"""
Check that limit is not negative.
"""
if value < 0:
raise ttag.TagValidationError("limit must be >= 0")
return value
def output(self, data):
name = data['instance'].name
# Reverse if appropriate.
if 'reverse' in data:
name = name[::-1]
# Apply our offset and limit.
name = name[data['offset']:data['offset'] + data['limit']]
# Return the tweaked name.
return name
```
Example usages:
```
{% tweak_name obj limit=5 %}
{% tweak_name obj offset=1 %}
{% tweak_name obj reverse %}
{% tweak_name obj offset=1 limit=5 reverse %}
``` |
github.com/pebbe/zmq4 | go | Go | README
[¶](#section-readme)
---
A Go interface to [ZeroMQ](http://www.zeromq.org/) version 4.
---
### Warning
Starting with Go 1.14, on Unix-like systems, you will get a lot of interrupted signal calls. See the top of the package documentation for a fix.
---
[![Go Report Card](https://goreportcard.com/badge/github.com/pebbe/zmq4)](https://goreportcard.com/report/github.com/pebbe/zmq4)
[![GoDoc](https://godoc.org/github.com/pebbe/zmq4?status.svg)](https://godoc.org/github.com/pebbe/zmq4)
This requires ZeroMQ version 4.0.1 or above. To use CURVE security in versions prior to 4.2, ZeroMQ must be installed with
[libsodium](https://github.com/jedisct1/libsodium) enabled.
Partial support for ZeroMQ 4.2 DRAFT is available in the alternate version of zmq4 `draft`. The API pertaining to this is subject to change. To use this:
```
import (
zmq "github.com/pebbe/zmq4/draft"
)
```
For ZeroMQ version 3, see: <http://github.com/pebbe/zmq3>
For ZeroMQ version 2, see: <http://github.com/pebbe/zmq2>
Including all examples of [ØMQ - The Guide](http://zguide.zeromq.org/page:all).
Keywords: zmq, zeromq, 0mq, networks, distributed computing, message passing, fanout, pubsub, pipeline, request-reply
#### See also
* [go-zeromq/zmq4](https://github.com/go-zeromq/zmq4) — A pure-Go implementation of ØMQ (ZeroMQ), version 4
* [go-nanomsg](https://github.com/op/go-nanomsg) — Language bindings for nanomsg in Go
* [goczmq](https://github.com/zeromq/goczmq) — A Go interface to CZMQ
* [Mangos](https://github.com/go-mangos/mangos) — An implementation in pure Go of the SP ("Scalable Protocols") protocols
### Requirements
zmq4 is just a wrapper for the ZeroMQ library. It doesn't include the library itself. So you need to have ZeroMQ installed, including its development files. On Linux and Darwin you can check this with (`$` is the command prompt):
```
$ pkg-config --modversion libzmq
4.3.1
```
The Go compiler must be able to compile C code. You can check this with:
```
$ go env CGO_ENABLED
1
```
You can't do cross-compilation; that would disable cgo.
#### Windows
Build with `CGO_CFLAGS` and `CGO_LDFLAGS` environment variables, for example:
```
$env:CGO_CFLAGS='-ID:/dev/vcpkg/installed/x64-windows/include'
$env:CGO_LDFLAGS='-LD:/dev/vcpkg/installed/x64-windows/lib -l:libzmq-mt-4_3_4.lib'
```
> Deploy the resulting program with `libzmq-mt-4_3_4.dll`
### Install
```
go get github.com/pebbe/zmq4
```
### Docs
* [package help](http://godoc.org/github.com/pebbe/zmq4)
* [wiki](https://github.com/pebbe/zmq4/wiki)
### API change
There has been an API change in commit 0bc5ab465849847b0556295d9a2023295c4d169e of 2014-06-27, 10:17:55 UTC in the functions `AuthAllow` and `AuthDeny`.
Old:
```
func AuthAllow(addresses ...string)
func AuthDeny(addresses ...string)
```
New:
```
func AuthAllow(domain string, addresses ...string)
func AuthDeny(domain string, addresses ...string)
```
If `domain` can be parsed as an IP address, it will be interpreted as such, and it and all remaining addresses are added to all domains.
So this should still work as before:
```
zmq.AuthAllow("127.0.0.1", "123.123.123.123")
```
But this won't compile:
```
a := []string{"127.0.0.1", "123.123.123.123"}
zmq.AuthAllow(a...)
```
And needs to be rewritten as:
```
a := []string{"127.0.0.1", "123.123.123.123"}
zmq.AuthAllow("*", a...)
```
Furthermore, an address can now be a single IP address, as well as an IP address and mask in CIDR notation, e.g. "123.123.123.0/24".
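For instance, a minimal sketch (the domain name here is arbitrary) that whitelists a whole subnet:
```
zmq.AuthAllow("global", "123.123.123.0/24")
```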
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
A Go interface to ZeroMQ (zmq, 0mq) version 4.
For ZeroMQ version 3, see: <http://github.com/pebbe/zmq3>
For ZeroMQ version 2, see: <http://github.com/pebbe/zmq2>
<http://www.zeromq.org/>
See also the wiki: <https://github.com/pebbe/zmq4/wiki>
---
A note on the use of a context:
This package provides a default context. This is what will be used by the functions without a context receiver, that create a socket or manipulate the context. Package developers that import this package should probably not use the default context with its associated functions, but create their own context(s). See: type Context.
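For example, a minimal sketch of creating and using a private context instead of the default one (the socket type and endpoint are arbitrary choices for illustration):

```
zctx, err := zmq.NewContext()
if err != nil {
    // handle error
}
defer zctx.Term()

// Sockets created from the private context leave the default context untouched.
soc, err := zctx.NewSocket(zmq.REP)
if err != nil {
    // handle error
}
defer soc.Close()

if err := soc.Bind("tcp://*:5555"); err != nil {
    // handle error
}
```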
---
Since Go 1.14 you will get a lot of interrupted system calls.
See: <https://golang.org/doc/go1.14#runtime>
There are two options to prevent this.
The first option is to build your program with the environment variable:
```
GODEBUG=asyncpreemptoff=1
```
The second option is to let the program retry after an interrupted system call.
Initially, this is set to true, for the global context, and for contexts created with NewContext().
When you install a signal handler, for instance to handle Ctrl-C, you should probably clear this option in your signal handler. For example:
```
zctx, _ := zmq.NewContext()
ctx, cancel := context.WithCancel(context.Background())
go func() {
chSignal := make(chan os.Signal, 1)
signal.Notify(chSignal, syscall.SIGHUP, syscall.SIGINT, syscall.SIGQUIT, syscall.SIGTERM)
<-chSignal
zmq4.SetRetryAfterEINTR(false)
zctx.SetRetryAfterEINTR(false)
cancel()
}()
```
---
### Index [¶](#pkg-index)
* [Constants](#pkg-constants)
* [Variables](#pkg-variables)
* [func AuthAllow(domain string, addresses ...string)](#AuthAllow)
* [func AuthCurveAdd(domain string, pubkeys ...string)](#AuthCurveAdd)
* [func AuthCurvePublic(z85SecretKey string) (z85PublicKey string, err error)](#AuthCurvePublic)
* [func AuthCurveRemove(domain string, pubkeys ...string)](#AuthCurveRemove)
* [func AuthCurveRemoveAll(domain string)](#AuthCurveRemoveAll)
* [func AuthDeny(domain string, addresses ...string)](#AuthDeny)
* [func AuthMetaBlob(key, value string) (blob []byte, err error)](#AuthMetaBlob)
* [func AuthPlainAdd(domain, username, password string)](#AuthPlainAdd)
* [func AuthPlainRemove(domain string, usernames ...string)](#AuthPlainRemove)
* [func AuthPlainRemoveAll(domain string)](#AuthPlainRemoveAll)
* [func AuthSetMetadataHandler(...)](#AuthSetMetadataHandler)
* [func AuthSetVerbose(verbose bool)](#AuthSetVerbose)
* [func AuthStart() (err error)](#AuthStart)
* [func AuthStop()](#AuthStop)
* [func Error(e int) string](#Error)
* [func GetBlocky() (bool, error)](#GetBlocky)
* [func GetIoThreads() (int, error)](#GetIoThreads)
* [func GetIpv6() (bool, error)](#GetIpv6)
* [func GetMaxMsgsz() (int, error)](#GetMaxMsgsz)
* [func GetMaxSockets() (int, error)](#GetMaxSockets)
* [func GetRetryAfterEINTR() bool](#GetRetryAfterEINTR)
* [func HasCurve() bool](#HasCurve)
* [func HasGssapi() bool](#HasGssapi)
* [func HasIpc() bool](#HasIpc)
* [func HasNorm() bool](#HasNorm)
* [func HasPgm() bool](#HasPgm)
* [func HasTipc() bool](#HasTipc)
* [func NewCurveKeypair() (z85_public_key, z85_secret_key string, err error)](#NewCurveKeypair)
* [func Proxy(frontend, backend, capture *Socket) error](#Proxy)
* [func ProxySteerable(frontend, backend, capture, control *Socket) error](#ProxySteerable)
* [func SetBlocky(i bool) error](#SetBlocky)
* [func SetIoThreads(n int) error](#SetIoThreads)
* [func SetIpv6(i bool) error](#SetIpv6)
* [func SetMaxMsgsz(n int) error](#SetMaxMsgsz)
* [func SetMaxSockets(n int) error](#SetMaxSockets)
* [func SetRetryAfterEINTR(retry bool)](#SetRetryAfterEINTR)
* [func SetThreadPriority(n int) error](#SetThreadPriority)
* [func SetThreadSchedPolicy(n int) error](#SetThreadSchedPolicy)
* [func Term() error](#Term)
* [func Version() (major, minor, patch int)](#Version)
* [func Z85decode(s string) string](#Z85decode)
* [func Z85encode(data string) string](#Z85encode)
* [type Context](#Context)
* + [func NewContext() (ctx *Context, err error)](#NewContext)
* + [func (ctx *Context) GetBlocky() (bool, error)](#Context.GetBlocky)
+ [func (ctx *Context) GetIoThreads() (int, error)](#Context.GetIoThreads)
+ [func (ctx *Context) GetIpv6() (bool, error)](#Context.GetIpv6)
+ [func (ctx *Context) GetMaxMsgsz() (int, error)](#Context.GetMaxMsgsz)
+ [func (ctx *Context) GetMaxSockets() (int, error)](#Context.GetMaxSockets)
+ [func (ctx *Context) GetRetryAfterEINTR() bool](#Context.GetRetryAfterEINTR)
+ [func (ctx *Context) NewSocket(t Type) (soc *Socket, err error)](#Context.NewSocket)
+ [func (ctx *Context) SetBlocky(i bool) error](#Context.SetBlocky)
+ [func (ctx *Context) SetIoThreads(n int) error](#Context.SetIoThreads)
+ [func (ctx *Context) SetIpv6(i bool) error](#Context.SetIpv6)
+ [func (ctx *Context) SetMaxMsgsz(n int) error](#Context.SetMaxMsgsz)
+ [func (ctx *Context) SetMaxSockets(n int) error](#Context.SetMaxSockets)
+ [func (ctx *Context) SetRetryAfterEINTR(retry bool)](#Context.SetRetryAfterEINTR)
+ [func (ctx *Context) SetThreadPriority(n int) error](#Context.SetThreadPriority)
+ [func (ctx *Context) SetThreadSchedPolicy(n int) error](#Context.SetThreadSchedPolicy)
+ [func (ctx *Context) Term() error](#Context.Term)
* [type Errno](#Errno)
* + [func AsErrno(err error) Errno](#AsErrno)
* + [func (errno Errno) Error() string](#Errno.Error)
* [type Event](#Event)
* + [func (e Event) String() string](#Event.String)
* [type Flag](#Flag)
* + [func (f Flag) String() string](#Flag.String)
* [type Mechanism](#Mechanism)
* + [func (m Mechanism) String() string](#Mechanism.String)
* [type Polled](#Polled)
* [type Poller](#Poller)
* + [func NewPoller() *Poller](#NewPoller)
* + [func (p *Poller) Add(soc *Socket, events State) int](#Poller.Add)
+ [func (p *Poller) Poll(timeout time.Duration) ([]Polled, error)](#Poller.Poll)
+ [func (p *Poller) PollAll(timeout time.Duration) ([]Polled, error)](#Poller.PollAll)
+ [func (p *Poller) Remove(id int) error](#Poller.Remove)
+ [func (p *Poller) RemoveBySocket(soc *Socket) error](#Poller.RemoveBySocket)
+ [func (p *Poller) String() string](#Poller.String)
+ [func (p *Poller) Update(id int, events State) (previous State, err error)](#Poller.Update)
+ [func (p *Poller) UpdateBySocket(soc *Socket, events State) (previous State, err error)](#Poller.UpdateBySocket)
* [type Reactor](#Reactor)
* + [func NewReactor() *Reactor](#NewReactor)
* + [func (r *Reactor) AddChannel(ch <-chan interface{}, limit int, handler func(interface{}) error) (id uint64)](#Reactor.AddChannel)
+ [func (r *Reactor) AddChannelTime(ch <-chan time.Time, limit int, handler func(interface{}) error) (id uint64)](#Reactor.AddChannelTime)
+ [func (r *Reactor) AddSocket(soc *Socket, events State, handler func(State) error)](#Reactor.AddSocket)
+ [func (r *Reactor) RemoveChannel(id uint64)](#Reactor.RemoveChannel)
+ [func (r *Reactor) RemoveSocket(soc *Socket)](#Reactor.RemoveSocket)
+ [func (r *Reactor) Run(interval time.Duration) (err error)](#Reactor.Run)
+ [func (r *Reactor) SetVerbose(verbose bool)](#Reactor.SetVerbose)
* [type Socket](#Socket)
* + [func NewSocket(t Type) (soc *Socket, err error)](#NewSocket)
* + [func (soc *Socket) Bind(endpoint string) error](#Socket.Bind)
+ [func (client *Socket) ClientAuthCurve(server_public_key, client_public_key, client_secret_key string) error](#Socket.ClientAuthCurve)
+ [func (client *Socket) ClientAuthPlain(username, password string) error](#Socket.ClientAuthPlain)
+ [func (soc *Socket) Close() error](#Socket.Close)
+ [func (soc *Socket) Connect(endpoint string) error](#Socket.Connect)
+ [func (soc *Socket) Context() (*Context, error)](#Socket.Context)
+ [func (soc *Socket) Disconnect(endpoint string) error](#Socket.Disconnect)
+ [func (soc *Socket) GetAffinity() (uint64, error)](#Socket.GetAffinity)
+ [func (soc *Socket) GetBacklog() (int, error)](#Socket.GetBacklog)
+ [func (soc *Socket) GetConnectTimeout() (time.Duration, error)](#Socket.GetConnectTimeout)
+ [func (soc *Socket) GetCurvePublickeyRaw() (string, error)](#Socket.GetCurvePublickeyRaw)
+ [func (soc *Socket) GetCurvePublickeykeyZ85() (string, error)](#Socket.GetCurvePublickeykeyZ85)
+ [func (soc *Socket) GetCurveSecretkeyRaw() (string, error)](#Socket.GetCurveSecretkeyRaw)
+ [func (soc *Socket) GetCurveSecretkeyZ85() (string, error)](#Socket.GetCurveSecretkeyZ85)
+ [func (soc *Socket) GetCurveServerkeyRaw() (string, error)](#Socket.GetCurveServerkeyRaw)
+ [func (soc *Socket) GetCurveServerkeyZ85() (string, error)](#Socket.GetCurveServerkeyZ85)
+ [func (soc *Socket) GetEvents() (State, error)](#Socket.GetEvents)
+ [func (soc *Socket) GetFd() (int, error)](#Socket.GetFd)
+ [func (soc *Socket) GetGssapiPlaintext() (bool, error)](#Socket.GetGssapiPlaintext)
+ [func (soc *Socket) GetGssapiPrincipal() (string, error)](#Socket.GetGssapiPrincipal)
+ [func (soc *Socket) GetGssapiServer() (bool, error)](#Socket.GetGssapiServer)
+ [func (soc *Socket) GetGssapiServicePrincipal() (string, error)](#Socket.GetGssapiServicePrincipal)
+ [func (soc *Socket) GetHandshakeIvl() (time.Duration, error)](#Socket.GetHandshakeIvl)
+ [func (soc *Socket) GetIdentity() (string, error)](#Socket.GetIdentity)
+ [func (soc *Socket) GetImmediate() (bool, error)](#Socket.GetImmediate)
+ [func (soc *Socket) GetInvertMatching() (int, error)](#Socket.GetInvertMatching)
+ [func (soc *Socket) GetIpv6() (bool, error)](#Socket.GetIpv6)
+ [func (soc *Socket) GetLastEndpoint() (string, error)](#Socket.GetLastEndpoint)
+ [func (soc *Socket) GetLinger() (time.Duration, error)](#Socket.GetLinger)
+ [func (soc *Socket) GetMaxmsgsize() (int64, error)](#Socket.GetMaxmsgsize)
+ [func (soc *Socket) GetMechanism() (Mechanism, error)](#Socket.GetMechanism)
+ [func (soc *Socket) GetMulticastHops() (int, error)](#Socket.GetMulticastHops)
+ [func (soc *Socket) GetMulticastMaxtpdu() (int, error)](#Socket.GetMulticastMaxtpdu)
+ [func (soc *Socket) GetPlainPassword() (string, error)](#Socket.GetPlainPassword)
+ [func (soc *Socket) GetPlainServer() (int, error)](#Socket.GetPlainServer)
+ [func (soc *Socket) GetPlainUsername() (string, error)](#Socket.GetPlainUsername)
+ [func (soc *Socket) GetRate() (int, error)](#Socket.GetRate)
+ [func (soc *Socket) GetRcvbuf() (int, error)](#Socket.GetRcvbuf)
+ [func (soc *Socket) GetRcvhwm() (int, error)](#Socket.GetRcvhwm)
+ [func (soc *Socket) GetRcvmore() (bool, error)](#Socket.GetRcvmore)
+ [func (soc *Socket) GetRcvtimeo() (time.Duration, error)](#Socket.GetRcvtimeo)
+ [func (soc *Socket) GetReconnectIvl() (time.Duration, error)](#Socket.GetReconnectIvl)
+ [func (soc *Socket) GetReconnectIvlMax() (time.Duration, error)](#Socket.GetReconnectIvlMax)
+ [func (soc *Socket) GetRecoveryIvl() (time.Duration, error)](#Socket.GetRecoveryIvl)
+ [func (soc *Socket) GetSndbuf() (int, error)](#Socket.GetSndbuf)
+ [func (soc *Socket) GetSndhwm() (int, error)](#Socket.GetSndhwm)
+ [func (soc *Socket) GetSndtimeo() (time.Duration, error)](#Socket.GetSndtimeo)
+ [func (soc *Socket) GetSocksProxy() (string, error)](#Socket.GetSocksProxy)
+ [func (soc *Socket) GetTcpKeepalive() (int, error)](#Socket.GetTcpKeepalive)
+ [func (soc *Socket) GetTcpKeepaliveCnt() (int, error)](#Socket.GetTcpKeepaliveCnt)
+ [func (soc *Socket) GetTcpKeepaliveIdle() (int, error)](#Socket.GetTcpKeepaliveIdle)
+ [func (soc *Socket) GetTcpKeepaliveIntvl() (int, error)](#Socket.GetTcpKeepaliveIntvl)
+ [func (soc *Socket) GetTcpMaxrt() (time.Duration, error)](#Socket.GetTcpMaxrt)
+ [func (soc *Socket) GetThreadSafe() (bool, error)](#Socket.GetThreadSafe)
+ [func (soc *Socket) GetTos() (int, error)](#Socket.GetTos)
+ [func (soc *Socket) GetType() (Type, error)](#Socket.GetType)
+ [func (soc *Socket) GetVmciBufferMaxSize() (uint64, error)](#Socket.GetVmciBufferMaxSize)
+ [func (soc *Socket) GetVmciBufferMinSize() (uint64, error)](#Socket.GetVmciBufferMinSize)
+ [func (soc *Socket) GetVmciBufferSize() (uint64, error)](#Socket.GetVmciBufferSize)
+ [func (soc *Socket) GetVmciConnectTimeout() (time.Duration, error)](#Socket.GetVmciConnectTimeout)
+ [func (soc *Socket) GetZapDomain() (string, error)](#Socket.GetZapDomain)
+ [func (soc *Socket) Getusefd() (int, error)](#Socket.Getusefd)
+ [func (soc *Socket) Monitor(addr string, events Event) error](#Socket.Monitor)
+ [func (soc *Socket) Recv(flags Flag) (string, error)](#Socket.Recv)
+ [func (soc *Socket) RecvBytes(flags Flag) ([]byte, error)](#Socket.RecvBytes)
+ [func (soc *Socket) RecvBytesWithMetadata(flags Flag, properties ...string) (msg []byte, metadata map[string]string, err error)](#Socket.RecvBytesWithMetadata)
+ [func (soc *Socket) RecvEvent(flags Flag) (event_type Event, addr string, value int, err error)](#Socket.RecvEvent)
+ [func (soc *Socket) RecvMessage(flags Flag) (msg []string, err error)](#Socket.RecvMessage)
+ [func (soc *Socket) RecvMessageBytes(flags Flag) (msg [][]byte, err error)](#Socket.RecvMessageBytes)
+ [func (soc *Socket) RecvMessageBytesWithMetadata(flags Flag, properties ...string) (msg [][]byte, metadata map[string]string, err error)](#Socket.RecvMessageBytesWithMetadata)
+ [func (soc *Socket) RecvMessageWithMetadata(flags Flag, properties ...string) (msg []string, metadata map[string]string, err error)](#Socket.RecvMessageWithMetadata)
+ [func (soc *Socket) RecvWithMetadata(flags Flag, properties ...string) (msg string, metadata map[string]string, err error)](#Socket.RecvWithMetadata)
+ [func (soc *Socket) Send(data string, flags Flag) (int, error)](#Socket.Send)
+ [func (soc *Socket) SendBytes(data []byte, flags Flag) (int, error)](#Socket.SendBytes)
+ [func (soc *Socket) SendMessage(parts ...interface{}) (total int, err error)](#Socket.SendMessage)
+ [func (soc *Socket) SendMessageDontwait(parts ...interface{}) (total int, err error)](#Socket.SendMessageDontwait)
+ [func (server *Socket) ServerAuthCurve(domain, secret_key string) error](#Socket.ServerAuthCurve)
+ [func (server *Socket) ServerAuthNull(domain string) error](#Socket.ServerAuthNull)
+ [func (server *Socket) ServerAuthPlain(domain string) error](#Socket.ServerAuthPlain)
+ [func (soc *Socket) SetAffinity(value uint64) error](#Socket.SetAffinity)
+ [func (soc *Socket) SetBacklog(value int) error](#Socket.SetBacklog)
+ [func (soc *Socket) SetConflate(value bool) error](#Socket.SetConflate)
+ [func (soc *Socket) SetConnectRid(value string) error](#Socket.SetConnectRid)
+ [func (soc *Socket) SetConnectTimeout(value time.Duration) error](#Socket.SetConnectTimeout)
+ [func (soc *Socket) SetCurvePublickey(key string) error](#Socket.SetCurvePublickey)
+ [func (soc *Socket) SetCurveSecretkey(key string) error](#Socket.SetCurveSecretkey)
+ [func (soc *Socket) SetCurveServer(value int) error](#Socket.SetCurveServer)
+ [func (soc *Socket) SetCurveServerkey(key string) error](#Socket.SetCurveServerkey)
+ [func (soc *Socket) SetGssapiPlaintext(value bool) error](#Socket.SetGssapiPlaintext)
+ [func (soc *Socket) SetGssapiPrincipal(value string) error](#Socket.SetGssapiPrincipal)
+ [func (soc *Socket) SetGssapiServer(value bool) error](#Socket.SetGssapiServer)
+ [func (soc *Socket) SetGssapiServicePrincipal(value string) error](#Socket.SetGssapiServicePrincipal)
+ [func (soc *Socket) SetHandshakeIvl(value time.Duration) error](#Socket.SetHandshakeIvl)
+ [func (soc *Socket) SetHeartbeatIvl(value time.Duration) error](#Socket.SetHeartbeatIvl)
+ [func (soc *Socket) SetHeartbeatTimeout(value time.Duration) error](#Socket.SetHeartbeatTimeout)
+ [func (soc *Socket) SetHeartbeatTtl(value time.Duration) error](#Socket.SetHeartbeatTtl)
+ [func (soc *Socket) SetIdentity(value string) error](#Socket.SetIdentity)
+ [func (soc *Socket) SetImmediate(value bool) error](#Socket.SetImmediate)
+ [func (soc *Socket) SetInvertMatching(value int) error](#Socket.SetInvertMatching)
+ [func (soc *Socket) SetIpv6(value bool) error](#Socket.SetIpv6)
+ [func (soc *Socket) SetLinger(value time.Duration) error](#Socket.SetLinger)
+ [func (soc *Socket) SetMaxmsgsize(value int64) error](#Socket.SetMaxmsgsize)
+ [func (soc *Socket) SetMulticastHops(value int) error](#Socket.SetMulticastHops)
+ [func (soc *Socket) SetMulticastMaxtpdu(value int) error](#Socket.SetMulticastMaxtpdu)
+ [func (soc *Socket) SetPlainPassword(password string) error](#Socket.SetPlainPassword)
+ [func (soc *Socket) SetPlainServer(value int) error](#Socket.SetPlainServer)
+ [func (soc *Socket) SetPlainUsername(username string) error](#Socket.SetPlainUsername)
+ [func (soc *Socket) SetProbeRouter(value int) error](#Socket.SetProbeRouter)
+ [func (soc *Socket) SetRate(value int) error](#Socket.SetRate)
+ [func (soc *Socket) SetRcvbuf(value int) error](#Socket.SetRcvbuf)
+ [func (soc *Socket) SetRcvhwm(value int) error](#Socket.SetRcvhwm)
+ [func (soc *Socket) SetRcvtimeo(value time.Duration) error](#Socket.SetRcvtimeo)
+ [func (soc *Socket) SetReconnectIvl(value time.Duration) error](#Socket.SetReconnectIvl)
+ [func (soc *Socket) SetReconnectIvlMax(value time.Duration) error](#Socket.SetReconnectIvlMax)
+ [func (soc *Socket) SetRecoveryIvl(value time.Duration) error](#Socket.SetRecoveryIvl)
+ [func (soc *Socket) SetReqCorrelate(value int) error](#Socket.SetReqCorrelate)
+ [func (soc *Socket) SetReqRelaxed(value int) error](#Socket.SetReqRelaxed)
+ [func (soc *Socket) SetRouterHandover(value bool) error](#Socket.SetRouterHandover)
+ [func (soc *Socket) SetRouterMandatory(value int) error](#Socket.SetRouterMandatory)
+ [func (soc *Socket) SetRouterRaw(value int) error](#Socket.SetRouterRaw)
+ [func (soc *Socket) SetSndbuf(value int) error](#Socket.SetSndbuf)
+ [func (soc *Socket) SetSndhwm(value int) error](#Socket.SetSndhwm)
+ [func (soc *Socket) SetSndtimeo(value time.Duration) error](#Socket.SetSndtimeo)
+ [func (soc *Socket) SetSocksProxy(value string) error](#Socket.SetSocksProxy)
+ [func (soc *Socket) SetStreamNotify(value int) error](#Socket.SetStreamNotify)
+ [func (soc *Socket) SetSubscribe(filter string) error](#Socket.SetSubscribe)
+ [func (soc *Socket) SetTcpAcceptFilter(filter string) error](#Socket.SetTcpAcceptFilter)
+ [func (soc *Socket) SetTcpKeepalive(value int) error](#Socket.SetTcpKeepalive)
+ [func (soc *Socket) SetTcpKeepaliveCnt(value int) error](#Socket.SetTcpKeepaliveCnt)
+ [func (soc *Socket) SetTcpKeepaliveIdle(value int) error](#Socket.SetTcpKeepaliveIdle)
+ [func (soc *Socket) SetTcpKeepaliveIntvl(value int) error](#Socket.SetTcpKeepaliveIntvl)
+ [func (soc *Socket) SetTcpMaxrt(value time.Duration) error](#Socket.SetTcpMaxrt)
+ [func (soc *Socket) SetTos(value int) error](#Socket.SetTos)
+ [func (soc *Socket) SetUnsubscribe(filter string) error](#Socket.SetUnsubscribe)
+ [func (soc *Socket) SetUseFd(value int) error](#Socket.SetUseFd)
+ [func (soc *Socket) SetVmciBufferMaxSize(value uint64) error](#Socket.SetVmciBufferMaxSize)
+ [func (soc *Socket) SetVmciBufferMinSize(value uint64) error](#Socket.SetVmciBufferMinSize)
+ [func (soc *Socket) SetVmciBufferSize(value uint64) error](#Socket.SetVmciBufferSize)
+ [func (soc *Socket) SetVmciConnectTimeout(value time.Duration) error](#Socket.SetVmciConnectTimeout)
+ [func (soc *Socket) SetXpubManual(value int) error](#Socket.SetXpubManual)
+ [func (soc *Socket) SetXpubNodrop(value bool) error](#Socket.SetXpubNodrop)
+ [func (soc *Socket) SetXpubVerbose(value int) error](#Socket.SetXpubVerbose)
+ [func (soc *Socket) SetXpubVerboser(value int) error](#Socket.SetXpubVerboser)
+ [func (soc *Socket) SetXpubWelcomeMsg(value string) error](#Socket.SetXpubWelcomeMsg)
+ [func (soc *Socket) SetZapDomain(domain string) error](#Socket.SetZapDomain)
+ [func (soc Socket) String() string](#Socket.String)
+ [func (soc *Socket) Unbind(endpoint string) error](#Socket.Unbind)
* [type State](#State)
* + [func (s State) String() string](#State.String)
* [type Type](#Type)
* + [func (t Type) String() string](#Type.String)
### Constants [¶](#pkg-constants)
```
const (
    // On Windows platform some of the standard POSIX errnos are not defined.
    EADDRINUSE      = Errno(C.EADDRINUSE)
    EADDRNOTAVAIL   = Errno(C.EADDRNOTAVAIL)
    EAFNOSUPPORT    = Errno(C.EAFNOSUPPORT)
    ECONNABORTED    = Errno(C.ECONNABORTED)
    ECONNREFUSED    = Errno(C.ECONNREFUSED)
    ECONNRESET      = Errno(C.ECONNRESET)
    EHOSTUNREACH    = Errno(C.EHOSTUNREACH)
    EINPROGRESS     = Errno(C.EINPROGRESS)
    EMSGSIZE        = Errno(C.EMSGSIZE)
    ENETDOWN        = Errno(C.ENETDOWN)
    ENETRESET       = Errno(C.ENETRESET)
    ENETUNREACH     = Errno(C.ENETUNREACH)
    ENOBUFS         = Errno(C.ENOBUFS)
    ENOTCONN        = Errno(C.ENOTCONN)
    ENOTSOCK        = Errno(C.ENOTSOCK)
    ENOTSUP         = Errno(C.ENOTSUP)
    EPROTONOSUPPORT = Errno(C.EPROTONOSUPPORT)
    ETIMEDOUT       = Errno(C.ETIMEDOUT)

    // Native 0MQ error codes.
    EFSM           = Errno(C.EFSM)
    EMTHREAD       = Errno(C.EMTHREAD)
    ENOCOMPATPROTO = Errno(C.ENOCOMPATPROTO)
    ETERM          = Errno(C.ETERM)
)
```
```
const (
    MaxSocketsDflt = int(C.ZMQ_MAX_SOCKETS_DFLT)
    IoThreadsDflt  = int(C.ZMQ_IO_THREADS_DFLT)
)
```
```
const (
    // Constants for NewSocket()
    // See: http://api.zeromq.org/4-1:zmq-socket#toc3
    REQ    = Type(C.ZMQ_REQ)
    REP    = Type(C.ZMQ_REP)
    DEALER = Type(C.ZMQ_DEALER)
    ROUTER = Type(C.ZMQ_ROUTER)
    PUB    = Type(C.ZMQ_PUB)
    SUB    = Type(C.ZMQ_SUB)
    XPUB   = Type(C.ZMQ_XPUB)
    XSUB   = Type(C.ZMQ_XSUB)
    PUSH   = Type(C.ZMQ_PUSH)
    PULL   = Type(C.ZMQ_PULL)
    PAIR   = Type(C.ZMQ_PAIR)
    STREAM = Type(C.ZMQ_STREAM)
)
```
```
const (
    // Flags for (*Socket)Send(), (*Socket)Recv()
    // For Send, see: http://api.zeromq.org/4-1:zmq-send#toc2
    // For Recv, see: http://api.zeromq.org/4-1:zmq-msg-recv#toc2
    DONTWAIT = Flag(C.ZMQ_DONTWAIT)
    SNDMORE  = Flag(C.ZMQ_SNDMORE)
)
```
```
const (
    // Flags for (*Socket)Monitor() and (*Socket)RecvEvent()
    // See: http://api.zeromq.org/4-3:zmq-socket-monitor#toc3
    EVENT_ALL                        = Event(C.ZMQ_EVENT_ALL)
    EVENT_CONNECTED                  = Event(C.ZMQ_EVENT_CONNECTED)
    EVENT_CONNECT_DELAYED            = Event(C.ZMQ_EVENT_CONNECT_DELAYED)
    EVENT_CONNECT_RETRIED            = Event(C.ZMQ_EVENT_CONNECT_RETRIED)
    EVENT_LISTENING                  = Event(C.ZMQ_EVENT_LISTENING)
    EVENT_BIND_FAILED                = Event(C.ZMQ_EVENT_BIND_FAILED)
    EVENT_ACCEPTED                   = Event(C.ZMQ_EVENT_ACCEPTED)
    EVENT_ACCEPT_FAILED              = Event(C.ZMQ_EVENT_ACCEPT_FAILED)
    EVENT_CLOSED                     = Event(C.ZMQ_EVENT_CLOSED)
    EVENT_CLOSE_FAILED               = Event(C.ZMQ_EVENT_CLOSE_FAILED)
    EVENT_DISCONNECTED               = Event(C.ZMQ_EVENT_DISCONNECTED)
    EVENT_MONITOR_STOPPED            = Event(C.ZMQ_EVENT_MONITOR_STOPPED)
    EVENT_HANDSHAKE_FAILED_NO_DETAIL = Event(C.ZMQ_EVENT_HANDSHAKE_FAILED_NO_DETAIL)
    EVENT_HANDSHAKE_SUCCEEDED        = Event(C.ZMQ_EVENT_HANDSHAKE_SUCCEEDED)
    EVENT_HANDSHAKE_FAILED_PROTOCOL  = Event(C.ZMQ_EVENT_HANDSHAKE_FAILED_PROTOCOL)
    EVENT_HANDSHAKE_FAILED_AUTH      = Event(C.ZMQ_EVENT_HANDSHAKE_FAILED_AUTH)
)
```
```
const (
    // Flags for (*Socket)GetEvents()
    // See: http://api.zeromq.org/4-1:zmq-getsockopt#toc8
    POLLIN  = State(C.ZMQ_POLLIN)
    POLLOUT = State(C.ZMQ_POLLOUT)
)
```
```
const (
    // Constants for (*Socket)GetMechanism()
    // See: http://api.zeromq.org/4-1:zmq-getsockopt#toc22
    NULL   = Mechanism(C.ZMQ_NULL)
    PLAIN  = Mechanism(C.ZMQ_PLAIN)
    CURVE  = Mechanism(C.ZMQ_CURVE)
    GSSAPI = Mechanism(C.ZMQ_GSSAPI)
)
```
```
const CURVE_ALLOW_ANY = "*"
```
### Variables [¶](#pkg-variables)
```
var (
    ErrorContextClosed         = errors.New("Context is closed")
    ErrorSocketClosed          = errors.New("Socket is closed")
    ErrorMoreExpected          = errors.New("More expected")
    ErrorNotImplemented405     = errors.New("Not implemented, requires 0MQ version 4.0.5")
    ErrorNotImplemented41      = errors.New("Not implemented, requires 0MQ version 4.1")
    ErrorNotImplemented42      = errors.New("Not implemented, requires 0MQ version 4.2")
    ErrorNotImplementedWindows = errors.New("Not implemented on Windows")
    ErrorNoSocket              = errors.New("No such socket")
)
```
### Functions [¶](#pkg-functions)
####
func [AuthAllow](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L416) [¶](#AuthAllow)
```
func AuthAllow(domain [string](/builtin#string), addresses ...[string](/builtin#string))
```
Allow (whitelist) some addresses for a domain.
An address can be a single IP address, or an IP address and mask in CIDR notation.
For NULL, all clients from these addresses will be accepted.
For PLAIN and CURVE, they will be allowed to continue with authentication.
You can call this method multiple times to whitelist multiple IP addresses.
If you whitelist a single address for a domain, any non-whitelisted addresses for that domain are treated as blacklisted.
Use domain "*" for all domains.
For backward compatibility: if domain can be parsed as an IP address, it will be interpreted as another address, and it and all remaining addresses will be added to all domains.
####
func [AuthCurveAdd](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L515) [¶](#AuthCurveAdd)
```
func AuthCurveAdd(domain [string](/builtin#string), pubkeys ...[string](/builtin#string))
```
Add public user keys for CURVE authentication for a given domain.
To cover all domains, use "*".
Public keys are in Z85 printable text format.
To allow all client keys without checking, specify CURVE_ALLOW_ANY for the key.
####
func [AuthCurvePublic](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L649) [¶](#AuthCurvePublic)
```
func AuthCurvePublic(z85SecretKey [string](/builtin#string)) (z85PublicKey [string](/builtin#string), err [error](/builtin#error))
```
Helper function to derive z85 public key from secret key
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
####
func [AuthCurveRemove](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L525) [¶](#AuthCurveRemove)
```
func AuthCurveRemove(domain [string](/builtin#string), pubkeys ...[string](/builtin#string))
```
Remove user keys from CURVE authentication for a given domain.
####
func [AuthCurveRemoveAll](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L534) [¶](#AuthCurveRemoveAll)
```
func AuthCurveRemoveAll(domain [string](/builtin#string))
```
Remove all user keys from CURVE authentication for a given domain.
####
func [AuthDeny](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L457) [¶](#AuthDeny)
```
func AuthDeny(domain [string](/builtin#string), addresses ...[string](/builtin#string))
```
Deny (blacklist) some addresses for a domain.
An address can be a single IP address, or an IP address and mask in CIDR notation.
For all security mechanisms, this rejects the connection without any further authentication.
Use either a whitelist for a domain, or a blacklist for a domain, not both.
If you define both a whitelist and a blacklist for a domain, only the whitelist takes effect.
Use domain "*" for all domains.
For backward compatibility: if domain can be parsed as an IP address, it will be interpreted as another address, and it and all remaining addresses will be added to all domains.
####
func [AuthMetaBlob](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L572) [¶](#AuthMetaBlob)
```
func AuthMetaBlob(key, value [string](/builtin#string)) (blob [][byte](/builtin#byte), err [error](/builtin#error))
```
This encodes a key/value pair into the format used by a ZAP handler.
Returns an error if key is more than 255 characters long.
####
func [AuthPlainAdd](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L487) [¶](#AuthPlainAdd)
```
func AuthPlainAdd(domain, username, password [string](/builtin#string))
```
Add a user for PLAIN authentication for a given domain.
Set `domain` to "*" to apply to all domains.
####
func [AuthPlainRemove](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L495) [¶](#AuthPlainRemove)
```
func AuthPlainRemove(domain [string](/builtin#string), usernames ...[string](/builtin#string))
```
Remove users from PLAIN authentication for a given domain.
####
func [AuthPlainRemoveAll](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L504) [¶](#AuthPlainRemoveAll)
```
func AuthPlainRemoveAll(domain [string](/builtin#string))
```
Remove all users from PLAIN authentication for a given domain.
####
func [AuthSetMetadataHandler](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L561) [¶](#AuthSetMetadataHandler)
```
func AuthSetMetadataHandler(
handler func(
version, request_id, domain, address, identity, mechanism string, credentials ...string) (metadata map[string]string))
```
This function sets the metadata handler that is called by the ZAP handler to retrieve key/value properties that should be set on reply messages in case of a status code "200" (success).
Default properties are `Socket-Type`, which is already set, and
`Identity` and `User-Id` that are empty by default. The last two can be set, and more properties can be added.
The `User-Id` property is used for the `user id` frame of the reply message. All other properties are stored in the `metadata` frame of the reply message.
The default handler returns an empty map.
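A rough sketch of a custom handler (the property values below are invented purely for illustration):
```
zmq.AuthSetMetadataHandler(
    func(version, request_id, domain, address, identity, mechanism string, credentials ...string) map[string]string {
        // Hypothetical values: report a fixed Identity and derive User-Id from the peer address.
        return map[string]string{
            "Identity": "my-service",
            "User-Id":  "user@" + address,
        }
    })
```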
For the meaning of the handler arguments, and other details, see:
<http://rfc.zeromq.org/spec:27#toc10>
####
func [AuthSetVerbose](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L539) [¶](#AuthSetVerbose)
```
func AuthSetVerbose(verbose [bool](/builtin#bool))
```
Enable verbose tracing of commands and activity.
####
func [AuthStart](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L324) [¶](#AuthStart)
```
func AuthStart() (err [error](/builtin#error))
```
Start authentication.
Note that until you add policies, all incoming NULL connections are allowed
(classic ZeroMQ behaviour), and all PLAIN and CURVE connections are denied.
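A minimal sketch of starting the authenticator and adding some policies (the domain name and credentials are placeholders):
```
if err := zmq.AuthStart(); err != nil {
    // handle error
}
defer zmq.AuthStop()

// Only accept clients from these addresses for the "global" domain...
zmq.AuthAllow("global", "127.0.0.1", "192.168.0.0/16")

// ...and require a PLAIN username/password for that domain.
zmq.AuthPlainAdd("global", "admin", "secret")
```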
####
func [AuthStop](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L368) [¶](#AuthStop)
```
func AuthStop()
```
Stop authentication.
####
func [Error](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L173) [¶](#Error)
```
func Error(e [int](/builtin#int)) [string](/builtin#string)
```
Get 0MQ error message string.
####
func [GetBlocky](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L347) [¶](#GetBlocky)
```
func GetBlocky() ([bool](/builtin#bool), [error](/builtin#error))
```
Returns the blocky setting in the default context.
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
####
func [GetIoThreads](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L264) [¶](#GetIoThreads)
```
func GetIoThreads() ([int](/builtin#int), [error](/builtin#error))
```
Returns the size of the 0MQ thread pool in the default context.
####
func [GetIpv6](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L323) [¶](#GetIpv6)
```
func GetIpv6() ([bool](/builtin#bool), [error](/builtin#error))
```
Returns the IPv6 option in the default context.
####
func [GetMaxMsgsz](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L300) [¶](#GetMaxMsgsz)
```
func GetMaxMsgsz() ([int](/builtin#int), [error](/builtin#error))
```
Returns the maximum message size in the default context.
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
####
func [GetMaxSockets](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L280) [¶](#GetMaxSockets)
```
func GetMaxSockets() ([int](/builtin#int), [error](/builtin#error))
```
Returns the maximum number of sockets allowed in the default context.
####
func [GetRetryAfterEINTR](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L376) [¶](#GetRetryAfterEINTR)
added in v1.1.0
```
func GetRetryAfterEINTR() [bool](/builtin#bool)
```
Returns the retry after EINTR setting in the default context.
####
func [HasCurve](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1523) [¶](#HasCurve)
```
func HasCurve() [bool](/builtin#bool)
```
Returns false for ZeroMQ version < 4.1.0
Else: returns true if the library supports the CURVE security mechanism
####
func [HasGssapi](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1530) [¶](#HasGssapi)
```
func HasGssapi() [bool](/builtin#bool)
```
Returns false for ZeroMQ version < 4.1.0
Else: returns true if the library supports the GSSAPI security mechanism
####
func [HasIpc](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1495) [¶](#HasIpc)
```
func HasIpc() [bool](/builtin#bool)
```
Returns false for ZeroMQ version < 4.1.0
Else: returns true if the library supports the ipc:// protocol
####
func [HasNorm](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1516) [¶](#HasNorm)
```
func HasNorm() [bool](/builtin#bool)
```
Returns false for ZeroMQ version < 4.1.0
Else: returns true if the library supports the norm:// protocol
####
func [HasPgm](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1502) [¶](#HasPgm)
```
func HasPgm() [bool](/builtin#bool)
```
Returns false for ZeroMQ version < 4.1.0
Else: returns true if the library supports the pgm:// protocol
####
func [HasTipc](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1509) [¶](#HasTipc)
```
func HasTipc() [bool](/builtin#bool)
```
Returns false for ZeroMQ version < 4.1.0
Else: returns true if the library supports the tipc:// protocol
####
func [NewCurveKeypair](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1402) [¶](#NewCurveKeypair)
```
func NewCurveKeypair() (z85_public_key, z85_secret_key [string](/builtin#string), err [error](/builtin#error))
```
Generate a new CURVE keypair
See: <http://api.zeromq.org/4-1:zmq-curve-keypair#toc2>
####
func [Proxy](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1298) [¶](#Proxy)
```
func Proxy(frontend, backend, capture *[Socket](#Socket)) [error](/builtin#error)
```
Start built-in ØMQ proxy
See: <http://api.zeromq.org/4-1:zmq-proxy#toc2>
####
func [ProxySteerable](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1323) [¶](#ProxySteerable)
```
func ProxySteerable(frontend, backend, capture, control *[Socket](#Socket)) [error](/builtin#error)
```
Start built-in ØMQ proxy with PAUSE/RESUME/TERMINATE control flow
Returns ErrorNotImplemented405 with ZeroMQ version < 4.0.5
See: <http://api.zeromq.org/4-1:zmq-proxy-steerable#toc2>
####
func [SetBlocky](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L575) [¶](#SetBlocky)
```
func SetBlocky(i [bool](/builtin#bool)) [error](/builtin#error)
```
Sets the blocky behavior in the default context.
See: <http://api.zeromq.org/4-2:zmq-ctx-set#toc3>
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
####
func [SetIoThreads](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L406) [¶](#SetIoThreads)
```
func SetIoThreads(n [int](/builtin#int)) [error](/builtin#error)
```
Specifies the size of the 0MQ thread pool to handle I/O operations in the default context. If your application is using only the inproc transport for messaging you may set this to zero, otherwise set it to at least one. This option only applies before creating any sockets.
Default value: 1
####
func [SetIpv6](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L543) [¶](#SetIpv6)
```
func SetIpv6(i [bool](/builtin#bool)) [error](/builtin#error)
```
Sets the IPv6 value for all sockets created in the default context from this point onwards.
A value of true means IPv6 is enabled, while false means the socket will use only IPv4.
When IPv6 is enabled, a socket will connect to, or accept connections from, both IPv4 and IPv6 hosts.
Default value: false
####
func [SetMaxMsgsz](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L488) [¶](#SetMaxMsgsz)
```
func SetMaxMsgsz(n [int](/builtin#int)) [error](/builtin#error)
```
Set maximum message size in the default context.
Default value: INT_MAX
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
####
func [SetMaxSockets](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L517) [¶](#SetMaxSockets)
```
func SetMaxSockets(n [int](/builtin#int)) [error](/builtin#error)
```
Sets the maximum number of sockets allowed in the default context.
Default value: 1024
####
func [SetRetryAfterEINTR](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L608) [¶](#SetRetryAfterEINTR)
added in v1.1.0
```
func SetRetryAfterEINTR(retry [bool](/builtin#bool))
```
Sets the retry after EINTR setting in the default context.
Initial value is true.
####
func [SetThreadPriority](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L471) [¶](#SetThreadPriority)
```
func SetThreadPriority(n [int](/builtin#int)) [error](/builtin#error)
```
Sets scheduling priority for default context’s thread pool.
This option requires ZeroMQ version 4.1, and is not available on Windows.
Supported values for this option depend on chosen scheduling policy.
Details can be found in sched.h file, or at
<http://man7.org/linux/man-pages/man2/sched_setscheduler.2.html>
This option only applies before creating any sockets on the context.
Default value: -1
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
Returns ErrorNotImplementedWindows on Windows
####
func [SetThreadSchedPolicy](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L444) [¶](#SetThreadSchedPolicy)
```
func SetThreadSchedPolicy(n [int](/builtin#int)) [error](/builtin#error)
```
Sets the scheduling policy for default context’s thread pool.
This option requires ZeroMQ version 4.1, and is not available on Windows.
Supported values for this option can be found in sched.h file, or at
<http://man7.org/linux/man-pages/man2/sched_setscheduler.2.html>
This option only applies before creating any sockets on the context.
Default value: -1
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
Returns ErrorNotImplementedWindows on Windows
####
func [Term](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L218) [¶](#Term)
```
func Term() [error](/builtin#error)
```
Terminates the default context.
For linger behavior, see: <http://api.zeromq.org/4-1:zmq-ctx-term>
####
func [Version](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L163) [¶](#Version)
```
func Version() (major, minor, patch [int](/builtin#int))
```
Report 0MQ library version.
####
func [Z85decode](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1381) [¶](#Z85decode)
```
func Z85decode(s [string](/builtin#string)) [string](/builtin#string)
```
Decode a binary key from Z85 printable text
See: <http://api.zeromq.org/4-1:zmq-z85-decode>
####
func [Z85encode](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1358) [¶](#Z85encode)
```
func Z85encode(data [string](/builtin#string)) [string](/builtin#string)
```
Encode a binary key as Z85 printable text
See: <http://api.zeromq.org/4-1:zmq-z85-encode>
### Types [¶](#pkg-types)
####
type [Context](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L187) [¶](#Context)
```
type Context struct {
// contains filtered or unexported fields
}
```
A context that is not the default context.
####
func [NewContext](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L195) [¶](#NewContext)
```
func NewContext() (ctx *[Context](#Context), err [error](/builtin#error))
```
Create a new context.
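Example (a minimal sketch, not from the package documentation):
```
ctx, err := zmq.NewContext()
if err != nil {
	log.Fatalln(err)
}
defer ctx.Term()

sock, err := ctx.NewSocket(zmq.REP)
if err != nil {
	log.Fatalln(err)
}
defer sock.Close()
```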
####
func (*Context) [GetBlocky](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L362) [¶](#Context.GetBlocky)
```
func (ctx *[Context](#Context)) GetBlocky() ([bool](/builtin#bool), [error](/builtin#error))
```
Returns the blocky setting.
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
####
func (*Context) [GetIoThreads](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L275) [¶](#Context.GetIoThreads)
```
func (ctx *[Context](#Context)) GetIoThreads() ([int](/builtin#int), [error](/builtin#error))
```
Returns the size of the 0MQ thread pool.
####
func (*Context) [GetIpv6](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L334) [¶](#Context.GetIpv6)
```
func (ctx *[Context](#Context)) GetIpv6() ([bool](/builtin#bool), [error](/builtin#error))
```
Returns the IPv6 option.
####
func (*Context) [GetMaxMsgsz](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L315) [¶](#Context.GetMaxMsgsz)
```
func (ctx *[Context](#Context)) GetMaxMsgsz() ([int](/builtin#int), [error](/builtin#error))
```
Returns the maximum message size.
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
####
func (*Context) [GetMaxSockets](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L291) [¶](#Context.GetMaxSockets)
```
func (ctx *[Context](#Context)) GetMaxSockets() ([int](/builtin#int), [error](/builtin#error))
```
Returns the maximum number of sockets allowed.
####
func (*Context) [GetRetryAfterEINTR](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L383) [¶](#Context.GetRetryAfterEINTR)
added in v1.1.0
```
func (ctx *[Context](#Context)) GetRetryAfterEINTR() [bool](/builtin#bool)
```
Returns the retry after EINTR setting.
####
func (*Context) [NewSocket](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L902) [¶](#Context.NewSocket)
```
func (ctx *[Context](#Context)) NewSocket(t [Type](#Type)) (soc *[Socket](#Socket), err [error](/builtin#error))
```
Create 0MQ socket in the given context.
WARNING:
The Socket is not thread safe. This means that you cannot access the same Socket from different goroutines without using something like a mutex.
For a description of socket types, see: <http://api.zeromq.org/4-1:zmq-socket#toc3>
####
func (*Context) [SetBlocky](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L592) [¶](#Context.SetBlocky)
```
func (ctx *[Context](#Context)) SetBlocky(i [bool](/builtin#bool)) [error](/builtin#error)
```
Sets the blocky behavior.
See: <http://api.zeromq.org/4-2:zmq-ctx-set#toc3>
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
####
func (*Context) [SetIoThreads](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L424) [¶](#Context.SetIoThreads)
```
func (ctx *[Context](#Context)) SetIoThreads(n [int](/builtin#int)) [error](/builtin#error)
```
Specifies the size of the 0MQ thread pool to handle I/O operations. If your application is using only the inproc transport for messaging you may set this to zero, otherwise set it to at least one. This option only applies before creating any sockets.
Default value: 1
####
func (*Context) [SetIpv6](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L560) [¶](#Context.SetIpv6)
```
func (ctx *[Context](#Context)) SetIpv6(i [bool](/builtin#bool)) [error](/builtin#error)
```
Sets the IPv6 value for all sockets created in the context from this point onwards.
A value of true means IPv6 is enabled, while false means the socket will use only IPv4.
When IPv6 is enabled, a socket will connect to, or accept connections from, both IPv4 and IPv6 hosts.
Default value: false
####
func (*Context) [SetMaxMsgsz](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L505) [¶](#Context.SetMaxMsgsz)
```
func (ctx *[Context](#Context)) SetMaxMsgsz(n [int](/builtin#int)) [error](/builtin#error)
```
Set maximum message size.
Default value: INT_MAX
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
####
func (*Context) [SetMaxSockets](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L532) [¶](#Context.SetMaxSockets)
```
func (ctx *[Context](#Context)) SetMaxSockets(n [int](/builtin#int)) [error](/builtin#error)
```
Sets the maximum number of sockets allowed.
Default value: 1024
####
func (*Context) [SetRetryAfterEINTR](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L617) [¶](#Context.SetRetryAfterEINTR)
added in v1.1.0
```
func (ctx *[Context](#Context)) SetRetryAfterEINTR(retry [bool](/builtin#bool))
```
Sets the retry after EINTR setting.
Initial value is true.
####
func (*Context) [SetThreadPriority](https://github.com/pebbe/zmq4/blob/v1.2.10/ctxoptions_unix.go#L51) [¶](#Context.SetThreadPriority)
```
func (ctx *[Context](#Context)) SetThreadPriority(n [int](/builtin#int)) [error](/builtin#error)
```
Sets scheduling priority for internal context’s thread pool.
This option requires ZeroMQ version 4.1, and is not available on Windows.
Supported values for this option depend on chosen scheduling policy.
Details can be found in sched.h file, or at
<http://man7.org/linux/man-pages/man2/sched_setscheduler.2.html>
This option only applies before creating any sockets on the context.
Default value: -1
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
Returns ErrorNotImplementedWindows on Windows
####
func (*Context) [SetThreadSchedPolicy](https://github.com/pebbe/zmq4/blob/v1.2.10/ctxoptions_unix.go#L27) [¶](#Context.SetThreadSchedPolicy)
```
func (ctx *[Context](#Context)) SetThreadSchedPolicy(n [int](/builtin#int)) [error](/builtin#error)
```
Sets the scheduling policy for internal context’s thread pool.
This option requires ZeroMQ version 4.1, and is not available on Windows.
Supported values for this option can be found in sched.h file, or at
<http://man7.org/linux/man-pages/man2/sched_setscheduler.2.html>
This option only applies before creating any sockets on the context.
Default value: -1
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
Returns ErrorNotImplementedWindows on Windows
####
func (*Context) [Term](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L233) [¶](#Context.Term)
```
func (ctx *[Context](#Context)) Term() [error](/builtin#error)
```
Terminates the context.
For linger behavior, see: <http://api.zeromq.org/4-1:zmq-ctx-term>
####
type [Errno](https://github.com/pebbe/zmq4/blob/v1.2.10/errors.go#L15) [¶](#Errno)
```
type Errno [uintptr](/builtin#uintptr)
```
An Errno is an unsigned number describing an error condition as returned by a call to ZeroMQ.
It implements the error interface.
The number is either a standard system error, or an error defined by the C library of ZeroMQ.
####
func [AsErrno](https://github.com/pebbe/zmq4/blob/v1.2.10/errors.go#L84) [¶](#AsErrno)
```
func AsErrno(err [error](/builtin#error)) [Errno](#Errno)
```
Convert error to Errno.
Example usage:
```
switch AsErrno(err) {
case zmq.Errno(syscall.EINTR):
	// standard system error
	// call was interrupted
case zmq.ETERM:
	// error defined by ZeroMQ
	// context was terminated
}
```
See also: examples/interrupt.go
####
func (Errno) [Error](https://github.com/pebbe/zmq4/blob/v1.2.10/errors.go#L56) [¶](#Errno.Error)
```
func (errno [Errno](#Errno)) Error() [string](/builtin#string)
```
Return Errno as string.
####
type [Event](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L705) [¶](#Event)
```
type Event [int](/builtin#int)
```
Used by (*Socket)Monitor() and (*Socket)RecvEvent()
####
func (Event) [String](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L731) [¶](#Event.String)
```
func (e [Event](#Event)) String() [string](/builtin#string)
```
Socket event as string.
####
type [Flag](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L677) [¶](#Flag)
```
type Flag [int](/builtin#int)
```
Used by (*Socket)Send() and (*Socket)Recv()
####
func (Flag) [String](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L690) [¶](#Flag.String)
```
func (f [Flag](#Flag)) String() [string](/builtin#string)
```
Socket flag as string.
####
type [Mechanism](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L817) [¶](#Mechanism)
```
type Mechanism [int](/builtin#int)
```
Specifies the security mechanism, used by (*Socket)GetMechanism()
####
func (Mechanism) [String](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L831) [¶](#Mechanism.String)
```
func (m [Mechanism](#Mechanism)) String() [string](/builtin#string)
```
Security mechanism as string.
####
type [Polled](https://github.com/pebbe/zmq4/blob/v1.2.10/polling.go#L15) [¶](#Polled)
```
type Polled struct {
Socket *[Socket](#Socket) // socket with matched event(s)
Events [State](#State) // actual matched event(s)
}
```
Return type for (*Poller)Poll
####
type [Poller](https://github.com/pebbe/zmq4/blob/v1.2.10/polling.go#L20) [¶](#Poller)
```
type Poller struct {
// contains filtered or unexported fields
}
```
####
func [NewPoller](https://github.com/pebbe/zmq4/blob/v1.2.10/polling.go#L26) [¶](#NewPoller)
```
func NewPoller() *[Poller](#Poller)
```
Create a new Poller
####
func (*Poller) [Add](https://github.com/pebbe/zmq4/blob/v1.2.10/polling.go#L39) [¶](#Poller.Add)
```
func (p *[Poller](#Poller)) Add(soc *[Socket](#Socket), events [State](#State)) [int](/builtin#int)
```
Add items to the poller
Events is a bitwise OR of zmq.POLLIN and zmq.POLLOUT
Returns the id of the item, which can be used as a handle to
(*Poller)Update and as an index into the result of (*Poller)PollAll
####
func (*Poller) [Poll](https://github.com/pebbe/zmq4/blob/v1.2.10/polling.go#L135) [¶](#Poller.Poll)
```
func (p *[Poller](#Poller)) Poll(timeout [time](/time).[Duration](/time#Duration)) ([][Polled](#Polled), [error](/builtin#error))
```
Input/output multiplexing
If timeout < 0, wait forever until a matching event is detected
Only sockets with matching socket events are returned in the list.
Example:
```
poller := zmq.NewPoller()
poller.Add(socket0, zmq.POLLIN)
poller.Add(socket1, zmq.POLLIN)
// Process messages from both sockets
for {
	sockets, _ := poller.Poll(-1)
	for _, socket := range sockets {
		switch s := socket.Socket; s {
		case socket0:
			msg, _ := s.Recv(0)
			// Process msg
		case socket1:
			msg, _ := s.Recv(0)
			// Process msg
		}
	}
}
```
####
func (*Poller) [PollAll](https://github.com/pebbe/zmq4/blob/v1.2.10/polling.go#L149) [¶](#Poller.PollAll)
```
func (p *[Poller](#Poller)) PollAll(timeout [time](/time).[Duration](/time#Duration)) ([][Polled](#Polled), [error](/builtin#error))
```
This is like (*Poller)Poll, but it returns a list of all sockets,
in the same order as they were added to the poller,
not just those sockets that had an event.
For each socket in the list, you have to check the Events field to see if there was actually an event.
When error is not nil, the return list contains no sockets.
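Example (a minimal sketch, not from the package documentation; socket0 and socket1 are sockets that were added to the poller, in that order):
```
polled, err := poller.PollAll(time.Second)
if err == nil {
	// polled[0] corresponds to socket0, polled[1] to socket1
	for _, p := range polled {
		if p.Events&zmq.POLLIN != 0 {
			msg, _ := p.Socket.Recv(0)
			// Process msg
			_ = msg
		}
	}
}
```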
####
func (*Poller) [Remove](https://github.com/pebbe/zmq4/blob/v1.2.10/polling.go#L82) [¶](#Poller.Remove)
```
func (p *[Poller](#Poller)) Remove(id [int](/builtin#int)) [error](/builtin#error)
```
Remove a socket from the poller
Returns ErrorNoSocket if the id was out of range
####
func (*Poller) [RemoveBySocket](https://github.com/pebbe/zmq4/blob/v1.2.10/polling.go#L99) [¶](#Poller.RemoveBySocket)
```
func (p *[Poller](#Poller)) RemoveBySocket(soc *[Socket](#Socket)) [error](/builtin#error)
```
Remove a socket from the poller
Returns ErrorNoSocket if the socket didn't match
####
func (*Poller) [String](https://github.com/pebbe/zmq4/blob/v1.2.10/polling.go#L196) [¶](#Poller.String)
```
func (p *[Poller](#Poller)) String() [string](/builtin#string)
```
Poller as string.
####
func (*Poller) [Update](https://github.com/pebbe/zmq4/blob/v1.2.10/polling.go#L54) [¶](#Poller.Update)
```
func (p *[Poller](#Poller)) Update(id [int](/builtin#int), events [State](#State)) (previous [State](#State), err [error](/builtin#error))
```
Update the events mask of a socket in the poller
Replaces the Poller's bitmask for the specified id with the events parameter passed
Returns the previous value, or ErrorNoSocket if the id was out of range
####
func (*Poller) [UpdateBySocket](https://github.com/pebbe/zmq4/blob/v1.2.10/polling.go#L68) [¶](#Poller.UpdateBySocket)
```
func (p *[Poller](#Poller)) UpdateBySocket(soc *[Socket](#Socket), events [State](#State)) (previous [State](#State), err [error](/builtin#error))
```
Update the events mask of a socket in the poller
Replaces the Poller's bitmask for the specified socket with the events parameter passed
Returns the previous value, or ErrorNoSocket if the socket didn't match
####
type [Reactor](https://github.com/pebbe/zmq4/blob/v1.2.10/reactor.go#L20) [¶](#Reactor)
```
type Reactor struct {
// contains filtered or unexported fields
}
```
####
func [NewReactor](https://github.com/pebbe/zmq4/blob/v1.2.10/reactor.go#L46) [¶](#NewReactor)
```
func NewReactor() *[Reactor](#Reactor)
```
Create a reactor to mix the handling of sockets and channels (timers or other channels).
Example:
```
reactor := zmq.NewReactor()
reactor.AddSocket(socket1, zmq.POLLIN, socket1_handler)
reactor.AddSocket(socket2, zmq.POLLIN, socket2_handler)
reactor.AddChannelTime(time.Tick(time.Second), 1, ticker_handler)
reactor.Run(time.Second)
```
Warning:
Problems with the reactor have shown up with Go 1.14 (and later), such as data races and lock-ups. Using SetRetryAfterEINTR seems to be an effective fix, but at the moment there is no guarantee.
####
func (*Reactor) [AddChannel](https://github.com/pebbe/zmq4/blob/v1.2.10/reactor.go#L87) [¶](#Reactor.AddChannel)
```
func (r *[Reactor](#Reactor)) AddChannel(ch <-chan interface{}, limit [int](/builtin#int), handler func(interface{}) [error](/builtin#error)) (id [uint64](/builtin#uint64))
```
Add channel handler to the reactor.
Returns id of added handler, that can be used later to remove it.
If limit is positive, at most this many items will be handled in each run through the main loop,
otherwise it will process as many items as possible.
The handler function receives the value received from the channel.
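Example (a minimal sketch, not from the package documentation):
```
events := make(chan interface{})
id := reactor.AddChannel(events, 1, func(v interface{}) error {
	log.Println("received:", v)
	return nil
})
// ... later, when the channel is no longer needed:
reactor.RemoveChannel(id)
```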
####
func (*Reactor) [AddChannelTime](https://github.com/pebbe/zmq4/blob/v1.2.10/reactor.go#L95) [¶](#Reactor.AddChannelTime)
```
func (r *[Reactor](#Reactor)) AddChannelTime(ch <-chan [time](/time).[Time](/time#Time), limit [int](/builtin#int), handler func(interface{}) [error](/builtin#error)) (id [uint64](/builtin#uint64))
```
This function wraps AddChannel, using a channel of type time.Time instead of type interface{}.
####
func (*Reactor) [AddSocket](https://github.com/pebbe/zmq4/blob/v1.2.10/reactor.go#L61) [¶](#Reactor.AddSocket)
```
func (r *[Reactor](#Reactor)) AddSocket(soc *[Socket](#Socket), events [State](#State), handler func([State](#State)) [error](/builtin#error))
```
Add socket handler to the reactor.
You can have only one handler per socket. Adding a second one will remove the first.
The handler receives the socket state as an argument: POLLIN, POLLOUT, or both.
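Example (a minimal sketch, not from the package documentation, showing the handler signature):
```
reactor.AddSocket(socket, zmq.POLLIN, func(state zmq.State) error {
	msg, err := socket.Recv(0)
	if err != nil {
		return err // returning an error makes (*Reactor)Run exit with that error
	}
	log.Println("received:", msg)
	return nil
})
```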
####
func (*Reactor) [RemoveChannel](https://github.com/pebbe/zmq4/blob/v1.2.10/reactor.go#L113) [¶](#Reactor.RemoveChannel)
```
func (r *[Reactor](#Reactor)) RemoveChannel(id [uint64](/builtin#uint64))
```
Remove a channel from the reactor.
Closed channels are removed automatically.
####
func (*Reactor) [RemoveSocket](https://github.com/pebbe/zmq4/blob/v1.2.10/reactor.go#L68) [¶](#Reactor.RemoveSocket)
```
func (r *[Reactor](#Reactor)) RemoveSocket(soc *[Socket](#Socket))
```
Remove a socket handler from the reactor.
####
func (*Reactor) [Run](https://github.com/pebbe/zmq4/blob/v1.2.10/reactor.go#L132) [¶](#Reactor.Run)
```
func (r *[Reactor](#Reactor)) Run(interval [time](/time).[Duration](/time#Duration)) (err [error](/builtin#error))
```
Run the reactor.
The interval determines the time-out on the polling of sockets.
Interval must be positive if there are channels.
If there are no channels, you can set interval to -1.
The run alternates between polling/handling sockets (using the interval as timeout),
and reading/handling channels. The reading of channels is without time-out: if there is no activity on any channel, the run continues to poll sockets immediately.
The run exits when any handler returns an error, returning that same error.
####
func (*Reactor) [SetVerbose](https://github.com/pebbe/zmq4/blob/v1.2.10/reactor.go#L117) [¶](#Reactor.SetVerbose)
```
func (r *[Reactor](#Reactor)) SetVerbose(verbose [bool](/builtin#bool))
```
####
type [Socket](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L849) [¶](#Socket)
```
type Socket struct {
// contains filtered or unexported fields
}
```
Socket functions starting with `Set` or `Get` are used for setting and getting socket options.
####
func [NewSocket](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L883) [¶](#NewSocket)
```
func NewSocket(t [Type](#Type)) (soc *[Socket](#Socket), err [error](/builtin#error))
```
Create 0MQ socket in the default context.
WARNING:
The Socket is not thread safe. This means that you cannot access the same Socket from different goroutines without using something like a mutex.
For a description of socket types, see: <http://api.zeromq.org/4-1:zmq-socket#toc3>
####
func (*Socket) [Bind](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L961) [¶](#Socket.Bind)
```
func (soc *[Socket](#Socket)) Bind(endpoint [string](/builtin#string)) [error](/builtin#error)
```
Accept incoming connections on a socket.
For a description of endpoint, see: <http://api.zeromq.org/4-1:zmq-bind#toc2>
####
func (*Socket) [ClientAuthCurve](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L635) [¶](#Socket.ClientAuthCurve)
```
func (client *[Socket](#Socket)) ClientAuthCurve(server_public_key, client_public_key, client_secret_key [string](/builtin#string)) [error](/builtin#error)
```
Set CURVE client role.
####
func (*Socket) [ClientAuthPlain](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L626) [¶](#Socket.ClientAuthPlain)
```
func (client *[Socket](#Socket)) ClientAuthPlain(username, password [string](/builtin#string)) [error](/builtin#error)
```
Set PLAIN client role.
####
func (*Socket) [Close](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L928) [¶](#Socket.Close)
```
func (soc *[Socket](#Socket)) Close() [error](/builtin#error)
```
If not called explicitly, the socket will be closed on garbage collection
####
func (*Socket) [Connect](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1011) [¶](#Socket.Connect)
```
func (soc *[Socket](#Socket)) Connect(endpoint [string](/builtin#string)) [error](/builtin#error)
```
Create outgoing connection from socket.
For a description of endpoint, see: <http://api.zeromq.org/4-1:zmq-connect#toc2>
####
func (*Socket) [Context](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L949) [¶](#Socket.Context)
```
func (soc *[Socket](#Socket)) Context() (*[Context](#Context), [error](/builtin#error))
```
Return the context associated with a socket
####
func (*Socket) [Disconnect](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1036) [¶](#Socket.Disconnect)
```
func (soc *[Socket](#Socket)) Disconnect(endpoint [string](/builtin#string)) [error](/builtin#error)
```
Disconnect a socket.
For a description of endpoint, see: <http://api.zeromq.org/4-1:zmq-disconnect#toc2>
####
func (*Socket) [GetAffinity](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L149) [¶](#Socket.GetAffinity)
```
func (soc *[Socket](#Socket)) GetAffinity() ([uint64](/builtin#uint64), [error](/builtin#error))
```
ZMQ_AFFINITY: Retrieve I/O thread affinity
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc3>
####
func (*Socket) [GetBacklog](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L226) [¶](#Socket.GetBacklog)
```
func (soc *[Socket](#Socket)) GetBacklog() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_BACKLOG: Retrieve maximum length of the queue of outstanding connections
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc4>
####
func (*Socket) [GetConnectTimeout](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L583) [¶](#Socket.GetConnectTimeout)
```
func (soc *[Socket](#Socket)) GetConnectTimeout() ([time](/time).[Duration](/time#Duration), [error](/builtin#error))
```
ZMQ_CONNECT_TIMEOUT: Retrieve connect() timeout
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-getsockopt#toc5>
####
func (*Socket) [GetCurvePublickeyRaw](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L372) [¶](#Socket.GetCurvePublickeyRaw)
```
func (soc *[Socket](#Socket)) GetCurvePublickeyRaw() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_CURVE_PUBLICKEY: Retrieve current CURVE public key
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc5>
####
func (*Socket) [GetCurvePublickeykeyZ85](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L379) [¶](#Socket.GetCurvePublickeykeyZ85)
```
func (soc *[Socket](#Socket)) GetCurvePublickeykeyZ85() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_CURVE_PUBLICKEY: Retrieve current CURVE public key
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc5>
####
func (*Socket) [GetCurveSecretkeyRaw](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L386) [¶](#Socket.GetCurveSecretkeyRaw)
```
func (soc *[Socket](#Socket)) GetCurveSecretkeyRaw() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_CURVE_SECRETKEY: Retrieve current CURVE secret key
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc6>
####
func (*Socket) [GetCurveSecretkeyZ85](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L393) [¶](#Socket.GetCurveSecretkeyZ85)
```
func (soc *[Socket](#Socket)) GetCurveSecretkeyZ85() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_CURVE_SECRETKEY: Retrieve current CURVE secret key
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc6>
####
func (*Socket) [GetCurveServerkeyRaw](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L400) [¶](#Socket.GetCurveServerkeyRaw)
```
func (soc *[Socket](#Socket)) GetCurveServerkeyRaw() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_CURVE_SERVERKEY: Retrieve current CURVE server key
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc7>
####
func (*Socket) [GetCurveServerkeyZ85](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L407) [¶](#Socket.GetCurveServerkeyZ85)
```
func (soc *[Socket](#Socket)) GetCurveServerkeyZ85() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_CURVE_SERVERKEY: Retrieve current CURVE server key
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc7>
####
func (*Socket) [GetEvents](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L292) [¶](#Socket.GetEvents)
```
func (soc *[Socket](#Socket)) GetEvents() ([State](#State), [error](/builtin#error))
```
ZMQ_EVENTS: Retrieve socket event state
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc8>
####
func (*Socket) [GetFd](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget_unix.go#L13) [¶](#Socket.GetFd)
```
func (soc *[Socket](#Socket)) GetFd() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_FD: Retrieve file descriptor associated with the socket
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc9>
####
func (*Socket) [GetGssapiPlaintext](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L500) [¶](#Socket.GetGssapiPlaintext)
```
func (soc *[Socket](#Socket)) GetGssapiPlaintext() ([bool](/builtin#bool), [error](/builtin#error))
```
ZMQ_GSSAPI_PLAINTEXT: Retrieve GSSAPI plaintext or encrypted status
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc10>
####
func (*Socket) [GetGssapiPrincipal](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L476) [¶](#Socket.GetGssapiPrincipal)
```
func (soc *[Socket](#Socket)) GetGssapiPrincipal() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_GSSAPI_PRINCIPAL: Retrieve the name of the GSSAPI principal
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc11>
####
func (*Socket) [GetGssapiServer](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L463) [¶](#Socket.GetGssapiServer)
```
func (soc *[Socket](#Socket)) GetGssapiServer() ([bool](/builtin#bool), [error](/builtin#error))
```
ZMQ_GSSAPI_SERVER: Retrieve current GSSAPI server role
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc12>
####
func (*Socket) [GetGssapiServicePrincipal](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L488) [¶](#Socket.GetGssapiServicePrincipal)
```
func (soc *[Socket](#Socket)) GetGssapiServicePrincipal() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_GSSAPI_SERVICE_PRINCIPAL: Retrieve the name of the GSSAPI service principal
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc13>
####
func (*Socket) [GetHandshakeIvl](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L513) [¶](#Socket.GetHandshakeIvl)
```
func (soc *[Socket](#Socket)) GetHandshakeIvl() ([time](/time).[Duration](/time#Duration), [error](/builtin#error))
```
ZMQ_HANDSHAKE_IVL: Retrieve maximum handshake interval
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc14>
####
func (*Socket) [GetIdentity](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L156) [¶](#Socket.GetIdentity)
```
func (soc *[Socket](#Socket)) GetIdentity() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_IDENTITY: Retrieve socket identity
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc15>
####
func (*Socket) [GetImmediate](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L281) [¶](#Socket.GetImmediate)
```
func (soc *[Socket](#Socket)) GetImmediate() ([bool](/builtin#bool), [error](/builtin#error))
```
ZMQ_IMMEDIATE: Retrieve attach-on-connect value
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc16>
####
func (*Socket) [GetInvertMatching](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L571) [¶](#Socket.GetInvertMatching)
```
func (soc *[Socket](#Socket)) GetInvertMatching() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_INVERT_MATCHING: Retrieve inverted filtering status
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-getsockopt#toc18>
####
func (*Socket) [GetIpv6](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L273) [¶](#Socket.GetIpv6)
```
func (soc *[Socket](#Socket)) GetIpv6() ([bool](/builtin#bool), [error](/builtin#error))
```
ZMQ_IPV6: Retrieve IPv6 socket status
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc18>
####
func (*Socket) [GetLastEndpoint](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L300) [¶](#Socket.GetLastEndpoint)
```
func (soc *[Socket](#Socket)) GetLastEndpoint() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_LAST_ENDPOINT: Retrieve the last endpoint set
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc19>
####
func (*Socket) [GetLinger](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L194) [¶](#Socket.GetLinger)
```
func (soc *[Socket](#Socket)) GetLinger() ([time](/time).[Duration](/time#Duration), [error](/builtin#error))
```
ZMQ_LINGER: Retrieve linger period for socket shutdown
Returns time.Duration(-1) for infinite
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc20>
####
func (*Socket) [GetMaxmsgsize](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L233) [¶](#Socket.GetMaxmsgsize)
```
func (soc *[Socket](#Socket)) GetMaxmsgsize() ([int64](/builtin#int64), [error](/builtin#error))
```
ZMQ_MAXMSGSIZE: Maximum acceptable inbound message size
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc21>
####
func (*Socket) [GetMechanism](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L335) [¶](#Socket.GetMechanism)
```
func (soc *[Socket](#Socket)) GetMechanism() ([Mechanism](#Mechanism), [error](/builtin#error))
```
ZMQ_MECHANISM: Retrieve current security mechanism
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc22>
####
func (*Socket) [GetMulticastHops](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L240) [¶](#Socket.GetMulticastHops)
```
func (soc *[Socket](#Socket)) GetMulticastHops() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_MULTICAST_HOPS: Maximum network hops for multicast packets
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc23>
####
func (*Socket) [GetMulticastMaxtpdu](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L622) [¶](#Socket.GetMulticastMaxtpdu)
```
func (soc *[Socket](#Socket)) GetMulticastMaxtpdu() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_MULTICAST_MAXTPDU: Maximum transport data unit size for multicast packets
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-getsockopt#toc26>
####
func (*Socket) [GetPlainPassword](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L361) [¶](#Socket.GetPlainPassword)
```
func (soc *[Socket](#Socket)) GetPlainPassword() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_PLAIN_PASSWORD: Retrieve current password
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc24>
####
func (*Socket) [GetPlainServer](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L343) [¶](#Socket.GetPlainServer)
```
func (soc *[Socket](#Socket)) GetPlainServer() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_PLAIN_SERVER: Retrieve current PLAIN server role
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc25>
####
func (*Socket) [GetPlainUsername](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L350) [¶](#Socket.GetPlainUsername)
```
func (soc *[Socket](#Socket)) GetPlainUsername() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_PLAIN_USERNAME: Retrieve current PLAIN username
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc26>
####
func (*Socket) [GetRate](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L163) [¶](#Socket.GetRate)
```
func (soc *[Socket](#Socket)) GetRate() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_RATE: Retrieve multicast data rate
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc27>
####
func (*Socket) [GetRcvbuf](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L185) [¶](#Socket.GetRcvbuf)
```
func (soc *[Socket](#Socket)) GetRcvbuf() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_RCVBUF: Retrieve kernel receive buffer size
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc28>
####
func (*Socket) [GetRcvhwm](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L142) [¶](#Socket.GetRcvhwm)
```
func (soc *[Socket](#Socket)) GetRcvhwm() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_RCVHWM: Retrieve high water mark for inbound messages
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc29>
####
func (*Socket) [GetRcvmore](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L127) [¶](#Socket.GetRcvmore)
```
func (soc *[Socket](#Socket)) GetRcvmore() ([bool](/builtin#bool), [error](/builtin#error))
```
ZMQ_RCVMORE: More message data parts to follow
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc30>
####
func (*Socket) [GetRcvtimeo](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L249) [¶](#Socket.GetRcvtimeo)
```
func (soc *[Socket](#Socket)) GetRcvtimeo() ([time](/time).[Duration](/time#Duration), [error](/builtin#error))
```
ZMQ_RCVTIMEO: Maximum time before a socket operation returns with EAGAIN
Returns time.Duration(-1) for infinite
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc31>
####
func (*Socket) [GetReconnectIvl](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L207) [¶](#Socket.GetReconnectIvl)
```
func (soc *[Socket](#Socket)) GetReconnectIvl() ([time](/time).[Duration](/time#Duration), [error](/builtin#error))
```
ZMQ_RECONNECT_IVL: Retrieve reconnection interval
Returns time.Duration(-1) for no reconnection
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc32>
####
func (*Socket) [GetReconnectIvlMax](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L218) [¶](#Socket.GetReconnectIvlMax)
```
func (soc *[Socket](#Socket)) GetReconnectIvlMax() ([time](/time).[Duration](/time#Duration), [error](/builtin#error))
```
ZMQ_RECONNECT_IVL_MAX: Retrieve maximum reconnection interval
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc33>
####
func (*Socket) [GetRecoveryIvl](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L170) [¶](#Socket.GetRecoveryIvl)
```
func (soc *[Socket](#Socket)) GetRecoveryIvl() ([time](/time).[Duration](/time#Duration), [error](/builtin#error))
```
ZMQ_RECOVERY_IVL: Get multicast recovery interval
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc34>
####
func (*Socket) [GetSndbuf](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L178) [¶](#Socket.GetSndbuf)
```
func (soc *[Socket](#Socket)) GetSndbuf() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_SNDBUF: Retrieve kernel transmit buffer size
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc35>
####
func (*Socket) [GetSndhwm](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L135) [¶](#Socket.GetSndhwm)
```
func (soc *[Socket](#Socket)) GetSndhwm() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_SNDHWM: Retrieves high water mark for outbound messages
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc36>
####
func (*Socket) [GetSndtimeo](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L262) [¶](#Socket.GetSndtimeo)
```
func (soc *[Socket](#Socket)) GetSndtimeo() ([time](/time).[Duration](/time#Duration), [error](/builtin#error))
```
ZMQ_SNDTIMEO: Maximum time before a socket operation returns with EAGAIN
Returns time.Duration(-1) for infinite
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc37>
####
func (*Socket) [GetSocksProxy](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L525) [¶](#Socket.GetSocksProxy)
```
func (soc *[Socket](#Socket)) GetSocksProxy() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_SOCKS_PROXY: NOT DOCUMENTED
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
####
func (*Socket) [GetTcpKeepalive](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L307) [¶](#Socket.GetTcpKeepalive)
```
func (soc *[Socket](#Socket)) GetTcpKeepalive() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_TCP_KEEPALIVE: Override SO_KEEPALIVE socket option
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc38>
####
func (*Socket) [GetTcpKeepaliveCnt](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L321) [¶](#Socket.GetTcpKeepaliveCnt)
```
func (soc *[Socket](#Socket)) GetTcpKeepaliveCnt() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_TCP_KEEPALIVE_CNT: Override TCP_KEEPCNT socket option
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc39>
####
func (*Socket) [GetTcpKeepaliveIdle](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L314) [¶](#Socket.GetTcpKeepaliveIdle)
```
func (soc *[Socket](#Socket)) GetTcpKeepaliveIdle() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_TCP_KEEPALIVE_IDLE: Override TCP_KEEPIDLE (or TCP_KEEPALIVE on some OS)
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc40>
####
func (*Socket) [GetTcpKeepaliveIntvl](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L328) [¶](#Socket.GetTcpKeepaliveIntvl)
```
func (soc *[Socket](#Socket)) GetTcpKeepaliveIntvl() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_TCP_KEEPALIVE_INTVL: Override TCP_KEEPINTVL socket option
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc41>
####
func (*Socket) [GetTcpMaxrt](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L596) [¶](#Socket.GetTcpMaxrt)
```
func (soc *[Socket](#Socket)) GetTcpMaxrt() ([time](/time).[Duration](/time#Duration), [error](/builtin#error))
```
ZMQ_TCP_MAXRT: Retrieve Max TCP Retransmit Timeout
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-getsockopt#toc44>
####
func (*Socket) [GetThreadSafe](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L609) [¶](#Socket.GetThreadSafe)
```
func (soc *[Socket](#Socket)) GetThreadSafe() ([bool](/builtin#bool), [error](/builtin#error))
```
ZMQ_THREAD_SAFE: Retrieve socket thread safety
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-getsockopt#toc45>
####
func (*Socket) [GetTos](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L449) [¶](#Socket.GetTos)
```
func (soc *[Socket](#Socket)) GetTos() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_TOS: Retrieve the Type-of-Service socket override status
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc42>
####
func (*Socket) [GetType](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L119) [¶](#Socket.GetType)
```
func (soc *[Socket](#Socket)) GetType() ([Type](#Type), [error](/builtin#error))
```
ZMQ_TYPE: Retrieve socket type
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc43>
####
func (*Socket) [GetVmciBufferMaxSize](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L658) [¶](#Socket.GetVmciBufferMaxSize)
```
func (soc *[Socket](#Socket)) GetVmciBufferMaxSize() ([uint64](/builtin#uint64), [error](/builtin#error))
```
ZMQ_VMCI_BUFFER_MAX_SIZE: Retrieve max buffer size of the VMCI socket
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-getsockopt#toc51>
####
func (*Socket) [GetVmciBufferMinSize](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L646) [¶](#Socket.GetVmciBufferMinSize)
```
func (soc *[Socket](#Socket)) GetVmciBufferMinSize() ([uint64](/builtin#uint64), [error](/builtin#error))
```
ZMQ_VMCI_BUFFER_MIN_SIZE: Retrieve min buffer size of the VMCI socket
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-getsockopt#toc50>
####
func (*Socket) [GetVmciBufferSize](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L634) [¶](#Socket.GetVmciBufferSize)
```
func (soc *[Socket](#Socket)) GetVmciBufferSize() ([uint64](/builtin#uint64), [error](/builtin#error))
```
ZMQ_VMCI_BUFFER_SIZE: Retrieve buffer size of the VMCI socket
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-getsockopt#toc49>
####
func (*Socket) [GetVmciConnectTimeout](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L670) [¶](#Socket.GetVmciConnectTimeout)
```
func (soc *[Socket](#Socket)) GetVmciConnectTimeout() ([time](/time).[Duration](/time#Duration), [error](/builtin#error))
```
ZMQ_VMCI_CONNECT_TIMEOUT: Retrieve connection timeout of the VMCI socket
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-getsockopt#toc52>
####
func (*Socket) [GetZapDomain](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L414) [¶](#Socket.GetZapDomain)
```
func (soc *[Socket](#Socket)) GetZapDomain() ([string](/builtin#string), [error](/builtin#error))
```
ZMQ_ZAP_DOMAIN: Retrieve RFC 27 authentication domain
See: <http://api.zeromq.org/4-1:zmq-getsockopt#toc44>
####
func (*Socket) [Getusefd](https://github.com/pebbe/zmq4/blob/v1.2.10/socketget.go#L683) [¶](#Socket.Getusefd)
```
func (soc *[Socket](#Socket)) Getusefd() ([int](/builtin#int), [error](/builtin#error))
```
ZMQ_USE_FD: Retrieve the pre-allocated socket file descriptor
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-getsockopt#toc29>
####
func (*Socket) [Monitor](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1201) [¶](#Socket.Monitor)
```
func (soc *[Socket](#Socket)) Monitor(addr [string](/builtin#string), events [Event](#Event)) [error](/builtin#error)
```
Register a monitoring callback.
See: <http://api.zeromq.org/4-1:zmq-socket-monitor#toc2>
WARNING: Closing a context with a monitoring callback will lead to random crashes.
This is a bug in the ZeroMQ library.
The monitoring callback has the same context as the socket it was created for.
Example:
```
package main

import (
	zmq "github.com/pebbe/zmq4"

	"log"
	"time"
)

func rep_socket_monitor(addr string) {
	s, err := zmq.NewSocket(zmq.PAIR)
	if err != nil {
		log.Fatalln(err)
	}
	err = s.Connect(addr)
	if err != nil {
		log.Fatalln(err)
	}
	for {
		a, b, c, err := s.RecvEvent(0)
		if err != nil {
			log.Println(err)
			break
		}
		log.Println(a, b, c)
	}
	s.Close()
}

func main() {
	// REP socket
	rep, err := zmq.NewSocket(zmq.REP)
	if err != nil {
		log.Fatalln(err)
	}

	// REP socket monitor, all events
	err = rep.Monitor("inproc://monitor.rep", zmq.EVENT_ALL)
	if err != nil {
		log.Fatalln(err)
	}
	go rep_socket_monitor("inproc://monitor.rep")

	// Generate an event
	err = rep.Bind("tcp://*:5555")
	if err != nil {
		log.Fatalln(err)
	}

	// Allow some time for event detection
	time.Sleep(time.Second)

	rep.Close()
	zmq.Term()
}
```
####
func (*Socket) [Recv](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1061) [¶](#Socket.Recv)
```
func (soc *[Socket](#Socket)) Recv(flags [Flag](#Flag)) ([string](/builtin#string), [error](/builtin#error))
```
Receive a message part from a socket.
For a description of flags, see: <http://api.zeromq.org/4-1:zmq-msg-recv#toc2>
####
func (*Socket) [RecvBytes](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1071) [¶](#Socket.RecvBytes)
```
func (soc *[Socket](#Socket)) RecvBytes(flags [Flag](#Flag)) ([][byte](/builtin#byte), [error](/builtin#error))
```
Receive a message part from a socket.
For a description of flags, see: <http://api.zeromq.org/4-1:zmq-msg-recv#toc2>
####
func (*Socket) [RecvBytesWithMetadata](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1440) [¶](#Socket.RecvBytesWithMetadata)
```
func (soc *[Socket](#Socket)) RecvBytesWithMetadata(flags [Flag](#Flag), properties ...[string](/builtin#string)) (msg [][byte](/builtin#byte), metadata map[[string](/builtin#string)][string](/builtin#string), err [error](/builtin#error))
```
Receive a message part with metadata.
This requires ZeroMQ version 4.1.0. Lower versions will return the message part without metadata.
The returned metadata map contains only those properties that exist on the message.
For a description of flags, see: <http://api.zeromq.org/4-1:zmq-msg-recv#toc2>
For a description of metadata, see: <http://api.zeromq.org/4-1:zmq-msg-gets#toc3>
####
func (*Socket) [RecvEvent](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1245) [¶](#Socket.RecvEvent)
```
func (soc *[Socket](#Socket)) RecvEvent(flags [Flag](#Flag)) (event_type [Event](#Event), addr [string](/builtin#string), value [int](/builtin#int), err [error](/builtin#error))
```
Receive a message part from a socket interpreted as an event.
For a description of flags, see: <http://api.zeromq.org/4-1:zmq-msg-recv#toc2>
For a description of event_type, see: <http://api.zeromq.org/4-1:zmq-socket-monitor#toc3>
For an example, see: func (*Socket) Monitor
####
func (*Socket) [RecvMessage](https://github.com/pebbe/zmq4/blob/v1.2.10/utils.go#L112) [¶](#Socket.RecvMessage)
```
func (soc *[Socket](#Socket)) RecvMessage(flags [Flag](#Flag)) (msg [][string](/builtin#string), err [error](/builtin#error))
```
Receive parts as message from socket.
Returns last non-nil error code.
####
func (*Socket) [RecvMessageBytes](https://github.com/pebbe/zmq4/blob/v1.2.10/utils.go#L138) [¶](#Socket.RecvMessageBytes)
```
func (soc *[Socket](#Socket)) RecvMessageBytes(flags [Flag](#Flag)) (msg [][][byte](/builtin#byte), err [error](/builtin#error))
```
Receive parts as message from socket.
Returns last non-nil error code.
####
func (*Socket) [RecvMessageBytesWithMetadata](https://github.com/pebbe/zmq4/blob/v1.2.10/utils.go#L186) [¶](#Socket.RecvMessageBytesWithMetadata)
```
func (soc *[Socket](#Socket)) RecvMessageBytesWithMetadata(flags [Flag](#Flag), properties ...[string](/builtin#string)) (msg [][][byte](/builtin#byte), metadata map[[string](/builtin#string)][string](/builtin#string), err [error](/builtin#error))
```
Receive parts as message from socket, including metadata.
Metadata is picked from the first message part.
For details about metadata, see RecvBytesWithMetadata().
Returns last non-nil error code.
####
func (*Socket) [RecvMessageWithMetadata](https://github.com/pebbe/zmq4/blob/v1.2.10/utils.go#L168) [¶](#Socket.RecvMessageWithMetadata)
```
func (soc *[Socket](#Socket)) RecvMessageWithMetadata(flags [Flag](#Flag), properties ...[string](/builtin#string)) (msg [][string](/builtin#string), metadata map[[string](/builtin#string)][string](/builtin#string), err [error](/builtin#error))
```
Receive parts as message from socket, including metadata.
Metadata is picked from the first message part.
For details about metadata, see RecvWithMetadata().
Returns last non-nil error code.
####
func (*Socket) [RecvWithMetadata](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1424) [¶](#Socket.RecvWithMetadata)
```
func (soc *[Socket](#Socket)) RecvWithMetadata(flags [Flag](#Flag), properties ...[string](/builtin#string)) (msg [string](/builtin#string), metadata map[[string](/builtin#string)][string](/builtin#string), err [error](/builtin#error))
```
Receive a message part with metadata.
This requires ZeroMQ version 4.1.0. Lower versions will return the message part without metadata.
The returned metadata map contains only those properties that exist on the message.
For a description of flags, see: <http://api.zeromq.org/4-1:zmq-msg-recv#toc2>
For a description of metadata, see: <http://api.zeromq.org/4-1:zmq-msg-gets#toc3>
####
func (*Socket) [Send](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1103) [¶](#Socket.Send)
```
func (soc *[Socket](#Socket)) Send(data [string](/builtin#string), flags [Flag](#Flag)) ([int](/builtin#int), [error](/builtin#error))
```
Send a message part on a socket.
For a description of flags, see: <http://api.zeromq.org/4-1:zmq-send#toc2>
####
func (*Socket) [SendBytes](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L1112) [¶](#Socket.SendBytes)
```
func (soc *[Socket](#Socket)) SendBytes(data [][byte](/builtin#byte), flags [Flag](#Flag)) ([int](/builtin#int), [error](/builtin#error))
```
Send a message part on a socket.
For a description of flags, see: <http://api.zeromq.org/4-1:zmq-send#toc2>
####
func (*Socket) [SendMessage](https://github.com/pebbe/zmq4/blob/v1.2.10/utils.go#L17) [¶](#Socket.SendMessage)
```
func (soc *[Socket](#Socket)) SendMessage(parts ...interface{}) (total [int](/builtin#int), err [error](/builtin#error))
```
Send multi-part message on socket.
Any `[]string` or `[][]byte` is split into separate `string`s or `[]byte`s
Any other part that isn't a `string` or `[]byte` is converted to `string` with `fmt.Sprintf("%v", part)`.
Returns total bytes sent.
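Example (a minimal sketch, not from the package documentation):
```
// Sends five message parts: "header", "part1", "part2", "42", and "true"
total, err := soc.SendMessage("header", []string{"part1", "part2"}, 42, true)
if err != nil {
	log.Fatalln(err)
}
log.Println("bytes sent:", total)
```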
####
func (*Socket) [SendMessageDontwait](https://github.com/pebbe/zmq4/blob/v1.2.10/utils.go#L24) [¶](#Socket.SendMessageDontwait)
```
func (soc *[Socket](#Socket)) SendMessageDontwait(parts ...interface{}) (total [int](/builtin#int), err [error](/builtin#error))
```
Like SendMessage(), but adding the DONTWAIT flag.
####
func (*Socket) [ServerAuthCurve](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L614) [¶](#Socket.ServerAuthCurve)
```
func (server *[Socket](#Socket)) ServerAuthCurve(domain, secret_key [string](/builtin#string)) [error](/builtin#error)
```
Set CURVE server role.
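Example (a minimal sketch, not from the package documentation; it assumes the ZAP handler has been started with AuthStart and that client public keys are accepted, for instance via the package's AuthCurveAdd function, not shown here; "global" and the addresses are placeholder values):
```
// Server side
serverPublic, serverSecret, err := zmq.NewCurveKeypair()
if err != nil {
	log.Fatalln(err)
}
server, _ := zmq.NewSocket(zmq.REP)
server.ServerAuthCurve("global", serverSecret)
server.Bind("tcp://*:9000")

// Client side (needs the server's public key and its own keypair)
clientPublic, clientSecret, _ := zmq.NewCurveKeypair()
client, _ := zmq.NewSocket(zmq.REQ)
client.ClientAuthCurve(serverPublic, clientPublic, clientSecret)
client.Connect("tcp://127.0.0.1:9000")
```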
####
func (*Socket) [ServerAuthNull](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L596) [¶](#Socket.ServerAuthNull)
```
func (server *[Socket](#Socket)) ServerAuthNull(domain [string](/builtin#string)) [error](/builtin#error)
```
Set NULL server role.
####
func (*Socket) [ServerAuthPlain](https://github.com/pebbe/zmq4/blob/v1.2.10/auth.go#L605) [¶](#Socket.ServerAuthPlain)
```
func (server *[Socket](#Socket)) ServerAuthPlain(domain [string](/builtin#string)) [error](/builtin#error)
```
Set PLAIN server role.
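Example (a minimal sketch, not from the package documentation; it assumes the ZAP handler has been started with AuthStart, credentials were registered with AuthPlainAdd, and "global" is a hypothetical domain name):
```
// Server socket accepting PLAIN clients in the "global" domain
server.ServerAuthPlain("global")

// Client socket presenting credentials registered with AuthPlainAdd
client.ClientAuthPlain("admin", "secret")
```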
####
func (*Socket) [SetAffinity](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L128) [¶](#Socket.SetAffinity)
```
func (soc *[Socket](#Socket)) SetAffinity(value [uint64](/builtin#uint64)) [error](/builtin#error)
```
ZMQ_AFFINITY: Set I/O thread affinity
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc3>
####
func (*Socket) [SetBacklog](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L219) [¶](#Socket.SetBacklog)
```
func (soc *[Socket](#Socket)) SetBacklog(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_BACKLOG: Set maximum length of the queue of outstanding connections
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc4>
####
func (*Socket) [SetConflate](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L435) [¶](#Socket.SetConflate)
```
func (soc *[Socket](#Socket)) SetConflate(value [bool](/builtin#bool)) [error](/builtin#error)
```
ZMQ_CONFLATE: Keep only last message
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc6>
####
func (*Socket) [SetConnectRid](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L501) [¶](#Socket.SetConnectRid)
```
func (soc *[Socket](#Socket)) SetConnectRid(value [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_CONNECT_RID: Assign the next outbound connection id
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc5>
####
func (*Socket) [SetConnectTimeout](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L748) [¶](#Socket.SetConnectTimeout)
```
func (soc *[Socket](#Socket)) SetConnectTimeout(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_CONNECT_TIMEOUT: Set connect() timeout
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc7>
####
func (*Socket) [SetCurvePublickey](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L407) [¶](#Socket.SetCurvePublickey)
```
func (soc *[Socket](#Socket)) SetCurvePublickey(key [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_CURVE_PUBLICKEY: Set CURVE public key
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc7>
####
func (*Socket) [SetCurveSecretkey](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L414) [¶](#Socket.SetCurveSecretkey)
```
func (soc *[Socket](#Socket)) SetCurveSecretkey(key [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_CURVE_SECRETKEY: Set CURVE secret key
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc8>
####
func (*Socket) [SetCurveServer](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L400) [¶](#Socket.SetCurveServer)
```
func (soc *[Socket](#Socket)) SetCurveServer(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_CURVE_SERVER: Set CURVE server role
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc9>
####
func (*Socket) [SetCurveServerkey](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L421) [¶](#Socket.SetCurveServerkey)
```
func (soc *[Socket](#Socket)) SetCurveServerkey(key [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_CURVE_SERVERKEY: Set CURVE server key
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc10>
####
func (*Socket) [SetGssapiPlaintext](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L556) [¶](#Socket.SetGssapiPlaintext)
```
func (soc *[Socket](#Socket)) SetGssapiPlaintext(value [bool](/builtin#bool)) [error](/builtin#error)
```
ZMQ_GSSAPI_PLAINTEXT: Disable GSSAPI encryption
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc11>
####
func (*Socket) [SetGssapiPrincipal](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L532) [¶](#Socket.SetGssapiPrincipal)
```
func (soc *[Socket](#Socket)) SetGssapiPrincipal(value [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_GSSAPI_PRINCIPAL: Set name of GSSAPI principal
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc12>
####
func (*Socket) [SetGssapiServer](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L516) [¶](#Socket.SetGssapiServer)
```
func (soc *[Socket](#Socket)) SetGssapiServer(value [bool](/builtin#bool)) [error](/builtin#error)
```
ZMQ_GSSAPI_SERVER: Set GSSAPI server role
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc13>
####
func (*Socket) [SetGssapiServicePrincipal](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L544) [¶](#Socket.SetGssapiServicePrincipal)
```
func (soc *[Socket](#Socket)) SetGssapiServicePrincipal(value [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_GSSAPI_SERVICE_PRINCIPAL: Set name of GSSAPI service principal
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc14>
####
func (*Socket) [SetHandshakeIvl](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L572) [¶](#Socket.SetHandshakeIvl)
```
func (soc *[Socket](#Socket)) SetHandshakeIvl(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_HANDSHAKE_IVL: Set maximum handshake interval
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc15>
####
func (*Socket) [SetHeartbeatIvl](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L697) [¶](#Socket.SetHeartbeatIvl)
```
func (soc *[Socket](#Socket)) SetHeartbeatIvl(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_HEARTBEAT_IVL: Set interval between sending ZMTP heartbeats
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc17>
####
func (*Socket) [SetHeartbeatTimeout](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L723) [¶](#Socket.SetHeartbeatTimeout)
```
func (soc *[Socket](#Socket)) SetHeartbeatTimeout(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_HEARTBEAT_TIMEOUT: Set timeout for ZMTP heartbeats
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc18>
####
func (*Socket) [SetHeartbeatTtl](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L710) [¶](#Socket.SetHeartbeatTtl)
```
func (soc *[Socket](#Socket)) SetHeartbeatTtl(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_HEARTBEAT_TTL: Set the TTL value for ZMTP heartbeats
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc19>
####
func (*Socket) [SetIdentity](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L149) [¶](#Socket.SetIdentity)
```
func (soc *[Socket](#Socket)) SetIdentity(value [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_IDENTITY: Set socket identity
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc16>
####
func (*Socket) [SetImmediate](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L277) [¶](#Socket.SetImmediate)
```
func (soc *[Socket](#Socket)) SetImmediate(value [bool](/builtin#bool)) [error](/builtin#error)
```
ZMQ_IMMEDIATE: Queue messages only to completed connections
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc17>
####
func (*Socket) [SetInvertMatching](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L685) [¶](#Socket.SetInvertMatching)
```
func (soc *[Socket](#Socket)) SetInvertMatching(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_INVERT_MATCHING: Invert message filtering
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc22>
####
func (*Socket) [SetIpv6](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L266) [¶](#Socket.SetIpv6)
```
func (soc *[Socket](#Socket)) SetIpv6(value [bool](/builtin#bool)) [error](/builtin#error)
```
ZMQ_IPV6: Enable IPv6 on socket
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc18>
####
func (*Socket) [SetLinger](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L187) [¶](#Socket.SetLinger)
```
func (soc *[Socket](#Socket)) SetLinger(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_LINGER: Set linger period for socket shutdown
For infinite, use -1
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc19>
####
func (*Socket) [SetMaxmsgsize](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L226) [¶](#Socket.SetMaxmsgsize)
```
func (soc *[Socket](#Socket)) SetMaxmsgsize(value [int64](/builtin#int64)) [error](/builtin#error)
```
ZMQ_MAXMSGSIZE: Maximum acceptable inbound message size
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc20>
####
func (*Socket) [SetMulticastHops](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L233) [¶](#Socket.SetMulticastHops)
```
func (soc *[Socket](#Socket)) SetMulticastHops(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_MULTICAST_HOPS: Maximum network hops for multicast packets
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc21>
####
func (*Socket) [SetMulticastMaxtpdu](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L774) [¶](#Socket.SetMulticastMaxtpdu)
```
func (soc *[Socket](#Socket)) SetMulticastMaxtpdu(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_MULTICAST_MAXTPDU: Maximum transport data unit size for multicast packets
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc27>
####
func (*Socket) [SetPlainPassword](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L390) [¶](#Socket.SetPlainPassword)
```
func (soc *[Socket](#Socket)) SetPlainPassword(password [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_PLAIN_PASSWORD: Set PLAIN security password
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc22>
####
func (*Socket) [SetPlainServer](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L373) [¶](#Socket.SetPlainServer)
```
func (soc *[Socket](#Socket)) SetPlainServer(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_PLAIN_SERVER: Set PLAIN server role
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc23>
####
func (*Socket) [SetPlainUsername](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L380) [¶](#Socket.SetPlainUsername)
```
func (soc *[Socket](#Socket)) SetPlainUsername(username [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_PLAIN_USERNAME: Set PLAIN security username
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc24>
####
func (*Socket) [SetProbeRouter](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L304) [¶](#Socket.SetProbeRouter)
```
func (soc *[Socket](#Socket)) SetProbeRouter(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_PROBE_ROUTER: bootstrap connections to ROUTER sockets
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc25>
####
func (*Socket) [SetRate](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L156) [¶](#Socket.SetRate)
```
func (soc *[Socket](#Socket)) SetRate(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_RATE: Set multicast data rate
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc26>
####
func (*Socket) [SetRcvbuf](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L178) [¶](#Socket.SetRcvbuf)
```
func (soc *[Socket](#Socket)) SetRcvbuf(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_RCVBUF: Set kernel receive buffer size
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc27>
####
func (*Socket) [SetRcvhwm](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L121) [¶](#Socket.SetRcvhwm)
```
func (soc *[Socket](#Socket)) SetRcvhwm(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_RCVHWM: Set high water mark for inbound messages
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc28>
####
func (*Socket) [SetRcvtimeo](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L242) [¶](#Socket.SetRcvtimeo)
```
func (soc *[Socket](#Socket)) SetRcvtimeo(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_RCVTIMEO: Maximum time before a recv operation returns with EAGAIN
For infinite, use -1
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc29>
####
func (*Socket) [SetReconnectIvl](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L200) [¶](#Socket.SetReconnectIvl)
```
func (soc *[Socket](#Socket)) SetReconnectIvl(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_RECONNECT_IVL: Set reconnection interval
For no reconnection, use -1
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc30>
####
func (*Socket) [SetReconnectIvlMax](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L211) [¶](#Socket.SetReconnectIvlMax)
```
func (soc *[Socket](#Socket)) SetReconnectIvlMax(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_RECONNECT_IVL_MAX: Set maximum reconnection interval
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc31>
####
func (*Socket) [SetRecoveryIvl](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L163) [¶](#Socket.SetRecoveryIvl)
```
func (soc *[Socket](#Socket)) SetRecoveryIvl(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_RECOVERY_IVL: Set multicast recovery interval
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc32>
####
func (*Socket) [SetReqCorrelate](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L318) [¶](#Socket.SetReqCorrelate)
```
func (soc *[Socket](#Socket)) SetReqCorrelate(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_REQ_CORRELATE: match replies with requests
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc33>
####
func (*Socket) [SetReqRelaxed](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L325) [¶](#Socket.SetReqRelaxed)
```
func (soc *[Socket](#Socket)) SetReqRelaxed(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_REQ_RELAXED: relax strict alternation between request and reply
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc34>
####
func (*Socket) [SetRouterHandover](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L473) [¶](#Socket.SetRouterHandover)
```
func (soc *[Socket](#Socket)) SetRouterHandover(value [bool](/builtin#bool)) [error](/builtin#error)
```
ZMQ_ROUTER_HANDOVER: handle duplicate client identities on ROUTER sockets
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc35>
####
func (*Socket) [SetRouterMandatory](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L288) [¶](#Socket.SetRouterMandatory)
```
func (soc *[Socket](#Socket)) SetRouterMandatory(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_ROUTER_MANDATORY: accept only routable messages on ROUTER sockets
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc36>
####
func (*Socket) [SetRouterRaw](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L297) [¶](#Socket.SetRouterRaw)
```
func (soc *[Socket](#Socket)) SetRouterRaw(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_ROUTER_RAW: switch ROUTER socket to raw mode
This option is deprecated since ZeroMQ version 4.1, please use ZMQ_STREAM sockets instead.
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc37>
####
func (*Socket) [SetSndbuf](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L171) [¶](#Socket.SetSndbuf)
```
func (soc *[Socket](#Socket)) SetSndbuf(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_SNDBUF: Set kernel transmit buffer size
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc38>
####
func (*Socket) [SetSndhwm](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L114) [¶](#Socket.SetSndhwm)
```
func (soc *[Socket](#Socket)) SetSndhwm(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_SNDHWM: Set high water mark for outbound messages
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc39>
####
func (*Socket) [SetSndtimeo](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L255) [¶](#Socket.SetSndtimeo)
```
func (soc *[Socket](#Socket)) SetSndtimeo(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_SNDTIMEO: Maximum time before a send operation returns with EAGAIN
For infinite, use -1
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc40>
####
func (*Socket) [SetSocksProxy](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L583) [¶](#Socket.SetSocksProxy)
```
func (soc *[Socket](#Socket)) SetSocksProxy(value [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_SOCKS_PROXY: NOT DOCUMENTED
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
####
func (*Socket) [SetStreamNotify](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L673) [¶](#Socket.SetStreamNotify)
```
func (soc *[Socket](#Socket)) SetStreamNotify(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_STREAM_NOTIFY: send connect and disconnect notifications
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc48>
####
func (*Socket) [SetSubscribe](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L135) [¶](#Socket.SetSubscribe)
```
func (soc *[Socket](#Socket)) SetSubscribe(filter [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_SUBSCRIBE: Establish message filter
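As a hedged illustration (a fragment assuming the package is imported as zmq; endpoints and topics are arbitrary), a SUB socket delivers nothing until at least one filter is established; filters are prefix matches, and the empty string subscribes to everything:
```
sub, _ := zmq.NewSocket(zmq.SUB)
defer sub.Close()
sub.Connect("tcp://localhost:5556")
sub.SetSubscribe("weather.") // prefix match; "" would subscribe to all messages

pub, _ := zmq.NewSocket(zmq.PUB)
defer pub.Close()
pub.Bind("tcp://*:5556")
pub.Send("weather.paris 21C", 0) // delivered to the subscriber
pub.Send("sports.score 2-1", 0)  // filtered out by the subscription
```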
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc41>
####
func (*Socket) [SetTcpAcceptFilter](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L363) [¶](#Socket.SetTcpAcceptFilter)
```
func (soc *[Socket](#Socket)) SetTcpAcceptFilter(filter [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_TCP_ACCEPT_FILTER: Assign filters to allow new TCP connections
This option is deprecated since ZeroMQ version 4.1, please use authentication via the ZAP API and IP address whitelisting / blacklisting.
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc50>
####
func (*Socket) [SetTcpKeepalive](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L332) [¶](#Socket.SetTcpKeepalive)
```
func (soc *[Socket](#Socket)) SetTcpKeepalive(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_TCP_KEEPALIVE: Override SO_KEEPALIVE socket option
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc42>
####
func (*Socket) [SetTcpKeepaliveCnt](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L346) [¶](#Socket.SetTcpKeepaliveCnt)
```
func (soc *[Socket](#Socket)) SetTcpKeepaliveCnt(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_TCP_KEEPALIVE_CNT: Override TCP_KEEPCNT socket option
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc43>
####
func (*Socket) [SetTcpKeepaliveIdle](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L339) [¶](#Socket.SetTcpKeepaliveIdle)
```
func (soc *[Socket](#Socket)) SetTcpKeepaliveIdle(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_TCP_KEEPALIVE_IDLE: Override TCP_KEEPCNT(or TCP_KEEPALIVE on some OS)
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc44>
####
func (*Socket) [SetTcpKeepaliveIntvl](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L353) [¶](#Socket.SetTcpKeepaliveIntvl)
```
func (soc *[Socket](#Socket)) SetTcpKeepaliveIntvl(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_TCP_KEEPALIVE_INTVL: Override TCP_KEEPINTVL socket option
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc45>
####
func (*Socket) [SetTcpMaxrt](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L761) [¶](#Socket.SetTcpMaxrt)
```
func (soc *[Socket](#Socket)) SetTcpMaxrt(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_TCP_MAXRT: Set TCP Maximum Retransmit Timeout
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc54>
####
func (*Socket) [SetTos](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L489) [¶](#Socket.SetTos)
```
func (soc *[Socket](#Socket)) SetTos(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_TOS: Set the Type-of-Service on socket
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc46>
####
func (*Socket) [SetUnsubscribe](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L142) [¶](#Socket.SetUnsubscribe)
```
func (soc *[Socket](#Socket)) SetUnsubscribe(filter [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_UNSUBSCRIBE: Remove message filter
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc47>
####
func (*Socket) [SetUseFd](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L835) [¶](#Socket.SetUseFd)
```
func (soc *[Socket](#Socket)) SetUseFd(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_USE_FD: Set the pre-allocated socket file descriptor
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc31>
####
func (*Socket) [SetVmciBufferMaxSize](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L810) [¶](#Socket.SetVmciBufferMaxSize)
```
func (soc *[Socket](#Socket)) SetVmciBufferMaxSize(value [uint64](/builtin#uint64)) [error](/builtin#error)
```
ZMQ_VMCI_BUFFER_MAX_SIZE: Set max buffer size of the VMCI socket
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc70>
####
func (*Socket) [SetVmciBufferMinSize](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L798) [¶](#Socket.SetVmciBufferMinSize)
```
func (soc *[Socket](#Socket)) SetVmciBufferMinSize(value [uint64](/builtin#uint64)) [error](/builtin#error)
```
ZMQ_VMCI_BUFFER_MIN_SIZE: Set min buffer size of the VMCI socket
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc69>
####
func (*Socket) [SetVmciBufferSize](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L786) [¶](#Socket.SetVmciBufferSize)
```
func (soc *[Socket](#Socket)) SetVmciBufferSize(value [uint64](/builtin#uint64)) [error](/builtin#error)
```
ZMQ_VMCI_BUFFER_SIZE: Set buffer size of the VMCI socket
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc68>
####
func (*Socket) [SetVmciConnectTimeout](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L822) [¶](#Socket.SetVmciConnectTimeout)
```
func (soc *[Socket](#Socket)) SetVmciConnectTimeout(value [time](/time).[Duration](/time#Duration)) [error](/builtin#error)
```
ZMQ_VMCI_CONNECT_TIMEOUT: Set connection timeout of the VMCI socket
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc71>
####
func (*Socket) [SetXpubManual](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L646) [¶](#Socket.SetXpubManual)
```
func (soc *[Socket](#Socket)) SetXpubManual(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_XPUB_MANUAL: change the subscription handling to manual
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc59>
####
func (*Socket) [SetXpubNodrop](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L600) [¶](#Socket.SetXpubNodrop)
```
func (soc *[Socket](#Socket)) SetXpubNodrop(value [bool](/builtin#bool)) [error](/builtin#error)
```
ZMQ_XPUB_NODROP: do not silently drop messages if SENDHWM is reached
Returns ErrorNotImplemented41 with ZeroMQ version < 4.1
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc60>
####
func (*Socket) [SetXpubVerbose](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L311) [¶](#Socket.SetXpubVerbose)
```
func (soc *[Socket](#Socket)) SetXpubVerbose(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_XPUB_VERBOSE: provide all subscription messages on XPUB sockets
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc48>
####
func (*Socket) [SetXpubVerboser](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L736) [¶](#Socket.SetXpubVerboser)
```
func (soc *[Socket](#Socket)) SetXpubVerboser(value [int](/builtin#int)) [error](/builtin#error)
```
ZMQ_XPUB_VERBOSER: pass subscribe and unsubscribe messages on XPUB socket
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc58>
####
func (*Socket) [SetXpubWelcomeMsg](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L658) [¶](#Socket.SetXpubWelcomeMsg)
```
func (soc *[Socket](#Socket)) SetXpubWelcomeMsg(value [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_XPUB_WELCOME_MSG: set welcome message that will be received by subscriber when connecting
Returns ErrorNotImplemented42 with ZeroMQ version < 4.2
See: <http://api.zeromq.org/4-2:zmq-setsockopt#toc61>
####
func (*Socket) [SetZapDomain](https://github.com/pebbe/zmq4/blob/v1.2.10/socketset.go#L428) [¶](#Socket.SetZapDomain)
```
func (soc *[Socket](#Socket)) SetZapDomain(domain [string](/builtin#string)) [error](/builtin#error)
```
ZMQ_ZAP_DOMAIN: Set RFC 27 authentication domain
See: <http://api.zeromq.org/4-1:zmq-setsockopt#toc49>
####
func (Socket) [String](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L859) [¶](#Socket.String)
```
func (soc [Socket](#Socket)) String() [string](/builtin#string)
```
Socket as string.
####
func (*Socket) [Unbind](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L986) [¶](#Socket.Unbind)
```
func (soc *[Socket](#Socket)) Unbind(endpoint [string](/builtin#string)) [error](/builtin#error)
```
Stop accepting connections on a socket.
For a description of endpoint, see: <http://api.zeromq.org/4-1:zmq-unbind#toc2>
####
type [State](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L790) [¶](#State)
```
type State [int](/builtin#int)
```
Used by (soc *Socket)GetEvents()
####
func (State) [String](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L802) [¶](#State.String)
```
func (s [State](#State)) String() [string](/builtin#string)
```
Socket state as string.
####
type [Type](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L624) [¶](#Type)
```
type Type [int](/builtin#int)
```
Specifies the type of a socket, used by NewSocket()
####
func (Type) [String](https://github.com/pebbe/zmq4/blob/v1.2.10/zmq4.go#L646) [¶](#Type.String)
```
func (t [Type](#Type)) String() [string](/builtin#string)
```
Socket type as string. |
@josetr/medusa | npm | JavaScript | Medusa
===
####
[Documentation](https://docs.medusajs.com) |
[Medusa Admin Demo](https://demo.medusajs.com/) |
[Website](https://www.medusajs.com)
An open source composable commerce engine built for developers.
Getting Started
---
Follow our [quickstart guide](https://docs.medusajs.com/quickstart/quick-start) to learn how to set up a Medusa server.
### Requirements
You can check out [this documentation for details about setting up your environment](https://docs.medusajs.com/tutorial/set-up-your-development-environment).
What is Medusa
---
Medusa is an open source composable commerce engine built with Node.js. Medusa enables developers to build scalable and sophisticated commerce setups with low effort and great developer experience.
You can learn more about [Medusa’s architecture in our documentation](https://docs.medusajs.com/introduction).
### Features
You can learn about all of the ecommerce features that Medusa provides [in our documentation](https://docs.medusajs.com/#features).
Roadmap
---
Write-ups for all features will be made available in [Github discussions](https://github.com/medusajs/medusa/discussions) before starting the implementation process.
### **2022**
* [x] Admin revamp
* [x] Tax API
* [x] Tax Calculation Strategy
* [x] Cart Calculation Strategy
* [x] Customer Groups API
* [x] Promotions API
* [x] Price Lists API
* [x] Price Selection Strategy
* [x] Import / Export API
* [x] Sales Channel API
* [ ] Extended Order API (managing placed orders)
* [ ] PaymentCollection API (collecting payments separate from carts and draft orders)
* [ ] Multi-warehouse API
* [ ] Extended Product API (custom fields, publishing control, and more)
Plugins
---
Check out [our available plugins](https://github.com/medusajs/medusa/tree/master/packages) that you can install and use instantly on your Medusa server.
Contributions
---
Please check [our contribution guide](https://github.com/medusajs/medusa/blob/master/CONTRIBUTING.md) for details about how to contribute to both our codebase and our documentation.
Upgrade Guides
---
Follow our [upgrade guides](https://docs.medusajs.com/advanced/backend/upgrade-guides/) on the documentation to keep your Medusa project up-to-date.
Community & Support
---
Use these channels to be part of the community, ask for help while using Medusa, or just learn more about Medusa:
* [Discord](https://discord.gg/medusajs): This is the main channel to join the community. You can ask for help, showcase your work with Medusa, and stay up to date with everything Medusa.
* [GitHub Issues](https://github.com/medusajs/medusa/issues): for sending in any issues you face or bugs you find while using Medusa.
* [GitHub Discussions](https://github.com/medusajs/medusa/discussions): for joining discussions and submitting your ideas.
* [Medusa Blog](https://medusajs.com/blog/): find diverse tutorials and company news.
* [Twitter](https://twitter.com/medusajs)
* [LinkedIn](https://www.linkedin.com/company/medusajs)
License
---
Licensed under the [MIT License](https://github.com/medusajs/medusa/blob/master/LICENSE)
Readme
---
### Keywords
none |
scGate | cran | R | Package ‘scGate’
December 20, 2022
Type Package
Title Marker-Based Cell Type Purification for Single-Cell Sequencing
Data
Version 1.4.1
Description
A common bioinformatics task in single-cell data analysis is to purify a cell type or cell
population of interest from heterogeneous datasets. 'scGate' automatizes marker-based
purification of specific cell populations, without requiring training data or reference gene
expression profiles. Briefly, 'scGate' takes as input: i) a gene expression matrix stored in a
'Seurat' object and ii) a “gating model” (GM), consisting of a set of marker genes that define
the cell population of interest. The GM can be as simple as a single marker gene, or a
combination of positive and negative markers. More complex GMs can be constructed in a
hierarchical fashion, akin to gating strategies employed in flow cytometry. 'scGate' evaluates
the strength of signature marker expression in each cell using the rank-based method 'UCell',
and then performs k-nearest neighbor (kNN) smoothing by calculating the mean 'UCell' score
across neighboring cells. kNN-smoothing aims at compensating for the large degree of sparsity
in scRNA-seq data. Finally, a universal threshold over kNN-smoothed signature scores is
applied in binary decision trees generated from the user-provided gating model, to annotate
cells as either “pure” or “impure”, with respect to the cell population of interest. See the
related publication Andreatta et al. (2022) <doi:10.1093/bioinformatics/btac141>.
biocViews
Depends R(>= 4.2.0)
Imports Seurat(>= 4.0.0), UCell(>= 2.1.3), dplyr, stats, utils,
methods, patchwork, ggridges, reshape2, ggplot2, BiocParallel
Suggests ggparty, partykit, knitr, rmarkdown
VignetteBuilder knitr
URL https://github.com/carmonalab/scGate
BugReports https://github.com/carmonalab/scGate/issues
License GPL-3
Encoding UTF-8
LazyData true
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-8036-2647>),
<NAME> [aut] (<https://orcid.org/0000-0001-8540-5389>),
<NAME> [aut],
<NAME> [aut] (<https://orcid.org/0000-0002-2495-0671>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-12-20 11:00:02 UTC
R topics documented:
combine_scGate_multiclass . . . 2
gating_model . . . 3
genes.blacklist.default . . . 4
get_scGateDB . . . 5
get_testing_data . . . 6
load_scGate_model . . . 6
performance.metrics . . . 7
plot_levels . . . 8
plot_tree . . . 9
plot_UCell_scores . . . 9
query.seurat . . . 10
scGate . . . 11
test_my_model . . . 13
combine_scGate_multiclass
Combine scGate annotations
Description
If a single-cell dataset has precomputed results for multiple scGate models, combine them into a
multi-class annotation.
Usage
combine_scGate_multiclass(
obj,
prefix = "is.pure_",
scGate_classes = NULL,
min_cells = 20,
multi.asNA = FALSE,
out_column = "scGate_multi"
)
Arguments
obj Seurat object with scGate results for multiple models stored as metadata
prefix Prefix in metadata column names for scGate result models
scGate_classes Vector of scGate model names. If NULL, use all columns that start with "prefix"
above.
min_cells Minimum number of cells for a cell label to be considered
multi.asNA How to label cells that are "Pure" for multiple annotations: "Multi" (FALSE) or
NA (TRUE)
out_column The name of the metadata column where to store the multi-class cell labels
Value
A Seurat object with multi-class annotations based on the combination of multiple models. A new
column (by default "scGate_multi") is added to the metadata of the Seurat object.
Examples
# Define gating models
model.B <- gating_model(name = "Bcell", signature = c("MS4A1"))
model.T <- gating_model(name = "Tcell", signature = c("CD2","CD3D","CD3E"))
# Apply scGate with these models
data(query.seurat)
query.seurat <- scGate(query.seurat, model=model.T,
reduction="pca", output.col.name = "is.pure_Tcell")
query.seurat <- scGate(query.seurat, model=model.B,
reduction="pca", output.col.name = "is.pure_Bcell")
query.seurat <- combine_scGate_multiclass(query.seurat, scGate_class=c("Tcell","Bcell"))
table(query.seurat$scGate_multi)
gating_model Model creation and editing
Description
Generate an scGate model from scratch or edit an existing one
Usage
gating_model(
model = NULL,
level = 1,
name,
signature,
positive = TRUE,
negative = FALSE,
remove = FALSE
)
Arguments
model scGate model to be modified. When is NULL (default) a new model will be
initialized.
level integer. It refers to the hierarchical level of the model tree in which the signature
will be added (level=1 by default)
name Arbitrary signature name (i.e. Immune, Tcell, NK etc).
signature character vector indicating gene symbols to be included in the signature (e.g.
CD3D). If a minus sign is placed at the end of a gene name (e.g. "CD3D-"), this
gene will be used as negative in UCell computing. See the UCell documentation
for details.
positive Logical indicating whether the signature must be used as a positive signature at
that model level. Default is TRUE.
negative Same as ‘positive‘ but negated (negative=TRUE is equivalent to positive=FALSE)
remove Whether to remove the given signature from the model
Value
A scGate model that can be used by scGate to filter target cell types.
Examples
# create a simple gating model
my_model <- gating_model(level = 1, name = "immune", signature = c("PTPRC"))
my_model <- gating_model(model = my_model, level = 1, positive = FALSE,
name = "Epithelial", signature = c("CDH1","FLT1") )
# Remove an existing signature
dropped_model <- gating_model(model = my_model, remove =TRUE, level = 1, name = "Epithelial")
genes.blacklist.default
Blocklist of genes for dimensionality reduction
Description
A list of signatures, for mouse and human. These include cell cycling, heat-shock genes, mitochon-
drial genes, and other genes classes, that may confound the identification of cell types. These are
used internally by scGate and excluded from the calculation of dimensional reductions (PCA).
Format
A list of signatures
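A minimal usage sketch (not part of the package examples); the element names and structure of the list are an assumption, so inspect them on your installation:
library(scGate)
bl <- scGate::genes.blacklist.default
names(bl) # inspect the available species / gene classes (names are an assumption)
lapply(bl, head)
# scGate uses this list when genes.blacklist = "default" (the default);
# set genes.blacklist = NULL to disable blacklisting
data(query.seurat)
model.B <- gating_model(name = "Bcell", signature = c("MS4A1"))
query.seurat <- scGate(query.seurat, model = model.B, reduction = "pca",
genes.blacklist = "default")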
get_scGateDB Load scGate model database
Description
Download, update or load local version of the scGate model database. These are stored in a GitHub
repository, from where you can download specific versions of the database.
Usage
get_scGateDB(
destination = tempdir(),
force_update = FALSE,
version = "latest",
branch = c("master", "dev"),
verbose = FALSE,
repo_url = "https://github.com/carmonalab/scGate_models"
)
Arguments
destination Destination path for storing the DB. The default is tempdir(); if you wish to edit
locally the models and link them to the current project, set this parameter to a
new directory name, e.g. scGateDB
force_update Whether to update an existing database.
version Specify the version of the scGate_models database (e.g. ’v0.1’). By default
downloads the latest available version.
branch branch of the scGate model repository, either ’master’ (default) or ’dev’ for the
latest models
verbose display progress messages
repo_url URL path to scGate model repository database
Details
Models for scGate are dataframes where each line is a signature for a given filtering level. A
database of models can be downloaded using the function get_scGateDB. You may directly use the
models from the database, or edit one of these models to generate your own custom gating model.
Value
A list of models, organized according to the folder structure of the database. See the examples
below.
See Also
scGate load_scGate_model
Examples
scGate.model.db <- get_scGateDB()
# To see a specific model, browse the list of models:
scGate.model.db$human$generic$Myeloid
# Apply scGate with this model
data(query.seurat)
query <- scGate(query.seurat, model=scGate.model.db$human$generic$Myeloid, reduction="pca")
get_testing_data Download sample data
Description
Helper function to obtain some sample data
Usage
get_testing_data(version = "hsa.latest", destination = tempdir())
Arguments
version Which sample dataset
destination Save to this directory
Value
A list of datasets that can be used to test scGate
Examples
testing.datasets <- get_testing_data(version = 'hsa.latest')
load_scGate_model Load a single scGate model
Description
Loads a custom scGate model into R. For the format of these models, have a look or edit one of the
default models obtained with get_scGateDB
Usage
load_scGate_model(model_file, master.table = "master_table.tsv")
Arguments
model_file scGate model file, in .tsv format.
master.table File name of the master table (in repo_path folder) that contains cell type
signatures.
Value
A scGate model in dataframe format, which can given as input to the scGate function.
See Also
scGate get_scGateDB
Examples
dir <- tempdir() # this may also be set to your working directory
models <- get_scGateDB(destination=dir)
# Original or edited model
model.path <- paste0(dir,"/scGate_models-master/human/generic/Bcell_scGate_Model.tsv")
master.path <- paste0(dir,"/scGate_models-master/human/generic/master_table.tsv")
my.model <- load_scGate_model(model.path, master.path)
my.model
performance.metrics Performance metrics
Description
Evaluate model performance for binary tasks
Usage
performance.metrics(actual, pred, return_contingency = FALSE)
Arguments
actual Logical or numeric binary vector giving the actual cell labels.
pred Logical or numeric binary vector giving the predicted cell labels.
return_contingency
Logical indicating if contingency table must be returned.
Value
Prediction performance metrics (Precision, Recall, MCC) between actual and predicted cell type
labels.
Examples
results <- performance.metrics(actual= sample(c(1,0),20,replace=TRUE),
pred = sample(c(1,0),20,replace=TRUE,prob = c(0.65,0.35) ) )
plot_levels Plot scGate filtering results by level
Description
Fast plotting of gating results over each model level.
Usage
plot_levels(obj, pure.col = "green", impure.col = "gray")
Arguments
obj Gated Seurat object output of scGate filtering function
pure.col Color code for pure category
impure.col Color code for impure category
Value
UMAP plots with ’Pure’/’Impure’ labels for each level of the scGate model
Examples
scGate.model.db <- get_scGateDB()
model <- scGate.model.db$human$generic$Myeloid
# Apply scGate with this model
data(query.seurat)
query.seurat <- scGate(query.seurat, model=model,
reduction="pca", save.levels=TRUE)
library(patchwork)
pll <- plot_levels(query.seurat)
wrap_plots(pll)
plot_tree Plot model tree
Description
View scGate model as a decision tree (require ggparty package)
Usage
plot_tree(model, box.size = 8, edge.text.size = 4)
Arguments
model A scGate model to be visualized
box.size Box size
edge.text.size Edge text size
Value
A plot of the model as a decision tree. At each level, green boxes indicate the ’positive’ (accepted)
cell types, red boxed indicate the ’negative’ cell types (filtered out). The final Pure population is the
bottom right subset in the tree.
Examples
library(ggparty)
models <- get_scGateDB()
plot_tree(models$human$generic$Tcell)
plot_UCell_scores Plot UCell scores by level
Description
Show distribution of UCell scores for each level of a given scGate model
Usage
plot_UCell_scores(
obj,
model,
overlay = 5,
pos.thr = 0.2,
neg.thr = 0.2,
ncol = NULL,
combine = TRUE
)
Arguments
obj Gated Seurat object (output of scGate)
model scGate model used to identify a target population in obj
overlay Degree of overlay for ggridges
pos.thr Threshold for positive signatures used in scGate model (set to NULL to disable)
neg.thr Threshold for negative signatures used in scGate model (set to NULL to disable)
ncol Number of columns in output object (passed to wrap_plots)
combine Whether to combine plots into a single object, or to return a list of plots
Value
Returns a density plot of UCell scores for the signatures in the scGate model, for each level of the
model
Either a plot combined by patchwork (combine=T) or a list of plots (combine=F)
Examples
scGate.model.db <- get_scGateDB()
model <- scGate.model.db$human$generic$Tcell
# Apply scGate with this model
data(query.seurat)
query.seurat <- scGate(query.seurat, model=model,
reduction="pca", save.levels=TRUE)
# View UCell score distribution
plot_UCell_scores(query.seurat, model)
query.seurat Toy dataset to test the package
Description
A downsampled version (300 cells) of the single-cell dataset by Zilionis et al. (2019) <doi:10.1016/j.immuni.2019.03.009>,
with precalculated PCA and UMAP reductions.
Format
A Seurat object
scGate Filter single-cell data by cell type
Description
Apply scGate to filter specific cell types in a query dataset
Usage
scGate(
data,
model,
pos.thr = 0.2,
neg.thr = 0.2,
assay = NULL,
slot = "data",
ncores = 1,
seed = 123,
keep.ranks = FALSE,
reduction = c("calculate", "pca", "umap", "harmony", "Liors_elephant"),
min.cells = 30,
nfeatures = 2000,
pca.dim = 30,
param_decay = 0.25,
maxRank = 1500,
output.col.name = "is.pure",
k.param = 30,
genes.blacklist = "default",
multi.asNA = FALSE,
additional.signatures = NULL,
save.levels = FALSE,
verbose = FALSE
)
Arguments
data Seurat object containing a query data set - filtering will be applied to this object
model A single scGate model, or a list of scGate models. See Details for this format
pos.thr Minimum UCell score value for positive signatures
neg.thr Maximum UCell score value for negative signatures
assay Seurat assay to use
slot Data slot in Seurat object
ncores Number of processors for parallel processing
seed Integer seed for random number generator
keep.ranks Store UCell rankings in Seurat object. This will speed up calculations if the
same object is applied again with new signatures.
reduction Dimensionality reduction to use for knn smoothing. By default, calculates a new
reduction based on the given assay; otherwise you may specify a precalculated
dimensionality reduction (e.g. in the case of an integrated dataset after batch-
effect correction)
min.cells Minimum number of cells to cluster or define cell types
nfeatures Number of variable genes for dimensionality reduction
pca.dim Number of principal components for dimensionality reduction
param_decay Controls decrease in parameter complexity at each iteration, between 0 and 1.
param_decay == 0 gives no decay, increasingly higher param_decay gives in-
creasingly stronger decay
maxRank Maximum number of genes that UCell will rank per cell
output.col.name
Column name with ’pure/impure’ annotation
k.param Number of nearest neighbors for knn smoothing
genes.blacklist
Genes blacklisted from variable features. The default loads the list of genes
in scGate::genes.blacklist.default; you may deactivate blacklisting by
setting genes.blacklist=NULL
multi.asNA How to label cells that are "Pure" for multiple annotations: "Multi" (FALSE) or
NA (TRUE)
additional.signatures
A list of additional signatures, not included in the model, to be evaluated (e.g. a
cycling signature). The scores for this list of signatures will be returned but not
used for filtering.
save.levels Whether to save in metadata the filtering output for each gating model level
verbose Verbose output
Details
Models for scGate are data frames where each line is a signature for a given filtering level. A
database of models can be downloaded using the function get_scGateDB. You may directly use the
models from the database, or edit one of these models to generate your own custom gating model.
Multiple models can also be evaluated at once, by running scGate with a list of models. Gating for
each individual model is returned as metadata, with a consensus annotation stored in scGate_multi
metadata field. This allows using scGate as a multi-class classifier, where only cells that are "Pure"
for a single model are assigned a label, cells that are "Pure" for more than one gating model are
labeled as "Multi", all others cells are annotated as NA.
Value
A new metadata column is.pure is added to the query Seurat object, indicating which cells passed
the scGate filter. The active.ident is also set to this variable.
See Also
load_scGate_model get_scGateDB plot_tree
Examples
### Test using a small toy set
data(query.seurat)
# Define basic gating model for B cells
my_scGate_model <- gating_model(name = "Bcell", signature = c("MS4A1"))
query.seurat <- scGate(query.seurat, model = my_scGate_model, reduction="pca")
table(query.seurat$is.pure)
### Test with larger datasets
library(Seurat)
testing.datasets <- get_testing_data(version = 'hsa.latest')
seurat_object <- testing.datasets[["JerbyArnon"]]
# Download pre-defined models
models <- get_scGateDB()
seurat_object <- scGate(seurat_object, model=models$human$generic$PanBcell)
DimPlot(seurat_object)
seurat_object_filtered <- subset(seurat_object, subset=is.pure=="Pure")
### Run multiple models at once
models <- get_scGateDB()
model.list <- list("Bcell" = models$human$generic$Bcell,
"Tcell" = models$human$generic$Tcell)
seurat_object <- scGate(seurat_object, model=model.list)
DimPlot(seurat_object, group.by = "scGate_multi")
test_my_model Test your model
Description
Wrapper for fast model testing on 3 sampled datasets
Usage
test_my_model(
model,
testing.version = "hsa.latest",
custom.dataset = NULL,
target = NULL,
plot = TRUE
)
Arguments
model scGate model in data.frame format
testing.version
Character indicating the version of testing datasets to be used. By default
"hsa.latest" will be used. It will be ignored if a custom dataset is provided (in
Seurat format).
custom.dataset Seurat object to be used as a testing dataset. For testing purposes, metadata
seurat object must contain a column named ’cell_type’ to be used as a gold
standard. Also a set of positive targets must be provided in the target variable.
target Positive target cell types. If default testing version is used this variable must be a
character indicating one of the available target models (’immune’,’Lymphoid’,’Myeloid’,’Tcell’,’Bcell’,’C
’NK’,’MoMacDC’,’Plasma_cell’,’PanBcell’). If a custom dataset is provided
in Seurat format, this variable must be a vector of positive cell types in your
data. The last case also requires that such labels are named as in your cell_type
meta.data column.
plot Whether to return plots to device
Value
Returns performance metrics for the benchmarking datasets, and optionally plots of the predicted
cell type labels in reduced dimensionality space.
Examples
scGate.model.db <- get_scGateDB()
# Browse the list of models and select one:
model.panBcell <- scGate.model.db$human$generic$PanBcell
# Test the model with available testing datasets
panBcell.performance <- test_my_model(model.panBcell, target = "PanBcell")
model.Myeloid <- scGate.model.db$human$generic$Myeloid
myeloid.performance <- test_my_model(model.Myeloid, target = "Myeloid") |
framed_aio | rust | Rust | Crate framed_aio
===
A crate for framed async io.
This crate allows performing io operations in a framed manner, which means that instead of sending and receiving bytes from a stream of bytes, you can send and receive frames of bytes.
The reading of frames is implemented in a cancel safe way, which means that you can use it in one of the branches of `tokio::select!`.
The implementation is also tuned for high performance and low overhead.
Goals
---
* Provide cancel safety
* High performance
* Low overhead
Usage
---
To read from some type which implements `AsyncRead`
in a framed manner, you can wrap it in a `FramedReader` and call
`FramedReader::read_frame`.
To write to some type which implements `AsyncWrite`
in a framed manner, you can wrap it in a `FramedWriter` and call
`FramedWriter::write_frame` or `FramedWriter::write_frame_cancel_safe`
for writing a frame in a cancel safe way.
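A minimal sketch of this flow, assuming the reader and writer accept tokio's `AsyncRead`/`AsyncWrite` (as the references to `tokio::select!` and buffered writing in these docs suggest) and using an in-memory `tokio::io::duplex` pipe; it is an illustration, not an excerpt from the crate:
```
// Hedged sketch: framing over an in-memory tokio duplex pipe.
use framed_aio::{FramedReader, FramedWriter};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let (client, server) = tokio::io::duplex(1024);

    let mut writer = FramedWriter::new(client);
    let mut reader = FramedReader::new(server);

    writer.write_frame(b"hello").await?;
    writer.write_frame(b"world").await?;
    writer.flush().await?; // frames may stay buffered until flushed

    let frame = reader.read_frame().await?; // cancel safe
    assert_eq!(frame, b"hello");
    let frame = reader.read_frame().await?;
    assert_eq!(frame, b"world");
    Ok(())
}
```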
Typed Frames
---
If you wish to send typed frames, that is, frames that contain serialized objects, you need to opt in to the `typed` feature flag.
You can then use the [`FramedReader::read_frame_typed`] and
[`FramedWriter::write_frame_typed`] functions to read and write typed frames.
Structs
---
FramedReader: A reader which reads from a readable source in a framed manner. This reader can only read frames generated by a `FramedWriter`.
FramedWriter: A writer which writes to a writeable sink in a framed manner. Frames written by this writer can be read using a `FramedReader`.
Enums
---
ReadFrameError: An error which occurred while trying to read a frame from a `FramedReader`.
WriteFrameError: An error which occurred while trying to write a frame to a `FramedWriter`.
Struct framed_aio::FramedReader
===
```
pub struct FramedReader<R: AsyncRead> { /* private fields */ }
```
A reader which reads from a readable source in a framed manner. This reader can only read frames generated by a `FramedWriter`.
Implementations
---
### impl<R: AsyncRead> FramedReader<R>
#### pub fn new(source: R) -> Self
Creates a new framed reader which reads from the given readable source.
#### pub async fn read_frame(&mut self) -> Result<&[u8], ReadFrameError>
Reads a single frame from the readable source.
##### Cancel Safety
This function is cancel safe, and can be used as one of the branches of
`tokio::select` without causing any data loss.
Auto Trait Implementations
---
### impl<R> RefUnwindSafe for FramedReader<R>where R: RefUnwindSafe,
### impl<R> Send for FramedReader<R>where R: Send,
### impl<R> Sync for FramedReader<R>where R: Sync,
### impl<R> Unpin for FramedReader<R>
### impl<R> UnwindSafe for FramedReader<R>where R: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for Twhere T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for Twhere T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Struct framed_aio::FramedWriter
===
```
pub struct FramedWriter<W: AsyncWrite> { /* private fields */ }
```
A writer which writes to a writeable sink in a framed manner. Frames written by this writer can be read using a `FramedReader`.
Implementations
---
### impl<W: AsyncWrite> FramedWriter<W>
#### pub fn new(sink: W) -> Self
Creates a new framed writer which writes to the given writable sink.
#### pub async fn write_frame(&mut self, frame: &[u8]) -> Result<(), WriteFrameError>
Writes a single frame to the writeable sink.
The framed writer uses a `BufWriter` under the hood, which means that written frames may not immediately be sent, but may stay buffered until the buffer is full enough, at which point all written frames will be flushed.
If you need the frame to be flushed immediately, you can use
`flush`.
##### Cancel Safety
This function is **not** cancel safe, and so it shouldn’t be used as one of the branches of `tokio::select`, because it may cause data loss when cancelled.
If you want to write a frame in a cancel safe way, use
`write_frame_cancel_safe`.
#### pub async fn write_frame_cancel_safe(&mut self, frame: &[u8]) -> Result<(), WriteFrameError>
Writes a single frame to the writeable sink in a cancel safe manner.
The framed writer uses a `BufWriter` under the hood, which means that written frames may not immediately be sent, but may stay buffered until the buffer is full enough, at which point all written frames will be flushed.
If you need the frame to be flushed immediately, you can use
`flush`.
This is more expensive than `write_frame`,
but provides cancel safety.
##### Cancel Safety
This function is cancel safe, and can be used as one of the branches of
`tokio::select` without causing any data loss.
#### pub async fn flush(&mut self) -> Result<(), WriteFrameError>
Flushes the framed writer, ensuring that any buffered data reaches its destination.
Auto Trait Implementations
---
### impl<W> RefUnwindSafe for FramedWriter<W>where W: RefUnwindSafe,
### impl<W> Send for FramedWriter<W>where W: Send,
### impl<W> Sync for FramedWriter<W>where W: Sync,
### impl<W> Unpin for FramedWriter<W>
### impl<W> UnwindSafe for FramedWriter<W>where W: UnwindSafe,
Blanket Implementations
---
### impl<T> Any for Twhere T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for Twhere T: ?Sized,
const: unstable · source#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for Twhere T: ?Sized,
const: unstable · source#### fn borrow_mut(&mut self) -> &mutT
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
const: unstable · source#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for Twhere U: From<T>,
const: unstable · source#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.
### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Enum framed_aio::ReadFrameError
===
```
pub enum ReadFrameError {
IO(Error),
FailedToDecodeFrameLength(FeedVarintDecoderError),
}
```
An error which occurred while trying to read a frame from a
`FramedReader`.
Variants
---
### `IO(Error)`
### `FailedToDecodeFrameLength(FeedVarintDecoderError)`
Trait Implementations
---
### impl Debug for ReadFrameError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for ReadFrameError
#### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for ReadFrameError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string(). Read more
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`) Provides type based access to context intended for error reports. Read more
### impl From<Error> for ReadFrameError
#### fn from(source: Error) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for ReadFrameError
### impl Send for ReadFrameError
### impl Sync for ReadFrameError
### impl Unpin for ReadFrameError
### impl !UnwindSafe for ReadFrameError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<E> Provider for E where E: Error + ?Sized,
#### fn provide(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`provide_any`) Data providers should implement this method to provide *all* values they are able to provide by using `demand`. Read more
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Enum framed_aio::WriteFrameError
===
```
pub enum WriteFrameError {
IO(Error),
FrameTooLong {
length: usize,
max_allowed_frame_length: usize,
},
}
```
An error which occurred while trying to write a frame to a
`FramedWriter`.
Variants
---
### `IO(Error)`
### `FrameTooLong`
#### Fields
`length: usize`
`max_allowed_frame_length: usize`
Trait Implementations
---
### impl Debug for WriteFrameError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Display for WriteFrameError
#### fn fmt(&self, __formatter: &mut Formatter<'_>) -> Result
Formats the value using the given formatter. Read more
### impl Error for WriteFrameError
#### fn source(&self) -> Option<&(dyn Error + 'static)>
The lower-level source of this error, if any. Read more
#### fn description(&self) -> &str
👎Deprecated since 1.42.0: use the Display impl or to_string(). Read more
#### fn cause(&self) -> Option<&dyn Error>
👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting
#### fn provide(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`error_generic_member_access`) Provides type based access to context intended for error reports. Read more
### impl From<Error> for WriteFrameError
#### fn from(source: Error) -> Self
Converts to this type from the input type.
Auto Trait Implementations
---
### impl !RefUnwindSafe for WriteFrameError
### impl Send for WriteFrameError
### impl Sync for WriteFrameError
### impl Unpin for WriteFrameError
### impl !UnwindSafe for WriteFrameError
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`. Read more
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value. Read more
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<E> Provider for E where E: Error + ?Sized,
#### fn provide(&'a self, demand: &mut Demand<'a>)
🔬This is a nightly-only experimental API. (`provide_any`) Data providers should implement this method to provide *all* values they are able to provide by using `demand`. Read more
### impl<T> ToString for T where T: Display + ?Sized,
#### default fn to_string(&self) -> String
Converts the given value to a `String`. Read more
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
voxa | readthedoc | JavaScript | voxa 3.0.0 documentation
Welcome to Voxa’s documentation![¶](#welcome-to-voxa-s-documentation)
===
Summary[¶](#summary)
---
Voxa is an Alexa skill framework that provides a way to organize a skill into a state machine. Even the most complex voice user interface (VUI) can be represented through the state machine and it provides the flexibility needed to both be rigid when needed in specific states and flexible to jump around when allowing that also makes sense.
Why Voxa vs other frameworks[¶](#why-voxa-vs-other-frameworks)
---
Voxa provides a more robust framework for building Alexa skills. It provides a design pattern that wasn’t found in other frameworks. Critical to Voxa was providing a pluggable interface and supporting all of the latest ASK features.
Features[¶](#features)
---
* MVC Pattern
* State or Intent handling (State Machine)
* Easy integration with several Analytics providers
* Easy to modify response file (the view)
* Compatibility with all SSML features
* Works with companion app cards
* Supports i18n in the responses
* Clean code structure with a unit testing framework
* Easy error handling
* Account linking support
* Several Plugins
Installation[¶](#installation)
---
Voxa is distributed via `npm`
```
$ npm install voxa --save
```
Initial Configuration[¶](#initial-configuration)
---
Instantiating a Voxa Application requires a configuration specifying your [Views and Variables](index.html#views-and-variables).
```
const voxa = require('voxa');
const views = require('./views');
const variables = require('./variables');
const app = new voxa.VoxaApp({ variables, views });
```
Platforms[¶](#platforms)
---
Once you have instantiated an application, it is time to create a platform application. There are platform handlers for Alexa, Dialogflow (Google Assistant and Facebook Messenger) and Botframework (Cortana);
```
const alexaSkill = new voxa.AlexaPlatform(app);
const googleAction = new voxa.GoogleAssistantPlatform(app);
const facebookBot = new voxa.FacebookPlatform(app);
// botframework requires some extra configuration like the Azure Table Storage to use and the Luis.ai endpoint
const storageName = config.cortana.storageName;
const tableName = config.cortana.tableName;
const storageKey = config.cortana.storageKey; // Obtain from Azure Portal
const azureTableClient = new azure.AzureTableClient(tableName, storageName, storageKey);
const tableStorage = new azure.AzureBotStorage({ gzipData: false }, azureTableClient);
const botframeworkSkill = new voxa.BotFrameworkPlatform(app, {
storage: tableStorage,
recognizerURI: config.cortana.recognizerURI,
applicationId: config.cortana.applicationId,
applicationPassword: config.cortana.applicationPassword,
defaultLocale: 'en',
});
```
Using the development server[¶](#using-the-development-server)
---
The framework provides a simple builtin server that's configured to serve all POST requests to your skill; this works great when developing, especially when paired with [ngrok](https://ngrok.com)
```
// this will start an http server listening on port 3000
alexaSkill.startServer(3000);
```
Responding to an intent event[¶](#responding-to-an-intent-event)
---
```
app.onIntent('HelpIntent', (voxaEvent) => {
return { tell: 'HelpIntent.HelpAboutSkill' };
});
app.onIntent('ExitIntent', (voxaEvent) => {
return { tell: 'ExitIntent.Farewell' };
});
```
Responding to lambda requests[¶](#responding-to-lambda-requests)
---
Once you have your skill configured, creating a lambda handler is as simple as using the [`alexaSkill.lambda`](index.html#VoxaPlatform.lambda) method
```
exports.handler = alexaSkill.lambda();
```
New Alexa developer[¶](#new-alexa-developer)
---
If Alexa skill development is new to you, we have some suggestions to help you dive into this world.
### Getting Started with the Alexa Skills[¶](#getting-started-with-the-alexa-skills)
Alexa provides a set of built-in capabilities, referred to as skills. For example, Alexa’s abilities include playing music from multiple providers, answering questions, providing weather forecasts, and querying Wikipedia.
The Alexa Skills Kit lets you teach Alexa new skills. Customers can access these new abilities by asking Alexa questions or making requests. You can build skills that provide users with many different types of abilities. For example, a skill might do any one of the following:
* Look up answers to specific questions (“Alexa, ask tide pooler for the high tide today in Seattle.”)
* Challenge the user with puzzles or games (“Alexa, play Jeopardy.”)
* Control lights and other devices in the home (“Alexa, turn on the living room lights.”)
* Provide audio or text content for a customer’s flash briefing (“Alexa, give me my flash briefing”)
You can see the different types of skills [here](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/understanding-the-different-types-of-skills) for a more in-depth reference.
#### How do users interact with Alexa?[¶](#how-users-interact-with-alexa)
Through an interaction model.
End users interact with all of Alexa’s abilities in the same way – by waking the device with the wake word (or a button for a device such as the Amazon Tap) and asking a question or making a request.
For example, users interact with the built-in Weather service like this:
User: Alexa, what’s the weather?
Alexa: Right now in Seattle, there are cloudy skies…
In the context of Alexa, an interaction model is somewhat analogous to a graphical user interface in a traditional app. Instead of clicking buttons and selecting options from dialog boxes, users make their requests and respond to questions by voice.
[Here you can see how the interaction model works](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/understanding-how-users-interact-with-skills)
### Amazon Developer Service Account[¶](#amazon-developer-service-account)
Amazon Web Services provides a suite of solutions that enable developers and their organizations to leverage Amazon.com’s robust technology infrastructure and content via simple API calls.
The first thing you need to do is create your own [Amazon Developer Account](https://developer.amazon.com).
### Registering an Alexa skill[¶](#registering-an-alexa-skill)
Registering a new skill or ability on the Amazon Developer Portal creates a configuration containing the information that the Alexa service needs to do the following:
* Route requests to the AWS Lambda function or web service that implements the skill, or for development purpose you can run it locally using [ngrok](https://ngrok.com).
* Display information about the skill in the Amazon Alexa App. The app shows all published skills, as well as all of your own skills currently under development.
You must register a skill before you can test it with the Service Simulator in the developer portal or an Alexa-enabled device.
Follow [these instructions](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/registering-and-managing-alexa-skills-in-the-developer-portal#registering-an-alexa-skill) to register and managing your Alexa skill.
Voxa architecture pattern MVC[¶](#voxa-architecture-pattern-mvc)
---
Voxa Application[¶](#voxa-application)
---
*class* `VoxaApp`(*config*)[¶](#VoxaApp)
| Arguments: | * **config** – Configuration for your skill, it should include [Views and Variables](index.html#views-and-variables) and optionally a [model](index.html#models) and a list of appIds.
|
If appIds is present then the framework will check every alexa event and enforce the application id to match one of the specified application ids.
```
const app = new VoxaApp({ Model, variables, views, appIds });
```
`VoxaApp.``execute`(*event*, *context*)[¶](#VoxaApp.execute)
The main entry point for the Skill execution
| Arguments: | * **event** – The event sent by the platform.
* **context** – The context of the lambda function
|
| Returns: | Promise: A response resolving to a javascript object to be sent as a result to Alexa. |
```
app.execute(event, context)
.then(result => callback(null, result))
.catch(callback);
```
`VoxaApp.``onState`(*stateName*, *handler*)[¶](#VoxaApp.onState)
Maps a handler to a state
| Arguments: | * **stateName** (*string*) – The name of the state
* **handler** (*function/object*) – The controller to handle the state
* **intent** (*string/array*) – The intents that this state will handle
|
| Returns: | An object or a promise that resolves to an object that specifies a transition to another state and/or a view to render |
```
app.onState('entry', {
LaunchIntent: 'launch',
'AMAZON.HelpIntent': 'help',
});
app.onState('launch', (voxaEvent) => {
return { tell: 'LaunchIntent.OpenResponse', to: 'die' };
});
```
Also you can use a shorthand version to define a controller. This is very useful when having a controller that only returns a [transition](index.html#transition)
```
voxaApp.onState('launch',
{
flow: 'yield',
reply: 'LaunchIntent.OpenResponse',
to: 'nextState'
}
);
```
You can also set the intent that the controller will handle. If set, any other triggered intent will not enter into the controller.
```
voxaApp.onState("agreed?", {
to: "PurchaseAccepted"
}, "YesIntent");
voxaApp.onState("agreed?", {
to: "TransactionCancelled"
}, ["NoIntent", "CancelIntent"]);
voxaApp.onState("agreed?", {
to: "agreed?",
reply: "Help.ArticleExplanation",
flow: "yield"
}, "HelpIntent");
voxaApp.onState("agreed?", {
to: "agreed?",
reply: "UnknownInput",
flow: "yield"
});
```
**The order on how you declare your controllers matter in Voxa**
You can set multiple [controllers](index.html#controllers) for a single state, so how do you know which code will be executed? The first one that Voxa finds. Take this example:
```
voxaApp.onState('ProcessUserRequest', (voxaEvent) => {
// Some code
return { tell: 'ThankYouResponse', to: 'die' };
});
voxaApp.onState('ProcessUserRequest', (voxaEvent) => {
// Some other code
return { tell: 'GoodbyeResponse', to: 'die' };
});
```
If the state machine goes to the ProcessUserRequest, the code running will always be the first one, so the user will always hear the ThankYouResponse.
The only scenario where this is overwritten is when you have more than one handler for the same state, and one of them has one or more intents defined. If the user triggers the intent that’s inside the list of one-controller intents, Voxa will give it priority. For example, take this code:
```
voxaApp.onState("agreed?", {
to: "PurchaseAccepted"
}, "YesIntent");
voxaApp.onState("agreed?", {
to: "agreed?",
reply: "UnknownInput",
flow: "yield"
});
voxaApp.onState("agreed?", {
to: "TransactionCancelled"
}, ["NoIntent", "CancelIntent"]);
```
If the user triggers the NoIntent, and the state machine goes to the agreed? state, the user will listen to the TransactionCancelled response, it doesn’t matter if the controller is placed above or below a controller without defined intents, the priority will go to the controller with the defined intent.
`VoxaApp.``onIntent`(*intentName*, *handler*)[¶](#VoxaApp.onIntent)
A shortcut for defining state controllers that map directly to an intent
| Arguments: | * **intentName** (*string*) – The name of the intent
* **handler** (*function/object*) – The controller to handle the state
|
| Returns: | An object or a promise that resolves to an object that specifies a transition to another state and/or a view to render |
```
app.onIntent('HelpIntent', (voxaEvent) => {
return { tell: 'HelpIntent.HelpAboutSkill' };
});
```
`VoxaApp.``onIntentRequest`(*callback*[, *atLast*])[¶](#VoxaApp.onIntentRequest)
This is executed for all `IntentRequest` events; the default behavior is to execute the State Machine machinery, so you generally don't need to override this.
| Arguments: | * **callback** (*function*) –
* **last** (*bool*) –
|
| Returns: | Promise |
`VoxaApp.``onLaunchRequest`(*callback*[, *atLast*])[¶](#VoxaApp.onLaunchRequest)
Adds a callback to be executed when processing a `LaunchRequest`, the default behavior is to fake the [alexa event](index.html#alexa-event) as an `IntentRequest` with a `LaunchIntent` and just defer to the `onIntentRequest` handlers. You generally don’t need to override this.
`VoxaApp.``onBeforeStateChanged`(*callback*[, *atLast*])[¶](#VoxaApp.onBeforeStateChanged)
This is executed before entering every state, it can be used to track state changes or make changes to the [alexa event](index.html#alexa-event) object
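For example, a minimal sketch that stamps the model before each state runs; the `lastStateChangeAt` field is an illustrative name, not part of the framework:
```
app.onBeforeStateChanged((voxaEvent) => {
  // stash a timestamp on the model so later controllers can measure how long the flow takes
  voxaEvent.model.lastStateChangeAt = Date.now();
});
```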
`VoxaApp.``onBeforeReplySent`(*callback*[, *atLast*])[¶](#VoxaApp.onBeforeReplySent)
Adds a callback to be executed just before sending the reply, internally this is used to add the serialized model and next state to the session.
It can be used to alter the reply, or for example to track the final response sent to a user in analytics.
```
app.onBeforeReplySent((voxaEvent, reply) => {
const rendered = reply.write();
analytics.track(voxaEvent, rendered)
});
```
`VoxaApp.``onAfterStateChanged`(*callback*[, *atLast*])[¶](#VoxaApp.onAfterStateChanged)
Adds callbacks to be executed on the result of a state transition; these are called after every transition and internally are used to render the [transition](index.html#transition) `reply` using the [views and variables](index.html#views-and-variables)
The callbacks get `voxaEvent`, `reply` and `transition` params, it should return the transition object
```
app.onAfterStateChanged((voxaEvent, reply, transition) => {
if (transition.reply === 'LaunchIntent.PlayTodayLesson') {
transition.reply = _.sample(['LaunchIntent.PlayTodayLesson1', 'LaunchIntent.PlayTodayLesson2']);
}
return transition;
});
```
`VoxaApp.``onUnhandledState`(*callback*[, *atLast*])[¶](#VoxaApp.onUnhandledState)
Adds a callback to be executed when a state transition fails to generate a result; this usually happens when redirecting to a missing state or on an entry call for a non-configured intent. The handlers get an [alexa event](index.html#alexa-event) parameter and should return a [transition](index.html#transition), the same as a state controller would.
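A minimal sketch of such a fallback handler; the `Error.UnhandledState` view and the `entry` target are illustrative names:
```
app.onUnhandledState((voxaEvent) => {
  // recover gracefully: render a generic fallback view and keep the session open
  return {
    reply: 'Error.UnhandledState',
    to: 'entry',
    flow: 'yield',
  };
});
```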
`VoxaApp.``onSessionStarted`(*callback*[, *atLast*])[¶](#VoxaApp.onSessionStarted)
Adds a callback to the `onSessionStarted` event, this executes for all events where `voxaEvent.session.new === true`
This can be useful to track analytics
```
app.onSessionStarted((voxaEvent, reply) => {
analytics.trackSessionStarted(voxaEvent);
});
```
`VoxaApp.``onRequestStarted`(*callback*[, *atLast*])[¶](#VoxaApp.onRequestStarted)
Adds a callback to be executed whenever there’s a `LaunchRequest`, `IntentRequest` or a `SessionEndedRequest`,
this can be used to initialize your analytics or get your account linking user data. Internally it’s used to initialize the model based on the event session
```
app.onRequestStarted((voxaEvent, reply) => {
let data = ... // deserialized from the platform's session
voxaEvent.model = this.config.Model.deserialize(data, voxaEvent);
});
```
`VoxaApp.``onSessionEnded`(*callback*[, *atLast*])[¶](#VoxaApp.onSessionEnded)
Adds a callback to the `onSessionEnded` event, this is called for every `SessionEndedRequest` or when the skill returns a transition to a state where `isTerminal === true`, normally this is a transition to the `die` state. You would normally use this to track analytics
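For instance, a sketch mirroring the `onSessionStarted` example above, assuming an `analytics` helper of your own:
```
app.onSessionEnded((voxaEvent) => {
  // `analytics` is a hypothetical tracking helper from your project
  analytics.trackSessionEnded(voxaEvent);
});
```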
`VoxaApp.onSystem.``ExceptionEncountered`(*callback*[, *atLast*])[¶](#VoxaApp.onSystem.ExceptionEncountered)
This handles [System.ExceptionEncountered](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/custom-audioplayer-interface-reference#system-exceptionencountered) event that are sent to your skill when a response to an `AudioPlayer` event causes an error
```
return Promise.reduce(errorHandlers, (result, errorHandler) => {
if (result) {
return result;
}
return Promise.resolve(errorHandler(voxaEvent, error));
}, null);
```
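Assuming the same bracket-style registration shown for the AudioPlayer and skill-event handlers below, a sketch of a handler could simply log the reported error and acknowledge it:
```
app['onSystem.ExceptionEncountered']((voxaEvent, reply) => {
  // the raw request is assumed to carry the error Alexa reported for the failed AudioPlayer response
  voxaEvent.log.error(voxaEvent.rawEvent.request.error);
  return reply;
});
```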
### Error handlers[¶](#error-handlers)
You can register many error handlers to be used for the different kinds of errors the application could generate. They all follow the same logic: if the more specific error type is not handled, the error is deferred to the more general error handler, which ultimately just returns a default error reply.
They’re executed sequentially and will stop when the first handler returns a reply.
`VoxaApp.``onError`(*callback*[, *atLast*])[¶](#VoxaApp.onError)
This is the more general handler and will catch all unhandled errors in the framework, it gets `(voxaEvent, error)` parameters as arguments
```
app.onError((voxaEvent, error) => {
return new Reply(voxaEvent, { tell: 'An unrecoverable error occurred.' })
.write();
});
```
### Playback Controller handlers[¶](#playback-controller-handlers)
Handle events from the [AudioPlayer interface](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/custom-audioplayer-interface-reference#requests)
`audioPlayerCallback`(*voxaEvent*, *reply*)[¶](#audioPlayerCallback)
All audio player middleware callbacks get a [alexa event](index.html#alexa-event) and a [reply](index.html#alexa-reply) object
| Arguments: | * **voxaEvent** ([*AlexaEvent*](index.html#AlexaEvent)) – The [alexa event](index.html#alexa-event) sent by Alexa
* **reply** (*object*) – A reply to be sent as a response
|
| Returns object write: |
| | Your alexa event handler should return an appropriate response according to the event type, this generally means appending to the [reply](index.html#alexa-reply) object |
In the following example the alexa event handler returns a `REPLACE_ENQUEUED` directive to a [`PlaybackNearlyFinished()`](#VoxaApp.onAudioPlayer.PlaybackNearlyFinished) event.
```
app['onAudioPlayer.PlaybackNearlyFinished']((voxaEvent, reply) => {
const playAudio = new PlayAudio({
behavior: "REPLACE_ALL",
offsetInMilliseconds: 0,
token: "",
url: 'https://www.dl-sounds.com/wp-content/uploads/edd/2016/09/Classical-Bed3-preview.mp3'
});
playAudio.writeToReply(reply);
return reply;
});
```
`VoxaApp.onAudioPlayer.``PlaybackStarted`(*callback*[, *atLast*])[¶](#VoxaApp.onAudioPlayer.PlaybackStarted)
`VoxaApp.onAudioPlayer.``PlaybackFinished`(*callback*[, *atLast*])[¶](#VoxaApp.onAudioPlayer.PlaybackFinished)
`VoxaApp.onAudioPlayer.``PlaybackStopped`(*callback*[, *atLast*])[¶](#VoxaApp.onAudioPlayer.PlaybackStopped)
`VoxaApp.onAudioPlayer.``PlaybackFailed`(*callback*[, *atLast*])[¶](#VoxaApp.onAudioPlayer.PlaybackFailed)
`VoxaApp.onAudioPlayer.``PlaybackNearlyFinished`(*callback*[, *atLast*])[¶](#VoxaApp.onAudioPlayer.PlaybackNearlyFinished)
`VoxaApp.onPlaybackController.``NextCommandIssued`(*callback*[, *atLast*])[¶](#VoxaApp.onPlaybackController.NextCommandIssued)
`VoxaApp.onPlaybackController.``PauseCommandIssued`(*callback*[, *atLast*])[¶](#VoxaApp.onPlaybackController.PauseCommandIssued)
`VoxaApp.onPlaybackController.``PlayCommandIssued`(*callback*[, *atLast*])[¶](#VoxaApp.onPlaybackController.PlayCommandIssued)
`VoxaApp.onPlaybackController.``PreviousCommandIssued`(*callback*[, *atLast*])[¶](#VoxaApp.onPlaybackController.PreviousCommandIssued)
### Alexa Skill Event handlers[¶](#alexa-skill-event-handlers)
Handle request for the [Alexa Skill Events](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/skill-events-in-alexa-skills#skill-events-in-json-format)
`alexaSkillEventCallback`(*alexaEvent*)[¶](#alexaSkillEventCallback)
All the alexa skill event callbacks get a [alexa event](index.html#alexa-event) and a [reply](index.html#alexa-reply) object
| Arguments: | * **alexaEvent** ([*AlexaEvent*](index.html#AlexaEvent)) – The [alexa event](index.html#alexa-event) sent by Alexa
* **reply** (*object*) – A reply to be sent as the response
|
| Returns object reply: |
| | Alexa only needs an acknowledgement that you received and processed the event so it doesn’t need to resend the event. Just returning the [reply](index.html#alexa-reply) object is enough |
This is an example on how your skill can process a [`SkillEnabled()`](#VoxaApp.onAlexaSkillEvent.SkillEnabled) event.
```
app['onAlexaSkillEvent.SkillEnabled']((alexaEvent, reply) => {
const userId = alexaEvent.user.userId;
console.log(`skill was enabled for user: ${userId}`);
return reply;
});
```
`VoxaApp.onAlexaSkillEvent.``SkillAccountLinked`(*callback*[, *atLast*])[¶](#VoxaApp.onAlexaSkillEvent.SkillAccountLinked)
`VoxaApp.onAlexaSkillEvent.``SkillEnabled`(*callback*[, *atLast*])[¶](#VoxaApp.onAlexaSkillEvent.SkillEnabled)
`VoxaApp.onAlexaSkillEvent.``SkillDisabled`(*callback*[, *atLast*])[¶](#VoxaApp.onAlexaSkillEvent.SkillDisabled)
`VoxaApp.onAlexaSkillEvent.``SkillPermissionAccepted`(*callback*[, *atLast*])[¶](#VoxaApp.onAlexaSkillEvent.SkillPermissionAccepted)
`VoxaApp.onAlexaSkillEvent.``SkillPermissionChanged`(*callback*[, *atLast*])[¶](#VoxaApp.onAlexaSkillEvent.SkillPermissionChanged)
### Alexa List Event handlers[¶](#alexa-list-event-handlers)
Handle request for the [Alexa List Events](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/list-events-in-alexa-skills#list-events-json)
`alexaListEventCallback`(*alexaEvent*)[¶](#alexaListEventCallback)
All the alexa list event callbacks get a [alexa event](index.html#alexa-event) and a [reply](index.html#alexa-reply) object
| Arguments: | * **alexaEvent** ([*AlexaEvent*](index.html#AlexaEvent)) – The [alexa event](index.html#alexa-event) sent by Alexa
* **reply** (*object*) – A reply to be sent as the response
|
| Returns object reply: |
| | Alexa only needs an acknowledgement that you received and processed the event so it doesn’t need to resend the event. Just returning the [reply](index.html#alexa-reply) object is enough |
This is an example on how your skill can process a [`ItemsCreated()`](#VoxaApp.onAlexaHouseholdListEvent.ItemsCreated) event.
```
app['onAlexaHouseholdListEvent.ItemsCreated']((alexaEvent, reply) => {
const listId = alexaEvent.request.body.listId;
const userId = alexaEvent.user.userId;
console.log(`Items created for list: ${listId} for user ${userId}`);
return reply;
});
```
`VoxaApp.onAlexaHouseholdListEvent.``ItemsCreated`(*callback*[, *atLast*])[¶](#VoxaApp.onAlexaHouseholdListEvent.ItemsCreated)
`VoxaApp.onAlexaHouseholdListEvent.``ItemsUpdated`(*callback*[, *atLast*])[¶](#VoxaApp.onAlexaHouseholdListEvent.ItemsUpdated)
`VoxaApp.onAlexaHouseholdListEvent.``ItemsDeleted`(*callback*[, *atLast*])[¶](#VoxaApp.onAlexaHouseholdListEvent.ItemsDeleted)
Voxa Platforms[¶](#voxa-platforms)
---
Voxa Platforms wrap your [`VoxaApp`](index.html#VoxaApp) and allows you to define handlers for the different supported voice platforms.
*class* `VoxaPlatform`(*voxaApp*, *config*)[¶](#VoxaPlatform)
| Arguments: | * **voxaApp** ([*VoxaApp*](index.html#VoxaApp)) – The app
* **config** – The config
|
`VoxaPlatform.``startServer`([*port*])[¶](#VoxaPlatform.startServer)
| Returns: | A promise that resolves to a running `http.Server` on the specified port number, if no port number is specified it will try to get a port number from the `PORT` environment variable or default to port 3000 |
This method can then be used in combination with a proxy server like [ngrok](https://ngrok.com/) or [Bespoken tools proxy](http://docs.bespoken.io/en/latest/commands/proxy/) to enable local development of your voice application
`VoxaPlatform.``lambda`()[¶](#VoxaPlatform.lambda)
| Returns: | A lambda handler that will call the [`app.execute`](index.html#VoxaApp.execute) method |
```
exports.handler = alexaSkill.lambda();
```
`VoxaPlatform.``lambdaHTTP`()[¶](#VoxaPlatform.lambdaHTTP)
| Returns: | A lambda handler to use as an AWS API Gateway ProxyEvent handler that will call the [`app.execute`](index.html#VoxaApp.execute) method |
```
exports.handler = dialogflowAction.lambdaHTTP();
```
`VoxaPlatform.``azureFunction`()[¶](#VoxaPlatform.azureFunction)
| Returns: | An azure function handler |
```
module.exports = cortanaSkill.azureFunction();
```
### Alexa[¶](#alexa)
The Alexa Platform allows you to use Voxa with Alexa
```
const { AlexaPlatform } = require('voxa');
const { voxaApp } = require('./app');
const alexaSkill = new AlexaPlatform(voxaApp);
exports.handler = alexaSkill.lambda();
```
### Dialogflow[¶](#dialogflow)
The GoogleAssistant and Facebook Platforms allow you to use Voxa with these two types of bots
```
const { GoogleAssistantPlatform, FacebookPlatform } = require('voxa');
const { voxaApp } = require('./app');
const googleAction = new GoogleAssistantPlatform(voxaApp);
exports.handler = googleAction.lambdaHTTP();
const facebookBot = new FacebookPlatform(voxaApp);
exports.handler = facebookBot.lambdaHTTP();
```
### Botframework[¶](#botframework)
The BotFramework Platform allows you to use Voxa with Microsoft Botframework
```
const { BotFrameworkPlatform } = require('voxa');
const { AzureBotStorage, AzureTableClient } = require('botbuilder-azure');
const { voxaApp } = require('./app');
const config = require('./config');
const tableName = config.tableName;
const storageKey = config.storageKey; // Obtain from Azure Portal
const storageName = config.storageName;
const azureTableClient = new AzureTableClient(tableName, storageName, storageKey);
const tableStorage = new AzureBotStorage({ gzipData: false }, azureTableClient);
const botframeworkSkill = new BotFrameworkPlatform(voxaApp, {
storage: tableStorage,
recognizerURI: process.env.LuisRecognizerURI,
applicationId: process.env.MicrosoftAppId,
applicationPassword: process.env.MicrosoftAppPassword,
defaultLocale: 'en',
});
module.exports = botframeworkSkill.azureFunction();
```
Models[¶](#models)
---
Models are the data structure that holds the current state of your application. The framework doesn't make many assumptions about it and only requires it to have a `deserialize` method that initializes it from an object of attributes, and a `serialize` method that returns a `JSON.stringify`-able structure to store in the session attributes.
```
/*
* Copyright (c) 2018 Rain Agency <<EMAIL>>
* Author: Rain Agency <<EMAIL>>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy of
* this software and associated documentation files (the "Software"), to deal in
* the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
* FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
* COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
* IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
import * as _ from "lodash";
import { IBag, IVoxaEvent } from "./VoxaEvent";
export class Model {
[key: string]: any;
public static deserialize(
data: IBag,
voxaEvent: IVoxaEvent,
): Promise<Model> | Model {
return new this(data);
}
public state?: string;
constructor(data: any = {}) {
_.assign(this, data);
}
public async serialize(): Promise<any> {
return this;
}
}
export interface IModel {
new (data?: any): Model;
deserialize(data: IBag, event: IVoxaEvent): Model | Promise<Model>;
serialize(): Promise<any>;
}
```
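As a sketch of how a project typically builds on this class (assuming the `Model` class shown above is exported from the `voxa` package; the counter helper is illustrative only):
```
const { Model } = require('voxa');

class UserModel extends Model {
  // illustrative helper built on top of the serialized session attributes
  incrementLaunches() {
    this.launchCount = (this.launchCount || 0) + 1;
    return this.launchCount;
  }
}

// pass it in the app config so it is deserialized on every request:
// const app = new VoxaApp({ Model: UserModel, views, variables });
```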
Views and Variables[¶](#views-and-variables)
---
### Views[¶](#views)
Views are the Voxa way of handling replies to the user, they’re templates of responses using a simple javascript DSL. They can contain ssml and include cards.
There are 5 responses in the following snippet: `LaunchIntent.OpenResponse`, `ExitIntent.Farewell`, `HelpIntent.HelpAboutSkill`, `Count.Say` and `Count.Tell`
Also, there's a special type of view which can contain an array of options; when Voxa finds one of those, like `LaunchIntent.OpenResponse`, it will select a random sample and use it as the response.
### I18N[¶](#i18n)
Internationalization support is done using the [i18next](http://i18next.com/) library, the same one the Amazon Alexa Node SDK uses.
The framework takes care of selecting the correct locale on every voxa event by looking at the `voxaEvent.request.locale` property.
```
const views = {
en: {
translation: {
LaunchIntent: {
OpenResponse: [
'Hello! <break time="3s"/> Good {time}. Is there anything i can do to help you today?',
'Hi there! <break time="3s"/> Good {time}. How may i be of service?',
'Good {time}, Welcome!. How can i help you?',
]
},
ExitIntent: {
Farewell: 'Ok. For more info visit {site} site.',
},
HelpIntent: {
HelpAboutSkill: 'For more help visit example dot com'
},
Count: {
Say: '{count}',
Tell: '{count}',
},
}
}
};
```
### Variables[¶](#variables)
Variables are the rendering engine's way of adding logic into your views. They're designed to be very simple, since most of your logic should be in your [model](index.html#models) or [controllers](index.html#controllers).
A variable signature is:
`variable`(*model*, *voxaEvent*)[¶](#variable)
| Arguments: | * **voxaEvent** – The current [voxa event](index.html#voxa-event).
|
| Returns: | The value to be rendered or a promise resolving to a value to be rendered in the view. |
```
const variables = {
site: function site(voxaEvent) {
return Promise.resolve('example.com');
},
count: function count(voxaEvent) {
return voxaEvent.model.count;
},
locale: function locale(voxaEvent) {
return voxaEvent.locale;
}
};
```
Controllers[¶](#controllers)
---
Controllers in your application control the logic of your skill: they respond to voxa events, call external resources, manipulate the input and give proper responses using your [Model](index.html#models), [Views and Variables](index.html#views-and-variables).
States come in one of two ways, they can be an object with a transition:
```
app.onState('HelpIntent', {
tell: "Help"
});
```
Or they can be a function that gets a [voxaEvent](index.html#voxa-event) object:
```
app.onState('launch', (voxaEvent) => {
return { tell: 'LaunchIntent.OpenResponse' };
});
```
Your state should respond with a [transition](index.html#transition). The transition is a plain object that can take `directives`, `to` and `flow` keys.
`onState` also takes a third parameter which can be used to limit which intents a controller can respond to, for example
```
app.onState('shouldSendEmail?', {
sayp: "All right! An email has been sent to your inbox",
flowp: "terminate"
}, "YesIntent");
app.onState('shouldSendEmail?', {
sayp: "No problem, is there anything else i can help you with?",
flowp: "yield"
}, "NoIntent");
```
### The `onIntent` helper[¶](#the-onintent-helper)
For the simple pattern of having a controller respond to a specific intent, the framework provides the `onIntent` helper
```
app.onIntent('LaunchIntent', (voxaEvent) => {
return { tell: 'LaunchIntent.OpenResponse' };
});
```
If you receive a Display.ElementSelected type request, you can use the same approach as for intents and states. Voxa receives this type of request and turns it into a `DisplayElementSelected` intent
```
app.onIntent('DisplayElementSelected', (voxaEvent) => {
return { tell: 'DisplayElementSelected.OpenResponse' };
});
```
Keep in mind that controllers created with `onIntent` won’t accept transitions from other states with a different intent
Transition[¶](#transition)
---
A transition is the result of controller execution, it’s a simple object with keys that control the flow of execution in your skill.
### `to`[¶](#to)
The `to` key should be the name of a state in your state machine, when present it indicates to the framework that it should move to a new state. If absent it’s assumed that the framework should move to the `die` state.
```
return { to: 'stateName' };
```
### `directives`[¶](#directives)
Directives is an array of directive objects that implement the `IDirective` interface, they can make modifications to the reply object directly
```
const { PlayAudio } = require('voxa').alexa;
return {
directives: [new PlayAudio(url, token)],
};
```
### `flow`[¶](#flow)
The `flow` key can take one of three values:
`continue`:
This is the default value if the flow key is not present, it merely continues the state machine execution with an internal transition, it keeps building the response until a controller returns a `yield` or a `terminate` flow.
`yield`:
This stops the state machine and returns the current response to the user without terminating the session.
`terminate`:
This stops the state machine and returns the current response to the user, it closes the session.
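A sketch of the three values in practice; the view names and state names here are placeholders:
```
// `continue` (the default): keep building the response and transition internally
app.onState('collectName', { say: 'Question.AskName', to: 'waitForName' });
// `yield`: send the response built so far and wait for the user's answer
app.onState('waitForName', { flow: 'yield', reply: 'Question.AskNameReprompt', to: 'waitForName' });
// `terminate`: send the response and close the session
app.onState('goodbye', { flow: 'terminate', reply: 'ExitIntent.Farewell' });
```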
### `say`[¶](#say)
Renders a view and adds it as SSML to the response
### `sayp`[¶](#sayp)
Adds the passed value as SSML to the response
### `text`[¶](#text)
Renders a view and adds it as plain text to the response
### `textp`[¶](#textp)
Adds the passed value as plain text to the response
### `reprompt`[¶](#reprompt)
Used to render a view and add the result to the response as a reprompt
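For instance, a single transition can combine several of these keys; every view name below is a placeholder:
```
app.onState('askColor', {
  say: 'Question.AskColor',            // rendered view, added as SSML
  sayp: 'Take your time.',             // literal value, added as SSML
  text: 'Question.AskColorText',       // rendered view, added as plain text
  reprompt: 'Question.AskColorReprompt',
  flow: 'yield',
  to: 'waitForColor',
});
```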
### `reply`[¶](#reply)
```
return { reply: 'LaunchIntent.OpenResponse' };
const reply = new Reply(voxaEvent, { tell: 'Hi there!' });
return { reply };
```
The `voxaEvent` Object[¶](#the-voxaevent-object)
---
*class* `VoxaEvent`(*event*, *context*)[¶](#VoxaEvent)
The `voxaEvent` object contains all the information from the Voxa event, it’s an object kept for the entire lifecycle of the state machine transitions and as such is a perfect place for middleware to put information that should be available on every request.
`VoxaEvent.``rawEvent`[¶](#VoxaEvent.rawEvent)
A plain javascript copy of the request object as received from the platform
`VoxaEvent.``executionContext`[¶](#VoxaEvent.executionContext)
On AWS Lambda this object contains the context
`VoxaEvent.``t`[¶](#VoxaEvent.t)
The current translation function from i18next, initialized to the language of the current request
`VoxaEvent.``renderer`[¶](#VoxaEvent.renderer)
The renderer object used in the current request
`VoxaEvent.``platform`[¶](#VoxaEvent.platform)
The currently running [Voxa Platform](index.html#voxa-platforms)
`VoxaEvent.``model`[¶](#VoxaEvent.model)
The default middleware instantiates a `Model` and makes it available through `voxaEvent.model`
`VoxaEvent.intent.``params`[¶](#VoxaEvent.intent.params)
In Alexa the voxaEvent object makes `intent.slots` available through `intent.params` after applying a simple transformation, so
```
{ "slots": [{ "name": "Dish", "value": "Fried Chicken" }] }
```
becomes:
```
{ "Dish": "Fried Chicken" }
```
on other platforms it does its best to make each platform's intent params also available on `intent.params`
`VoxaEvent.``user`[¶](#VoxaEvent.user)
An object that contains the userId and accessToken if available
```
{
"userId": "The platform specific userId",
"id": "same as userId",
"accessToken": "available if user has done account linking"
}
```
`VoxaEvent.``model`
An instance of the [Voxa App Model](index.html#models).
`VoxaEvent.``log`[¶](#VoxaEvent.log)
An instance of [lambda-log](https://www.npmjs.com/package/lambda-log)
`VoxaEvent.``supportedInterfaces`()[¶](#VoxaEvent.supportedInterfaces)
Array of supported interfaces
| Returns Array: | A string array of the platform’s supported interfaces |
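For example, you might branch on the result before returning a display directive; the `'Alexa.Presentation.APL'` interface name and the view names below are assumptions for illustration:
```
app.onState('showResults', (voxaEvent) => {
  if (voxaEvent.supportedInterfaces().includes('Alexa.Presentation.APL')) {
    // device has a screen that supports APL, so include a visual template
    return { alexaAPLTemplate: 'MyAPLTemplate', say: 'Results.WithScreen', to: 'die' };
  }
  // speech-only fallback
  return { tell: 'Results.SpeechOnly' };
});
```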
`VoxaEvent.``getUserInformation`()[¶](#VoxaEvent.getUserInformation)
Object with user personal information from the platform being used.
```
{
// Google specific fields
"sub": 1234567890, // The unique ID of the user's Google Account
"iss": "https://accounts.google.com", // The token's issuer
"aud": "123-abc.apps.googleusercontent.com", // Client ID assigned to your Actions project
"iat": 233366400, // Unix timestamp of the token's creation time
"exp": 233370000, // Unix timestamp of the token's expiration time
"emailVerified": true,
"givenName": "John",
"familyName": "Doe",
"locale": "en_US",
// Alexa specific fields
"zipCode": "98101",
"userId": "amzn1.account.K2LI23KL2LK2",
// Platforms common fields
"email": "<EMAIL>",
"name": "<NAME>"
}
```
| Returns object: | An object with the user's information |
`VoxaEvent.``getUserInformationWithGoogle`()[¶](#VoxaEvent.getUserInformationWithGoogle)
Object with user personal information from Google. Go [here](index.html#google-sign-in) for more information.
```
{
"sub": 1234567890, // The unique ID of the user's Google Account
"iss": "https://accounts.google.com", // The token's issuer
"aud": "123-abc.apps.googleusercontent.com", // Client ID assigned to your Actions project
"iat": 233366400, // Unix timestamp of the token's creation time
"exp": 233370000, // Unix timestamp of the token's expiration time
"givenName": "John",
"familyName": "Doe",
"locale": "en_US",
"email": "<EMAIL>",
"name": "<NAME>"
}
```
| Returns object: | An object with the user's information |
`VoxaEvent.``getUserInformationWithLWA`()[¶](#VoxaEvent.getUserInformationWithLWA)
Object with user personal information from Amazon. Go [here](index.html#lwa) for more information.
```
{
"email": "<EMAIL>",
"name": "<NAME>",
"zipCode": "98101",
"userId": "amzn1.account.K2LI23KL2LK2"
}
```
| Returns object: | An object with the user's information |
`IVoxaEvent` is an interface that inherits its attributes and function to the specific platforms, for more information about each platform’s own methods visit:
* [AlexaEvent](index.html#alexa-event)
* [BotFrameworkEvent](index.html#botframework-event)
* [Dialogflow Event](index.html#dialogflow-events)
* [GoogleAssistantEvent](index.html#googleassistant-event)
* [FacebookEvent](index.html#facebook-event)
The `AlexaEvent` Object[¶](#the-alexaevent-object)
---
*class* `AlexaEvent`(*event*, *lambdaContext*)[¶](#AlexaEvent)
The `alexaEvent` object contains all the information from the Voxa event, it’s an object kept for the entire lifecycle of the state machine transitions and as such is a perfect place for middleware to put information that should be available on every request.
`AlexaEvent.AlexaEvent.``token`[¶](#AlexaEvent.AlexaEvent.token)
A convenience getter to obtain the request's token, especially useful when handling `Display.ElementSelected`
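A short sketch pairing this getter with the `DisplayElementSelected` intent mapping described earlier; the view name is a placeholder:
```
app.onIntent('DisplayElementSelected', (alexaEvent) => {
  // the token identifies which on-screen element the user selected
  alexaEvent.model.selectedItem = alexaEvent.token;
  return { tell: 'ItemSelected.Confirmation' };
});
```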
`AlexaEvent.AlexaEvent.alexa.``customerContact`[¶](#AlexaEvent.AlexaEvent.alexa.customerContact)
When a customer enables your Alexa skill, your skill can request the customer's permission to access their contact information, see [Customer Contact Information Reference](index.html#alexa-customer-contact).
`AlexaEvent.AlexaEvent.alexa.``deviceAddress`[¶](#AlexaEvent.AlexaEvent.alexa.deviceAddress)
When a customer enables your Alexa skill, your skill can obtain the customer’s permission to use address data associated with the customer’s Alexa device, see [Device Address Information Reference](index.html#alexa-device-address).
`AlexaEvent.AlexaEvent.alexa.``deviceSettings`[¶](#AlexaEvent.AlexaEvent.alexa.deviceSettings)
Alexa customers can set their timezone, distance measuring unit, and temperature measurement unit in the Alexa app, see [Device Settings Reference](index.html#alexa-device-settings).
`AlexaEvent.AlexaEvent.alexa.``isp`[¶](#AlexaEvent.AlexaEvent.alexa.isp)
The [in-skill purchasing](https://developer.amazon.com/docs/in-skill-purchase/isp-overview.html) feature enables you to sell premium content such as game features and interactive stories for use in skills with a custom interaction model, see [In-Skill Purchases Reference](index.html#alexa-isp).
`AlexaEvent.AlexaEvent.alexa.``lists`[¶](#AlexaEvent.AlexaEvent.alexa.lists)
Alexa customers have access to two default lists: Alexa to-do and Alexa shopping. In addition, Alexa customers can create and manage [custom lists](https://developer.amazon.com/docs/custom-skills/access-the-alexa-shopping-and-to-do-lists.html) in a skill that supports that; see [Alexa Shopping and To-Do Lists Reference](index.html#alexa-lists).
The `BotFrameworkEvent` Object[¶](#the-botframeworkevent-object)
---
The `GoogleAssistantEvent` Object[¶](#the-googleassistantevent-object)
---
*class* `GoogleAssistantEvent`(*event*, *lambdaContext*)[¶](#GoogleAssistantEvent)
The `googleAssistantEvent` object contains all the information from the Voxa event, it’s an object kept for the entire lifecycle of the state machine transitions and as such is a perfect place for middleware to put information that should be available on every request.
`GoogleAssistantEvent.GoogleAssistantEvent.google.``conv`[¶](#GoogleAssistantEvent.GoogleAssistantEvent.google.conv)
The conversation instance that contains the raw input sent by Dialogflow
The `FacebookEvent` Object[¶](#the-facebookevent-object)
---
*class* `FacebookEvent`(*event*, *lambdaContext*)[¶](#FacebookEvent)
The `facebookEvent` object contains all the information from the Voxa event for the Facebook Messenger platform, just like Google Assistant events. Additionally you can access the facebook property to send [Actions](https://developers.facebook.com/docs/messenger-platform/send-messages/sender-actions) to the Chatbot conversation:
```
const { FacebookEvent, FacebookPlatform, VoxaApp } = require('voxa');
const config = {
pageAccessToken: '<KEY>',
};
const app = new VoxaApp({ views, variables });
const facebookBot = new FacebookPlatform(app, config);
app.onIntent("LaunchIntent", async (voxaEvent: FacebookEvent) => {
await voxaEvent.facebook.sendTypingOffAction();
await voxaEvent.facebook.sendMarkSeenAction();
await voxaEvent.facebook.sendTypingOnAction();
const info = await voxaEvent.getUserInformation(FACEBOOK_USER_FIELDS.ALL);
voxaEvent.model.info = info;
return {
flow: "terminate",
text: "Facebook.User.FullInfo",
to: "die",
};
});
const reply = await facebookBot.execute(event);
```
The `facebookEvent` object also gives you the necessary helpers to implement the Handover Protocol, very useful when you want to pass conversation control from your bot to a person who manages your Facebook Page. The most common example is when a user sends your bot the following text: I want to talk to a representative. This means your bot is not understanding what the user is saying, or the bot can't find what the user is looking for, so a person needs to talk directly to the user. You can pass control to your Page Inbox like this:
```
const { FacebookEvent, FacebookPlatform, VoxaApp } = require('voxa');
const config = {
pageAccessToken: '<KEY>',
};
const app = new VoxaApp({ views, variables });
const facebookBot = new FacebookPlatform(app, config);
app.onIntent("PassControlIntent", async (voxaEvent: FacebookEvent) => {
await voxaEvent.facebook.passThreadControlToPageInbox();
return {
flow: "terminate",
text: "Facebook.RepresentativeWillGetInTouch.text",
to: "die",
};
});
```
Also, if the app you are working on is not the Primary Receiver, you can request control of the conversation like this:
```
const { FacebookEvent, FacebookPlatform, VoxaApp } = require('voxa');
const config = {
pageAccessToken: '<KEY>',
};
const app = new VoxaApp({ views, variables });
const facebookBot = new FacebookPlatform(app, config);
app.onIntent("CustomIntent", async (voxaEvent: FacebookEvent) => {
await voxaEvent.facebook.requestThreadControl();
return {
flow: "terminate",
text: "Facebook.ControlRequested.text",
to: "die",
};
});
```
Finally, if you detect the secondary receiver is not responding to the user, you can make your bot (Primary Receiver) take the control of the conversation like this:
```
const { FacebookEvent, FacebookPlatform, VoxaApp } = require('voxa');
const config = {
pageAccessToken: '<KEY>',
};
const app = new VoxaApp({ views, variables });
const facebookBot = new FacebookPlatform(app, config);
app.onIntent("CustomIntent", async (voxaEvent: FacebookEvent) => {
await voxaEvent.facebook.takeThreadControl();
return {
flow: "terminate",
text: "Facebook.ControlTaken.text",
to: "die",
};
});
```
Alexa Directives[¶](#alexa-directives)
---
### HomeCard[¶](#homecard)
[Alexa Documentation](https://developer.amazon.com/docs/custom-skills/include-a-card-in-your-skills-response.html)
Interactions between a user and an Alexa device can include home cards displayed in the Amazon Alexa App, the companion app available for Fire OS, Android, iOS, and desktop web browsers. These are graphical cards that describe or enhance the voice interaction. A custom skill can include these cards in its responses.
In Voxa you can send cards using a view or by returning a Card-like structure directly from your controller
```
const views = {
"de-DE": {
translation: {
Card: {
image: {
largeImageUrl: "https://example.com/large.jpg",
smallImageUrl: "https://example.com/small.jpg",
},
title: "Title",
type: "Standard",
},
},
},
};
app.onState('someState', () => {
return {
alexaCard: 'Card',
};
});
app.onState('someState', () => {
return {
alexaCard: {
image: {
largeImageUrl: "https://example.com/large.jpg",
smallImageUrl: "https://example.com/small.jpg",
},
title: "Title",
type: "Standard",
},
};
});
```
### AccountLinkingCard[¶](#accountlinkingcard)
[Alexa Documentation](https://developer.amazon.com/docs/custom-skills/include-a-card-in-your-skills-response.html#define-a-card-for-use-with-account-linking)
An account linking card is sent with the alexaAccountLinkingCard key in your controller, it requires no parameters.
```
app.onState('someState', () => {
return {
alexaAccountLinkingCard: null,
};
});
```
### RenderTemplate[¶](#rendertemplate)
[Alexa Documentation](https://developer.amazon.com/docs/custom-skills/display-interface-reference.html)
Voxa provides a DisplayTemplate builder that can be used with the alexaRenderTemplate controller key to create Display templates for the echo show and echo spot.
```
const voxa = require('voxa');
const { DisplayTemplate } = voxa;
app.onState('someState', () => {
const template = new DisplayTemplate("BodyTemplate1")
.setToken("token")
.setTitle("This is the title")
.setTextContent("This is the text content", "secondaryText", "tertiaryText")
.setBackgroundImage("http://example.com/image.jpg", "Image Description")
.setBackButton("HIDDEN");
return {
alexaRenderTemplate: template,
};
});
```
### Alexa Presentation Language (APL) Templates[¶](#alexa-presentation-language-apl-templates)
[Alexa Documentation](https://developer.amazon.com/docs/alexa-presentation-language/apl-overview.html)
An APL Template is sent with the alexaAPLTemplate key in your controller. You can pass the directive object directly or a view name with the directive object.
One important thing to know is that if you send a Render Template and an APL Template in the same response, the APL Template will be the one rendered if the device supports it; if not, the Render Template will be the one rendered.
```
// variables.js
exports.MyAPLTemplate = (voxaEvent) => {
// Do something with the voxaEvent, or not...
return {
datasources: {},
document: {},
token: "SkillTemplateToken",
type: "Alexa.Presentation.APL.RenderDocument",
};
};
// views.js
const views = {
"en-US": {
translation: {
MyAPLTemplate: "{MyAPLTemplate}"
},
},
};
// state.js
app.onState('someState', () => {
return {
alexaAPLTemplate: "MyAPLTemplate",
};
});
// Or you can do it directly...
app.onState('someState', () => {
return {
alexaAPLTemplate: {
datasources: {},
document: {},
token: "SkillTemplateToken",
type: "Alexa.Presentation.APL.RenderDocument",
},
};
});
```
### Alexa Presentation Language (APL) Commands[¶](#alexa-presentation-language-apl-commands)
[Alexa Documentation](https://developer.amazon.com/docs/alexa-presentation-language/apl-commands.html)
An APL Command is sent with the alexaAPLCommand key in your controller. Just like the APL Template, you can pass the directive object directly or a view name with the directive object.
```
// variables.js
exports.MyAPLCommand = (voxaEvent) => {
  // Do something with the voxaEvent, or not...
  return {
    token: "SkillTemplateToken",
    type: "Alexa.Presentation.APL.ExecuteCommands",
    commands: [{
      type: "SpeakItem", // Karaoke type command
      componentId: "someAPLComponent",
    }],
  };
};
// views.js
const views = {
"en-US": {
translation: {
MyAPLCommand: "{MyAPLCommand}"
},
},
};
// state.js
app.onState('someState', () => {
return {
alexaAPLCommand: "MyAPLCommand",
};
});
// Or you can do it directly...
app.onState('someState', () => {
return {
alexaAPLCommand: {
token: "SkillTemplateToken",
type: "Alexa.Presentation.APL.ExecuteCommands";
commands: [{
type: "SpeakItem", // Karaoke type command
componentId: "someAPLComponent";
}],
},
};
});
```
### Alexa Presentation Language - T (APLT) Templates[¶](#alexa-presentation-language-t-aplt-templates)
[Alexa Documentation](https://developer.amazon.com/en-US/docs/alexa/alexa-presentation-language/apl-reference-character-displays.html)
Alexa Presentation Language is supported on devices with character displays. Use the APLT document format to send text to these devices. The APLT document format is smaller and simpler than the APL document format supported by devices with screens.
One important thing to know is that if you send a Render Template and an APLT Template in the same response, the APLT Template will be the one rendered if the device supports it; if not, the Render Template will be the one rendered.
```
// variables.js
exports.MyAPLTTemplate = (voxaEvent) => {
// Do something with the voxaEvent, or not...
return {
datasources: {},
document: {},
targetProfile: "FOUR_CHARACTER_CLOCK",
token: "SkillTemplateToken",
type: "Alexa.Presentation.APLT.RenderDocument"
};
};
// views.js
const views = {
"en-US": {
translation: {
MyAPLTTemplate: "{MyAPLTTemplate}"
},
},
};
// state.js
app.onState('someState', () => {
return {
alexaAPLTTemplate: "MyAPLTTemplate",
};
});
// Or you can do it directly...
app.onState('someState', () => {
return {
alexaAPLTTemplate: {
datasources: {},
document: {},
targetProfile: "FOUR_CHARACTER_CLOCK",
token: "SkillTemplateToken",
type: "Alexa.Presentation.APLT.RenderDocument",
},
};
});
```
### Alexa Presentation Language - T (APLT) Commands[¶](#alexa-presentation-language-t-aplt-commands)
[Alexa Documentation](https://developer.amazon.com/en-US/docs/alexa/alexa-presentation-language/aplt-interface.html#executecommands-directive)
An APLT Command is sent with the alexaAPLTCommand key in your controller. Just like the APLT Template, you can pass the directive object directly or a view name with the directive object.
```
// variables.js
exports.MyAPLTCommand = (voxaEvent) => {
  // Do something with the voxaEvent, or not...
  return {
    token: "SkillTemplateToken",
    type: "Alexa.Presentation.APLT.ExecuteCommands",
    commands: [{
      type: "SetValue",
      description: "Changes the text property value on the 'myTextId' component.",
      componentId: "myTextId",
      property: "text",
      value: "New text value!",
      delay: 3000
    }]
  };
};
// views.js
const views = {
  "en-US": {
    translation: {
      MyAPLTCommand: "{MyAPLTCommand}"
    },
  },
};
// state.js
app.onState('someState', () => {
  return {
    alexaAPLTCommand: "MyAPLTCommand",
  };
});
// Or you can do it directly...
app.onState('someState', () => {
  return {
    alexaAPLTCommand: {
      token: "SkillTemplateToken",
      type: "Alexa.Presentation.APLT.ExecuteCommands",
      commands: [{
        type: "SetValue",
        description: "Changes the text property value on the 'myTextId' component.",
        componentId: "myTextId",
        property: "text",
        value: "New text value!",
        delay: 3000
      }]
    },
  };
});
```
### PlayAudio[¶](#playaudio)
[Alexa Documentation](https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html#play)
```
const { PlayAudio } = require('voxa');

function register(app) {
app.onState('someState', () => {
const url = 'http://example.com/example.mp3';
const token = '{}';
const offsetInMilliseconds = 0;
const behavior = 'REPLACE_ALL';
const playAudio = new PlayAudio(url, token, offsetInMilliseconds, behavior);
return {
directives: [playAudio],
};
});
}
```
**Add metadata for your audio**
The PlayAudio directive has a fifth parameter to set metadata for the audio; pass it when creating a PlayAudio instance, following the structure required by Amazon (refer to the Alexa documentation link above).
```
const { PlayAudio } = require('voxa');

function register(app) {
app.onState('someState', () => {
const url = 'http://example.com/example.mp3';
const token = '{}';
const offsetInMilliseconds = 0;
const behavior = 'REPLACE_ALL';
const metadata = {
title: 'title of the track to display',
subtitle: 'subtitle of the track to display',
art: {
sources: [
{
url: 'https://cdn.example.com/url-of-the-album-art-image.png'
}
]
},
backgroundImage: {
sources: [
{
url: 'https://cdn.example.com/url-of-the-background-image.png'
}
]
}
};
const playAudio = new PlayAudio(url, token, offsetInMilliseconds, behavior, metadata);
return {
directives: [playAudio],
};
});
}
```
### StopAudio[¶](#stopaudio)
[Alexa Documentation](https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html#stop)
```
function register(app) {
app.onState("PauseIntent", {
alexaStopAudio: true,
reply: "SomeViewWithAPauseText",
to: "die"
});
}
```
### Resume an Audio[¶](#resume-an-audio)
Resuming audio works using the PlayAudio directive; the only thing that needs to change is the offsetInMilliseconds so that, of course, the audio starts where it stopped. The offsetInMilliseconds comes from the context attribute in the raw event coming from Alexa.
You can also use the token to pass important information, since the AudioPlayer context is outside of the skill session and you therefore can't access the session variables. In this example, the information about the audio is returned with the alexaPlayAudio key from Voxa.
```
const { PlayAudio } = require('voxa');

function register(app) {
app.onState("playSomeAudio", () => {
const url = 'http://example.com/example.mp3';
const token = JSON.stringify({ url });
const offsetInMilliseconds = 0;
const behavior = 'REPLACE_ALL';
const metadata = {
art: {
sources: [
{
url: "http://example.com/image.png",
},
],
},
backgroundImage: {
sources: [
{
url: "http://example.com/image.png",
},
],
},
subtitle: "Subtitle",
title: "Title",
};
return {
alexaPlayAudio: {
behavior,
metadata,
offsetInMilliseconds,
token,
url,
},
};
});
app.onIntent("ResumeIntent", (voxaEvent: IVoxaEvent) => {
if (voxaEvent.rawEvent.context) {
const token = JSON.parse(voxaEvent.rawEvent.context.AudioPlayer.token);
const offsetInMilliseconds = voxaEvent.rawEvent.context.AudioPlayer.offsetInMilliseconds;
const url = token.url;
const playAudio = new PlayAudio(url, token, offsetInMilliseconds);
return {
reply: "SomeViewSayingResumingAudio",
to: "die",
directives: [playAudio]
};
}
return { flow: "terminate", reply: "SomeGoodbyeMessage" };
});
}
```
### ElicitSlot Directive[¶](#elicitslot-directive)
[Alexa Documentation](https://developer.amazon.com/docs/custom-skills/dialog-interface-reference.html#elicitslot)
When there is an active dialog, you can use the `alexaElicitDialog` directive to tell Alexa to prompt the user for a specific slot in the next turn. A prompt passed in as a `say`, `reply` or other statement is required and will replace the prompt that is provided to the interaction model for the dialog.
The `flow` and `to` keys should not be used or should always be `flow: "yield"` and `to: "{current_intent}"` since dialogs loop the same intent until all of the parameters are filled.
The only required parameter is the `slotToElicit`, but you can also pass in the values for slots to update the current values. If a slot isn’t declared in the interaction model it will be ignored or cause an error.
```
// simplest example
app.onIntent('someDialogIntent', () => {
// check if the dialog is complete and do some cool stuff here //
// if we need to ask the user for something //
return {
alexaElicitDialog: {
slotToElicit: "favoriteColor",
},
sayp: ["What is your favorite color?"],
};
});
// updating slots example
app.onIntent('someDialogIntent', () => {
// check if the dialog is complete and do some cool stuff here //
// if we need to ask the user for something //
return {
alexaElicitDialog: {
slotToElicit: "favoriteColor",
slots: {
bestLetter: {
value: "W",
confirmationStatus: "CONFIRMED",
},
},
},
sayp: ["What is your favorite color?"],
};
});
// This is still OK
app.onIntent('someDialogIntent', () => {
return {
alexaElicitDialog: {
slotToElicit: "favoriteColor",
},
sayp: ["What is your favorite color?"],
to: "someDialogIntent",
};
});
// This will break
app.onIntent('someDialogIntent', () => {
return {
alexaElicitDialog: {
slotToElicit: "favoriteColor",
},
sayp: ["What is your favorite color?"],
to: "someOtherThing",
};
});
```
### Dynamic Entities[¶](#dynamic-entities)
[Alexa Documentation](https://developer.amazon.com/docs/custom-skills/use-dynamic-entities-for-customized-interactions.html)
Dynamic entities are sent with the alexaDynamicEntities key in your controller. You need to pass a view name with the types array.
```
// variables.js
exports.dynamicNames = (voxaEvent) => {
return [
{
name: "LIST_OF_AVAILABLE_NAMES",
values: [
{
id: "nathan",
name: {
synonyms: ["nate"],
value: "nathan"
}
}
]
}
];
};
// views.js
const views = {
  "en-US": {
    translation: {
      MyAvailableNames: "{dynamicNames}"
    },
  },
};
// state.js
app.onState('someState', () => {
return {
alexaDynamicEntities: "MyAvailableNames",
};
});
// Or you can pass the types directly...
app.onState('someState', () => {
return {
alexaDynamicEntities: [
{
name: "LIST_OF_AVAILABLE_NAMES",
values: [
{
id: "nathan",
name: {
synonyms: ["nate"],
value: "nathan"
}
}
]
}
],
};
});
// Or you can pass the whole directive directly...
app.onState('someState', () => {
return {
alexaDynamicEntities: {
type: "Dialog.UpdateDynamicEntities",
updateBehavior: "REPLACE",
types: [
{
name: "LIST_OF_AVAILABLE_NAMES",
values: [
{
id: "nathan",
name: {
synonyms: ["nate"],
value: "nathan"
}
}
]
}
]
},
};
});
```
Google Assistant Directives[¶](#google-assistant-directives)
---
Dialogflow directives expose Google Actions functionality that is platform specific. In general, they take the same parameters you would pass to the Actions on Google Node.js SDK.
### List[¶](#list)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/responses#list)
The single-select list presents the user with a vertical list of multiple items and allows the user to select a single one. Selecting an item from the list generates a user query (chat bubble) containing the title of the list item.
```
const { Image } = require('actions-on-google');

// SELECTION_KEY_* and IMG_URL_* are placeholder constants defined elsewhere
app.onState('someState', () => {
return {
dialogflowList: {
title: 'List Title',
items: {
// Add the first item to the list
[SELECTION_KEY_ONE]: {
synonyms: [
'synonym of title 1',
'synonym of title 2',
'synonym of title 3',
],
title: 'Title of First List Item',
description: 'This is a description of a list item.',
image: new Image({
url: IMG_URL_AOG,
alt: 'Image alternate text',
}),
},
// Add the second item to the list
[SELECTION_KEY_GOOGLE_HOME]: {
synonyms: [
'Google Home Assistant',
'Assistant on the Google Home',
],
title: 'Google Home',
description: 'Google Home is a voice-activated speaker powered by ' +
'the Google Assistant.',
image: new Image({
url: IMG_URL_GOOGLE_HOME,
alt: 'Google Home',
}),
},
// Add the third item to the list
[SELECTION_KEY_GOOGLE_PIXEL]: {
synonyms: [
'Google Pixel XL',
'Pixel',
'Pixel XL',
],
title: 'Google Pixel',
description: 'Pixel. Phone by Google.',
image: new Image({
url: IMG_URL_GOOGLE_PIXEL,
alt: 'Google Pixel',
}),
},
},
}
}
});
```
### Carousel[¶](#carousel)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/responses#carousel)
The carousel scrolls horizontally and allows for selecting one item. Compared to the list selector, it has large tiles, allowing for richer content. The tiles that make up a carousel are similar to the basic card with image. Selecting an item from the carousel will simply generate a chat bubble as the response, just like with the list selector.
```
const { Image } = require('actions-on-google');

// SELECTION_KEY_* and IMG_URL_* are placeholder constants defined elsewhere
app.onState('someState', () => {
return {
dialogflowCarousel: {
items: {
// Add the first item to the carousel
[SELECTION_KEY_ONE]: {
synonyms: [
'synonym of title 1',
'synonym of title 2',
'synonym of title 3',
],
title: 'Title of First Carousel Item',
description: 'This is a description of a carousel item.',
image: new Image({
url: IMG_URL_AOG,
alt: 'Image alternate text',
}),
},
// Add the second item to the carousel
[SELECTION_KEY_GOOGLE_HOME]: {
synonyms: [
'Google Home Assistant',
'Assistant on the Google Home',
],
title: 'Google Home',
description: 'Google Home is a voice-activated speaker powered by ' +
'the Google Assistant.',
image: new Image({
url: IMG_URL_GOOGLE_HOME,
alt: 'Google Home',
}),
},
// Add third item to the carousel
[SELECTION_KEY_GOOGLE_PIXEL]: {
synonyms: [
'Google Pixel XL',
'Pixel',
'Pixel XL',
],
title: 'Google Pixel',
description: 'Pixel. Phone by Google.',
image: new Image({
url: IMG_URL_GOOGLE_PIXEL,
alt: 'Google Pixel',
}),
},
},
}
}
});
```
### Browse Carousel[¶](#browse-carousel)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/responses#browsing_carousel)
A browsing carousel is a rich response that allows users to scroll vertically and select a tile in a collection. Browsing carousels are designed specifically for web content by opening the selected tile in a web browser.
```
app.onState('someState', () => {
return {
dialogflowBrowseCarousel: {
items: [
{
title: 'Title of the item',
description: 'This is a description of an item.',
footer: 'Footer of the item',
openUrlAction: {
url: 'https://example.com/page',
urlTypeHint: 'DEFAULT' // Optional
}
},
{
title: 'Title of the item',
description: 'This is a description of an item.',
footer: 'Footer of the item',
openUrlAction: {
url: 'https://example.com/page',
urlTypeHint: 'DEFAULT' // Optional
}
},
],
}
}
});
```
### Suggestions[¶](#suggestions)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/responses#suggestion_chip)
Use suggestion chips to hint at responses to continue or pivot the conversation. If during the conversation there is a primary call for action, consider listing that as the first suggestion chip.
Whenever possible, you should incorporate one key suggestion as part of the chat bubble, but do so only if the response or chat conversation feels natural.
```
app.onState('someState', () => {
return {
dialogflowSuggestions: ['Exit', 'Continue']
}
});
```
```
app.onState('someState', () => {
return {
dialogflowLinkOutSuggestion: {
name: "Suggestion Link",
url: 'https://assistant.google.com/',
}
}
});
```
### BasicCard[¶](#basiccard)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/responses#basic_card)
A basic card displays information that can include the following:
* Image
* Title
* Sub-title
* Text body
* Link button
* Border
Use basic cards mainly for display purposes. They are designed to be concise, to present key (or summary) information to users, and to allow users to learn more if you choose (using a weblink).
In most situations, you should add suggestion chips below the cards to continue or pivot the conversation.
Avoid repeating the information presented in the card in the chat bubble at all costs.
```
const { Button, Image } = require('actions-on-google');

app.onState('someState', () => {
return {
dialogflowBasicCard: {
text: `This is a basic card. Text in a basic card can include "quotes" and
most other unicode characters including emoji. Basic cards also support
some markdown formatting like *emphasis* or _italics_, **strong** or
__bold__, and ***bold italic*** or ___strong emphasis___ `,
subtitle: 'This is a subtitle',
title: 'Title: this is a title',
buttons: new Button({
title: 'This is a button',
url: 'https://assistant.google.com/',
}),
image: new Image({
url: 'https://example.com/image.png',
alt: 'Image alternate text',
}),
}
}
});
```
### AccountLinkingCard[¶](#accountlinkingcard)
[Actions on Google Documentation](https://developers.google.com/actions/identity/account-linking)
Account linking is a great way to let users connect their Google accounts to existing accounts on your service. This allows you to build richer experiences for your users that take advantage of the data they already have in their account on your service. Whether it's food preferences, existing payment accounts, or music preferences, your users should be able to have better experiences in the Google Assistant by linking their accounts.
```
app.onState('someState', () => {
return {
dialogflowAccountLinkingCard: "To track your exercise"
}
});
```
### MediaResponse[¶](#mediaresponse)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/responses#media_responses)
Media responses let your app play audio content with a playback duration longer than the 120-second limit of SSML. The primary component of a media response is the single-track card. The card allows the user to perform these operations:
* Replay the last 10 seconds.
* Skip forward for 30 seconds.
* View the total length of the media content.
* View a progress indicator for audio playback.
* View the elapsed playback time.
```
const { MediaObject } = require('actions-on-google');
app.onState('someState', () => {
  // placeholder values for the media track
  const name = 'Media name';
  const url = 'https://example.com/example.mp3';
  const mediaObject = new MediaObject({
    name,
    url,
  });
return {
dialogflowMediaResponse: mediaObject
};
});
```
### User Information[¶](#user-information)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/helpers#user_information)
You can obtain the following user information with this helper:
* Display name
* Given name
* Family name
* Coarse device location (zip code and city)
* Precise device location (coordinates and street address)
```
app.onState('someState', () => {
return {
dialogflowPermission: {
context: 'To read your mind',
permissions: 'NAME',
}
};
});
```
### Date and Time[¶](#date-and-time)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/helpers#date_and_time)
You can obtain a date and time from users by requesting fulfillment of the actions.intent.DATETIME intent.
```
app.onState('someState', () => {
return {
dialogflowDateTime: {
prompts: {
initial: 'When do you want to come in?',
date: 'Which date works best for you?',
time: 'What time of day works best for you?',
}
}
};
});
```
### Confirmation[¶](#confirmation)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/helpers#confirmation)
You can ask the user for a generic confirmation (a yes/no question) and get the resulting answer. The grammar for "yes" and "no" naturally expands to things like "Yea" or "Nope", making it usable in many situations.
```
app.onState('someState', () => {
return {
dialogflowConfirmation: 'Can you confirm?',
};
});
```
### Android Link[¶](#android-link)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/helpers#android_link)
You can ask the user to continue an interaction via your Android app. This helper allows you to prompt the user as part of the conversation. You’ll first need to associate your Android app with your Actions Console project via the Brand Verification page.
```
app.onState('someState', () => {
const options = {
destination: 'Google',
url: 'example://gizmos',
package: 'com.example.gizmos',
reason: 'handle this for you',
};
return {
dialogflowDeepLink: options
};
});
```
### Place and Location[¶](#place-and-location)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/helpers#place_and_location)
You can obtain a location from users by requesting fulfillment of the actions.intent.PLACE intent. This helper is used to prompt the user for addresses and other locations, including any home/work/contact locations that they’ve saved with Google.
Saved locations will only return the address, not the associated mapping (e.g. “123 Main St” as opposed to “HOME = 123 Main St”).
```
app.onState('someState', () => {
return {
dialogflowPlace: {
context: 'To find a place to pick you up',
prompt: 'Where would you like to be picked up?',
}
};
});
```
### Digital Goods[¶](#digital-goods)
[Actions on Google Documentation](https://developers.google.com/actions/transactions/digital/dev-guide-digital)
You can add dialog to your Action that sells your in-app products in the Google Play store, using the digital purchases API.
You can use the *google.digitalGoods* object to get the subscriptions and InAppEntitlements filtered by the skuIds you pass to the function. Voxa handles all operations in the background to access your digital goods in the Play Store. To do that, you need to pass the GoogleAssistantPlatform object the packageName of your Android application, along with the keyFile containing the credentials you created in your Google Cloud project.
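To make that setup concrete, here is a minimal, hedged sketch. The configuration keys (`transactionOptions`, `androidAppPackageName`, `keyFile`) and the `getInAppEntitlements` method name are assumptions for illustration only; check the Voxa typings for the exact names and shapes before relying on them.
```
const { GoogleAssistantPlatform, VoxaApp } = require('voxa');

const app = new VoxaApp({ views, variables });

// Assumed configuration shape: the package name of your Android app plus the
// service-account key file from your Google Cloud project.
const googleAction = new GoogleAssistantPlatform(app, {
  transactionOptions: {
    androidAppPackageName: 'com.example.myapp',
    keyFile: './google-cloud-credentials.json',
  },
});

app.onState('checkEntitlements', async (voxaEvent) => {
  // Assumed method name on the google.digitalGoods helper described above,
  // filtering by the SKU ids passed in.
  const skuIds = ['premium_pack'];
  const entitlements = await voxaEvent.google.digitalGoods.getInAppEntitlements(skuIds);
  voxaEvent.model.ownedProducts = entitlements;
  return { ask: 'Products.Owned', to: 'entry' };
});
```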
### TransactionDecision[¶](#transactiondecision)
### TransactionRequirements[¶](#transactionrequirements)
### Routine Suggestions[¶](#routine-suggestions)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/updates/routines)
To consistently re-engage with users, you need to become a part of their daily habits. Google Assistant users can already use Routines to execute multiple Actions with a single command, perfect for those times when users wake up in the morning, head out of the house, get ready for bed or many of the other tasks we perform throughout the day. Now, with Routine Suggestions, after someone engages with your Action, you can prompt them to add your Action to their Routines with just a couple of taps.
```
app.onState('someState', () => {
return {
dialogflowRegisterUpdate: {
intent: 'Show Image',
frequency: 'ROUTINES'
}
};
});
```
### Push notifications[¶](#push-notifications)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/updates/notifications)
Your app can send push notifications to users whenever relevant, such as sending a reminder when the due date for a task is near.
```
app.onState('someState', () => {
return {
dialogflowUpdatePermission: {
intent: 'tell_latest_tip'
}
};
});
```
### Multi-surface conversations[¶](#multi-surface-conversations)
[Actions on Google Documentation](https://developers.google.com/actions/assistant/surface-capabilities#multi-surface_conversations)
At any point during your app’s flow, you can check if the user has any other surfaces with a specific capability. If another surface with the requested capability is available, you can then transfer the current conversation over to that new surface.
```
const _ = require('lodash');

app.onIntent('someState', async (voxaEvent) => {
const screen = 'actions.capability.SCREEN_OUTPUT';
if (!_.includes(voxaEvent.supportedInterfaces, screen)) {
const screenAvailable = voxaEvent.conv.available.surfaces.capabilities.has(screen);
const context = 'Sure, I have some sample images for you.';
const notification = 'Sample Images';
const capabilities = ['actions.capability.SCREEN_OUTPUT'];
if (screenAvailable) {
return {
sayp: 'Hello',
to: 'entry',
flow: 'yield',
dialogflowNewSurface: {
context, notification, capabilities,
},
};
}
return {
sayp: 'Does not have a screen',
flow: 'terminate',
};
}
return {
sayp: 'Already has a screen',
flow: 'terminate',
};
});
```
### Output Contexts[¶](#output-contexts)
[Actions on Google Documentation](https://actions-on-google.github.io/actions-on-google-nodejs/classes/dialogflow.contextvalues.html#set)
If you need to add output contexts to the Dialogflow webhook, you can use the dialogflowContext directive:
```
app.onIntent("LaunchIntent", {
dialogflowContext: {
lifespan: 5,
name: "DONE_YES_NO_CONTEXT",
},
sayp: "Hello!",
to: "entry",
flow: "yield",
});
```
### Session Entities[¶](#session-entities)
[Google Documentation](https://cloud.google.com/dialogflow/docs/entities-session)
A session represents a conversation between a Dialogflow agent and an end-user. You can create special entities, called session entities during a session. Session entities can extend or replace custom entity types and only exist during the session that they were created for. All session data, including session entities, is stored by Dialogflow for 20 minutes.
For example, if your agent has a @fruit entity type that includes “pear” and “grape”, that entity type could be updated to include “apple” or “orange”, depending on the information your agent collects from the end-user. The updated entity type would have the “apple” or “orange” entity entry for the rest of the session.
Create an entity in your Dialogflow agent and make sure that the define synonyms option is checked. Add some values and synonyms if needed, according to the agent instructions. Notice that the name of the entity is the value to be used by the directive (i.e., list-of-fruits).
```
// variables.js
export function mySessionEntity(voxaEvent: VoxaEvent) {
// Do something with the voxaEvent, or not...
const sessionEntityType = [
{
"entities": [
{
"synonyms": ["apple", "green apple", "crabapple"],
"value": "APPLE_KEY"
},
{
"synonyms": ["orange"],
"value": "ORANGE_KEY"
}
],
"entityOverrideMode": "ENTITY_OVERRIDE_MODE_OVERRIDE",
"name": "list-of-fruits"
}
];
return sessionEntityType;
}
// views.js
const views = {
  "en-US": {
    translation: {
      mySessionEntity: "{mySessionEntity}"
    },
  },
};
// state.js
app.onState('someState', {
dialogflowSessionEntity: "mySessionEntity",
flow: "yield",
sayp: "Hello!",
to: "entry",
});
// Or you can do it directly...
app.onState('someState', {
dialogflowSessionEntity: [
{
"entities": [
{
"synonyms": ["apple", "green apple", "crabapple"],
"value": "APPLE_KEY"
},
{
"synonyms": ["orange"],
"value": "ORANGE_KEY"
}
],
"entityOverrideMode": "ENTITY_OVERRIDE_MODE_OVERRIDE",
"name": "list-of-fruits"
}
],
flow: "yield",
sayp: "Hello!",
to: "entry",
});
```
Botframework Directives[¶](#botframework-directives)
---
### Sign In Card[¶](#sign-in-card)
A sign-in card is used to account-link your user. On Cortana, the parameters are ignored and the system will use the parameters configured in the Cortana channel.
```
app.onIntent("LaunchIntent", {
botframeworkSigninCard: {
buttonTitle: "Sign In",
cardText: "Sign In Card",
url: "https://example.com",
},
to: "die",
});
```
### Hero Card[¶](#hero-card)
```
import { HeroCard } from "botbuilder";
const card = new HeroCard()
.title("Card Title")
.subtitle("Card Subtitle")
.text("Some Text");
app.onIntent("LaunchIntent", {
botframeworkHeroCard: card,
to: "die",
});
```
### Suggested Actions[¶](#suggested-actions)
```
import { SuggestedActions } from "botbuilder";
const suggestedActions = new SuggestedActions().addAction({
title: "Green",
type: "imBack",
value: "productId=1&color=green",
});
app.onIntent("LaunchIntent", {
botframeworkSuggestedActions: suggestedActions,
to: "die",
});
```
### Audio Card[¶](#audio-card)
```
import { AudioCard } from "botbuilder";
const audioCard = new AudioCard().title("Sample audio card");
audioCard.media([
{
profile: "audio.mp3",
url: "http://example.com/audio.mp3",
},
]);
app.onIntent("LaunchIntent", {
botframeworkAudioCard: audioCard,
to: "die",
});
```
### Text[¶](#text)
The `Text` directive renders a view and adds it to the response as plain text; this text is then shown to the user on devices with a screen.
```
app.onIntent("LaunchIntent", {
say: "SomeView",
text: "SomeView",
to: "die",
});
```
### Text P[¶](#text-p)
```
app.onIntent("LaunchIntent", {
sayp: "Some Text",
textp: "Some Text",
to: "die",
});
```
### Attachments and Attachment Layouts[¶](#attachments-and-attachment-layouts)
```
import { AttachmentLayout, HeroCard } from "botbuilder";
import * as _ from "lodash";

const cards = _.map([1, 2, 3], (index: number) => {
return new HeroCard().title(`Event ${index}`).toAttachment();
});
app.onIntent("LaunchIntent", {
botframeworkAttachmentLayout: AttachmentLayout.carousel,
botframeworkAttachments: cards,
to: "die",
});
```
Dialogflow Platform Integrations[¶](#dialogflow-platform-integrations)
---
Dialogflow offers a variety of integrations so you can share your base code across several platforms like Google Assistant, Facebook Messenger and more. For more information about these platforms, visit their [Integration docs](https://dialogflow.com/docs/integrations).
More integrations coming soon to Voxa.
### Google Assistant[¶](#google-assistant)
The most common Dialogflow integration is the GoogleAssistantPlatform.
```
const { GoogleAssistantPlatform, VoxaApp } = require('voxa');
const app = new VoxaApp({ views, variables });
const googleAction = new GoogleAssistantPlatform(app);
```
#### Facebook Messenger[¶](#facebook-messenger)
The `DialogflowPlatform` for Voxa makes some of the core Facebook Messenger functionality available to send in your chatbot responses. When you initialize the Facebook platform object, you can optionally pass a configuration object with the Facebook Page Access Token:
```
const { FacebookPlatform } = require('voxa');
const config = {
pageAccessToken: '<KEY>',
};
const app = new VoxaApp({ views, variables });
const facebookBot = new FacebookPlatform(app, config);
```
Voxa will use this token to perform some authentication operations like sending actions to the user’s chat window. Check the [Facebook Event](dialogflow-events.html#the-facebookevent-object) object for more details.
Voxa also offers a variety of built-in rich components for you to send along with your response. For now, you can integrate the following:
### Account Linking button[¶](#account-linking-button)
You need to include in your controller the following field: `facebookAccountLink`, which takes a URL to go into the account linking flow. For more information about the account linking flow, check how to add a [Log In Button](https://developers.facebook.com/docs/messenger-platform/send-messages/buttons#login), and [Account Linking](https://developers.facebook.com/docs/messenger-platform/identity/account-linking).
```
app.onState('someState', () => {
return {
facebookAccountLink: "https://www.messenger.com",
};
});
```
Or you can also handle these values from your views file
```
app.onState('someState', () => {
return {
reply: "FacebookAccountLink"
};
});
.....
views
.....
{
"FacebookAccountLink": {
"facebookAccountLink": "https://www.messenger.com"
}
}
```
### Account Unlink button[¶](#account-unlink-button)
You need to include in your controller the following field: `facebookAccountUnlink`, which can take any value like a boolean, just to indicate to Voxa we’re adding this button to the response. For more information about the account linking flow, check how to add a [Log Out Button](https://developers.facebook.com/docs/messenger-platform/send-messages/buttons#logout), and [Account Linking](https://developers.facebook.com/docs/messenger-platform/identity/account-linking).
```
app.onState('someState', () => {
return {
facebookAccountUnlink: true,
};
});
```
Or you can also handle these values from your views file
```
app.onState('someState', () => {
return {
reply: "FacebookAccountUnlink"
};
});
.....
views
.....
{
"FacebookAccountLink": {
"facebookAccountUnlink": true
}
}
```
### Location Quick Reply[¶](#location-quick-reply)
You need to include in your controller the following field: `facebookQuickReplyLocation`, which takes a string with the title of the message that goes along with the button requesting the user's location. For more information, check how to add a [Location Quick Reply](https://developers.facebook.com/docs/messenger-platform/send-messages/quick-replies#locations).
```
app.onState('someState', () => {
return {
facebookQuickReplyLocation: "Send me your location",
};
});
```
Or you can also handle these values from your views file
```
app.onState('someState', () => {
return {
reply: "FacebookQuickReplyLocation"
};
});
.....
views
.....
{
"FacebookQuickReplyLocation": {
"facebookQuickReplyLocation": "Send me your location"
}
}
```
### Phone Number Quick Reply[¶](#phone-number-quick-reply)
You need to include in your controller the following field: `facebookQuickReplyPhoneNumber`, which takes a string with the title of the message that goes along with the button requesting the user's phone number. For more information, check how to add a [User Phone Number Quick Reply](https://developers.facebook.com/docs/messenger-platform/send-messages/quick-replies#phone).
```
app.onState('someState', () => {
return {
facebookQuickReplyPhoneNumber: "Send me your phone number",
};
});
```
Or you can also handle these values from your views file
```
app.onState('someState', () => {
return {
reply: "FacebookQuickReplyPhoneNumber"
};
});
.....
views
.....
{
"FacebookQuickReplyPhoneNumber": {
"facebookQuickReplyPhoneNumber": "Send me your phone number"
}
}
```
### Text Quick Reply[¶](#text-quick-reply)
You need to include in your controller the following field: `directives`, which takes an array of directives; the one you're going to send is a FacebookQuickReplyText directive, which takes 2 parameters:
- message: a string with the message that goes along with the quick reply options.
- replyArray: an IFacebookQuickReply object or array of objects with the options to render in the chat.
For more information, check how to add a [User Text Quick Reply](https://developers.facebook.com/docs/messenger-platform/send-messages/quick-replies#text).
```
const { FacebookQuickReplyText, IFacebookQuickReply } = require('voxa');
app.onState('someState', () => {
const quickReplyTextArray: IFacebookQuickReply[] = [
{
imageUrl: "https://upload.wikimedia.org/wikipedia/commons/thumb/e/e9/16777216colors.png/220px-16777216colors.png",
payload: "square",
title: "Square Multicolor",
},
{
imageUrl: "https://www.w3schools.com/colors/img_colormap.gif",
payload: "hexagonal",
title: "Hexagonal multicolor",
},
];
const facebookQuickReplyText = new FacebookQuickReplyText("What's your favorite shape?", quickReplyTextArray);
return {
directives: [facebookQuickReplyText],
};
});
```
Or you can also handle these values from your views file
```
app.onState('someState', () => {
return {
reply: "FacebookQuickReplyText"
};
});
.....
views
.....
{
"FacebookQuickReplyText": {
"facebookQuickReplyText": "{quickReplyText}"
}
}
.........
variables
.........
const { FacebookQuickReplyText } = require('voxa');
export function quickReplyText(request) {
const quickReplyTextArray = [
{
imageUrl: "https://upload.wikimedia.org/wikipedia/commons/thumb/e/e9/16777216colors.png/220px-16777216colors.png",
payload: "square",
title: "Square Multicolor",
},
{
imageUrl: "https://www.w3schools.com/colors/img_colormap.gif",
payload: "hexagonal",
title: "Hexagonal multicolor",
},
];
const facebookQuickReplyText = new FacebookQuickReplyText("What's your favorite shape?", quickReplyTextArray);
return {
directives: [facebookQuickReplyText],
};
},
```
### Email Quick Reply[¶](#email-quick-reply)
You need to include in your controller the following field: `facebookQuickReplyUserEmail`, which takes a string with the title of the message that goes along with the button requesting the user's email. For more information, check how to add a [User Email Quick Reply](https://developers.facebook.com/docs/messenger-platform/send-messages/quick-replies#email).
```
app.onState('someState', () => {
return {
facebookQuickReplyUserEmail: "Send me your email",
};
});
```
Or you can also handle these values from your views file
```
app.onState('someState', () => {
return {
reply: "FacebookQuickReplyUserEmail"
};
});
.....
views
.....
{
"FacebookQuickReplyUserEmail": {
"facebookQuickReplyUserEmail": "Send me your email"
}
}
```
### Postbacks buttons (Suggestion chips)[¶](#postbacks-buttons-suggestion-chips)
You need to include in your controller the following field: `facebookSuggestionChips`, which could be a simple string that the Voxa renderer will get from your views file with an array of strings, or directly an array of strings. For more information about this, check how to add [Postback Buttons](https://developers.facebook.com/docs/messenger-platform/send-messages/buttons#postback).
```
app.onState('someState', () => {
return {
facebookSuggestionChips: ["YES", "NO"],
textp: "Select YES or NO",
to: "entry",
};
});
```
Or you can also handle these values from your views file
```
app.onState('someState', () => {
return {
reply: "FacebookSuggestionChips"
};
});
.....
views
.....
{
"FacebookSuggestionChips": {
"facebookSuggestionChips": ["YES", "NO"]
}
}
```
### Carousel[¶](#carousel)
You need to include in your controller the following field: `facebookCarousel`, which takes an object with an array of elements to be taken as items in a generic list of buttons. For more information about the carousel, check how to add a [Generic Template](https://developers.facebook.com/docs/messenger-platform/send-messages/template/generic).
```
const {
FACEBOOK_BUTTONS,
FACEBOOK_WEBVIEW_HEIGHT_RATIO,
FacebookButtonTemplateBuilder,
FacebookElementTemplateBuilder,
FacebookTemplateBuilder,
} = require('voxa');
app.onState('someState', () => {
const buttonBuilder1 = new FacebookButtonTemplateBuilder();
const buttonBuilder2 = new FacebookButtonTemplateBuilder();
const elementBuilder1 = new FacebookElementTemplateBuilder();
const elementBuilder2 = new FacebookElementTemplateBuilder();
const facebookTemplateBuilder = new FacebookTemplateBuilder();
buttonBuilder1
.setTitle("Go to see this URL")
.setType(FACEBOOK_BUTTONS.WEB_URL)
.setUrl("https://www.example.com/imgs/imageExample.png");
buttonBuilder2
.setPayload("value")
.setTitle("Send this to chat")
.setType(FACEBOOK_BUTTONS.POSTBACK);
elementBuilder1
.addButton(buttonBuilder1.build())
.addButton(buttonBuilder2.build())
.setDefaultActionUrl("https://www.example.com/imgs/imageExample.png")
.setDefaultMessengerExtensions(false)
.setDefaultWebviewHeightRatio(FACEBOOK_WEBVIEW_HEIGHT_RATIO.COMPACT)
.setImageUrl("https://www.w3schools.com/colors/img_colormap.gif")
.setSubtitle("subtitle")
.setTitle("title");
elementBuilder2
.addButton(buttonBuilder1.build())
.addButton(buttonBuilder2.build())
.setDefaultActionUrl("https://www.example.com/imgs/imageExample.png")
.setDefaultMessengerExtensions(false)
.setDefaultWebviewHeightRatio(FACEBOOK_WEBVIEW_HEIGHT_RATIO.TALL)
.setImageUrl("https://www.w3schools.com/colors/img_colormap.gif")
.setSubtitle("subtitle")
.setTitle("title");
facebookTemplateBuilder
.addElement(elementBuilder1.build())
.addElement(elementBuilder2.build());
return {
facebookCarousel: facebookTemplateBuilder.build(),
};
});
```
Or you can also handle these values from your views file
```
app.onState('someState', () => {
return {
reply: "FacebookCarousel"
};
});
.....
views
.....
{
"FacebookCarousel": {
"facebookCarousel": "{carousel}"
}
}
.........
variables
.........
carousel: function carousel(request) {
const buttons = [
{
title: "Go to see this URL",
type: FACEBOOK_BUTTONS.WEB_URL,
url: "https://www.example.com/imgs/imageExample.png",
},
{
payload: "value",
title: "Send this to chat",
type: FACEBOOK_BUTTONS.POSTBACK,
},
];
const carousel = {
elements: [
{
buttons,
defaultActionUrl: "https://www.example.com/imgs/imageExample.png",
defaultMessengerExtensions: false,
defaultWebviewHeightRatio: FACEBOOK_WEBVIEW_HEIGHT_RATIO.COMPACT,
imageUrl: "https://www.w3schools.com/colors/img_colormap.gif",
subtitle: "subtitle",
title: "title",
},
{
buttons,
defaultActionUrl: "https://www.example.com/imgs/imageExample.png",
defaultMessengerExtensions: false,
defaultWebviewHeightRatio: FACEBOOK_WEBVIEW_HEIGHT_RATIO.TALL,
imageUrl: "https://www.w3schools.com/colors/img_colormap.gif",
subtitle: "subtitle",
title: "title",
},
],
};
return carousel;
},
```
### List[¶](#list)
You need to include in your controller the following field: `facebookList`, which takes an object with an array of elements to be taken as items in a list of buttons. For more information about the list, check how to add a [List Template](https://developers.facebook.com/docs/messenger-platform/send-messages/template/list).
```
const {
FACEBOOK_BUTTONS,
FACEBOOK_WEBVIEW_HEIGHT_RATIO,
FACEBOOK_TOP_ELEMENT_STYLE,
FacebookButtonTemplateBuilder,
FacebookElementTemplateBuilder,
FacebookTemplateBuilder,
} = require('voxa');
app.onState('someState', () => {
const buttonBuilder1 = new FacebookButtonTemplateBuilder();
const buttonBuilder2 = new FacebookButtonTemplateBuilder();
const elementBuilder1 = new FacebookElementTemplateBuilder();
const elementBuilder2 = new FacebookElementTemplateBuilder();
const elementBuilder3 = new FacebookElementTemplateBuilder();
const facebookTemplateBuilder = new FacebookTemplateBuilder();
buttonBuilder1
.setPayload("payload")
.setTitle("View More")
.setType(FACEBOOK_BUTTONS.POSTBACK);
buttonBuilder2
.setTitle("View")
.setType(FACEBOOK_BUTTONS.WEB_URL)
.setUrl("https://www.scottcountyiowa.com/sites/default/files/images/pages/IMG_6541-960x720_0.jpg")
.setWebviewHeightRatio(FACEBOOK_WEBVIEW_HEIGHT_RATIO.FULL);
elementBuilder1
.addButton(buttonBuilder2.build())
.setImageUrl("https://www.scottcountyiowa.com/sites/default/files/images/pages/IMG_6541-960x720_0.jpg")
.setSubtitle("See all our colors")
.setTitle("Classic T-Shirt Collection");
elementBuilder2
.setDefaultActionUrl("https://www.w3schools.com")
.setDefaultWebviewHeightRatio(FACEBOOK_WEBVIEW_HEIGHT_RATIO.TALL)
.setImageUrl("https://www.scottcountyiowa.com/sites/default/files/images/pages/IMG_6541-960x720_0.jpg")
.setSubtitle("See all our colors")
.setTitle("Classic T-Shirt Collection");
elementBuilder3
.addButton(buttonBuilder2.build())
.setDefaultActionUrl("https://www.w3schools.com")
.setDefaultWebviewHeightRatio(FACEBOOK_WEBVIEW_HEIGHT_RATIO.TALL)
.setImageUrl("https://www.scottcountyiowa.com/sites/default/files/images/pages/IMG_6541-960x720_0.jpg")
.setSubtitle("100% Cotton, 200% Comfortable")
.setTitle("Classic T-Shirt Collection");
facebookTemplateBuilder
.addButton(buttonBuilder1.build())
.addElement(elementBuilder1.build())
.addElement(elementBuilder2.build())
.addElement(elementBuilder3.build())
.setSharable(true)
.setTopElementStyle(FACEBOOK_TOP_ELEMENT_STYLE.LARGE);
return {
facebookList: facebookTemplateBuilder.build(),
};
});
```
Or you can also handle these values from your views file
```
app.onState('someState', () => {
return {
reply: "FacebookList"
};
});
.....
views
.....
{
"FacebookList": {
"facebookList": "{list}"
}
}
.........
variables
.........
list: function list(request) {
const buttonBuilder1 = new FacebookButtonTemplateBuilder();
const buttonBuilder2 = new FacebookButtonTemplateBuilder();
const elementBuilder1 = new FacebookElementTemplateBuilder();
const elementBuilder2 = new FacebookElementTemplateBuilder();
const elementBuilder3 = new FacebookElementTemplateBuilder();
const facebookTemplateBuilder = new FacebookTemplateBuilder();
buttonBuilder1
.setPayload("payload")
.setTitle("View More")
.setType(FACEBOOK_BUTTONS.POSTBACK);
buttonBuilder2
.setTitle("View")
.setType(FACEBOOK_BUTTONS.WEB_URL)
.setUrl("https://www.scottcountyiowa.com/sites/default/files/images/pages/IMG_6541-960x720_0.jpg")
.setWebviewHeightRatio(FACEBOOK_WEBVIEW_HEIGHT_RATIO.FULL);
elementBuilder1
.addButton(buttonBuilder2.build())
.setImageUrl("https://www.scottcountyiowa.com/sites/default/files/images/pages/IMG_6541-960x720_0.jpg")
.setSubtitle("See all our colors")
.setTitle("Classic T-Shirt Collection");
elementBuilder2
.setDefaultActionUrl("https://www.w3schools.com")
.setDefaultWebviewHeightRatio(FACEBOOK_WEBVIEW_HEIGHT_RATIO.TALL)
.setImageUrl("https://www.scottcountyiowa.com/sites/default/files/images/pages/IMG_6541-960x720_0.jpg")
.setSubtitle("See all our colors")
.setTitle("Classic T-Shirt Collection");
elementBuilder3
.addButton(buttonBuilder2.build())
.setDefaultActionUrl("https://www.w3schools.com")
.setDefaultWebviewHeightRatio(FACEBOOK_WEBVIEW_HEIGHT_RATIO.TALL)
.setImageUrl("https://www.scottcountyiowa.com/sites/default/files/images/pages/IMG_6541-960x720_0.jpg")
.setSubtitle("100% Cotton, 200% Comfortable")
.setTitle("Classic T-Shirt Collection");
facebookTemplateBuilder
.addButton(buttonBuilder1.build())
.addElement(elementBuilder1.build())
.addElement(elementBuilder2.build())
.addElement(elementBuilder3.build())
.setSharable(true)
.setTopElementStyle(FACEBOOK_TOP_ELEMENT_STYLE.LARGE);
return facebookTemplateBuilder.build();
},
```
### Button Template[¶](#button-template)
You need to include in your controller the following field: `facebookButtonTemplate`, which takes an object with an array of buttons to be taken as items in a list of buttons. For more information about the button template, check how to add a [Button Template](https://developers.facebook.com/docs/messenger-platform/send-messages/template/button).
```
const {
FACEBOOK_BUTTONS,
FacebookButtonTemplateBuilder,
FacebookTemplateBuilder,
} = require('voxa');
app.onState('someState', () => {
const buttonBuilder1 = new FacebookButtonTemplateBuilder();
const buttonBuilder2 = new FacebookButtonTemplateBuilder();
const buttonBuilder3 = new FacebookButtonTemplateBuilder();
const facebookTemplateBuilder = new FacebookTemplateBuilder();
buttonBuilder1
.setPayload("payload")
.setTitle("View More")
.setType(FACEBOOK_BUTTONS.POSTBACK);
buttonBuilder2
.setPayload("1234567890")
.setTitle("<NAME>")
.setType(FACEBOOK_BUTTONS.PHONE_NUMBER);
buttonBuilder3
.setTitle("Go to Twitter")
.setType(FACEBOOK_BUTTONS.WEB_URL)
.setUrl("http://www.twitter.com");
facebookTemplateBuilder
.addButton(buttonBuilder1.build())
.addButton(buttonBuilder2.build())
.addButton(buttonBuilder3.build())
.setText("What do you want to do?");
return {
facebookButtonTemplate: facebookTemplateBuilder.build(),
};
});
```
Or you can also handle these values from your views file
```
app.onState('someState', () => {
return {
reply: "FacebookButtonTemplate"
};
});
.....
views
.....
{
"FacebookButtonTemplate": {
"facebookButtonTemplate": "{buttonTemplate}"
}
}
.........
variables
.........
buttonTemplate: function buttonTemplate(request) {
const buttonBuilder1 = new FacebookButtonTemplateBuilder();
const buttonBuilder2 = new FacebookButtonTemplateBuilder();
const buttonBuilder3 = new FacebookButtonTemplateBuilder();
const facebookTemplateBuilder = new FacebookTemplateBuilder();
buttonBuilder1
.setPayload("payload")
.setTitle("View More")
.setType(FACEBOOK_BUTTONS.POSTBACK);
buttonBuilder2
.setPayload("1234567890")
.setTitle("Call John")
.setType(FACEBOOK_BUTTONS.PHONE_NUMBER);
buttonBuilder3
.setTitle("Go to Twitter")
.setType(FACEBOOK_BUTTONS.WEB_URL)
.setUrl("http://www.twitter.com");
facebookTemplateBuilder
.addButton(buttonBuilder1.build())
.addButton(buttonBuilder2.build())
.addButton(buttonBuilder3.build())
.setText("What do you want to do?");
return facebookTemplateBuilder.build();
},
```
### Open Graph Template[¶](#open-graph-template)
You need to include in your controller the following field: `facebookOpenGraphTemplate`, which takes an object with an array of buttons to be taken as items in a list of buttons, and a url for the open graph link. For more information, check how to add an [Open Graph Template](https://developers.facebook.com/docs/messenger-platform/send-messages/template/open-graph).
```
const {
  FACEBOOK_BUTTONS,
  FacebookButtonTemplateBuilder,
  FacebookElementTemplateBuilder,
  FacebookTemplateBuilder,
} = require('voxa');
app.onState('someState', () => {
const elementBuilder1 = new FacebookElementTemplateBuilder();
const buttonBuilder1 = new FacebookButtonTemplateBuilder();
const buttonBuilder2 = new FacebookButtonTemplateBuilder();
const facebookTemplateBuilder = new FacebookTemplateBuilder();
buttonBuilder1
.setTitle("Go to Wikipedia")
.setType(FACEBOOK_BUTTONS.WEB_URL)
.setUrl("https://en.wikipedia.org/wiki/Rickrolling");
buttonBuilder2
.setTitle("Go to Twitter")
.setType(FACEBOOK_BUTTONS.WEB_URL)
.setUrl("http://www.twitter.com");
elementBuilder1
.addButton(buttonBuilder1.build())
.addButton(buttonBuilder2.build())
.setUrl("https://open.spotify.com/track/7GhIk7Il098yCjg4BQjzvb");
facebookTemplateBuilder
.addElement(elementBuilder1.build());
return {
facebookOpenGraphTemplate: facebookTemplateBuilder.build(),
};
});
```
Or you can also handle these values from your views file
```
app.onState('someState', () => {
return {
reply: "FacebookOpenGraphTemplate"
};
});
.....
views
.....
{
"FacebookOpenGraphTemplate": {
"facebookOpenGraphTemplate": "{openGraphTemplate}"
}
}
.........
variables
.........
openGraphTemplate: function openGraphTemplate(request) {
const elementBuilder1 = new FacebookElementTemplateBuilder();
const buttonBuilder1 = new FacebookButtonTemplateBuilder();
const buttonBuilder2 = new FacebookButtonTemplateBuilder();
const facebookTemplateBuilder = new FacebookTemplateBuilder();
buttonBuilder1
.setTitle("Go to Wikipedia")
.setType(FACEBOOK_BUTTONS.WEB_URL)
.setUrl("https://en.wikipedia.org/wiki/Rickrolling");
buttonBuilder2
.setTitle("Go to Twitter")
.setType(FACEBOOK_BUTTONS.WEB_URL)
.setUrl("http://www.twitter.com");
elementBuilder1
.addButton(buttonBuilder1.build())
.addButton(buttonBuilder2.build())
.setUrl("https://open.spotify.com/track/7GhIk7Il098yCjg4BQjzvb");
facebookTemplateBuilder
.addElement(elementBuilder1.build());
return facebookTemplateBuilder.build();
},
```
For more information check the [Dialogflow documentation for Facebook Messenger](https://dialogflow.com/docs/integrations/facebook)
#### Telegram[¶](#telegram)
The `DialogflowPlatform` for Voxa can be easily integrated with Telegram; just make sure to use
`text` responses in your controllers and everything should work as usual.
For more information check the [Dialogflow documentation for telegram](https://dialogflow.com/docs/integrations/telegram)
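As an illustration, a minimal controller for a Telegram-backed agent could reuse the plain-text keys shown in the Botframework Text and Text P sections above; the intent name here is just an example.
```
app.onIntent('LaunchIntent', {
  sayp: 'Welcome!',
  textp: 'Welcome!', // plain text rendered in the Telegram chat
  flow: 'yield',
  to: 'entry',
});
```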
Alexa APIs[¶](#alexa-apis)
---
Amazon has integrated several APIs so skills can leverage Alexa configurations, as well as device and user information.
### Customer Contact Information Reference[¶](#customer-contact-information-reference)
When a customer enables your Alexa skill, your skill can request the customer's permission to access their contact information, which includes name, email address and phone number, if the customer has consented. You can then use this data to support personalized intents to enhance the customer experience without account linking. For example, your skill may use customer contact information to make a reservation at a nearby restaurant and send a confirmation to the customer.
*class* `CustomerContact`(*alexaEvent*)[¶](#CustomerContact)
| Arguments: | * **alexaEvent** ([*VoxaEvent.rawEvent*](index.html#VoxaEvent.rawEvent)) – Alexa Event object.
|
`CustomerContact.``getEmail`()[¶](#CustomerContact.getEmail)
Gets user’s email
| Returns string: | A string with user’s email address |
`CustomerContact.``getGivenName`()[¶](#CustomerContact.getGivenName)
Gets user’s given name
| Returns string: | A string with user’s given name |
`CustomerContact.``getName`()[¶](#CustomerContact.getName)
Gets user’s full name
| Returns string: | A string with user’s full name |
`CustomerContact.``getPhoneNumber`()[¶](#CustomerContact.getPhoneNumber)
Gets user’s phone number
| Returns object: | A JSON object with user’s phone number and country code |
`CustomerContact.``getFullUserInformation`()[¶](#CustomerContact.getFullUserInformation)
Gets name or given name, phone number, and email address
| Returns object: | A JSON object with user’s info with the following structure |
```
{
"countryCode": "string",
"email": "string",
"givenName": "string",
"name": "string",
"phoneNumber": "string"
}
```
With Voxa, you can ask for the user’s full name like this:
```
app.onIntent('FullAddressIntent', async (voxaEvent) => {
const name = await voxaEvent.alexa.customerContact.getName();
voxaEvent.model.name = name;
return { ask: 'CustomerContact.Name' };
});
```
Voxa also has a method to request all parameters at once:
```
app.onIntent('FullAddressIntent', async (voxaEvent) => {
const info = await voxaEvent.alexa.customerContact.getFullUserInformation();
const { countryCode, email, name, phoneNumber } = info;
voxaEvent.model.countryCode = countryCode;
voxaEvent.model.email = email;
voxaEvent.model.name = name;
voxaEvent.model.phoneNumber = phoneNumber;
return { ask: 'CustomerContact.FullInfo' };
});
```
To send a card requesting the user's permission to access their information, you can simply add the card object to the view in your views.js file with the following format:
```
ContactPermission: {
tell: 'Before accessing your information, you need to give me permission. Go to your Alexa app, I just sent a link.',
card: {
type: 'AskForPermissionsConsent',
permissions: [
'alexa::profile:name:read',
'alexa::profile:email:read',
'alexa::profile:mobile_number:read'
],
},
},
```
### Device Address Information Reference[¶](#device-address-information-reference)
When a customer enables your Alexa skill, your skill can obtain the customer’s permission to use address data associated with the customer’s Alexa device. You can then use this address data to provide key functionality for the skill, or to enhance the customer experience. For example, your skill could provide a list of nearby store locations or provide restaurant recommendations using this address information. This document describes how to enable this capability and query the Device Address API for address data.
Note that the address entered in the Alexa device may not represent the current physical address of the device. This API uses the address that the customer has entered manually in the Alexa app, and does not have any capability of testing for GPS or other location-based data.
*class* `DeviceAddress`(*alexaEvent*)[¶](#DeviceAddress)
| Arguments: | * **alexaEvent** ([*VoxaEvent.rawEvent*](index.html#VoxaEvent.rawEvent)) – Alexa Event object.
|
`DeviceAddress.``getAddress`()[¶](#DeviceAddress.getAddress)
Gets full address info
| Returns object: | A JSON object with the full address info |
`DeviceAddress.``getCountryRegionPostalCode`()[¶](#DeviceAddress.getCountryRegionPostalCode)
Gets country/region and postal code
| Returns object: | A JSON object with country/region info |
With Voxa, you can ask for the device's full address like this:
```
app.onIntent('FullAddressIntent', async (voxaEvent) => {
const info = await voxaEvent.alexa.deviceAddress.getAddress();
voxaEvent.model.deviceInfo = `${info.addressLine1}, ${info.city}, ${info.countryCode}`;
return { ask: 'DeviceAddress.FullAddress' };
});
```
You can decide to only get the country/region and postal code. You can do it this way:
```
app.onIntent('PostalCodeIntent', async (voxaEvent) => {
const info = await voxaEvent.alexa.deviceAddress.getCountryRegionPostalCode();
voxaEvent.model.deviceInfo = `${info.postalCode}, ${info.countryCode}`;
return { ask: 'DeviceAddress.PostalCode' };
});
```
To send a card requesting the user's permission to access the device address info, you can simply add the card object to the view in your views.js file with the following format:
```
FullAddressPermision: {
tell: 'Before accessing your full address, you need to give me permission. Go to your Alexa app, I just sent a link.',
card: {
type: 'AskForPermissionsConsent',
permissions: [
'read::alexa:device:all:address',
],
},
},
PostalCodePermission: {
tell: 'Before accessing your postal code, you need to give me permission. Go to your Alexa app, I just sent a link.',
card: {
type: 'AskForPermissionsConsent',
permissions: [
'read::alexa:device:all:address:country_and_postal_code',
],
},
},
```
### Device Settings Reference[¶](#device-settings-reference)
Alexa customers can set their timezone, distance measuring unit, and temperature measurement unit in the Alexa app. The Alexa Settings APIs allow developers to retrieve customer preferences for these settings in a unified view.
*class* `DeviceSettings`(*voxaEvent*)[¶](#DeviceSettings)
| Arguments: | * **alexaEvent** ([*VoxaEvent.rawEvent*](index.html#VoxaEvent.rawEvent)) – Alexa Event object.
|
`DeviceSettings.``getDistanceUnits`()[¶](#DeviceSettings.getDistanceUnits)
Gets distance units
| Returns string: | A string with the distance units |
`DeviceSettings.``getTemperatureUnits`()[¶](#DeviceSettings.getTemperatureUnits)
Gets temperature units
| Returns string: | A string with the temperature units |
`DeviceSettings.``getTimezone`()[¶](#DeviceSettings.getTimezone)
Gets timezone
| Returns string: | A string with the timezone value |
`DeviceSettings.``getSettings`()[¶](#DeviceSettings.getSettings)
Gets all settings details
| Returns object: | A JSON object with device’s info with the following structure |
```
{
"distanceUnits": "string",
"temperatureUnits": "string",
"timezone": "string"
}
```
With Voxa, you can ask for the full device settings like this:
```
app.onIntent('FullSettingsIntent', async (voxaEvent) => {
const info = await voxaEvent.alexa.deviceSettings.getSettings();
voxaEvent.model.settingsInfo = `${info.distanceUnits}, ${info.temperatureUnits}, ${info.timezone}`;
return { ask: 'DeviceSettings.FullSettings' };
});
```
You don't need to request the user's permission to access the device settings info.
### In-Skill Purchases Reference[¶](#in-skill-purchases-reference)
The [in-skill purchasing](https://developer.amazon.com/docs/in-skill-purchase/isp-overview.html) feature enables you to sell premium content such as game features and interactive stories for use in skills with a custom interaction model.
Buying these products in a skill is seamless to a user. They may ask to shop products, buy products by name, or agree to purchase suggestions you make while they interact with a skill. Customers pay for products using the payment options associated with their Amazon account.
For more information about setting up ISP with the ASK CLI, follow this [link](https://developer.amazon.com/docs/in-skill-purchase/use-the-cli-to-manage-in-skill-products.html). And to understand the process behind the ISP requests and responses to the Alexa Service, click [here](https://developer.amazon.com/docs/in-skill-purchase/add-isps-to-a-skill.html).
With Voxa, you can implement all ISP features like buying, refunding and upselling an item:
```
app.onIntent('BuyIntent', async (voxaEvent) => {
const { productName } = voxaEvent.intent.params;
const token = 'startState';
const buyDirective = await voxaEvent.alexa.isp.buyByReferenceName(productName, token);
return { alexaConnectionsSendRequest: buyDirective };
});
app.onIntent('RefundIntent', async (voxaEvent) => {
const { productName } = voxaEvent.intent.params;
const token = 'startState';
const buyDirective = await voxaEvent.alexa.isp.cancelByReferenceName(productName, token);
return { alexaConnectionsSendRequest: buyDirective };
});
```
You can also check whether the ISP feature is allowed in a locale, or whether the account is correctly set up in the markets where ISP works, just by checking with the isAllowed() function.
```
app.onIntent('UpsellIntent', async (voxaEvent) => {
if (!voxaEvent.alexa.isp.isAllowed()) {
return { ask: 'ISP.Invalid', to: 'entry' };
}
  const { productName } = voxaEvent.intent.params;
  const upsellMessage = 'Would you like to buy this product?'; // placeholder upsell prompt, not part of the original example
  const token = 'startState';
const buyDirective = await voxaEvent.alexa.isp.upsellByReferenceName(productName, upsellMessage, token);
return { alexaConnectionsSendRequest: buyDirective };
});
```
To get the full list of products and know which ones have been purchased, you can do it like this:
```
app.onIntent('ProductsIntent', async (voxaEvent) => {
const list = await voxaEvent.alexa.isp.getProductList();
voxaEvent.model.productArray = list.inSkillProducts.map(x => x.referenceName);
return { ask: 'Products.List', to: 'entry' };
});
```
When users accept or refuse to buy/cancel an item, Alexa sends a Connections.Response directive. A very simple example of how the Connections.Response JSON request from Alexa looks is:
```
{
"type": "Connections.Response",
"requestId": "string",
"timestamp": "string",
"name": "Upsell",
"status": {
"code": "string",
"message": "string"
},
"payload": {
"purchaseResult": "ACCEPTED",
"productId": "string",
"message": "optional additional message"
},
"token": "string"
}
```
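As a hedged sketch of how you might react to that request in Voxa, assuming Connections.Response is routed through onIntent like the other request types in these examples, and with hypothetical view names Products.PurchaseAccepted and Products.PurchaseDeclined:
```
app.onIntent('Connections.Response', async (voxaEvent) => {
  // purchaseResult comes from the payload shown in the JSON above.
  const { purchaseResult } = voxaEvent.rawEvent.request.payload;

  if (purchaseResult === 'ACCEPTED') {
    return { ask: 'Products.PurchaseAccepted', to: 'entry' };
  }

  return { ask: 'Products.PurchaseDeclined', to: 'entry' };
});
```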
### Alexa Shopping and To-Do Lists Reference[¶](#alexa-shopping-and-to-do-lists-reference)
Alexa customers have access to two default lists: Alexa to-do and Alexa shopping. In addition, Alexa customers can create and manage [custom lists](https://developer.amazon.com/docs/custom-skills/access-the-alexa-shopping-and-to-do-lists.html) in skills that support them.
Customers can review and modify their Alexa lists using voice through a device with Alexa or via the Alexa app. For example, a customer can tell Alexa to add items to the Alexa Shopping List at home, and then while at the store, view the items via the Alexa app, and check them off.
*class* `Lists`(*alexaEvent*)[¶](#Lists)
| Arguments: | * **alexaEvent** ([*VoxaEvent.rawEvent*](index.html#VoxaEvent.rawEvent)) – Alexa Raw Event object.
|
`Lists.``getDefaultShoppingList`()[¶](#Lists.getDefaultShoppingList)
Gets info for the Alexa default Shopping list
| Returns Object: | A JSON object with the Shopping list info |
`Lists.``getDefaultToDoList`()[¶](#Lists.getDefaultToDoList)
Gets info for the Alexa default To-Do list
| Returns Object: | A JSON object with the To-Do list info |
`Lists.``getListMetadata`()[¶](#Lists.getListMetadata)
Gets list metadata for all user’s lists including the default list
| Returns Array: | An array of list metadata objects |
`Lists.``getListById`(*listId*, *status = 'active'*)[¶](#Lists.getListById)
Gets specific list by id and status
| Arguments: | * **listId** – List ID.
* **status** – list status, defaults to active (only value accepted for now)
|
| Returns Object: | A JSON object with the specific list info. |
`Lists.``getOrCreateList`(*name*)[¶](#Lists.getOrCreateList)
Looks for a list by name and returns it; if it is not found, it creates a new list with that name and returns it.
| Arguments: | * **name** – List name.
|
| Returns Object: | A JSON object with the specific list info. |
`Lists.``createList`(*name*, *state = 'active'*)[¶](#Lists.createList)
Creates a new list with the name and state.
| Arguments: | * **name** – List name.
* **state** – list status, defaults to active (only value accepted for now)
|
| Returns Object: | A JSON object with the specific list info. |
`Lists.``updateList`(*listId*, *name*, *state = 'active'*, *version*)[¶](#Lists.updateList)
Updates a list with the given name, state, and version.
| Arguments: | * **listId** – List ID.
* **name** – List name.
* **state** – list status, defaults to active (only value accepted for now)
* **version** – List version.
|
| Returns Object: | A JSON object with the specific list info. |
`Lists.``deleteList`(*listId*)[¶](#Lists.deleteList)
Deletes a list by ID.
| Arguments: | * **listId** – List ID.
|
| Returns: | undefined. HTTP response with 200 or error if any. |
`Lists.``getListItem`(*listId*, *itemId*)[¶](#Lists.getListItem)
Gets information about a specific item in a list.
| Arguments: | * **listId** – List ID.
* **itemId** – Item ID.
|
| Returns Object: | A JSON object with the specific item info. |
`Lists.``createItem`(*listId*, *value*, *status = 'active'*)[¶](#Lists.createItem)
Creates a new item in the specified list.
| Arguments: | * **listId** – List ID.
* **value** – Item name.
* **status** – item status, defaults to active. Other values accepted: ‘completed’
|
| Returns Object: | A JSON object with the specific item info. |
`Lists.``updateItem`(*listId*, *itemId*, *value*, *status*, *version*)[¶](#Lists.updateItem)
Updates an item in the specified list.
| Arguments: | * **listId** – List ID.
* **itemId** – Item ID.
* **value** – Item name.
* **status** – Item status. Values accepted: ‘active | completed’
* **version** – Item version.
|
| Returns Object: | A JSON object with the specific item info. |
`Lists.``deleteItem`(*listId*, *itemId*)[¶](#Lists.deleteItem)
Deletes an item from the specified list.
| Arguments: | * **listId** – List ID.
* **itemId** – Item ID.
|
| Returns: | undefined. HTTP response with 200 or error if any. |
With Voxa, you can implement all list features. The following snippet checks whether a list exists; if it does not, it creates one. If it does exist, it checks whether an item is already in the list and updates the list with a new version; if the item is not there, it adds it:
```
app.onIntent('AddItemToListIntent', async (voxaEvent) => {
const { productName } = voxaEvent.intent.params;
const listsMetadata = await voxaEvent.alexa.lists.getListMetadata();
const listName = 'MY_CUSTOM_LIST';
const listMeta = _.find(listsMetadata.lists, { name: listName });
let itemInfo;
let listInfo;
if (listMeta) {
listInfo = await voxaEvent.alexa.lists.getListById(listMeta.listId);
itemInfo = _.find(listInfo.items, { value: productName });
await voxaEvent.alexa.lists.updateList(listMeta.name, 'active', 2);
} else {
listInfo = await voxaEvent.alexa.lists.createList(listName);
}
if (itemInfo) {
return { ask: 'List.ProductAlreadyInList' };
}
await voxaEvent.alexa.lists.createItem(listInfo.listId, productName);
return { ask: 'List.ProductCreated' };
});
```
There’s also a faster way to consult and/or create a list. Follow this example:
```
app.onIntent('AddItemToListIntent', async (voxaEvent) => {
const { productName } = voxaEvent.intent.params;
const listName = 'MY_CUSTOM_LIST';
const listInfo = await voxaEvent.alexa.lists.getOrCreateList(listName);
const itemInfo = _.find(listInfo.items, { value: productName });
if (itemInfo) {
return { ask: 'List.ProductAlreadyInList' };
}
await voxaEvent.alexa.lists.createItem(listInfo.listId, productName);
return { ask: 'List.ProductCreated' };
});
```
Let’s review another example. Let’s say we have an activity in the default To-Do list and we want to mark it as completed. For that, we need to pull down the items from the default To-Do list, find our item and modify it:
```
app.onIntent('CompleteActivityIntent', async (voxaEvent) => {
const { activity } = voxaEvent.intent.params;
const listInfo = await voxaEvent.alexa.lists.getDefaultToDoList();
const itemInfo = _.find(listInfo.items, { value: activity });
await voxaEvent.alexa.lists.updateItem(
listInfo.listId,
itemInfo.id,
activity,
'completed',
2);
return { ask: 'List.ActivityCompleted' };
});
```
Let’s check another example. Say users want to remove an item from their default shopping list that they had already marked as completed. We first fetch the default shopping list’s info, then look for the product to remove, check whether it is still marked as active (asking for confirmation in that case), and delete it only once it is completed:
```
app.onIntent('RemoveProductIntent', async (voxaEvent) => {
const { productId } = voxaEvent.model;
const listInfo = await voxaEvent.alexa.lists.getDefaultShoppingList();
const itemInfo = await voxaEvent.alexa.lists.getListItem(listInfo.listId, productId);
if (itemInfo.status === 'active') {
return { ask: 'List.ConfirmProductDeletion', to: 'wantToDeleteActiveProduct?' };
}
await voxaEvent.alexa.lists.deleteItem(listInfo.listId, productId);
return { ask: 'List.ProductRemoved' };
});
```
Finally, if you want to remove the list you had created:
```
app.onIntent('DeleteListIntent', async (voxaEvent) => {
const listName = 'MY_CUSTOM_LIST';
const listInfo = await voxaEvent.alexa.lists.getOrCreateList(listName);
await voxaEvent.alexa.lists.deleteList(listInfo.listId);
return { ask: 'List.ListRemoved' };
});
```
To send a card requesting the user’s permission to read/write Alexa lists, simply add the card object to the view in your views.js file with the following format:
```
NeedShoppingListPermission: {
tell: 'Before adding an item to your list, you need to give me permission. Go to your Alexa app, I just sent a link.',
card: {
type: 'AskForPermissionsConsent',
permissions: [
'read::alexa:household:list',
'write::alexa:household:list',
],
},
},
```
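One way to use that view is to fall back to it when a list call fails because the user has not granted permission. This is only a sketch: it assumes the missing permission surfaces as a rejected promise and that the view key above can be referenced from the transition like the other views in these examples:
```
app.onIntent('AddItemToListIntent', async (voxaEvent) => {
  try {
    const listInfo = await voxaEvent.alexa.lists.getDefaultShoppingList();
    await voxaEvent.alexa.lists.createItem(listInfo.listId, 'milk');
    return { ask: 'List.ProductCreated' };
  } catch (error) {
    // Assumption: a missing list permission ends up here as a rejected promise,
    // and referencing the view key renders both the speech and the card defined above.
    return { tell: 'NeedShoppingListPermission' };
  }
});
```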
### Alexa Reminders API Reference[¶](#alexa-reminders-api-reference)
Use the Alexa Reminders API to create and manage reminders from your skill. This reference describes the available operations for the Alexa Reminders API.
Note that you need to modify your skill manifest by adding the reminder permission:
```
{
"permissions": [
{
"name": "alexa::alerts:reminders:skill:readwrite"
}
],
}
```
*class* `Reminders`(*alexaEvent*)[¶](#Reminders)
| Arguments: | * **alexaEvent** ([*VoxaEvent.rawEvent*](index.html#VoxaEvent.rawEvent)) – Alexa Event object.
|
`Reminders.``getReminder`(*alertToken*)[¶](#Reminders.getReminder)
Gets a reminder
| Arguments: | * **alertToken** – Reminder’s ID.
|
| Returns object: | A JSON object with the reminder’s details |
`Reminders.``getAllReminders`()[¶](#Reminders.getAllReminders)
Gets all reminders
| Returns object: | A JSON object with an array of the reminder’s details |
`Reminders.``createReminder`(*reminder*)[¶](#Reminders.createReminder)
Creates a reminder
| Arguments: | * **reminder** – Reminder Builder Object.
|
| Returns object: | A JSON object with the details of reminder’s creation |
`Reminders.``updateReminder`(*alertToken*, *reminder*)[¶](#Reminders.updateReminder)
Updates a reminder
| Arguments: | * **alertToken** – Reminder’s ID.
* **reminder** – Reminder Builder Object.
|
| Returns object: | A JSON object with the details of reminder’s update |
`Reminders.``deleteReminder`(*alertToken*)[¶](#Reminders.deleteReminder)
Deletes a reminder
| Arguments: | * **alertToken** – Reminder’s ID.
|
| Returns object: | A JSON object with the details of reminder’s deletion |
*class* `ReminderBuilder`()[¶](#ReminderBuilder)
`ReminderBuilder.``setCreatedTime`(*createdTime*)[¶](#ReminderBuilder.setCreatedTime)
Sets created time
| Arguments: | * **createdTime** – Reminder’s creation time.
|
| Returns object: | A ReminderBuilder object |
`ReminderBuilder.``setRequestTime`(*requestTime*)[¶](#ReminderBuilder.setRequestTime)
Sets request time
| Arguments: | * **requestTime** – Reminder’s request time.
|
| Returns object: | A ReminderBuilder object |
`ReminderBuilder.``setTriggerAbsolute`(*scheduledTime*)[¶](#ReminderBuilder.setTriggerAbsolute)
Sets the reminder trigger as absolute
| Arguments: | * **scheduledTime** – Reminder’s scheduled time.
|
| Returns object: | A ReminderBuilder object |
`ReminderBuilder.``setTriggerRelative`(*offsetInSeconds*)[¶](#ReminderBuilder.setTriggerRelative)
Sets the reminder trigger as relative
| Arguments: | * **offsetInSeconds** – Reminder’s offset in seconds.
|
| Returns object: | A ReminderBuilder object |
`ReminderBuilder.``setTimeZoneId`(*timeZoneId*)[¶](#ReminderBuilder.setTimeZoneId)
Sets time zone Id
| Arguments: | * **timeZoneId** – Reminder’s time zone.
|
| Returns object: | A ReminderBuilder object |
`ReminderBuilder.``setRecurrenceFreqDaily`()[¶](#ReminderBuilder.setRecurrenceFreqDaily)
Sets reminder’s recurrence frequency to “DAILY”
| Returns object: | A ReminderBuilder object |
`ReminderBuilder.``setRecurrenceFreqWeekly`()[¶](#ReminderBuilder.setRecurrenceFreqWeekly)
Sets reminder’s recurrence frequency to “WEEKLY”
| Returns object: | A ReminderBuilder object |
`ReminderBuilder.``setRecurrenceByDay`(*recurrenceByDay*)[¶](#ReminderBuilder.setRecurrenceByDay)
Sets frequency by day
| Arguments: | * **recurrenceByDay** – Array of frequency by day.
|
| Returns object: | A ReminderBuilder object |
`ReminderBuilder.``setRecurrenceInterval`(*interval*)[¶](#ReminderBuilder.setRecurrenceInterval)
Sets reminder’s interval
| Arguments: | * **interval** – Reminder’s interval
|
| Returns object: | A ReminderBuilder object |
`ReminderBuilder.``addContent`(*locale*, *text*)[¶](#ReminderBuilder.addContent)
Sets reminder’s content
| Arguments: | * **locale** – Reminder’s locale
* **text** – Reminder’s text
|
| Returns object: | A ReminderBuilder object |
`ReminderBuilder.``enablePushNotification`()[¶](#ReminderBuilder.enablePushNotification)
Sets reminder’s push notification status to “ENABLED”
| Returns object: | A ReminderBuilder object |
`ReminderBuilder.``disablePushNotification`()[¶](#ReminderBuilder.disablePushNotification)
Sets reminder’s push notification status to “DISABLED”
| Returns object: | A ReminderBuilder object |
With Voxa, you can create, update, delete and get reminders like this:
```
const { ReminderBuilder } = require("voxa");
app.onIntent('CreateReminderIntent', async (voxaEvent) => {
const reminder = new ReminderBuilder()
.setCreatedTime("2018-12-11T14:05:38.811")
.setTriggerAbsolute("2018-12-12T12:00:00.000")
.setTimeZoneId("America/Denver")
.setRecurrenceFreqDaily()
.addContent("en-US", "CREATION REMINDER TEST")
.enablePushNotification();
const reminderResponse = await voxaEvent.alexa.reminders.createReminder(reminder);
voxaEvent.model.reminder = reminderResponse;
return { tell: "Reminder.Created" };
});
app.onIntent('UpdateReminderIntent', async (voxaEvent) => {
const alertToken = '1234-5678-9012-3456';
const reminder = new ReminderBuilder()
.setRequestTime("2018-12-11T14:05:38.811")
.setTriggerAbsolute("2018-12-12T12:00:00.000")
.setTimeZoneId("America/Denver")
.setRecurrenceFreqDaily()
.addContent("en-US", "CREATION REMINDER TEST")
.enablePushNotification();
const reminderResponse = await voxaEvent.alexa.reminders.updateReminder(alertToken, reminder);
voxaEvent.model.reminder = reminderResponse;
return { tell: "Reminder.Updated" };
});
app.onIntent('DeleteReminderIntent', async (voxaEvent) => {
const alertToken = '1234-5678-9012-3456';
const reminderResponse = await voxaEvent.alexa.reminders.deleteReminder(alertToken);
return { tell: "Reminder.Deleted" };
});
app.onIntent('GetReminderIntent', async (voxaEvent) => {
const alertToken = '1234-5678-9012-3456';
const reminderResponse = await voxaEvent.alexa.reminders.getReminder(alertToken);
voxaEvent.model.reminder = reminderResponse.alerts[0];
return { tell: "Reminder.Get" };
});
app.onIntent('GetAllRemindersIntent', async (voxaEvent) => {
const reminderResponse = await voxaEvent.alexa.reminders.getAllReminders();
voxaEvent.model.reminders = reminderResponse.alerts;
return { tell: "Reminder.Get" };
});
```
### Skill Messaging API Reference[¶](#skill-messaging-api-reference)
The Skill Messaging API is used to send message requests to skills. These methods are meant for out-of-session operations, so you will not likely use them in the skill code itself. However, you might have a separate file inside your Voxa project that works with automated triggers like CloudWatch Events or SQS functions. In that case, your file has access to the voxa package and can therefore take advantage of these methods.
*class* `Messaging`(*clientId*, *clientSecret*)[¶](#Messaging)
| Arguments: | * **clientId** – Client ID to call Messaging API.
* **clientSecret** – Client Secret to call Messaging API.
|
`Messaging.``sendMessage`(*request*)[¶](#Messaging.sendMessage)
Sends message to a skill
| Arguments: | * **request** – Message request params with the following structure:
|
```
{
  endpoint: string, // User's endpoint.
  userId: string, // User's userId.
  data: any, // Object with key-value pairs to send to the skill.
  expiresAfterSeconds: number, // Expiration time in seconds, defaults to 3600 seconds.
}
```
| Returns: | undefined |
The following example shows simple code for a lambda function that calls your database to fetch the users to whom you’ll send a reminder with the Reminders API; the message itself is sent via the Messaging API:
```
'use strict';
const Promise = require('bluebird');
const { Messaging } = require('voxa');
const Storage = require('./Storage');
const CLIENT_ID = 'CLIENT_ID';
const CLIENT_SECRET = 'CLIENT_SECRET';
exports.handler = async (event, context, callback) => {
const usersOptedIn = await Storage.getUsers();
const messaging = new Messaging(CLIENT_ID, CLIENT_SECRET);
await Promise.map(usersOptedIn, (user) => {
const data = {
timezone: user.timezone,
title: user.reminderTitle,
when: user.reminderTime,
};
const request = {
endpoint: user.endpoint,
userId: user.userId,
data,
};
return messaging.sendMessage(request)
.catch((err) => {
console.log('ERROR SENDING MESSAGE', err);
return null;
});
});
callback(undefined, "OK");
};
```
This will dispatch a ‘Messaging.MessageReceived’ request for every user, and you can handle it in Voxa like this:
```
const { ReminderBuilder } = require("voxa");
app["onMessaging.MessageReceived"](async (voxaEvent, reply) => {
const reminderData = voxaEvent.rawEvent.request.message;
const reminder = new ReminderBuilder()
.setCreatedTime("2018-12-11T14:05:38.811")
.setTriggerAbsolute(reminderData.when)
.setTimeZoneId(reminderData.timezone)
.setRecurrenceFreqDaily()
.addContent("en-US", reminderData.title)
.enablePushNotification();
await voxaEvent.alexa.reminders.createReminder(reminder);
return reply;
});
```
The main advantage of sending a message with the Messaging API is that it generates a new access token valid for 1 hour. This is important for out-of-session operations, where you don’t otherwise have a valid access token. The event sent to your skill carries this fresh token, so you can use it to call any Alexa API that requires an access token in the authorization headers. The request object of the event looks like this:
```
{
"request": {
"type": "Messaging.MessageReceived",
"requestId": "amzn1.echo-api.request.VOID",
"timestamp": "2018-12-17T22:06:28Z",
"message": {
"name": "John"
}
}
}
```
### Proactive Events API Reference[¶](#proactive-events-api-reference)
The ProactiveEvents API enables Alexa Skill Developers to send events to Alexa, which represent factual data that may interest a customer. Upon receiving an event, Alexa proactively delivers the information to customers subscribed to receive these events. This API currently supports one proactive channel, Alexa Notifications. As more proactive channels are added in the future, developers will be able to take advantage of them without requiring integration with a new API.
*class* `ProactiveEvents`(*clientId*, *clientSecret*)[¶](#ProactiveEvents)
| Arguments: | * **clientId** – Client ID to call the Proactive Events API.
* **clientSecret** – Client Secret to call the Proactive Events API.
|
`ProactiveEvents.``createEvent`()[¶](#ProactiveEvents.createEvent)
Creates proactive event
| Arguments: | * **endpoint** – User’s default endpoint
* **body** – Event’s body
* **isDevelopment** – Flag that defines whether the event is sent to the development stage. If false, it goes to the live skill
|
| Returns: | undefined |
This API is meant to work as an out-of-session task, so you’d need to use the Messaging API if you want to send a notification triggered by your server. The following examples show how you can use the different schemas to send proactive events (formerly Notifications):
* WeatherAlertEventsBuilder
```
const { ProactiveEvents, WeatherAlertEventsBuilder } = require("voxa");
const CLIENT_ID = 'CLIENT_ID';
const CLIENT_SECRET = 'CLIENT_SECRET';
app["onMessaging.MessageReceived"](async (voxaEvent, reply) => {
const eventData = voxaEvent.rawEvent.request.message;
const event: WeatherAlertEventsBuilder = new WeatherAlertEventsBuilder();
event
.setHurricane()
.addContent("en-US", "source", eventData.localizedValue)
.setReferenceId(eventData.referenceId)
.setTimestamp(eventData.timestamp)
.setExpiryTime(eventData.expiryTime)
.setUnicast(eventData.userId);
const proactiveEvents = new ProactiveEvents(CLIENT_ID, CLIENT_SECRET);
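// Note: `endpoint` is not defined in these snippets; it is assumed to be the user's
// API endpoint, taken from the incoming message payload or your own storage.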
await proactiveEvents.createEvent(endpoint, event, true);
return reply;
});
```
* SportsEventBuilder
```
const { ProactiveEvents, SportsEventBuilder } = require("voxa");
const CLIENT_ID = 'CLIENT_ID';
const CLIENT_SECRET = 'CLIENT_SECRET';
app["onMessaging.MessageReceived"](async (voxaEvent, reply) => {
const eventData = voxaEvent.rawEvent.request.message;
const event: SportsEventBuilder = new SportsEventBuilder();
event
.setAwayTeamStatistic("Boston Red Sox", 5)
.setHomeTeamStatistic("New York Yankees", 2)
.setUpdate("Boston Red Sox", 5)
.addContent("en-US", "eventLeagueName", eventData.localizedValue)
.setReferenceId(eventData.referenceId)
.setTimestamp(eventData.timestamp)
.setExpiryTime(eventData.expiryTime)
.setUnicast(eventData.userId);
const proactiveEvents = new ProactiveEvents(CLIENT_ID, CLIENT_SECRET);
await proactiveEvents.createEvent(endpoint, event, true);
return reply;
});
```
* MessageAlertEventBuilder
```
const {
MESSAGE_ALERT_FRESHNESS,
MESSAGE_ALERT_STATUS,
MESSAGE_ALERT_URGENCY,
MessageAlertEventBuilder,
ProactiveEvents,
} = require("voxa");
const CLIENT_ID = 'CLIENT_ID';
const CLIENT_SECRET = 'CLIENT_SECRET';
app["onMessaging.MessageReceived"](async (voxaEvent, reply) => {
const eventData = voxaEvent.rawEvent.request.message;
const event: MessageAlertEventBuilder = new MessageAlertEventBuilder();
event
.setMessageGroup(eventData.creatorName, eventData.count, MESSAGE_ALERT_URGENCY.URGENT)
.setState(MESSAGE_ALERT_STATUS.UNREAD, MESSAGE_ALERT_FRESHNESS.NEW)
.setReferenceId(eventData.referenceId)
.setTimestamp(eventData.timestamp)
.setExpiryTime(eventData.expiryTime)
.setUnicast(eventData.userId);
const proactiveEvents = new ProactiveEvents(CLIENT_ID, CLIENT_SECRET);
await proactiveEvents.createEvent(endpoint, event, true);
return reply;
});
```
* OrderStatusEventBuilder
```
const {
ORDER_STATUS,
OrderStatusEventBuilder,
ProactiveEvents,
} = require("voxa");
const CLIENT_ID = 'CLIENT_ID';
const CLIENT_SECRET = 'CLIENT_SECRET';
app["onMessaging.MessageReceived"](async (voxaEvent, reply) => {
const eventData = voxaEvent.rawEvent.request.message;
const event: OrderStatusEventBuilder = new OrderStatusEventBuilder();
event
.setStatus(ORDER_STATUS.ORDER_DELIVERED, eventData.expectedArrival, eventData.enterTimestamp)
.addContent("en-US", "sellerName", eventData.localizedValue)
.setReferenceId(eventData.referenceId)
.setTimestamp(eventData.timestamp)
.setExpiryTime(eventData.expiryTime)
.setUnicast(eventData.userId);
const proactiveEvents = new ProactiveEvents(CLIENT_ID, CLIENT_SECRET);
await proactiveEvents.createEvent(endpoint, event, true);
return reply;
});
```
* OccasionEventBuilder
```
const {
OCCASION_CONFIRMATION_STATUS,
OCCASION_TYPE,
OccasionEventBuilder,
ProactiveEvents,
} = require("voxa");
const CLIENT_ID = 'CLIENT_ID';
const CLIENT_SECRET = 'CLIENT_SECRET';
app["onMessaging.MessageReceived"](async (voxaEvent, reply) => {
const eventData = voxaEvent.rawEvent.request.message;
const event: OccasionEventBuilder = new OccasionEventBuilder();
event
.setOccasion(eventData.bookingType, OCCASION_TYPE.APPOINTMENT)
.setStatus(OCCASION_CONFIRMATION_STATUS.CONFIRMED)
.addContent("en-US", "brokerName", eventData.brokerName)
.addContent("en-US", "providerName", eventData.providerName)
.addContent("en-US", "subject", eventData.subject)
.setReferenceId(eventData.referenceId)
.setTimestamp(eventData.timestamp)
.setExpiryTime(eventData.expiryTime)
.setUnicast(eventData.userId);
const proactiveEvents = new ProactiveEvents(CLIENT_ID, CLIENT_SECRET);
await proactiveEvents.createEvent(endpoint, event, true);
return reply;
});
```
* TrashCollectionAlertEventBuilder
```
const {
GARBAGE_COLLECTION_DAY,
GARBAGE_TYPE,
TrashCollectionAlertEventBuilder,
ProactiveEvents,
} = require("voxa");
const CLIENT_ID = 'CLIENT_ID';
const CLIENT_SECRET = 'CLIENT_SECRET';
app["onMessaging.MessageReceived"](async (voxaEvent, reply) => {
const eventData = voxaEvent.rawEvent.request.message;
const event: TrashCollectionAlertEventBuilder = new TrashCollectionAlertEventBuilder();
event
.setAlert(GARBAGE_COLLECTION_DAY.MONDAY,
GARBAGE_TYPE.BOTTLES,
GARBAGE_TYPE.BULKY,
GARBAGE_TYPE.CANS,
GARBAGE_TYPE.CLOTHING)
.setReferenceId(eventData.referenceId)
.setTimestamp(eventData.timestamp)
.setExpiryTime(eventData.expiryTime)
.setUnicast(eventData.userId);
const proactiveEvents = new ProactiveEvents(CLIENT_ID, CLIENT_SECRET);
await proactiveEvents.createEvent(endpoint, event, true);
return reply;
});
```
* MediaContentEventBuilder
```
const {
MEDIA_CONTENT_METHOD,
MEDIA_CONTENT_TYPE,
MediaContentEventBuilder,
ProactiveEvents,
} = require("voxa");
const CLIENT_ID = 'CLIENT_ID';
const CLIENT_SECRET = 'CLIENT_SECRET';
app["onMessaging.MessageReceived"](async (voxaEvent, reply) => {
const eventData = voxaEvent.rawEvent.request.message;
const event: MediaContentEventBuilder = new MediaContentEventBuilder();
event
.setAvailability(MEDIA_CONTENT_METHOD.AIR)
.setContentType(MEDIA_CONTENT_TYPE.ALBUM)
.addContent("en-US", "providerName", eventData.providerName)
.addContent("en-US", "contentName", eventData.contentName)
.setReferenceId(eventData.referenceId)
.setTimestamp(eventData.timestamp)
.setExpiryTime(eventData.expiryTime)
.setUnicast(eventData.userId);
const proactiveEvents = new ProactiveEvents(CLIENT_ID, CLIENT_SECRET);
await proactiveEvents.createEvent(endpoint, event, true);
return reply;
});
```
* SocialGameInviteEventBuilder
```
const {
  SOCIAL_GAME_INVITE_TYPE,
  SOCIAL_GAME_OFFER,
  SOCIAL_GAME_RELATIONSHIP_TO_INVITEE,
  SocialGameInviteEventBuilder,
  ProactiveEvents,
} = require("voxa");
const CLIENT_ID = 'CLIENT_ID';
const CLIENT_SECRET = 'CLIENT_SECRET';
app["onMessaging.MessageReceived"](async (voxaEvent, reply) => {
const eventData = voxaEvent.rawEvent.request.message;
const event: SocialGameInviteEventBuilder = new SocialGameInviteEventBuilder();
event
.setGame(SOCIAL_GAME_OFFER.GAME)
.setInvite(eventData.name, SOCIAL_GAME_INVITE_TYPE.CHALLENGE, SOCIAL_GAME_RELATIONSHIP_TO_INVITEE.CONTACT)
.addContent("en-US", "gameName", eventData.localizedValue)
.setReferenceId(eventData.referenceId)
.setTimestamp(eventData.timestamp)
.setExpiryTime(eventData.expiryTime)
.setUnicast(eventData.userId);
const proactiveEvents = new ProactiveEvents(CLIENT_ID, CLIENT_SECRET);
await proactiveEvents.createEvent(endpoint, event, true);
return reply;
});
```
CanFulfillIntentRequest[¶](#canfulfillintentrequest)
---
Name-free interaction enables customers to interact with Alexa without invoking a specific skill by name, which helps facilitate greater interaction with Alexa because customers do not always know which skill is appropriate.
When Alexa receives a request from a customer without a skill name, such as “Alexa, play relaxing sounds with crickets,” Alexa looks for skills that might fulfill the request. Alexa determines the best choice among eligible skills and hands the request to the skill.
To make your skill more discoverable for name-free interaction, you can implement the [CanFulfillIntentRequest](https://developer.amazon.com/docs/custom-skills/quick-start-canfulfill-intent-request.html) interface in your skill.
In Voxa, you can take advantage of this feature by following this example:
```
app.onCanFulfillIntentRequest((alexaEvent, reply) => {
if (alexaEvent.intent.name === 'InfluencerIntent') {
reply.fulfillIntent('YES');
_.each(alexaEvent.intent.params, (value, slotName) => {
reply.fulfillSlot(slotName, 'YES', 'YES');
});
}
return reply;
});
```
Voxa offers the [`onCanFulfillIntentRequest`](#id3) function so you can implement it in your code to validate whether you’re going to fulfill the request or not.
Additionally, if you have several intents that you want to automatically fulfill, regardless of the slot values in the request, you can simply add an array of intents to the [`defaultFulfillIntents`](#id5) property of the Voxa config file:
```
const defaultFulfillIntents = [
'NameIntent',
'PhoneIntent',
];
const voxaApp = new VoxaApp({ views, variables });
const alexaSkill = new AlexaPlatform(voxaApp, { defaultFulfillIntents });
```
If Alexa sends an intent that you didn’t register with this function, then you should implement the [`onCanFulfillIntentRequest`](#id7) method to handle it. Important: if you implement this method in your skill, you should always return the [`reply`](#id9) object.
If a skill has implemented canFulfillIntent according to the interface specification, it should be aware that it is not yet being asked to take action on behalf of the customer, and should not modify any state outside its scope or have any observable interaction with its own calling functions or the outside world besides returning a value. Thus, the skill should not, at this point, perform actions such as playing sound, turning lights on or off, providing feedback to the customer, committing a transaction, or making a state change.
Testing using ASK-CLI[¶](#testing-using-ask-cli)
---
There are two options to test this feature manually. The first is using the Manual JSON section of the Test tab in the developer portal; the other is to use the [ASK CLI](https://developer.amazon.com/docs/custom-skills/implement-canfulfillintentrequest-for-name-free-interaction.html#test-the-skill-using-ask-cli) from Amazon.
You can just trigger this command in the console, and you’ll get the result in your terminal:
```
$ ask api invoke-skill --skill-id amzn1.ask.skill.[unique-value-here] --file /path/to/input/json --endpoint-region [endpoint-region-here]
```
LoginWithAmazon[¶](#loginwithamazon)
---
With LoginWithAmazon, you can request a customer profile that contains the data that Login with Amazon applications can access regarding a particular customer. This includes a unique ID for the user, the user’s name, the user’s email address, and their postal code. This data is divided into three scopes: profile, profile:user_id, and postal_code.
LoginWithAmazon provides a seamless solution to get a user’s information using account linking via a web browser. To see more information about it, follow this [link](https://developer.amazon.com/docs/login-with-amazon/customer-profile.html). To implement LoginWithAmazon in your Alexa skill, follow this step-by-step [tutorial](https://developer.amazon.com/blogs/post/Tx3CX1ETRZZ2NPC/Alexa-Account-Linking-5-Steps-to-Seamlessly-Link-Your-Alexa-Skill-with-Login-wit). You can also do account linking via voice. Go [here](index.html#alexa-apis) to check it out!
With Voxa, you can ask for the user’s profile information like this:
```
app.onIntent('ProfileIntent', async (alexaEvent: AlexaEvent) => {
const userInfo = await alexaEvent.getUserInformation();
alexaEvent.model.email = userInfo.email;
alexaEvent.model.name = userInfo.name;
alexaEvent.model.zipCode = userInfo.zipCode;
return { ask: 'CustomerContact.FullInfo' };
});
```
In this case, Voxa detects that you’re running an Alexa skill, so it calls the getUserInformationWithLWA() method. You can also call that method directly: even if you create voice experiences on other platforms like Google or Cortana, you can still take advantage of the platform-specific authentication methods.
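A minimal sketch of calling the helper explicitly, assuming the getUserInformationWithLWA() method mentioned above is exposed on the event:
```
app.onIntent('ProfileIntent', async (voxaEvent) => {
  // Assumption: getUserInformationWithLWA() is the Alexa-specific helper mentioned above.
  const userInfo = await voxaEvent.getUserInformationWithLWA();
  voxaEvent.model.name = userInfo.name;
  voxaEvent.model.email = userInfo.email;
  return { ask: 'CustomerContact.FullInfo' };
});
```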
Google Sign-In[¶](#google-sign-in)
---
Google Sign-In for the Assistant provides the simplest and easiest user experience to users and developers both for account linking and account creation. Your Action can request access to your user’s Google profile during a conversation, including the user’s name, email address, and profile picture.
The profile information can be used to create a personalized user experience in your Action. If you have apps on other platforms and they use Google Sign-In, you can also find and link to an existing user’s account, create a new account, and establish a direct channel of communication to the user.
To perform account linking with Google Sign-In, you ask the user to give consent to access their Google profile. You then use the information in their profile, for example their email address, to identify the user in your system. Check out this [link](https://developers.google.com/actions/identity/google-sign-in) for more information.
With Voxa, you can ask for the user’s profile information like this:
```
app.onIntent('ProfileIntent', async (googleAssistantEvent: GoogleAssistantEvent) => {
const userInfo = await googleAssistantEvent.getUserInformation();
googleAssistantEvent.model.email = userInfo.email;
googleAssistantEvent.model.familyName = userInfo.familyName;
googleAssistantEvent.model.givenName = userInfo.givenName;
googleAssistantEvent.model.name = userInfo.name;
googleAssistantEvent.model.locale = userInfo.locale;
return { ask: 'CustomerContact.FullInfo' };
});
```
In this case, Voxa detects that you’re running a Google Action, so it calls the getUserInformationWithGoogle() method. Since this is a Google-only API, you can’t use this method on other platforms for the moment.
The `reply` Object[¶](#the-reply-object)
---
*class* `IVoxaReply`()[¶](#IVoxaReply)
The `reply` object is used by the framework to render Voxa responses; it takes all of your `statements`, `cards` and `directives` and generates a proper JSON response for each platform.
`IVoxaReply.IVoxaReply.``clear`()[¶](#IVoxaReply.IVoxaReply.clear)
Resets the response object
`IVoxaReply.IVoxaReply.``terminate`()[¶](#IVoxaReply.IVoxaReply.terminate)
Sends a flag to indicate the session will be closed.
`IVoxaReply.IVoxaReply.``addStatement`(*statement*, *isPlain*)[¶](#IVoxaReply.IVoxaReply.addStatement)
Adds statements to the `Reply`
| Arguments: | * **statement** – The string to be spoken by the voice assistant
* **isPlain** – Indicates if the statement is plain text; if null, the statement is treated as SSML
|
`IVoxaReply.IVoxaReply.``addReprompt`(*statement*, *isPlain*)[¶](#IVoxaReply.IVoxaReply.addReprompt)
Adds the reprompt text to the `Reply`
| Arguments: | * **statement** – The string to be spoken by the voice assistant as a reprompt
* **isPlain** – Indicates if the statement is plain text; if null, the statement is treated as SSML
|
`IVoxaReply.IVoxaReply.``hasDirective`()[¶](#IVoxaReply.IVoxaReply.hasDirective)
Verifies if the reply has directives
| Returns: | A boolean flag indicating if the reply object has any kind of directives |
`IVoxaReply.IVoxaReply.``saveSession`(*event*)[¶](#IVoxaReply.IVoxaReply.saveSession)
Converts the model object into session attributes
| Arguments: | * **event** – A Voxa event with session attributes
|
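As a small illustration of these methods, the sketch below appends a plain-text statement to replies that carry no directives; it assumes onBeforeReplySent receives the reply object as its second argument, and the spoken text is just an example:
```
app.onBeforeReplySent((voxaEvent, reply) => {
  // Assumption: the reply object is passed as the second argument of this callback.
  if (!reply.hasDirective()) {
    // The second argument marks the statement as plain text rather than SSML.
    reply.addStatement('By the way, you can say help at any time.', true);
  }
});
```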
For the specific classes used on every platform you can check:
### The `AlexaReply` Object[¶](#the-alexareply-object)
*class* `AlexaReply`()[¶](#AlexaReply)
The `AlexaReply` object is used by the framework to render Alexa responses; it takes all of your `statements`, `cards` and `directives` and generates a proper JSON response for Alexa.
`AlexaReply.Reply.``fulfillIntent`(*canFulfill*)[¶](#AlexaReply.Reply.fulfillIntent)
Fulfills the request in the response object
| Arguments: | * **canFulfill** – A string with possible values: YES | NO | MAYBE to fulfill request
|
`AlexaReply.Reply.``fulfillSlot`(*slotName*, *canUnderstand*, *canFulfill*)[¶](#AlexaReply.Reply.fulfillSlot)
Fulfills the slot with fulfill and understand values
| Arguments: | * **slotName** – A string with the slot to fulfill
* **canUnderstand** – A string with possible values: YES | NO | MAYBE that indicates slot understanding
* **canFulfill** – A string with possible values: YES | NO that indicates slot fulfillment
|
Request Flow[¶](#request-flow)
---
Gadget Controller Interface Reference[¶](#gadget-controller-interface-reference)
---
The [Gadget Controller interface](https://developer.amazon.com/docs/gadget-skills/gadgetcontroller-interface-reference.html) enables your skill to control Echo Buttons. This interface works with compatible Amazon Echo devices only. With the Gadget Controller interface, you can send animations to illuminate the buttons with different colors in a specific order.
With Voxa, you can implement this interface like this:
```
const voxa = require('voxa');
const { GadgetController, TRIGGER_EVENT_ENUM } = voxa.alexa;
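// Note: COLORS is not defined in this snippet; it is assumed to be an array of
// { dark, hex } color values defined elsewhere in your skill.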
app.onIntent('GameEngine.InputHandlerEvent', (voxaEvent) => {
// REMEMBER TO SAVE THE VALUE originatingRequestId in your model
voxaEvent.model.originatingRequestId = voxaEvent.request.originatingRequestId;
const gameEvents = voxaEvent.request.events[0] || [];
const inputEvents = _(gameEvents.inputEvents)
.groupBy('gadgetId')
.map(value => value[0])
.value();
const directives = [];
let customId = 0;
_.forEach(inputEvents, (gadgetEvent) => {
customId += 1;
const id = `g${customId}`;
if (!_.includes(voxaEvent.model.buttons, id)) {
const buttonIndex = _.size(voxaEvent.model.buttons);
const targetGadgets = [gadgetEvent.gadgetId];
_.set(voxaEvent.model, `buttonIds.${id}`, gadgetEvent.gadgetId);
voxaEvent.model.buttons = [];
voxaEvent.model.buttons.push(id);
const triggerEventTimeMs = 0;
const gadgetController = new GadgetController();
const animationBuilder = GadgetController.getAnimationsBuilder();
const sequenceBuilder = GadgetController.getSequenceBuilder();
sequenceBuilder
.duration(1000)
.blend(false)
.color(COLORS[buttonIndex].dark);
animationBuilder
.repeat(100)
.targetLights(['1'])
.sequence([sequenceBuilder]);
directives.push(gadgetController
.setAnimations(animationBuilder)
.setTriggerEvent(TRIGGER_EVENT_ENUM.NONE)
.setLight(targetGadgets, triggerEventTimeMs));
const otherAnimationBuilder = GadgetController.getAnimationsBuilder();
const otherSequenceBuilder = GadgetController.getSequenceBuilder();
otherSequenceBuilder
.duration(500)
.blend(false)
.color(COLORS[buttonIndex].hex);
otherAnimationBuilder
.repeat(1)
.targetLights(['1'])
.sequence([otherSequenceBuilder.build()]);
directives.push(gadgetController
.setAnimations(otherAnimationBuilder.build())
.setTriggerEvent(TRIGGER_EVENT_ENUM.BUTTON_DOWN)
.setLight(targetGadgets, triggerEventTimeMs));
}
});
return {
alexaGadgetControllerLightDirective: directives,
tell: 'Buttons.Next',
to: 'entry',
};
});
```
If there’s an error when you send this directive, Alexa will return a [System ExceptionEncountered](https://developer.amazon.com/docs/gadget-skills/gadgetcontroller-interface-reference.html#system-exceptionencountered) request.
A very simple example of what the GadgetController.SetLight JSON response looks like is:
```
{
"version": "1.0",
"sessionAttributes": {},
"shouldEndSession": true,
"response": {
"outputSpeech": "outputSpeech",
"reprompt": "reprompt",
"directives": [
{
"type": "GadgetController.SetLight",
"version": 1,
"targetGadgets": [ "gadgetId1", "gadgetId2" ],
"parameters": {
"triggerEvent": "none",
"triggerEventTimeMs": 0,
"animations": [
{
"repeat": 1,
"targetLights": ["1"],
"sequence": [
{
"durationMs": 10000,
"blend": false,
"color": "0000FF"
}
]
}
]
}
}
]
}
}
```
Game Engine Interface Reference[¶](#game-engine-interface-reference)
---
The [Game Engine interface](https://developer.amazon.com/docs/gadget-skills/gameengine-interface-reference.html) enables your skill to receive input from Echo Buttons. This interface works with compatible Amazon Echo devices only.
Your skill uses the Game Engine Interface by sending directives that start and stop the Input Handler, which is the component within Alexa that sends your skill Echo Button events when conditions that you define are met (for example, the user pressed a certain sequence of buttons).
With Voxa, you can implement this interface like this:
```
const voxa = require('voxa');
const { ANCHOR_ENUM, EVENT_REPORT_ENUM, GameEngine } = voxa.alexa;
app.onIntent('LaunchIntent', (voxaEvent) => {
const alexaGameEngineStartInputHandler = rollCall(voxaEvent);
return {
alexaGameEngineStartInputHandler,
tell: 'Buttons.Discover',
};
});
function rollCall(voxaEvent) {
const gameEngineTimeout = 15000;
// rollCallPattern, proxies and the GameEngine instance are not defined in the original
// snippet; the values below are assumptions so the example is self-contained.
const rollCallPattern = [{ action: 'down' }];
const proxies = [];
const gameEngine = new GameEngine();
const eventBuilder = GameEngine.getEventsBuilder('sample_event');
const timeoutEventBuilder = GameEngine.getEventsBuilder('timeout');
const recognizerBuilder = GameEngine.getPatternRecognizerBuilder('sample_event');
eventBuilder
.fails(['fails'])
.meets(['sample_event'])
.maximumInvocations(1)
.reports(EVENT_REPORT_ENUM.MATCHES)
.shouldEndInputHandler(true)
.triggerTimeMilliseconds(1000);
timeoutEventBuilder
.meets(['timed out'])
.reports(EVENT_REPORT_ENUM.HISTORY)
.shouldEndInputHandler(true);
recognizerBuilder
.actions('actions')
.fuzzy(true)
.gadgetIds(['gadgetIds'])
.anchor(ANCHOR_ENUM.ANYWHERE)
.pattern(rollCallPattern);
return gameEngine
.setEvents(eventBuilder, timeoutEventBuilder.build())
.setRecognizers(recognizerBuilder.build())
.startInputHandler(gameEngineTimeout, proxies);
}
```
The [recognizers](https://developer.amazon.com/docs/gadget-skills/gameengine-interface-reference.html#recognizers) object contains one or more objects that represent different types of recognizers: the patternRecognizer, deviationRecognizer, or progressRecognizer. In addition to these recognizers, there is a predefined timed out recognizer. All of these recognizers are described in the linked reference.
The [events](https://developer.amazon.com/docs/gadget-skills/gameengine-interface-reference.html#events) object is where you define the conditions that must be met for your skill to be notified of Echo Button input. You must define at least one event.
If there’s an error when you send these directives, Alexa will return a [System ExceptionEncountered](https://developer.amazon.com/docs/gadget-skills/gameengine-interface-reference.html#system-exceptionencountered) request.
A very simple example of what the GameEngine.InputHandlerEvent JSON request from Alexa looks like is:
```
{
"version": "1.0",
"session": {
"application": {},
"user": {},
"request": {
"type": "GameEngine.InputHandlerEvent",
"requestId": "amzn1.echo-api.request.406fbc75-8bf8-4077-a73d-519f53d172a4",
"timestamp": "2017-08-18T01:29:40.027Z",
"locale": "en-US",
"originatingRequestId": "amzn1.echo-api.request.406fbc75-8bf8-4077-a73d-519f53d172d6",
"events": [
{
"name": "myEventName",
"inputEvents": [
{
"gadgetId": "someGadgetId1",
"timestamp": "2017-08-18T01:32:40.027Z",
"action": "down",
"color": "FF0000"
}
]
}
]
}
}
}
```
The field `originatingRequestId` provides the requestId of the request to which you responded with a StartInputHandler directive. You need to save this value in your session attributes to send the [StopInputHandler](https://developer.amazon.com/docs/gadget-skills/gameengine-interface-reference.html#stop) directive. You can send this directive with Voxa as follows:
```
const voxa = require('voxa');
app.onIntent('ExitIntent', (voxaEvent) => {
const { originatingRequestId } = voxaEvent.model;
return {
alexaGameEngineStopInputHandler: originatingRequestId,
tell: 'Buttons.Bye',
};
});
```
This will stop Echo Button events from being sent to your skill.
Plugins[¶](#plugins)
---
Plugins allow you to modify how the StateMachineSkill handles an Alexa event. When a plugin is registered, it uses the different hooks in your skill to add functionality. If you have several skills with similar behavior, creating a plugin is the way to go.
### Using a plugin[¶](#using-a-plugin)
After instantiating a StateMachineSkill, you can register plugins on it. Built-in plugins can be accessed through `Voxa.plugins`:
```
'use strict';
const { VoxaApp, plugins } = require('voxa');
const Model = require('./model');
const views = require('./views');
const variables = require('./variables');
const app = new VoxaApp({ Model, variables, views });
plugins.replaceIntent(app);
```
### State Flow plugin[¶](#state-flow-plugin)
Stores the state transitions for every alexa event in an array.
`stateFlow`(*app*)[¶](#stateFlow)
State Flow attaches callbacks to [`onRequestStarted()`](index.html#VoxaApp.onRequestStarted), [`onBeforeStateChanged()`](index.html#VoxaApp.onBeforeStateChanged) and [`onBeforeReplySent()`](index.html#VoxaApp.onBeforeReplySent) to track state transitions in a `voxaEvent.flow` array
| Arguments: | * **app** ([*VoxaApp*](index.html#VoxaApp)) – The app object
|
#### Usage[¶](#usage)
```
const { plugins, VoxaApp } = require('voxa');
const voxaApp = new VoxaApp();
plugins.stateFlow(voxaApp);
voxaApp.onBeforeReplySent((voxaEvent) => {
console.log(voxaEvent.session.outputAttributes.flow.join(' > ')); // entry > firstState > secondState > die
});
```
### Replace Intent plugin[¶](#replace-intent-plugin)
It allows you to rename an intent based on a regular expression. By default it will match `/(.*)OnlyIntent$/` and replace it with `$1Intent`.
`replaceIntent`(*app*[, *config*])[¶](#replaceIntent)
Replace Intent plugin uses [`onIntentRequest()`](index.html#VoxaApp.onIntentRequest) to modify the incoming request intent name
| Arguments: | * **app** ([*VoxaApp*](index.html#VoxaApp)) – The stateMachineSkill
* **config** – An object with the `regex` to look for and the `replace` value.
|
#### Usage[¶](#id2)
```
const app = new Voxa({ Model, variables, views });
Voxa.plugins.replaceIntent(app, { regex: /(.*)OnlyIntent$/, replace: '$1Intent' });
Voxa.plugins.replaceIntent(app, { regex: /^VeryLong(.*)/, replace: 'Long$1' });
```
#### Why OnlyIntents?[¶](#why-onlyintents)
A good practice is to isolate an utterance into another intent if it contains a single slot. By creating the OnlyIntent, Alexa will prioritize this intent if the user says only a value from that slot.
Let’s explain with the following scenario. You need the user to provide a zipcode.
You would have an intent called `ZipCodeIntent`. But you still have to handle the case where the user only says a zipcode without any other words. That’s when we create an OnlyIntent; let’s call it `ZipCodeOnlyIntent`.
Our utterance file will be like this:
```
ZipCodeIntent here is my {ZipCodeSlot}
ZipCodeIntent my zip is {ZipCodeSlot}
...
ZipCodeOnlyIntent {ZipCodeSlot}
```
But now we would have two states which are basically the same. The Replace Intent plugin will rename all incoming request intents from `ZipCodeOnlyIntent` to `ZipCodeIntent`.
### CloudWatch plugin[¶](#cloudwatch-plugin)
It logs a CloudWatch metric when the skill catches an error or completes a successful execution.
#### Params[¶](#params)
`cloudwatch`(*app*, *cloudwatch*[, *eventMetric*])[¶](#cloudwatch)
CloudWatch plugin uses [`VoxaApp.onError()`](index.html#VoxaApp.onError) and [`VoxaApp.onBeforeReplySent()`](index.html#VoxaApp.onBeforeReplySent) to log metrics
| Arguments: | * **app** ([*VoxaApp*](index.html#VoxaApp)) – The stateMachineSkill
* **cloudwatch** – A new [AWS.CloudWatch](http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatch.html#constructor-property/) object.
* **eventMetric** – Params for [putMetricData](http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatch.html#putMetricData-property)
|
#### Usage[¶](#id3)
```
const AWS = require('aws-sdk');
const app = new Voxa({ Model, variables, views });
const cloudWatch = new AWS.CloudWatch({});
const eventMetric = {
MetricName: 'Caught Error', // Name of your metric
Namespace: 'SkillName' // Name of your skill
};
Voxa.plugins.cloudwatch(app, cloudWatch, eventMetric);
```
### Autoload plugin[¶](#autoload-plugin)
It accepts an adapter to automatically load info into the model object on every Alexa request.
#### Params[¶](#id4)
`autoLoad`(*app*[, *config*])[¶](#autoLoad)
Autoload plugin uses `app.onSessionStarted` to load data the first time the user opens a skill
| Arguments: | * **app** ([*VoxaApp*](index.html#VoxaApp)) – The stateMachineSkill.
* **config** – An object with an `adapter` key with a get Promise method in which you can handle your database access to fetch information from any resource.
|
#### Usage[¶](#id5)
```
const app = new VoxaApp({ Model, variables, views });
plugins.autoLoad(app, { adapter });
```
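The adapter itself just needs a get method that returns a Promise with the data to merge into the model. Here is a minimal sketch; the argument passed to get and the returned fields are assumptions about your own storage layer:
```
// Hypothetical adapter for plugins.autoLoad. The plugin is assumed to call
// get() with an identifier for the current user and to expect a Promise back.
const adapter = {
  get(userId) {
    // Replace with your own database or API access.
    return Promise.resolve({ userId, visits: 1 });
  },
};
```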
### S3Persistence plugin[¶](#s3persistence-plugin)
It stores the user’s session attributes in a file in an S3 bucket. You can use this plugin when you host your Node.js code with the Alexa-Hosted skills feature. For more details about how this works, check the [official documentation](https://developer.amazon.com/docs/hosted-skills/build-a-skill-end-to-end-using-an-alexa-hosted-skill.html#persistence).
If you host your code in your own AWS account and plan to use S3 as a storage alternative, keep in mind that you cannot do any Scan or Query operations on S3, and storing and retrieving info takes a little longer than with DynamoDB.
#### Params[¶](#id6)
`s3Persistence`(*app*[, *config*])[¶](#s3Persistence)
The S3Persistence plugin uses `app.onRequestStarted` to load data every time the user sends a request to the skill, and `app.onBeforeReplySent` to store the user’s session data before sending a response back to the skill.
| Arguments: | * **app** ([*VoxaApp*](index.html#VoxaApp)) – The stateMachineSkill.
* **config** – An object with a `bucketName` key for the S3 bucket where the info is stored, an optional `pathPrefix` key in case you want to store this info in a folder, an `aws` key if you want to initialize the S3 object with specific values, and an `s3Client` key in case you want to provide an already initialized S3 object.
|
#### Usage[¶](#id7)
```
const app = new VoxaApp({ Model, variables, views });
const s3PersistenceConfig = {
bucketName: 'MY_S3_BUCKET',
pathPrefix: 'userSessions',
};
plugins.s3Persistence(app, s3PersistenceConfig);
```
Debugging[¶](#debugging)
---
Voxa uses the [debug](http://npmjs.com/package/debug) module internally to log a number of different internal events. If you want to have a look at those events, declare the following environment variable:
`DEBUG=voxa`
This is an example of the log output
```
voxa Received new event: {"version":"1.0","session":{"new":true,"sessionId":"SessionId.09162f2a-cf8f-414f-92e6-1e3616ecaa05","application":{"applicationId":"amzn1.ask.skill.1fe77997-14db-409b-926c-0d8c161e5376"},"attributes":{},"user":{"userId":"amzn1.ask.account.","accessToken":""}},"request":{"type":"LaunchRequest","requestId":"EdwRequestId.0f7b488d-c198-4374-9fb5-6c2034a5c883","timestamp":"2017-01-25T23:01:15Z","locale":"en-US"}} +0ms
voxa Initialized model like {} +8ms
voxa Starting the state machine from entry state +2s
voxa Running simpleTransition for entry +1ms
voxa Running onAfterStateChangeCallbacks +0ms
voxa entry transition resulted in {"to":"launch"} +0ms
voxa Running launch enter function +1ms
voxa Running onAfterStateChangeCallbacks +0ms
voxa launch transition resulted in {"reply":"Intent.Launch","to":"entry","message":{"tell":"Welcome <EMAIL>!"},"session":{"data":{},"reply":null}} +7ms
```
You can also get more per platform debugging information with
`DEBUG=voxa:alexa`
`DEBUG=voxa:botframework`
`DEBUG=voxa:dialogflow` |
mun_capi_utils | rust | Rust | Struct mun_capi_utils::error::ErrorHandle
===
```
#[repr(C)]
pub struct ErrorHandle(pub *const c_char);
```
A C-style handle to an error message.
If the handle contains a non-null pointer, an error occurred.
cbindgen:field-names=[error_string]
Tuple Fields
---
`0: *const c_char`
Implementations
---
### impl ErrorHandle
#### pub fn new<T: Into<Vec<u8>>>(error_message: T) -> Self
Constructs an `ErrorHandle` from the specified error message.
#### pub fn is_ok(&self) -> bool
Returns true if this error handle doesn’t actually contain any error.
#### pub fn is_err(&self) -> bool
Returns true if this error handle contains an error
#### pub unsafe fn err(&self) -> Option<&CStr>
Returns the error associated with this instance or `None` if there is no error.
##### Safety
If the error contained in this handle has previously been deallocated the data may have been corrupted.
Trait Implementations
---
### impl Clone for ErrorHandle
#### fn clone(&self) -> ErrorHandle
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Default for ErrorHandle
#### fn default() -> Self
Returns the “default value” for a type.
### impl<T: Into<Vec<u8>>> From<T> for ErrorHandle
#### fn from(bytes: T) -> Self
Converts to this type from the input type.
### impl Copy for ErrorHandle
Auto Trait Implementations
---
### impl RefUnwindSafe for ErrorHandle
### impl !Send for ErrorHandle
### impl !Sync for ErrorHandle
### impl Unpin for ErrorHandle
### impl UnwindSafe for ErrorHandle
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T> ToOwned for T where T: Clone,
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
Module mun_capi_utils::error
===
Exposes error reporting using the C ABI.
Structs
---
ErrorHandle: A C-style handle to an error message.
Functions
---
mun_error_destroy ⚠: Destructs the error message corresponding to the specified handle.
Function mun_capi_utils::mun_string_destroy
===
```
#[no_mangle]
pub unsafe extern "C" fn mun_string_destroy(string: *const c_char)
```
Deallocates a string that was allocated by the runtime.
Safety
---
This function receives a raw pointer as a parameter. Its content will be deallocated only when the argument is not a null pointer. Passing pointers to invalid data or to memory allocated by other processes will lead to undefined behavior.
Function mun_capi_utils::try_convert_c_string
===
```
pub unsafe fn try_convert_c_string<'a>(
    string: *const c_char
) -> Result<&'a str, &'static str>
```
Tries to convert a C-style string pointer to a `&str`.
Safety
---
The caller must provide a valid C string with a null terminator, whose content doesn’t change during the lifetime `'a`. |
HSPOR | cran | R | Package ‘HSPOR’
October 12, 2022
Title Hidden Smooth Polynomial Regression for Rupture Detection
Version 1.1.9
Description Several functions that allow by different methods to infer a piecewise polynomial
regression model under regularity constraints, namely continuity or differentiability of the link
function. The implemented functions are either specific to data with two regimes, or generic for
any number of regimes, which can be given by the user or learned by the algorithm. A paper
describing all these methods will be submitted soon. The reference will be added to this file as
soon as available.
License LGPL-3
Encoding UTF-8
LazyData true
RoxygenNote 6.1.1
Imports stats, corpcor, npregfast, graphics
NeedsCompilation no
Author <NAME> [aut, cre],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2019-09-03 07:30:11 UTC
R topics documented:
H2SPOR
H2SPOR_DynProg
HKSPOR
HKSPOR_DynProg
H2SPOR Inference method for two regimes
Description
H2SPOR is an inference method that estimates, under regularity constraint, the parameters of a
polynomial regression model with 2 regimes.
Usage
H2SPOR(X, Y, deg, constraint = 1, EM = TRUE, TimeTrans_Prop = c(),
plotG = TRUE)
Arguments
X A numerical vector corresponding to the explanatory variable. X must be sorted
in ascending order; if this is not the case, X will be sorted in the function and the
corresponding permutation will be applied to Y. The user will be notified by a
warning message. In addition, if X contains NAs, they will be deleted from the
data and the user will be notified by a warning message. Finally, if X contains
duplicate data, the excess data will be deleted and the user will be notified by a
warning message.
Y A numerical vector corresponding to the variable to be explained. It should contain
two regimes that could be modelled by polynomials. In addition, if Y contains
NAs, they will be deleted from the data and the user will be notified by a warning
message. Finally, if X contains duplicate data, the excess data will be deleted
and the value of the remaining Y will become the average of the Ys, calculated
for this value of X.
deg The degree of polynomials. The size of X and Y must be greater than 2(deg+2)
+ 1.
constraint Number that determines the regularity assumption that is applied for the param-
eters estimation. By default, the variable is set to 1, i. e. the parameters estima-
tion is done under continuity constraint. If the variable is 0 or 2, the estimation
of the parameters will be done without assumption of regularity (constraint = 0)
or under assumption of differentiability (constraint = 2). Warning, if the differ-
entiability assumption is not verified by the model, it is preferable not to use it
to estimate the model parameters. In addition, if the degree of the polynomials
is equal to 1, you cannot use the differentiability assumption.
EM A Boolean. If EM is TRUE (default), then the function will estimate the param-
eters of a latent variable polynomial regression model using an EM algorithm.
If EM is FALSE then the function will estimate the parameters of the initial
polynomial regression model by a fixed point algorithm.
TimeTrans_Prop A numerical vector. This vector is empty by default. If you want to estimate the
model parameters for a fixed jump time value, you can propose this value here.
plotG A Boolean. If TRUE (default) the estimation results obtained by the H2SPOR
function are plotted.
Value
A dataframe that contains the estimated parameters of the polynomial regression model at two
regimes: the jump time, the coefficients of the polynomials and the variances of the two regimes. If
plotG = TRUE, the data (X,Y) and the estimated model will be plotted.
Examples
#generated data with two regimes
set.seed(1)
xgrid1 = seq(0,10,length.out=6)
xgrid2 = seq(10.2,20,length.out=6)
ygrid1 = xgrid1^2-xgrid1+1+ rnorm(length(xgrid1),0,3)
ygrid2 = rep(91,length(xgrid2))+ rnorm(length(xgrid2),0,3)
xgrid = c(xgrid1,xgrid2)
ygrid = c(ygrid1,ygrid2)
#Inference of a polynomial regression model with two regimes on these data.
#The degree of the polynomials is fixed to 2 and the parameters are estimated
#under continuity constraint.
H2SPOR(xgrid,ygrid,2,1,EM=FALSE,c())
set.seed(1)
xgrid1 = seq(0,10,by=0.2)
xgrid2 = seq(10.2,20,by=0.2)
ygrid1 = xgrid1^2-xgrid1+1+ rnorm(length(xgrid1),0,3)
ygrid2 = rep(91,length(xgrid2))+ rnorm(length(xgrid2),0,3)
xgrid = c(xgrid1,xgrid2)
ygrid = c(ygrid1,ygrid2)
#Inference of a polynomial regression model with two regimes on these data.
#The degree of the polynomials is fixed to 2 and the parameters are estimated
#under continuity constraint.
H2SPOR(xgrid,ygrid,2,1,EM=FALSE,c())
#Execution time: 9.69897 secs (Intel Core i7 processor)
H2SPOR_DynProg Inference method that does not require a priori knowledge of the number of regimes and uses dynamic programming
Description
H2SPOR_DynProg is an inference method implemented as a binary segmentation algorithm. This
method makes it possible to estimate, using dynamic programming and under regularity assumption,
the parameters of a piecewise polynomial regression model when we have no a priori knowledge of
the number of regimes.
Usage
H2SPOR_DynProg(X, Y, deg, constraint = 1, EM = TRUE, plotG = TRUE)
Arguments
X A numerical vector corresponding to the explanatory variable. X must be sorted
in ascending order; if this is not the case, X will be sorted within the function,
the corresponding permutation will be applied to Y, and the user will be notified
by a warning message. In addition, if X contains NAs, they will be deleted from
the data and the user will be notified by a warning message. Finally, if X contains
duplicate data, the excess data will be deleted and the user will be notified by a
warning message.
Y A numerical vector corresponding to the variable to be explained. It should contain
at least two regimes that could be modelled by polynomials. In addition, if Y
contains NAs, they will be deleted from the data and the user will be notified by
a warning message. Finally, if X contains duplicate data, the excess data will be
deleted and the value of the remaining Y will become the average of the Ys
calculated for this value of X.
deg Degree of the polynomials. The size of X and Y must be greater than 2(deg+2)
+ 1.
constraint Number that determines the regularity assumption that is applied for the param-
eters estimation. By default, the variable is set to 1, i. e. the parameters estima-
tion is done under continuity constraint. If the variable is 0 or 2, the estimation
of the parameters will be done without assumption of regularity (constraint = 0)
or under assumption of differentiability (constraint = 2). Warning, if the differ-
entiability assumption is not verified by the model, it is preferable not to use it
to estimate the model parameters. In addition, if the degree of the polynomials
is equal to 1, you cannot use the differentiability assumption.
EM A Boolean. If EM is TRUE (default), then the function will estimate the param-
eters of a latent variable polynomial regression model using an EM algorithm.
If EM is FALSE then the function will estimate the parameters of the initial
polynomial regression model by a fixed point algorithm.
plotG A Boolean. If TRUE (default) the estimation results obtained by the H2SPOR_DynProg
function are plotted.
Value
A dataframe which contains the estimated parameters of the polynomial regression model at an
estimated number of regimes: the times of jump, the polynomials coefficients and the variances of
an estimated number of regimes. If plotG = TRUE, the data(X,Y) and the estimated model will be
plotted.
Examples
set.seed(1)
#generated data with two regimes
xgrid1 = seq(0,10,length.out = 6)
xgrid2 = seq(10.2,20,length.out=6)
ygrid1 = xgrid1^2-xgrid1+1+ rnorm(length(xgrid1),0,3)
ygrid2 = rep(91,length(xgrid2))+ rnorm(length(xgrid2),0,3)
xgrid = c(xgrid1,xgrid2)
ygrid = c(ygrid1,ygrid2)
# Inference of a piecewise polynomial regression model on these data.
#The degree of the polynomials is fixed to 2 and the parameters are estimated
#under continuity constraint.
H2SPOR_DynProg(xgrid,ygrid,2,1,EM=FALSE)
set.seed(1)
xgrid1 = seq(0,10,by=0.2)
xgrid2 = seq(10.2,20,by=0.2)
xgrid3 = seq(20.2,30,by=0.2)
ygrid1 = xgrid1^2-xgrid1+1+ rnorm(length(xgrid1),0,3)
ygrid2 = rep(91,length(xgrid2))+ rnorm(length(xgrid2),0,3)
ygrid3 = -10*xgrid3+300+rnorm(length(xgrid3),0,3)
datX = c(xgrid1,xgrid2,xgrid3)
datY = c(ygrid1,ygrid2,ygrid3)
#Inference of a piecewise polynomial regression model on these data.
#The degree of the polynomials is fixed to 2 and the parameters are estimated
#under continuity constraint.
H2SPOR_DynProg(datX,datY,2,1)
#Execution time: 2.349685 mins (Intel Core i7 processor)
HKSPOR Inference method for any number K of regimes
Description
HKSPOR is an inference method that estimates, under regularity constraint, the parameters of a
polynomial regression model for a number K of regimes given by the user.
Usage
HKSPOR(X, Y, deg, K, constraint = 1, EM = TRUE, TimeTrans_Prop = c(),
plotG = TRUE)
Arguments
X A numerical vector corresponding to the explanatory variable. X must be sorted
in ascending order; if this is not the case, X will be sorted within the function,
the corresponding permutation will be applied to Y, and the user will be notified
by a warning message. In addition, if X contains NAs, they will be deleted from
the data and the user will be notified by a warning message. Finally, if X contains
duplicate data, the excess data will be deleted and the user will be notified by a
warning message.
Y A numerical vector corresponding to the variable to be explained. It should contain
at least two regimes that could be modelled by polynomials. In addition, if Y
contains NAs, they will be deleted from the data and the user will be notified by
a warning message. Finally, if X contains duplicate data, the excess data will be
deleted and the value of the remaining Y will become the average of the Ys
calculated for this value of X.
deg Degree of the polynomials. The size of X and Y must be greater than K(deg+2)
+ K.
K The number of regimes. The size of X and Y must be greater than K(deg+2) +
K.
constraint Number that determines the regularity assumption that is applied for the param-
eters estimation. By default, the variable is set to 1, i. e. the parameters estima-
tion is done under continuity constraint. If the variable is 0 or 2, the estimation
of the parameters will be done without assumption of regularity (constraint = 0)
or under assumption of differentiability (constraint = 2). Warning, if the differ-
entiability assumption is not verified by the model, it is preferable not to use it
to estimate the model parameters. In addition, if the degree of the polynomials
is equal to 1, you cannot use the differentiability assumption.
EM A Boolean. If EM is TRUE (default), then the function will estimate the param-
eters of a latent variable polynomial regression model using an EM algorithm.
If EM is FALSE then the function will estimate the parameters of the initial
polynomial regression model by a fixed point algorithm.
TimeTrans_Prop A numerical vector. This vector is empty by default. If you want to estimate
the model parameters for fixed jump time values, you can propose these values
here. Warning, the size of this vector must be equal to K-1.
plotG A Boolean. If TRUE (default) the estimation results obtained by the HKSPOR
function are plotted.
Value
A dataframe which contains the estimated parameters of the polynomial regression model at K
regimes: the times of transition, the polynomials coefficients and the variances of the K regimes. If
plotG = TRUE, the data (X,Y) and the estimated model will be plotted.
Examples
set.seed(3)
xgrid1 = seq(0,10,by=0.2)
xgrid2 = seq(10.2,20,by=0.2)
xgrid3 = seq(20.2,30,by=0.2)
ygrid1 = xgrid1^2-xgrid1+1+ rnorm(length(xgrid1),0,3)
ygrid2 = rep(91,length(xgrid2))+ rnorm(length(xgrid2),0,3)
ygrid3 = -10*xgrid3+300+rnorm(length(xgrid3),0,3)
xgrid = c(xgrid1,xgrid2,xgrid3)
ygrid = c(ygrid1,ygrid2,ygrid3)
#Inference of a polynomial regression model with three regimes on these data.
#The degree of the polynomials is fixed to 2 and the parameters are estimated
# under continuity constraint when the times of jump are fixed to 10 and 20.
HKSPOR(xgrid,ygrid,2,3,1,EM = FALSE,c(10,20))
set.seed(3)
xgrid1 = seq(0,10,by=0.2)
xgrid2 = seq(10.2,20,by=0.2)
xgrid3 = seq(20.2,30,by=0.2)
ygrid1 = xgrid1^2-xgrid1+1+ rnorm(length(xgrid1),0,3)
ygrid2 = rep(91,length(xgrid2))+ rnorm(length(xgrid2),0,3)
ygrid3 = -10*xgrid3+300+rnorm(length(xgrid3),0,3)
xgrid = c(xgrid1,xgrid2,xgrid3)
ygrid = c(ygrid1,ygrid2,ygrid3)
#Inference of a polynomial regression model with three regimes (K=3) on these data.
#The degree of the polynomials is fixed to 2 and the parameters are estimated
#under continuity constraint.
HKSPOR(xgrid,ygrid,2,3,1)
#Execution time: 49.70051 mins (Intel Core i7 processor)
HKSPOR_DynProg Inference method for any number K of regimes using dynamic programming
Description
HKSPOR_DynProg is an inference method implemented in the form of a Bellman algorithm that
estimates, under the assumption of regularity, the parameters of a polynomial regression model for
a number K of regimes given by the user.
Usage
HKSPOR_DynProg(X, Y, deg, K, constraint = 1, smoothing = TRUE,
verbose = FALSE, plotG = TRUE)
Arguments
X A numerical vector corresponding to the explanatory variable. X must be sorted
in ascending order; if this is not the case, X will be sorted within the function,
the corresponding permutation will be applied to Y, and the user will be notified
by a warning message. In addition, if X contains NAs, they will be deleted from
the data and the user will be notified by a warning message. Finally, if X contains
duplicate data, the excess data will be deleted and the user will be notified by a
warning message.
Y A numerical vector corresponding to the variable to be explained. It should contain
at least two regimes that could be modelled by polynomials. In addition, if Y
contains NAs, they will be deleted from the data and the user will be notified by
a warning message. Finally, if X contains duplicate data, the excess data will be
deleted and the value of the remaining Y will become the average of the Ys
calculated for this value of X.
deg The degree of the polynomials. The size of X and Y must be greater than
K(deg+2) + K.
K The number of regimes. The size of X and Y must be greater than K(deg+2) +
K.
constraint Number that determines the regularity assumption that is applied for the param-
eters estimation. By default, the variable is set to 1, i. e. the parameters estima-
tion is done under continuity constraint. If the variable is 0 or 2, the estimation
of the parameters will be done without assumption of regularity (constraint =
0) or under assumption of differentiability (constraint = 2). Warning, if the dif-
ferentiability assumption is not verified by the model, it is preferable not to use
it to estimate the model parameters. In addition, in this dynamic programming
method, to ensure that the number of constraints is not greater than the number
of parameters to be estimated, the degree of the polynomials must be at least
equal to 3 to be able to use the differentiability assumption.
smoothing A Boolean. If TRUE (default), the method will estimate the parameters of a
piecewise polynomial regression model with latent variable by maximizing the
log-likelihood weighted by the probability of being in the latent variable regime.
If FALSE, the method will estimate the parameters of the piecewise polynomial
regression model.
verbose A Boolean. If FALSE (default), the HKSPOR_DynProg function will return
only one dataframe containing the parameter estimates obtained for a model at
K regimes. If TRUE, the function will return all the results obtained for a model
with 1 regime up to K regimes.
plotG A Boolean. If TRUE (default) the estimation results obtained by the HKSPOR_DynProg
function are plotted.
Value
One or more dataframes depend on the verbose value. If verbose = False, the output table will con-
tain the estimated parameters of the polynomial regression model at K regimes: jump times, poly-
nomial coefficients and variances of K regimes. If verbose = True then there will be K dataframes
in output. Each table will contain the results of the estimated parameters obtained for each value of
k (k=1,...,k=K). If plotG = TRUE, the data (X,Y) and the estimated model(s) will be plotted.
Examples
#generated data with two regimes
set.seed(1)
xgrid1 = seq(0,10,length.out=6)
xgrid2 = seq(10.2,20,length.out=6)
ygrid1 = xgrid1^2-xgrid1+1+ rnorm(length(xgrid1),0,4)
ygrid2 = rep(91,length(xgrid2))+ rnorm(length(xgrid2),0,4)
datX = c(xgrid1,xgrid2)
datY = c(ygrid1,ygrid2)
#Inference of a polynomial regression model with two regimes (K=2) on these data.
#The degree of the polynomials is fixed to 2 and the parameters are estimated
#under continuity constraint.
HKSPOR_DynProg(datX,datY,2,2)
set.seed(2)
xgrid1 = seq(0,10,by=0.2)
xgrid2 = seq(10.2,20,by=0.2)
xgrid3 = seq(20.2,30,by=0.2)
ygrid1 = xgrid1^2-xgrid1+1+ rnorm(length(xgrid1),0,3)
ygrid2 = rep(91,length(xgrid2))+ rnorm(length(xgrid2),0,3)
ygrid3 = -10*xgrid3+300+rnorm(length(xgrid3),0,3)
datX = c(xgrid1,xgrid2,xgrid3)
datY = c(ygrid1,ygrid2,ygrid3)
#Inference of a polynomial regression model with three (K=3) regimes on these data.
#The degree of the polynomials is fixed to 2 and the parameters are estimated
#under continuity constraint.
HKSPOR_DynProg(datX,datY,2,3)
#Execution time: 3.658121 mins (Intel Core i7 processor) |
firehose | hex | Erlang | firehose v0.4.0
API Reference
===
Modules
---
[Firehose](Firehose.html)
[Firehose.Batch](Firehose.Batch.html)
[Firehose.Emitter](Firehose.Emitter.html)
[Firehose.Manager](Firehose.Manager.html)
[Firehose.Record](Firehose.Record.html)
Firehose
===
Functions
===
emit(stream, data)
Firehose.Batch
===
Functions
===
add(batch, data)
Firehose.Emitter
===
Functions
===
child_spec(arg)
Returns a specification to start this module under a supervisor.
See [`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html).
emit(stream, data)
emit(pid, stream, data)
init(arg)
Invoked when the server is started. `start_link/3` or `start/3` will block until it returns.
`args` is the argument term (second argument) passed to `start_link/3`.
Returning `{:ok, state}` will cause `start_link/3` to return
`{:ok, pid}` and the process to enter its loop.
Returning `{:ok, state, timeout}` is similar to `{:ok, state}`
except `handle_info(:timeout, state)` will be called after `timeout`
milliseconds if no messages are received within the timeout.
Returning `{:ok, state, :hibernate}` is similar to
`{:ok, state}` except the process is hibernated before entering the loop. See
`c:handle_call/3` for more information on hibernation.
Returning `:ignore` will cause `start_link/3` to return `:ignore` and the process will exit normally without entering the loop or calling `c:terminate/2`.
If used when part of a supervision tree the parent supervisor will not fail to start nor immediately try to restart the [`GenServer`](https://hexdocs.pm/elixir/GenServer.html). The remainder of the supervision tree will be (re)started and so the [`GenServer`](https://hexdocs.pm/elixir/GenServer.html) should not be required by other processes. It can be started later with
[`Supervisor.restart_child/2`](https://hexdocs.pm/elixir/Supervisor.html#restart_child/2) as the child specification is saved in the parent supervisor. The main use cases for this are:
* The [`GenServer`](https://hexdocs.pm/elixir/GenServer.html) is disabled by configuration but might be enabled later.
* An error occurred and it will be handled by a different mechanism than the
[`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html). Likely this approach involves calling [`Supervisor.restart_child/2`](https://hexdocs.pm/elixir/Supervisor.html#restart_child/2)
after a delay to attempt a restart.
Returning `{:stop, reason}` will cause `start_link/3` to return
`{:error, reason}` and the process to exit with reason `reason` without entering the loop or calling `c:terminate/2`.
Callback implementation for [`GenServer.init/1`](https://hexdocs.pm/elixir/GenServer.html#c:init/1).
start_link(stream, manager)
Firehose.Manager
===
Functions
===
child_spec(arg)
Returns a specification to start this module under a supervisor.
See [`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html).
emit(stream, data)
emit(pid, stream, data)
init(opts)
Invoked when the server is started. `start_link/3` or `start/3` will block until it returns.
`args` is the argument term (second argument) passed to `start_link/3`.
Returning `{:ok, state}` will cause `start_link/3` to return
`{:ok, pid}` and the process to enter its loop.
Returning `{:ok, state, timeout}` is similar to `{:ok, state}`
except `handle_info(:timeout, state)` will be called after `timeout`
milliseconds if no messages are received within the timeout.
Returning `{:ok, state, :hibernate}` is similar to
`{:ok, state}` except the process is hibernated before entering the loop. See
`c:handle_call/3` for more information on hibernation.
Returning `:ignore` will cause `start_link/3` to return `:ignore` and the process will exit normally without entering the loop or calling `c:terminate/2`.
If used when part of a supervision tree the parent supervisor will not fail to start nor immediately try to restart the [`GenServer`](https://hexdocs.pm/elixir/GenServer.html). The remainder of the supervision tree will be (re)started and so the [`GenServer`](https://hexdocs.pm/elixir/GenServer.html) should not be required by other processes. It can be started later with
[`Supervisor.restart_child/2`](https://hexdocs.pm/elixir/Supervisor.html#restart_child/2) as the child specification is saved in the parent supervisor. The main use cases for this are:
* The [`GenServer`](https://hexdocs.pm/elixir/GenServer.html) is disabled by configuration but might be enabled later.
* An error occurred and it will be handled by a different mechanism than the
[`Supervisor`](https://hexdocs.pm/elixir/Supervisor.html). Likely this approach involves calling [`Supervisor.restart_child/2`](https://hexdocs.pm/elixir/Supervisor.html#restart_child/2)
after a delay to attempt a restart.
Returning `{:stop, reason}` will cause `start_link/3` to return
`{:error, reason}` and the process to exit with reason `reason` without entering the loop or calling `c:terminate/2`.
Callback implementation for [`GenServer.init/1`](https://hexdocs.pm/elixir/GenServer.html#c:init/1).
start_link(options \\ [])
start_link(name, options)
terminate(_)
Firehose.Record
===
Functions
===
add(record, data)
max_size() |
notify-debouncer-full | rust | Rust | Crate notify_debouncer_full
===
A debouncer for notify that is optimized for ease of use.
* Only emits a single `Rename` event if the rename `From` and `To` events can be matched
* Merges multiple `Rename` events
* Takes `Rename` events into account and updates paths for events that occurred before the rename event, but which haven’t been emitted, yet
* Optionally keeps track of the file system IDs of all files and stitches rename events together (FSevents, Windows)
* Emits only one `Remove` event when deleting a directory (inotify)
* Doesn’t emit duplicate create events
* Doesn’t emit `Modify` events after a `Create` event
Installation
---
```
[dependencies]
notify-debouncer-full = "0.3.1"
```
In case you want to select specific features of notify,
specify notify as dependency explicitly in your dependencies.
Otherwise you can just use the re-export of notify from debouncer-full.
```
notify-debouncer-full = "0.3.1"
notify = { version = "..", features = [".."] }
```
Examples
---
```
use std::{path::Path, time::Duration};
use notify_debouncer_full::{notify::*, new_debouncer, DebounceEventResult};
// Select recommended watcher for debouncer.
// Using a callback here, could also be a channel.
let mut debouncer = new_debouncer(Duration::from_secs(2), None, |result: DebounceEventResult| {
match result {
Ok(events) => events.iter().for_each(|event| println!("{event:?}")),
Err(errors) => errors.iter().for_each(|error| println!("{error:?}")),
}
}).unwrap();
// Add a path to be watched. All files and directories at that path and
// below will be monitored for changes.
debouncer.watcher().watch(Path::new("."), RecursiveMode::Recursive).unwrap();
// Add the same path to the file ID cache. The cache uses unique file IDs
// provided by the file system and is used to stitch together rename events
// in case the notification back-end doesn't emit rename cookies.
debouncer.cache().add_root(Path::new("."), RecursiveMode::Recursive);
```
Features
---
The following crate features can be turned on or off in your cargo dependency config:
* `crossbeam` enabled by default, adds `DebounceEventHandler` support for crossbeam channels.
Also enables crossbeam-channel in the re-exported notify. You may want to disable this when using the tokio async runtime.
* `serde` enables serde support for events.
Caveats
---
As all file events are sourced from notify, the known problems section applies here too.
Re-exports
---
* `pub use file_id;`
* `pub use notify;`
Structs
---
* DebouncedEventA debounced event is emitted after a short delay.
* DebouncerDebouncer guard, stops the debouncer on drop.
* FileIdMapA cache to hold the file system IDs of all watched files.
* NoCacheAn implementation of the `FileIdCache` trait that doesn’t hold any data.
Traits
---
* DebounceEventHandlerThe set of requirements for watcher debounce event handling functions.
* FileIdCacheThe interface of a file ID cache.
Functions
---
* new_debouncerShort function to create a new debounced watcher with the recommended debouncer and the built-in file ID cache.
* new_debouncer_optCreates a new debounced watcher with custom configuration.
Type Definitions
---
* DebounceEventResultA result of debounced events.
Comes with either a vec of events or vec of errors.
Trait notify_debouncer_full::DebounceEventHandler
===
```
pub trait DebounceEventHandler: Send + 'static {
// Required method
fn handle_event(&mut self, event: DebounceEventResult);
}
```
The set of requirements for watcher debounce event handling functions.
Example implementation
---
```
/// Prints received events
struct EventPrinter;
impl DebounceEventHandler for EventPrinter {
fn handle_event(&mut self, result: DebounceEventResult) {
match result {
Ok(events) => events.iter().for_each(|event| println!("{event:?}")),
Err(errors) => errors.iter().for_each(|error| println!("{error:?}")),
}
}
}
```
Required Methods
---
#### fn handle_event(&mut self, event: DebounceEventResult)
Handles an event.
Implementations on Foreign Types
---
### impl DebounceEventHandler for Sender<DebounceEventResult>
#### fn handle_event(&mut self, event: DebounceEventResult)
### impl DebounceEventHandler for Sender<DebounceEventResult>
#### fn handle_event(&mut self, event: DebounceEventResult)
Implementors
---
### impl<F> DebounceEventHandler for F where
F: FnMut(DebounceEventResult) + Send + 'static
Struct notify_debouncer_full::DebouncedEvent
===
```
pub struct DebouncedEvent {
pub event: Event,
pub time: Instant,
}
```
A debounced event is emitted after a short delay.
Fields
---
`event: Event` The original event.
`time: Instant` The time at which the event occurred.
Implementations
---
### impl DebouncedEvent
#### pub fn new(event: Event, time: Instant) -> Self
Methods from Deref<Target = Event>
---
#### pub fn need_rescan(&self) -> bool
Returns whether some events may have been missed. If true, you should assume any file or folder might have been modified.
See `Flag::Rescan` for more information.
#### pub fn tracker(&self) -> Option<usize>
Retrieves the tracker ID for an event directly, if present.
#### pub fn flag(&self) -> Option<Flag>
Retrieves the Notify flag for an event directly, if present.
#### pub fn info(&self) -> Option<&str>
Retrieves the additional info for an event directly, if present.
#### pub fn source(&self) -> Option<&str>
Retrieves the source for an event directly, if present.
Trait Implementations
---
### impl Clone for DebouncedEvent
#### fn clone(&self) -> DebouncedEvent
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for DebouncedEvent
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for DebouncedEvent
#### fn default() -> Self
Returns the “default value” for a type.
### impl Deref for DebouncedEvent
#### type Target = Event
The resulting type after dereferencing.
#### fn deref(&self) -> &Self::Target
Dereferences the value.
### impl DerefMut for DebouncedEvent
#### fn deref_mut(&mut self) -> &mut Self::Target
Mutably dereferences the value.
### impl From<Event> for DebouncedEvent
#### fn from(event: Event) -> Self
Converts to this type from the input type.
### impl PartialEq<DebouncedEvent> for DebouncedEvent
#### fn eq(&self, other: &DebouncedEvent) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.
### impl Eq for DebouncedEvent
### impl StructuralEq for DebouncedEvent
### impl StructuralPartialEq for DebouncedEvent
Auto Trait Implementations
---
### impl RefUnwindSafe for DebouncedEvent
### impl Send for DebouncedEvent
### impl Sync for DebouncedEvent
### impl Unpin for DebouncedEvent
### impl UnwindSafe for DebouncedEvent
Blanket Implementations
---
* `impl<T> Any for T where T: 'static + ?Sized`
* `impl<T> Borrow<T> for T where T: ?Sized`
* `impl<T> BorrowMut<T> for T where T: ?Sized`
* `impl<T> From<T> for T`
* `impl<T, U> Into<U> for T where U: From<T>`
* `impl<T> ToOwned for T where T: Clone`
* `impl<T, U> TryFrom<U> for T where U: Into<T>`
* `impl<T, U> TryInto<U> for T where U: TryFrom<T>`
Struct notify_debouncer_full::Debouncer
===
```
pub struct Debouncer<T: Watcher, C: FileIdCache> { /* private fields */ }
```
Debouncer guard, stops the debouncer on drop.
Implementations
---
### impl<T: Watcher, C: FileIdCache> Debouncer<T, C>
#### pub fn stop(self)
Stop the debouncer, waits for the event thread to finish.
May block for the duration of one tick_rate.
#### pub fn stop_nonblocking(self)
Stop the debouncer, does not wait for the event thread to finish.
#### pub fn watcher(&mut self) -> &mut T
Access to the internally used notify Watcher backend.
#### pub fn cache(&mut self) -> MappedMutexGuard<'_, C>
Access to the internally used file ID cache.
Trait Implementations
---
### impl<T: Debug + Watcher, C: Debug + FileIdCache> Debug for Debouncer<T, C>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl<T: Watcher, C: FileIdCache> Drop for Debouncer<T, C>
#### fn drop(&mut self)
Executes the destructor for this type.
Auto Trait Implementations
---
### impl<T, C> !RefUnwindSafe for Debouncer<T, C>
### impl<T, C> Send for Debouncer<T, C> where C: Send, T: Send
### impl<T, C> Sync for Debouncer<T, C> where C: Send, T: Sync
### impl<T, C> Unpin for Debouncer<T, C> where T: Unpin
### impl<T, C> !UnwindSafe for Debouncer<T, C>
Blanket Implementations
---
* `impl<T> Any for T where T: 'static + ?Sized`
* `impl<T> Borrow<T> for T where T: ?Sized`
* `impl<T> BorrowMut<T> for T where T: ?Sized`
* `impl<T> From<T> for T`
* `impl<T, U> Into<U> for T where U: From<T>`
* `impl<T, U> TryFrom<U> for T where U: Into<T>`
* `impl<T, U> TryInto<U> for T where U: TryFrom<T>`
Struct notify_debouncer_full::FileIdMap
===
```
pub struct FileIdMap { /* private fields */ }
```
A cache to hold the file system IDs of all watched files.
The file ID cache uses unique file IDs provided by the file system and is used to stitch together rename events in case the notification back-end doesn’t emit rename cookies.
Implementations
---
### impl FileIdMap
#### pub fn new() -> Self
Construct an empty cache.
#### pub fn add_root(
&mut self,
path: impl Into<PathBuf>,
recursive_mode: RecursiveMode
)
Add a path to the cache.
If `recursive_mode` is `Recursive`, all children will be added to the cache as well and all paths will be kept up-to-date in case of changes like new files being added,
files being removed or renamed.
#### pub fn remove_root(&mut self, path: impl AsRef<Path>)
Remove a path from the cache.
If the path was added with `Recursive` mode, all children will also be removed from the cache.
Trait Implementations
---
### impl Clone for FileIdMap
#### fn clone(&self) -> FileIdMap
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for FileIdMap
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for FileIdMap
#### fn default() -> FileIdMap
Returns the “default value” for a type.
### impl FileIdCache for FileIdMap
#### fn cached_file_id(&self, path: &Path) -> Option<&FileId>
Get a `FileId` from the cache for a given `path`.
#### fn add_path(&mut self, path: &Path)
Add a new path to the cache or update its value.
#### fn remove_path(&mut self, path: &Path)
Remove a path from the cache.
#### fn rescan(&mut self)
Re-scan all paths.
Auto Trait Implementations
---
### impl RefUnwindSafe for FileIdMap
### impl Send for FileIdMap
### impl Sync for FileIdMap
### impl Unpin for FileIdMap
### impl UnwindSafe for FileIdMap
Blanket Implementations
---
* `impl<T> Any for T where T: 'static + ?Sized`
* `impl<T> Borrow<T> for T where T: ?Sized`
* `impl<T> BorrowMut<T> for T where T: ?Sized`
* `impl<T> From<T> for T`
* `impl<T, U> Into<U> for T where U: From<T>`
* `impl<T> ToOwned for T where T: Clone`
* `impl<T, U> TryFrom<U> for T where U: Into<T>`
* `impl<T, U> TryInto<U> for T where U: TryFrom<T>`
Struct notify_debouncer_full::NoCache
===
```
pub struct NoCache;
```
An implementation of the `FileIdCache` trait that doesn’t hold any data.
This pseudo cache can be used to disable the file tracking using file system IDs.
Trait Implementations
---
### impl FileIdCache for NoCache
#### fn cached_file_id(&self, _path: &Path) -> Option<&FileId>
Get a `FileId` from the cache for a given `path`.
#### fn add_path(&mut self, _path: &Path)
Add a new path to the cache or update its value.
#### fn remove_path(&mut self, _path: &Path)
Remove a path from the cache.
#### fn rescan(&mut self)
Re-scan all paths.
Auto Trait Implementations
---
### impl RefUnwindSafe for NoCache
### impl Send for NoCache
### impl Sync for NoCache
### impl Unpin for NoCache
### impl UnwindSafe for NoCache
Blanket Implementations
---
* `impl<T> Any for T where T: 'static + ?Sized`
* `impl<T> Borrow<T> for T where T: ?Sized`
* `impl<T> BorrowMut<T> for T where T: ?Sized`
* `impl<T> From<T> for T`
* `impl<T, U> Into<U> for T where U: From<T>`
* `impl<T, U> TryFrom<U> for T where U: Into<T>`
* `impl<T, U> TryInto<U> for T where U: TryFrom<T>`
Trait notify_debouncer_full::FileIdCache
===
```
pub trait FileIdCache {
// Required methods
fn cached_file_id(&self, path: &Path) -> Option<&FileId>;
fn add_path(&mut self, path: &Path);
fn remove_path(&mut self, path: &Path);
fn rescan(&mut self);
}
```
The interface of a file ID cache.
This trait can be implemented for an existing cache, if it already holds `FileId`s.
Required Methods
---
#### fn cached_file_id(&self, path: &Path) -> Option<&FileId>
Get a `FileId` from the cache for a given `path`.
If the path is not cached, `None` should be returned and there should not be any attempt to read the file ID from disk.
#### fn add_path(&mut self, path: &Path)
Add a new path to the cache or update its value.
This will be called if a new file or directory is created or if an existing file is overridden.
#### fn remove_path(&mut self, path: &Path)
Remove a path from the cache.
This will be called if a file or directory is deleted.
#### fn rescan(&mut self)
Re-scan all paths.
This will be called if the notification back-end has dropped events.
Implementors
---
### impl FileIdCache for FileIdMap
### impl FileIdCache for NoCache
Function notify_debouncer_full::new_debouncer
===
```
pub fn new_debouncer<F: DebounceEventHandler>(
timeout: Duration,
tick_rate: Option<Duration>,
event_handler: F
) -> Result<Debouncer<RecommendedWatcher, FileIdMap>, Error>
```
Short function to create a new debounced watcher with the recommended debouncer and the built-in file ID cache.
Timeout is the amount of time after which a debounced event is emitted.
If tick_rate is None, notify will select a tick rate that is 1/4 of the provided timeout.
Function notify_debouncer_full::new_debouncer_opt
===
```
pub fn new_debouncer_opt<F: DebounceEventHandler, T: Watcher, C: FileIdCache + Send + 'static>(
timeout: Duration,
tick_rate: Option<Duration>,
event_handler: F,
file_id_cache: C,
config: Config
) -> Result<Debouncer<T, C>, Error>
```
Creates a new debounced watcher with custom configuration.
Timeout is the amount of time after which a debounced event is emitted.
If tick_rate is None, notify will select a tick rate that is 1/4 of the provided timeout.
Type Definition notify_debouncer_full::DebounceEventResult
===
```
pub type DebounceEventResult = Result<Vec<DebouncedEvent>, Vec<Error>>;
```
A result of debounced events.
Comes with either a vec of events or vec of errors. |
MSCMT | cran | R | Package ‘MSCMT’
April 17, 2023
Type Package
Encoding UTF-8
Version 1.3.7
Date 2023-04-17
Title Multivariate Synthetic Control Method Using Time Series
Depends R (>= 3.2.0)
Imports stats, utils, parallel, lpSolve, ggplot2, lpSolveAPI, Rglpk,
Rdpack
Suggests DEoptim, rgenoud, DEoptimR, GenSA, GA, soma, cmaes, NMOF,
nloptr, hydroPSO, pso, kernlab, reshape, knitr, rmarkdown
Description Three generalizations of the synthetic control method (which already
has an implementation in package 'Synth') are implemented: first,
'MSCMT' allows for using multiple outcome variables, second, time series
can be supplied as economic predictors, and third, a well-defined
cross-validation approach can be used.
Much effort has been taken to make the implementation as stable as possible
(including edge cases) without losing computational efficiency.
A detailed description of the main algorithms is given in
Becker and Klößner (2018) <doi:10.1016/j.ecosta.2017.08.002>.
License GPL
Copyright inst/COPYRIGHTS
RoxygenNote 7.2.3
VignetteBuilder knitr
BuildVignettes yes
NeedsCompilation yes
Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-2336-9751>),
<NAME> [aut],
<NAME> [com],
<NAME> [cph],
<NAME> [cph],
K.H. Haskell [cph],
<NAME> [cph],
LAPACK authors [cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-04-17 18:20:06 UTC
R topics documented:
compare
did
ggplot.mscmt
improveSynth
listFromLong
MSCMT
mscmt
plot.mscmt
ppratio
print.mscmt
pvalue
compare Compare MSCMT estimation results
Description
compare collects estimation results from mscmt for comparison purposes.
Usage
compare(..., auto.name.prefix = "")
Arguments
... Objects of class "mscmt" or (a) list(s) containing objects of class "mscmt".
auto.name.prefix
A character string (default: "") internally used to facilitate automatic naming in
nested lists of unnamed estimation results.
Details
compare collects (potentially many) estimation results from mscmt in a special object of class
"mscmt", which includes a component "comparison" where the different estimation results are
aggregated. This aggregated information is used by the ggplot.mscmt and print.mscmt methods
to present summaries of the different results.
Value
An object of class "mscmt", which itself contains the individual estimation results as well as a
component "comparison" with aggregated information.
did Difference-in-difference estimator based on SCM
Description
did calculates difference-in-difference estimators based on SCM.
Usage
did(
x,
what,
range.pre,
range.post,
alternative = c("two.sided", "less", "greater"),
exclude.ratio = Inf
)
Arguments
x An object of class "mscmt", usually obtained as the result of a call to function
mscmt.
what A character vector. Name of the variable to be considered. If missing, the (first)
dependent variable will be used.
range.pre A vector of length 2 defining the range of the pre-treatment period with start and
end time given as
• annual dates, if the format of start/end time is "dddd", e.g. "2016",
• quarterly dates, if the format of start/end time is "ddddQd", e.g. "2016Q1",
• monthly dates, if the format of start/end time is "dddd?dd" with "?" different
from "W" (see below), e.g. "2016/03" or "2016-10",
• weekly dates, if the format of start/end time is "ddddWdd", e.g. "2016W23",
• daily dates, if the format of start/end time is "dddd-dd-dd", e.g. "2016-08-
18",
corresponding to the format of the respective column of the times.dep argu-
ment of mscmt. If missing, the corresponding column of times.dep will be
used.
range.post A vector of length 2 defining the range of the post-treatment period with start
and end time given as
• annual dates, if the format of start/end time is "dddd", e.g. "2016",
• quarterly dates, if the format of start/end time is "ddddQd", e.g. "2016Q1",
• monthly dates, if the format of start/end time is "dddd?dd" with "?" different
from "W" (see below), e.g. "2016/03" or "2016-10",
• weekly dates, if the format of start/end time is "ddddWdd", e.g. "2016W23",
• daily dates, if the format of start/end time is "dddd-dd-dd", e.g. "2016-08-
18",
corresponding to the format of the respective column of the times.dep argu-
ment of mscmt. Will be guessed if missing.
alternative A character string giving the alternative of the test. Either "two.sided" (de-
fault), "less", or "greater".
exclude.ratio A numerical scalar (default: Inf). When calculating the p-value, control units
with an average pre-treatment gap of more than exclude.ratio times the aver-
age pre-treatment gap of the treated unit are excluded from the analysis.
Details
did calculates difference-in-difference estimators with corresponding p-values (if results of a placebo
study are present) based on the Synthetic Control Method.
Value
A list with components effect.size, average.pre and average.post. If x contains the results of
a placebo study, three components p.value, rank, and excluded (with the names of the excluded
units) are included additionally.
Examples
## Not run:
## for an example, see the main package vignette:
vignette("WorkingWithMSCMT",package="MSCMT")
## End(Not run)
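As an illustrative sketch, assume res is an existing mscmt result for annual data with treatment starting in 1991; the pre- and post-treatment ranges can then be given as annual dates:
## difference-in-difference estimate with explicit pre/post ranges
did(res, range.pre = c("1980", "1990"), range.post = c("1991", "2000"))
## one-sided alternative, excluding badly fitted control units from the p-value
did(res, range.pre = c("1980", "1990"), range.post = c("1991", "2000"),
alternative = "less", exclude.ratio = 20)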
ggplot.mscmt Plotting Results of mscmt with ggplot2
Description
ggplot.mscmt plots results of mscmt based on ggplot.
Usage
## S3 method for class 'mscmt'
ggplot(
data,
mapping = aes(),
what,
type = c("gaps", "comparison", "placebo.gaps", "placebo.data", "p.value"),
treatment.time,
zero.line = TRUE,
ylab,
xlab = "Date",
main,
col,
lty,
lwd,
legend = TRUE,
bw = FALSE,
date.format,
unit.name,
full.legend = TRUE,
include.smooth = FALSE,
include.mean = FALSE,
include.synth = FALSE,
draw.estwindow = TRUE,
what.set,
limits = NULL,
alpha = 1,
alpha.min = 0.1,
exclude.units = NULL,
exclude.ratio = Inf,
ratio.type = c("rmspe", "mspe"),
alternative = c("two.sided", "less", "greater"),
draw.points = TRUE,
control.name = "control units",
size = 1,
treated.name = "treated unit",
labels = c("actual data", "synthsized data"),
...,
environment = parent.frame()
)
Arguments
data An object of class "mscmt", usually obtained as the result of a call to function
mscmt.
mapping An object necessary to match the definition of the ggplot generic (passed to
ggplot as is). Defaults to aes().
what A character vector. Name(s) of the variables to be plotted. If missing, the (first)
dependent variable will be used.
type A character scalar denoting the type of the plot containing either "gaps", "comparison",
"placebo.gaps", "placebo.data", or "p.value". Partial matching allowed,
defaults to "placebo.gaps", if results of a placebo study are present, and to
"gaps", else.
treatment.time An optional scalar (numeric, character, or Date) giving the treatment time. If
treatment.time is numeric, Jan 01 of that year will be used. If treatment.time
is a character string, it will be converted to a Date and must thus be in an unam-
biguous format. A vertical dotted line at the given point in time is included in
the plot.
zero.line A logical scalar. If TRUE (default), a horizontal dotted line (at zero level) is
plotted for "gaps" and "placebo.gaps" plots.
ylab Optional label for the y-axis, automatically generated if missing.
xlab Optional label for the x-axis, defaults to "Date".
main Optional main title for the plot, automatically generated if missing.
col Optional character vector with length 1 (for gaps plots) or 2 (for all other plot
types). For comparison plots, col contains the colours for the actual and syn-
thesized data; for placebo plots (with full.legend==FALSE), col contains the
colours for the treated unit and the control units. Automatically generated if
missing.
lty Optional numerical vector with length 1 (for gaps plots) or 2 (for all other plot
types). For comparison plots, lty contains the linetypes for the actual and syn-
thesized data; for placebo plots (with full.legend==FALSE), lty contains the
linetypes for the treated unit and the control units. Automatically generated if
missing.
lwd Optional numerical vector with length 1 (for gaps plots) or 2 (for all other plot
types). For comparison plots, lwd contains the linewidths for the actual and
synthesized data; for placebo plots (with full.legend==FALSE), lwd contains
the linewidths for the treated unit and the control units. Automatically generated
if missing.
legend A logical scalar. If TRUE (default), a legend is included in the plot.
bw A logical scalar. If FALSE (default), the automatically generated colours and line
types are optimized for a colour plot, if TRUE, the automatic colours and line
types are set for a black and white plot.
date.format A character string giving the format for the tick labels of the x axis as docu-
mented in strptime. Defaults to "%b %y" or "%Y", depending on the granularity
of the data.
unit.name A character string with the title of the legend for comparison and placebo plots.
Defaults to "Estimation" for comparison and "Unit" for placebo plots.
full.legend A logical scalar. If TRUE (default), a full legend of all units (donors) is con-
structed. If FALSE, only the treated and the control units are distinguished.
include.smooth A logical scalar. If TRUE, a geometric smoother based on the control units is
added to placebo plots. Default: FALSE.
include.mean A logical scalar. If TRUE, the arithmetic mean of all control units is added to
placebo plots. Default: FALSE.
include.synth A logical scalar. If TRUE, the synthesized data for the treated unit are added to
plots of type "placebo.data". Defaults to FALSE.
draw.estwindow A logical scalar. If TRUE (default), the time range containing all optimization
periods is shaded in the corresponding plots.
what.set An optional character string for a convenient selection of multiple variables.
Accepted values are "dependents", "predictors", and "all", which collects
all dependent, all predictor, or all variables of both types, respectively. Overrides
parameter what (if the latter is present).
limits An optional vector of length 2 giving the range of the plot or NULL. If limits
is numeric, Jan 01 of the corresponding years will be used. If limits is of type
character, both strings will be converted to Dates (via as.Date) and must thus
be in an unambiguous format.
alpha Either a numerical scalar, a numerical vector of length corresponding to the
number of units, or the character string "auto". If alpha is a numerical scalar
(default with value 1), a fixed value for the alpha channel (transparency) is in-
cluded for all units in placebo plots. If alpha is numeric and has length corre-
sponding to the number of units, these values are assigned as alpha channel to
the individual units. If "auto", the alpha channel information is obtained from
the w weights of the control units.
alpha.min A numerical scalar (default: 0.1). If alpha is set to "auto", the individual alpha
channel information for control unit i is set to alpha.min + (1-alpha.min) *
w[i].
exclude.units An optional (default: NULL) character vector with names for control units which
shall be excluded from placebo plots and p-value calculations.
exclude.ratio A numeric scalar (default: Inf). Control units with a pre-treatment (r)mspe of
more than exclude.ratio times the pre-treatment (r)mspe of the treated unit
are excluded from placebo plots and p-value calculations.
ratio.type A character string. Either rmspe (default) or mspe. Selects whether root mean
squared errors or mean squared errors are considered for the exclusion of control
units (see exclude.ratio).
alternative A character string giving the alternative of the test for plots of type "p.value".
Either "two.sided" (default), "less", or "greater".
draw.points A logical scalar. If TRUE (default), points are added to the line plots to enhance
visibility.
control.name A character string for the naming of the non-treated units in placebo plots. De-
faults to "control units".
size A numerical scalar (default: 1). If draw.points is TRUE (default), size specifies
the size of the points.
treated.name A character string giving the label for the treated unit. Defaults to "treated
unit".
labels A character vector of length 2 giving the labels for the actual and synthesized
data. Defaults to c("actual data","synthsized data").
... Necessary to match the definition of the "ggplot" generic (passed to ggplot as
is).
environment An object necessary to match the definition of the "ggplot" generic (passed to
ggplot as is). Defaults to parent.frame().
Details
A unified plot method for gaps plots, comparison of treated and synthetic values, as well as plots
for placebo studies, based on ggplot. ggplot.mscmt is the preferred plot method and has more
functionality than plot.mscmt.
Value
An object of class ggplot.
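A short sketch of typical calls, assuming res is an existing mscmt result (possibly containing a placebo study) with treatment starting in 1991:
library(ggplot2)
ggplot(res, type = "comparison", treatment.time = 1991)
ggplot(res, type = "gaps", treatment.time = 1991)
## if 'res' contains a placebo study:
ggplot(res, type = "placebo.gaps", alpha = "auto", exclude.ratio = 20)
ggplot(res, type = "p.value")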
improveSynth Check (and Improve) Results of Package Synth
Description
improveSynth checks the results of synth for feasibility and optimality and tries to find a better
solution.
Usage
improveSynth(
synth.out,
dataprep.out,
lb = 1e-08,
tol = 1e-05,
verbose = TRUE,
seed = 1,
...
)
Arguments
synth.out A result of synth from package 'Synth'.
dataprep.out The input of function synth which led to synth.out.
lb A numerical scalar (default: 1e-8), corresponding to the lower bound for the
outer optimization.
tol A numerical scalar (default: 1e-5). If the relative *and* absolute improvement
of loss.v and loss.w, respectively, exceed tol, this is reported (if verbose is
TRUE). Better optima are always looked for (independent of the value of tol).
verbose A logical scalar. Should the output be verbose (defaults to TRUE).
seed A numerical vector or NULL. See the corresponding documentation for mscmt.
Defaults to 1 in order to provide reproducibility of the results.
... Further arguments to mscmt. Supported arguments are check.global, inner.optim,
inner.opar, outer.optim, and outer.opar.
Details
Performing SCM means solving a nested optimization problem. Depending on the validity of the
results of the inner optimization, SCM may produce
• invalid or infeasible results, if the vector w of donor weights reported as the result of the inner
optimization is in fact not optimal, i.e. produces a loss.w which is too large,
• suboptimal results, if the vector v of predictor weights reported as the result of the outer
optimization is in fact not optimal (which may be caused by shortcomings of the inner opti-
mization).
improveSynth first checks synth.out for feasibility and then tries to find a feasible and optimal
solution by applying the optimization methods of package MSCMT to dataprep.out (with default
settings, more flexibility will probably be added in a future release).
Value
An updated version of synth.out, where solution.v, solution.w, loss.v, and loss.w are re-
placed by the optimum obtained by package 'MSCMT' and all other components of synth.out are
removed.
Examples
## Not run:
## example has been removed because package 'Synth' has been archived
## See vignette 'Checking and Improving Results of package Synth'
## for an example working with a cached copy
## End(Not run)
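A minimal sketch of a typical call (not run), assuming dataprep.out and synth.out were obtained
from functions dataprep and synth of the archived package 'Synth' in the usual way:
## Not run:
## dataprep.out <- dataprep(...)   # prepared with package 'Synth'
## synth.out <- synth(dataprep.out)
res <- improveSynth(synth.out, dataprep.out)
res$loss.v                         # dependent loss of the improved solution
## End(Not run)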
listFromLong Convert Long Format to List Format
Description
listFromLong converts long to list format.
Usage
listFromLong(
foo,
unit.variable,
time.variable,
unit.names.variable = NULL,
exclude.columns = NULL
)
Arguments
foo A data.frame containing the data in "long" format.
unit.variable Either a numeric scalar with the column number (in foo) containing the units or
a character scalar with the corresponding column name in foo.
time.variable Either a numeric scalar with the column number (in foo) containing the times
or a character scalar with the corresponding column name in foo.
unit.names.variable
Optional. If not NULL, either a numeric scalar with the column number (in foo)
containing the unit names or a character scalar with the corresponding column
name in foo. Must match with the units defined by unit.variable (if not
NULL).
exclude.columns
Optional (defaults to NULL). Numeric vector with column numbers of foo to be
excluded from the conversion.
Details
listFromLong is a convenience function to convert long format (in a data.frame, as used by
package ’Synth’) to list format, where data is stored as a list of matrices.
Most parameter names are named after their equivalents in the dataprep function of package
’Synth’.
Value
A list of matrices with rows corresponding to the times and columns corresponding to the unit (or
unit names, respectively) for all columns of foo which are neither excluded nor have a special role
as time, unit, or unit names variable.
Examples
## Not run:
## example has been modified because package 'Synth' has been archived
## dataset 'basque' is now retrieved from archive
setwd(tempdir())
download.file("https://cran.r-project.org/src/contrib/Archive/Synth/Synth_1.1-6.tar.gz",
destfile="Synth.tar.gz")
untar("Synth.tar.gz",files="Synth/data/basque.RData")
load("Synth/data/basque.RData")
Basque <- listFromLong(basque, unit.variable="regionno",
time.variable="year",
unit.names.variable="regionname")
names(Basque)
head(Basque$gdpcap)
## End(Not run)
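A small self-contained sketch with artificial data (all column names are made up) which does
not need package 'Synth':
X <- data.frame(regionno   = rep(1:2, each = 3),
                regionname = rep(c("A", "B"), each = 3),
                year       = rep(2001:2003, 2),
                gdpcap     = c(10, 11, 12, 20, 21, 22))
toy <- listFromLong(X, unit.variable = "regionno",
                    time.variable = "year",
                    unit.names.variable = "regionname")
toy$gdpcap    # matrix with years as rows and unit names as columns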
MSCMT Multivariate Synthetic Control Method Using Time Series
Description
MSCMT implements the Multivariate Synthetic Control Method Using Time Series.
Details
MSCMT implements three generalizations of the synthetic control method (which has already an
implementation in package ’Synth’):
1. it allows for using multiple outcome variables,
2. time series can be supplied as economic predictors,
3. a well-defined cross-validation approach can be used.
Much effort has been taken to make the implementation as stable as possible (including edge cases)
without losing computational efficiency.
References
<NAME>, <NAME> (2003). “The Economic Costs of Conflict: A Case Study of the Basque Country.” The American Economic Review, 93(1), 113-132. http://dx.doi.org/10.1257/000282803321455188.
<NAME>, <NAME>, <NAME> (2010). “Synthetic Control Methods for Comparative Case Studies: Estimating the Effect of California’s Tobacco Control Program.” Journal of the American Statistical Association, 105(490), 493–505. http://dx.doi.org/10.1198/jasa.2009.ap08746.
<NAME>, <NAME> (2018). “Fast and Reliable Computation of Generalized Synthetic Controls.” Econometrics and Statistics, 5, 1–19. https://doi.org/10.1016/j.ecosta.2017.08.002.
<NAME>, <NAME>, <NAME> (2018). “Cross-Validating Synthetic Controls.” Economics Bulletin, 38, 603-609. Working Paper, http://www.accessecon.com/Pubs/EB/2018/Volume38/EB-18-V38-I1-P58.pdf.
<NAME>, <NAME> (2015). “Synthesizing Cash for Clunkers: Stabilizing the Car Market, Hurting the Environment.” Verein für Socialpolitik/German Economic Association. https://ideas.repec.org/p/zbw/vfsc15/113207.html.
Examples
## Not run:
## for examples, see the package vignettes:
browseVignettes(package="MSCMT")
## End(Not run)
mscmt Multivariate SCM Using Time Series
Description
mscmt performs the Multivariate Synthetic Control Method Using Time Series.
Usage
mscmt(
data,
treatment.identifier = NULL,
controls.identifier = NULL,
times.dep = NULL,
times.pred = NULL,
agg.fns = NULL,
placebo = FALSE,
placebo.with.treated = FALSE,
univariate = FALSE,
univariate.with.dependent = FALSE,
check.global = TRUE,
inner.optim = "wnnlsOpt",
inner.opar = list(),
outer.optim = "DEoptC",
outer.par = list(),
outer.opar = list(),
std.v = c("sum", "mean", "min", "max"),
alpha = NULL,
beta = NULL,
gamma = NULL,
return.ts = TRUE,
single.v = FALSE,
verbose = TRUE,
debug = FALSE,
seed = NULL,
cl = NULL,
times.pred.training = NULL,
times.dep.validation = NULL,
v.special = integer(),
cv.alpha = 0,
spec.search.treated = FALSE,
spec.search.placebos = FALSE
)
Arguments
data Typically, a list of matrices with rows corresponding to times and columns cor-
responding to units for all relevant features (dependent as well as predictor vari-
ables, identified by the list elements’ names). This might be the result of con-
verting from a data.frame by using function listFromLong.
For convenience, data may alternatively be the result of function dataprep
of package 'Synth'. In this case, the parameters treatment.identifier,
controls.identifier, times.dep, times.pred, and agg.fns are ignored, as
these input parameters are generated automatically from data. The parame-
ters univariate, alpha, beta, and gamma are ignored by fixing them to their
defaults. Using results of dataprep is experimental, because the automatic gen-
eration of input parameters may fail due to lack of information contained in
results of dataprep.
treatment.identifier
A character scalar containing the name of the treated unit. Must be contained in
the column names of the matrices in data.
controls.identifier
A character vector containing the names of at least two control units. Entries
must be contained in the column names of the matrices in data.
times.dep A matrix with two rows (containing start times in the first and end times in the
second row) and one column for each dependent variable, where the column
names must exactly match the names of the corresponding dependent variables.
A sequence of dates with the given start and end times of
• annual dates, if the format of start/end time is "dddd", e.g. "2016",
• quarterly dates, if the format of start/end time is "ddddQd", e.g. "2016Q1",
• monthly dates, if the format of start/end time is "dddd?dd" with "?" different
from "W" (see below), e.g. "2016/03" or "2016-10",
• weekly dates, if the format of start/end time is "ddddWdd", e.g. "2016W23",
• daily dates, if the format of start/end time is "dddd-dd-dd", e.g. "2016-08-
18",
will be constructed; these dates are looked for in the row names of the respective
matrices in data. In applications with cross-validation, times.dep belongs to
the main period.
times.pred A matrix with two rows (containing start times in the first and end times in
the second row) and one column for each predictor variable, where the column
names must exactly match the names of the corresponding predictor variables.
A sequence of dates with the given start and end times of
• annual dates, if the format of start/end time is "dddd", e.g. "2016",
• quarterly dates, if the format of start/end time is "ddddQd", e.g. "2016Q1",
• monthly dates, if the format of start/end time is "dddd?dd" with "?" different
from "W" (see below), e.g. "2016/03" or "2016-10",
• weekly dates, if the format of start/end time is "ddddWdd", e.g. "2016W23",
• daily dates, if the format of start/end time is "dddd-dd-dd", e.g. "2016-08-
18",
will be constructed; these dates are looked for in the row names of the respective
matrices in data. In applications with cross-validation, times.pred belongs to
the main period.
agg.fns Either NULL (default) or a character vector containing one name of an aggrega-
tion function for each predictor variable (i.e., each column of times.pred). The
character string "id" may be used as a "no-op" aggregation. Each aggregation
function must accept a numeric vector and return either a numeric scalar ("clas-
sical" MSCM) or a numeric vector (leading to MSCM*T* if length of vector is
at least two).
placebo A logical scalar. If TRUE, a placebo study is performed where, apart from the
treated unit, each control unit is considered as treated unit in separate optimiza-
tions. Defaults to FALSE. Depending on the number of control units and the
complexity of the problem, placebo studies may take a long time to finish.
placebo.with.treated
A logical scalar. If TRUE, the treated unit is included as control unit (for other
treated units in placebo studies). Defaults to FALSE.
univariate A logical scalar. If TRUE, a series of univariate SCMT optimizations is done
(instead of one MSCMT optimization) even if there is more than one dependent
variable. Defaults to FALSE.
univariate.with.dependent
A logical scalar. If TRUE (and if univariate is also TRUE), all dependent vari-
ables (contained in the column names of times.dep) apart from the current
(real) dependent variable are included as predictors in the series of univariate
SCMT optimizations. Defaults to FALSE.
check.global A logical scalar. If TRUE (default), a check for the feasibility of the unrestricted
outer optimum (where actually no restrictions are imposed by the predictor vari-
ables) is made before starting the actual optimization procedure.
inner.optim A character scalar containing the name of the optimization method for the inner
optimization. Defaults to "wnnlsOpt", which (currently) is the only supported
implementation, because it outperforms all other inner optimizers we are aware
of. "ipopOpt", which uses ipop, has experimental support for benchmark purposes.
inner.opar A list containing further parameters for the inner optimizer. Defaults to the
empty list. (For "wnnlsOpt", there are no meaningful further parameters.)
outer.optim A character vector containing the name(s) of the optimization method(s) for
the outer optimization. Defaults to "DEoptC", which (currently) is the recom-
mended global optimizer. The optimizers currently supported can be found in
the documentation of parameter outer.opar, where the default control parame-
ters for the various optimizers are listed. If outer.optim has length greater than
1, one optimization is invoked for each outer optimizer (and, potentially, each
random seed, see below), and the best result is used.
outer.par A list containing further parameters for the outer optimization procedure. De-
faults to the empty list. Entries in this list may override the following hard-coded
general defaults:
• lb=1e-8, corresponding to the lower bound for the ratio of predictor weights,
• opt.separate=TRUE, corresponding to an improved outer optimization where
each predictor is treated as the (potentially) most important predictor (i.e.
with maximal weight) in separate optimizations (one for each predictor),
see [1].
outer.opar A list (or a list of lists, if outer.optim has length greater than 1) containing fur-
ther parameters for the outer optimizer(s). Defaults to the empty list. Entries in
this list may override the following hard-coded defaults for the individual opti-
mizers, which are quite modest concerning the computing time. dim is a variable
holding the problem dimension, typically the number of predictors minus one.
Optimizer Package Default parameters
DEoptC MSCMT nG=500, nP=20*dim, waitgen=100,
minimpr=1e-14, F=0.5, CR=0.9
cma_es cmaes maxit=2500
crs nloptr maxeval=2.5e4, xtol_rel=1e-14,
population=20*dim, algorithm="NLOPT_GN_CRS2_LM"
DEopt NMOF nG=100, nP=20*dim
DEoptim DEoptim nP=20*dim
ga GA maxiter=50, monitor=FALSE,
popSize=20*dim
genoud rgenoud print.level=0, max.generations=70,
solution.tolerance=1e-12, pop.size=20*dim,
wait.generations=dim, boundary.enforcement=2,
gradient.check=FALSE, MemoryMatrix=FALSE
GenSA GenSA max.call=1e7, max.time=25/dim,
trace.mat=FALSE
hydroPSO hydroPSO maxit=300, reltol=1e-14, npart=3*dim
isres nloptr maxeval=2e4, xtol_rel=1e-14,
population=20*dim, algorithm="NLOPT_GN_ISRES"
nlminbOpt MSCMT/stats nrandom=30
optimOpt MSCMT/stats nrandom=25
PSopt NMOF nG=100, nP=20*dim
psoptim pso maxit=700
soma soma nMigrations=100
If outer.opar is a list of lists, its names must correspond to (a subset of) the
outer optimizers chosen in outer.optim.
std.v A character scalar containing one of the function names "sum", "mean", "min",
or "max" for the standardization of the predictor weights (weights are divided
by std.v(weights) before reporting). Defaults to "sum", partial matching al-
lowed.
alpha A numerical vector with weights for the dependent variables in an MSCMT
optimization or NULL (default). If not NULL, the length of alpha must agree
with the number of dependent variables, NULL is equivalent to weight 1 for all
dependent variables.
beta Either NULL (default), a numerical vector, or a list. If beta is a numerical vector
or a list, its length must agree with the number of dependent variables.
• If beta is a numerical vector, the ith dependent variable is discounted with
discount factor beta[i] (the observations of the dependent variables must
thus be in chronological order!).
• If beta is a list, the components of beta must be numerical vectors with
lengths corresponding to the numbers of observations for the individual
dependent variables. These observations are then multiplied with the corre-
sponding component of beta.
gamma Either NULL (default), a numerical vector, or a list. If gamma is a numerical vector
or a list, its length must agree with the number of predictor variables.
• If gamma is a numerical vector, the output of agg.fns[i] applied to the ith
predictor variable is discounted with discount factor gamma[i] (the output
of agg.fns[i] must therefore be in chronological order!).
• If gamma is a list, the components of gamma must be numerical vectors with
lengths corresponding to the lengths of the output of agg.fns for the indi-
vidual predictor variables. The output of agg.fns is then multiplied with
the corresponding component of gamma.
return.ts A logical scalar. If TRUE (default), most results are converted to time series.
single.v A logical scalar. If FALSE (default), a selection of feasible (optimal!) predictor
weight vectors is generated. If TRUE, the one optimal weight vector which has
maximal order statistics is generated to facilitate cross validation studies.
verbose A logical scalar. If TRUE (default), output is verbose.
debug A logical scalar. If TRUE, output is very verbose. Defaults to FALSE.
seed A numerical vector or NULL. If not NULL, the random number generator is ini-
tialized with the elements of seed via set.seed(seed) (see Random) before
calling the optimizer, performing repeated optimizations (and staying with the
best) if seed has length greater than 1. Defaults to NULL. If not NULL, the seeds
int.seed (default: 53058) and unif.seed (default: 812821) for genoud are
also initialized to the corresponding element of seed, but this can be overridden
with the list elements int.seed and unif.seed of (the corresponding element
of) outer.opar.
cl NULL (default) or an object of class cluster obtained by makeCluster of pack-
age parallel. Repeated estimations (see outer.optim and seed) and placebo
studies will make use of the cluster cl (if not NULL).
times.pred.training
A matrix with two rows (containing start times in the first and end times in
the second row) and one column for each predictor variable, where the column
names must exactly match the names of the corresponding predictor variables
(or NULL by default). If not NULL, times.pred.training defines training peri-
ods for cross-validation applications. For the format of the start and end times,
see the documentation of parameter times.pred.
times.dep.validation
A matrix with two rows (containing start times in the first and end times in the
second row) and one column for each dependent variable, where the column
names must exactly match the names of the corresponding dependent variables
(or NULL by default). If not NULL, times.dep.validation defines validation
period(s) for cross-validation applications. For the format of the start and end
times, see the documentation of parameter times.dep.
v.special integer vector containing indices of important predictors with special treatment
(see below). Defaults to the empty set.
cv.alpha numeric scalar containing the minimal proportion (of the maximal feasible weight)
for the weights of the predictors selected by v.special. Defaults to 0.
spec.search.treated
A logical scalar. If TRUE, a specification search (for the optimal set of included
predictors) is done for the treated unit. Defaults to FALSE.
spec.search.placebos
A logical scalar. If TRUE, a specification search (for the optimal set of included
predictors) is done for each control unit in placebo studies. Defaults to FALSE.
Details
mscmt combines, if necessary, the preparation of the raw data (which is expected to be in "list"
format, possibly after conversion from a data.frame with function listFromLong) and the call to
the appropriate MSCMT optimization procedures (depending on the input parameters). For details
on the input parameters alpha, beta, and gamma, see [1]. For details on cross-validation, see [2].
Value
An object of class "mscmt", which is essentially a list containing the results of the estimation and,
if applicable, the placebo study. The most important list elements are
• the weight vector w for the control units,
• a matrix v with weight vectors for the predictors in its columns,
• scalars loss.v and rmspe with the dependent loss and its square root,
• a vector loss.w with the predictor losses corresponding to the various weight vectors in the
columns of v,
• a matrix predictor.table containing aggregated statistics of predictor values (similar to list
element tab.pred of function synth.tab of package 'Synth'),
• a list of multivariate time series combined containing, for each dependent and predictor vari-
able, a multivariate time series with elements treated for the actual values of the treated unit,
synth for the synthesized values, and gaps for the differences.
Placebo studies produce a list containing individual results for each unit (as treated unit), starting
with the original treated unit, as well as a list element named placebo with aggregated results for
each dependent and predictor variable.
If times.pred.training and times.dep.validation are not NULL, a cross-validation is done and
a list of elements cv with the results of the cross-validation period and main with the results of the
main period is returned.
References
[1] <NAME>, <NAME> (2018). “Fast and Reliable Computation of Generalized Synthetic Controls.” Econometrics and Statistics, 5, 1–19. https://doi.org/10.1016/j.ecosta.2017.08.002.
[2] <NAME>, <NAME>, <NAME> (2018). “Cross-Validating Synthetic Controls.” Economics Bulletin, 38, 603-609. Working Paper, http://www.accessecon.com/Pubs/EB/2018/Volume38/EB-18-V38-I1-P58.pdf.
Examples
## Not run:
## for examples, see the package vignettes:
browseVignettes(package="MSCMT")
## End(Not run)
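A minimal call sketch (not run), assuming Basque is a list of matrices as produced by
listFromLong; unit and variable names follow the archived 'Synth' example data and are
illustrative only:
## Not run:
times.dep  <- cbind("gdpcap" = c("1960", "1969"))
times.pred <- cbind("gdpcap" = c("1960", "1969"),
                    "invest" = c("1964", "1969"))
agg.fns <- rep("mean", ncol(times.pred))
res <- mscmt(Basque, treatment.identifier = "Basque Country (Pais Vasco)",
             controls.identifier = setdiff(colnames(Basque$gdpcap),
                                           c("Basque Country (Pais Vasco)", "Spain (Espana)")),
             times.dep = times.dep, times.pred = times.pred,
             agg.fns = agg.fns, seed = 1)
res
## End(Not run)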
plot.mscmt Plotting Results of MSCMT
Description
plot.mscmt plots results of mscmt.
Usage
## S3 method for class 'mscmt'
plot(
x,
what,
type = c("gaps", "comparison", "placebo.gaps", "placebo.data"),
treatment.time,
zero.line = TRUE,
ylab,
xlab = "Date",
main,
sub,
col,
lty,
lwd,
legend = TRUE,
bw = FALSE,
...
)
Arguments
x An object of class "mscmt", usually obtained as the result of a call to function
mscmt.
what A character scalar. Name of the variable to be plotted. If missing, the (first)
dependent variable will be used.
type A character scalar denoting the type of the plot containing either "gaps", "comparison",
"placebo.gaps", or "placebo.data". Partial matching allowed, defaults to
"placebo.gaps", if results of a placebo study are present, and to "gaps", else.
treatment.time An optional numerical scalar. If not missing, a vertical dotted line at the given
point in time is included in the plot. treatment.time is measured in years,
but may as well be a decimal number to reflect treatment times different from
January 1st.
zero.line A logical scalar. If TRUE (default), a horizontal dotted line (at zero level) is
plotted for "gaps" and "placebo" plots.
ylab Optional label for the y-axis, automatically generated if missing.
xlab Optional label for the x-axis, defaults to "Date".
main Optional main title for the plot, automatically generated if missing.
sub Optional subtitle for the plot. If missing, the subtitle is generated automatically
for "comparison" and "gaps" plots.
col Optional character vector with length corresponding to the number of units.
Contains the colours for the different units, automatically generated if missing.
lty Optional numerical vector with length corresponding to the number of units.
Contains the line types for the different units, automatically generated if miss-
ing.
lwd Optional numerical vector with length corresponding to the number of units.
Contains the line widths for the different units, automatically generated if miss-
ing.
legend A logical scalar. If TRUE (default), a legend is included in the plot.
bw A logical scalar. If FALSE (default), the automatically generated colours and line
types are optimized for a colour plot, if TRUE, the automatic colours and line
types are set for a black and white plot.
... Further optional parameters for the underlying plot function.
Details
A unified basic plot function for gaps plots, comparison of treated and synthetic values, as well as
plots for placebo studies. Consider using ggplot.mscmt instead, which is the preferred plot method
and has more functionality than plot.mscmt.
Value
Nothing useful (function is called for its side effects).
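A typical call might look as follows (sketch, not run, assuming res is a result of mscmt and
that the treatment started in 1970):
## Not run:
plot(res, type = "comparison", treatment.time = 1970)
plot(res, type = "gaps", treatment.time = 1970, bw = TRUE)
## End(Not run)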
ppratio Post-pre-(r)mspe-ratios for placebo studies
Description
ppratio calculates post-to-pre-(r)mspe-ratios for placebo studies.
Usage
ppratio(
x,
what,
range.pre,
range.post,
type = c("rmspe", "mspe"),
return.all = FALSE
)
Arguments
x An object of class "mscmt", usually obtained as the result of a call to function
mscmt.
what A character vector. Name of the variable to be considered. If missing, the (first)
dependent variable will be used.
range.pre A vector of length 2 defining the range of the pre-treatment period with start and
end time given as
• annual dates, if the format of start/end time is "dddd", e.g. "2016",
• quarterly dates, if the format of start/end time is "ddddQd", e.g. "2016Q1",
• monthly dates, if the format of start/end time is "dddd?dd" with "?" different
from "W" (see below), e.g. "2016/03" or "2016-10",
• weekly dates, if the format of start/end time is "ddddWdd", e.g. "2016W23",
• daily dates, if the format of start/end time is "dddd-dd-dd", e.g. "2016-08-
18",
corresponding to the format of the respective column of the times.dep argu-
ment of mscmt. If missing, the corresponding column of times.dep will be
used.
range.post A vector of length 2 defining the range of the post-treatment period with start
and end time given as
• annual dates, if the format of start/end time is "dddd", e.g. "2016",
• quarterly dates, if the format of start/end time is "ddddQd", e.g. "2016Q1",
• monthly dates, if the format of start/end time is "dddd?dd" with "?" different
from "W" (see below), e.g. "2016/03" or "2016-10",
• weekly dates, if the format of start/end time is "ddddWdd", e.g. "2016W23",
• daily dates, if the format of start/end time is "dddd-dd-dd", e.g. "2016-08-
18",
corresponding to the format of the respective column of the times.dep argu-
ment of mscmt. Will be guessed if missing.
type A character string. Either rmspe (default) or mspe. Selects whether root mean
squared errors or mean squared errors are calculated.
return.all A logical scalar. If FALSE (default), only the (named) vector of post-pre-(r)mspe-
ratios is returned, if TRUE, a three-column matrix with pre- and post-treatment
(r)mspe’s as well as the post-pre-ratios will be returned.
Details
ppratio calculates post-to-pre-(r)mspe-ratios for placebo studies based on Synthetic Control Meth-
ods.
Value
If return.all is FALSE, a (named) vector of post-pre-(r)mspe-ratios. If return.all is TRUE, a
matrix with three columns containing the pre-treatment (r)mspe, the post-treatment (r)mspe, and
the post-pre-ratio.
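A sketch of a typical call (not run), assuming res contains results of a placebo study obtained
by mscmt(..., placebo = TRUE); the variable name "gdpcap" and the time ranges are illustrative
assumptions:
## Not run:
pp <- ppratio(res, what = "gdpcap",
              range.pre  = c("1960", "1969"),
              range.post = c("1970", "1997"))
sort(pp, decreasing = TRUE)
## End(Not run)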
print.mscmt Printing Results of MSCMT
Description
print.mscmt prints results of mscmt.
Usage
## S3 method for class 'mscmt'
print(x, ...)
Arguments
x An object of class "mscmt", usually obtained as the result of a call to function
mscmt.
... Further arguments to be passed to or from other methods. They are ignored in
this function.
Details
A human-readable summary of mscmt’s results.
Value
Nothing useful (function is called for its side effects).
pvalue P-values for placebo studies
Description
pvalue calculates p-values for placebo studies.
Usage
pvalue(
x,
what,
range.pre,
range.post,
alternative = c("two.sided", "less", "greater"),
exclude.ratio = Inf,
ratio.type = c("rmspe", "mspe")
)
Arguments
x An object of class "mscmt", usually obtained as the result of a call to function
mscmt.
what A character vector. Name of the variable to be considered. If missing, the (first)
dependent variable will be used.
range.pre A vector of length 2 defining the range of the pre-treatment period with start and
end time given as
• annual dates, if the format of start/end time is "dddd", e.g. "2016",
• quarterly dates, if the format of start/end time is "ddddQd", e.g. "2016Q1",
• monthly dates, if the format of start/end time is "dddd?dd" with "?" different
from "W" (see below), e.g. "2016/03" or "2016-10",
• weekly dates, if the format of start/end time is "ddddWdd", e.g. "2016W23",
• daily dates, if the format of start/end time is "dddd-dd-dd", e.g. "2016-08-
18",
corresponding to the format of the respective column of the times.dep argu-
ment of mscmt. If missing, the corresponding column of times.dep will be
used.
range.post A vector of length 2 defining the range of the post-treatment period with start
and end time given as
• annual dates, if the format of start/end time is "dddd", e.g. "2016",
• quarterly dates, if the format of start/end time is "ddddQd", e.g. "2016Q1",
• monthly dates, if the format of start/end time is "dddd?dd" with "?" different
from "W" (see below), e.g. "2016/03" or "2016-10",
• weekly dates, if the format of start/end time is "ddddWdd", e.g. "2016W23",
• daily dates, if the format of start/end time is "dddd-dd-dd", e.g. "2016-08-
18",
corresponding to the format of the respective column of the times.dep argu-
ment of mscmt. Will be guessed if missing.
alternative A character string giving the alternative of the test. Either "two.sided" (de-
fault), "less", or "greater".
exclude.ratio A numerical scalar (default: Inf). Control units with a pre-treatment-(r)mspe of
more than exclude.ratio times the pre-treatment-(r)mspe of the treated unit
are excluded from the calculations of the p-value.
ratio.type A character string. Either rmspe (default) or mspe. Selects whether root mean
squared errors or mean squared errors are calculated.
Details
pvalue calculates p-values for placebo studies based on Synthetic Control Methods.
Value
A time series containing the p-values for the post-treatment periods.
Examples
## Not run:
## for an example, see the main package vignette:
vignette("WorkingWithMSCMT",package="MSCMT")
## End(Not run)
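A sketch of a direct call (not run), assuming res contains results of a placebo study obtained
by mscmt(..., placebo = TRUE); variable name and time ranges are illustrative assumptions:
## Not run:
p <- pvalue(res, what = "gdpcap",
            range.pre  = c("1960", "1969"),
            range.post = c("1970", "1997"))
plot(p)
## End(Not run)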
Flask 문서에 오신것을 환영합니다. 이 문서는 다양한 파트로 나누어져 있습니다. 저자는 설치하기 와 빠르게 시작하기 를 먼저 보실것을 추천합니다. 빠르게 시작하기 뿐만아니라, 어떻게 Flask 어플리케이션을 만들 수 있는지 좀 더 상세하게 다루는 튜토리얼 또한 볼 수 있습니다.
* 만약 여러분이 오히려 Flask의 내부로 직접 뛰어 들고 싶은 경우라면,
* API 문서를 확인하십시오. 일반적으로 사용되는 패턴들은
Flask를 위한 패턴들 섹션을 확인하면 됩니다..
Flask는 두개의 외부 라이브러리에 의존합니다.: 바로 Jinja2 템플릿엔진과 Werkzeug WSGI 툴킷입니다. 이 라이브러리들은 이 문서에서 다루지않습니다. 만약 여러분이 이 라이브러리들에 대해서 깊이 알고 싶다면 다음의 링크를 확인하십시오.
## 사용자 가이드¶
이 문서의 사용자 가이드 파트에서는, Flask에 관한 일부 배경 정보들로 시작해서 지시사항을 따라하는 Flask 웹 개발을위한 단계별 지침에 초점을 맞추고 있습니다..
* 머리말
* 경험있는 프로그래머를 위한 머릿글
* 설치하기
* 빠르게 시작하기
* 튜토리얼
* 템플릿
* Flask 어플리케이션 테스트하기
* 어플리케이션 에러 로깅하기
* 어플리케이션 에러 디버깅
* 설정 다루기
* 시그널(Signals)
* 플러거블 뷰(Pluggable Views)
* 어플리케이션 컨텍스트
* 요청 컨텍스트
* 블루프린트를 가진 모듈화된 어플리케이션
* Flask 확장기능
* 쉘에서 작업하기
* Flask를 위한 패턴들
## API 레퍼런스¶
만약 여러분이 특정 함수, 클래스나 메소드에 대한 정보를 찾고 있다면, 문서의 이 부분은 당신에게 도움이 될 것 입니다.
* API
* 어플리케이션 객체(Application Object)
* 블루프린트 객체(Blueprint Objects)
* 유입되는 요청 데이터(Incoming Request Data)
* 응답 객체(Response Objects)
* 세션(Sessions)
* Session Interface
* Test Client
* Application Globals
* Useful Functions and Classes
* Message Flashing
* JSON Support
* Template Rendering
* Configuration
* Extensions
* Stream Helpers
* Useful Internals
* Signals
* Class-Based Views
* URL Route Registrations
* View Function Options
## 추가적인 참고사항¶
Flask의 디자인 노트, 법률 정보 변경 로그에 대한 관심있는 내용은 이곳에 있습니다.
* Design Decisions in Flask
* HTML/XHTML FAQ
* Security Considerations
* Unicode in Flask
* Flask Extension Development
* Pocoo Styleguide
* Upgrading to Newer Releases
* License
Flask를 시작하기 전에 먼저 이글을 읽어주시기 바랍니다. 이 글이 여러분의 프로젝트에서 이것을 사용하여야 하거나 사용하지 말아야 할때에 해당 목적과 의도에 대한 일부 질문들에 대한 답변이 될 수 있기를 희망합니다.
## “마이크로(Micro)”는 무엇을 뜻하는가?¶
“마이크로”는 여러분의 웹 어플리케이션이 하나의 파이썬 파일으로 개발되야한다는 걸 말하는게 아니다.(그렇게 해석될 수 도 있겠지만…) 또한, 기능적으로 부족하다는걸 의미하지도 않는다. 마이크로 프레임워크(Microframework)에서의 “마이크로”는 핵심기능만 간결하게 유지하지만, 확장가능한 것을 목적으로 한다. Flask는 여러분을 위해 데이타베이스를 선택해주지 않는다. Flask에서 제공하는 템플릿 엔진을 변경하는것은 쉽다. 그밖에 모든것은 여러분에게 달려있고, 그래서 Flask는 여러분이 필요한 모든것일 수 있고, 필요없는것은 하나도 없을것이다.
## 설정과 관례¶
Flask로 시작할때, 여러분은 잘 정의된 기본값이 있는 많은 설정값들과 몇가지 관례를 볼 수 있다. 템플릿과 정적파일은 어플리케이션의 파이썬 소스 디렉토리의 하위 디렉토리에 templates과 statics에 저장된다. 이 방식은 변경할 수 있지만, 처음부터 반드시 그럴 필요는 없다.
## Flask를 이용하여 성장시키기¶
> 여러분이 Flask를 가지고 개발할때, 운영을 위해 여러분의 프로젝트와 통합할 여러 종류의 확장을 찾게된다. Flask 핵심 개발팀은 많은 확장을 검토했고 그것들이 앞으로의 배포판과 어긋나지 않는다는것을 보증한다.
여러분의 코드의 규모가 점점 커지면서, 여러분의 프로젝트에 설계에 적합한 선택을 할 기회가 온다 Flask는 계속적으로 파이썬이 제공하는 최고의 모듈에 간결한 연결층을 제공할 것이다. 여러분은 SQLAlchemy이나 다른 디비툴 또는 알맞게 비관계형 데이타 저장소로 고급 패턴을 만들수도 있고 파이썬 웹 인터페이스인 WSGI를 위한 프레임웍에 관계없이 사용할 수 있는 툴을 이용할 수 있다.
Flask는 프레임웍의 기능을 변경하기 위한 많은 훅(hook)을 포함한다. 그것보다 더 많이 변경하고 싶다면 Flask class 자체를 상속하면 된다. 여러분이 subclassing에 관심이 있다면 크게 만들기 챕터를 확인한다. 디자인 패턴이 궁금하다면 Design Decisions in Flask 에 대한 섹션으로 넘어가라.
계속하기 설치하기, 빠르게 시작하기, 혹은 경험있는 프로그래머를 위한 머릿글.
Date: 2011-01-01
Categories:
Tags:
## Flask에서 쓰레드 로컬¶
Flask에 적용된 설계 원칙 중 하나는 단순한 업무는 단순해야한다는 것이다. 그런 종류의 업무들은 많은 코드를 요구하지 않아야 하지만, 여러분을 제약해서도 안된다. 그런 이유로 Flask는 몇몇 사람들을 놀라게 하거나, 정통적인 방식이 아니라고 생각할 수도 있는 몇개의 설계 원칙을 갖고 있다. 예를 들면, Flask는 내부적으로 쓰레드로컬방식을 사용해, 쓰레드-안전한 상태를 유지하기 위해 하나의 요청에서 함수들이 돌아가며 객체를 주고받을 필요가 없도록 했다. 이런 접근은 편리하지만, 의존 주입을 하거나 요청에 고정된 값을 사용하는 코드를 재사용하려할 때 유효한 요청 문맥을 요구한다. Flask 프로젝트는 쓰레드로컬에 대해 투명하고, 숨기지 않고, 심지어 사용되는 코드와 문서에서 드러내고 있다.
## 웹개발에서 주의점¶
웹 어플리케이션을 개발할때에는 항상 보안에 대해 신경써야한다.
여러분이 웹 개발을 할때, 개발된 어플리케이션의 사용자들은 여러분의 서버에 사용자의 정보가 등록되고 남겨지는 것을 허용할 것이다. 사용자는 데이타에 있어서 여러분을 신뢰한다는 것이다. 만약 여러분이 직접 작성한 어플리케이션의 유일한 사용자라 할지라도, 자신의 데이타가 안전하기를 원할 것이다.
불행히도, 웹 어플리케이션의 보안이 타협되는 여러 가지 경우가 있다. 플라스크는 현대의 웹 어플리케이션의 가장 일반적인 보안 문제인 XSS로 부터 여러분을 보호한다. 굳이 여러분이 안전하지 않게 작성된 HTML을 안전하게 변환하지 않더라도, Flask와 그 하부의 Jinja2 템플릿 엔진이 여러분을 보호해준다. 그러나 보안문제를 유발하는 더 많은 경우가 있다.
이 문서는 보안에 있어 주의를 요구하는 웹 개발의 측면을 여러분에게 당부하고 있다. 이런 보안 관점의 몇몇은 일부 사람이 생각하는것 보다 훨씬 복잡하고, 우리 모두는 때때로 취약점이 이용될것이라는 가능성을 뛰어난 공격자가 우리의 어플리케이션의 취약점을 찾아낼때까지 낮게 점치곤한다. 그리고 여러분의 어플리케이션이 공격자가 칩입할 만큼 중요하지 않다고 생각하지 마라. 공격의 종류에 따라, 자동화된 bot이 여러분의 데이타베이스에 스팸을 채우기 위해 탐색하거나, 악성코드를 링크하는 방식과 같은 기회들이 있다.
Flask는 여러분이 반드시 주의하며 개발해야하고, 요구사항에 맞춰 개발할때 취약점을 조심해야 하는 측면은 어느 다른 프레임워크와도 같다.
## Python3의 상태¶
요즘 파이썬 공동체는 파이썬 프로그래밍 언어의 새로운 버전을 지원하기 위한 라이브러리의 개선 과정중이다. 상황은 대단히 개선되고 있지만, 우리들이 파이썬3로 넘어가는데 걸림돌이 되는 몇가지 이슈가 있다. 이 문제들은 부분적으로 오래동안 검토되지 않은 언어의 변화에 의해 야기됐다. 부분적으로는 우리들이 저수준API가 파이썬3에서 유니코드의 차이점에 맞춰 어떤식으로 바뀌어야 하는지 해결해내지 못했다는 점도 있다.
Werkzeug과 Flask는 그 변경에 대한 해결책을 찾는 순간 파이썬3로 포팅되고, 파이썬3으로 개발된 버전의 업그레이드에 대한 유용한 팁을 제공할 것이다. 그때까지, 여러분은 개발하는 동안 파이썬2.6이나 2.7을 파이썬3 경고를 활성화한 상태로 사용할 것을 권고한다. 여러분이 근래에 파이썬3로 업그레이드를 계획중이라면 How to write forwards compatible Python code.를 읽는것을 적극 추천한다.
Flask를 시작하기 원합니까? 이 장은 Flask에 대해 알맞은 소개를 한다. 이 장은 여러분이 이미 Flask를 설치했다고 가정할것이고, 설치가 안됐다면 설치하기 섹션으로 넘어가기 바란다.
## 기본 애플리케이션¶
기본 Flask 어플리케이션은 다음과 같은 모습이다.
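아래는 공식 quickstart에 기반한 최소 예시이다 (함수 이름 hello_world 는 임의로 정한 것이다):

```
from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run()
```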
이 내용을 hello.py (아니면 비슷한 다른 이름으로) 저장하고 파이썬 인터프리터로 실행한다. 여러분이 작성한 어플리케이션을 `flask.py`로 호출하지 않도록 주의해라. 왜냐하면 Flask 자체와 충돌이 나기 때문이다.
```
$ python hello.py
* Running on http://127.0.0.1:5000/
```
자, 이제 웹브라우져를 열고 http://127.0.0.1:5000/, 로 이동해보자 그러면, 여러분이 작성한 hello world greeting이 화면에 나와야한다.
그렇다면, 위의 코드는 무슨 일은 한걸까?
* 먼저 우리는 `Flask` class를 임포트했다. 이 클래스의 인스턴스가 우리의 WSGI어플리케이션이 될것이다. 첫번째 인자는 이 어플리케이션의 이름이다. 여러분이 단일 모듈을 사용한다면(위의 예제처럼), 여러분은 `__name__` 을 사용해야한다. 왜냐하면, 어플리케이션으로 시작되는지, 혹은 모듈로 임포트되는지에 따라 이름이 달라지기 때문이다. (`'__main__'` 대 실제로 임포트한 이름) 더 자세한 정보는 `Flask` 문서를 참고해라.
* 다음은 Flask class의 인스턴스를 생성한다. 인자로 모듈이나 패키지의 이름을 넣는다. 이것은 플라스크에서 템플릿이나 정적파일을 찾을때 필요하다.
*
`route()` 데코레이터를 사용해서 Flask에게 어떤 URL이 우리가 작성한 함수를 실행시키는지 알려준다. * 작성된 함수의 이름은 그 함수에 대한 URL을 생성하는데 사용되고(url_for 함수 참고), 그 함수는 사용자 브라우저에 보여줄 메시지를 리턴한다.
* 최종적으로
`run()` 함수를 사용해서 우리가 개발한 어플리케이션을 로컬서버로 실행한다. 소스파일을 모듈이 아닌 python 인터프리터를 이용해서 직접 실행한다면
이 `if __name__ == '__main__':` 문장은 우리가 실행한 서버가 현재 동작되는 유일한 서버라는 것을 보장한다.
실행된 서버를 중지하려면, control-C를 누른다.
외부에서 접근 가능한 서버
위의 서버를 실행했다면, 그 서버는 네트워크상에 있는 다른 컴퓨터에서 접근이 안되고 여러분의 로컬서버에서만 접근 가능하다. 이것이 기본설정인 이유는 디버그모드상에서 어플리케이션의 사용자가 임의의 파이썬코드를 여러분의 컴퓨터에서 실행할 수 있기 때문이다.
여러분이 debug 모드를 해제하거나 여러분의 네트워크상에 있는 사용자들을 신뢰한다면, 다음과 같이 간단히 `run()` 메소드의 호출을 변경해서 서버의 접근을
오픈할 수 있다.
```
app.run(host='0.0.0.0')
```
위의 변경은 여러분의 OS에게 모든 public IP를 접근가능도록 설정한다.
## 디버그 모드¶
`run()` 메소드는 로컬개발서버를 실행시키기에 좋지만 코드 변경후에
수동으로 재시작해야한다. 플라스크는 그런 번거로운것을 개선한 방식을 제공한다. 여러분이
디버그모드를 지원하면, 서버는 코드변경을 감지하고 자동으로 리로드하고, 문제가 발생하면
문제를 찾을수 있도록 디버거를 제공한다.
디버깅을 활성화하는 두가지 방식이 있다. 한가지는 어플리케이션 객체에 플래그로 설정하는 방식이거나
```
app.debug = True
app.run()
```
어플리케이션을 실행할때 파라미터로 넘겨주는 방식이다.
`app.run(debug=True)`
두 메소드는 같은 결과를 보여준다.
주의
대화식 디버거가 forking 환경에서 동작안됨에도 불구하고(운영서버에서는 거의 사용이 불가능함), 여전히 임의의 코드가 실행될 수 있다. 이런점은 주요 보안 취약점이 될 수 있으므로, 운영 환경에서는 절대 사용하지 말아야한다.
디버거의 스크린샷:
디버거에 대해 좀 더 궁금하다면 디버거로 작업하기 를 참고 하기 바란다.
## 라우팅¶
현대 웹 어플리케이션은 잘 구조화된 URL로 구성되있다. 이것으로 사람들은 URL을 쉽게 기억할 수 있고, 열악한 네트워크 연결 상황하의 기기들에서 동작하는 어플리케이션에서도 사용하기 좋다. 사용자가 인덱스 페이지를 거치지 않고 바로 원하는 페이지로 접근할 수 있다면, 사용자는 그 페이지를 좋아할 것이고 다시 방문할 가능성이 커진다.
위에서 본것처럼, `route()` 데코레이터는 함수와 URL을 연결해준다. 아래는 기본적인 예제들이다:

```
@app.route('/')
def index():
    return 'Index Page'

@app.route('/hello')
def hello():
    return 'Hello World'
```
하지만, 여기엔 더 많은것이 있다. 여러분은 URL을 동적으로 구성할 수 있고, 함수에 여러 룰을 덧붙일수있다.
### 변수 규칙¶
URL의 변수 부분을 추가하기 위해 여러분은 `<variable_name>` 으로 URL에 특별한 영역을 표시해야 된다. 그 부분은 함수의 키워드 인수로써 넘어간다. 선택적으로, `<converter:variable_name>` 으로 규칙을 표시하여 변환기를 추가할 수 있다.
여기 멋진 예제가 있다.
```
@app.route('/user/<username>')
def show_user_profile(username):
# show the user profile for that user
return 'User %s' % username
@app.route('/post/<int:post_id>')
def show_post(post_id):
# show the post with the given id, the id is an integer
return 'Post %d' % post_id
```
다음과 같은 변환기를 제공한다. :
| 변환기 | 설명 |
| --- | --- |
| int | accepts integers |
| float | like int but for floating point values |
| path | like the default but also accepts slashes |
유일한 URL과 리디렉션 동작
Flask의 URL 규칙은 Werkzeug의 라우팅 모듈에 기반한다. 그 라우팅 모듈의 기본 사상은 아파치나 초기 HTTP서버들에서 제공한 전례에 기반을 둔 잘 구성되고 유일한 URL을 보장하는것이다.
아래의 두가지 규칙을 살펴보자
```
@app.route('/projects/')
def projects():
return 'The project page'
@app.route('/about')
def about():
return 'The about page'
```
이 두 둘은 비슷해 보이지만, URL 정의에 있어서 뒷 슬래쉬(trailing slash) 사용이 다르다. 첫번째 경우는, projects 끝점에 대한 정규 URL은 뒷 슬래쉬를 포함한다.이점에서 파일시스템의 폴더와 유사하다. 뒷 슬래쉬 없이 URL에 접근하면, Flask가 뒷 슬래쉬를 가진 정규 URL로 고쳐준다.
그러나, 두번째 경우의 URL은 Unix계열의 파일의 경로명처럼 뒷 슬래쉬없이 정의됐다. 이 경우, 뒷 슬래쉬를 포함해서 URL에 접근하면 404”페이지를 찾지 못함” 에러를 유발한다.
이것은 사용자가 뒷 슬래쉬를 잊어버리고 페이지에 접근했을때 상대적인 URL이 문제없이 동작하게한다. 이 방식은 아파치 및 다른 서버들이 동작하는 방식과 일치한다. 또한, URL을 유일하게 유지할 것이고, 검색엔진이 같은 페이지를 중복해서 색인하지 않도록 도와준다.
### URL 생성¶
플라스크가 URL을 맞춰줄수 있다면, URL을 생성할 수 있을까? 물론이다. 라우팅이 설정된 함수에 대한 URL을 얻어내기위해서 여러분은 `url_for()` 함수를 사용하면 된다. 이
함수는 첫번째 인자로 함수의 이름과 URL 룰의 변수 부분에 대한 다수의 키워드를 인자로 받는다.
알수없는 인자는 쿼리 인자로 URL에 덧붙여진다. :
```
>>> from flask import Flask, url_for
>>> app = Flask(__name__)
>>> @app.route('/')
... def index(): pass
...
>>> @app.route('/login')
... def login(): pass
...
>>> @app.route('/user/<username>')
... def profile(username): pass
...
>>> with app.test_request_context():
... print url_for('index')
... print url_for('login')
... print url_for('login', next='/')
... print url_for('profile', username='John Doe')
...
/
/login
/login?next=/
/user/John%20Doe
```
(이 테스트에서 `test_request_context()` 메소드를 사용한다. 이 함수는 플라스크에게 현재 파이썬 쉘에서 테스트를 하고 있음에도 지금 실제로 요청을 처리하고 있는것 처럼 상황을 제공한다. 컨텍스트 로컬 섹션에서 다시 설명이 언급된다).
왜 템플릿에 URL을 하드코딩하지 않고 URL을 얻어내기를 원하는가? 여기엔 세가지 적합한 이유가 있다:

* URL 역변환이 URL을 하드코딩하는 것보다 훨씬 설명적이다. 더 중요한 것은, 이 방식은 URL이 어디에 있는지 전부 기억할 필요 없이 한번에 URL을 다 변경할 수 있다.
* URL을 얻어내는 것은 특수 문자 및 유니코드 데이타에 대한 이스케이핑을 명확하게 해주기 때문에 여러분이 그것들을 처리할 필요가 없다.
* 여러분이 작성한 어플리케이션이 URL의 최상위 바깥에 위치한다면 (예를 들면, `/` 대신에 `/myapplication` ), `url_for()` 가 그 위치를 상대적 위치로 적절하게 처리해줄 것이다.
### HTTP 메소드¶
HTTP(웹 어플리케이션에서 사용하는 프로토콜)는 URL 접근에 대해 몇가지 다른 방식을 제공한다. 기본적으로 GET 방식으로 제공되지만, `route()` 데코레이터에
methods 인자를 제공하면 다른 방식으로 변경할 수 있다. 아래에 몇가지 예가 있다:
```
@app.route('/login', methods=['GET', 'POST'])
def login():
if request.method == 'POST':
do_the_login()
else:
show_the_login_form()
```
GET 방식이 나타난다면, HEAD 가 자동적으로 더해질것이다. 여러분들이 그것을 처리할 필요가 없다. HEAD 요청은 HTTP RFC (HTTP프로토콜을 설명하는 문서)의 요청으로써 처리된다는 것을 보장할 것이다. 그래서 여러분은 HTTP명세에 대한 그 부분을 완전히 신경쓰지 않아도 된다. 마찬가지로, 플라스크 0.6에서는 OPTIONS 를 자동으로 처리한다.
HTTP 메소드가 뭔지 모르는가? 걱정하지 말자, 여기 짧게 요약된 HTTP 메소드에 대한 소개와 그것이 중요한 이유가 있다. :
HTTP 메소드 (종종 “the verb” 라고 불리우는..) 는 클라이언트가 서버에게 요청된 페이지를 통해서 무엇을 하도록 원하는지 말해준다. 다음의 메소드들은 매우 일반적인 것들이다 :
* GET
* 브라우저가 어떤 페이지에 저장된 정보를 단지 얻기 위해 서버에 요청하고 서버는 그 정보를 보낸다. 가장 일반적인 메소드다.
* HEAD
* 브라우저가 어떤 페이지에 저장된 내용이 아니라 헤더라 불리는 정보를 요청한다. 어떤 어플리케이션이 GET 요청을 받은것 처럼 처리하나, 실제 내용이 전달되지 않는다. 하부에 있는 Werkzeug 라이브러리들이 그런 처리를 하기 때문에 플라스크에서는 그런 처리는 전혀 할 필요가 없다.
* POST
* 브라우저는 서버에게 새로운 정보를 *전송*하도록 특정 URL에 요청하고 그 정보가 오직 한번 저장되는것을 보장하도록 한다. 이것이 보통 HTML폼을 통해서 서버에 데이터 전송하는 방식이다.
* PUT
* POST 와 유사하지만 서버가 오래된 값들을 한번 이상 덮어쓰면서 store procedure를 여러번 실행할 수 있다. 여러분이 이것이 유용한 이유를 물을수도 있지만, 몇가지 적당한 이유가 있다. 전송시 연결을 잃어버리는 경우는 생각해보면, 브라우저와 서버사이에서 정보의 단절없이 요청을 다시 안전하게 받을 수도 있다. POST 는 단 한번 요청을 제공하기 때문에 이런 방식은 불가능하다.
* DELETE
* 주어진 위치에 있는 정보를 제거한다..
* OPTIONS
* 클라이언트에게 요청하는 URL이 어떤 메소드를 지원하는지 알려준다. Flask 0.6부터 이 기능은 자동 구현된다..
현재 흥미로운 부분은 HTML4와 XHTML1에서 폼이 서버로 전달하는 메소드는 GET 과 POST 다. 그러나 자바스크립와 미래의 HTML표준으로 여러분은 다른 메소드도 사용할 수 있다. 게다가,HTTP는 최근에 상당히 널리 사용되고 있고, 브라우저가 HTTP를 사용하는 유일한 클라이언트가 아니다. 예를 들면, 많은 버전관리 시스템이 HTTP를 사용한다.
동적인 웹 어플리케이션은 정적 파일은 필요로한다. 그것들은 보통 자바스크립트나 CSS파일을 의미한다. 이상적으로 웹서버는 정적 파일들을 서비스하지만, 개발시에는 플라스크가 그 역할을 대신해준다. 단지 static 이라는 폴더를 여러분이 생성한 패키지 아래에 만들거나 모듈 옆에 위치시키면 개발된 어플리케이션에서 /static 위치에서 정적 파일을 제공할 것이다.
정적파일에 대한 URL을 얻으려면, 특별한 `'static'` 끝점 이름을 사용해야한다
```
url_for('static', filename='style.css')
```
이 파일(style.css)는 파일시스템에 `static/style.css` 로 저장되어야한다.
## 템플릿 보여주기¶
파이썬 소스코드에서 HTML을 생성하는 것은 그다지 재밌는 일은 아니다.(굉장히 번거로운 일이다) 왜냐하면 어플리케이션 보안을 위해 동적으로 변환되는 값에 대해 이스케이핑을 여러분 스스로 작성해야하기 때문이다. 그런 불편함때문에 Flask는 Jinja2 (http://jinja.pocoo.org/2/) 를 템플릿엔진으로 구성하여 자동으로 HTML 이스케이핑을 한다.
템플릿을 뿌려주기 위해, 여러분은 `render_template()` 메소드를 사용할 수 있다.
여러분이 해야하는 것은 단지 템플릿의 이름과 템플릿에 보여줄 변수를 키워드 인자로 넘겨주면
된다. 아래는 템플릿을 뿌려주는 방식의 간단한 예를 보여준다
```
from flask import render_template

@app.route('/hello/')
@app.route('/hello/<name>')
def hello(name=None):
return render_template('hello.html', name=name)
```
Flask는 templates 폴더에서 템플릿을 찾는다. 여러분이 모듈로 어플리케이션을 개발했다면 이 폴더는 그 모듈 옆에 위치하고, 패키지로 개발했다면 그 패키지 안에 위치한다 :
Case 1: 모듈:
```
/application.py
/templates
/hello.html
```
Case 2: 패키지:
```
/application
/__init__.py
/templates
/hello.html
```
플라스크에서는 템플릿에 있어서 Jinja2의 강력함을 사용할 수 있다. Jinja2에 대한 더 자세한 내용은 Jinja2 Template Documentation 공식 문서를 참고하기 바란다.
여기 템플릿 예제가있다:
```
<!doctype html>
<title>Hello from Flask</title>
{% if name %}
<h1>Hello {{ name }}!</h1>
{% else %}
<h1>Hello World!</h1>
{% endif %}
```
템플릿 안에서도 여러분은 `request`, `session` 와 `g` [1] 객체에 접근할 수 있다.
템플릿에서 특히 상속이 유용하게 사용된다. 템플릿 상속에 대해서 자세한 내용을 보고 싶으면 템플릿 상속 패턴문서를 참고하기 바란다. 기본적으로 템플릿 상속은 각 페이지에서 특정 요소를 유지할수 있게 해준다.(헤더, 네비게이션과 풋터)
자동 이스케이핑은 켜져있고, name 이란 변수가 HTML을 포함하고 있다면, 그 변수값은 자동으로 이스케이핑된다. 여러분이 어떤 변수를 신뢰할 수 있고 그것이 안전한 HTML이라는 것을 안다면, `Markup` 클래스를 사용하거나 템플릿에서 `|safe` 필터를 사용해서 그 변수를 안전하다고 표시할 수 있다.
템플릿에 대한 자세한 내용은 Jinja2 문서를 참고하기 바란다. 여기에 `Markup` 클래스를 사용하는 예제가 있다:
```
>>> from flask import Markup
>>> Markup('<strong>Hello %s!</strong>') % '<blink>hacker</blink>'
Markup(u'<strong>Hello <blink>hacker</blink>!</strong>')
>>> Markup.escape('<blink>hacker</blink>')
Markup(u'<blink>hacker</blink>')
>>> Markup('<em>Marked up</em> » HTML').striptags()
u'Marked up \xbb HTML'
```
버전 0.5으로 변경: Autoescaping is no longer enabled for all templates. The following extensions for templates trigger autoescaping: `.html` , `.htm` , `.xml` , `.xhtml` . Templates loaded from a string will have
autoescaping disabled.
[1] Unsure what that g object is? It is something in which you can store information for your own needs; check the documentation of the g object for more information.
## 요청 데이타 접근하기¶
웹 어플리케이션에 있어서 클라이언트에서 서버로 보내는 데이타를 처리하는 것은 중요한 일이다. Flask에서 이 정보는 글로벌한 `request` 객체에 의해 제공된다. 여러분이
파이썬 경험이 있다면, 어떻게 그 객체가 글로벌하고 쓰레드 안전하게 처리되는지 궁금할 수 도
있다. 그 답은 컨텍스트 로컬에 있다. :
### 컨텍스트 로컬¶
Flask 에서 어떤 객체들은 보통 객체들이 아닌 전역 객체들이다. 이 객체들은 실제로 어떤 특정한 문맥에서 지역적인 객체들에 대한 대리자들이다. 무슨 말인지 어렵다. 하지만, 실제로는 꽤 쉽게 이해할 수 있다.
쓰레드를 다루는 문맥을 생각해보자. 웹에서 요청이 하나 들어오면, 웹서버는 새로운 쓰레드를 하나 생성한다 (그렇지 않다면, 다른 방식으로 기반을 이루는 객체가 쓰레드가 아닌 동시성을 제공하는 시스템을 다룰 수도 있다). 플라스크가 내부적으로 요청을 처리할때, 플라스크는 현재 처리되는 쓰레드를 활성화된 문맥이라고 간주하고, 현재 실행되는 어플리케이션과 WSGI환경을 그 문맥(쓰레드)에 연결한다. 이렇게 처리하는 것이 문맥을 지능적으로 처리하는 방식이고, 이렇게하여 한 어플리케이션이 끊어짐없이 다른 어플리케이션을 호출할 수 있다.
그렇다면 이것은 여러분에게 어떤 의미인가? 기본적으로 여러분이 유닛 테스트(Unit Test)와 같은 것을 하지 않는다면 이것을 완전히 무시할 수 있다. 여러분은 요청 객체에 의존하는 코드가 갑자기 깨지는것을 알게 될것인데, 왜냐하면 요청 객체가 존재하지 않기 때문이다. 해결책은 요청 객체를 생성해서 그 객체를 문맥에 연결하는 것이다. 유닛 테스트에 있어서 가장 쉬운 해결책은
문맥 관리자(Context Manager)를 사용하는
것이다. with 절과 함께 사용해서 test_request_context() 문맥 관리자는 테스트 요청을
연결할 것이고, 그렇게해서 여러분은 그 객체와 상호 작용할 수 있다. 아래에 예가 있다:
```
from flask import request

with app.test_request_context('/hello', method='POST'):
# now you can do something with the request until the
# end of the with block, such as basic assertions:
assert request.path == '/hello'
assert request.method == 'POST'
```
다른 방법은 WSGI 환경 변수를 `request_context()` 메소드에 인자로
넘기는 것이다.
```
with app.request_context(environ):
assert request.method == 'POST'
```
### 요청 객체¶
요청(request) 객체는 API(Application Programming Interface) 장에 설명되있으므로, 여기서는 자세히 언급하지 않을것이다(see `request` ). 여기서는 가장 일반적인
동작에 대해 거시적으로 살펴본다. 여러분은 먼저 Flask 모듈에서 요청 개체를
임포트해야한다:
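```
from flask import request
```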
현재 요청 메소드는 `method` 속성으로 사용할 수 있다. 폼 데이타(
HTTP POST 나 PUT 요청으로 전달된 데이타)에 접근하려면, `form` 속성을 사용할 수 있다. 아래에 위에서 언급한 두가지 속성에 대한 완전한 예제가 있다:
```
@app.route('/login', methods=['POST', 'GET'])
def login():
error = None
if request.method == 'POST':
if valid_login(request.form['username'],
request.form['password']):
return log_the_user_in(request.form['username'])
else:
error = 'Invalid username/password'
# 아래의 코드는 요청이 GET 이거나, 인증정보가 잘못됐을때 실행된다.
return render_template('login.html', error=error)
```
위에 폼 에 접근한 키(username이나 password)가 존재하지 않으면 어떻게 되나? KeyError가 발생한다. 여러분은 표준적인 `KeyError` 로 이 예외를 잡을 수 있지만, 예외처리를 하지
않는다면 HTTP 400 잘못된 요청(Bad Request)에 대한 오류 페이지를 보여준다. 그렇기 때문에
대다수의 상황에서 이 문제를 여러분이 직접 처리할 필요는 없다. URL로 넘겨진 파라메터 ( `?key=value` 소위 말하는 질의 문자열)에 접근하려면, 여러분은 `args` 속성을 사용할 수 있다:
```
searchword = request.args.get('key', '')
```
우리는 args속성의 get 을 사용하거나 KeyError 예외를 처리하여 URL파라메터에 접근할 것을 추천한다. 왜냐하면 사용자가 URL을 변경할 수 있으며 사용자에게 친근하지 않은 400 잘못된 요청 페이지를 보여주기 때문이다.
요청 객체에 대한 메소드와 속성의 전체 목록은 `request` 문서에서 살펴보라.
### 파일 업로드¶
여러분은 Flask를 사용하여 쉽게 업로드된 파일들을 다룰 수 있다. 여러분의 HTML form에
```
enctype="multipart/form-data"
```
가 설정하는 것만 잊지 말아라. 그렇지 않으면 브라우저는 파일을 전혀 전송하지 않을 것이다. 업로드된 파일들은 메모리나 파일시스템의 임시 장소에 저장된다. 여러분이 `files` 객체의 files 속성을 찾아 그 파일들에 접근할 수 있다. 업로드된 각 파일들은 그 dictionary 안에 저장되어 있다. 그것은 마치 표준 파이썬 `file` 객체처럼 행동한다. 그러나 서버의 파일시스템에 파일을 저장하도록 하는 `save()` 메소드 또한 가지고 있다. 아래 save 메소드가 어떻게 실행되는지를 보여주는 간단한 예제가 있다:
```
from flask import request

@app.route('/upload', methods=['GET', 'POST'])
def upload_file():
if request.method == 'POST':
f = request.files['the_file']
f.save('/var/www/uploads/uploaded_file.txt')
...
```
만약 여러분의 어플리케이션에 파일이 업로드되기 전 클라이언트에서의 파일명을 알고 싶다면, `filename` 속성에 접근할 수 있다. 그러나
이 값은 위조될 수 있으며 결코 신뢰할 수 없는 값인 것을 명심해라. 만약 서버에 저장되는
파일명을 클라이언트에서의 파일명을 그대로 사용하기를 원한다면, Werkzeug에서 제공하는 `secure_filename()` 함수에 그 파일명을 전달하라:
```
from flask import request
from werkzeug import secure_filename
@app.route('/upload', methods=['GET', 'POST'])
def upload_file():
if request.method == 'POST':
f = request.files['the_file']
f.save('/var/www/uploads/' + secure_filename(f.filename))
...
```
더 나은 예제를 보고 싶다면 파일 업로드하기 챕터의 패턴을 확인하라.
### 쿠키¶
쿠키에 접근하기 위해서는 `cookies` 속성을 사용할 수 있다. 쿠키를
저장하기 위해서는 response 객체의 `set_cookie` 메소드를 사용할
수 있다. request 객체의 `cookies` 속성은 클라이언트가 전송하는
모든 쿠키를 가지고 있는 dictionary이다. 만약 여러분이 세션을
사용하기를 원한다면 쿠키를 직접 사용하는 대신에 쿠키 위에서 보안성을 추가한 Flask의
세션 을 사용하라.
Reading cookies:
```
from flask import request

@app.route('/')
def index():
username = request.cookies.get('username')
# use cookies.get(key) instead of cookies[key] to not get a
# KeyError if the cookie is missing.
```
Storing cookies:
```
from flask import make_response
@app.route('/')
def index():
resp = make_response(render_template(...))
resp.set_cookie('username', 'the username')
return resp
```
쿠키가 response 객체에 저장되는 것을 주목하자. 여러분이 보통 뷰 함수로부터 단지 문자열을 반환하기 때문에, Flask는 그 문자열들을 여러분을 위해 response 객체로 변환할 것이다. 만약 여러분이 명시적으로 변환하기를 원한다면 여러분은 `make_response()` 함수를
사용하여 값을 변경할 수 있다.
때때로 여러분은 response 객체가 아직 존재하지 않는 시점에 쿠키를 저장하기를 원할 수도 있다. 지연된(deferred) 요청 콜백 패턴을 사용하면 가능하다.
이것을 위해 응답에 관하여 챕터를 참조해라.
## 리다이렉션과 에러¶
사용자가 다른 엔드포인트로 redirect하기 위해서는 `redirect()` 함수를
사용하라. 에러콛를 가지고 일찍 요청을 중단하기를 원한다면 `abort()` 함수를
사용하라:
```
from flask import abort, redirect, url_for
@app.route('/')
def index():
return redirect(url_for('login'))
@app.route('/login')
def login():
abort(401)
this_is_never_executed()
```
위 코드는 사용자가 인덱스 페이지에서 그들이 접근할 수 없는(401은 접근불가를 의미) 페이지로 redirect되어질 것이기 때문에 다소 무의미한 예제일 수는 있으나 어떻게 작동된다는 것을 보여주고 있다.
기본으로 하얀 화면에 검정 글씨의 에러 페이지가 각 에러코드를 위해 보여진다. 만약 여러분이 에러페이지를 변경하기를 원한다면 `errorhandler()` 데코레이터를 사용할
수 있다:
```
from flask import render_template

@app.errorhandler(404)
def page_not_found(error):
return render_template('page_not_found.html'), 404
```
`render_template()` 호출 뒤에 있는 `404` 를 주목하라. 이것은 페이지의 상태
코드가 그 페이지를 찾을 수 없다는 404가 되어야 하는 것을 Flask에게 말해 준다. 기본으로
200이 가정되며, 그것은 모든 것이 잘 실행됐다는 것으로 해석된다.
## 응답에 관하여¶
view 함수로부터 반환되는 값은 자동으로 response 객체로 변환된다. 만약 반환값이 문자열이라면 response body로 문자열과 `200 OK` 코드, `text/html` mimetype을 갖는
response객체로 변환된다. Flask에서 반환값을 response 객체로 변환하는 로직은 아래와
같다:
* 만약 정확한 유형의 response객체가 반환된다면 그 객체는 그대로 뷰로부터 반환되어 진다.
* 만약 문자열이 반환된다면, 해당 데이타와 기본 파라미터들을 갖는 response 객체가 생성된다.
* 만약 튜플이 반환된다면 튜플 안에 있는 아이템들이 추가적인 정보를 제공할 수 있다. 그런 튜플들은 `(response, status, headers)` 형식이어야 하며, 그 중 적어도 하나의 아이템이 튜플 안에 있어야 한다. status 값은 status code를 오버라이드하며, `headers` 는 추가적인 헤더 값들의 list나 dictionary가 될 수 있다.
* 만약 위의 것들이 아무것도 적용되지 않는다면, Flask는 반환값이 유효한 WSGI application 이라고 가정하고 WSGI application을 response 객체로 변환할 것이다.
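예를 들어, 아래는 (본문, 상태 코드, 헤더) 형태의 튜플을 반환하는 간단한 예시이다 (`/ping` 엔드포인트와 `X-Example` 헤더는 설명을 위해 임의로 정한 것이다):

```
@app.route('/ping')
def ping():
    # 본문, 상태 코드, 추가 헤더를 한 번에 반환하면
    # Flask가 이를 response 객체로 변환한다.
    return 'pong', 200, {'X-Example': 'value'}
```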
만약 여러분이 뷰 안에서 결과 response 객체를 찾기를 원한다면 `make_response()` 함수를 사용할 수 있다.
아래와 같은 뷰를 가지고 있다고 상상해 보아라:
```
@app.errorhandler(404)
def not_found(error):
return render_template('error.html'), 404
```
여러분은 단지 `make_response()` 함수를 사용하여 반환되는 표현을 래핑하고,
변경을 위해 결과 객체를 얻은 다음 반환하기만 하면 된다:
```
@app.errorhandler(404)
def not_found(error):
resp = make_response(render_template('error.html'), 404)
resp.headers['X-Something'] = 'A value'
return resp
```
## 세션¶
Request object외에도 하나의 요청에서 다음 요청까지 사용자에 대한 구체적인 정보를 저장할 수 있는 `session` 이라는 객체가 있다. 세션은 여러분을 위해 쿠키 위에서
구현되어 지고 암호화를 사용하여 그 쿠키를 서명한다. 즉, 사용자는 쿠키의 내용을 볼 수는
있지만 서명을 위해 사용된 비밀키를 알지 못한다면 쿠키의 내용을 변경할 수 없다는 것을
의미한다.
세션을 사용하기 위해서는 비밀키를 설정해야 한다. 아래 세션이 어떻게 사용되는지 참조해라:
```
from flask import Flask, session, redirect, url_for, escape, request
@app.route('/')
def index():
if 'username' in session:
return 'Logged in as %s' % escape(session['username'])
return 'You are not logged in'
@app.route('/login', methods=['GET', 'POST'])
def login():
if request.method == 'POST':
session['username'] = request.form['username']
return redirect(url_for('index'))
return '''
<form action="" method="post">
<p><input type=text name=username>
<p><input type=submit value=Login>
</form>
'''
@app.route('/logout')
def logout():
# remove the username from the session if it's there
session.pop('username', None)
return redirect(url_for('index'))
# set the secret key. keep this really secret:
app.secret_key = '<KEY>'
```
위 예제에서 `escape()` 는 템플릿 엔진을 사용하지 않을경우 이스케이프를 한다.
무작위(랜덤)로 생성하는 것의 문제는 정말 그것이 무작위(랜덤)한 것인지 판단하기 어렵다는 것이다. 비밀키는 가능한 한 무작위로 생성되어야 한다. 여러분의 OS는 키를 얻는 데 사용할 수 있는 cryptographic random generator(암호 난수 발생기) 기반의 꽤 무작위의 키를 생성하는 방법을 제공한다:
```
>>> import os
>>> os.urandom(24)
'\xfd{H\xe5<\x95\xf9\xe3\x96.5\xd1\x01O<!\xd5\xa2\xa0\x9fR"\xa1\xa8'
```
위 코드를 여러분의 코드에 복사하기/붙혀넣기만 하면 된다.
쿠키 기반의 세션: Flask는 여러분이 값들을 세션 객체 안에 넣고 세션 객체들을 쿠키로 직렬화할 것이다. 만약 쿠키는 사용 가능하나 세션에 저장한 값들이 여러 요청에 걸쳐 지속적으로 사용할 수 없다는 것을 발견한다면, 그리고 분명한 에러 메세지를 얻지 못한다면, 웹 브라우저에 의해 지원되는 쿠키 크기와 응답 페이지 내의 쿠키 크기를 확인해라.
## 메시지 플래싱¶
좋은 어플리케이션과 유저 인터페이스는 모두 피드백에 관한 것이다. 만약 사용자가 충분한 피드백을 받지 못한다면 사용자들은 아마 결국에는 그 어플리케이션을 싫어할 것이다. Flask는 flashing system을 사용하여 사용자에게 피드백을 주기 위한 정말 간단한 방법을 제공한다. flashing system이란 기본적으로 요청의 끝과 바로 다음 요청에 접근할 때 메세지를 기록하는 것을 가능하게 한다. 이것은 보통 메세지를 사용자에게 노출하기 위한 레이아웃 템플릿과 조합되어 있다.
메세지를 flash 하기 위하여 `flash()` 메소드를 사용하고 메세지를 가져오기
위하여 템플릿에서 사용할 수 있는 `get_flashed_messages()` 메소드를 사용할
수 있다. 예제가 필요하면 메시지 플래싱(Message Flashing) 챕터를 확인하기 바란다.
## 로깅¶
때때로 여러분은 정확해야 하는 데이타를 다뤄야 하나 실제로는 그렇지 못한 상황에 처할 지도 모른다. 예를 들면 형식이 틀린 HTTP 요청을 서버로 보내는 클라이언트 사이드 코드를 가질 지도 모른다. 이런 상황은 사용자가 데이타 조작에 의해 발생할 수 있으며, 클라이언트 코드는 정상적으로 실행되지 않을 것이다. 대부분의 경우에는 이런 상황에서 `400 Bad Request` 로
응답하는 것은 상관없으나 때때로 `400 Bad Request` 로 응답하지 않고 코드가 정상적으로
실행되어야 하는 경우가 있다.
이런 경우 여러분은 여전히 수상한 어떤 일이 발생한 것을 로깅하기를 원할지도 모른다. 이런 상황에서 로거를 사용할 수 있다. Flask 0.3부터 로거는 여러분이 사용할 수 있도록 미리 설정되어 있다.
아래 로그를 출력하는 예제가 있다.
```
app.logger.debug('A value for debugging')
app.logger.warning('A warning occurred (%d apples)', 42)
app.logger.error('An error occurred')
```
첨부된 로거는 표준 logging 모듈의 `Logger` 이다. 더 많은 정보를 원하면 공식 logging documentation 문서를 참조해라.
## WSGI 미들웨어에서 후킹하기¶
만약 여러분이 여러분이 개발한 어플리케이션을 WSGI 미들웨어에 올리기를 원한다면 여러분은 내부 WSGI 어플리케이션을 래핑할 수 있다. 예를 들면 여러분이 lighttpd의 버그를 피하기 위해 Werkzeug 패키지의 미들웨어 중 하나를 사용하기를 원한다면, 여러분은 아래 코드와 같이 내부 WSGI 어플리케이션을 래핑할 수 있다:
```
from werkzeug.contrib.fixers import LighttpdCGIRootFix
app.wsgi_app = LighttpdCGIRootFix(app.wsgi_app)
```
## 웹서버에 배포하기¶
여러분이 Flask로 개발한 어플리케이션을 배포할 준비가 되었는가? quickstart를 마무리하기 위하여 여러분은 hosted된 플랫폼에 배포할 수 있다. 모든 플랫폼들은 소규모 프로젝트를 위해 무료 서비스를 제공한다:
Flask 어플리케이션을 host할 수 있는 다른 곳도 있다:
* Deploying Flask on Webfaction
* Deploying Flask on Google App Engine
* Sharing your Localhost Server with Localtunnel
만약 여러분이 자신만의 host를 관리하고 서비스하기를 원한다면, 배포 옵션 챕터를 참조하라.
파이썬과 Flask로 어플리케이션을 개발하기를 원하는가? 여기서 예제를 가지고 그것을 배울 기회를 가질 수 있다. 이 튜토리얼에서는 우리는 간단한 마이크로 블로그 어플리케이션을 개발할 것이다. 텍스트만 입력가능한 한 명의 사용자만 지원하며 피드백이나 커멘트를 달 수 없다. 그러나 여러분이 시작하기에 필요한 모든 내용들이 있을 것이다. 우리는 Flask와 파이썬 범위에서 벗어난 데이타베이스로 SQLite를 사용할 것이다. 그 밖에 필요한 것들은 없다.
만약 여러분이 미리 또는 비교를 위해 모든 소스코드를 원한다면 example source 를 확인 하길 바란다.
우리는 우리의 블로깅 어플리케이션을 flaskr 이라고 부를 것이다. 웬지 덜 웹 2.0스러운 이름을 선택해야할 것 같은 느낌에서 자유로워 진것 같다. 기본적으로 우리는 flaskr을 통해서 다음 사항들을 하기를 원한다:
* 1.사용자가 지정한 자격증명 설정을 이용하여 로그인/로그아웃을 할 수 있게 한다.
* 사용자는 단한명만 지원한다.
* 2.사용자가 로그인하면 사용자는 제목과 내용을 몇몇 HTML과 텍스트로만 입력할 수 있다.
* 우리는 사용자를 신뢰하기 때문에 HTML에 대한 위험성 검증은 하지 않는다.
* 3.flaskr 페이지에서는 지금까지 등록된 모든 항목들을 시간의 역순으로(최근 것을 제일 위로) 상단에 보여준다.
* 로그인한 사용자는 새로 글을 추가할 수 있다.
이정도 규모의 어플리케이션에서 사용하기에는 SQLite3도 충분한 선택이다. 그러나 더 큰 규모의 어플리케이션을 위해서는 더 현명한 방법으로 데이타베이스 연결을 핸들링하고 다른 RDBMS를 사용이 가능한 SQLAlchemy 를 사용하는 것이 맞다. 만약 여러분의 데이타가 NoSQL에 더 적합하다면 인기있는 NoSQL 데이타베이스 중 하나를 고려하기를 원할 수도 있다.
아래는 최종 완성된 어플리케이션의 스크린샷이다.:
계속해서 스텝 0 폴더 생성하기를 보자. 스텝 0: 폴더를 생성하기.
어플리케이션 개발을 시작하기전에, 어플리케이션에서 사용할 폴더를 만들자
```
/flaskr
/static
/templates
```
flaskr 폴더는 Python 패키지가 아니다. 단지 우리의 파일들을 저장할 장소이다. 우리는 이 폴더 안에 데이터베이스 스키마뿐만 아니라 앞으로 소개될 다른 스텝에 나오는 주요 모듈들을 넣을 것이다. static 폴더 내 파일들은 HTTP 를 통해 어플리케이션 사용자들이 이용할 수 있다. 이 폴더는 css와 javascript 파일들이 저장되는 곳이다. Flask는 templates 폴더에서 Jinja2 템플릿을 찾을 것이다.
계속해서 Step 1:데이타베이스 스키마를 보자 스텝 1: 데이터베이스 스키마.
먼저 우리는 데이터베이스 스키마를 생성해야 한다. 우리의 어플리케이션을 위해서는 단지 하나의 테이블만 필요하며 사용이 매우 쉬운 SQLite를 지원하기를 원한다. 다음의 내용을 schema.sql 이라는 이름의 파일로 방금 생성한 flaskr 폴더에 저장한다.
```
drop table if exists entries;
create table entries (
id integer primary key autoincrement,
title string not null,
text string not null
);
```
이 스키마는 entries 라는 이름의 테이블로 구성되어 있으며 이 테이블의 각 row에는 id, title, text 컬럼으로 구성된다. id 는 자동으로 증가되는 정수이며 프라이머리 키(primary key) 이다. 나머지 두개의 컬럼은 null이 아닌 문자열(strings) 값을 가져야 한다.
계속해서 Step 2: 어플리케이션 셋업 코드를 보자. 스텝 2: 어플리케이션 셋업 코드.
이제 우리는 데이터베이스 스키마를 가지고 있고 어플리케이션 모듈을 생성할 수 있다. 우리가 만들 어플리케이션을 flaskr 폴더안에 있는 flaskr.py 라고 부르자. 시작하는 사람들을 위하여 우리는 import가 필요한 모듈 뿐만 아니라 설정 영역도 추가할 것이다. 소규모 어플리케이션을 위해서는 우리가 여기에서 할 모듈 안에 설정을 직접 추가하는 것이 가능하다. 그러나 더 깔끔한 해결책은 설정을 .ini 또는 .py 로 분리하여 생성하여 로드하거나 그 파일로부터 값들을 import하는 것이다.
아래는 flaskr.py 파일 내용이다:
In flaskr.py:
```
# all the imports
import sqlite3
from flask import Flask, request, session, g, redirect, url_for, \
abort, render_template, flash
# configuration
DATABASE = '/tmp/flaskr.db'
DEBUG = True
SECRET_KEY = 'development key'
USERNAME = 'admin'
PASSWORD = 'default'
```
다음으로 우리는 우리의 실제 어플리케이션을 생성하고 같은 파일의 설정을 가지고 어플리케이션을 초기화할 수 있다. flaskr.py 내용은
```
# create our little application :)
app = Flask(__name__)
app.config.from_object(__name__)
```
`from_object()` 는 인자로 주어진 객체를 설정값을 읽어 오기 위해 살펴 볼 것이다.
(만약 인자 값이 문자열이면 해당 객체를 임포트 할것이다.) 그리고나서 거기에 정의된 모든 대문자
변수들을 찾을 것이다. 우리의 경우, 우리가 위에서 몇 줄의 코드로 작성했던 설정이다.
여러분은 분리된 파일로도 설정값들을 이동시킬 수 있다. 일반적으로 설정 파일에서 설정값을 로드하는 것은 좋은 생각이다. 위에서 사용한 `from_object()` 대신 `from_envvar()` 를 사용하여 설정값을 로드할 수도 있다:
```
app.config.from_envvar('FLASKR_SETTINGS', silent=True)
```
위와 같은 방식으로 환경변수를 호출하여 설정값을 로드할 수도 있다. `FLASKR_SETTINGS` 에 명시된 설정 파일이 로드되면 기본 설정값들은 덮어쓰기가 된다.
silent 스위치는 해당 환경변수가 존재 하지 않아도 Flask가 작동하도록 하는 것이다.
클라이언트에서의 세션을 안전하게 보장하기 위해서는 secret_key 가 필요하다. secret_key는 추측이 어렵도록 가능한 복잡하게 선택하여야 한다. 디버그 플래그는 인터랙티브 디버거를 활성화 시키거나 비활성화 시키는 일을 한다. 운영시스템에서는 디버그 모드를 절대로 활성화 시키지 말아야 한다. 왜냐하면 디버그 모드에서는 사용자가 서버의 코드를 실행할수가 있기 때문이다.
우리는 또한 명세화된 데이터베이스에 쉽게 접속할 수 있는 방법을 추가할 것이다. 이방법으로 Python 인터랙티브 쉘이나 스크립트에서 요청에 의해 커넥션을 얻기위해 사용할 수 있다. 이 방법을 뒤에서 좀더 편리하게 만들어 볼 것이다.
```
def connect_db():
return sqlite3.connect(app.config['DATABASE'])
```
마지막으로 우리는 파일의 마지막에 단독 서버로 실행되는 애플리케이션을 위한 서버 실행 코드를 한줄 추가 하였다.:
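추가한 코드는 다음과 같다 (Flask 공식 튜토리얼 기준):

```
if __name__ == '__main__':
    app.run()
```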
여기까지 되어있으면 문제없이 어플리케이션을 시작할 수 있어야 한다. 다음 명령어로 실행이 가능하다:
`python flaskr.py`
서버가 접근가능한 주소로 실행되었다고 알려주는 메시지를 접할 수 있을 것이다.
우리가 아직 아무런 뷰(view)를 만들지 않았기 때문에 브라우저에서는 페이지를 찾을 수 없다는 404에러를 볼 수 있을 것이다. 이부분에 대해서는 좀 더 후에 살펴 보도록 할 것이다. 먼저 살펴봐야 할 것은 데이터베이스가 작동되는지 확인하는 것이다.
외부에서 접근가능한 서버
당신의 서버를 외부에 공개하고 싶다면 다음 섹션을 참고 하라 externally visible server
Continue with 스텝 3: 데이터베이스 생성하기.
Flaskr은 이전에 설명한 대로 데이터베이스를 사용하는 어플리케이션이고 좀더 정확하게는 관계형 데이터베이스 시스템에 의해 구동되는 어플리케이션이다. 이러한 시스템은 어떻게 데이터를 저장할지에 대한 정보를 가지고 있는 스키마가 필요하다. 그래서 처음으로 서버를 실행하기 전에 스키마를 생성하는 것이 중요하다.
이러한 스키마는 schema.sql 파일을 이용하여 sqlite3 명령어를 사용하여 다음과 같이 만들 수 있다.:
```
sqlite3 /tmp/flaskr.db < schema.sql
```
이방법에서의 단점은 sqlite3 명령어가 필요하다는 점인데, sqlite3 명령어는 모든 시스템들에서 필수적으로 설치되어 있는 것은 아니기 때문이다. 한가지 추가적인 문제는 데이터베이스 경로로 제공받은 어떤 경로들은 오류를 발생시킬 수도 있다는 것이다. 당신의 어플리케이션에 데이터베이스를 초기화 하는 함수를 추가하는 것은 좋은 생각이다.
만약 당신이 데이터베이스를 초기화 하는 함수를 추가하기 원한다면 먼저 contextlib 패키지에 있는 `contextlib.closing()` 함수를 import 해야한다.
만약 Python 2.5를 사용한다면 먼저 with 구문을 추가적으로 활성화해야 한다.
(__future__ 를 반드시 제일 먼저 import 해야 한다.).
따라서, 다음의 라인들을 기존의 flaskr.py 파일에 추가한다.
```
from __future__ import with_statement
from contextlib import closing
```
다음으로 우리는 데이터베이스를 초기화 시키는 init_db 함수를 만들 수 있다. 이 함수에서 우리는 앞서 정의한 connect_db 함수를 사용할 수 있다. flaskr.py 파일의 connect_db 함수 아래에 다음의 내용을 추가 하자.:
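추가할 내용은 다음과 같다 (Flask 공식 튜토리얼의 init_db 예시를 따른 것이다):

```
def init_db():
    with closing(connect_db()) as db:
        with app.open_resource('schema.sql') as f:
            db.cursor().executescript(f.read())
        db.commit()
```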
`closing()` 함수는 with 블럭안에서 연결한 커넥션을 유지하도록
도와준다. `open_resource()` 는 어플리케이션 객체의 함수이며
영역 밖에서도 기능을 지원하며 with 블럭에서 직접적으로 사용할 수 있다.
이 함수를 통해서 리소스 경로(flaskr 의 폴더)의 파일을 열고 그 값을 읽을 수 있다.
우리는 이것을 이용하여 데이터베이스에 연결하는 스크립트를 실행시킬 것이다.
우리가 데이터베이스에 연결할 때 우리는 커서를 제공하는 커넥션 객체를 얻는다. (여기에서는 db 라고 부르려고 한다.) 커서에는 전체 스크립트를 실행하는 메소드가 있다. 마지막으로, 우리는 변경사항들을 커밋해야 한다. SQLite 3 및 다른 트랜잭션 데이터베이스들은 명시적으로 커밋을 하도록 선언하지 않는 이상 변경사항을 반영하지 않는다.
이제 Python 쉘에서 다음 함수를 import 하여 실행시키면 데이터베이스 생성이 가능하다.:
```
>>> from flaskr import init_db
>>> init_db()
```
Troubleshooting
만약 테이블을 찾을 수 없다는 예외사항이 발생하면 init_db 함수를 호출하였는지 확인하고 테이블 이름이 정확한지 확인하라. (예를들면 단수형, 복수형과 같은 실수..)
다음 섹션에서 계속된다. 스텝 4: 데이터베이스 커넥션 요청하기
이제 우리는 어떻게 데이터베이스 커넥션을 생성할 수 있고 스크립트에서 어떻게 사용되는지 알고 있다. 하지만 어떻게 하면 좀더 근사하게 커넥션 요청을 할 수 있을까? 우리는 우리의 모든 함수에서 데이터베이스 커넥션을 필요로 한다. 그러므로 요청이 오기전에 커넥션을 초기화 하고 사용이 끝난 후 종료시키는 것이 합리적이다.
Flask에서는 `before_request()` , `after_request()` 그리고 `teardown_request()` 데코레이터(decorators)를 이용할 수 있다.:
```
@app.before_request
def before_request():
g.db = connect_db()
@app.teardown_request
def teardown_request(exception):
g.db.close()
```
파라미터가 없는 `before_request()` 함수는 리퀘스트가 실행되기 전에
호출되는 함수이다. `after_request()` 함수는 리퀘스트가 실행된 다음에
호출되는 함수이며 클라이언트에게 전송될 응답(response)을 파라미터로 넘겨주어야 한다.
이 함수들은 반드시 사용된 응답(response) 객체 혹은 새로운 응답(response) 객체를 리턴하여야 한다.
그러나 이 함수들은 예외가 발생할 경우 반드시 실행됨을 보장하지 않는다.
이 경우 예외상황은 `teardown_request()` 으로 전달된다.
이 함수들은 응답 객체가 생성된 후에 호출된다. 이 함수들은 request 객체를 수정할 수 없으며,
리턴 값들은 무시된다. 만약 리퀘스트가 진행 중에 예외가 발생했을 경우 해당 예외가
다시 각 함수들에게 전달되며, 그렇지 않을 경우에는 None 이 전달된다. 우리는 현재 사용중인 데이터베이스 커넥션을 특별한 곳에 저장한다. Flask 는 `g` 라는 특별한 객체를 우리에게 제공한다. 이 객체는
각 함수들에 대해서 오직 한 번의 리퀘스트에 대해서만 유효한 정보를 저장하고 있다.
쓰레드환경의 경우 다른 객체에서 위와 같이 사용 할경우 작동이 보장되지 않기 때문에
결코 사용해서는 안된다. 이 특별한 `g` 객체는 보이지않는 뒷편에서 마법과 같은 어떤일을 수행하여
쓰레드환경에서도 위와같은 사용이 올바르게 작동하도록 해준다.
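위 예제에는 포함되어 있지 않지만, `after_request()` 가 어떤 모양인지 보여주는 간단한 스케치이다. 응답 객체를 넘겨받아 (필요하면 수정한 뒤) 반드시 응답 객체를 다시 리턴해야 한다:
```
@app.after_request
def after_request(response):
    # 예: 응답 헤더를 덧붙이는 등의 후처리를 한 뒤 응답을 그대로 돌려준다
    return response
```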
다음 섹션에서 계속 스텝 5: 뷰 함수들.
힌트
어느 곳에 이 소스코드를 위치시켜야 하나요?
만약 당신이 이 튜토리얼을 따라서 여기까지 진행했다면, 아마도 당신은 이번 스텝과 다음 스텝에서 어디에 코드를 작성해 넣어야 하는지 궁금할 수 있다. 논리적인 위치는 함수들이 함께 그룹핑되는 모듈 레벨의 위치이고, 새로 만든 `before_request` 와 `teardown_request` 함수를 기존의 `init_db`
함수 아래에 작성할 수 있다.
(튜토리얼을 따라 한줄씩 작성한다.)
만약 현시점에서 각 부분들의 관계를 알고 싶다면, 예제 소스 가 어떻게 구성되어 있는지 눈여겨 볼 필요가 있다. Flask에서는 하나의 Python 파일에 당신의 모든 어플리케이션 코드를 다 작성하여 넣는 것도 가능하다. 물론 정말 그렇게 할 필요는 없다. 만약 당신의 어플리케이션이 점점 커져간다면 이것은 좋은 생각이 아니다.
이제 데이터베이스 연결이 제대로 작동하므로 우리는 이제 뷰함수 작성을 시작할 수 있다. 우리는 뷰함수중 네가지를 사용합니다.
## 작성된 글 보여주기¶
이뷰는 데이터베이스에 저장된 모든 글들을 보여준다. 이뷰에서는 어플리케이션 루트에서 대기하고 있다가 요청이 오면 데이터베이스의 title 컬럼과 text 컬럼에서 자료를 검색하여 보여준다. 가장 큰 값을 가지고 있는 id (가장 최신 항목)를 제일 위에서 보여준다. 커서를 통해 리턴되는 row들은 select 구문에서 명시된 순서대로 정리된 튜플(tuples)이다. 여기에서 다루는 예에서 사용하는 작은 어플리케이션에게는 이정도 기능만으로도 충분하다. 하지만 튜플들을 dict타입으로 변경하고 싶을수도 있을것이다. 만약 어떻게 변경이 가능한지 흥미있다면 다음의 예제를 참고 할 수 있다. 쉬운 질의하기 예제.
뷰 함수는 데이터베이스에서 검색된 항목들을 dict 타입으로 show_entries.html template 에 렌더링하여 리턴한다.
```
@app.route('/')
def show_entries():
cur = g.db.execute('select title, text from entries order by id desc')
entries = [dict(title=row[0], text=row[1]) for row in cur.fetchall()]
return render_template('show_entries.html', entries=entries)
```
## 새로운 글 추가하기¶
이뷰는 로그인한 사용자에게 새로운 글을 작성하게 해준다. 이뷰는 오직 POST 리퀘스트에만 응답하도록 하고 이와 관련된 실제 form은 show_entries 페이지에 있다. 만약 모든것이 제대로 작동하고 있다면 `flash()` 에서
메시지로 새로작성된 글에 대한 정보를 보여주고 show_entries 페이지로 리다이렉션한다.:
```
@app.route('/add', methods=['POST'])
def add_entry():
if not session.get('logged_in'):
abort(401)
g.db.execute('insert into entries (title, text) values (?, ?)',
[request.form['title'], request.form['text']])
g.db.commit()
flash('New entry was successfully posted')
return redirect(url_for('show_entries'))
```
우리는 여기에서 사용자가 로그인되었는지 체크 할 것이라는 것을 주목하자. (세션에서 logged_in 키가 존재하고 값이 True 인지 확인한다.)
Security Note
위의 예에서처럼 SQL 구문을 생성할 때는 물음표를 사용하도록 해야 한다. 그렇지 않고 SQL 구문에서 문자열 포맷팅을 사용하게 되면 당신의 어플리케이션은 SQL injection에 취약해질 것이다. 자세한 내용은 SQLite 3 사용하기 섹션을 확인해보자.
## 로그인과 로그아웃¶
이 함수들은 사용자의 로그인과 로그아웃에 사용된다. 로그인을 할때에는 입력받은 사용자이름과 패스워드를 설정에서 셋팅한 값과 비교하여 세션의 logged_in 키에 값을 설정하여 로그인상태와 로그아웃상태를 결정한다. 만약 사용자가 성공적으로 로그인 되었다면 키값에 True 를 셋팅한 후에 show_entries 페이지로 리다이렉트한다.
또한 사용자가 성공적으로 로그인되었는지 메시지로 정보를 보여준다. 만약 오류가 발생하였다면, 템플릿에서 오류내용에 대해 통지하고 사용자에게 다시 질문하도록 한다.:
```
@app.route('/login', methods=['GET', 'POST'])
def login():
error = None
if request.method == 'POST':
if request.form['username'] != app.config['USERNAME']:
error = 'Invalid username'
elif request.form['password'] != app.config['PASSWORD']:
error = 'Invalid password'
else:
session['logged_in'] = True
flash('You were logged in')
return redirect(url_for('show_entries'))
return render_template('login.html', error=error)
```
다른 한편으로 로그아웃 함수는 세션에서 logged_in 키값을 제거한다. 우리는 여기에서 정교한 트릭을 사용한다: dict 객체의 `pop()` 함수에 두번째 파라미터(기본값)를 전달하면, 이 함수는 해당 키가
dict 객체에 존재하면 그 키를 삭제하고 존재하지 않으면 아무 일도 하지 않는다.
이 방법은 사용자가 로그인 하였는지 아닌지 체크할 필요가 없기 때문에 유용하다.
```
@app.route('/logout')
def logout():
session.pop('logged_in', None)
flash('You were logged out')
return redirect(url_for('show_entries'))
```
다음 섹션에 계속된다. 스텝 6: 템플릿.
이제 우리는 템플릿을 작업해야 한다. 만약 우리가 지금까지 만든 flaskr에 URL을 요청하는 경우 Flask는 템플릿(templates)을 찾을 수 없다는 예외를 발생시킬 것이다. 템플릿들은 Jinja2 문법을 사용하고 있고 autoescaping 이 기본으로 활성화되어 있다. 이 의미는 개발자가 직접 `Markup` 이나 혹은 `|safe` 필터를 템플릿에서
직접 관리하지 않아도 된다는 뜻이다.
Jinja2는 `<` 혹은 `>` 와 같은 특별한 문자들을 XML에서 사용하는 동등한 엔티티 표기로
변환(escape)하도록 보장한다.
우리는 또한 가능한 모든 페이지에서 웹 사이트의 레이아웃을 재사용 할 수있도록 템플릿 상속을 할 수 있도록 하고 있다.
다음의 템플릿을 templates 폴더에 넣도록 하자:
## layout.html¶
이 템플릿은 HTML의 뼈대(skeleton), 헤더, 그리고 로그인 링크(혹은 사용자가 로그인 한 경우에는 로그아웃 링크)를 포함하고 있다. 또한 상황에 따라 메시지를 보여주기도 한다. 부모 템플릿의 `{% block body %}` 블럭은 이를 상속받은 자식 템플릿에서 동일한 이름의 블럭으로
치환된다. `session` dict 객체도 템플릿 안에서 사용 가능하며 이를 이용해
사용자가 로그인되어 있는지 그렇지 않은지 확인할 수 있다. Jinja에서는 세션 객체에 `'logged_in'` 키가 없는 경우처럼, 제공된 dict 객체나 객체의 누락된 속성/키에 접근해도 오류가 발생하지 않는다.
```
<!doctype html>
<title>Flaskr</title>
<link rel=stylesheet type=text/css href="{{ url_for('static', filename='style.css') }}">
<div class=page>
<h1>Flaskr</h1>
<div class=metanav>
{% if not session.logged_in %}
<a href="{{ url_for('login') }}">log in</a>
{% else %}
<a href="{{ url_for('logout') }}">log out</a>
{% endif %}
</div>
{% for message in get_flashed_messages() %}
<div class=flash>{{ message }}</div>
{% endfor %}
{% block body %}{% endblock %}
</div>
```
## show_entries.html¶
이 템플릿은 layout.html 템플릿을 상속받아 글들을 보여주는 템플릿이다. 유의할 점은 for 루프는 우리가 `render_template()` 함수에서
전달한 항목(entries)들 만큼 반복된다는 점이다.
우리는 또한 form이 전송될 때 add_entry 함수가 HTTP의 POST 메소드를 통해서
호출된다는 것을 이야기해 둔다.:
```
{% extends "layout.html" %}
{% block body %}
{% if session.logged_in %}
<form action="{{ url_for('add_entry') }}" method=post class=add-entry>
<dl>
<dt>Title:
<dd><input type=text size=30 name=title>
<dt>Text:
<dd><textarea name=text rows=5 cols=40></textarea>
<dd><input type=submit value=Share>
</dl>
</form>
{% endif %}
<ul class=entries>
{% for entry in entries %}
<li><h2>{{ entry.title }}</h2>{{ entry.text|safe }}
{% else %}
<li><em>Unbelievable. No entries here so far</em>
{% endfor %}
</ul>
{% endblock %}
```
## login.html¶
마지막으로 로그인 템플릿은 기본적으로 사용자가 로그인을 할 수 있도록 보여주는 form 이다. :
```
{% extends "layout.html" %}
{% block body %}
<h2>Login</h2>
{% if error %}<p class=error><strong>Error:</strong> {{ error }}{% endif %}
<form action="{{ url_for('login') }}" method=post>
<dl>
<dt>Username:
<dd><input type=text name=username>
<dt>Password:
<dd><input type=password name=password>
<dd><input type=submit value=Login>
</dl>
</form>
{% endblock %}
```
다음 섹션에서 계속 스텝 7: 스타일 추가하기.
이제 모든것들이 작동한다. 이제 약간의 스타일을 어플리케이션에 추가해볼 시간이다. 전에 생성해 두었던 static 폴더에 style.css 이라고 불리는 스타일시트를 생성해서 추가해 보자:
```
body { font-family: sans-serif; background: #eee; }
a, h1, h2 { color: #377BA8; }
h1, h2 { font-family: 'Georgia', serif; margin: 0; }
h1 { border-bottom: 2px solid #eee; }
h2 { font-size: 1.2em; }
.page { margin: 2em auto; width: 35em; border: 5px solid #ccc;
padding: 0.8em; background: white; }
.entries { list-style: none; margin: 0; padding: 0; }
.entries li { margin: 0.8em 1.2em; }
.entries li h2 { margin-left: -1em; }
.add-entry { font-size: 0.9em; border-bottom: 1px solid #ccc; }
.add-entry dl { font-weight: bold; }
.metanav { text-align: right; font-size: 0.8em; padding: 0.3em;
margin-bottom: 1em; background: #fafafa; }
.flash { background: #CEE5F5; padding: 0.5em;
border: 1px solid #AACBE2; }
.error { background: #F0D6D6; padding: 0.5em; }
```
다음 섹션에서 계속 보너스: 어플리케이션 테스트 하기.
이제 당신은 어플리케이션 개발을 끝마쳤고 모든것들이 예상한대로 작동한다. 미래에 어플리케이션을 수정하게될 경우를 대비해 테스트를 자동화하는 것은 나쁜 생각이 아니다. 위에서 소개한 어플리케이션은 유닛테스트를 어떻게 수행하는지에 대한 기본적인 예제를 가지고 있다. 이 문서의 Flask 어플리케이션 테스트하기 섹션을 참고하라. 해당 문서를 살펴보고 어떻게 쉽게 Flask 어플리케이션을 테스트 할 수있는지 알아보자
Flask는 템플릿엔진으로 Jinja2를 사용한다. 물론 여러분은 다른 템플릿엔진을 자유롭게 사용할 수 있지만, Flask를 구동하기 위해서는 Jinja2를 반드시 설치해야 한다. Jinja2가 필요한 이유는 Flask의 풍부한 기능 확장을 위해서이다. 확장기능은 Jinja2와 의존관계에 있을 수도 있다.
이 섹션은 Jinja2가 Flask에 어떻게 통합되어 있는지 빠르게 소개하는 섹션이다. 만약 템플릿엔진 자체의 문법에 대한 정보를 얻기 원한다면 Jinja2 공식 문서인 Jinja2 Template Documentation 를 참고하라.
## Jinja 설정¶
사용자가 임의로 설정하지 않는이상, Jinja2는 다음과 같이 Flask에 의해 설정되어 있다.:
* 자동변환(autoescaping) 기능은
`.html` , `.xml` 과 `.xhtml` 과 같은 모든 템플릿 파일들에 대해서 기본으로 활성화되어 있다. * 하나의 템플릿은 in/out에 대한 자동변환(autoescape) 기능을
`{% autoescape %}` 태그를 이용하여 사용할 수 있다. * Flask는 기본적으로 Jinja2 컨텍스트(context)를 통해서 전역 함수들과 헬퍼함수들을 제공한다.
## 표준 컨텍스트¶
다음의 전역 변수들은 Jinja2에서 기본으로 사용가능하다.:
*
`config` *
현재 설정값을 가지고 있는 객체 (
`flask.config` )
버전 0.6에 추가.
버전 0.10으로 변경: 이제 이 변수는 임포트된 템플릿에서도 항상 사용 가능하다.
*
`request` *
현재 요청된 객체 (request object) (
`flask.request` ). 이 변수는 템플릿이 활성화된 컨텍스트에서 요청된것이 아니라면 유효하지 않다.
*
`session` *
현재 가지고 있는 세션 객체 (
`flask.session` ). 이 변수는 템플릿이 활성화된 컨텍스트에서 요청된것이 아니라면 유효하지 않다.
*
`g` *
요청(request)에 한정되어진 전역 변수 (
`flask.g` ) . 이 변수는 템플릿이 활성화된 컨텍스트에서 요청된것이 아니라면 유효하지 않다.
*
`url_for` () *
`flask.url_for()` 함수 참고.
*
`get_flashed_messages` () *
```
flask.get_flashed_messages()
```
함수 참고.
Jinja 환경에서의 작동방식
이러한 변수들은 변수 컨텍스트에 추가되며 전역 변수가 아니다. 전역변수와의 차이점은 컨텍스트에서 불려진 템플릿에서는 기본적으로 보이지는 않는다는 것이다. 이것은 일부분은 성능을 고려하기위해서고 일부분은 명시적으로 유지하기 위해서이다.
이것은 무엇을 의미하는가? 만약 불러오고 싶은 매크로(macro)를 가지고 있다면, 요청 객체(request object)에 접근하는 것이 필요하며 여기에 두가지 가능성이 있다.:
* 매크로(macro)로 요청 객체를 파라미터로 명시적으로 전달하거나, 관심있는 객체에 속성값으로 가지고 있어야 한다.
* 매크로를 컨텍스트로 불러와야한다.
다음과 같이 컨텍스트에서 불러온다.:
```
{% from '_helpers.html' import my_macro with context %}
```
## 표준 필터¶
다음 필터들은 Jinja2에서 자체적으로 추가 제공되어 이용할 수 있는 것들이다.:
*
`tojson` () *
이 필터는 주어진 객체를 JSON 표기법으로 변환시켜주는 것이다. 예를 들어 JavaScript를 즉석에서 생성하려고 한다면 이 기능은 많은 도움이 될 것이다.
script 태그 안에서는 이스케이핑이 일어나서는 안 되기 때문에,
script 태그 안에서 사용할 때는 `|safe` 필터로 이스케이핑을 비활성화해야 한다.: > <script type=text/javascript> doSomethingWith({{ user.username|tojson|safe }}); </script>
`|tojson` 필터는 슬래쉬들을 올바르게 이스케이프해 준다.
## 자동변환(Autoescaping) 제어하기¶
자동변환(Autoescaping)은 자동으로 특수 문자들을 변환시켜주는 개념이다. 특수문자들은 HTML (혹은 XML, 그리고 XHTML) 문서 에서 `&` , `>` , `<` , `"` , `'` 에 해당한다. 이 문자들은 해당 문서에서 특별한 의미들을 담고 있고 이 문자들을
텍스트 그대로 사용하고 싶다면 “entities” 라고 불리우는 값들로 변환하여야 한다.
이렇게 하지 않으면 본문에 해당 문자들을 사용할 수 없어 사용자에게 불만을
초래할뿐만 아니라 보안 문제도 발생할 수 있다.
(다음을 참고 Cross-Site Scripting (XSS))
그러나 때때로 템플릿에서 이러한 자동변환을 비활성화 할 필요가 있다. 만약 명시적으로 HTML을 페이지에 삽입하려고 한다면, 예를 들어 HTML로 전환되는 마크다운(markdown)과 같은 안전한 HTML을 생성하는 특정 시스템으로부터 오는 것일 경우에는 유용하다.
이 경우 세가지 방법이 있다:
* Python 코드에서는, HTML 문자열을
`Markup` 객체로 래핑한 뒤 템플릿에 전달한다. 이 방법이 일반적으로 권장되는 방법이다(목록 아래의 스케치 참고). * 템플릿 내부에,
`|safe` 필터를 명시적으로 사용하여 문자열을 안전한 HTML이 되도록 한다. (
```
{{ myvariable|safe }}
```
) * 일시적으로 모두 자동변환(autoescape) 시스템을 해제한다.
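첫 번째 방법(Python 코드에서 `Markup` 으로 감싸기)을 보여주는 간단한 스케치이다. HTML 문자열은 가정한 예시 값이다:
```
from flask import Markup

# 신뢰할 수 있는 HTML 조각은 Markup 으로 감싸면 템플릿에서 다시 이스케이프되지 않는다
safe_html = Markup('<em>마크다운 변환 결과라고 가정한 HTML</em>')

# 반면 Markup 의 포맷팅에 끼워 넣는 값은 자동으로 이스케이프된다
greeting = Markup('<strong>Hello %s!</strong>') % '<blink>사용자 입력</blink>'
```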
템플릿에서 자동변환(autoescape) 시스템을 비활성화 하려면, `{%autoescape %}` 블럭을 이용할 수 있다. :
```
{% autoescape false %}
<p>autoescaping is disabled here
<p>{{ will_not_be_escaped }}
{% endautoescape %}
```
이 블럭을 사용할 때마다, 이 블록 안에서 사용하는 변수에 대해 각별한 주의를 기울여야 한다.
## 필터 등록하기¶
만약 Jinja2에 자신만의 필터를 직접 등록하기를 원한다면 두 가지 방법이 있다. 어플리케이션의 `jinja_env` 에 직접 추가하거나 `template_filter()` 데코레이터(decorator)를 이용할 수 있다.
다음 두개의 예제는 객체의 값을 거꾸로 만드는 같은 일을 한다:
```
@app.template_filter('reverse')
def reverse_filter(s):
return s[::-1]
def reverse_filter(s):
return s[::-1]
app.jinja_env.filters['reverse'] = reverse_filter
```
함수 이름을 필터 이름으로 사용하려는 경우 데코레이터(decorator)의 인자는 생략할 수 있다. 한번 필터가 등록되면, Jinja2의 내장 필터를 사용하는 것과 똑같이 사용이 가능하다. 예를 들면 mylist 라는 Python 리스트(list)가 컨텍스트에 있다면:
```
{% for x in mylist | reverse %}
{% endfor %}
```
## 컨텍스트 프로세서(context processor)¶
새로운 변수들을 자동으로 템플릿의 컨텍스트에 주입시키기 위해서 Flask에는 컨텍스트 프로세서들이 존재한다. 컨텍스트 프로세서들은 새로운 값들을 템플릿 컨텍스트에 주입시키기 위해 템플릿이 렌더링되기 전에 실행되어야 한다. 템플릿 프로세서는 딕셔너리(dictionary) 객체를 리턴하는 함수이다. 이 딕셔너리의 키와 밸류들은 어플리케이션에서의 모든 템플릿들을 위해서 템플릿 컨텍스트에 통합된다.:
```
@app.context_processor
def inject_user():
return dict(user=g.user)
```
위의 컨텍스트 프로세서는 user 라고 부르는 유효한 변수를 템플릿 내부에 g.user 의 값으로 만든다. 이 예제는 g 변수가 템플릿에서 유효하기 때문에 그렇게 흥미롭지는 않지만 어떻게 동작하는지에 대한 아이디어를 제공한다.
변수들은 값들에 제한되지 않으며, 또한 컨텍스트 프로세서는 템플릿에서 함수들을 사용할 수 있도록 해준다. (Python이 패싱 어라운드(passing around)함수를 지원하기 때문에):
```
@app.context_processor
def utility_processor():
def format_price(amount, currency=u'€'):
return u'{0:.2f}{1}'.format(amount, currency)
return dict(format_price=format_price)
```
위의 컨텍스트 프로세서는 format_price 함수를 모든 템플릿들에서 사용가능하도록 해준다
```
{{ format_price(0.33) }}
```
또한 format_price 를 템플릿 필터로 만들 수도 있다. (다음을 참고 필터 등록하기), 하지만 이 예제에서는 컨텍스트 프로세서에 어떻게 함수들을 전달하는지에 대해서만 설명한다.
> Something that is untested is broken.
이 문구의 기원을 정확하게 알 수는 없지만, 이것은 진실에 가깝다. 테스트되지 않은 어플리케이션들은 기존 코드의 개선을 어렵게 하며 프로그램 개발자들을 심한 편집증환자로 만들어 버린다. 만약 어플리케이션의 테스트들이 자동화되어 있다면, 우리는 문제가 발생했을 때 안전하며 즉각적으로 변경이 가능하다.
Flask는 Werkzeug 를 통해 테스트 `Client` 를 제공하여
어플리케이션의 컨텍스트 로컬을 처리하고 테스트할 수 있는 방법을 제공한다.
그리고 당신이 가장 좋아하는 테스팅 도구를 사용 할 수 있도록 해준다.
이 문서에서 우리는 Python에서 기본으로 제공하는 `unittest` 를
사용 할 것이다.
## 어플리케이션¶
첫째로 우리는 테스트할 어플리케이션이 필요하다. 우리는 튜토리얼 에서 소개된 어플리케이션을 사용할 것이다. 아직 어플리케이션이 준비되지 않았다면 the examples 에서 소스코드를 준비하자.
## 테스팅 스켈레톤(Skeleton)¶
어플리케이션을 테스트 하기 위해서, 우리는 두번째 모듈 (flaskr_tests.py) 을 추가하고 단위테스트 스켈레톤(Skeleton)을 만든다.:
```
import os
import flaskr
import unittest
import tempfile
class FlaskrTestCase(unittest.TestCase):

    def setUp(self):
        self.db_fd, flaskr.app.config['DATABASE'] = tempfile.mkstemp()
        flaskr.app.config['TESTING'] = True
        self.app = flaskr.app.test_client()
        flaskr.init_db()

    def tearDown(self):
        os.close(self.db_fd)
        os.unlink(flaskr.app.config['DATABASE'])
```
`setUp()` 함수의 코드는 새로운 테스트 클라이어트를
생성하고 새로운 데이터베이스를 초기화 한다. 이 함수는 각각의 테스트 함수가
실행되기 전에 먼저 호출된다. 테스트후 데이터베이스를 삭제하기 위해 `tearDown()` 함수에서 파일을 닫고 파일시스템에서
제거 할 수 있다. 또한 setup 함수가 실행되는 동안에 `TESTING` 플래그(flag)가
활성화된다. 요청을 처리하는 동안에 오류 잡기(error catch)가 비활성화되어
있는 것은 어플리케이션에대한 성능테스트에 대하여 좀 더 나은 오류 리포트를 얻기
위해서이다.
이 테스트 클라이언트는 어플리케이션에대한 단순한 인터페이스를 제공한다. 우리는 어플리케이션에게 테스트 요청을 실행시킬 수 있고, 테스트 클라이언트는 이를 위해 쿠키를 놓치지 않고 기록한다.
SQLite3는 파일시스템 기반이기 때문에 임시 데이터베이스를 생성할때 tempfile 모듈을 사용하여 초기화 할 수 있다. `mkstemp()` 함수는 두가지 작업을 수행한다:
이 함수는 로우-레벨 파일핸들과 임의의 파일이름을 리턴한다. 여기서 임의의 파일이름을
데이터베이스 이름으로 사용한다. 우리는 단지 db_fd 라는 파일핸들을 `os.close()` 함수를
사용하여 파일을 닫기전까지 유지하면 된다.
만약 지금 테스트 실행한다면, 다음과 같은 출력내용을 볼 수 있을 것이다.:
```
$ python flaskr_tests.py
----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK
```
비록 실제 테스트를 실행하지는 않았지만, 우리는 이미 flaskr 어플리케이션이 문법적으로 유효하다는 것을 알게 되었다. 그렇지 않았다면 어플리케이션이 종료되는 예외상황을 겪었을 것이다.
## 첫번째 테스트¶
이제 어플리케이션의 기능 테스트를 시작할 시간이다. 어플리케이션의 루트 ( `/` )로 접근하였을때 어플리케이션이
“No entries here so far” 를 보여주는지 확인해야 한다.
이 작업을 수행하기 위해서, 우리는 새로운 테스트 메소드를
다음과 같이 클래스에 추가하여야 한다.:
```
class FlaskrTestCase(unittest.TestCase):
def tearDown(self):
os.close(self.db_fd)
os.unlink(flaskr.app.config['DATABASE'])
def test_empty_db(self):
rv = self.app.get('/')
assert 'No entries here so far' in rv.data
```
우리의 테스트 함수들의 이름이 test 로 시작하고 있다는 것에 주목하자. 이점을 활용하여 `unittest` 에서 테스트를 수행할 함수를 자동적으로 식별할 수 있다. self.app.get 를 사용함으로써 HTTP GET 요청을 주어진 경로에 보낼 수 있다. 리턴값은 `response_class` 객체의 값이 될 것이다.
이제 `data` 의 속성을 사용하여 어플리케이션
으로부터 넘어온 리턴 값(문자열)을 검사 할 수 있다.
이런 경우,
```
'No entries here so far'
```
가 출력 메시지에 포함되어 있는 것을 확인해야 한다.
다시 실행해 보면 하나의 테스트에 통과한 것을 확인할 수 있을 것이다.
```
$ python flaskr_tests.py
.
----------------------------------------------------------------------
Ran 1 test in 0.034s

OK
```
## 로그인과 로그아웃 테스트¶
우리의 어플리케이션에서 대부분의 기능은 관리자만 사용이 가능하다. 그래서 우리의 테스트 클라이언트가 어플리케이션에 로그인하고 로그아웃할 수 있어야 한다. 이 작업을 위해서는 로그인과 로그아웃 페이지 요청에 폼 데이터(사용자이름과 암호)를 담아 보내야 한다. 그리고 로그인과 로그아웃 페이지들은 리다이렉트(Redirect)되기 때문에 클라이언트에 follow_redirects 를 설정해 주어야 한다.
다음의 두 함수를 FlaskrTestCase 클래스에 추가 하자
```
def login(self, username, password):
return self.app.post('/login', data=dict(
username=username,
password=password
), follow_redirects=True)
def logout(self):
return self.app.get('/logout', follow_redirects=True)
```
이제 로그인과 로그아웃에 대해서 잘 작동하는지, 유효하지 않은 자격증명에 대해서 실패 하는지 쉽게 테스트 하고 로깅 할 수 있다. 다음의 새로운 테스트를 클래스에 추가 하자:
```
def test_login_logout(self):
rv = self.login('admin', 'default')
assert 'You were logged in' in rv.data
rv = self.logout()
assert 'You were logged out' in rv.data
rv = self.login('adminx', 'default')
assert 'Invalid username' in rv.data
rv = self.login('admin', 'defaultx')
assert 'Invalid password' in rv.data
```
## 메시지 추가 테스트¶
메시지를 추가 하게 되면 잘 작동하는지 확인해야만 한다. 새로운 테스트 함수를 다음과 같이 추가 하자
```
def test_messages(self):
self.login('admin', 'default')
rv = self.app.post('/add', data=dict(
title='<Hello>',
text='<strong>HTML</strong> allowed here'
), follow_redirects=True)
assert 'No entries here so far' not in rv.data
assert '<Hello>' in rv.data
assert '<strong>HTML</strong> allowed here' in rv.data
```
여기에서 우리가 의도한 대로 제목을 제외한 부분에서 HTML이 사용가능한지 확인한다.
이제 실행 하면 세가지 테스트를 통과 할 것이다.:
```
$ python flaskr_tests.py
...
----------------------------------------------------------------------
Ran 3 tests in 0.332s

OK
```
헤더값들과 상태코드들이 포함된 보다 복잡한 테스트를 위해서는, MiniTwit Example 예제 소스에서 좀 더 큰 어플리케이션의 테스트 수행 방법을 확인하자.
## 다른 테스팅 기법들¶
위에서 살펴본 대로 테스트 클라이언트를 사용하는 것 이외에,
`test_request_context()` 메소드를 with 구문과 조합하여
요청 컨텍스트를 임시적으로 활성화하기 위해 사용할 수 있다.
이것을 이용하여 `request` , `g` 과 `session` 같이 뷰 함수들에서 사용하는 객체들에 접근할 수 있다.
다음 예제는 이런 방법들을 보여주는 전체 예제이다.:
```
with app.test_request_context('/?name=Peter'):
assert flask.request.path == '/'
assert flask.request.args['name'] == 'Peter'
```
컨텍스트와 함께 바인드된 모든 객체는 같은 방법으로 사용이 가능하다.
만약 서로 다른 설정으로 어플리케이션을 테스트하기를 원하는데 좋은 방법이 없어 보인다면, 어플리케이션 팩토리로 전환하는 것을 고려해 보길 바란다. (참고 어플리케이션 팩토리)
그러나 만약 테스트 요청 컨텍스트를 사용하는 경우 `before_request()` 함수 와 `after_request()` 는 자동으로 호출되지 않는다.
반면에 `teardown_request()` 함수는 with 블럭에서 요청 컨텍스트를
빠져나올 때 실제로 실행된다.
만약 `before_request()` 함수도 마찬가지로 호출되기를 원한다면, `preprocess_request()` 를 직접 호출해야 한다.:
```
with app.test_request_context('/?name=Peter'):
app.preprocess_request()
...
```
이경우 어플리케이션이 어떻게 설계되었느냐에 따라 데이터베이스 컨넥션 연결이 필요할 수도 있다.
만약 `after_request()` 함수를 호출하려 한다면, `process_response()` 함수에 응답객체(Response Object)를 전달하여 직접 호출하여야 한다:
```
with app.test_request_context('/?name=Peter'):
resp = Response('...')
resp = app.process_response(resp)
...
```
이같은 방식은 일반적으로 해당 시점에 직접 테스트 클라이언트를 사용 할 수 있기 때문에 크게 유용한 방법은 아니다.
## 컨텍스트 유지시키기¶
버전 0.4에 추가.
때로는 일반적인 요청이 실행되는 경우에도 테스트 검증이 필요해질 경우가 있기 때문에 컨텍스트 정보를 좀더 유지 하는 것이 도움이 될 수 있다. Flask 0.4 버전에서 부터는 `test_client()` 를 with 블럭과 함께
사용하면 가능하다.:
```
with app.test_client() as c:
rv = c.get('/?tequila=42')
assert request.args['tequila'] == '42'
```
만약 `test_client()` 를 with 블럭이 없이 사용한다면 ,
request 가 더이상 유효하지 않기 때문에 assert 가 실패 하게 된다.
(그 이유는 실제 요청의 바깥에서 사용하려고 했기 때문이다.)
## 세션에 접근하고 수정하기¶
버전 0.8에 추가.
때로는 테스트 클라이언트에서 세션에 접근하고 수정하는 일은 매우 유용할 수 있다. 일반적으로 이를 위한 두 가지 방법이 있다. 단지 요청이 저장해 둔 세션 값에 접근하거나 확인만 하면 되는 경우라면, 요청 컨텍스트가 유지되는 동안 `flask.session` 을 그대로 사용하면 된다:
```
with app.test_client() as c:
rv = c.get('/')
assert flask.session['foo'] == 42
```
그렇지만 이 방법만으로는 요청이 실행되기 전에 세션을 수정하거나 접근하는 것이 가능하지는 않다. Flask 0.8 버전부터는 “세션 트랜잭션(session transaction)” 이라고 부르는, 테스트 클라이언트에서 세션에 대한 적절한 호출과 수정을 시뮬레이션할 수 있는 방법을 제공한다. 트랜잭션의 끝에서 해당 세션은 저장된다. 이것은 백엔드(backend)에서 사용되는 세션 구현과 독립적으로 작동한다.:
```
with app.test_client() as c:
with c.session_transaction() as sess:
sess['a_key'] = 'a value'
# once this is reached the session was stored
```
이경우에 `flask.session` 프록시의 `sess` 객체를 대신에 사용하여야 함을
주의하자. 이 객체는 동일한 인터페이스를 제공한다.
어플리케이션이 실패하면 서버도 실패한다. 조만간 여러분은 운영환경에서 예외를 보게 될 것이다. 비록 여러분의 코드가 100% 정확하다 하더라도 여러분은 여전히 가끔씩 예외를 볼 것이다. 왜? 왜냐하면 관련된 그 밖의 모든 것들이 실패할 수 있기 때문이다. 여기에 완벽하게 좋은 코드가 서버 에러를 발생시킬 수 있는 상황이 있다:
* 클라이언트가 request를 일찍 없애 버렸는데 여전히 어플리케이션이 입력되는 데이타를 읽고 있는 경우.
* 데이타베이스 서버가 과부하로 인해 쿼리를 다룰 수 없는 경우.
* 파일시스템이 꽉찬 경우
* 하드 드라이브가 크래쉬된 경우
* 백엔드 서버가 과부하 걸린 경우
* 여러분이 사용하고 있는 라이브러리에서 프로그래밍 에러가 발생하는 경우
* 서버에서 다른 시스템으로의 네트워크 연결이 실패한 경우
위 상황은 여러분이 직면할 수 있는 이슈들의 작은 예일 뿐이다. 이러한 종류의 문제들을 어떻게 다루어야 할 까? 기본적으로 어플리케이션이 운영 모드에서 실행되고 있다면, Flask는 매우 간단한 페이지를 보여주고 `logger` 에 예외를 로깅할 것이다.
그러나 여러분이 할 수 있는 더 많은 것들이 있다. 우리는 에러를 다루기 위해 더 나은 셋업을 다룰 것이다.
## 메일로 에러 발송하기¶
> 만약 어플리케이션이 운영 모드에서 실행되고 있다면(여러분의 서버에서) 여러분은 기본으로 어떠한 로그 메세지도 보지 못할 것이다. 그건 왜일까? Flask는 설정이 없는 프레임워크가 되려고 노력한다. 만약 어떠한 설정도 없다면 Flask는 어디에 로그를 남겨야 하나? 위치를 추측하는 것은 좋은 생각이 아니다. 왜냐하면 추측한 위치가 사용자가 로그 파일을 생성할 권한을 가진 위치가 아닐 수 있기 때문이다. 또한 어쨌든 대부분의 작은 어플리케이션에서는 아무도 로그 파일을 보지 않을 것이다.
사실 어플리케이션 에러에 대한 로그 파일을 설정한다하더라도 사용자가 이슈를 보고 했을 때 디버깅을 할 때를 제외하고 결코 로그 파일을 보지 않을 것이라는 것을 확신한다. 대신 여러분이 원하는 것은 예외가 발생했을 때 메일을 받는 것이다. 그리고 나서 여러분은 경보를 받고 그것에 대한 무언가를 할 수 있다.
Flask는 파이썬에 빌트인된 로깅 시스템을 사용한다. 실제로 에러가 발생했을 때 아마도 여러분이 원하는 메일을 보낸다. 아래는 예외가 발생했을 때 Flask 로거가 메일을 보내는 것을 설정하는 방법을 보여준다:
```
ADMINS = ['<EMAIL>']
if not app.debug:
import logging
from logging.handlers import SMTPHandler
mail_handler = SMTPHandler('127.0.0.1',
'<EMAIL>',
ADMINS, 'YourApplication Failed')
mail_handler.setLevel(logging.ERROR)
app.logger.addHandler(mail_handler)
```
무슨 일이 일어났는가? 우리는 `127.0.0.1` 에서 리스닝하고 있는 메일 서버를 가지고
“YourApplication Failed”란 제목으로 <EMAIL> 송신자로부터 모든
관리자 에게 메일을 보내는 새로운 `SMTPHandler` 를 생성했다.
만약 여러분의 메일 서버가 인증을 요구한다면 그것들 또한 제공될 수 있다. 이를 위해
`SMTPHandler` 에 대한 문서를 확인해라.
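예를 들어 인증이 필요한 메일 서버라면 대략 다음과 같이 쓸 수 있다. 호스트, 포트, 계정 정보는 모두 가정한 값이며, 위 예제의 `logging` / `SMTPHandler` import 와 ADMINS 목록을 그대로 사용한다고 가정한다:
```
mail_handler = SMTPHandler(
    mailhost=('mail.example.com', 587),          # 가정한 메일 서버와 포트
    fromaddr='server-error@example.com',
    toaddrs=ADMINS,
    subject='YourApplication Failed',
    credentials=('username', 'password'),        # 가정한 계정 정보
    secure=()                                    # STARTTLS 사용
)
mail_handler.setLevel(logging.ERROR)
app.logger.addHandler(mail_handler)
```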
우리는 또한 에러와 그보다 더 심각한 메세지만 보내라고 핸들러에게 지정했다. 왜냐하면 요청을 핸들링하는 동안 발생할 수 있는 경고나 쓸데없는 로그들에 대한 메일을 받기를 원하지 않기 때문이다.
여러분이 운영 환경에서 이것을 실행하기 전에, 에러 메일 안에 더 많은 정보를 넣기 위해 로그 포맷 다루기 챕터를 보아라. 그것은 많은 불만으로부터 너를 구해줄 것이다.
## 파일에 로깅하기¶
여러분이 메일을 받는다 하더라도, 여러분은 아마 주의사항을 로깅하기를 원할지도 모른다. 문제를 디버그하기 위해 필요할 수 있는 정보를 충분히 유지하는 것은 좋은 생각이다. Flask는 자체적으로 코어 시스템에서 어떠한 주의사항(warning)도 발생시키지 않는다는 것을 주목해라. 그러므로 무언가 이상해 보일 때 코드 안에 주의사항을 남기는 것은 여러분의 책임이다.
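예를 들어 무언가 이상해 보이는 지점에서 다음과 같이 직접 경고를 남길 수 있다 (조건과 메시지는 가정한 예시다):
```
if not cart_items:
    app.logger.warning('Cart is empty where it should not be, falling back to defaults')
```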
로깅 시스템에 의해 제공되는 몇 가지 핸들러가 있다. 그러나 기본 에러 로깅을 위해 그것들 모두가 유용한 것은 아니다. 가장 흥미로운 것은 아마 아래와 같은 것들일 것이다:
*
`FileHandler` - 파일 시스템 내 파일에 메세지를 남긴다. *
`RotatingFileHandler` - 파일 시스템 내 파일에 메세지를 남기며 특정 횟수로 순환한다. *
`NTEventLogHandler` - 윈도 시스템의 시스템 이벤트 로그에 로깅할 것이다. 만약 여러분이 윈도에 디플로이를 한다면 이 방법이 사용하기 원하는 방법일 것이다. *
`SysLogHandler` - 유닉스 syslog에 로그를 보낸다.
일단 여러분이 로그 핸들러를 선택하면, 위에서 SMTP 핸들러에 했던 것처럼 레벨을 설정하되 더 낮은 레벨을 사용하도록 하라 (필자는 WARNING을 추천한다.):
```
if not app.debug:
import logging
from themodule import TheHandlerYouWant
file_handler = TheHandlerYouWant(...)
file_handler.setLevel(logging.WARNING)
app.logger.addHandler(file_handler)
```
## 로그 포맷 다루기¶
기본으로 핸들러는 단지 파일 안에 메세지 문자열을 쓰거나 메일로 여러분에게 그 메세지를 보내기만 할 것이다. 하지만 로그 레코드는 더 많은 정보를 담고 있다. 왜 에러가 발생했는지, 더 중요하게는 어디서 에러가 발생했는지 등의 더 많은 정보를 포함하도록 로거를 설정할 수 있다.
포매터는 포맷 문자열을 가지고 초기화될 수 있다. 역추적(traceback)은 자동으로 로그 항목에 추가되므로, 로그 포맷터의 포맷 문자열 안에 그것을 넣을 필요는 없다.
여기 몇가지 셋업 샘플들이 있다:
### 이메일¶
```
from logging import Formatter
mail_handler.setFormatter(Formatter('''
Message type: %(levelname)s
Location: %(pathname)s:%(lineno)d
Module: %(module)s
Function: %(funcName)s
Time: %(asctime)s
Message:
%(message)s
'''))
```
### 파일 로깅¶
```
from logging import Formatter
file_handler.setFormatter(Formatter(
'%(asctime)s %(levelname)s: %(message)s '
'[in %(pathname)s:%(lineno)d]'
))
```
### 복잡한 로그 포맷¶
여기에 포맷 문자열을 위한 유용한 포맷팅 변수 목록이 있다. 이 목록은 완전하지는 않으며 전체 리스트를 보려면 `logging` 의 공식 문서를 참조하라.
만약 여러분이 포맷팅을 더 커스터마이징하기를 원한다면 포매터를 상속받을 수 있다. 그 포매터는 세 가지 흥미로운 메소드를 가지고 있다:
*
`format()` : * 실제 포매팅을 다룬다.
`LogRecord` 객체를 전달하면 포매팅된 문자열을 반환해야 한다. *
`formatTime()` : * asctime 포매팅을 위해 호출된다. 만약 다른 시간 포맷을 원한다면 이 메소드를 오버라이드할 수 있다.
*
`formatException()` * 예외 포매팅을 위해 호출된다.
`exc_info` 튜플을 전달하면 문자열을 반환해야 한다. 보통 기본으로 사용해도 괜찮으며, 굳이 오버라이드할 필요는 없다.
더 많은 정보를 위해서 공식 문서를 참조해라.
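예를 들어 예외 포매팅만 바꾸고 싶다면 대략 다음과 같이 상속할 수 있다. 포맷 문자열과 구분선은 가정한 예시다:
```
from logging import Formatter

class MyFormatter(Formatter):
    def formatException(self, exc_info):
        # 기본 트레이스백 문자열 앞뒤에 구분선을 붙이는 단순한 예
        result = super(MyFormatter, self).formatException(exc_info)
        return '--- Traceback ---\n%s\n-----------------' % result

mail_handler.setFormatter(MyFormatter('%(asctime)s %(levelname)s: %(message)s'))
```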
## 다른 라이브러리들¶
이제까지 우리는 단지 여러분의 어플리케이션이 생성한 로거만 설정했다. 다른 라이브러리들 또한 로그를 남길 수 있다. 예를 들면 SQLAlchemy는 코어 안에서 로깅을 많이 사용한다. `logging` 패키지 안의 모든 로거를 한꺼번에 설정하는 방법이 있지만 나는 그것을 사용하는 것을
추천하지 않는다. 여러분이 같은 파이썬 인터프리터에서 같이 실행되는 여러 개로 분리된
어플리케이션을 갖기를 원할 수도 있는데, 이런 상황에서는 서로 다른 로깅 설정을 갖게 하는 것이
불가능해지기 때문이다. 대신 `getLogger()` 함수로 여러분이 관심있어 하는 로거들을 얻고,
얻은 로거들을 순회하면서 핸들러를 첨부하는 것을 추천한다:
```
from logging import getLogger
loggers = [app.logger, getLogger('sqlalchemy'),
getLogger('otherlibrary')]
for logger in loggers:
logger.addHandler(mail_handler)
logger.addHandler(file_handler)
```
# 어플리케이션 에러 디버깅¶
제품화된 어플리케이션의 경우, 어플리케이션 에러 로깅하기 에 설명된것을 참고하여 로깅과 알림설정을 구성하는 것이 좋다. 이 섹션은 디버깅을 위한 설정으로 배포할때 완전한 기능을 갖춘 Python 디버거를 깊이있게 사용하는 방법을 제공한다.
## 의심이 들때는 수동으로 실행하자¶
제품화를 위해 설정된 어플리케이션에서 문제를 겪고 있는가? 만약 해당 호스트에 쉘 접근 권한을 가지고 있다면, 배포 환경에서 쉘을 이용해 수동으로 어플리케이션을 실행 할 수 있는지 확인한다. 권한에 관련된 문제를 해결하기 위해서는 배포환경에 설정된 것과 동일한 사용자 계정에서 실행되어야 한다. 제품화된 운영 호스트에서 debug=True 를 이용하여 Flask에 내장된 개발 서버를 사용하면 설정 문제를 해결하는데 도움이되지만, 이와같은 설정은 통제된 환경에서 임시적으로만 사용해야 함을 명심하자. debug=True 설정은 운영환경 혹은 제품화되었을때는 절대 사용해서는 안된다.
## 디버거로 작업하기¶
좀더깊이 들어가서 코드 실행을 추적한다면, Flask는 독자적인 디버거를 제공한다. (디버그 모드 참고) 만약 또다른 Python 디버거를 사용하고 싶다면 이 디버거들은 서로 간섭현상이 발생하므로 주의가 필요하다. 선호하는 디버거를 사용하기 위해서는 몇몇 디버깅 옵션을 설정해야만 한다.:
*
`debug` - 디버그 모드를 사용하고 예외를 잡을 수 있는지 여부 *
`use_debugger` - Flask 내부 디버거를 사용할지 여부 *
`use_reloader` - 예외발생시 프로세스를 포크하고 리로드할지 여부 `debug` 옵션은 다른 두 옵션 이 어떤값을 갖던지 반드시 True 이어야 한다.
(즉, 예외는 잡아야만 한다.) 만약 Eclipse에서 Aptana를 디버깅을 위해 사용하고 싶다면,
`use_debugger` 와 `use_reloader` 옵션을 False로 설정해야 한다.
config.yaml을 이용해서 다음과 같은 유용한 설정패턴을 사용하는 것이 가능하다 (물론 자신의 어플리케이션을위해 적절하게 블럭안의 값들을 변경시킬 수 있다.):
```
FLASK:
DEBUG: True
DEBUG_WITH_APTANA: True
```
이렇게 설정한다음 어플리케이션의 시작점(main.py)에 다음과 같이 사용할 수 있다.:
```
if __name__ == "__main__":
# To allow aptana to receive errors, set use_debugger=False
app = create_app(config="config.yaml")
if app.debug: use_debugger = True
try:
# Disable Flask's debugger if external debugger is requested
use_debugger = not(app.config.get('DEBUG_WITH_APTANA'))
except:
pass
app.run(use_debugger=use_debugger, debug=app.debug,
use_reloader=use_debugger, host='0.0.0.0')
```
어플리케이션들은 일종의 설정 및 구성을 필요로 한다. 어플리케이션 실행 환경에서 다양한 종류의 설정 값들을 변경 할 수 있다. 디버깅모드를 변경하거나 비밀 키(secret key)를 설정하거나그밖의 다른 환경에 관련된 값을 변경시킬 수 있다.
Flask는 일반적인 경우 어플리케이션이 구동될 때 설정값들을 사용할 수 있도록 설계되었다. 설정값들을 하드코드(hard code)로 적용할 수도 있는데, 이 방식은 작은 규모의 어플리케이션에서는 그리 나쁘지 않은 방법이지만 더 나은 방법들이 있다.
설정값을 독립적으로 로드하는 방법으로, 이미 로드된 설정들값들 중에 속해 있는 설정 객체(config object)를 사용할 수 있다: `Flask` 객체의 `config` 속성을 참고.
이 객체의 속성을 통해 Flask 자신의 특정 설정값을 저장할수 있고 Flask의
확장 플러그인들도 자신의 설정값을 저장할 수 있다.
마찬가지로, 여러분의 어플리케이션 설정값 역시 저장할 수 있다.
## 설정 기초연습¶
`config` 은 실제로는 dictionary 의 서브클래스이며,
다른 dictionary 처럼 다음과 같이 수정될 수 있다:
```
app = Flask(__name__)
app.config['DEBUG'] = True
```
몇몇 설정값들은 또한 `Flask` 객체의 속성으로도 제공되어,
그 객체를 통해 설정값들을 읽거나 쓸 수 있다 (예: `app.debug = True`). 한번에 다수의 키(key)들을 업데이트 하기 위해서는 `dict.update()` 함수를 사용할 수 있다.
```
app.config.update(
DEBUG=True,
SECRET_KEY='...'
)
```
## 내장된 고유 설정값들¶
다음의 설정값들은 Flask 의 내부에서 이미 사용되고 있는 것들이다. :
DEBUG
디버그 모드를 활성화/비활성화 함
TESTING
테스팅 모드를 활성화/비활성화 함
PROPAGATE_EXCEPTIONS
명시적으로 예외를 전파하는 것에 대한 활성화 혹은 비활성화 함. 이 값을 특별히 설정을 안하거나 명시적으로 None 으로 설정했을 경우라도 TESTING 혹은 DEBUG 가 true 라면 이 값 역시 true 가 된다.
PRESERVE_CONTEXT_ON_EXCEPTION
어플리케이션이 디버그 모드에 있는 경우, 디버거에서 데이터를 확인할 수 있도록 예외가 발생해도 요청 컨텍스트가 제거되지 않는다. 이 키를 설정하여 이 동작을 비활성화할 수 있다. 또한 이 설정은 제품화된(하지만 매우 위험할 수 있는) 어플리케이션을 디버깅하기 위해 디버거 실행을 강제하고 싶을 때에도 사용할 수 있다.
SECRET_KEY
비밀키
SESSION_COOKIE_NAME
세션 쿠키의 이름
SESSION_COOKIE_DOMAIN
세션 쿠키에 대한 도메인. 이값이 설정되어 있지 않은 경우 쿠키는SERVER_NAME 의 모든 하위 도메인에
대하여 유효하다.
SESSION_COOKIE_PATH
세션 쿠키에 대한 경로를 설정한다. 이값이 설정되어 있지 않은 경우 쿠키는'/' 로 설정되어 있지 않은 모든
APPLICATION_ROOT 에 대해서 유효하다
SESSION_COOKIE_HTTPONLY
쿠키가 httponly 플래그를 설정해야만 하도록 통제한다. 기본값은 True 이다.
SESSION_COOKIE_SECURE
쿠키가 secure 플래그를 설정해야만 하도록 통제한다. 기본값은 False 이다.
PERMANENT_SESSION_LIFETIME
datetime.timedelta 를 이용하여
영구 세션 유지 시간을 설정한다.
Flask 0.8버전부터 integer 타입의 초단위로
설정이 가능하다.
USE_X_SENDFILE
x-sendfile 기능을 활성화/비활성화 함
LOGGER_NAME
로거의 이름을 설정함
SERVER_NAME
서버의 이름과 포트 번호를 뜻한다. 서브도메인을 지원하기 위해 요구된다. (예:'myapp.dev:5000')
이 값을 “localhost” 로 설정하는 것은 서브
도메인을 지원하지 않는 것에 그리 도움이
되지는 않는다는 것을 주의하자.
또한 SERVER_NAME 를 설정하는 것은
기본적으로 요청 컨텍스트 없이 어플리케이션
컨텍스트를 통해 URL을 생성 할 수 있도록
해준다.
APPLICATION_ROOT
어플리케이션이 전체 도메인을 사용하지 않거나 서브도메인을 사용하지 않는 경우 이 값은 어플리케이션이 어느 경로에서 실행되기 위해 설정되어 있는지 결정한다. 이값은 세션 쿠키에서 경로 값으로 사용된다 만약 도메인이 사용되는 경우 이 값은None 이다.
MAX_CONTENT_LENGTH
만약 이 변수 값이 바이트수로 설정되면, 들어오는 요청에 대해서 이 값보다 더 큰 길이의 컨텐트일 경우 413 상태 코드를 리턴한다.
SEND_FILE_MAX_AGE_DEFAULT:
send_static_file() (기본 정적파일 핸들러)
와 send_file() 에서 사용하는 캐시 제어에 대한
최대 시간은 초단위로 정한다. 파일 단위로 사용되는 이 값을
덮어쓰기 위해서는 Flask 나 Blueprint 를
개별적으로 후킹하여 get_send_file_max_age() 를 사용한다.
기본값은 43200초 이다(12 시간).
TRAP_HTTP_EXCEPTIONS
만약 이 값이 True 로 셋팅되어 있다면 Flask는
HTTP 익셉션 처리를 위한 에러 핸들러를 실행 하지 않는다.
하지만, 대신에 익셉션 스택을 발생시킨다. 이렇게 하면 디버깅이 어려운 상황에서
HTTP 에러가 발견되는 곳을 찾을 때 유용하다.
TRAP_BAD_REQUEST_ERRORS
잘못된 요청(BadRequest)에 대한 주요 익셉션 에러들은 Werkzeug의 내부 데이터 구조에 의해 다루어진다. 마찬가지로 많은 작업들이 잘못된 요청에 의해 암시적으로 일관성 검증에 실패할 수 있다. 이 플래그는 왜 실패가 발생했는지 디버깅 상황에서 명백하게 알고 싶을 때 좋다. 만약 True 로 설정되어 있다면, 일반적인 트레이스백(traceback)을 얻을 수 있을 것이다.
PREFERRED_URL_SCHEME
사용가능한 URL 스키마가 존재하지 않을 경우 해당 URL에 대한 스키마가 반드시 필요하기 때문에 기본 URL 스키마가 필요하다. 기본값은 http.
JSON_AS_ASCII
Flask 직렬화 객체는 기본으로 아스키로 인코딩된 JSON을 사용한다. 만약 이 값이 False 로 설정 되어 있다면, Flask는 ASCII로 인코딩하지 않을 것이며
현재의 출력 문자열을 유니코드 문자열로 리턴할 것이다.
jsonify 는 자동으로 `utf-8` 로 인코딩을 한 후 전송한다.
버전 0.4에 추가: `LOGGER_NAME`
버전 0.5에 추가: `SERVER_NAME`
버전 0.6에 추가: `MAX_CONTENT_LENGTH`
버전 0.7에 추가: `PROPAGATE_EXCEPTIONS` , `PRESERVE_CONTEXT_ON_EXCEPTION`
버전 0.8에 추가: `TRAP_BAD_REQUEST_ERRORS` , `TRAP_HTTP_EXCEPTIONS` , `APPLICATION_ROOT` , `SESSION_COOKIE_DOMAIN` , `SESSION_COOKIE_PATH` , `SESSION_COOKIE_HTTPONLY` , `SESSION_COOKIE_SECURE`
버전 0.9에 추가: `PREFERRED_URL_SCHEME`
버전 0.10에 추가: `JSON_AS_ASCII`
## 파일을 통하여 설정하기¶
만약 설정을 실제 어플리케이션 패키지의 바깥쪽에 위치한 별도의 파일에 저장할 수 있다면 좀더 유용할 것이다. 이것은 어플리케이션의 패키징과 배포단계에서 다양한 패키지 도구 (Distribute으로 전개하기) 들을 사용할 수 있도록 해주며, 결과적으로 사후에 설정 파일을 수정 할 수 있도록 해준다.
일반적인 패턴은 다음과 같다:
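원문에서 이 자리에 있었을 것으로 보이는, 기본 설정 모듈을 불러온 뒤 환경변수가 가리키는 파일로 덮어쓰는 패턴의 스케치이다 (모듈 이름은 예시다):
```
app = Flask(__name__)
app.config.from_object('yourapplication.default_settings')
app.config.from_envvar('YOURAPPLICATION_SETTINGS')
```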
이 예제 첫부분에서 설정을 yourapplication.default_settings 모듈로부터 불러온 다음, 환경변수
`YOURAPPLICATION_SETTINGS` 가 가리키는 파일의 내용으로 설정값들을 덮어씌운다.
이 환경 변수들은 리눅스(Linux) 혹은 OS X 에서는 서버를 시작하기 전에 쉘의 export 명령어로
설정 할 수도 있다:
```
$ export YOURAPPLICATION_SETTINGS=/path/to/settings.cfg
$ python run-app.py
* Running on http://127.0.0.1:5000/
* Restarting with reloader...
```
윈도우 시스템에서는 내장 명령어인 set 을 대신 사용할 수 있다:
```
>set YOURAPPLICATION_SETTINGS=\path\to\settings.cfg
```
설정 파일들은 실제로는 파이썬 파일들이다. 오직 대문자로된 값들만 나중에 실제로 설정 객체에 저장된다. 그래서 반드시 설정 키값들은 대문자를 사용하여야 한다.
여기 설정 파일에 대한 예제가 있다:
```
# Example configuration
DEBUG = False
SECRET_KEY = '?\<KEY>'
```
확장(플러그인)들이 실행 시점에 설정값에 접근할 수 있도록, 설정은 가능한 한 일찍 로드하는 것이 좋다. 설정 객체는 개별 파일에서뿐만 아니라 다른 방법으로도 설정을 로드할 수 있다. 완전한 참고를 위해 `Config` 객체에 대한 문서를 읽으면 된다.
## 설정 베스트 사례¶
앞에서 언급한 접근 방법들의 단점은 테스트를 좀더 확실히 해야 한다는 것이다. 이 문제에 대한 100% 단일한 해법은 존재하지 않는다. 하지만, 이러한 경험을 개선하기 위해 염두에 두어야 할 몇 가지가 있다:
* 여러분의 어플리케이션을 함수에 구현하고 (Flask의) 블루프린트에 등록하자. 이러한 방법을 통해 어플리케이션에 대해서 다중인스턴스를 생성하여 유닛테스트를 보다 쉽게 진행 할 수 있다. 필요에 따라 설정값을 전달해주기 위해 이 방법을 사용할 수 있다.
* 임포트 시점에 설정정보를 필요로 하는 코드를 작성하지 않는다. 만약 여러분이 스스로 설정값에 대해서 오직 요청만 가능하도록 접근을 제한 한다면 필요에 의해 나중에 설정 객체를 재설정 할 수 있다.
## 개발 / 운영(혹은 제품)¶
대부분의 어플리케이션은 하나 이상의 설정(구성)이 필요하다. 적어도 운영 환경과 개발 환경은 독립된 설정값을 가지고 있어야만 한다. 이것을 다루는 가장 쉬운 방법은 버전 관리를 통하여 항상 로드되는 기본 설정값을 사용하는 것이다. 그리고 독립된 설정값들을 필요에 따라 위에서 언급했던 방식으로 덮어쓰기한다:
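앞 절과 같은 패턴을 다시 쓰면 대략 다음과 같다 (모듈 이름은 예시다):
```
app = Flask(__name__)
app.config.from_object('yourapplication.default_settings')
app.config.from_envvar('YOURAPPLICATION_SETTINGS')
```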
그런 다음, 여러분은 단지 독립적인 config.py 파일을 추가한 후
`YOURAPPLICATION_SETTINGS=/path/to/config.py`
를 export 하면 끝난다.
물론 다른 방법들도 있다. 예를들면, import 혹은 상속과 같은 방법도 가능하다. 장고(Django)의 세계에서는 명시적으로 설정 파일을
```
from yourapplication.default_settings import *
```
를 이용해 파일의 상단에 추가 하여 변경 사항은 수작업으로 덮어쓰기 하는 방법이 가장 일반적이다.
또한 `YOURAPPLICATION_MODE` 환경 변수가 production , development 등의 어떤 값인지 조사하여
하드코드된 다른 설정들을 import 할 수도 있다. 아래에 간단한 스케치가 있다.
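모듈 경로와 클래스 이름은 가정한 예시다:
```
import os

mode = os.environ.get('YOURAPPLICATION_MODE', 'development')
if mode == 'production':
    app.config.from_object('yourapplication.config.ProductionConfig')
else:
    app.config.from_object('yourapplication.config.DevelopmentConfig')
```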
한가지 흥미로운 패턴은 설정에 대해서도 클래스와 상속을 사용할 수 있다는 것이다:
```
class Config(object):
DEBUG = False
TESTING = False
DATABASE_URI = 'sqlite://:memory:'
class ProductionConfig(Config):
DATABASE_URI = 'mysql://user@localhost/foo'
class DevelopmentConfig(Config):
DEBUG = True
class TestingConfig(Config):
TESTING = True
```
이와 같은 설정을 활성화하기 위해서는 단지 `from_object()` 를 호출하면 된다:
```
app.config.from_object('configmodule.ProductionConfig')
```
여기 많은 다양한 다른 방법이 있지만 이것은 여러분이 설정 파일을 어떻게 관리하기 원하는가에 달려 있다. 여기 몇가지 좋은 권고사항이 있다:
* 버전관리에서 기본 설정을 유지한다. 각각의 설정 파일을 이전할때에 이 기본 설정 파일을 사용하거나 설정 값을 덮어쓰기 전에 기본 설정 파일의 내용을 자신의 파일로 가져오기
* 설정 사이를 전환할 환경변수를 사용한다. 이방식은 파이썬 인터프리터 외부에서 수행하고 빠르고 쉽기 때문에 결국 코드를 만지지 않고도 다른 설정 사이를 전환 할 수 있기 때문에 개발 및 배포를 훨씬 쉽게 만들어 줄 수 있다. 만약 여러분이 종종 다른 프로젝트들에서 작업한다면, virtualenv를 구동하는 자신만의 스크립트를 생성하여 자신을 위한 개발 설정을 export 하여 사용할 수 있다.
* 코드와 설정을 독립된 운영 서버에 배포하기 위해서 fabric 과 같은 도구를 사용한다. 어떻게 할 수 있는지 자세한 내용은 다음의 내용을 참고 하면 된다. Fabric으로 전개하기 패턴
## 인스턴스 폴더¶
버전 0.8에 추가.
Flask 0.8 에서 인스턴스 폴더가 도입되었다. 오랫동안 Flask는 어플리케이션에 대한 상대 경로를 직접 참조할 수 있도록 해왔다( `Flask.root_path` 를 통해서).
이것을 통해 많은 개발자들이 어플리케이션 바로 옆에 설정을 저장하고 불러왔다.
하지만 불행하게도 이 방법은 어플리케이션이 패키지가 아니어서 루트 경로가 패키지의 내용을 가리키지
않는 경우에만 잘 작동한다. Flask 0.8 부터 새로운 속성이 도입되었다: `Flask.instance_path` . 이 새로운 속성은 “인스턴스 폴더” 라고 불리는 것을 가리킨다.
인스턴스 폴더는 버전 관리와 특정한 배포에 속하지 않도록 설계되었다. 인스턴스 폴더는
런타임에서의 변경 사항 혹은 설정 파일의 변경 사항에 대한 것들을 보관하기에 완벽한 장소이다.
인스턴스 폴더의 경로는 Flask 어플리케이션이 생성될 때 명시적으로 제공하거나, Flask가 자동으로 탐지하도록 할 수 있다. 명시적으로 설정하려면 instance_path 파라미터를 쓸 수 있다:
```
app = Flask(__name__, instance_path='/path/to/instance/folder')
```
이때에 제공되는 폴더의 경로는 절대 경로임을 반드시 명심하자.
만약 instance_path 파라미터가 인스턴스 폴더를 제공하지 않는 다면 다음의 위치가 기본으로 사용된다:
Uninstalled module:
> /myapp.py
> /instance
*
Uninstalled package:
> /myapp
>     /__init__.py
> /instance
*
Installed module or package:
> $PREFIX/lib/python2.X/site-packages/myapp
> $PREFIX/var/myapp-instance
`$PREFIX` 는 파이썬이 설치된 경로의 prefix 이다. 이 것은 `/usr` 혹은 여러분의 virtualenv 경로이다. `sys.prefix` 를 이용해서 현재 설정된 prefix를 출력해 볼 수 있다.
설정 객체가 설정 파일을 상대 경로로 부터 읽어 올 수 있도록 제공 하기 때문에 우리는 가능한 한 인스턴스 경로에 대한 상대 파일이름을 통해서 로딩을 변경했다. 설정파일이 있는 상대경로의 동작은 “relative to the application root” (디폴트)와 “relative to instance folder” 사이에서 instance_relative_config 어플리케이션 생성자에 의해 뒤바뀔 수 있다:
```
app = Flask(__name__, instance_relative_config=True)
```
여기 어떻게 Flask에서 모듈의 설정을 미리 로드하고 설정 폴더가 존재 할 경우 설정 파일로 부터 설정을 덮어쓰기 할 수 있는 전체 예제가 있다
```
app = Flask(__name__, instance_relative_config=True)
app.config.from_object('yourapplication.default_settings')
app.config.from_pyfile('application.cfg', silent=True)
```
인스턴스 폴더의 경로를 통해 `Flask.instance_path` 를 찾을 수 있다.
Flask는 또한 인스턴스 폴더의 파일을 열기 위한 바로가기를
```
Flask.open_instance_resource()
```
를 통해 제공 한다.
두 경우에 대한 사용 예제:
```
filename = os.path.join(app.instance_path, 'application.cfg')
with open(filename) as f:
config = f.read()
# or via open_instance_resource:
with app.open_instance_resource('application.cfg') as f:
config = f.read()
```
버전 0.6에 추가.
플라스크 0.6부터 시그널 지원 기능이 플라스크에 통합됐다. 이 기능은 blinker 라는 훌륭한 라이브러리로 제공되며 사용할 수 없을 경우 자연스럽게 지원하지 않는 상태로 돌아갈 것이다.
시그널이란 무엇인가? 시그널은 핵심 프레임워크나 다른 플라스크 확장의 어느 곳에서 동작이 발생했을 때 공지를 보내어 어플리케이션을 동작하게 하여 어플리케이션간의 의존성을 분리하도록 돕는다. 요약하자면 시그널은 특정 시그널 발신자가 어떤 일이 발생했다고 수신자에게 알려준다.
플라스크에는 여러 시그널이 있고 플라스크 확장은 더 많은 시그널을 제공할 수 도 있다. 또한 시그널은 수신자에게 무엇인가를 알리도록 의도한 것이지 수신자가 데이터를 변경하도록 권장하지 않아야 한다. 몇몇 빌트인 데코레이터의 동작과 같은 것을 하는 것 처럼 보이는 시그널이 있다는 것을 알게될 것이다. (예를 들면 `request_started` 은 `before_request()` 과 매우 유사하다) 하지만, 그 동작 방식에
차이는 있다. 예제의 핵심 `before_request()` 핸들러는 정해진
순서에 의해 실행되고 응답이 반환됨에 의해 일찍 요청처리를 중단할 수 있다.
반면에 다른 모든 시그널은 정해진 순서 없이 실행되며 어떤 데이터도 수정하지 않는다.
핸들러 대비 시그널의 큰 장점은 짧은 순간 동안 그 시그널을 안전하게 수신할 수 있다는 것이다. 예를 들면 이런 일시적인 수신은 단위테스팅에 도움이 된다. 여러분이 요청의 일부분으로 어떤 템플릿을 보여줄지 알고 싶다고 해보자: 시그널이 정확히 바로 그 작업을 하게 한다.
## 시그널을 수신하기¶
시그널을 수신하려면 시그널의 `connect()` 메소드를
사용할 수 있다. 첫번째 인자는 시그널이 송신됐을 때 호출되는 함수고, 선택적인
두번째 인자는 송신자를 지정한다. 해당 시그널의 송신을 중단하려면 `disconnect()` 메소드를 사용하면 된다.
모든 핵심 플라스크 시그널에 대해서 송신자는 시그널을 발생하는 어플리케이션이다. 여러분이 시그널을 수신할 때, 모든 어플리케이션의 시그널을 수신하고 싶지 않다면 받고자하는 시그널의 송신자 지정을 잊지 말도록 해라. 여러분이 확장을 개발하고 있다면 특히나 주의해야한다.
예를 들면 여기에 단위테스팅에서 어떤 템플릿이 보여지고 어떤 변수가 템플릿으로 전달되는지 이해하기 위해 사용될 수 있는 헬퍼 컨택스트 매니저가 있다:
```
from flask import template_rendered
from contextlib import contextmanager
@contextmanager
def captured_templates(app):
recorded = []
def record(sender, template, context, **extra):
recorded.append((template, context))
template_rendered.connect(record, app)
try:
yield recorded
finally:
template_rendered.disconnect(record, app)
```
위의 메소드는 테스트 클라이언트와 쉽게 묶일 수 있다:
```
with captured_templates(app) as templates:
rv = app.test_client().get('/')
assert rv.status_code == 200
assert len(templates) == 1
template, context = templates[0]
assert template.name == 'index.html'
assert len(context['items']) == 10
```
플라스크가 시그널에 새 인자를 추가하더라도 여러분이 등록한 콜백이 실패하지 않도록, 추가적인 `**extra` 인자를 받도록 해라.
with 블럭의 내용에 있는 어플리케이션 app 이 생성한 코드에서 보여주는 모든 템플릿은 templates 변수에 기록될 것이다. 템플릿이 그려질 때마다 컨텍스트 뿐만 아니라 템플릿 객체도 어플리케이션에 덧붙여진다.
부가적으로 편리한 헬퍼 메소드도 존재한다( `connected_to()` ).
그 메소드는 일시적으로 그 자체에 컨텍스트 메니저를 가진 시그널에 대한 함수를 수신한다.
컨텍스트 매니저의 반환값을 그런 방식으로 지정할 수 없기 때문에 인자로 템플릿의 목록을
넘겨줘야 한다:
```
def captured_templates(app, recorded, **extra):
def record(sender, template, context):
recorded.append((template, context))
return template_rendered.connected_to(record, app)
```
위의 예제는 아래처럼 보일 수 있다:
```
templates = []
with captured_templates(app, templates, **extra):
...
template, context = templates[0]
```
Blinker API 변경내용
`connected_to()` 메소드는 Blinker
버전 1.1에 나왔다.
## 시그널 생성하기¶
여러분이 어플리케이션에서 시그널을 사용하고 싶다면, 직접 blinker 라이브러리를 사용할 수 있다. 가장 일반적인 사용 예는 직접 정의한 `Namespace`
클래스에 시그널의 이름을 지정하는 것이다. 이것이 보통 권고되는 방식이다:
```
from blinker import Namespace
my_signals = Namespace()
```
이제 여러분은 아래와 같이 새 시그널을 생성할 수 있다:
```
model_saved = my_signals.signal('model-saved')
```
여기에서 시그널에 이름을 준 것은 시그널을 구분해주고 또한 디버깅을 단순화한다. `name` 속성으로 시그널에
부여된 이름을 얻을 수 있다.
플라스크 확장 개발자를 위해서
여러분이 플라스크 확장을 개발하고 있고 blinker가 설치되지 않은 경우에도 매끄럽게 대처하고 싶다면,
`flask.signals.Namespace` 클래스를 사용할 수 있다.
## 시그널 보내기¶
시그널을 전송하고 싶다면, `send()` 메소드를 호출하면 된다.
이 메소드는 첫번째 인자로 송신자를 넘겨주고 선택적으로 시그널 수신자에게 전달되는
키워드 인자도 있다:
```
class Model(object):
...
def save(self):
model_saved.send(self)
```
항상 좋은 송신자를 뽑도록 한다. 여러분이 시그널을 보내는 클래스를 갖는다면 송신자로 self 를 넘겨준다. 여러분이 임의의 함수에서 시그널을 전송한다면,
`current_app._get_current_object()` 를 송신자로 전달할 수 있다.
송신자로 프락시를 넘겨주기
시그널의 송신자로 절대 `current_app` 를 넘겨주지 않도록 하고
대신 `current_app._get_current_object()`
를 사용한다. 왜냐하면 `current_app` 는 실제 어플리케이션 객체가 아닌 프락시 객체이기
때문이다.
## 시그널과 플라스크 요청 컨텍스트¶
시그널을 수신할 때 요청 컨텍스트 를 완전하게 지원한다. Context-local 변수는 `request_started` 과 `request_finished` 사이에서
일관성을 유지하므로 여러분은 필요에 따라 `flask.g` 과 다른 변수를 참조할
수 있다. 시그널 보내기 과 `request_tearing_down` 시그널에서
언급하는 제약에 대해 유의한다.
## 시그널 수신 기반 데코레이터¶
Blinker 1.1 과 함께 여러분은 또한 `connect_via()` 데코레이터를 사용하여 시그널을 쉽게 수신할 수 있다:
```
@template_rendered.connect_via(app)
def when_template_rendered(sender, template, context, **extra):
print 'Template %s is rendered with %s' % (template.name, context)
```
## 핵심 시그널¶
플라스크에는 다음과 같은 시그널이 존재한다:
*
`flask.` `template_rendered` *
이 시그널은 템플릿이 성공적으로 뿌려졌을 때 송신된다. 이 시그널은 template 으로 템플릿과 딕셔너리 형태인 context 로 컨텍스트를 인스턴스로 하여 호출된다.
수신 예제:
> def log_template_renders(sender, template, context, **extra): sender.logger.debug('Rendering template "%s" with context %s', template.name or 'string template', context) from flask import template_rendered template_rendered.connect(log_template_renders, app)
*
`flask.` `request_started` *
이 시그널은 요청 처리가 시작되기 전이지만 요청 컨텍스트는 만들어졌을 때 송신된다. 요청 컨텍스트가 이미 연결됐기 때문에, 수신자는 표준 전역 프록시 객체인
`request` 으로 요청을 참조할 수 있다.
수신 예제:
> def log_request(sender, **extra): sender.logger.debug('Request context is set up') from flask import request_started request_started.connect(log_request, app)
*
`flask.` `request_finished` *
이 시그널은 클라이언트로 응답이 가기 바로 전에 보내진다. response 인자로 응답 객체를 넘겨준다.
수신 예제:
> def log_response(sender, response, **extra): sender.logger.debug('Request context is about to close down. ' 'Response: %s', response) from flask import request_finished request_finished.connect(log_response, app)
*
`flask.` `got_request_exception` *
이 시그널은 요청 처리 동안 예외가 발생했을 때 보내진다. 표준 예외처리가 시작되기 전에 송신되고 예외 처리를 하지 않는 디버깅 환경에서도 보내진다. exception 인자로 예외 자체가 수신자에게 넘어간다.
수신 예제:
> def log_exception(sender, exception, **extra): sender.logger.debug('Got exception during processing: %s', exception) from flask import got_request_exception got_request_exception.connect(log_exception, app)
*
`flask.` `request_tearing_down` *
이 시그널은 요청 객체가 제거될 때 보내진다. 요청 처리 과정에서 오류가 발생하더라도 항상 호출된다. 현재 시그널을 기다리고 있는 함수는 일반 teardown 핸들러 뒤에 호출되지만, 순서를 보장하지는 않는다.
수신 예제:
> def close_db_connection(sender, **extra): session.close() from flask import request_tearing_down request_tearing_down.connect(close_db_connection, app)
플라스크 0.9에서, 예외가 있는 경우 이 시그널을 야기하는 예외에 대한 참조를 갖는 ‘exc’ 키워드 인자를 또한 넘겨줄 것이다.
```
appcontext_tearing_down
```
이 시그널은 어플리케이션 컨텍스트가 제거될 때 보내진다. 예외가 발생하더라도 이 시그널은 항상 호출되고, 일반 teardown 핸들러 뒤에 시그널에 대한 콜백 함수가 호출되지만, 순서는 보장하지 않는다.
수신 예제:
> def close_db_connection(sender, **extra): session.close() from flask import appcontext_tearing_down appcontext_tearing_down.connect(close_db_connection, app)
‘request_tearing_down’ 과 마찬가지로 예외에 대한 참조를 exc 키워드 인자로 넘겨줄 것이다.
플라스크 0.7은 함수가 아닌 클래스를 기반으로 한, Django 프레임워크의 제너릭 뷰(generic view)에 영향을 받은 플러거블 뷰(pluggable view : 끼워넣는 뷰)를 소개한다. 이 뷰의 주 목적은 여러분이 구현체의 부분들을 바꿀 수 있게 하고, 이 방식으로 맞춤과 끼워넣기가 가능한 뷰를 갖게 하는 것이다.
## 기본 원칙(Basic Principle)¶
여러분이 데이타베이스에서 어떤 객체의 목록을 읽어서 템플릿에 보여주는 함수를 가진다고 고려해보자:
```
@app.route('/users/')
def show_users(page):
users = User.query.all()
return render_template('users.html', users=users)
```
위의 코드는 간단하고 유연하지만, 여러분이 다른 모델이나 템플릿에도 적용가능한 일반적인 방식으로 이 뷰를 제공하고 싶다면 더 유연한 구조를 원할지 모른다. 이 경우가 끼워넣을 수 있는 클래스를 기반으로 한 뷰가 적합한 곳이다. 클래스 기반의 뷰로 변환하기 위한 첫 단계는 아래와 같다.
```
from flask.views import View
class ShowUsers(View):
def dispatch_request(self):
users = User.query.all()
return render_template('users.html', objects=users)
app.add_url_rule('/users/', view_func=ShowUsers.as_view('show_users'))
```
위에서 볼 수 있는 것처럼, 여러분은 `flask.views.View` 의 서브클래스를 만들고 `dispatch_request()` 를 구현해야한다. 그리고 그 클래스를 `as_view()` 클래스 메소드를 사용해서 실제 뷰 함수로 변환해야한다. 그 함수로 전달하는 문자열은 뷰가 가질 끝점(end-point)의 이름이다. 위의 코드를 조금 리팩토링해보자.
```
from flask.views import View

class ListView(View):

    def get_template_name(self):
        raise NotImplementedError()

    def render_template(self, context):
        return render_template(self.get_template_name(), **context)

    def dispatch_request(self):
        context = {'objects': self.get_objects()}
        return self.render_template(context)

class UserView(ListView):

    def get_template_name(self):
        return 'users.html'

    def get_objects(self):
        return User.query.all()
```
물론 이것은 이런 작은 예제에서는 그다지 도움이 안될 수도 있지만, 기본 원칙을 설명하기에는 충분하다. 여러분이 클래스 기반의 뷰를 가질 때, self 가 가리키는 건 무엇이냐는 질문이 나온다. 이것이 동작하는 방식은 요청이 들어올 때마다 클래스의 새 인스턴스가 생성되고 `dispatch_request()` 메소드가 URL 규칙으로부터 나온 인자를
가지고 호출된다. 클래스 그 자체로는 `as_view()` 함수에 넘겨지는
인자들을 가지고 인스턴스화된다. 예를 들면, 아래와 같은 클래스를 작성할 수 있다:
```
class RenderTemplateView(View):
def __init__(self, template_name):
self.template_name = template_name
def dispatch_request(self):
return render_template(self.template_name)
```
그런 다음에 여러분은 아래와 같이 뷰 함수를 등록할 수 있다:
```
app.add_url_rule('/about', view_func=RenderTemplateView.as_view(
'about_page', template_name='about.html'))
```
## 메소드 힌트¶
끼워넣을 수 있는 뷰는 일반 함수처럼 `route()` 나 더 낫게는 `add_url_rule()` 을 사용해 어플리케이션에 연결할 수 있다. 하지만 그 경우 뷰를 연결할 때 지원하는 HTTP 메소드를 함께 지정해야 하는데, 그 정보를 클래스로 옮기기 위해 `methods` 속성을 제공할 수 있다:
```
class MyView(View):
methods = ['GET', 'POST']
def dispatch_request(self):
if request.method == 'POST':
...
...
app.add_url_rule('/myview', view_func=MyView.as_view('myview'))
```
## 메소드 기반 디스패치¶
RESTful API에서 각 HTTP 메소드별로 다른 함수를 수행하는 것은 굉장히 도움이 된다.
`flask.views.MethodView` 로 여러분은 그 작업을 쉽게 할 수 있다. 각 HTTP
메소드는 같은 이름을 가진 메소드(소문자)로 연결된다:
```
from flask.views import MethodView
class UserAPI(MethodView):
def get(self):
users = User.query.all()
...
def post(self):
user = User.from_form_data(request.form)
...
app.add_url_rule('/users/', view_func=UserAPI.as_view('users'))
```
이 방식은 또한 여러분이 `methods` 속성을 제공하지 않아도 된다.
클래스에 정의된 메소드 기반으로 자동으로 설정된다.
## 데코레이팅 뷰¶
뷰 클래스 그 자체는 라우팅 시스템에 추가되는 뷰 함수가 아니기 때문에, 클래스 자체를 데코레이팅하는 것은 이해되지 않는다. 대신, 여러분이 수동으로 `as_view()` 함수의 리턴값을 데코레이팅해야한다:
```
def user_required(f):
"""Checks whether user is logged in or raises error 401."""
def decorator(*args, **kwargs):
if not g.user:
abort(401)
return f(*args, **kwargs)
return decorator
view = user_required(UserAPI.as_view('users'))
app.add_url_rule('/users/', view_func=view)
```
플라스크 0.8부터 클래스 선언에도 적용할 데코레이터 목록을 표시할 수 있는 대안이 있다:
```
class UserAPI(MethodView):
decorators = [user_required]
```
호출하는 입장에서 그 자체로 암시적이기 때문에 여러분이 뷰의 개별 메소드에 일반적인 뷰 데코레이터를 사용할 수 없다는 것을 명심하길 바란다.
## 메소드 뷰 API¶
웹 API는 보통 HTTP 메소드와 매우 밀접하게 동작하는데 `MethodView` 기반의 API를 구현할때는
더욱 의미가 들어맞는다. 그렇긴 하지만, 여러분은 그 API가 대부분 같은 메소드 뷰로 가는 여러
다른 URL 규칙을 요구할 것이라는 것을 알아야 할 것이다. 예를 들면, 여러분이 웹에서 사용자
객체에 노출된 상황을 고려해보자:
| URL | Method | Description |
| --- | --- | --- |
| `/users/` | GET | 전체 사용자 정보 목록 얻기 |
| `/users/` | POST | 새로운 사용자 정보 생성 |
| `/users/<id>` | GET | 단일 사용자 정보 얻기 |
| `/users/<id>` | PUT | 단일 사용자 정보 갱신 |
| `/users/<id>` | DELETE | 단일 사용자 정보 삭제 |
그렇다면 `MethodView` 를 가지고는 어떻게 위와 같은 작업을 계속할
것인가? 여러분이 같은 뷰에 여러 규칙을 제공할 수 있는 것이 요령이다.
뷰가 아래와 같이 보일 때를 가정해보자:
```
class UserAPI(MethodView):
def get(self, user_id):
if user_id is None:
# return a list of users
pass
else:
# expose a single user
pass
def post(self):
# create a new user
pass
def delete(self, user_id):
# delete a single user
pass
def put(self, user_id):
# update a single user
pass
```
그렇다면 우리는 이것을 어떻게 라우팅 시스템으로 연결하는가? 두가지 규칙을 추가하고 명시적으로 각 메소드를 언급한다:
```
user_view = UserAPI.as_view('user_api')
app.add_url_rule('/users/', defaults={'user_id': None},
view_func=user_view, methods=['GET',])
app.add_url_rule('/users/', view_func=user_view, methods=['POST',])
app.add_url_rule('/users/<int:user_id>', view_func=user_view,
methods=['GET', 'PUT', 'DELETE'])
```
여러분이 유사하게 보이는 여러 API를 갖고 있다면, 등록하는 메소드를 추가하도록 아래처럼 리팩토링할 수 있다:
```
def register_api(view, endpoint, url, pk='id', pk_type='int'):
view_func = view.as_view(endpoint)
app.add_url_rule(url, defaults={pk: None},
view_func=view_func, methods=['GET',])
app.add_url_rule(url, view_func=view_func, methods=['POST',])
app.add_url_rule('%s<%s:%s>' % (url, pk_type, pk), view_func=view_func,
methods=['GET', 'PUT', 'DELETE'])
register_api(UserAPI, 'user_api', '/users/', pk='user_id')
```
버전 0.9에 추가.
Flask의 설계에 숨겨진 사상들 중 하나는 코드가 수행되는 두 가지 다른 “상태”가 있다는 것이다. 하나는 어플리케이션이 암묵적으로 모듈 레벨에 있는 어플리케이션 셋업 상태이다. 그 상태는 `Flask` 객체가 초기화될 때 시작되고, 첫 요청을 받았을 때 암묵적으로 종료된다.
어플리케이션이 이 상태에 있을 때, 아래와 같은 몇가지 가정이 성립한다:
* 개발자는 어플리케이션 객체를 안전하게 수정할 수 있다.
* 현재까지 어떤 요청도 처리되지 않았다.
* 여러분이 어플리케이션 객체를 수정하려면 그 객체에 대한 참조를 가져야만한다.여러분이 현재 생성하고 수정하고 있는 어플리케이션 객체에 대한 참조를 줄 수 있는 어떠한 매직 프록시(magic proxy)는 존재하지 않는다.
반대로, 요청을 처리하는 동안에는, 몇 가지 다른 룰이 존재한다:
* 요청이 살아있는 동안에는 컨텍스트 로컬 객체(
`flask.request` 와 몇가지 객체들)가 현재 요청을 가리킨다. * 어떤 코드도 언제든지 이 컨텍스트 로컬 객체를 가질 수 있다.
이 상태의 작은 틈 사이에 있는 세번째 상태가 있다. 때때로 여러분은 요청처리 동안 요청이 활성화되지 않은 어플리케이션과 상호작용하는 방식과 유사하게 어플리케이션을 다룰 것이다. 예를 들면, 여러분이 대화형 파이썬 쉘에 있고 어플리케이션이나 명령행으로 어플리케이션을 실행하는 상황을 고려할 수 있다.
어플리케이션 컨텍스트는 `current_app` 라는 컨텍스트 로컬을 작동시킨다.
## 어플리케이션 컨텍스트의 목적¶
어플리케이션의 컨텍스트가 존재하는 주요한 이유는 과거에 다수의 기능이 더 나은 솔루션의 부족으로 요청 컨텍스트에 덧붙여있었다는 것이다. Flask의 기본 설계 중 하나는 여러분이 같은 파이썬 프로세스에 하나 이상의 어플리케이션을 갖을 수 있다는 것이다.
그래서 코드는 어떻게 “적합한” 어플리케이션을 찾을 수 있는가? 과거에 우리는 명시적으로 어플리케이션을 제공했었지만, 그 방식은 명시적 전달을 염두하지 않은 라이브러리와 문제를 야기했다.
> 그 문제의 일반적인 차선책은 현재 요청에 대한 어플리케이션 참조와 연결되있는
`current_app` 프록시를 나중에 사용하는 것이었다. 그러나, 그런 요청 컨텍스트를 생성하는 것은 어떤 요청도 없는 경우에 불필요하게 고비용 동작이기 때문에, 그 어플리케이션 컨텍스트가 소개되었다.
## 어플리케이션 컨텍스트 생성하기¶
어플리케이션 컨텍스트를 만들기 위해서는 두 가지 방법이 있다. 첫번째는 임의적인 방식으로, 요청 컨텍스트가 들어올 때마다, 어플리케이션 컨텍스트가 필요한 경우 바로 옆에 생성될 것이다. 그 결과로 여러분은 어플리케이션 컨텍스트가 필요없다면 그 존재를 무시할 수 있다.
두 번째 방식은 `app_context()` 메소드를 사용하는 명시적인 방법이다:
```
from flask import Flask, current_app
app = Flask(__name__)
with app.app_context():
# within this block, current_app points to app.
print current_app.name
```
어플리케이션 문맥은 `SERVER_NAME` 이 설정된 경우 `url_for()` 함수에
의해서도 사용된다. 이것은 요청이 없는 경우에도 여러분이 URL을 생성할 수 있게 해준다.
## 컨텍스트의 지역성¶
어플리케이션 문맥은 필요에 따라 생성되고 소멸된다. 그것은 결코 쓰레드들 사이를 이동할 수 없고 요청 사이에서 공유되지 않을 것이다. 그와 같이, 어플리케이션 문맥은 데이타베이스 연결 정보와 다른 정보들을 저장하는 최적의 장소다. 내부 스택 객체는 `flask._app_ctx_stack` 이다.
확장들은 충분히 구별되는 이름을 선택한다는 가정하에서 자유롭게 가장 상위에 추가적인 정보를
저장한다.
더 많은 정보에 대해서는 Flask Extension Development 을 살펴보라.
## 컨텍스트 사용¶
컨텍스트는 일반적으로 요청마다 혹은 사용할 때마다 생성되어야 하는 리소스를 캐시하는 데 사용된다. 예를 들어 데이터베이스 연결과 같은 경우다. 어플리케이션 컨텍스트에 저장할 때에는 반드시 고유한 이름을 선택하여야 한다. 이 영역은 Flask 어플리케이션들과 확장(플러그인 같은)들에서 공유되기 때문이다.
가장 일반적인 사용법은 리소스 관리를 두개의 파트로 나누는 것이다:
* 컨텍스트에서의 암시적인 자원 캐시
* 컨텍스트 분해를 통한 리소스 할당 해제
일반적으로 자원 `X` 가 아직 존재하지 않으면 생성하고, 이미 존재하면 같은 자원을 반환하는 `get_X()` 함수가 있고,
자원을 해제하는 `teardown_X()` 함수가
teardown 핸들러에 등록되어 있다.
여기 데이터베이스에 연결하는 예제가 있다:
```
from flask import _app_ctx_stack

def get_db():
top = _app_ctx_stack.top
if not hasattr(top, 'database'):
top.database = connect_to_database()
return top.database
@app.teardown_appcontext
def teardown_db(exception):
top = _app_ctx_stack.top
if hasattr(top, 'database'):
top.database.close()
```
처음 `get_db()` 가 호출된 시점에 연결이 이루어진다.
이 것을 암시적으로 만들기 위해서는 `LocalProxy` 를 사용할 수 있다:
```
from werkzeug.local import LocalProxy
db = LocalProxy(get_db)
```
이 문서는 플라스크 0.7에 있는 동작을 설명하는데 대부분은 이전 버전과 비슷하지만, 몇몇 작고 미묘한 차이를 갖는다.
어플리케이션 컨텍스트 장을 먼저 읽는 것을 권고한다.
## 컨텍스트 로컬로 다이빙하기¶
여러분이 사용자를 리디렉트해야하는 URL을 반환하는 유틸리티 함수를 갖는다고 하자. 그 함수는 항상 URL의 next 인자나 HTTP referer 또는 index에 대한 URL을 리디렉트한다고 가정하자:
```
from flask import request, url_for
def redirect_url():
return request.args.get('next') or \
request.referrer or \
url_for('index')
```
여러분이 볼 수 있는 것처럼, 그것은 요청 객체를 접근한다. 여러분은 플레인 파이썬 쉘에서 이것을 실행하면, 여러분은 아래와 같은 예외를 볼 것이다:
```
>>> redirect_url()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'request'
```
우리가 현재 접근할 수 있는 요청을 갖고 있지 않기 때문에 그것은 이해가 된다. 그래서 우리는 요청을 만들고 그것을 현재 컨텍스트에 연결시켜야 한다. `test_request_context` 메소드는 우리에게 `RequestContext` 를 만들어 줄 수 있다:
```
>>> ctx = app.test_request_context('/?next=http://example.com/')
```
이 컨텍스트는 두 가지 방식으로 사용될 수 있다. with 절과 함께 사용되거나 `push()` 와 `pop()` 메소드를 호출하여 사용된다: `>>> ctx.push()`
이 시점부터 여러분은 요청 객체를 가지고 작업할 수 있다 :
```
>>> redirect_url()
u'http://example.com/'
```
pop 함수를 호출할 때까지:
`>>> ctx.pop()`
요청 컨텍스트는 내부적으로 스택을 유지하기 때문에 여러분은 여러번 push와 pop을 할 수 있다. 이것이 내부적인 리디렉트와 같은 것을 손쉽게 구현한 것이다.
상호작용 파이썬 쉘로부터 요청 컨텍스트를 사용하는 더 많은 정보는 쉘에서 작업하기 장으로 넘어가라.
## 컨텍스트가 작동하는 방식¶
여러분이 플라스크 WSGI 어플리케이션이 내부적으로 동작하는 방식을 살펴보려면, 아래와 대단히 유사한 코드를 찾게될 것이다:
```
def wsgi_app(self, environ):
with self.request_context(environ):
try:
response = self.full_dispatch_request()
except Exception, e:
response = self.make_response(self.handle_exception(e))
return response(environ, start_response)
```
`request_context()` 메소드는 새로운 `RequestContext` 객체를 반환하고 컨텍스트를 연결하기 위해 with 구문과 조합하여
RequestContext 객체를 사용한다. 이 시점부터 with 구문의 끝까지 같은
쓰레드에서 호출되는 모든 것은 요청 글로벌 객체( `flask.request` 와 기타 다른것들)에 접근할 수 있을 것이다. 요청 컨텍스트도 내부적으로 스택처럼 동작한다. 스택의 가장 상위에는 현재 활성화된 요청이 있다. `push()` 는 스택의 제일 위에 컨텍스트를 더하고, `pop()` 은 스택으로부터 제일 상위에 있는 컨텍스트를 제거한다.
컨텍스트를 제거하는 pop 동작 시, 어플리케이션의 `teardown_request()` 함수 또한 실행된다.
주목할 다른 것은 요청 컨텍스트가 들어오고 그때까지 해당 어플리케이션에 어떠한 어플리케이션 컨텍스트가 없었다면, 자동으로 application context 또한 생성할 것이라는 것이다.
## 콜백과 오류¶
Flask에서 요청을 처리하는 동안 오류가 발생하면 어떻게 되는가? 버전 0.7에서 이 특별한 동작이 변경되었는데, 왜냐하면 실제로 무슨 일이 일어나는지 좀 더 쉽게 이해하기를 원했기 때문이다:
* 각 요청 전에,
`before_request()` 함수들이 수행된다. 이런 함수들 중 하나라도 응답을 반환하면, 다른 함수들은 더 이상 호출되지 않는다. 그러나 어떤 경우라도 반환값은 뷰의 반환값에 대한 대체값으로 처리된다. *
`before_request()` 함수들이 응답을 반환하지 않는다면, 보통 요청 처리가 시작되고 요청에 맞는 뷰 함수가 응답을 반환하게 된다. * 그리고 나서, 뷰의 반환값은 실제 응답 객체로 변환되고 응답 객체를 대체하거나 변경할 준비가 되어 있는
`after_request()` 함수로 전달된다. * 요청의 종료 시에는
`teardown_request()` 함수가 실행된다. 이 함수는 심지어 처리되지 않은 예외의 경우나 before-request 핸들러가 아직 실행되지 않았거나 전혀 실행되지 않은 경우에도 항상 호출된다. (예를 들면, 테스트 환경에서 때때로 before-request 콜백이 호출되지 않기를 원할 수도 있다.)
자, 오류가 발생하면 무슨 일이 발생하는가? 운영 환경에서는 예외가 처리되지 않으면, 500 내부 서버 핸들러가 실행된다. 그러나 개발 환경에서는 예외가 더 이상 처리되지 않고 WSGI 서버까지 전파된다. 그 방식으로 대화형 디버거와 같은 것들이 도움이 되는 디버깅 정보를 제공할 수 있다.
버전 0.7에서 중요한 변화는 내부 서버 오류가 더 이상 after-request 콜백에 의해 사후 처리되지 않고 after-request 콜백이 실행되는 것을 보장해주지 않는다는 것이다. 이 방식으로 내부 디스패칭 코드는 더 깔끔해졌고 커스터마이징과 이해가 더 쉬워졌다.
새로운 teardown 함수는 요청의 마지막에 반드시 실행되어야 하는 것들을 대체할 목적으로 사용된다.
## 테어다운(Teardown) 콜백¶
테어다운 콜백은 특별한 콜백인데 여러 다른 시점에 실행되기 때문이다. 엄격하게 말하자면, 그것들이 `RequestContext` 객체의 생명주기와 연결되있긴 하지만, 그것들은 실제 요청 처리와 독립되있다. 요청 문맥이 꺼내질 때, `teardown_request()` 함수는 호출된다.
with 구문이 있는 테스트 클라이언트를 사용하여 요청 문맥의 생명주기가 늘어나는 경우나, 명령줄에서 요청 문맥을 사용할 때 teardown이 언제 호출되는지 아는 것이 중요하다:
```
with app.test_client() as client:
resp = client.get('/foo')
# the teardown functions are still not called at that point
# even though the response ended and you have the response
# object in your hand
# only when the code reaches this point the teardown functions
# are called. Alternatively the same thing happens if another
# request was triggered from the test client
```
명령줄에서 이 동작을 보기는 쉽다.:
```
>>> app = Flask(__name__)
>>> @app.teardown_request
... def teardown_request(exception=None):
... print 'this runs after request'
...
>>> ctx = app.test_request_context()
>>> ctx.push()
>>> ctx.pop()
this runs after request
>>>
```
before-request 콜백이 아직 실행되지 않았고 예외가 발생했더라도, teardown 콜백은 항상 호출된다는 것을 명심하라. 테스트 시스템의 어떤 부분들 또한 before-request 핸들러를 호출하지 않고 일시적으로 요청 문맥을 생성할지도 모른다. 절대로 실패하지 않는 방식으로 여러분의 teardown-request 핸들러를 작성하도록 하라.
## 프록시에서 주의할 점¶
플라스크에서 제공하는 일부 객체들은 다른 객체에 대한 프록시들이다. 이렇게 하는 배경에는 이런 프락시들이 쓰레드들 간에 공유되어 있고, 그 프락시들이 필요시에 쓰레드에 연결된 실제 객체로 보이지 않게 디스패치되어야 한다는 것이 있다.
대게 여러분은 그 프락시에 대해서 신경쓰지 않아도 되지만, 이 객체가 실제 프락시인지 알면 좋은 몇 가지 예외의 경우가 있다. :
* 프락시 객체들은 상속받은 타입을 흉내내지 않기 때문에, 인스턴스 타입을 확인하고 싶다면 프락시가 가리키는 실제 객체에 대해 확인해야 한다(아래의 _get_current_object 를 보라).
* 객체의 참조가 중요한 경우( 시그널(Signals) 을 보내는 경우)
여러분이 프록시된 감춰진 객체에 접근할 필요가 있다면,
```
_get_current_object()
```
메소드를 사용할 수 있다: > app = current_app._get_current_object() my_signal.send(app)
## 오류 시 컨텍스트 보존¶
오류가 발생하거나 하지 않거나, 요청의 마지막에서 요청 문맥은 스택에서 빠지게 되고 그 문맥과 관련된 모든 데이타는 소멸된다. 하지만, 개발하는 동안 그것은 예외가 발생하는 경우에 여러분이 더 오랜 시간동안 그 정보를 갖고 있기를 원하기 때문에 문제가 될 수 있다. 플라스크 0.6과 그 이전 버전의 디버그 모드에서는 예외가 발생했을 때, 요청 문맥이 꺼내지지 않아서 인터렉티브( 상호작용하는) 디버거가 여러분에게 여전히 중요한 정보를 제공할 수 있었다.
플라스크 0.7부터 여러분은 `PRESERVE_CONTEXT_ON_EXCEPTION`
설정 변수값을 설정하여 그
동작을 좀 더 세밀하게 제어할 수 있다. 디폴트로 이 설정은 `DEBUG` 설정과 연결된다.
어플리케이션이 디버그 모드라면, 그 문맥은 보존되지만, 운영 모드라면 보존되지 않는다.
어플리케이션이 예외 발생 시 메모리 누수를 야기할 수 있으므로 운영 모드에서 ``PRESERVE_CONTEXT_ON_EXCEPTION``을 강제로 활성화하지 않아야 한다. 하지만, 개발 모드에서 운영 설정에서만 발생하는 오류를 디버그하려 할 때 개발 모드로 같은 오류 보존 동작을 얻는 것은 개발하는 동안에는 유용할 수 있다.
버전 0.7에 추가.
플라스크는 어플리케이션 컴포넌트를 만들고 어플리케이션 내부나 어플리케이션간에 공통 패턴을 지원하기 위해 블루프린트(blueprint) 라는 개념을 사용한다. 블루프린트는 보통 대형 어플리케이션이 동작하는 방식을 단순화하고 어플리케이션의 동작을 등록하기 위한 플라스크 확장에 대한 중앙 집중된 수단을 제공할 수 있다. `Blueprint` 객체는 `Flask` 어플리케이션 객체와 유사하게 동작하지만
실제로 어플리케이션은 아니다. 다만 어플리케이션을 생성하거나 확장하는 방식에 대한
블루프린트 이다.
## 왜 블루프린트인가?¶
플라스크의 블루프린트는 다음 경우로 인해 만들어졌다:
* 어플리케이션을 블루프린트의 집합으로 고려한다. 이 방식은 대형 어플리케이션에 있어서 이상적이다; 프로젝트는 어플리케이션 객체를 인스턴스화하고, 여러 확장을 초기화하고, 블루프린트의 묶음을 등록할 수 있다.
* 어플리케이션 상에 URL 접두어와/또는 서브도메인으로 블루프린트를 등록한다. URL 접두어와/또는 서브도메인에 있는 파라메터는 블루프린트에 있는 모든 뷰 함수에 걸쳐있는 공통 뷰 인자(기본값을 가진)가 된다.
* 어플리케이션에 여러 URL 규칙을 가진 블루프린트를 여러번 등록한다.
* 블루프린트를 통해 템플릿 필터, 정적 파일, 템플릿, 그리고 다른 유틸리티를 제공한다. 블루프린트는 어플리케이션이나 뷰 함수를 구현하지 않아도 된다.
* 플라스크 확장을 초기화할 때 이런 경우 중 어떤 경우라도 어플리케이션에 블루프린트를 등록한다.
플라스크에 있는 블루프린트는 끼우고 뺄수 있는 앱이 아니다 왜냐하면 블루프린트는 실제 어플리케이션이 아니기 때문이다 – 그것은 어플리케이션에 등록될 수 있는 동작의 집합인데 심지어 여러번 등록될 수 있다. 왜 복수의 어플리케이션 객체를 가지지 않는가? 여러분은 그렇게(어플리케이션 디스패칭 을 살펴봐라)할 수 있지만, 어플리케이션은 분리된 설정을 가질것 이고 WSGI 계층에서 관리될 것이다.
대신에 블루프린트는 플라스크 레벨에서 분리를 제공하고, 어플리케이션 설정을 공유하며, 등록되는 시점에 필요에 따라 어플리케이션 객체를 변경할 수 있다. 단점은 일단 어플리케이션이 생성되고 나면 전체 어플리케이션 객체를 제거하지 않고서는 블루프린트를 등록 해지할 수 없다는 것이다.
## 블루프린트의 개념¶
블루프린트의 기본 개념은 어플리케이션에 블루프린트이 등록될 때 실행할 동작을 기록한다는 것이다. 플라스크는 요청을 보내고 하나의 끝점에서 다른 곳으로 URL을 생성할 때 뷰 함수와 블루프린트의 연관을 맺는다.
## 첫번째 블루프린트¶
아래는 가장 기본적인 블루프린트의 모습이다. 이 경우에 우리는 정적 템플릿을 간단하게 그려주는 블루프린트를 구현하기를 원할 것이다:
```
from flask import Blueprint, render_template, abort
from jinja2 import TemplateNotFound
simple_page = Blueprint('simple_page', __name__,
template_folder='templates')
@simple_page.route('/', defaults={'page': 'index'})
@simple_page.route('/<page>')
def show(page):
try:
return render_template('pages/%s.html' % page)
except TemplateNotFound:
abort(404)
```
`@simple_page.route` 데코레이터와 함수를 연결할 때 블루프린트는
어플리케이션에 있는 그 함수를 등록하겠다는 의도를 기록할 것이다.
게다가 블루프린트는 `Blueprint` 생성자(위의 경우에는 `simple_page` )
에 들어가는 그 이름을 가지고 등록된 함수의 끝점 앞에 붙일 것이다.
## 블루프린트 등록하기¶
그렇다면 블루프린트를 어떻게 등록할 것 인가? 아래와 같이 등록한다:
```
from flask import Flask
from yourapplication.simple_page import simple_page
app = Flask(__name__)
app.register_blueprint(simple_page)
```
여러분이 어플리케이션에 등록된 규칙을 확인한다면, 여러분은 아래와 같은 것을 찾을 것이다:
```
[<Rule '/static/<filename>' (HEAD, OPTIONS, GET) -> static>,
<Rule '/<page>' (HEAD, OPTIONS, GET) -> simple_page.show>,
<Rule '/' (HEAD, OPTIONS, GET) -> simple_page.show>]
```
첫 규칙은 명시적으로 어플리케이션에 있는 정적 파일에 대한 것이다. 다른 두 규칙은 `simple_page` 블루프린트의 show 함수에 대한 것이다.
볼 수 있는 것 처럼, 블루프린트의 이름이 접두어로 붙어있고 점 ( `.` )
으로 구분되있다.
하지만 블루프린트는 또한 다른 지점으로 마운트 될 수 있도 있다:
```
app.register_blueprint(simple_page, url_prefix='/pages')
```
그리고 물론 말할 것도 없이, 아래와 같은 규칙이 생성된다:
```
[<Rule '/static/<filename>' (HEAD, OPTIONS, GET) -> static>,
<Rule '/pages/<page>' (HEAD, OPTIONS, GET) -> simple_page.show>,
<Rule '/pages/' (HEAD, OPTIONS, GET) -> simple_page.show>]
```
무엇보다 모든 블루프린트가 여러 번 마운트되는 것에 제대로 응답하는 것은 아니지만, 여러분은 블루프린트를 여러 번 등록할 수 있다. 사실 블루프린트를 한 번 이상 마운트했을 때 제대로 동작하느냐는 블루프린트를 어떻게 구현했느냐에 달려있다.
## 블루프린트 리소스¶
블루프린트는 리소스 또한 제공할 수 있다. 때때로 여러분은 단지 리소스만을 제공하기 위해 블루프린트를 사용하고 싶을 수도 있다.
### 블루프린트 리소스 폴더¶
보통 어플리케이션처럼, 블루프린트는 폴더안에 포함되도록 고려된다. 다수의 블루프린트이 같은 폴더에서 제공될 수 있지만, 그런 경우가 될 필요도 없고 보통 권고하지 않는다.
폴더는 일반적으로 `Blueprint` 의 두번째 인자인 __name__ 으로부터 유추된다.
이 인자는 어떤 논리적인 파이썬 모듈이나 패키지가 블루프린트과 상응되는지
알려준다. 그것이 실제 파이썬 패키지를 가리킨다면 그 패키지 (파일 시스템의
폴더인) 는 리소스 폴더다. 그것이 모듈이라면, 모듈이 포함되있는 패키지는
리소스 폴더가 될 것이다. 리소스 폴더가 어떤것인지 보기 위해서는 `Blueprint.root_path` 속성에 접근할 수 있다:
```
>>> simple_page.root_path
'/Users/username/TestProject/yourapplication'
```
이 폴더에서 소스를 빨리 열기 위해서 여러분은 `open_resource()` 함수를 사용할 수 있다:
```
with simple_page.open_resource('static/style.css') as f:
code = f.read()
```
### 정적 파일¶
블루프린트는 static_folder 키워드 인자를 통해서 파일시스템에 있는 폴더에 경로를 제공하여 정적 파일을 가진 폴더를 노출할 수 있다. 그것은 절대 경로이거나 블루프린트 폴더에 대해 상대 경로일 수 있다:
```
admin = Blueprint('admin', __name__, static_folder='static')
```
기본값으로 경로의 가장 오른쪽 부분이 웹에 노출되는 곳이다. 폴더는 여기서 `static` 이라고 불리기 때문에 블루프린트 위치 + `static` 으로 될 것이다.
블루프린트이 `/admin` 으로 등록되있다고 하면 정적 폴더는 `/admin/static` 으로 될 것이다.
끝점은 blueprint_name.static 으로 되고 여러분은 어플리케이션의 정적 폴더에 한 것처럼 그 끝점에 대한 URL을 생성할 수 있다:
```
url_for('admin.static', filename='style.css')
```
### 템플릿¶
여러분이 블루프린트이 템플릿을 노출하게 하고 싶다면 `Blueprint` 생성자에
template_folder 인자를 제공하여 노출할 수 있다:
```
admin = Blueprint('admin', __name__, template_folder='templates')
```
정적 파일에 관해서, 그 경로는 절대 경로일 수 있고 블루프린트 리소스 폴더 대비 상대적일 수 있다. 템플릿 폴더는 템플릿 검색경로에 추가되지만 실제 어플리케이션의 템플릿 폴더보다 낮은 우선순위를 갖는다. 그런 방식으로 여러분은 블루프린트이 실제 어플리케이션에서 제공하는 템플릿을 쉽게 오버라이드 할 수 있다.
그러므로
```
yourapplication/admin
```
폴더에 블루프린트이 있고 `'admin/index.html'` 를 뿌려주고 template_folder 로 `templates` 를 제공한다면 여러분은
```
yourapplication/admin/templates/admin/index.html
```
같이 파일을 생성해야
할 것이다.
## URL 만들기¶
하나의 페이지에서 다른 페이지로 링크하고 싶다면 보통 블루프린트명을 접두어로 하고 점 ( `.` ) 으로 URL 끝점을 구분하는 것처럼 `url_for()` 함수를
사용할 수 있다:
```
url_for('admin.index')
```
추가적으로 여러분이 블루프린트의 뷰 함수에 있거나 뿌려진 템플릿에 있고 같은 블루프린트의 다른 끝점으로 링크를 걸고 싶다면, 점을 접두어로 하여 끝점에 붙여서 상대적인 리디렉션을 사용할 수 있다:
`url_for('.index')` 예를 들면 현재 요청을 어떤 다른 블루프린트의 끝점으로 보내는 경우에 `admin.index` 로 링크할 것이다.
Flask 확장기능(extensions)은 서로 다른 다양한 방법으로 Flask 의 기능을 추가 시킬 수 있게 해준다. 예를 들어 확장기능을 이용하여 데이터베이스 그리고 다른 일반적인 태스크들을 지원할 수 있다.
## 확장기능 찾아내기¶
플라스크 확장기능들의 목록은 Flask Extension Registry 에서 살펴볼 수 있다. 그리고 `easy_install` 혹은 `pip` 를 통해 다운로드 할 수 있다.
만약 여러분이 Flask 확장기능을 추가하면, `requirements.rst` 혹은 `setup.py` 에 명시된
파일 의존관계로부터 간단한 명령어의 실행이나 응용 프로그램 설치와 함께 설치될 수 있다.
## 확장기능 사용하기¶
확장기능은 일반적으로 해당 기능을 사용하는 방법을 보여주는 문서를 가지고 있다. 이 확장기능의 작동 방식에 대한 일반적인 규칙은 없지만, 확장기능은 공통된 위치에서 가져오게 된다. 만약 여러분이 `Flask-Foo` 혹은 `Foo-Flask` 라는 확장기능을
호출 하였다면, 그것은 항상 `flask.ext.foo` 을 통해서 가져오게 될 것이다
## Flask 0.8 이전버전의 경우¶
만약 여러분의 Flask가 0.7 버전 혹은 그 이전의 것이라면 `flask.ext` 패키지가
존재하지 않는다. 대신에 여러분은 `flaskext.foo` 혹은 `flask_foo` 등의 형식으로
확장기능이 배포되는 방식에 따라 불러와야만 한다. 만약 여러분이 여전히 Flask 0.7 혹은 이전
이전 버전을 지원하는 어플리케이션을 개발하기 원한다면, 여러분은 여전히 `flask.ext` 패키지를 불러와야만 한다. 우리는 Flask 의 이전 버전의 Flask를 위한 호환성 모듈을 제공하고 있다.
여러분은 github을 통해서 : flaskext_compat.py 를 다운로드 받을 수 있다.
그리고 여기에서 호환성 모듈의 사용 방법을 볼 수 있다:
```
import flaskext_compat
flaskext_compat.activate()

from flask.ext import foo
```
`flaskext_compat` 모듈이 활성화되면 `flask.ext` 가 존재하게 되고, 위처럼 거기서부터 확장기능을 불러올 수 있다.
많은 사람들이 파이썬을 좋아하는 이유 중 한가지는 바로 대화식 쉘이다. 그것은 기본적으로 여러분이 실시간으로 파이썬 명령을 싱행하고 결과를 실시간으로 즉시 받아 볼 수 있다. Flask 자체는 대화식 쉘과 함께 제공되지 않는다. 왜냐하면 Flask는 특정한 선행 작업이 필요하지 않고, 단지 여러분의 어플리케이션에서 불러오기만 하면 시작할 수 있기 때문이다.
하지만, 쉘에서 좀 더 많은 즐거운 경험을 얻을 수 있는 몇 가지 유용한 헬퍼들이 있다. 대화식 콘솔 세션에서의 가장 중요한 문제는 여러분이 직접 브라우저에서 처럼 `g` , `request` 를 발생 시킬 수 없고, 그 밖의 다른 것들도 가능하지 않다. 하지만 테스트 해야 할 코드가 그것들에게 종속관계에 있다면 여러분은 어떻게 할 것인가?
이 장에는 몇가지 도움이 되는 함수가 있다. 이 함수들은 대화식 쉘에서의 사용뿐만 아니라, 단위테스트와 같은 그리고 요청 컨텍스트를 위조해야 하는 상황에서 또한 유용하다는 것을 염두해 두자.
일반적으로 여러분이 요청 컨텍스트 를 먼저 읽기를 권장한다.
## 요청 컨텍스트 생성하기¶
쉘에서 적절한 요청 컨텍스트를 생성하는 가장 쉬운 방법은 `RequestContext` : 를 우리에게 생성해주는 `test_request_context` 메소드를 사용하는 것이다.:
```
>>> ctx = app.test_request_context()
```
일반적으로 여러분은 with 구문을 사용하여 이 요청 객체를 활성화 하겠지만, 쉘에서는 `push()` 와 `pop()` 를
직접 다루는 방법이 더 쉽다: `>>> ctx.push()`
여러분이 pop 을 호출하는 그 시점까지 request 객체를 사용하여 작업 할 수 있다. :
`>>> ctx.pop()`
## 요청하기 전/후에 발사하기(Firing)¶
단지 요청 컨텍스트를 생성만 하면, 일반적으로 요청전에 실행되는 코드를 실행할 필요가 없다. 이것은 만약 여러분이 before-request 콜백에서 데이터베이스 연결을 하거나 , 혹은 현재 사용자가 `g` 객체에 저장하지 않을 경우에 발생할 수 있다.. 그러나 이것은 단지 `preprocess_request()` 를 호출하여 쉽게 수행 할 수 있다:
```
>>> ctx = app.test_request_context()
>>> ctx.push()
>>> app.preprocess_request()
```
다음을 염두 해 두어야 한다. 이 경우에 `preprocess_request()` 함수는 응답(response) 객체를 리턴할 것이고, 이것은 그냥 무시해도 된다. 요청을 종료시키기 위해서, 여러분은 사후 요청 함수 ( `process_response()` 에 의해 트리거되는)가 응답 객체에서 사용 전에 약간의 트릭을 사용해야 한다:
```
>>> app.process_response(app.response_class())
<Response 0 bytes [200 OK]>
>>> ctx.pop()
```
컨텍스트가 열리게 되면 `teardown_request()` 로 등록된 함수가
자동으로 호출된다. 그래서 이것은 자동으로 요청 컨텍스트 (데이터 베이스 연결과 같은)에
필요한 자원을 해제 할 수 있는 완벽한 장소이다..
## 쉘 경험을 더욱 향상시키기¶
여러분이 만약 쉘에서의 실험적인 아이디어를 실험하기 좋아한다면, 본인 스스로 여러분의 대화형 세션으로 불러올 모듈을 만들 수 있다. 여기에 또한 여러분은 데이터베이스를 초기화 하거나 테이블을 삭제하는등의 좀 더 유용한 도우미 메소드등을 정의 할 수 있다.
단지 모듈에 이렇게 삽입하면 된다 (shelltools 처럼 불러온다.) :
```
>>> from shelltools import *
```
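예를 들어 그런 모듈은 대략 아래와 같은 모습일 수 있다. shelltools.py 라는 파일명과 yourapplication.database 의 init_db, db_session 은 이 문서의 다른 예제들을 따라 가정한 것이다:
```
# shelltools.py -- a hypothetical helper module for interactive sessions
from yourapplication import app
from yourapplication.database import init_db, db_session

# push a request context so code that depends on `request` or `g`
# can be exercised directly in the shell
ctx = app.test_request_context()
ctx.push()

def reset_db():
    """Recreate the database schema (assumed helper)."""
    init_db()

def shutdown():
    """Clean up the session and pop the context."""
    db_session.remove()
    ctx.pop()
```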
어떤 것들은 충분히 일반적이어서 여러분이 대부분의 웹 어플리케이션에서 찾을 가능성이 높다. 예를 들면 많은 어플리케이션들이 관계형 데이타베이스와 사용자 인증을 사용한다. 그 경우에, 그 어플리케이션들은 요청 초반에 데이타베이스 연결을 열고 사용자 테이블에서 현재 로그인된 사용자의 정보를 얻을 것이다. 요청의 마지막에는 그 데이타베이스 연결을 다시 닫는다.
플라스크 스니핏 묶음(Flask Snippet Archives) 에 많은 사용자들이 기여한 스니핏과 패턴들이 있다.
더 큰 어플리케이션들의 경우, 모듈 대신 패키지를 사용하는게 좋다.패키지의 사용은 꽤 간단하다.작은 어플리케이션은 아래와 같은 구조로 되어있다고 생각해보자:
```
/yourapplication
/yourapplication.py
/static
/style.css
/templates
layout.html
index.html
login.html
...
```
## 간단한 패키지¶
작은 어플리케이션을 큰 어플리케이션 구조로 변환하기 위해, 단지 기존에 존재하던 폴더에 새 폴더 yourapplication 를 생성하고 그 폴더로 모두 옯긴다. 그리고 나서, yourapplication.py 를 __init__.py 으로 이름을 바꾼다. (먼저 모든 .pyc 을 삭제해야지, 그렇지 않으면 모든 파일이 깨질 가능성이 크다.)
여러분은 아래와 같은 구조를 최종적으로 보게 될 것이다:
```
/yourapplication
/yourapplication
/__init__.py
/static
/style.css
/templates
layout.html
index.html
login.html
...
```
하지만 이 상황에서 여러분은 어떻게 어플리케이션을 실행하는가?
```
python yourapplication/__init__.py
```
와 같은 순진한 명령은 실행되지 않을 것이다.
파이썬은 패키지에 있는 모듈이 스타트업 파일이 되는 것을 원하지 않는다고 해보자. 하지만, 그것은 큰 문제는 아니고, 아래와 같이 단지 runserver.py 라는 새 파일을 루트 폴더 바로 아래 있는 yourapplication 폴더 안에 추가하기만 하면 된다:
```
from yourapplication import app
app.run(debug=True)
```
이렇게 해서 우리는 무엇을 얻었는가? 이제 우리는 이 어플리케이션을 복수 개의 모듈들로 재구조화할 수 있다. 여러분이 기억해야 할 유일한 것은 다음의 체크리스트이다:
* 플라스크 어플리케이션 객체 생성은 __init__.py 파일에서 해야한다. 그런 방식으로 개별 모듈은 안전하게 포함되고 __name__ 변수는 알맞은 패키지로 해석될 것이다.
* 모든 뷰 함수들은(함수의 선언부 위에
`route()` 데코레이터(decorator)를 가진 함수)는 __init__.py 파일에 임포트되어야 하는데, 객체가 아닌 함수가 있는 모듈을 임포트해야한다. 어플리케이션 객체를 생성한 후에 뷰 모듈을 임포트해라.
여기에 __init__.py 파일의 예제가 있다:
```
from flask import Flask
app = Flask(__name__)

import yourapplication.views
```
그리고 아래가 views.py 파일의 예제일 수 있다:
```
from yourapplication import app

@app.route('/')
def index():
    return 'Hello World!'
```
여러분은 최종적으로 아래와 같은 구조를 얻을 것이다:
```
/yourapplication
/runserver.py
/yourapplication
/__init__.py
/views.py
/static
/style.css
/templates
layout.html
index.html
login.html
...
```
순환 임포트(Circular Imports)
모든 파이썬 프로그래머는 순환 임포트를 싫어하지만, 우리는 방금 일부를 추가했다: 순환 임포트란 두 모듈이 서로 의존 관계가 있는 경우이다 (위 경우 views.py 는 __init__.py 에 의존한다). 이런 방식은 일반적으로 나쁘지만, 이 경우는 실제로 괜찮다. 왜냐하면 __init__.py 에 있는 뷰들을 실제로 사용하지 않고 단지 모듈이 임포트되었는지만 보장하며, 그 임포트를 파일의 제일 하단에서 하기 때문이다.
이런 접근법에도 일부 문제가 남아있지만 여러분이 데코레이터(decorator)를 사용하고 싶다면 문제를 피할 방도는 없다. 그것을 다루는 방법에 대한 몇 가지 영감을 얻으려면 크게 만들기 단락을 확인해라.
## 청사진(Blueprints)으로 작업하기¶
여러분이 더 큰 규모의 어플리케이션을 갖고 있다면, 그것들을 청사진으로 구현된 더 작은 그룹으로 나누는 것을 추천한다. 이 주제에 대한 가벼운 소개는 이 문서의 블루프린트를 가진 모듈화된 어플리케이션 장을 참고해라.
여러분이 어플리케이션에 이미 패키지들과 청사진들을 사용한다면(블루프린트를 가진 모듈화된 어플리케이션) 그 경험들을 좀 더 개선할 몇 가지 정말 좋은 방법들이 있다. 일반적인 패턴은 청사진을 임포트할 때 어플리케이션 객체를 생성하는 것이다. 하지만 여러분이 이 객체의 생성을 함수로 옮긴다면, 나중에 이 객체에 대한 복수 개의 인스턴스를 생성할 수 있다.
그래서 여러분은 왜 이렇게 하고 싶은 것인가?
* 테스팅. 여러분은 모든 케이스를 테스트하기 위해 여러 설정을 가진 어플리케이션 인스턴스들을 가질수 있다.
* 복수 개의 인스턴스. 여러분이 같은 어플리케이션의 여러 다른 버전을 실행하고 싶다고 가정하자. 물론 웹서버에 여러 다른 설정을 가진 복수 개의 인스턴스를 가질 수도 있지만, 팩토리를 사용한다면 간편하게 같은 어플리케이션 프로세스에서 동작하는 복수 개의 인스턴스를 가질 수 있다.
그렇다면 어떻게 여러분은 실제로 그것을 구현할 것인가?
## 기본 팩토리¶
이 방식은 함수 안에 어플리케이션을 설정하는 방법이다:
```
def create_app(config_filename):
app = Flask(__name__)
app.config.from_pyfile(config_filename)
from yourapplication.views.admin import admin
from yourapplication.views.frontend import frontend
app.register_blueprint(admin)
app.register_blueprint(frontend)
return app
```
이 방식의 단점은 여러분은 임포트하는 시점에 청사진 안에 있는 어플리케이션 객체를 사용할 수 없다. 그러나 여러분은 요청 안에서 어플리케이션 객체를 사용할 수 있다. 어떻게 여러분이 설정을 갖고 있는 어플리케이션에 접근하는가? `current_app` 을 사용하면 된다:
```
from flask import current_app, Blueprint, render_template
admin = Blueprint('admin', __name__, url_prefix='/admin')
@admin.route('/')
def index():
return render_template(current_app.config['INDEX_TEMPLATE'])
```
여기에서 우리는 설정에 있는 템플릿 이름을 찾아낸다.
## 어플리케이션(Application) 사용하기¶
그렇다면 그런 어플리케이션을 사용하기 위해서 어려분은 먼저 어플리케이션을 생성해야한다. 아래는 그런 어플리케이션을 실행하는 run.py 파일이다:
```
from yourapplication import create_app
app = create_app('/path/to/config.cfg')
app.run()
```
## 팩토리 개선¶
위에서 팩토리 함수는 지금까지는 그다지 똑똑하지 않았지만, 여러분은 개선할 수 있다. 다음 변경들은 간단하고 가능성이 있다:
* 여러분이 파일시스템에 설정 파일을 만들지 않아도 되도록, 유닛테스트를 위해 설정값을 직접 전달할 수 있게 만들어라 (아래 스케치 참고).
* 여러분이 어플리케이션의 속성들을 변경할 곳을 갖기 위해 어플리케이션이 셋업될 때 청사진에서 함수를 호출해라. (요청 핸들러의 앞/뒤로 가로채는 것 처럼)
* 필요하다면 어플리케이션이 생성될 때, WSGI 미들웨어에 추가해라.
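예를 들어 위 목록의 첫 번째 개선(테스트를 위해 설정값을 직접 전달)은 대략 아래와 같이 스케치할 수 있다. config_object 인자와 'yourapplication.default_config' 라는 기본 설정 모듈 이름은 설명을 위한 가정이다:
```
from flask import Flask

def create_app(config_object=None):
    app = Flask(__name__)
    # load an assumed default configuration module first
    app.config.from_object('yourapplication.default_config')
    # for unit tests, pass a config object directly instead of a file on disk
    if config_object is not None:
        app.config.from_object(config_object)

    from yourapplication.views.admin import admin
    from yourapplication.views.frontend import frontend
    app.register_blueprint(admin)
    app.register_blueprint(frontend)
    return app
```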
어플리케이션 디스패칭은 WSGI 레벨에서 복수의 플라스크 어플리케이션들을 결합하는 과정이다. 여러분은 플라스크 어플리케이션들을 더 큰 것으로 만들수 있을뿐 아니라 어떤 WSGI 어플리케이션으로도 결합할 수 있다. 이것은 여러분이 원하다면 심지어 같은 인터프리터에서 장고(Django) 와 플라스크 어플리케이션을 바로 옆에서 실행할 수 있게해준다. 이 유용함 어플리케이션들이 내부적으로 동작하는 방식에 의존한다.
module approach 과 근본적인 차이점은 이 경우에 여러분은 서로 완전히 분리된 동일하거나 다른 플라스크 어플리케이션들을 실행한다는 것이다. 그것들은 다른 설정에서 실행되고 WSGI 레벨에서 디스패치된다.
## 이 문서를 가지고 작업하기¶
아래의 각 기법들과 예제들은 어떤 WSGI 서버에서 실행가능한 `application` 객체이다.
운영 환경의 경우, 배포 옵션 를 살펴봐라.
개발 환경의 경우, 베르크쭉(Werkzeug)은 `werkzeug.serving.run_simple()` 에 개발용 빌트인 서버를 제공한다:
```
from werkzeug.serving import run_simple
run_simple('localhost', 5000, application, use_reloader=True)
```
`run_simple` 은 운영환경에서 사용하도록 의도된 것이 아니다.
운영환경에는 제대로 된 기능을 갖춘 서버(full-blown WSGI server)를 사용해라. 대화형(interactive) 디버거를 사용하기 위해서는, 디버깅이 어플리케이션과 심플 서버(simple server) 양쪽에서 활성화되어 있어야 한다. 아래는 디버깅과 `run_simple` 를 사용한
“hello world” 예제이다:
```
from flask import Flask
from werkzeug.serving import run_simple
app = Flask(__name__)
app.debug = True
if __name__ == '__main__':
run_simple('localhost', 5000, app,
use_reloader=True, use_debugger=True, use_evalex=True)
```
## 어플리케이션 결합하기¶
여러분이 완전하게 분리된 어플리케이션들을 갖고 있고 그것들이 동일한 파이썬 프로세스 위의 바로 옆에서 동작하기를 원한다면, `werkzeug.wsgi.DispatcherMiddleware` 를 이용할 수 있다.
그 방식은 각 플라스크 어플리케이션이 유효한 WSGI 어플리케이션이고 디스패처 미들웨어에 의해
URL 접두어(prefix)에 기반해서 디스패치되는 하나의 더 커다란 어플리케이션으로 결합되는 것이다.
예를 들면, 여러분의 주(main) 어플리케이션을 ‘/’에 두고 백엔드 인터페이스는 `/backend`에 둘 수 있다:
```
from werkzeug.wsgi import DispatcherMiddleware
from frontend_app import application as frontend
from backend_app import application as backend
application = DispatcherMiddleware(frontend, {
'/backend': backend
})
```
## 하위도메인(subdomain)으로 디스패치하기¶
여러분은 때때로 다른 구성으로 같은 어플리케이션에 대한 복수 개의 인스턴스를 사용하고 싶을 때가 있을 것이다. 그 어플리케이션이 어떤 함수 안에서 생성됐고 여러분이 그 어플리케이션을 인스턴스화 하기위해 그 함수를 호출할 수 있다고 가정하면, 그런 방식은 굉장히 구현하기 쉽다. 함수로 새 인스턴스를 생성을 지원하도록 어플리케이션을 개발하기 위해서는 어플리케이션 팩토리 패턴을 살펴보도록 해라.
매우 일반적인 예제는 하위도메인 별로 어플리케이션을 생성하는 것이다. 예를 들면, 여러분은 어플리케이션의 모든 하위도메인에 대한 모든 요청을 디스패치 하도록 웹서버를 구성하고 사용자 지정 인스턴스를 생성하기 위해 하위도메인 정보를 사용한다. 일단 여러분의 웹서버가 모든 하위도메인의 요청을 받아들이도록(listen) 설정하면, 여러분은 동적인 어플리케이션 생성을 할 수있는 매우 간단한 WSGI 어플리케이션을 사용할 수 있다.
이 관점에서 추상화의 최적의 레벨은 WSGI 계층이다. 여러분은 들어오는 요청을 보고 그것을 여러분의 플라스크 어플리케이션으로 위임하는 여러분 자신만의 WSGI 어플리케이션을 작성할 수 있다. 그 어플리케이션이 아직 존재하지 않는다면, 그것은 동적으로 생성되고 기억된다:
```
from threading import Lock
class SubdomainDispatcher(object):
def __init__(self, domain, create_app):
self.domain = domain
self.create_app = create_app
self.lock = Lock()
self.instances = {}
def get_application(self, host):
host = host.split(':')[0]
assert host.endswith(self.domain), 'Configuration error'
subdomain = host[:-len(self.domain)].rstrip('.')
with self.lock:
app = self.instances.get(subdomain)
if app is None:
app = self.create_app(subdomain)
self.instances[subdomain] = app
return app
def __call__(self, environ, start_response):
app = self.get_application(environ['HTTP_HOST'])
return app(environ, start_response)
```
그리고나서 이 디스패쳐는 아래와 같이 사용될 수 있다:
```
from myapplication import create_app, get_user_for_subdomain
from werkzeug.exceptions import NotFound
def make_app(subdomain):
user = get_user_for_subdomain(subdomain)
if user is None:
# if there is no user for that subdomain we still have
# to return a WSGI application that handles that request.
# We can then just return the NotFound() exception as
# application which will render a default 404 page.
# You might also redirect the user to the main page then
return NotFound()
# otherwise create the application for the specific user
return create_app(user)
application = SubdomainDispatcher('example.com', make_app)
```
## 경로로 디스패치하기¶
URL 경로로 디스패치하는 것도 하위도메인과 굉장히 유사하다. 하위도메인 헤더를 확인하기 위해 Host 헤더를 보는 것 대신, 간단히 첫 번째 슬래쉬(/)까지의 요청 경로를 보는 것이다:
```
from threading import Lock
from werkzeug.wsgi import pop_path_info, peek_path_info
class PathDispatcher(object):
def __init__(self, default_app, create_app):
self.default_app = default_app
self.create_app = create_app
self.lock = Lock()
self.instances = {}
def get_application(self, prefix):
with self.lock:
app = self.instances.get(prefix)
if app is None:
app = self.create_app(prefix)
if app is not None:
self.instances[prefix] = app
return app
def __call__(self, environ, start_response):
app = self.get_application(peek_path_info(environ))
if app is not None:
pop_path_info(environ)
else:
app = self.default_app
return app(environ, start_response)
```
경로와 하위도메인 디스패치의 큰 차이점은 경로 디스패치는 생성 함수가 None 을 반환하면 다른 어플리케이션으로 넘어갈 수 있다는 것이다:
```
from myapplication import create_app, default_app, get_user_for_prefix
def make_app(prefix):
user = get_user_for_prefix(prefix)
if user is not None:
return create_app(user)
application = PathDispatcher(default_app, make_app)
```
플라스크 0.7 은 URL 프로세서의 개념을 소개한다. 이 방식은 여러분이 항상 명시적으로 제공하기를 원하지 않는 URL의 공통 부분을 포함하는 여러 리소스를 가질 수 있다는 것이다. 예를 들면, 여러분은 URL 안에 언어 코드를 갖지만 모든 개별 함수에서 그것을 처리하고 싶지않은 다수의 URL을 가질 수 있다.
청사진과 결합될 때, URL 프로세서는 특히나 도움이 된다. 청사진이 지정된 URL 프로세서 뿐만아니라 어플리케이션이 지정된 URL 프로세서 둘 다 다룰 것이다.
## 국제화된 어플리케이션 URL¶
아래와 같은 어플리케이션을 고려해보자:
```
from flask import Flask, g

app = Flask(__name__)

@app.route('/<lang_code>/')
def index(lang_code):
g.lang_code = lang_code
...
@app.route('/<lang_code>/about')
def about(lang_code):
g.lang_code = lang_code
...
```
이것은 여러분이 모든 개별 함수마다 `g` 객체에 언어 코드 설정을
처리해야하기 때문에 엄청나게 많은 반복이다. 물론, 데코레이터가 이런 작업을
간단하게 만들어 줄 수 있지만, 여러분이 하나의 함수에서 다른 함수로 URL을
생성하고 싶다면, 여러분은 언어 코드를 명시적으로 제공하는 성가신 작업을
여전히 해야한다. 후자의 경우, `url_defaults()` 가 관여하는 곳 이다.
그것들은 자동으로 `url_for()` 호출에 대해 값을 주입한다.
아래의 코드는 언어 코드가 URL 딕셔너리에는 아직 없는지와 끝점(endpoint)가 `'lang_code'` 라는 변수의 값을 원하는지를 확인한다:
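대략적인 스케치는 다음과 같다 (실제 구현 형태는 조금 다를 수 있다):
```
@app.url_defaults
def add_language_code(endpoint, values):
    if 'lang_code' in values or not getattr(g, 'lang_code', None):
        return
    if app.url_map.is_endpoint_expecting(endpoint, 'lang_code'):
        values['lang_code'] = g.lang_code
```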
URL 맵의 `is_endpoint_expecting()` 메소드는 주어진 끝점에 대해 언어 코드를 제공하는 것이 적합한지 확인하는데 사용된다. 위의 함수와 반대되는 함수로는 `url_value_preprocessor()` 가 있다. 그것들은 요청이 URL과 매치된 후에 바로 실행되고 URL 값에 기반한 코드를 실행할 수 있다.
이 방식은 그 함수들이 딕셔너리로 부터 값을 꺼내고 다른 곳에 그 값을 넣는 것이다:
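이 역시 대략 아래와 같은 형태가 될 수 있다:
```
@app.url_value_preprocessor
def pull_lang_code(endpoint, values):
    g.lang_code = values.pop('lang_code', None)
```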
이 방식으로 여러분은 더 이상 모든 함수에서 `g` 에 lang_code 를
할당하지 않아도 된다. 여러분은 언어 코드를 URL의 접두어로 만드는 데코레이터를 작성하여
좀 더 개선할 수 있지만, 더 아름다운 해결책은 청사진(blueprint)을 사용하는 것이다.
일단 `'lang_code'` 가 URL 값의 딕셔너리에서 꺼내지면 그 값은 아래와 같이 코드가 줄어든
뷰 함수로 더 이상 넘어가지 않는다:
```
@app.route('/<lang_code>/')
def index():
...
@app.route('/<lang_code>/about')
def about():
...
```
## 국제화된 청사진 URL¶
청사진은 자동으로 공통 문자열을 모든 URL에 접두어화 시킬 수 있기 때문에 모든 함수에 자동으로 그 값을 처리한다. 게다가, 청사진은 청사진 별로 `url_defaults()` 함수에서 많은 로직을 제거하는
URL 프로세서를 가질 수 있는데, 왜냐하면 청사진은 더 이상 URL이
진짜 `'lang_code'` 에 관심이 있는지 확인하지 않아도 되기 때문이다:
```
from flask import Blueprint, g

bp = Blueprint('frontend', __name__, url_prefix='/<lang_code>')
@bp.url_defaults
def add_language_code(endpoint, values):
values.setdefault('lang_code', g.lang_code)
@bp.url_value_preprocessor
def pull_lang_code(endpoint, values):
g.lang_code = values.pop('lang_code')
@bp.route('/')
def index():
...
@bp.route('/about')
def about():
...
```
예전에 설치툴(setuptool)이었던 distribute 가 지금은 파이썬 라이브러리와 확장(extentions)을 배포(툴의 이름처럼)하는데 일반적으로 사용되는 확장 라이브러리이다. 그것은 더 규모있는 어플리케이션의 배포를 쉽게 만드는 여러 더 복잡한 생성자 또한 지원하기 위해 파이썬과 같이 설치되는 disutils를 확장한 것이다:
* 의존성 지원(support for dependencies): 라이브러리나 어플리케이션은 여러분을 위해 자동으로 설치되어야 할 그것에 의존적인 다른 라이브러리들의 목록을 선언할 수 있다.
* 패키지 레지스트리(package registry): 설치툴은 파이썬 설치와 함께 여러분의 패키지를 등록한다. 이것은 하나의 패키지에서 제공되는 정보를 다른 패키지에서 질의하는 것을 가능하게 한다. 이 시스템의 가장 잘 알려진 특징은 패키지가 다른 패키지를 확장하기 위해 끼어들 수 있는 “진입점(entry point)”을 선언할 수 있도록 하는 진입점을 지원하다는 것이다.
* 설치 관리자(installation manager): distribute와 함께 설치되는 easy_install 은 여러분을 위해 다른 라이브러리를 설치해준다. 여러분은 또한 조만간 패키지 설치 이상의 기능을 제공하는 easy_install 을 대체할 pip 을 사용할 수 있다.
플라스크 그 자체와 cheeseshop(파이썬 라이브러리 인덱스)에서 찾을 수 있는 모든 라이브러리들은 distribute, the older setuptools 또는 distutils 중 한 가지로 배포된다.
이 경우, 여러분은 여러분의 어플리케이션이 yourapplication.py 이고 모듈을 사용하지 않지만 package 를 사용한다고 가정하자. 표준 모듈들을 가진 리소스들을 배포하는 것은 distribute 이 지원하지 않지만, 우리는 그것을 신경쓰지 않을 것이다. 여러분이 아직 어플리케이션을 패키지로 변환하지 않았다면, 어떻게 변환될 수 있는지 보기 위해 더 큰 어플케이션들 패턴으로 돌아가길 바란다.
distribute를 가지고 배포하는 것은 좀 더 복잡하고 자동화된 배포 시나리오로 들어가는 첫 단계이다. 만약 여러분이 배포 프로세스를 완전히 자동화하고 싶다면, Fabric으로 전개하기 장 또한 읽어야한다.
## 기본 설치 스크립트¶
여러분은 플라스크를 실행시키고 있기 때문에, 어쨌든 여러분의 시스템에는 setuptools나 distribute를 사용할 수 있을 것이다. 만약 사용할 수 없다면, 두려워하지 말기 바란다. 여러분을 위해 그런 설치도구를 설치해 줄 distribute_setup.py 이라는 스크립트가 있다. 단지 다운받아서 파이썬으로 실행하면 된다.
표준적인 당부 사항이 여기에도 적용된다: virtualenv를 사용하는 것이 낫다.
여러분의 셋업 코드는 항상 `setup.py`이라는 파일명으로 여러분의 어플리케이션 옆에 놓인다. 그 파일명은 단지 관례일뿐이지만, 모두가 그 이름으로 파일을 찾을 것이기 때문에 다른 이름으로 바꾸지 않는게 좋을 것이다.
그렇다, 여러분은 `distribute` 를 사용하고 있음에도 불구하고 `setuptools` 라 불리는 패키지를 임포트하고 있다. `distribute` 는 `setuptools` 와 완전하게 하위 호환되므로 임포트 이름도 그대로 사용한다.
플라스크 어플리케이션에 대한 기본적인 setup.py 파일은 아래와 같다:
```
from setuptools import setup
setup(
name='Your Application',
version='1.0',
long_description=__doc__,
packages=['yourapplication'],
include_package_data=True,
zip_safe=False,
install_requires=['Flask']
)
```
여러분은 명시적으로 하위 패키지들을 나열해야만 한다는 것을 명심해야한다. 여러분이 자동적으로 distribute가 패키지명을 찾아보기를 원한다면, find_packages 함수를 사용할 수 있다:
```
from setuptools import setup, find_packages
setup(
...
packages=find_packages()
)
```
include_package_data 과 zip_safe 은 아닐수도 있지만, setup 함수의 대부분 인자들은 스스로 설명이 가능해야 한다. include_package_data 은 distribute 에게 MANIFEST.in 파일을 찾고 패키지 데이타로서 일치하는 모든 항목을 설치하도록 요청한다. 우리들은 파이썬 모듈과 같이 정적 파일들과 템플릿들을 배포하기 위해 이것을 사용할 것이다.(see 리소스 배포하기). zip_safe 플래그는 zip 아카이브의 생성을 강제하거나 막기위해 사용될 수 있다. 일반적으로 여러분은 아마 여러분의 패키지들이 zip 파일로 설치되기를 원하지는 않을 것인데, 왜냐하면 어떤 도구들은 그것들을 지원하지 않으며 디버깅을 훨씬 더 어렵게 한다.
## 리소스 배포하기¶
여러분이 방금 생성한 패키지를 설치하려고 한다면, static 이나 templates 같은 폴더들이 설치되지 않았다는 것을 알게 될 것이다. 왜냐하면 distribute 은 어떤 파일을 추가해야 하는지 모르기 때문이다. 여러분이 해야 하는 것은 setup.py 파일 옆에 MANIFEST.in 파일을 생성하는 것이다. 이 파일은 여러분의 타르볼(tarball)에 추가되어야 하는 모든 파일들을 나열한다:
```
recursive-include yourapplication/templates *
recursive-include yourapplication/static *
```
여러분이 MANIFEST.in 파일에 그 목록들을 요청함에도 불구하고, setup 함수의 include_package_data 인자가 True 로 설정되지 않는다면, 그것들은 설치되지 않을 것이라는 것을 잊지 말도록 해라.
## 의존성 선언하기¶
의존성은 install_requires 인자에 리스트로 선언된다. 그 리스트에 있는 각 항목은 설치 시 PyPI로 부터 당겨져야 하는 패키지 명이다. 디폴트로 항상 최신 버전을 사용하지만, 여러분은 또한 최소 버전과 최대 버전에 대한 요구사항을 제공할 수 있다. 아래에 예가 있다:
```
install_requires=[
'Flask>=0.2',
'SQLAlchemy>=0.6',
'BrokenPackage>=0.7,<=1.0'
]
```
앞에서 의존성은 PyPI로부터 당겨진다고 언급했다. 다른 사람과 공유하고 싶지 않은 내부 패키지기 때문에 PyPI에서 찾을 수 없고 찾지도 못하는 패키지에 의존하고 싶다면 어떻게 되는가? 여전히 PyPI 목록이 있는 것 처럼 처리하고 distribute 가 타르볼을 찾아야할 다른 장소의 목록을 제공하면 된다:
```
dependency_links=['http://example.com/yourfiles']
```
페이지가 디렉토리 목록를 갖고 있고 그 페이지의 링크는 distribute가 파일들을 찾는 방법처럼 실제 타르볼을 가리키도록 해야한다. 만약 여러분이 회사의 내부 서버에 패키지를 갖고 있다면, 그 서버에 대한 URL을 제공하도록 해라.
## 설치하기/개발하기¶
여러분의 어플리케이션을 설치하는 것은(이상적으로는 virtualenv를 이용해서) 단지 install 인자로 `setup.py`를 실행하기만 하면 된다. 그것은 여러분의 어플리케이션을 virtualenv의 사이트 패키지(site-packages) 폴더로 설치되고 또한 모든 의존성을 갖고 받아지고 설치될 것이다:
```
$ python setup.py install
```
만약 어려분이 패키지 기반으로 개발하고 있고 또한 패키지 기반에 대한 필수 항목이 설치되어야 한다면, develop 명령을 대신 사용할 수 있다:
```
$ python setup.py develop
```
이것의 이점은 데이타를 복사하는 것이 아니라 사이트 패키지 폴더에 대한 링크를 설치한다는 것이다. 그러면 여러분은 개별 변경 후에도 다시 install 을 실행할 필요없이 계속해서 코드에 대한 작업을 할 수 있다.
Fabric Makefiles과 유사하지만 원격 서버에 있는 명령을 실행할 수 있는 기능도 갖고 있는 파이썬 도구이다. 적당한 파이썬 설치 패키지 (더 큰 어플케이션들) 와 설정 (설정 다루기)에 대한 좋은 개념의 결합은 플라스크 어플리케이션을 외부 서버에 상당히 쉽게 전개하도록 해준다.
시작하기에 앞서, 우리가 사전에 빠르게 확인해야할 체크리스트가 있다:
* Fabric 1.0 은 로컬에 설치되어 있어야한다. 이 문서는 Fabric의 가장 최신버전을 가정한다.
* 어플리케이션은 이미 패키지로 되어있고 동작하는 setup.py 파일을 요구한다 (Distribute으로 전개하기).
* 뒤따르는 예제에서 우리는 원격 서버에 mod_wsgi 를 사용할 것이다. 물론 여러분이 좋아하는 서버를 사용할 수 있겠지만, 이 예제에서는 Apache + mod_wsgi 를 사용하기로 했는데, 그 방식이 설치가 쉽고 root 권한 없이도 어플리케이션을 간단하게 리로드할 수 있기 때문이다.
## 첫 번째 Fabfile 파일 생성하기¶
fabfile 은 Fabric이 실행할 대상을 제어하는 것이다. fabfile은 fabfile.py 라는 파일명을 갖고, fab 명령으로 실행된다. 그 파일에 정의된 모든 기능들은 fab 하위명령(subcommands)가 보여준다. 그 명령들은 하나 이상의 호스트에서 실행된다. 이 호스트들은 fabfile 파일이나 명령줄에서 정의될 수 있다. 여기에서는 fabfile 파일에 호스트들을 추가할 것이다.
아래는 현재 소스코드를 서버로 업로드하고 사전에 만들어진 가상 환경에 설치하는 기능을 하는 기본적인 첫 번째 예제이다:
```
from fabric.api import *
# the user to use for the remote commands
env.user = 'appuser'
# the servers where the commands are executed
env.hosts = ['server1.example.com', 'server2.example.com']
def pack():
# create a new source distribution as tarball
local('python setup.py sdist --formats=gztar', capture=False)
def deploy():
# figure out the release name and version
dist = local('python setup.py --fullname', capture=True).strip()
# upload the source tarball to the temporary folder on the server
put('dist/%s.tar.gz' % dist, '/tmp/yourapplication.tar.gz')
# create a place where we can unzip the tarball, then enter
# that directory and unzip it
run('mkdir /tmp/yourapplication')
with cd('/tmp/yourapplication'):
run('tar xzf /tmp/yourapplication.tar.gz')
# now setup the package with our virtual environment's
# python interpreter
run('/var/www/yourapplication/env/bin/python setup.py install')
# now that all is set up, delete the folder again
run('rm -rf /tmp/yourapplication /tmp/yourapplication.tar.gz')
# and finally touch the .wsgi file so that mod_wsgi triggers
# a reload of the application
run('touch /var/www/yourapplication.wsgi')
```
위의 예제는 문서화가 잘 되고 있고 직관적일 것이다. 아래는 fabric이 제공하는 가장 일반적인 명령들을 요약했다:
* run - 원격 서버에서 명령을 수행함
* local - 로컬 서버에서 명령을 수행함
* put - 원격 서버로 파일을 업로드함
* cd - 서버에서 디렉토리를 변경함.
이 명령은 with 절과 결합되어 사용되어야 한다.
## Fabfile 실행하기¶
이제 여러분은 어떻게 그 fabfile을 실행할 것인가? fab 명령을 사용한다. 원격 서버에 있는 현재 버전의 코드를 전개하기 위해서 여러분은 아래 명령을 사용할 것이다:
`$ fab pack deploy` 하지만 이것은 서버에 이미 `/var/www/yourapplication` 폴더가 생성되어 있고 `/var/www/yourapplication/env` 을 가상 환경으로 갖고 있는 것을 요구한다.
더욱이 우리는 서버에 구성 파일이나 .wsgi 파일을 생성하지 않았다. 그렇다면
우리는 어떻게 신규 서버에 우리의 기반구조를 만들 수 있을까?
이것은 우리가 설치하고자 하는 서버의 개수에 달려있다. 우리가 단지 한 개의 어플리케이션 서버를 갖고 있다면 (다수의 어플리케이션들이 포함된), fabfile 에서 명령을 생성하는 것은 과한 것이다. 그러나 명백하게 여러분은 그것을 할 수 있다. 이 경우에 여러분은 아마 그것을 setup 이나 bootstrap 으로 호출하고 그 다음에 명령줄에 명시적으로 서버명을 넣을 것이다:
```
$ fab -H newserver.example.com bootstrap
```
신규 서버를 설치하기 위해서 여러분은 대략 다음과 같은 단계를 수행할 것이다:
* `/var/www` 에 디렉토리 구조를 생성한다:
```
$ mkdir /var/www/yourapplication
$ cd /var/www/yourapplication
$ virtualenv --distribute env
```
* 새로운 application.wsgi 파일과 설정 파일 (예: application.cfg) 을 서버로 업로드한다.
* yourapplication 에 대한 Apache 설정을 생성하고 활성화한다. .wsgi 파일을 변경(touch)하면 어플리케이션이 자동으로 리로드되도록 그 파일의 변경에 대한 감시를 활성화했는지 확인한다. ( mod_wsgi (아파치) 에 더 많은 정보가 있다)
자 그렇다면 application.wsgi 파일과 application.cfg 파일은 어디서 왔을까? 라는 질문이 나온다.
## WSGI 파일¶
WSGI 파일은 어플리케이션이 설정파일을 어디서 찾아야 하는지 알기 위해 어플리케이션을 임포트해야하고 또한 환경 변수를 설정해야한다. 아래는 정확히 그 설정을 하는 짧은 예제이다:
```
import os
os.environ['YOURAPPLICATION_CONFIG'] = '/var/www/yourapplication/application.cfg'
from yourapplication import app
```
그리고 나서 어플리케이션 그 자체는 그 환경 변수에 대한 설정을 찾기 위해 아래와 같은 방식으로 초기화 해야한다:
```
app = Flask(__name__)
app.config.from_object('yourapplication.default_config')
app.config.from_envvar('YOURAPPLICATION_CONFIG')
```
이 접근법은 이 문서의 설정 다루기 단락에 자세히 설명되어 있다.
위에서 언급한 것 처럼, 어플리케이션은 YOURAPPLICATION_CONFIG 환경 변수 를 찾음으로서 올바른 설정 파일을 찾을 것이다. 그래서 우리들은 어플리케이션이 그 변수를 찾을 수 있는 곳에 그 설정을 넣어야만 한다. 설정 파일들은 모든 컴퓨터에서 여러 다른 상태를 갖을수 있는 불친절한 특징을 갖기 때문에 보통은 설정 파일들을 버전화하지 않는다.
많이 사용되는 접근법은 분리된 버전 관리 저장소에 여러 다른 서버에 대한 설정 파일들을 보관하고 모든 서버에서 그것들을 받아가는(check-out) 것이다. 그리고 나서 어떤 서버에서 사용 중인 설정 파일은, 그 파일이 있을 것으로 기대되는 위치로 심볼릭 링크를 건다.
다른 방법으로는, 여기에서 우리는 단지 하나 또는 두 개의 서버만 가정했으므로 수동으로 그것들을 미리 업로드할 수도 있다.
## 첫 번째 전개¶
이제 여러분은 첫 번째 전개를 할 수 있게 되었다. 우리는 서버가 가상환경과 활성화된 apache 설정들을 갖게하기 위해 그 서버들을 설치한다. 우리는 아래의 명령을 이용해서 어플리케이션을 감싸서 전개할 수 있다:
`$ fab pack deploy`
Fabric은 이제 모든 서버에 연결될 것이고 fabfile에 적힌 명령들을 수행할 것이다. 먼저 타르볼을 준비하기위해 pack을 실행할 것이고 그리고 나서 deploy를 실행해서 모든 서버에 소스코드를 업로드하고 하고 거기에 그것을 설치할 것이다. 자동으로 필수 라이브러리들과 받아서 우리의 가상 환경으로 넣어주는 setup.py 파일에 감사할 뿐이다.
## 다음 단계들¶
이 시점부터는 실제로 전개를 재밌게 만들 수 있는 많은 것들이 있다:
* 신규 서버를 초기화하는 bootstrap 명령을 생성한다. 그것은 새로운 가상 환경을 초기화하고 알맞게 apache를 설치 등을 할 수 있다.
* 설정 파일들을 분리된 버전 관리 저장소에 넣고 활성화된 설정들에 대해 지정된 위치로 심볼릭 링크를 생성한다.
* 여러분의 어플리케이션 코드 또한 저장소에 넣을 수 있고 서버에 가장 최신 버전을 받아 설치할 수 있다. 그런 방식으로 이전 버전들로 쉽게 돌아갈 수도 있다.
* 외부 서버에 전개하고 테스트묶음을 실행할 수 있도록 테스팅 기능을 끼워 넣는다.
Fabric을 가지고 작업하는 것이 재미있다면 여러분은 `fab deploy` 을 입력하는 것이
상당히 마법같다는 것을 알 것이고 하나 이상의 원격 서버에 자동으로 여러분의
어플리케이션이 전개되는 것을 볼 수 있을 것이다.
여러분은 Flask에서 필요할 때 데이타베이스 연결을 열고 문맥이 끝났을 때 (보통 요청의 끝에서) 연결을 닫는 것을 쉽게 구현할 수 있다:
```
import sqlite3
from flask import Flask, _app_ctx_stack

app = Flask(__name__)

DATABASE = '/path/to/database.db'
def get_db():
top = _app_ctx_stack.top
if not hasattr(top, 'sqlite_db'):
top.sqlite_db = sqlite3.connect(DATABASE)
return top.sqlite_db
@app.teardown_appcontext
def close_connection(exception):
top = _app_ctx_stack.top
if hasattr(top, 'sqlite_db'):
top.sqlite_db.close()
```
데이타베이스를 지금 사용하기 위해서 어플리케이션에서 필요한 전부는 활성화된 어플리케이션 문맥을 갖거나 (전달중인 요청이 있다면 항상 활성화 된 것이다) 어플리케이션 문맥 자체를 생성하는 것이다. 그 시점에 `get_db` 함수는 현재
데이타베이스 연결을 얻기위해 사용될 수 있다. 그 문맥이 소멸될 때마다
그 데이타베이스 연결은 종료될 것이다.
예제:
```
@app.route('/')
def index():
cur = get_db().cursor()
...
```
주석
before-request 핸들러가 실패하거나 절대 실행되지 않더라도, teardown 요청과 appcontext 함수는 항상 실행되다는 것을 명심하길 바란다. 그런 이유로 우리가 데이타베이스를 닫기전에 거기에 데이타베이스가 있었다는 것을 여기서 보장해야한다.
## 필요할 때 연결하기¶
이 접근법의 장점은 (첫 사용시 연결하는 것) 정말 필요할 때만 연결이 열린다는 것이다. 여러분이 요청 문맥 밖에서 이 코드를 사용하고 싶다면 파이썬 쉘에서 수동으로 어플리케이션 문맥을 열고서 그 연결을 사용할 수 있다:
```
with app.app_context():
# now you can use get_db()
```
## 쉬운 질의하기¶
이제 각 요청 핸들링 함수에서 현재 열린 데이타베이스 연결을 얻기 위해 여러분은 g.db 에 접근할 수 있다. SQLite 로 작업을 간단하게 하기 위해, 행(row) 팩토리 함수가 유용하다. 결과를 변환하기 위해 데이타베이스에서 반환된 모든 결과에 대해 실행된다. 예를 들면 튜플 대신 딕셔너리를 얻기 위해 아래와 같이 사용될 수 있다:
```
def make_dicts(cursor, row):
return dict((cursor.description[idx][0], value)
for idx, value in enumerate(row))
db.row_factory = make_dicts
```
덧붙이자면 커서를 얻고, 결과를 실행하고 꺼내는 것을 결합한 질의 함수를 제공하는 것은 괜찮은 생각이다:
```
def query_db(query, args=(), one=False):
cur = get_db().execute(query, args)
rv = cur.fetchall()
cur.close()
return (rv[0] if rv else None) if one else rv
```
이 유용한 작은 함수는 행 팩토리와 결합되어 데이타베이스와 작업을 단지 원형의 커서와 연결 객체를 사용하는 것 보다 훨씬 더 기분 좋게 만든다.
아래에 그것을 사용하는 방법이 있다:
```
for user in query_db('select * from users'):
print user['username'], 'has the id', user['user_id']
```
또는 여러분이 단지 단일 결과를 원한다면:
```
user = query_db('select * from users where username = ?',
[the_username], one=True)
if user is None:
print 'No such user'
else:
print the_username, 'has the id', user['user_id']
```
변수의 일부분을 SQL 구문으로 전달하기 위해, 구문 안에 물음표를 사용하고 목록으로 인자안에 전달한다. 절대로 직접 인자들을 문자열 형태로 SQL 구문에 추가하면 안되는데 왜냐하면 SQL 인젝션(Injections) 을 사용해서 그 어플리케이션을 공격할 수 있기 때문이다.
## 초기 스키마¶
관계형 데이타베이스들은 스키마를 필요로 하기 때문에, 어플리케이션들은 데이타베이스를 생성하는 schema.sql 파일을 종종 만들어낸다. 그 스키마에 기반한 데이타베이스를 생성하는 함수를 제공하는 것은 괜찮은 생각이다. 아래 함수는 여러분을 위해 그러한 작업을 할 수 있다:
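예를 들면 아래와 같은 형태가 될 수 있다 (schema.sql 이 어플리케이션 패키지 안에 있다고 가정한 스케치이다):
```
def init_db():
    with app.app_context():
        db = get_db()
        with app.open_resource('schema.sql', mode='r') as f:
            db.cursor().executescript(f.read())
        db.commit()
```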
그리고 나면 여러분은 파이썬 쉘에서 그런 데이타베이스를 생성할 수 있다:
```
>>> from yourapplication import init_db
>>> init_db()
```
많은 사람들이 데이타베이스에 접근하기 위해 SQLAlchemy 선호한다. 이런 경우 여러분의 Flask 어플리케이션에 대해 모듈 보다는 패키지를 사용하고 모델들을 분리된 모듈로 만드는 것이 독려된다(더 큰 어플케이션들). 그것이 필수는 아니지만, 많은 부분에서 이해가 될만하다.
SQLAlchemy를 사용하는 매우 일반적인 네 가지 방식이 있다. 여기서 그것들을 각각 간략하게 설명할 것이다:
## Flask-SQLAlchemy 확장¶
SQLAlchemy는 공통 데이타베이스 추상 계층이고 설정하는데 약간의 노력을 요하는 객체 관계형 맵퍼(mapper)이기 때문에, 여러분을 위해 그 역할을 해줄 Flask 확장(extension)이 있다. 여러분이 빨리 시작하기를 원한다면 이 방식을 추천한다.
여러분은 PyPI (http://pypi.python.org/pypi/Flask-SQLAlchemy) 에서 Flask-SQLAlchemy 를 받을 수 있다.
## 선언부(Declarative)¶
SQLAlchemy에서 선언부(declarative) 확장은 SQLAlchemy를 사용하는 가장 최신 방법이다. 그 방법은 여러분이 한꺼번에 테이블들과 모델들을 정의하도록 해주는데, 그 방식은 Django(장고)가 동작하는 방식과 유사하다. 다음의 내용에 추가하여 declarative 확장에 대한 공식 문서를 권고한다.
아래는 여러분의 어플리케이션을 위해 database.py 모듈의 예제이다:
```
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
engine = create_engine('sqlite:////tmp/test.db', convert_unicode=True)
db_session = scoped_session(sessionmaker(autocommit=False,
autoflush=False,
bind=engine))
Base = declarative_base()
Base.query = db_session.query_property()
def init_db():
# import all modules here that might define models so that
# they will be registered properly on the metadata. Otherwise
# you will have to import them first before calling init_db()
import yourapplication.models
Base.metadata.create_all(bind=engine)
```
모델들을 정의하기 위해, 위의 코드로 생성된 Base 클래스를 상속하면 된다. 왜 우리가 여기서 (위의 SQLite3 예제에서 `g` 객체를 가지고 한 것처럼) 쓰레드를 신경쓰지 않아도 되는지 궁금하다면: 그것은 SQLAlchemy가 `scoped_session` 을 가지고 여러분을 위해 이미 그러한 작업을 해주기 때문이다.
여러분의 어플리케이션에서 선언적인 방식으로 SQLAlchemy를 사용하려면, 여러분의 어플리케이션 모듈에 아래의 코드를 집어넣기만 하면 된다. Flask는 여러분을 위해 요청의 끝에서 데이타베이스 세션을 제거할 것이다:
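대략 다음과 같은 스케치가 될 수 있다 (이 문서의 SQLite3 예제와 같은 `teardown_appcontext` 방식을 가정했다):
```
from yourapplication.database import db_session

@app.teardown_appcontext
def shutdown_session(exception=None):
    db_session.remove()
```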
아래는 예제 모델이다 (이 코드를 models.py 에 넣어라, e.g.):
```
from sqlalchemy import Column, Integer, String
from yourapplication.database import Base
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String(50), unique=True)
email = Column(String(120), unique=True)

    def __init__(self, name=None, email=None):
        self.name = name
        self.email = email

    def __repr__(self):
        return '<User %r>' % (self.name)
```
데이타베이스를 생성하기 위해서 여러분은 init_db 함수를 사용할 수 있다:
```
>>> from yourapplication.database import init_db
>>> init_db()
```
여러분은 아래와 같이 항목들을 데이타베이스에 추가할 수 있다:
```
>>> from yourapplication.database import db_session
>>> from yourapplication.models import User
>>> u = User('admin', '<EMAIL>')
>>> db_session.add(u)
>>> db_session.commit()
```
질의하는것 또한 간단하다:
```
>>> User.query.all()
[<User u'admin'>]
>>> User.query.filter(User.name == 'admin').first()
<User u'admin'>
```
## 수동 객체 관계 매핑¶
수동 객체 관계 매핑은 앞에서 나온 선언적 접근에 대비하여 몇 가지 장단점을 갖는다. 주요한 차이점은 여러분이 테이블들과 클래스들을 분리해서 정의하고 그것들을 함께 매핑한다는 것이다. 그 방식은 더 유연하지만 입력할 것이 약간 더 있다. 일반적으로 선언적 접근처럼 동작하기 때문에 어려분의 어플리케이션 또한 패키지안에 여러 모듈로 분리되도록 보장해라.
여기 여러분의 어플리케이션에 대한 database.py 모듈의 예가 있다:
```
from sqlalchemy import create_engine, MetaData
from sqlalchemy.orm import scoped_session, sessionmaker
engine = create_engine('sqlite:////tmp/test.db', convert_unicode=True)
metadata = MetaData()
db_session = scoped_session(sessionmaker(autocommit=False,
autoflush=False,
bind=engine))
def init_db():
metadata.create_all(bind=engine)
```
선언적 접근법에서와 마찬가지로 여러분은 각 요청 후에 세션을 닫을 필요가 있다. 아래 코드를 여러분의 어플리케이션 모듈에 넣어라:
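역시 대략 다음과 같은 형태를 생각할 수 있다:
```
from yourapplication.database import db_session

@app.teardown_appcontext
def shutdown_session(exception=None):
    db_session.remove()
```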
여기에 예제 테이블과 모델이 있다 (이것을 models.py 에 넣어라):
```
from sqlalchemy import Table, Column, Integer, String
from sqlalchemy.orm import mapper
from yourapplication.database import metadata, db_session
class User(object):
query = db_session.query_property()
users = Table('users', metadata,
Column('id', Integer, primary_key=True),
Column('name', String(50), unique=True),
Column('email', String(120), unique=True)
)
mapper(User, users)
```
질의하고 추가하는 것은 위의 예제에서와 정확히 같게 동작한다.
## SQL 추상 계층¶
여러분이 단지 데이타베이스 시스템 (그리고 SQL) 추상 계층을 사용하고 싶다면 여러분은 기본적으로 단지 그 엔진만 필요한 것이다:
```
from sqlalchemy import create_engine, MetaData
engine = create_engine('sqlite:////tmp/test.db', convert_unicode=True)
metadata = MetaData(bind=engine)
```
그러면 여러분은 위의 예제에서 처럼 여러분의 코드에 테이블을 선언할 수 있거나, 자동으로 그것들을 적재할 수 있다:
```
users = Table('users', metadata, autoload=True)
```
데이타를 추가하기 위해서 여러분은 insert 메소드를 사용할 수 있다. 우리는 트랜젝션을 사용할 수 있도록 먼저 연결을 얻어야 한다:
```
>>> con = engine.connect()
>>> con.execute(users.insert(), name='admin', email='admin@localhost')
```
SQLAlchemy는 자동으로 커밋을 할 것이다.
여러분의 데이타베이스에 질의하기 위해서, 여러분은 직접 엔진을 사용하거나 트랜잭션을 사용한다.
```
>>> users.select(users.c.id == 1).execute().first()
(1, u'admin', u'admin@localhost')
```
이런 결과들 또한 딕셔너리와 같은 튜플이다:
```
>>> r = users.select(users.c.id == 1).execute().first()
>>> r['name']
u'admin'
```
여러분은 또한 `execute()` 메소드에
SQL 구문의 문자열을 넘길 수 있다.:
```
>>> engine.execute('select * from users where id = :1', [1]).first()
(1, u'admin', u'admin@localhost')
```
SQLAlchemy에 대해서 더 많은 정보는 website 로 넘어가면 된다.
오 그렇다, 그리운 파일 업로드이다. 파일 업로드의 기본 방식은 실제로 굉장히 간단하다. 기본적으로 다음과 같이 동작한다:
* `<form>` 태그에 `enctype=multipart/form-data` 와 `<input type=file>` 을 넣는다.
* 어플리케이션이 요청 객체의 `files` 딕셔너리로부터 파일 객체에 접근한다.
* 파일시스템에 영구적으로 저장하기 위해 파일 객체의 `save()` 메소드를 사용한다.
## 파일 업로드의 가벼운 소개¶
지정된 업로드 폴더에 파일을 업로드하고 사용자에게 파일을 보여주는 매우 간단한 어플리케이션으로 시작해보자:
```
import os
from flask import Flask, request, redirect, url_for
from werkzeug import secure_filename
UPLOAD_FOLDER = '/path/to/the/uploads'
ALLOWED_EXTENSIONS = set(['txt', 'pdf', 'png', 'jpg', 'jpeg', 'gif'])
app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
```
자 첫번째로 몇 가지 패키지를 임포트해야 한다. 대부분 직관적이지만, `werkzeug.secure_filename()` 은 나중에 약간 설명이 더 필요하다.
UPLOAD_FOLDER 는 업로드된 파일이 저장되는 것이고 ALLOWED_EXTENSIONS 은
허용할 파일의 확장자들이다. 그리고 나면 보통은 어플리케이션에 직접 URL
규칙을 추가하는데 여기서는 그렇게 하지 않을 것이다. 왜 여기서는 하지 않는가?
왜냐하면 우리가 사용하는 웹서버 (또는 개발 서버) 가 이런 파일을 업로드하는
역할도 하기 때문에 이 파일에 대한 URL을 생성하기 위한 규칙만 필요로 한다.
왜 허용할 파일 확장자를 제한하는가? 서버가 클라이언트로 직접 데이타를 전송한다면 여러분은 아마도 사용자가 그 서버에 뭐든지 올릴 수 있는 것을 원하지 않을 것이다. 그런 방식으로 여러분은 사용자가 XSS 문제 (Cross-Site Scripting (XSS)) 를 야기할 수도 있는 HTML 파일을 업로드하지 못하도록 할 수 있다. 또한 서버가 .php 파일과 같은 스크립트를 실행할 수 있다면 그 파일 또한 허용하지 말아야 한다. 하지만, 누가 이 서버에 PHP를 설치하겠는가, 그렇지 않은가? :)
다음은 확장자가 유효한지 확인하고 파일을 업로드하고 업로드된 파일에 대한 URL로 사용자를 리디렉션하는 함수들이다:
```
def allowed_file(filename):
return '.' in filename and \
filename.rsplit('.', 1)[1] in ALLOWED_EXTENSIONS
@app.route('/', methods=['GET', 'POST'])
def upload_file():
if request.method == 'POST':
file = request.files['file']
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
return redirect(url_for('uploaded_file',
filename=filename))
return '''
<!doctype html>
<title>Upload new File</title>
<h1>Upload new File</h1>
<form action="" method=post enctype=multipart/form-data>
<p><input type=file name=file>
<input type=submit value=Upload>
</form>
'''
```
그렇다면 이 `secure_filename()` 함수는 실제로 무엇을 하는건가?
이제 문제는 “절대로 사용자의 입력을 믿지마라” 라고 불리우는 원칙에 있다.
이것은 또한 업로드된 파일명에 대해서도 적용된다. 모든 전송된 폼 데이타는
위조될 수 있고, 그래서 파일명이 위험할 수도 있다. 잠시동안 기억해보자:
파일시스템에 직접 파일을 저장하기 전에 파일명을 보호하기 위해 항상 이 함수를
사용하자.
전문가를 위한 정보 (Information for the Pros)
그래서 여러분은 `secure_filename()` 함수가 하는 것에
관심이 있고 그 함수를 사용하지 않는다면 무슨 문제가 있는가? 그렇다면 어떤 사람이
여러분의 어플리케이션에 `filename`으로 다음과 같은 정보를 보낸다고 생각해보자:
```
filename = "../../../../home/username/.bashrc"
```
`../` 의 개수가 맞게 되있고 이것과 UPLOAD_FOLDER 와 더한다고 가정하면
사용자는 수정하면 안 되는 서버의 파일시스템에 있는 파일을 수정할 수 있게
된다. 이것은 어플리케이션이 어떻게 생겼는가에 대한 약간의 정보를 요구하지만,
나를 믿어라, 해커들은 참을성이 많다 :)
이제 이 함수가 동작하는 것을 살펴보자:
```
>>> secure_filename('../../../../home/username/.bashrc')
'home_username_.bashrc'
```
지금 한가지 마지막으로 놓친 것이 있다: 업로드된 파일의 제공이다. 플라스크 0.5부터는 업로드된 파일을 제공할 수 있는 함수를 사용할 수 있다:
```
from flask import send_from_directory

@app.route('/uploads/<filename>')
def uploaded_file(filename):
return send_from_directory(app.config['UPLOAD_FOLDER'],
filename)
```
다른방법으로 여러분은 build_only 로써 uploaded_file 을 등록하고 `SharedDataMiddleware` 를 사용할 수 있다. 이것은
또한 플라스크의 지난 과거 버전에서도 동작한다:
```
from werkzeug import SharedDataMiddleware
app.add_url_rule('/uploads/<filename>', 'uploaded_file',
build_only=True)
app.wsgi_app = SharedDataMiddleware(app.wsgi_app, {
'/uploads': app.config['UPLOAD_FOLDER']
})
```
여러분 이제 이 어플리케이션을 실행하면 기대하는데로 모든 것이 동작해야 할 것이다.
## 업로드 개선하기¶
버전 0.6에 추가.
그렇다면 정확히 플라스크가 업로드를 어떻게 처리한다는 것인가? 플라스크는 업로드된 파일이 적당히 작다면 웹서버의 메모리에 저장하고, 그렇지 않다면 임시 장소 (`tempfile.gettempdir()`) 에 저장할 것이다. 그러나 어떤 크기를 넘으면 업로드가 중단되도록 최대 파일 크기를 지정하려면 어떻게 해야 하는가? 기본적으로 플라스크는 메모리 제한 없이 파일 업로드를 허용하지만, `MAX_CONTENT_LENGTH` 설정 키값을 설정하여 크기를 제한할 수 있다:
```
app = Flask(__name__)
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024
```
위의 코드는 허용되는 최대 파일 크기를 16메가바이트로 제한할 것이다. 그 최대 크기보다 더 큰 파일이 업로드되면, 플라스크는 `RequestEntityTooLarge` 예외를 발생시킬 것이다.
이 기능은 플라스크 0.6에서 추가됐지만, 요청 객체를 상속받아서 이전 버전에서 사용할 수도 있다. 더 많은 정보는 벡자이크(Werkzeug) 문서의 파일 처리(file handling) 을 검토해봐라.
## 업로드 진행 상태바¶
얼마전에 많은 개발자들이 클라이언트에서 자바스크립트로 업로드 진행 상태를 받아올 수 있도록 작은 단위로 유입되는 파일을 읽어서 데이터베이스에 진행 상태를 저장하는 방식을 생각했다. 짧게 얘기하자면: 클라이언트가 서버에 5초마다 얼마나 전송됐는지 묻는다. 얼마나 아이러니인지 알겠는가? 클리언트는 이미 자신이 알고 있는 사실을 묻고 있는 것이다.
이제 더 빠르고 안정적으로 동작하는 더 좋은 해결책이 있다. 웹은 최근에 많은 변화가 있었고 여러분은 HTML5, Java, Silverlight 나 Flash 을 사용해서 클라이언트에서 더 좋은 업로드 경험을 얻을 수 있다. 다음 라이브러리들은 그런 작업을 할 수 있는 몇 가지 좋은 예제들을 보여준다:
* Plupload - HTML5, Java, Flash
* SWFUpload - Flash
* JumpLoader - Java
## 더 쉬운 해결책¶
업로드를 다루는 모든 어플리케이션에서 파일 업로드에 대한 일반적인 패턴은 거의 변화가 없었기 때문에, 파일 확장자에 대한 화이트/블랙리스트와 다른 많은 기능을 제공하는 업로드 메커니즘을 구현한 Flask-Uploads 라는 플라스크 확장이 있다.
여러분의 어플리케이션이 느린 경우, 일종의 캐시를 넣어봐라. 그것이 속도를 높이는 최소한의 가장 쉬운 방법이다. 캐시가 무엇을 하는가? 여러분이 수행을 마치는데 꽤 시간이 걸리는 함수를 갖고 있지만 결과가 실시간이 아닌 5분이 지난 결과도 괜찮다고 하자. 그렇다면 여러분은 그 시간동안 결과를 캐시에 넣어두고 사용해도 좋다는게 여기의 생각이다.
플라스크 그 자체는 캐시를 제공하지 않지만, 플라스크의 토대가 되는 라이브러리 중 하나인 벡자이크(Werkzeug)는 굉장히 기본적인 캐시를 지원한다. 벡자이크는 다중 캐시 백엔드를 지원하는데, 보통은 memcached 서버를 사용하고 싶을 것이다.
## 캐시 설정하기¶
여러분은 `Flask` 을 생성하는 방법과 유사하게 캐시 객체를
일단 생성하고 유지한다. 여러분이 개발 서버를 사용하고 있따면 여러분은 `SimpleCache` 객체를 생성할 수 있고,
그 객체는 파이썬 인터프리터의 메모리에 캐시의 항목을 저장하는 간단한 캐시다:
```
from werkzeug.contrib.cache import SimpleCache
cache = SimpleCache()
```
여러분이 memcached를 사용하고 싶다면, 지원되는 memcache 모듈중 하나를 갖고 (PyPI 에서 얻음) 어디선가 memcached 서버가 동작하는 것을 보장해라. 그리고 나면 아래의 방식으로 memcached 서버에 연결하면 된다:
```
from werkzeug.contrib.cache import MemcachedCache
cache = MemcachedCache(['127.0.0.1:11211'])
```
여러분이 App 엔진을 사용한다면, 손쉽게 App 엔진 memcache 서버에 연결할 수 있다:
```
from werkzeug.contrib.cache import GAEMemcachedCache
cache = GAEMemcachedCache()
```
## 캐시 사용하기¶
캐시는 어떻게 사용할 수 있을까? 두가지 굉장히 중요한 함수가 있다: `get()` 과 `set()` 이다. 아래는 사용 방법이다: 캐시에서 항목을 얻기 위해서는 문자열로 된 키 명으로 `get()` 를 호출하면 된다. 캐시에 그 키에
값이 있따면, 그 값이 반환된다. 없다면 `None`이 반환될 것이다:
```
rv = cache.get('my-item')
```
캐시에 항목을 넣기 위해서는, `set()` 를
사용하면 된다. 첫번째 인자는 키이고 두번째는 설정할 값이다. 타임아웃 또한
항목으로 넣을 수가 있는데 그 시간이 지나면 캐시에서 자동으로 그 항목은 삭제된다.
아래는 보통 정상적으로 사용되는 전체 예제이다:
```
def get_my_item():
rv = cache.get('my-item')
if rv is None:
rv = calculate_value()
cache.set('my-item', rv, timeout=5 * 60)
return rv
```
파이썬에는 함수 데코레이터라 불리는 꽤 흥미로운 기능이 있다. 이 기능은 웹 어플리케이션에서 정말 깔끔한 것들을 가능하게 한다. 플라스크의 각 뷰는 함수이기 때문에, 한 개 이상의 함수에 추가적인 기능을 주입하는 데 데코레이터가 사용될 수 있다. 여러분은 이미 `route()` 데코레이터를
사용했을 것이다. 하지만 여러분 자신만의 데코레이터를 구현하는 몇가지
사용법이 있다. 예를 들면, 로그인한 사람들에게만 사용되야하는 뷰가 있다고
하자. 사용자가 해당 사이트로 가서 로그인 하지 않았다면, 그 사용자는
로그인 페이지로 리디렉션 되어야한다. 이런 상황이 데코레이터가 훌륭한
해결책이 되는 사용법의 좋은 예이다.
## 로그인이 필수적인 데코레이터¶
자 그렇다면 그런 데코레이터를 구현해보자. 데코레이터는 함수를 반환하는 함수이다. 사실 꽤 간단한 개념이다. 이런것을 구현할 때 꼭 유념해야할 것은 __name__, __module__ 그리고 함수의 몇 가지 다른 속성들이다. 이런 것을 자주 잊곤하는데, 수동으로 그것을 할 필요는 없고, 그것을 해주는 데코레이터처럼 사용되는 함수가 있다 ( `functools.wraps()` ). 이 예제는 로그인 페이지를 `login` 이라하고 현재 사용자는 g.user 로
저장돼있으며 로그인되지 않았다면 None 이 저장된다고 가정한다:
```
from functools import wraps
from flask import g, request, redirect, url_for
def login_required(f):
@wraps(f)
def decorated_function(*args, **kwargs):
if g.user is None:
return redirect(url_for('login', next=request.url))
return f(*args, **kwargs)
return decorated_function
```
그렇다면 여러분은 여기서 그 데코레이터를 어떻게 사용하겠는가? 뷰 함수의 가장 안쪽에 있는 데코레이터로 사용하면 된다. 더 나아가서 적용할 때, `route()` 데코레이터가 가장 바깥에 있다는 것을 기억해라:
```
@app.route('/secret_page')
@login_required
def secret_page():
pass
```
## 캐싱 데코레이터¶
여러분이 시간이 많이 소요되는 계산을 하는 뷰 함수를 가지고 있고 그것 때문에 일정 시간동안 계산된 결과를 캐시하고 싶다고 생각해보자. 이런 경우 데코레이터가 멋지게 적용될 수 있다. 우리는 여러분이 캐싱(Caching) 라 언급한 캐시를 만든다고 가정할 것이다.
여기에 캐시 함수의 예제가 있다. 그 캐시 함수는 특정 접두어(형식이 있는 문자열) 로부터 캐시 키와 요청에 대한 현재 경로를 생성한다. 먼저 함수 자체를 데코레이트하는 데코레이터를 먼저 생성하는 함수를 사용하고 있다는 것에 주목해라. 끔찍하게 들리는가? 불행히도 약간 더 복잡하지만, 그 코드는 읽기엔 여전히 직관적일 것이다.
데코레이트된 함수는 다음과 같이 동작할 것이다
* 현재 경로에 기반한 현재 요청에 대한 유일한 캐시 키를 얻는다.
* 캐시에서 그 키에 대한 값을 얻는다. 캐시가 어떤값을 반환하면 우리는 그 값을 반환할 것이다.
* 그 밖에는 원본 함수가 호출되고 반환값은 주어진 타임아웃 (기본값 5분) 동안 캐시에 저장된다.
여기 코드가 있다:
```
from functools import wraps
from flask import request

def cached(timeout=5 * 60, key='view/%s'):
def decorator(f):
@wraps(f)
def decorated_function(*args, **kwargs):
cache_key = key % request.path
rv = cache.get(cache_key)
if rv is not None:
return rv
rv = f(*args, **kwargs)
cache.set(cache_key, rv, timeout=timeout)
return rv
return decorated_function
return decorator
```
이 함수는 인스턴스화된 cache 객체가 사용 가능하다고 가정한다는 것에 주목하고, 더 많은 정보는 캐싱(Caching) 을 살펴봐라.
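사용법은 대략 다음과 같다. 아래의 뷰 함수, URL, 템플릿 이름과 compute_report() 는 설명을 위해 가정한 것이다:
```
@app.route('/slow-report')
@cached(timeout=10 * 60)
def slow_report():
    # assumed expensive computation whose result can be reused for 10 minutes
    return render_template('report.html', data=compute_report())
```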
## 데코레이터를 템플화하기¶
일전에 터보기어스(TurboGears) 친구들이 고안한 공통 패턴이 데코레이터 템플릿화이다. 그 데코레이터의 방식은 여러분은 뷰 함수로부터 템플릿에 넘겨질 값들을 가진 딕셔너리를 반환하고 그 템플릿은 자동으로 값을 화면에 뿌려준다. 그 템플릿으로, 아래 세가지 예제는 정확히 같은 동작을 한다:
```
@app.route('/')
def index():
return render_template('index.html', value=42)
@app.route('/')
@templated('index.html')
def index():
return dict(value=42)
@app.route('/')
@templated()
def index():
return dict(value=42)
```
여러분이 볼 수 있는 것처럼, 템플릿 명이 없다면 URL 맵의 끝점의 점(dot)을 슬래쉬(/)로 바꾸고 `'.html'` 을 더해서 사용할 것이다. 데코레이트된
함수가 반환할 때, 반환된 딕셔너리는 템플릿 렌더링 함수에 넘겨진다.
None 이 반환되면 빈 딕셔너리로 가정하고, 딕셔너리가 아닌 다른 것이
반환되면 그것을 변경하지 않고 그대로 반환한다. 그 방식으로
여러분은 여전히 리디렉트 함수를 사용하거나 간단한 문자열을 반환할 수 있다.
여기 그 데코레이터에 대한 코드가 있다:
```
from functools import wraps
from flask import request, render_template

def templated(template=None):
def decorator(f):
@wraps(f)
def decorated_function(*args, **kwargs):
template_name = template
if template_name is None:
template_name = request.endpoint \
.replace('.', '/') + '.html'
ctx = f(*args, **kwargs)
if ctx is None:
ctx = {}
elif not isinstance(ctx, dict):
return ctx
return render_template(template_name, **ctx)
return decorated_function
return decorator
```
## 끝점(Endpoint) 데코레이터¶
여러분이 더 유연함을 제공하는 벡자이크 라우팅 시스템을 사용하고 싶을 때 여러분은 `Rule` 에 정의된 것 처럼 끝점을 뷰 함수와
맞출(map) 필요가 있다. 이것은 이 끝점 데코레이터와 함께 사용하여 가능하다.
예를 들면:
```
from flask import Flask
from werkzeug.routing import Rule
app = Flask(__name__)
app.url_map.add(Rule('/', endpoint='index'))
@app.endpoint('index')
def my_index():
return "Hello world"
```
여러분이 브라우저에서 전송되는 폼 데이타를 가지고 작업해야할 때 곧 뷰 코드는 읽기가 매우 어려워진다. 이 과정을 더 쉽게 관리할 수 있도록 설계된 라이브러리들이 있다. 그것들 중 하나가 우리가 여기서 다룰 예정인 WTForms 이다. 여러분이 많은 폼을 갖는 상황에 있다면, 이것을 한번 시도해보길 원할지도 모른다.
여러분이 WTForms로 작업하고 있을 때 여러분은 먼저 클래스로 그 폼들을 정의해야한다. 어플리케이션을 여러 모듈로 쪼개고 (더 큰 어플케이션들) 폼에 대한 분리된 모듈을 추가하는 것을 권고한다.
확장을 가지고 대부분의 WTForms 얻기
Flask-WTF 확장은 이 패턴을 확대하고 폼과 작업하면서 플라스크를 더 재밌게 만드는 몇가지 유효한 작은 헬퍼들을 추가한다. 여러분은 PyPI 에서 그 확장을 얻을 수 있다.
## 폼(Forms)¶
이것은 전형적인 등록 페이지에 대한 예제 폼이다:
```
from wtforms import Form, BooleanField, TextField, PasswordField, validators
class RegistrationForm(Form):
username = TextField('Username', [validators.Length(min=4, max=25)])
email = TextField('Email Address', [validators.Length(min=6, max=35)])
password = PasswordField('New Password', [
validators.Required(),
validators.EqualTo('confirm', message='Passwords must match')
])
confirm = PasswordField('Repeat Password')
accept_tos = BooleanField('I accept the TOS', [validators.Required()])
```
## 뷰 안에서(In the View)¶
뷰 함수에서, 이 폼의 사용은 이것 처럼 보인다:
```
@app.route('/register', methods=['GET', 'POST'])
def register():
form = RegistrationForm(request.form)
if request.method == 'POST' and form.validate():
user = User(form.username.data, form.email.data,
form.password.data)
db_session.add(user)
flash('Thanks for registering')
return redirect(url_for('login'))
return render_template('register.html', form=form)
```
뷰가 SQLAlchemy (Flask에서 SQLAlchemy 사용하기) 를 사용한다고 가정하지만, 물론 필수조건은 아니라는 사실을 염두해라. 필요에 따라 코드를 수정해라.
기억할 것:
* 데이터가 HTTP POST 로 전송되었으면 request.form 으로부터, GET 으로 전송되었으면 request.args 로부터 폼을 생성한다.
* 데이터를 검증하려면 validate() 메소드를 호출한다. 데이터가 유효하면 True 를, 아니면 False 를 반환한다.
* 폼의 개별 값에 접근하려면 form.<NAME>.data 로 접근한다.
## 템플릿에 있는 폼¶
이제 템플릿 측면에서 살펴보자. 여러분이 템플릿에 폼을 넘겨주면 그 폼을 템플릿에서 쉽게 뿌려줄 수 있다. 이런 방식이 얼마나 쉽게 되는지 보기 위해 다음 템플릿 예제를 보자. WTForms 가 이미 폼 생성의 반을 처리했다. 조금 더 멋지게 만들기 위해서, 우리는 레이블과 오류가 발생한다면 오류의 목록까지 가진 필드를 그려줄 매크로(macro)를 작성할 수 있다.
여기 그런 방식의 메크로를 가진 예제인 _formhelpers.html 템플릿이 있다:
```
{% macro render_field(field) %}
<dt>{{ field.label }}
<dd>{{ field(**kwargs)|safe }}
{% if field.errors %}
<ul class=errors>
{% for error in field.errors %}
<li>{{ error }}</li>
{% endfor %}
</ul>
{% endif %}
</dd>
{% endmacro %}
```
이 매크로는 필드를 뿌려주는 WTForm의 필드 함수로 넘겨지는 몇 가지 키워드 인자를 허용한다. 그 키워드 인자는 HTML 속성으로 추가될 것이다. 그래서 예를 들면 여러분은 입력 요소에 클래스(class)를 추가하기 위해 `render_field(form.username, class='username')` 를 호출할 수 있다.
WTForms는 표준 파이썬 유니코드 문자열을 반환하므로, `|safe` 필터를 사용해서 이 데이터가 이미 HTML 이스케이프 처리되어 있다고 진자2(Jinja2)에 알려줘야 한다는 것에 주목해라.
아래는 _formhelpers.html 템플릿을 이용해서 위에서 사용된 함수로 만든 register.html 템플릿이다:
```
{% from "_formhelpers.html" import render_field %}
<form method=post action="/register">
<dl>
{{ render_field(form.username) }}
{{ render_field(form.email) }}
{{ render_field(form.password) }}
{{ render_field(form.confirm) }}
{{ render_field(form.accept_tos) }}
</dl>
<p><input type=submit value=Register>
</form>
```
WTForms에 대한 더 많은 정보는, WTForms website 로 가서 살펴봐라.
진자(Jinja)의 가장 강력한 부분은 템플릿 상속 기능이다. 템플릿 상속은 여러분의 사이트에 대한 모든 일반적인 요소들을 포함한 기본 “스켈레톤(skeleton)” 템플릿을 생성하도록 하고 자식 템플릿은 기본 템플릿을 오버라이드(override)할 수 있는 blocks 을 정의한다.
복잡해 보이지만 꽤 간단하다. 예제로 시작하는게 이해하는데 가장 쉽다.
## 기본 템플릿¶
우리가 `layout.html` 이라 부를 이 팀플릿은 간단한 두개의 칼럼을 가진 페이지로
사용할 간단한 HTML 스켈레톤 문서를 정의한다. 내용의 빈 블럭을 채우것이 “자식”
템플릿의 일이다:
```
<!doctype html>
<html>
<head>
{% block head %}
<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
<title>{% block title %}{% endblock %} - My Webpage</title>
{% endblock %}
</head>
<body>
<div id="content">{% block content %}{% endblock %}</div>
<div id="footer">
{% block footer %}
© Copyright 2010 by <a href="http://domain.invalid/">you</a>.
{% endblock %}
</div>
</body>
```
이 예제에서, `{% block %}` 태그는 자식 템플릿이 채울 수 있는 네개의 블럭을
정의한다. block 태그가 하는 전부는 템플릿 엔진에 자식 템플릿이 템플릿의 block
태그를 오버라이드할 수 도 있다라고 알려준다.
## 자식 템플릿¶
자식 템플릿은 아래와 같이 보일 수도 있다:
```
{% extends "layout.html" %}
{% block title %}Index{% endblock %}
{% block head %}
{{ super() }}
<style type="text/css">
.important { color: #336699; }
</style>
{% endblock %}
{% block content %}
<h1>Index</h1>
<p class="important">
Welcome on my awesome homepage.
{% endblock %}
```
`{% extends %}` 태그가 여기서 핵심이다. 이 태그는 템플릿 엔진에게 이 템플릿이
다른 템플릿을 “확장(extends)” 한다라고 알려준다. 템플릿 시스템이 이 템플릿을
검증할 때, 가장 먼저 부모 템플릿을 찾는다. 그 확장 태그가 템플릿에서 가장 먼저
있어야 한다. 부모 템플릿에 정의된 블럭의 내용을 보여주려면 `{{ super() }}` 를
사용하면 된다.
좋은 어플리케이션과 사용자 인터페이스의 모든것은 피드백이다. 사용자가 충분한 피드백을 얻지 못한다면 그들은 결국 그 어플리케이션을 싫어할 것이다. 플라스크는 플래싱 시스템을 가지고 사용자에게 피드백을 주는 정말 간단한 방법을 제공한다. 플래싱 시스템은 기본적으로 요청의 끝에 메시지를 기록하고 그 다음 요청에서만 그 메시지에 접근할 수 있게 한다. 보통은 플래싱을 처리하는 레이아웃 템플릿과 결함되어 사용된다.
## 간단한 플래싱¶
그래서 아래 전체 예제를 준비했다:
```
from flask import Flask, flash, redirect, render_template, \
request, url_for
app = Flask(__name__)
app.secret_key = 'some_secret'
@app.route('/login', methods=['GET', 'POST'])
def login():
error = None
if request.method == 'POST':
if request.form['username'] != 'admin' or \
request.form['password'] != 'secret':
error = 'Invalid credentials'
else:
flash('You were successfully logged in')
return redirect(url_for('index'))
return render_template('login.html', error=error)
if __name__ == "__main__":
app.run()
```
그리고 여기에 그 마법을 다룰 `layout.html` 템플릿이 있다:
```
<!doctype html>
<title>My Application</title>
{% with messages = get_flashed_messages() %}
{% if messages %}
<ul class=flashes>
{% for message in messages %}
<li>{{ message }}</li>
{% endfor %}
</ul>
{% endif %}
{% endwith %}
{% block body %}{% endblock %}
```
이것은 index.html 템플릿이다:
```
{% extends "layout.html" %}
{% block body %}
<h1>Overview</h1>
<p>Do you want to <a href="{{ url_for('login') }}">log in?</a>
{% endblock %}
```
물론 로그인 템플릿도 있다:
```
{% extends "layout.html" %}
{% block body %}
<h1>Login</h1>
{% if error %}
<p class=error><strong>Error:</strong> {{ error }}
{% endif %}
<form action="" method=post>
<dl>
<dt>Username:
<dd><input type=text name=username value="{{
request.form.username }}">
<dt>Password:
<dd><input type=password name=password>
</dl>
<p><input type=submit value=Login>
</form>
{% endblock %}
```
## 카테고리를 가진 플래싱¶
버전 0.3에 추가.
메시지를 플래싱 할 때 카테고리를 제공하는 것 또한 가능하다. 어떤 것도 제공되지 않는다면 기본 카테고리는 `'message'` 이다. 다른 카테고리도
사용자에게 더 좋은 피드백을 제공하는데 사용될 수 있다. 예를 들면, 오류
메시지는 붉은색 뒷배경으로 표시될 수 있다. 다른 카테고리로 메시지를 플래시하기 위해서는 `flash()` 함수의 두번째 인자로 카테고리를 넘겨주면 된다: `flash(u'Invalid password provided', 'error')`
그리고 나서 템플릿 안에서 그 카테고리도 함께 받으려면 `get_flashed_messages()` 함수에 알려줘야 한다. 아래의 루프는 그러한 상황에서 약간 다르게 보인다:
```
{% with messages = get_flashed_messages(with_categories=true) %}
{% if messages %}
<ul class=flashes>
{% for category, message in messages %}
<li class="{{ category }}">{{ message }}</li>
{% endfor %}
</ul>
{% endif %}
{% endwith %}
```
이것이 플래시 메시지를 보여주는 방법의 한 가지 예이다. 메시지에 `<strong>Error:</strong>` 과 같은 접두어를 더하기 위해 카테고리를 또한 사용할 수도 있다.
## 플래시 메시지를 필터링하기¶
버전 0.9에 추가.
선택적으로 여러분은 `get_flashed_messages()` 에 그 결과를 필터링할 카테고리의 목록을 넘겨줄 수 있다. 여러분이 분리된 블럭에 각 카테고리를 보여주고 싶다면 이 기능은 유용하다.
```
{% with errors = get_flashed_messages(category_filter=["error"]) %}
{% if errors %}
<div class="alert-message block-message error">
<a class="close" href="#">×</a>
<ul>
{%- for msg in errors %}
<li>{{ msg }}</li>
{% endfor -%}
</ul>
</div>
{% endif %}
{% endwith %}
```
jQuery 는 DOM과 자바스크립트에 공통으로 사용되어 작업을 간편하게 해주는데 사용되는 작은 자바스크립트 라이브러리이다. jQuery는 또한 서버와 클라이언트 사이에 JSON으로 통신하며 더 동적인 웹 어플리케이션을 만들게 해주는 최상의 도구이다.
JSON 그 자체는 매우 경량의 전송 포맷으로, 널리 지원되며 굉장히 파싱하기 쉬운 파이썬 기본 타입(numbers,strings, dicts와 lists)과 유사하게 생겼다. 그것은 수년전에 널리 사용되었고 웹 어플리케이션에서 전송포맷으로 XML을 빠르게 대체하고 있다.
여러분이 파이썬 2.6을 갖고 있다면 JSON은 그 패키지에서 사용될 것이고, 파이썬 2.5에서는 PyPI에서 simplejson 라이브러리를 설치해야할 것이다.
## jQuery 로딩하기¶
jQuery를 사용하기 위해서, 먼저 그것을 다운로드받고 여러분 어플리케이션의 static 폴더에 그 파일을 넣어야한다. 그리고 나서 그것이 로드되는지 확인한다. 이상적으로 여러분은 모든 페이지에서 사용할 layout 템플릿을 갖고 거기에서 <body> 의 하단에 jQuery를 로드할 스크립트 문을 추가해야한다:
```
<script type=text/javascript src="{{
url_for('static', filename='jquery.js') }}"></script>
```
다른 방법은 구글의 AJAX Libraries API 를 사용하는 것이다:
```
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.js"></script>
<script>window.jQuery || document.write('<script src="{{
url_for('static', filename='jquery.js') }}">\x3C/script>')</script>
```
이 경우에 여러분은 대비책으로 static 폴더에 jQuery를 넣어둬야 하지만, 우선 구글로 부터 직접 그 라이브러리를 로딩하도록 할 것이다. 이것은 사용자들이 구글에서 같은 jQuery 버전을 사용하는 다른 웹사이트를 적어도 한번 방문했다면 여러분의 웹 사이트는 더 빠르게 로딩될 것이라는 점에서 장점이 있다. 왜냐하면 그 라이브러리는 브라우저 캐쉬에 이미 있을 것이기 때문이다.
## 내 사이트는 어디에 있는가?¶
여러분의 어플리케이션이 어디에 있는지 알고 있는가? 여러분이 개발하고 있다면 그 답은 굉장히 간단하다: 로컬호스트의 어떤 포트, 그리고 그 서버의 루트에 있다. 그러나 여러분이 최근에 어플리케이션을 다른 위치로 이동하기로 결정했다면 어떠한가? 예를 들면 `http://example.com/myapp` 과 같은 위치로 말이다.
서버 측면에서 이것은 어떤 문제도 되지 않는데 왜냐하면 우리는 그 질문에
답변할 수 있는 간편한 `url_for()` 함수를 사용하고 있기 때문이다.
하지만, 우리는 jQuery를 사용하고 있고 어플리케이션에 경로를 하드코딩하지
않아야 하고 그것을 동적으로 만들어야 한다. 그렇다면 어떻게 해야겠는가?
간단한 방법은 어플리케이션의 루트에 대한 접두어에 전역 변수를 설정한 페이지에 스크립트 태그를 추가하는 것이다. 다음과 같다:
```
<script type=text/javascript>
$SCRIPT_ROOT = {{ request.script_root|tojson|safe }};
</script>
```
`|safe` 는 진자가 HTML 규칙을 가진 JSON 인코딩된 문자열을 이스케이핑하지
못하게 하기 위해 필요하다. 보통은 이것이 필요하겠지만, 다른 규칙을 적용하는
script 블럭 안에 두겠다.
Information for Pros
HTML에서 script 태그는 엔티티로 분석되지 않는 CDATA 로 선언된다. `</script>` 까지 모든 것은 스크립트로 처리된다. 이것은 또한 `</` 와
script 태그 사이에 어떤 것도 존재해서는 안 된다는 것을 의미한다. `|tojson` 은 여기서 제대로 적용되어 슬래쉬도 잘 이스케이프한다 (`{{ "</script>"|tojson|safe }}` 은 `"<\/script>"` 처럼 렌더링된다).
## JSON 뷰 함수¶
이제 두개의 URL 인자를 받아서 더하고 그 결과를 JSON 객체로 어플리케이션에 되돌려주는 서버 측 함수를 생성하자. 이것은 정말 우스운 예제이고 보통은 클라이언트 측에서만 동작할 내용이지만, 그럼에도 불구하고 jQuery와 플라스크가 동작하는 방식을 보여주는 예제이다:
```
from flask import Flask, jsonify, render_template, request
app = Flask(__name__)
@app.route('/_add_numbers')
def add_numbers():
a = request.args.get('a', 0, type=int)
b = request.args.get('b', 0, type=int)
return jsonify(result=a + b)

@app.route('/')
def index():
    return render_template('index.html')
```
여러분이 볼 수 있는 것처럼 템플릿을 뿌려주는 index 메소드도 추가했다. 이 템플릿은 위에서처럼 jQuery를 로딩할 것이고 두 숫자를 더할 수 있는 작은 폼과 서버 측에서 호출될 함수에 대한 링크를 갖고 있다.
우리는 여기서 절대 실패하지 않을 `get()` 메소드를 사용하고 있다는 점에 주목하라. 키가 없다면 기본값 (여기서는 `0` ) 이
반환된다는 것이다. 게다가 그것은 값을 특정 타입 (여기서는 int)으로 변환할
수 있다. 이것은 특히나 스크립트 (APIs, 자바스크립트 등) 로 실행되는 코드에 특히나
유용한데 왜냐하면 여러분은 키가 없을 때 발생하는 특별한 오류 보고가 필요없기
때문이다.
## HTML¶
위의 index.html 템플릿은 jQuery를 로딩하고 $SCRIPT_ROOT 변수를 설정하면서 layout.html 템플릿을 확장하거나 제일 상위에 그것을 설정해야한다. 여기에 우리의 작은 어플리케이션에 대해 필요한 HTML 코드가 있다 (index.html). 우리는 또한 필요한 스크립트를 바로 HTML에 넣는다는 것에 주목해라. 분리된 스크립트 파일에 그 코드를 갖는게 일반적으로는 더 나은 방식이다:
```
<script type=text/javascript>
$(function() {
$('a#calculate').bind('click', function() {
$.getJSON($SCRIPT_ROOT + '/_add_numbers', {
a: $('input[name="a"]').val(),
b: $('input[name="b"]').val()
}, function(data) {
$("#result").text(data.result);
});
return false;
});
});
</script>
<h1>jQuery Example</h1>
<p><input type=text size=5 name=a> +
<input type=text size=5 name=b> =
<span id=result>?</span>
<p><a href=# id=calculate>calculate server side</a>
```
여기서는 jQuery가 어떻게 동작하는지 자세하게 들어가지는 않을 것이고, 단지 위에 있는 일부 코드에 대한 간략한 설명만 있을 것이다:
* `$(function() { ... })` 는 브라우저가 해당 페이지의 기본 구성들을 로딩했을 때 실행될 코드를 지정한다.
* `$('selector')` 는 요소를 선택하고 그 요소에 대해 동작하게 한다.
* `element.bind('event', func)` 는 사용자가 해당 요소를 클릭했을 때 실행될 함수를 지정한다. 그 함수가 false 를 반환하면, 기본 동작은 시작되지 않을 것이다 (이 경우, # URL로 이동).
* `$.getJSON(url, data, func)` 은 url 로 GET 요청을 보내고 data 객체의 내용을 쿼리 인자로 보낼 것이다. 일단 데이터가 도착하면, 반환된 값을 인자로 해서 주어진 함수를 호출할 것이다. 여기서는 앞에서 설정한 $SCRIPT_ROOT 변수를 사용할 수 있다는 것에 주목해라.
여러분이 전체적으로 이해가 안 된다면, 깃허브(github)에서 이 예제에 대한 소스 코드를 받아서 살펴봐라.
플라스크에는 앞에서 나온 HTTP 오류 코드를 가지고 요청을 중단하는 `abort()` 함수가 있다. 그것은 또한 정말 꾸미지 않은 기본적인
설명을 가진 단순한 흑백의 오류 페이지를 제공할 것이다.
오류 코드에 따라서 사용자가 실제로 그런 오류를 볼 가능성이 있다.
## 공통 오류 코드¶
다음의 오류 코드는 어플리케이션이 정상적으로 동작했음에도 불구하고 사용자에게 종종 보여지는 것이다:
* 404 Not Found
* 그 유명한 “이봐 친구, 그 URL 입력에 실수가 있어” 메시지다. 인터넷 초보자조차 404 를 알고 있을 정도로 흔해서, 404 는 다음을 의미한다: 젠장, 내가 찾고 있는 것이 거기에 없네. 404 페이지에 적어도 index 페이지로 돌아갈 수 있는 링크와 같은 유용한 것이 있도록 하는 게 매우 좋은 방식이다.
* 403 Forbidden
* 여러분의 웹사이트에 어떤 접근 제어가 있다면, 허용되지 않는 자원에 대해 403 코드를 보내야할 것이다. 그렇기 때문에 사용자가 금지된 자원에 대해 접근하려할 때 사용자가 링크를 잃어버리지 않도록 해야한다.
* 410 Gone
* 여러분은 “404 Not Found” 에게 “410 Gone” 이라는 형제가 있다는 것을 알았는가? 일부 사람들만 실제로 그것을 구현하지만, 그 방식은 전에 존재했지만 현재 삭제된 자원에 대해 404 대신에 410 으로 응답하는 것이다. 여러분이 데이터베이스에서 영구적으로 문서를 지우지 않고 삭제됐다고 표시만 한다면, 사용자에게 편의를 제공하도록 410 코드를 대신 사용하고 그들이 찾고 있는 것이 영구적으로 삭제됐다는 메시지를 보여줘라.
* 500 Internal Server Error
* 보통 프로그래밍 오류나 서버에 한계 부하를 넘었을 때 이 오류가 발생한다. 그 경우에 멋진 페이지를 보여주는 것이 굉장히 좋은 방식인데, 왜냐하면 여러분의 어플리케이션은 머지않아 다시 동작하지 않을 것이기 때문이다 (여기를 또한 살펴봐라: 어플리케이션 에러 로깅하기).
## 오류 핸들러¶
오류 핸들러는 뷰 함수와 같은 일종의 함수이지만, 오류가 발생했을 때 그 오류를 넘겨받아 호출된다. 오류는 대부분 `HTTPException` 이지만, 어떤 경우에는 다를 수도 있다: 내부 서버 오류에 대한 핸들러는 잡히지 않은 다른 예외 인스턴스도 넘겨받게 된다. 오류 핸들러는 `errorhandler()` 데코레이터와 예외에 대한
오류 코드를 가지고 등록된다. 플라스크는 오류 코드를 설정하지 않을 것
이라는 것을 명심하고, 그렇기 때문에 응답을 반환할 때 HTTP 상태 코드를
제공하도록 해야한다.
여기에 “404 Page Not Found” 예외에 대한 구현 예제가 있다:
```
from flask import render_template

@app.errorhandler(404)
def page_not_found(e):
return render_template('404.html'), 404
```
예제 템플릿은 다음과 같을 것이다:
```
{% extends "layout.html" %}
{% block title %}Page Not Found{% endblock %}
{% block body %}
<h1>Page Not Found</h1>
<p>What you were looking for is just not there.
<p><a href="{{ url_for('index') }}">go somewhere nice</a>
{% endblock %}
```
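같은 방식으로, 위에서 언급한 500 Internal Server Error 에 대해서도 핸들러를 등록할 수 있다. 아래는 간단한 스케치이며 500.html 템플릿 이름은 가정이다. 플라스크가 상태 코드를 대신 설정해주지 않으므로 여기서도 500 을 명시적으로 반환한다:
```
@app.errorhandler(500)
def internal_server_error(e):
    # the status code must be provided explicitly
    return render_template('500.html'), 500
```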
플라스크는 보통 데코레이터를 가지고 사용된다. 데코레이터는 간단하고 여러분은 특정 URL에 대해 호출되는 함수 바로 옆에 URL을 놓게 된다. 그러나 이런 방식에 단점이 있다: 데코레이터를 사용하는 여러분의 모든 코드가 가장 앞단에서 임포트되어야 하고 그렇지 않다면 플라스크는 실제로 여러분의 함수를 결코 찾지 못할 것이다.
여러분의 어플리케이션이 빠르게 임포트되어야 한다면 이것은 문제가 될 수 있다. 구글 앱 엔진이나 다른 시스템처럼 그 자체가 그 임포트를 해야할 지도 모른다. 그렇기 때문에 여러분의 어플리케이션이 규모가 커져서 이 방식이 더 이상 맞지 않다는 것을 갑자기 알게된다면 여러분은 중앙집중식 URL 매핑으로 되돌아 갈 수 도 있다.
중앙집중식 URL 맵을 갖도록 해주는 시스템은 `add_url_rule()` 함수이다. 데코레이터를 사용하는 대신, 여러분은 어플리케이션의 모든 URL을 담은 파일을 가질 수 있다.
## 중앙집중식 URL 맵으로 변환하기¶
현재 어플리케이션이 아래와 같은 모습이라고 상상해보자:
```
@app.route('/')
def index():
    pass

@app.route('/user/<username>')
def user(username):
    pass
```
그렇다면 중앙집중 방식에서는 여러분은 데코레이터가 없는 뷰를 가진 하나의 파일 (views.py) 을 가질 것이다:
```
def index():
    pass

def user(username):
    pass
```
그렇다면 URL과 함수를 매핑하는 어플리케이션 설정 파일은 다음과 같다:
```
from flask import Flask
from yourapplication import views
app = Flask(__name__)
app.add_url_rule('/', view_func=views.index)
app.add_url_rule('/user/<username>', view_func=views.user)
```
## 늦은 로딩¶
지금까지 우리는 단지 뷰와 라우팅만 분리했지만, 모듈은 여전히 처음에 로딩된다. 필요할 때 뷰 함수가 실제 로딩되는 기법이 있다. 이것은 함수와 같이 동작하는 도움 클래스와 같이 수행될 수 있지만, 내부적으로 첫 사용에는 실제 함수를 임포는 한다:
```
from werkzeug import import_string, cached_property
class LazyView(object):
def __init__(self, import_name):
self.__module__, self.__name__ = import_name.rsplit('.', 1)
self.import_name = import_name
@cached_property
def view(self):
return import_string(self.import_name)
def __call__(self, *args, **kwargs):
return self.view(*args, **kwargs)
```
여기서 중요한 것은 __module__ 과 __name__ 알맞게 설정되야 한다. 이것은 여러분이 URL 규칙에 대한 이름을 제공하는 않는 경우에 플라스크가 내부적으로 URL 규칙을 이름짓는 방식을 알기 위해 사용된다.
그리고 나서 여러분은 아래처럼 뷰와 결합할 중앙위치를 정의할 수 있다:
```
from flask import Flask
from yourapplication.helpers import LazyView

app = Flask(__name__)
app.add_url_rule('/',
                 view_func=LazyView('yourapplication.views.index'))
app.add_url_rule('/user/<username>',
                 view_func=LazyView('yourapplication.views.user'))
```
여러분은 프로젝트 명과 점(dot)을 접두어로 붙이고 필요에 따라 view_func 을 LazyView 로 래핑하여 `add_url_rule()` 을 호출하는 함수를 작성함으로써, 입력해야 하는 코드의 양 측면에서 이것을 더 최적화할 수 있다:
```
def url(url_rule, import_name, **options):
view = LazyView('yourapplication.' + import_name)
app.add_url_rule(url_rule, view_func=view, **options)
url('/', 'views.index')
url('/user/<username>', 'views.user')
```
한가지 명심해야할 것은 전후 요청 핸들러가 첫 요청에 올바르게 동작하기 위해 앞단에서 임포트되는 파일에 있어야 한다는 것이다. 나머지 데코레이터도 같은 방식이 적용된다.
기능을 갖춘 DBMS 보다 문서 기반 데이터베이스를 사용하는 것이 요즘은 더 일반적이다. 이번 패턴은 MongoDB와 통합하기 위한 문서 매핑 라이브러리인 MongoKit의 사용법을 보여준다.
이 패턴은 동작하는 MongoDB 서버와 MongoKit 라이브러리가 설치되있는 것을 전제로 한다.
MongoKit을 사용하는 두가지 일반적인 방식이 있다. 여기서 각 방법을 요약할 것이다:
## 선언 부분¶
MongoKit의 기본 동작은 Django 나 SQLAlchemy의 선언적 확장의 공통 방식에 기반을 둔 선언적 방식이다.
아래는 app.py 모듈의 예제이다:
```
from flask import Flask
from mongokit import Connection, Document
# configuration
MONGODB_HOST = 'localhost'
MONGODB_PORT = 27017
# create the little application object
app = Flask(__name__)
app.config.from_object(__name__)
# connect to the database
connection = Connection(app.config['MONGODB_HOST'],
app.config['MONGODB_PORT'])
```
여러분의 모델을 정의하기 위해, MongoKit에서 임포트한 Document 클래스는 상속해라. SQLAlchemy 패턴을 봤다면 여러분은 왜 우리가 세션을 갖고 있지 않고 심지어 init_db 함수를 여기서 정의하지 않았는지 궁금해할 지도 모른다. 한편으로, MongoKit은 세션같은 것을 갖지 않는다. 이것은 때때로 더 많이 타이핑을 하지만 엄청나게 빠르다. 다른 면으로, MongoDB는 스키마가 없다. 이것은 여러분이 하나의 입력 질의로부터 어떤 문제도 없이 다음 질의에서 데이터 구조를 변경할 수 있다. MongoKit 또한 스키마가 없지만, 데이터의 무결성을 보장하기 위해 어떤 검증을 구현한다.
여기서 예제 문서가 있다 (예를 들면 이것 또한 app.py 에 넣는다):
```
def max_length(length):
def validate(value):
if len(value) <= length:
return True
raise Exception('%s must be at most %s characters long' % (value, length))
return validate
class User(Document):
structure = {
'name': unicode,
'email': unicode,
}
validators = {
'name': max_length(50),
'email': max_length(120)
}
use_dot_notation = True
def __repr__(self):
return '<User %r>' % (self.name)
# register the User document with our current connection
connection.register([User])
```
이 예제는 스키마(구조라 불리는)를 정의하는 법과 최대 문자 길이에 대한 검증자를 보여주고, use_dot_notation 이라 불리는 특별한 MongoKit 기능을 사용한다. 기본적으로 MongoKit은 파이썬 딕셔너리처럼 동작하지만, use_dot_notation 을 True 로 설정하면 속성들 사이를 점(dot)으로 구분해서 다른 대부분의 ORM 모델을 쓰듯이 문서를 사용할 수 있다.
여러분은 아래 처럼 데이터베이스에 항목을 넣을 수 있다:
```
>>> from yourapplication.database import connection
>>> from yourapplication.models import User
>>> collection = connection['test'].users
>>> user = collection.User()
>>> user['name'] = u'admin'
>>> user['email'] = u'<EMAIL>'
>>> user.save()
```
MongoKit은 사용되는 컬럼 타입에 다소 엄격해서, name 이나 email 에 일반 str 타입을 쓰면 안 되고 유니코드를 사용해야 한다.
질의하는것 또한 간단하다:
```
>>> list(collection.User.find())
[<User u'admin'>]
>>> collection.User.find_one({'name': u'admin'})
<User u'admin'>
```
## PyMongo 호환성 계층¶
여러분이 PyMongo를 직접 사용하고 싶다면, MongoKit을 가지고도 그것을 할 수 있다. 데이터를 얻는 데 최고의 성능이 필요하다면 이 방식을 사용하면 된다. 이 예제는 Flask와 연결하는 방법까지 보여주지는 않으므로, 그 부분은 위의 MongoKit 코드를 참고해라:
```
from mongokit import Connection
connection = Connection()
```
데이터를 입력하기 위해 여러분은 insert 메소드를 사용할 수있다. 우리는 첫번째로 콜렉션을 얻어야하고, 이것은 SQL 세상에서 테이블과 약간 유사하다.
```
>>> collection = connection['test'].users
>>> user = {'name': u'admin', 'email': u'<EMAIL>'}
>>> collection.insert(user)
```
MongoDB will automatically commit for us.
To query your database, you use the collection directly:
```
>>> list(collection.find())
[{u'_id': ObjectId('4c271729e13823182f000000'), u'name': u'admin', u'email': u'<EMAIL>'}]
>>> collection.find_one({'name': u'admin'})
{u'_id': ObjectId('4c271729e13823182f000000'), u'name': u'admin', u'email': u'<EMAIL>'}
```
These results are also dict-like objects:
```
>>> r = collection.find_one({'name': u'admin'})
>>> r['email']
u'admin<EMAIL>'
```
For more information about MongoKit, head over to its website.
A “favicon” is an icon used by browsers for tabs and bookmarks. It helps to distinguish your website and gives it a unique brand.
A common question is how to add a favicon to a Flask application. First, of course, you need an icon. It should be 16 × 16 pixels and in the ICO file format. This is not a requirement but a de-facto standard supported by all relevant browsers. Put the icon in your static directory as `favicon.ico`.
Now, to get browsers to find your icon, the correct way is to add a link tag in your HTML. For example:
```
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
```
That is all most browsers need, however some really old ones do not support this standard. The old de-facto standard is to serve this file, under that name, at the website root. If your application is not mounted at the root path of the domain you either need to configure the webserver to serve the icon at the root, or you are out of luck. If the application is at the root, however, you can simply route a redirect:
```
app.add_url_rule('/favicon.ico',
redirect_to=url_for('static', filename='favicon.ico'))
```
If you want to save the extra redirect request you can also write a view using `send_from_directory()`:
```
import os
from flask import send_from_directory

@app.route('/favicon.ico')
def favicon():
    return send_from_directory(os.path.join(app.root_path, 'static'),
                               'favicon.ico', mimetype='image/vnd.microsoft.icon')
```
You can leave out the explicit mimetype and it will be guessed, but you might as well specify it to avoid the extra guessing, as it will always be the same.
The above will serve the icon via your application. If possible, however, it is better to configure your dedicated web server to serve it; refer to the web server's documentation.
Sometimes you want to send an enormous amount of data to the client, much more than you want to keep in memory. When you are generating the data on the fly, how do you send that back to the client without the roundtrip to the filesystem?
The answer is by using generators and direct responses.
## Basic Usage¶
This is a basic view function that generates a lot of CSV data on the fly. The trick is to have an inner function that uses a generator to produce the data and then to invoke that function and pass it to a response object:
```
@app.route('/large.csv')
def generate_large_csv():
    def generate():
        for row in iter_all_rows():
            yield ','.join(row) + '\n'
    return Response(generate(), mimetype='text/csv')
```
Each `yield` expression is directly sent to the browser. Note though that some WSGI middlewares might break streaming, so be careful in debug environments with profilers and other things you might have activated.
## Streaming from Templates¶
The Jinja2 template engine also supports rendering templates piece by piece. This functionality is not directly exposed by Flask because it is quite uncommon, but you can easily do it yourself:
```
def stream_template(template_name, **context):
    app.update_template_context(context)
    t = app.jinja_env.get_template(template_name)
    rv = t.stream(context)
    rv.enable_buffering(5)
    return rv

@app.route('/my-large-page.html')
def render_large_template():
    rows = iter_all_rows()
    return Response(stream_template('the_template.html', rows=rows))
```
The trick here is to get the template object from the Jinja2 environment of the application and to call `stream()` instead of `render()`, which returns a stream object instead of a string. Since we are bypassing the Flask template render functions and using the template object itself, we have to make sure to update the render context ourselves by calling `update_template_context()`. The template is then evaluated as the stream is iterated over. Since each time you do a yield the server will flush the content to the client, you might want to buffer up a few items in the template, which you can do with `rv.enable_buffering(size)`. `5` is a sane default.
## Streaming with Context¶
New in version 0.9.
Note that when you stream data, the request context is already gone the moment the function executes. Flask 0.9 provides you with a helper that keeps the request context around while the generator is running:
```
from flask import stream_with_context, request, Response
@app.route('/stream')
def streamed_response():
    def generate():
        yield 'Hello '
        yield request.args['name']
        yield '!'
    return Response(stream_with_context(generate()))
```
Without the `stream_with_context()` function you would get a `RuntimeError` at that point.
One of the design principles of Flask is that response objects are created and passed down a chain of potential callbacks that can modify or replace them. When the request handling starts, there is no response object yet. It is created as necessary, either by a view function or by some other component in the system.
But what happens if you want to modify the response at a point where the response does not exist yet? A common example for that would be a before-request function that wants to set a cookie on the response object.
One way is to avoid the situation. Quite often that is possible. For instance you can try to move that logic into an after-request callback instead. Sometimes, however, moving that code there is just not a very pleasant experience or makes the code look very awkward.
As an alternative possibility you can attach a bunch of callback functions to the `g` object and call them at the end of the request. This way you can defer code execution from anywhere in the application.
## The Decorator¶
The following decorator is the key. It registers a function on a list on the `g` object:
```
def after_this_request(f):
    if not hasattr(g, 'after_request_callbacks'):
        g.after_request_callbacks = []
    g.after_request_callbacks.append(f)
    return f
```
## Calling the Deferred¶
Now you can use the after_this_request decorator to mark a function to be called at the end of the request. But we still need to call them. For this to work the following function needs to be registered as an `after_request()` callback:
```
@app.after_request
def call_after_request_callbacks(response):
    for callback in getattr(g, 'after_request_callbacks', ()):
        response = callback(response)
    return response
```
## A Practical Example¶
Now we can easily, at any point in time, register a function to be called at the end of this particular request. For example you can remember the current language of the user in a cookie in a before-request function:
```
@app.before_request
def detect_user_language():
    language = request.cookies.get('user_lang')
    if language is None:
        language = guess_language_from_request()
        @after_this_request
        def remember_language(response):
            response.set_cookie('user_lang', language)
    g.language = language
```
Some HTTP proxies do not support arbitrary HTTP methods or newer HTTP methods (such as PATCH). In that case it is possible to “proxy” HTTP methods through another HTTP method, in total violation of the protocol.
The way this works is by letting the client do an HTTP POST and setting the `X-HTTP-Method-Override` header to the intended HTTP method (such as `PATCH`).
This can easily be accomplished with an HTTP middleware:
```
class HTTPMethodOverrideMiddleware(object):
    allowed_methods = frozenset([
        'GET',
        'HEAD',
        'POST',
        'DELETE',
        'PUT',
        'PATCH',
        'OPTIONS'
    ])
    bodyless_methods = frozenset(['GET', 'HEAD', 'OPTIONS', 'DELETE'])

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        method = environ.get('HTTP_X_HTTP_METHOD_OVERRIDE', '').upper()
        if method in self.allowed_methods:
            method = method.encode('ascii', 'replace')
            environ['REQUEST_METHOD'] = method
        if method in self.bodyless_methods:
            environ['CONTENT_LENGTH'] = '0'
        return self.app(environ, start_response)
```
To use this with Flask, wrap the application like this:
```
from flask import Flask
app = Flask(__name__)
app.wsgi_app = HTTPMethodOverrideMiddleware(app.wsgi_app)
```
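As a quick sanity check of the override path, a client can issue a plain POST with the override header set. This is a minimal sketch using the third-party `requests` library against a hypothetical `/resource/1` endpoint; the URL and the form data are assumptions, not part of the example above:
```
import requests

# The proxy only ever sees a POST; the middleware rewrites the
# request method to PATCH before Flask routes the request.
resp = requests.post(
    'http://localhost:5000/resource/1',
    headers={'X-HTTP-Method-Override': 'PATCH'},
    data={'name': 'new name'},
)
print(resp.status_code)
```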
Various pieces of code can consume the request data and preprocess it. For instance JSON data ends up on the request object already read and processed, and form data ends up there as well but goes through a different code path. This seems inconvenient when you want to calculate the checksum of the incoming request data, which is sometimes necessary for some APIs.
Fortunately this is very simple to change by wrapping the input stream.
The following example calculates the SHA1 checksum of the incoming data as it gets read and stores it in the WSGI environment:
```
import hashlib
class ChecksumCalcStream(object):

    def __init__(self, stream):
        self._stream = stream
        self._hash = hashlib.sha1()

    def read(self, bytes):
        rv = self._stream.read(bytes)
        self._hash.update(rv)
        return rv

    def readline(self, size_hint):
        rv = self._stream.readline(size_hint)
        self._hash.update(rv)
        return rv

def generate_checksum(request):
    env = request.environ
    stream = ChecksumCalcStream(env['wsgi.input'])
    env['wsgi.input'] = stream
    return stream._hash
```
To use this, all you need to do is to hook the calculating stream in before the request starts consuming data. Be careful about accessing `request.form` or anything of that nature before then; `before_request_handlers`, for instance, should be careful not to access it.
Example usage:
```
@app.route('/special-api', methods=['POST'])
def special_api():
    hash = generate_checksum(request)
    # Accessing this parses the input stream
    files = request.files
    # At this point the hash is fully constructed.
    checksum = hash.hexdigest()
    return 'Hash was: %s' % checksum
```
Depending on what you have available, there are multiple ways to run Flask applications. You can use the built-in server during development, but you should use one of the full deployment options for production applications. (Do not use the built-in development server in production.) Several options are available and documented here.
If you have a different WSGI server, look up the server documentation about how to use a WSGI app with it. Just remember that your `Flask` application object is the actual WSGI application.
For hosted options to get up and running quickly, see Deploying to a Web Server in the Quickstart.
If you are using the Apache webserver, consider using mod_wsgi.
## Installing mod_wsgi¶
If you don't have mod_wsgi installed yet, you have to either install it using a package manager or compile it yourself. The mod_wsgi installation instructions cover source installations on UNIX systems.
If you are using Ubuntu/Debian you can install and activate it using apt-get as follows:
```
# apt-get install libapache2-mod-wsgi
```
On FreeBSD install mod_wsgi by compiling the www/mod_wsgi port or by using pkg-add:
```
# pkg_add -r mod_wsgi
```
If you are using pkgsrc you can install mod_wsgi by compiling the www/ap2-wsgi package.
If you encounter segfaulting child processes after the first apache reload you can safely ignore them. Just restart the server.
## Creating a .wsgi file¶
To run your application you need a yourapplication.wsgi file. This file contains the code mod_wsgi executes on startup to get the application object. The object called `application` in that file is then used as the application.
For most applications the following file should be sufficient:
```
from yourapplication import app as application
```
If you don't have a factory function for application creation but a singleton instance you can directly import that one as `application`.
Store that file somewhere you will find it again (e.g. /var/www/yourapplication) and make sure that `yourapplication` and all the libraries it uses are on the Python load path. If you don't want to install it system-wide, consider using a virtual python instance. Keep in mind that you will have to actually install your application into the virtualenv as well. Alternatively there is the option to patch the path in the .wsgi file before the imports:
```
import sys
sys.path.insert(0, '/path/to/the/application')
```
The last thing you have to do is to create an Apache configuration file for your application. In this example we are telling mod_wsgi to execute the application under a different user for security reasons:
```
<VirtualHost *>
ServerName example.com
WSGIDaemonProcess yourapplication user=user1 group=group1 threads=5
WSGIScriptAlias / /var/www/yourapplication/yourapplication.wsgi
<Directory /var/www/yourapplication>
WSGIProcessGroup yourapplication
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
```
Note: WSGIDaemonProcess isn't implemented in Windows and Apache will refuse to run with the above configuration. On a Windows system, eliminate those lines:
```
<VirtualHost *>
ServerName example.com
WSGIScriptAlias / C:\yourdir\yourapp.wsgi
<Directory C:\yourdir>
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
```
For more information consult the mod_wsgi wiki.
## Troubleshooting¶
If your application does not run, follow this guide to troubleshoot:
* Problem: application does not run, errorlog shows SystemExit ignored
You have an `app.run()` call in your application file that is not guarded by an `if __name__ == '__main__':` condition. Either remove that `run()` call from the file and move it into a separate run.py file, or put it into such an if block.
* Problem: application gives permission errors
Probably caused by your application running as the wrong user. Make sure the folders the application needs access to have the right privileges set and the application runs as the correct user (the `user` and `group` parameters of the WSGIDaemonProcess directive).
* Problem: application dies with an error on print
Keep in mind that mod_wsgi disallows doing anything with `sys.stdout` and `sys.stderr`. You can disable this protection by setting `WSGIRestrictStdout` to `off`: > WSGIRestrictStdout Off
Alternatively you can also replace the standard output in the .wsgi file with a different stream:
> import sys sys.stdout = sys.stderr
* Problem: accessing resources gives IO errors
Your application probably is a single .py file you symlinked into the site-packages folder. Please be aware that this does not work; instead you either have to put the folder where the file is stored onto the pythonpath, or convert your application into a package.
The reason for this is that for non-installed packages the module filename is used to locate the resources, and for symlinks the wrong filename is picked up.
## Support for Automatic Reloading¶
To help deployment tools you can activate support for automatic reloading. Whenever the .wsgi file changes, `mod_wsgi` will reload all the daemon processes for us.
For that, just add the following directive to your Directory section:
```
WSGIScriptReloading On
```
## Working with Virtual Environments¶
Virtual environments have the advantage that they never install the required dependencies system-wide, so you have better control over what is used where. If you want to use a virtual environment with mod_wsgi you have to modify your .wsgi file slightly.
Add the following lines to the top of your .wsgi file:
```
activate_this = '/path/to/env/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
```
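The `execfile` builtin only exists on Python 2. Assuming the virtualenv still ships an activate_this.py script, a rough Python 3 equivalent of the same idea would be:
```
activate_this = '/path/to/env/bin/activate_this.py'
# exec() replaces the Python 2-only execfile() call above.
with open(activate_this) as f:
    exec(f.read(), dict(__file__=activate_this))
```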
This sets up the load paths according to the settings of the virtual environment. Keep in mind that the path has to be absolute.
There are popular servers written in Python that contain WSGI applications and serve HTTP. These servers stand alone when they run; you can proxy to them from your web server. Note the Proxy Setups section below if you run into issues.
## Gunicorn¶
Gunicorn 'Green Unicorn' is a WSGI HTTP server for UNIX. It's a pre-fork worker model ported from Ruby's Unicorn project. It supports both eventlet and greenlet. Running a Flask application on this server is quite simple:
```
gunicorn myproject:app
```
Gunicorn provides many command-line options – see `gunicorn -h`. For example, to run a Flask application with 4 worker processes ( `-w 4` ) binding to localhost port 4000 ( `-b 127.0.0.1:4000` ):
```
gunicorn -w 4 -b 127.0.0.1:4000 myproject:app
```
## Tornado¶
Tornado is an open source version of the scalable, non-blocking web server and tools that power FriendFeed. Because it is non-blocking and uses epoll, it can handle thousands of simultaneous standing connections, which means it is ideal for real-time web services. Integrating it with Flask is straightforward:
```
from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from yourapplication import app
http_server = HTTPServer(WSGIContainer(app))
http_server.listen(5000)
IOLoop.instance().start()
```
## Gevent¶
Gevent is a coroutine-based Python networking library that uses greenlet to provide a high-level synchronous API on top of the libevent event loop:
```
from gevent.wsgi import WSGIServer
from yourapplication import app
http_server = WSGIServer(('', 5000), app)
http_server.serve_forever()
```
## Twisted Web¶
Twisted Web is the web server shipped with Twisted, a mature, non-blocking, event-driven networking library. Twisted Web comes with a standard WSGI container which can be controlled from the command line using the `twistd` utility:
```
twistd web --wsgi myproject.app
```
This example will run a Flask application called `app` from a module named `myproject`. Twisted Web supports many flags and options, and the `twistd` utility does as well; see `twistd -h` and `twistd web -h` for more information. For example, to run the `myproject` application with a Twisted Web server on port 8080:
```
twistd -n web --port 8080 --wsgi myproject.app
```
## Proxy Setups¶
If you deploy your application using one of these servers behind an HTTP proxy you will need to rewrite a few headers for the application to work. The two problematic values in the WSGI environment usually are REMOTE_ADDR and HTTP_HOST. You can configure your httpd to pass these headers, or you can fix them in middleware. Werkzeug ships a fixer that solves some common setups, but you might want to write your own WSGI middleware for specific setups.
Here's a simple nginx configuration which proxies to an application served on localhost at port 8000, setting appropriate headers:
```
server {
    listen 80;
    server_name _;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_redirect off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
If your httpd is not providing these headers, the most common setup takes the host from `X-Forwarded-Host` and the remote address from `X-Forwarded-For`:
```
from werkzeug.contrib.fixers import ProxyFix
app.wsgi_app = ProxyFix(app.wsgi_app)
```
Trusting Headers
Please keep in mind that using such a middleware in a non-proxy setup is a security issue because it will blindly trust the incoming headers, which might be forged by malicious clients.
If you want to rewrite the headers from another header, you might want to use a fixer like this:
```
class CustomProxyFix(object):

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        host = environ.get('HTTP_X_FHOST', '')
        if host:
            environ['HTTP_HOST'] = host
        return self.app(environ, start_response)

app.wsgi_app = CustomProxyFix(app.wsgi_app)
```
uWSGI is a deployment option on servers like nginx, lighttpd, and cherokee; see FastCGI and Standalone WSGI Containers for other options. To use your WSGI application with the uWSGI protocol you will need a uWSGI server first. uWSGI is both a protocol and an application server; the application server can serve the uWSGI, FastCGI, and HTTP protocols.
The most popular uWSGI server is uwsgi, which we will use for this guide. Make sure to have it installed to follow along.
## Starting your app with uwsgi¶
`uwsgi` is designed to operate on WSGI callables found in Python modules.
Given a Flask application in myapp.py, use the following command:
```
$ uwsgi -s /tmp/uwsgi.sock --module myapp --callable app
```
Or, if you prefer:
```
$ uwsgi -s /tmp/uwsgi.sock -w myapp:app
```
A basic flask uWSGI configuration for nginx looks like this:
```
location = /yourapplication { rewrite ^ /yourapplication/; }
location /yourapplication { try_files $uri @yourapplication; }
location @yourapplication {
include uwsgi_params;
uwsgi_param SCRIPT_NAME /yourapplication;
uwsgi_modifier1 30;
uwsgi_pass unix:/tmp/uwsgi.sock;
}
```
This configuration binds the application to `/yourapplication`. If you want to have it in the URL root it's a bit simpler because you don't have to tell it the WSGI `SCRIPT_NAME` or set the uwsgi modifier to make use of it:
```
location / { try_files $uri @yourapplication; }
location @yourapplication {
include uwsgi_params;
uwsgi_pass unix:/tmp/uwsgi.sock;
}
```
FastCGI is a deployment option on servers like nginx, lighttpd, and cherokee; see uWSGI and Standalone WSGI Containers for other options. To use your WSGI application with any of them you will need a FastCGI server first. The most popular one is flup, which we will use for this guide. Make sure to have it installed to follow along.
## Creating a .fcgi file¶
First you need to create the FastCGI server file. Let's call it yourapplication.fcgi:
```
#!/usr/bin/python
from flup.server.fcgi import WSGIServer
from yourapplication import app

if __name__ == '__main__':
    WSGIServer(app).run()
```
This is enough for Apache to work, however nginx and older versions of lighttpd need a socket to be explicitly passed to communicate with the FastCGI server. For that to work you need to pass the path to the socket to the `WSGIServer`:
```
WSGIServer(application, bindAddress='/path/to/fcgi.sock').run()
```
The path has to be exactly the same as defined in the server config.
Save the yourapplication.fcgi file somewhere you will find it again. Having it in /var/www/yourapplication or something similar is a good idea.
Make sure to set the executable bit on that file so that the server can execute it:
```
# chmod +x /var/www/yourapplication/yourapplication.fcgi
```
## Configuring Apache¶
The example above is good enough for a basic Apache deployment, but your .fcgi file will appear in your application URL, e.g. example.com/yourapplication.fcgi/news/. There are a few ways to configure your application so that yourapplication.fcgi does not appear in the URL. The preferred way is to use the ScriptAlias configuration directive:
```
<VirtualHost *>
    ServerName example.com
    ScriptAlias / /path/to/yourapplication.fcgi/
</VirtualHost>
```
If you cannot set ScriptAlias, for example on a shared web host, you can instead use WSGI middleware to remove yourapplication.fcgi from the URLs. Set .htaccess:
```
<IfModule mod_fcgid.c>
    AddHandler fcgid-script .fcgi
    <Files ~ (\.fcgi)>
        SetHandler fcgid-script
        Options +FollowSymLinks +ExecCGI
    </Files>
</IfModule>

<IfModule mod_rewrite.c>
    Options +FollowSymlinks
    RewriteEngine On
    RewriteBase /
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.*)$ yourapplication.fcgi/$1 [QSA,L]
</IfModule>
```
Set yourapplication.fcgi:
```
#!/usr/bin/python
#: optional path to your local python site-packages folder
import sys
sys.path.insert(0, '<your_local_path>/lib/python2.6/site-packages')

from flup.server.fcgi import WSGIServer
from yourapplication import app

class ScriptNameStripper(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        environ['SCRIPT_NAME'] = ''
        return self.app(environ, start_response)

app = ScriptNameStripper(app)

if __name__ == '__main__':
    WSGIServer(app).run()
```
## Configuring lighttpd¶
A basic FastCGI configuration for lighttpd looks like this:
```
fastcgi.server = ("/yourapplication.fcgi" =>
((
"socket" => "/tmp/yourapplication-fcgi.sock",
"bin-path" => "/var/www/yourapplication/yourapplication.fcgi",
"check-local" => "disable",
"max-procs" => 1
))
)
alias.url = (
"/static/" => "/path/to/your/static"
)
url.rewrite-once = (
    "^(/static($|/.*))$" => "$1",
    "^(/.*)$" => "/yourapplication.fcgi$1"
)
```
Remember to enable the FastCGI, alias and rewrite modules. This configuration binds the application to `/yourapplication`. If you want the application to work in the URL root you have to work around a lighttpd bug with the `LighttpdCGIRootFix` middleware from werkzeug.contrib.fixers.
Make sure to apply it only if you are mounting the application at the URL root. Also, see the lighttpd documentation for more information on FastCGI and Python (note that explicitly passing a socket to run() is no longer necessary).
## Configuring nginx¶
Installing FastCGI applications on nginx is a bit different because by default no FastCGI parameters are forwarded.
A basic Flask FastCGI configuration for nginx looks like this:
```
location = /yourapplication { rewrite ^ /yourapplication/ last; }
location /yourapplication { try_files $uri @yourapplication; }
location @yourapplication {
include fastcgi_params;
fastcgi_split_path_info ^(/yourapplication)(.*)$;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_pass unix:/tmp/yourapplication-fcgi.sock;
}
```
This configuration binds the application to `/yourapplication`. If you want to have it in the URL root it's a bit simpler because you don't have to figure out how to calculate `PATH_INFO` and `SCRIPT_NAME`:
```
location / { try_files $uri @yourapplication; }
location @yourapplication {
include fastcgi_params;
fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_param SCRIPT_NAME "";
fastcgi_pass unix:/tmp/yourapplication-fcgi.sock;
}
```
## Running FastCGI Processes¶
Since nginx and others do not load FastCGI apps, you have to do it by yourself. Supervisor can manage FastCGI processes. You can look around for other FastCGI process managers or write a script to run your .fcgi file at boot, e.g. using a SysV `init.d` script.
For a temporary solution, you can always run the `.fcgi` script inside GNU screen. See `man screen` for details, and note that this is a manual solution that does not persist across system restarts:
```
$ screen
$ /var/www/yourapplication/yourapplication.fcgi
```
## Debugging¶
FastCGI deployments tend to be hard to debug on most webservers. Very often the only thing the server log tells you is something along the lines of “premature end of headers”. In order to debug the application, the only thing that can really give you an idea of what is wrong is switching to the correct user and executing the application by hand.
This example assumes your application is called application.fcgi and that your webserver user is `www-data`:
```
$ su www-data
$ cd /var/www/yourapplication
$ python application.fcgi
Traceback (most recent call last):
File "yourapplication.fcgi", line 4, in <module>
ImportError: No module named yourapplication
```
In this case the error seems to be “yourapplication” not being on the Python path. Common problems are:
* Relative paths being used. Don't rely on the current working directory.
* Code depending on environment variables that are not set by the web server.
* Different Python interpreters being used.
If all other deployment methods do not work, CGI will work for sure. CGI is supported by all major servers but usually has sub-optimal performance.
This is also the way you can use a Flask application on Google's App Engine, where execution happens in a CGI-like environment.
## Creating a .cgi file¶
First you need to create the CGI application file. Let's call it yourapplication.cgi:
```
#!/usr/bin/python
from wsgiref.handlers import CGIHandler
from yourapplication import app

CGIHandler().run(app)
```
## Server Setup¶
Usually there are two ways to configure the server. Either just copy the .cgi into a cgi-bin (and use mod_rewrite or something similar to rewrite the URL), or let the server point to the file directly.
In Apache, for example, you can put something like this into the config:
```
ScriptAlias /app /path/to/the/application.cgi
```
On shared webhosting, though, you might not have access to your Apache config. In this case, a .htaccess file in the public directory from which you want your application served works too, but the ScriptAlias directive won't work in that case:
```
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f # Don't interfere with static files
RewriteRule ^(.*)$ /path/to/the/application.cgi/$1 [L]
```
For more information consult the documentation of your webserver.
Here are your options when growing your codebase or scaling your application.
## Read the Source.¶
Flask started in part to demonstrate how to build your own framework on top of existing well-used tools, Werkzeug (WSGI) and Jinja (templating), and as it developed, it became useful to a wide audience. As you grow your codebase, don't just use Flask – understand it. Read the source. Flask's code is written to be read; its documentation is published so you can use its internal APIs. Flask sticks to documented APIs in upstream libraries and documents its internal utilities so that you can find the hook points needed for your project.
## Hook. Extend.¶
The API docs are full of available overrides, hook points, and Signals. You can provide custom classes for things like the request and response objects. Dig deeper into the APIs you use, and look for the customizations which are available out of the box in a Flask release. Look for ways in which your project can be refactored into a collection of utilities and Flask extensions. Explore the many extensions in the community, and look for patterns to build your own extensions if you do not find the tools you need.
## Subclass.¶
The `Flask` class has many methods designed for subclassing. You can quickly add or customize behavior by subclassing `Flask` (see the linked method docs) and using that subclass wherever you instantiate an application class. This works well with Application Factories. See the sketch below for one way to do it.
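As a minimal sketch of the idea (the header name and the `/` view are made up for illustration), one could override `Flask.make_response` so that every response produced by the application carries a custom header:
```
from flask import Flask

class CustomHeaderFlask(Flask):
    # make_response() is one of the documented hook points; every
    # view return value is converted to a response object here.
    def make_response(self, rv):
        response = super(CustomHeaderFlask, self).make_response(rv)
        response.headers['X-Powered-By'] = 'my-app'
        return response

# Use the subclass wherever the application object is instantiated.
app = CustomHeaderFlask(__name__)

@app.route('/')
def index():
    return 'Hello World!'
```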
## Wrap with middleware.¶
The Application Dispatching chapter shows in detail how to apply middleware. You can introduce WSGI middleware to wrap your Flask instances and introduce fixes and changes at the layer between your Flask application and your HTTP server. Werkzeug includes several middlewares.
## Fork.¶
If none of the above options apply, fork Flask. The majority of Flask's code lives within Werkzeug and Jinja2; these libraries do most of the work. Flask is just the paste that glues them together. For every project there is a point where the underlying framework gets in the way (because of assumptions the original developers had). This is natural, because if it were not the case the framework would have been a very complex system from the start, causing a steep learning curve and a lot of developer frustration.
This is not unique to Flask. Many people use patched and modified versions of their framework to work around its shortcomings. This idea is also reflected in Flask's license: even if you decide to modify Flask, you don't have to contribute any of those changes back.
The downside of forking is of course that most Flask extensions will break, because the new framework has a different import name. Furthermore, integrating upstream changes can be a complex process, depending on the number of changes. Because of that, forking should be the very last resort.
## Scale like a pro.¶
For many web applications the complexity of the code is less of an issue than scaling for the number of users or data entries expected. Flask by itself is only limited, in terms of scaling, by your application code, the data store you want to use, and the Python implementation and webserver you are running on.
Scaling well means, for example, that if you double the number of servers you get roughly twice the performance. Scaling badly means that adding a new server makes the application perform no better, or that it does not even support a second server at all.
The only limiting factor regarding scaling in Flask is the context local proxies. They depend on context, which in Flask is defined as being either a thread, a process, or a greenlet. If your server uses some kind of concurrency that is not based on threads or greenlets, Flask will no longer be able to support these global proxies. However, the majority of servers use threads, greenlets, or separate processes to achieve concurrency, which are all methods well supported by the underlying Werkzeug library.
## Discuss with the community.¶
The Flask developers keep the framework accessible to users with codebases big and small. If you find an obstacle in your way caused by Flask, don't hesitate to contact the developers on the mailing list or IRC channel. The best way for the Flask and Flask extension developers to improve the tools for larger applications is getting feedback from users.
If you are curious why Flask does certain things the way it does and not differently, this section is for you. This should give you an idea about some of the design decisions that may appear arbitrary and surprising at first, especially in direct comparison with other frameworks.
## The Explicit Application Object¶
A Python web application based on WSGI has to have one central callable object that implements the actual application. In Flask this is an instance of the `Flask` class. Each Flask application has
to create an instance of this class itself and pass it the name of the
module, but why can’t Flask do that itself?
Without such an explicit application object the following code:
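For comparison, the explicit form referred to here is the familiar minimal Flask application:
```
from flask import Flask
app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello World!'
```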
Would look like this instead:
```
from hypothetical_flask import route
@route('/')
def index():
return 'Hello World!'
```
There are three major reasons for this. The most important one is that implicit application objects require that there may only be one instance at the time. There are ways to fake multiple applications with a single application object, like maintaining a stack of applications, but this causes some problems I won’t outline here in detail. Now the question is: when does a microframework need more than one application at the same time? A good example for this is unittesting. When you want to test something it can be very helpful to create a minimal application to test specific behavior. When the application object is deleted everything it allocated will be freed again.
Another thing that becomes possible when you have an explicit object lying around in your code is that you can subclass the base class ( `Flask` ) to alter specific behavior. This would not be
possible without hacks if the object were created ahead of time for you
based on a class that is not exposed to you. But there is another very important reason why Flask depends on an explicit instantiation of that class: the package name. Whenever you create a Flask instance you usually pass it __name__ as package name. Flask depends on that information to properly load resources relative to your module. With Python’s outstanding support for reflection it can then access the package to figure out where the templates and static files are stored (see `open_resource()` ). Now obviously there
are frameworks around that do not need any configuration and will still be
able to load templates relative to your application module. But they have
to use the current working directory for that, which is a very unreliable
way to determine where the application is. The current working directory
is process-wide and if you are running multiple applications in one
process (which could happen in a webserver without you knowing) the paths
will be off. Worse: many webservers do not set the working directory to
the directory of your application but to the document root which does not
have to be the same folder. The third reason is “explicit is better than implicit”. That object is your WSGI application, you don’t have to remember anything else. If you want to apply a WSGI middleware, just wrap it and you’re done (though there are better ways to do that so that you do not lose the reference to the application object `wsgi_app()` ).
Furthermore this design makes it possible to use a factory function to create the application, which is very helpful for unittesting and similar things (see Application Factories).
## The Routing System¶
Flask uses the Werkzeug routing system, which was designed to automatically order routes by complexity. This means that you can declare routes in arbitrary order and they will still work as expected. This is a requirement if you want to properly implement decorator-based routing, since decorators could be fired in undefined order when the application is split into multiple modules.
Another design decision with the Werkzeug routing system is that routes in Werkzeug try to ensure that URLs are unique. Werkzeug will go quite far with that in that it will automatically redirect to a canonical URL if a route is ambiguous.
## One Template Engine¶
Flask decides on one template engine: Jinja2. Why doesn’t Flask have a pluggable template engine interface? You can obviously use a different template engine, but Flask will still configure Jinja2 for you. While that limitation that Jinja2 is always configured will probably go away, the decision to bundle one template engine and use that will not.
Template engines are like programming languages and each of those engines has a certain understanding about how things work. On the surface they all work the same: you tell the engine to evaluate a template with a set of variables and take the return value as string.
But that’s about where similarities end. Jinja2 for example has an extensive filter system, a certain way to do template inheritance, support for reusable blocks (macros) that can be used from inside templates and also from Python code, uses Unicode for all operations, supports iterative template rendering, configurable syntax and more. On the other hand an engine like Genshi is based on XML stream evaluation, template inheritance by taking the availability of XPath into account and more. Mako on the other hand treats templates similar to Python modules.
When it comes to connecting a template engine with an application or framework there is more than just rendering templates. For instance, Flask uses Jinja2’s extensive autoescaping support. Also it provides ways to access macros from Jinja2 templates.
A template abstraction layer that would not take the unique features of the template engines away is a science on its own and a too large undertaking for a microframework like Flask.
Furthermore extensions can then easily depend on one template language being present. You can easily use your own templating language, but an extension could still depend on Jinja itself.
## Micro with Dependencies¶
Why does Flask call itself a microframework and yet it depends on two libraries (namely Werkzeug and Jinja2). Why shouldn’t it? If we look over to the Ruby side of web development there we have a protocol very similar to WSGI. Just that it’s called Rack there, but besides that it looks very much like a WSGI rendition for Ruby. But nearly all applications in Ruby land do not work with Rack directly, but on top of a library with the same name. This Rack library has two equivalents in Python: WebOb (formerly Paste) and Werkzeug. Paste is still around but from my understanding it’s sort of deprecated in favour of WebOb. The development of WebOb and Werkzeug started side by side with similar ideas in mind: be a good implementation of WSGI for other applications to take advantage.
Flask is a framework that takes advantage of the work already done by Werkzeug to properly interface WSGI (which can be a complex task at times). Thanks to recent developments in the Python package infrastructure, packages with dependencies are no longer an issue and there are very few reasons against having libraries that depend on others.
## Thread Locals¶
Flask uses thread local objects (context local objects in fact, they support greenlet contexts as well) for request, session and an extra object you can put your own things on ( `g` ). Why is that and
isn’t that a bad idea?
Yes it is usually not such a bright idea to use thread locals. They cause troubles for servers that are not based on the concept of threads and make large applications harder to maintain. However Flask is just not designed for large applications or asynchronous servers. Flask wants to make it quick and easy to write a traditional web application.
Also see the Becoming Big section of the documentation for some inspiration for larger applications based on Flask.
## What Flask is, What Flask is Not¶
Flask will never have a database layer. It will not have a form library or anything else in that direction. Flask itself just bridges to Werkzeug to implement a proper WSGI application and to Jinja2 to handle templating. It also binds to a few common standard library packages such as logging. Everything else is up for extensions.
Why is this the case? Because people have different preferences and requirements and Flask could not meet those if it would force any of this into the core. The majority of web applications will need a template engine in some sort. However not every application needs a SQL database.
The idea of Flask is to build a good foundation for all applications. Everything else is up to you or extensions.
The Flask documentation and example applications are using HTML5. You may notice that in many situations, when end tags are optional they are not used, so that the HTML is cleaner and faster to load. Because there is much confusion about HTML and XHTML among developers, this document tries to answer some of the major questions.
## History of XHTML¶
For a while, it appeared that HTML was about to be replaced by XHTML. However, barely any websites on the Internet are actual XHTML (which is HTML processed using XML rules). There are a couple of major reasons why this is the case. One of them is Internet Explorer’s lack of proper XHTML support. The XHTML spec states that XHTML must be served with the MIME type application/xhtml+xml, but Internet Explorer refuses to read files with that MIME type. While it is relatively easy to configure Web servers to serve XHTML properly, few people do. This is likely because properly using XHTML can be quite painful.
One of the most important causes of pain is XML’s draconian (strict and ruthless) error handling. When an XML parsing error is encountered, the browser is supposed to show the user an ugly error message, instead of attempting to recover from the error and display what it can. Most of the (X)HTML generation on the web is based on non-XML template engines (such as Jinja, the one used in Flask) which do not protect you from accidentally creating invalid XHTML. There are XML based template engines, such as Kid and the popular Genshi, but they often come with a larger runtime overhead and, are not as straightforward to use because they have to obey XML rules.
The majority of users, however, assumed they were properly using XHTML. They wrote an XHTML doctype at the top of the document and self-closed all the necessary tags ( `<br>` becomes `<br/>` or `<br></br>` in XHTML).
However, even if the document properly validates as XHTML, what really
determines XHTML/HTML processing in browsers is the MIME type, which as
said before is often not set properly. So the valid XHTML was being treated
as invalid HTML.
XHTML also changed the way JavaScript is used. To properly work with XHTML, programmers have to use the namespaced DOM interface with the XHTML namespace to query for HTML elements.
## History of HTML5¶
Development of the HTML5 specification was started in 2004 under the name “Web Applications 1.0” by the Web Hypertext Application Technology Working Group, or WHATWG (which was formed by the major browser vendors Apple, Mozilla, and Opera) with the goal of writing a new and improved HTML specification, based on existing browser behavior instead of unrealistic and backwards-incompatible specifications.
For example, in HTML4 `<title/Hello/` theoretically parses exactly the
same as `<title>Hello</title>` . However, since people were using
XHTML-like tags along the lines of `<link />` , browser vendors implemented
the XHTML syntax over the syntax defined by the specification.
In 2007, the specification was adopted as the basis of a new HTML specification under the umbrella of the W3C, known as HTML5. Currently, it appears that XHTML is losing traction, as the XHTML 2 working group has been disbanded and HTML5 is being implemented by all major browser vendors.
## HTML versus XHTML¶
The following table gives you a quick overview of features available in HTML 4.01, XHTML 1.1 and HTML5. (XHTML 1.0 is not included, as it was superseded by XHTML 1.1 and the barely-used XHTML5.)
| | HTML4.01 | XHTML1.1 | HTML5 |
| --- | --- | --- | --- |
| `<tag/value/` == `<tag>value</tag>` | Yes [1] | No | No |
| `<br/>` supported | No | Yes | Yes [2] |
| `<script/>` supported | No | Yes | No |
| should be served as text/html | Yes | No [3] | Yes |
| should be served as application/xhtml+xml | No | Yes | No |
| strict error handling | No | Yes | No |
| inline SVG | No | Yes | Yes |
| inline MathML | No | Yes | Yes |
| `<video>` tag | No | No | Yes |
| `<audio>` tag | No | No | Yes |
| New semantic tags like `<article>` | No | No | Yes |

[1] This is an obscure feature inherited from SGML. It is usually not supported by browsers, for reasons detailed above.
[2] This is for compatibility with server code that generates XHTML for tags such as `<br>`.
[3] XHTML 1.0 is the last XHTML standard that allows being served as text/html for backwards-compatibility reasons.
## What does “strict” mean?¶
HTML5 has strictly defined parsing rules, but it also specifies exactly how a browser should react to parsing errors - unlike XHTML, which simply states parsing should abort. Some people are confused by apparently invalid syntax that still generates the expected results (for example, missing end tags or unquoted attribute values).
Some of these work because of the lenient error handling most browsers use when they encounter a markup error, others are actually specified. The following constructs are optional in HTML5 by standard, but have to be supported by browsers:
* Wrapping the document in an
`<html>` tag * Wrapping header elements in
`<head>` or the body elements in `<body>` * Closing the
`<p>` , `<li>` , `<dt>` , `<dd>` , `<tr>` , `<td>` , `<th>` , `<tbody>` , `<thead>` , or `<tfoot>` tags. * Quoting attributes, so long as they contain no whitespace or special characters (like
`<` , `>` , `'` , or `"` ). * Requiring boolean attributes to have a value.
This means the following page in HTML5 is perfectly valid:
```
<!doctype html>
<title>Hello HTML5</title>
<div class=header>
<h1>Hello HTML5</h1>
<p class=tagline>HTML5 is awesome
</div>
<ul class=nav>
<li><a href=/index>Index</a>
<li><a href=/downloads>Downloads</a>
<li><a href=/about>About</a>
</ul>
<div class=body>
<h2>HTML5 is probably the future</h2>
<p>
There might be some other things around but in terms of
browser vendor support, HTML5 is hard to beat.
<dl>
<dt>Key 1
<dd>Value 1
<dt>Key 2
<dd>Value 2
</dl>
</div>
```
## New technologies in HTML5¶
HTML5 adds many new features that make Web applications easier to write and to use.
* The
`<audio>` and `<video>` tags provide a way to embed audio and video without complicated add-ons like QuickTime or Flash. * Semantic elements like
`<article>` , `<header>` , `<nav>` , and `<time>` that make content easier to understand. * The
`<canvas>` tag, which supports a powerful drawing API, reducing the need for server-generated images to present data graphically. * New form control types like
`<input type="date">` that allow user agents to make entering and validating values easier. * Advanced JavaScript APIs like Web Storage, Web Workers, Web Sockets, geolocation, and offline applications.
Many other features have been added, as well. A good guide to new features in HTML5 is Mark Pilgrim’s soon-to-be-published book, Dive Into HTML5. Not all of them are supported in browsers yet, however, so use caution.
## What should be used?¶
Currently, the answer is HTML5. There are very few reasons to use XHTML considering the latest developments in Web browsers. To summarize the reasons given above:
* Internet Explorer (which, sadly, currently leads in market share) has poor support for XHTML.
* Many JavaScript libraries also do not support XHTML, due to the more complicated namespacing API it requires.
* HTML5 adds several new features, including semantic tags and the long-awaited
`<audio>` and `<video>` tags. * It has the support of most browser vendors behind it.
* It is much easier to write, and more compact.
For most applications, it is undoubtedly better to use HTML5 than XHTML.
Web applications usually face all kinds of security problems and it’s very hard to get everything right. Flask tries to solve a few of these things for you, but there are a couple more you have to take care of yourself.
## Cross-Site Scripting (XSS)¶
Cross site scripting is the concept of injecting arbitrary HTML (and with it JavaScript) into the context of a website. To remedy this, developers have to properly escape text so that it cannot include arbitrary HTML tags. For more information on that have a look at the Wikipedia article on Cross-Site Scripting.
Flask configures Jinja2 to automatically escape all values unless explicitly told otherwise. This should rule out all XSS problems caused in templates, but there are still other places where you have to be careful:
* generating HTML without the help of Jinja2
* calling
`Markup` on data submitted by users * sending out HTML from uploaded files, never do that, use the Content-Disposition: attachment header to prevent that problem.
* sending out textfiles from uploaded files. Some browsers are using content-type guessing based on the first few bytes so users could trick a browser to execute HTML.
Another thing that is very important are unquoted attributes. While Jinja2 can protect you from XSS issues by escaping HTML, there is one thing it cannot protect you from: XSS by attribute injection. To counter this possible attack vector, be sure to always quote your attributes with either double or single quotes when using Jinja expressions in them:
```
<a href="{{ href }}">the text</a>
```
Why is this necessary? Because if you would not be doing that, an attacker could easily inject custom JavaScript handlers. For example an attacker could inject this piece of HTML+JavaScript:
```
onmouseover=alert(document.cookie)
```
When the user would then move with the mouse over the link, the cookie would be presented to the user in an alert window. But instead of showing the cookie to the user, a good attacker might also execute any other JavaScript code. In combination with CSS injections the attacker might even make the element fill out the entire page so that the user would just have to have the mouse anywhere on the page to trigger the attack.
## Cross-Site Request Forgery (CSRF)¶
Another big problem is CSRF. This is a very complex topic and I won't outline it here in detail, just mention what it is and how to theoretically prevent it.
If your authentication information is stored in cookies, you have implicit state management. The state of “being logged in” is controlled by a cookie, and that cookie is sent with each request to a page. Unfortunately that includes requests triggered by 3rd party sites. If you don’t keep that in mind, some people might be able to trick your application’s users with social engineering to do stupid things without them knowing.
Say you have a specific URL that, when you send POST requests to it, will delete a user's profile (say http://example.com/user/delete). If an attacker now creates a page that sends a POST request to that URL with some JavaScript, they just have to trick some users into loading that page, and their profiles will end up being deleted.
Imagine you were to run Facebook with millions of concurrent users and someone would send out links to images of little kittens. When users would go to that page, their profiles would get deleted while they are looking at images of fluffy cats.
How can you prevent that? Basically for each request that modifies content on the server you would have to either use a one-time token and store that in the cookie and also transmit it with the form data. After receiving the data on the server again, you would then have to compare the two tokens and ensure they are equal.
Why does Flask not do that for you? The ideal place for this to happen is the form validation framework, which does not exist in Flask.
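As a rough sketch of the token approach described above (not a mechanism built into Flask; the field name is made up, and in practice an extension such as Flask-WTF handles this for you), it could look like:
```
import uuid
from flask import Flask, session, request, abort

app = Flask(__name__)
app.secret_key = 'change-me'

@app.before_request
def csrf_protect():
    # Only state-modifying requests need the token check.
    if request.method == 'POST':
        token = session.get('_csrf_token')
        if not token or token != request.form.get('_csrf_token'):
            abort(400)

def generate_csrf_token():
    # Store a random token in the session (a signed cookie) once.
    if '_csrf_token' not in session:
        session['_csrf_token'] = uuid.uuid4().hex
    return session['_csrf_token']

# Expose the token to templates as {{ csrf_token() }} so forms can
# embed it in a hidden input field named _csrf_token.
app.jinja_env.globals['csrf_token'] = generate_csrf_token
```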
## JSON Security¶
ECMAScript 5 Changes
Starting with ECMAScript 5 the behavior of literals changed. Now they are not constructed with the constructor of `Array` and others, but
with the builtin constructor of `Array` which closes this particular
attack vector.
JSON itself is a high-level serialization format, so there is barely anything that could cause security problems, right? You can’t declare recursive structures that could cause problems and the only thing that could possibly break are very large responses that can cause some kind of denial of service at the receiver’s side.
However there is a catch. Due to how browsers work the CSRF issue comes up with JSON unfortunately. Fortunately there is also a weird part of the JavaScript specification that can be used to solve that problem easily and Flask is kinda doing that for you by preventing you from doing dangerous stuff. Unfortunately that protection is only there for `jsonify()` so you are still at risk when using other ways to
generate JSON.
So what is the issue and how to avoid it? The problem are arrays at top-level in JSON. Imagine you send the following data out in a JSON request. Say that’s exporting the names and email addresses of all your friends for a part of the user interface that is written in JavaScript. Not very uncommon:
```
[
{"username": "admin",
"email": "<EMAIL>@localhost"}
]
```
And it is doing that of course only as long as you are logged in and only for you. And it is doing that for all GET requests to a certain URL, say the URL for that request is
```
http://example.com/api/get_friends.json
```
.
So now what happens if a clever hacker is embedding this to his website and social engineers a victim to visiting his site:
```
<script type=text/javascript>
var captured = [];
var oldArray = Array;
function Array() {
var obj = this, id = 0, capture = function(value) {
obj.__defineSetter__(id++, capture);
if (value)
captured.push(value);
};
capture();
}
</script>
<script type=text/javascript
src=http://example.com/api/get_friends.json></script>
<script type=text/javascript>
Array = oldArray;
// now we have all the data in the captured array.
</script>
```
If you know a bit of JavaScript internals you might know that it’s possible to patch constructors and register callbacks for setters. An attacker can use this (like above) to get all the data you exported in your JSON file. The browser will totally ignore the `application/json` mimetype if `text/javascript` is defined as content type in the script
tag and evaluate that as JavaScript. Because top-level array elements are
allowed (albeit useless) and we hooked in our own constructor, after that
page loaded the data from the JSON response is in the captured array. Because it is a syntax error in JavaScript to have an object literal ( `{...}` ) toplevel an attacker could not just do a request to an
external URL with the script tag to load up the data. So what Flask does
is to only allow objects as toplevel elements when using `jsonify()` . Make sure to do the same when using an ordinary
JSON generate function.
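A short sketch of the safer shape: wrap the list in an object key rather than returning a bare top-level array. The endpoint, the `friends` key, and the sample data are made up for illustration:
```
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/get_friends.json')
def get_friends():
    friends = [{'username': 'admin', 'email': 'admin@example.com'}]
    # jsonify serializes keyword arguments into a dict, so the
    # top-level JSON element is an object, not a hijackable array.
    return jsonify(friends=friends)
```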
Flask like Jinja2 and Werkzeug is totally Unicode based when it comes to text. Not only these libraries, also the majority of web related Python libraries that deal with text. If you don’t know Unicode so far, you should probably read The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets. This part of the documentation just tries to cover the very basics so that you have a pleasant experience with Unicode related things.
## Automatic Conversion¶
Flask has a few assumptions about your application (which you can change of course) that give you basic and painless Unicode support:
* the encoding for text on your website is UTF-8
* internally you will always use Unicode exclusively for text except for literal strings with only ASCII character points.
* encoding and decoding happens whenever you are talking over a protocol that requires bytes to be transmitted.
So what does this mean to you?
HTTP is based on bytes. Not only the protocol, also the system used to address documents on servers (so called URIs or URLs). However HTML which is usually transmitted on top of HTTP supports a large variety of character sets and which ones are used, are transmitted in an HTTP header. To not make this too complex Flask just assumes that if you are sending Unicode out you want it to be UTF-8 encoded. Flask will do the encoding and setting of the appropriate headers for you.
The same is true if you are talking to databases with the help of SQLAlchemy or a similar ORM system. Some databases have a protocol that already transmits Unicode and if they do not, SQLAlchemy or your other ORM should take care of that.
## The Golden Rule¶
So the rule of thumb: if you are not dealing with binary data, work with Unicode. What does working with Unicode in Python 2.x mean?
* as long as you are using ASCII charpoints only (basically numbers, some special characters of latin letters without umlauts or anything fancy) you can use regular string literals (
`'Hello World'` ). * if you need anything else than ASCII in a string you have to mark this string as Unicode string by prefixing it with a lowercase u. (like
`u'Hänsel und Gretel'` ) * if you are using non-Unicode characters in your Python files you have to tell Python which encoding your file uses. Again, I recommend UTF-8 for this purpose. To tell the interpreter your encoding you can put the
```
# -*- coding: utf-8 -*-
```
into the first or second line of your Python source file. * Jinja is configured to decode the template files from UTF-8. So make sure to tell your editor to save the file as UTF-8 there as well.
## Encoding and Decoding Yourself¶
If you are talking with a filesystem or something that is not really based on Unicode you will have to ensure that you decode properly when working with Unicode interface. So for example if you want to load a file on the filesystem and embed it into a Jinja2 template you will have to decode it from the encoding of that file. Here the old problem that text files do not specify their encoding comes into play. So do yourself a favour and limit yourself to UTF-8 for text files as well.
Anyways. To load such a file with Unicode you can use the built-in `str.decode()` method:
```
def read_file(filename, charset='utf-8'):
    with open(filename, 'r') as f:
        return f.read().decode(charset)
```
To go from Unicode into a specific charset such as UTF-8 you can use the `unicode.encode()` method:
```
def write_file(filename, contents, charset='utf-8'):
    with open(filename, 'w') as f:
        f.write(contents.encode(charset))
```
## Configuring Editors¶
Most editors save as UTF-8 by default nowadays but in case your editor is not configured to do this you have to change it. Here some common ways to set your editor to store as UTF-8:
* Vim: put `set enc=utf-8` in your `.vimrc` file.
* Emacs: either use an encoding cookie or put this into your `.emacs` file:
> (prefer-coding-system 'utf-8)
> (setq default-buffer-file-coding-system 'utf-8)
* Notepad++:
  * Go to Settings -> Preferences ...
  * Select the “New Document/Default Directory” tab
  * Select “UTF-8 without BOM” as encoding
It is also recommended to use the Unix newline format, you can select it in the same panel but this is not a requirement.
Flask, being a microframework, often requires some repetitive steps to get a third party library working. Because very often these steps could be abstracted to support multiple projects the Flask Extension Registry was created.
If you want to create your own Flask extension for something that does not exist yet, this guide to extension development will help you get your extension running in no time and to feel like users would expect your extension to behave.
## Anatomy of an Extension¶
Extensions are all located in a package called `flask_something` where “something” is the name of the library you want to bridge. So for
example if you plan to add support for a library named simplexml to
Flask, you would name your extension’s package `flask_simplexml` .
The name of the actual extension (the human readable name) however would be something like “Flask-SimpleXML”. Make sure to include the name “Flask” somewhere in that name and that you check the capitalization. This is how users can then register dependencies to your extension in their setup.py files.
Flask sets up a redirect package called `flask.ext` where users
should import the extensions from. If you for instance have a package
called `flask_something` users would import it as `flask.ext.something` . This is done to transition from the old
namespace packages. See Extension Import Transition for more details.
But what do extensions themselves look like? An extension has to ensure that it works with multiple Flask application instances at once. This is a requirement because many people will use patterns like the Application Factories pattern to create their application as needed, to aid unittesting and to support multiple configurations. Because of that it is crucial that your extension supports that kind of behavior.
Most importantly the extension must be shipped with a setup.py file and registered on PyPI. Also the development checkout link should work so that people can easily install the development version into their virtualenv without having to download the library by hand.
Flask extensions must be licensed under a BSD, MIT or more liberal license to be able to be enlisted in the Flask Extension Registry. Keep in mind that the Flask Extension Registry is a moderated place and libraries will be reviewed upfront if they behave as required.
## “Hello Flaskext!”¶
So let’s get started with creating such a Flask extension. The extension we want to create here will provide very basic support for SQLite3.
First we create the following folder structure:
```
flask-sqlite3/
flask_sqlite3.py
LICENSE
README
```
Here’s the contents of the most important files:
### setup.py¶
The next file that is absolutely required is the setup.py file which is used to install your Flask extension. The following contents are something you can work with:
```
"""
Flask-SQLite3
-------------
This is the description for that library
"""
from setuptools import setup
setup(
name='Flask-SQLite3',
version='1.0',
url='http://example.com/flask-sqlite3/',
license='BSD',
author='<NAME>',
author_email='<EMAIL>',
description='Very short description',
long_description=__doc__,
py_modules=['flask_sqlite3'],
# if you would be using a package instead use packages instead
# of py_modules:
# packages=['flask_sqlite3'],
zip_safe=False,
include_package_data=True,
platforms='any',
install_requires=[
'Flask'
],
classifiers=[
'Environment :: Web Environment',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
'Topic :: Software Development :: Libraries :: Python Modules'
]
)
```
That’s a lot of code but you can really just copy/paste that from existing extensions and adapt.
### flask_sqlite3.py¶
Now this is where your extension code goes. But how exactly should such an extension look like? What are the best practices? Continue reading for some insight.
## Initializing Extensions¶
Many extensions will need some kind of initialization step. For example, consider an application that's currently connecting to SQLite like the documentation suggests (Using SQLite 3 with Flask). So how does the extension know the name of the application object?
Quite simple: you pass it to it.
There are two recommended ways for an extension to initialize:
initialization functions:
> If your extension is called helloworld you might have a function called
```
init_helloworld(app[, extra_args])
```
that initializes the extension for that application. It could attach before / after handlers etc.
classes:
> Classes work mostly like initialization functions but can later be used to further change the behavior. For an example look at how the OAuth extension works: there is an OAuth object that provides some helper functions like OAuth.remote_app to create a reference to a remote application that uses OAuth.
What to use depends on what you have in mind. For the SQLite 3 extension we will use the class-based approach because it will provide users with an object that handles opening and closing database connections.
What's important about classes is that they encourage being shared around at module level. In that case, the object itself must not under any circumstances store any application-specific state and must be shareable between different applications.
## The Extension Code¶
Here’s the contents of the flask_sqlite3.py for copy/paste:
```
import sqlite3
# Find the stack on which we want to store the database connection.
# Starting with Flask 0.9, the _app_ctx_stack is the correct one,
# before that we need to use the _request_ctx_stack.
try:
from flask import _app_ctx_stack as stack
except ImportError:
from flask import _request_ctx_stack as stack
class SQLite3(object):

    def __init__(self, app=None):
        if app is not None:
            self.app = app
            self.init_app(self.app)
        else:
            self.app = None

    def init_app(self, app):
        app.config.setdefault('SQLITE3_DATABASE', ':memory:')
        # Use the newstyle teardown_appcontext if it's available,
        # otherwise fall back to the request context
        if hasattr(app, 'teardown_appcontext'):
            app.teardown_appcontext(self.teardown)
        else:
            app.teardown_request(self.teardown)

    def connect(self):
        return sqlite3.connect(self.app.config['SQLITE3_DATABASE'])

    def teardown(self, exception):
        ctx = stack.top
        if hasattr(ctx, 'sqlite3_db'):
            ctx.sqlite3_db.close()

    @property
    def connection(self):
        ctx = stack.top
        if ctx is not None:
            if not hasattr(ctx, 'sqlite3_db'):
                ctx.sqlite3_db = self.connect()
            return ctx.sqlite3_db
```
So here’s what these lines of code do:
* The `__init__` method takes an optional app object and, if supplied, will call `init_app`.
* The `init_app` method exists so that the `SQLite3` object can be instantiated without requiring an app object. This method supports the factory pattern for creating applications. `init_app` will set the configuration for the database, defaulting to an in-memory database if no configuration is supplied. In addition, the `init_app` method attaches the `teardown` handler. It will try to use the newstyle app context handler and, if that does not exist, falls back to the request context one.
* Next, we define a `connect` method that opens a database connection.
* Finally, we add a `connection` property that on first access opens the database connection and stores it on the context. This is also the recommended way to handle resources: fetch resources lazily the first time they are used.

Note here that we’re attaching our database connection to the top application context via `_app_ctx_stack.top`. Extensions should use the top context for storing their own information with a sufficiently complex name. Note that we’re falling back to `_request_ctx_stack.top` if the application is using an older version of Flask that does not support it.
So why did we decide on a class-based approach here? Because using our extension looks something like this:
```
from flask import Flask
from flask_sqlite3 import SQLite3
app = Flask(__name__)
app.config.from_pyfile('the-config.cfg')
db = SQLite3(app)
```
You can then use the database from views like this:
```
@app.route('/')
def show_all():
cur = db.connection.cursor()
cur.execute(...)
```
Likewise if you are outside of a request but you are using Flask 0.9 or later with the app context support, you can use the database in the same way:
```
with app.app_context():
cur = db.connection.cursor()
cur.execute(...)
```
At the end of the with block the teardown handlers will be executed automatically.
Additionally, the `init_app` method is used to support the factory pattern
for creating apps:
```
db = SQLite3()
# Then later on.
app = create_app('the-config.cfg')
db.init_app(app)
```
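For reference, one hedged sketch of what such a factory might look like (the name `create_app` and the config file come from the snippet above; the body is illustrative and not part of the original example):

```
from flask import Flask

def create_app(config_filename):
    # Illustrative application factory: the app is created and configured
    # here, while extension objects such as `db` live at module level and
    # are bound afterwards via init_app().
    app = Flask(__name__)
    app.config.from_pyfile(config_filename)
    return app
```

With such a factory, the `db` object stays free of application state and can be bound to any number of applications.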
Keep in mind that supporting this factory pattern for creating apps is required for approved Flask extensions (described below).
Note on `init_app`: as you noticed, `init_app` does not assign `app` to `self`. This is intentional! Class-based Flask extensions must only store the application on the object when the application was passed to the constructor. This tells the extension: I am not interested in using multiple applications. When the extension needs to find the current application and it does not have a reference to it, it must either use the `current_app` context local or change the API in a way that you can pass the application explicitly.
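As a hedged illustration of that last point, a multi-application-friendly `connect()` could resolve the configuration through `current_app` instead of a stored reference (a sketch only, not part of the extension code above):

```
import sqlite3
from flask import current_app

class SQLite3(object):
    # Sketch: only connect() is shown; the rest matches the extension above.
    def connect(self):
        # Look up the database path on whichever application is currently
        # active instead of on a stored self.app.
        return sqlite3.connect(current_app.config['SQLITE3_DATABASE'])
```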
## Using _app_ctx_stack¶
In the example above, before every request, a `sqlite3_db` variable is
assigned to `_app_ctx_stack.top` . In a view function, this variable is
accessible using the `connection` property of `SQLite3` . During the
teardown of a request, the `sqlite3_db` connection is closed. By using
this pattern, the same connection to the sqlite3 database is accessible
to anything that needs it for the duration of the request. If the `_app_ctx_stack` does not exist because the user uses
an old version of Flask, it is recommended to fall back to `_request_ctx_stack` which is bound to a request.
## Teardown Behavior¶
This is only relevant if you want to support Flask 0.6 and older
Due to the change in Flask 0.7 regarding functions that are run at the end of the request your extension will have to be extra careful there if it wants to continue to support older versions of Flask. The following pattern is a good way to support both:
```
def close_connection(response):
ctx = _request_ctx_stack.top
ctx.sqlite3_db.close()
return response
if hasattr(app, 'teardown_request'):
app.teardown_request(close_connection)
else:
app.after_request(close_connection)
```
Strictly speaking the above code is wrong, because teardown functions are passed the exception and typically don’t return anything. However because the return value is discarded this will just work assuming that the code in between does not touch the passed parameter.
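A hedged variant that keeps the two signatures apart could look like this (a sketch only, assuming `app` is the application object as in the snippets above):

```
from flask import _request_ctx_stack

def close_connection(exception=None):
    # teardown_request handlers receive the exception (if any) and their
    # return value is ignored.
    ctx = _request_ctx_stack.top
    if hasattr(ctx, 'sqlite3_db'):
        ctx.sqlite3_db.close()

def close_connection_after_request(response):
    # after_request handlers receive the response and must return it.
    close_connection()
    return response

if hasattr(app, 'teardown_request'):
    app.teardown_request(close_connection)
else:
    app.after_request(close_connection_after_request)
```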
## Learn from Others¶
This documentation only touches the bare minimum for extension development. If you want to learn more, it’s a very good idea to check out existing extensions on the Flask Extension Registry. If you feel lost there is still the mailinglist and the IRC channel to get some ideas for nice looking APIs. Especially if you do something nobody before you did, it might be a very good idea to get some more input. This is not only to get an idea about what people might want from an extension, but also to avoid having multiple developers working on pretty much the same thing side by side.
Remember: good API design is hard, so introduce your project on the mailinglist, and let other developers give you a helping hand with designing the API.
The best Flask extensions are extensions that share common idioms for the API. And this can only work if collaboration happens early.
## Approved Extensions¶
Flask also has the concept of approved extensions. Approved extensions are tested as part of Flask itself to ensure extensions do not break on new releases. These approved extensions are listed on the Flask Extension Registry and marked appropriately. If you want your own extension to be approved you have to follow these guidelines:
* An approved Flask extension requires a maintainer. In the event an extension author would like to move beyond the project, the project should find a new maintainer including full source hosting transition and PyPI access. If no maintainer is available, give access to the Flask core team.
* An approved Flask extension must provide exactly one package or module named `flask_extensionname`. They might also reside inside a `flaskext` namespace package, though this is now discouraged.
* It must ship a testing suite that can either be invoked with `make test` or `python setup.py test`. For test suites invoked with `make test` the extension has to ensure that all dependencies for the test are installed automatically. If tests are invoked with `python setup.py test`, test dependencies can be specified in the setup.py file. The test suite also has to be part of the distribution.
* APIs of approved extensions will be checked for the following characteristics:
  * an approved extension has to support multiple applications running in the same Python process.
  * it must be possible to use the factory pattern for creating applications.
* The license must be BSD/MIT/WTFPL licensed.
* The naming scheme for official extensions is Flask-ExtensionName or ExtensionName-Flask.
* Approved extensions must define all their dependencies in the setup.py file unless a dependency cannot be met because it is not available on PyPI.
* The extension must have documentation that uses one of the two Flask themes for Sphinx documentation.
* The setup.py description (and thus the PyPI description) has to link to the documentation, website (if there is one) and there must be a link to automatically install the development version (`PackageName==dev`).
* The `zip_safe` flag in the setup script must be set to `False`, even if the extension would be safe for zipping.
* An extension currently has to support Python 2.5, 2.6 as well as Python 2.7.
## Extension Import Transition¶
For a while we recommended using namespace packages for Flask extensions. This turned out to be problematic in practice because many different competing namespace package systems exist and pip would automatically switch between different systems and this caused a lot of problems for users.
Instead we now recommend naming packages `flask_foo` instead of the now
deprecated `flaskext.foo` . Flask 0.8 introduces a redirect import
system that lets users import from `flask.ext.foo`: it will try `flask_foo` first and, if that fails, `flaskext.foo`. Flask extensions should urge users to import from `flask.ext.foo` instead of `flask_foo` or `flaskext.foo` so that extensions can transition to the new package name without affecting users.
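For example, a user of the `flask_sqlite3` module from earlier in this document would be urged to write (illustrative):

```
# Flask 0.8+ resolves this to flask_sqlite3 and falls back to
# flaskext.sqlite3 if the new-style module is not installed.
from flask.ext.sqlite3 import SQLite3
```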
The Pocoo styleguide is the styleguide for all Pocoo Projects, including Flask. This styleguide is a requirement for patches to Flask and a recommendation for Flask extensions.
In general the Pocoo Styleguide closely follows PEP 8 with some small differences and extensions.
## General Layout¶
* Indentation:
* 4 real spaces. No tabs, no exceptions.
* Maximum line length:
* 79 characters with a soft limit for 84 if absolutely necessary. Try to avoid too nested code by cleverly placing break, continue and return statements.
* Continuing long statements:

To continue a statement you can use backslashes in which case you should align the next line with the last dot or equal sign, or indent four spaces:

```
this_is_a_very_long(function_call, 'with many parameters') \
    .that_returns_an_object_with_an_attribute

MyModel.query.filter(MyModel.scalar > 120) \
             .order_by(MyModel.name.desc()) \
             .limit(10)
```

If you break in a statement with parentheses or braces, align to the braces:

```
this_is_a_very_long(function_call, 'with many parameters',
                    23, 42, 'and even more')
```

For lists or tuples with many items, break immediately after the opening brace:

```
items = [
    'this is the first', 'set of items', 'with more items',
    'to come in this line', 'like this'
]
```
* Blank lines:

Top level functions and classes are separated by two lines, everything else by one. Do not use too many blank lines to separate logical segments in code. Example:

```
def hello(name):
    print 'Hello %s!' % name


def goodbye(name):
    print 'See you %s.' % name


class MyClass(object):
    """This is a simple docstring"""

    def __init__(self, name):
        self.name = name

    def get_annoying_name(self):
        return self.name.upper() + '!!!!111'
```
## Expressions and Statements¶
* General whitespace rules:
  * No whitespace for unary operators that are not words (e.g.: `-`, `~` etc.) as well as on the inner side of parentheses.
  * Whitespace is placed between binary operators.

Good:

```
exp = -1.05
value = (item_value / item_count) * offset / exp
value = my_list[index]
value = my_dict['key']
```

Bad:

```
exp = - 1.05
value = ( item_value / item_count ) * offset / exp
value = (item_value/item_count)*offset/exp
value=( item_value/item_count ) * offset/exp
value = my_list[ index ]
value = my_dict ['key']
```
* Yoda statements are a no-go:

Never compare constant with variable, always variable with constant:

Good:

```
if method == 'md5':
    pass
```

Bad:

```
if 'md5' == method:
    pass
```
* Comparisons:
  * against arbitrary types: `==` and `!=`
  * against singletons with `is` and `is not` (eg: `foo is not None`)
  * never compare something with True or False (for example never do `foo == False`, do `not foo` instead)
* Negated containment checks:
  * use `foo not in bar` instead of `not foo in bar`
* Instance checks:
  * `isinstance(a, C)` instead of `type(A) is C`, but try to avoid instance checks in general. Check for features.
## Naming Conventions¶
* Class names: `CamelCase`, with acronyms kept uppercase (`HTTPWriter` and not `HttpWriter`)
* Variable names: `lowercase_with_underscores`
* Method and function names: `lowercase_with_underscores`
* Constants: `UPPERCASE_WITH_UNDERSCORES`
* precompiled regular expressions: `name_re`

Protected members are prefixed with a single underscore. Double underscores are reserved for mixin classes.

On classes with keywords, trailing underscores are appended. Clashes with builtins are allowed and must not be resolved by appending an underscore to the variable name. If the function needs to access a shadowed builtin, rebind the builtin to a different name instead.
* Function and method arguments:
  * class methods: `cls` as first parameter
  * instance methods: `self` as first parameter
  * lambdas for properties might have the first parameter replaced with `x` like in

```
display_name = property(lambda x: x.real_name or x.username)
```
## Docstrings¶
* Docstring conventions:

All docstrings are formatted with reStructuredText as understood by Sphinx. Depending on the number of lines in the docstring, they are laid out differently. If it’s just one line, the closing triple quote is on the same line as the opening, otherwise the text is on the same line as the opening quote and the triple quote that closes the string is on its own line:

```
def foo():
    """This is a simple docstring"""


def bar():
    """This is a longer docstring with so much information in there
    that it spans three lines.  In this case the closing triple quote
    is on its own line.
    """
```
* Module header:

The module header consists of a utf-8 encoding declaration (if non-ASCII letters are used, but it is recommended all the time) and a standard docstring:

```
# -*- coding: utf-8 -*-
"""
    package.module
    ~~~~~~~~~~~~~~

    A brief description goes here.

    :copyright: (c) YEAR by AUTHOR.
    :license: LICENSE_NAME, see LICENSE_FILE for more details.
"""
```
Please keep in mind that proper copyrights and license files are a requirement for approved Flask extensions.
## Comments¶
Rules for comments are similar to docstrings. Both are formatted with reStructuredText. If a comment is used to document an attribute, put a colon after the opening pound sign ( `#` ):
```
class User(object):
#: the name of the user as unicode string
name = Column(String)
#: the sha1 hash of the password + inline salt
pw_hash = Column(String)
```
Flask itself is changing like any software is changing over time. Most of the changes are the nice kind, the kind where you don’t have to change anything in your code to profit from a new release.
However every once in a while there are changes that do require some changes in your code or there are changes that make it possible for you to improve your own code quality by taking advantage of new features in Flask.
This section of the documentation enumerates all the changes in Flask from release to release and how you can change your code to have a painless updating experience.
If you want to use the easy_install command to upgrade your Flask installation, make sure to pass it the `-U` parameter:
```
$ easy_install -U Flask
```
## Version 0.10¶
The biggest change going from 0.9 to 0.10 is that the cookie serialization format changed from pickle to a specialized JSON format. This change has been done in order to avoid the damage an attacker can do if the secret key is leaked. When you upgrade you will notice two major changes: all sessions that were issued before the upgrade are invalidated and you can only store a limited amount of types in the session. The new sessions are by design much more restricted to only allow JSON with a few small extensions for tuples and strings with HTML markup.
In order to not break people’s sessions it is possible to continue using the old session system by using the Flask-OldSessions extension.
## Version 0.9¶
The behavior of returning tuples from a function was simplified. If you return a tuple it no longer defines the arguments for the response object you’re creating; it’s now always a tuple in the form `(response, status, headers)` where at least one item has to be provided. If you depend on
the old behavior, you can add it easily by subclassing Flask:
```
class TraditionalFlask(Flask):
def make_response(self, rv):
if isinstance(rv, tuple):
return self.response_class(*rv)
return Flask.make_response(self, rv)
```
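Code written against the new behavior simply returns the pieces of the response as a tuple; for example (a hedged sketch with an illustrative view, assuming `app` is your application object):

```
@app.route('/created')
def created():
    # Interpreted as (response, status, headers); trailing items
    # may be omitted.
    return 'Created', 201, {'X-Example': 'value'}
```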
If you maintain an extension that was using `_request_ctx_stack` before, please consider changing to `_app_ctx_stack` if it makes
sense for your extension. For instance, the app context stack makes sense for
extensions which connect to databases. Using the app context stack instead of
the request stack will make extensions more readily handle use cases outside of
requests.
## Version 0.8¶
Flask introduced a new session interface system. We also noticed that there was a naming collision between `flask.session`, the module that implements sessions, and `flask.session`, which is the global session object. With that introduction we moved the implementation details for
the session system into a new module called `flask.sessions` . If you
used the previously undocumented session support we urge you to upgrade. If invalid JSON data was submitted Flask will now raise a `BadRequest` exception instead of letting the
default `ValueError` bubble up. This has the advantage that you no
longer have to handle that error to avoid an internal server error showing
up for the user. If you were explicitly catching this as `ValueError` in the past, you will need to change this.
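A hedged sketch of that change (the endpoint and the fallback handling are illustrative only, assuming `app` is your application object):

```
from flask import request
from werkzeug.exceptions import BadRequest

@app.route('/api', methods=['POST'])
def api():
    try:
        payload = request.json  # now raises BadRequest on invalid JSON
    except BadRequest:
        payload = None
    return 'ok'
```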
Due to a bug in the test client Flask 0.7 did not trigger teardown handlers when the test client was used in a with statement. This has since been fixed but might require some changes in your test suites if you relied on this behavior.
## Version 0.7¶
In Flask 0.7 we cleaned up the code base internally a lot and did some backwards incompatible changes that make it easier to implement larger applications with Flask. Because we want to make upgrading as easy as possible we tried to counter the problems arising from these changes by providing a script that can ease the transition.
The script scans your whole application and generates an unified diff with changes it assumes are safe to apply. However as this is an automated tool it won’t be able to find all use cases and it might miss some. We internally spread a lot of deprecation warnings all over the place to make it easy to find pieces of code that it was unable to upgrade.
We strongly recommend that you hand review the generated patchfile and only apply the chunks that look good.
If you are using git as version control system for your project we recommend applying the patch with
```
patch -p1 < patchfile.diff
```
and then
using the interactive commit feature to only apply the chunks that look
good.
To apply the upgrade script do the following:

* Download the script: flask-07-upgrade.py
* Run it in the directory of your application:

```
python flask-07-upgrade.py > patchfile.diff
```

* Review the generated patchfile.
* Apply the patch:

```
patch -p1 < patchfile.diff
```

* If you were using per-module template folders you need to move some templates around. Previously if you had a folder named `templates` next to a blueprint named `admin` the implicit template path automatically was `admin/index.html` for a template file called `templates/index.html`. This no longer is the case. Now you need to name the template `templates/admin/index.html`. The tool will not detect this so you will have to do that on your own.

Please note that deprecation warnings are disabled by default starting with Python 2.7. In order to see the deprecation warnings that might be emitted you have to enable them with the `warnings` module.
If you are working with Windows and you lack the patch command line utility you can get it as part of various Unix runtime environments for Windows including cygwin, msysgit or mingw32. Also source control systems like svn, hg or git have builtin support for applying unified diffs as generated by the tool. Check the manual of your version control system for more information.
### Bug in Request Locals¶
Due to a bug in earlier implementations the request local proxies now raise a `RuntimeError` instead of an `AttributeError` when they
are unbound. If you caught these exceptions with `AttributeError` before, you should catch them with `RuntimeError` now. Additionally the `send_file()` function is now issuing
deprecation warnings if you depend on functionality that will be removed
in Flask 1.0. Previously it was possible to use etags and mimetypes
when file objects were passed. This was unreliable and caused issues
for a few setups. If you get a deprecation warning, make sure to
update your application to work with either filenames there or disable
etag attaching and attach them yourself.
Old code:
```
return send_file(my_file_object)
```
New code:
```
return send_file(my_file_object, add_etags=False)
```
### Upgrading to new Teardown Handling¶
We streamlined the behavior of the callbacks for request handling. For things that modify the response the `after_request()` decorators continue to work as expected, but for things that absolutely
must happen at the end of request we introduced the new `teardown_request()` decorator. Unfortunately that
change also made after-request work differently under error conditions.
It’s not consistently skipped if exceptions happen whereas previously it
might have been called twice to ensure it is executed at the end of the
request.
If you have database connection code that looks like this:
```
@app.after_request
def after_request(response):
g.db.close()
return response
```
You are now encouraged to use this instead:
```
@app.teardown_request
def after_request(exception):
if hasattr(g, 'db'):
g.db.close()
```
On the upside this change greatly improves the internal code flow and makes it easier to customize the dispatching and error handling. This makes it now a lot easier to write unit tests as you can prevent closing down of database connections for a while. You can take advantage of the fact that the teardown callbacks are called when the request context is removed from the stack so a test can query the database after request handling:
```
with app.test_client() as client:
    resp = client.get('/')
    # g.db is still bound if there is such a thing

# and here it's gone
```
### Manual Error Handler Attaching¶
While it is still possible to attach error handlers to `Flask.error_handlers` it’s discouraged to do so and in fact
deprecated. In general we no longer recommend custom error handler
attaching via assignments to the underlying dictionary due to the more
complex internal handling to support arbitrary exception classes and
blueprints. See `Flask.errorhandler()` for more information.
The proper upgrade is to change this:
```
app.error_handlers[403] = handle_error
```
Into this:
```
app.register_error_handler(403, handle_error)
```
Alternatively you should just attach the function with a decorator:
```
@app.errorhandler(403)
def handle_error(e):
...
```
(Note that `register_error_handler()` is new in Flask 0.7.)
### Blueprint Support¶
Blueprints replace the previous concept of “Modules” in Flask. They provide better semantics for various features and work better with large applications. The update script provided should be able to upgrade your applications automatically, but there might be some cases where it fails to upgrade. What changed?
* Blueprints need explicit names. Modules had an automatic name guessing scheme where the shortname for the module was taken from the last part of the import module. The upgrade script tries to guess that name but it might fail as this information could change at runtime.
* Blueprints have an inverse behavior for `url_for()`. Previously `.foo` told `url_for()` that it should look for the endpoint foo on the application. Now it means “relative to current module”. The script will inverse all calls to `url_for()` automatically for you. It will do this in a very eager way so you might end up with some unnecessary leading dots in your code if you’re not using modules.
* Blueprints do not automatically provide static folders. They will also no longer automatically export templates from a folder called templates next to their location, but this can be enabled from the constructor. Same with static files: if you want to continue serving static files you need to tell the constructor explicitly the path to the static folder (which can be relative to the blueprint’s module path). See the sketch after this list.
* Rendering templates was simplified. Now the blueprints can provide template folders which are added to a general template searchpath. This means that you need to add another subfolder with the blueprint’s name into that folder if you want `blueprintname/template.html` as the template name.
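A hedged sketch of the static/template folder and `url_for()` points above (blueprint and endpoint names are illustrative):

```
from flask import Blueprint

# Template and static folders must now be requested explicitly in the
# constructor (paths are relative to the blueprint's module):
admin = Blueprint('admin', __name__,
                  template_folder='templates',
                  static_folder='static')

# Inside a view, the url_for() semantics are inverted:
#   url_for('.index')       -> endpoint 'index' on the current blueprint
#   url_for('admin.index')  -> endpoint 'index' of the 'admin' blueprint
```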
If you continue to use the Module object which is deprecated, Flask will restore the previous behavior as good as possible. However we strongly recommend upgrading to the new blueprints as they provide a lot of useful improvement such as the ability to attach a blueprint multiple times, blueprint specific error handlers and a lot more.
## Version 0.6¶
Flask 0.6 comes with a backwards incompatible change which affects the order of after-request handlers. Previously they were called in the order of the registration, now they are called in reverse order. This change was made so that Flask behaves more like people expected it to work and how other systems handle request pre- and postprocessing. If you depend on the order of execution of post-request functions, be sure to change the order.
Another change that breaks backwards compatibility is that context processors will no longer override values passed directly to the template rendering function. If, for example, `request` is a variable passed directly to the template, the default context processor will not override it with the current request object. This makes it easier to extend context processors later to inject additional variables without breaking existing templates that do not expect them.
## Version 0.5¶
Flask 0.5 is the first release that comes as a Python package instead of a single module. There were a couple of internal refactorings, so if you depend on undocumented internal details you probably have to adapt the imports.
The following changes may be relevant to your application:
* autoescaping no longer happens for all templates. Instead it is configured to only happen on files ending with `.html`, `.htm`, `.xml` and `.xhtml`. If you have templates with different extensions you should override the `select_jinja_autoescape()` method (see the sketch after this list).
* Flask no longer supports zipped applications in this release. This functionality might come back in future releases if there is demand for this feature. Removing support for this makes the Flask internal code easier to understand and fixes a couple of small issues that make debugging harder than necessary.
* The create_jinja_loader function is gone. If you want to customize the Jinja loader now, use the `create_jinja_environment()` method instead.
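A hedged sketch of such an override (the extra `.jinja` extension is illustrative):

```
from flask import Flask

class MyFlask(Flask):
    def select_jinja_autoescape(self, filename):
        # Autoescape one additional extension; defer to the default
        # rules for everything else.
        if filename is not None and filename.endswith('.jinja'):
            return True
        return Flask.select_jinja_autoescape(self, filename)
```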
## Version 0.4¶
For application developers there are no changes that require changes in your code. In case you are developing on a Flask extension however, and that extension has a unittest-mode you might want to link the activation of that mode to the new `TESTING` flag.
## Version 0.3¶
Flask 0.3 introduces configuration support and logging as well as categories for flashing messages. All these are features that are 100% backwards compatible but you might want to take advantage of them.
### Configuration Support¶
The configuration support makes it easier to write any kind of application that requires some sort of configuration. (Which most likely is the case for any application out there).
If you previously had code like this:
```
app.debug = DEBUG
app.secret_key = SECRET_KEY
```
You no longer have to do that; instead you can just load a configuration into the config object. How this works is outlined in the Configuration Handling section of the documentation.
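A hedged sketch of the new style (the module and file names are illustrative):

```
from flask import Flask

app = Flask(__name__)
# Load defaults from an importable object/module ...
app.config.from_object('yourapplication.default_settings')
# ... and/or override them from a configuration file on disk:
app.config.from_pyfile('production.cfg')
```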
### Logging Integration¶
Flask now configures a logger for you with some basic and useful defaults. If you run your application in production and want to profit from automatic error logging, you might be interested in attaching a proper log handler. Also you can start logging warnings and errors into the logger when appropriate. For more information on that, read the documentation on logging application errors.
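For example, attaching a simple file handler to the preconfigured logger might look like this (a hedged sketch; the file name and message are illustrative):

```
import logging
from flask import Flask

app = Flask(__name__)
handler = logging.FileHandler('application.log')
handler.setLevel(logging.WARNING)
app.logger.addHandler(handler)

app.logger.warning('something looks off')  # recorded by the handler
```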
### Categories for Flash Messages¶
Flash messages can now have categories attached. This makes it possible to render errors, warnings or regular messages differently for example. This is an opt-in feature because it requires some rethinking in the code.
Read all about that in the Message Flashing pattern.
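A hedged sketch of opt-in categories (messages and category names are illustrative; `flash()` must be called inside a request):

```
from flask import flash

flash('Settings saved')                  # default category 'message'
flash('Invalid credentials', 'error')    # explicit category

# In the template, retrieve messages together with their category:
#   {% for category, message in get_flashed_messages(with_categories=true) %}
```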
Flask is licensed under a three clause BSD License. It basically means: do whatever you want with it as long as the copyright in Flask sticks around, the conditions are not modified and the disclaimer is present. Furthermore you must not use the names of the authors to promote derivatives of the software without written consent.
The full license text can be found below (Flask License). For the documentation and artwork different licenses apply.
## General License Definitions¶
The following section contains the full license texts for Flask and the documentation.
* “AUTHORS” hereby refers to all the authors listed in the Authors section.
* The “Flask License” applies to all the sourcecode shipped as part of Flask (Flask itself as well as the examples and the unittests) as well as documentation.
* The “Flask Artwork License” applies to the project’s Horn-Logo.
predictNMB | cran | R | Package ‘predictNMB’
June 3, 2023
Type Package
Title Evaluate Clinical Prediction Models by Net Monetary Benefit
Version 0.2.1
Description Estimates when and where a model-guided treatment strategy may
outperform a treat-all or treat-none approach by Monte Carlo simulation and
evaluation of the Net Monetary Benefit. Details can be viewed in
Parsons et al. (2023) <doi:10.21105/joss.05328>.
License GPL (>= 3)
Encoding UTF-8
RoxygenNote 7.2.3
Imports assertthat, cutpointr, dplyr, ggplot2, magrittr, pmsampsize,
rlang, scales, stats, tibble, tidyr
Suggests spelling, covr, flextable, knitr, parallel, pbapply,
rmarkdown, testthat (>= 3.0.0), vdiffr, withr
Config/testthat/edition 3
VignetteBuilder knitr
URL https://docs.ropensci.org/predictNMB/
BugReports https://github.com/ropensci/predictNMB/issues
Depends R (>= 3.5.0)
Language en-US
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-6053-8174>),
<NAME> [aut] (<https://orcid.org/0000-0002-3643-4332>),
<NAME> [aut] (<https://orcid.org/0000-0001-6339-0374>),
<NAME> [rev] (Emi Tanaka reviewed predictNMB for rOpenSci, see
<https://github.com/ropensci/software-review/issues/566>.),
<NAME> [rev] (T<NAME>ariyawasam reviewed predictNMB for
rOpenSci, see
<https://github.com/ropensci/software-review/issues/566>.),
<NAME> [ctb] (<https://orcid.org/0000-0001-9041-9531>),
<NAME> [ctb] (<https://orcid.org/0000-0002-1463-662X>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-06-03 07:40:02 UTC
R topics documented:
autoplot.predictNMBscree... 2
autoplot.predictNMBsi... 4
ce_plo... 6
ce_plot.predictNMBsi... 8
do_nmb_si... 9
evaluate_cutpoint_cos... 11
evaluate_cutpoint_nm... 12
evaluate_cutpoint_qaly... 13
get_inbuilt_cutpoin... 14
get_inbuilt_cutpoint_method... 14
get_nmb_sample... 15
get_sampl... 16
get_threshold... 17
print.predictNMBscree... 19
print.predictNMBsi... 19
screen_simulation_input... 20
summary.predictNMBscree... 22
summary.predictNMBsi... 23
theme_si... 25
autoplot.predictNMBscreen
Create plots from screened predictNMB simulations.
Description
Create plots from screened predictNMB simulations.
Usage
## S3 method for class 'predictNMBscreen'
autoplot(
object,
x_axis_var = NULL,
constants = list(),
what = c("nmb", "inb", "cutpoints", "qalys", "costs"),
inb_ref_col = NA,
plot_range = TRUE,
plot_conf_level = TRUE,
plot_line = TRUE,
plot_alpha = 0.5,
dodge_width = 0,
conf.level = 0.95,
methods_order = NULL,
rename_vector,
...
)
Arguments
object A predictNMBscreen object.
x_axis_var The desired screened factor to be displayed along the x axis. For example, if
the simulation screen was used with many values for event rate, this could be
"event_rate". Defaults to the first detected, varied input.
constants Named vector. If multiple inputs were screened in this object, this argument can
be used to modify the selected values for all those except the input that’s varying
along the x-axis. See the summarising methods vignette.
what What to summarise: one of "nmb", "inb", "cutpoints", "qalys" or "costs". De-
faults to "nmb".
inb_ref_col Which cutpoint method to use as the reference strategy when calculating the
incremental net monetary benefit. See do_nmb_sim for more information.
plot_range logical. Whether or not to plot the range of the distribution as a thin line.
Defaults to TRUE.
plot_conf_level
logical. Whether or not to plot the confidence region of the distribution as a
thicker line. Defaults to TRUE.
plot_line logical. Whether or not to connect the medians of the distributions for each
method along the x-axis. Defaults to TRUE.
plot_alpha Alpha value (transparency) of all plot elements. Defaults to 0.5.
dodge_width The dodge width of plot elements. Can be used to avoid excessive overlap be-
tween methods. Defaults to 0.
conf.level The confidence level of the interval. Defaults to 0.95 (coloured area of distribu-
tion represents 95% CIs).
methods_order The order (left to right) to display the cutpoint methods.
rename_vector A named vector for renaming the methods in the summary. The values of the
vector are the default names and the names given are the desired names in the
output.
... Additional (unused) arguments.
Details
This plot method works with predictNMBscreen objects that are created using screen_simulation_inputs().
Can be used to visualise distributions from many different simulations and assign a varying input to
the x-axis of the plot.
Value
Returns a ggplot object.
Examples
get_nmb <- function() c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
sim_screen_obj <- screen_simulation_inputs(
n_sims = 50, n_valid = 10000, sim_auc = seq(0.7, 0.9, 0.1),
event_rate = c(0.1, 0.2, 0.3),
fx_nmb_training = get_nmb, fx_nmb_evaluation = get_nmb,
cutpoint_methods = c("all", "none", "youden", "value_optimising")
)
autoplot(sim_screen_obj)
autoplot(
sim_screen_obj,
x_axis_var = "event_rate",
constants = c(sim_auc = 0.8),
dodge_width = 0.02,
rename_vector = c(
"Value-Optimising" = "value_optimising",
"Treat-None" = "none",
"Treat-All" = "all",
"Youden Index" = "youden"
)
)
autoplot.predictNMBsim
Create plots from predictNMB simulations.
Description
Create plots from predictNMB simulations.
Usage
## S3 method for class 'predictNMBsim'
autoplot(
object,
what = c("nmb", "inb", "cutpoints", "qalys", "costs"),
inb_ref_col = NA,
conf.level = 0.95,
methods_order = NULL,
n_bins = 40,
label_wrap_width = 12,
fill_cols = c("grey50", "#ADD8E6"),
median_line_size = 2,
median_line_alpha = 0.5,
median_line_col = "black",
rename_vector,
...
)
Arguments
object A predictNMBsim object.
what What to summarise: one of "nmb", "inb", "cutpoints", "qalys" or "costs". De-
faults to "nmb".
inb_ref_col Which cutpoint method to use as the reference strategy when calculating the
incremental net monetary benefit. See do_nmb_sim for more information.
conf.level The confidence level of the interval. Defaults to 0.95 (coloured area of distribu-
tion represents 95% CIs).
methods_order The order (left to right) to display the cutpoint methods.
n_bins The number of bins used when constructing histograms. Defaults to 40.
label_wrap_width
The number of characters in facet labels at which the label is wrapped. Default
is 12.
fill_cols Vector containing the colours used for fill aesthetic of histograms. The first
colour represents the area outside of the confidence region, second colour shows
the confidence region. Defaults to c("grey50", "#ADD8E6").
median_line_size
Size of line used to represent the median of distribution. Defaults to 2.
median_line_alpha
Alpha (transparency) for line used to represent the median of distribution. De-
faults to 0.5.
median_line_col
Colour of line used to represent the median of distribution. Defaults to "black".
rename_vector A named vector for renaming the methods in the summary. The values of the
vector are the default names and the names given are the desired names in the
output.
... Additional (unused) arguments.
Details
This plot method works with predictNMBsim objects that are created using do_nmb_sim(). Can be
used to visualise distributions from simulations for different cutpoint methods.
Value
Returns a ggplot object.
Examples
get_nmb <- function() c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
sim_obj <- do_nmb_sim(
sample_size = 200, n_sims = 50, n_valid = 10000, sim_auc = 0.7,
event_rate = 0.1, fx_nmb_training = get_nmb, fx_nmb_evaluation = get_nmb,
cutpoint_methods = c("all", "none", "youden", "value_optimising")
)
autoplot(
sim_obj,
rename_vector = c(
"Value- Optimising" = "value_optimising",
"Treat- None" = "none",
"Treat- All" = "all",
"Youden Index" = "youden"
)
) + theme_sim()
ce_plot Create a cost-effectiveness plot.
Description
Create a cost-effectiveness plot.
Usage
ce_plot(
object,
ref_col,
wtp,
show_wtp = TRUE,
methods_order = NULL,
rename_vector,
shape = 21,
wtp_linetype = "dashed",
add_prop_ce = FALSE,
...
)
Arguments
object A predictNMBsim object.
ref_col Which cutpoint method to use as the reference strategy when calculating the in-
cremental net monetary benefit. Often sensible to use a "all" or "none" approach
for this.
wtp A numeric. The willingness to pay (WTP) value used to create a WTP thresh-
old line on the plot (if show_wtp = TRUE). Defaults to the WTP stored in the
predictNMBsim object.
show_wtp A logical. Whether or not to show the willingness to pay threshold.
methods_order The order (within the legend) to display the cutpoint methods.
rename_vector A named vector for renaming the methods in the summary. The values of the
vector are the default names and the names given are the desired names in the
output.
shape The shape used for ggplot2::geom_point(). Defaults to 21 (hollow cir-
cles). If shape = "method" or shape = "cost-effective" (only applicable
when show_wtp = TRUE) , then the shape will be mapped to that aesthetic.
wtp_linetype The linetype used for ggplot2::geom_abline() when making the WTP. De-
faults to "dashed".
add_prop_ce Whether to append the proportion of simulations for that method which were
cost-effective (beneath the WTP threshold) to their labels in the legend. Only
applicable when show_wtp = TRUE.
... Additional (unused) arguments.
Details
This plot method works with predictNMBsim objects that are created using do_nmb_sim(). Can be
used to visualise the simulations on a cost-effectiveness plot (costs vs effectiveness)
Value
Returns a ggplot object.
Examples
get_nmb_evaluation <- get_nmb_sampler(
qalys_lost = function() rnorm(1, 0.33, 0.03),
wtp = 28000,
high_risk_group_treatment_effect = function() exp(rnorm(n = 1, mean = log(0.58), sd = 0.43)),
high_risk_group_treatment_cost = function() rnorm(n = 1, mean = 161, sd = 49)
)
sim_obj <- do_nmb_sim(
sample_size = 200, n_sims = 50, n_valid = 10000, sim_auc = 0.7,
event_rate = 0.1, fx_nmb_training = get_nmb_evaluation, fx_nmb_evaluation = get_nmb_evaluation
)
ce_plot(sim_obj, ref_col = "all")
ce_plot.predictNMBsim Create a cost-effectiveness plot.
Description
Create a cost-effectiveness plot.
Usage
## S3 method for class 'predictNMBsim'
ce_plot(
object,
ref_col,
wtp,
show_wtp = TRUE,
methods_order = NULL,
rename_vector,
shape = 21,
wtp_linetype = "dashed",
add_prop_ce = FALSE,
...
)
Arguments
object A predictNMBsim object.
ref_col Which cutpoint method to use as the reference strategy when calculating the in-
cremental net monetary benefit. Often sensible to use a "all" or "none" approach
for this.
wtp A numeric. The willingness to pay (WTP) value used to create a WTP thresh-
old line on the plot (if show_wtp = TRUE). Defaults to the WTP stored in the
predictNMBsim object.
show_wtp A logical. Whether or not to show the WTP threshold.
methods_order The order (within the legend) to display the cutpoint methods.
rename_vector A named vector for renaming the methods in the summary. The values of the
vector are the default names and the names given are the desired names in the
output.
shape The shape used for ggplot2::geom_point(). Defaults to 21 (hollow cir-
cles). If shape = "method" or shape = "cost-effective" (only applicable
when show_wtp = TRUE) , then the shape will be mapped to that aesthetic.
wtp_linetype The linetype used for ggplot2::geom_abline() when making the WTP. De-
faults to "dashed".
add_prop_ce Whether to append the proportion of simulations for that method which were
cost-effective (beneath the WTP threshold) to their labels in the legend. Only
applicable when show_wtp = TRUE.
... Additional (unused) arguments.
Details
This plot method works with predictNMBsim objects that are created using do_nmb_sim(). Can be
used to visualise the simulations on a cost-effectiveness plot (costs vs effectiveness)
Value
Returns a ggplot object.
Examples
get_nmb_evaluation <- get_nmb_sampler(
qalys_lost = function() rnorm(1, 0.33, 0.03),
wtp = 28000,
high_risk_group_treatment_effect = function() exp(rnorm(n = 1, mean = log(0.58), sd = 0.43)),
high_risk_group_treatment_cost = function() rnorm(n = 1, mean = 161, sd = 49)
)
sim_obj <- do_nmb_sim(
sample_size = 200, n_sims = 50, n_valid = 10000, sim_auc = 0.7,
event_rate = 0.1, fx_nmb_training = get_nmb_evaluation, fx_nmb_evaluation = get_nmb_evaluation
)
ce_plot(sim_obj, ref_col = "all")
do_nmb_sim Do the predictNMB simulation, evaluating the net monetary benefit
(NMB) of the simulated model.
Description
Do the predictNMB simulation, evaluating the net monetary benefit (NMB) of the simulated model.
Usage
do_nmb_sim(
sample_size,
n_sims,
n_valid,
sim_auc,
event_rate,
cutpoint_methods = get_inbuilt_cutpoint_methods(),
fx_nmb_training,
fx_nmb_evaluation,
meet_min_events = TRUE,
min_events = NA,
show_progress = FALSE,
cl = NULL
)
Arguments
sample_size Sample size of training set. If missing, a sample size calculation will be per-
formed and the calculated size will be used.
n_sims Number of simulations to run.
n_valid Sample size for evaluation set.
sim_auc Simulated model discrimination (AUC).
event_rate Simulated event rate of the binary outcome being predicted. Also known as
prevalence.
cutpoint_methods
A value or vector of cutpoint methods to include. Defaults to use the inbuilt
methods:
• "all" = treat all patients (cutpoint = 0)
• "none" = treat no patients (cutpoint = 1)
• "value_optimising" = select the cutpoint that maximises NMB
• "youden" = select cutpoint based on the Youden index, also known as the
J-index (sensitivity + specificity - 1)
• "cost_minimising" = select the cutpoint that minimises expected value of
costs
• "prod_sens_spec" = product of sensitivity and specificity (sensitivity * speci-
ficity)
• "roc01" = selects the closest threshold to the (0,1) point on the ROC curve
User-defined cutpoint methods can be used by passing the name of a function
that takes the following arguments:
• predicted (predicted probabilities)
• actual (the actual, binary outcome)
• nmb (a named vector containing NMB values assigned to each predicted
class (i.e. c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)))
See ?get_thresholds for an example of a user-defined cutpoint function.
fx_nmb_training
Function or NMBsampler that returns a named vector of NMB assigned to clas-
sifications used for obtaining cutpoint on training set.
fx_nmb_evaluation
Function or NMBsampler that returns a named vector of NMB assigned to clas-
sifications used for obtaining cutpoint on evaluation set.
meet_min_events
Whether or not to incrementally add samples until the expected number of events
(sample_size * event_rate) is met. (Applies to sampling of training data
only.)
min_events The minimum number of events to include in the training sample. If less than
this number are included in sample of size sample_size, additional samples are
added until the min_events is met. The default (NA) will use the expected value
given the event_rate and the sample_size.
show_progress Logical. Whether to display a progress bar. Requires the pbapply package.
cl A cluster made using parallel::makeCluster(). If a cluster is provided, the
simulation will be done in parallel.
Details
This function runs a simulation for a given set of inputs that represent a healthcare setting using
model-guided interventions.
The arguments fx_nmb_training and fx_nmb_evaluation should be functions that capture the
treatment being used, its costs and effectiveness, and the costs of the outcome being treated/prevented.
Both of these are functions that return a named vector of NMB values when called and are used
for obtaining and evaluating cutpoints, respectively. For example, the following function returns
the appropriately named vector.
get_nmb <- function() c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
There is a helper function, get_nmb_sampler(), to help you create these.
Value
Returns a predictNMBsim object.
Examples
get_nmb <- function() c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
do_nmb_sim(
sample_size = 200, n_sims = 50, n_valid = 10000, sim_auc = 0.7,
event_rate = 0.1, fx_nmb_training = get_nmb, fx_nmb_evaluation = get_nmb
)
evaluate_cutpoint_cost
Evaluates a cutpoint by returning the mean treatment cost per sample.
Description
Evaluates a cutpoint by returning the mean treatment cost per sample.
Usage
evaluate_cutpoint_cost(predicted, actual, pt, nmb)
Arguments
predicted A vector of predicted probabilities.
actual A vector of actual outcomes.
pt The probability threshold to be evaluated.
nmb A named vector containing NMB assigned to each classification and the treat-
ment costs.
Value
Returns a numeric value representing the mean cost for that cutpoint and data.
Examples
evaluate_cutpoint_cost(
predicted = runif(1000),
actual = sample(c(0, 1), size = 1000, replace = TRUE),
pt = 0.1,
nmb = c(
"qalys_lost" = 5,
"low_risk_group_treatment_cost" = 0,
"high_risk_group_treatment_cost" = 1,
"low_risk_group_treatment_effect" = 0,
"high_risk_group_treatment_effect" = 0.3,
"outcome_cost" = 10
)
)
evaluate_cutpoint_nmb Evaluates a cutpoint by returning the mean NMB per sample.
Description
Evaluates a cutpoint by returning the mean NMB per sample.
Usage
evaluate_cutpoint_nmb(predicted, actual, pt, nmb)
Arguments
predicted A vector of predicted probabilities.
actual A vector of actual outcomes.
pt The probability threshold to be evaluated.
nmb A named vector containing NMB assigned to each classification.
Value
Returns a numeric value representing the NMB for that cutpoint and data.
Examples
evaluate_cutpoint_nmb(
predicted = runif(1000),
actual = sample(c(0, 1), size = 1000, replace = TRUE),
pt = 0.1,
nmb = c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
)
evaluate_cutpoint_qalys
Evaluates a cutpoint by returning the mean QALYs lost per sample.
Description
Evaluates a cutpoint by returning the mean QALYs lost per sample.
Usage
evaluate_cutpoint_qalys(predicted, actual, pt, nmb)
Arguments
predicted A vector of predicted probabilities.
actual A vector of actual outcomes.
pt The probability threshold to be evaluated.
nmb A named vector containing NMB assigned to each classification and the treat-
ment effects and QALYS lost due to the event of interest.
Value
Returns a numeric value representing the mean QALYs for that cutpoint and data.
Examples
evaluate_cutpoint_qalys(
predicted = runif(1000),
actual = sample(c(0, 1), size = 1000, replace = TRUE),
pt = 0.1,
nmb = c(
"qalys_lost" = 5,
"low_risk_group_treatment_effect" = 0,
"high_risk_group_treatment_effect" = 0.5
)
)
get_inbuilt_cutpoint Get a cutpoint using the methods inbuilt to predictNMB
Description
Get a cutpoint using the methods inbuilt to predictNMB
Usage
get_inbuilt_cutpoint(predicted, actual, nmb, method)
Arguments
predicted A vector of predicted probabilities
actual A vector of actual outcomes
nmb A named vector containing NMB assigned to each classification
method A cutpoint selection method to be used. See get_inbuilt_cutpoint_methods() for the
methods that can be used as the method argument.
Value
Returns a selected cutpoint (numeric).
Examples
## get the list of available methods:
get_inbuilt_cutpoint_methods()
## get the cutpoint that maximises the Youden index for a given set of
## probabilities and outcomes
get_inbuilt_cutpoint(
predicted = runif(1000),
actual = sample(c(0, 1), size = 1000, replace = TRUE),
method = "youden"
)
get_inbuilt_cutpoint_methods
Get a vector of all the inbuilt cutpoint methods
Description
Get a vector of all the inbuilt cutpoint methods
Usage
get_inbuilt_cutpoint_methods()
Value
Returns a vector cutpoint methods that can be used in do_nmb_sim().
Examples
get_inbuilt_cutpoint_methods()
get_nmb_sampler Make a NMB sampler for use in do_nmb_sim() or
screen_simulation_inputs()
Description
Make a NMB sampler for use in do_nmb_sim() or screen_simulation_inputs()
Usage
get_nmb_sampler(
outcome_cost,
wtp,
qalys_lost,
high_risk_group_treatment_effect,
high_risk_group_treatment_cost,
low_risk_group_treatment_effect = 0,
low_risk_group_treatment_cost = 0,
use_expected_values = FALSE,
nboot = 10000
)
Arguments
outcome_cost The cost of the outcome. Must be provided if wtp and qalys_lost are not. Or
can be used in addition to these arguments to represent additional cost to the
health burden.
wtp Willingness-to-pay.
qalys_lost Quality-adjusted life years (QALYs) lost due to healthcare event being pre-
dicted.
high_risk_group_treatment_effect
The effect of the treatment provided to patients given high risk prediction. Can
be a number or a function. Provide a function to incorporate uncertainty.
high_risk_group_treatment_cost
The cost of the treatment provided to patients given high risk prediction. Can be
a number or a function. Provide a function to incorporate uncertainty.
low_risk_group_treatment_effect
The effect of the treatment provided to patients given low risk prediction. Can be
a number or a function. Provide a function to incorporate uncertainty. Defaults
to 0 (no treatment).
low_risk_group_treatment_cost
The cost of the treatment provided to patients given low risk prediction. Can be
a number or a function. Provide a function to incorporate uncertainty. Defaults
to 0 (no treatment).
use_expected_values
Logical. If TRUE, gets the mean of many samples from the produced function
and returns these every time. This is a sensible choice when using the resulting
function for selecting the cutpoint. See fx_nmb_training. Defaults to FALSE.
nboot The number of samples to use when creating a function that returns the expected
values. Defaults to 10000.
Value
Returns a NMBsampler object.
Examples
get_nmb_training <- get_nmb_sampler(
outcome_cost = 100,
high_risk_group_treatment_effect = function() rbeta(1, 1, 2),
high_risk_group_treatment_cost = 10,
use_expected_values = TRUE
)
get_nmb_evaluation <- get_nmb_sampler(
outcome_cost = 100,
high_risk_group_treatment_effect = function() rbeta(1, 1, 2),
high_risk_group_treatment_cost = 10
)
get_nmb_training()
get_nmb_training()
get_nmb_training()
get_nmb_evaluation()
get_nmb_evaluation()
get_nmb_evaluation()
get_sample Samples data for a prediction model with a specified AUC and preva-
lence.
Description
Samples data for a prediction model with a specified AUC and prevalence.
Usage
get_sample(auc, n_samples, prevalence, min_events = 0)
Arguments
auc The Area Under the (receiver operating characteristic) Curve.
n_samples Number of samples to draw.
prevalence Prevalence or event rate of the binary outcome as a proportion (0.1 = 10%).
min_events Minimum number of events required in the sample.
Value
Returns a data.frame.
Examples
get_sample(0.7, 1000, 0.1)
get_thresholds Gets probability thresholds given predicted probabilities, outcomes
and NMB.
Description
Gets probability thresholds given predicted probabilities, outcomes and NMB.
Usage
get_thresholds(predicted, actual, nmb, cutpoint_methods = NULL)
Arguments
predicted A vector of predicted probabilities.
actual A vector of actual outcomes.
nmb A named vector containing NMB assigned to true positives, true negatives, false
positives and false negatives
cutpoint_methods
Which cutpoint method(s) to return. The default (NULL) uses all the inbuilt
methods.
Value
Returns a list.
Examples
# get thresholds using default (all inbuilt) cutpoint methods
get_thresholds(
predicted = runif(1000),
actual = sample(c(0, 1), size = 1000, replace = TRUE),
nmb = c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
)
# get cutpoints using user-defined functions
# These functions must take the \code{predicted} and \code{actual}
# as arguments. They can also take \code{nmb} (named vector containing NMB
# with values for TP, FP, TN, FN).
fx_roc01 <- function(predicted, actual, ...) {
cutpointr::cutpointr(
x = predicted, class = actual, method = cutpointr::minimize_metric,
metric = cutpointr::roc01,
silent = TRUE
)[["optimal_cutpoint"]]
}
fx_sum_sens_spec <- function(predicted, actual, ...) {
cutpointr::cutpointr(
x = predicted, class = actual, method = cutpointr::maximize_metric,
metric = cutpointr::sum_sens_spec,
silent = TRUE
)[["optimal_cutpoint"]]
}
get_thresholds(
predicted = runif(1000),
actual = sample(c(0, 1), size = 1000, replace = TRUE),
cutpoint_methods = c("fx_roc01", "fx_sum_sens_spec"),
nmb = c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
)
# get a combination of cutpoints from both user-defined functions and
# inbuilt methods
get_thresholds(
predicted = runif(1000),
actual = sample(c(0, 1), size = 1000, replace = TRUE),
cutpoint_methods = c(
"fx_roc01",
"fx_sum_sens_spec",
"youden",
"all",
"none"
),
nmb = c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
)
print.predictNMBscreen
Print a summary of a predictNMBscreen object
Description
Print a summary of a predictNMBscreen object
Usage
## S3 method for class 'predictNMBscreen'
print(x, ...)
Arguments
x A predictNMBscreen object.
... Optional, ignored arguments.
Value
print(x) returns x invisibly.
Examples
get_nmb <- function() c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
sim_screen_obj <- screen_simulation_inputs(
n_sims = 50, n_valid = 10000, sim_auc = seq(0.7, 0.9, 0.1),
event_rate = 0.1,
fx_nmb_training = get_nmb, fx_nmb_evaluation = get_nmb
)
print(sim_screen_obj)
print.predictNMBsim Print a summary of a predictNMBsim object
Description
Print a summary of a predictNMBsim object
Usage
## S3 method for class 'predictNMBsim'
print(x, ...)
Arguments
x A predictNMBsim object.
... Optional, ignored arguments.
Value
print(x) returns x invisibly.
Examples
get_nmb <- function() c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
sim_obj <- do_nmb_sim(
sample_size = 200, n_sims = 50, n_valid = 10000, sim_auc = 0.7,
event_rate = 0.1, fx_nmb_training = get_nmb, fx_nmb_evaluation = get_nmb
)
print(sim_obj)
screen_simulation_inputs
Screen many simulation inputs: a parent function to do_nmb_sim()
Description
Runs do_nmb_sim() with a range of inputs.
Usage
screen_simulation_inputs(
sample_size,
n_sims,
n_valid,
sim_auc,
event_rate,
cutpoint_methods = get_inbuilt_cutpoint_methods(),
fx_nmb_training,
fx_nmb_evaluation,
pair_nmb_train_and_evaluation_functions = FALSE,
meet_min_events = TRUE,
min_events = NA,
show_progress = FALSE,
cl = NULL
)
Arguments
sample_size A value (or vector of values): Sample size of training set. If missing, a sample
size calculation will be performed and the calculated size will be used.
n_sims A value (or vector of values): Number of simulations to run.
n_valid A value (or vector of values): Sample size for evaluation set.
sim_auc A value (or vector of values): Simulated model discrimination (AUC).
event_rate A value (or vector of values): simulated event rate of the binary outcome being
predicted.
cutpoint_methods
cutpoint methods to include. Defaults to use the inbuilt methods. This doesn’t
change across calls to do_nmb_sim().
fx_nmb_training
A function or NMBsampler (or list of) that returns a named vector of NMB as-
signed to classifications, used for obtaining the cutpoint on the training set.
fx_nmb_evaluation
A function or NMBsampler (or list of) that returns a named vector of NMB as-
signed to classifications, used for obtaining the cutpoint on the evaluation set.
pair_nmb_train_and_evaluation_functions
logical. Whether or not to pair the lists of functions passed for fx_nmb_training
and fx_nmb_evaluation. If two treatment strategies are being used, it may
make more sense to pair these because selecting a value-optimising or cost-
minimising threshold using one strategy but evaluating another is likely un-
wanted.
meet_min_events
Whether or not to incrementally add samples until the expected number of events
(sample_size * event_rate) is met. (Applies to sampling of training data
only.)
min_events A value: the minimum number of events to include in the training sample. If
less than this number are included in sample of size sample_size, additional
samples are added until the min_events is met. The default (NA) will use the
expected value given the event_rate and the sample_size.
show_progress Logical. Whether to display a progress bar.
cl A cluster made using parallel::makeCluster(). If a cluster is provided, the
simulation will be done in parallel.
Value
Returns a predictNMBscreen object.
Examples
# Screen for optimal cutpoints given increasing values of
# model discrimination (sim_auc)
get_nmb <- function() c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
sim_screen_obj <- screen_simulation_inputs(
n_sims = 50, n_valid = 10000, sim_auc = seq(0.7, 0.9, 0.1),
event_rate = 0.1, fx_nmb_training = get_nmb, fx_nmb_evaluation = get_nmb
)
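The pairing behaviour described above can be sketched as follows; this is a minimal illustration
that is not part of the original examples, and the two NMB functions are hypothetical placeholders.
get_nmb_low <- function() c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
get_nmb_high <- function() c("TP" = -6, "TN" = 0, "FP" = -2, "FN" = -8)
paired_screen_obj <- screen_simulation_inputs(
n_sims = 50, n_valid = 10000, sim_auc = 0.8, event_rate = 0.1,
fx_nmb_training = list("low" = get_nmb_low, "high" = get_nmb_high),
fx_nmb_evaluation = list("low" = get_nmb_low, "high" = get_nmb_high),
pair_nmb_train_and_evaluation_functions = TRUE
)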
summary.predictNMBscreen
Create table summaries of predictNMBscreen objects.
Description
Create table summaries of predictNMBscreen objects.
Usage
## S3 method for class 'predictNMBscreen'
summary(
object,
what = c("nmb", "inb", "cutpoints"),
inb_ref_col = NULL,
agg_functions = list(median = function(x) {
round(stats::median(x), digits = 2)
}, `95% CI` = function(x) {
paste0(round(stats::quantile(x, probs = c(0.025,
0.975)), digits = 1), collapse = " to ")
}),
rename_vector,
show_full_inputs = FALSE,
...
)
Arguments
object A predictNMBscreen object.
what What to summarise: one of "nmb", "inb" or "cutpoints". Defaults to "nmb".
inb_ref_col Which cutpoint method to use as the reference strategy when calculating the
incremental net monetary benefit. See do_nmb_sim for more information.
agg_functions A named list of functions to use to aggregate the selected values. Defaults to the
median and 95% interval.
rename_vector A named vector for renaming the methods in the summary. The values of the
vector are the default names and the names given are the desired names in the
output.
show_full_inputs
A logical. Whether or not to include the inputs used for simulation alongside
aggregations.
... Additional, ignored arguments.
Details
Table summaries will be based on the what argument. Using "nmb" returns the simulated values
for NMB, with no reference group; "inb" returns the difference between simulated values for NMB
and a set strategy defined by inb_ref_col; "cutpoints" returns the cutpoints selected (0, 1).
Value
Returns a tibble.
Examples
# perform screen with increasing values of model discrimination (sim_auc)
get_nmb <- function() c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
sim_screen_obj <- screen_simulation_inputs(
n_sims = 50, n_valid = 10000, sim_auc = seq(0.7, 0.9, 0.1),
event_rate = 0.1, fx_nmb_training = get_nmb, fx_nmb_evaluation = get_nmb,
cutpoint_methods = c("all", "none", "youden", "value_optimising")
)
summary(
sim_screen_obj,
rename_vector = c(
"Value_Optimising" = "value_optimising",
"Treat_None" = "none",
"Treat_All" = "all",
"Youden_Index" = "youden"
)
)
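As a further sketch (not part of the original examples), the same screen can be summarised as
incremental net monetary benefit, here taking the treat-all strategy as the reference:
summary(
sim_screen_obj,
what = "inb",
inb_ref_col = "all"
)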
summary.predictNMBsim Create table summaries of predictNMBsim objects.
Description
Create table summaries of predictNMBsim objects.
Usage
## S3 method for class 'predictNMBsim'
summary(
object,
what = c("nmb", "inb", "cutpoints"),
inb_ref_col = NULL,
agg_functions = list(median = function(x) {
round(stats::median(x), digits = 2)
}, `95% CI` = function(x) {
paste0(round(stats::quantile(x, probs = c(0.025,
0.975)), digits = 1), collapse = " to ")
}),
rename_vector,
...
)
Arguments
object A predictNMBsim object.
what What to summarise: one of "nmb", "inb" or "cutpoints". Defaults to "nmb".
inb_ref_col Which cutpoint method to use as the reference strategy when calculating the
incremental net monetary benefit. See do_nmb_sim for more information.
agg_functions A named list of functions to use to aggregate the selected values. Defaults to the
median and 95% interval.
rename_vector A named vector for renaming the methods in the summary. The values of the
vector are the default names and the names given are the desired names in the
output.
... Additional, ignored arguments.
Details
Table summaries will be based on the what argument. Using "nmb" returns the simulated values
for NMB, with no reference group; "inb" returns the difference between simulated values for NMB
and a set strategy defined by inb_ref_col; "cutpoints" returns the cutpoints selected (0, 1).
Value
Returns a tibble.
Examples
# perform simulation with do_nmb_sim()
get_nmb <- function() c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
sim_obj <- do_nmb_sim(
sample_size = 200, n_sims = 50, n_valid = 10000, sim_auc = 0.7,
event_rate = 0.1, fx_nmb_training = get_nmb, fx_nmb_evaluation = get_nmb,
cutpoint_methods = c("all", "none", "youden", "value_optimising")
)
summary(
sim_obj,
rename_vector = c(
"Value_Optimising" = "value_optimising",
"Treat_None" = "none",
"Treat_All" = "all",
"Youden_Index" = "youden"
)
)
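A similar sketch (not in the original examples) summarises the selected cutpoints rather than the
NMB values:
summary(sim_obj, what = "cutpoints")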
theme_sim Returns a ggplot2 theme that reduces clutter in an autoplot() of a
predictNMBsim object.
Description
Returns a ggplot2 theme that reduces clutter in an autoplot() of a predictNMBsim object.
Usage
theme_sim()
Value
Returns a ggplot2 theme.
Examples
get_nmb <- function() c("TP" = -3, "TN" = 0, "FP" = -1, "FN" = -4)
sim_obj <- do_nmb_sim(
sample_size = 200, n_sims = 50, n_valid = 10000, sim_auc = 0.7,
event_rate = 0.1, fx_nmb_training = get_nmb, fx_nmb_evaluation = get_nmb
)
autoplot(sim_obj) + theme_sim() |
iconr | cran | R | Package ‘iconr’
October 13, 2022
Title Graphical and Spatial Analysis for Prehistoric Iconography
Version 0.1.0
Description Set of formal methods for studying archaeological iconographic datasets (rock-art,
pottery decoration, stelae, etc.) using network and spatial analysis (Alexander 2008
<doi:10.11588/propylaeumdok.00000512>; Huet 2018 <https://hal.archives-ouvertes.fr/hal-02913656>).
License GPL-2
Encoding UTF-8
LazyData true
Imports igraph, magick, rgdal, grDevices, graphics, utils
Suggests ggplot2, knitr, rmarkdown, dplyr, kableExtra, data.tree,
dendextend
VignetteBuilder knitr
RoxygenNote 7.1.1
URL https://zoometh.github.io/iconr/
BugReports https://github.com/zoometh/iconr/issues
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-1112-6122>),
<NAME> [aut] (<https://orcid.org/0000-0002-0759-3510>),
<NAME> [ctb] (<https://orcid.org/0000-0001-7539-6415>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2021-02-16 09:10:05 UTC
R topics documented:
contemp_nd... 2
labels_shado... 3
list_compa... 4
list_de... 6
named_element... 7
plot_compa... 8
plot_dec_grp... 10
read_ed... 13
read_nd... 14
same_element... 15
side_plo... 16
contemp_nds Select Contemporaneous Nodes
Description
Find the connected component, or subgraph, of contemporaneous nodes (connected by normal and
attribute edges) given a selected node, and remove the other components.
Usage
contemp_nds(nds.df, eds.df, selected.nd)
Arguments
nds.df Dataframe of the nodes as the one obtained by the function read_nds.
eds.df Dataframe of the edges as the one obtained by the function read_eds.
selected.nd The node of the decoration graph for which to extract the connected component.
It can be either the node order (numeric) or the node name/id (character).
Value
A named list of two dataframes: list(nodes, edges), collecting the contemporaneous nodes and
edges, respectively.
Examples
# Set data folder
dataDir <- system.file("extdata", package = "iconr")
# Read a decoration
nds.df <- read_nds(site = "Ibahernando",
decor = "Ibahernando",
dir = dataDir)
eds.df <- read_eds(site = "Ibahernando",
decor = "Ibahernando",
dir = dataDir)
# Extract the subgraph contemporaneous to the node 2
l_dec_df <- contemp_nds(nds.df, eds.df, selected.nd = 2)
## It returns a list of two dataframes, one for nodes and one for edges:
l_dec_df
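A follow-up sketch (not part of the original examples): the extracted component can be drawn over
the decoration with plot_dec_grph, assuming the Ibahernando drawing is available in the package's
example data folder.
imgs <- read.table(system.file("extdata", "imgs.tsv", package = "iconr"),
sep="\t", stringsAsFactors = FALSE)
plot_dec_grph(l_dec_df$nodes, l_dec_df$edges, imgs,
site = "Ibahernando", decor = "Ibahernando",
dir = dataDir)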
labels_shadow Plot Labels with Contrasting Shadow
Description
Plot labels (text) with a contrasting buffer to make them more visible when located on a similar
color background. This function is the shadowtext() function developed by <NAME>. It is called
by the plot functions plot_dec_grph and plot_compar.
Usage
labels_shadow(x, y = NULL, labels,
col = "black", bg = "white",
theta = seq(0, 2 * pi, length.out = 50),
r = 0.1,
cex = 1, ...)
Arguments
x, y Numeric vector of coordinates where the labels should be plotted. Alternatively,
a single argument x can be provided with the same syntax as in xy.coords.
labels Set of labels provided as a character vector.
col, bg Graphical parameters for the label color and background (buffer) color.
theta Angles for generating the buffer with possible anisotropy along one direction
(default is isotropic) and controlling buffer smoothness (angular resolution).
r Thickness of the buffer relative to the size of the used font, by default 0.1.
cex Size of the label, by default 1.
... Further graphical parameter accepted by text, such as pos, offset, or family.
Value
No return value. It creates a contrasting buffer to make labels more visible.
References
https://rdrr.io/cran/TeachingDemos/man/shadowtext.html
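Since this help page has no Examples section, a minimal sketch follows (not from the package
documentation); it assumes iconr is attached and simply overlays buffered labels on a few plotted points.
plot(1:5, 1:5, pch = 16, col = "darkgrey")
labels_shadow(1:5, 1:5, labels = paste0("node", 1:5),
col = "black", bg = "white", r = 0.15, pos = 3)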
list_compar Graph Pairwise Comparison on Common Elements
Description
nds_compar identifies common nodes in a pair of graphs.
eds_compar identifies common edges in a pair of graphs.
Given a list of graphs, list_compar extract all combinations of graph pairs and compare them on
common elements (nodes and edges).
Usage
nds_compar(grphs, nd.var = "type")
eds_compar(grphs, nd.var = "type")
list_compar(lgrph, nd.var = "type",
verbose = FALSE)
Arguments
grphs A list of two graphs (pair of graphs) to be compared.
lgrph A list of any number of graphs to be pairwise compared. The list can be typically
obtained with the function list_dec
nd.var An attribute of the graph nodes containing the node variable (ie, field) on which
the comparison will be done. By default nd.var = "type".
verbose Logical. If TRUE, the names of each graph pair combination are listed on the
screen. By default verbose = FALSE.
Details
list_compar() calls the functions: nds_compar() and eds_compar() which return respectively
the common nodes and the common edges of a graph pairwise.
Nodes are common when they have the same value for a given variable, for example horse, sword,
etc., for the variable type (nd.var = "type").
Edges are common when they have the same value for starting and ending nodes (horse, sword,
etc.) and the same type of edge ('=', '+', etc.). For example, a -=- b in graph 1 is equal to a -=- b
in graph 2, but not equal to a -+- b. Edges of type = (normal edges) are undirected, so that a -=- b
is equal to b -=- a. But edges of types + (attribute edges) or > (diachronic edges) are directed, so:
a ->- b is not equal to b ->- a.
If any of the graphs has multiple nodes/edges with the same value, they are considered to count for
as many coincidences as the smaller multiplicity. For instance, if there are 2 nodes with value epee in
graph 1, and 3 nodes with value epee in graph 2, their number of common nodes is min(2, 3) = 2.
Value
nds_compar() returns the input pair of graphs, each complemented with a new node attribute
named comm with value 1 for common nodes and 0 for non-common nodes.
eds_compar() returns the input pair of graphs, each complemented with a new edge attribute named
comm with value 1 for common edges and 0 for non-common edges.
list_compar() returns a list of all combinations of graph pairs. For each pair, both graphs are
complemented with the node attribute (comm) identifying common nodes and the edge attribute
(comm) identifying common edges. Each pair is also complemented with an attribute named nd.var
recording the compared node variable.
See Also
list_dec, plot_compar, same_elements
Examples
# Read data
imgs <- read.table(system.file("extdata", "imgs.tsv", package = "iconr"),
sep="\t",stringsAsFactors = FALSE)
nodes <- read.table(system.file("extdata", "nodes.tsv", package = "iconr"),
sep="\t",stringsAsFactors = FALSE)
edges <- read.table(system.file("extdata", "edges.tsv", package = "iconr"),
sep="\t",stringsAsFactors = FALSE)
# Generate list of graphs from the three data.frames
lgrph <- list_dec(imgs, nodes, edges)
# Generate list of all graph comparisons depending on the node "type" variable
g.compar <- list_compar(lgrph, nd.var = "type")
length(g.compar)
## Ten pairwise comparisons
# Inspect the second pairwise comparison of the list
g.compar[[2]]
## The two compared graphs with the name of the comparison variable
# Inspecting nodes:
igraph::as_data_frame(g.compar[[2]][[1]], "vertices")
## Vertices from the first decoration graph
igraph::as_data_frame(g.compar[[2]][[2]], "vertices")
## Vertices from the second decoration graph
# Inspecting edges:
igraph::as_data_frame(g.compar[[2]][[1]])
## Edges of the first decoration graph
igraph::as_data_frame(g.compar[[2]][[2]])
## Edges of the second decoration graph
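As a short follow-up sketch (not in the original examples), the comm attribute can be used directly
to count the elements shared by a pair:
# Common nodes and common edges of the first graph of the second comparison
sum(igraph::V(g.compar[[2]][[1]])$comm == 1)
sum(igraph::E(g.compar[[2]][[1]])$comm == 1)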
list_dec Create Decoration’s Graphs and Store them in a List
Description
Create undirected graphs for each decoration from nodes, edges and imgs dataframes and store the
graphs in a list. The join between these dataframes is done on the two fields site and decor. Graph
names refer to imgs$idf.
Usage
list_dec(imgs,
nodes,
edges)
Arguments
imgs Dataframe of decorations
nodes Dataframe of nodes
edges Dataframe of edges
Value
A list of igraph graphs.
See Also
graph_from_data_frame
Examples
# Read imgs, nodes and edges dataframes
imgs <- read.table(system.file("extdata", "imgs.csv", package = "iconr"),
sep=";", stringsAsFactors = FALSE)
nodes <- read.table(system.file("extdata", "nodes.csv", package = "iconr"),
sep=";", stringsAsFactors = FALSE)
edges <- read.table(system.file("extdata", "edges.csv", package = "iconr"),
sep=";", stringsAsFactors = FALSE)
# Create the list of graphs
lgrph <- list_dec(imgs, nodes, edges)
# Get the first graph
g <- lgrph[[1]]
g
# Graph name
g$name
# Graph label
g$lbl
# Graph number of nodes
igraph::gorder(g)
# Graph number of edges
igraph::gsize(g)
named_elements Textual Notation of Graph Elements
Description
Create a textual notation for nodes or edges.
Usage
named_elements(grph,
focus = "edges",
nd.var = "type",
disamb.marker = "#")
Arguments
grph A decoration graph (object of class igraph).
focus Textual notation of edges (focus = "edges") or nodes (focus = "nodes"). By
default focus = "edges".
nd.var The attribute of the graph nodes containing the node variable (ie, field) for the
textual annotation. By default nd.var = "type".
disamb.marker Marker used to disambiguate repeated elements. By default disamb.marker =
"#".
Details
Edges of type '=' (normal edges) are undirected, so that the order of their nodes is irrelevant and
they are presented in alphabetical order. Conversely, edges of types '+' (attribute edges) and '>'
(diachronic edges) are directed, so that the given order of nodes is preserved.
Repeated node or edge names are disambiguated by appending the symbol disamb.marker ('#'
by default) at the end of the second appearance (suffix). Subsequent appearances are marked by
additional disamb.markers.
Value
A character vector of named nodes or edges.
See Also
list_compar, same_elements
Examples
# Read data
imgs <- read.table(system.file("extdata", "imgs.tsv", package = "iconr"),
sep="\t", stringsAsFactors = FALSE)
nodes <- read.table(system.file("extdata", "nodes.tsv", package = "iconr"),
sep="\t", stringsAsFactors = FALSE)
edges <- read.table(system.file("extdata", "edges.tsv", package = "iconr"),
sep="\t", stringsAsFactors = FALSE)
# Generate list of graphs from the three data.frames
lgrph <- list_dec(imgs, nodes, edges)
# Textual notation of disambiguated edges
named_elements(lgrph[[2]], focus = "edges", nd.var="type")
# Textual notation of disambiguated nodes
named_elements(lgrph[[2]], focus = "nodes", nd.var="type")
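A short follow-up sketch (not in the original examples): the disambiguated notation makes it easy,
for instance, to list the edges shared by two decorations.
intersect(named_elements(lgrph[[2]], focus = "edges", nd.var = "type"),
named_elements(lgrph[[3]], focus = "edges", nd.var = "type"))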
plot_compar Plot and Save Comparison Figures Between Pairs of Graphs
Description
Given a list of pairwise graph comparisons, the function plots any given subset selected by graph
name, displaying side-by-side pairs of graphs and highlighting common nodes or common edges
with a choice of several graphical parameters.
Usage
plot_compar(listg, dec2comp = NULL, focus = "nodes",
dir = getwd(),
nd.color = c("orange", "red"), nd.size = c(0.5, 1),
ed.color = c("orange", "red"), ed.width = c(1, 2),
lbl.size = 0.5,
dir.out = dir, out.file.name = NULL,
img.format = NULL, res = 300)
Arguments
listg A list of graph pairwise comparisons as returned by list_compar.
dec2comp A vector with the names of the graphs for which comparisons are to be plotted.
The user can select to plot all pairwise combinations (by default), all combina-
tions of a subset, or a single pair.
focus Either "nodes" (default) or "edges". It selects the type of comparison to be
plotted, highlighting common nodes or common edges, respectively.
dir Data folder including the decoration images. By default the working directory.
nd.color, nd.size, ed.color, ed.width
Graphical parameters for color and size/widths of nodes and edges. Each of
them is a vector with two values for different and common nodes/edges, re-
spectively. If only one value is provided, this unique value is taken for both
different and common elements. Labels are displayed with the same color as
common nodes. For focus = "nodes" all edges are plotted with the first value
of ed.color and ed.width.
lbl.size Graphical parameter for the size of the labels with the node names. The default
is 0.5.
dir.out Folder for the output image. By default, it coincides with the input dir.
out.file.name Name of the output image, including path from current directory and extension.
By default the name is automatically generated including site, decor, nd.var,
and the extension from img.format.
If set, out.file.name overrides dir.out and img.format.
img.format, res
Format and resolution of the saved images. The handled formats are "png",
"bmp", "tiff"/"tif", "jpeg"/"jpg", and "pdf". The default resolution is 300
(ppi). The resolution does not apply to the pdf format.
If img.format=NULL (default), the plot is sent to the active device.
Details
To highlight common elements between a list of graphs, the user can focus on nodes (focus =
"nodes") or edges (focus = "edges"). As stated in the function list_compar, for a given com-
parison variable (eg. nd.var="type") if there is multiple nodes/edges with the same value, it is
considered to count for as many coincidences as the smaller multiplicity.
img.format=NULL (plot to the active device) does not make sense for more than one comparison.
Value
Generates graph decoration images for pairwise comparisons between two or more decorations,
comparing graph elements (nodes or edges).
If img.format=NULL, the plot is sent to the active device and no value is returned.
If img.format= "png" or "bmp" or "tiff"/"tif" or "jpeg"/"jpg" or "pdf", the return value is a
character vector with the dir/name of every saved image in the indicated format.
See Also
list_compar plot_dec_grph
Examples
# Read data
imgs <- read.table(system.file("extdata", "imgs.tsv", package = "iconr"),
sep="\t",stringsAsFactors = FALSE)
nodes <- read.table(system.file("extdata", "nodes.tsv", package = "iconr"),
sep="\t",stringsAsFactors = FALSE)
edges <- read.table(system.file("extdata", "edges.tsv", package = "iconr"),
sep="\t",stringsAsFactors = FALSE)
# Generate list of graphs from the three dataframes
lgrph <- list_dec(imgs, nodes, edges)
# Generate all pairwise comparisons of the graphs with respect to nodes "type"
g.compar <- list_compar(lgrph, nd.var="type")
# Generate the image showing the comparison on common nodes of graphs
# '1' and '4', save it in png format, and return its path.
dataDir <- system.file("extdata", package = "iconr")
outDir <- tempdir()
plot_compar(g.compar, c(1,4), focus = "nodes",
dir = dataDir,
dir.out = outDir,
img.format = "png")
# Generate the image showing the comparison on common edges of all pairwise
# combinations of graphs '1','3', and '4', save them in pdf format, and return
# their path.
# Plot nodes involved in non-common edges in orange and
# nodes involved in common edges and the corresponding labels in brown.
plot_compar(g.compar, c(1, 3, 4), focus = "edges",
dir = dataDir,
nd.color = c("orange", "brown"),
dir.out = outDir,
img.format = "pdf")
# Save the png image showing the comparison on common nodes of graphs
# '1' and '4'.
# Then read and plot the image.
img.filename <- plot_compar(g.compar, c(1, 4), focus = "nodes",
dir = dataDir,
dir.out = outDir,
img.format = "png")
plot(magick::image_read(img.filename))
# Plot directly on the active device (default) the comparison on common nodes
# of graphs '1' and '4'.
plot_compar(g.compar, c(1, 4), focus = "nodes",
dir = dataDir)
plot_dec_grph Plot a Graph on a Decoration
Description
Plot with nodes only, edges only, or both (geometric graph) over a decoration image.
Usage
plot_dec_grph(nodes = NULL,
edges = NULL,
imgs,
site,
decor,
dir = getwd(),
nd.var = 'id',
nd.color = 'orange',
nd.size = 0.5,
lbl.color = 'black',
lbl.size = 0.5,
ed.color = c("orange", "blue"),
ed.lwd = 1,
dir.out = dir,
out.file.name = NULL,
img.format = NULL,
res = 300)
Arguments
nodes Dataframe of nodes
edges Dataframe of edges
imgs Dataframe of decorations
site Name of the site
decor Name of the decoration
dir Data folder including the decoration images. By default the working directory.
nd.var Field name in the nodes data frame to be displayed as node labels. By default
the identifier nodes$id.
nd.color, nd.size, lbl.color, lbl.size, ed.color, ed.lwd
Graphical parameters for color and size/widths of nodes, edges, and labels.
ed.color is a vector with two values (the second value is used for diachronic
edges).
dir.out Folder for the output image. By default, it coincides with the input dir.
out.file.name Name of the output image, including path from current directory and extension.
By default the name is automatically generated including site, decor, nd.var,
and the extension from img.format.
If set, out.file.name overrides dir.out and img.format.
img.format, res
Format and resolution of the saved images. The handled formats are "png",
"bmp", "tiff"/"tif", "jpeg"/"jpg", and "pdf". The default resolution is 300
(ppi). The resolution does not apply to the pdf format.
If img.format=NULL (default), the plot is sent to the active device.
Details
Plot nodes only (if edges = NULL), edges only (if nodes = NULL), or both (graph) over a decoration
image.
Value
Generates graph decoration images with nodes, edges, or both, overlapping the decoration image.
If img.format=NULL, the plot is sent to the active device and no value is returned.
If img.format= "png" or "bmp" or "tiff"/"tif" or "jpeg"/"jpg" or "pdf", the return value is a
character vector with the dir/name of the saved image in the indicated format.
Examples
## Set data folder
dataDir <- system.file("extdata", package = "iconr")
## Decoration to be plotted
site <- "Brozas"
decor <- "Brozas"
## Read nodes, edges, and decorations
nds.df <- read_nds(site, decor, dataDir)
eds.df <- read_eds(site, decor, dataDir)
imgs <- read.table(paste0(dataDir, "/imgs.tsv"),
sep="\t", stringsAsFactors = FALSE)
## Plot 'Brozas' nodes and edges on the active device
## with node variable "type" as labels
plot_dec_grph(nds.df, eds.df, imgs,
site, decor,
dir = dataDir,
lbl.size = 0.4,
nd.var = "type")
## Save only edges of 'Brozas' with bigger widths and in image format jpg.
outDir <- tempdir()
img.filename <- plot_dec_grph(nodes = NULL, eds.df, imgs,
site, decor,
dir = dataDir,
ed.lwd = 2,
dir.out = outDir,
img.format = "jpg")
## Then read and plot the image.
a.dec <- magick::image_read(img.filename)
## Inspect the output image
magick::image_info(a.dec)
## Plot the output image
plot(a.dec)
read_eds Read Edges of a Decoration
Description
Read edges’ information from a file including all edges and extract edges of one decoration.
Accepted formats are tab separated values (’tsv’), semicolon separated values (’csv’), or shapefile
(’shp’).
Usage
read_eds(site,
decor,
dir = getwd(),
edges = "edges",
nodes = "nodes",
format = "tsv")
Arguments
site Name of the site.
decor Name of the decoration.
dir Path to the working folder, by default it is the working directory.
edges Name of the edges file (a dataframe or a shapefile).
nodes Name of the nodes file (a dataframe or a shapefile).
format File extension indicating a file format from ’tsv’ (tab separated values), ’csv’
(semicolon separated values) or ’shp’ (shapefile). For ’tsv’ and ’csv’ the coordi-
nates of the edges will be calculated from the same decoration’s node dataframe.
Details
Subset the dataframe of edges depending on ’site’ and ’decor’.
Value
Dataframe of graph edges, including at least the columns "site", "decor", "a", "b", "xa", "ya", "xb",
"yb", with values for each edge (row).
Examples
# Set data folder
dataDir <- system.file("extdata", package = "iconr")
# Read .tsv file
eds.df <- read_eds(site = "Cerro Muriano", decor = "Cerro Muriano 1",
dir = dataDir, edges = "edges", format = "tsv")
eds.df
## Dataframe of edges
# Read shapefile
eds.df <- read_eds(site = "Cerro Muriano", decor = "Cerro Muriano 1",
dir = dataDir, edges = "edges", format = "shp")
eds.df
## Dataframe of edges
read_nds Read Nodes of a Decoration
Description
Read nodes’ information from a file including all nodes and extract nodes of one decoration.
Accepted formats are tab separated values (’tsv’), semicolon separated values (’csv’), or shapefile
(’shp’).
Usage
read_nds(site,
decor,
dir = getwd(),
nodes = "nodes",
format = "tsv")
Arguments
site Name of the site
decor Name of the decoration
dir Path to the working folder, by default it is the working directory
nodes Name of the nodes file (a dataframe or a shapefile)
format File extension indicating a file format from ’tsv’ (tab separated values), ’csv’
(semicolon separated values) or ’shp’ (shapefile). For ’tsv’ and ’csv’ the files
must include node coordinates (nodes$x, nodes$y).
Value
Dataframe of graph nodes, including at least the columns "site", "decor", "id", "x", "y", with values
for each node (row).
Examples
# Set data folder
dataDir <- system.file("extdata", package = "iconr")
# Read dataframe of nodes
nds.df <- read_nds(site = "Cerro Muriano", decor = "Cerro Muriano 1",
dir = dataDir, format = "tsv")
nds.df
## Dataframe of nodes
# Read shapefile of nodes
nds.df <- read_nds(site = "Cerro Muriano", decor = "Cerro Muriano 1",
dir = dataDir, format = "shp")
nds.df
## Dataframe of nodes
same_elements Number of Equal Elements Between Each Decoration Pair
Description
Create the (symmetric) dataframe with the count of common nodes or common edges (see list_compar
for comparison criteria) for each pair of decorations (graphs) from a list. The diagonal of the
symmetric dataframe is filled with counts of nodes/edges for each decoration.
Usage
same_elements(lgrph, nd.var = "type",
focus = "nodes")
Arguments
lgrph A list of any number of graphs to be pairwise compared. The list can be typically
obtained with the function list_dec
nd.var An attribute of the graph vertices containing the node variable (ie, field) on
which the comparison will be done. By default nd.var = "type".
focus Either "nodes" (default) or "edges" to select the type of elements to be com-
pared for the count.
Value
A symmetric matrix with the counts of the pairwise coincidences of nodes or edges. The matrix has
as row and column names the names of the corresponding graphs in the input list.
See Also
list_dec, list_compar, plot_compar
Examples
# read imgs, nodes and edges dataframes
imgs <- read.table(system.file("extdata", "imgs.tsv", package = "iconr"),
sep="\t",stringsAsFactors = FALSE)
nodes <- read.table(system.file("extdata", "nodes.tsv", package = "iconr"),
sep="\t",stringsAsFactors = FALSE)
edges <- read.table(system.file("extdata", "edges.tsv", package = "iconr"),
sep="\t",stringsAsFactors = FALSE)
lgrph <- list_dec(imgs,nodes,edges)
# Counting same nodes
df.same_nodes <- same_elements(lgrph, nd.var = "type",
focus = "nodes")
df.same_nodes
## a symmetric matrix of nodes comparisons
# same edges
df.same_edges <- same_elements(lgrph, nd.var = "type",
focus = "edges")
df.same_edges
## a symmetric matrix of edges comparisons
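A follow-up sketch (not part of the original examples): one simple way to explore these counts is to
turn them into dissimilarities and cluster the decorations; the transformation below is only an
illustration.
diss <- max(df.same_edges) - df.same_edges
plot(hclust(as.dist(diss)), main = "Decorations grouped by shared edges")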
side_plot Plot Two Figures Side-by-Side Identifying Common Elements
Description
Plot two decoration graphs side-by-side identifying common nodes and common edges. This func-
tion is called by the function plot_compar.
Usage
side_plot(grph, dir, nd.var, focus = "nodes",
nd.color = c("orange", "red"),
nd.size = c(0.5, 1),
ed.color = c("orange", "red"),
ed.width = c(1, 2),
lbl.size = 0.5)
Arguments
grph List of two or more ’igraph’ graphs created with the list_compar function.
dir Working directory which contains the imgs, nodes, edges dataframes and the
decoration images.
nd.var Field of nodes on which the comparison will be done.
focus Focus on nodes or on edges, by default focus = "nodes".
nd.color, nd.size, ed.color, ed.width
Graphical parameters for the nodes and edges. The different nodes/edges will
be displayed with the first values of the vectors (eg, "orange") while the common
nodes/edges will be displayed with the second values of the vectors (eg, "red").
lbl.size Size of the labels
Value
No return value; the two decoration graphs are plotted side-by-side.
See Also
plot_compar |
smartsdk-recipes | readthedoc | JSON | This page explains some mechanisms that are common for the management of services deployed with the recipes.
## Multisite Deployment
If you happen to have your cluster with nodes distributed in specific areas (A1, A2, ..., AN) and you would like for the deployment of the replicas of your service S to happen only in specific areas, you can achieve this by using placement constraints.
For example, imagine you have a cluster composed of VMS in Málaga, Madrid and Zurich, but you want the deployment of the replicas of your database to stay only within Spain boundaries due to legal regulations. (DISCLAIMER: This is just a simplification, you should always inform yourself on how to properly comply with data protection regulations.)
First, you need to define a labeling for the nodes of your cluster where you want your deployment to happen. In this case you can label nodes by region. Connect to any of the swarm manager nodes and execute the following commands.
```
docker node update --label-add region=ES malaga-0
docker node update --label-add region=ES madrid-0
docker node update --label-add region=ES madrid-1
docker node update --label-add region=CH zurich-0
docker node update --label-add region=CH zurich-1
```
When you are about to deploy your database, you will have to add a constraint to the definition of the service. This means you will have to edit the recipe before deploying. For instance, in the case of MongoDB, it should have the `deploy` part looking something like this:
```
mongo:
...
deploy:
placement:
constraints:
- node.labels.region == ES
```
This was just a simple example. You can have multiple tags, combine them so the placement only happens in nodes with all the tags, and many other combinations. For more details of this functionality, please refer to the official docker docs on service placement.
Note: For these features to work, you need access to a manager node of the swarm cluster and Docker 17.04+.
## Scalability
As you probably already know, each recipe deploys a bunch of services, and each service can be of one of two types: stateless or stateful. You can tell which type a service is by inspecting its implementation to see whether it needs data persistence within itself or not.
The point is, the way to scale a service depends on its type.
Scaling stateless services with Docker is pretty straightforward: you can simply increase the number of replicas. Assuming no constraint violations (see the previous section), you will be able to dynamically set more or fewer replicas for each stateless service.
```
docker service scale orion=5
```
More info on the scaling process is documented here.
As regards scaling stateful services, there is no silver bullet and it will always depend on the service being discussed.
For example, Docker handles two types of service deployments: replicated and global, as explained here. A replicated service can be scaled as shown in the previous example, but the only way to scale a global service (which means there will be a maximum of one instance per node), is by adding nodes. To scale down a global service you can either remove nodes or apply constraints (see Multisite Deployment section above).
In any of the two cases, for a stateful service, someone will have to be responsible for coordinating the data layer among all the instances and deal with replication, partitioning and all sort of issues typically seen in distributed systems.
Finally, each recipe should properly document which of its services are stateless and which are not, so that these scalability considerations can be taken into account. They should also include notes on how their stateful services could be scaled.
Here you can find recipes aimed at different usages of the Orion Context Broker. We assume you are already familiar with Orion. If not, refer to the official documentation.
The easiest and simplest way to try Orion is as explained in Orion's official docker image docs using this docker-compose file. But here, we will explore a distributed configuration for this Generic Enabler.
Instructions on how to prepare your environment to test these recipes are given in https://github.com/smartsdk/smartsdk-recipes.
Date: 2016-01-01
Categories:
Tags:
This recipe shows how to deploy a scalable Orion Context Broker service backed with a scalable replica set of MongoDB instances.
Orion needs a mongo database for its backend. If you have already deployed Mongo within your cluster and would like to reuse that database, you can skip the next step (deploying backend). You will just need to pay attention to the variables you define for Orion to link to Mongo, namely, `MONGO_SERVICE_URI` . Make sure
you have the correct values in `settings.env` (or `settings.bat` in Windows).
The value of `MONGO_SERVICE_URI` should be a routable address for mongo.
If deployed within the swarm, the service name (with stack prefix)
would suffice. You can read more in the
official docker docs.
The default values should be fine for you if you used the
Mongo ReplicaSet Recipe.
Now you can activate your settings and deploy Orion...
```
$ source settings.env # In Windows, simply execute settings.bat instead.
$ docker stack deploy -c docker-compose.yml orion
```
At some point, your deployment should look like this...
```
$ docker service ls
ID NAME MODE REPLICAS IMAGE
nrxbm6k0a2yn mongo-rs_mongo global 3/3 mongo:3.2
rgws8vumqye2 mongo-rs_mongo-controller replicated 1/1 smartsdk/mongo-rs-controller-swarm:latest
zk7nu592vsde orion_orion replicated 3/3 fiware/orion:1.3.0
```
As shown above, if you see `3/3` in the replicas column it means the 3 replicas
are up and running.
You can check the distribution of the containers of a service (a.k.a. tasks) across the swarm by running the following...
```
$ docker service ps orion_orion
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
wwgt3q6nqqg3 orion_orion.1 fiware/orion:1.3.0 ms-worker0 Running Running 9 minutes ago
l1wavgqra8ry orion_orion.2 fiware/orion:1.3.0 ms-worker1 Running Running 9 minutes ago
z20v0pnym8ky orion_orion.3 fiware/orion:1.3.0 ms-manager0 Running Running 25 minutes ago
```
The good news is that, as you can see from the above output, by default docker already took care of deploying all the replicas of the `orion_orion` service to different hosts. Of course, with the use of labels, constraints or deploying mode you have the power to customize the distribution of tasks among swarm nodes. You can see the mongo replica recipe to understand the deployment of the `mongo-rs_mongo` service.
Now, let's query Orion to check it's truly up and running. The question now is... where is Orion actually running? We'll cover the network internals later, but for now let's query the manager node...
```
$ sh ../query.sh $(docker-machine ip ms-manager0)
```
You will get something like...
```
{
"orion" : {
"version" : "1.3.0",
"uptime" : "0 d, 0 h, 18 m, 13 s",
"git_hash" : "cb6813f044607bc01895296223a27e4466ab0913",
"compile_time" : "Fri Sep 2 08:19:12 UTC 2016",
"compiled_by" : "root",
"compiled_in" : "ba19f7d3be65"
}
}
[]
```
Thanks to the docker swarm internal routing mesh, you can actually send the previous query to any node of the swarm; it will be redirected to a node where the request on port `1026` can be attended (i.e., any node running Orion).
Let's insert some data...
```
$ sh ../insert.sh $(docker-machine ip ms-worker1)
```
And check it's there...
```
$ sh ../query.sh $(docker-machine ip ms-worker0)
...
[
{
"id": "Room1",
"pressure": {
"metadata": {},
"type": "Integer",
"value": 720
},
"temperature": {
"metadata": {},
"type": "Float",
"value": 23
},
"type": "Room"
}
]
```
Yes, you can query any of the three nodes.
Swarm's internal load balancer will distribute all requests for the orion service among the orion tasks running in the swarm, in a round-robin fashion.
## Rescaling Orion
Scaling Orion up and down is as simple as running something like...
```
$ docker service scale orion_orion=2
```
(this maps to the `replicas` argument in the docker-compose)
Consequently, one of the nodes (ms-worker1 in my case) is no longer running Orion...
```
$ docker service ps orion_orion
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
2tibpye24o5q orion_orion.2 fiware/orion:1.3.0 ms-manager0 Running Running 11 minutes ago
w9zmn8pp61ql orion_orion.3 fiware/orion:1.3.0 ms-worker0 Running Running 11 minutes ago
```
But still responds to the querying as mentioned above...
```
$ sh ../query.sh $(docker-machine ip ms-worker1)
{
"orion" : {
"version" : "1.3.0",
"uptime" : "0 d, 0 h, 14 m, 30 s",
"git_hash" : "cb6813f044607bc01895296223a27e4466ab0913",
"compile_time" : "Fri Sep 2 08:19:12 UTC 2016",
"compiled_by" : "root",
"compiled_in" : "ba19f7d3be65"
}
}
[]
```
Check the mongo replica recipe to see how to scale the MongoDB backend. Basically, because it is a "global" service, you can scale it down as shown before. However, scaling it up might require adding a new node to the swarm, because there can be only one instance per node.
## Dealing with failures
Docker is taking care of the reconciliation of the services in case a container goes down. Let's show this by running the following (always on the manager node):
Suppose an Orion container goes down...
```
$ docker rm -f abc5e37037f0
```
You will see it gone, but after a while it will automatically come back.
Even if a whole node goes down, the service will remain working because you had both redundant orion instances and redundant db replicas.
```
$ docker-machine rm ms-worker0
```
You will still get replies to...
```
$ sh ../query.sh $(docker-machine ip ms-manager0)
$ sh ../query.sh $(docker-machine ip ms-worker1)
```
This recipe shows how to deploy a scalable API Umbrella service backed with a scalable replica set of MongoDB instances.
For the time being, other services, such as Elasticsearch for logging API interactions and QoS, are not deployed. This is mostly due to the fact that API Umbrella supports only obsolete versions of Elasticsearch (i.e. version 2, while the current version is 6).
In case you haven't done it yet for other recipes, deploy `backend` and `frontend` networks as described in the
installation guide. API Umbrella needs a mongo database for its backend. If you have already deployed Mongo within your cluster and would like to reuse that database, you can skip the next step (deploying backend). You will just need to pay attention to the variables you define for API Umbrella to link to Mongo, namely, `MONGO_SERVICE_URI` and `REPLICASET_NAME` .
Otherwise, if you prefer to make a new deployment of MongoDB just for API Umbrella, you can take a shortcut and run...
```
$ sh deploy_back.sh
Creating config mongo-rs_mongo-healthcheck
Creating service mongo-rs_mongo
Creating service mongo-rs_controller
```
Besides that, given that the Ruby driver for MongoDB does not support service discovery, you will need to expose the ports of the MongoDB server on the cluster to allow API Umbrella to connect to the Replica Set.
Be aware that this works only when you deploy your MongoDB in global mode (as in the script).
```
$ docker service update --publish-add published=27017,target=27017,protocol=tcp,mode=host mongo-rs_mongo
mongo-rs_mongo
overall progress: 1 out of 1 tasks
w697ke0djs3c: running [==================================================>]
verify: Service converged
```
Wait some time until the backend is ready; you can check the backend deployment by running:
```
$ docker stack ps mongo-rs
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
mxxrlexvj0r9 mongo-rs_mongo.z69rvapjce827l69b6zehceal mongo:3.2 ms-worker1 Running Starting 9 seconds ago
d74orl0f0q7a mongo-rs_mongo.fw2ajm8zw4f12ut3sgffgdwsl mongo:3.2 ms-worker0 Running Starting 15 seconds ago
a2wddzw2g2fg mongo-rs_mongo.w697ke0djs3cfdf3bgbrcblam mongo:3.2 ms-manager0 Running Starting 6 seconds ago
nero0vahaa8h mongo-rs_controller.1 smartsdk/mongo-rs-controller-swarm:latest ms-manager0 Running Running 5 seconds ago
```
Set the connection url for mongo based on the IPs of your Swarm Cluster (alternatively edit the `frontend.env` file):
```
$ MONGO_REPLICATE_SET_IPS=192.168.99.100:27017,192.168.99.101:27017,192.168.99.102:27017
$ export MONGO_REPLICATE_SET_IPS
```
If you used `miniswarm` to create your cluster, you can get the different IPs
using the `docker-machine ip` command, e.g.:
$ docker-machine ip ms-worker0
$ docker-machine ip ms-worker1
```
When all services are in ready status, your backend is ready to be used:
```
$ sh deploy_front.sh
generating config file
replacing target file api-umbrella.yml
replace mongodb with mongo-rs_mongo
replacing target file api-umbrella.yml
replace rs_name with rs
Creating config api_api-umbrella
Creating service api_api-umbrella
```
When the frontend services are also running, your deployment will look like this:
```
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ca11lmx40tu5 api_api-umbrella replicated 2/2 smartsdk/api-umbrella:0.14.4-1-fiware *:80->80/tcp,*:443->443/tcp
te1i0vhwtmnw mongo-rs_controller replicated 1/1 smartsdk/mongo-rs-controller-swarm:latest
rbo2oe2y0d72 mongo-rs_mongo global 3/3 mongo:3.2
```
If you see `3/3` in the replicas column it means the 3 out of 3 planned replicas
are up and running.
In the following walkthrough we will explain how to do the initial configuration of API Umbrella and register your first API. For more details read API Umbrella's documentation.
* Let's create the admin user in API Umbrella. As a first step, get the IP of your master node and open the browser at the following endpoint:
```
http://<your-cluster-manager-ip>/admin
```
Unless you also created certificates for your server, API Umbrella will ask you to accept the connection to an insecure instance.
In the page displayed you can enter the admin user name and the password.
Now you are logged in and you can configure the backend APIs.
N.B.: The usage of the cluster master IP is just a convention; you can also reach the services at the IPs of the worker nodes.
* Retrieve the `X-Admin-Auth-Token` and the `X-Api-Key`. In the menu select `Users->Admin Accounts` and click on the username you just created. Copy the `Admin API Access` token for your account. Then, in the menu, select `Users->Api Users`, click on the username `<EMAIL>@internal.apiumbrella` and copy the API Key (of course you can create new ones instead of reusing API Umbrella defaults).
* Register a new API. Create a simple API to test that everything works, by sending a POST request to the API Umbrella admin API (authenticated with the `X-Admin-Auth-Token` and `X-Api-Key` retrieved above). The example API proxies `maps.googleapis.com` over plain http, uses your cluster manager IP as `frontend_host`, maps the frontend prefix `/distance2/` to the backend prefix `/`, and requires the FIWARE OAuth2 IdP (`require_idp: fiware-oauth2`). The response echoes the newly created API backend configuration.
* Publish the newly registered API with a further POST request to the admin API. The response contains the new `config_version`, with the API listed under `config.apis`.
Test your new API by issuing a query:
* Get a token from FIWARE:
```bash
$ wget --no-check-certificate https://raw.githubusercontent.com/fgalan/oauth2-example-orion-client/master/token_script.sh
$ bash token_script.sh
Username: <EMAIL>
Password:
Token: <token>
```
* Use it to make a query to your API:
```bash
$ curl -k "https://<your-cluster-manager-ip>/distance2/maps/api/distancematrix/json?units=imperial&origins=Washington,DC&destinations=New+York+City,NY&token=<token>"
```
Response:
```
{
  "destination_addresses" : [ "New York, NY, USA" ],
  "origin_addresses" : [ "Washington, DC, USA" ],
  "rows" : [
    {
      "elements" : [
        {
          "distance" : { "text" : "225 mi", "value" : 361940 },
          "duration" : { "text" : "3 hours 50 mins", "value" : 13816 },
          "status" : "OK"
        }
      ]
    }
  ],
  "status" : "OK"
}
```
This section contains useful (and sometimes temporary) scripts as well as references to tools, projects and pieces of documentation used for the development of the recipes.
The basic environment setup is explained in the Installation part of the docs.
## Playing with Recipes?
### miniswarm
Helpful tool to help you quickly setup a local virtualbox-based swarm cluster for testing purposes.
### wait-for-it
Useful shell script used when you need to wait for a service to be started.
Note: This might no longer be needed since docker introduced the healthchecks feature.
### docker-swarm-visualizer
If you'd like to have a basic view of the distribution of containers in your swarm cluster, you can use the `visualizer.yml` file provided in this folder.
```
docker stack deploy -c visualizer.yml vis
```
### portainer
If you'd like a more sophisticated UI with info about your swarm, you can deploy portainer as follows.
```
docker service create \
--name portainer \
--publish 9000:9000 \
--constraint 'node.role == manager' \
--mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
portainer/portainer \
-H unix:///var/run/docker.sock
```
Alternatively, you can make use of the docker-compose file available in this folder.
```
docker stack deploy -c portainer.yml portainer
```
### postman
A well-known tool for experimenting with APIs. Do you want to try the curl-based examples of the recipes from Postman? Import the `postman_collection.json` available in this folder and make your tests easier. Note: this collection is work in progress, feel free to contribute!
## Writing Docs?
We typically write documentation in markdown format. Then, mkdocs is used to generate the html format. You can see in the root of this project the `mkdocs.yml` config file.
For architecture diagrams, we use PlantUML. For the diagrams we follow the conventions and leverage the features you can find in this project.
Instead of uploading pictures in this document, we use gravizo's power to convert the .dot or PlantUML files and have them served as pictures online. There is an intermediate conversion done with gravizo's converter. Inspect the source of any recipe's `readme.md` to see an example.
Other tools for documentation that you may find useful are...
### draw.io
Use this tool when the diagrams start getting too complex of when you foresee the diagram will be complex from the scratch.
Complex in the sense that making a simple change takes more time understanding the `.dot` than making a manual gui-based change. When using draw.io, keep the source file in the repository under a `/doc` subfolder of the corresponding recipe.
### color names
The reference for color names used in `.dot` files.
### diagramr (deprecated)
To give more docker-related details we could use this tool to create diagrams from docker-compose files. The tools gives also the .dot file, which would be eventually customized and then turned into a png file using graphviz.
```
$ dot compose.dot -Tpng -o compose.png
```
Contributions are more than welcome in the form of Pull Requests.
Feel free to open issues if something looks wrong.
Be aware that during the CI process, a number of linters are run:
* To ensure correctness of yml files, we recommend you to check the yaml linting rules.
* To ensure consistency of the documentation style, we recommend you to adhere to the MD linting rules.
Once you make a pull request to the repository, you will be able to observe the results of the compliance verification in your PR. Merging will only be possible after a successful CI run.
## Important observations for new recipes
* Please add a `README.md` with an introduction on how to use the recipe and which things MUST be set in order for it to work.
* State which services are stateful and which are stateless.
* Provide the recipe with sensible defaults that work out of the box in the FIWARE Lab. Use as many defaults as possible, so users need little to no configuration before the first testing deployment.
* Please keep recipes in order (respect the folder structure), following the categories.
## Contributing to Portainer Recipes
The SmartSDK Recipes aim to be deployable both by using command line tools, following this guide, and by using a Portainer template.
The Portainer template file is documented in the Stack template definition format.
The JSON file is however not human friendly to edit, so in this project the templates should be written in the source files and, before committing changes, `make` should be called from the root directory of the project in order to update the generated template file. Then add and commit both files together.
## Documentation
For now we are using MkDocs, deploying on GitHub Pages.
You will also notice that instead of having a separate `docs` folder,
the documentation is composed of the README's content of all subfolders so as
to keep docs as close to the respective recipes as possible. If you change the structure of the index or add new pages, remember to update `mkdocs.yml` accordingly.
Note you can preview your changes locally by running `mkdocs serve`.
After all your changes, remember to run `mkdocs gh-deploy` to publish the updated documentation to GitHub Pages.
learningstatisticswithr_com_book | free_programming_book | Unknown | # Licensing
This book is published under a Creative Commons BY-SA license (CC BY-SA) version 4.0. This means that this book can be reused, remixed, retained, revised and redistributed (including commercially) as long as appropriate credit is given to the authors. If you remix, or modify the original version of this open textbook, you must redistribute all versions of this open textbook under the same license - CC BY-SA. https://creativecommons.org/licenses/by-sa/4.0/
Learning statistics with R: A tutorial for psychology students and other beginners. (Version 0.6.1) Dedication This book was brought to you today by the letter ‘R’.
Date: 2013-01-13
Categories:
Tags:
# Preface
## 0.1 Preface to Version 0.6.1
Hi! I’m not Danielle.
This version is a work in progress to transport the book from LaTeX to bookdown; this allows the code chunks shown in the text to be interactively rendered and is the first step in updating sections to make use of the tidyverse.
Moving from LaTeX is a bit fiddly so there are broken pieces in this version that I am updating progressively. If you notice errors or have suggestions feel free to raise an issue (or pull request) at http://github.com/ekothe/rbook
Cheers <NAME>
## 0.2 Preface to Version 0.6
The book hasn’t changed much since 2015 when I released Version 0.5 – it’s probably fair to say that I’ve changed more than it has. I moved from Adelaide to Sydney in 2016 and my teaching profile at UNSW is different to what it was at Adelaide, and I haven’t really had a chance to work on it since arriving here! It’s a little strange looking back at this actually. A few quick comments…
* Weirdly, the book consistently misgenders me, but I suppose I have only myself to blame for that one :-) There’s now a brief footnote on page 12 that mentions this issue; in real life I’ve been working through a gender affirmation process for the last two years and mostly go by she/her pronouns. I am, however, just as lazy as I ever was so I haven’t bothered updating the text in the book.
* For Version 0.6 I haven't changed much; I've made a few minor changes when people have pointed out typos or other errors. In particular it's worth noting the issue associated with the etaSquared function in the lsr package (which isn't really being maintained any more) in Section 14.4. The function works fine for the simple examples in the book, but there are definitely bugs in there that I haven't found time to check! So please take care with that one.
* The biggest change really is the licensing! I’ve released it under a Creative Commons licence (CC BY-SA 4.0, specifically), and placed all the source files to the associated GitHub repository, if anyone wants to adapt it.
Maybe someone would like to write a version that makes use of the tidyverse… I hear that’s become rather important to R these days :-)
Best, <NAME>
Another year, another update. This time around, the update has focused almost entirely on the theory sections of the book. Chapters 9, 10 and 11 have been rewritten, hopefully for the better. Along the same lines, Chapter 17 is entirely new, and focuses on Bayesian statistics. I think the changes have improved the book a great deal. I’ve always felt uncomfortable about the fact that all the inferential statistics in the book are presented from an orthodox perspective, even though I almost always present Bayesian data analyses in my own work. Now that I’ve managed to squeeze Bayesian methods into the book somewhere, I’m starting to feel better about the book as a whole. I wanted to get a few other things done in this update, but as usual I’m running into teaching deadlines, so the update has to go out the way it is!
<NAME> February 16, 2015
## 0.4 Preface to Version 0.4
A year has gone by since I wrote the last preface. The book has changed in a few important ways: Chapters 3 and 4 do a better job of documenting some of the time saving features of Rstudio, Chapters 12 and 13 now make use of new functions in the lsr package for running chi-square tests and t tests, and the discussion of correlations has been adapted to refer to the new functions in the lsr package. The soft copy of 0.4 now has better internal referencing (i.e., actual hyperlinks between sections), though that was introduced in 0.3.1. There’s a few tweaks here and there, and many typo corrections (thank you to everyone who pointed out typos!), but overall 0.4 isn’t massively different from 0.3.
I wish I’d had more time over the last 12 months to add more content. The absence of any discussion of repeated measures ANOVA and mixed models more generally really does annoy me. My excuse for this lack of progress is that my second child was born at the start of 2013, and so I spent most of last year just trying to keep my head above water. As a consequence, unpaid side projects like this book got sidelined in favour of things that actually pay my salary! Things are a little calmer now, so with any luck version 0.5 will be a bigger step forward.
One thing that has surprised me is the number of downloads the book gets. I finally got some basic tracking information from the website a couple of months ago, and (after excluding obvious robots) the book has been averaging about 90 downloads per day. That’s encouraging: there’s at least a few people who find the book useful!
<NAME> February 4, 2014
There’s a part of me that really doesn’t want to publish this book. It’s not finished.
And when I say that, I mean it. The referencing is spotty at best, the chapter summaries are just lists of section titles, there’s no index, there are no exercises for the reader, the organisation is suboptimal, and the coverage of topics is just not comprehensive enough for my liking. Additionally, there are sections with content that I’m not happy with, figures that really need to be redrawn, and I’ve had almost no time to hunt down inconsistencies, typos, or errors. In other words, this book is not finished. If I didn’t have a looming teaching deadline and a baby due in a few weeks, I really wouldn’t be making this available at all.
What this means is that if you are an academic looking for teaching materials, a Ph.D. student looking to learn R, or just a member of the general public interested in statistics, I would advise you to be cautious. What you’re looking at is a first draft, and it may not serve your purposes. If we were living in the days when publishing was expensive and the internet wasn’t around, I would never consider releasing a book in this form. The thought of someone shelling out $80 for this (which is what a commercial publisher told me it would retail for when they offered to distribute it) makes me feel more than a little uncomfortable. However, it’s the 21st century, so I can post the pdf on my website for free, and I can distribute hard copies via a print-on-demand service for less than half what a textbook publisher would charge. And so my guilt is assuaged, and I’m willing to share! With that in mind, you can obtain free soft copies and cheap hard copies online, from the following webpages:
* http://www.compcogscisydney.com/learning-statistics-with-r.html
* http://www.lulu.com/content/13570633
Even so, the warning still stands: what you are looking at is Version 0.3 of a work in progress. If and when it hits Version 1.0, I would be willing to stand behind the work and say, yes, this is a textbook that I would encourage other people to use. At that point, I’ll probably start shamelessly flogging the thing on the internet and generally acting like a tool. But until that day comes, I’d like it to be made clear that I’m really ambivalent about the work as it stands.
All of the above being said, there is one group of people that I can enthusiastically endorse this book to: the psychology students taking our undergraduate research methods classes (DRIP and DRIP:A) in 2013. For you, this book is ideal, because it was written to accompany your stats lectures. If a problem arises due to a shortcoming of these notes, I can and will adapt content on the fly to fix that problem. Effectively, you’ve got a textbook written specifically for your classes, distributed for free (electronic copy) or at near-cost prices (hard copy). Better yet, the notes have been tested: Version 0.1 of these notes was used in the 2011 class, Version 0.2 was used in the 2012 class, and now you’re looking at the new and improved Version 0.3. I’m not saying these notes are titanium plated awesomeness on a stick – though if you wanted to say so on the student evaluation forms, then you’re totally welcome to – because they’re not. But I am saying that they’ve been tried out in previous years and they seem to work okay. Besides, there’s a group of us around to troubleshoot if any problems come up, and you can guarantee that at least one of your lecturers has read the whole thing cover to cover!
Okay, with all that out of the way, I should say something about what the book aims to be. At its core, it is an introductory statistics textbook pitched primarily at psychology students. As such, it covers the standard topics that you’d expect of such a book: study design, descriptive statistics, the theory of hypothesis testing, \(t\)-tests, \(\chi^2\) tests, ANOVA and regression. However, there are also several chapters devoted to the R statistical package, including a chapter on data manipulation and another one on scripts and programming. Moreover, when you look at the content presented in the book, you’ll notice a lot of topics that are traditionally swept under the carpet when teaching statistics to psychology students. The Bayesian/frequentist divide is openly discussed in the probability chapter, and the disagreement between Neyman and Fisher about hypothesis testing makes an appearance. The difference between probability and density is discussed. A detailed treatment of Type I, II and III sums of squares for unbalanced factorial ANOVA is provided. And if you have a look in the Epilogue, it should be clear that my intention is to add a lot more advanced content.
My reasons for pursuing this approach are pretty simple: the students can handle it, and they even seem to enjoy it. Over the last few years I’ve been pleasantly surprised at just how little difficulty I’ve had in getting undergraduate psych students to learn R. It’s certainly not easy for them, and I’ve found I need to be a little charitable in setting marking standards, but they do eventually get there. Similarly, they don’t seem to have a lot of problems tolerating ambiguity and complexity in presentation of statistical ideas, as long as they are assured that the assessment standards will be set in a fashion that is appropriate for them. So if the students can handle it, why not teach it? The potential gains are pretty enticing. If they learn R, the students get access to CRAN, which is perhaps the largest and most comprehensive library of statistical tools in existence. And if they learn about probability theory in detail, it’s easier for them to switch from orthodox null hypothesis testing to Bayesian methods if they want to. Better yet, they learn data analysis skills that they can take to an employer without being dependent on expensive and proprietary software.
Sadly, this book isn’t the silver bullet that makes all this possible. It’s a work in progress, and maybe when it is finished it will be a useful tool. One among many, I would think. There are a number of other books that try to provide a basic introduction to statistics using R, and I’m not arrogant enough to believe that mine is better. Still, I rather like the book, and maybe other people will find it useful, incomplete though it is.
<NAME> January 13, 2013
# Chapter 1 Why do we learn statistics?
“Thou shalt not answer questionnaires
Or quizzes upon World Affairs,
Nor with compliance
Take any test. Thou shalt not sit
With statisticians nor commit”
– W.H. Auden1
## 1.1 On the psychology of statistics
To the surprise of many students, statistics is a fairly significant part of a psychological education. To the surprise of no-one, statistics is very rarely the favourite part of one’s psychological education. After all, if you really loved the idea of doing statistics, you’d probably be enrolled in a statistics class right now, not a psychology class. So, not surprisingly, there’s a pretty large proportion of the student base that isn’t happy about the fact that psychology has so much statistics in it. In view of this, I thought that the right place to start might be to answer some of the more common questions that people have about stats…
A big part of the issue at hand relates to the very idea of statistics. What is it? What’s it there for? And why are scientists so bloody obsessed with it? These are all good questions, when you think about it. So let’s start with the last one. As a group, scientists seem to be bizarrely fixated on running statistical tests on everything. In fact, we use statistics so often that we sometimes forget to explain to people why we do. It’s a kind of article of faith among scientists – and especially social scientists – that your findings can’t be trusted until you’ve done some stats. Undergraduate students might be forgiven for thinking that we’re all completely mad, because no-one takes the time to answer one very simple question:
Why do you do statistics? Why don’t scientists just use common sense?
It’s a naive question in some ways, but most good questions are. There’s a lot of good answers to it,2 but for my money, the best answer is a really simple one: we don’t trust ourselves enough. We worry that we’re human, and susceptible to all of the biases, temptations and frailties that humans suffer from. Much of statistics is basically a safeguard. Using “common sense” to evaluate evidence means trusting gut instincts, relying on verbal arguments and on using the raw power of human reason to come up with the right answer. Most scientists don’t think this approach is likely to work.
In fact, come to think of it, this sounds a lot like a psychological question to me, and since I do work in a psychology department, it seems like a good idea to dig a little deeper here. Is it really plausible to think that this “common sense” approach is very trustworthy? Verbal arguments have to be constructed in language, and all languages have biases – some things are harder to say than others, and not necessarily because they’re false (e.g., quantum electrodynamics is a good theory, but hard to explain in words). The instincts of our “gut” aren’t designed to solve scientific problems, they’re designed to handle day to day inferences – and given that biological evolution is slower than cultural change, we should say that they’re designed to solve the day to day problems for a different world than the one we live in. Most fundamentally, reasoning sensibly requires people to engage in “induction”, making wise guesses and going beyond the immediate evidence of the senses to make generalisations about the world. If you think that you can do that without being influenced by various distractors, well, I have a bridge in Brooklyn I’d like to sell you. Heck, as the next section shows, we can’t even solve “deductive” problems (ones where no guessing is required) without being influenced by our pre-existing biases.
### 1.1.1 The curse of belief bias
People are mostly pretty smart. We’re certainly smarter than the other species that we share the planet with (though many people might disagree). Our minds are quite amazing things, and we seem to be capable of the most incredible feats of thought and reason. That doesn’t make us perfect though. And among the many things that psychologists have shown over the years is that we really do find it hard to be neutral, to evaluate evidence impartially and without being swayed by pre-existing biases. A good example of this is the belief bias effect in logical reasoning: if you ask people to decide whether a particular argument is logically valid (i.e., conclusion would be true if the premises were true), we tend to be influenced by the believability of the conclusion, even when we shouldn’t. For instance, here’s a valid argument where the conclusion is believable:
No cigarettes are inexpensive (Premise 1)
Some addictive things are inexpensive (Premise 2)
Therefore, some addictive things are not cigarettes (Conclusion)
And here’s a valid argument where the conclusion is not believable:
No addictive things are inexpensive (Premise 1)
Some cigarettes are inexpensive (Premise 2)
Therefore, some cigarettes are not addictive (Conclusion)
The logical structure of argument #2 is identical to the structure of argument #1, and they’re both valid. However, in the second argument, there are good reasons to think that premise 1 is incorrect, and as a result it’s probably the case that the conclusion is also incorrect. But that’s entirely irrelevant to the topic at hand: an argument is deductively valid if the conclusion is a logical consequence of the premises. That is, a valid argument doesn’t have to involve true statements.
On the other hand, here’s an invalid argument that has a believable conclusion:
No addictive things are inexpensive (Premise 1)
Some cigarettes are inexpensive (Premise 2)
Therefore, some addictive things are not cigarettes (Conclusion)
And finally, an invalid argument with an unbelievable conclusion:
No cigarettes are inexpensive (Premise 1)
Some addictive things are inexpensive (Premise 2)
Therefore, some cigarettes are not addictive (Conclusion)
Now, suppose that people really are perfectly able to set aside their pre-existing biases about what is true and what isn’t, and purely evaluate an argument on its logical merits. We’d expect 100% of people to say that the valid arguments are valid, and 0% of people to say that the invalid arguments are valid. So if you ran an experiment looking at this, you’d expect to see data like this:
| | conclusion feels true | conclusion feels false |
| --- | --- | --- |
| argument is valid | 100% say “valid” | 100% say “valid” |
| argument is invalid | 0% say “valid” | 0% say “valid” |
If the psychological data looked like this (or even a good approximation to this), we might feel safe in just trusting our gut instincts. That is, it’d be perfectly okay just to let scientists evaluate data based on their common sense, and not bother with all this murky statistics stuff. However, you guys have taken psych classes, and by now you probably know where this is going…
In a classic study, Evans, Barston, and Pollard (1983) ran an experiment looking at exactly this. What they found is that when pre-existing biases (i.e., beliefs) were in agreement with the structure of the data, everything went the way you’d hope.
Not perfect, but that’s pretty good. But look what happens when our intuitive feelings about the truth of the conclusion run against the logical structure of the argument.
Oh dear, that’s not as good. Apparently, when people are presented with a strong argument that contradicts our pre-existing beliefs, we find it pretty hard to even perceive it to be a strong argument (people only did so 46% of the time). Even worse, when people are presented with a weak argument that agrees with our pre-existing biases, almost no-one can see that the argument is weak (people got that one wrong 92% of the time!)3
If you think about it, it’s not as if these data are horribly damning. Overall, people did do better than chance at compensating for their prior biases, since about 60% of people’s judgements were correct (you’d expect 50% by chance). Even so, if you were a professional “evaluator of evidence”, and someone came along and offered you a magic tool that improves your chances of making the right decision from 60% to (say) 95%, you’d probably jump at it, right? Of course you would. Thankfully, we actually do have a tool that can do this. But it’s not magic, it’s statistics. So that’s reason #1 why scientists love statistics. It’s just too easy for us to “believe what we want to believe”; so if we want to “believe in the data” instead, we’re going to need a bit of help to keep our personal biases under control. That’s what statistics does: it helps keep us honest.
## 1.2 The cautionary tale of Simpson’s paradox
The following is a true story (I think…). In 1973, the University of California, Berkeley had some worries about the admissions of students into their postgraduate courses. Specifically, the thing that caused the problem was the gender breakdown of their admissions, which looked like this…
| | Number of applicants | Percent admitted |
| --- | --- | --- |
| Males | 8442 | 44% |
| Females | 4321 | 35% |
…and they were worried about being sued.4 Given that there were nearly 13,000 applicants, a difference of 9% in admission rates between males and females is just way too big to be a coincidence. Pretty compelling data, right? And if I were to say to you that these data actually reflect a weak bias in favour of women (sort of!), you’d probably think that I was either crazy or sexist.
Oddly, it’s actually sort of true …when people started looking more carefully at the admissions data (Bickel, Hammel, and O’Connell 1975) they told a rather different story. Specifically, when they looked at it on a department by department basis, it turned out that most of the departments actually had a slightly higher success rate for female applicants than for male applicants. Table 1.1 shows the admission figures for the six largest departments (with the names of the departments removed for privacy reasons):
Department | Male Applicants | Male Percent Admitted | Female Applicants | Female Percent admitted |
| --- | --- | --- | --- | --- |
A | 825 | 62% | 108 | 82% |
B | 560 | 63% | 25 | 68% |
C | 325 | 37% | 593 | 34% |
D | 417 | 33% | 375 | 35% |
E | 191 | 28% | 393 | 24% |
F | 272 | 6% | 341 | 7% |
Remarkably, most departments had a higher rate of admissions for females than for males! Yet the overall rate of admission across the university for females was lower than for males. How can this be? How can both of these statements be true at the same time?
Here’s what’s going on. Firstly, notice that the departments are not equal to one another in terms of their admission percentages: some departments (e.g., engineering, chemistry) tended to admit a high percentage of the qualified applicants, whereas others (e.g., English) tended to reject most of the candidates, even if they were high quality. So, among the six departments shown above, notice that department A is the most generous, followed by B, C, D, E and F in that order. Next, notice that males and females tended to apply to different departments. If we rank the departments in terms of the total number of male applicants, we get A>B>D>C>F>E (the two “easy” departments, A and B, come first). On the whole, males tended to apply to the departments that had high admission rates. Now compare this to how the female applicants distributed themselves. Ranking the departments in terms of the total number of female applicants produces a quite different ordering C>E>D>F>A>B. In other words, what these data seem to be suggesting is that the female applicants tended to apply to “harder” departments. And in fact, if we look at Figure 1.1 we see that this trend is systematic, and quite striking. This effect is known as Simpson’s paradox. It’s not common, but it does happen in real life, and most people are very surprised by it when they first encounter it, and many people refuse to even believe that it’s real. It is very real. And while there are lots of very subtle statistical lessons buried in there, I want to use it to make a much more important point… doing research is hard, and there are lots of subtle, counterintuitive traps lying in wait for the unwary. That’s reason #2 why scientists love statistics, and why we teach research methods. Because science is hard, and the truth is sometimes cunningly hidden in the nooks and crannies of complicated data.
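As it happens, these six departments’ figures ship with R as the built-in `UCBAdmissions` dataset (only the six largest departments are included, so the totals are smaller than the university-wide numbers quoted earlier). If you want to see Simpson’s paradox with your own eyes, a minimal sketch looks like this:

```r
# Aggregated over departments: admission outcomes by gender
agg <- apply(UCBAdmissions, c(1, 2), sum)   # collapse the Dept dimension
prop.table(agg, margin = 2)                 # proportion admitted/rejected within each gender

# Disaggregated: proportion admitted, by gender, within each department
round(prop.table(UCBAdmissions, margin = c(2, 3))["Admitted", , ], 2)
```

The aggregated table shows a clear gap in favour of male applicants, while the department-by-department rates are far more even, with several departments admitting a higher proportion of the women who applied – exactly the pattern described above.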
Before leaving this topic entirely, I want to point out something else really critical that is often overlooked in a research methods class. Statistics only solves part of the problem. Remember that we started all this with the concern that Berkeley’s admissions processes might be unfairly biased against female applicants. When we looked at the “aggregated” data, it did seem like the university was discriminating against women, but when we “disaggregated” and looked at the individual behaviour of all the departments, it turned out that the actual departments were, if anything, slightly biased in favour of women. The gender bias in total admissions was caused by the fact that women tended to self-select for harder departments. From a legal perspective, that would probably put the university in the clear. Postgraduate admissions are determined at the level of the individual department (and there are good reasons to do that), and at the level of individual departments, the decisions are more or less unbiased (the weak bias in favour of females at that level is small, and not consistent across departments). Since the university can’t dictate which departments people choose to apply to, and the decision making takes place at the level of the department, it can hardly be held accountable for any biases that those choices produce.
That was the basis for my somewhat glib remarks earlier, but that’s not exactly the whole story, is it? After all, if we’re interested in this from a more sociological and psychological perspective, we might want to ask why there are such strong gender differences in applications. Why do males tend to apply to engineering more often than females, and why is this reversed for the English department? And why is it the case that the departments that tend to have a female-application bias tend to have lower overall admission rates than those departments that have a male-application bias? Might this not still reflect a gender bias, even though every single department is itself unbiased? It might. Suppose, hypothetically, that males prefer to apply to “hard sciences” and females prefer “humanities”. And suppose further that the reason why the humanities departments have low admission rates is that the government doesn’t want to fund the humanities (Ph.D. places, for instance, are often tied to government funded research projects). Does that constitute a gender bias? Or just an unenlightened view of the value of the humanities? What if someone at a high level in the government cut the humanities funds because they felt that the humanities are “useless chick stuff”? That seems pretty blatantly gender biased. None of this falls within the purview of statistics, but it matters to the research project. If you’re interested in the overall structural effects of subtle gender biases, then you probably want to look at both the aggregated and disaggregated data. If you’re interested in the decision making process at Berkeley itself then you’re probably only interested in the disaggregated data.
In short there are a lot of critical questions that you can’t answer with statistics, but the answers to those questions will have a huge impact on how you analyse and interpret data. And this is the reason why you should always think of statistics as a tool to help you learn about your data, no more and no less. It’s a powerful tool to that end, but there’s no substitute for careful thought.
## 1.3 Statistics in psychology
I hope that the discussion above helped explain why science in general is so focused on statistics. But I’m guessing that you have a lot more questions about what role statistics plays in psychology, and specifically why psychology classes always devote so many lectures to stats. So here’s my attempt to answer a few of them…
Why does psychology have so much statistics?
To be perfectly honest, there’s a few different reasons, some of which are better than others. The most important reason is that psychology is a statistical science. What I mean by that is that the “things” that we study are people. Real, complicated, gloriously messy, infuriatingly perverse people. The “things” of physics include objects like electrons, and while there are all sorts of complexities that arise in physics, electrons don’t have minds of their own. They don’t have opinions, they don’t differ from each other in weird and arbitrary ways, they don’t get bored in the middle of an experiment, and they don’t get angry at the experimenter and then deliberately try to sabotage the data set (not that I’ve ever done that…). At a fundamental level psychology is harder than physics.5
Basically, we teach statistics to you as psychologists because you need to be better at stats than physicists. There’s actually a saying used sometimes in physics, to the effect that “if your experiment needs statistics, you should have done a better experiment”. They have the luxury of being able to say that because their objects of study are pathetically simple in comparison to the vast mess that confronts social scientists. It’s not just psychology, really: most social sciences are desperately reliant on statistics. Not because we’re bad experimenters, but because we’ve picked a harder problem to solve. We teach you stats because you really, really need it.
Can’t someone else do the statistics?
To some extent, but not completely. It’s true that you don’t need to become a fully trained statistician just to do psychology, but you do need to reach a certain level of statistical competence. In my view, there’s three reasons that every psychological researcher ought to be able to do basic statistics:
* Firstly, there’s the fundamental reason: statistics is deeply intertwined with research design. If you want to be good at designing psychological studies, you need to at least understand the basics of stats.
* Secondly, if you want to be good at the psychological side of the research, then you need to be able to understand the psychological literature, right? But almost every paper in the psychological literature reports the results of statistical analyses. So if you really want to understand the psychology, you need to be able to understand what other people did with their data. And that means understanding a certain amount of statistics.
* Thirdly, there’s a big practical problem with being dependent on other people to do all your statistics: statistical analysis is expensive. If you ever get bored and want to look up how much the Australian government charges for university fees, you’ll notice something interesting: statistics is designated as a “national priority” category, and so the fees are much, much lower than for any other area of study. This is because there’s a massive shortage of statisticians out there. So, from your perspective as a psychological researcher, the laws of supply and demand aren’t exactly on your side here! As a result, in almost any real life situation where you want to do psychological research, the cruel facts will be that you don’t have enough money to afford a statistician. So the economics of the situation mean that you have to be pretty self-sufficient.
Note that a lot of these reasons generalise beyond researchers. If you want to be a practicing psychologist and stay on top of the field, it helps to be able to read the scientific literature, which relies pretty heavily on statistics.
I don’t care about jobs, research, or clinical work. Do I need statistics?
Okay, now you’re just messing with me. Still, I think it should matter to you too. Statistics should matter to you in the same way that statistics should matter to everyone: we live in the 21st century, and data are everywhere. Frankly, given the world in which we live these days, a basic knowledge of statistics is pretty damn close to a survival tool! Which is the topic of the next section…
## 1.4 Statistics in everyday life
“We are drowning in information, but we are starved for knowledge”
-Various authors, original probably <NAME>
When I started writing up my lecture notes I took the 20 most recent news articles posted to the ABC news website. Of those 20 articles, it turned out that 8 of them involved a discussion of something that I would call a statistical topic; 6 of those made a mistake. The most common error, if you’re curious, was failing to report baseline data (e.g., the article mentions that 5% of people in situation X have some characteristic Y, but doesn’t say how common the characteristic is for everyone else!) The point I’m trying to make here isn’t that journalists are bad at statistics (though they almost always are), it’s that a basic knowledge of statistics is very helpful for trying to figure out when someone else is either making a mistake or even lying to you. In fact, one of the biggest things that a knowledge of statistics does to you is cause you to get angry at the newspaper or the internet on a far more frequent basis: you can find a good example of this in Section 5.1.5. In later versions of this book I’ll try to include more anecdotes along those lines.
## 1.5 There’s more to research methods than statistics
So far, most of what I’ve talked about is statistics, and so you’d be forgiven for thinking that statistics is all I care about in life. To be fair, you wouldn’t be far wrong, but research methodology is a broader concept than statistics. So most research methods courses will cover a lot of topics that relate much more to the pragmatics of research design, and in particular the issues that you encounter when trying to do research with humans. However, about 99% of student fears relate to the statistics part of the course, so I’ve focused on the stats in this discussion, and hopefully I’ve convinced you that statistics matters, and more importantly, that it’s not to be feared. That being said, it’s pretty typical for introductory research methods classes to be very stats-heavy. This is not (usually) because the lecturers are evil people. Quite the contrary, in fact. Introductory classes focus a lot on the statistics because you almost always find yourself needing statistics before you need the other research methods training. Why? Because almost all of your assignments in other classes will rely on statistical training, to a much greater extent than they rely on other methodological tools. It’s not common for undergraduate assignments to require you to design your own study from the ground up (in which case you would need to know a lot about research design), but it is common for assignments to ask you to analyse and interpret data that were collected in a study that someone else designed (in which case you need statistics). In that sense, from the perspective of allowing you to do well in all your other classes, the statistics is more urgent.
But note that “urgent” is different from “important” – they both matter. I really do want to stress that research design is just as important as data analysis, and this book does spend a fair amount of time on it. However, while statistics has a kind of universality, and provides a set of core tools that are useful for most types of psychological research, the research methods side isn’t quite so universal. There are some general principles that everyone should think about, but a lot of research design is very idiosyncratic, and is specific to the area of research that you want to engage in. To the extent that it’s the details that matter, those details don’t usually show up in an introductory stats and research methods class.
* The quote comes from Auden’s 1946 poem Under Which Lyre: A Reactionary Tract for the Times, delivered as part of a commencement address at Harvard University. The history of the poem is kind of interesting: http://harvardmagazine.com/2007/11/a-poets-warning.html↩
* Including the suggestion that common sense is in short supply among scientists.↩
* In my more cynical moments I feel like this fact alone explains 95% of what I read on the internet.↩
* Earlier versions of these notes incorrectly suggested that they actually were sued – apparently that’s not true. There’s a nice commentary on this here: https://www.refsmmat.com/posts/2016-05-08-simpsons-paradox-berkeley.html. A big thank you to <NAME> for pointing this out to me!↩
* Which might explain why physics is just a teensy bit further advanced as a science than we are.↩
# Chapter 2 A brief introduction to research design
To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.
– <NAME>
In this chapter, we’re going to start thinking about the basic ideas that go into designing a study, collecting data, checking whether your data collection works, and so on. It won’t give you enough information to allow you to design studies of your own, but it will give you a lot of the basic tools that you need to assess the studies done by other people. However, since the focus of this book is much more on data analysis than on data collection, I’m only giving a very brief overview. Note that this chapter is “special” in two ways. Firstly, it’s much more psychology-specific than the later chapters. Secondly, it focuses much more heavily on the scientific problem of research methodology, and much less on the statistical problem of data analysis. Nevertheless, the two problems are related to one another, so it’s traditional for stats textbooks to discuss the problem in a little detail. This chapter relies heavily on Campbell and Stanley (1963) for the discussion of study design, and Stevens (1946) for the discussion of scales of measurement. Later versions will attempt to be more precise in the citations.
## 2.1 Introduction to psychological measurement
The first thing to understand is that data collection can be thought of as a kind of measurement. That is, what we’re trying to do here is measure something about human behaviour or the human mind. What do I mean by “measurement”?
### 2.1.1 Some thoughts about psychological measurement
Measurement itself is a subtle concept, but basically it comes down to finding some way of assigning numbers, or labels, or some other kind of well-defined descriptions to “stuff”. So, any of the following would count as a psychological measurement:
* My age is 33 years.
* I do not like anchovies.
* My chromosomal gender is male.
* My self-identified gender is male.7
In the short list above, the first part of each statement (my age, whether I like anchovies, my chromosomal gender, my self-identified gender) is “the thing to be measured”, and the last part (33 years, and so on) is “the measurement itself”. In fact, we can expand on this a little bit, by thinking about the set of possible measurements that could have arisen in each case:
* My age (in years) could have been 0, 1, 2, 3 …, etc. The upper bound on what my age could possibly be is a bit fuzzy, but in practice you’d be safe in saying that the largest possible age is 150, since no human has ever lived that long.
* When asked if I like anchovies, I might have said that I do, or I do not, or I have no opinion, or I sometimes do.
* My chromosomal gender is almost certainly going to be male (XY) or female (XX), but there are a few other possibilities. I could also have Klinefelter’s syndrome (XXY), which is more similar to male than to female. And I imagine there are other possibilities too.
* My self-identified gender is also very likely to be male or female, but it doesn’t have to agree with my chromosomal gender. I may also choose to identify with neither, or to explicitly call myself transgender.
As you can see, for some things (like age) it seems fairly obvious what the set of possible measurements should be, whereas for other things it gets a bit tricky. But I want to point out that even in the case of someone’s age, it’s much more subtle than this. For instance, in the example above, I assumed that it was okay to measure age in years. But if you’re a developmental psychologist, that’s way too crude, and so you often measure age in years and months (if a child is 2 years and 11 months, this is usually written as “2;11”). If you’re interested in newborns, you might want to measure age in days since birth, maybe even hours since birth. In other words, the way in which you specify the allowable measurement values is important.
Looking at this a bit more closely, you might also realise that the concept of “age” isn’t actually all that precise. In general, when we say “age” we implicitly mean “the length of time since birth”. But that’s not always the right way to do it. Suppose you’re interested in how newborn babies control their eye movements. If you’re interested in kids that young, you might also start to worry that “birth” is not the only meaningful point in time to care about. If <NAME> is born 3 weeks premature and <NAME> is born 1 week late, would it really make sense to say that they are the “same age” if we encountered them “2 hours after birth”? In one sense, yes: by social convention, we use birth as our reference point for talking about age in everyday life, since it defines the amount of time the person has been operating as an independent entity in the world, but from a scientific perspective that’s not the only thing we care about. When we think about the biology of human beings, it’s often useful to think of ourselves as organisms that have been growing and maturing since conception, and from that perspective Alice and Bianca aren’t the same age at all. So you might want to define the concept of “age” in two different ways: the length of time since conception, and the length of time since birth. When dealing with adults, it won’t make much difference, but when dealing with newborns it might.
Moving beyond these issues, there’s the question of methodology. What specific “measurement method” are you going to use to find out someone’s age? As before, there are lots of different possibilities:
* You could just ask people “how old are you?” The method of self-report is fast, cheap and easy, but it only works with people old enough to understand the question, and some people lie about their age.
* You could ask an authority (e.g., a parent) “how old is your child?” This method is fast, and when dealing with kids it’s not all that hard since the parent is almost always around. It doesn’t work as well if you want to know “age since conception”, since a lot of parents can’t say for sure when conception took place. For that, you might need a different authority (e.g., an obstetrician).
* You could look up official records, like birth certificates. This is time consuming and annoying, but it has its uses (e.g., if the person is now dead).
### 2.1.2 Operationalisation: defining your measurement
All of the ideas discussed in the previous section relate to the concept of operationalisation. To be a bit more precise about the idea, operationalisation is the process by which we take a meaningful but somewhat vague concept, and turn it into a precise measurement. The process of operationalisation can involve several different things:
* Being precise about what you are trying to measure. For instance, does “age” mean “time since birth” or “time since conception” in the context of your research?
* Determining what method you will use to measure it. Will you use self-report to measure age, ask a parent, or look up an official record? If you’re using self-report, how will you phrase the question?
* Defining the set of the allowable values that the measurement can take. Note that these values don’t always have to be numerical, though they often are. When measuring age, the values are numerical, but we still need to think carefully about what numbers are allowed. Do we want age in years, years and months, days, hours? Etc. For other types of measurements (e.g., gender), the values aren’t numerical. But, just as before, we need to think about what values are allowed. If we’re asking people to self-report their gender, what options do we allow them to choose between? Is it enough to allow only “male” or “female”? Do you need an “other” option? Or should we not give people any specific options, and let them answer in their own words? And if you open up the set of possible values to include any verbal response, how will you interpret their answers?
Operationalisation is a tricky business, and there’s no “one, true way” to do it. The way in which you choose to operationalise the informal concept of “age” or “gender” into a formal measurement depends on what you need to use the measurement for. Often you’ll find that the community of scientists who work in your area have some fairly well-established ideas for how to go about it. In other words, operationalisation needs to be thought through on a case by case basis. Nevertheless, while there are a lot of issues that are specific to each individual research project, there are some aspects to it that are pretty general.
Before moving on, I want to take a moment to clear up our terminology, and in the process introduce one more term. Here are four different things that are closely related to each other:
* A theoretical construct. This is the thing that you’re trying to take a measurement of, like “age”, “gender” or an “opinion”. A theoretical construct can’t be directly observed, and it’s often actually a bit vague.
* A measure. The measure refers to the method or the tool that you use to make your observations. A question in a survey, a behavioural observation or a brain scan could all count as a measure.
* An operationalisation. The term “operationalisation” refers to the logical connection between the measure and the theoretical construct, or to the process by which we try to derive a measure from a theoretical construct.
* A variable. Finally, a new term. A variable is what we end up with when we apply our measure to something in the world. That is, variables are the actual “data” that we end up with in our data sets.
In practice, even scientists tend to blur the distinction between these things, but it’s very helpful to try to understand the differences.
## 2.2 Scales of measurement
As the previous section indicates, the outcome of a psychological measurement is called a variable. But not all variables are of the same qualitative type, and it’s very useful to understand what types there are. A very useful concept for distinguishing between different types of variables is what’s known as scales of measurement.
### 2.2.1 Nominal scale
A nominal scale variable (also referred to as a categorical variable) is one in which there is no particular relationship between the different possibilities: for these kinds of variables it doesn’t make any sense to say that one of them is “bigger” or “better” than any other one, and it absolutely doesn’t make any sense to average them. The classic example for this is “eye colour”. Eyes can be blue, green and brown, among other possibilities, but none of them is any “better” than any other one. As a result, it would feel really weird to talk about an “average eye colour”. Similarly, gender is nominal too: male isn’t better or worse than female, neither does it make sense to try to talk about an “average gender”. In short, nominal scale variables are those for which the only thing you can say about the different possibilities is that they are different. That’s it.
Let’s take a slightly closer look at this. Suppose I was doing research on how people commute to and from work. One variable I would have to measure would be what kind of transportation people use to get to work. This “transport type” variable could have quite a few possible values, including: “train”, “bus”, “car”, “bicycle”, etc. For now, let’s suppose that these four are the only possibilities, and suppose that I ask 100 people how they got to work today and tabulate their answers.
So, what’s the average transportation type? Obviously, the answer here is that there isn’t one. It’s a silly question to ask. You can say that travel by car is the most popular method, and travel by train is the least popular method, but that’s about all. Similarly, notice that the order in which I list the options isn’t very interesting. I could just as well have listed the options in any other order,
and nothing really changes.
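To make this concrete, here’s a small sketch of what that kind of nominal variable looks like in R. The counts below are entirely made up (I’ve only chosen them so that car is the most common answer and train the least common, as in the discussion above):

```r
# Hypothetical answers from 100 commuters (counts invented for illustration)
transport <- rep(c("car", "bus", "bicycle", "train"),
                 times = c(45, 30, 15, 10))
table(transport)     # a frequency table is perfectly sensible for a nominal variable
# mean(transport)    # ...but an "average transport type" is meaningless
```

Notice that `table()` happens to print the categories in alphabetical order, and nothing about the data changes if you list them differently – which is exactly the point about nominal variables.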
### 2.2.2 Ordinal scale
Ordinal scale variables have a bit more structure than nominal scale variables, but not by a lot. An ordinal scale variable is one in which there is a natural, meaningful way to order the different possibilities, but you can’t do anything else. The usual example given of an ordinal variable is “finishing position in a race”. You can say that the person who finished first was faster than the person who finished second, but you don’t know how much faster. As a consequence we know that 1st > 2nd, and we know that 2nd > 3rd, but the difference between 1st and 2nd might be much larger than the difference between 2nd and 3rd.
Here’s a more psychologically interesting example. Suppose I’m interested in people’s attitudes to climate change, and I ask them to pick one of these four statements that most closely matches their beliefs:
* Temperatures are rising, because of human activity
* Temperatures are rising, but we don’t know why
* Temperatures are rising, but not because of humans
* Temperatures are not rising
Notice that these four statements actually do have a natural ordering, in terms of “the extent to which they agree with the current science”. Statement 1 is a close match, statement 2 is a reasonable match, statement 3 isn’t a very good match, and statement 4 is in strong opposition to the science. So, in terms of the thing I’m interested in (the extent to which people endorse the science), I can order the items as 1 > 2 > 3 > 4. Since this ordering exists, it would be very weird to list the options like this…
* Temperatures are rising, but not because of humans
* Temperatures are rising, because of human activity
* Temperatures are not rising
* Temperatures are rising, but we don’t know why
… because it seems to violate the natural “structure” to the question.
So, let’s suppose I asked 100 people these questions, and got the following answers:
Response | Number |
| --- | --- |
(1) Temperatures are rising, because of human activity | 51 |
(2) Temperatures are rising, but we don’t know why | 20 |
(3) Temperatures are rising, but not because of humans | 10 |
(4) Temperatures are not rising | 19 |
When analysing these data, it seems quite reasonable to try to group (1), (2) and (3) together, and say that 81 of 100 people were willing to at least partially endorse the science. And it’s also quite reasonable to group (2), (3) and (4) together and say that 49 of 100 people registered at least some disagreement with the dominant scientific view. However, it would be entirely bizarre to try to group (1), (2) and (4) together and say that 90 of 100 people said… what? There’s nothing sensible that allows you to group those responses together at all.
That said, notice that while we can use the natural ordering of these items to construct sensible groupings, what we can’t do is average them. For instance, in my simple example here, the “average” response to the question is 1.97. If you can tell me what that means, I’d love to know. Because that sounds like gibberish to me!
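Just so you can see where that 1.97 comes from, here’s the calculation in R, using the response counts from the table above and pretending that the four response options are really the numbers 1 to 4 (which is exactly the dubious step):

```r
# Climate-change responses coded 1-4, with the counts from the table above
responses <- rep(1:4, times = c(51, 20, 10, 19))
mean(responses)   # 1.97 -- a number with no sensible interpretation for ordinal data
```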
### 2.2.3 Interval scale
In contrast to nominal and ordinal scale variables, interval scale and ratio scale variables are variables for which the numerical value is genuinely meaningful. In the case of interval scale variables, the differences between the numbers are interpretable, but the variable doesn’t have a “natural” zero value. A good example of an interval scale variable is measuring temperature in degrees celsius. For instance, if it was 15\(^\circ\) yesterday and 18\(^\circ\) today, then the 3\(^\circ\) difference between the two is genuinely meaningful. Moreover, that 3\(^\circ\) difference is exactly the same as the 3\(^\circ\) difference between 7\(^\circ\) and 10\(^\circ\). In short, addition and subtraction are meaningful for interval scale variables.8
However, notice that the 0\(^\circ\) does not mean “no temperature at all”: it actually means “the temperature at which water freezes”, which is pretty arbitrary. As a consequence, it becomes pointless to try to multiply and divide temperatures. It is wrong to say that \(20^\circ\) is twice as hot as 10\(^\circ\), just as it is weird and meaningless to try to claim that 20\(^\circ\) is negative two times as hot as -10\(^\circ\).
Again, let’s look at a more psychological example. Suppose I’m interested in looking at how the attitudes of first-year university students have changed over time. Obviously, I’m going to want to record the year in which each student started. This is an interval scale variable. A student who started in 2003 did arrive 5 years before a student who started in 2008. However, it would be completely insane for me to divide 2008 by 2003 and say that the second student started “1.0024 times later” than the first one. That doesn’t make any sense at all.
### 2.2.4 Ratio scale
The fourth and final type of variable to consider is a ratio scale variable, in which zero really means zero, and it’s okay to multiply and divide. A good psychological example of a ratio scale variable is response time (RT). In a lot of tasks it’s very common to record the amount of time somebody takes to solve a problem or answer a question, because it’s an indicator of how difficult the task is. Suppose that Alan takes 2.3 seconds to respond to a question, whereas Ben takes 3.1 seconds. As with an interval scale variable, addition and subtraction are both meaningful here. Ben really did take 3.1 - 2.3 = 0.8 seconds longer than Alan did. However, notice that multiplication and division also make sense here too: Ben took 3.1 / 2.3 = 1.35 times as long as Alan did to answer the question. And the reason why you can do this is that, for a ratio scale variable such as RT, “zero seconds” really does mean “no time at all”.
### 2.2.5 Continuous versus discrete variables
There’s a second kind of distinction that you need to be aware of, regarding what types of variables you can run into. This is the distinction between continuous variables and discrete variables. The difference between these is as follows:
* A continuous variable is one in which, for any two values that you can think of, it’s always logically possible to have another value in between.
* A discrete variable is, in effect, a variable that isn’t continuous. For a discrete variable, it’s sometimes the case that there’s nothing in the middle.
These definitions probably seem a bit abstract, but they’re pretty simple once you see some examples. For instance, response time is continuous. If Alan takes 2.3 seconds and Ben takes 3.1 seconds to respond to a question, then it’s possible for Cameron’s response time to lie in between, by taking 3.0 seconds. And of course it would also be possible for David to take 3.031 seconds to respond, meaning that his RT would lie in between Cameron’s and Ben’s. And while in practice it might be impossible to measure RT that precisely, it’s certainly possible in principle. Because we can always find a new value for RT in between any two other ones, we say that RT is continuous.
Discrete variables occur when this rule is violated. For example, nominal scale variables are always discrete: there isn’t a type of transportation that falls “in between” trains and bicycles, not in the strict mathematical way that 2.3 falls in between 2 and 3. So transportation type is discrete. Similarly, ordinal scale variables are always discrete: although “2nd place” does fall between “1st place” and “3rd place”, there’s nothing that can logically fall in between “1st place” and “2nd place”. Interval scale and ratio scale variables can go either way. As we saw above, response time (a ratio scale variable) is continuous. Temperature in degrees celsius (an interval scale variable) is also continuous. However, the year you went to school (an interval scale variable) is discrete. There’s no year in between 2002 and 2003. The number of questions you get right on a true-or-false test (a ratio scale variable) is also discrete: since a true-or-false question doesn’t allow you to be “partially correct”, there’s nothing in between 5/10 and 6/10. Table 2.1 summarises the relationship between the scales of measurement and the discrete/continuity distinction. Cells with a tick mark correspond to things that are possible. I’m trying to hammer this point home, because (a) some textbooks get this wrong, and (b) people very often say things like “discrete variable” when they mean “nominal scale variable”. It’s very unfortunate.
| | continuous | discrete |
| --- | --- | --- |
| nominal | | \(\checkmark\) |
| ordinal | | \(\checkmark\) |
| interval | \(\checkmark\) | \(\checkmark\) |
| ratio | \(\checkmark\) | \(\checkmark\) |
### 2.2.6 Some complexities
Okay, I know you’re going to be shocked to hear this, but … the real world is much messier than this little classification scheme suggests. Very few variables in real life actually fall into these nice neat categories, so you need to be kind of careful not to treat the scales of measurement as if they were hard and fast rules. It doesn’t work like that: they’re guidelines, intended to help you think about the situations in which you should treat different variables differently. Nothing more.
So let’s take a classic example, maybe the classic example, of a psychological measurement tool: the Likert scale. The humble Likert scale is the bread and butter tool of all survey design. You yourself have filled out hundreds, maybe thousands of them, and odds are you’ve even used one yourself. Suppose we have a survey question that looks like this:
Which of the following best describes your opinion of the statement that “all pirates are freaking awesome” …
and then the options presented to the participant are these:
* Strongly disagree
* Disagree
* Neither agree nor disagree
* Agree
* Strongly agree
This set of items is an example of a 5-point Likert scale: people are asked to choose among one of several (in this case 5) clearly ordered possibilities, generally with a verbal descriptor given in each case. However, it’s not necessary that all items be explicitly described. This is a perfectly good example of a 5-point Likert scale too:
* (1) Strongly disagree
* (2)
* (3)
* (4)
* (5) Strongly agree
Likert scales are very handy, if somewhat limited, tools. The question is, what kind of variable are they? They’re obviously discrete, since you can’t give a response of 2.5. They’re obviously not nominal scale, since the items are ordered; and they’re not ratio scale either, since there’s no natural zero.
But are they ordinal scale or interval scale? One argument says that we can’t really prove that the difference between “strongly agree” and “agree” is of the same size as the difference between “agree” and “neither agree nor disagree”. In fact, in everyday life it’s pretty obvious that they’re not the same at all. So this suggests that we ought to treat Likert scales as ordinal variables. On the other hand, in practice most participants do seem to take the whole “on a scale from 1 to 5” part fairly seriously, and they tend to act as if the differences between the five response options were fairly similar to one another. As a consequence, a lot of researchers treat Likert scale data as if it were interval scale. It’s not interval scale, but in practice it’s close enough that we usually think of it as being quasi-interval scale.
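If you do want to respect the ordinal structure, R lets you store Likert responses as an ordered factor rather than as plain numbers. Here’s a minimal sketch with a handful of made-up responses to the pirate item:

```r
# Hypothetical responses to the "all pirates are freaking awesome" item
likert_levels <- c("Strongly disagree", "Disagree",
                   "Neither agree nor disagree", "Agree", "Strongly agree")
opinion <- factor(c("Agree", "Strongly agree", "Disagree", "Agree"),
                  levels = likert_levels, ordered = TRUE)
table(opinion)        # counts per category, shown in their natural order
as.numeric(opinion)   # 4 5 2 4 -- the quasi-interval coding many researchers fall back on
```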
## 2.3 Assessing the reliability of a measurement
At this point we’ve thought a little bit about how to operationalise a theoretical construct and thereby create a psychological measure; and we’ve seen that by applying psychological measures we end up with variables, which can come in many different types. At this point, we should start discussing the obvious question: is the measurement any good? We’ll do this in terms of two related ideas: reliability and validity. Put simply, the reliability of a measure tells you how precisely you are measuring something, whereas the validity of a measure tells you how accurate the measure is. In this section I’ll talk about reliability; we’ll talk about validity in the next chapter.
Reliability is actually a very simple concept: it refers to the repeatability or consistency of your measurement. The measurement of my weight by means of a “bathroom scale” is very reliable: if I step on and off the scales over and over again, it’ll keep giving me the same answer. Measuring my intelligence by means of “asking my mum” is very unreliable: some days she tells me I’m a bit thick, and other days she tells me I’m a complete moron. Notice that this concept of reliability is different to the question of whether the measurements are correct (the correctness of a measurement relates to its validity). If I’m holding a sack of potatoes when I step on and off of the bathroom scales, the measurement will still be reliable: it will always give me the same answer. However, this highly reliable answer doesn’t match up to my true weight at all, and therefore it’s wrong. In technical terms, this is a reliable but invalid measurement. Similarly, while my mum’s estimate of my intelligence is a bit unreliable, she might be right. Maybe I’m just not too bright, and so while her estimate of my intelligence fluctuates pretty wildly from day to day, it’s basically right. So that would be an unreliable but valid measure. Of course, to some extent, notice that if my mum’s estimates are too unreliable, it’s going to be very hard to figure out which one of her many claims about my intelligence is actually the right one. To some extent, then, a very unreliable measure tends to end up being invalid for practical purposes; so much so that many people would say that reliability is necessary (but not sufficient) to ensure validity.
Okay, now that we’re clear on the distinction between reliability and validity, let’s have a think about the different ways in which we might measure reliability:
* Test-retest reliability. This relates to consistency over time: if we repeat the measurement at a later date, do we get the same answer?
* Inter-rater reliability. This relates to consistency across people: if someone else repeats the measurement (e.g., someone else rates my intelligence) will they produce the same answer?
* Parallel forms reliability. This relates to consistency across theoretically-equivalent measurements: if I use a different set of bathroom scales to measure my weight, does it give the same answer?
* Internal consistency reliability. If a measurement is constructed from lots of different parts that perform similar functions (e.g., a personality questionnaire result is added up across several questions), do the individual parts tend to give similar answers?
Not all measurements need to possess all forms of reliability. For instance, educational assessment can be thought of as a form of measurement. One of the subjects that I teach, Computational Cognitive Science, has an assessment structure that has a research component and an exam component (plus other things). The exam component is intended to measure something different from the research component, so the assessment as a whole has low internal consistency. However, within the exam there are several questions that are intended to (approximately) measure the same things, and those tend to produce similar outcomes; so the exam on its own has a fairly high internal consistency. Which is as it should be. You should only demand reliability in those situations where you want to be measuring the same thing!
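To make at least one of these ideas a bit more concrete, here’s a small simulated sketch of test-retest reliability (the numbers are invented): if two measurement occasions mostly reflect the same underlying quantity plus a bit of noise, the two sets of scores should correlate strongly.

```r
set.seed(1)
true_score <- rnorm(50, mean = 100, sd = 15)   # the (unobservable) thing we're measuring
time1 <- true_score + rnorm(50, sd = 5)        # measurement at time 1, with some error
time2 <- true_score + rnorm(50, sd = 5)        # the same measurement repeated later
cor(time1, time2)                              # a high correlation indicates good test-retest reliability
```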
## 2.4 The “role” of variables: predictors and outcomes
Okay, I’ve got one last piece of terminology that I need to explain to you before moving away from variables. Normally, when we do some research we end up with lots of different variables. Then, when we analyse our data we usually try to explain some of the variables in terms of some of the other variables. It’s important to keep the two roles “thing doing the explaining” and “thing being explained” distinct. So let’s be clear about this now. Firstly, we might as well get used to the idea of using mathematical symbols to describe variables, since it’s going to happen over and over again. Let’s denote the “to be explained” variable \(Y\), and denote the variables “doing the explaining” as \(X_1\), \(X_2\), etc.
Now, when we’re doing an analysis, we have different names for \(X\) and \(Y\), since they play different roles in the analysis. The classical names for these roles are independent variable (IV) and dependent variable (DV). The IV is the variable that you use to do the explaining (i.e., \(X\)) and the DV is the variable being explained (i.e., \(Y\)). The logic behind these names goes like this: if there really is a relationship between \(X\) and \(Y\) then we can say that \(Y\) depends on \(X\), and if we have designed our study “properly” then \(X\) isn’t dependent on anything else. However, I personally find those names horrible: they’re hard to remember and they’re highly misleading, because (a) the IV is never actually “independent of everything else” and (b) if there’s no relationship, then the DV doesn’t actually depend on the IV. And in fact, because I’m not the only person who thinks that IV and DV are just awful names, there are a number of alternatives that I find more appealing. The terms that I’ll use in these notes are predictors and outcomes. The idea here is that what you’re trying to do is use \(X\) (the predictors) to make guesses about \(Y\) (the outcomes).9 This is summarised in Table 2.2.
| role of the variable | classical name | modern name |
| --- | --- | --- |
| to be explained | dependent variable (DV) | outcome |
| to do the explaining | independent variable (IV) | predictor |
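Since we’ll be using R later in the book, it’s worth noting (as a preview, nothing more) that this terminology maps directly onto R’s formula notation: the outcome sits on the left of the `~` and the predictors sit on the right. The variable names and numbers below are invented for illustration.

```r
# A made-up data set with one outcome (Y) and two predictors (X1, X2)
dat <- data.frame(
  Y  = c(4.1, 5.3, 6.0, 7.2, 8.1, 6.5),
  X1 = c(1, 2, 3, 4, 5, 3),
  X2 = c(2.0, 1.5, 3.1, 2.8, 3.9, 2.2)
)

# "Explain Y using X1 and X2": outcome on the left, predictors on the right
model <- lm(Y ~ X1 + X2, data = dat)
summary(model)
```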
## 2.5 Experimental and non-experimental research
One of the big distinctions that you should be aware of is the distinction between “experimental research” and “non-experimental research”. When we make this distinction, what we’re really talking about is the degree of control that the researcher exercises over the people and events in the study.
### 2.5.1 Experimental research
The key feature of experimental research is that the researcher controls all aspects of the study, especially what participants experience during the study. In particular, the researcher manipulates or varies the predictor variables (IVs), and then allows the outcome variable (DV) to vary naturally. The idea here is to deliberately vary the predictors (IVs) to see if they have any causal effects on the outcomes. Moreover, in order to ensure that there’s no chance that something other than the predictor variables is causing the outcomes, everything else is kept constant or is in some other way “balanced” to ensure that it has no effect on the results. In practice, it’s almost impossible to think of everything else that might have an influence on the outcome of an experiment, much less keep it constant. The standard solution to this is randomisation: that is, we randomly assign people to different groups, and then give each group a different treatment (i.e., assign them different values of the predictor variables). We’ll talk more about randomisation later in this course, but for now, it’s enough to say that what randomisation does is minimise (but not eliminate) the chances that there are any systematic differences between groups.
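To make the idea of random assignment concrete, here’s a tiny R sketch. The participant labels and group names are made up; the key point is that `sample()` does the assigning, not the researcher.

```r
set.seed(1)                        # so the example is reproducible
participants <- paste0("P", 1:20)  # 20 hypothetical participant IDs

# Randomly allocate half of the participants to each condition
group <- sample(rep(c("treatment", "control"), each = 10))
data.frame(participant = participants, group = group)
```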
Let’s consider a very simple, completely unrealistic and grossly unethical example. Suppose you wanted to find out if smoking causes lung cancer. One way to do this would be to find people who smoke and people who don’t smoke, and look to see if smokers have a higher rate of lung cancer. This is not a proper experiment, since the researcher doesn’t have a lot of control over who is and isn’t a smoker. And this really matters: for instance, it might be that people who choose to smoke cigarettes also tend to have poor diets, or maybe they tend to work in asbestos mines, or whatever. The point here is that the groups (smokers and non-smokers) actually differ on lots of things, not just smoking. So it might be that the higher incidence of lung cancer among smokers is caused by something else, not by smoking per se. In technical terms, these other things (e.g. diet) are called “confounds”, and we’ll talk about those in just a moment.
In the meantime, let’s now consider what a proper experiment might look like. Recall that our concern was that smokers and non-smokers might differ in lots of ways. The solution, as long as you have no ethics, is to control who smokes and who doesn’t. Specifically, if we randomly divide participants into two groups, and force half of them to become smokers, then it’s very unlikely that the groups will differ in any respect other than the fact that half of them smoke. That way, if our smoking group gets cancer at a higher rate than the non-smoking group, then we can feel pretty confident that (a) smoking does cause cancer and (b) we’re murderers.
### 2.5.2 Non-experimental research
Non-experimental research is a broad term that covers “any study in which the researcher doesn’t have quite as much control as they do in an experiment”. Obviously, control is something that scientists like to have, but as the previous example illustrates, there are lots of situations in which you can’t or shouldn’t try to obtain that control. Since it’s grossly unethical (and almost certainly criminal) to force people to smoke in order to find out if they get cancer, this is a good example of a situation in which you really shouldn’t try to obtain experimental control. But there are other reasons too. Even leaving aside the ethical issues, our “smoking experiment” does have a few other issues. For instance, when I suggested that we “force” half of the people to become smokers, I must have been talking about starting with a sample of non-smokers, and then forcing them to become smokers. While this sounds like the kind of solid, evil experimental design that a mad scientist would love, it might not be a very sound way of investigating the effect in the real world. For instance, suppose that smoking only causes lung cancer when people have poor diets, and suppose also that people who normally smoke do tend to have poor diets. However, since the “smokers” in our experiment aren’t “natural” smokers (i.e., we forced non-smokers to become smokers; they didn’t take on all of the other normal, real life characteristics that smokers might tend to possess) they probably have better diets. As such, in this silly example they wouldn’t get lung cancer, and our experiment will fail, because it violates the structure of the “natural” world (the technical name for this is an “artifactual” result; see later).
One distinction worth making between two types of non-experimental research is the difference between quasi-experimental research and case studies. The example I discussed earlier – in which we wanted to examine incidence of lung cancer among smokers and non-smokers, without trying to control who smokes and who doesn’t – is a quasi-experimental design. That is, it’s the same as an experiment, but we don’t control the predictors (IVs). We can still use statistics to analyse the results, it’s just that we have to be a lot more careful.
The alternative approach, case studies, aims to provide a very detailed description of one or a few instances. In general, you can’t use statistics to analyse the results of case studies, and it’s usually very hard to draw any general conclusions about “people in general” from a few isolated examples. However, case studies are very useful in some situations. Firstly, there are situations where you don’t have any alternative: neuropsychology has this issue a lot. Sometimes, you just can’t find a lot of people with brain damage in a specific area, so the only thing you can do is describe those cases that you do have in as much detail and with as much care as you can. However, there are also some genuine advantages to case studies: because you don’t have as many people to study, you have the ability to invest lots of time and effort trying to understand the specific factors at play in each case. This is a very valuable thing to do. As a consequence, case studies can complement the more statistically-oriented approaches that you see in experimental and quasi-experimental designs. We won’t talk much about case studies in these lectures, but they are nevertheless very valuable tools!
## 2.6 Assessing the validity of a study
More than any other thing, a scientist wants their research to be “valid”. The conceptual idea behind validity is very simple: can you trust the results of your study? If not, the study is invalid. However, while it’s easy to state, in practice it’s much harder to check validity than it is to check reliability. And in all honesty, there’s no precise, clearly agreed upon notion of what validity actually is. In fact, there are lots of different kinds of validity, each of which raises its own issues, and not all forms of validity are relevant to all studies. I’m going to talk about five different types:
* Internal validity
* External validity
* Construct validity
* Face validity
* Ecological validity
To give you a quick guide as to what matters here… (1) Internal and external validity are the most important, since they tie directly to the fundamental question of whether your study really works. (2) Construct validity asks whether you’re measuring what you think you are. (3) Face validity isn’t terribly important except insofar as you care about “appearances”. (4) Ecological validity is a special case of face validity that corresponds to a kind of appearance that you might care about a lot.
### 2.6.1 Internal validity
Internal validity refers to the extent to which you are able to draw the correct conclusions about the causal relationships between variables. It’s called “internal” because it refers to the relationships between things “inside” the study. Let’s illustrate the concept with a simple example. Suppose you’re interested in finding out whether a university education makes you write better. To do so, you get a group of first year students, ask them to write a 1000 word essay, and count the number of spelling and grammatical errors they make. Then you find some third-year students, who obviously have had more of a university education than the first-years, and repeat the exercise. And let’s suppose it turns out that the third-year students produce fewer errors. And so you conclude that a university education improves writing skills. Right? Except… the big problem that you have with this experiment is that the third-year students are older, and they’ve had more experience with writing things. So it’s hard to know for sure what the causal relationship is: Do older people write better? Or people who have had more writing experience? Or people who have had more education? Which of the above is the true cause of the superior performance of the third-years? Age? Experience? Education? You can’t tell. This is an example of a failure of internal validity, because your study doesn’t properly tease apart the causal relationships between the different variables.
### 2.6.2 External validity
External validity relates to the generalisability of your findings. That is, to what extent do you expect to see the same pattern of results in “real life” as you saw in your study? To put it a bit more precisely, any study that you do in psychology will involve a fairly specific set of questions or tasks, will occur in a specific environment, and will involve participants that are drawn from a particular subgroup. So, if it turns out that the results don’t actually generalise to people and situations beyond the ones that you studied, then what you’ve got is a lack of external validity.
The classic example of this issue is the fact that a very large proportion of studies in psychology will use undergraduate psychology students as the participants. Obviously, however, the researchers don’t care only about psychology students; they care about people in general. Given that, a study that uses only psych students as participants always carries a risk of lacking external validity. That is, if there’s something “special” about psychology students that makes them different to the general populace in some relevant respect, then we may start worrying about a lack of external validity.
That said, it is absolutely critical to realise that a study that uses only psychology students does not necessarily have a problem with external validity. I’ll talk about this again later, but it’s such a common mistake that I’m going to mention it here. The external validity is threatened by the choice of population if (a) the population from which you sample your participants is very narrow (e.g., psych students), and (b) the narrow population that you sampled from is systematically different from the general population, *in some respect that is relevant to the psychological phenomenon that you intend to study*. The italicised part is the bit that lots of people forget: it is true that psychology undergraduates differ from the general population in lots of ways, and so a study that uses only psych students may have problems with external validity. However, if those differences aren’t very relevant to the phenomenon that you’re studying, then there’s nothing to worry about. To make this a bit more concrete, here are two extreme examples:
* You want to measure “attitudes of the general public towards psychotherapy”, but all of your participants are psychology students. This study would almost certainly have a problem with external validity.
* You want to measure the effectiveness of a visual illusion, and your participants are all psychology students. This study is very unlikely to have a problem with external validity.
Having just spent the last couple of paragraphs focusing on the choice of participants (since that’s the big issue that everyone tends to worry most about), it’s worth remembering that external validity is a broader concept. The following are also examples of things that might pose a threat to external validity, depending on what kind of study you’re doing:
* People might answer a “psychology questionnaire” in a manner that doesn’t reflect what they would do in real life.
* Your lab experiment on (say) “human learning” has a different structure to the learning problems people face in real life.
### 2.6.3 Construct validity
Construct validity is basically a question of whether you’re measuring what you want to be measuring. A measurement has good construct validity if it is actually measuring the correct theoretical construct, and bad construct validity if it doesn’t. To give a very simple (if ridiculous) example, suppose I’m trying to investigate the rates with which university students cheat on their exams. And the way I attempt to measure it is by asking the cheating students to stand up in the lecture theatre so that I can count them. When I do this with a class of 300 students, 0 people claim to be cheaters. So I therefore conclude that the proportion of cheaters in my class is 0%. Clearly this is a bit ridiculous. But the point here isn’t to give a deep methodological example, but rather to explain what construct validity is. The problem with my measure is that while I’m trying to measure “the proportion of people who cheat” what I’m actually measuring is “the proportion of people stupid enough to own up to cheating, or bloody minded enough to pretend that they do”. Obviously, these aren’t the same thing! So my study has gone wrong, because my measurement has very poor construct validity.
### 2.6.4 Face validity
Face validity simply refers to whether or not a measure “looks like” it’s doing what it’s supposed to, nothing more. If I design a test of intelligence, and people look at it and they say “no, that test doesn’t measure intelligence”, then the measure lacks face validity. It’s as simple as that. Obviously, face validity isn’t very important from a pure scientific perspective. After all, what we care about is whether or not the measure actually does what it’s supposed to do, not whether it looks like it does what it’s supposed to do. As a consequence, we generally don’t care very much about face validity. That said, the concept of face validity serves three useful pragmatic purposes:
* Sometimes, an experienced scientist will have a “hunch” that a particular measure won’t work. While these sorts of hunches have no strict evidentiary value, it’s often worth paying attention to them. Oftentimes people have knowledge that they can’t quite verbalise, so there might be something to worry about even if you can’t quite say why. In other words, when someone you trust criticises the face validity of your study, it’s worth taking the time to think more carefully about your design to see if you can think of reasons why it might go awry. Mind you, if you don’t find any reason for concern, then you should probably not worry: after all, face validity really doesn’t matter much.
* Often (very often), completely uninformed people will also have a “hunch” that your research is crap. And they’ll criticise it on the internet or something. On close inspection, you’ll often notice that these criticisms are actually focused entirely on how the study “looks”, but not on anything deeper. The concept of face validity is useful for gently explaining to people that they need to substantiate their arguments further.
* Expanding on the last point, if the beliefs of untrained people are critical (e.g., this is often the case for applied research where you actually want to convince policy makers of something or other) then you have to care about face validity. Simply because – whether you like it or not – a lot of people will use face validity as a proxy for real validity. If you want the government to change a law on scientific, psychological grounds, then it won’t matter how good your studies “really” are. If they lack face validity, you’ll find that politicians ignore you. Of course, it’s somewhat unfair that policy often depends more on appearance than fact, but that’s how things go.
### 2.6.5 Ecological validity
Ecological validity is a different notion of validity, which is similar to external validity, but less important. The idea is that, in order to be ecologically valid, the entire set up of the study should closely approximate the real world scenario that is being investigated. In a sense, ecological validity is a kind of face validity – it relates mostly to whether the study “looks” right, but with a bit more rigour to it. To be ecologically valid, the study has to look right in a fairly specific way. The idea behind it is the intuition that a study that is ecologically valid is more likely to be externally valid. It’s no guarantee, of course. But the nice thing about ecological validity is that it’s much easier to check whether a study is ecologically valid than it is to check whether a study is externally valid. A simple example would be eyewitness identification studies. Most of these studies tend to be done in a university setting, often with a fairly simple array of faces to look at rather than a line up. The length of time between seeing the “criminal” and being asked to identify the suspect in the “line up” is usually shorter. The “crime” isn’t real, so there’s no chance of the witness being scared, and there are no police officers present, so there’s not as much chance of feeling pressured. These things all mean that the study definitely lacks ecological validity. They might (but might not) mean that it also lacks external validity.
## 2.7 Confounds, artifacts and other threats to validity
If we look at the issue of validity in the most general fashion, the two biggest worries that we have are confounds and artifacts. These two terms are defined in the following way:
* Confound: A confound is an additional, often unmeasured variable10 that turns out to be related to both the predictors and the outcomes. The existence of confounds threatens the internal validity of the study because you can’t tell whether the predictor causes the outcome, or if the confounding variable causes it, etc.
* Artifact: A result is said to be “artifactual” if it only holds in the special situation that you happened to test in your study. The possibility that your result is an artifact describes a threat to your external validity, because it raises the possibility that you can’t generalise your results to the actual population that you care about.
As a general rule confounds are a bigger concern for non-experimental studies, precisely because they’re not proper experiments: by definition, you’re leaving lots of things uncontrolled, so there’s a lot of scope for confounds working their way into your study. Experimental research tends to be much less vulnerable to confounds: the more control you have over what happens during the study, the more you can prevent confounds from appearing.
However, there’s always swings and roundabouts, and when we start thinking about artifacts rather than confounds, the shoe is very firmly on the other foot. For the most part, artifactual results tend to be more of a concern for experimental studies than for non-experimental studies. To see this, it helps to realise that the reason that a lot of studies are non-experimental is precisely because what the researcher is trying to do is examine human behaviour in a more naturalistic context. By working in a more real-world context, you lose experimental control (making yourself vulnerable to confounds) but because you tend to be studying human psychology “in the wild” you reduce the chances of getting an artifactual result. Or, to put it another way, when you take psychology out of the wild and bring it into the lab (which we usually have to do to gain our experimental control), you always run the risk of accidentally studying something different than you wanted to study: which is more or less the definition of an artifact.
Be warned though: the above is a rough guide only. It’s absolutely possible to have confounds in an experiment, and to get artifactual results with non-experimental studies. This can happen for all sorts of reasons, not least of which is researcher error. In practice, it’s really hard to think everything through ahead of time, and even very good researchers make mistakes. But other times it’s unavoidable, simply because the researcher has ethics (e.g., see 2.7.5).
Okay. There’s a sense in which almost any threat to validity can be characterised as a confound or an artifact: they’re pretty vague concepts. So let’s have a look at some of the most common examples…
### 2.7.1 History effects
History effects refer to the possibility that specific events may occur during the study itself that might influence the outcomes. For instance, something might happen in between a pre-test and a post-test. Or, in between testing participant 23 and participant 24. Alternatively, it might be that you’re looking at an older study, which was perfectly valid for its time, but the world has changed enough since then that the conclusions are no longer trustworthy. Examples of things that would count as history effects:
* You’re interested in how people think about risk and uncertainty. You started your data collection in December 2010. But finding participants and collecting data takes time, so you’re still finding new people in February 2011. Unfortunately for you (and even more unfortunately for others), the Queensland floods occurred in January 2011, causing billions of dollars of damage and killing many people. Not surprisingly, the people tested in February 2011 express quite different beliefs about handling risk than the people tested in December 2010. Which (if any) of these reflects the “true” beliefs of participants? I think the answer is probably both: the Queensland floods genuinely changed the beliefs of the Australian public, though possibly only temporarily. The key thing here is that the “history” of the people tested in February is quite different to people tested in December.
* You’re testing the psychological effects of a new anti-anxiety drug. So what you do is measure anxiety before administering the drug (e.g., by self-report, and taking physiological measures, let’s say), then you administer the drug, and then you take the same measures afterwards. In the middle, however, because your labs are in Los Angeles, there’s an earthquake, which increases the anxiety of the participants.
### 2.7.2 Maturation effects
As with history effects, maturational effects are fundamentally about change over time. However, maturation effects aren’t in response to specific events. Rather, they relate to how people change on their own over time: we get older, we get tired, we get bored, etc. Some examples of maturation effects:
* When doing developmental psychology research, you need to be aware that children grow up quite rapidly. So, suppose that you want to find out whether some educational trick helps with vocabulary size among 3 year olds. One thing that you need to be aware of is that the vocabulary size of children that age is growing at an incredible rate (multiple words per day), all on its own. If you design your study without taking this maturational effect into account, then you won’t be able to tell if your educational trick works.
* When running a very long experiment in the lab (say, something that goes for 3 hours), it’s very likely that people will begin to get bored and tired, and that this maturational effect will cause performance to decline, regardless of anything else going on in the experiment.
### 2.7.3 Repeated testing effects
An important type of history effect is the effect of repeated testing. Suppose I want to take two measurements of some psychological construct (e.g., anxiety). One thing I might be worried about is if the first measurement has an effect on the second measurement. In other words, this is a history effect in which the “event” that influences the second measurement is the first measurement itself! This is not at all uncommon. Examples of this include:
* Learning and practice: e.g., “intelligence” at time 2 might appear to go up relative to time 1 because participants learned the general rules of how to solve “intelligence-test-style” questions during the first testing session.
* Familiarity with the testing situation: e.g., if people are nervous at time 1, this might make performance go down; after sitting through the first testing situation, they might calm down a lot precisely because they’ve seen what the testing looks like.
* Auxiliary changes caused by testing: e.g., if a questionnaire assessing mood is boring, then the mood measured at time 2 is more likely to be “bored”, precisely because of the boring measurement made at time 1.
### 2.7.4 Selection bias
Selection bias is a pretty broad term. Suppose that you’re running an experiment with two groups of participants, where each group gets a different “treatment”, and you want to see if the different treatments lead to different outcomes. However, suppose that, despite your best efforts, you’ve ended up with a gender imbalance across groups (say, group A has 80% females and group B has 50% females). It might sound like this could never happen, but trust me, it can. This is an example of a selection bias, in which the people “selected into” the two groups have different characteristics. If any of those characteristics turns out to be relevant (say, your treatment works better on females than males) then you’re in a lot of trouble.
### 2.7.5 Differential attrition
One quite subtle danger to be aware of is called differential attrition, which is a kind of selection bias that is caused by the study itself. Suppose that, for the first time ever in the history of psychology, I manage to find the perfectly balanced and representative sample of people. I start running “Dan’s incredibly long and tedious experiment” on my perfect sample, but then, because my study is incredibly long and tedious, lots of people start dropping out. I can’t stop this: as we’ll discuss later in the chapter on research ethics, participants absolutely have the right to stop doing any experiment, any time, for whatever reason they feel like, and as researchers we are morally (and professionally) obliged to remind people that they do have this right. So, suppose that “Dan’s incredibly long and tedious experiment” has a very high drop out rate. What do you suppose the odds are that this drop out is random? Answer: zero. Almost certainly, the people who remain are more conscientious, more tolerant of boredom etc than those that leave. To the extent that (say) conscientiousness is relevant to the psychological phenomenon that I care about, this attrition can decrease the validity of my results.
When thinking about the effects of differential attrition, it is sometimes helpful to distinguish between two different types. The first is homogeneous attrition, in which the attrition effect is the same for all groups, treatments or conditions. In the example I gave above, the differential attrition would be homogeneous if (and only if) the easily bored participants are dropping out of all of the conditions in my experiment at about the same rate. In general, the main effect of homogeneous attrition is likely to be that it makes your sample unrepresentative. As such, the biggest worry that you’ll have is that the generalisability of the results decreases: in other words, you lose external validity.
The second type of differential attrition is heterogeneous attrition, in which the attrition effect is different for different groups. This is a much bigger problem: not only do you have to worry about your external validity, you also have to worry about your internal validity too. To see why this is the case, let’s consider a very dumb study in which I want to see if insulting people makes them act in a more obedient way. Why anyone would actually want to study that I don’t know, but let’s suppose I really, deeply cared about this. So, I design my experiment with two conditions. In the “treatment” condition, the experimenter insults the participant and then gives them a questionnaire designed to measure obedience. In the “control” condition, the experimenter engages in a bit of pointless chitchat and then gives them the questionnaire. Leaving aside the questionable scientific merits and dubious ethics of such a study, let’s have a think about what might go wrong here. As a general rule, when someone insults me to my face, I tend to get much less co-operative. So, there’s a pretty good chance that a lot more people are going to drop out of the treatment condition than the control condition. And this drop out isn’t going to be random. The people most likely to drop out would probably be the people who don’t care all that much about the importance of obediently sitting through the experiment. Since the most bloody minded and disobedient people all left the treatment group but not the control group, we’ve introduced a confound: the people who actually took the questionnaire in the treatment group were already more likely to be dutiful and obedient than the people in the control group. In short, in this study insulting people doesn’t make them more obedient: it makes the more disobedient people leave the experiment! The internal validity of this experiment is completely shot.
### 2.7.6 Non-response bias
Non-response bias is closely related to selection bias, and to differential attrition. The simplest version of the problem goes like this. You mail out a survey to 1000 people, and only 300 of them reply. The 300 people who replied are almost certainly not a random subsample. People who respond to surveys are systematically different to people who don’t. This introduces a problem when trying to generalise from those 300 people who replied, to the population at large; since you now have a very non-random sample. The issue of non-response bias is more general than this, though. Among the (say) 300 people that did respond to the survey, you might find that not everyone answers every question. If (say) 80 people chose not to answer one of your questions, does this introduce problems? As always, the answer is maybe. If the question that wasn’t answered was on the last page of the questionnaire, and those 80 surveys were returned with the last page missing, there’s a good chance that the missing data isn’t a big deal: probably the pages just fell off. However, if the question that 80 people didn’t answer was the most confrontational or invasive personal question in the questionnaire, then almost certainly you’ve got a problem. In essence, what you’re dealing with here is what’s called the problem of missing data. If the data that is missing was “lost” randomly, then it’s not a big problem. If it’s missing systematically, then it can be a big problem.
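Here’s a small R simulation (with invented numbers) of why systematically missing data is the dangerous kind: if the people with high scores on a sensitive question are the ones most likely to skip it, the average computed from the answers you do have ends up biased, whereas haphazardly missing answers mostly just add noise.

```r
set.seed(10)
true_scores <- rnorm(1000, mean = 50, sd = 10)   # what everyone "really" thinks

# Missing completely at random: 30% of answers lost haphazardly
mcar <- true_scores
mcar[sample(1000, 300)] <- NA

# Missing systematically: high scorers are more likely to skip the question
p_skip <- plogis((true_scores - 60) / 5)
syst <- true_scores
syst[runif(1000) < p_skip] <- NA

mean(true_scores)          # the truth
mean(mcar, na.rm = TRUE)   # close to the truth
mean(syst, na.rm = TRUE)   # noticeably too low
```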
### 2.7.7 Regression to the mean
Regression to the mean is a curious variation on selection bias. It refers to any situation where you select data based on an extreme value on some measure. Because the measure has natural variation, it almost certainly means that when you take a subsequent measurement, that later measurement will be less extreme than the first one, purely by chance.
Here’s an example. Suppose I’m interested in whether a psychology education has an adverse effect on very smart kids. To do this, I find the 20 psych I students with the best high school grades and look at how well they’re doing at university. It turns out that they’re doing a lot better than average, but they’re not topping the class at university, even though they did top their classes at high school. What’s going on? The natural first thought is that this must mean that the psychology classes must be having an adverse effect on those students. However, while that might very well be the explanation, it’s more likely that what you’re seeing is an example of “regression to the mean”. To see how it works, let’s take a moment to think about what is required to get the best mark in a class, regardless of whether that class be at high school or at university. When you’ve got a big class, there are going to be lots of very smart people enrolled. To get the best mark you have to be very smart, work very hard, and be a bit lucky. The exam has to ask just the right questions for your idiosyncratic skills, and you have to not make any dumb mistakes (we all do that sometimes) when answering them. And that’s the thing: intelligence and hard work are transferrable from one class to the next. Luck isn’t. The people who got lucky in high school won’t be the same as the people who get lucky at university. That’s the very definition of “luck”. The consequence of this is that, when you select people at the very extreme values of one measurement (the top 20 students), you’re selecting for hard work, skill and luck. But because the luck doesn’t transfer to the second measurement (only the skill and work), these people will all be expected to drop a little bit when you measure them a second time (at university). So their scores fall back a little bit, back towards everyone else. This is regression to the mean.
Regression to the mean is surprisingly common. For instance, if two very tall people have kids, their children will tend to be taller than average, but not as tall as the parents. The reverse happens with very short parents: two very short parents will tend to have short children, but nevertheless those kids will tend to be taller than the parents. It can also be extremely subtle. For instance, there have been studies done that suggested that people learn better from negative feedback than from positive feedback. However, the way that people tried to show this was to give people positive reinforcement whenever they did good, and negative reinforcement when they did bad. And what you see is that after the positive reinforcement, people tended to do worse; but after the negative reinforcement they tended to do better. But! Notice that there’s a selection bias here: when people do very well, you’re selecting for “high” values, and so you should expect (because of regression to the mean) that performance on the next trial should be worse, regardless of whether reinforcement is given. Similarly, after a bad trial, people will tend to improve all on their own. The apparent superiority of negative feedback is an artifact caused by regression to the mean (see Kahneman and Tversky 1973 for discussion).
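If you’d like to see regression to the mean happen in front of you, here’s a small R simulation of the “top 20 students” story. All the numbers are invented; the key ingredient is that each mark is skill (which transfers) plus luck (which doesn’t).

```r
set.seed(42)
n <- 300
skill    <- rnorm(n, mean = 70, sd = 8)   # stable ability, carries over
mark_hs  <- skill + rnorm(n, sd = 6)      # high school mark = skill + luck
mark_uni <- skill + rnorm(n, sd = 6)      # university mark = skill + new luck

top20 <- order(mark_hs, decreasing = TRUE)[1:20]
mean(mark_hs[top20])    # very high: selected for skill AND good luck
mean(mark_uni[top20])   # still above average, but noticeably lower
mean(mark_uni)          # the overall average, for comparison
```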
### 2.7.8 Experimenter bias
Experimenter bias can come in multiple forms. The basic idea is that the experimenter, despite the best of intentions, can accidentally end up influencing the results of the experiment by subtly communicating the “right answer” or the “desired behaviour” to the participants. Typically, this occurs because the experimenter has special knowledge that the participant does not – either the right answer to the questions being asked, or knowledge of the expected pattern of performance for the condition that the participant is in, and so on. The classic example of this happening is the case study of “Clever Hans”, which dates back to 1907 (Pfungst 1911; Hothersall 2004). Clever Hans was a horse that apparently was able to read and count, and perform other human-like feats of intelligence. After Clever Hans became famous, psychologists started examining his behaviour more closely. It turned out that – not surprisingly – Hans didn’t know how to do maths. Rather, Hans was responding to the human observers around him: they did know how to count, and the horse had learned to change its behaviour when people changed theirs.
The general solution to the problem of experimenter bias is to engage in double blind studies, where neither the experimenter nor the participant knows which condition the participant is in, or knows what the desired behaviour is. This provides a very good solution to the problem, but it’s important to recognise that it’s not quite ideal, and hard to pull off perfectly. For instance, the obvious way that I could try to construct a double blind study is to have one of my Ph.D. students (one who doesn’t know anything about the experiment) run the study. That feels like it should be enough. The only person (me) who knows all the details (e.g., correct answers to the questions, assignments of participants to conditions) has no interaction with the participants, and the person who does all the talking to people (the Ph.D. student) doesn’t know anything. Except, that last part is very unlikely to be true. In order for the Ph.D. student to run the study effectively, they need to have been briefed by me, the researcher. And, as it happens, the Ph.D. student also knows me, and knows a bit about my general beliefs about people and psychology (e.g., I tend to think humans are much smarter than psychologists give them credit for). As a result of all this, it’s almost impossible for the experimenter to avoid knowing a little bit about what expectations I have. And even a little bit of knowledge can have an effect: suppose the experimenter accidentally conveys the fact that the participants are expected to do well in this task. Well, there’s a thing called the “Pygmalion effect”: if you expect great things of people, they’ll rise to the occasion; but if you expect them to fail, they’ll do that too. In other words, the expectations become a self-fulfilling prophecy.
### 2.7.9 Demand effects and reactivity
When talking about experimenter bias, the worry is that the experimenter’s knowledge or desires for the experiment are communicated to the participants, and that these affect people’s behaviour (Rosenthal 1966). However, even if you manage to stop this from happening, it’s almost impossible to stop people from knowing that they’re part of a psychological study. And the mere fact of knowing that someone is watching/studying you can have a pretty big effect on behaviour. This is generally referred to as reactivity or demand effects. The basic idea is captured by the Hawthorne effect: people alter their performance because of the attention that the study focuses on them. The effect takes its name from the “Hawthorne Works” factory outside of Chicago (see Adair 1984). A study done in the 1920s looking at the effects of lighting on worker productivity at the factory turned out to be an effect of the fact that the workers knew they were being studied, rather than the lighting.
To get a bit more specific about some of the ways in which the mere fact of being in a study can change how people behave, it helps to think like a social psychologist and look at some of the roles that people might adopt during an experiment, but might not adopt if the corresponding events were occurring in the real world:
* The good participant tries to be too helpful to the researcher: he or she seeks to figure out the experimenter’s hypotheses and confirm them.
* The negative participant does the exact opposite of the good participant: he or she seeks to break or destroy the study or the hypothesis in some way.
* The faithful participant is unnaturally obedient: he or she seeks to follow instructions perfectly, regardless of what might have happened in a more realistic setting.
* The apprehensive participant gets nervous about being tested or studied, so much so that his or her behaviour becomes highly unnatural, or overly socially desirable.
### 2.7.10 Placebo effects
The placebo effect is a specific type of demand effect that we worry a lot about. It refers to the situation where the mere fact of being treated causes an improvement in outcomes. The classic example comes from clinical trials: if you give people a completely chemically inert drug and tell them that it’s a cure for a disease, they will tend to get better faster than people who aren’t treated at all. In other words, it is people’s belief that they are being treated that causes the improved outcomes, not the drug.
### 2.7.11 Situation, measurement and subpopulation effects
In some respects, these are catch-all terms for “all other threats to external validity”. They refer to the fact that the choice of subpopulation from which you draw your participants, the location, timing and manner in which you run your study (including who collects the data) and the tools that you use to make your measurements might all be influencing the results. Specifically, the worry is that these things might be influencing the results in such a way that the results won’t generalise to a wider array of people, places and measures.
### 2.7.12 Fraud, deception and self-deception
It is difficult to get a man to understand something, when his salary depends on his not understanding it.
– Upton Sinclair
One final thing that I feel like I should mention. While reading what the textbooks often have to say about assessing the validity of the study, I couldn’t help but notice that they seem to make the assumption that the researcher is honest. I find this hilarious. While the vast majority of scientists are honest, in my experience at least, some are not.11 Not only that, as I mentioned earlier, scientists are not immune to belief bias – it’s easy for a researcher to end up deceiving themselves into believing the wrong thing, and this can lead them to conduct subtly flawed research, and then hide those flaws when they write it up. So you need to consider not only the (probably unlikely) possibility of outright fraud, but also the (probably quite common) possibility that the research is unintentionally “slanted”. I opened a few standard textbooks and didn’t find much of a discussion of this problem, so here’s my own attempt to list a few ways in which these issues can arise:
* Data fabrication. Sometimes, people just make up the data. This is occasionally done with “good” intentions. For instance, the researcher believes that the fabricated data do reflect the truth, and may actually reflect “slightly cleaned up” versions of actual data. On other occasions, the fraud is deliberate and malicious. Some high-profile examples where data fabrication has been alleged or shown include Cyril Burt (a psychologist who is thought to have fabricated some of his data), Andrew Wakefield (who has been accused of fabricating his data connecting the MMR vaccine to autism) and Hwang Woo-suk (who falsified a lot of his data on stem cell research).
* Hoaxes. Hoaxes share a lot of similarities with data fabrication, but they differ in the intended purpose. A hoax is often a joke, and many of them are intended to be (eventually) discovered. Often, the point of a hoax is to discredit someone or some field. There are quite a few well-known scientific hoaxes that have occurred over the years (e.g., Piltdown man), some of which were deliberate attempts to discredit particular fields of research (e.g., the Sokal affair).
* Data misrepresentation. While fraud gets most of the headlines, it’s much more common in my experience to see data being misrepresented. When I say this, I’m not referring to newspapers getting it wrong (which they do, almost always). I’m referring to the fact that often, the data don’t actually say what the researchers think they say. My guess is that, almost always, this isn’t the result of deliberate dishonesty, it’s due to a lack of sophistication in the data analyses. For instance, think back to the example of Simpson’s paradox that I discussed in the beginning of these notes. It’s very common to see people present “aggregated” data of some kind; and sometimes, when you dig deeper and find the raw data yourself, you find that the aggregated data tell a different story to the disaggregated data. Alternatively, you might find that some aspect of the data is being hidden, because it tells an inconvenient story (e.g., the researcher might choose not to refer to a particular variable). There’s a lot of variants on this; many of which are very hard to detect.
* Study “misdesign”. Okay, this one is subtle. Basically, the issue here is that a researcher designs a study that has built-in flaws, and those flaws are never reported in the paper. The data that are reported are completely real, and are correctly analysed, but they are produced by a study that is actually quite wrongly put together. The researcher really wants to find a particular effect, and so the study is set up in such a way as to make it “easy” to (artifactually) observe that effect. One sneaky way to do this – in case you’re feeling like dabbling in a bit of fraud yourself – is to design an experiment in which it’s obvious to the participants what they’re “supposed” to be doing, and then let reactivity work its magic for you. If you want, you can add all the trappings of double blind experimentation etc. It won’t make a difference, since the study materials themselves are subtly telling people what you want them to do. When you write up the results, the fraud won’t be obvious to the reader: what’s obvious to the participant when they’re in the experimental context isn’t always obvious to the person reading the paper. Of course, the way I’ve described this makes it sound like it’s always fraud: probably there are cases where this is done deliberately, but in my experience the bigger concern has been with unintentional misdesign. The researcher believes … and so the study just happens to end up with a built in flaw, and that flaw then magically erases itself when the study is written up for publication.
* Data mining & post hoc hypothesising. Another way in which the authors of a study can more or less lie about what they found is by engaging in what’s referred to as “data mining”. As we’ll discuss later in the class, if you keep trying to analyse your data in lots of different ways, you’ll eventually find something that “looks” like a real effect but isn’t. This is referred to as “data mining”. It used to be quite rare because data analysis used to take weeks, but now that everyone has very powerful statistical software on their computers, it’s becoming very common. Data mining per se isn’t “wrong”, but the more that you do it, the bigger the risk you’re taking. The thing that is wrong, and I suspect is very common, is unacknowledged data mining. That is, the researcher runs every possible analysis known to humanity, finds the one that works, and then pretends that this was the only analysis that they ever conducted. Worse yet, they often “invent” a hypothesis after looking at the data, to cover up the data mining. To be clear: it’s not wrong to change your beliefs after looking at the data, and to reanalyse your data using your new “post hoc” hypotheses. What is wrong (and, I suspect, common) is failing to acknowledge that you’ve done so. If you acknowledge that you did it, then other researchers are able to take your behaviour into account. If you don’t, then they can’t. And that makes your behaviour deceptive. Bad!
* Publication bias & self-censoring. Finally, a pervasive bias is “non-reporting” of negative results. This is almost impossible to prevent. Journals don’t publish every article that is submitted to them: they prefer to publish articles that find “something”. So, if 20 people run an experiment looking at whether reading Finnegans Wake causes insanity in humans, and 19 of them find that it doesn’t, which one do you think is going to get published? Obviously, it’s the one study that did find that Finnegans Wake causes insanity.12 This is an example of a publication bias: since no-one ever published the 19 studies that didn’t find an effect, a naive reader would never know that they existed. Worse yet, most researchers “internalise” this bias, and end up self-censoring their research. Knowing that negative results aren’t going to be accepted for publication, they never even try to report them. As a friend of mine says “for every experiment that you get published, you also have 10 failures”. And she’s right. The catch is, while some (maybe most) of those studies are failures for boring reasons (e.g. you stuffed something up) others might be genuine “null” results that you ought to acknowledge when you write up the “good” experiment. And telling which is which is often hard to do. A good place to start is a paper by Ioannidis (2005) with the depressing title “Why most published research findings are false”. I’d also suggest taking a look at work by Kühberger, Fritz, and Scherndl (2014) presenting statistical evidence that this actually happens in psychology.
There’s probably a lot more issues like this to think about, but that’ll do to start with. What I really want to point out is the blindingly obvious truth that real world science is conducted by actual humans, and only the most gullible of people automatically assumes that everyone else is honest and impartial. Actual scientists aren’t usually that naive, but for some reason the world likes to pretend that we are, and the textbooks we usually write seem to reinforce that stereotype.
## 2.8 Summary
This chapter isn’t really meant to provide a comprehensive discussion of psychological research methods: it would require another volume just as long as this one to do justice to the topic. However, in real life statistics and study design are tightly intertwined, so it’s very handy to discuss some of the key topics. In this chapter, I’ve briefly discussed the following topics:
* Introduction to psychological measurement. What does it mean to operationalise a theoretical construct? What does it mean to have variables and take measurements?
* Scales of measurement and types of variables. Remember that there are two different distinctions here: there’s the difference between discrete and continuous data, and there’s the difference between the four different scale types (nominal, ordinal, interval and ratio).
* Reliability of a measurement. If I measure the “same” thing twice, should I expect to see the same result? Only if my measure is reliable. But what does it mean to talk about doing the “same” thing? Well, that’s why we have different types of reliability. Make sure you remember what they are.
* Terminology: predictors and outcomes. What roles do variables play in an analysis? Can you remember the difference between predictors and outcomes? Dependent and independent variables? Etc.
* Experimental and non-experimental research designs. What makes an experiment an experiment? Is it a nice white lab coat, or does it have something to do with researcher control over variables?
* Validity and its threats. Does your study measure what you want it to? How might things go wrong? And is it my imagination, or was that a very long list of possible ways in which things can go wrong?
All this should make clear to you that study design is a critical part of research methodology. I built this chapter from the classic little book by Campbell and Stanley (1963), but there are of course a large number of textbooks out there on research design. Spend a few minutes with your favourite search engine and you’ll find dozens.
Presidential Address to the First Indian Statistical Congress, 1938. Source: http://en.wikiquote.org/wiki/Ronald_Fisher↩
Well… now this is awkward, isn’t it? This section is one of the oldest parts of the book, and it’s outdated in a rather embarrassing way. I wrote this in 2010, at which point all of those facts were true. Revisiting this in 2018… well I’m not 33 any more, but that’s not surprising I suppose. I can’t imagine my chromosomes have changed, so I’m going to guess my karyotype was then and is now XY. The self-identified gender, on the other hand… ah. I suppose the fact that the title page now refers to me as Danielle rather than Daniel might possibly be a giveaway, but I don’t typically identify as “male” on a gender questionnaire these days, and I prefer “she/her” pronouns as a default (it’s a long story)! I did think a little about how I was going to handle this in the book, actually. The book has a somewhat distinct authorial voice to it, and I feel like it would be a rather different work if I went back and wrote everything as Danielle and updated all the pronouns in the work. Besides, it would be a lot of work, so I’ve left my name as “Dan” throughout the book, and in any case “Dan” is a perfectly good nickname for “Danielle”, don’t you think? In any case, it’s not a big deal. I only wanted to mention it to make life a little easier for readers who aren’t sure how to refer to me. I still don’t like anchovies though :-)↩
Actually, I’ve been informed by readers with greater physics knowledge than I that temperature isn’t strictly an interval scale, in the sense that the amount of energy required to heat something up by 3\(^\circ\) depends on its current temperature. So in the sense that physicists care about, temperature isn’t actually interval scale. But it still makes a cute example, so I’m going to ignore this little inconvenient truth.↩
Annoyingly, though, there’s a lot of different names used out there. I won’t list all of them – there would be no point in doing that – other than to note that R often uses “response variable” where I’ve used “outcome”, and a traditionalist would use “dependent variable”. Sigh. This sort of terminological confusion is very common, I’m afraid.↩
The reason why I say that it’s unmeasured is that if you have measured it, then you can use some fancy statistical tricks to deal with the confound. Because of the existence of these statistical solutions to the problem of confounds, we often refer to a confound that we have measured and dealt with as a covariate. Dealing with covariates is a topic for a more advanced course, but I thought I’d mention it in passing, since it’s kind of comforting to at least know that this stuff exists.↩
Some people might argue that if you’re not honest then you’re not a real scientist. Which does have some truth to it I guess, but that’s disingenuous (google the “No true Scotsman” fallacy). The fact is that there are lots of people who are employed ostensibly as scientists, and whose work has all of the trappings of science, but who are outright fraudulent. Pretending that they don’t exist by saying that they’re not scientists is just childish.↩
Clearly, the real effect is that only insane people would even try to read Finnegans Wake.↩
# Chapter 3 Getting started with R
Robots are nice to work with.
–<NAME>
In this chapter I’ll discuss how to get started in R. I’ll briefly talk about how to download and install R, but most of the chapter will be focused on getting you started typing R commands. Our goal in this chapter is not to learn any statistical concepts: we’re just trying to learn the basics of how R works and get comfortable interacting with the system. To do this, we’ll spend a bit of time using R as a simple calculator, since that’s the easiest thing to do with R. In doing so, you’ll get a bit of a feel for what it’s like to work in R. From there I’ll introduce some very basic programming ideas: in particular, I’ll talk about the idea of defining variables to store information, and a few things that you can do with these variables.
However, before going into any of the specifics, it’s worth talking a little about why you might want to use R at all. Given that you’re reading this, you’ve probably got your own reasons. However, if those reasons are “because that’s what my stats class uses”, it might be worth explaining a little why your lecturer has chosen to use R for the class. Of course, I don’t really know why other people choose R, so I’m really talking about why I use it.
* It’s sort of obvious, but worth saying anyway: doing your statistics on a computer is faster, easier and more powerful than doing statistics by hand. Computers excel at mindless repetitive tasks, and a lot of statistical calculations are both mindless and repetitive. For most people, the only reason to ever do statistical calculations with pencil and paper is for learning purposes. In my class I do occasionally suggest doing some calculations that way, but the only real value to it is pedagogical. It does help you to get a “feel” for statistics to do some calculations yourself, so it’s worth doing it once. But only once!
* Doing statistics in a spreadsheet (e.g., Microsoft Excel) is generally a bad idea in the long run. Although many people are likely to feel more familiar with them, spreadsheets are very limited in terms of what analyses they allow you to do. If you get into the habit of trying to do your real life data analysis using spreadsheets, then you’ve dug yourself into a very deep hole.
* Avoiding proprietary software is a very good idea. There are a lot of commercial packages out there that you can buy, some of which I like and some of which I don’t. They’re usually very glossy in their appearance, and generally very powerful (much more powerful than spreadsheets). However, they’re also very expensive: usually, the company sells “student versions” (crippled versions of the real thing) very cheaply; they sell full powered “educational versions” at a price that makes me wince; and they sell commercial licences with a staggeringly high price tag. The business model here is to suck you in during your student days, and then leave you dependent on their tools when you go out into the real world. It’s hard to blame them for trying, but personally I’m not in favour of shelling out thousands of dollars if I can avoid it. And you can avoid it: if you make use of packages like R that are open source and free, you never get trapped having to pay exorbitant licensing fees.
* Something that you might not appreciate now, but will love later on if you do anything involving data analysis, is the fact that R is highly extensible. When you download and install R, you get all the basic “packages”, and those are very powerful on their own. However, because R is so open and so widely used, it’s become something of a standard tool in statistics, and so lots of people write their own packages that extend the system. And these are freely available too. One of the consequences of this, I’ve noticed, is that if you open up an advanced textbook (a recent one, that is) rather than an introductory textbook, a lot of them use R. In other words, if you learn how to do your basic statistics in R, then you’re a lot closer to being able to use the state of the art methods than you would be if you’d started out with a “simpler” system: so if you want to become a genuine expert in psychological data analysis, learning R is a very good use of your time.
* Related to the previous point: R is a real programming language. As you get better at using R for data analysis, you’re also learning to program. To some people this might seem like a bad thing, but in truth, programming is a core research skill across a lot of the social and behavioural sciences. Think about how many surveys and experiments are done online, or presented on computers. Think about all those online social environments which you might be interested in studying; and maybe collecting data from in an automated fashion. Think about artificial intelligence systems, computer vision and speech recognition. If any of these are things that you think you might want to be involved in – as someone “doing research in psychology”, that is – you’ll need to know a bit of programming. And if you don’t already know how to program, then learning how to do statistics using R is a nice way to start.
Those are the main reasons I use R. It’s not without its flaws: it’s not easy to learn, and it has a few very annoying quirks to it that we’re all pretty much stuck with, but on the whole I think the strengths outweigh the weaknesses; more so than any other option I’ve encountered so far.
## 3.1 Installing R
Okay, enough with the sales pitch. Let’s get started. Just as with any piece of software, R needs to be installed on a “computer”, which is a magical box that does cool things and delivers free ponies. Or something along those lines: I may be confusing computers with the iPad marketing campaigns. Anyway, R is freely distributed online, and you can download it from the R homepage, which is: https://cran.r-project.org/
At the top of the page – under the heading “Download and Install R” – you’ll see separate links for Windows users, Mac users, and Linux users. If you follow the relevant link, you’ll see that the online instructions are pretty self-explanatory, but I’ll walk you through the installation anyway. As of this writing, the current version of R is 3.0.2 (“Frisbee Sailing”), but they usually issue updates every six months, so you’ll probably have a newer version.14
### 3.1.1 Installing R on a Windows computer
The CRAN homepage changes from time to time, and it’s not particularly pretty, or all that well-designed quite frankly. But it’s not difficult to find what you’re after. In general you’ll find a link at the top of the page with the text “Download R for Windows”. If you click on that, it will take you to a page that offers you a few options. Again, at the very top of the page you’ll be told to click on a link that says to click here if you’re installing R for the first time. That’s probably what you want. This will take you to a page that has a prominent link at the top called “Download R 3.0.2 for Windows”. That’s the one you want. Click on that and your browser should start downloading a file called `R-3.0.2-win.exe` , or whatever the equivalent version number is by the time you read this. The file for version 3.0.2 is about 54MB in size, so it may take some time depending on how fast your internet connection is. Once you’ve downloaded the file, double click to install it. As with any software you download online, Windows will ask you some questions about whether you trust the file and so on. After you click through those, it’ll ask you where you want to install it, and what components you want to install. The default values should be fine for most people, so again, just click through. Once all that is done, you should have R installed on your system. You can access it from the Start menu, or from the desktop if you asked it to add a shortcut there. You can now open up R in the usual way if you want to, but what I’m going to suggest is that instead of doing that you should now install RStudio (see Section 3.1.4 for instructions).
### 3.1.2 Installing R on a Mac
When you click on the Mac OS X link, you should find yourself on a page with the title “R for Mac OS X”. The vast majority of Mac users will have a fairly recent version of the operating system: as long as you’re running Mac OS X 10.6 (Snow Leopard) or higher, then you’ll be fine.15 There’s a fairly prominent link on the page called “R-3.0.2.pkg”, which is the one you want. Click on that link and you’ll start downloading the installer file, which is (not surprisingly) called `R-3.0.2.pkg` . It’s about 61MB in size, so the download can take a while on slower internet connections. Once you’ve downloaded `R-3.0.2.pkg` , all you need to do is open it by double clicking on the package file. The installation should go smoothly from there: just follow all the instructions just like you usually do when you install something. Once it’s finished, you’ll find a file called `R.app` in the Applications folder. You can now open up R in the usual way16 if you want to, but what I’m going to suggest is that instead of doing that you should now install RStudio (see Section 3.1.4 for instructions).
### 3.1.3 Installing R on a Linux computer
If you’re successfully managing to run a Linux box, regardless of what distribution, then you should find the instructions on the website easy enough. You can compile R from source yourself if you want, or install it through your package management system, which will probably have R in it. Alternatively, the CRAN site has precompiled binaries for Debian, Red Hat, Suse and Ubuntu and has separate instructions for each. Once you’ve got R installed, you can run it from the command line just by typing `R` . However, if you’re feeling envious of Windows and Mac users for their fancy GUIs, you can download RStudio too (see Section 3.1.4 for instructions).
### 3.1.4 Downloading and installing RStudio
Okay, so regardless of what operating system you’re using, the last thing that I told you to do is to download RStudio. To understand why I’ve suggested this, you need to understand a little bit more about R itself. The term R doesn’t really refer to a specific application on your computer. Rather, it refers to the underlying statistical language. You can use this language through lots of different applications. When you install R initially, it comes with one application that lets you do this: it’s the R.exe application on a Windows machine, and the R.app application on a Mac. But that’s not the only way to do it. There are lots of different applications that you can use that will let you interact with R. One of those is called RStudio, and it’s the one I’m going to suggest that you use. RStudio provides a clean, professional interface to R that I find much nicer to work with than either the Windows or Mac defaults. Like R itself, RStudio is free software: you can find all the details on their webpage. In the meantime, you can download it here:
When you visit the RStudio website, you’ll probably be struck by how much cleaner and simpler it is than the CRAN website,17 and how obvious it is what you need to do: click the big green button that says “Download”.
When you click on the download button on the homepage it will ask you to choose whether you want the desktop version or the server version. You want the desktop version. After choosing the desktop version it will take you to a page (http://www.RStudio.org/download/desktop) that shows several possible downloads: there’s a different one for each operating system. However, the nice people at RStudio have designed the webpage so that it automatically recommends the download that is most appropriate for your computer. Click on the appropriate link, and the RStudio installer file will start downloading.
Once it’s finished downloading, open the installer file in the usual way to install RStudio. After it’s finished installing, you can start R by opening RStudio. You don’t need to open R.app or R.exe in order to access R. RStudio will take care of that for you. To illustrate what RStudio looks like, Figure 3.1 shows a screenshot of an R session in progress. In this screenshot, you can see that it’s running on a Mac, but it looks almost identical no matter what operating system you have. The Windows version looks more like a Windows application (e.g., the menus are attached to the application window and the colour scheme is slightly different), but it’s more or less identical. There are a few minor differences in where things are located in the menus (I’ll point them out as we go along) and in the shortcut keys, because RStudio is trying to “feel” like a proper Mac application or a proper Windows application, and this means that it has to change its behaviour a little bit depending on what computer it’s running on. Even so, these differences are very small: I started out using the Mac version of RStudio and then started using the Windows version as well in order to write these notes.
The only “shortcoming” I’ve found with RStudio is that – as of this writing – it’s still a work in progress. The “problem” is that they keep improving it. New features keep turning up in the more recent releases, so there’s a good chance that by the time you read this book there will be a version out that has some really neat things that weren’t in the version that I’m using now.
### 3.1.5 Starting up R
One way or another, regardless of what operating system you’re using and regardless of whether you’re using RStudio, or the default GUI, or even the command line, it’s time to open R and get started. When you do that, the first thing you’ll see (assuming that you’re looking at the R console, that is) is a whole lot of text that doesn’t make much sense. It should look something like this:
```
R version 3.0.2 (2013-09-25) -- "Frisbee Sailing"
Copyright (C) 2013 The R Foundation for Statistical Computing
Platform: x86_64-apple-darwin10.8.0 (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
Natural language support but running in an English locale
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
>
```
Most of this text is pretty uninteresting, and when doing real data analysis you’ll never really pay much attention to it. The important part of it is this…
`>`
… which has a flashing cursor next to it. That’s the command prompt. When you see this, it means that R is waiting patiently for you to do something!
## 3.2 Typing commands at the R console
One of the easiest things you can do with R is use it as a simple calculator, so it’s a good place to start. For instance, try typing `10 + 20` , and hitting enter.18 When you do this, you’ve entered a command, and R will “execute” that command. What you see on screen now will be this:
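```
> 10 + 20
[1] 30
```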
Not a lot of surprises in this extract. But there’s a few things worth talking about, even with such a simple example. Firstly, it’s important that you understand how to read the extract. In this example, what I typed was the `10 + 20` part. I didn’t type the `>` symbol: that’s just the R command prompt and isn’t part of the actual command. And neither did I type the `[1] 30` part. That’s what R printed out in response to my command. Secondly, it’s important to understand how the output is formatted. Obviously, the correct answer to the sum `10 + 20` is `30` , and not surprisingly R has printed that out as part of its response. But it’s also printed out this `[1]` part, which probably doesn’t make a lot of sense to you right now. You’re going to see that a lot. I’ll talk about what this means in a bit more detail later on, but for now you can think of `[1] 30` as if R were saying “the answer to the 1st question you asked is 30”. That’s not quite the truth, but it’s close enough for now. And in any case it’s not really very interesting at the moment: we only asked R to calculate one thing, so obviously there’s only one answer printed on the screen. Later on this will change, and the `[1]` part will start to make a bit more sense. For now, I just don’t want you to get confused or concerned by it.
### 3.2.1 An important digression about formatting
Now that I’ve taught you these rules I’m going to change them pretty much immediately. That is because I want you to be able to copy code from the book directly into R if you want to test things or conduct your own analyses. However, if you copy this kind of code (that shows the command prompt and the results) directly into R you will get an error:
```
## Error: <text>:1:1: unexpected '>'
## 1: >
## ^
```
So instead, I’m going to provide code in a slightly different format so that it looks like this…
`10 + 20` `## [1] 30`
There are two main differences.
* In your console, you type after the >, but from now on I won’t show the command prompt in the book.
* In the book, output is commented out with ##, in your console it appears directly after your code.
These two differences mean that if you’re working with an electronic version of the book, you can easily copy code out of the book and into the console.
So for example if you copied the two lines of code from the book you’d get this
`10 + 20` `## [1] 30` `## [1] 30`
### 3.2.2 Be very careful to avoid typos
Before we go on to talk about other types of calculations that we can do with R, there’s a few other things I want to point out. The first thing is that, while R is good software, it’s still software. It’s pretty stupid, and because it’s stupid it can’t handle typos. It takes it on faith that you meant to type exactly what you did type. For example, suppose that you forgot to hit the shift key when trying to type `+` , and as a result your command ended up being `10 = 20` rather than `10 + 20` . Here’s what happens: `10 = 20`
```
## Error in 10 = 20: invalid (do_set) left-hand side to assignment
```
What’s happened here is that R has attempted to interpret `10 = 20` as a command, and spits out an error message because the command doesn’t make any sense to it. When a human looks at this, and then looks down at his or her keyboard and sees that `+` and `=` are on the same key, it’s pretty obvious that the command was a typo. But R doesn’t know this, so it gets upset. And, if you look at it from its perspective, this makes sense. All that R “knows” is that `10` is a legitimate number, `20` is a legitimate number, and `=` is a legitimate part of the language too. In other words, from its perspective this really does look like the user meant to type `10 = 20` , since all the individual parts of that statement are legitimate and it’s too stupid to realise that this is probably a typo. Therefore, R takes it on faith that this is exactly what you meant… it only “discovers” that the command is nonsense when it tries to follow your instructions, typo and all. And then it whinges, and spits out an error. Even more subtle is the fact that some typos won’t produce errors at all, because they happen to correspond to “well-formed” R commands. For instance, suppose that not only did I forget to hit the shift key when trying to type `10 + 20` , I also managed to press the key next to the one I meant to. The resulting typo would produce the command `10 - 20` . Clearly, R has no way of knowing that you meant to add 20 to 10, not subtract 20 from 10, so what happens this time is this: `10 - 20` `## [1] -10`
In this case, R produces the right answer, but to the wrong question.
To some extent, I’m stating the obvious here, but it’s important. The people who wrote R are smart. You, the user, are smart. But R itself is dumb. And because it’s dumb, it has to be mindlessly obedient. It does exactly what you ask it to do. There is no equivalent to “autocorrect” in R, and for good reason. When doing advanced stuff – and even the simplest of statistics is pretty advanced in a lot of ways – it’s dangerous to let a mindless automaton like R try to overrule the human user. But because of this, it’s your responsibility to be careful. Always make sure you type exactly what you mean. When dealing with computers, it’s not enough to type “approximately” the right thing. In general, you absolutely must be precise in what you say to R … like all machines it is too stupid to be anything other than absurdly literal in its interpretation.
### 3.2.3 R is (a bit) flexible with spacing
Of course, now that I’ve been so uptight about the importance of always being precise, I should point out that there are some exceptions. Or, more accurately, there are some situations in which R does show a bit more flexibility than my previous description suggests. The first thing R is smart enough to do is ignore redundant spacing. What I mean by this is that, when I typed `10 + 20` before, I could equally have done this `10        + 20` `## [1] 30`
or this
`10+20` `## [1] 30` and I would get exactly the same answer. However, that doesn’t mean that you can insert spaces in any old place. When we looked at the startup documentation in Section 3.1.5 it suggested that you could type `citation()` to get some information about how to cite R. If I do so… `citation()`
```
##
## To cite R in publications use:
##
## R Core Team (2018). R: A language and environment for
## statistical computing. R Foundation for Statistical Computing,
## Vienna, Austria. URL https://www.R-project.org/.
##
## A BibTeX entry for LaTeX users is
##
## @Manual{,
## title = {R: A Language and Environment for Statistical Computing},
## author = {{<NAME>}},
## organization = {R Foundation for Statistical Computing},
## address = {Vienna, Austria},
## year = {2018},
## url = {https://www.R-project.org/},
## }
##
## We have invested a lot of time and effort in creating R, please
## cite it when using it for data analysis. See also
## 'citation("pkgname")' for citing R packages.
```
… it tells me to cite the R manual (R Core Team 2013). Let’s see what happens when I try changing the spacing. If I insert spaces in between the word and the parentheses, or inside the parentheses themselves, then all is well. That is, either of these two commands
`citation ()` `citation( )`
will produce exactly the same response. However, what I can’t do is insert spaces in the middle of the word. If I try to do this, R gets upset:
`citat ion()`
```
## Error: <text>:1:7: unexpected symbol
## 1: citat ion
## ^
```
Throughout this book I’ll vary the way I use spacing a little bit, just to give you a feel for the different ways in which spacing can be used. I’ll try not to do it too much though, since it’s generally considered to be good practice to be consistent in how you format your commands.
### 3.2.4 R can sometimes tell that you’re not finished yet (but not often)
One more thing I should point out. If you hit enter in a situation where it’s “obvious” to R that you haven’t actually finished typing the command, R is just smart enough to keep waiting. For example, if you type `10 +` and then press enter, even R is smart enough to realise that you probably wanted to type in another number. So here’s what happens (for illustrative purposes I’m breaking my own code formatting rules in this section):
```
> 10+
+
```
and there’s a blinking cursor next to the plus sign. What this means is that R is still waiting for you to finish. It “thinks” you’re still typing your command, so it hasn’t tried to execute it yet. In other words, this plus sign is actually another command prompt. It’s different from the usual one (i.e., the `>` symbol) to remind you that R is going to “add” whatever you type now to what you typed last time. For example, if I then go on to type `20` and hit enter, what I get is this:
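```
> 10+
+ 20
[1] 30
```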
And as far as R is concerned, this is exactly the same as if you had typed `10 + 20` . Similarly, consider the `citation()` command that we talked about in the previous section. Suppose you hit enter after typing `citation(` . Once again, R is smart enough to realise that there must be more coming – since you need to add the `)` character – so it waits. I can even hit enter several times and it will keep waiting:
```
> citation(
+
+
+ )
```
I’ll make use of this a lot in this book. A lot of the commands that we’ll have to type are pretty long, and they’re visually a bit easier to read if I break them up over several lines. If you start doing this yourself, you’ll eventually get yourself in trouble (it happens to us all). Maybe you start typing a command, and then you realise you’ve screwed up. For example,
```
> citblation(
+
+
```
You’d probably prefer R not to try running this command, right? If you want to get out of this situation, just hit the ‘escape’ key.19 R will return you to the normal command prompt (i.e. `>` ) without attempting to execute the botched command. That being said, it’s not often the case that R is smart enough to tell that there’s more coming. For instance, in the same way that I can’t add a space in the middle of a word, I can’t hit enter in the middle of a word either. If I hit enter after typing `citat` I get an error, because R thinks I’m interested in an “object” called `citat` and can’t find it:
```
> citat
Error: object 'citat' not found
```
What about if I typed `citation` and hit enter? In this case we get something very odd, something that we definitely don’t want, at least at this stage. Here’s what happens:
```
citation
## function (package = "base", lib.loc = NULL, auto = NULL)
## {
## dir <- system.file(package = package, lib.loc = lib.loc)
## if (dir == "")
## stop(gettextf("package '%s' not found", package), domain = NA)
BLAH BLAH BLAH
```
where the `BLAH BLAH BLAH` goes on for rather a long time, and you don’t know enough R yet to understand what all this gibberish actually means (of course, it doesn’t actually say BLAH BLAH BLAH - it says some other things we don’t understand or need to know that I’ve edited for length). This incomprehensible output can be quite intimidating to novice users, and unfortunately it’s very easy to forget to type the parentheses; so almost certainly you’ll do this by accident. Do not panic when this happens. Simply ignore the gibberish. As you become more experienced this gibberish will start to make sense, and you’ll find it quite handy to print this stuff out.20 But for now just try to remember to add the parentheses when typing your commands.
## 3.3 Doing simple calculations with R
Okay, now that we’ve discussed some of the tedious details associated with typing R commands, let’s get back to learning how to use the most powerful piece of statistical software in the world as a $2 calculator. So far, all we know how to do is addition. Clearly, a calculator that only did addition would be a bit stupid, so I should tell you about how to perform other simple calculations using R. But first, some more terminology. Addition is an example of an “operation” that you can perform (specifically, an arithmetic operation), and the operator that performs it is `+` . To people with a programming or mathematics background, this terminology probably feels pretty natural, but to other people it might feel like I’m trying to make something very simple (addition) sound more complicated than it is (by calling it an arithmetic operation). To some extent, that’s true: if addition was the only operation that we were interested in, it’d be a bit silly to introduce all this extra terminology. However, as we go along, we’ll start using more and more different kinds of operations, so it’s probably a good idea to get the language straight now, while we’re still talking about very familiar concepts like addition!
### 3.3.1 Adding, subtracting, multiplying and dividing
So, now that we have the terminology, let’s learn how to perform some arithmetic operations in R. To that end, Table 3.1 lists the operators that correspond to the basic arithmetic we learned in primary school: addition, subtraction, multiplication and division.
| operation | operator | example input | example output |
| --- | --- | --- | --- |
| addition | `+` | 10 + 2 | 12 |
| subtraction | `-` | 9 - 3 | 6 |
| multiplication | `*` | 5 * 5 | 25 |
| division | `/` | 10 / 3 | 3.333333 |
| power | `^` | 5 ^ 2 | 25 |
As you can see, R uses fairly standard symbols to denote each of the different operations you might want to perform: addition is done using the `+` operator, subtraction is performed by the `-` operator, and so on. So if I wanted to find out what 57 times 61 is (and who wouldn’t?), I can use R instead of a calculator, like so: `57 * 61` `## [1] 3477`
So that’s handy.
### 3.3.2 Taking powers
The first four operations listed in Table 3.1 are things we all learned in primary school, but they aren’t the only arithmetic operations built into R. There are three other arithmetic operations that I should probably mention: taking powers, doing integer division, and calculating a modulus. Of the three, the only one that is of any real importance for the purposes of this book is taking powers, so I’ll discuss that one here: the other two are discussed in Chapter 7.
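In case you’re curious, here’s a quick preview of the other two (no need to remember them yet). Integer division, written `%/%` , tells you how many whole times one number goes into another, and the modulus, written `%%` , tells you what’s left over. For example, 10 goes into 57 five whole times, with 7 left over:

`57 %/% 10` `## [1] 5`

`57 %% 10` `## [1] 7`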
For those of you who can still remember your high school maths, this should be familiar. But for some people high school maths was a long time ago, and others of us didn’t listen very hard in high school. It’s not complicated. As I’m sure everyone will probably remember the moment they read this, the act of multiplying a number \(x\) by itself \(n\) times is called “raising \(x\) to the \(n\)-th power”. Mathematically, this is written as \(x^n\). Some values of \(n\) have special names: in particular \(x^2\) is called \(x\)-squared, and \(x^3\) is called \(x\)-cubed. So, the 4th power of 5 is calculated like this: \[ 5^4 = 5 \times 5 \times 5 \times 5 \]
One way that we could calculate \(5^4\) in R would be to type in the complete multiplication as it is shown in the equation above. That is, we could do this
`5 * 5 * 5 * 5` `## [1] 625`
but it does seem a bit tedious. It would be very annoying indeed if you wanted to calculate \(5^{15}\), since the command would end up being quite long. Therefore, to make our lives easier, we use the power operator instead. When we do that, our command to calculate \(5^4\) goes like this:
`5 ^ 4` `## [1] 625`
Much easier.
### 3.3.3 Doing calculations in the right order
Okay. At this point, you know how to take one of the most powerful pieces of statistical software in the world, and use it as a $2 calculator. And as a bonus, you’ve learned a few very basic programming concepts. That’s not nothing (you could argue that you’ve just saved yourself $2) but on the other hand, it’s not very much either. In order to use R more effectively, we need to introduce more programming concepts.
In most situations where you would want to use a calculator, you might want to do multiple calculations. R lets you do this, just by typing in longer commands.21 In fact, we’ve already seen an example of this earlier, when I typed in `5 * 5 * 5 * 5` . However, let’s try a slightly different example: `1 + 2 * 4` `## [1] 9` Clearly, this isn’t a problem for R either. However, it’s worth stopping for a second, and thinking about what R just did. Clearly, since it gave us an answer of `9` it must have multiplied `2 * 4` (to get an interim answer of 8) and then added 1 to that. But, suppose it had decided to just go from left to right: if R had decided instead to add `1+2` (to get an interim answer of 3) and then multiplied by 4, it would have come up with an answer of `12` . To answer this, you need to know the order of operations that R uses. If you remember back to your high school maths classes, it’s actually the same order that you got taught when you were at school: the “BEDMAS” order.22 That is, first calculate things inside Brackets `()` , then calculate Exponents `^` , then Division `/` and Multiplication `*` , then Addition `+` and Subtraction `-` . So, to continue the example above, if we want to force R to calculate the `1+2` part before the multiplication, all we would have to do is enclose it in brackets: `(1 + 2) * 4` `## [1] 12` This is a fairly useful thing to be able to do. The only other thing I should point out about order of operations is what to expect when you have two operations that have the same priority: that is, how does R resolve ties? For instance, multiplication and division are actually the same priority, but what should we expect when we give R a problem like `4 / 2 * 3` to solve? If it evaluates the multiplication first and then the division, it would calculate a value of two-thirds. But if it evaluates the division first it calculates a value of 6. The answer, in this case, is that R goes from left to right, so in this case the division step would come first: `4 / 2 * 3` `## [1] 6` All of the above being said, it’s helpful to remember that brackets always come first. So, if you’re ever unsure about what order R will do things in, an easy solution is to enclose the thing you want it to do first in brackets. There’s nothing stopping you from typing `(4 / 2) * 3` . By enclosing the division in brackets we make it clear which thing is supposed to happen first. In this instance you wouldn’t have needed to, since R would have done the division first anyway, but when you’re first starting out it’s better to make sure R does what you want!
## 3.4 Storing a number as a variable
One of the most important things to be able to do in R (or any programming language, for that matter) is to store information in variables. Variables in R aren’t exactly the same thing as the variables we talked about in the last chapter on research methods, but they are similar. At a conceptual level you can think of a variable as a label for a certain piece of information, or even several different pieces of information. When doing statistical analysis in R all of your data (the variables you measured in your study) will be stored as variables in R, but as we’ll see later in the book you’ll find that you end up creating variables for other things too. However, before we delve into all the messy details of data sets and statistical analysis, let’s look at the very basics for how we create variables and work with them.
### 3.4.1 Variable assignment using `<-` and `->`

Since we’ve been working with numbers so far, let’s start by creating variables to store our numbers. And since most people like concrete examples, let’s invent one. Suppose I’m trying to calculate how much money I’m going to make from this book. There’s several different numbers I might want to store. Firstly, I need to figure out how many copies I’ll sell. This isn’t exactly Harry Potter, so let’s assume I’m only going to sell one copy per student in my class. That’s 350 sales, so let’s create a variable called `sales` . What I want to do is assign a value to my variable `sales` , and that value should be `350` . We do this by using the assignment operator, which is `<-` . Here’s how we do it: `sales <- 350` When you hit enter, R doesn’t print out any output.23 It just gives you another command prompt. However, behind the scenes R has created a variable called `sales` and given it a value of `350` . You can check that this has happened by asking R to print the variable on screen. And the simplest way to do that is to type the name of the variable and hit enter24. `sales` `## [1] 350`
So that’s nice to know. Anytime you can’t remember what R has got stored in a particular variable, you can just type the name of the variable and hit enter.
Okay, so now we know how to assign variables. Actually, there’s a bit more you should know. Firstly, one of the curious features of R is that there are several different ways of making assignments. In addition to the `<-` operator, we can also use `->` and `=` , and it’s pretty important to understand the differences between them.25 Let’s start by considering `->` , since that’s the easy one (we’ll discuss the use of `=` in Section 3.5.1). As you might expect from just looking at the symbol, it’s almost identical to `<-` . It’s just that the arrow (i.e., the assignment) goes from left to right. So if I wanted to define my `sales` variable using `->` , I would write it like this: `350 -> sales` This has the same effect: and it still means that I’m only going to sell `350` copies. Sigh. Apart from this superficial difference, `<-` and `->` are identical. In fact, as far as R is concerned, they’re actually the same operator, just in a “left form” and a “right form.”26
### 3.4.2 Doing calculations using variables
Okay, let’s get back to my original story. In my quest to become rich, I’ve written this textbook. To figure out how good a strategy this is, I’ve started creating some variables in R. In addition to defining a `sales` variable that counts the number of copies I’m going to sell, I can also create a variable called `royalty` , indicating how much money I get per copy. Let’s say that my royalties are about $7 per book:
```
sales <- 350
royalty <- 7
```
The nice thing about variables (in fact, the whole point of having variables) is that we can do anything with a variable that we ought to be able to do with the information that it stores. That is, since R allows me to multiply `350` by `7` `350 * 7` `## [1] 2450` it also allows me to multiply `sales` by `royalty` `sales * royalty` `## [1] 2450` As far as R is concerned, the `sales * royalty` command is the same as the `350 * 7` command. Not surprisingly, I can assign the output of this calculation to a new variable, which I’ll call `revenue` . And when we do this, the new variable `revenue` gets the value `2450` . So let’s do that, and then get R to print out the value of `revenue` so that we can verify that it’s done what we asked:
```
revenue <- sales * royalty
revenue
```
`## [1] 2450`
That’s fairly straightforward. A slightly more subtle thing we can do is reassign the value of my variable, based on its current value. For instance, suppose that one of my students (no doubt under the influence of psychotropic drugs) loves the book so much that he or she donates me an extra $550. The simplest way to capture this is by a command like this:
```
revenue <- revenue + 550
revenue
```
`## [1] 3000` In this calculation, R has taken the old value of `revenue` (i.e., 2450) and added 550 to that value, producing a value of 3000. This new value is assigned to the `revenue` variable, overwriting its previous value. In any case, we now know that I’m expecting to make $3000 off this. Pretty sweet, I thinks to myself. Or at least, that’s what I thinks until I do a few more calculations and work out what the implied hourly wage I’m making off this looks like.
### 3.4.3 Rules and conventions for naming variables
In the examples that we’ve seen so far, my variable names ( `sales` and `revenue` ) have just been English-language words written using lowercase letters. However, R allows a lot more flexibility when it comes to naming your variables, as the following list of rules27 illustrates:
* Variable names can only use the upper case alphabetic characters `A` - `Z` as well as the lower case characters `a` - `z` . You can also include numeric characters `0` - `9` in the variable name, as well as the period `.` or underscore `_` character. In other words, you can use `SaL.e_s` as a variable name (though I can’t think why you would want to), but you can’t use `Sales?` .
* Variable names cannot include spaces: therefore `my sales` is not a valid name, but `my.sales` is.
* Variable names are case sensitive: that is, `Sales` and `sales` are different variable names.
* Variable names must start with a letter or a period. You can’t use something like `_sales` or `1sales` as a variable name. You can use `.sales` as a variable name if you want, but it’s not usually a good idea. By convention, variables starting with a `.` are used for special purposes, so you should avoid doing so.
* Variable names cannot be one of the reserved keywords. These are special names that R needs to keep “safe” from us mere users, so you can’t use them as the names of variables. The keywords are: `if` , `else` , `repeat` , `while` , `function` , `for` , `in` , `next` , `break` , `TRUE` , `FALSE` , `NULL` , `Inf` , `NaN` , `NA` , `NA_integer_` , `NA_real_` , `NA_complex_` , and finally, `NA_character_` . Don’t feel especially obliged to memorise these: if you make a mistake and try to use one of the keywords as a variable name, R will complain about it like the whiny little automaton it is (you can see what that complaining looks like in the example just after this list).
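To give you a feel for what that complaining looks like, here’s roughly what happens if you try to assign a value to one of the reserved keywords (the exact wording of the error message may vary a little depending on your version of R):

`TRUE <- 350`

```
## Error in TRUE <- 350: invalid (do_set) left-hand side to assignment
```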
In addition to those rules that R enforces, there are some informal conventions that people tend to follow when naming variables. One of them you’ve already seen: i.e., don’t use variables that start with a period. But there are several others. You aren’t obliged to follow these conventions, and there are many situations in which it’s advisable to ignore them, but it’s generally a good idea to follow them when you can:
* Use informative variable names. As a general rule, using meaningful names like `sales` and `revenue` is preferred over arbitrary ones like `variable1` and `variable2` . Otherwise it’s very hard to remember what the contents of different variables are, and it becomes hard to understand what your commands actually do.
* Use short variable names. Typing is a pain and no-one likes doing it. So we much prefer to use a name like `sales` over a name like `sales.for.this.book.that.you.are.reading` . Obviously there’s a bit of a tension between using informative names (which tend to be long) and using short names (which tend to be meaningless), so use a bit of common sense when trading off these two conventions.
* Use one of the conventional naming styles for multi-word variable names. Suppose I want to name a variable that stores “my new salary”. Obviously I can’t include spaces in the variable name, so how should I do this? There are three different conventions that you sometimes see R users employing. Firstly, you can separate the words using periods, which would give you `my.new.salary` as the variable name. Alternatively, you could separate words using underscores, as in `my_new_salary` . Finally, you could use capital letters at the beginning of each word (except the first one), which gives you `myNewSalary` as the variable name. I don’t think there’s any strong reason to prefer one over the other,28 but it’s important to be consistent.
## 3.5 Using functions to do calculations
The symbols `+` , `-` , `*` and so on are examples of operators. As we’ve seen, you can do quite a lot of calculations just by using these operators. However, in order to do more advanced calculations (and later on, to do actual statistics), you’re going to need to start using functions.29 I’ll talk in more detail about functions and how they work in Section 8.4, but for now let’s just dive in and use a few. To get started, suppose I wanted to take the square root of 225. The square root, in case your high school maths is a bit rusty, is just the opposite of squaring a number. So, for instance, since “5 squared is 25” I can say that “5 is the square root of 25”. The usual notation for this is
\[ \sqrt{25} = 5 \]
though sometimes you’ll also see it written like this \(25^{0.5} = 5.\) This second way of writing it is kind of useful to “remind” you of the mathematical fact that “square root of \(x\)” is actually the same as “raising \(x\) to the power of 0.5”. Personally, I’ve never found this to be terribly meaningful psychologically, though I have to admit it’s quite convenient mathematically. Anyway, it’s not important. What is important is that you remember what a square root is, since we’re going to need it later on.
To calculate the square root of 25, I can do it in my head pretty easily, since I memorised my multiplication tables when I was a kid. It gets harder when the numbers get bigger, and pretty much impossible if they’re not whole numbers. This is where something like R comes in very handy. Let’s say I wanted to calculate \(\sqrt{225}\), the square root of 225. There’s two ways I could do this using R. Firstly, since the square root of 225 is the same thing as raising 225 to the power of 0.5, I could use the power operator `^` , just like we did earlier: `225 ^ 0.5` `## [1] 15` However, there’s a second way that we can do this, since R also provides a square root function, `sqrt()` . To calculate the square root of 225 using this function, what I do is insert the number `225` in the parentheses. That is, the command I type is this: `sqrt( 225 )` `## [1] 15` and as you might expect from our previous discussion, the spaces in between the parentheses are purely cosmetic. I could have typed `sqrt(225)` or `sqrt( 225 )` and gotten the same result. When we use a function to do something, we generally refer to this as calling the function, and the values that we type into the function (there can be more than one) are referred to as the arguments of that function. Obviously, the `sqrt()` function doesn’t really give us any new functionality, since we already knew how to do square root calculations by using the power operator `^` , though I do think it looks nicer when we use `sqrt()` . However, there are lots of other functions in R: in fact, almost everything of interest that I’ll talk about in this book is an R function of some kind. For example, one function that we will need to use in this book is the absolute value function. Compared to the square root function, it’s extremely simple: it just converts negative numbers to positive numbers, and leaves positive numbers alone. Mathematically, the absolute value of \(x\) is written \(|x|\) or sometimes \(\mbox{abs}(x)\). Calculating absolute values in R is pretty easy, since R provides the `abs()` function that you can use for this purpose. When you feed it a positive number… `abs( 21 )` `## [1] 21`
the absolute value function does nothing to it at all. But when you feed it a negative number, it spits out the positive version of the same number, like this:
`abs( -13 )` `## [1] 13`
In all honesty, there’s nothing that the absolute value function does that you couldn’t do just by looking at the number and erasing the minus sign if there is one. However, there’s a few places later in the book where we have to use absolute values, so I thought it might be a good idea to explain the meaning of the term early on.
Before moving on, it’s worth noting that – in the same way that R allows us to put multiple operations together into a longer command, like `1 + 2*4` for instance – it also lets us put functions together and even combine functions with operators if we so desire. For example, the following is a perfectly legitimate command: `sqrt( 1 + abs(-8) )` `## [1] 3` When R executes this command, it starts out by calculating the value of `abs(-8)` , which produces an intermediate value of `8` . Having done so, the command simplifies to `sqrt( 1 + 8 )` . To solve the square root30 it first needs to add `1 + 8` to get `9` , at which point it evaluates `sqrt(9)` , and so it finally outputs a value of `3` .
### 3.5.1 Function arguments, their names and their defaults
There’s two more fairly important things that you need to understand about how functions work in R, and that’s the use of “named” arguments, and “default values” for arguments. Not surprisingly, that’s not to say that this is the last we’ll hear about how functions work, but they are the last things we desperately need to discuss in order to get you started. To understand what these two concepts are all about, I’ll introduce another function. The `round()` function can be used to round some value to the nearest whole number. For example, I could type this: `round( 3.1415 )` `## [1] 3` Pretty straightforward, really. However, suppose I only wanted to round it to two decimal places: that is, I want to get `3.14` as the output. The `round()` function supports this, by allowing you to input a second argument to the function that specifies the number of decimal places that you want to round the number to. In other words, I could do this: `round( 3.1415, 2 )` `## [1] 3.14` What’s happening here is that I’ve specified two arguments: the first argument is the number that needs to be rounded (i.e., `3.1415` ), the second argument is the number of decimal places that it should be rounded to (i.e., `2` ), and the two arguments are separated by a comma. In this simple example, it’s quite easy to remember which argument comes first and which one comes second, but for more complicated functions this is not easy. Fortunately, most R functions make use of argument names. For the `round()` function, for example the number that needs to be rounded is specified using the `x` argument, and the number of decimal points that you want it rounded to is specified using the `digits` argument. Because we have these names available to us, we can specify the arguments to the function by name. We do so like this:
`round( x = 3.1415, digits = 2 )` `## [1] 3.14` Notice that this is kind of similar in spirit to variable assignment (Section 3.4), except that I used `=` here, rather than `<-` . In both cases we’re specifying specific values to be associated with a label. However, there are some differences between what I was doing earlier on when creating variables, and what I’m doing here when specifying arguments, and so as a consequence it’s important that you use `=` in this context.
As you can see, specifying the arguments by name involves a lot more typing, but it’s also a lot easier to read. Because of this, the commands in this book will usually specify arguments by name,31 since that makes it clearer to you what I’m doing. However, one important thing to note is that when specifying the arguments using their names, it doesn’t matter what order you type them in. But if you don’t use the argument names, then you have to input the arguments in the correct order. In other words, these three commands all produce the same output…
`round( 3.1415, 2 )` `## [1] 3.14`
`round( x = 3.1415, digits = 2 )` `## [1] 3.14`
```
round( digits = 2, x = 3.1415 )
```
`## [1] 3.14`
but this one does not…
`round( 2, 3.1415 )` `## [1] 2`
How do you find out what the correct order is? There’s a few different ways, but the easiest one is to look at the help documentation for the function (see Section 4.12). However, if you’re ever unsure, it’s probably best to actually type in the argument name.
Okay, so that’s the first thing I said you’d need to know: argument names. The second thing you need to know about is default values. Notice that the first time I called the `round()` function I didn’t actually specify the `digits` argument at all, and yet R somehow knew that this meant it should round to the nearest whole number. How did that happen? The answer is that the `digits` argument has a default value of `0` , meaning that if you decide not to specify a value for `digits` then R will act as if you had typed `digits = 0` . This is quite handy: the vast majority of the time when you want to round a number you want to round it to the nearest whole number, and it would be pretty annoying to have to specify the `digits` argument every single time. On the other hand, sometimes you actually do want to round to something other than the nearest whole number, and it would be even more annoying if R didn’t allow this! Thus, by having `digits = 0` as the default value, we get the best of both worlds.
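If you want to convince yourself that this is really what’s going on, you can check that spelling out the default explicitly gives exactly the same answer as leaving it off:

`round( 3.1415 )` `## [1] 3`

`round( x = 3.1415, digits = 0 )` `## [1] 3`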
## 3.6 Letting RStudio help you with your commands
Time for a bit of a digression. At this stage you know how to type in basic commands, including how to use R functions. And it’s probably beginning to dawn on you that there are a lot of R functions, all of which have their own arguments. You’re probably also worried that you’re going to have to remember all of them! Thankfully, it’s not that bad. In fact, very few data analysts bother to try to remember all the commands. What they really do is use tricks to make their lives easier. The first (and arguably most important one) is to use the internet. If you don’t know how a particular R function works, Google it. Second, you can look up the R help documentation. I’ll talk more about these two tricks in Section 4.12. But right now I want to call your attention to a couple of simple tricks that RStudio makes available to you.
### 3.6.1 Autocomplete using “tab”
The first thing I want to call your attention to is the autocomplete ability in RStudio.32
Let’s stick to our example above and assume that what you want to do is to round a number. This time around, start typing the name of the function that you want, and then hit the “tab” key. RStudio will then display a little window like the one shown in Figure 3.2. In this figure, I’ve typed the letters `ro` at the command line, and then hit tab. The window has two panels. On the left, there’s a list of variables and functions that start with the letters that I’ve typed shown in black text, and some grey text that tells you where that variable/function is stored. Ignore the grey text for now: it won’t make much sense to you until we’ve talked about packages in Section 4.2. In Figure 3.2 you can see that there’s quite a few things that start with the letters `ro` : there’s something called `rock` , something called `round` , something called `round.Date` and so on. The one we want is `round` , but if you’re typing this yourself you’ll notice that when you hit the tab key the window pops up with the top entry (i.e., `rock` ) highlighted. You can use the up and down arrow keys to select the one that you want. Or, if none of the options look right to you, you can hit the escape key (“esc”) or the left arrow key to make the window go away. In our case, the thing we want is the `round` option, so we’ll select that. When you do this, you’ll see that the panel on the right changes. Previously, it had been telling us something about the `rock` data set (i.e., “Measurements on 48 rock samples…”) that is distributed as part of R. But when we select `round` , it displays information about the `round()` function, exactly as it is shown in Figure 3.2. This display is really handy. The very first thing it says is `round(x, digits = 0)` : what this is telling you is that the `round()` function has two arguments. The first argument is called `x` , and it doesn’t have a default value. The second argument is `digits` , and it has a default value of 0. In a lot of situations, that’s all the information you need. But RStudio goes a bit further, and provides some additional information about the function underneath. Sometimes that additional information is very helpful, sometimes it’s not: RStudio pulls that text from the R help documentation, and my experience is that the helpfulness of that documentation varies wildly. Anyway, if you’ve decided that `round()` is the function that you want to use, you can hit the right arrow or the enter key, and RStudio will finish typing the rest of the function name for you. The RStudio autocomplete tool works slightly differently if you’ve already got the name of the function typed and you’re now trying to type the arguments. For instance, suppose I’ve typed `round(` into the console, and then I hit tab. RStudio is smart enough to recognise that I already know the name of the function that I want, because I’ve already typed it! Instead, it figures that what I’m interested in is the arguments to that function. So that’s what pops up in the little window. You can see this in Figure 3.3. Again, the window has two panels, and you can interact with this window in exactly the same way that you did with the window shown in Figure 3.2. On the left hand panel, you can see a list of the argument names. On the right hand side, it displays some information about what the selected argument does.
### 3.6.2 Browsing your command history
One thing that R does automatically is keep track of your “command history”. That is, it remembers all the commands that you’ve previously typed. You can access this history in a few different ways. The simplest way is to use the up and down arrow keys. If you hit the up key, the R console will show you the most recent command that you’ve typed. Hit it again, and it will show you the command before that. If you want the text on the screen to go away, hit escape.33 Using the up and down keys can be really handy if you’ve typed a long command that had one typo in it. Rather than having to type it all again from scratch, you can use the up key to bring up the command and fix it.
The second way to get access to your command history is to look at the history panel in RStudio. On the upper right hand side of the RStudio window you’ll see a tab labelled “History”. Click on that, and you’ll see a list of all your recent commands displayed in that panel: it should look something like Figure 3.4. If you double click on one of the commands, it will be copied to the R console. (You can achieve the same result by selecting the command you want with the mouse and then clicking the “To Console” button).34
## 3.7 Storing many numbers as a vector
At this point we’ve covered functions in enough detail to get us safely through the next couple of chapters (with one small exception: see Section 4.11), so let’s return to our discussion of variables. When I introduced variables in Section 3.4 I showed you how we can use variables to store a single number. In this section, we’ll extend this idea and look at how to store multiple numbers within the one variable. In R, the name for a variable that can store multiple values is a vector. So let’s create one.
### 3.7.1 Creating a vector
Let’s stick to my silly “get rich quick by textbook writing” example. Suppose the textbook company (if I actually had one, that is) sends me sales data on a monthly basis. Since my classes start in late February, we might expect most of the sales to occur towards the start of the year. Let’s suppose that I have 100 sales in February, 200 sales in March and 50 sales in April, and no other sales for the rest of the year. What I would like to do is have a variable – let’s call it `sales.by.month` – that stores all this sales data. The first number stored should be `0` since I had no sales in January, the second should be `100` , and so on. The simplest way to do this in R is to use the combine function, `c()` . To do so, all we have to do is type all the numbers we want to store in a comma separated list, like this:35
```
sales.by.month <- c(0, 100, 200, 50, 0, 0, 0, 0, 0, 0, 0, 0)
sales.by.month
```
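If you’ve been typing these commands in yourself, the output you see when the vector is printed should look something like this:
```
## [1]   0 100 200  50   0   0   0   0   0   0   0   0
```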
To use the correct terminology, we have a single variable here called `sales.by.month` : this variable is a vector that consists of 12 elements.
### 3.7.2 A handy digression
Now that we’ve learned how to put information into a vector, the next thing to understand is how to pull that information back out again. However, before I do so it’s worth taking a slight detour. If you’ve been following along, typing all the commands into R yourself, it’s possible that the output that you saw when we printed out the `sales.by.month` vector was slightly different to what I showed above. This would have happened if the window (or the RStudio panel) that contains the R console is really, really narrow. If that were the case, you might have seen output that looks something like this: `sales.by.month`
```
## [1] 0 100 200 50
## [5] 0 0 0 0
## [9] 0 0 0 0
```
Because there wasn’t much room on the screen, R has printed out the results over three lines. But that’s not the important thing to notice. The important point is that the first line has a `[1]` in front of it, whereas the second line starts with `[5]` and the third with `[9]` . It’s pretty clear what’s happening here. For the first row, R has printed out the 1st element through to the 4th element, so it starts that row with a `[1]` . For the second row, R has printed out the 5th element of the vector through to the 8th one, and so it begins that row with a `[5]` so that you can tell where it’s up to at a glance. It might seem a bit odd to you that R does this, but in some ways it’s a kindness, especially when dealing with larger data sets!
### 3.7.3 Getting information out of vectors
To get back to the main story, let’s consider the problem of how to get information out of a vector. At this point, you might have a sneaking suspicion that the answer has something to do with the `[1]` and `[9]` things that R has been printing out. And of course you are correct. Suppose I want to pull out the February sales data only. February is the second month of the year, so let’s try this: `sales.by.month[2]` `## [1] 100` Yep, that’s the February sales all right. But there’s a subtle detail to be aware of here: notice that R outputs `[1] 100` , not `[2] 100` . This is because R is being extremely literal. When we typed in `sales.by.month[2]` , we asked R to find exactly one thing, and that one thing happens to be the second element of our `sales.by.month` vector. So, when it outputs `[1] 100` what R is saying is that the first number that we just asked for is `100` . This behaviour makes more sense when you realise that we can use this trick to create new variables. For example, I could create a `february.sales` variable like this:
```
february.sales <- sales.by.month[2]
february.sales
```
`## [1] 100` Obviously, the new variable `february.sales` should only have one element and so when I print out this new variable, the R output begins with a `[1]` because `100` is the value of the first (and only) element of `february.sales` . The fact that this also happens to be the value of the second element of `sales.by.month` is irrelevant. We’ll pick this topic up again shortly (Section 3.10).
### 3.7.4 Altering the elements of a vector
Sometimes you’ll want to change the values stored in a vector. Imagine my surprise when the publisher rings me up to tell me that the sales data for May are wrong. There were actually an additional 25 books sold in May, but there was an error or something so they hadn’t told me about it. How can I fix my `sales.by.month` variable? One possibility would be to assign the whole vector again from the beginning, using `c()` . But that’s a lot of typing. Also, it’s a little wasteful: why should R have to redefine the sales figures for all 12 months, when only the 5th one is wrong? Fortunately, we can tell R to change only the 5th element, using this trick:
```
sales.by.month[5] <- 25
sales.by.month
```
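This time around, the printed output should show the corrected May figure:
```
## [1]   0 100 200  50  25   0   0   0   0   0   0   0
```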
Another way to edit variables is to use the `edit()` or `fix()` functions. I won’t discuss them in detail right now, but you can check them out on your own.
### 3.7.5 Useful things to know about vectors
Before moving on, I want to mention a couple of other things about vectors. Firstly, you often find yourself wanting to know how many elements there are in a vector (usually because you’ve forgotten). You can use the `length()` function to do this. It’s quite straightforward:
```
length( x = sales.by.month )
```
`## [1] 12` Secondly, you often want to alter all of the elements of a vector at once. For instance, suppose I wanted to figure out how much money I made in each month. Since I’m earning an exciting $7 per book (no seriously, that’s actually pretty close to what authors get on the very expensive textbooks that you’re expected to purchase), what I want to do is multiply each element in the `sales.by.month` vector by `7` . R makes this pretty easy, as the following example shows: `sales.by.month * 7`
```
## [1] 0 700 1400 350 175 0 0 0 0 0 0 0
```
In other words, when you multiply a vector by a single number, all elements in the vector get multiplied. The same is true for addition, subtraction, division and taking powers. So that’s neat. On the other hand, suppose I wanted to know how much money I was making per day, rather than per month. Since not every month has the same number of days, I need to do something slightly different. Firstly, I’ll create two new vectors:
```
days.per.month <- c(31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)
profit <- sales.by.month * 7
```
Obviously, the `profit` variable just contains the monthly earnings figures we calculated a moment ago, and the `days.per.month` variable is pretty straightforward. What I want to do is divide every element of `profit` by the corresponding element of `days.per.month` . Again, R makes this pretty easy:
```
profit / days.per.month
```
```
## [1] 0.000000 25.000000 45.161290 11.666667 5.645161 0.000000 0.000000
## [8] 0.000000 0.000000 0.000000 0.000000 0.000000
```
I still don’t like all those zeros, but that’s not what matters here. Notice that the second element of the output is 25, because R has divided the second element of `profit` (i.e. 700) by the second element of `days.per.month` (i.e. 28). Similarly, the third element of the output is equal to 1400 divided by 31, and so on. We’ll talk more about calculations involving vectors later on (and in particular a thing called the “recycling rule”; Section 7.12.2), but that’s enough detail for now.
## 3.8 Storing text data
A lot of the time your data will be numeric in nature, but not always. Sometimes your data really needs to be described using text, not using numbers. To address this, we need to consider the situation where our variables store text. To create a variable that stores the word “hello”, we can type this:
```
greeting <- "hello"
greeting
```
`## [1] "hello"` When interpreting this, it’s important to recognise that the quote marks here aren’t part of the string itself. They’re just something that we use to make sure that R knows to treat the characters that they enclose as a piece of text data, known as a character string. In other words, R treats `"hello"` as a string containing the word “hello”; but if I had typed `hello` instead, R would go looking for a variable by that name! You can also use `'hello'` to specify a character string. Okay, so that’s how we store the text. Next, it’s important to recognise that when we do this, R stores the entire word `"hello"` as a single element: our `greeting` variable is not a vector of five different letters. Rather, it has only the one element, and that element corresponds to the entire character string `"hello"` . To illustrate this, if I actually ask R to find the first element of `greeting` , it prints the whole string: `greeting[1]` `## [1] "hello"` Of course, there’s no reason why I can’t create a vector of character strings. For instance, if we were to continue with the example of my attempts to look at the monthly sales data for my book, one variable I might want would include the names of all 12 `months` .36 To do so, I could type in a command like this
```
months <- c("January", "February", "March", "April", "May", "June",
"July", "August", "September", "October", "November",
"December")
```
This is a character vector containing 12 elements, each of which is the name of a month. So if I wanted R to tell me the name of the fourth month, all I would do is this:
`months[4]` `## [1] "April"`
### 3.8.1 Working with text
Working with text data is somewhat more complicated than working with numeric data, and I discuss some of the basic ideas in Section 7.8, but for purposes of the current chapter we only need this bare bones sketch. The only other thing I want to do before moving on is show you an example of a function that can be applied to text data. So far, most of the functions that we have seen (i.e., `sqrt()` , `abs()` and `round()` ) only make sense when applied to numeric data (e.g., you can’t calculate the square root of “hello”), and we’ve seen one function that can be applied to pretty much any variable or vector (i.e., `length()` ). So it might be nice to see an example of a function that can be applied to text. The function I’m going to introduce you to is called `nchar()` , and what it does is count the number of individual characters that make up a string. If we calculate the `length()` of our `greeting` variable, it returns a value of `1` : the `greeting` variable contains only the one string, which happens to be `"hello"` . But what if I want to know how many letters there are in the word? Sure, I could count them, but that’s boring, and more to the point it’s a terrible strategy if what I wanted to know was the number of letters in War and Peace. That’s where the `nchar()` function is helpful:
```
nchar( x = greeting )
```
`## [1] 5` That makes sense, since there are in fact 5 letters in the string `"hello"` . Better yet, you can apply `nchar()` to whole vectors. So, for instance, if I want R to tell me how many letters there are in the names of each of the 12 months, I can do this: `nchar( x = months )`
```
## [1] 7 8 5 5 3 4 4 6 9 7 8 8
```
So that’s nice to know. The `nchar()` function can do a bit more than this, and there are a lot of other functions that you can use to extract more information from text or do all sorts of fancy things. However, the goal here is not to teach any of that! The goal right now is just to see an example of a function that actually does work when applied to text.
## 3.9 Storing “true or false” data
Time to move on to a third kind of data. A key concept that a lot of R relies on is the idea of a logical value. A logical value is an assertion about whether something is true or false. This is implemented in R in a pretty straightforward way. There are two logical values, namely `TRUE` and `FALSE` . Despite their simplicity, logical values are very useful things. Let’s see how they work.
### 3.9.1 Assessing mathematical truths
In George Orwell’s classic book 1984, one of the slogans used by the totalitarian Party was “two plus two equals five”, the idea being that the political domination of human freedom becomes complete when it is possible to subvert even the most basic of truths. It’s a terrifying thought, especially when the protagonist Winston Smith finally breaks down under torture and agrees to the proposition. “Man is infinitely malleable”, the book says. I’m pretty sure that this isn’t true of humans37 but it’s definitely not true of R. R is not infinitely malleable. It has rather firm opinions on the topic of what is and isn’t true, at least as regards basic mathematics. If I ask it to calculate `2 + 2` , it always gives the same answer, and it’s not bloody 5: `2 + 2` `## [1] 4`
Of course, so far R is just doing the calculations. I haven’t asked it to explicitly assert that \(2+2 = 4\) is a true statement. If I want R to make an explicit judgement, I can use a command like this:
`2 + 2 == 4` `## [1] TRUE` What I’ve done here is use the equality operator, `==` , to force R to make a “true or false” judgement.38 Okay, let’s see what R thinks of the Party slogan: `2+2 == 5` `## [1] FALSE` Booyah! Freedom and ponies for all! Or something like that. Anyway, it’s worth having a look at what happens if I try to force R to believe that two plus two is five by making an assignment statement like `2 + 2 = 5` or `2 + 2 <- 5` . When I do this, here’s what happens: `2 + 2 = 5`
```
## Error in 2 + 2 = 5: target of assignment expands to non-language object
```
R doesn’t like this very much. It recognises that `2 + 2` is not a variable (that’s what the “non-language object” part is saying), and it won’t let you try to “reassign” it. While R is pretty flexible, and actually does let you do some quite remarkable things to redefine parts of R itself, there are just some basic, primitive truths that it refuses to give up. It won’t change the laws of addition, and it won’t change the definition of the number `2` .
That’s probably for the best.
### 3.9.2 Logical operations
So now we’ve seen logical operations at work, but so far we’ve only seen the simplest possible example. You probably won’t be surprised to discover that we can combine logical operations with other operations and functions in a more complicated way, like this:
`3*3 + 4*4 == 5*5` `## [1] TRUE`
or this
`sqrt( 25 ) == 5` `## [1] TRUE`
Not only that, but as Table 3.2 illustrates, there are several other logical operators that you can use, corresponding to some basic mathematical concepts.
operation | operator | example input | answer |
| --- | --- | --- | --- |
less than | < | 2 < 3 | TRUE |
less than or equal to | <= | 2 <= 2 | TRUE |
greater than | > | 2 > 3 | FALSE |
greater than or equal to | >= | 2 >= 2 | TRUE |
equal to | == | 2 == 3 | FALSE |
not equal to | != | 2 != 3 | TRUE |
Hopefully these are all pretty self-explanatory: for example, the less than operator `<` checks to see if the number on the left is less than the number on the right. If it’s less, then R returns an answer of `TRUE` : `99 < 100` `## [1] TRUE` but if the two numbers are equal, or if the one on the right is larger, then R returns an answer of `FALSE` , as the following two examples illustrate: `100 < 100` `## [1] FALSE` `100 < 99` `## [1] FALSE` In contrast, the less than or equal to operator `<=` will do exactly what it says. It returns a value of `TRUE` if the number on the left hand side is less than or equal to the number on the right hand side. So if we repeat the previous two examples using `<=` , here’s what we get: `100 <= 100` `## [1] TRUE` `100 <= 99` `## [1] FALSE` And at this point I hope it’s pretty obvious what the greater than operator `>` and the greater than or equal to operator `>=` do! Next on the list of logical operators is the not equal to operator `!=` which – as with all the others – does what it says it does. It returns a value of `TRUE` when things on either side are not identical to each other. Therefore, since \(2+2\) isn’t equal to \(5\), we get: `2 + 2 != 5` `## [1] TRUE`
We’re not quite done yet. There are three more logical operations that are worth knowing about, listed in Table 3.3.
operation | operator | example input | answer |
| --- | --- | --- | --- |
not | ! | !(1==1) | FALSE |
or | \| | (1==1) \| (2==3) | TRUE |
and | & | (1==1) & (2==3) | FALSE |
These are the not operator `!` , the and operator `&` , and the or operator `|` . Like the other logical operators, their behaviour is more or less exactly what you’d expect given their names. For instance, if I ask you to assess the claim that “either \(2+2 = 4\) or \(2+2 = 5\)” you’d say that it’s true. Since it’s an “either-or” statement, all we need is for one of the two parts to be true. That’s what the `|` operator does:
```
(2+2 == 4) | (2+2 == 5)
```
`## [1] TRUE` On the other hand, if I ask you to assess the claim that “both \(2+2 = 4\) and \(2+2 = 5\)” you’d say that it’s false. Since this is an and statement we need both parts to be true. And that’s what the `&` operator does:
```
(2+2 == 4) & (2+2 == 5)
```
`## [1] FALSE`
Finally, there’s the not operator, which is simple but annoying to describe in English. If I ask you to assess my claim that “it is not true that \(2+2 = 5\)” then you would say that my claim is true; because my claim is that “\(2+2 = 5\) is false”. And I’m right. If we write this as an R command we get this:
`! (2+2 == 5)` `## [1] TRUE` In other words, since `2+2 == 5` is a `FALSE` statement, it must be the case that `!(2+2 == 5)` is a `TRUE` one. Essentially, what we’ve really done is claim that “not false” is the same thing as “true”. Obviously, this isn’t really quite right in real life. But R lives in a much more black or white world: for R everything is either true or false. No shades of gray are allowed. We can actually see this much more explicitly, like this: `! FALSE` `## [1] TRUE` Of course, in our \(2+2 = 5\) example, we didn’t really need to use “not” `!` and “equals to” `==` as two separate operators. We could have just used the “not equals to” operator `!=` like this: `2+2 != 5` `## [1] TRUE` But there are many situations where you really do need to use the `!` operator. We’ll see some later on.39
### 3.9.3 Storing and using logical data
Up to this point, I’ve introduced numeric data (in Sections 3.4 and 3.7) and character data (in Section 3.8). So you might not be surprised to discover that these `TRUE` and `FALSE` values that R has been producing are actually a third kind of data, called logical data. That is, when I asked R if `2 + 2 == 5` and it said `[1] FALSE` in reply, it was actually producing information that we can store in variables. For instance, I could create a variable called `is.the.Party.correct` , which would store R’s opinion:
```
is.the.Party.correct <- 2 + 2 == 5
is.the.Party.correct
```
`## [1] FALSE` Alternatively, you can assign the value directly, by typing `TRUE` or `FALSE` in your command. Like this:
```
is.the.Party.correct <- FALSE
is.the.Party.correct
```
`## [1] FALSE` Better yet, because it’s kind of tedious to type `TRUE` or `FALSE` over and over again, R provides you with a shortcut: you can use `T` and `F` instead (but it’s case sensitive: `t` and `f` won’t work).40 So this works:
```
is.the.Party.correct <- F
is.the.Party.correct
```
`## [1] FALSE`
but this doesn’t:
```
is.the.Party.correct <- f
```
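Instead of assigning a value, R complains that it can’t find any variable called `f` , with an error message along these lines (the exact wording can vary a little):
```
## Error: object 'f' not found
```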
### 3.9.4 Vectors of logicals
The next thing to mention is that you can store vectors of logical values in exactly the same way that you can store vectors of numbers (Section 3.7) and vectors of text data (Section 3.8). Again, we can define them directly via the `c()` function, like this:
```
x <- c(TRUE, TRUE, FALSE)
x
```
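When you print it out, you should see something like this:
```
## [1]  TRUE  TRUE FALSE
```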
or you can produce a vector of logicals by applying a logical operator to a vector. This might not make a lot of sense to you, so let’s unpack it slowly. First, let’s suppose we have a vector of numbers (i.e., a “non-logical vector”). For instance, we could use the `sales.by.month` vector that we were using in Section 3.7. Suppose I wanted R to tell me, for each month of the year, whether I actually sold a book in that month. I can do that by typing this: `sales.by.month > 0`
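which should produce output along these lines:
```
## [1] FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
```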
and again, I can store this in a vector if I want, as the example below illustrates:
```
any.sales.this.month <- sales.by.month > 0
any.sales.this.month
```
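The printed result should be the same pattern of logical values:
```
## [1] FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
```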
In other words, `any.sales.this.month` is a logical vector whose elements are `TRUE` only if the corresponding element of `sales.by.month` is greater than zero. For instance, since I sold zero books in January, the first element is `FALSE` .
### 3.9.5 Applying logical operations to text
In a moment (Section 3.10) I’ll show you why these logical operations and logical vectors are so handy, but before I do so I want to very briefly point out that you can apply them to text as well as to logical data. It’s just that we need to be a bit more careful in understanding how R interprets the different operations. In this section I’ll talk about how the equal to operator `==` applies to text, since this is the most important one. Obviously, the not equal to operator `!=` gives the exact opposite answers to `==` so I’m implicitly talking about that one too, but I won’t give specific commands showing the use of `!=` . As for the other operators, I’ll defer a more detailed discussion of this topic to Section 7.8.5. Okay, let’s see how it works. In one sense, it’s very simple. For instance, I can ask R if the word `"cat"` is the same as the word `"dog"` , like this: `"cat" == "dog"` `## [1] FALSE` That’s pretty obvious, and it’s good to know that even R can figure that out. Similarly, R does recognise that a `"cat"` is a `"cat"` : `"cat" == "cat"` `## [1] TRUE`
Again, that’s exactly what we’d expect. However, what you need to keep in mind is that R is not at all tolerant when it comes to grammar and spacing. If two strings differ in any way whatsoever, R will say that they’re not equal to each other, as the following examples indicate:
`" cat" == "cat"` `## [1] FALSE` `"cat" == "CAT"` `## [1] FALSE` `"cat" == "c a t"` `## [1] FALSE`
## 3.10 Indexing vectors
One last thing to add before finishing up this chapter. So far, whenever I’ve had to get information out of a vector, all I’ve done is typed something like `months[4]` ; and when I do this R prints out the fourth element of the `months` vector. In this section, I’ll show you two additional tricks for getting information out of the vector.
### 3.10.1 Extracting multiple elements
One very useful thing we can do is pull out more than one element at a time. In the previous example, we only used a single number (i.e., `2` ) to indicate which element we wanted. Alternatively, we can use a vector. So, suppose I wanted the data for February, March and April. What I could do is use the vector `c(2,3,4)` to indicate which elements I want R to pull out. That is, I’d type this:
```
sales.by.month[ c(2,3,4) ]
```
`## [1] 100 200 50` Notice that the order matters here. If I asked for the data in the reverse order (i.e., April first, then March, then February) by using the vector `c(4,3,2)` , then R outputs the data in the reverse order:
```
sales.by.month[ c(4,3,2) ]
```
`## [1] 50 200 100` A second thing to be aware of is that R provides you with handy shortcuts for very common situations. For instance, suppose that I wanted to extract everything from the 2nd month through to the 8th month. One way to do this is to do the same thing I did above, and use the vector `c(2,3,4,5,6,7,8)` to indicate the elements that I want. That works just fine
```
sales.by.month[ c(2,3,4,5,6,7,8) ]
```
but it’s kind of a lot of typing. To help make this easier, R lets you use `2:8` as shorthand for `c(2,3,4,5,6,7,8)` , which makes things a lot simpler. First, let’s just check that this is true: `2:8` `## [1] 2 3 4 5 6 7 8` Next, let’s check that we can use the `2:8` shorthand as a way to pull out the 2nd through 8th elements of `sales.by.month` : `sales.by.month[2:8]`
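which should give you the seven sales figures from February through to August:
```
## [1] 100 200  50  25   0   0   0
```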
So that’s kind of neat.
### 3.10.2 Logical indexing
At this point, I can introduce an extremely useful tool called logical indexing. In the last section, I created a logical vector `any.sales.this.month` , whose elements are `TRUE` for any month in which I sold at least one book, and `FALSE` for all the others. However, that big long list of `TRUE` s and `FALSE` s is a little bit hard to read, so what I’d like to do is to have R select the names of the `months` for which I sold any books. Earlier on, I created a vector `months` that contains the names of each of the months. This is where logical indexing is handy. What I need to do is this:
```
months[ sales.by.month > 0 ]
```
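If you’ve typed this in, R should respond with the names of the months in which I sold at least one book:
```
## [1] "February" "March"    "April"    "May"
```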
To understand what’s happening here, it’s helpful to notice that `sales.by.month > 0` is the same logical expression that we used to create the `any.sales.this.month` vector in the last section. In fact, I could have just done this:
```
months[ any.sales.this.month ]
```
and gotten exactly the same result. In order to figure out which elements of `months` to include in the output, what R does is look to see if the corresponding element in `any.sales.this.month` is `TRUE` . Thus, since element 1 of `any.sales.this.month` is `FALSE` , R does not include `"January"` as part of the output; but since element 2 of `any.sales.this.month` is `TRUE` , R does include `"February"` in the output. Note that there’s no reason why I can’t use the same trick to find the actual sales numbers for those months. The command to do that would just be this:
```
sales.by.month [ sales.by.month > 0 ]
```
```
## [1] 100 200 50 25
```
In fact, we can do the same thing with text. Here’s an example. Suppose that – to continue the saga of the textbook sales – I later find out that the bookshop only had sufficient stocks for a few months of the year. They tell me that early in the year they had `"high"` stocks, which then dropped to `"low"` levels, and in fact for one month they were `"out"` of copies of the book for a while before they were able to replenish them. Thus I might have a variable called `stock.levels` which looks like this:
```
stock.levels <- c("high", "high", "low", "out", "out", "high",
                  "high", "high", "high", "high", "high", "high")
stock.levels
```
```
## [1] "high" "high" "low" "out" "out" "high" "high" "high" "high" "high"
## [11] "high" "high"
```
Thus, if I want to know the months for which the bookshop was out of my book, I could apply the logical indexing trick, but with the character vector `stock.levels` , like this:
```
months[stock.levels == "out"]
```
`## [1] "April" "May"`
Alternatively, if I want to know when the bookshop was either low on copies or out of copies, I could do this:
```
months[stock.levels == "out" | stock.levels == "low"]
```
or this
```
months[stock.levels != "high" ]
```
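Both versions should return the same three months:
```
## [1] "March" "April" "May"
```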
Either way, I get the answer I want.
At this point, I hope you can see why logical indexing is such a useful thing. It’s a very basic, yet very powerful way to manipulate data. We’ll talk a lot more about how to manipulate data in Chapter 7, since it’s a critical skill for real world research that is often overlooked in introductory research methods classes (or at least, that’s been my experience). It does take a bit of practice to become completely comfortable using logical indexing, so it’s a good idea to play around with these sorts of commands. Try creating a few different variables of your own, and then ask yourself questions like “how do I get R to spit out all the elements that are [blah]”. Practice makes perfect, and it’s only by practicing logical indexing that you’ll perfect the art of yelling frustrated insults at your computer.41
## 3.11 Quitting R
There’s one last thing I should cover in this chapter: how to quit R. When I say this, I’m not trying to imply that R is some kind of pathological addiction and that you need to call the R QuitLine or wear patches to control the cravings (although you certainly might argue that there’s something seriously pathological about being addicted to R). I just mean how to exit the program. Assuming you’re running R in the usual way (i.e., through RStudio or the default GUI on a Windows or Mac computer), then you can just shut down the application in the normal way. However, R also has a function, called `q()` , that you can use to quit, which is pretty handy if you’re running R in a terminal window.
Regardless of what method you use to quit R, when you do so for the first time R will probably ask you if you want to save the “workspace image”. We’ll talk a lot more about loading and saving data in Section 4.5, but I figured we’d better quickly cover this now otherwise you’re going to get annoyed when you close R at the end of the chapter. If you’re using RStudio, you’ll see a dialog box that looks like the one shown in Figure 3.5. If you’re using a text based interface you’ll see this:
```
q()
## Save workspace image? [y/n/c]:
```
The `y/n/c` part here is short for “yes / no / cancel”. Type `y` if you want to save, `n` if you don’t, and `c` if you’ve changed your mind and you don’t want to quit after all. What does this actually mean? What’s going on is that R wants to know if you want to save all those variables that you’ve been creating, so that you can use them later. This sounds like a great idea, so it’s really tempting to type `y` or click the “Save” button. To be honest though, I very rarely do this, and it kind of annoys me a little bit… what R is really asking is if you want it to store these variables in a “default” data file, which it will automatically reload for you next time you open R. And quite frankly, if I’d wanted to save the variables, then I’d have already saved them before trying to quit. Not only that, I’d have saved them to a location of my choice, so that I can find it again later. So I personally never bother with this.
In fact, every time I install R on a new machine one of the first things I do is change the settings so that it never asks me again. You can do this in RStudio really easily: use the menu system to find the RStudio option; the dialog box that comes up will give you an option to tell R never to whine about this again (see Figure 3.6). On a Mac, you can open this window by going to the “RStudio” menu and selecting “Preferences”. On a Windows machine you go to the “Tools” menu and select “Global Options”. Under the “General” tab you’ll see an option that reads “Save workspace to .RData on exit”. By default this is set to “ask”. If you want R to stop asking, change it to “never”.
## 3.12 Summary
Every book that tries to introduce basic programming ideas to novices has to cover roughly the same topics, and in roughly the same order. Mine is no exception, and so in the grand tradition of doing it just the same way everyone else did it, this chapter covered the following topics:
* Getting started. We downloaded and installed R and RStudio.
* Basic commands. We talked a bit about the logic of how R works and in particular how to type commands into the R console, and in doing so learned how to perform basic calculations using the arithmetic operators `+`, `-`, `*`, `/` and `^`.
* Introduction to functions. We saw several different functions, three that are used to perform numeric calculations (`sqrt()`, `abs()`, `round()`), one that applies to text (`nchar()`; Section 3.8.1), and one that works on any variable (`length()`; Section 3.7.5). In doing so, we talked a bit about how argument names work, and learned about default values for arguments. (Section 3.5.1)
* Introduction to variables. We learned the basic idea behind variables, and how to assign values to variables using the assignment operator `<-` (Section 3.4). We also learned how to create vectors using the combine function `c()` (Section 3.7).
* Data types. Learned the distinction between numeric, character and logical data; including the basics of how to enter and use each of them. (Sections 3.4 to 3.9)
* Logical operations. Learned how to use the logical operators `==`, `!=`, `<`, `>`, `<=`, `>=`, `!`, `&` and `|`. And learned how to use logical indexing. (Section 3.10)
We still haven’t arrived at anything that resembles a “data set”, of course. Maybe the next Chapter will get us a bit closer…
* Source: Dismal Light (1968).
* Although R is updated frequently, it doesn’t usually make much of a difference for the sort of work we’ll do in this book. In fact, during the writing of the book I upgraded several times, and didn’t have to change much except these sections describing the downloading.
* If you’re running an older version of the Mac OS, then you need to follow the link to the “old” page (http://cran.r-project.org/bin/macosx/old/). You should be able to find the installer file that you need at the bottom of the page.
* Tip for advanced Mac users. You can run R from the terminal if you want to. The command is just “R”. It behaves like the normal desktop version, except that help documentation behaves like a “man” page instead of opening in a new window.
* This is probably no coincidence: the people who design and distribute the core R language itself are focused on technical stuff. And sometimes they almost seem to forget that there’s an actual human user at the end. The people who design and distribute RStudio are focused on user interface. They want to make R as usable as possible. The two websites reflect that difference.
* Seriously. If you’re in a position to do so, open up R and start typing. The simple act of typing it rather than “just reading” makes a big difference. It makes the concepts more concrete, and it ties the abstract ideas (programming and statistics) to the actual context in which you need to use them. Statistics is something you do, not just something you read about in a textbook.
* If you’re running R from the terminal rather than from RStudio, escape doesn’t work: use CTRL-C instead.
* For advanced users: yes, as you’ve probably guessed, R is printing out the source code for the function.
* If you’re reading this with R open, a good learning trick is to try typing in a few different variations on what I’ve done here. If you experiment with your commands, you’ll quickly learn what works and what doesn’t.
* For advanced users: if you want a table showing the complete order of operator precedence in R, type `?Syntax`. I haven’t included it in this book since there are quite a few different operators, and we don’t need that much detail. Besides, in practice most people seem to figure it out from seeing examples: until writing this book I never looked at the formal statement of operator precedence for any language I ever coded in, and never ran into any difficulties.
* If you are using RStudio, and the “environment” panel (formerly known as the “workspace” panel) is visible when you typed the command, then you probably saw something happening there. That’s to be expected, and is quite helpful. However, there are two things to note here: (1) I haven’t yet explained what that panel does, so for now just ignore it, and (2) this is one of the helpful things RStudio does, not a part of R itself.
* As we’ll discuss later, by doing this we are implicitly using the `print()` function.
* Actually, in keeping with the R tradition of providing you with a billion different screwdrivers (even when you’re actually looking for a hammer) these aren’t the only options. There’s also the `assign()` function, and the `<<-` and `->>` operators. However, we won’t be using these at all in this book.
* A quick reminder: when using operators like `<-` and `->` that span multiple characters, you can’t insert spaces in the middle. That is, if you type `- >` or `< -`, R will interpret your command the wrong way. And I will cry.
* Actually, you can override any of these rules if you want to, and quite easily. All you have to do is add quote marks or backticks around your non-standard variable name. For instance `` `my sales` <- 350 `` would work just fine, but it’s almost never a good idea to do this.
* For very advanced users: there is one exception to this. If you’re naming a function, don’t use `.` in the name unless you are intending to make use of the S3 object oriented programming system in R. If you don’t know what S3 is, then you definitely don’t want to be using it! For function naming, there’s been a trend among R users to prefer `myFunctionName`.
* A side note for students with a programming background. Technically speaking, operators are functions in R: the addition operator `+` is actually a convenient way of calling the addition function `+()`. Thus `10+20` is equivalent to the function call `+(10, 20)`. Not surprisingly, no-one ever uses this version. Because that would be stupid.
* A note for the mathematically inclined: R does support complex numbers, but unless you explicitly specify that you want them it assumes all calculations must be real valued. By default, the square root of a negative number is treated as undefined: `sqrt(-9)` will produce `NaN` (not a number) as its output. To get complex numbers, you would type `sqrt(-9+0i)` and R would now return `0+3i`. However, since we won’t have any need for complex numbers in this book, I won’t refer to them again.
* The two functions discussed previously, `sqrt()` and `abs()`, both only have a single argument, `x`. So I could have typed something like `sqrt(x = 225)` or `abs(x = -13)` earlier. The fact that all these functions use `x` as the name of the argument that corresponds to the “main” variable that you’re working with is no coincidence. That’s a fairly widely used convention. Quite often, the writers of R functions will try to use conventional names like this to make your life easier. Or at least that’s the theory. In practice it doesn’t always work as well as you’d hope.
* For advanced users: obviously, this isn’t just an RStudio thing. If you’re running R in a terminal window, tab autocomplete still works, and does so in exactly the way you’d expect. It’s not as visually pretty as the RStudio version, of course, and lacks some of the cooler features that RStudio provides. I don’t bother to document that here: my assumption is that if you are running R in the terminal then you’re already familiar with using tab autocomplete.
* Incidentally, that always works: if you’ve started typing a command and you want to clear it and start again, hit escape.
* Another method is to start typing some text and then hit the Control key and the up arrow together (on Windows or Linux) or the Command key and the up arrow together (on a Mac). This will bring up a window showing all your recent commands that started with the same text as what you’ve currently typed. That can come in quite handy sometimes.
* Notice that I didn’t specify any argument names here. The `c()` function is one of those cases where we don’t use names. We just type all the numbers, and R just dumps them all in a single variable.
* Though actually there’s no real need to do this, since R has an inbuilt variable called `month.name` that you can use for this purpose.
* I offer up my teenage attempts to be “cool” as evidence that some things just can’t be done.
* Note that this is a very different operator to the assignment operator `=` that I talked about in Section 3.4. A common typo that people make when trying to write logical commands in R (or other languages, since the “`=` versus `==`” distinction is important in most programming languages) is to accidentally type `=` when you really mean `==`. Be especially cautious with this – I’ve been programming in various languages since I was a teenager, and I still screw this up a lot. Hm. I think I see why I wasn’t cool as a teenager. And why I’m still not cool.
* A note for those of you who have taken a computer science class: yes, R does have a function for exclusive-or, namely `xor()`. Also worth noting is the fact that R makes the distinction between element-wise operators `&` and `|` and operators that look only at the first element of the vector, namely `&&` and `||`. To see the distinction, compare the behaviour of a command like `c(FALSE,TRUE) & c(TRUE,TRUE)` to the behaviour of something like `c(FALSE,TRUE) && c(TRUE,TRUE)`. If this doesn’t mean anything to you, ignore this footnote entirely. It’s not important for the content of this book.
* Warning! `TRUE` and `FALSE` are reserved keywords in R, so you can trust that they always mean what they say they do. Unfortunately, the shortcut versions `T` and `F` do not have this property. It’s even possible to create variables that set up the reverse meanings, by typing commands like `T <- FALSE` and `F <- TRUE`. This is kind of insane, and something that is generally thought to be a design flaw in R. Anyway, the long and short of it is that it’s safer to use `TRUE` and `FALSE`.
* Well, I say that… but in my personal experience it wasn’t until I started learning “regular expressions” that my loathing of computers reached its peak.
# Chapter 4 Additional R concepts
Form follows function
– Louis Sullivan
In Chapter 3 our main goal was to get started in R. As we go through the book we’ll run into a lot of new R concepts, which I’ll explain alongside the relevant data analysis concepts. However, there’s still quite a few things that I need to talk about now, otherwise we’ll run into problems when we start trying to work with data and do statistics. So that’s the goal in this chapter: to build on the introductory content from the last chapter, to get you to the point that we can start using R for statistics. Broadly speaking, the chapter comes in two parts. The first half of the chapter is devoted to the “mechanics” of R: installing and loading packages, managing the workspace, navigating the file system, and loading and saving data. In the second half, I’ll talk more about what kinds of variables exist in R, and introduce three new kinds of variables: factors, data frames and formulas. I’ll finish up by talking a little bit about the help documentation in R as well as some other avenues for finding assistance. In general, I’m not trying to be comprehensive in this chapter, I’m trying to make sure that you’ve got the basic foundations needed to tackle the content that comes later in the book. However, a lot of the topics are revisited in more detail later, especially in Chapters 7 and 8.
## 4.1 Using comments
Before discussing any of the more complicated stuff, I want to introduce the comment character, `#` . It has a simple meaning: it tells R to ignore everything else you’ve written on this line. You won’t have much need of the `#` character immediately, but it’s very useful later on when writing scripts (see Chapter 8). However, while you don’t need to use it, I want to be able to include comments in my R extracts. For instance, if you read this:42
```
seeker <- 3.1415 # create the first variable
lover <- 2.7183 # create the second variable
keeper <- seeker * lover # now multiply them to create a third one
print( keeper ) # print out the value of 'keeper'
```
`## [1] 8.539539`
it’s a lot easier to understand what I’m doing than if I just write this:
```
seeker <- 3.1415
lover <- 2.7183
keeper <- seeker * lover
print( keeper )
```
`## [1] 8.539539` You might have already noticed that the code extracts in Chapter 3 included the `#` character, but from now on, you’ll start seeing `#` characters appearing in the extracts, with some human-readable explanatory remarks next to them. These are still perfectly legitimate commands, since R knows that it should ignore the `#` character and everything after it. But hopefully they’ll help make things a little easier to understand.
## 4.2 Installing and loading packages
In this section I discuss R packages, since almost all of the functions you might want to use in R come in packages. A package is basically just a big collection of functions, data sets and other R objects that are all grouped together under a common name. Some packages are already installed when you put R on your computer, but the vast majority of R packages are out there on the internet, waiting for you to download, install and use them.
When I first started writing this book, RStudio didn’t really exist as a viable option for using R, and as a consequence I wrote a very lengthy section that explained how to do package management using raw R commands. It’s not actually terribly hard to work with packages that way, but it’s clunky and unpleasant. Fortunately, we don’t have to do things that way anymore. In this section, I’ll describe how to work with packages using the RStudio tools, because they’re so much simpler. Along the way, you’ll see that whenever you get RStudio to do something (e.g., install a package), you’ll actually see the R commands that get created. I’ll explain them as we go, because I think that helps you understand what’s going on.
However, before we get started, there’s a critical distinction that you need to understand, which is the difference between having a package installed on your computer, and having a package loaded in R. As of this writing, there are just over 5000 R packages freely available “out there” on the internet.43 When you install R on your computer, you don’t get all of them: only about 30 or so come bundled with the basic R installation. So right now there are about 30 packages “installed” on your computer, and another 5000 or so that are not installed. So that’s what installed means: it means “it’s on your computer somewhere”. The critical thing to remember is that just because something is on your computer doesn’t mean R can use it. In order for R to be able to use one of your 30 or so installed packages, that package must also be “loaded”. Generally, when you open up R, only a few of these packages (about 7 or 8) are actually loaded. Basically what it boils down to is this:
A package must be installed before it can be loaded.
A package must be loaded before it can be used.
This two step process might seem a little odd at first, but the designers of R had very good reasons to do it this way,44 and you get the hang of it pretty quickly.
### 4.2.1 The package panel in RStudio
Right, let’s get started. The first thing you need to do is look in the lower right hand panel in RStudio. You’ll see a tab labelled “Packages”. Click on the tab, and you’ll see a list of packages that looks something like Figure 4.1. Every row in the panel corresponds to a different package, and every column is a useful piece of information about that package.45 Going from left to right, here’s what each column is telling you:
* The check box on the far left column indicates whether or not the package is loaded.
* The one word of text immediately to the right of the check box is the name of the package.
* The short passage of text next to the name is a brief description of the package.
* The number next to the description tells you what version of the package you have installed.
* The little x-mark next to the version number is a button that you can push to uninstall the package from your computer (you almost never need this).
### 4.2.2 Loading a package
That seems straightforward enough, so let’s try loading and unloading packages. For this example, I’ll use the `foreign` package. The `foreign` package is a collection of tools that are very handy when R needs to interact with files that are produced by other software packages (e.g., SPSS). It comes bundled with R, so it’s one of the ones that you have installed already, but it won’t be one of the ones loaded. Inside the `foreign` package is a function called `read.spss()` . It’s a handy little function that you can use to import an SPSS data file into R, so let’s pretend we want to use it. Currently, the `foreign` package isn’t loaded, so if I ask R to tell me if it knows about a function called `read.spss()` it tells me that there’s no such thing…
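The easiest way to ask R that question is to use the `exists()` function, like this:
```
exists( "read.spss" )
```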
`## [1] FALSE` Now let’s load the package. In RStudio, the process is dead simple: go to the package tab, find the entry for the `foreign` package, and check the box on the left hand side. The moment that you do this, you’ll see a command like this appear in the R console:
```
library("foreign", lib.loc="/Library/Frameworks/R.framework/Versions/3.0/Resources/library")
```
The `lib.loc` bit will look slightly different on Macs versus on Windows, because that part of the command is just RStudio telling R where to look to find the installed packages. What I’ve shown you above is the Mac version. On a Windows machine, you’ll probably see something that looks like this:
```
library("foreign", lib.loc="C:/Program Files/R/R-3.0.2/library")
```
But actually it doesn’t matter much. The `lib.loc` bit is almost always unnecessary. Unless you’ve taken to installing packages in idiosyncratic places (which is something that you can do if you really want) R already knows where to look. So in the vast majority of cases, the command to load the `foreign` package is just this: `library("foreign")` Throughout this book, you’ll often see me typing in `library()` commands. You don’t actually have to type them in yourself: you can use the RStudio package panel to do all your package loading for you. The only reason I include the `library()` commands sometimes is as a reminder to you to make sure that you have the relevant package loaded. Oh, and I suppose we should check to see if our attempt to load the package actually worked. Let’s see if R now knows about the existence of the `read.spss()` function…
`## [1] TRUE`
Yep. All good.
### 4.2.3 Unloading a package
Sometimes, especially after a long session of working with R, you find yourself wanting to get rid of some of those packages that you’ve loaded. The RStudio package panel makes this exactly as easy as loading the package in the first place. Find the entry corresponding to the package you want to unload, and uncheck the box. When you do that for the `foreign` package, you’ll see this command appear on screen:
```
detach("package:foreign", unload=TRUE)
```
```
## Warning: 'foreign' namespace cannot be unloaded:
## namespace 'foreign' is imported by 'rio', 'psych' so cannot be unloaded
```
And the package is unloaded. We can verify this by seeing if the `read.spss()` function still `exists()` :
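```
exists( "read.spss" )
```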
`## [1] FALSE`
Nope. Definitely gone.
### 4.2.4 A few extra comments
Sections 4.2.2 and 4.2.3 cover the main things you need to know about loading and unloading packages. However, there’s a couple of other details that I want to draw your attention to. A concrete example is the best way to illustrate. One of the other packages that you already have installed on your computer is the `Matrix` package, so let’s load that one and see what happens:
```
library( Matrix )
## Loading required package: lattice
```
This is slightly more complex than the output that we got last time, but it’s not too complicated. The `Matrix` package makes use of some of the tools in the `lattice` package, and R has kept track of this dependency. So when you try to load the `Matrix` package, R recognises that you’re also going to need to have the `lattice` package loaded too. As a consequence, both packages get loaded, and R prints out a helpful little note on screen to tell you that it’s done so. R is pretty aggressive about enforcing these dependencies. Suppose, for example, I try to unload the `lattice` package while the `Matrix` package is still loaded. This is easy enough to try: all I have to do is uncheck the box next to “lattice” in the packages panel. But if I try this, here’s what happens:
```
detach("package:lattice", unload=TRUE)
## Error: package `lattice' is required by `Matrix' so will not be detached
```
R refuses to do it. This can be quite useful, since it stops you from accidentally removing something that you still need. So, if I want to remove both `Matrix` and `lattice` , I need to do it in the correct order. Something else you should be aware of. Sometimes you’ll attempt to load a package, and R will print out a message on screen telling you that something or other has been “masked”. This will be confusing to you if I don’t explain it now, and it actually ties very closely to the whole reason why R forces you to load packages separately from installing them. Here’s an example. Two of the packages that I’ll refer to a lot in this book are called `car` and `psych` . The `car` package is short for “Companion to Applied Regression” (which is a really great book, I’ll add), and it has a lot of tools that I’m quite fond of. The `car` package was written by a guy called John Fox, who has written a lot of great statistical tools for social science applications. The `psych` package was written by William Revelle, and it has a lot of functions that are very useful for psychologists in particular, especially in regards to psychometric techniques. For the most part, `car` and `psych` are quite unrelated to each other. They do different things, so not surprisingly almost all of the function names are different. But… there’s one exception to that. The `car` package and the `psych` package both contain a function called `logit()` .46 This creates a naming conflict. If I load both packages into R, an ambiguity is created. If the user types in `logit(100)` , should R use the `logit()` function in the `car` package, or the one in the `psych` package? The answer is: R uses whichever package you loaded most recently, and it tells you this very explicitly. Here’s what happens when I load the `car` package, and then afterwards load the `psych` package:
```
library(car)
library(psych)
```
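When the `psych` package finishes loading, you should see a message along these lines (the exact wording depends a little on the versions of the packages you have installed):
```
## 
## Attaching package: 'psych'
## 
## The following object is masked from 'package:car':
## 
##     logit
```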
The output here is telling you that the `logit` object (i.e., function) in the `car` package is no longer accessible to you. It’s been hidden (or “masked”) from you by the one in the `psych` package.47
### 4.2.5 Downloading new packages
One of the main selling points for R is that there are thousands of packages that have been written for it, and these are all available online. So whereabouts online are these packages to be found, and how do we download and install them? There is a big repository of packages called the “Comprehensive R Archive Network” (CRAN), and the easiest way of getting and installing a new package is from one of the many CRAN mirror sites. Conveniently for us, R provides a function called `install.packages()` that you can use to do this. Even more conveniently, the RStudio team runs its own CRAN mirror and RStudio has a clean interface that lets you install packages without having to learn how to use the `install.packages()` command48
Using the RStudio tools is, again, dead simple. In the top left hand corner of the packages panel (Figure 4.1) you’ll see a button called “Install Packages”. If you click on that, it will bring up a window like the one shown in Figure 4.2.
There are a few different buttons and boxes you can play with. Ignore most of them. Just go to the line that says “Packages” and start typing the name of the package that you want. As you type, you’ll see a dropdown menu appear (Figure 4.3), listing names of packages that start with the letters that you’ve typed so far.
You can select from this list, or just keep typing. Either way, once you’ve got the package name that you want, click on the install button at the bottom of the window. When you do, you’ll see the following command appear in the R console:
```
install.packages("psych")
```
This is the R command that does all the work. R then goes off to the internet, has a conversation with CRAN, downloads some stuff, and installs it on your computer. You probably don’t care about all the details of R’s little adventure on the web, but the `install.packages()` function is rather chatty, so it reports a bunch of gibberish that you really aren’t all that interested in:
```
trying URL 'http://cran.rstudio.com/bin/macosx/contrib/3.0/psych_1.4.1.tgz'
Content type 'application/x-gzip' length 2737873 bytes (2.6 Mb)
opened URL
==================================================
downloaded 2.6 Mb
The downloaded binary packages are in
/<KEY>/downloaded_packages
```
Despite the long and tedious response, all that really means is “I’ve installed the psych package”. I find it best to humour the talkative little automaton. I don’t actually read any of this garbage, I just politely say “thanks” and go back to whatever I was doing.
### 4.2.6 Updating R and R packages
Every now and then the authors of packages release updated versions. The updated versions often add new functionality, fix bugs, and so on. It’s generally a good idea to update your packages periodically. There’s an `update.packages()` function that you can use to do this, but it’s probably easier to stick with the RStudio tool. In the packages panel, click on the “Update Packages” button. This will bring up a window that looks like the one shown in Figure 4.4. In this window, each row refers to a package that needs to be updated. You can tell R which updates you want to install by checking the boxes on the left. If you’re feeling lazy and just want to update everything, click the “Select All” button, and then click the “Install Updates” button. R then prints out a lot of garbage on the screen, individually downloading and installing all the new packages. This might take a while to complete depending on how good your internet connection is. Go make a cup of coffee. Come back, and all will be well.
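If you’d rather do the same thing from the console, here’s a minimal sketch; `old.packages()` and `update.packages()` are both standard R functions, and the `ask = FALSE` argument simply stops R from prompting you about each individual package:

```
old.packages()                  # list the installed packages that have newer versions available
update.packages( ask = FALSE )  # download and install all available updates without prompting
```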
About every six months or so, a new version of R is released. You can’t update R from within RStudio (not to my knowledge, at least): to get the new version you can go to the CRAN website and download the most recent version of R, and install it in the same way you did when you originally installed R on your computer. This used to be a slightly frustrating event, because whenever you downloaded the new version of R, you would lose all the packages that you’d downloaded and installed, and would have to repeat the process of re-installing them. This was pretty annoying, and there were some neat tricks you could use to get around this. However, newer versions of R don’t have this problem so I no longer bother explaining the workarounds for that issue.
### 4.2.7 What packages does this book use?
There are several packages that I make use of in this book. The most prominent ones are:
* `lsr`. This is the Learning Statistics with R package that accompanies this book. It doesn’t have a lot of interesting high-powered tools: it’s just a small collection of handy little things that I think can be useful to novice users. As you get more comfortable with R this package should start to feel pretty useless to you.
* `psych`. This package, written by <NAME>, includes a lot of tools that are of particular use to psychologists. In particular, there are several functions that are particularly convenient for producing analyses or summaries that are very common in psych, but less common in other disciplines.
* `car`. This is the Companion to Applied Regression package, which accompanies the excellent book of the same name (Fox and Weisberg 2011). It provides a lot of very powerful tools, only some of which we’ll touch on in this book.

Besides these three, there are a number of packages that I use in a more limited fashion: `gplots`, `sciplot`, `foreign`, `effects`, `R.matlab`, `gdata`, `lmtest`, and probably one or two others that I’ve missed. There are also a number of packages that I refer to but don’t actually use in this book, such as `reshape`, `compute.es`, `HistData` and `multcomp` among others. Finally, there are a number of packages that provide more advanced tools that I hope to talk about in future versions of the book, such as `sem`, `ez`, `nlme` and `lme4`. In any case, whenever I’m using a function that isn’t in the core packages, I’ll make sure to note this in the text.
## 4.3 Managing the workspace
Let’s suppose that you’re reading through this book, and what you’re doing is sitting down with it once a week and working through a whole chapter in each sitting. Not only that, you’ve been following my advice and typing in all these commands into R. So far during this chapter, you’d have typed quite a few commands, although the only ones that actually involved creating variables were the ones you typed during Section 4.1. As a result, you currently have three variables: `seeker`, `lover`, and `keeper`. These three variables are the contents of your workspace, also referred to as the global environment. The workspace is a key concept in R, so in this section we’ll talk a lot about what it is and how to manage its contents.
### 4.3.1 Listing the contents of the workspace
The first thing that you need to know how to do is examine the contents of the workspace. If you’re using RStudio, you will probably find that the easiest way to do this is to use the “Environment” panel in the top right hand corner. Click on that, and you’ll see a list that looks very much like the one shown in Figures 4.5 and 4.6. If you’re using the command line, then the `objects()` function may come in handy: `objects()`
```
## [1] "a" "addArrow" "addDistPlot"
## [4] "afl" "afl.finalists" "afl.margins"
## [7] "afl2" "age" "age.breaks"
## [10] "age.group" "age.group2" "age.group3"
## [13] "age.labels" "agpp" "animals"
## [16] "anova.model" "anovaImg" "any.sales.this.month"
## [19] "awesome" "b" "bad.coef"
## [22] "balance" "beers" "berkeley"
## [25] "berkeley.small" "binomPlot" "bw"
## [28] "cake.1" "cake.2" "cake.df"
## [31] "cake.mat1" "cake.mat2" "cakes"
## [34] "cakes.flipped" "cardChoices" "cards"
## [37] "chapek9" "chapekFrequencies" "chi.sq.20"
## [40] "chi.sq.3" "chico" "chico2"
## [43] "chico3" "chiSqImg" "choice"
## [46] "choice.2" "clin.trial" "clin.trial.2"
## [49] "coef" "coffee" "colour"
## [52] "crit" "crit.hi" "crit.lo"
## [55] "crit.val" "crosstab" "d.cor"
## [58] "dan.awake" "data" "day"
## [61] "days.per.month" "def.par" "describeImg"
## [64] "dev.from.gp.means" "dev.from.grand.mean" "df"
## [67] "doubleMax" "drawBasicScatterplot" "drug.anova"
## [70] "drug.lm" "drug.means" "drug.regression"
## [73] "druganxifree" "drugs" "drugs.2"
## [76] "eff" "effort" "emphCol"
## [79] "emphColLight" "emphGrey" "eps"
## [82] "es" "estImg" "eta.squared"
## [85] "eventNames" "expected" "f"
## [88] "F.3.20" "F.stat" "fac"
## [91] "february.sales" "fibonacci" "Fibonacci"
## [94] "fileName" "freq" "full.model"
## [97] "G" "garden" "gender"
## [100] "generateRLineTypes" "generateRPointShapes" "good.coef"
## [103] "gp.mean" "gp.means" "gp.sizes"
## [106] "grades" "grand.mean" "greeting"
## [109] "group" "h" "happiness"
## [112] "harpo" "heavy.tailed.data" "height"
## [115] "hw" "i" "interest"
## [118] "IQ" "is.MP.speaking" "is.the.Party.correct"
## [121] "itng" "itng.table" "keeper"
## [124] "likert.centred" "likert.ordinal" "likert.raw"
## [127] "lover" "lower.area" "m"
## [130] "M" "M0" "M1"
## [133] "makka.pakka" "max.val" "mod"
## [136] "mod.1" "mod.2" "mod.3"
## [139] "mod.4" "mod.H" "mod.R"
## [142] "model" "model.1" "model.2"
## [145] "model.3" "models" "monkey"
## [148] "monkey.1" "month" "monthly.multiplier"
## [151] "months" "ms.diff" "ms.res"
## [154] "msg" "mu" "mu.null"
## [157] "my.anova" "my.anova.residuals" "my.contrasts"
## [160] "my.var" "n" "N"
## [163] "ng" "nhstImg" "nodrug.regression"
## [166] "normal.a" "normal.b" "normal.c"
## [169] "normal.d" "normal.data" "null.model"
## [172] "nullProbs" "numbers" "observed"
## [175] "old" "old.text" "old_par"
## [178] "oneCorPlot" "opinion.dir" "opinion.strength"
## [181] "out.0" "out.1" "out.2"
## [184] "outcome" "p.value" "parenthood"
## [187] "payments" "PJ" "plotHist"
## [190] "plotOne" "plotSamples" "plotTwo"
## [193] "pow" "probabilities" "profit"
## [196] "projecthome" "quadruple" "r"
## [199] "R.squared" "random.contrasts" "regression.1"
## [202] "regression.2" "regression.3" "regression.model"
## [205] "regression.model.2" "regression.model.3" "regressionImg"
## [208] "resid" "revenue" "right.table"
## [211] "row.1" "row.2" "royalty"
## [214] "rtfm.1" "rtfm.2" "rtfm.3"
## [217] "s" "salem.tabs" "sales"
## [220] "sales.by.month" "sample.mean" "scaled.chi.sq.20"
## [223] "scaled.chi.sq.3" "score.A" "score.B"
## [226] "sd.true" "sd1" "seeker"
## [229] "sem.true" "setUpPlot" "sig"
## [232] "sigEx" "simpson" "skewed.data"
## [235] "some.data" "speaker" "speech.by.char"
## [238] "squared.devs" "ss.diff" "SS.drug"
## [241] "SS.res" "ss.res.full" "ss.res.null"
## [244] "SS.resid" "SS.therapy" "ss.tot"
## [247] "SS.tot" "SSb" "SStot"
## [250] "SSw" "stock.levels" "suspicious.cases"
## [253] "t.3" "teams" "text"
## [256] "therapy.means" "theta" "today"
## [259] "tombliboo" "total.paid" "trial"
## [262] "ttestImg" "type.I.sum" "type.II.sum"
## [265] "upper.area" "upsy.daisy" "utterance"
## [268] "w" "W" "w.length"
## [271] "width" "words" "wt.squared.devs"
## [274] "x" "X" "x1"
## [277] "X1" "x2" "X2"
## [280] "x3" "X3" "X4"
## [283] "xlu" "xtab.3d" "xval"
## [286] "y" "Y" "Y.pred"
## [289] "y1" "Y1" "y2"
## [292] "Y2" "y3" "Y3"
## [295] "Y4" "Ybar" "yhat.2"
## [298] "yval" "yval.1" "yval.2"
## [301] "z" "Z" "z.score"
```
Of course, in the true R tradition, the `objects()` function has a lot of fancy capabilities that I’m glossing over in this example. Moreover there are also several other functions that you can use, including `ls()` which is pretty much identical to `objects()` , and `ls.str()` which you can use to get a fairly detailed description of all the variables in the workspace. In fact, the `lsr` package actually includes its own function that you can use for this purpose, called `who()` . The reason for using the `who()` function is pretty straightforward: in my everyday work I find that the output produced by the `objects()` command isn’t quite informative enough, because the only thing it prints out is the name of each variable; but the `ls.str()` function is too informative, because it prints out a lot of additional information that I really don’t like to look at. The `who()` function is a compromise between the two. First, now that we’ve got the `lsr` package installed, we need to load it: `library(lsr)` and now we can use the `who()` function: `who()`
As you can see, the `who()` function lists all the variables and provides some basic information about what kind of variable each one is and how many elements it contains. Personally, I find this output much more useful than the very compact output of the `objects()` function, but less overwhelming than the extremely verbose `ls.str()` function. Throughout this book you’ll see me using the `who()` function a lot. You don’t have to use it yourself: in fact, I suspect you’ll find it easier to look at the RStudio environment panel. But for the purposes of writing a textbook I found it handy to have a nice text based description: otherwise there would be about another 100 or so screenshots added to the book.49
### 4.3.2 Removing variables from the workspace
Looking over that list of variables, it occurs to me that I really don’t need them any more. I created them originally just to make a point, but they don’t serve any useful purpose anymore, and now I want to get rid of them. I’ll show you how to do this, but first I want to warn you – there’s no “undo” option for variable removal. Once a variable is removed, it’s gone forever unless you save it to disk. I’ll show you how to do that in Section 4.5, but quite clearly we have no need for these variables at all, so we can safely get rid of them.
In RStudio, the easiest way to remove variables is to use the environment panel. Assuming that you’re in grid view (i.e., Figure 4.6), check the boxes next to the variables that you want to delete, then click on the “Clear” button at the top of the panel. When you do this, RStudio will show a dialog box asking you to confirm that you really do want to delete the variables. It’s always worth checking that you really do, because as RStudio is at pains to point out, you can’t undo this. Once a variable is deleted, it’s gone.50 In any case, if you click “yes”, that variable will disappear from the workspace: it will no longer appear in the environment panel, and it won’t show up when you use the `who()` command.

Suppose you don’t have access to RStudio, and you still want to remove variables. This is where the remove function `rm()` comes in handy. The simplest way to use `rm()` is just to type in a (comma separated) list of all the variables you want to remove. Let’s say I want to get rid of `seeker` and `lover`, but I would like to keep `keeper`. To do this, all I have to do is type: `rm( seeker, lover )`
There’s no visible output, but if I now inspect the workspace
`who()`
I see that there’s only the `keeper` variable left. As you can see, `rm()` can be very handy for keeping the workspace tidy.
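As an aside, if you ever want to wipe the whole workspace from the console in one go, you can combine `rm()` with `ls()`. This is just a sketch, and remember there’s no undo, so don’t run it unless you mean it:

```
rm( list = ls() )   # remove every variable currently in the workspace
ls()                # the workspace is now empty
## character(0)
```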
## 4.5 Loading and saving data
There are several different types of files that are likely to be relevant to us when doing data analysis. There are three in particular that are especially important from the perspective of this book:
* Workspace files are those with a .Rdata file extension. This is the standard kind of file that R uses to store data and variables. They’re called “workspace files” because you can use them to save your whole workspace.
* Comma separated value (CSV) files are those with a .csv file extension. These are just regular old text files, and they can be opened with almost any software. It’s quite typical for people to store data in CSV files, precisely because they’re so simple.
* Script files are those with a .R file extension. These aren’t data files at all; rather, they’re used to save a collection of commands that you want R to execute later. They’re just text files, but we won’t make use of them until Chapter 8.
There are also several other types of file that R makes use of,53 but they’re not really all that central to our interests. There are also several other kinds of data file that you might want to import into R. For instance, you might want to open Microsoft Excel spreadsheets (.xlsx files), or data files that have been saved in the native file formats for other statistics software, such as SPSS, SAS, Minitab, Stata or Systat. Finally, you might have to handle databases. R tries hard to play nicely with other software, so it has tools that let you open and work with any of these and many others. I’ll discuss some of these other possibilities elsewhere in this book (Section 7.9), but for now I want to focus primarily on the two kinds of data file that you’re most likely to need: .Rdata files and .csv files. In this section I’ll talk about how to load a workspace file, how to import data from a CSV file, and how to save your workspace to a workspace file. Throughout this section I’ll first describe the (sometimes awkward) R commands that do all the work, and then I’ll show you the (much easier) way to do it using RStudio.
### 4.5.1 Loading workspace files using R
When I used the `list.files()` command to list the contents of the directory (in Section 4.4.2), the output referred to a file called booksales.Rdata. Let’s say I want to load the data from this file into my workspace. The way I do this is with the `load()` function. There are two arguments to this function, but the only one we’re interested in is

* `file`. This should be a character string that specifies a path to the file that needs to be loaded. You can use an absolute path or a relative path to do so.
Using the absolute file path, the command would look like this:
```
load( file = "/Users/dan/Rbook/data/booksales.Rdata" )
```
but this is pretty lengthy. Given that we changed the working directory at the end of Section 4.4.4, I could use a relative file path, like so:
```
load( file = "../data/booksales.Rdata" )
```
However, my preference is usually to change the working directory first, and then load the file. What that would look like is this:
```
setwd( "../data" ) # move to the data directory
load( "booksales.Rdata" ) # load the data
```
If I were then to type `who()` I’d see that there are several new variables in my workspace now. Throughout this book, whenever you see me loading a file, I will assume that the file is actually stored in the working directory, or that you’ve changed the working directory so that R is pointing at the directory that contains the file. Obviously, you don’t need to type that command yourself: you can use the RStudio file panel to do the work.
### 4.5.2 Loading workspace files using RStudio
Okay, so how do we open an .Rdata file using the RStudio file panel? It’s terribly simple. First, use the file panel to find the folder that contains the file you want to load. If you look at Figure 4.7, you can see that there are several .Rdata files listed. Let’s say I want to load the `booksales.Rdata` file. All I have to do is click on the file name. RStudio brings up a little dialog box asking me to confirm that I do want to load this file. I click yes. The following command then turns up in the console,
```
load("~/Rbook/data/booksales.Rdata")
```
and the new variables will appear in the workspace (you’ll see them in the Environment panel in RStudio, or if you type `who()` ). So easy it barely warrants having its own section.
### 4.5.3 Importing data from CSV files using R
One quite commonly used data format is the humble “comma separated value” file, also called a CSV file, and usually bearing the file extension .csv. CSV files are just plain old-fashioned text files, and what they store is basically just a table of data. This is illustrated in Figure 4.8, which shows a file called booksales.csv that I’ve created. As you can see, each column corresponds to a variable, and each row represents the book sales data for one month. The first row doesn’t contain actual data though: it has the names of the variables.
If RStudio were not available to you, the easiest way to open this file would be to use the `read.csv()` function.54 This function is pretty flexible, and I’ll talk a lot more about its capabilities in Section 7.9, but for now there are only two arguments to the function that I’ll mention:
* `file`. This should be a character string that specifies a path to the file that needs to be loaded. You can use an absolute path or a relative path to do so.
* `header`. This is a logical value indicating whether or not the first row of the file contains variable names. The default value is `TRUE`.
Therefore, to import the CSV file, the command I need is:
```
books <- read.csv( file = "booksales.csv" )
```
There are two very important points to notice here. Firstly, notice that I didn’t try to use the `load()` function, because that function is only meant to be used for .Rdata files. If you try to use `load()` on other types of data, you get an error. Secondly, notice that when I imported the CSV file I assigned the result to a variable, which I imaginatively called `books`.55 There’s a reason for this. The idea behind an `.Rdata` file is that it stores a whole workspace. So, if you had the ability to look inside the file yourself you’d see that the data file keeps track of all the variables and their names. So when you `load()` the file, R restores all those original names. CSV files are treated differently: as far as R is concerned, the CSV only stores one variable, but that variable is one big table. So when you import that table into the workspace, R expects you to give it a name. Let’s have a look at what we’ve got: `print( books )`
```
## Month Days Sales Stock.Levels
## 1 January 31 0 high
## 2 February 28 100 high
## 3 March 31 200 low
## 4 April 30 50 out
## 5 May 31 0 out
## 6 June 30 0 high
## 7 July 31 0 high
## 8 August 31 0 high
## 9 September 30 0 high
## 10 October 31 0 high
## 11 November 30 0 high
## 12 December 31 0 high
```
Clearly, it’s worked, but the format of this output is a bit unfamiliar. We haven’t seen anything like this before. What you’re looking at is a data frame, which is a very important kind of variable in R, and one I’ll discuss in Section 4.8. For now, let’s just be happy that we imported the data and that it looks about right.
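If you want to reassure yourself about what you’ve just imported, a couple of quick checks can help; `class()`, `nrow()` and `ncol()` are all standard R functions, nothing specific to this book:

```
class( books )   # confirms that read.csv() gave us a data frame
## [1] "data.frame"
nrow( books )    # number of rows: one per month
## [1] 12
ncol( books )    # number of columns: one per variable
## [1] 4
```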
### 4.5.4 Importing data from CSV files using RStudio
Yet again, it’s easier in RStudio. In the environment panel in RStudio you should see a button called “Import Dataset”. Click on that, and it will give you a couple of options: select the “From Text File…” option, and it will open up a very familiar dialog box asking you to select a file: if you’re on a Mac, it’ll look like the usual Finder window that you use to choose a file; on Windows it looks like an Explorer window. An example of what it looks like on a Mac is shown in Figure 4.9. I’m assuming that you’re familiar with your own computer, so you should have no problem finding the CSV file that you want to import! Find the one you want, then click on the “Open” button. When you do this, you’ll see a window that looks like the one in Figure 4.10.
The import data set window is relatively straightforward to understand.
In the top left corner, you need to type the name of the variable you want R to create. By default, that will be the same as the file name: our file is called `booksales.csv`, so RStudio suggests the name `booksales`. If you’re happy with that, leave it alone. If not, type something else. Immediately below this are a few things that you can tweak to make sure that the data gets imported correctly:
* Heading. Does the first row of the file contain raw data, or does it contain headings for each variable? The `booksales.csv` file has a header at the top, so I selected “yes”.
* Separator. What character is used to separate different entries? In most CSV files this will be a comma (it is “comma separated” after all). But you can change this if your file is different.
* Decimal. What character is used to specify the decimal point? In English speaking countries, this is almost always a period (i.e., `.`). That’s not universally true: many European countries use a comma. So you can change that if you need to.
* Quote. What character is used to denote a block of text? That’s usually going to be a double quote mark. It is for the `booksales.csv` file, so that’s what I selected.
The nice thing about the RStudio window is that it shows you the raw data file at the top of the window, and it shows you a preview of the data at the bottom. If the data at the bottom doesn’t look right, try changing some of the settings on the left hand side. Once you’re happy, click “Import”. When you do, two commands appear in the R console:
```
booksales <- read.csv("~/Rbook/data/booksales.csv")
View(booksales)
```
The first of these commands is the one that loads the data. The second one will display a pretty table showing the data in RStudio.
### 4.5.5 Saving a workspace file using R

Not surprisingly, saving data is very similar to loading data. Although RStudio provides a simple way to save files (see below), it’s worth understanding the actual commands involved. There are two commands you can use to do this, `save()` and `save.image()`. If you’re happy to save all of the variables in your workspace into the data file, then you should use `save.image()`. And if you’re happy for R to save the file into the current working directory, all you have to do is this:
```
save.image( file = "myfile.Rdata" )
```
Since `file` is the first argument, you can shorten this to
```
save.image("myfile.Rdata")
```
If you want to save to a different directory, then (as always) you need to be more explicit about specifying the path to the file, just as we discussed in Section 4.4. Suppose, however, I have several variables in my workspace, and I only want to save some of them. For instance, I might have this as my workspace:
```
who()
## -- Name -- -- Class -- -- Size --
## data data.frame 3 x 2
## handy character 1
## junk numeric 1
```
I want to save `data` and `handy` , but not `junk` . But I don’t want to delete `junk` right now, because I want to use it for something else later on. This is where the `save()` function is useful, since it lets me indicate exactly which variables I want to save. Here is one way I can use the `save` function to solve my problem:
```
save(data, handy, file = "myfile.Rdata")
```
Importantly, you must specify the name of the `file` argument. The reason is that if you don’t do so, R will think that `"myfile.Rdata"` is actually a variable that you want to save, and you’ll get an error message. Finally, I should mention a second way to specify which variables the `save()` function should save, which is to use the `list` argument. You do so like this:
```
save.me <- c("data", "handy") # the variables to be saved
save( file = "booksales2.Rdata", list = save.me ) # the command to save them
```
### 4.5.6 Saving a workspace file using RStudio
RStudio allows you to save the workspace pretty easily. In the environment panel (Figures 4.5 and 4.6) you can see the “save” button. There’s no text, but it’s the same icon that gets used on every computer everywhere: it’s the one that looks like a floppy disk. You know, those things that haven’t been used in about 20 years. Alternatively, go to the “Session” menu and click on the “Save Workspace As…” option.56 This will bring up the standard “save” dialog box for your operating system (e.g., on a Mac it’ll look a little bit like the loading dialog box in Figure 4.9). Type in the name of the file that you want to save it to, and all the variables in your workspace will be saved to disk. You’ll see an R command like this one
```
save.image("~/Desktop/Untitled.RData")
```
Pretty straightforward, really.
### 4.5.7 Other things you might want to save
Until now, we’ve talked mostly about loading and saving data. Other things you might want to save include:
* The output. Sometimes you might also want to keep a copy of all your interactions with R, including everything that you typed in and everything that R did in response. There are some functions that you can use to get R to write its output to a file rather than to print onscreen (e.g., `sink()`), but to be honest, if you do want to save the R output, the easiest thing to do is to use the mouse to select the relevant text in the R console, go to the “Edit” menu in RStudio and select “Copy”. The output has now been copied to the clipboard. Now open up your favourite text editor or word processing software, and paste it. And you’re done. However, this will only save the contents of the console, not the plots you’ve drawn (assuming you’ve drawn some). We’ll talk about saving images later on.
* A script. While it is possible – and sometimes handy – to save the R output as a method for keeping a copy of your statistical analyses, another option that people use a lot (especially when you move beyond simple “toy” analyses) is to write scripts. A script is a text file in which you write out all the commands that you want R to run. You can write your script using whatever software you like. In real world data analysis writing scripts is a key skill – and as you become familiar with R you’ll probably find that most of what you do involves scripting rather than typing commands at the R prompt. However, you won’t need to do much scripting initially, so we’ll leave that until Chapter 8.
## 4.6 Useful things to know about variables
In Chapter 3 I talked a lot about variables, how they’re assigned and some of the things you can do with them, but there are a lot of additional complexities. That’s not a surprise of course. However, some of those issues are worth drawing your attention to now. So that’s the goal of this section: to cover a few extra topics. As a consequence, this section is basically a bunch of things that I want to briefly mention, but don’t really fit in anywhere else. In short, I’ll talk about several different issues in this section, which are only loosely connected to one another.
### 4.6.1 Special values
The first thing I want to mention are some of the “special” values that you might see R produce. Most likely you’ll see them in situations where you were expecting a number, but there are quite a few other ways you can encounter them. These values are `Inf` , `NaN` , `NA` and `NULL` . These values can crop up in various different places, and so it’s important to understand what they mean.
* Infinity (`Inf`). The easiest of the special values to explain is `Inf`, since it corresponds to a value that is infinitely large. You can also have `-Inf`. The easiest way to get `Inf` is to divide a positive number by 0: `1 / 0` gives `## [1] Inf`. In most real world data analysis situations, if you’re ending up with infinite numbers in your data, then something has gone awry. Hopefully you’ll never have to see them.
* Not a Number (`NaN`). The special value of `NaN` is short for “not a number”, and it’s basically a reserved keyword that means “there isn’t a mathematically defined number for this”. If you can remember your high school maths, remember that it is conventional to say that \(0/0\) doesn’t have a proper answer: mathematicians would say that \(0/0\) is undefined. R says that it’s not a number: `0 / 0` gives `## [1] NaN`. Nevertheless, it’s still treated as a “numeric” value. To oversimplify, `NaN` corresponds to cases where you asked a proper numerical question that genuinely has no meaningful answer.
* Not available (`NA`). `NA` indicates that the value that is “supposed” to be stored here is missing. To understand what this means, it helps to recognise that the `NA` value is something that you’re most likely to see when analysing data from real world experiments. Sometimes you get equipment failures, or you lose some of the data, or whatever. The point is that some of the information that you were “expecting” to get from your study is just plain missing. Note the difference between `NA` and `NaN`. For `NaN`, we really do know what’s supposed to be stored; it’s just that it happens to correspond to something like \(0/0\) that doesn’t make any sense at all. In contrast, `NA` indicates that we actually don’t know what was supposed to be there. The information is missing.
* No value (`NULL`). The `NULL` value takes this “absence” concept even further. It basically asserts that the variable genuinely has no value whatsoever. This is quite different to both `NaN` and `NA`. For `NaN` we actually know what the value is, because it’s something insane like \(0/0\). For `NA`, we believe that there is supposed to be a value “out there”, but a dog ate our homework and so we don’t quite know what it is. But for `NULL` we strongly believe that there is no value at all. A short sketch after this list shows how to test for each of these values.
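To make the distinctions a bit more concrete, here’s a small sketch showing how you can test for each of these special values; `is.na()`, `is.nan()` and `is.null()` are all base R functions:

```
x <- c( 1, NA, NaN )
is.na( x )      # TRUE for both NA and NaN: both count as "missing"
## [1] FALSE  TRUE  TRUE
is.nan( x )     # TRUE only for NaN
## [1] FALSE FALSE  TRUE
y <- NULL
is.null( y )    # TRUE: y has no value at all
## [1] TRUE
length( y )     # and a NULL object has length zero
## [1] 0
```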
### 4.6.2 Assigning names to vector elements
One thing that is sometimes a little unsatisfying about the way that R prints out a vector is that the elements come out unlabelled. Here’s what I mean. Suppose I’ve got data reporting the quarterly profits for some company. If I just create a no-frills vector, I have to rely on memory to know which element corresponds to which event. That is:
```
profit <- c( 3.1, 0.1, -1.4, 1.1 )
profit
```
```
## [1] 3.1 0.1 -1.4 1.1
```
You can probably guess that the first element corresponds to the first quarter, the second element to the second quarter, and so on, but that’s only because I’ve told you the back story and because this happens to be a very simple example. In general, it can be quite difficult. This is where it can be helpful to assign `names` to each of the elements. Here’s how you do it:
```
names(profit) <- c("Q1","Q2","Q3","Q4")
profit
```
This is a slightly odd looking command, admittedly, but it’s not too difficult to follow. All we’re doing is assigning a vector of labels (character strings) to `names(profit)` . You can always delete the names again by using the command
```
names(profit) <- NULL
```
. It’s also worth noting that you don’t have to do this as a two stage process. You can get the same result with this command:
```
profit <- c( "Q1" = 3.1, "Q2" = 0.1, "Q3" = -1.4, "Q4" = 1.1 )
profit
```
The important things to notice are that (a) this does make things much easier to read, but (b) the names at the top aren’t the “real” data. The value of `profit[1]` is still `3.1` ; all I’ve done is added a name to `profit[1]` as well. Nevertheless, names aren’t purely cosmetic, since R allows you to pull out particular elements of the vector by referring to their names: `profit["Q1"]`
```
## Q1
## 3.1
```
And if I ever need to pull out the names themselves, then I just type `names(profit)` .
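Names also come in handy if you want to pull out several elements at once. A quick sketch, using the same `profit` vector:

```
profit[ c("Q1","Q4") ]   # extract the first and last quarters by name
##  Q1  Q4
## 3.1 1.1
```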
### 4.6.3 Variable classes
As we’ve seen, R allows you to store different kinds of data. In particular, the variables we’ve defined so far have either been character data (text), numeric data, or logical data.57 It’s important that we remember what kind of information each variable stores (and even more important that R remembers) since different kinds of variables allow you to do different things to them. For instance, if your variables have numerical information in them, then it’s okay to multiply them together:
```
x <- 5 # x is numeric
y <- 4 # y is numeric
x * y
```
`## [1] 20`
But if they contain character data, multiplication makes no sense whatsoever, and R will complain if you try to do it:
```
x <- "apples" # x is character
y <- "oranges" # y is character
x * y
```
Even R is smart enough to know you can’t multiply `"apples"` by `"oranges"` . It knows this because the quote marks are indicators that the variable is supposed to be treated as text, not as a number. This is quite useful, but notice that it means that R makes a big distinction between `5` and `"5"` . Without quote marks, R treats `5` as the number five, and will allow you to do calculations with it. With the quote marks, R treats `"5"` as the textual character five, and doesn’t recognise it as a number any more than it recognises `"p"` or `"five"` as numbers. As a consequence, there’s a big difference between typing `x <- 5` and typing `x <- "5"` . In the former, we’re storing the number `5` ; in the latter, we’re storing the character `"5"` . Thus, if we try to do multiplication with the character versions, R gets stroppy:
```
x <- "5" # x is character
y <- "4" # y is character
x * y
```
Okay, let’s suppose that I’ve forgotten what kind of data I stored in the variable `x` (which happens depressingly often). R provides a function that will let us find out. Or, more precisely, it provides three functions: `class()` , `mode()` and `typeof()` . Why the heck does it provide three functions, you might be wondering? Basically, because R actually keeps track of three different kinds of information about a variable:
* The class of a variable is a “high level” classification, and it captures psychologically (or statistically) meaningful distinctions. For instance `"2011-09-12"` and `"my birthday"` are both text strings, but there’s an important difference between the two: one of them is a date. So it would be nice if we could get R to recognise that `"2011-09-12"` is a date, and allow us to do things like add or subtract from it. The class of a variable is what R uses to keep track of things like that. Because the class of a variable is critical for determining what R can or can’t do with it, the `class()` function is very handy.
* The mode of a variable refers to the format of the information that the variable stores. It tells you whether R has stored text data or numeric data, for instance, which is kind of useful, but it only makes these “simple” distinctions. It can be useful to know about, but it’s not the main thing we care about. So I’m not going to use the `mode()` function very much.58
* The type of a variable is a very low level classification. We won’t use it in this book, but (for those of you that care about these details) this is where you can see the distinction between integer data, double precision numeric, etc. Almost none of you actually will care about this, so I’m not even going to bother demonstrating the `typeof()` function.

For our purposes, it’s the `class()` of the variable that we care most about. Later on, I’ll talk a bit about how you can convince R to “coerce” a variable to change from one class to another (Section 7.10). That’s a useful skill for real world data analysis, but it’s not something that we need right now. In the meantime, the following examples illustrate the use of the `class()` function:
```
x <- "hello world" # x is text
class(x)
```
`## [1] "character"`
```
x <- TRUE # x is logical
class(x)
```
`## [1] "logical"`
```
x <- 100 # x is a number
class(x)
```
`## [1] "numeric"`
Exciting, no?
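Purely for the curious, here’s a tiny sketch contrasting the three functions on the same value; nothing later in the book depends on it, but it shows why `class()` is the one we care about:

```
x <- 100
class( x )    # the "high level" classification
## [1] "numeric"
mode( x )     # the storage format of the information
## [1] "numeric"
typeof( x )   # the low level type: ordinary numbers are stored as doubles
## [1] "double"
```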
## 4.7 Factors
Okay, it’s time to start introducing some of the data types that are somewhat more specific to statistics. If you remember back to Chapter 2, when we assign numbers to possible outcomes, these numbers can mean quite different things depending on what kind of variable we are attempting to measure. In particular, we commonly make the distinction between nominal, ordinal, interval and ratio scale data. How do we capture this distinction in R? Currently, we only seem to have a single numeric data type. That’s probably not going to be enough, is it?
A little thought suggests that the numeric variable class in R is perfectly suited for capturing ratio scale data. For instance, if I were to measure response time (RT) for five different events, I could store the data in R like this:
```
RT <- c(342, 401, 590, 391, 554)
```
where the data here are measured in milliseconds, as is conventional in the psychological literature. It’s perfectly sensible to talk about “twice the response time”, \(2 \times \mbox{RT}\), or the “response time plus 1 second”, \(\mbox{RT} + 1000\), and so both of the following are perfectly reasonable things for R to do:
`2 * RT`
```
## [1] 684 802 1180 782 1108
```
`RT + 1000`
```
## [1] 1342 1401 1590 1391 1554
```
And to a lesser extent, the “numeric” class is okay for interval scale data, as long as we remember that multiplication and division aren’t terribly interesting for these sorts of variables. That is, if my IQ score is 110 and yours is 120, it’s perfectly okay to say that you’re 10 IQ points smarter than me59, but it’s not okay to say that I’m only 92% as smart as you are, because intelligence doesn’t have a natural zero.60 We might even be willing to tolerate the use of numeric variables to represent ordinal scale variables, such as those that you typically get when you ask people to rank order items (e.g., like we do in Australian elections), though as we will see R actually has a built in tool for representing ordinal data (see Section 7.11.2). However, when it comes to nominal scale data, it becomes completely unacceptable, because almost all of the “usual” rules for what you’re allowed to do with numbers don’t apply to nominal scale data. It is for this reason that R has factors.
### 4.7.1 Introducing factors
Suppose I was doing a study in which people could belong to one of three different treatment conditions. Each group of people was asked to complete the same task, but each group received different instructions. Not surprisingly, I might want to have a variable that keeps track of what group people were in. So I could type in something like this
```
group <- c(1,1,1,2,2,2,3,3,3)
```
so that `group[i]` contains the group membership of the `i` -th person in my study. Clearly, this is numeric data, but equally obviously this is a nominal scale variable. There’s no sense in which “group 1” plus “group 2” equals “group 3”, but nevertheless if I try to do that, R won’t stop me because it doesn’t know any better: `group + 2`
```
## [1] 3 3 3 4 4 4 5 5 5
```
Apparently R seems to think that it’s allowed to invent “group 4” and “group 5”, even though they didn’t actually exist. Unfortunately, R is too stupid to know any better: it thinks that `3` is an ordinary number in this context, so it sees no problem in calculating `3 + 2` . But since we’re not that stupid, we’d like to stop R from doing this. We can do so by instructing R to treat `group` as a factor. This is easy to do using the `as.factor()` function.61
```
group <- as.factor(group)
group
```
It looks more or less the same as before (though it’s not immediately obvious what all that `Levels` rubbish is about), but if we ask R to tell us what the class of the `group` variable is now, it’s clear that it has done what we asked: `class(group)` `## [1] "factor"` Neat. Better yet, now that I’ve converted `group` to a factor, look what happens when I try to add 2 to it: `group + 2`
```
## Warning in Ops.factor(group, 2): '+' not meaningful for factors
```
```
## [1] NA NA NA NA NA NA NA NA NA
```
This time even R is smart enough to know that I’m being an idiot, so it tells me off and then produces a vector of missing values (i.e., `NA`; see Section 4.6.1).
### 4.7.2 Labelling the factor levels
I have a confession to make. My memory is not infinite in capacity; and it seems to be getting worse as I get older. So it kind of annoys me when I get data sets where there’s a nominal scale variable called `gender` , with two levels corresponding to males and females. But when I go to print out the variable I get something like this: `gender`
```
## [1] 1 1 1 1 1 2 2 2 2
## Levels: 1 2
```
Okaaaay. That’s not helpful at all, and it makes me very sad. Which number corresponds to the males and which one corresponds to the females? Wouldn’t it be nice if R could actually keep track of this? It’s way too hard to remember which number corresponds to which gender. To fix this problem what we need to do is assign meaningful labels to the different levels of each factor. We can do that like this:
```
levels(group) <- c("group 1", "group 2", "group 3")
print(group)
```
```
levels(gender) <- c("male", "female")
print(gender)
```
```
## [1] male male male male male female female female female
## Levels: male female
```
That’s much easier on the eye.
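As an aside, you can also create the factor and attach the labels in a single step using the `factor()` function; here’s a sketch of the same `gender` example done that way:

```
gender <- factor( c(1,1,1,1,1,2,2,2,2),          # the raw numeric codes
                  levels = c(1,2),               # the values that appear in the data
                  labels = c("male","female") )  # what each value should be called
gender
## [1] male male male male male female female female female
## Levels: male female
```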
### 4.7.3 Moving on…
Factors are very useful things, and we’ll use them a lot in this book: they’re the main way to represent a nominal scale variable. And there are lots of nominal scale variables out there. I’ll talk more about factors in Section 7.11.2, but for now you know enough to be able to get started.
## 4.8 Data frames
It’s now time to go back and deal with the somewhat confusing thing that happened in Section 4.5.3 when we tried to open up a CSV file. Apparently we succeeded in loading the data, but it came to us in a very odd looking format. At the time, I told you that this was a data frame. Now I’d better explain what that means.
### 4.8.1 Introducing data frames
In order to understand why R has created this funny thing called a data frame, it helps to try to see what problem it solves. So let’s go back to the little scenario that I used when introducing factors in Section 4.7. In that section I recorded the `group` and `gender` for all 9 participants in my study. Let’s also suppose I recorded their ages and their `score` on “Dan’s Terribly Exciting Psychological Test”:
```
age <- c(17, 19, 21, 37, 18, 19, 47, 18, 19)
score <- c(12, 10, 11, 15, 16, 14, 25, 21, 29)
```
Assuming no other variables are in the workspace, if I type `who()` I get this: `who()`
So there are four variables in the workspace, `age` , `gender` , `group` and `score` . And it just so happens that all four of them are the same size (i.e., they’re all vectors with 9 elements). Aaaand it just so happens that `age[1]` corresponds to the age of the first person, and `gender[1]` is the gender of that very same person, etc. In other words, you and I both know that all four of these variables correspond to the same data set, and all four of them are organised in exactly the same way. However, R doesn’t know this! As far as it’s concerned, there’s no reason why the `age` variable has to be the same length as the `gender` variable; and there’s no particular reason to think that `age[1]` has any special relationship to `gender[1]` any more than it has a special relationship to `gender[4]` . In other words, when we store everything in separate variables like this, R doesn’t know anything about the relationships between things. It doesn’t even really know that these variables actually refer to a proper data set. The data frame fixes this: if we store our variables inside a data frame, we’re telling R to treat these variables as a single, fairly coherent data set. To see how they do this, let’s create one. So how do we create a data frame? One way we’ve already seen: if we import our data from a CSV file, R will store it as a data frame. A second way is to create it directly from some existing variables using the `data.frame()` function. All you have to do is type a list of variables that you want to include in the data frame. The output of a `data.frame()` command is, well, a data frame. So, if I want to store all four variables from my experiment in a data frame called `expt` I can do so like this:
```
expt <- data.frame ( age, gender, group, score )
expt
```
```
## age gender group score
## 1 17 male group 1 12
## 2 19 male group 1 10
## 3 21 male group 1 11
## 4 37 male group 2 15
## 5 18 male group 2 16
## 6 19 female group 2 14
## 7 47 female group 3 25
## 8 18 female group 3 21
## 9 19 female group 3 29
```
Note that `expt` is a completely self-contained variable. Once you’ve created it, it no longer depends on the original variables from which it was constructed. That is, if we make changes to the original `age` variable, it will not lead to any changes to the age data stored in `expt` .
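To see what I mean, here’s a tiny sketch: changing the original `age` vector leaves the copy stored inside `expt` untouched (the `expt$age` notation for reaching inside a data frame is explained in the very next section).

```
age[1] <- 99    # change the first element of the original vector
age[1]
## [1] 99
expt$age[1]     # the copy inside the data frame still has the original value
## [1] 17
```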
### 4.8.2 Pulling out the contents of the data frame using `$`

At this point, our workspace contains only the one variable, a data frame called `expt`. But as we can see when we told R to print the variable out, this data frame contains 4 variables, each of which has 9 observations. So how do we get this information out again? After all, there’s no point in storing information if you don’t use it, and there’s no way to use information if you can’t access it. So let’s talk a bit about how to pull information out of a data frame. The first thing we might want to do is pull out one of our stored variables, let’s say `score`. One thing you might try to do is ignore the fact that `score` is locked up inside the `expt` data frame. For instance, you might try to print it out like this: `score`
```
## Error in eval(expr, envir, enclos): object 'score' not found
```
This doesn’t work, because R doesn’t go “peeking” inside the data frame unless you explicitly tell it to do so. There’s actually a very good reason for this, which I’ll explain in a moment, but for now let’s just assume R knows what it’s doing. How do we tell R to look inside the data frame? As is always the case with R there are several ways. The simplest way is to use the `$` operator to extract the variable you’re interested in, like this: `expt$score`
```
## [1] 12 10 11 15 16 14 25 21 29
```
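As an aside, `$` isn’t the only way to get at a variable inside a data frame: square bracket indexing by name does the same job here. A quick sketch:

```
expt[[ "score" ]]    # double brackets return the score variable itself
## [1] 12 10 11 15 16 14 25 21 29
expt[ , "score" ]    # row/column style indexing works too
## [1] 12 10 11 15 16 14 25 21 29
```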
### 4.8.3 Getting information about a data frame
One problem that sometimes comes up in practice is that you forget what you called all your variables. Normally you might try to type `objects()` or `who()` , but neither of those commands will tell you what the names are for those variables inside a data frame! One way is to ask R to tell you what the names of all the variables stored in the data frame are, which you can do using the `names()` function: `names(expt)`
```
## [1] "age" "gender" "group" "score"
```
An alternative method is to use the `who()` function, as long as you tell it to look at the variables inside data frames. If you set `expand = TRUE` then it will not only list the variables in the workspace, but it will “expand” any data frames that you’ve got in the workspace, so that you can see what they look like. That is: `who(expand = TRUE)`
```
## -- Name -- -- Class -- -- Size --
## a numeric 1
## addArrow function
## addDistPlot function
## afl data.frame 4296 x 12
## $home.team factor 4296
## $away.team factor 4296
## $home.score numeric 4296
## $away.score numeric 4296
## $year numeric 4296
## $round numeric 4296
## $weekday factor 4296
## $day numeric 4296
## $month numeric 4296
## $is.final logical 4296
## $venue factor 4296
## $attendance numeric 4296
## afl.finalists factor 400
## afl.margins numeric 176
## afl2 data.frame 4296 x 2
## $margin numeric 4296
## $year numeric 4296
## age.breaks numeric 4
## age.group factor 11
## age.group2 factor 11
## age.group3 factor 11
## age.labels character 3
## agpp data.frame 100 x 3
## $id factor 100
## $response_before factor 100
## $response_after factor 100
## animals character 4
## anova.model aov 13
## anovaImg list 0
## any.sales.this.month logical 12
## awesome data.frame 10 x 2
## $scores numeric 10
## $group factor 10
## b numeric 1
## bad.coef numeric 2
## balance numeric 1
## beers character 3
## berkeley data.frame 39 x 3
## $women.apply numeric 39
## $total.admit numeric 39
## $number.apply numeric 39
## berkeley.small data.frame 46 x 2
## $women.apply numeric 46
## $total.admit numeric 46
## binomPlot function
## bw numeric 1
## cake.1 numeric 5
## cake.2 numeric 5
## cake.df data.frame 5 x 2
## $cake.1 numeric 5
## $cake.2 numeric 5
## cake.mat1 matrix 5 x 2
## cake.mat2 matrix 2 x 5
## cakes matrix 4 x 5
## cakes.flipped matrix 5 x 4
## cardChoices xtabs 4 x 4
## cards data.frame 200 x 3
## $id factor 200
## $choice_1 factor 200
## $choice_2 factor 200
## chapek9 data.frame 180 x 2
## $species factor 180
## $choice factor 180
## chapekFrequencies xtabs 3 x 2
## chi.sq.20 numeric 1000
## chi.sq.3 numeric 1000
## chico data.frame 20 x 3
## $id factor 20
## $grade_test1 numeric 20
## $grade_test2 numeric 20
## chico2 data.frame 40 x 4
## $id factor 40
## $improvement numeric 40
## $time factor 40
## $grade numeric 40
## chico3 data.frame 20 x 4
## $id factor 20
## $grade_test1 numeric 20
## $grade_test2 numeric 20
## $improvement numeric 20
## chiSqImg list 0
## choice data.frame 4 x 10
## $id integer 4
## $gender factor 4
## $MRT/block1/day1 numeric 4
## $MRT/block1/day2 numeric 4
## $MRT/block2/day1 numeric 4
## $MRT/block2/day2 numeric 4
## $PC/block1/day1 numeric 4
## $PC/block1/day2 numeric 4
## $PC/block2/day1 numeric 4
## $PC/block2/day2 numeric 4
## choice.2 data.frame 16 x 6
## $id integer 16
## $gender factor 16
## $MRT numeric 16
## $PC numeric 16
## $block factor 16
## $day factor 16
## clin.trial data.frame 18 x 3
## $drug factor 18
## $therapy factor 18
## $mood.gain numeric 18
## clin.trial.2 data.frame 18 x 5
## $druganxifree numeric 18
## $drugjoyzepam numeric 18
## $therapyCBT numeric 18
## $mood.gain numeric 18
## $druganxifree numeric 18
## coef numeric 2
## coffee data.frame 18 x 3
## $milk factor 18
## $sugar factor 18
## $babble numeric 18
## colour logical 1
## crit numeric 1
## crit.hi numeric 1
## crit.lo numeric 1
## crit.val numeric 1
## crosstab xtabs 2 x 3
## d.cor numeric 1
## dan.awake logical 10
## data data.frame 12 x 4
## $V1 factor 12
## $V2 integer 12
## $V3 integer 12
## $V4 factor 12
## day character 1
## days.per.month numeric 12
## def.par list 66
## describeImg list 0
## dev.from.gp.means array 18
## dev.from.grand.mean array 3
## df numeric 1
## doubleMax function
## drawBasicScatterplot function
## drug.anova aov 13
## drug.lm lm 13
## drug.means numeric 3
## drug.regression lm 12
## druganxifree numeric 18
## drugs data.frame 10 x 8
## $id factor 10
## $gender factor 10
## $WMC_alcohol numeric 10
## $WMC_caffeine numeric 10
## $WMC_no.drug numeric 10
## $RT_alcohol numeric 10
## $RT_caffeine numeric 10
## $RT_no.drug numeric 10
## drugs.2 data.frame 30 x 5
## $id factor 30
## $gender factor 30
## $drug factor 30
## $WMC numeric 30
## $RT numeric 30
## eff eff 22
## effort data.frame 10 x 2
## $hours numeric 10
## $grade numeric 10
## emphCol character 1
## emphColLight character 1
## emphGrey character 1
## eps logical 1
## es matrix 4 x 7
## estImg list 0
## eta.squared numeric 1
## eventNames character 5
## expected numeric 4
## expt data.frame 9 x 4
## $age numeric 9
## $gender factor 9
## $group factor 9
## $score numeric 9
## f table 14
## F.3.20 numeric 1000
## F.stat numeric 1
## fac factor 3
## february.sales numeric 1
## fibonacci numeric 6
## Fibonacci numeric 7
## fileName character 1
## freq integer 17
## full.model lm 12
## G factor 18
## garden data.frame 5 x 3
## $speaker factor 5
## $utterance factor 5
## $line numeric 5
## generateRLineTypes function
## generateRPointShapes function
## good.coef numeric 2
## gp.mean array 3
## gp.means array 3
## gp.sizes array 3
## grades numeric 20
## grand.mean numeric 1
## greeting character 1
## h numeric 1
## happiness data.frame 10 x 3
## $before numeric 10
## $after numeric 10
## $change numeric 10
## harpo data.frame 33 x 2
## $grade numeric 33
## $tutor factor 33
## heavy.tailed.data numeric 100
## height numeric 1
## hw character 2
## i integer 1
## interest numeric 1
## IQ numeric 10000
## is.MP.speaking logical 5
## is.the.Party.correct table 14
## itng data.frame 10 x 2
## $speaker factor 10
## $utterance factor 10
## itng.table table 3 x 4
## likert.centred numeric 10
## likert.ordinal ordered 10
## likert.raw numeric 10
## lower.area numeric 1
## m numeric 1
## M matrix 2 x 3
## M0 lm 12
## M1 lm 12
## makka.pakka character 4
## max.val numeric 1
## mod lm 13
## mod.1 lm 11
## mod.2 lm 13
## mod.3 lm 13
## mod.4 lm 13
## mod.H lm 13
## mod.R lm 13
## model aov 13
## model.1 aov 13
## model.2 aov 13
## model.3 aov 13
## models BFBayesFactor
## monkey character 1
## monkey.1 list 1
## month numeric 1
## monthly.multiplier numeric 1
## months character 12
## ms.diff numeric 1
## ms.res numeric 1
## msg character 1
## mu numeric 3
## mu.null numeric 1
## my.anova aov 13
## my.anova.residuals numeric 18
## my.contrasts list 2
## my.var numeric 1
## n numeric 1
## N integer 1
## ng character 2
## nhstImg list 0
## nodrug.regression lm 12
## normal.a numeric 1000
## normal.b numeric 1000
## normal.c numeric 1000
## normal.d numeric 1000
## normal.data numeric 100
## null.model lm 11
## nullProbs numeric 4
## numbers numeric 3
## observed table 4
## old list 66
## old.text character 1
## old_par list 72
## oneCorPlot function
## opinion.dir numeric 10
## opinion.strength numeric 10
## out.0 data.frame 100 x 2
## $V1 numeric 100
## $V2 numeric 100
## out.1 data.frame 100 x 2
## $V1 numeric 100
## $V2 numeric 100
## out.2 data.frame 100 x 2
## $V1 numeric 100
## $V2 numeric 100
## outcome numeric 18
## p.value numeric 1
## parenthood data.frame 100 x 4
## $dan.sleep numeric 100
## $baby.sleep numeric 100
## $dan.grump numeric 100
## $day integer 100
## payments numeric 1
## PJ character 1
## plotHist function
## plotOne function
## plotSamples function
## plotTwo function
## pow numeric 100
## probabilities numeric 5
## projecthome character 1
## quadruple function
## r numeric 1
## R.squared numeric 1
## random.contrasts matrix 3 x 2
## regression.1 lm 12
## regression.2 lm 12
## regression.3 lm 12
## regression.model lm 12
## regression.model.2 lm 13
## regression.model.3 lm 13
## regressionImg list 0
## resid numeric 18
## revenue numeric 1
## right.table xtabs 2 x 2
## row.1 numeric 3
## row.2 numeric 3
## royalty numeric 1
## rtfm.1 data.frame 8 x 3
## $grade numeric 8
## $attend numeric 8
## $reading numeric 8
## rtfm.2 data.frame 8 x 3
## $grade numeric 8
## $attend factor 8
## $reading factor 8
## rtfm.3 data.frame 8 x 3
## $grade numeric 8
## $attend factor 8
## $reading factor 8
## s numeric 1
## [ reached getOption("max.print") -- omitted 85 rows ]
```
or, since `expand` is the first argument in the `who()` function you can just type `who(TRUE)` . I’ll do that a lot in this book.
### 4.8.4 Looking for more on data frames?
There’s a lot more that can be said about data frames: they’re fairly complicated beasts, and the longer you use R the more important it is to make sure you really understand them. We’ll talk a lot more about them in Chapter 7.
## 4.9 Lists
The next kind of data I want to mention are lists. Lists are an extremely fundamental data structure in R, and as you start making the transition from a novice to a savvy R user you will use lists all the time. I don’t use lists very often in this book – not directly – but most of the advanced data structures in R are built from lists (e.g., data frames are actually a specific type of list). Because lists are so important to how R stores things, it’s useful to have a basic understanding of them. Okay, so what is a list, exactly? Like data frames, lists are just “collections of variables.” However, unlike data frames – which are basically supposed to look like a nice “rectangular” table of data – there are no constraints on what kinds of variables we include, and no requirement that the variables have any particular relationship to one another. In order to understand what this actually means, the best thing to do is create a list, which we can do using the `list()` function. If I type this as my command:
```
Dan <- list( age = 34,
nerd = TRUE,
parents = c("Joe","Liz")
)
```
R creates a new list variable called `Dan`, which is a bundle of three different variables: `age`, `nerd` and `parents`. Notice that the `parents` variable is longer than the others. This is perfectly acceptable for a list, but it wouldn’t be for a data frame. If we now print out the variable, you can see the way that R stores the list: `print( Dan )`
```
## $age
## [1] 34
##
## $nerd
## [1] TRUE
##
## $parents
## [1] "Joe" "Liz"
```
As you might have guessed from those `$` symbols everywhere, the variables are stored in exactly the same way that they are for a data frame (again, this is not surprising: data frames are a type of list). So you will (I hope) be entirely unsurprised and probably quite bored when I tell you that you can extract the variables from the list using the `$` operator, like so: `Dan$nerd` `## [1] TRUE` If you need to add new entries to the list, the easiest way to do so is to again use `$` , as the following example illustrates. If I type a command like this
```
Dan$children <- "Alex"
```
then R creates a new entry to the end of the list called `children` , and assigns it a value of `"Alex"` . If I were now to `print()` this list out, you’d see a new entry at the bottom of the printout. Finally, it’s actually possible for lists to contain other lists, so it’s quite possible that I would end up using a command like `Dan$children$age` to find out how old my son is. Or I could try to remember it myself I suppose.
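To see what that nesting looks like in practice, here’s a tiny sketch. The details about Alex are invented for the example, and it overwrites the simple `children` entry created above with a small list of its own:
```
Dan$children <- list( name = "Alex", age = 5 )   # a list stored inside a list
Dan$children$age                                 # two $ signs drill down two levels
```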
## 4.10 Formulas
The last kind of variable that I want to introduce before finally being able to start talking about statistics is the formula. Formulas were originally introduced into R as a convenient way to specify a particular type of statistical model (see Chapter 15) but they’re such handy things that they’ve spread. Formulas are now used in a lot of different contexts, so it makes sense to introduce them early.
Stated simply, a formula object is a variable, but it’s a special type of variable that specifies a relationship between other variables. A formula is specified using the “tilde operator” `~` . A very simple example of a formula is shown below:62
```
formula1 <- out ~ pred
formula1
```
`## out ~ pred` The precise meaning of this formula depends on exactly what you want to do with it, but in broad terms it means “the `out` (outcome) variable, analysed in terms of the `pred` (predictor) variable”. That said, although the simplest and most common form of a formula uses the “one variable on the left, one variable on the right” format, there are others. For instance, the following examples are all reasonably common
```
formula2 <- out ~ pred1 + pred2 # more than one variable on the right
formula3 <- out ~ pred1 * pred2 # different relationship between predictors
formula4 <- ~ var1 + var2 # a 'one-sided' formula
```
and there are many more variants besides. Formulas are pretty flexible things, and so different functions will make use of different formats, depending on what the function is intended to do.
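If you ever want to poke at a formula to see what R knows about it, a couple of harmless commands are worth trying. This is just an optional sketch using the formulas defined above; nothing later in the book depends on it:
```
class( formula1 )     # formulas are a class of variable in their own right
all.vars( formula2 )  # the names of all the variables mentioned in the formula
```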
## 4.11 Generic functions
There’s one really important thing that I omitted when I discussed functions earlier on in Section 3.5, and that’s the concept of a generic function. The two most notable examples that you’ll see in the next few chapters are `summary()` and `plot()` , although you’ve already seen an example of one working behind the scenes, and that’s the `print()` function. The thing that makes generics different from the other functions is that their behaviour changes, often quite dramatically, depending on the `class()` of the input you give it. The easiest way to explain the concept is with an example. With that in mind, let’s take a closer look at what the `print()` function actually does. I’ll do this by creating a formula, and printing it out in a few different ways. First, let’s stick with what we know:
```
my.formula <- blah ~ blah.blah # create a variable of class "formula"
print( my.formula ) # print it out using the generic print() function
```
`## blah ~ blah.blah` So far, there’s nothing very surprising here. But there’s actually a lot going on behind the scenes here. When I type `print( my.formula )` , what actually happens is the `print()` function checks the class of the `my.formula` variable. When the function discovers that the variable it’s been given is a formula, it goes looking for a function called `print.formula()` , and then delegates the whole business of printing out the variable to the `print.formula()` function.63 For what it’s worth, the name for a “dedicated” function like `print.formula()` that exists only to be a special case of a generic function like `print()` is a method, and the name for the process in which the generic function passes off all the hard work onto a method is called method dispatch. You won’t need to understand the details at all for this book, but you do need to know the gist of it; if only because a lot of the functions we’ll use are actually generics. Anyway, to help expose a little more of the workings to you, let’s bypass the `print()` function entirely and call the formula method directly:
```
print.formula( my.formula ) # print it out using the print.formula() method
## Appears to be deprecated
```
There’s no difference in the output at all. But this shouldn’t surprise you because it was actually the `print.formula()` method that was doing all the hard work in the first place. The `print()` function itself is a lazy bastard that doesn’t do anything other than select which of the methods is going to do the actual printing. Okay, fair enough, but you might be wondering what would have happened if `print.formula()` didn’t exist? That is, what happens if there isn’t a specific method defined for the class of variable that you’re using? In that case, the generic function passes off the hard work to a “default” method, whose name in this case would be `print.default()` . Let’s see what happens if we bypass the `print()` function, and try to print out `my.formula` using the `print.default()` function:
```
print.default( my.formula ) # print it out using the print.default() method
```
```
## blah ~ blah.blah
## attr(,"class")
## [1] "formula"
## attr(,".Environment")
## <environment: R_GlobalEnv>
```
Hm. You can kind of see that it is trying to print out the same formula, but there’s a bunch of ugly low-level details that have also turned up on screen. This is because the `print.default()` method doesn’t know anything about formulas, and doesn’t know that it’s supposed to be hiding the obnoxious internal gibberish that R produces sometimes.
At this stage, this is about as much as we need to know about generic functions and their methods. In fact, you can get through the entire book without learning any more about them than this, so it’s probably a good idea to end this discussion here.
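If you’re curious to peek behind the curtain yourself, here’s an optional sketch of two commands you could try; neither is needed for anything later in the book:
```
class( my.formula )                # "formula": the class that print() inspects
getS3method( "print", "formula" )  # the method that print() hands the work to
```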
## 4.12 Getting help
The very last topic I want to mention in this chapter is where to go to find help. Obviously, I’ve tried to make this book as helpful as possible, but it’s not even close to being a comprehensive guide, and there’s thousands of things it doesn’t cover. So where should you go for help?
### 4.12.1 How to read the help documentation
I have somewhat mixed feelings about the help documentation in R. On the plus side, there’s a lot of it, and it’s very thorough. On the minus side, there’s a lot of it, and it’s very thorough. There’s so much help documentation that it sometimes doesn’t help, and most of it is written with an advanced user in mind. Often it feels like most of the help files work on the assumption that the reader already understands everything about R except for the specific topic that it’s providing help for. What that means is that, once you’ve been using R for a long time and are beginning to get a feel for how to use it, the help documentation is awesome. These days, I find myself really liking the help files (most of them anyway). But when I first started using R I found it very dense.
To some extent, there’s not much I can do to help you with this. You just have to work at it yourself; once you’re moving away from being a pure beginner and are becoming a skilled user, you’ll start finding the help documentation more and more helpful. In the meantime, I’ll help as much as I can by trying to explain to you what you’re looking at when you open a help file. To that end, let’s look at the help documentation for the `load()` function. To do so, I type either of the following:
```
?load
help("load")
```
When I do that, R goes looking for the help file for the “load” topic. If it finds one, Rstudio takes it and displays it in the help panel. Alternatively, you can try a fuzzy search for a help topic
```
??load
help.search("load")
```
Either way, let’s continue using the “`load`” topic as our example. Firstly, at the very top we see this:
load {base} | R Documentation |
| --- | --- |
# Reload Saved Datasets
# Description
Reload datasets written with the function `save` .
# Usage
> load(file, envir = parent.frame(), verbose = FALSE)
In this instance, the usage section is actually pretty readable. It’s telling you that the `load()` function takes three arguments: the first one is called `file` , the second one is called `envir` , and the third one is called `verbose` . It’s also telling you that there are default values for the `envir` and `verbose` arguments; so if the user doesn’t specify what the value of `envir` should be, then R will assume that `envir = parent.frame()` . In contrast, the `file` argument has no default value at all, so the user must specify a value for it. So in one sense, this section is very straightforward. The problem, of course, is that you don’t know what the `parent.frame()` function actually does, so it’s hard for you to know what the `envir = parent.frame()` bit is all about. What you could do is then go look up the help documents for the `parent.frame()` function (and sometimes that’s actually a good idea), but often you’ll find that the help documents for those functions are just as dense (if not more dense) than the help file that you’re currently reading. As an alternative, my general approach when faced with something like this is to skim over it and see if I can make any sense of it. If so, great. If not, I find that the best thing to do is ignore it. In fact, the first time I read the help file for the `load()` function, I had no idea what any of the `envir` related stuff was about. But fortunately I didn’t have to: the default setting here (i.e., `envir = parent.frame()` ) is actually the thing you want in about 99% of cases, so it’s safe to ignore it.
Basically, what I’m trying to say is: don’t let the scary, incomprehensible parts of the help file intimidate you. Especially because there’s often some parts of the help file that will make sense. Of course, I guarantee you that sometimes this strategy will lead you to make mistakes… often embarrassing mistakes. But it’s still better than getting paralysed with fear.
So, let’s continue on. The next part of the help documentation discusses each of the arguments, and what they’re supposed to do:
# Arguments
* `file`: a (readable binary-mode) connection or a character string giving the name of the file to load (when tilde expansion is done).
* `envir`: the environment where the data should be loaded.
* `verbose`: should item names be printed during loading?
Okay, so what this is telling us is that the `file` argument needs to be a string (i.e., text data) which tells R the name of the file to load. It also seems to be hinting that there’s other possibilities too (e.g., a “binary mode connection”), and you probably aren’t quite sure what “tilde expansion” means64. But overall, the meaning is pretty clear. Turning to the `envir` argument, it’s now a little clearer what the Usage section was babbling about. The `envir` argument specifies the name of an environment (see Section 4.3 if you’ve forgotten what environments are) into which R should place the variables when it loads the file. Almost always, this is a no-brainer: you want R to load the data into the same damn environment in which you’re invoking the `load()` command. That is, if you’re typing `load()` at the R prompt, then you want the data to be loaded into your workspace (i.e., the global environment). But if you’re writing your own function that needs to load some data, you want the data to be loaded inside that function’s private workspace. And in fact, that’s exactly what the `parent.frame()` thing is all about. It’s telling the `load()` function to send the data to the same place that the `load()` command itself was coming from. As it turns out, if we’d just ignored the envir bit we would have been totally safe. Which is nice to know.
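That said, if you ever do want to steer `load()` somewhere other than your workspace, the `envir` argument is the way to do it. Here’s a minimal sketch; the file name `myfile.Rdata` is made up purely for illustration:
```
sandbox <- new.env()                      # a fresh, empty environment
load( "myfile.Rdata", envir = sandbox )   # hypothetical file name
ls( sandbox )                             # the loaded variables live in sandbox...
ls()                                      # ...not in your workspace
```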
# Details
`load` can load R objects saved in the current or any earlier format. It can read a compressed file (see `save` ) directly from a file or from a suitable connection (including a call to `url` ). A not-open connection will be opened in mode `“rb”` and closed after use. Any connection other than a `gzfile` or `gzcon` connection will be wrapped in `gzcon` to allow compressed saves to be handled: note that this leaves the connection in an altered state (in particular, binary-only), and that it needs to be closed explicitly (it will not be garbage-collected).
Only R objects saved in the current format (used since R 1.4.0) can be read from a connection. If no input is available on a connection a warning will be given, but any input not in the current format will result in an error.
Loading from an earlier version will give a warning about the ‘magic number’: magic numbers `1971:1977` are from R < 0.99.0, and `RD[ABX]1` from R 0.99.0 to R 1.3.1. These are all obsolete, and you are strongly recommended to re-save such files in a current format. The `verbose` argument is mainly intended for debugging. If it is `TRUE` , then as objects from the file are loaded, their names will be printed to the console. If `verbose` is set to an integer value greater than one, additional names corresponding to attributes and other parts of individual objects will also be printed. Larger values will print names to a greater depth.
Objects can be saved with references to namespaces, usually as part of the environment of a function or formula. Such objects can be loaded even if the namespace is not available: it is replaced by a reference to the global environment with a warning. The warning identifies the first object with such a reference (but there may be more than one).
Then it tells you what the output value of the function is:
# Value
A character vector of the names of objects created, invisibly.
This is usually a bit more interesting, but since the `load()` function is mainly used to load variables into the workspace rather than to return a value, it’s no surprise that this doesn’t do much or say much. Moving on, we sometimes see a few additional sections in the help file, which can be different depending on what the function is:
# Warning
Saved R objects are binary files, even those saved with `ascii = TRUE` , so ensure that they are transferred without conversion of end of line markers. `load` tries to detect such a conversion and gives an informative error message. `load(<file>)` replaces all existing objects with the same names in the current environment (typically your workspace, `.GlobalEnv` ) and hence potentially overwrites important data. It is considerably safer to use `envir =` to load into a different environment, or to `attach(file)` which `load()` s into a new entry in the `search` path.
# Note
`file` can be a UTF-8-encoded filepath that cannot be translated to the current locale.
Yeah, yeah. Warning, warning, blah blah blah. Towards the bottom of the help file, we see something like this, which suggests a bunch of related topics that you might want to look at. These can be quite helpful:
# See Also
`save` , `download.file` ; further `attach` as wrapper for `load()` . For other interfaces to the underlying serialization format, see `unserialize` and `readRDS` . Finally, the help documentation usually finishes off with some example code, which you can see at the bottom of the “`load`” help file:
# Examples
```
## save all data
xx <- pi # to ensure there is some data
save(list = ls(all = TRUE), file= "all.rda")
rm(xx)
## restore the saved values to the current environment
local({
   load("all.rda")
   ls()
})
xx <- exp(1:3)
## restore the saved values to the user's workspace
load("all.rda") ## which is here *equivalent* to
## load("all.rda", .GlobalEnv)
## This however annihilates all objects in .GlobalEnv with the same names !
xx # no longer exp(1:3)
rm(xx)
attach("all.rda") # safer and will warn about masked objects w/ same name in .GlobalEnv
ls(pos = 2)
## also typically need to cleanup the search path:
detach("file:all.rda")
## clean up (the example):
unlink("all.rda")
## Not run:
con <- url("http://some.where.net/R/data/example.rda")
## print the value to see what objects were created.
print(load(con))
close(con) # url() always opens the connection
## End(Not run)
```
As you can see, they’re pretty dense, and not at all obvious to the novice user. However, they do provide good examples of the various different things that you can do with the `load()` function, so it’s not a bad idea to have a look at them, and to try not to find them too intimidating.
### 4.12.2 Other resources
* The Rseek website (www.rseek.org). One thing that I really find annoying about the R help documentation is that it’s hard to search properly. When coupled with the fact that the documentation is dense and highly technical, it’s often a better idea to search or ask online for answers to your questions. With that in mind, the Rseek website is great: it’s an R specific search engine. I find it really useful, and it’s almost always my first port of call when I’m looking around.
* The R-help mailing list (see http://www.r-project.org/mail.html for details). This is the official R help mailing list. It can be very helpful, but it’s very important that you do your homework before posting a question. The list gets a lot of traffic. While the people on the list try as hard as they can to answer questions, they do so for free, and you really don’t want to know how much money they could charge on an hourly rate if they wanted to apply market rates. In short, they are doing you a favour, so be polite. Don’t waste their time asking questions that can be easily answered by a quick search on Rseek (it’s rude), make sure your question is clear, and all of the relevant information is included. In short, read the posting guidelines carefully (http://www.r-project.org/posting-guide.html), and make use of the
`help.request()` function that R provides to check that you’re actually doing what’s expected.
## 4.13 Summary
This chapter continued where Chapter 3 left off. The focus was still primarily on introducing basic R concepts, but this time at least you can see how those concepts are related to data analysis:
* Installing, loading and updating packages. Knowing how to extend the functionality of R by installing and using packages is critical to becoming an effective R user
* Getting around. Section 4.3 talked about how to manage your workspace and how to keep it tidy. Similarly, Section 4.4 talked about how to get R to interact with the rest of the file system.
* Loading and saving data. Finally, we encountered actual data files. Loading and saving data is obviously a crucial skill, one we discussed in Section 4.5.
* Useful things to know about variables. In particular, we talked about special values, element names and classes.
* More complex types of variables. R has a number of important variable types that will be useful when analysing real data. I talked about factors in Section 4.7, data frames in Section 4.8, lists in Section 4.9 and formulas in Section 4.10.
* Generic functions. How is it that some functions seem to be able to do lots of different things? Section 4.11 tells you how.
* Getting help. Assuming that you’re not looking for counselling, Section 4.12 covers several possibilities. If you are looking for counselling, well, this book really can’t help you there. Sorry.
Taken together, Chapters 3 and 4 provide enough of a background that you can finally get started doing some statistics! Yes, there are a lot more R concepts that you ought to know (and we’ll talk about some of them in Chapters 7 and 8), but I think that we’ve talked quite enough about programming for the moment. It’s time to see how your experience with programming can be used to do some data analysis…
Notice that I used
`print(keeper)` rather than just typing `keeper` . Later on in the text I’ll sometimes use the `print()` function to display things because I think it helps make clear what I’m doing, but in practice people rarely do this.↩ *
More precisely, there are 5000 or so packages on CRAN, the Comprehensive R Archive Network.↩
*
Basically, the reason is that there are 5000 packages, and probably about 4000 authors of packages, and no-one really knows what all of them do. Keeping the installation separate from the loading minimizes the chances that two packages will interact with each other in a nasty way.↩
*
If you’re using the command line, you can get the same information by typing
`library()` at the command line.↩ *
The logit function is a simple mathematical function that happens not to have been included in the basic R distribution.↩
*
Tip for advanced users. You can get R to use the one from the
`car` package by using `car::logit()` as your command rather than `logit()` , since the `car::` part tells R explicitly which package to use. See also `:::` if you’re especially keen to force R to use functions it otherwise wouldn’t, but take care, since `:::` can be dangerous.↩ *
It is not very difficult.↩
*
This would be especially annoying if you’re reading an electronic copy of the book because the text displayed by the
`who()` function is searchable, whereas text shown in a screen shot isn’t!↩ *
Mind you, all that means is that it’s been removed from the workspace. If you’ve got the data saved to file somewhere, then that file is perfectly safe.↩
*
Well, the partition, technically.↩
*
One additional thing worth calling your attention to is the
`file.choose()` function. Suppose you want to load a file and you don’t quite remember where it is, but would like to browse for it. Typing `file.choose()` at the command line will open a window in which you can browse to find the file; when you click on the file you want, R will print out the full path to that file. This is kind of handy.↩ *
Notably those with .rda, .Rd, .Rhistory, .rdb and .rdx extensions↩
*
In a lot of books you’ll see the
`read.table()` function used for this purpose instead of `read.csv()` . They’re more or less identical functions, with the same arguments and everything. They differ only in the default values.↩ *
Note that I didn’t do this in my earlier example when loading the .Rdata file.↩
*
A word of warning: what you don’t want to do is use the “File” menu. If you look in the “File” menu you will see “Save” and “Save As…” options, but they don’t save the workspace. Those options are used for dealing with scripts, and so they’ll produce
`.R` files. We won’t get to those until Chapter 8.↩ *
Or functions. But let’s ignore functions for the moment.↩
*
Actually, I don’t think I ever use this in practice. I don’t know why I bother to talk about it in the book anymore.↩
*
Taking all the usual caveats that attach to IQ measurement as a given, of course.↩
*
Or, more precisely, we don’t know how to measure it. Arguably, a rock has zero intelligence. But it doesn’t make sense to say that the IQ of a rock is 0 in the same way that we can say that the average human has an IQ of 100. And without knowing what the IQ value is that corresponds to a literal absence of any capacity to think, reason or learn, then we really can’t multiply or divide IQ scores and expect a meaningful answer.↩
*
Once again, this is an example of coercing a variable from one class to another. I’ll talk about coercion in more detail in Section 7.10.↩
*
Note that, when I write out the formula, R doesn’t check to see if the
`out` and `pred` variables actually exist: it’s only later on when you try to use the formula for something that this happens.↩ *
For readers with a programming background: what I’m describing is the very basics of how S3 methods work. However, you should be aware that R has two entirely distinct systems for doing object oriented programming, known as S3 and S4. Of the two, S3 is simpler and more informal, whereas S4 supports all the stuff that you might expect of a fully object oriented language. Most of the generics we’ll run into in this book use the S3 system, which is convenient for me because I’m still trying to figure out S4.↩
*
It’s extremely simple, by the way. We discussed it in Section 4.4, though I didn’t call it by that name. Tilde expansion is the thing where R recognises that, in the context of specifying a file location, the tilde symbol ~ corresponds to the user home directory (e.g., /Users/dan/).↩
# Chapter 5 Descriptive statistics
Any time that you get a new data set to look at, one of the first tasks that you have to do is find ways of summarising the data in a compact, easily understood fashion. This is what descriptive statistics (as opposed to inferential statistics) is all about. In fact, to many people the term “statistics” is synonymous with descriptive statistics. It is this topic that we’ll consider in this chapter, but before going into any details, let’s take a moment to get a sense of why we need descriptive statistics. To do this, let’s load the `aflsmall.Rdata` file, and use the `who()` function in the `lsr` package to see what variables are stored in the file:
```
load( "./data/aflsmall.Rdata" )
library(lsr)
who()
```
There are two variables here, `afl.finalists` and `afl.margins` . We’ll focus a bit on these two variables in this chapter, so I’d better tell you what they are. Unlike most of the data sets in this book, these are actually real data, relating to the Australian Football League (AFL).65 The `afl.margins` variable contains the winning margin (number of points) for all 176 home and away games played during the 2010 season. The `afl.finalists` variable contains the names of all 400 teams that played in all 200 finals matches played during the period 1987 to 2010. Let’s have a look at the `afl.margins` variable: `print(afl.margins)`
This output doesn’t make it easy to get a sense of what the data are actually saying. Just “looking at the data” isn’t a terribly effective way of understanding data. In order to get some idea about what’s going on, we need to calculate some descriptive statistics (this chapter) and draw some nice pictures (Chapter 6). Since the descriptive statistics are the easier of the two topics, I’ll start with those, but nevertheless I’ll show you a histogram of the `afl.margins` data, since it should help you get a sense of what the data we’re trying to describe actually look like. But for what it’s worth, this histogram – which is shown in Figure 5.1 – was generated using the `hist()` function. We’ll talk a lot more about how to draw histograms in Section 6.3. For now, it’s enough to look at the histogram and note that it provides a fairly interpretable representation of the `afl.margins` data.
## 5.1 Measures of central tendency
Drawing pictures of the data, as I did in Figure 5.1, is an excellent way to convey the “gist” of what the data is trying to tell you, but it’s often extremely useful to try to condense the data into a few simple “summary” statistics. In most situations, the first thing that you’ll want to calculate is a measure of central tendency. That is, you’d like to know something about where the “average” or “middle” of your data lies. The most commonly used measures are the mean, the median and the mode; occasionally people will also report a trimmed mean. I’ll explain each of these in turn, and then discuss when each of them is useful.
### 5.1.1 The mean
The mean of a set of observations is just a normal, old-fashioned average: add all of the values up, and then divide by the total number of values. The first five AFL margins were 56, 31, 56, 8 and 32, so the mean of these observations is just: \[ \frac{56 + 31 + 56 + 8 + 32}{5} = \frac{183}{5} = 36.60 \] Of course, this definition of the mean isn’t news to anyone: averages (i.e., means) are used so often in everyday life that this is pretty familiar stuff. However, since the concept of a mean is something that everyone already understands, I’ll use this as an excuse to start introducing some of the mathematical notation that statisticians use to describe this calculation, and talk about how the calculations would be done in R.
The first piece of notation to introduce is \(N\), which we’ll use to refer to the number of observations that we’re averaging (in this case \(N = 5\)). Next, we need to attach a label to the observations themselves. It’s traditional to use \(X\) for this, and to use subscripts to indicate which observation we’re actually talking about. That is, we’ll use \(X_1\) to refer to the first observation, \(X_2\) to refer to the second observation, and so on, all the way up to \(X_N\) for the last one. Or, to say the same thing in a slightly more abstract way, we use \(X_i\) to refer to the \(i\)-th observation. Just to make sure we’re clear on the notation, the following table lists the first 5 observations in the `afl.margins` variable, along with the mathematical symbol used to refer to it, and the actual value that the observation corresponds to:

| the observation | its symbol | the observed value |
| --- | --- | --- |
| winning margin, game 1 | \(X_1\) | 56 points |
| winning margin, game 2 | \(X_2\) | 31 points |
| winning margin, game 3 | \(X_3\) | 56 points |
| winning margin, game 4 | \(X_4\) | 8 points |
| winning margin, game 5 | \(X_5\) | 32 points |
Okay, now let’s try to write a formula for the mean. By tradition, we use \(\bar{X}\) as the notation for the mean. So the calculation for the mean could be expressed using the following formula: \[ \bar{X} = \frac{X_1 + X_2 + ... + X_{N-1} + X_N}{N} \] This formula is entirely correct, but it’s terribly long, so we make use of the summation symbol \(\scriptstyle\sum\) to shorten it.66 If I want to add up the first five observations, I could write out the sum the long way, \(X_1 + X_2 + X_3 + X_4 +X_5\) or I could use the summation symbol to shorten it to this: \[ \sum_{i=1}^5 X_i \] Taken literally, this could be read as “the sum, taken over all \(i\) values from 1 to 5, of the value \(X_i\)”. But basically, what it means is “add up the first five observations”. In any case, we can use this notation to write out the formula for the mean, which looks like this: \[ \bar{X} = \frac{1}{N} \sum_{i=1}^N X_i \]
In all honesty, I can’t imagine that all this mathematical notation helps clarify the concept of the mean at all. In fact, it’s really just a fancy way of writing out the same thing I said in words: add all the values up, and then divide by the total number of items. However, that’s not really the reason I went into all that detail. My goal was to try to make sure that everyone reading this book is clear on the notation that we’ll be using throughout the book: \(\bar{X}\) for the mean, \(\scriptstyle\sum\) for the idea of summation, \(X_i\) for the \(i\)th observation, and \(N\) for the total number of observations. We’re going to be re-using these symbols a fair bit, so it’s important that you understand them well enough to be able to “read” the equations, and to be able to see that it’s just saying “add up lots of things and then divide by another thing”.
### 5.1.2 Calculating the mean in R
Okay, that’s the maths. How do we get the magic computing box to do the work for us? If you really wanted to, you could do this calculation directly in R. For the first 5 AFL scores, you can do this just by typing them in as if R were a calculator…
```
(56 + 31 + 56 + 8 + 32) / 5
```
`## [1] 36.6` … in which case R outputs the answer 36.6, just as if it were a calculator. However, that’s not the only way to do the calculations, and when the number of observations starts to become large, it’s easily the most tedious. Besides, in almost every real world scenario, you’ve already got the actual numbers stored in a variable of some kind, just like we have with the `afl.margins` variable. Under those circumstances, what you want is a function that will just add up all the values stored in a numeric vector. That’s what the `sum()` function does. If we want to add up all 176 winning margins in the data set, we can do so using the following command:67 `sum( afl.margins )` `## [1] 6213`
If we only want the sum of the first five observations, then we can use square brackets to pull out only the first five elements of the vector. So the command would now be:
```
sum( afl.margins[1:5] )
```
`## [1] 183`
To calculate the mean, we now tell R to divide the output of this summation by five, so the command that we need to type now becomes the following:
```
sum( afl.margins[1:5] ) / 5
```
`## [1] 36.6` Although it’s pretty easy to calculate the mean using the `sum()` function, we can do it in an even easier way, since R also provides us with the `mean()` function. To calculate the mean for all 176 games, we would use the following command:
`mean( x = afl.margins )` `## [1] 35.30114` However, since `x` is the first argument to the function, I could have omitted the argument name. In any case, just to show you that there’s nothing funny going on, here’s what we would do to calculate the mean for the first five observations:
```
mean( afl.margins[1:5] )
```
`## [1] 36.6`
As you can see, this gives exactly the same answers as the previous calculations.
### 5.1.3 The median
The second measure of central tendency that people use a lot is the median, and it’s even easier to describe than the mean. The median of a set of observations is just the middle value. As before let’s imagine we were interested only in the first 5 AFL winning margins: 56, 31, 56, 8 and 32. To figure out the median, we sort these numbers into ascending order: \[ 8, 31, \mathbf{32}, 56, 56 \] From inspection, it’s obvious that the median value of these 5 observations is 32, since that’s the middle one in the sorted list (I’ve put it in bold to make it even more obvious). Easy stuff. But what should we do if we were interested in the first 6 games rather than the first 5? Since the sixth game in the season had a winning margin of 14 points, our sorted list is now \[ 8, 14, \mathbf{31}, \mathbf{32}, 56, 56 \] and there are two middle numbers, 31 and 32. The median is defined as the average of those two numbers, which is of course 31.5. As before, it’s very tedious to do this by hand when you’ve got lots of numbers. To illustrate this, here’s what happens when you use R to sort all 176 winning margins. First, I’ll use the `sort()` function (discussed in Chapter 7) to display the winning margins in increasing numerical order:
The middle values are 30 and 31, so the median winning margin for 2010 was 30.5 points. In real life, of course, no-one actually calculates the median by sorting the data and then looking for the middle value. In real life, we use the median command:
`median( x = afl.margins )` `## [1] 30.5`
which outputs the median value of 30.5.
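If you want to reassure yourself that `median()` really is just doing the sort-and-find-the-middle calculation described above, here’s a quick sketch of the same thing done by hand:
```
sorted <- sort( afl.margins )              # all 176 margins, smallest to largest
n <- length( sorted )                      # 176: an even number, so...
( sorted[ n/2 ] + sorted[ n/2 + 1 ] ) / 2  # ...average the two middle values: 30.5
```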
### 5.1.4 Mean or median? What’s the difference?
Knowing how to calculate means and medians is only a part of the story. You also need to understand what each one is saying about the data, and what that implies for when you should use each one. This is illustrated in Figure 5.2: the mean is kind of like the “centre of gravity” of the data set, whereas the median is the “middle value” in the data. What this implies, as far as which one you should use, depends a little on what type of data you’ve got and what you’re trying to achieve. As a rough guide:
* If your data are nominal scale, you probably shouldn’t be using either the mean or the median. Both the mean and the median rely on the idea that the numbers assigned to values are meaningful. If the numbering scheme is arbitrary, then it’s probably best to use the mode (Section 5.1.7) instead.
* If your data are ordinal scale, you’re more likely to want to use the median than the mean. The median only makes use of the order information in your data (i.e., which numbers are bigger), but doesn’t depend on the precise numbers involved. That’s exactly the situation that applies when your data are ordinal scale. The mean, on the other hand, makes use of the precise numeric values assigned to the observations, so it’s not really appropriate for ordinal data.
* For interval and ratio scale data, either one is generally acceptable. Which one you pick depends a bit on what you’re trying to achieve. The mean has the advantage that it uses all the information in the data (which is useful when you don’t have a lot of data), but it’s very sensitive to extreme values, as we’ll see in Section 5.1.6.
Let’s expand on that last part a little. One consequence is that there are systematic differences between the mean and the median when the histogram is asymmetric (skewed; see Section 5.3). This is illustrated in Figure 5.2: notice that the median (right hand side) is located closer to the “body” of the histogram, whereas the mean (left hand side) gets dragged towards the “tail” (where the extreme values are). To give a concrete example, suppose Bob (income $50,000), Kate (income $60,000) and Jane (income $65,000) are sitting at a table: the average income at the table is $58,333 and the median income is $60,000. Then Bill sits down with them (income $100,000,000). The average income has now jumped to $25,043,750 but the median rises only to $62,500. If you’re interested in looking at the overall income at the table, the mean might be the right answer; but if you’re interested in what counts as a typical income at the table, the median would be a better choice here.
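If you’d like to check the arithmetic in that little story for yourself, it only takes a few lines of R. This is just a sketch using the made-up incomes from the example above:
```
incomes <- c( 50000, 60000, 65000 )   # Bob, Kate and Jane
mean( incomes )                       # 58333.33
median( incomes )                     # 60000
incomes <- c( incomes, 100000000 )    # now Bill sits down
mean( incomes )                       # 25043750
median( incomes )                     # 62500
```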
### 5.1.5 A real life example
To try to get a sense of why you need to pay attention to the differences between the mean and the median, let’s consider a real life example. Since I tend to mock journalists for their poor scientific and statistical knowledge, I should give credit where credit is due. This is from an excellent article on the ABC news website68 24 September, 2010:
Senior Commonwealth Bank executives have travelled the world in the past couple of weeks with a presentation showing how Australian house prices, and the key price to income ratios, compare favourably with similar countries. “Housing affordability has actually been going sideways for the last five to six years,” said <NAME>, the chief economist of the bank’s trading arm, CommSec.
This probably comes as a huge surprise to anyone with a mortgage, or who wants a mortgage, or pays rent, or isn’t completely oblivious to what’s been going on in the Australian housing market over the last several years. Back to the article:
CBA has waged its war against what it believes are housing doomsayers with graphs, numbers and international comparisons. In its presentation, the bank rejects arguments that Australia’s housing is relatively expensive compared to incomes. It says Australia’s house price to household income ratio of 5.6 in the major cities, and 4.3 nationwide, is comparable to many other developed nations. It says San Francisco and New York have ratios of 7, Auckland’s is 6.7, and Vancouver comes in at 9.3.
More excellent news! Except, the article goes on to make the observation that…
Many analysts say that has led the bank to use misleading figures and comparisons. If you go to page four of CBA’s presentation and read the source information at the bottom of the graph and table, you would notice there is an additional source on the international comparison – Demographia. However, if the Commonwealth Bank had also used Demographia’s analysis of Australia’s house price to income ratio, it would have come up with a figure closer to 9 rather than 5.6 or 4.3
That’s, um, a rather serious discrepancy. One group of people say 9, another says 4-5. Should we just split the difference, and say the truth lies somewhere in between? Absolutely not: this is a situation where there is a right answer and a wrong answer. Demographia are correct, and the Commonwealth Bank is incorrect. As the article points out
[An] obvious problem with the Commonwealth Bank’s domestic price to income figures is they compare average incomes with median house prices (unlike the Demographia figures that compare median incomes to median prices). The median is the mid-point, effectively cutting out the highs and lows, and that means the average is generally higher when it comes to incomes and asset prices, because it includes the earnings of Australia’s wealthiest people. To put it another way: the Commonwealth Bank’s figures count <NAME>’ multi-million dollar pay packet on the income side, but not his (no doubt) very expensive house in the property price figures, thus understating the house price to income ratio for middle-income Australians.
Couldn’t have put it better myself. The way that Demographia calculated the ratio is the right thing to do. The way that the Bank did it is incorrect. As for why an extremely quantitatively sophisticated organisation such as a major bank made such an elementary mistake, well… I can’t say for sure, since I have no special insight into their thinking, but the article itself does happen to mention the following facts, which may or may not be relevant:
[As] Australia’s largest home lender, the Commonwealth Bank has one of the biggest vested interests in house prices rising. It effectively owns a massive swathe of Australian housing as security for its home loans as well as many small business loans.
My, my.
### 5.1.6 Trimmed mean
One of the fundamental rules of applied statistics is that the data are messy. Real life is never simple, and so the data sets that you obtain are never as straightforward as the statistical theory says.69 This can have awkward consequences. To illustrate, consider this rather strange looking data set: \[ -100,2,3,4,5,6,7,8,9,10 \] If you were to observe this in a real life data set, you’d probably suspect that something funny was going on with the \(-100\) value. It’s probably an outlier, a value that doesn’t really belong with the others. You might consider removing it from the data set entirely, and in this particular case I’d probably agree with that course of action. In real life, however, you don’t always get such cut-and-dried examples. For instance, you might get this instead: \[ -15,2,3,4,5,6,7,8,9,12 \] The \(-15\) looks a bit suspicious, but not anywhere near as much as that \(-100\) did. In this case, it’s a little trickier. It might be a legitimate observation, it might not.
When faced with a situation where some of the most extreme-valued observations might not be quite trustworthy, the mean is not necessarily a good measure of central tendency. It is highly sensitive to one or two extreme values, and is thus not considered to be a robust measure. One remedy that we’ve seen is to use the median. A more general solution is to use a “trimmed mean”. To calculate a trimmed mean, what you do is “discard” the most extreme examples on both ends (i.e., the largest and the smallest), and then take the mean of everything else. The goal is to preserve the best characteristics of the mean and the median: just like a median, you aren’t highly influenced by extreme outliers, but like the mean, you “use” more than one of the observations. Generally, we describe a trimmed mean in terms of the percentage of observation on either side that are discarded. So, for instance, a 10% trimmed mean discards the largest 10% of the observations and the smallest 10% of the observations, and then takes the mean of the remaining 80% of the observations. Not surprisingly, the 0% trimmed mean is just the regular mean, and the 50% trimmed mean is the median. In that sense, trimmed means provide a whole family of central tendency measures that span the range from the mean to the median.
For our toy example above, we have 10 observations, and so a 10% trimmed mean is calculated by ignoring the largest value (i.e., `12` ) and the smallest value (i.e., `-15` ) and taking the mean of the remaining values. First, let’s enter the data
```
dataset <- c( -15,2,3,4,5,6,7,8,9,12 )
```
Next, let’s calculate means and medians:
`mean( x = dataset )` `## [1] 4.1`
`median( x = dataset )` `## [1] 5.5`
That’s a fairly substantial difference, but I’m tempted to think that the mean is being influenced a bit too much by the extreme values at either end of the data set, especially the \(-15\) one. So let’s just try trimming the mean a bit. If I take a 10% trimmed mean, we’ll drop the extreme values on either side, and take the mean of the rest:
```
mean( x = dataset, trim = .1)
```
`## [1] 5.5` which in this case gives exactly the same answer as the median. Note that, to get a 10% trimmed mean you write `trim = .1` , not `trim = 10` . In any case, let’s finish up by calculating the 5% trimmed mean for the `afl.margins` data,
```
mean( x = afl.margins, trim = .05)
```
`## [1] 33.75`
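Just as a sanity check on what `trim = .1` actually did to the toy data set, here’s the same calculation done by hand (a sketch):
```
sorted <- sort( dataset )   # -15  2  3  4  5  6  7  8  9 12
mean( sorted[ 2:9 ] )       # drop one observation at each end, then average: 5.5
```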
### 5.1.7 Mode
The mode of a sample is very simple: it is the value that occurs most frequently. To illustrate the mode using the AFL data, let’s examine a different aspect of the data set. Who has played in the most finals? The `afl.finalists` variable is a factor that contains the name of every team that played in any AFL final from 1987-2010, so let’s have a look at it. To do this we will use the `head()` command. `head()` is useful when you’re working with an object that has a lot of rows or elements, since you can use it to display just the first few. There have been a lot of finals in this period, so printing `afl.finalists` using `print(afl.finalists)` would just fill up the screen. The command below tells R that we just want the first 25 entries.
```
head(afl.finalists, 25)
```
```
## [1] Hawthorn Melbourne Carlton Melbourne Hawthorn
## [6] Carlton Melbourne Carlton Hawthorn Melbourne
## [11] Melbourne Hawthorn Melbourne Essendon Hawthorn
## [16] Geelong Geelong Hawthorn Collingwood Melbourne
## [21] Collingwood West Coast Collingwood Essendon Collingwood
## 17 Levels: Adelaide Brisbane Carlton Collingwood Essendon ... Western Bulldogs
```
There are actually 400 entries (aren’t you glad we didn’t print them all?). We could read through all 400, and count the number of occasions on which each team name appears in our list of finalists, thereby producing a frequency table. However, that would be mindless and boring: exactly the sort of task that computers are great at. So let’s use the `table()` function (discussed in more detail in Section 7.1) to do this task for us:
```
table( afl.finalists )
```
Now that we have our frequency table, we can just look at it and see that, over the 24 years for which we have data, Geelong has played in more finals than any other team. Thus, the mode of the `finalists` data is `"Geelong"` . The core packages in R don’t have a function for calculating the mode70. However, I’ve included a function in the `lsr` package that does this. The function is called `modeOf()` , and here’s how you use it:
```
modeOf( x = afl.finalists )
```
`## [1] "Geelong"` There’s also a function called `maxFreq()` that tells you what the modal frequency is. If we apply this function to our finalists data, we obtain the following:
```
maxFreq( x = afl.finalists )
```
`## [1] 39`
Taken together, we observe that Geelong (39 finals) played in more finals than any other team during the 1987-2010 period.
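Incidentally, if you ever find yourself without the `lsr` package handy, you can get the same answers from base R by combining `table()` with `which.max()`. A quick sketch:
```
freq <- table( afl.finalists )        # the frequency table from above
names( freq )[ which.max( freq ) ]    # the most frequent value: "Geelong"
max( freq )                           # and its frequency: 39
```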
One last point to make with respect to the mode. While it’s generally true that the mode is most often calculated when you have nominal scale data (because means and medians are useless for those sorts of variables), there are some situations in which you really do want to know the mode of an ordinal, interval or ratio scale variable. For instance, let’s go back to thinking about our `afl.margins` variable. This variable is clearly ratio scale (if it’s not clear to you, it may help to re-read Section 2.2), and so in most situations the mean or the median is the measure of central tendency that you want. But consider this scenario… a friend of yours is offering a bet. They pick a football game at random, and (without knowing who is playing) you have to guess the exact margin. If you guess correctly, you win $50. If you don’t, you lose $1. There are no consolation prizes for “almost” getting the right answer. You have to guess exactly the right margin.71 For this bet, the mean and the median are completely useless to you. It is the mode that you should bet on. So, we calculate this modal value:
```
modeOf( x = afl.margins )
```
`## [1] 3`
```
maxFreq( x = afl.margins )
```
`## [1] 8`
So the 2010 data suggest you should bet on a 3 point margin, and since this was observed in 8 of the 176 games (4.5% of games), the odds are firmly in your favour.
## 5.2 Measures of variability
The statistics that we’ve discussed so far all relate to central tendency. That is, they all talk about which values are “in the middle” or “popular” in the data. However, central tendency is not the only type of summary statistic that we want to calculate. The second thing that we really want is a measure of the variability of the data. That is, how “spread out” are the data? How “far” away from the mean or median do the observed values tend to be? For now, let’s assume that the data are interval or ratio scale, so we’ll continue to use the `afl.margins` data. We’ll use this data to discuss several different measures of spread, each with different strengths and weaknesses.
### 5.2.1 Range
The range of a variable is very simple: it’s the biggest value minus the smallest value. For the AFL winning margins data, the maximum value is 116, and the minimum value is 0. We can calculate these values in R using the `max()` and `min()` functions: `max( afl.margins )` `## [1] 116` `min( afl.margins )` `## [1] 0` The other possibility is to use the `range()` function, which outputs both the minimum value and the maximum value in a vector, like this: `range( afl.margins )` `## [1] 0 116`
Although the range is the simplest way to quantify the notion of “variability”, it’s one of the worst. Recall from our discussion of the mean that we want our summary measure to be robust. If the data set has one or two extremely bad values in it, we’d like our statistics not to be unduly influenced by these cases. If we look once again at our toy example of a data set containing very extreme outliers… \[ -100,2,3,4,5,6,7,8,9,10 \] … it is clear that the range is not robust, since this has a range of 110, but if the outlier were removed we would have a range of only 8.
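To make that lack of robustness concrete, here’s the toy data set again in R (a sketch):
```
toy <- c( -100, 2, 3, 4, 5, 6, 7, 8, 9, 10 )
max( toy ) - min( toy )           # 110: the range is dominated by the outlier
diff( range( toy ) )              # the same calculation, written more compactly
max( toy[-1] ) - min( toy[-1] )   # only 8 once the -100 is removed
```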
### 5.2.2 Interquartile range
The interquartile range (IQR) is like the range, but instead of calculating the difference between the biggest and smallest value, it calculates the difference between the 25th quantile and the 75th quantile. Probably you already know what a quantile is (they’re more commonly called percentiles), but if not: the 10th percentile of a data set is the smallest number \(x\) such that 10% of the data is less than \(x\). In fact, we’ve already come across the idea: the median of a data set is its 50th quantile / percentile! R actually provides you with a way of calculating quantiles, using the (surprise, surprise) `quantile()` function. Let’s use it to calculate the median AFL winning margin:
```
quantile( x = afl.margins, probs = .5)
```
```
## 50%
## 30.5
```
And not surprisingly, this agrees with the answer that we saw earlier with the `median()` function. Now, we can actually input lots of quantiles at once, by specifying a vector for the `probs` argument. So let’s do that, and get the 25th and 75th percentiles:
```
quantile( x = afl.margins, probs = c(.25,.75) )
```
```
## 25% 75%
## 12.75 50.50
```
And, by noting that \(50.5 - 12.75 = 37.75\), we can see that the interquartile range for the 2010 AFL winning margins data is 37.75. Of course, that seems like too much work to do all that typing, so R has a built in function called `IQR()` that we can use:
```
IQR( x = afl.margins )
```
`## [1] 37.75`
While it’s obvious how to interpret the range, it’s a little less obvious how to interpret the IQR. The simplest way to think about it is like this: the interquartile range is the range spanned by the “middle half” of the data. That is, one quarter of the data falls below the 25th percentile, one quarter of the data is above the 75th percentile, leaving the “middle half” of the data lying in between the two. And the IQR is the range covered by that middle half.
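And if you’d like to verify that `IQR()` is doing nothing more exotic than subtracting those two quantiles, here’s a quick sketch:
```
q <- quantile( afl.margins, probs = c(.25, .75) )
unname( q[2] - q[1] )   # 37.75, the same answer that IQR() gave us
```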
### 5.2.3 Mean absolute deviation
The two measures we’ve looked at so far, the range and the interquartile range, both rely on the idea that we can measure the spread of the data by looking at the quantiles of the data. However, this isn’t the only way to think about the problem. A different approach is to select a meaningful reference point (usually the mean or the median) and then report the “typical” deviations from that reference point. What do we mean by “typical” deviation? Usually, the mean or median value of these deviations! In practice, this leads to two different measures, the “mean absolute deviation (from the mean)” and the “median absolute deviation (from the median)”. From what I’ve read, the measure based on the median seems to be used in statistics, and does seem to be the better of the two, but to be honest I don’t think I’ve seen it used much in psychology. The measure based on the mean does occasionally show up in psychology though. In this section I’ll talk about the first one, and I’ll come back to talk about the second one later.
Since the previous paragraph might sound a little abstract, let’s go through the mean absolute deviation from the mean a little more slowly. One useful thing about this measure is that the name actually tells you exactly how to calculate it. Let’s think about our AFL winning margins data, and once again we’ll start by pretending that there’s only 5 games in total, with winning margins of 56, 31, 56, 8 and 32. Since our calculations rely on an examination of the deviation from some reference point (in this case the mean), the first thing we need to calculate is the mean, \(\bar{X}\). For these five observations, our mean is \(\bar{X} = 36.6\). The next step is to convert each of our observations \(X_i\) into a deviation score. We do this by calculating the difference between the observation \(X_i\) and the mean \(\bar{X}\). That is, the deviation score is defined to be \(X_i - \bar{X}\). For the first observation in our sample, this is equal to \(56 - 36.6 = 19.4\). Okay, that’s simple enough. The next step in the process is to convert these deviations to absolute deviations. As we discussed earlier when talking about the `abs()` function in R (Section 3.5), we do this by converting any negative values to positive ones. Mathematically, we would denote the absolute value of \(-3\) as \(|-3|\), and so we say that \(|-3| = 3\). We use the absolute value function here because we don’t really care whether the value is higher than the mean or lower than the mean, we’re just interested in how close it is to the mean. To help make this process as obvious as possible, the table below shows these calculations for all five observations:
| \(i\) [which game] | \(X_i\) [value] | \(X_i - \bar{X}\) [deviation from mean] | \(\lvert X_i - \bar{X} \rvert\) [absolute deviation] |
| --- | --- | --- | --- |
| 1 | 56 | 19.4 | 19.4 |
| 2 | 31 | -5.6 | 5.6 |
| 3 | 56 | 19.4 | 19.4 |
| 4 | 8 | -28.6 | 28.6 |
| 5 | 32 | -4.6 | 4.6 |

Now that we have calculated the absolute deviation score for every observation in the data set, all that we have to do is calculate the mean of these scores. Let’s do that: \[ \frac{19.4 + 5.6 + 19.4 + 28.6 + 4.6}{5} = 15.52 \] And we’re done. The mean absolute deviation for these five scores is 15.52.
However, while our calculations for this little example are at an end, we do have a couple of things left to talk about. Firstly, we should really try to write down a proper mathematical formula. But in order to do this I need some mathematical notation to refer to the mean absolute deviation. Irritatingly, “mean absolute deviation” and “median absolute deviation” have the same acronym (MAD), which leads to a certain amount of ambiguity, and since R tends to use MAD to refer to the median absolute deviation, I’d better come up with something different for the mean absolute deviation. Sigh. What I’ll do is use AAD instead, short for average absolute deviation. Now that we have some unambiguous notation, here’s the formula that describes what we just calculated: \[ \mbox{AAD}(X) = \frac{1}{N} \sum_{i = 1}^N |X_i - \bar{X}| \]
The last thing we need to talk about is how to calculate AAD in R. One possibility would be to do everything using low level commands, laboriously following the same steps that I used when describing the calculations above. However, that’s pretty tedious. You’d end up with a series of commands that might look like this:
```
X <- c(56, 31,56,8,32) # enter the data
X.bar <- mean( X ) # step 1. the mean of the data
AD <- abs( X - X.bar ) # step 2. the absolute deviations from the mean
AAD <- mean( AD ) # step 3. the mean absolute deviations
print( AAD ) # print the results
```
`## [1] 15.52` Each of those commands is pretty simple, but there’s just too many of them. And because I find that to be too much typing, the `lsr` package has a very simple function called `aad()` that does the calculations for you. If we apply the `aad()` function to our data, we get this:
```
library(lsr)
aad( X )
```
`## [1] 15.52`
No surprises there.
### 5.2.4 Variance
Although the mean absolute deviation measure has its uses, it’s not the best measure of variability to use. From a purely mathematical perspective, there are some solid reasons to prefer squared deviations rather than absolute deviations. If we do that, we obtain a measure called the variance, which has a lot of really nice statistical properties that I’m mostly going to ignore,72 and one massive psychological flaw that I’m going to make a big deal out of in a moment. One of those nice properties is worth a quick aside: suppose \(X\) and \(Y\) are two independent variables with variances \(\mbox{Var}(X)\) and \(\mbox{Var}(Y)\) respectively, and I define a new variable \(Z\) that is the sum of the two, \(Z = X+Y\). As it turns out, the variance of \(Z\) is equal to \(\mbox{Var}(X) + \mbox{Var}(Y)\). This is a very useful property, but it’s not true of the other measures that I talk about in this section. The variance of a data set \(X\) is sometimes written as \(\mbox{Var}(X)\), but it’s more commonly denoted \(s^2\) (the reason for this will become clearer shortly). The formula that we use to calculate the variance of a set of observations is as follows: \[ \mbox{Var}(X) = \frac{1}{N} \sum_{i=1}^N \left( X_i - \bar{X} \right)^2 \] As you can see, it’s basically the same formula that we used to calculate the mean absolute deviation, except that instead of using “absolute deviations” we use “squared deviations”. It is for this reason that the variance is sometimes referred to as the “mean square deviation”.
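If you’re curious about that additivity property, a quick simulation makes it easy to see in action. This is just an optional sketch; I’m using the names `A` and `B` so as not to clobber the `X` vector we’ll be using below:
```
set.seed(1)                   # make the simulation reproducible
A <- rnorm( 100000, sd = 2 )  # Var(A) is about 4
B <- rnorm( 100000, sd = 3 )  # Var(B) is about 9, and B is independent of A
var( A ) + var( B )           # roughly 13...
var( A + B )                  # ...and so is the variance of the sum
```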
Now that we’ve got the basic idea, let’s have a look at a concrete example. Once again, let’s use the first five AFL games as our data. If we follow the same approach that we took last time, we end up with the following table:
\(i\) [which game] | \(X_i\) [value] | \(X_i - \bar{X}\) [deviation from mean] | \((X_i - \bar{X})^2\) [squared deviation] |
| --- | --- | --- | --- |
1 | 56 | 19.4 | 376.36 |
2 | 31 | -5.6 | 31.36 |
3 | 56 | 19.4 | 376.36 |
4 | 8 | -28.6 | 817.96 |
5 | 32 | -4.6 | 21.16 |
That last column contains all of our squared deviations, so all we have to do is average them. If we do that by typing all the numbers into R by hand…
```
( 376.36 + 31.36 + 376.36 + 817.96 + 21.16 ) / 5
```
`## [1] 324.64`
… we end up with a variance of 324.64. Exciting, isn’t it? For the moment, let’s ignore the burning question that you’re all probably thinking (i.e., what the heck does a variance of 324.64 actually mean?) and instead talk a bit more about how to do the calculations in R, because this will reveal something very weird.
As always, we want to avoid having to type in a whole lot of numbers ourselves. And as it happens, we have the vector `X` lying around, which we created in the previous section. With this in mind, we can calculate the variance of `X` by using the following command,
```
mean( (X - mean(X) )^2)
```
`## [1] 324.64` and as usual we get the same answer as the one that we got when we did everything by hand. However, I still think that this is too much typing. Fortunately, R has a built in function called `var()` which does calculate variances. So we could also do this… `var(X)` `## [1] 405.8`
and you get the same… no, wait… you get a completely different answer. That’s just weird. Is R broken? Is this a typo? Is Dan an idiot?
As it happens, the answer is no.73 It’s not a typo, and R is not making a mistake. To get a feel for what’s happening, let’s stop using the tiny data set containing only 5 data points, and switch to the full set of 176 games that we’ve got stored in our `afl.margins` vector. First, let’s calculate the variance by using the formula that I described above:
```
mean( (afl.margins - mean(afl.margins) )^2)
```
`## [1] 675.9718` Now let’s use the `var()` function: `var( afl.margins )` `## [1] 679.8345` Hm. These two numbers are very similar this time. That seems like too much of a coincidence to be a mistake. And of course it isn’t a mistake. In fact, it’s very simple to explain what R is doing here, but slightly trickier to explain why R is doing it. So let’s start with the “what”. What R is doing is evaluating a slightly different formula to the one I showed you above. Instead of averaging the squared deviations, which requires you to divide by the number of data points \(N\), R has chosen to divide by \(N-1\). In other words, the formula that R is using is this one: \[ \frac{1}{N-1} \sum_{i=1}^N \left( X_i - \bar{X} \right)^2 \] It’s easy enough to verify that this is what’s happening, as the following command illustrates:
```
sum( (X-mean(X))^2 ) / 4
```
`## [1] 405.8` This is the same answer that R gave us when we calculated `var(X)` originally. So that’s the what. The real question is why R is dividing by \(N-1\) and not by \(N\). After all, the variance is supposed to be the mean squared deviation, right? So shouldn’t we be dividing by \(N\), the actual number of observations in the sample? Well, yes, we should. However, as we’ll discuss in Chapter 10, there’s a subtle distinction between “describing a sample” and “making guesses about the population from which the sample came”. Up to this point, it’s been a distinction without a difference. Regardless of whether you’re describing a sample or drawing inferences about the population, the mean is calculated exactly the same way. Not so for the variance, or the standard deviation, or for many other measures besides. What I outlined to you initially (i.e., take the actual average, and thus divide by \(N\)) assumes that you literally intend to calculate the variance of the sample. Most of the time, however, you’re not terribly interested in the sample in and of itself. Rather, the sample exists to tell you something about the world. If so, you’re actually starting to move away from calculating a “sample statistic”, and towards the idea of estimating a “population parameter”. However, I’m getting ahead of myself. For now, let’s just take it on faith that R knows what it’s doing, and we’ll revisit the question later on when we talk about estimation in Chapter 10.
Okay, one last thing. This section so far has read a bit like a mystery novel. I’ve shown you how to calculate the variance, described the weird “\(N-1\)” thing that R does and hinted at the reason why it’s there, but I haven’t mentioned the single most important thing… how do you interpret the variance? Descriptive statistics are supposed to describe things, after all, and right now the variance is really just a gibberish number. Unfortunately, the reason why I haven’t given you the human-friendly interpretation of the variance is that there really isn’t one. This is the most serious problem with the variance. Although it has some elegant mathematical properties that suggest that it really is a fundamental quantity for expressing variation, it’s completely useless if you want to communicate with an actual human… variances are completely uninterpretable in terms of the original variable! All the numbers have been squared, and they don’t mean anything anymore. This is a huge issue. For instance, according to the table I presented earlier, the margin in game 1 was “376.36 points-squared higher than the average margin”. This is exactly as stupid as it sounds; and so when we calculate a variance of 324.64, we’re in the same situation. I’ve watched a lot of footy games, and never has anyone referred to “points squared”. It’s not a real unit of measurement, and since the variance is expressed in terms of this gibberish unit, it is totally meaningless to a human.
### 5.2.5 Standard deviation
Okay, suppose that you like the idea of using the variance because of those nice mathematical properties that I haven’t talked about, but – since you’re a human and not a robot – you’d like to have a measure that is expressed in the same units as the data itself (i.e., points, not points-squared). What should you do? The solution to the problem is obvious: take the square root of the variance, known as the standard deviation, also called the “root mean squared deviation”, or RMSD. This solves our problem fairly neatly: while nobody has a clue what “a variance of 324.64 points-squared” really means, it’s much easier to understand “a standard deviation of 18.01 points”, since it’s expressed in the original units. It is traditional to refer to the standard deviation of a sample of data as \(s\), though “sd” and “std dev.” are also used at times. Because the standard deviation is equal to the square root of the variance, you probably won’t be surprised to see that the formula is: \[ s = \sqrt{ \frac{1}{N} \sum_{i=1}^N \left( X_i - \bar{X} \right)^2 } \] and the R function that we use to calculate it is `sd()` . However, as you might have guessed from our discussion of the variance, what R actually calculates is slightly different to the formula given above. Just like we saw with the variance, what R calculates is a version that divides by \(N-1\) rather than \(N\). For reasons that will make sense when we return to this topic in Chapter 10, I’ll refer to this new quantity as \(\hat\sigma\) (read as: “sigma hat”), and the formula for this is \[
\hat\sigma = \sqrt{ \frac{1}{N-1} \sum_{i=1}^N \left( X_i - \bar{X} \right)^2 }
\] With that in mind, calculating standard deviations in R is simple: `sd( afl.margins )` `## [1] 26.07364`
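As a quick sanity check, here’s a sketch showing that `sd()` really is just the square root of `var()` (both divide by \(N-1\)), and how you could compute the divide-by-\(N\) version directly from the formula above if you ever wanted it:

```
sqrt( var( afl.margins ) )                           # identical to sd( afl.margins )
sqrt( mean( (afl.margins - mean(afl.margins))^2 ) )  # the divide-by-N version; slightly smaller
```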
Interpreting standard deviations is slightly more complex. Because the standard deviation is derived from the variance, and the variance is a quantity that has little to no meaning that makes sense to us humans, the standard deviation doesn’t have a simple interpretation. As a consequence, most of us just rely on a simple rule of thumb: in general, you should expect 68% of the data to fall within 1 standard deviation of the mean, 95% of the data to fall within 2 standard deviations of the mean, and 99.7% of the data to fall within 3 standard deviations of the mean. This rule tends to work pretty well most of the time, but it’s not exact: it’s actually calculated based on an assumption that the histogram is symmetric and “bell shaped.”74 As you can tell from looking at the AFL winning margins histogram in Figure 5.1, this isn’t exactly true of our data! Even so, the rule is approximately correct. As it turns out, 65.3% of the AFL margins data fall within one standard deviation of the mean. This is shown visually in Figure 5.3.
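If you want to check that sort of claim for yourself, a one-liner along these lines will do it. This is just a sketch: I’m using `sd()` (the \(N-1\) version) and counting observations that sit exactly on the boundary as being “inside”, so the figure you get may differ very slightly from the one quoted above.

```
# proportion of winning margins within one standard deviation of the mean
mean( abs( afl.margins - mean(afl.margins) ) <= sd(afl.margins) )
```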
### 5.2.6 Median absolute deviation
The last measure of variability that I want to talk about is the median absolute deviation (MAD). The basic idea behind MAD is very simple, and is pretty much identical to the idea behind the mean absolute deviation (Section 5.2.3). The difference is that you use the median everywhere. If we were to frame this idea as a pair of R commands, they would look like this:
```
# mean absolute deviation from the mean:
mean( abs(afl.margins - mean(afl.margins)) )
```
`## [1] 21.10124`
```
# *median* absolute deviation from the *median*:
median( abs(afl.margins - median(afl.margins)) )
```
`## [1] 19.5`
This has a straightforward interpretation: every observation in the data set lies some distance away from the typical value (the median). So the MAD is an attempt to describe a typical deviation from a typical value in the data set. It wouldn’t be unreasonable to interpret the MAD value of 19.5 for our AFL data by saying something like this:
The median winning margin in 2010 was 30.5, indicating that a typical game involved a winning margin of about 30 points. However, there was a fair amount of variation from game to game: the MAD value was 19.5, indicating that a typical winning margin would differ from this median value by about 19-20 points.
As you’d expect, R has a built in function for calculating MAD, and you will be shocked no doubt to hear that it’s called `mad()` . However, it’s a little bit more complicated than the functions that we’ve been using previously. If you want to use it to calculate MAD in the exact same way that I have described it above, the command that you need to use specifies two arguments: the data set itself `x` , and a `constant` that I’ll explain in a moment. For our purposes, the constant is 1, so our command becomes
```
mad( x = afl.margins, constant = 1 )
```
`## [1] 19.5` Apart from the weirdness of having to type that `constant = 1` part, this is pretty straightforward. Okay, so what exactly is this `constant = 1` argument? I won’t go into all the details here, but here’s the gist. Although the “raw” MAD value that I’ve described above is completely interpretable on its own terms, that’s not actually how it’s used in a lot of real world contexts. Instead, what happens a lot is that the researcher actually wants to calculate the standard deviation. However, in the same way that the mean is very sensitive to extreme values, the standard deviation is vulnerable to the exact same issue. So, in much the same way that people sometimes use the median as a “robust” way of calculating “something that is like the mean”, it’s not uncommon to use MAD as a method for calculating “something that is like the standard deviation”. Unfortunately, the raw MAD value doesn’t do this. Our raw MAD value is 19.5, and our standard deviation was 26.07. However, what some clever person has shown is that, under certain assumptions75, you can multiply the raw MAD value by 1.4826 and obtain a number that is directly comparable to the standard deviation. As a consequence, the default value of `constant` is 1.4826, and so when you use the `mad()` command without manually setting a value, here’s what you get: `mad( afl.margins )` `## [1] 28.9107` I should point out, though, that if you want to use this “corrected” MAD value as a robust version of the standard deviation, you really are relying on the assumption that the data are (or at least, are “supposed to be” in some sense) symmetric and basically shaped like a bell curve. That’s really not true for our `afl.margins` data, so in this case I wouldn’t try to use the MAD value this way.
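If you’d like to see the relationship between the raw and corrected values for yourself, here’s a quick sketch:

```
raw.mad <- mad( afl.margins, constant = 1 )   # the raw MAD value, 19.5
raw.mad * 1.4826                              # same as mad( afl.margins ), i.e. 28.9107
```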
### 5.2.7 Which measure to use?
We’ve discussed quite a few measures of spread (range, IQR, MAD, variance and standard deviation), and hinted at their strengths and weaknesses. Here’s a quick summary:
* Range. Gives you the full spread of the data. It’s very vulnerable to outliers, and as a consequence it isn’t often used unless you have good reasons to care about the extremes in the data.
* Interquartile range. Tells you where the “middle half” of the data sits. It’s pretty robust, and complements the median nicely. This is used a lot.
* Mean absolute deviation. Tells you how far “on average” the observations are from the mean. It’s very interpretable, but has a few minor issues (not discussed here) that make it less attractive to statisticians than the standard deviation. Used sometimes, but not often.
* Variance. Tells you the average squared deviation from the mean. It’s mathematically elegant, and is probably the “right” way to describe variation around the mean, but it’s completely uninterpretable because it doesn’t use the same units as the data. Almost never used except as a mathematical tool; but it’s buried “under the hood” of a very large number of statistical tools.
* Standard deviation. This is the square root of the variance. It’s fairly elegant mathematically, and it’s expressed in the same units as the data so it can be interpreted pretty well. In situations where the mean is the measure of central tendency, this is the default. This is by far the most popular measure of variation.
* Median absolute deviation. The typical (i.e., median) deviation from the median value. In the raw form it’s simple and interpretable; in the corrected form it’s a robust way to estimate the standard deviation, for some kinds of data sets. Not used very often, but it does get reported sometimes.
In short, the IQR and the standard deviation are easily the two most common measures used to report the variability of the data; but there are situations in which the others are used. I’ve described all of them in this book because there’s a fair chance you’ll run into most of these somewhere.
## 5.3 Skew and kurtosis
There are two more descriptive statistics that you will sometimes see reported in the psychological literature, known as skew and kurtosis. In practice, neither one is used anywhere near as frequently as the measures of central tendency and variability that we’ve been talking about. Skew is pretty important, so you do see it mentioned a fair bit; but I’ve actually never seen kurtosis reported in a scientific article to date.
Since it’s the more interesting of the two, let’s start by talking about the skewness. Skewness is basically a measure of asymmetry, and the easiest way to explain it is by drawing some pictures. As Figure 5.4 illustrates, if the data tend to have a lot of extreme small values (i.e., the lower tail is “longer” than the upper tail) and not so many extremely large values (left panel), then we say that the data are negatively skewed. On the other hand, if there are more extremely large values than extremely small ones (right panel) we say that the data are positively skewed. (For the record, the skewness values of the data sets plotted in Figure 5.4 are approximately \(-0.92\), \(-0.01\) and \(0.92\) respectively.) That’s the qualitative idea behind skewness. The actual formula for the skewness of a data set is as follows: \[ \mbox{skewness}(X) = \frac{1}{N \hat{\sigma}^3} \sum_{i=1}^N (X_i - \bar{X})^3 \] where \(N\) is the number of observations, \(\bar{X}\) is the sample mean, and \(\hat{\sigma}\) is the standard deviation (the “divide by \(N-1\)” version, that is). Perhaps more helpfully, it might be useful to point out that the `psych` package contains a `skew()` function that you can use to calculate skewness. So if we wanted to use this function to calculate the skewness of the `afl.margins` data, we’d first need to load the package `library( psych )`
which now makes it possible to use the following command:
```
skew( x = afl.margins )
```
`## [1] 0.7671555`
Not surprisingly, it turns out that the AFL winning margins data is fairly skewed.
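If you prefer to see the formula itself in action, here’s a sketch that applies it directly to the data. Keep in mind that `skew()` offers a few slightly different estimators under the hood, so a hand calculation like this one won’t necessarily match its output to the last decimal place:

```
N <- length( afl.margins )
sum( (afl.margins - mean(afl.margins))^3 ) / ( N * sd(afl.margins)^3 )
```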
The final measure that is sometimes referred to, though very rarely in practice, is the kurtosis of a data set. Put simply, kurtosis is a measure of the “pointiness” of a data set, as illustrated in Figure 5.5.
(The three data sets plotted in Figure 5.5 have kurtosis values of roughly \(-0.95\), \(0.01\) and \(2.07\) respectively.)
By convention, we say that the “normal curve” (black lines) has zero kurtosis, so the pointiness of a data set is assessed relative to this curve. In this Figure, the data on the left are not pointy enough, so the kurtosis is negative and we call the data platykurtic. The data on the right are too pointy, so the kurtosis is positive and we say that the data is leptokurtic. But the data in the middle are just pointy enough, so we say that it is mesokurtic and has kurtosis zero. This is summarised in the table below:
informal term | technical name | kurtosis value |
| --- | --- | --- |
too flat | platykurtic | negative |
just pointy enough | mesokurtic | zero |
too pointy | leptokurtic | positive |
The equation for kurtosis is pretty similar in spirit to the formulas we’ve seen already for the variance and the skewness; except that where the variance involved squared deviations and the skewness involved cubed deviations, the kurtosis involves raising the deviations to the fourth power:76 \[ \mbox{kurtosis}(X) = \frac{1}{N \hat\sigma^4} \sum_{i=1}^N \left( X_i - \bar{X} \right)^4 - 3 \] I know, it’s not terribly interesting to me either. More to the point, the `psych` package has a function called `kurtosi()` that you can use to calculate the kurtosis of your data. For instance, if we were to do this for the AFL margins,
`kurtosi( x = afl.margins )` `## [1] 0.02962633`
we discover that the AFL winning margins data are just pointy enough.
## 5.4 Getting an overall summary of a variable
Up to this point in the chapter I’ve explained several different summary statistics that are commonly used when analysing data, along with specific functions that you can use in R to calculate each one. However, it’s kind of annoying to have to separately calculate means, medians, standard deviations, skews etc. Wouldn’t it be nice if R had some helpful functions that would do all these tedious calculations at once? Something like `summary()` or `describe()` , perhaps? Why yes, yes it would. So much so that both of these functions exist. The `summary()` function is in the `base` package, so it comes with every installation of R. The `describe()` function is part of the `psych` package, which we loaded earlier in the chapter.
### 5.4.1 “Summarising” a variable
The `summary()` function is an easy thing to use, but a tricky thing to understand in full, since it’s a generic function (see Section 4.11). The basic idea behind the `summary()` function is that it prints out some useful information about whatever object (i.e., variable, as far as we’re concerned) you specify as the `object` argument. As a consequence, the behaviour of the `summary()` function differs quite dramatically depending on the class of the object that you give it. Let’s start by giving it a numeric object:
```
summary( object = afl.margins )
```
For numeric variables, we get a whole bunch of useful descriptive statistics. It gives us the minimum and maximum values (i.e., the range), the first and third quartiles (25th and 75th percentiles; i.e., the IQR), the mean and the median. In other words, it gives us a pretty good collection of descriptive statistics related to the central tendency and the spread of the data.
Okay, what about if we feed it a logical vector instead? Let’s say I want to know something about how many “blowouts” there were in the 2010 AFL season. I operationalise the concept of a blowout (see Chapter 2) as a game in which the winning margin exceeds 50 points. Let’s create a logical variable `blowouts` in which the \(i\)-th element is `TRUE` if that game was a blowout according to my definition,
```
blowouts <- afl.margins > 50
blowouts
```
```
## [1] TRUE FALSE TRUE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE
## [12] TRUE FALSE FALSE TRUE FALSE FALSE FALSE FALSE TRUE FALSE FALSE
## [23] FALSE FALSE FALSE FALSE FALSE TRUE FALSE TRUE TRUE FALSE FALSE
## [34] TRUE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [45] FALSE TRUE TRUE FALSE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
## [56] TRUE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE
## [67] TRUE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE
## [78] FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [89] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE
## [100] FALSE TRUE FALSE FALSE FALSE TRUE FALSE TRUE TRUE TRUE FALSE
## [111] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE
## [122] TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
## [133] FALSE TRUE TRUE FALSE FALSE FALSE FALSE FALSE TRUE FALSE TRUE
## [144] TRUE TRUE TRUE FALSE FALSE FALSE TRUE FALSE FALSE TRUE FALSE
## [155] TRUE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE
## [166] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
```
So that’s what the `blowouts` variable looks like. Now let’s ask R for a `summary()`
```
summary( object = blowouts )
```
```
## Mode FALSE TRUE
## logical 132 44
```
In this context, the `summary()` function gives us a count of the number of `TRUE` values, the number of `FALSE` values, and the number of missing values (i.e., the `NA` s). Pretty reasonable behaviour. Next, let’s try to give it a factor. If you recall, I’ve defined the `afl.finalists` vector as a factor, so let’s use that:
```
summary( object = afl.finalists )
```
For factors, we get a frequency table, just like we got when we used the `table()` function. Interestingly, however, if we convert this to a character vector using the `as.character()` function (see Section 7.10), we don’t get the same results:
```
f2 <- as.character( afl.finalists )
summary( object = f2 )
```
```
## Length Class Mode
## 400 character character
```
This is one of those situations I was referring to in Section 4.7, in which it is helpful to declare your nominal scale variable as a factor rather than a character vector. Because I’ve defined `afl.finalists` as a factor, R knows that it should treat it as a nominal scale variable, and so it gives you a much more detailed (and helpful) summary than it would have if I’d left it as a character vector.
### 5.4.2 “Summarising” a data frame
Okay what about data frames? When you pass a data frame to the `summary()` function, it produces a slightly condensed summary of each variable inside the data frame. To give you a sense of how this can be useful, let’s try this for a new data set, one that you’ve never seen before. The data is stored in the `clinicaltrial.Rdata` file, and we’ll use it a lot in Chapter 14 (you can find a complete description of the data at the start of that chapter). Let’s load it, and see what we’ve got:
```
load( "./data/clinicaltrial.Rdata" )
who(TRUE)
```
```
## -- Name -- -- Class -- -- Size --
## clin.trial data.frame 18 x 3
## $drug factor 18
## $therapy factor 18
## $mood.gain numeric 18
```
There’s a single data frame called `clin.trial` which contains three variables, `drug` , `therapy` and `mood.gain` . Presumably then, this data is from a clinical trial of some kind, in which people were administered different drugs; and the researchers looked to see what the drugs did to their mood. Let’s see if the `summary()` function sheds a little more light on this situation:
```
summary( clin.trial )
```
```
## drug therapy mood.gain
## placebo :6 no.therapy:9 Min. :0.1000
## anxifree:6 CBT :9 1st Qu.:0.4250
## joyzepam:6 Median :0.8500
## Mean :0.8833
## 3rd Qu.:1.3000
## Max. :1.8000
```
Evidently there were three drugs: a placebo, something called “anxifree” and something called “joyzepam”; and there were 6 people administered each drug. There were 9 people treated using cognitive behavioural therapy (CBT) and 9 people who received no psychological treatment. And we can see from looking at the summary of the `mood.gain` variable that most people did show a mood gain (mean \(=.88\)), though without knowing what the scale is here it’s hard to say much more than that. Still, that’s not too bad. Overall, I feel that I learned something from that.
### 5.4.3 “Describing” a data frame
The `describe()` function (in the `psych` package) is a little different, and it’s really only intended to be useful when your data are interval or ratio scale. Unlike the `summary()` function, it calculates the same descriptive statistics for any type of variable you give it. By default, these are:
* `var`. This is just an index: 1 for the first variable, 2 for the second variable, and so on.
* `n`. This is the sample size: more precisely, it’s the number of non-missing values.
* `mean`. This is the sample mean (Section 5.1.1).
* `sd`. This is the (bias corrected) standard deviation (Section 5.2.5).
* `median`. The median (Section 5.1.3).
* `trimmed`. This is the trimmed mean. By default it’s the 10% trimmed mean (Section 5.1.6).
* `mad`. The median absolute deviation (Section 5.2.6).
* `min`. The minimum value.
* `max`. The maximum value.
* `range`. The range spanned by the data (Section 5.2.1).
* `skew`. The skewness (Section 5.3).
* `kurtosis`. The kurtosis (Section 5.3).
* `se`. The standard error of the mean (Chapter 10).

Notice that these descriptive statistics generally only make sense for data that are interval or ratio scale (usually encoded as numeric vectors). For nominal or ordinal variables (usually encoded as factors), most of these descriptive statistics are not all that useful. What the `describe()` function does is convert factors and logical variables to numeric vectors in order to do the calculations. These variables are marked with `*` and most of the time, the descriptive statistics for those variables won’t make much sense. If you try to feed it a data frame that includes a character vector as a variable, it produces an error. With those caveats in mind, let’s use the `describe()` function to have a look at the `clin.trial` data frame. Here’s what we get:
```
describe( x = clin.trial )
```
```
## vars n mean sd median trimmed mad min max range skew
## drug* 1 18 2.00 0.84 2.00 2.00 1.48 1.0 3.0 2.0 0.00
## therapy* 2 18 1.50 0.51 1.50 1.50 0.74 1.0 2.0 1.0 0.00
## mood.gain 3 18 0.88 0.53 0.85 0.88 0.67 0.1 1.8 1.7 0.13
## kurtosis se
## drug* -1.66 0.20
## therapy* -2.11 0.12
## mood.gain -1.44 0.13
```
As you can see, the output for the asterisked variables is pretty meaningless, and should be ignored. However, for the `mood.gain` variable, there’s a lot of useful information.
## 5.5 Descriptive statistics separately for each group
It is very commonly the case that you find yourself needing to look at descriptive statistics, broken down by some grouping variable. This is pretty easy to do in R, and there are three functions in particular that are worth knowing about: `by()` , `describeBy()` and `aggregate()` . Let’s start with the `describeBy()` function, which is part of the `psych` package. The `describeBy()` function is very similar to the `describe()` function, except that it has an additional argument called `group` which specifies a grouping variable. For instance, let’s say I want to look at the descriptive statistics for the `clin.trial` data, broken down separately by `therapy` type. The command I would use here is:
```
describeBy( x=clin.trial, group=clin.trial$therapy )
```
As you can see, the output is essentially identical to the output that the `describe()` function produces, except that the output now gives you means, standard deviations etc separately for the `CBT` group and the `no.therapy` group. Notice that, as before, the output displays asterisks for factor variables, in order to draw your attention to the fact that the descriptive statistics that it has calculated won’t be very meaningful for those variables. Nevertheless, this command has given us some really useful descriptive statistics for the `mood.gain` variable, broken down as a function of `therapy` . A somewhat more general solution is offered by the `by()` function. There are three arguments that you need to specify when using this function: the `data` argument specifies the data set, the `INDICES` argument specifies the grouping variable, and the `FUN` argument specifies the name of a function that you want to apply separately to each group. To give a sense of how powerful this is, you can reproduce the `describeBy()` function by using a command like this:
```
by( data=clin.trial, INDICES=clin.trial$therapy, FUN=describe )
```
This will produce the exact same output as the command shown earlier. However, there’s nothing special about the `describe()` function. You could just as easily use the `by()` function in conjunction with the `summary()` function. For example:
```
by( data=clin.trial, INDICES=clin.trial$therapy, FUN=summary )
```
```
## clin.trial$therapy: no.therapy
## drug therapy mood.gain
## placebo :3 no.therapy:9 Min. :0.1000
## anxifree:3 CBT :0 1st Qu.:0.3000
## joyzepam:3 Median :0.5000
## Mean :0.7222
## 3rd Qu.:1.3000
## Max. :1.7000
## --------------------------------------------------------
## clin.trial$therapy: CBT
## drug therapy mood.gain
## placebo :3 no.therapy:0 Min. :0.300
## anxifree:3 CBT :9 1st Qu.:0.800
## joyzepam:3 Median :1.100
## Mean :1.044
## 3rd Qu.:1.300
## Max. :1.800
```
Again, this output is pretty easy to interpret. It’s the output of the `summary()` function, applied separately to `CBT` group and the `no.therapy` group. For the two factors ( `drug` and `therapy` ) it prints out a frequency table, whereas for the numeric variable ( `mood.gain` ) it prints out the range, interquartile range, mean and median. What if you have multiple grouping variables? Suppose, for example, you would like to look at the average mood gain separately for all possible combinations of drug and therapy. It is actually possible to do this using the `by()` and `describeBy()` functions, but I usually find it more convenient to use the `aggregate()` function in this situation. There are again three arguments that you need to specify. The `formula` argument is used to indicate which variable you want to analyse, and which variables are used to specify the groups. For instance, if you want to look at `mood.gain` separately for each possible combination of `drug` and `therapy` , the formula you want is
`mood.gain ~ drug + therapy` . The `data` argument is used to specify the data frame containing all the data, and the `FUN` argument is used to indicate what function you want to calculate for each group (e.g., the `mean` ). So, to obtain group means, use this command:
```
aggregate( formula = mood.gain ~ drug + therapy, # mood.gain by drug/therapy combination
data = clin.trial, # data is in the clin.trial data frame
FUN = mean # print out group means
)
```
or, alternatively, if you want to calculate the standard deviations for each group, you would use the following command (argument names omitted this time):
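Something along these lines will do the trick; note that the formula, the data frame and the function are being passed positionally rather than by name:

```
aggregate( mood.gain ~ drug + therapy, clin.trial, sd )
```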
```
## drug therapy mood.gain
## 1 placebo no.therapy 0.2000000
## 2 anxifree no.therapy 0.2000000
## 3 joyzepam no.therapy 0.2081666
## 4 placebo CBT 0.3000000
## 5 anxifree CBT 0.2081666
## 6 joyzepam CBT 0.2645751
```
## 5.6 Standard scores
Suppose my friend is putting together a new questionnaire intended to measure “grumpiness”. The survey has 50 questions, which you can answer in a grumpy way or not. Across a big sample (hypothetically, let’s imagine a million people or so!) the data are fairly normally distributed, with the mean grumpiness score being 17 out of 50 questions answered in a grumpy way, and a standard deviation of 5. In contrast, when I take the questionnaire, I answer 35 out of 50 questions in a grumpy way. So, how grumpy am I? One way to think about it would be to say that I have grumpiness of 35/50, so you might say that I’m 70% grumpy. But that’s a bit weird, when you think about it. If my friend had phrased her questions a bit differently, people might have answered them in a different way, so the overall distribution of answers could easily move up or down depending on the precise way in which the questions were asked. So, I’m only 70% grumpy with respect to this set of survey questions. Even if it’s a very good questionnaire, this isn’t a very informative statement.
A simpler way around this is to describe my grumpiness by comparing me to other people. Shockingly, out of my friend’s sample of 1,000,000 people, only 159 people were as grumpy as me (that’s not at all unrealistic, frankly), suggesting that I’m in the top 0.016% of people for grumpiness. This makes much more sense than trying to interpret the raw data. This idea – that we should describe my grumpiness in terms of the overall distribution of the grumpiness of humans – is the qualitative idea that standardisation attempts to get at. One way to do this is to do exactly what I just did, and describe everything in terms of percentiles. However, the problem with doing this is that “it’s lonely at the top”. Suppose that my friend had only collected a sample of 1000 people (still a pretty big sample for the purposes of testing a new questionnaire, I’d like to add), and this time gotten a mean of 16 out of 50 with a standard deviation of 5, let’s say. The problem is that almost certainly, not a single person in that sample would be as grumpy as me.
However, all is not lost. A different approach is to convert my grumpiness score into a standard score, also referred to as a \(z\)-score. The standard score is defined as the number of standard deviations above the mean that my grumpiness score lies. To phrase it in “pseudo-maths” the standard score is calculated like this: \[ \mbox{standard score} = \frac{\mbox{raw score} - \mbox{mean}}{\mbox{standard deviation}} \] In actual maths, the equation for the \(z\)-score is \[ z_i = \frac{X_i - \bar{X}}{\hat\sigma} \] So, going back to the grumpiness data, we can now transform Dan’s raw grumpiness into a standardised grumpiness score.77 If the mean is 17 and the standard deviation is 5 then my standardised grumpiness score would be78 \[ z = \frac{35 - 17}{5} = 3.6 \] To interpret this value, recall the rough heuristic that I provided in Section 5.2.5, in which I noted that 99.7% of values are expected to lie within 3 standard deviations of the mean. So the fact that my grumpiness corresponds to a \(z\) score of 3.6 indicates that I’m very grumpy indeed. Later on, in Section 9.5, I’ll introduce a function called `pnorm()` that allows us to be a bit more precise than this. Specifically, it allows us to calculate a theoretical percentile rank for my grumpiness, as follows: `pnorm( 3.6 )` `## [1] 0.9998409`
At this stage, this command doesn’t make too much sense, but don’t worry too much about it. It’s not important for now. But the output is fairly straightforward: it suggests that I’m grumpier than 99.98% of people. Sounds about right.
In addition to allowing you to interpret a raw score in relation to a larger population (and thereby allowing you to make sense of variables that lie on arbitrary scales), standard scores serve a second useful function. Standard scores can be compared to one another in situations where the raw scores can’t. Suppose, for instance, my friend also had another questionnaire that measured extraversion using a 24-item questionnaire. The overall mean for this measure turns out to be 13 with standard deviation 4; and I scored a 2. As you can imagine, it doesn’t make a lot of sense to try to compare my raw score of 2 on the extraversion questionnaire to my raw score of 35 on the grumpiness questionnaire. The raw scores for the two variables are “about” fundamentally different things, so this would be like comparing apples to oranges.
What about the standard scores? Well, this is a little different. If we calculate the standard scores, we get \(z = (35-17)/5 = 3.6\) for grumpiness and \(z = (2-13)/4 = -2.75\) for extraversion. These two numbers can be compared to each other.79 I’m much less extraverted than most people (\(z = -2.75\)) and much grumpier than most people (\(z = 3.6\)): but the extent of my unusualness is much more extreme for grumpiness (since 3.6 is a bigger number than 2.75). Because each standardised score is a statement about where an observation falls relative to its own population, it is possible to compare standardised scores across completely different variables.
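In R, standardisation like this is just arithmetic, though the built-in `scale()` function will standardise a whole vector at once, using that vector’s own mean and standard deviation (the \(N-1\) version). A quick sketch, reproducing the two \(z\)-scores above and then standardising a few made-up raw scores:

```
( 35 - 17 ) / 5              # Dan's grumpiness as a z-score: 3.6
( 2 - 13 ) / 4               # Dan's extraversion as a z-score: -2.75
scale( c(22, 35, 12, 17) )   # hypothetical raw scores, standardised against their own mean and sd
```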
## 5.7 Correlations
Up to this point we have focused entirely on how to construct descriptive statistics for a single variable. What we haven’t done is talked about how to describe the relationships between variables in the data. To do that, we want to talk mostly about the correlation between variables. But first, we need some data.
### 5.7.1 The data
After spending so much time looking at the AFL data, I’m starting to get bored with sports. Instead, let’s turn to a topic close to every parent’s heart: sleep. The following data set is fictitious, but based on real events. Suppose I’m curious to find out how much my infant son’s sleeping habits affect my mood. Let’s say that I can rate my grumpiness very precisely, on a scale from 0 (not at all grumpy) to 100 (grumpy as a very, very grumpy old man). And, let’s also assume that I’ve been measuring my grumpiness, my sleeping patterns and my son’s sleeping patterns for quite some time now. Let’s say, for 100 days. And, being a nerd, I’ve saved the data as a file called `parenthood.Rdata` . If we load the data…
```
load( "./data/parenthood.Rdata" )
who(TRUE)
```
```
## -- Name -- -- Class -- -- Size --
## parenthood data.frame 100 x 4
## $dan.sleep numeric 100
## $baby.sleep numeric 100
## $dan.grump numeric 100
## $day integer 100
```
… we see that the file contains a single data frame called `parenthood` , which contains four variables `dan.sleep` , `baby.sleep` , `dan.grump` and `day` . If we peek at the data using `head()` , here’s what we get: `head(parenthood,10)`
```
## dan.sleep baby.sleep dan.grump day
## 1 7.59 10.18 56 1
## 2 7.91 11.66 60 2
## 3 5.14 7.92 82 3
## 4 7.71 9.61 55 4
## 5 6.68 9.75 67 5
## 6 5.99 5.04 72 6
## 7 8.19 10.45 53 7
## 8 7.19 8.27 60 8
## 9 7.40 6.06 60 9
## 10 6.58 7.09 71 10
```
Next, I’ll calculate some basic descriptive statistics:
```
describe( parenthood )
```
```
## vars n mean sd median trimmed mad min max range
## dan.sleep 1 100 6.97 1.02 7.03 7.00 1.09 4.84 9.00 4.16
## baby.sleep 2 100 8.05 2.07 7.95 8.05 2.33 3.25 12.07 8.82
## dan.grump 3 100 63.71 10.05 62.00 63.16 9.64 41.00 91.00 50.00
## day 4 100 50.50 29.01 50.50 50.50 37.06 1.00 100.00 99.00
## skew kurtosis se
## dan.sleep -0.29 -0.72 0.10
## baby.sleep -0.02 -0.69 0.21
## dan.grump 0.43 -0.16 1.00
## day 0.00 -1.24 2.90
```
Finally, to give a graphical depiction of what each of the three interesting variables looks like, Figure 5.6 plots histograms.
One thing to note: just because R can calculate dozens of different statistics doesn’t mean you should report all of them. If I were writing this up for a report, I’d probably pick out those statistics that are of most interest to me (and to my readership), and then put them into a nice, simple table like the one below.80 Notice that when I put it into a table, I gave everything “human readable” names. This is always good practice. Notice also that I’m not getting enough sleep. This isn’t good practice, but other parents tell me that it’s standard practice.
variable | min | max | mean | median | std. dev | IQR |
| --- | --- | --- | --- | --- | --- | --- |
Dan’s grumpiness | 41 | 91 | 63.71 | 62 | 10.05 | 14 |
Dan’s hours slept | 4.84 | 9 | 6.97 | 7.03 | 1.02 | 1.45 |
Dan’s son’s hours slept | 3.25 | 12.07 | 8.05 | 7.95 | 2.07 | 3.21 |
### 5.7.2 The strength and direction of a relationship
We can draw scatterplots to give us a general sense of how closely related two variables are. Ideally though, we might want to say a bit more about it than that. For instance, let’s compare the relationship between `dan.sleep` and `dan.grump` (Figure 5.7) with that between `baby.sleep` and `dan.grump` (Figure 5.8). When looking at these two plots side by side, it’s clear that the relationship is qualitatively the same in both cases: more sleep equals less grump! However, it’s also pretty obvious that the relationship between `dan.sleep` and `dan.grump` is stronger than the relationship between `baby.sleep` and `dan.grump` . The plot on the left is “neater” than the one on the right. What it feels like is that if you want to predict what my mood is, it’d help you a little bit to know how many hours my son slept, but it’d be more helpful to know how many hours I slept. In contrast, let’s consider Figure 5.8 vs. Figure 5.9. If we compare the scatterplot of “ `baby.sleep` v `dan.grump` ” to the scatterplot of “ `baby.sleep` v `dan.sleep` ”, the overall strength of the relationship is the same, but the direction is different. That is, if my son sleeps more, I get more sleep (positive relationship), but if he sleeps more then I get less grumpy (negative relationship).
### 5.7.3 The correlation coefficient
We can make these ideas a bit more explicit by introducing the idea of a correlation coefficient (or, more specifically, Pearson’s correlation coefficient), which is traditionally denoted by \(r\). The correlation coefficient between two variables \(X\) and \(Y\) (sometimes denoted \(r_{XY}\)), which we’ll define more precisely in the next section, is a measure that varies from \(-1\) to \(1\). When \(r = -1\) it means that we have a perfect negative relationship, and when \(r = 1\) it means we have a perfect positive relationship. When \(r = 0\), there’s no relationship at all. If you look at Figure 5.10, you can see several plots showing what different correlations look like.
The formula for the Pearson’s correlation coefficient can be written in several different ways. I think the simplest way to write down the formula is to break it into two steps. Firstly, let’s introduce the idea of a covariance. The covariance between two variables \(X\) and \(Y\) is a generalisation of the notion of the variance; it’s a mathematically simple way of describing the relationship between two variables that isn’t terribly informative to humans: \[ \mbox{Cov}(X,Y) = \frac{1}{N-1} \sum_{i=1}^N \left( X_i - \bar{X} \right) \left( Y_i - \bar{Y} \right) \] Because we’re multiplying (i.e., taking the “product” of) a quantity that depends on \(X\) by a quantity that depends on \(Y\) and then averaging81, you can think of the formula for the covariance as an “average cross product” between \(X\) and \(Y\). The covariance has the nice property that, if \(X\) and \(Y\) are entirely unrelated, then the covariance is exactly zero. If the relationship between them is positive (in the sense shown in Figure 5.10) then the covariance is also positive; and if the relationship is negative then the covariance is also negative. In other words, the covariance captures the basic qualitative idea of correlation. Unfortunately, the raw magnitude of the covariance isn’t easy to interpret: it depends on the units in which \(X\) and \(Y\) are expressed, and worse yet, the actual units that the covariance itself is expressed in are really weird. For instance, if \(X\) refers to the `dan.sleep` variable (units: hours) and \(Y\) refers to the `dan.grump` variable (units: grumps), then the units for their covariance are “hours \(\times\) grumps”. And I have no freaking idea what that would even mean.
The Pearson correlation coefficient \(r\) fixes this interpretation problem by standardising the covariance, in pretty much the exact same way that the \(z\)-score standardises a raw score: by dividing by the standard deviation. However, because we have two variables that contribute to the covariance, the standardisation only works if we divide by both standard deviations.82 In other words, the correlation between \(X\) and \(Y\) can be written as follows: \[ r_{XY} = \frac{\mbox{Cov}(X,Y)}{ \hat{\sigma}_X \ \hat{\sigma}_Y} \] By doing this standardisation, not only do we keep all of the nice properties of the covariance discussed earlier, but the actual values of \(r\) are on a meaningful scale: \(r= 1\) implies a perfect positive relationship, and \(r = -1\) implies a perfect negative relationship. I’ll expand a little more on this point later, in Section 5.7.5. But before I do, let’s look at how to calculate correlations in R.
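Before we do, here’s a quick sketch of that standardisation at work, using R’s built-in `cov()` and `sd()` functions on the `parenthood` data. Both functions use the \(N-1\) denominator, so dividing the covariance by the two standard deviations gives exactly the same number that `cor()` will give us in the next section:

```
# covariance between my sleep and my grumpiness, in "hours x grumps"
cov( parenthood$dan.sleep, parenthood$dan.grump )

# standardise it by both standard deviations: this is Pearson's r
cov( parenthood$dan.sleep, parenthood$dan.grump ) /
  ( sd( parenthood$dan.sleep ) * sd( parenthood$dan.grump ) )
```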
### 5.7.4 Calculating correlations in R
Calculating correlations in R can be done using the `cor()` command. The simplest way to use the command is to specify two input arguments `x` and `y` , each one corresponding to one of the variables. The following extract illustrates the basic usage of the function:83
`cor( x = parenthood$dan.sleep, y = parenthood$dan.grump )` `## [1] -0.903384` However, the `cor()` function is a bit more powerful than this simple example suggests. For example, you can also calculate a complete “correlation matrix”, between all pairs of variables in the data frame:84
```
# correlate all pairs of variables in "parenthood":
cor( x = parenthood )
```
### 5.7.5 Interpreting a correlation
Naturally, in real life you don’t see many correlations of 1. So how should you interpret a correlation of, say \(r= .4\)? The honest answer is that it really depends on what you want to use the data for, and on how strong the correlations in your field tend to be. A friend of mine in engineering once argued that any correlation less than \(.95\) is completely useless (I think he was exaggerating, even for engineering). On the other hand there are real cases – even in psychology – where you should really expect correlations that strong. For instance, one of the benchmark data sets used to test theories of how people judge similarities is so clean that any theory that can’t achieve a correlation of at least \(.9\) really isn’t deemed to be successful. However, when looking for (say) elementary correlates of intelligence (e.g., inspection time, response time), if you get a correlation above \(.3\) you’re doing very very well. In short, the interpretation of a correlation depends a lot on the context. That said, the rough guide in the table below is pretty typical.
Correlation | Strength | Direction |
| --- | --- | --- |
-1.0 to -0.9 | Very strong | Negative |
-0.9 to -0.7 | Strong | Negative |
-0.7 to -0.4 | Moderate | Negative |
-0.4 to -0.2 | Weak | Negative |
-0.2 to 0 | Negligible | Negative |
0 to 0.2 | Negligible | Positive |
0.2 to 0.4 | Weak | Positive |
0.4 to 0.7 | Moderate | Positive |
0.7 to 0.9 | Strong | Positive |
0.9 to 1.0 | Very strong | Positive |
However, something that can never be stressed enough is that you should always look at the scatterplot before attaching any interpretation to the data. A correlation might not mean what you think it means. The classic illustration of this is “Anscombe’s Quartet” (Anscombe, 1973), which is a collection of four data sets. Each data set has two variables, an \(X\) and a \(Y\). For all four data sets the mean value for \(X\) is 9 and the mean for \(Y\) is 7.5. The standard deviations for all the \(X\) variables are almost identical, as are those for the \(Y\) variables. And in each case the correlation between \(X\) and \(Y\) is \(r = 0.816\). You can verify this yourself, since the dataset comes distributed with R. The commands would be:
```
cor( anscombe$x1, anscombe$y1 )
```
`## [1] 0.8164205`
```
cor( anscombe$x2, anscombe$y2 )
```
`## [1] 0.8162365`
and so on.
You’d think that these four data sets would look pretty similar to one another. They do not. If we draw scatterplots of \(X\) against \(Y\) for all four variables, as shown in Figure 5.11, we see that all four of these are spectacularly different to each other.
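If you want to draw the scatterplots yourself, a sketch like the following will do it. This is just one simple way of arranging the four plots, not necessarily how Figure 5.11 itself was produced:

```
op <- par( mfrow = c(2, 2) )                          # 2 x 2 grid of plots
plot( anscombe$x1, anscombe$y1, main = "data set 1" )
plot( anscombe$x2, anscombe$y2, main = "data set 2" )
plot( anscombe$x3, anscombe$y3, main = "data set 3" )
plot( anscombe$x4, anscombe$y4, main = "data set 4" )
par( op )                                             # restore the previous plot settings
```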
The lesson here, which so very many people seem to forget in real life is “always graph your raw data”. This will be the focus of Chapter 6.
### 5.7.6 Spearman’s rank correlations
The Pearson correlation coefficient is useful for a lot of things, but it does have shortcomings. One issue in particular stands out: what it actually measures is the strength of the linear relationship between two variables. In other words, what it gives you is a measure of the extent to which the data all tend to fall on a single, perfectly straight line. Often, this is a pretty good approximation to what we mean when we say “relationship”, and so the Pearson correlation is a good thing to calculate. Sometimes, it isn’t.
One very common situation where the Pearson correlation isn’t quite the right thing to use arises when an increase in one variable \(X\) really is reflected in an increase in another variable \(Y\), but the nature of the relationship isn’t necessarily linear. An example of this might be the relationship between effort and reward when studying for an exam. If you put zero effort (\(X\)) into learning a subject, then you should expect a grade of 0% (\(Y\)). However, a little bit of effort will cause a massive improvement: just turning up to lectures and scribbling a few things down means that you learn a fair bit, and your grade might rise to 35% without a lot of effort. However, you just don’t get the same effect at the other end of the scale. As everyone knows, it takes a lot more effort to get a grade of 90% than it takes to get a grade of 55%. What this means is that, if I’ve got data looking at study effort and grades, there’s a pretty good chance that Pearson correlations will be misleading.
To illustrate, consider the data plotted in Figure 5.12, showing the relationship between hours worked and grade received for 10 students taking some class. The curious thing about this – highly fictitious – data set is that increasing your effort always increases your grade. It might be by a lot or it might be by a little, but increasing effort will never decrease your grade. The data are stored in `effort.Rdata` :
```
> load( "effort.Rdata" )
> who(TRUE)
-- Name -- -- Class -- -- Size --
effort data.frame 10 x 2
$hours numeric 10
$grade numeric 10
```
The raw data look like this:
```
> effort
hours grade
1 2 13
2 76 91
3 40 79
4 6 14
5 16 21
6 28 74
7 27 47
8 59 85
9 46 84
10 68 88
```
If we run a standard Pearson correlation, it shows a strong relationship between hours worked and grade received,
```
> cor( effort$hours, effort$grade )
[1] 0.909402
```
but this doesn’t actually capture the observation that increasing hours worked always increases the grade. There’s a sense here in which we want to be able to say that the correlation is perfect but for a somewhat different notion of what a “relationship” is. What we’re looking for is something that captures the fact that there is a perfect ordinal relationship here. That is, if student 1 works more hours than student 2, then we can guarantee that student 1 will get the better grade. That’s not what a correlation of \(r = .91\) says at all.
How should we address this? Actually, it’s really easy: if we’re looking for ordinal relationships, all we have to do is treat the data as if it were ordinal scale! So, instead of measuring effort in terms of “hours worked”, let’s rank all 10 of our students in order of hours worked. That is, student 1 did the least work out of anyone (2 hours) so they get the lowest rank (rank = 1). Student 4 was the next laziest, putting in only 6 hours of work over the whole semester, so they get the next lowest rank (rank = 2). Notice that I’m using “rank = 1” to mean “low rank”. Sometimes in everyday language we talk about “rank = 1” to mean “top rank” rather than “bottom rank”. So be careful: you can rank “from smallest value to largest value” (i.e., small equals rank 1) or you can rank “from largest value to smallest value” (i.e., large equals rank 1). In this case, I’m ranking from smallest to largest, because that’s the default way that R does it. But in real life, it’s really easy to forget which way you set things up, so you have to put a bit of effort into remembering!
Okay, so let’s have a look at our students when we rank them from worst to best in terms of effort and reward:
student | rank (hours worked) | rank (grade received) |
| --- | --- | --- |
student 1 | 1 | 1 |
student 2 | 10 | 10 |
student 3 | 6 | 6 |
student 4 | 2 | 2 |
student 5 | 3 | 3 |
student 6 | 5 | 5 |
student 7 | 4 | 4 |
student 8 | 8 | 8 |
student 9 | 7 | 7 |
student 10 | 9 | 9 |
Hm. These are identical. The student who put in the most effort got the best grade, the student with the least effort got the worst grade, etc. We can get R to construct these rankings using the `rank()` function, like this:
```
> hours.rank <- rank( effort$hours ) # rank students by hours worked
> grade.rank <- rank( effort$grade ) # rank students by grade received
```
As the table above shows, these two rankings are identical, so if we now correlate them we get a perfect relationship:
```
> cor( hours.rank, grade.rank )
[1] 1
```
What we’ve just re-invented is Spearman’s rank order correlation, usually denoted \(\rho\) to distinguish it from the Pearson correlation \(r\). We can calculate Spearman’s \(\rho\) using R in two different ways. Firstly we could do it the way I just showed, using the `rank()` function to construct the rankings, and then calculate the Pearson correlation on these ranks. However, that’s way too much effort to do every time. It’s much easier to just specify the `method` argument of the `cor()` function.
```
> cor( effort$hours, effort$grade, method = "spearman")
[1] 1
```
The default value of the `method` argument is `"pearson"` , which is why we didn’t have to specify it earlier on when we were doing Pearson correlations.
### 5.7.7 The `correlate()` function

As we’ve seen, the `cor()` function works pretty well, and handles many of the situations that you might be interested in. One thing that many beginners find frustrating, however, is the fact that it’s not built to handle non-numeric variables. From a statistical perspective, this is perfectly sensible: Pearson and Spearman correlations are only designed to work for numeric variables, so the `cor()` function spits out an error. Here’s what I mean. Suppose you were keeping track of how many `hours` you worked in any given day, and counted how many `tasks` you completed. If you were doing the tasks for money, you might also want to keep track of how much `pay` you got for each job. It would also be sensible to keep track of the `weekday` on which you actually did the work: most of us don’t work as much on Saturdays or Sundays. If you did this for 7 weeks, you might end up with a data set that looks like this one:
```
> load("work.Rdata")
> who(TRUE)
-- Name -- -- Class -- -- Size --
work data.frame 49 x 7
$hours numeric 49
$tasks numeric 49
$pay numeric 49
$day integer 49
$weekday factor 49
$week numeric 49
$day.type factor 49
> head(work)
hours tasks pay day weekday week day.type
1 7.2 14 41 1 Tuesday 1 weekday
2 7.4 11 39 2 Wednesday 1 weekday
3 6.6 14 13 3 Thursday 1 weekday
4 6.5 22 47 4 Friday 1 weekday
5 3.1 5 4 5 Saturday 1 weekend
6 3.0 7 12 6 Sunday 1 weekend
```
Obviously, I’d like to know something about how all these variables correlate with one another. I could correlate `hours` with `pay` quite easily using `cor()` , like so:
```
> cor(work$hours,work$pay)
[1] 0.7604283
```
But what if I wanted a quick and easy way to calculate all pairwise correlations between the numeric variables? I can’t just input the `work` data frame, because it contains two factor variables, `weekday` and `day.type` . If I try this, I get an error:
```
> cor(work)
Error in cor(work) : 'x' must be numeric
```
In order to get the correlations that I want using the `cor()` function, what I would have to do is create a new data frame that doesn’t contain the factor variables, and then feed that new data frame into the `cor()` function. It’s not actually very hard to do that, and I’ll talk about how to do it properly in Chapter 7. But it would be nice to have some function that is smart enough to just ignore the factor variables. That’s where the `correlate()` function in the `lsr` package can be handy. If you feed it a data frame that contains factors, it knows to ignore them, and returns the pairwise correlations only between the numeric variables:
```
> correlate(work)
hours tasks pay day weekday week day.type
hours . 0.800 0.760 -0.049 . 0.018 .
tasks 0.800 . 0.720 -0.072 . -0.013 .
pay 0.760 0.720 . 0.137 . 0.196 .
day -0.049 -0.072 0.137 . . 0.990 .
weekday . . . . . . .
week 0.018 -0.013 0.196 0.990 . . .
day.type . . . . . . .
```
The output here shows a `.` whenever one of the variables is non-numeric. It also shows a `.` whenever a variable is correlated with itself (it’s not a meaningful thing to do). The `correlate()` function can also do Spearman correlations, by specifying the `corr.method` to use:
```
> correlate( work, corr.method="spearman" )
hours tasks pay day weekday week day.type
hours . 0.805 0.745 -0.047 . 0.010 .
tasks 0.805 . 0.730 -0.068 . -0.008 .
pay 0.745 0.730 . 0.094 . 0.154 .
day -0.047 -0.068 0.094 . . 0.990 .
weekday . . . . . . .
week 0.010 -0.008 0.154 0.990 . . .
day.type . . . . . . .
```
Obviously, there’s no new functionality in the `correlate()` function, and any advanced R user would be perfectly capable of using the `cor()` function to get these numbers out. But if you’re not yet comfortable with extracting a subset of a data frame, the `correlate()` function is for you.
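That said, if you are curious what the `cor()`-only route would look like, here’s a rough sketch. It uses a subsetting trick (`sapply()` plus `is.numeric()`) that I haven’t explained yet, and the `work.numeric` name is just one I’ve made up for the example:
```
> work.numeric <- work[ , sapply( work, is.numeric ) ]   # keep only the numeric columns
> cor( work.numeric )                                    # now cor() has nothing to complain about
```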
## 5.8 Handling missing values
There’s one last topic that I want to discuss briefly in this chapter, and that’s the issue of missing data. Real data sets very frequently turn out to have missing values: perhaps someone forgot to fill in a particular survey question, for instance. Missing data can be the source of a lot of tricky issues, most of which I’m going to gloss over. However, at a minimum, you need to understand the basics of handling missing data in R.
### 5.8.1 The single variable case
Let’s start with the simplest case, in which you’re trying to calculate descriptive statistics for a single variable which has missing data. In R, this means that there will be `NA` values in your data vector. Let’s create a variable like that:
```
> partial <- c(10, 20, NA, 30)
```
Let’s assume that you want to calculate the mean of this variable. By default, R assumes that you want to calculate the mean using all four elements of this vector, which is probably the safest thing for a dumb automaton to do, but it’s rarely what you actually want. Why not? Well, remember that the basic interpretation of `NA` is “I don’t know what this number is”. This means that `1 + NA = NA` : if I add 1 to some number that I don’t know (i.e., the `NA` ) then the answer is also a number that I don’t know. As a consequence, if you don’t explicitly tell R to ignore the `NA` values, and the data set does have missing values, then the output will itself be a missing value. If I try to calculate the mean of the `partial` vector, without doing anything about the missing value, here’s what happens:
```
> mean( x = partial )
[1] NA
```
Technically correct, but deeply unhelpful.
To fix this, all of the descriptive statistics functions that I’ve discussed in this chapter (with the exception of `cor()`, which is a special case I’ll discuss below) have an optional argument called `na.rm`, which is shorthand for “remove NA values”. It’s a logical value indicating whether R should ignore (or “remove”) the missing data for the purposes of doing the calculations. By default, `na.rm = FALSE`, so R does nothing about the missing data problem. Let’s try setting `na.rm = TRUE` and see what happens:
```
> mean( x = partial, na.rm = TRUE )
[1] 20
```
Notice that the mean is `20` (i.e., `60 / 3` ) and not `15` . When R ignores a `NA` value, it genuinely ignores it. In effect, the calculation above is identical to what you’d get if you asked for the mean of the three-element vector `c(10, 20, 30)` . As indicated above, this isn’t unique to the `mean()` function. Pretty much all of the other functions that I’ve talked about in this chapter have an `na.rm` argument that indicates whether it should ignore missing values. However, its behaviour is the same for all these functions, so I won’t waste everyone’s time by demonstrating it separately for each one.
### 5.8.2 Missing values in pairwise calculations
I mentioned earlier that the `cor()` function is a special case. It doesn’t have an `na.rm` argument, because the story becomes a lot more complicated when more than one variable is involved. What it does have is an argument called `use` which does roughly the same thing, but you need to think a little more carefully about what you want this time. To illustrate the issues, let’s open up a data set that has missing values, `parenthood2.Rdata`. This file contains the same data as the original parenthood data, but with some values deleted. It contains a single data frame, `parenthood2`:
```
> load( "parenthood2.Rdata" )
> print( parenthood2 )
dan.sleep baby.sleep dan.grump day
1 7.59 NA 56 1
2 7.91 11.66 60 2
3 5.14 7.92 82 3
4 7.71 9.61 55 4
5 6.68 9.75 NA 5
6 5.99 5.04 72 6
BLAH BLAH BLAH
```
If I calculate my descriptive statistics using the `describe()` function
```
> describe( parenthood2 )
var n mean sd median trimmed mad min max BLAH
dan.sleep 1 91 6.98 1.02 7.03 7.02 1.13 4.84 9.00 BLAH
baby.sleep 2 89 8.11 2.05 8.20 8.13 2.28 3.25 12.07 BLAH
dan.grump 3 92 63.15 9.85 61.00 62.66 10.38 41.00 89.00 BLAH
day 4 100 50.50 29.01 50.50 50.50 37.06 1.00 100.00 BLAH
```
we can see from the `n` column that there are 9 missing values for `dan.sleep` , 11 missing values for `baby.sleep` and 8 missing values for `dan.grump` .85 Suppose what I would like is a correlation matrix. And let’s also suppose that I don’t bother to tell R how to handle those missing values. Here’s what happens:
```
> cor( parenthood2 )
dan.sleep baby.sleep dan.grump day
dan.sleep 1 NA NA NA
baby.sleep NA 1 NA NA
dan.grump NA NA 1 NA
day NA NA NA 1
```
Annoying, but it kind of makes sense. If I don’t know what some of the values of `dan.sleep` and `baby.sleep` actually are, then I can’t possibly know what the correlation between these two variables is either, since the formula for the correlation coefficient makes use of every single observation in the data set. Once again, it makes sense: it’s just not particularly helpful. To make R behave more sensibly in this situation, you need to specify the `use` argument to the `cor()` function. There are several different values that you can specify for this, but the two that we care most about in practice tend to be `"complete.obs"` and `"pairwise.complete.obs"`. If we specify `use = "complete.obs"`, R will completely ignore all cases (i.e., all rows in our `parenthood2` data frame) that have any missing values at all. So, for instance, if you look back at the extract earlier where I printed out the `parenthood2` data, you’ll notice that observation 1 (i.e., day 1) is missing the value for `baby.sleep`, but is otherwise complete. Well, if you choose `use = "complete.obs"`, R will ignore that row completely: that is, even when it’s trying to calculate the correlation between `dan.sleep` and `dan.grump`, observation 1 will be ignored, because the value of `baby.sleep` is missing for that observation. Here’s what we get:
```
> cor(parenthood2, use = "complete.obs")
dan.sleep baby.sleep dan.grump day
dan.sleep 1.00000000 0.6394985 -0.89951468 0.06132891
baby.sleep 0.63949845 1.0000000 -0.58656066 0.14555814
dan.grump -0.89951468 -0.5865607 1.00000000 -0.06816586
day 0.06132891 0.1455581 -0.06816586 1.00000000
```
The other possibility that we care about, and the one that tends to get used more often in practice, is to set `use = "pairwise.complete.obs"`. When we do that, R only looks at the variables that it’s trying to correlate when determining what to drop. So, for instance, since the only missing value for observation 1 of `parenthood2` is for `baby.sleep`, R will only drop observation 1 when `baby.sleep` is one of the variables involved: and so R keeps observation 1 when trying to correlate `dan.sleep` and `dan.grump`. When we do it this way, here’s what we get:
```
> cor(parenthood2, use = "pairwise.complete.obs")
dan.sleep baby.sleep dan.grump day
dan.sleep 1.00000000 0.61472303 -0.903442442 -0.076796665
baby.sleep 0.61472303 1.00000000 -0.567802669 0.058309485
dan.grump -0.90344244 -0.56780267 1.000000000 0.005833399
day -0.07679667 0.05830949 0.005833399 1.000000000
```
Similar, but not quite the same. It’s also worth noting that the `correlate()` function (in the `lsr` package) automatically uses the “pairwise complete” method:
```
> correlate(parenthood2)
dan.sleep baby.sleep dan.grump day
dan.sleep . 0.615 -0.903 -0.077
baby.sleep 0.615 . -0.568 0.058
dan.grump -0.903 -0.568 . 0.006
day -0.077 0.058 0.006 .
```
The two approaches have different strengths and weaknesses. The “pairwise complete” approach has the advantage that it keeps more observations, so you’re making use of more of your data and (as we’ll discuss in tedious detail in Chapter 10) this improves the reliability of your estimated correlation. On the other hand, it means that every correlation in your correlation matrix is being computed from a slightly different set of observations, which can be awkward when you want to compare the different correlations that you’ve got.
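If you want to get a feel for how much data each approach actually gets to work with in the `parenthood2` example, a quick sketch like the following does the trick; `complete.cases()` is a base R function that flags the rows with no missing values at all:
```
> sum( complete.cases( parenthood2 ) )   # rows that "complete.obs" gets to keep
> sum( !is.na(parenthood2$dan.sleep) & !is.na(parenthood2$dan.grump) )   # rows usable for this one pairwise correlation
```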
So which method should you use? It depends a lot on why you think your values are missing, and probably depends a little on how paranoid you are. For instance, if you think that the missing values were “chosen” completely randomly86 then you’ll probably want to use the pairwise method. If you think that missing data are a cue to thinking that the whole observation might be rubbish (e.g., someone just selecting arbitrary responses in your questionnaire), but that there’s no pattern to which observations are “rubbish” then it’s probably safer to keep only those observations that are complete. If you think there’s something systematic going on, in that some observations are more likely to be missing than others, then you have a much trickier problem to solve, and one that is beyond the scope of this book.
## 5.9 Summary
Calculating some basic descriptive statistics is one of the very first things you do when analysing real data, and descriptive statistics are much simpler to understand than inferential statistics, so like every other statistics textbook I’ve started with descriptives. In this chapter, we talked about the following topics:
* Measures of central tendency. Broadly speaking, central tendency measures tell you where the data are. There’s three measures that are typically reported in the literature: the mean, median and mode. (Section 5.1)
* Measures of variability. In contrast, measures of variability tell you about how “spread out” the data are. The key measures are: range, standard deviation, and interquartile range. (Section 5.2)
* Getting summaries of variables in R. Since this book focuses on doing data analysis in R, we spent a bit of time talking about how descriptive statistics are computed in R. (Section 2.8 and 5.5)
* Standard scores. The \(z\)-score is a slightly unusual beast. It’s not quite a descriptive statistic, and not quite an inference. We talked about it in Section 5.6. Make sure you understand that section: it’ll come up again later.
* Correlations. Want to know how strong the relationship is between two variables? Calculate a correlation. (Section 5.7)
* Missing data. Dealing with missing data is one of those frustrating things that data analysts really wish they didn’t have to think about. In real life it can be hard to do well. For the purposes of this book, we only touched on the basics in Section 5.8.
In the next chapter we’ll move on to a discussion of how to draw pictures! Everyone loves a pretty picture, right? But before we do, I want to end on an important point. A traditional first course in statistics spends only a small proportion of the class on descriptive statistics, maybe one or two lectures at most. The vast majority of the lecturer’s time is spent on inferential statistics, because that’s where all the hard stuff is. That makes sense, but it hides the practical everyday importance of choosing good descriptives. With that in mind…
## 5.10 Epilogue: Good descriptive statistics are descriptive!
The death of one man is a tragedy. The death of millions is a statistic.
– <NAME>, Potsdam 1945
950,000 – 1,200,000
– Estimate of Soviet repression deaths, 1937-1938 (Ellman 2002)
Stalin’s infamous quote about the statistical character of the deaths of millions is worth giving some thought. The clear intent of his statement is that the death of an individual touches us personally and its force cannot be denied, but that the deaths of a multitude are incomprehensible, and as a consequence mere statistics, more easily ignored. I’d argue that Stalin was half right. A statistic is an abstraction, a description of events beyond our personal experience, and so hard to visualise. Few if any of us can imagine what the deaths of millions are “really” like, but we can imagine one death, and this gives the lone death its feeling of immediate tragedy, a feeling that is missing from Ellman’s cold statistical description.
Yet it is not so simple: without numbers, without counts, without a description of what happened, we have no chance of understanding what really happened, no opportunity even to try to summon the missing feeling. And in truth, as I write this, sitting in comfort on a Saturday morning, half a world and a whole lifetime away from the Gulags, when I put the Ellman estimate next to the Stalin quote a dull dread settles in my stomach and a chill settles over me. The Stalinist repression is something truly beyond my experience, but with a combination of statistical data and those recorded personal histories that have come down to us, it is not entirely beyond my comprehension. Because what Ellman’s numbers tell us is this: over a two year period, Stalinist repression wiped out the equivalent of every man, woman and child currently alive in the city where I live. Each one of those deaths had its own story, was its own tragedy, and only some of those are known to us now. Even so, with a few carefully chosen statistics, the scale of the atrocity starts to come into focus.
Thus it is no small thing to say that the first task of the statistician and the scientist is to summarise the data, to find some collection of numbers that can convey to an audience a sense of what has happened. This is the job of descriptive statistics, but it’s not a job that can be told solely using the numbers. You are a data analyst, not a statistical software package. Part of your job is to take these statistics and turn them into a description. When you analyse data, it is not sufficient to list off a collection of numbers. Always remember that what you’re really trying to do is communicate with a human audience. The numbers are important, but they need to be put together into a meaningful story that your audience can interpret. That means you need to think about framing. You need to think about context. And you need to think about the individual events that your statistics are summarising.
* Note for non-Australians: the AFL is an Australian rules football competition. You don’t need to know anything about Australian rules in order to follow this section.
* The choice to use \(\Sigma\) to denote summation isn’t arbitrary: it’s the Greek upper case letter sigma, which is the analogue of the letter S in that alphabet. Similarly, there’s an equivalent symbol used to denote the multiplication of lots of numbers: because multiplications are also called “products”, we use the \(\Pi\) symbol for this; the Greek upper case pi, which is the analogue of the letter P.
* Note that, just as we saw with the combine function `c()` and the remove function `rm()`, the `sum()` function has unnamed arguments. I’ll talk about unnamed arguments later in Section 8.4.1, but for now let’s just ignore this detail.
* www.abc.net.au/news/stories/2010/09/24/3021480.htm
* Or at least, the basic statistical theory – these days there is a whole subfield of statistics called robust statistics that tries to grapple with the messiness of real data and develop theory that can cope with it.
* As we saw earlier, it does have a function called `mode()`, but it does something completely different.
* This is called a “0-1 loss function”, meaning that you either win (1) or you lose (0), with no middle ground.
* Well, I will very briefly mention the one that I think is coolest, for a very particular definition of “cool”, that is. Variances are additive. Here’s what that means: suppose I have two variables \(X\) and \(Y\), whose variances are \(\mathrm{Var}(X)\) and \(\mathrm{Var}(Y)\) respectively. If the two variables are independent, then the variance of their sum \(X+Y\) is just \(\mathrm{Var}(X) + \mathrm{Var}(Y)\).
* With the possible exception of the third question.
* Strictly, the assumption is that the data are normally distributed, which is an important concept that we’ll discuss more in Chapter 9, and will turn up over and over again later in the book.
* The assumption again being that the data are normally-distributed!
* The “\(-3\)” part is something that statisticians tack on to ensure that the normal curve has kurtosis zero. It looks a bit stupid, just sticking a “-3” at the end of the formula, but there are good mathematical reasons for doing this.
* I haven’t discussed how to compute \(z\)-scores explicitly, but you can probably guess. For a variable `X`, the simplest way is to use a command like `(X - mean(X)) / sd(X)`. There’s also a fancier function called `scale()` that you can use, but it relies on somewhat more complicated R concepts that I haven’t explained yet.
* Technically, because I’m calculating means and standard deviations from a sample of data, but want to talk about my grumpiness relative to a population, what I’m actually doing is estimating a \(z\) score. However, since we haven’t talked about estimation yet (see Chapter 10) I think it’s best to ignore this subtlety, especially as it makes very little difference to our calculations.
* Though some caution is usually warranted. It’s not always the case that one standard deviation on variable A corresponds to the same “kind” of thing as one standard deviation on variable B. Use common sense when trying to determine whether or not the \(z\) scores of two variables can be meaningfully compared.
* Actually, even that table is more than I’d bother with. In practice most people pick one measure of central tendency, and one measure of variability only.
* Just like we saw with the variance and the standard deviation, in practice we divide by \(N-1\) rather than \(N\).
* This is an oversimplification, but it’ll do for our purposes.
* If you are reading this after having already completed Chapter 11 you might be wondering about hypothesis tests for correlations. R has a function called `cor.test()` that runs a hypothesis test for a single correlation, and the `psych` package contains a version called `corr.test()` that can run tests for every correlation in a correlation matrix; hypothesis tests for correlations are discussed in more detail in Section 15.6.
* An alternative usage of `cor()` is to correlate one set of variables with another subset of variables. If `X` and `Y` are both data frames with the same number of rows, then `cor(x = X, y = Y)` will produce a correlation matrix that correlates all variables in `X` with all variables in `Y`.
* It’s worth noting that, even though we have missing data for each of these variables, the output doesn’t contain any `NA` values. This is because, while `describe()` also has an `na.rm` argument, the default value for this function is `na.rm = TRUE`.
* The technical term here is “missing completely at random” (often written MCAR for short). Makes sense, I suppose, but it does sound ungrammatical to me.
# Chapter 6 Drawing graphs
Above all else show the data.
–<NAME>
Visualising data is one of the most important tasks facing the data analyst. It’s important for two distinct but closely related reasons. Firstly, there’s the matter of drawing “presentation graphics”: displaying your data in a clean, visually appealing fashion makes it easier for your reader to understand what you’re trying to tell them. Equally important, perhaps even more important, is the fact that drawing graphs helps you to understand the data. To that end, it’s important to draw “exploratory graphics” that help you learn about the data as you go about analysing it. These points might seem pretty obvious, but I cannot count the number of times I’ve seen people forget them.
To give a sense of the importance of this chapter, I want to start with a classic illustration of just how powerful a good graph can be. To that end, Figure 6.1 shows a redrawing of one of the most famous data visualisations of all time: John Snow’s 1854 map of cholera deaths. The map is elegant in its simplicity. In the background we have a street map, which helps orient the viewer. Over the top, we see a large number of small dots, each one representing the location of a cholera case. The larger symbols show the location of water pumps, labelled by name. Even the most casual inspection of the graph makes it very clear that the source of the outbreak is almost certainly the Broad Street pump. Upon viewing this graph, Dr Snow arranged to have the handle removed from the pump, ending the outbreak that had killed over 500 people. Such is the power of a good data visualisation.
The goals in this chapter are twofold: firstly, to discuss several fairly standard graphs that we use a lot when analysing and presenting data, and secondly, to show you how to create these graphs in R. The graphs themselves tend to be pretty straightforward, so in that respect this chapter is pretty simple. Where people usually struggle is learning how to produce graphs, and especially, learning how to produce good graphs.88 Fortunately, learning how to draw graphs in R is reasonably simple, as long as you’re not too picky about what your graph looks like. What I mean when I say this is that R has a lot of very good graphing functions, and most of the time you can produce a clean, high-quality graphic without having to learn very much about the low-level details of how R handles graphics. Unfortunately, on those occasions when you do want to do something non-standard, or if you need to make highly specific changes to the figure, you actually do need to learn a fair bit about these details; and those details are both complicated and boring. With that in mind, the structure of this chapter is as follows: I’ll start out by giving you a very quick overview of how graphics work in R. I’ll then discuss several different kinds of graph and how to draw them, as well as showing the basics of how to customise these plots. I’ll then talk in more detail about R graphics, discussing some of those complicated and boring issues. In a future version of this book, I intend to finish this chapter off by talking about what makes a good or a bad graph, but I haven’t yet had the time to write that section.
## 6.1 An overview of R graphics
Reduced to its simplest form, you can think of an R graphic as being much like a painting. You start out with an empty canvas. Every time you use a graphics function, it paints some new things onto your canvas. Later on, you can paint more things over the top if you want; but just like painting, you can’t “undo” your strokes. If you make a mistake, you have to throw away your painting and start over. Fortunately, this is way more easy to do when using R than it is when painting a picture in real life: you delete the plot and then type a new set of commands.89 This way of thinking about drawing graphs is referred to as the painter’s model. So far, this probably doesn’t sound particularly complicated, and for the vast majority of graphs you’ll want to draw it’s exactly as simple as it sounds. Much like painting in real life, the headaches usually start when we dig into details. To see why, I’ll expand this “painting metaphor” a bit further just to show you the basics of what’s going on under the hood, but before I do I want to stress that you really don’t need to understand all these complexities in order to draw graphs. I’d been using R for years before I even realised that most of these issues existed! However, I don’t want you to go through the same pain I went through every time I inadvertently discovered one of these things, so here’s a quick overview.
Firstly, if you want to paint a picture, you need to paint it on something. In real life, you can paint on lots of different things. Painting onto canvas isn’t the same as painting onto paper, and neither one is the same as painting on a wall. In R, the thing that you paint your graphic onto is called a device. For most applications that we’ll look at in this book, this “device” will be a window on your computer. If you’re using Windows as your operating system, then the name for this device is `windows`; on a Mac it’s called `quartz` because that’s the name of the software that the Mac OS uses to draw pretty pictures; and on Linux/Unix, you’re probably using `X11`. On the other hand, if you’re using Rstudio (regardless of which operating system you’re on), there’s a separate device called `RStudioGD` that forces R to paint inside the “plots” panel in Rstudio. However, from the computer’s perspective there’s nothing terribly special about drawing pictures on screen: and so R is quite happy to paint pictures directly into a file. R can paint several different types of image files: `jpeg`, `png`, `postscript`, `tiff` and `bmp` files are all among the options that you have available to you. For the most part, these different devices all behave the same way, so you don’t really need to know much about the differences between them when learning how to draw pictures. But, just like real life painting, sometimes the specifics do matter. Unless stated otherwise, you can assume that I’m drawing a picture on screen, using the appropriate device (i.e., `windows`, `quartz`, `X11` or `RStudioGD`). On the rare occasions where these behave differently from one another, I’ll try to point it out in the text.

Secondly, when you paint a picture you need to paint it with something. Maybe you want to do an oil painting, but maybe you want to use watercolour. And, generally speaking, you pretty much have to pick one or the other. The analog to this in R is a “graphics system”. A graphics system defines a collection of very low-level graphics commands about what to draw and where to draw it. Something that surprises most new R users is the discovery that R actually has two completely independent graphics systems, known as traditional graphics (in the `graphics` package) and grid graphics (in the `grid` package).90 Not surprisingly, the traditional graphics system is the older of the two: in fact, it’s actually older than R since it has its origins in S, the system from which R is descended. Grid graphics are newer, and in some respects more powerful, so many of the more recent, fancier graphical tools in R make use of grid graphics. However, grid graphics are somewhat more complicated beasts, so most people start out by learning the traditional graphics system. Nevertheless, as long as you don’t want to use any low-level commands yourself, then you don’t really need to care about whether you’re using traditional graphics or grid graphics. However, the moment you do want to tweak your figure by using some low-level commands you do need to care. Because these two different systems are pretty much incompatible with each other, there’s a pretty big divide in the R graphics universe. Unless stated otherwise, you can assume that everything I’m saying pertains to traditional graphics.

Thirdly, a painting is usually done in a particular style. Maybe it’s a still life, maybe it’s an impressionist piece, or maybe you’re trying to annoy me by pretending that cubism is a legitimate artistic style.
Regardless, each artistic style imposes some overarching aesthetic and perhaps even constraints on what can (or should) be painted using that style. In the same vein, R has quite a number of different packages, each of which provides a collection of high-level graphics commands. A single high-level command is capable of drawing an entire graph, complete with a range of customisation options. Most but not all of the high-level commands that I’ll talk about in this book come from the `graphics` package itself, and so belong to the world of traditional graphics. These commands all tend to share a common visual style, although there are a few graphics that I’ll use that come from other packages that differ in style somewhat. On the other side of the great divide, the grid universe relies heavily on two different packages – `lattice` and `ggplot2` – each of which provides a quite different visual style. As you’ve probably guessed, there’s a whole separate bunch of functions that you’d need to learn if you want to use `lattice` graphics or make use of `ggplot2`. However, for the purposes of this book I’ll restrict myself to talking about the basic `graphics` tools.
At this point, I think we’ve covered more than enough background material. The point that I’m trying to make by providing this discussion isn’t to scare you with all these horrible details, but rather to try to convey to you the fact that R doesn’t really provide a single coherent graphics system. Instead, R itself provides a platform, and different people have built different graphical tools using that platform. As a consequence of this fact, there’s two different universes of graphics, and a great multitude of packages that live in them. At this stage you don’t need to understand these complexities, but it’s useful to know that they’re there. But for now, I think we can be happy with a simpler view of things: we’ll draw pictures on screen using the traditional graphics system, and as much as possible we’ll stick to high level commands only.
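As an aside, if you ever do want R to paint directly into a file rather than onto the screen, the basic pattern is worth seeing once. This is only a sketch, the file name is just a made-up example, and it also shows the “painting over the top” idea in action:
```
png( filename = "myplot.png" )   # open a png file as the "device" to paint onto
plot( 1:10 )                     # paint a scatterplot onto it, exactly as you normally would
lines( 1:10 )                    # paint a line over the top of what's already there
dev.off()                        # close the device, so that the file actually gets written
```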
So let’s start painting.
## 6.2 An introduction to plotting
Before I discuss any specialised graphics, let’s start by drawing a few very simple graphs just to get a feel for what it’s like to draw pictures using R. To that end, let’s create a small vector `Fibonacci` that contains a few numbers we’d like R to draw for us. Then, we’ll ask R to `plot()` those numbers. The result is Figure 6.2.
```
Fibonacci <- c( 1,1,2,3,5,8,13 )
plot( Fibonacci )
```
As you can see, what R has done is plot the values stored in the `Fibonacci` variable on the vertical axis (y-axis) and the corresponding index on the horizontal axis (x-axis). In other words, since the 4th element of the vector has a value of 3, we get a dot plotted at the location (4,3). That’s pretty straightforward, and the image in Figure 6.2 is probably pretty close to what you would have had in mind when I suggested that we plot the `Fibonacci` data. However, there’s quite a lot of customisation options available to you, so we should probably spend a bit of time looking at some of those options. So, be warned: this ends up being a fairly long section, because there’s so many possibilities open to you. Don’t let it overwhelm you though… while all of the options discussed here are handy to know about, you can get by just fine only knowing a few of them. The only reason I’ve included all this stuff right at the beginning is that it ends up making the rest of the chapter a lot more readable!
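Incidentally, as far as the picture itself is concerned, the command above does the same thing as spelling out the index values yourself as the `x` argument; the only difference you’ll notice is in the default axis labels:
```
plot( x = 1:7, y = Fibonacci )   # same dots: index on the x-axis, values on the y-axis
```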
### 6.2.1 A tedious digression
Before we go into any discussion of customising plots, we need a little more background. The important thing to note when using the `plot()` function is that it’s another example of a generic function (Section 4.11), much like `print()` and `summary()`, and so its behaviour changes depending on what kind of input you give it. However, the `plot()` function is somewhat fancier than the other two, and its behaviour depends on two arguments, `x` (the first input, which is required) and `y` (which is optional). This makes it (a) extremely powerful once you get the hang of it, and (b) hilariously unpredictable, when you’re not sure what you’re doing. As much as possible, I’ll try to make clear what type of inputs produce what kinds of outputs. For now, however, it’s enough to note that I’m only doing very basic plotting, and as a consequence all of the work is being done by the `plot.default()` function. What kinds of customisations might we be interested in? If you look at the help documentation for the default plotting method (i.e., type `?plot.default` or `help("plot.default")`) you’ll see a very long list of arguments that you can specify to customise your plot. I’ll talk about several of them in a moment, but first I want to point out something that might seem quite wacky. When you look at all the different options that the help file talks about, you’ll notice that some of the options that it refers to are “proper” arguments to the `plot.default()` function, but it also goes on to mention a bunch of things that look like they’re supposed to be arguments, but they’re not listed in the “Usage” section of the file, and the documentation calls them graphical parameters instead. Even so, it’s usually possible to treat them as if they were arguments of the plotting function. Very odd. In order to stop my readers trying to find a brick and look up my home address, I’d better explain what’s going on; or at least give the basic gist behind it. What exactly is a graphical parameter? Basically, the idea is that there are some characteristics of a plot which are pretty universal: for instance, regardless of what kind of graph you’re drawing, you probably need to specify what colour to use for the plot, right? So you’d expect there to be something like a `col` argument to every single graphics function in R? Well, sort of. In order to avoid having hundreds of arguments for every single function, what R does is refer to a bunch of these “graphical parameters” which are pretty general purpose. Graphical parameters can be changed directly by using the low-level `par()` function, which I discuss briefly in Section 6.7.1 though not in a lot of detail. If you look at the help files for graphical parameters (i.e., type `?par`) you’ll see that there’s lots of them. Fortunately, (a) the default settings are generally pretty good so you can ignore the majority of the parameters, and (b) as you’ll see as we go through this chapter, you very rarely need to use `par()` directly, because you can “pretend” that graphical parameters are just additional arguments to your high-level function (e.g. `plot.default()`). In short… yes, R does have these wacky “graphical parameters” which can be quite confusing. But in most basic uses of the plotting functions, you can act as if they were just undocumented additional arguments to your function.
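To make that last point concrete, here’s a small sketch showing the two routes. Passing a graphical parameter straight to the plotting function affects only that one plot, whereas setting it with `par()` changes the default for every plot you draw afterwards:
```
plot( Fibonacci, col = "blue" )   # set the colour for this one plot only
old <- par( col = "blue" )        # change the default colour for everything drawn from now on
plot( Fibonacci )                 # this plot comes out blue too
par( old )                        # restore the previous settings
```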
### 6.2.2 Customising the title and the axis labels
One of the first things that you’ll find yourself wanting to do when customising your plot is to label it better. You might want to specify more appropriate axis labels, add a title or add a subtitle. The arguments that you need to specify to make this happen are:
*
`main` . A character string containing the title. *
`sub` . A character string containing the subtitle. *
`xlab` . A character string containing the x-axis label. *
`ylab` . A character string containing the y-axis label.
These aren’t graphical parameters, they’re arguments to the high-level function. However, because the high-level functions all rely on the same low-level function to do the drawing91 the names of these arguments are identical for pretty much every high-level function I’ve come across. Let’s have a look at what happens when we make use of all these arguments. Here’s the command. The picture that this draws is shown in Figure 6.3.
```
plot( x = Fibonacci,
main = "You specify title using the 'main' argument",
sub = "The subtitle appears here! (Use the 'sub' argument for this)",
xlab = "The x-axis label is 'xlab'",
ylab = "The y-axis label is 'ylab'"
)
```
It’s more or less as you’d expect. The plot itself is identical to the one we drew in Figure 6.2, except for the fact that we’ve changed the axis labels, and added a title and a subtitle. Even so, there’s a couple of interesting features worth calling your attention to. Firstly, notice that the subtitle is drawn below the plot, which I personally find annoying; as a consequence I almost never use subtitles. You may have a different opinion, of course, but the important thing is that you remember where the subtitle actually goes. Secondly, notice that R has decided to use boldface text and a larger font size for the title. This is one of my most hated default settings in R graphics, since I feel that it draws too much attention to the title. Generally, while I do want my reader to look at the title, I find that the R defaults are a bit overpowering, so I often like to change the settings. To that end, there are a bunch of graphical parameters that you can use to customise the font style:
* Font styles:
`font.main` , `font.sub` , `font.lab` , `font.axis` . These four parameters control the font style used for the plot title ( `font.main` ), the subtitle ( `font.sub` ), the axis labels ( `font.lab` : note that you can’t specify separate styles for the x-axis and y-axis without using low level commands), and the numbers next to the tick marks on the axis ( `font.axis` ). Somewhat irritatingly, these arguments are numbers instead of meaningful names: a value of 1 corresponds to plain text, 2 means boldface, 3 means italic and 4 means bold italic. * Font colours:
`col.main`, `col.sub`, `col.lab`, `col.axis`. These parameters do pretty much what the name says: each one specifies a colour in which to type each of the different bits of text. Conveniently, R has a very large number of named colours (type `colours()` to see a list of over 650 colour names that R knows), so you can use the English language name of the colour to select it.92 Thus, the parameter value here is a string like `"red"`, `"gray25"` or `"springgreen4"` (yes, R really does recognise four different shades of “spring green”). * Font size:
`cex.main` , `cex.sub` , `cex.lab` , `cex.axis` . Font size is handled in a slightly curious way in R. The “cex” part here is short for “character expansion”, and it’s essentially a magnification value. By default, all of these are set to a value of 1, except for the font title: `cex.main` has a default magnification of 1.2, which is why the title font is 20% bigger than the others. * Font family:
`family`. This argument specifies a font family to use: the simplest way to use it is to set it to `"sans"`, `"serif"`, or `"mono"`, corresponding to a sans serif font, a serif font, or a monospaced font. If you want to, you can give the name of a specific font, but keep in mind that different operating systems use different fonts, so it’s probably safest to keep it simple. Better yet, unless you have some deep objections to the R defaults, just ignore this parameter entirely. That’s what I usually do.
To give you a sense of how you can use these parameters to customise your titles, the following command can be used to draw Figure 6.4:
```
plot( x = Fibonacci, # the data to plot
main = "The first 7 Fibonacci numbers", # the title
xlab = "Position in the sequence", # x-axis label
ylab = "The Fibonacci number", # y-axis
font.main = 1,
cex.main = 1,
font.axis = 2,
col.lab = "gray50" )
```
Although this command is quite long, it’s not complicated: all it does is override a bunch of the default parameter values. The only difficult aspect to this is that you have to remember what each of these parameters is called, and what all the different values are. And in practice I never remember: I have to look up the help documentation every time, or else look it up in this book.
### 6.2.3 Changing the plot type
Adding and customising the titles associated with the plot is one way in which you can play around with what your picture looks like. Another thing that you’ll want to do is customise the appearance of the actual plot! To start with, let’s look at the single most important option that the `plot()` function (or, recalling that we’re dealing with a generic function, in this case the `plot.default()` function, since that’s the one doing all the work) provides for you to use, which is the `type` argument. The `type` argument specifies the visual style of the plot. The possible values for this are:
*
`type = "p"` . Draw the points only. *
`type = "l"` . Draw a line through the points. *
`type = "o"` . Draw the line over the top of the points. *
`type = "b"` . Draw both points and lines, but don’t overplot. *
`type = "h"` . Draw “histogram-like” vertical bars. *
`type = "s"` . Draw a staircase, going horizontally then vertically. *
`type = "S"` . Draw a Staircase, going vertically then horizontally. *
`type = "c"` . Draw only the connecting lines from the “b” version. *
`type = "n"` . Draw nothing. (Apparently this is useful sometimes?)

The simplest way to illustrate what each of these really looks like is just to draw them. To that end, Figure 6.5 shows the same Fibonacci data, drawn using six different `types` of plot. As you can see, by altering the `type` argument you can get a qualitatively different appearance to your plot. In other words, as far as R is concerned, the only difference between a scatterplot (like the ones we drew in Section 5.7) and a line plot is that you draw a scatterplot by setting `type = "p"` and you draw a line plot by setting `type = "l"`. However, that doesn’t imply that you should think of them as being equivalent to each other. As you can see by looking at Figure 6.5, a line plot implies that there is some notion of continuity from one point to the next, whereas a scatterplot does not.
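If you want to experiment with this yourself, the commands all follow the same pattern; for example:
```
plot( Fibonacci, type = "p" )   # points only (the default)
plot( Fibonacci, type = "l" )   # a line through the points
plot( Fibonacci, type = "b" )   # both points and lines
```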
### 6.2.4 Changing other features of the plot
In Section 6.2.2 we talked about a group of graphical parameters that are related to the formatting of titles, axis labels etc. The second group of parameters I want to discuss are those related to the formatting of the plot itself:
* Colour of the plot:
`col` . As we saw with the previous colour-related parameters, the simplest way to specify this parameter is using a character string: e.g., `col = "blue"`. It’s a pretty straightforward parameter to specify: the only real subtlety is that every high-level function tends to draw a different “thing” as its output, and so this parameter gets interpreted a little differently by different functions. However, for the `plot.default()` function it’s pretty simple: the `col` argument refers to the colour of the points and/or lines that get drawn! * Character used to plot points:
`pch` . The plot character parameter is a number, usually between 1 and 25. What it does is tell R what symbol to use to draw the points that it plots. The simplest way to illustrate what the different values do is with a picture. Figure 6.6 shows the first 25 plotting characters. The default plotting character is a hollow circle (i.e., `pch = 1` ).
* Plot size:
`cex` . This parameter describes a character expansion factor (i.e., magnification) for the plotted characters. By default `cex=1` , but if you want bigger symbols in your graph you should specify a larger value. * Line type:
`lty` . The line type parameter describes the kind of line that R draws. It has seven values which you can specify using a number between `0` and `6`, or using a meaningful character string: `"blank"`, `"solid"`, `"dashed"`, `"dotted"`, `"dotdash"`, `"longdash"`, or `"twodash"`. Note that the “blank” version (value 0) just means that R doesn’t draw the lines at all. The other six versions are shown in Figure 6.7.
* Line width:
`lwd` . The last graphical parameter in this category that I want to mention is the line width parameter, which is just a number specifying the width of the line. The default value is 1. Not surprisingly, larger values produce thicker lines and smaller values produce thinner lines. Try playing around with different values of `lwd` to see what happens.
To illustrate what you can do by altering these parameters, let’s try the following command, the output is shown in Figure 6.8.
```
plot( x = Fibonacci,
type = "b",
col = "blue",
pch = 19,
cex=5,
lty=2,
lwd=4)
```
### 6.2.5 Changing the appearance of the axes
There are several other possibilities worth discussing. Ignoring graphical parameters for the moment, there’s a few other arguments to the `plot.default()` function that you might want to use. As before, many of these are standard arguments that are used by a lot of high level graphics functions:
* Changing the axis scales:
`xlim` , `ylim` . Generally R does a pretty good job of figuring out where to set the edges of the plot. However, you can override its choices by setting the `xlim` and `ylim` arguments. For instance, if I decide I want the vertical scale of the plot to run from 0 to 100, then I’d set `ylim = c(0, 100)` . * Suppress labelling:
`ann` . This is a logical-valued argument that you can use if you don’t want R to include any text for a title, subtitle or axis label. To do so, set `ann = FALSE` . This will stop R from including any text that would normally appear in those places. Note that this will override any of your manual titles. For example, if you try to add a title using the `main` argument, but you also specify `ann = FALSE` , no title will appear. * Suppress axis drawing:
`axes` . Again, this is a logical valued argument. Suppose you don’t want R to draw any axes at all. To suppress the axes, all you have to do is add `axes = FALSE` . This will remove the axes and the numbering, but not the axis labels (i.e. the `xlab` and `ylab` text). Note that you can get finer grain control over this by specifying the `xaxt` and `yaxt` graphical parameters instead (see below). * Include a framing box:
`frame.plot` . Suppose you’ve removed the axes by setting `axes = FALSE` , but you still want to have a simple box drawn around the plot; that is, you only wanted to get rid of the numbering and the tick marks, but you want to keep the box. To do that, you set `frame.plot = TRUE` . Note that this list isn’t exhaustive. There are a few other arguments to the `plot.default` function that you can play with if you want to, but those are the ones you are probably most likely to want to use. As always, however, if these aren’t enough options for you, there’s also a number of other graphical parameters that you might want to play with as well. That’s the focus of the next section. In the meantime, here’s a command that makes use of all these different options. The output is shown in Figure 6.9, and it’s pretty much exactly as you’d expect. The axis scales on both the horizontal and vertical dimensions have been expanded, the axes have been suppressed as have the annotations, but I’ve kept a box around the plot.
```
plot( x = Fibonacci, # the data
xlim = c(0, 15), # expand the x-scale
ylim = c(0, 15), # expand the y-scale
ann = FALSE, # delete all annotations
axes = FALSE, # delete the axes
frame.plot = TRUE # but include a framing box
)
```
Before moving on, I should point out that there are several graphical parameters relating to the axes, the box, and the general appearance of the plot which allow finer grain control over the appearance of the axes and the annotations.
* Suppressing the axes individually:
`xaxt` , `yaxt` . These graphical parameters are basically just fancier versions of the `axes` argument we discussed earlier. If you want to stop R from drawing the vertical axis but you’d like it to keep the horizontal axis, set `yaxt = "n"` . I trust that you can figure out how to keep the vertical axis and suppress the horizontal one! * Box type:
`bty` . In the same way that `xaxt` , `yaxt` are just fancy versions of `axes` , the box type parameter is really just a fancier version of the `frame.plot` argument, allowing you to specify exactly which out of the four borders you want to keep. The way we specify this parameter is a bit stupid, in my opinion: the possible values are `"o"` (the default), `"l"` , `"7"` , `"c"` , `"u"` , or `"]"` , each of which will draw only those edges that the corresponding character suggests. That is, the letter `"c"` has a top, a bottom and a left, but is blank on the right hand side, whereas `"7"` has a top and a right, but is blank on the left and the bottom. Alternatively a value of `"n"` means that no box will be drawn. * Orientation of the axis labels
`las` . I presume that the name of this parameter is an acronym of label style or something along those lines; but what it actually does is govern the orientation of the text used to label the individual tick marks (i.e., the numbering, not the `xlab` and `ylab` axis labels). There are four possible values for `las` : A value of 0 means that the labels of both axes are printed parallel to the axis itself (the default). A value of 1 means that the text is always horizontal. A value of 2 means that the labelling text is printed at right angles to the axis. Finally, a value of 3 means that the text is always vertical.
Again, these aren’t the only possibilities. There are a few other graphical parameters that I haven’t mentioned that you could use to customise the appearance of the axes,93 but that’s probably enough (or more than enough) for now. To give a sense of how you could use these parameters, let’s try the following command. The output is shown in Figure 6.10. As you can see, this isn’t a very useful plot at all. However, it does illustrate the graphical parameters we’re talking about, so I suppose it serves its purpose.
```
plot( x = Fibonacci, # the data
xaxt = "n", # don't draw the x-axis
bty = "]", # keep bottom, right and top of box only
las = 1 ) # rotate the text
```
### 6.2.6 Don’t panic
At this point, a lot of readers will probably be thinking something along the lines of, “if there’s this much detail just for drawing a simple plot, how horrible is it going to get when we start looking at more complicated things?” Perhaps, contrary to my earlier pleas for mercy, you’ve found a brick to hurl and are right now leafing through an Adelaide phone book trying to find my address. Well, fear not! And please, put the brick down. In a lot of ways, we’ve gone through the hardest part: we’ve already covered the vast majority of the plot customisations that you might want to do. As you’ll see, each of the other high level plotting commands we’ll talk about will only have a smallish number of additional options. Better yet, even though I’ve told you about a billion different ways of tweaking your plot, you don’t usually need them. So in practice, now that you’ve read over it once to get the gist, the majority of the content of this section is stuff you can safely forget: just remember to come back to this section later on when you want to tweak your plot.
## 6.3 Histograms
Now that we’ve tamed (or possibly fled from) the beast that is R graphical parameters, let’s talk more seriously about some real life graphics that you’ll want to draw. We begin with the humble histogram. Histograms are one of the simplest and most useful ways of visualising data. They make most sense when you have an interval or ratio scale (e.g., the `afl.margins` data from Chapter 5) and what you want to do is get an overall impression of the data. Most of you probably know how histograms work, since they’re so widely used, but for the sake of completeness I’ll describe them. All you do is divide up the possible values into bins, and then count the number of observations that fall within each bin. This count is referred to as the frequency of the bin, and is displayed as a bar: in the AFL winning margins data, there are 33 games in which the winning margin was less than 10 points, and it is this fact that is represented by the height of the leftmost bar in Figure 6.11. Drawing this histogram in R is pretty straightforward. The function you need to use is called `hist()`, and it has pretty reasonable default settings. In fact, Figure 6.11 is exactly what you get if you just type this: `hist( afl.margins )` Although this image would need a lot of cleaning up in order to make a good presentation graphic (i.e., one you’d include in a report), it nevertheless does a pretty good job of describing the data. In fact, the big strength of a histogram is that (properly used) it does show the entire spread of the data, so you can get a pretty good sense about what it looks like. The downside to histograms is that they aren’t very compact: unlike some of the other plots I’ll talk about, it’s hard to cram 20-30 histograms into a single image without overwhelming the viewer. And of course, if your data are nominal scale (e.g., the `afl.finalists` data) then histograms are useless. The main subtlety that you need to be aware of when drawing histograms is determining where the `breaks` that separate bins should be located, and (relatedly) how many breaks there should be. In Figure 6.11, you can see that R has made pretty sensible choices all by itself: the breaks are located at 0, 10, 20, … 120, which is exactly what I would have done had I been forced to make a choice myself. On the other hand, consider the two histograms in Figures 6.12 and 6.13, which I produced using the following two commands:
```
hist( x = afl.margins, breaks = 3 )
```
```
hist( x = afl.margins, breaks = 0:116 )
```
In Figure 6.13, the bins are only 1 point wide. As a result, although the plot is very informative (it displays the entire data set with no loss of information at all!) the plot is very hard to interpret, and feels quite cluttered. On the other hand, the plot in Figure 6.12 has a bin width of 50 points, and has the opposite problem: it’s very easy to “read” this plot, but it doesn’t convey a lot of information. One gets the sense that this histogram is hiding too much. In short, the way in which you specify the breaks has a big effect on what the histogram looks like, so it’s important to make sure you choose the breaks sensibly. In general R does a pretty good job of selecting the breaks on its own, since it makes use of some quite clever tricks that statisticians have devised for automatically selecting the right bins for a histogram, but nevertheless it’s usually a good idea to play around with the breaks a bit to see what happens.
There is one fairly important thing to add regarding how the `breaks` argument works. There are two different ways you can specify the breaks. You can either specify how many breaks you want (which is what I did for Figure 6.12 when I typed `breaks = 3`) and let R figure out where they should go, or you can provide a vector that tells R exactly where the breaks should be placed (which is what I did for Figure 6.13 when I typed `breaks = 0:116`). The behaviour of the `hist()` function is slightly different depending on which version you use. If all you do is tell it how many breaks you want, R treats it as a “suggestion” not as a demand. It assumes you want “approximately 3” breaks, but if it doesn’t think that this would look very pretty on screen, it picks a different (but similar) number. It does this for a sensible reason – it tries to make sure that the breaks are located at sensible values (like 10) rather than stupid ones (like 7.224414). And most of the time R is right: usually, when a human researcher says “give me 3 breaks”, he or she really does mean “give me approximately 3 breaks, and don’t put them in stupid places”. However, sometimes R is dead wrong. Sometimes you really do mean “exactly 3 breaks”, and you know precisely where you want them to go. So you need to invoke “real person privilege”, and order R to do what it’s bloody well told. In order to do that, you have to input the full vector that tells R exactly where you want the breaks. If you do that, R will go back to behaving like the nice little obedient calculator that it’s supposed to be.
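So, for example, if I wanted bins that were exactly 20 points wide I could construct the whole vector of break points myself, rather than leaving it up to R. Something along these lines would do it, assuming the winning margins all fall between 0 and 120:
```
hist( x = afl.margins, breaks = seq( from = 0, to = 120, by = 20 ) )   # breaks at exactly 0, 20, 40, ..., 120
```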
### 6.3.1 Visual style of your histogram
Okay, so at this point we can draw a basic histogram, and we can alter the number and even the location of the `breaks` . However, the visual style of the histograms shown in Figures 6.11, 6.12, and 6.13 could stand to be improved. We can fix this by making use of some of the other arguments to the `hist()` function. Most of the things you might want to try doing have already been covered in Section 6.2, but there’s a few new things:
* Shading lines: `density`, `angle`. You can add diagonal lines to shade the bars: the `density` value is a number indicating how many lines per inch R should draw (the default value of `NULL` means no lines), and the `angle` is a number indicating how many degrees from horizontal the lines should be drawn at (default is `angle = 45` degrees).
* Specifics regarding colours: `col`, `border`. You can also change the colours: in this instance the `col` parameter sets the colour of the shading (either the shading lines if there are any, or else the colour of the interior of the bars if there are not), and the `border` argument sets the colour of the edges of the bars.
* Labelling the bars: `labels`. You can also attach labels to each of the bars using the `labels` argument. The simplest way to do this is to set `labels = TRUE`, in which case R will add a number just above each bar, that number being the exact number of observations in the bin. Alternatively, you can choose the labels yourself, by inputting a vector of strings, e.g., `labels = c("label 1","label 2","etc")`.
Not surprisingly, this doesn’t exhaust the possibilities. If you type `help("hist")` or `?hist` and have a look at the help documentation for histograms, you’ll see a few more options. A histogram that makes use of the histogram-specific customisations as well as several of the options we discussed in Section 6.2 is shown in Figure 6.14. The R command that I used to draw it is this:
```
hist( x = afl.margins,
main = "2010 AFL margins", # title of the plot
xlab = "Margin", # set the x-axis label
density = 10, # draw shading lines: 10 per inch
angle = 40, # set the angle of the shading lines to 40 degrees
border = "gray20", # set the colour of the borders of the bars
col = "gray80", # set the colour of the shading lines
labels = TRUE, # add frequency labels to each bar
ylim = c(0,40) # change the scale of the y-axis
)
```
Overall, this is a much nicer histogram than the default ones.
## 6.4 Stem and leaf plots
Histograms are one of the most widely used methods for displaying the observed values for a variable. They’re simple, pretty, and very informative. However, they do take a little bit of effort to draw. Sometimes it can be quite useful to make use of simpler, if less visually appealing, options. One such alternative is the stem and leaf plot. To a first approximation you can think of a stem and leaf plot as a kind of text-based histogram. Stem and leaf plots aren’t used as widely these days as they were 30 years ago, since it’s now just as easy to draw a histogram as it is to draw a stem and leaf plot. Not only that, they don’t work very well for larger data sets. As a consequence you probably won’t have as much of a need to use them yourself, though you may run into them in older publications. These days, the only real world situation where I use them is if I have a small data set with 20-30 data points and I don’t have a computer handy, because it’s pretty easy to quickly sketch a stem and leaf plot by hand.
With all that as background, let’s have a look at stem and leaf plots. The AFL margins data contains 176 observations, which is at the upper end for what you can realistically plot this way. The function in R for drawing stem and leaf plots is called `stem()` and if we ask for a stem and leaf plot of the `afl.margins` data, here’s what we get: `stem( afl.margins )`
```
##
## The decimal point is 1 digit(s) to the right of the |
##
## 0 | 001111223333333344567788888999999
## 1 | 0000011122234456666899999
## 2 | 00011222333445566667788999999
## 3 | 01223555566666678888899
## 4 | 012334444477788899
## 5 | 00002233445556667
## 6 | 0113455678
## 7 | 01123556
## 8 | 122349
## 9 | 458
## 10 | 148
## 11 | 6
```
The values to the left of the `|` are called stems and the values to the right are called leaves. If you just look at the shape that the leaves make, you can see something that looks a lot like a histogram made out of numbers, just rotated by 90 degrees. But if you know how to read the plot, there’s quite a lot of additional information here. In fact, it’s also giving you the actual values of all of the observations in the data set. To illustrate, let’s have a look at the last line in the stem and leaf plot, namely `11 | 6` . Specifically, let’s compare this to the largest value of the `afl.margins` data set: `max( afl.margins )` `## [1] 116` Hm… `11 | 6` versus `116` . Obviously the stem and leaf plot is trying to tell us that the largest value in the data set is 116. Similarly, when we look at the line that reads `10 | 148` , the way we interpret it is to note that the stem and leaf plot is telling us that the data set contains observations with values 101, 104 and 108. Finally, when we see something like
```
5 | 00002233445556667
```
the four `0` s in the stem and leaf plot are telling us that there are four observations with value 50.
I won’t talk about them in a lot of detail, but I should point out that some customisation options are available for stem and leaf plots in R. The two arguments that you can use to do this are:
* `scale`. Changing the `scale` of the plot (default value is 1) is analogous to changing the number of breaks in a histogram. Reducing the scale causes R to reduce the number of stem values (i.e., the number of breaks, if this were a histogram) that the plot uses.
* `width`. The second way that you can customise a stem and leaf plot is to alter the `width` (default value is 80). Changing the width alters the maximum number of leaf values that can be displayed for any given stem.
However, since stem and leaf plots aren’t as important as they used to be, I’ll leave it to the interested reader to investigate these options. Try the following two commands to see what happens:
```
stem( x = afl.margins, scale = .25 )
stem( x = afl.margins, width = 20 )
```
The only other thing to note about stem and leaf plots is the line in which R tells you where the decimal point is. If our data set had included only the numbers .11, .15, .23, .35 and .59 and we’d drawn a stem and leaf plot of these data, then R would move the decimal point: the stem values would be 1,2,3,4 and 5, but R would tell you that the decimal point has moved to the left of the `|` symbol. If you want to see this in action, try the following command:
```
stem( x = afl.margins / 1000 )
```
The stem and leaf plot itself will look identical to the original one we drew, except for the fact that R will tell you that the decimal point has moved.
## 6.5 Boxplots
Another alternative to histograms is a boxplot, sometimes called a “box and whiskers” plot. Like histograms, they’re most suited to interval or ratio scale data. The idea behind a boxplot is to provide a simple visual depiction of the median, the interquartile range, and the range of the data. And because they do so in a fairly compact way, boxplots have become a very popular statistical graphic, especially during the exploratory stage of data analysis when you’re trying to understand the data yourself. Let’s have a look at how they work, again using the `afl.margins` data as our example. Firstly, let’s actually calculate these numbers ourselves using the `summary()` function:94
```
summary( afl.margins )
```
So how does a boxplot capture these numbers? The easiest way to describe what a boxplot looks like is just to draw one. The function for doing this in R is (surprise, surprise) `boxplot()` . As always there’s a lot of optional arguments that you can specify if you want, but for the most part you can just let R choose the defaults for you. That said, I’m going to override one of the defaults to start with by specifying the `range` option, but for the most part you won’t want to do this (I’ll explain why in a minute). With that as preamble, let’s try the following command:
```
boxplot( x = afl.margins, range = 100 )
```
What R draws is shown in Figure 6.15, the most basic boxplot possible. When you look at this plot, this is how you should interpret it: the thick line in the middle of the box is the median; the box itself spans the range from the 25th percentile to the 75th percentile; and the “whiskers” cover the full range from the minimum value to the maximum value. This is summarised in the annotated plot in Figure 6.16.
In practice, this isn’t quite how boxplots usually work. In most applications, the “whiskers” don’t cover the full range from minimum to maximum. Instead, they actually go out to the most extreme data point that doesn’t exceed a certain bound. By default, this value is 1.5 times the interquartile range, corresponding to a `range` value of 1.5. Any observation whose value falls outside this range is plotted as a circle instead of being covered by the whiskers, and is commonly referred to as an outlier. For our AFL margins data, there is one observation (a game with a margin of 116 points) that falls outside this range. As a consequence, the upper whisker is pulled back to the next largest observation (a value of 108), and the observation at 116 is plotted as a circle. This is illustrated in Figure 6.17. Since the default value is `range = 1.5` we can draw this plot using the simple command
```
boxplot( afl.margins )
```
### 6.5.1 Visual style of your boxplot
I’ll talk a little more about the relationship between boxplots and outliers in Section 6.5.2, but before I do let’s take the time to clean this figure up. Boxplots in R are extremely customisable. In addition to the usual range of graphical parameters that you can tweak to make the plot look nice, you can also exercise nearly complete control over every element of the plot. Consider the boxplot in Figure 6.18: in this version of the plot, not only have I added labels ( `xlab` , `ylab` ) and removed the stupid border ( `frame.plot` ), I’ve also dimmed all of the graphical elements of the boxplot except the central bar that plots the median ( `border` ) so as to draw more attention to the median rather than the rest of the boxplot. You’ve seen all these options in previous sections in this chapter, so hopefully those customisations won’t need any further explanation. However, I’ve done two new things as well: I’ve deleted the cross-bars at the top and bottom of the whiskers (known as the “staples” of the plot), and converted the whiskers themselves to solid lines. The arguments that I used to do this are called by the ridiculous names of `staplewex` and `whisklty` ,95 and I’ll explain these in a moment.
But first, here’s the actual command I used to draw this figure:
```
boxplot( x = afl.margins, # the data
xlab = "AFL games, 2010", # x-axis label
ylab = "Winning Margin", # y-axis label
border = "grey50", # dim the border of the box
frame.plot = FALSE, # don't draw a frame
staplewex = 0, # don't draw staples
whisklty = 1 # solid line for whisker
)
```
Overall, I think the resulting boxplot is a huge improvement in visual design over the default version. In my opinion at least, there’s a fairly minimalist aesthetic that governs good statistical graphics. Ideally, every visual element that you add to a plot should convey part of the message. If your plot includes things that don’t actually help the reader learn anything new, you should consider removing them. Personally, I can’t see the point of the cross-bars on a standard boxplot, so I’ve deleted them.
Okay, what commands can we use to customise the boxplot? If you type `?boxplot` and flick through the help documentation, you’ll notice that it does mention `staplewex` as an argument, but there’s no mention of `whisklty` . The reason for this is that the function that handles the drawing is called `bxp()` , so if you type `?bxp` all the gory details appear. Here’s the short summary. In order to understand why these arguments have such stupid names, you need to recognise that they’re put together from two components. The first part of the argument name specifies one part of the box plot: `staple` refers to the staples of the plot (i.e., the cross-bars), and `whisk` refers to the whiskers. The second part of the name specifies a graphical parameter: `wex` is a width parameter, and `lty` is a line type parameter. The parts of the plot you can customise are:
* `box`. The box that covers the interquartile range.
* `med`. The line used to show the median.
* `whisk`. The vertical lines used to draw the whiskers.
* `staple`. The cross bars at the ends of the whiskers.
* `out`. The points used to show the outliers.
The actual graphical parameters that you might want to specify are slightly different for each visual element, just because they’re different shapes from each other. As a consequence, the following options are available:
* Width expansion: `boxwex`, `staplewex`, `outwex`. These are scaling factors that govern the width of various parts of the plot. The default scaling factor is (usually) 0.8 for the box, and 0.5 for the other two. Note that in the case of the outliers this parameter is meaningless unless you decide to draw lines plotting the outliers rather than use points.
* Line type: `boxlty`, `medlty`, `whisklty`, `staplelty`, `outlty`. These govern the line type for the relevant elements. The values for this are exactly the same as those used for the regular `lty` parameter, with two exceptions. There’s an additional option where you can set `medlty = "blank"` to suppress the median line completely (useful if you want to draw a point for the median rather than plot a line). Similarly, by default the outlier line type is set to `outlty = "blank"`, because the default behaviour is to draw outliers as points instead of lines.
* Line width: `boxlwd`, `medlwd`, `whisklwd`, `staplelwd`, `outlwd`. These govern the line widths for the relevant elements, and behave the same way as the regular `lwd` parameter. The only thing to note is that the default value for `medlwd` is three times the value of the others.
* Line colour: `boxcol`, `medcol`, `whiskcol`, `staplecol`, `outcol`. These govern the colour of the lines used to draw the relevant elements. Specify a colour in the same way that you usually do.
* Fill colour: `boxfill`. What colour should we use to fill the box?
* Point character: `medpch`, `outpch`. These behave like the regular `pch` parameter used to select the plot character. Note that you can set `outpch = NA` to stop R from plotting the outliers at all, and you can also set `medpch = NA` to stop it from drawing a character for the median (this is the default!).
* Point expansion: `medcex`, `outcex`. Size parameters for the points used to plot medians and outliers. These are only meaningful if the corresponding points are actually plotted. So for the default boxplot, which includes outlier points but uses a line rather than a point to draw the median, only the `outcex` parameter is meaningful.
* Background colours: `medbg`, `outbg`. Again, the background colours are only meaningful if the points are actually plotted.
Taken as a group, these parameters allow you almost complete freedom to select the graphical style for your boxplot that you feel is most appropriate to the data set you’re trying to describe. That said, when you’re first starting out there’s no shame in using the default settings! But if you want to master the art of designing beautiful figures, it helps to try playing around with these parameters to see what works and what doesn’t. Finally, I should mention a few other arguments that you might want to make use of:
* `horizontal`. Set this to `TRUE` to display the plot horizontally rather than vertically (a short sketch using this option appears just after this list).
* `varwidth`. Set this to `TRUE` to get R to scale the width of each box so that the areas are proportional to the number of observations that contribute to the boxplot. This is only useful if you’re drawing multiple boxplots at once (see Section 6.5.3).
* `show.names`. Set this to `TRUE` to get R to attach labels to the boxplots.
* `notch`. If you set `notch = TRUE` , R will draw little notches in the sides of each box. If the notches of two boxplots don’t overlap, then there is a “statistically significant” difference between the corresponding medians. If you haven’t read Chapter 11, ignore this argument – we haven’t discussed statistical significance, so this doesn’t mean much to you. I’m mentioning it only because you might want to come back to the topic later on (see also the `notch.frac` option when you type `?bxp` ).
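As a quick, hedged illustration of a couple of these switches (the particular combination here is just my choice, not anything required by the data):

```
boxplot( x = afl.margins,
         horizontal = TRUE,   # lay the boxplot on its side
         notch = TRUE         # add the notches around the median
)
```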
### 6.5.2 Using box plots to detect outliers
Because the boxplot automatically (unless you change the `range` argument) separates out those observations that lie outside a certain range, people often use them as an informal method for detecting outliers: observations that are “suspiciously” distant from the rest of the data. Here’s an example. Suppose that I’d drawn the boxplot for the AFL margins data, and it came up looking like Figure 6.19. It’s pretty clear that something funny is going on with one of the observations. Apparently, there was one game in which the margin was over 300 points! That doesn’t sound right to me. Now that I’ve become suspicious, it’s time to look a bit more closely at the data. One function that can be handy for this is the `which()` function; it takes as input a vector of logicals, and outputs the indices of the `TRUE` cases. This is particularly useful in the current context because it lets me do this:
```
suspicious.cases <- afl.margins > 300
which( suspicious.cases )
```
`## [1] 137` although in real life I probably wouldn’t bother creating the `suspicious.cases` variable: I’d just cut out the middle man and use a command like
```
which( afl.margins > 300 )
```
. In any case, what this has done is shown me that the outlier corresponds to game 137. Then, I find the recorded margin for that game: `afl.margins[137]` `## [1] 333` Hm. That definitely doesn’t sound right. So then I go back to the original data source (the internet!) and I discover that the actual margin of that game was 33 points. Now it’s pretty clear what happened. Someone must have typed in the wrong number. Easily fixed, just by typing
```
afl.margins[137] <- 33
```
. While this might seem like a silly example, I should stress that this kind of thing actually happens a lot. Real world data sets are often riddled with stupid errors, especially when someone had to type something into a computer at some point. In fact, there’s actually a name for this phase of data analysis, since in practice it can waste a huge chunk of our time: data cleaning. It involves searching for typos, missing data and all sorts of other obnoxious errors in raw data files.96
What about the real data? Does the value of 116 constitute a funny observation or not? Possibly. As it turns out the game in question was Fremantle v Hawthorn, and was played in round 21 (the second last home and away round of the season). Fremantle had already qualified for the final series, and for them the outcome of the game was irrelevant, so the team decided to rest several of their star players. As a consequence, Fremantle went into the game severely underpowered. In contrast, Hawthorn had started the season very poorly but had ended on a massive winning streak, and for them a win could secure a place in the finals. With the game played on Hawthorn’s home turf97 and with so many unusual factors at play, it is perhaps no surprise that Hawthorn annihilated Fremantle by 116 points. Two weeks later, however, the two teams met again in an elimination final on Fremantle’s home ground, and Fremantle won comfortably by 30 points.98
So, should we exclude the game from subsequent analyses? If this were a psychology experiment rather than an AFL season, I’d be quite tempted to exclude it because there’s pretty strong evidence that Fremantle weren’t really trying very hard; and to the extent that my research question is based on an assumption that participants are genuinely trying to do the task, it makes sense to exclude observations where they clearly aren’t. On the other hand, in a lot of studies we’re actually interested in seeing the full range of possible behaviour, and that includes situations where people decide not to try very hard: so excluding that observation would be a bad idea. In the context of the AFL data, a similar distinction applies. If I’d been trying to make tips about who would perform well in the finals, I would have (and in fact did) disregard the Round 21 massacre, because it’s way too misleading. On the other hand, if my interest is solely in the home and away season itself, I think it would be a shame to throw away information pertaining to one of the most distinctive (if boring) games of the year. In other words, the decision about whether to include outliers or exclude them depends heavily on why you think the data look the way they do, and what you want to use the data for. Statistical tools can provide an automatic method for suggesting candidates for deletion, but you really need to exercise good judgment here. As I’ve said before, R is a mindless automaton. It doesn’t watch the footy, so it lacks the broader context to make an informed decision. You are not a mindless automaton, so you should exercise judgment: if the outlier looks legitimate to you, then keep it. In any case, I’ll return to the topic again in Section 15.9, so let’s return to our discussion of how to draw boxplots.
### 6.5.3 Drawing multiple boxplots
One last thing. What if you want to draw multiple boxplots at once? Suppose, for instance, I wanted separate boxplots showing the AFL margins not just for 2010, but for every year between 1987 and 2010. To do that, the first thing we’ll have to do is find the data. These are stored in the `aflsmall2.Rdata` file. So let’s load it and take a quick peek at what’s inside:
```
load( "aflsmall2.Rdata" )
who( TRUE )
# -- Name -- -- Class -- -- Size --
# afl2 data.frame 4296 x 2
# $margin numeric 4296
# $year numeric 4296
```
Notice that the `afl2` data frame is pretty big. It contains 4296 games, which is far more than I want to see printed out on my computer screen. To that end, R provides you with a few useful functions to print out only a few of the rows in the data frame. The first of these is `head()` which prints out the first 6 rows of the data frame, like this: `head( afl2 )`
```
## margin year
## 1 33 1987
## 2 59 1987
## 3 45 1987
## 4 91 1987
## 5 39 1987
## 6 1 1987
```
You can also use the `tail()` function to print out the last 6 rows. The `car` package also provides a handy little function called `some()` which prints out a random subset of the rows. In any case, the important thing is that we have the `afl2` data frame which contains the variables that we’re interested in. What we want to do is have R draw boxplots for the `margin` variable, plotted separately for each separate `year` . The way to do this using the `boxplot()` function is to input a `formula` rather than a variable as the input. In this case, the formula we want is `margin ~ year` . So our boxplot command now looks like this. The result is shown in Figure 6.20.99
```
boxplot( formula = margin ~ year,
data = afl2
)
```
Even this, the default version of the plot, gives a sense of why it’s sometimes useful to choose boxplots instead of histograms. Even before taking the time to turn this basic output into something more readable, it’s possible to get a good sense of what the data look like from year to year without getting overwhelmed with too much detail. Now imagine what would have happened if I’d tried to cram 24 histograms into this space: no chance at all that the reader is going to learn anything useful.
That being said, the default boxplot leaves a great deal to be desired in terms of visual clarity. The outliers are too visually prominent, the dotted lines look messy, and the interesting content (i.e., the behaviour of the median and the interquartile range across years) gets a little obscured. Fortunately, this is easy to fix, since we’ve already covered a lot of tools you can use to customise your output. After playing around with several different versions of the plot, the one I settled on is shown in Figure 6.21. The command I used to produce it is long, but not complicated:
```
boxplot( formula = margin ~ year, # the formula
data = afl2, # the data set
xlab = "AFL season", # x axis label
ylab = "Winning Margin", # y axis label
frame.plot = FALSE, # don't draw a frame
staplewex = 0, # don't draw staples
staplecol = "white", # (fixes a tiny display issue)
boxwex = .75, # narrow the boxes slightly
boxfill = "grey80", # lightly shade the boxes
whisklty = 1, # solid line for whiskers
whiskcol = "grey70", # dim the whiskers
boxcol = "grey70", # dim the box borders
outcol = "grey70", # dim the outliers
outpch = 20, # outliers as solid dots
outcex = .5, # shrink the outliers
medlty = "blank", # no line for the medians
medpch = 20, # instead, draw solid dots
medlwd = 1.5 # make them larger
)
```
Of course, given that the command is that long, you might have guessed that I didn’t spend ages typing all that rubbish in over and over again. Instead, I wrote a script, which I kept tweaking until it produced the figure that I wanted. We’ll talk about scripts later in Section 8.1, but given the length of the command I thought I’d remind you that there’s an easier way of trying out different commands than typing them all in over and over.
## 6.6 Scatterplots
Scatterplots are a simple but effective tool for visualising data. We’ve already seen scatterplots in this chapter, when using the `plot()` function to draw the `Fibonacci` variable as a collection of dots (Section 6.2). However, for the purposes of this section I have a slightly different notion in mind. Instead of just plotting one variable, what I want to do with my scatterplot is display the relationship between two variables, like we saw with the figures in the section on correlation (Section 5.7). It’s this latter application that we usually have in mind when we use the term “scatterplot”. In this kind of plot, each observation corresponds to one dot: the horizontal location of the dot plots the value of the observation on one variable, and the vertical location displays its value on the other variable. In many situations you don’t really have a clear opinion about what the causal relationship is (e.g., does A cause B, or does B cause A, or does some other variable C control both A and B). If that’s the case, it doesn’t really matter which variable you plot on the x-axis and which one you plot on the y-axis. However, in many situations you do have a pretty strong idea which variable you think is most likely to be causal, or at least you have some suspicions in that direction. If so, then it’s conventional to plot the cause variable on the x-axis, and the effect variable on the y-axis. With that in mind, let’s look at how to draw scatterplots in R, using the same `parenthood` data set (i.e. `parenthood.Rdata` ) that I used when introducing the idea of correlations. Suppose my goal is to draw a scatterplot displaying the relationship between the amount of sleep that I get ( `dan.sleep` ) and how grumpy I am the next day ( `dan.grump` ). As you might expect given our earlier use of `plot()` to display the `Fibonacci` data, the function that we use is the `plot()` function, but because it’s a generic function all the hard work is still being done by the `plot.default()` function. In any case, there are two different ways in which we can get the plot that we’re after. The first way is to specify the name of the variable to be plotted on the `x` axis and the variable to be plotted on the `y` axis. When we do it this way, the command looks like this:
```
plot( x = parenthood$dan.sleep, # data on the x-axis
y = parenthood$dan.grump # data on the y-axis
)
```
The second way to do it is to use a “formula and data frame” format, but I’m going to avoid using it.100 For now, let’s just stick with the `x` and `y` version. If we do this, the result is the very basic scatterplot shown in Figure 6.22. This serves fairly well, but there’s a few customisations that we probably want to make in order to have this work properly. As usual, we want to add some labels, but there’s a few other things we might want to do as well. Firstly, it’s sometimes useful to rescale the plots. In Figure 6.22 R has selected the scales so that the data fall neatly in the middle. But, in this case, we happen to know that the grumpiness measure falls on a scale from 0 to 100, and the hours slept falls on a natural scale between 0 hours and about 12 or so hours (the longest I can sleep in real life). So the command I might use to draw this is:
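Here’s a sketch of the kind of command I have in mind; the axis labels and the exact limits are my choices for illustration rather than anything mandated by the data:

```
plot( x = parenthood$dan.sleep,        # data on the x-axis
      y = parenthood$dan.grump,        # data on the y-axis
      xlab = "My sleep (hours)",       # x-axis label
      ylab = "My grumpiness (0-100)",  # y-axis label
      xlim = c(0,12),                  # scale the x-axis from 0 to 12 hours
      ylim = c(0,100)                  # scale the y-axis from 0 to 100
)
```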
This command produces the scatterplot in Figure 6.23, or at least very nearly. What it doesn’t do is draw the line through the middle of the points. Sometimes it can be very useful to do this, and I can do so using `lines()` , which is a low level plotting function. Better yet, the arguments that I need to specify are pretty much the exact same ones that I use when calling the `plot()` function. That is, suppose that I want to draw a line that goes from the point (4,93) to the point (9.5,37). Then the `x` locations can be specified by the vector `c(4,9.5)` and the `y` locations correspond to the vector `c(93,37)` . In other words, I use this command:
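A sketch of that call looks like this; the coordinates are the ones just described, and the line width is an optional extra of my own:

```
lines( x = c(4, 9.5),   # horizontal coordinates of the two endpoints
       y = c(93, 37),   # vertical coordinates of the two endpoints
       lwd = 2          # optionally, draw a slightly thicker line
)
```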
And when I do so, R plots the line over the top of the plot that I drew using the previous command. In most realistic data analysis situations you absolutely don’t want to just guess where the line through the points goes, since there’s about a billion different ways in which you can get R to do a better job. However, it does at least illustrate the basic idea.
One possibility, if you do want to get R to draw nice clean lines through the data for you, is to use the `scatterplot()` function in the `car` package. Before we can use `scatterplot()` we need to load the package: `library( car )`
Having done so, we can now use the function. The command we need is this one:
```
scatterplot( dan.grump ~ dan.sleep,
data = parenthood,
smooth = FALSE
)
```
The first two arguments should be familiar: the first input is a formula
telling R what variables to plot,101 and the second specifies a `data` frame. The third argument `smooth` I’ve set to `FALSE` to stop the `scatterplot()` function from drawing a fancy “smoothed” trendline (since it’s a bit confusing to beginners). The scatterplot itself is shown in Figure 6.24. As you can see, it’s not only drawn the scatterplot, but it’s also drawn boxplots for each of the two variables, as well as a simple line of best fit showing the relationship between the two variables.
### 6.6.1 More elaborate options
Often you find yourself wanting to look at the relationships between several variables at once. One useful tool for doing so is to produce a scatterplot matrix, analogous to the correlation matrix.
```
cor( x = parenthood ) # calculate correlation matrix
```
We can get the corresponding scatterplot matrix by using the `pairs()` function:102
```
pairs( x = parenthood ) # draw corresponding scatterplot matrix
```
The output of the `pairs()` command is shown in Figure 6.25. An alternative way of calling the `pairs()` function, which can be useful in some situations, is to specify the variables to include using a one-sided formula. For instance, this
```
pairs( formula = ~ dan.sleep + baby.sleep + dan.grump,
data = parenthood
)
```
would produce a \(3 \times 3\) scatterplot matrix that only compares `dan.sleep` , `dan.grump` and `baby.sleep` . Obviously, the first version is much easier, but there are cases where you really only want to look at a few of the variables, so it’s nice to use the formula interface.
## 6.7 Bar graphs
Another form of graph that you often want to plot is the bar graph. The main function that you can use in R to draw them is the `barplot()` function.103 And to illustrate the use of the function, I’ll use the `afl.finalists` variable that I introduced in Section 5.1.7. What I want to do is draw a bar graph that displays the number of finals that each team has played in over the time spanned by the `afl` data set. So, let’s start by creating a vector that contains this information. I’ll use the `tabulate()` function to do this (which will be discussed properly in Section 7.1), since it creates a simple numeric vector:
```
freq <- tabulate( afl.finalists )
print( freq )
```
```
## [1] 26 25 26 28 32 0 6 39 27 28 28 17 6 24 26 38 24
```
This isn’t exactly the prettiest of frequency tables, of course. I’m only doing it this way so that you can see the `barplot()` function in its “purest” form: when the input is just an ordinary numeric vector. That being said, I’m obviously going to need the team names to create some labels, so let’s create a variable with those. I’ll do this using the `levels()` function, which outputs the names of all the levels of a factor (see Section 4.7):
```
teams <- levels( afl.finalists )
print( teams )
```
```
## [1] "Adelaide" "Brisbane" "Carlton"
## [4] "Collingwood" "Essendon" "Fitzroy"
## [7] "Fremantle" "Geelong" "Hawthorn"
## [10] "Melbourne" "North Melbourne" "Port Adelaide"
## [13] "Richmond" "<NAME>" "Sydney"
## [16] "West Coast" "Western Bulldogs"
```
Okay, so now that we have the information we need, let’s draw our bar graph. The main argument that you need to specify for a bar graph is the `height` of the bars, which in our case correspond to the values stored in the `freq` variable:
```
barplot( height = freq ) # specifying the argument name
barplot( freq ) # the lazier version
```
Either of these two commands will produce the simple bar graph shown in Figure 6.26.
As you can see, R has drawn a pretty minimal plot. It doesn’t have any labels, obviously, because we didn’t actually tell the `barplot()` function what the labels are! To do this, we need to specify the `names.arg` argument. The `names.arg` argument needs to be a vector of character strings containing the text that needs to be used as the label for each of the items. In this case, the `teams` vector is exactly what we need, so the command we’re looking for is:
```
barplot( height = freq, names.arg = teams )
```
This is an improvement, but not much of an improvement. R has only included a few of the labels, because it can’t fit them in the plot. This is the same behaviour we saw earlier with the multiple-boxplot graph in Figure 6.20. However, in Figure 6.20 it wasn’t an issue: it’s pretty obvious from inspection that the two unlabelled plots in between 1987 and 1990 must correspond to the data from 1988 and 1989. However, the fact that `barplot()` has omitted the names of every team in between Adelaide and Fitzroy is a lot more problematic. The simplest way to fix this is to rotate the labels, so that the text runs vertically not horizontally. To do this, we need to set the `las` parameter, which I discussed briefly in Section 6.2. What I’ll do is tell R to rotate the text so that it’s always perpendicular to the axes (i.e., I’ll set `las = 2` ). When I do that, as per the following command…
```
barplot(height = freq, # the frequencies
names.arg = teams, # the label
las = 2) # rotate the labels
```
… the result is the bar graph shown in Figure 6.28. We’ve fixed the problem, but we’ve created a new one: the axis labels don’t quite fit anymore. To fix this, we have to be a bit cleverer again. A simple fix would be to use shorter names rather than the full name of all teams, and in many situations that’s probably the right thing to do. However, at other times you really do need to create a bit more space to add your labels, so I’ll show you how to do that.
### 6.7.1 Changing global settings using par()
Altering the margins of the plot is actually a somewhat more complicated exercise than you might think. In principle it’s a very simple thing to do: the size of the margins is governed by a graphical parameter called `mar` , so all we need to do is alter this parameter. First, let’s look at what the `mar` argument specifies. The `mar` argument is a vector containing four numbers, specifying the amount of space at the bottom, the left, the top and then the right. The units are “number of ‘lines’”. The default value for `mar` is
```
c(5.1, 4.1, 4.1, 2.1)
```
, meaning that R leaves 5.1 “lines” empty at the bottom, 4.1 lines on the left and at the top, and only 2.1 lines on the right. In order to make more room at the bottom, what I need to do is change the first of these numbers. A value of 10.1 should do the trick. So far this doesn’t seem any different to the other graphical parameters that we’ve talked about. However, because of the way that the traditional graphics system in R works, you need to specify what the margins will be before calling your high-level plotting function. Unlike the other cases we’ve seen, you can’t treat `mar` as if it were just another argument in your plotting function. Instead, you have to use the `par()` function to change the graphical parameters beforehand, and only then try to draw your figure. In other words, the first thing I would do is this:
```
par( mar = c( 10.1, 4.1, 4.1, 2.1) )
```
There’s no visible output here, but behind the scenes R has changed the graphical parameters associated with the current device (remember, in R terminology all graphics are drawn onto a “device”). Now that this is done, we could use the exact same command as before, but this time you’d see that the labels all fit, because R now leaves twice as much room for the labels at the bottom. However, since I’ve now figured out how to get the labels to display properly, I might as well play around with some of the other options, all of which are things you’ve seen before:
```
barplot( height = freq,
names.arg = teams,
las=2,
ylab = "Number of Finals",
main = "Finals Played, 1987-2010",
density = 10,
angle = 20)
```
However, one thing to remember about the `par()` function is that it doesn’t just change the graphical parameters for the current plot. Rather, the changes pertain to any subsequent plot that you draw onto the same device. This might be exactly what you want, in which case there’s no problem. But if not, you need to reset the graphical parameters to their original settings. To do this, you can either close the device (e.g., close the window, or click the “Clear All” button in the Plots panel in Rstudio) or you can reset the graphical parameters to their original values, using a command like this:
```
par( mar = c(5.1, 4.1, 4.1, 2.1) )
```
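A slightly tidier pattern, sketched below, relies on the fact that `par()` invisibly returns the old values of whatever parameters you change, so you can stash them and restore them afterwards:

```
opar <- par( mar = c(10.1, 4.1, 4.1, 2.1) )            # change the margins, keeping the old settings
barplot( height = freq, names.arg = teams, las = 2 )   # draw the plot with the wider bottom margin
par( opar )                                            # restore the previous margins
```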
## 6.8 Saving image files using R and Rstudio
Hold on, you might be thinking. What’s the good of being able to draw pretty pictures in R if I can’t save them and send them to friends to brag about how awesome my data is? How do I save the picture? This is another one of those situations where the easiest thing to do is to use the RStudio tools.
If you’re running R through Rstudio, then the easiest way to save your image is to click on the “Export” button in the Plot panel (i.e., the area in Rstudio where all the plots have been appearing). When you do that you’ll see a menu that contains the options “Save Plot as PDF” and “Save Plot as Image”. Either version works. Both will bring up dialog boxes that give you a few options that you can play with, but besides that it’s pretty simple.
This works pretty nicely for most situations. So, unless you’re filled with a burning desire to learn the low level details, feel free to skip the rest of this section.
### 6.8.1 The ugly details (advanced)
As I say, the menu-based options should be good enough for most people most of the time. However, one day you might want to be a bit more sophisticated, and make use of R’s image writing capabilities at a lower level. In this section I’ll give you a very basic introduction to this. In all honesty, this barely scratches the surface, but it will help a little bit in getting you started if you want to learn the details.
Okay, as I hinted earlier, whenever you’re drawing pictures in R you’re deemed to be drawing to a device of some kind. There are devices that correspond to a figure drawn on screen, and there are devices that correspond to graphics files that R will produce for you. For the purposes of this section I’ll assume that you’re using the default application in either Windows or Mac OS, not Rstudio. The reason for this is that my experience with the graphical device provided by Rstudio has led me to suspect that it still has a bunch on non-standard (or possibly just undocumented) features, and so I don’t quite trust that it always does what I expect. I’ve no doubt they’ll smooth it out later, but I can honestly say that I don’t quite get what’s going on with the `RStudioGD` device. In any case, we can ask R to list all of the graphics devices that currently exist, simply by using the command `dev.list()` . If there are no figure windows open, then you’ll see this:
```
dev.list()
# NULL
```
which just means that R doesn’t have any graphics devices open. However, if you’ve just drawn a histogram and you type the same command, R will now give you a different answer. For instance, if you’re using Windows:
```
hist( afl.margins )
dev.list()
# windows
# 2
```
What this means is that there is one graphics device (device 2) that is currently open, and it’s a figure window. If you did the same thing on a Mac, you get basically the same answer, except that the name of the device would be `quartz` rather than `windows` . If you had several graphics windows open (which, incidentally, you can do by using the `dev.new()` command) then you’d see something like this:
```
dev.list()
# windows windows windows
# 2 3 4
```
Okay, so that’s the basic idea behind graphics devices. The key idea here is that graphics files (like JPEG images etc) are also graphics devices as far as R is concerned. So what you want to do is to copy the contents of one graphics device to another one. There’s a command called `dev.copy()` that does this, but what I’ll explain to you is a simpler one called `dev.print()` . It’s pretty simple:
```
dev.print( device = jpeg, # what are we printing to?
filename = "thisfile.jpg", # name of the image file
width = 480, # how many pixels wide should it be
height = 300 # how many pixels high should it be
)
```
This takes the “active” figure window, copies it to a jpeg file (which R treats as a device) and then closes that device. The
```
filename = "thisfile.jpg"
```
part tells R what to name the graphics file, and the `width = 480` and `height = 300` arguments tell R to draw an image that is 300 pixels high and 480 pixels wide. If you want a different kind of file, just change the device argument from `jpeg` to something else. R has devices for `png` , `tiff` and `bmp` that all work in exactly the same way as the `jpeg` command, but produce different kinds of files. Actually, for simple cartoonish graphics like this histogram, you’d be better advised to use PNG or TIFF over JPEG. The JPEG format is very good for natural images, but is wasteful for simple line drawings. The information above probably covers most things you might want to do. However, if you want more information about what kinds of options you can specify using R, have a look at the help documentation by typing `?jpeg` or `?tiff` or whatever.
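Another standard pattern, which isn’t the `dev.print()` route described above but is worth knowing about, is to open the file device yourself, draw into it, and then close it. A minimal sketch:

```
png( filename = "thisfile.png", width = 480, height = 300 )  # open a png file as the active device
hist( afl.margins )                                          # draw straight into the file
dev.off()                                                    # close the device, which writes the file
```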
## 6.9 Summary
Perhaps I’m a simple minded person, but I love pictures. Every time I write a new scientific paper, one of the first things I do is sit down and think about what the pictures will be. In my head, an article is really just a sequence of pictures, linked together by a story. All the rest of it is just window dressing. What I’m really trying to say here is that the human visual system is a very powerful data analysis tool. Give it the right kind of information and it will supply a human reader with a massive amount of knowledge very quickly. Not for nothing do we have the saying “a picture is worth a thousand words”. With that in mind, I think that this is one of the most important chapters in the book. The topics covered were:
* Basic overview to R graphics. In Section 6.1 we talked about how graphics in R are organised, and then moved on to the basics of how they’re drawn in Section 6.2.
* Common plots. Much of the chapter was focused on standard graphs that statisticians like to produce: histograms (Section 6.3), stem and leaf plots (Section 6.4), boxplots (Section 6.5), scatterplots (Section 6.6) and bar graphs (Section 6.7).
* Saving image files. The last part of the chapter talked about how to export your pictures (Section 6.8)
One final thing to point out. At the start of the chapter I mentioned that R has several completely distinct systems for drawing figures. In this chapter I’ve focused on the traditional graphics system. It’s the easiest one to get started with: you can draw a histogram with a command as simple as `hist(x)` . However, it’s not the most powerful tool for the job, and after a while most R users start looking to shift to fancier systems. One of the most popular graphics systems is provided by the `ggplot2` package, which is loosely based on “The grammar of graphics” (Wilkinson et al. 2006). It’s not for novices: you need to have a pretty good grasp of R before you can start using it, and even then it takes a while to really get the hang of it. But when you’re finally at that stage, it’s worth taking the time to teach yourself, because it’s a much cleaner system.
* The origin of this quote is Tufte’s lovely book The Visual Display of Quantitative Information.↩
* I should add that this isn’t unique to R. Like everything in R there’s a pretty steep learning curve to learning how to draw graphs, and like always there’s a massive payoff at the end in terms of the quality of what you can produce. But to be honest, I’ve seen the same problems show up regardless of what system people use. I suspect that the hardest thing to do is to force yourself to take the time to think deeply about what your graphs are doing. I say that in full knowledge that only about half of my graphs turn out as well as they ought to. Understanding what makes a good graph is easy: actually designing a good graph is hard.↩
* Or, since you can always use the up and down keys to scroll through your recent command history, you can just pull up your most recent commands and edit them to fix your mistake. It becomes even easier once you start using scripts (Section 8.1), since all you have to do is edit your script and then run it again.↩
* Of course, even that is a slightly misleading description, since some R graphics tools make use of external graphical rendering systems like OpenGL (e.g., the `rgl` package). I absolutely will not be talking about OpenGL or the like in this book, but as it happens there is one graph in this book that relies on them: Figure 15.6.↩
* The low-level function that does this is called `title()` in case you ever need to know, and you can type `?title` to find out a bit more detail about what these arguments do.↩
* On the off chance that this isn’t enough freedom for you, you can select a colour directly as a “red, green, blue” specification using the `rgb()` function, or as a “hue, saturation, value” specification using the `hsv()` function.↩
* Also, there’s a low level function called `axis()` that allows a lot more control over the appearance of the axes.↩
* R being what it is, it’s no great surprise that there’s also a `fivenum()` function that does much the same thing.↩
* I realise there’s a kind of logic to the way R names are constructed, but they still sound dumb. When I typed this sentence, all I could think was that it sounded like the name of a kids movie if it had been written by <NAME>: “The frabjous gambolles of Staplewex and Whisklty” or something along those lines.↩
* Sometimes it’s convenient to have the boxplot automatically label the outliers for you. The original `boxplot()` function doesn’t allow you to do this; however, the `Boxplot()` function in the `car` package does. The design of the `Boxplot()` function is very similar to `boxplot()` . It just adds a few new arguments that allow you to tweak the labelling scheme. I’ll leave it to the reader to check this out.↩
* Sort of. The game was played in Launceston, which is a de facto home away from home for Hawthorn.↩
* Contrast this situation with the next largest winning margin in the data set, which was Geelong’s 108 point demolition of Richmond in round 6 at their home ground, Kardinia Park. Geelong have been one of the most dominant teams over the last several years, a period during which they strung together an incredible 29-game winning streak at Kardinia Park. Richmond have been useless for several years. This is in no meaningful sense an outlier. Geelong have been winning by these margins (and Richmond losing by them) for quite some time. Frankly I’m surprised that the result wasn’t more lopsided: as happened to Melbourne in 2011 when Geelong won by a modest 186 points.↩
* Actually, there’s other ways to do this. If the input argument `x` is a list object (see Section 4.9), the `boxplot()` function will draw a separate boxplot for each variable in that list. Relatedly, since the `plot()` function – which we’ll discuss shortly – is a generic (see Section 4.11), you might not be surprised to learn that one of its special cases is a boxplot: specifically, if you use `plot()` where the first argument `x` is a factor and the second argument `y` is numeric, then the result will be a boxplot, showing the values in `y` , with a separate boxplot for each level. For instance, something like `plot(x = afl2$year, y = afl2$margin)` would work.↩
* The reason is that there’s an annoying design flaw in the way the `plot()` function handles this situation. The problem is that the `plot.formula()` function uses different names for the arguments than the `plot()` function expects. As a consequence, you can’t specify the formula argument by name. If you just specify a formula as the first argument without using the name it works fine, because the `plot()` function thinks the formula corresponds to the `x` argument, and the `plot.formula()` function thinks it corresponds to the `formula` argument; and surprisingly, everything works nicely. But the moment that you, the user, try to be unambiguous about the name, one of those two functions is going to cry.↩
* You might be wondering why I haven’t specified the argument name for the formula. The reason is that there’s a bug in how the `scatterplot()` function is written: under the hood there’s one function that expects the argument to be named `x` and another one that expects it to be called `formula` . I don’t know why the function was written this way, but it’s not an isolated problem: this particular kind of bug repeats itself in a couple of other functions (you’ll see it again in Chapter 13). The solution in such cases is to omit the argument name: that way, one function “thinks” that you’ve specified `x` and the other one “thinks” you’ve specified `formula` and everything works the way it’s supposed to. It’s not a great state of affairs, I’ll admit, but it sort of works.↩
* Yet again, we could have produced this output using the `plot()` function: when the `x` argument is a data frame containing numeric variables only, then the output is a scatterplot matrix. So, once again, what I could have done is just type `plot( parenthood )` .↩
* Once again, it’s worth noting the link to the generic `plot()` function. If the `x` argument to `plot()` is a factor (and no `y` argument is given), the result is a bar graph. So you could use `plot( afl.finalists )` and get the same output as `barplot( afl.finalists )` .↩
# Chapter 7 Pragmatic matters
The garden of life never seems to confine itself to the plots philosophers have laid out for its convenience. Maybe a few more tractors would do the trick.
–<NAME>
This is a somewhat strange chapter, even by my standards. My goal in this chapter is to talk a bit more honestly about the realities of working with data than you’ll see anywhere else in the book. The problem with real world data sets is that they are messy. Very often the data file that you start out with doesn’t have the variables stored in the right format for the analysis you want to do. Sometimes there might be a lot of missing values in your data set. Sometimes you only want to analyse a subset of the data. Et cetera. In other words, there’s a lot of data manipulation that you need to do, just to get your data set into the format that you need. The purpose of this chapter is to provide a basic introduction to all these pragmatic topics. Although the chapter is motivated by the kinds of practical issues that arise when manipulating real data, I’ll stick with the practice that I’ve adopted through most of the book and rely on very small, toy data sets that illustrate the underlying issue. Because this chapter is essentially a collection of “tricks” and doesn’t tell a single coherent story, it may be useful to start with a list of topics:
As you can see, the list of topics that the chapter covers is pretty broad, and there’s a lot of content there. Even though this is one of the longest and hardest chapters in the book, I’m really only scratching the surface of several fairly different and important topics. My advice, as usual, is to read through the chapter once and try to follow as much of it as you can. Don’t worry too much if you can’t grasp it all at once, especially the later sections. The rest of the book is only lightly reliant on this chapter, so you can get away with just understanding the basics. However, what you’ll probably find is that later on you’ll need to flick back to this chapter in order to understand some of the concepts that I refer to here.
## 7.1 Tabulating and cross-tabulating data
A very common task when analysing data is the construction of frequency tables, or cross-tabulation of one variable against another. There are several functions that you can use in R for that purpose. In this section I’ll illustrate the use of three functions – `table()` , `xtabs()` and `tabulate()` – though there are other options (e.g., `ftable()` ) available.
### 7.1.1 Creating tables from vectors
Let’s start with a simple example. As the father of a small child, I naturally spend a lot of time watching TV shows like In the Night Garden. In the `nightgarden.Rdata` file, I’ve transcribed a short section of the dialogue. The file contains two variables, `speaker` and `utterance` , and when we take a look at the data, it becomes very clear what happened to my sanity.
```
library(lsr)
load(file.path(projecthome,"data","nightgarden.Rdata"))
who()
```
```
## -- Name -- -- Class -- -- Size --
## afl.finalists factor 400
## afl.margins numeric 176
## afl2 data.frame 4296 x 2
## colour logical 1
## d.cor numeric 1
## describeImg list 0
## effort data.frame 10 x 2
## emphCol character 1
## emphColLight character 1
## emphGrey character 1
## eps logical 1
## Fibonacci numeric 7
## freq integer 17
## generateRLineTypes function
## generateRPointShapes function
## height numeric 1
## old list 66
## oneCorPlot function
## out.0 data.frame 100 x 2
## out.1 data.frame 100 x 2
## out.2 data.frame 100 x 2
## parenthood data.frame 100 x 4
## plotOne function
## projecthome character 1
## speaker character 10
## suspicious.cases logical 176
## teams character 17
## utterance character 10
## width numeric 1
## X1 numeric 11
## X2 numeric 11
## X3 numeric 11
## X4 numeric 11
## Y1 numeric 11
## Y2 numeric 11
## Y3 numeric 11
## Y4 numeric 11
```
`print( speaker )`
```
## [1] "upsy-daisy" "upsy-daisy" "upsy-daisy" "upsy-daisy" "tombliboo"
## [6] "tombliboo" "makka-pakka" "makka-pakka" "makka-pakka" "makka-pakka"
```
`print( utterance )`
With these as my data, one task I might find myself needing to do is construct a frequency count of the number of words each character speaks during the show. The `table()` function provides a simple way to do this. The basic usage of the `table()` function is as follows: `table(speaker)`
```
## speaker
## makka-pakka tombliboo upsy-daisy
## 4 2 4
```
The output here tells us on the first line that what we’re looking at is a tabulation of the `speaker` variable. On the second line it lists all the different speakers that exist in the data, and on the third line it tells you how many times that speaker appears in the data. In other words, it’s a frequency table.105 Notice that in the command above I didn’t name the argument, since `table()` is another function that makes use of unnamed arguments. You just type in a list of the variables that you want R to tabulate, and it tabulates them. For instance, if I type in the name of two variables, what I get as the output is a cross-tabulation:
```
table(speaker, utterance)
```
When interpreting this table, remember that these are counts: so the fact that the first row and second column corresponds to a value of 2 indicates that Makka-Pakka (row 1) says “onk” (column 2) twice in this data set. As you’d expect, you can produce three way or higher order cross tabulations just by adding more objects to the list of inputs. However, I won’t discuss that in this section.
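Just to give a flavour of what that looks like, here is a hedged sketch with a made-up third variable; the `episode` variable below is invented purely for illustration and isn’t part of the nightgarden data:

```
episode <- rep( c("ep1","ep2"), each = 5 )   # a hypothetical grouping variable
table( speaker, utterance, episode )         # a three-way cross-tabulation
```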
### 7.1.2 Creating tables from data frames
Most of the time your data are stored in a data frame, not kept as separate variables in the workspace. Let’s create one:
```
itng <- data.frame( speaker, utterance )
itng
```
There’s a couple of options under these circumstances. Firstly, if you just want to cross-tabulate all of the variables in the data frame, then it’s really easy:
`table(itng)`
However, it’s often the case that you want to select particular variables from the data frame to tabulate. This is where the `xtabs()` function is useful. In this function, you input a one sided `formula` in order to list all the variables you want to cross-tabulate, and the name of the `data` frame that stores the data:
```
xtabs( formula = ~ speaker + utterance, data = itng )
```
Clearly, this is a totally unnecessary command in the context of the `itng` data frame, but in most situations when you’re analysing real data this is actually extremely useful, since your data set will almost certainly contain lots of variables and you’ll only want to tabulate a few of them at a time.
### 7.1.3 Converting a table of counts to a table of proportions
The tabulation commands discussed so far all construct a table of raw frequencies: that is, a count of the total number of cases that satisfy certain conditions. However, often you want your data to be organised in terms of proportions rather than counts. This is where the `prop.table()` function comes in handy. It has two arguments:
* `x`. The frequency table that you want to convert.
* `margin`. Which “dimension” do you want to calculate proportions for. By default, R assumes you want the proportion to be expressed as a fraction of all possible events. See examples for details.
To see how this works:
```
itng.table <- table(itng) # create the table, and assign it to a variable
itng.table # display the table again, as a reminder
```
```
prop.table( x = itng.table ) # express as proportion:
```
Notice that there were 10 observations in our original data set, so all that R has done here is divide all our raw frequencies by 10. That’s a sensible default, but more often you actually want to calculate the proportions separately by row ( `margin = 1` ) or by column ( `margin = 2` ). Again, this is most clearly seen by looking at examples:
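```
prop.table( x = itng.table, margin = 1 )   # proportions calculated separately for each row
```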
Notice that each row now sums to 1, but that’s not true for each column. What we’re looking at here is the proportions of utterances made by each character. In other words, 50% of Makka-Pakka’s utterances are “pip”, and the other 50% are “onk”. Let’s contrast this with the following command:
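```
prop.table( x = itng.table, margin = 2 )   # proportions calculated separately for each column
```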
Now the columns all sum to 1 but the rows don’t. In this version, what we’re seeing is the proportion of characters associated with each utterance. For instance, whenever the utterance “ee” is made (in this data set), 100% of the time it’s a Tombliboo saying it.
### 7.1.4 Low level tabulation
One final function I want to mention is the `tabulate()` function, since this is actually the low-level function that does most of the hard work. It takes a numeric vector as input, and returns the frequencies as output:
```
some.data <- c(1,2,3,1,1,3,1,1,2,8,3,1,2,4,2,3,5,2)
tabulate(some.data)
```
```
## [1] 6 5 4 1 1 0 0 1
```
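If you’re wondering how this relates to the `table()` function, a quick comparison on the same vector makes the difference clear. This is just a sketch for illustration: the point is that `tabulate()` reports a count for every integer from 1 up to the maximum value, whereas `table()` only lists the values that actually occur:

```
table( some.data )      # labels the observed values; 6 and 7 don't appear at all
tabulate( some.data )   # positions 6 and 7 are reported as zero counts
```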
## 7.2 Transforming and recoding a variable
It’s not uncommon in real world data analysis to find that one of your variables isn’t quite equivalent to the variable that you really want. For instance, it’s often convenient to take a continuous-valued variable (e.g., age) and break it up into a smallish number of categories (e.g., younger, middle, older). At other times, you may need to convert a numeric variable into a different numeric variable (e.g., you may want to analyse the absolute value of the original variable). In this section I’ll describe a few key tricks that you can make use of to do this.
### 7.2.1 Creating a transformed variable
The first trick to discuss is the idea of transforming a variable. Taken literally, anything you do to a variable is a transformation, but in practice what it usually means is that you apply a relatively simple mathematical function to the original variable, in order to create a new variable that either (a) provides a better way of describing the thing you’re actually interested in or (b) is more closely in agreement with the assumptions of the statistical tests you want to do. Since – at this stage – I haven’t talked about statistical tests or their assumptions, I’ll show you an example based on the first case.
To keep the explanation simple, the variable we’ll try to transform ( `likert.raw` ) isn’t inside a data frame, though in real life it almost certainly would be. However, I think it’s useful to start with an example that doesn’t use data frames because it illustrates the fact that you already know how to do variable transformations. To see this, let’s go through an example. Suppose I’ve run a short study in which I ask 10 people a single question: On a scale of 1 (strongly disagree) to 7 (strongly agree), to what extent do you agree with the proposition that “Dinosaurs are awesome”?
Now let’s load and look at the data. The data file `likert.Rdata` contains a single variable that contains the raw Likert-scale responses:
```
load(file.path(projecthome,"data","likert.Rdata"))
likert.raw
```
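```
## [1] 1 7 3 4 4 4 2 6 5 5
```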
However, if you think about it, this isn’t the best way to represent these responses. Because of the fairly symmetric way that we set up the response scale, there’s a sense in which the midpoint of the scale should have been coded as 0 (no opinion), and the two endpoints should be \(+3\) (strongly agree) and \(-3\) (strongly disagree). By recoding the data in this way, it’s a bit more reflective of how we really think about the responses. The recoding here is trivially easy: we just subtract 4 from the raw scores:
```
likert.centred <- likert.raw - 4
likert.centred
```
```
## [1] -3 3 -1 0 0 0 -2 2 1 1
```
One reason why it might be useful to have the data in this format is that there are a lot of situations where you might prefer to analyse the strength of the opinion separately from the direction of the opinion. We can do two different transformations on this `likert.centred` variable in order to distinguish between these two different concepts. Firstly, to compute an `opinion.strength` variable, we want to take the absolute value of the centred data (using the `abs()` function that we’ve seen previously), like so:
```
opinion.strength <- abs( likert.centred )
opinion.strength
```
```
## [1] 3 3 1 0 0 0 2 2 1 1
```
Secondly, to compute a variable that contains only the direction of the opinion and ignores the strength, we can use the `sign()` function. If you type `?sign` you’ll see that this function is really simple: all negative numbers are converted to \(-1\), all positive numbers are converted to \(1\) and zero stays as \(0\). So, when we apply the `sign()` function we obtain the following:
```
opinion.dir <- sign( likert.centred )
opinion.dir
```
And we’re done. We now have three shiny new variables, all of which are useful transformations of the original `likert.raw` data. All of this should seem pretty familiar to you. The tools that you use to do regular calculations in R (e.g., Chapters 3 and 4) are very much the same ones that you use to transform your variables! To that end, in Section 7.3 I’ll revisit the topic of doing calculations in R because there are a lot of other functions and operations that are worth knowing about.
Before moving on, you might be curious to see what these calculations look like if the data had started out in a data frame. To that end, it may help to note that the following example does all of the calculations using variables inside a data frame, and stores the variables created inside it:
```
df <- data.frame( likert.raw ) # create data frame
df$likert.centred <- df$likert.raw - 4 # create centred data
df$opinion.strength <- abs( df$likert.centred ) # create strength variable
df$opinion.dir <- sign( df$likert.centred ) # create direction variable
df # print the final data frame:
```
```
## likert.raw likert.centred opinion.strength opinion.dir
## 1 1 -3 3 -1
## 2 7 3 3 1
## 3 3 -1 1 -1
## 4 4 0 0 0
## 5 4 0 0 0
## 6 4 0 0 0
## 7 2 -2 2 -1
## 8 6 2 2 1
## 9 5 1 1 1
## 10 5 1 1 1
```
In other words, the commands you use are basically the same ones as before: it’s just that every time you want to read a variable from the data frame or write to the data frame, you use the `$` operator. That’s the easiest way to do it, though I should make note of the fact that people sometimes make use of the `within()` function to do the same thing. However, since (a) I don’t use the `within()` function anywhere else in this book, and (b) the `$` operator works just fine, I won’t discuss it any further.
### 7.2.2 Cutting a numeric variable into categories
One pragmatic task that arises more often than you’d think is the problem of cutting a numeric variable up into discrete categories. For instance, suppose I’m interested in looking at the age distribution of people at a social gathering:
```
age <- c( 60,58,24,26,34,42,31,30,33,2,9 )
```
In some situations it can be quite helpful to group these into a smallish number of categories. For example, we could group the data into three broad categories: young (0-20), adult (21-40) and older (41-60). This is a quite coarse-grained classification, and the labels that I’ve attached only make sense in the context of this data set (e.g., viewed more generally, a 42 year old wouldn’t consider themselves as “older”). We can slice this variable up quite easily using the `cut()` function.106 To make things a little cleaner, I’ll start by creating a variable that defines the boundaries for the categories:
```
age.breaks <- seq( from = 0, to = 60, by = 20 )
age.breaks
```
`## [1] 0 20 40 60`
and another one for the labels:
```
age.labels <- c( "young", "adult", "older" )
age.labels
```
```
## [1] "young" "adult" "older"
```
Note that there are four numbers in the `age.breaks` variable, but only three labels in the `age.labels` variable; I’ve done this because the `cut()` function requires that you specify the edges of the categories rather than the mid-points. In any case, now that we’ve done this, we can use the `cut()` function to assign each observation to one of these three categories. There are several arguments to the `cut()` function, but the three that we need to care about are:
* `x`. The variable that needs to be categorised.
* `breaks`. This is either a vector containing the locations of the breaks separating the categories, or a number indicating how many categories you want.
* `labels`. The labels attached to the categories. This is optional: if you don’t specify this R will attach a boring label showing the range associated with each category.
Since we’ve already created variables corresponding to the breaks and the labels, the command we need is just:
```
age.group <- cut( x = age, # the variable to be categorised
breaks = age.breaks, # the edges of the categories
labels = age.labels ) # the labels for the categories
```
Note that the output variable here is a factor. In order to see what this command has actually done, we could just print out the `age.group` variable, but I think it’s actually more helpful to create a data frame that includes both the original variable and the categorised one, so that you can see the two side by side:
```
data.frame(age, age.group)
```
```
## age age.group
## 1 60 older
## 2 58 older
## 3 24 adult
## 4 26 adult
## 5 34 adult
## 6 42 older
## 7 31 adult
## 8 30 adult
## 9 33 adult
## 10 2 young
## 11 9 young
```
It can also be useful to tabulate the output, just to see if you’ve got a nice even division of the sample:
`table( age.group )`
```
## age.group
## young adult older
## 2 6 3
```
In the example above, I made all the decisions myself. Much like the `hist()` function that we saw in Chapter 6, if you want to you can delegate a lot of the choices to R. For instance, if you want you can specify the number of categories you want, rather than giving explicit ranges for them, and you can allow R to come up with some labels for the categories. To give you a sense of how this works, have a look at the following example:
```
age.group2 <- cut( x = age, breaks = 3 )
```
With this command, I’ve asked for three categories, but let R make the choices for where the boundaries should be. I won’t bother to print out the `age.group2` variable, because it’s not terribly pretty or very interesting. Instead, all of the important information can be extracted by looking at the tabulated data: `table( age.group2 )`
```
## age.group2
## (1.94,21.3] (21.3,40.7] (40.7,60.1]
## 2 6 3
```
This output takes a little bit of interpretation, but it’s not complicated. What R has done is determine that the lowest age category should run from 1.94 years up to 21.3 years, the second category should run from 21.3 years to 40.7 years, and so on. The formatting on those labels might look a bit funny to those of you who haven’t studied a lot of maths, but it’s pretty simple. When R describes the first category as corresponding to the range \((1.94, 21.3]\) what it’s saying is that the range consists of those numbers that are larger than 1.94 but less than or equal to 21.3. In other words, the weird asymmetric brackets are R’s way of telling you that if there happens to be a value that is exactly equal to 21.3, then it belongs to the first category, not the second one. Obviously, this isn’t actually possible since I’ve only specified the ages to the nearest whole number, but R doesn’t know this and so it’s trying to be precise just in case. This notation is actually pretty standard, but I suspect not everyone reading the book will have seen it before. In any case, those labels are pretty ugly, so it’s usually a good idea to specify your own, meaningful labels to the categories.
Before moving on, I should take a moment to talk a little about the mechanics of the `cut()` function. Notice that R has tried to divide the `age` variable into three roughly equal sized bins. Unless you specify the particular breaks you want, that’s what it will do. But suppose you want to divide the `age` variable into three categories of different size, but with approximately identical numbers of people. How would you do that? Well, if that’s the case, then what you want to do is have the breaks correspond to the 0th, 33rd, 66th and 100th percentiles of the data. One way to do this would be to calculate those values using the `quantile()` function and then use those quantiles as input to the `cut()` function. That’s pretty easy to do, but it does take a couple of lines to type (there’s a sketch of this manual approach at the end of this section). So instead, the `lsr` package has a function called `quantileCut()` that does exactly this:
```
age.group3 <- quantileCut( x = age, n = 3 )
table( age.group3 )
```
```
## age.group3
## (1.94,27.3] (27.3,33.7] (33.7,60.1]
## 4 3 4
```
Notice the difference in the boundaries that the `quantileCut()` function selects. The first and third categories now span an age range of about 25 years each, whereas the middle category has shrunk to a span of only 6 years. There are some situations where this is genuinely what you want (that’s why I wrote the function!), but in general you should be careful. Usually the numeric variable that you’re trying to cut into categories is already expressed in meaningful units (i.e., it’s interval scale), but if you cut it into unequal bin sizes then it’s often very difficult to attach meaningful interpretations to the resulting categories. More generally, regardless of whether you’re using the original `cut()` function or the `quantileCut()` version, it’s important to take the time to figure out whether or not the resulting categories make any sense at all in terms of your research project. If they don’t make any sense to you as meaningful categories, then any data analysis that uses those categories is likely to be just as meaningless. In practice, I’ve noticed that people have a very strong desire to carve their (continuous and messy) data into a few (discrete and simple) categories; and then run analyses using the categorised data instead of the original data.107 I wouldn’t go so far as to say that this is an inherently bad idea, but it does have some fairly serious drawbacks at times, so I would advise some caution if you are thinking about doing it.
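In case you’re curious, here’s a rough sketch of the do-it-yourself version mentioned earlier, calculating the percentiles with `quantile()` and feeding them to `cut()` (the variable names here are just for illustration):

```
age.cutoffs <- quantile( x = age, probs = c(0, 1/3, 2/3, 1) )  # 0th, 33rd, 66th and 100th percentiles
age.group.manual <- cut( x = age,
                         breaks = age.cutoffs,
                         include.lowest = TRUE )  # make sure the smallest age is included
table( age.group.manual )
```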
## 7.3 A few more mathematical functions and operations
In Section 7.2 I discussed the ideas behind variable transformations, and showed that a lot of the transformations that you might want to apply to your data are based on fairly simple mathematical functions and operations, of the kind that we discussed in Chapter 3. In this section I want to return to that discussion, and mention several other mathematical functions and arithmetic operations that I didn’t bother to mention when introducing you to R, but are actually quite useful for a lot of real world data analysis. Table 7.1 gives a brief overview of the various mathematical functions I want to talk about (and some that I already have talked about). Obviously this doesn’t even come close to cataloging the range of possibilities available in R, but it does cover a very wide range of functions that are used in day to day data analysis.
| mathematical function | R function | example input | answer |
| --- | --- | --- | --- |
| square root | `sqrt()` | `sqrt(25)` | 5 |
| absolute value | `abs()` | `abs(-23)` | 23 |
| logarithm (base 10) | `log10()` | `log10(1000)` | 3 |
| logarithm (base e) | `log()` | `log(1000)` | 6.908 |
| exponentiation | `exp()` | `exp(6.908)` | 1000.245 |
| rounding to nearest | `round()` | `round(1.32)` | 1 |
| rounding down | `floor()` | `floor(1.32)` | 1 |
| rounding up | `ceiling()` | `ceiling(1.32)` | 2 |
### 7.3.1 Rounding a number
One very simple transformation that crops up surprisingly often is the need to round a number to the nearest whole number, or to a certain number of significant digits. To start with, let’s assume that we want to round to a whole number. To that end, there are three useful functions in R you want to know about: `round()` , `floor()` and `ceiling()` . The `round()` function just rounds to the nearest whole number. So if you round the number 4.3, it “rounds down” to `4` , like so: `round( x = 4.3 )` `## [1] 4` In contrast, if we want to round the number 4.7, we would round upwards to 5. In everyday life, when someone talks about “rounding”, they usually mean “round to nearest”, so this is the function we use most of the time. However sometimes you have reasons to want to always round up or always round down. If you want to always round down, use the `floor()` function instead; and if you want to force R to round up, then use `ceiling()` . That’s the only difference between the three functions. What if you want to round to a certain number of digits? Let’s suppose you want to round to a fixed number of decimal places, say 2 decimal places. If so, what you need to do is specify the `digits` argument to the `round()` function. That’s pretty straightforward:
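```
round( x = 0.0123, digits = 2 )   # round to two decimal places
```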
`## [1] 0.01` The only subtlety that you need to keep in mind is that sometimes what you want to do is round to 2 significant digits and not to two decimal places. The difference is that, when determining the number of significant digits, zeros don’t count. To see this, let’s apply the `signif()` function instead of the `round()` function:
```
signif( x = 0.0123, digits = 2 )
```
`## [1] 0.012`
This time around, we get an answer of 0.012 because the zeros don’t count as significant digits. Quite often scientific journals will ask you to report numbers to two or three significant digits, so it’s useful to remember the distinction.
### 7.3.2 Modulus and integer division
| operation | operator | example input | answer |
| --- | --- | --- | --- |
| integer division | `%/%` | `42 %/% 10` | 4 |
| modulus | `%%` | `42 %% 10` | 2 |
Since we’re on the topic of simple calculations, there are two other arithmetic operations that I should mention, since they can come in handy when working with real data. These operations are calculating a modulus and doing integer division. They don’t come up anywhere else in this book, but they are worth knowing about. First, let’s consider integer division. Suppose I have $42 in my wallet, and want to buy some sandwiches, which are selling for $10 each. How many sandwiches can I afford108 to buy? The answer is of course 4. Note that it’s not 4.2, since no shop will sell me one-fifth of a sandwich. That’s integer division. In R we perform integer division by using the `%/%` operator: `42 %/% 10` `## [1] 4` Okay, that’s easy enough. What about the modulus? Basically, a modulus is the remainder after integer division, and it’s calculated using the `%%` operator. For the sake of argument, let’s suppose I buy four overpriced $10 sandwiches. If I started out with $42, how much money do I have left? The answer, as both R and common sense tells us, is $2: `42 %% 10` `## [1] 2`
So that’s also pretty easy. There is, however, one subtlety that I need to mention, and this relates to how negative numbers are handled. Firstly, what would happen if I tried to do integer division with a negative number? Let’s have a look:
`-42 %/% 10` `## [1] -5` This might strike you as counterintuitive: why does `42 %/% 10` produce an answer of `4` , but `-42 %/% 10` gives us an answer of `-5` ? Intuitively you might think that the answer to the second one should be `-4` . The way to think about it is like this. Suppose I owe the sandwich shop $42, but I don’t have any money. How many sandwiches would I have to give them in order to stop them from calling security? The answer109 here is 5, not 4. If I handed them 4 sandwiches, I’d still owe them $2, right? So I actually have to give them 5 sandwiches. And since it’s me giving them the sandwiches, the answer to `-42 %/% 10` is `-5` . As you might expect, the behaviour of the modulus operator has a similar pattern. If I’ve handed 5 sandwiches over to the shop in order to pay off my debt of $42, then they now owe me $8. So the modulus is now: `-42 %% 10` `## [1] 8`
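If it helps, one way to convince yourself that these two answers hang together is to note that integer division and the modulus always reconstruct the original number, regardless of sign:

```
(-42 %/% 10) * 10 + (-42 %% 10)   # -5 * 10 + 8, which gets us back to -42
```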
### 7.3.3 Logarithms and exponentials
As I’ve mentioned earlier, R has an incredible range of mathematical functions built into it, and there really wouldn’t be much point in trying to describe or even list all of them. For the most part, I’ve focused only on those functions that are strictly necessary for this book. However I do want to make an exception for logarithms and exponentials. Although they aren’t needed anywhere else in this book, they are everywhere in statistics more broadly, and not only that, there are a lot of situations in which it is convenient to analyse the logarithm of a variable (i.e., to take a “log-transform” of the variable). I suspect that many (maybe most) readers of this book will have encountered logarithms and exponentials before, but from past experience I know that there’s a substantial proportion of students who take a social science statistics class who haven’t touched logarithms since high school, and would appreciate a bit of a refresher.
In order to understand logarithms and exponentials, the easiest thing to do is to actually calculate them and see how they relate to other simple calculations. There are three R functions in particular that I want to talk about, namely `log()` , `log10()` and `exp()` . To start with, let’s consider `log10()` , which is known as the “logarithm in base 10”. The trick to understanding a logarithm is to understand that it’s basically the “opposite” of taking a power. Specifically, the logarithm in base 10 is closely related to the powers of 10. So let’s start by noting that 10-cubed is 1000. Mathematically, we would write this: \[
10^3 = 1000
\] and in R we’d calculate it by using the command `10^3` . The trick to understanding a logarithm is to recognise that the statement that “10 to the power of 3 is equal to 1000” is equivalent to the statement that “the logarithm (in base 10) of 1000 is equal to 3”. Mathematically, we write this as follows, \[
\log_{10}( 1000 ) = 3
\] and if we wanted to do the calculation in R we would type this: `log10( 1000 )` `## [1] 3`
Obviously, since you already know that \(10^3 = 1000\) there’s really no point in getting R to tell you that the base-10 logarithm of 1000 is 3. However, most of the time you probably don’t know what the right answer is. For instance, I can honestly say that I didn’t know that \(10^{2.69897} = 500\), so it’s rather convenient for me that I can use R to calculate the base-10 logarithm of 500:
`log10( 500 )` `## [1] 2.69897`
Or at least it would be convenient if I had a pressing need to know the base-10 logarithm of 500.
Okay, since the `log10()` function is related to the powers of 10, you might expect that there are other logarithms (in bases other than 10) that are related to other powers too. And of course that’s true: there’s not really anything mathematically special about the number 10. You and I happen to find it useful because decimal numbers are built around the number 10, but the big bad world of mathematics scoffs at our decimal numbers. Sadly, the universe doesn’t actually care how we write down numbers. Anyway, the consequence of this cosmic indifference is that there’s nothing particularly special about calculating logarithms in base 10. You could, for instance, calculate your logarithms in base 2, and in fact R does provide a function for doing that, which is (not surprisingly) called `log2()` . Since we know that \(2^3 = 2 \times 2 \times 2 = 8\), it’s no surprise to see that `log2( 8 )` `## [1] 3` Alternatively, a third type of logarithm – and one we see a lot more of in statistics than either base 10 or base 2 – is called the natural logarithm, and corresponds to the logarithm in base \(e\). Since you might one day run into it, I’d better explain what \(e\) is. The number \(e\), known as Euler’s number, is one of those annoying “irrational” numbers whose decimal expansion is infinitely long, and is considered one of the most important numbers in mathematics. The first few digits of \(e\) are: \[ e = 2.718282 \] There are quite a few situations in statistics that require us to calculate powers of \(e\), though none of them appear in this book. Raising \(e\) to the power \(x\) is called the exponential of \(x\), and so it’s very common to see \(e^x\) written as \(\exp(x)\). And so it’s no surprise that R has a function that calculates exponentials, called `exp()` . For instance, suppose I wanted to calculate \(e^3\). I could try typing in the value of \(e\) manually, like this: `2.718282 ^ 3` `## [1] 20.08554` but it’s much easier to do the same thing using the `exp()` function: `exp( 3 )` `## [1] 20.08554` Anyway, because the number \(e\) crops up so often in statistics, the natural logarithm (i.e., logarithm in base \(e\)) also tends to turn up. Mathematicians often write it as \(\log_e(x)\) or \(\ln(x)\), or sometimes even just \(\log(x)\). In fact, R works the same way: the `log()` function corresponds to the natural logarithm.110 Anyway, as a quick check, let’s calculate the natural logarithm of 20.08554 using R: `log( 20.08554 )` `## [1] 3`
And with that, I think we’ve had quite enough exponentials and logarithms for this book!
## 7.4 Extracting a subset of a vector
One very important kind of data handling is being able to extract a particular subset of the data. For instance, you might be interested only in analysing the data from one experimental condition, or you may want to look closely at the data from people over 50 years of age. To do this, the first step is getting R to extract the subset of the data corresponding to the observations that you’re interested in. In this section I’ll talk about subsetting as it applies to vectors, extending the discussion from Chapters 3 and 4. In Section 7.5 I’ll go on to talk about how this discussion extends to data frames.
### 7.4.1 Refresher
This section returns to the `nightgarden.Rdata` data set. If you’re reading this whole chapter in one sitting, then you should already have this data set loaded. If not, don’t forget to use the
```
load("nightgarden.Rdata")
```
command. For this section, let’s ignore the `itng` data frame that we created earlier, and focus instead on the two vectors `speaker` and `utterance` (see Section 7.1 if you’ve forgotten what those vectors look like). Suppose that what I want to do is pull out only those utterances that were made by Makka-Pakka. To that end, I could first use the equality operator to have R tell me which cases correspond to Makka-Pakka speaking:
```
is.MP.speaking <- speaker == "makka-pakka"
is.MP.speaking
```
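```
## [1] FALSE FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE
```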
and then use logical indexing to get R to print out those elements of `utterance` for which `is.MP.speaking` is true, like so:
```
utterance[ is.MP.speaking ]
```
Or, since I’m lazy, I could collapse it to a single command like so:
```
utterance[ speaker == "makka-pakka" ]
```
### 7.4.2 Using `%in%` to match multiple cases

A second useful trick to be aware of is the `%in%` operator111. It’s actually very similar to the `==` operator, except that you can supply a collection of acceptable values. For instance, suppose I wanted to keep only those cases when the utterance is either “pip” or “oo”. One simple way to do this is:
```
utterance %in% c("pip","oo")
```
What this does is return `TRUE` for those elements of `utterance` that are either `"pip"` or `"oo"` and `FALSE` for all the others. What that means is that if I want a list of all those instances of characters speaking either of these two words, I could do this:
```
speaker[ utterance %in% c("pip","oo") ]
```
```
## [1] "upsy-daisy" "upsy-daisy" "tombliboo" "makka-pakka" "makka-pakka"
```
### 7.4.3 Using negative indices to drop elements
Before moving onto data frames, there’s a couple of other tricks worth mentioning. The first of these is to use negative values as indices. Recall from Section 3.10 that we can use a vector of numbers to extract a set of elements that we would like to keep. For instance, suppose I want to keep only elements 2 and 3 from `utterance` . I could do so like this: `utterance[2:3]` `## [1] "pip" "onk"`
But suppose, on the other hand, that I have discovered that observations 2 and 3 are untrustworthy, and I want to keep everything except those two elements. To that end, R lets you use negative numbers to remove specific values, like so:
`utterance[ -(2:3) ]`
The output here corresponds to element 1 of the original vector, followed by elements 4, 5, and so on. When all you want to do is remove a few cases, this is a very handy convention.
### 7.4.4 Splitting a vector by group
One particular example of subsetting that is especially common is the problem of splitting one variable up into several different variables, one corresponding to each group. For instance, in our In the Night Garden example, I might want to create subsets of the `utterance` variable for every character. One way to do this would be to just repeat the exercise that I went through earlier separately for each character, but that quickly gets annoying. A faster way to do it is to use the `split()` function. The arguments are:
* `x`. The variable that needs to be split into groups.
* `f`. The grouping variable.

What this function does is output a list (Section 4.9), containing one variable for each group. For instance, I could split up the `utterance` variable by `speaker` using the following command:
```
speech.by.char <- split( x = utterance, f = speaker )
speech.by.char
```
```
## $`makka-pakka`
## [1] "pip" "pip" "onk" "onk"
##
## $tombliboo
## [1] "ee" "oo"
##
## $`upsy-daisy`
## [1] "pip" "pip" "onk" "onk"
```
Once you’re starting to become comfortable working with lists and data frames, this output is all you need, since you can work with this list in much the same way that you would work with a data frame. For instance, if you want the first utterance made by Makka-Pakka, all you need to do is type this:
```
speech.by.char$`makka-pakka`[1]
```
`## [1] "pip"` Just remember that R does need you to add the quoting characters (in this case, the backticks) around the name. Otherwise, there’s nothing particularly new or difficult here. However, sometimes – especially when you’re just starting out – it can be convenient to pull these variables out of the list, and into the workspace. This isn’t too difficult to do, though it can be a little daunting to novices. To that end, I’ve included a function called `importList()` in the `lsr` package that does this.112 First, here’s what you’d have if you had wiped the workspace before the start of this section: `who()`
Now we use the `importList()` function to copy all of the variables within the `speech.by.char` list:
```
importList( speech.by.char, ask = FALSE)
```
Because the `importList()` function is attempting to create new variables based on the names of the elements of the list, it normally pauses to check that you’re okay with the variable names (in the command above I’ve set `ask = FALSE` , which skips that check). The reason it does this is that, if one of the to-be-created variables has the same name as a variable that you already have in your workspace, that variable will end up being overwritten, so it’s a good idea to check. Assuming that you type `y` , it will go on to create the variables. Nothing appears to have happened, but if we look at our workspace now: `who()`
we see that there are three new variables, called `makka.pakka` , `tombliboo` and `upsy.daisy` . Notice that the `importList()` function has converted the original character strings into valid R variable names, so the variable corresponding to `"makka-pakka"` is actually `makka.pakka` .113 Nevertheless, even though the names can change, note that each of these variables contains the exact same information as the original elements of the list did. For example:
```
> makka.pakka
[1] "pip" "pip" "onk" "onk"
```
## 7.5 Extracting a subset of a data frame
In this section we turn to the question of how to subset a data frame rather than a vector. To that end, the first thing I should point out is that, if all you want to do is subset one of the variables inside the data frame, then as usual the `$` operator is your friend. For instance, suppose I’m working with the `itng` data frame, and what I want to do is create the `speech.by.char` list. I can use the exact same tricks that I used last time, since what I really want to do is `split()` the `itng$utterance` vector, using the `itng$speaker` vector as the grouping variable. However, most of the time what you actually want to do is select several different variables within the data frame (i.e., keep only some of the columns), or maybe a subset of cases (i.e., keep only some of the rows). In order to understand how this works, we need to talk more specifically about data frames and how to subset them.
### 7.5.1 Using the `subset()` function

There are several different ways to subset a data frame in R, some easier than others. I’ll start by discussing the `subset()` function, which is probably the conceptually simplest way to do it. For our purposes there are three different arguments that you’ll be most interested in:
* `x`. The data frame that you want to subset.
* `subset`. A vector of logical values indicating which cases (rows) of the data frame you want to keep. By default, all cases will be retained.
* `select`. This argument indicates which variables (columns) in the data frame you want to keep. This can either be a list of variable names, or a logical vector indicating which ones to keep, or even just a numeric vector containing the relevant column numbers. By default, all variables will be retained.

Let’s start with an example in which I use all three of these arguments. Suppose that I want to subset the `itng` data frame, keeping only the utterances made by Makka-Pakka. What that means is that I need to use the `select` argument to pick out the `utterance` variable, and I also need to use the `subset` argument to pick out the cases when Makka-Pakka is speaking (i.e., `speaker == "makka-pakka"` ). Therefore, the command I need to use is this:
```
df <- subset( x = itng, # data frame is itng
subset = speaker == "makka-pakka", # keep only Makka-Pakkas speech
select = utterance ) # keep only the utterance variable
print( df )
```
The variable `df` here is still a data frame, but it only contains one variable (called `utterance` ) and four cases. Notice that the row numbers are actually the same ones from the original data frame. It’s worth taking a moment to briefly explain this. The reason that this happens is that these “row numbers” are actually row names. When you create a new data frame from scratch R will assign each row a fairly boring row name, which is identical to the row number. However, when you subset the data frame, each row keeps its original row name. This can be quite useful, since – as in the current example – it provides you a visual reminder of what each row in the new data frame corresponds to in the original data frame. However, if it annoys you, you can change the row names using the `rownames()` function.114 In any case, let’s return to the `subset()` function, and look at what happens when we don’t use all three of the arguments. Firstly, suppose that I didn’t bother to specify the `select` argument. Let’s see what happens:
```
subset( x = itng,
subset = speaker == "makka-pakka" )
```
Not surprisingly, R has kept the same cases from the original data set (i.e., rows 7 through 10), but this time it has kept all of the variables from the data frame. Equally unsurprisingly, if I don’t specify the `subset` argument, what we find is that R keeps all of the cases:
```
subset( x = itng,
select = utterance )
```
Again, it’s important to note that this output is still a data frame: it’s just a data frame with only a single variable.
### 7.5.2 Using square brackets: I. Rows and columns
Throughout the book so far, whenever I’ve been subsetting a vector I’ve tended to use the square brackets `[]` to do so. But in the previous section when I started talking about subsetting a data frame I used the `subset()` function. As a consequence, you might be wondering whether it is possible to use the square brackets to subset a data frame. The answer, of course, is yes. Not only can you use square brackets for this purpose, as you become more familiar with R you’ll find that this is actually much more convenient than using `subset()` . Unfortunately, the use of square brackets for this purpose is somewhat complicated, and can be very confusing to novices. So be warned: this section is more complicated than it feels like it “should” be. With that warning in place, I’ll try to walk you through it slowly. For this section, I’ll use a slightly different data set, namely the `garden` data frame that is stored in the `"nightgarden2.Rdata"` file.
```
load(file.path(projecthome,"data","nightgarden2.Rdata"))
garden
```
As you can see, the `garden` data frame contains 3 variables and 5 cases, and this time around I’ve used the `rownames()` function to attach slightly verbose labels to each of the cases. Moreover, let’s assume that what we want to do is to pick out rows 4 and 5 (the two cases when Makka-Pakka is speaking), and columns 1 and 2 (variables `speaker` and `utterance` ).
How shall we do this? As usual, there’s more than one way. The first way is based on the observation that, since a data frame is basically a table, every element in the data frame has a row number and a column number. So, if we want to pick out a single element, we have to specify the row number and a column number within the square brackets. By convention, the row number comes first. So, for the data frame above, which has 5 rows and 3 columns, the numerical indexing scheme looks like this:
row | col1 | col2 | col3 |
| --- | --- | --- | --- |
1 | [1,1] | [1,2] | [1,3] |
2 | [2,1] | [2,2] | [2,3] |
3 | [3,1] | [3,2] | [3,3] |
4 | [4,1] | [4,2] | [4,3] |
5 | [5,1] | [5,2] | [5,3] |
If I want the 3rd case of the 2nd variable, what I would type is `garden[3,2]` , and R would print out some output showing that this element corresponds to the utterance `"ee"` . However, let’s hold off from actually doing that for a moment, because there’s something slightly counterintuitive about the specifics of what R does under those circumstances (see Section 7.5.4). Instead, let’s aim to solve our original problem, which is to pull out two rows (4 and 5) and two columns (1 and 2). This is fairly simple to do, since R allows us to specify multiple rows and multiple columns. So let’s try that: `garden[ 4:5, 1:2 ]`
Clearly, that’s exactly what we asked for: the output here is a data frame containing two variables and two cases. Note that I could have gotten the same answer if I’d used the `c()` function to produce my vectors rather than the `:` operator. That is, the following command is equivalent to the last one:
```
garden[ c(4,5), c(1,2) ]
```
It’s just not as pretty. However, if the columns and rows that you want to keep don’t happen to be next to each other in the original data frame, then you might find that you have to resort to using commands like
```
garden[ c(2,4,5), c(1,3) ]
```
to extract them. A second way to do the same thing is to use the names of the rows and columns. That is, instead of using the row numbers and column numbers, you use the character strings that are used as the labels for the rows and columns. To apply this idea to our `garden` data frame, we would use a command like this:
```
garden[ c("case.4", "case.5"), c("speaker", "utterance") ]
```
Once again, this produces exactly the same output, so I haven’t bothered to show it. Note that, although this version is more annoying to type than the previous version, it’s a bit easier to read, because it’s often more meaningful to refer to the elements by their names rather than their numbers. Also note that you don’t have to use the same convention for the rows and columns. For instance, I often find that the variable names are meaningful and so I sometimes refer to them by name, whereas the row names are pretty arbitrary so it’s easier to refer to them by number. In fact, that’s more or less exactly what’s happening with the `garden` data frame, so it probably makes more sense to use this as the command:
```
garden[ 4:5, c("speaker", "utterance") ]
```
Again, the output is identical.
Finally, both the rows and columns can be indexed using logical vectors as well. For example, although I claimed earlier that my goal was to extract cases 4 and 5, it’s pretty obvious that what I really wanted to do was select the cases where Makka-Pakka is speaking. So what I could have done is create a logical vector that indicates which cases correspond to Makka-Pakka speaking:
```
is.MP.speaking <- garden$speaker == "makka-pakka"
is.MP.speaking
```
As you can see, the 4th and 5th elements of this vector are `TRUE` while the others are `FALSE` . Now that I’ve constructed this “indicator” variable, what I can do is use this vector to select the rows that I want to keep:
```
garden[ is.MP.speaking, c("speaker", "utterance") ]
```
And of course the output is, yet again, the same.
### 7.5.3 Using square brackets: II. Some elaborations
There are two fairly useful elaborations on this “rows and columns” approach that I should point out. Firstly, what if you want to keep all of the rows, or all of the columns? To do this, all we have to do is leave the corresponding entry blank, but it is crucial to remember to keep the comma! For instance, suppose I want to keep all the rows in the `garden` data, but I only want to retain the first two columns. The easiest way to do this is to use a command like this: `garden[ , 1:2 ]`
Alternatively, if I want to keep all the columns but only want the last two rows, I use the same trick, but this time I leave the second index blank. So my command becomes:
`garden[ 4:5, ]`
```
## speaker utterance line
## case.4 makka-pakka pip 7
## case.5 makka-pakka onk 9
```
The second elaboration I should note is that it’s still okay to use negative indexes as a way of telling R to delete certain rows or columns. For instance, if I want to delete the 3rd column, then I use this command:
`garden[ , -3 ]`
whereas if I want to delete the 3rd row, then I’d use this one:
`garden[ -3, ]`
So that’s nice.
### 7.5.4 Using square brackets: III. Understanding “dropping”
At this point some of you might be wondering why I’ve been so terribly careful to choose my examples in such a way as to ensure that the output always has multiple rows and multiple columns. The reason for this is that I’ve been trying to hide the somewhat curious “dropping” behaviour that R produces when the output only has a single column. I’ll start by showing you what happens, and then I’ll try to explain it. Firstly, let’s have a look at what happens when the output contains only a single row:
`garden[ 5, ]`
```
## speaker utterance line
## case.5 makka-pakka onk 9
```
This is exactly what you’d expect to see: a data frame containing three variables, and only one case per variable. Okay, no problems so far. What happens when you ask for a single column? Suppose, for instance, I try this as a command:
`garden[ , 3 ]` Based on everything that I’ve shown you so far, you would be well within your rights to expect to see R produce a data frame containing a single variable (i.e., `line` ) and five cases. After all, that is what the `subset()` command does in this situation, and it’s pretty consistent with everything else that I’ve shown you so far about how square brackets work. In other words, you should expect to see this:
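```
##        line
## case.1    1
## case.2    2
## case.3    5
## case.4    7
## case.5    9
```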
However, that is emphatically not what happens. What you actually get is this:
`garden[ , 3 ]` `## [1] 1 2 5 7 9` That output is not a data frame at all! That’s just an ordinary numeric vector containing 5 elements. What’s going on here is that R has “noticed” that the output that we’ve asked for doesn’t really “need” to be wrapped up in a data frame at all, because it only corresponds to a single variable. So what it does is “drop” the output from a data frame containing a single variable, “down” to a simpler output that corresponds to that variable. This behaviour is actually very convenient for day to day usage once you’ve become familiar with it – and I suppose that’s the real reason why R does this – but there’s no escaping the fact that it is deeply confusing to novices. It’s especially confusing because the behaviour appears only for a very specific case: (a) it only works for columns and not for rows, because the columns correspond to variables and the rows do not, and (b) it only applies to the “rows and columns” version of the square brackets, and not to the `subset()` function,115 or to the “just columns” use of the square brackets (next section). As I say, it’s very confusing when you’re just starting out. For what it’s worth, you can suppress this behaviour if you want, by setting `drop = FALSE` when you construct your bracketed expression. That is, you could do something like this:
```
garden[ , 3, drop = FALSE ]
```
I suppose that helps a little bit, in that it gives you some control over the dropping behaviour, but I’m not sure it helps to make things any easier to understand. Anyway, that’s the “dropping” special case. Fun, isn’t it?
### 7.5.5 Using square brackets: IV. Columns only
As if the weird “dropping” behaviour wasn’t annoying enough, R actually provides a completely different way of using square brackets to index a data frame. Specifically, if you only give a single index, R will assume you want the corresponding columns, not the rows. Do not be fooled by the fact that this second method also uses square brackets: it behaves differently to the “rows and columns” method that I’ve discussed in the last few sections. Again, what I’ll do is show you what happens first, and then I’ll try to explain why it happens afterwards. To that end, let’s start with the following command:
`garden[ 1:2 ]`
As you can see, the output gives me the first two columns, much as if I’d typed `garden[,1:2]` . It doesn’t give me the first two rows, which is what I’d have gotten if I’d used a command like `garden[1:2,]` . Not only that, if I ask for a single column, R does not drop the output: `garden[3]`
As I said earlier, the only case where dropping occurs by default is when you use the “row and columns” version of the square brackets, and the output happens to correspond to a single column. However, if you really want to force R to drop the output, you can do so using the “double brackets” notation:
`garden[[3]]` `## [1] 1 2 5 7 9` Note that R will only allow you to ask for one column at a time using the double brackets. If you try to ask for multiple columns in this way, you get completely different behaviour,116 which may or may not produce an error, but definitely won’t give you the output you’re expecting. The only reason I’m mentioning it at all is that you might run into double brackets when doing further reading, and a lot of books don’t explicitly point out the difference between `[` and `[[` . However, I promise that I won’t be using `[[` anywhere else in this book. Okay, for those few readers that have persevered with this section long enough to get here without having set fire to the book, I should explain why R has these two different systems for subsetting a data frame (i.e., “row and column” versus “just columns”), and why they behave so differently to each other. I’m not 100% sure about this since I’m still reading through some of the old references that describe the early development of R, but I think the answer relates to the fact that data frames are actually a very strange hybrid of two different kinds of thing. At a low level, a data frame is a list (Section 4.9). I can demonstrate this to you by overriding the normal `print()` function117 and forcing R to print out the `garden` data frame using the default print method rather than the special one that is defined only for data frames. Here’s what we get:
```
print.default( garden )
```
```
## $speaker
## [1] upsy-daisy upsy-daisy tombliboo makka-pakka makka-pakka
## Levels: makka-pakka tombliboo upsy-daisy
##
## $utterance
## [1] pip pip ee pip onk
## Levels: ee onk oo pip
##
## $line
## [1] 1 2 5 7 9
##
## attr(,"class")
## [1] "data.frame"
```
Apart from the weird part of the output right at the bottom, this is identical to the print out that you get when you print out a list (see Section 4.9). In other words, a data frame is a list. Viewed from this “list based” perspective, it’s clear what `garden[1]` is: it’s the first variable stored in the list, namely `speaker` . In other words, when you use the “just columns” way of indexing a data frame, using only a single index, R assumes that you’re thinking about the data frame as if it were a list of variables. In fact, when you use the `$` operator you’re taking advantage of the fact that the data frame is secretly a list.
However, a data frame is more than just a list. It’s a very special kind of list where all the variables are of the same length, and the first element in each variable happens to correspond to the first “case” in the data set. That’s why no-one ever wants to see a data frame printed out in the default “list-like” way that I’ve shown in the extract above. In terms of the deeper meaning behind what a data frame is used for, a data frame really does have this rectangular shape to it:
`print( garden )`
Because of the fact that a data frame is basically a table of data, R provides a second “row and column” method for interacting with the data frame (see Section 7.11.1 for a related example). This method makes much more sense in terms of the high-level table of data interpretation of what a data frame is, and so for the most part it’s this method that people tend to prefer. In fact, throughout the rest of the book I will be sticking to the “row and column” approach (though I will use `$` a lot), and never again referring to the “just columns” approach. However, it does get used a lot in practice, so I think it’s important that this book explain what’s going on.
And now let us never speak of this again.
## 7.6 Sorting, flipping and merging data
In this section I discuss a few useful operations that I feel are loosely related to one another: sorting a vector, sorting a data frame, binding two or more vectors together into a data frame (or matrix), and flipping a data frame (or matrix) on its side. They’re all fairly straightforward tasks, at least in comparison to some of the more obnoxious data handling problems that turn up in real life.
### 7.6.1 Sorting a numeric or character vector
One thing that you often want to do is sort a variable. If it’s a numeric variable you might want to sort in increasing or decreasing order. If it’s a character vector you might want to sort alphabetically, etc. The `sort()` function provides this capability.
```
numbers <- c(2,4,3)
sort( x = numbers )
```
`## [1] 2 3 4`
You can ask for R to sort in decreasing order rather than increasing:
```
sort( x = numbers, decreasing = TRUE )
```
`## [1] 4 3 2`
And you can ask it to sort text data in alphabetical order:
```
text <- c("aardvark", "zebra", "swing")
sort( text )
```
That’s pretty straightforward. That being said, it’s important to note that I’m glossing over something here. When you apply `sort()` to a character vector it doesn’t strictly sort into alphabetical order. R actually has a slightly different notion of how characters are ordered (see Section 7.8.5 and Table 7.3), which is more closely related to how computers store text data than to how letters are ordered in the alphabet. However, that’s a topic we’ll discuss later. For now, the only thing I should note is that the `sort()` function doesn’t alter the original variable. Rather, it creates a new, sorted variable as the output. So if I inspect my original `text` variable: `text`
I can see that it has remained unchanged.
### 7.6.2 Sorting a factor
You can also sort factors, but the story here is slightly more subtle because there’s two different ways you can sort a factor: alphabetically (by label) or by factor level. The `sort()` function uses the latter. To illustrate, let’s look at the two different examples. First, let’s create a factor in the usual way:
```
fac <- factor( text )
fac
```
Now let’s sort it:
`sort(fac)`
This looks like it’s sorted things into alphabetical order, but that’s only because the factor levels themselves happen to be alphabetically ordered. Suppose I deliberately define the factor levels in a non-alphabetical order:
```
fac <- factor( text, levels = c("zebra","swing","aardvark") )
fac
```
Now what happens when we try to sort `fac` this time? The answer: `sort(fac)`
It sorts the data into the numerical order implied by the factor levels, not the alphabetical order implied by the labels attached to those levels. Normally you never notice the distinction, because by default the factor levels are assigned in alphabetical order, but it’s important to know the difference.
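As an aside, if you ever do want the alphabetical ordering regardless of how the factor levels are defined, one simple trick (just a sketch, using the `fac` variable from above) is to convert the factor to character before sorting:

```
sort( as.character( fac ) )   # sorts the labels themselves, ignoring the factor levels
```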
### 7.6.3 Sorting a data frame
The `sort()` function doesn’t work properly with data frames. If you want to sort a data frame the standard advice that you’ll find online is to use the `order()` function (not described in this book) to determine what order the rows should be sorted, and then use square brackets to do the shuffling. There’s nothing inherently wrong with this advice, I just find it tedious. To that end, the `lsr` package includes a function called `sortFrame()` that you can use to do the sorting. The first argument to the function is named ( `x` ), and should correspond to the data frame that you want sorted. After that, all you do is type a list of the names of the variables that you want to use to do the sorting. For instance, if I type this:
```
sortFrame( garden, speaker, line)
```
what R does is first sort by `speaker` (factor level order). Any ties (i.e., data from the same speaker) are then sorted in order of `line` (increasing numerical order). You can use the minus sign to indicate that numerical variables should be sorted in reverse order:
```
sortFrame( garden, speaker, -line)
```
As of the current writing, the `sortFrame()` function is under development. I’ve started introducing functionality to allow you to apply the `-` sign to non-numeric variables or to make a distinction between sorting factors alphabetically or by factor level. The idea is that you should be able to type in something like this:
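```
sortFrame( garden, -speaker )   # reverse (factor level) order of speaker
```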
and have the output correspond to a sort of the `garden` data frame in reverse alphabetical order (or reverse factor level order) of `speaker` . As things stand right now, this will actually work, and it will produce sensible output.
However, I’m not completely convinced that I’ve set this up in the ideal fashion, so this may change a little bit in the future.
### 7.6.4 Binding vectors together
A not-uncommon task that you might find yourself needing to undertake is to combine several vectors. For instance, let’s suppose we have the following two numeric vectors:
```
cake.1 <- c(100, 80, 0, 0, 0)
cake.2 <- c(100, 100, 90, 30, 10)
```
The numbers here might represent the amount of each of the two cakes that are left at five different time points. Apparently the first cake is tastier, since that one gets devoured faster. We’ve already seen one method for combining these vectors: we could use the `data.frame()` function to convert them into a data frame with two variables, like so:
```
cake.df <- data.frame( cake.1, cake.2 )
cake.df
```
```
## cake.1 cake.2
## 1 100 100
## 2 80 100
## 3 0 90
## 4 0 30
## 5 0 10
```
Two other methods that I want to briefly refer to are the `rbind()` and `cbind()` functions, which will convert the vectors into a matrix. I’ll discuss matrices properly in Section 7.11.1 but the details don’t matter too much for our current purposes. The `cbind()` function (“column bind”) produces a very similar looking output to the data frame example:
```
cake.mat1 <- cbind( cake.1, cake.2 )
cake.mat1
```
```
## cake.1 cake.2
## [1,] 100 100
## [2,] 80 100
## [3,] 0 90
## [4,] 0 30
## [5,] 0 10
```
but nevertheless it’s important to keep in mind that `cake.mat1` is a matrix rather than a data frame, and so has a few differences from the `cake.df` variable. The `rbind()` function (“row bind”) produces a somewhat different output: it binds the vectors together row-wise rather than column-wise, so the output now looks like this:
```
cake.mat2 <- rbind( cake.1, cake.2 )
cake.mat2
```
```
## [,1] [,2] [,3] [,4] [,5]
## cake.1 100 80 0 0 0
## cake.2 100 100 90 30 10
```
You can add names to a matrix by using the `rownames()` and `colnames()` functions, and I should also point out that there’s a fancier function in R called `merge()` that supports more complicated “database like” merging of vectors and data frames, but I won’t go into details here.
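To give a rough sense of how that works, here’s a small sketch using the `cake.mat2` matrix from above. The labels themselves are just made up for the purposes of illustration:
```
# attach made-up row and column labels to the cake.mat2 matrix
rownames( cake.mat2 ) <- c( "tasty.cake", "boring.cake" )
colnames( cake.mat2 ) <- c( "time.1", "time.2", "time.3", "time.4", "time.5" )
cake.mat2
```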
### 7.6.5 Binding multiple copies of the same vector together
It is sometimes very useful to bind together multiple copies of the same vector. You could do this using the `rbind()` and `cbind()` functions, using commands like this one
```
fibonacci <- c( 1,1,2,3,5,8 )
rbind( fibonacci, fibonacci, fibonacci )
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## fibonacci 1 1 2 3 5 8
## fibonacci 1 1 2 3 5 8
## fibonacci 1 1 2 3 5 8
```
but that can be pretty annoying, especially if you need lots of copies. To make this a little easier, the `lsr` package has two additional functions `rowCopy` and `colCopy` that do the same job, but all you have to do is specify the number of copies that you want, instead of typing the name in over and over again. The two arguments you need to specify are `x` , the vector to be copied, and `times` , indicating how many copies should be created:118
```
rowCopy( x = fibonacci, times = 3 )
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 1 1 2 3 5 8
## [2,] 1 1 2 3 5 8
## [3,] 1 1 2 3 5 8
```
Of course, in practice you don’t need to name the arguments all the time. For instance, here’s an example using the `colCopy()` function with the argument names omitted:
```
colCopy( fibonacci, 3 )
```
```
## [,1] [,2] [,3]
## [1,] 1 1 1
## [2,] 1 1 1
## [3,] 2 2 2
## [4,] 3 3 3
## [5,] 5 5 5
## [6,] 8 8 8
```
### 7.6.6 Transposing a matrix or data frame
One of the main reasons that I wanted to discuss the `rbind()` and `cbind()` functions in the same section as the `data.frame()` function is that it immediately raises the question of how to “flip” or transpose a matrix or data frame. Notice that in the last section I was able to produce two different matrices, `cake.mat1` and `cake.mat2`, that were basically mirror images of one another. A natural question to ask is whether you can directly transform one into another. The transpose function `t()` allows us to do this in a straightforward fashion. To start with, I’ll show you how to transpose a matrix, and then I’ll move on to talk about data frames. Firstly, let’s load a matrix I prepared earlier, from the `cakes.Rdata` file:
```
load(file.path(projecthome,"data","cakes.Rdata"))
cakes
```
And just to make sure you believe me that this is actually a matrix:
`class( cakes )` `## [1] "matrix"`
Okay, now let’s transpose the matrix:
```
cakes.flipped <- t( cakes )
cakes.flipped
```
The output here is still a matrix:
```
class( cakes.flipped )
```
`## [1] "matrix"` At this point you should have two questions: (1) how do we do the same thing for data frames? and (2) why should we care about this? Let’s start with the how question. First, I should note that you can transpose a data frame just fine using the `t()` function, but that has the slightly awkward consequence of converting the output from a data frame to a matrix, which isn’t usually what you want. It’s quite easy to convert the output back again, of course,119 but I hate typing two commands when I can do it with one. To that end, the `lsr` package has a simple “convenience” function called `tFrame()` which does exactly the same thing as `t()` but converts the output to a data frame for you. To illustrate this, let’s transpose the `itng` data frame that we used earlier. Here’s the original data frame: `itng`
and here’s what happens when you transpose it using `tFrame()` : `tFrame( itng )`
```
## V1 V2 V3 V4 V5 V6
## speaker upsy-daisy upsy-daisy upsy-daisy upsy-daisy tombliboo tombliboo
## utterance pip pip onk onk ee oo
## V7 V8 V9 V10
## speaker makka-pakka makka-pakka makka-pakka makka-pakka
## utterance pip pip onk onk
```
An important point to recognise is that transposing a data frame is not always a sensible thing to do: in fact, I’d go so far as to argue that it’s usually not sensible. It depends a lot on whether the “cases” from your original data frame would make sense as variables, and whether it would make sense to treat each of your original “variables” as cases. I think that’s emphatically not true for our `itng` data frame, so I wouldn’t advise doing it in this situation. That being said, sometimes it really is true. For instance, had we originally stored our `cakes` variable as a data frame instead of a matrix, then it would absolutely be sensible to flip the data frame!120 There are some situations where it is useful to flip your data frame, so it’s nice to know that you can do it. Indeed, that’s the main reason why I have spent so much time talking about this topic. A lot of statistical tools make the assumption that the rows of your data frame (or matrix) correspond to observations, and the columns correspond to the variables. That’s not unreasonable, of course, since that is a pretty standard convention. However, think about our `cakes` example here. This is a situation where you might want to do an analysis of the different cakes (i.e. cakes as variables, time points as cases), but equally you might want to do an analysis where you think of the times as being the things of interest (i.e., times as variables, cakes as cases). If so, then it’s useful to know how to flip a matrix or data frame around.
## 7.7 Reshaping a data frame
One of the most annoying tasks that you need to undertake on a regular basis is that of reshaping a data frame. Framed in the most general way, reshaping the data means taking the data in whatever format it’s given to you, and converting it to the format you need it. Of course, if we’re going to characterise the problem that broadly, then about half of this chapter can probably be thought of as a kind of reshaping. So we’re going to have to narrow things down a little bit. To that end, I’ll talk about a few different tools that you can use for a few different tasks. In particular, I’ll discuss a couple of easy to use (but limited) functions that I’ve included in the `lsr` package. In future versions of the book I plan to expand this discussion to include some of the more powerful tools that are available in R, but I haven’t had the time to do so yet.
### 7.7.1 Long form and wide form data
The most common format in which you might obtain data is as a “case by variable” layout, commonly known as the wide form of the data.
```
load(file.path(projecthome,"data","repeated.Rdata"))
who()
```
To get a sense of what I’m talking about, consider an experiment in which we are interested in the different effects that alcohol and caffeine have on people’s working memory capacity (WMC) and reaction times (RT). We recruit 10 participants, and measure their WMC and RT under three different conditions: a “no drug” condition, in which they are not under the influence of either caffeine or alcohol, a “caffeine” condition, in which they are under the influence of caffeine, and an “alcohol” condition, in which… well, you can probably guess. Ideally, I suppose, there would be a fourth condition in which both drugs are administered, but for the sake of simplicity let’s ignore that. The `drugs` data frame gives you a sense of what kind of data you might observe in an experiment like this: `drugs`
This is a data set in “wide form”, in which each participant corresponds to a single row. We have two variables that are characteristics of the subject (i.e., their `id` number and their `gender` ) and six variables that refer to one of the two measured variables (WMC or RT) in one of the three testing conditions (alcohol, caffeine or no drug). Because all of the testing conditions (i.e., the three drug types) are applied to all participants, drug type is an example of a within-subject factor.
### 7.7.2 Reshaping data using `wideToLong()`
The “wide form” of this data set is useful for some situations: it is often very useful to have each row correspond to a single subject. However, it is not the only way in which you might want to organise this data. For instance, you might want to have a separate row for each “testing occasion”. That is, “participant 1 under the influence of alcohol” would be one row, and “participant 1 under the influence of caffeine” would be another row. This way of organising the data is generally referred to as the long form of the data. It’s not too difficult to switch between wide and long form, and I’ll explain how it works in a moment; for now, let’s just have a look at what the long form of this data set looks like:
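```
drugs.2 <- wideToLong( data = drugs, within = "drug" )
head( drugs.2 )
```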
```
## id gender drug WMC RT
## 1 1 female alcohol 3.7 488
## 2 2 female alcohol 6.4 607
## 3 3 female alcohol 4.6 643
## 4 4 male alcohol 6.4 684
## 5 5 female alcohol 4.9 593
## 6 6 male alcohol 5.4 492
```
The `drugs.2` data frame that we just created has 30 rows: each of the 10 participants appears in three separate rows, one corresponding to each of the three testing conditions. And instead of having a variable like `WMC_caffeine` that indicates that we were measuring “WMC” in the “caffeine” condition, this information is now recorded in two separate variables, one called `drug` and another called `WMC`. Obviously, the long and wide forms of the data contain the same information, but they represent quite different ways of organising that information. Sometimes you find yourself needing to analyse data in wide form, and sometimes you find that you need long form. So it’s really useful to know how to switch between the two. In the example I gave above, I used a function called `wideToLong()` to do the transformation. The `wideToLong()` function is part of the `lsr` package. The key to understanding this function is that it relies on the variable names to do all the work. Notice that the variable names in the `drugs` data frame follow a very clear scheme. Whenever you have a variable with a name like `WMC_caffeine` you know that the variable being measured is “WMC”, and that the specific condition in which it is being measured is the “caffeine” condition. Similarly, you know that `RT_no.drug` refers to the “RT” variable measured in the “no drug” condition. The measured variable comes first (e.g., `WMC`), followed by a separator character (in this case the separator is an underscore, `_`), and then the name of the condition in which it is being measured (e.g., `caffeine`). There are two different prefixes (i.e., the strings before the separator, `WMC`, `RT`) which means that there are two separate variables being measured. There are three different suffixes (i.e., the strings after the separator, `caffeine`, `alcohol`, `no.drug`) meaning that there are three different levels of the within-subject factor. Finally, notice that the separator string (i.e., `_`) does not appear anywhere in two of the variables (`id`, `gender`), indicating that these are between-subject variables, namely variables that do not vary within participant (e.g., a person’s `gender` is the same regardless of whether they’re under the influence of alcohol, caffeine etc).
Because of the fact that the variable naming scheme here is so informative, it’s quite possible to reshape the data frame without any additional input from the user. For example, in this particular case, you could just type the following:
`wideToLong( drugs )`
```
## id gender within WMC RT
## 1 1 female alcohol 3.7 488
## 2 2 female alcohol 6.4 607
## 3 3 female alcohol 4.6 643
## 4 4 male alcohol 6.4 684
## 5 5 female alcohol 4.9 593
## 6 6 male alcohol 5.4 492
## 7 7 male alcohol 7.9 690
## 8 8 male alcohol 4.1 486
## 9 9 female alcohol 5.2 686
## 10 10 female alcohol 6.2 645
## 11 1 female caffeine 3.7 236
## 12 2 female caffeine 7.3 376
## 13 3 female caffeine 7.4 226
## 14 4 male caffeine 7.8 206
## 15 5 female caffeine 5.2 262
## 16 6 male caffeine 6.6 230
## 17 7 male caffeine 7.9 259
## 18 8 male caffeine 5.9 230
## 19 9 female caffeine 6.2 273
## 20 10 female caffeine 7.4 240
## 21 1 female no.drug 3.9 371
## 22 2 female no.drug 7.9 349
## 23 3 female no.drug 7.3 412
## 24 4 male no.drug 8.2 252
## 25 5 female no.drug 7.0 439
## 26 6 male no.drug 7.2 464
## 27 7 male no.drug 8.9 327
## 28 8 male no.drug 4.5 305
## 29 9 female no.drug 7.2 327
## 30 10 female no.drug 7.8 498
```
This is pretty good, actually. The only thing it has gotten wrong here is that it doesn’t know what name to assign to the within-subject factor, so instead of calling it something sensible like `drug`, it has used the unimaginative name `within`. If you want to ensure that the `wideToLong()` function applies a sensible name, you have to specify the `within` argument, which is just a character string that specifies the name of the within-subject factor. So when I used this command earlier,
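```
drugs.2 <- wideToLong( data = drugs, within = "drug" )
```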
all I was doing was telling R to use `drug` as the name of the within subject factor. Now, as I was hinting earlier, the `wideToLong()` function is very inflexible. It requires that the variable names all follow this naming scheme that I outlined earlier. If you don’t follow this naming scheme it won’t work.121 The only flexibility that I’ve included here is that you can change the separator character by specifying the `sep` argument. For instance, if you were using variable names of the form `WMC/caffeine`, you could specify that `sep="/"`, using a command like this
```
drugs.2 <- wideToLong( data = drugs, within = "drug", sep = "/" )
```
and it would still work.
### 7.7.3 Reshaping data using `longToWide()`

To convert data from long form to wide form, the `lsr` package also includes a function called `longToWide()`. Recall from earlier that the long form of the data (i.e., the `drugs.2` data frame) contains variables named `id`, `gender`, `drug`, `WMC` and `RT`. In order to convert from long form to wide form, all you need to do is indicate which of these variables are measured separately for each condition (i.e., `WMC` and `RT`), and which variable is the within-subject factor that specifies the condition (i.e., `drug`). You do this via a two-sided formula, in which the measured variables are on the left hand side, and the within-subject factor is on the right hand side. In this case, the formula would be `WMC + RT ~ drug`. So the command that we would use might look like this:
```
longToWide( data=drugs.2, formula= WMC+RT ~ drug )
```
or, if we chose to omit argument names, we could simplify it to this:
```
longToWide( drugs.2, WMC+RT ~ drug )
```
Note that, just like the `wideToLong()` function, the `longToWide()` function allows you to override the default separator character. For instance, if the command I used had been
```
longToWide( drugs.2, WMC+RT ~ drug, sep="/" )
```
```
## id gender WMC/alcohol RT/alcohol WMC/caffeine RT/caffeine WMC/no.drug
## 1 1 female 3.7 488 3.7 236 3.9
## 2 2 female 6.4 607 7.3 376 7.9
## 3 3 female 4.6 643 7.4 226 7.3
## 4 4 male 6.4 684 7.8 206 8.2
## 5 5 female 4.9 593 5.2 262 7.0
## 6 6 male 5.4 492 6.6 230 7.2
## 7 7 male 7.9 690 7.9 259 8.9
## 8 8 male 4.1 486 5.9 230 4.5
## 9 9 female 5.2 686 6.2 273 7.2
## 10 10 female 6.2 645 7.4 240 7.8
## RT/no.drug
## 1 371
## 2 349
## 3 412
## 4 252
## 5 439
## 6 464
## 7 327
## 8 305
## 9 327
## 10 498
```
the output would contain variables with names like `RT/alcohol` instead of `RT_alcohol` .
### 7.7.4 Reshaping with multiple within-subject factors
As I mentioned above, the `wideToLong()` and `longToWide()` functions are quite limited in terms of what they can do. However, they do handle a broader range of situations than the one outlined above. Consider the following, fairly simple psychological experiment. I’m interested in the effects of practice on some simple decision making problem. It doesn’t really matter what the problem is, other than to note that I’m interested in two distinct outcome variables. Firstly, I care about people’s accuracy, measured by the proportion of decisions that people make correctly, denoted PC. Secondly, I care about people’s speed, measured by the mean response time taken to make those decisions, denoted MRT. That’s standard in psychological experiments: the speed-accuracy trade-off is pretty ubiquitous, so we generally need to care about both variables. To look at the effects of practice over the long term, I test each participant on two days, `day1` and `day2` , where for the sake of argument I’ll assume that `day1` and `day2` are about a week apart. To look at the effects of practice over the short term, the testing during each day is broken into two “blocks”, `block1` and `block2` , which are about 20 minutes apart. This isn’t the world’s most complicated experiment, but it’s still a fair bit more complicated than the last one. This time around we have two within-subject factors (i.e., `day` and `block` ) and we have two measured variables for each condition (i.e., `PC` and `MRT` ). The `choice` data frame shows what the wide form of this kind of data might look like: `choice`
Notice that this time around we have variable names of the form `MRT/block1/day2` . As before, the first part of the name refers to the measured variable (response time), but there are now two suffixes, one indicating that the testing took place in block 1, and the other indicating that it took place on day 2. And just to complicate matters, it uses `/` as the separator character rather than `_` . Even so, reshaping this data set is pretty easy. The command to do it is,
```
choice.2 <- wideToLong( choice, within=c("block","day"), sep="/" )
```
which is pretty much the exact same command we used last time. The only difference here is that, because there are two within-subject factors, the `within` argument is a vector that contains two names. When we look at the long form data frame that this creates, we get this: `choice.2`
```
## id gender MRT PC block day
## 1 1 male 415 79 block1 day1
## 2 2 male 500 83 block1 day1
## 3 3 female 478 91 block1 day1
## 4 4 female 550 75 block1 day1
## 5 1 male 400 88 block1 day2
## 6 2 male 490 92 block1 day2
## 7 3 female 468 98 block1 day2
## 8 4 female 502 89 block1 day2
## 9 1 male 455 82 block2 day1
## 10 2 male 532 86 block2 day1
## 11 3 female 499 90 block2 day1
## 12 4 female 602 78 block2 day1
## 13 1 male 450 93 block2 day2
## 14 2 male 518 97 block2 day2
## 15 3 female 474 100 block2 day2
## 16 4 female 588 95 block2 day2
```
In this long form data frame we have two between-subject variables (`id` and `gender`), two variables that define our within-subject manipulations (`block` and `day`), and two more that contain the measurements we took (`MRT` and `PC`). To convert this back to wide form is equally straightforward. We use the `longToWide()` function, but this time around we need to alter the formula in order to tell it that we have two within-subject factors. The command is now
```
longToWide( choice.2, MRT+PC ~ block+day, sep="/" )
```
and this produces a wide form data set containing the same variables as the original `choice` data frame.
### 7.7.5 What other options are there?
The advantage to the approach described in the previous section is that it solves a quite specific problem (but a commonly encountered one) with a minimum of fuss. The disadvantage is that the tools are quite limited in scope. They allow you to switch your data back and forth between two different formats that are very common in everyday data analysis. However, there are a number of other tools that you can use if need be. Just within the core packages distributed with R there is the `reshape()` function, as well as the `stack()` and `unstack()` functions, all of which can be useful under certain circumstances. And there are of course thousands of packages on CRAN that you can use to help you with different tasks. One popular package for this purpose is the `reshape` package, written by <NAME> (for details see Wickham 2007). There are two key functions in this package, called `melt()` and `cast()` that are pretty useful for solving a lot of reshaping problems. In a future version of this book I intend to discuss `melt()` and `cast()` in a fair amount of detail.
## 7.8 Working with text
Sometimes your data set is quite text heavy. This can be for a lot of different reasons. Maybe the raw data are actually taken from text sources (e.g., newspaper articles), or maybe your data set contains a lot of free responses to survey questions, in which people can write whatever text they like in response to some query. Or maybe you just need to rejig some of the text used to describe nominal scale variables. Regardless of what the reason is, you’ll probably want to know a little bit about how to handle text in R. Some things you already know how to do: I’ve discussed the use of `nchar()` to calculate the number of characters in a string (Section 3.8.1), and a lot of the general purpose tools that I’ve discussed elsewhere (e.g., the `==` operator) have been applied to text data as well as to numeric data. However, because text data is quite rich, and generally not as well structured as numeric data, R provides a lot of additional tools that are quite specific to text. In this section I discuss only those tools that come as part of the base packages, but there are other possibilities out there: the `stringr` package provides a powerful alternative that is a lot more coherent than the basic tools, and is well worth looking into.
### 7.8.1 Shortening a string
The first task I want to talk about is how to shorten a character string. For example, suppose that I have a vector that contains the names of several different animals:
```
animals <- c( "cat", "dog", "kangaroo", "whale" )
```
It might be useful in some contexts to extract the first three letters of each word. This is often useful when annotating figures, or when creating variable labels: it’s often very inconvenient to use the full name, so you want to shorten it to a short code for space reasons. The `strtrim()` function can be used for this purpose. It has two arguments: `x` is a vector containing the text to be shortened and `width` specifies the number of characters to keep. When applied to the `animals` data, here’s what we get:
```
strtrim( x = animals, width = 3 )
```
```
## [1] "cat" "dog" "kan" "wha"
```
Note that the only thing that `strtrim()` does is chop off excess characters at the end of a string. It doesn’t insert any whitespace characters to fill them out if the original string is shorter than the `width` argument. For example, if I trim the `animals` data to 4 characters, here’s what I get:
```
strtrim( x = animals, width = 4 )
```
```
## [1] "cat" "dog" "kang" "whal"
```
The `"cat"` and `"dog"` strings still only use 3 characters. Okay, but what if you don’t want to start from the first letter? Suppose, for instance, I only wanted to keep the second and third letter of each word. That doesn’t happen quite as often, but there are some situations where you need to do something like that. If that does happen, then the function you need is `substr()` , in which you specify a `start` point and a `stop` point instead of specifying the width. For instance, to keep only the 2nd and 3rd letters of the various `animals` , I can do the following:
```
substr( x = animals, start = 2, stop = 3 )
```
```
## [1] "at" "og" "an" "ha"
```
### 7.8.2 Pasting strings together
Much more commonly, you will need either to glue several character strings together or to pull them apart. To glue several strings together, the `paste()` function is very useful. There are three arguments to the `paste()` function:
* `...` As usual, the dots “match” up against any number of inputs. In this case, the inputs should be the various different strings you want to paste together.
* `sep`. This argument should be a string, indicating what characters R should use as separators, in order to keep each of the original strings separate from each other in the pasted output. By default the value is a single space, `sep = " "`. This is made a little clearer when we look at the examples.
* `collapse`. This is an argument indicating whether the `paste()` function should interpret vector inputs as things to be collapsed, or whether a vector of inputs should be converted into a vector of outputs. The default value is `collapse = NULL` which is interpreted as meaning that vectors should not be collapsed. If you want to collapse vectors into a single string, then you should specify a value for `collapse`. Specifically, the value of `collapse` should correspond to the separator character that you want to use for the collapsed inputs. Again, see the examples below for more details.
That probably doesn’t make much sense yet, so let’s start with a simple example. First, let’s try to paste two words together, like this:
```
paste( "hello", "world" )
```
`## [1] "hello world"` Notice that R has inserted a space between the `"hello"` and `"world"` . Suppose that’s not what I wanted. Instead, I might want to use `.` as the separator character, or to use no separator at all. To do either of those, I would need to specify `sep = "."` or `sep = ""` .122 For instance:
```
paste( "hello", "world", sep = "." )
```
`## [1] "hello.world"` Now let’s consider a slightly more complicated example. Suppose I have two vectors that I want to `paste()` together. Let’s say something like this:
```
hw <- c( "hello", "world" )
ng <- c( "nasty", "government" )
```
And suppose I want to paste these together. However, if you think about it, this statement is kind of ambiguous. It could mean that I want to do an “element wise” paste, in which all of the first elements get pasted together ( `"hello nasty"` ) and all the second elements get pasted together ( `"world government"` ). Or, alternatively, I might intend to collapse everything into one big string (
```
"hello nasty world government"
```
). By default, the `paste()` function assumes that you want to do an element-wise paste: `paste( hw, ng )`
```
## [1] "hello nasty" "world government"
```
However, there’s nothing stopping you from overriding this default. All you have to do is specify a value for the `collapse` argument, and R will chuck everything into one dirty big string. To give you a sense of exactly how this works, what I’ll do in this next example is specify different values for `sep` and `collapse` :
```
paste( hw, ng, sep = ".", collapse = ":::")
```
```
## [1] "hello.nasty:::world.government"
```
### 7.8.3 Splitting strings
At other times you have the opposite problem to the one in the last section: you have a whole lot of text bundled together into a single string that needs to be pulled apart and stored as several different variables. For instance, the data set that you get sent might include a single variable containing someone’s full name, and you need to separate it into first names and last names. To do this in R you can use the `strsplit()` function, and for the sake of argument, let’s assume that the string you want to split up is the following string:
```
monkey <- "It was the best of times. It was the blurst of times."
```
To use the `strsplit()` function to break this apart, there are three arguments that you need to pay particular attention to:
* `x`. A vector of character strings containing the data that you want to split.
* `split`. Depending on the value of the `fixed` argument, this is either a fixed string that specifies a delimiter, or a regular expression that matches against one or more possible delimiters. If you don’t know what regular expressions are (probably most readers of this book), don’t use this option. Just specify a separator string, just like you would for the `paste()` function.
* `fixed`. Set `fixed = TRUE` if you want to use a fixed delimiter. As noted above, unless you understand regular expressions this is definitely what you want. However, the default value is `fixed = FALSE`, so you have to set it explicitly.
Let’s look at a simple example:
```
monkey.1 <- strsplit( x = monkey, split = " ", fixed = TRUE )
monkey.1
```
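```
## [[1]]
##  [1] "It"     "was"    "the"    "best"   "of"     "times." "It"     "was"   
##  [9] "the"    "blurst" "of"     "times."
```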
One thing to note in passing is that the output here is a list (you can tell from the `[[1]]` part of the output), whose first and only element is a character vector. This is useful in a lot of ways, since it means that you can input a character vector for `x` and then have the `strsplit()` function split all of them, but it’s kind of annoying when you only have a single input. To that end, it’s useful to know that you can `unlist()` the output: `unlist( monkey.1 )`
To understand why it’s important to remember to use the `fixed = TRUE` argument, suppose we wanted to split this into two separate sentences. That is, we want to use `split = "."` as our delimiter string. As long as we tell R to remember to treat this as a fixed separator character, then we get the right answer:
```
strsplit( x = monkey, split = ".", fixed = TRUE )
```
```
## [[1]]
## [1] "It was the best of times" " It was the blurst of times"
```
However, if we don’t do this, then R will assume that when you typed `split = "."` you were trying to construct a “regular expression”, and as it happens the character `.` has a special meaning within a regular expression. As a consequence, if you forget to include the `fixed = TRUE` part, you won’t get the answers you’re looking for.
### 7.8.4 Making simple conversions
A slightly different task that comes up quite often is making transformations to text. A simple example of this would be converting text to lower case or upper case, which you can do using the `toupper()` and `tolower()` functions. Both of these functions have a single argument `x` which contains the text that needs to be converted. An example of this is shown below:
```
text <- c( "lIfe", "Impact" )
tolower( x = text )
```
```
## [1] "life" "impact"
```
A slightly more powerful way of doing text transformations is to use the `chartr()` function, which allows you to specify a “character by character” substitution. This function contains three arguments, `old` , `new` and `x` . As usual `x` specifies the text that needs to be transformed. The `old` and `new` arguments are strings of the same length, and they specify how `x` is to be converted. Every instance of the first character in `old` is converted to the first character in `new` and so on. For instance, suppose I wanted to convert `"albino"` to `"libido"` . To do this, I need to convert all of the `"a"` characters (all 1 of them) in `"albino"` into `"l"` characters (i.e., `a` \(\rightarrow\) `l` ). Additionally, I need to make the substitutions `l` \(\rightarrow\) `i` and `n` \(\rightarrow\) `d` . To do so, I would use the following command:
```
old.text <- "albino"
chartr( old = "aln", new = "lid", x = old.text )
```
`## [1] "libido"`
### 7.8.5 Applying logical operations to text
In Section 3.9.5 we discussed a very basic text processing tool, namely the ability to use the equality operator `==` to test to see if two strings are identical to each other. However, you can also use other logical operators too. For instance R also allows you to use the `<` and `>` operators to determine which of two strings comes first, alphabetically speaking. Sort of. Actually, it’s a bit more complicated than that, but let’s start with a simple example: `"cat" < "dog"` `## [1] TRUE` In this case, we see that `"cat"` does come before `"dog"` alphabetically, so R judges the statement to be true. However, if we ask R to tell us if `"cat"` comes before `"anteater"`, `"cat" < "anteater"` `## [1] FALSE` It tells us that the statement is false. So far, so good. But text data is a bit more complicated than the dictionary suggests. What about `"cat"` and `"CAT"`? Which of these comes first? Let’s try it and find out: `"CAT" < "cat"` `## [1] FALSE` In other words, R assumes that uppercase letters come before lowercase ones. Fair enough. No-one is likely to be surprised by that. What you might find surprising is that R assumes that all uppercase letters come before all lowercase ones. That is, while `"anteater" < "zebra"` is a true statement, and the uppercase equivalent `"ANTEATER" < "ZEBRA"` is also true, it is not true to say that `"anteater" < "ZEBRA"`, as the following extract illustrates: `"anteater" < "ZEBRA"` `## [1] FALSE`
This may seem slightly counterintuitive. With that in mind, it may help to have a quick look at Table 7.3, which lists various text characters in the order that R uses.
| Characters |
| --- |
| ! " # $ % & ’ ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ ] ^ _ ‘ a b c d e f g h i j k l m n o p q r s t u v w x y z { \| } |
### 7.8.6 Concatenating and printing with `cat()`

One function that I want to make a point of talking about, even though it’s not quite on topic, is the `cat()` function. The `cat()` function is a mixture of `paste()` and `print()`. That is, what it does is concatenate strings and then print them out. In your own work you can probably survive without it, since `print()` and `paste()` will actually do what you need, but the `cat()` function is so widely used that I think it’s a good idea to talk about it here. The basic idea behind `cat()` is straightforward. Like `paste()`, it takes several arguments as inputs, which it converts to strings, collapses (using a separator character specified using the `sep` argument), and prints on screen. If you want, you can use the `file` argument to tell R to print the output into a file rather than on screen (I won’t do that here). However, it’s important to note that the `cat()` function collapses vectors first, and then concatenates them. That is, notice that when I use `cat()` to combine `hw` and `ng`, I get a different result than if I’d used `paste()`: `cat( hw, ng )`
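```
## hello world nasty government
```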
```
paste( hw, ng, collapse = " " )
```
```
## [1] "hello nasty world government"
```
Notice the difference in the ordering of words. There are a few additional details that I need to mention about `cat()`. Firstly, `cat()` really is a function for printing, and not for creating text strings to store for later. You can’t assign the output to a variable, as the following example illustrates: `x <- cat( hw, ng )`
`x` `## NULL` Despite my attempt to store the output as a variable, `cat()` printed the results on screen anyway, and it turns out that the variable I created doesn’t contain anything at all.123 Secondly, the `cat()` function makes use of a number of “special” characters. I’ll talk more about these in the next section, but I’ll illustrate the basic point now, using the example of `"\n"` which is interpreted as a “new line” character. For instance, compare the behaviour of `print()` and `cat()` when asked to print the string `"hello\nworld"` :
```
print( "hello\nworld" ) # print literally:
```
```
## [1] "hello\nworld"
```
```
cat( "hello\nworld" ) # interpret as newline
```
```
## hello
## world
```
In fact, this behaviour is important enough that it deserves a section of its very own…
### 7.8.7 Using escape characters in text
The previous section brings us quite naturally to a fairly fundamental issue when dealing with strings, namely the issue of delimiters and escape characters. Reduced to its most basic form, the problem we have is that R commands are written using text characters, and our strings also consist of text characters. So, suppose I want to type in the word “hello”, and have R encode it as a string. If I were to just type `hello`, R will think that I’m referring to a variable or a function called `hello` rather than interpret it as a string. The solution that R adopts is to require you to enclose your string by delimiter characters, which can be either double quotes or single quotes. So, when I type `"hello"` or `'hello'` then R knows that it should treat the text in between the quote marks as a character string. However, this isn’t a complete solution to the problem: after all, `"` and `'` are themselves perfectly legitimate text characters, and so we might want to include those in our string as well. For instance, suppose I wanted to encode the name “O’Rourke” as a string. It’s not legitimate for me to type `'O'Rourke'` because R is too stupid to realise that “O’Rourke” is a real word. So it will interpret the `'O'` part as a complete string, and then will get confused when it reaches the `Rourke'` part. As a consequence, what you get is an error message:
```
'O'Rourke'
Error: unexpected symbol in "'O'Rourke"
```
To some extent, R offers us a cheap fix to the problem because of the fact that it allows us to use either `"` or `'` as the delimiter character. Although `'O'Rourke'` will make R cry, it is perfectly happy with `"O'Rourke"`: `"O'Rourke"` `## [1] "O'Rourke"`
This is a real advantage to having two different delimiter characters. Unfortunately, anyone with even the slightest bit of deviousness to them can see the problem with this. Suppose I’m reading a book that contains the following passage,
<NAME> says, “Yay, money!”. It’s a joke, but no-one laughs.
and I want to enter this as a string. Neither the `'` or `"` delimiters will solve the problem here, since this string contains both a single quote character and a double quote character. To encode strings like this one, we have to do something a little bit clever.
| Escape sequence | Interpretation |
| --- | --- |
| `\n` | Newline |
| `\t` | Horizontal Tab |
| `\v` | Vertical Tab |
| `\b` | Backspace |
| `\r` | Carriage Return |
| `\f` | Form feed |
| `\a` | Alert sound |
| `\\` | Backslash |
| `\'` | Single quote |
| `\"` | Double quote |
The solution to the problem is to designate an escape character, which in this case is `\` , the humble backslash. The escape character is a bit of a sacrificial lamb: if you include a backslash character in your string, R will not treat it as a literal character at all. It’s actually used as a way of inserting “special” characters into your string. For instance, if you want to force R to insert actual quote marks into the string, then what you actually type is `\'` or `\"` (these are called escape sequences). So, in order to encode the string discussed earlier, here’s a command I could use:
```
PJ <- "<NAME> says, \"Yay, money!\". It\'s a joke, but no-one laughs."
```
Notice that I’ve included the backslashes for both the single quotes and double quotes. That’s actually overkill: since I’ve used `"` as my delimiter, I only needed to do this for the double quotes. Nevertheless, the command has worked, since I didn’t get an error message. Now let’s see what happens when I print it out: `print( PJ )`
```
## [1] "<NAME> says, \"Yay, money!\". It's a joke, but no-one laughs."
```
Hm. Why has R printed out the string using `\"` ? For the exact same reason that I needed to insert the backslash in the first place. That is, when R prints out the `PJ` string, it has enclosed it with delimiter characters, and it wants to unambiguously show us which of the double quotes are delimiters and which ones are actually part of the string. Fortunately, if this bugs you, you can make it go away by using the `print.noquote()` function, which will just print out the literal string that you encoded in the first place: `print.noquote( PJ )` Typing `cat(PJ)` will produce a similar output. Introducing the escape character solves a lot of problems, since it provides a mechanism by which we can insert all sorts of characters that aren’t on the keyboard. For instance, as far as a computer is concerned, “new line” is actually a text character. It’s the character that is printed whenever you hit the “return” key on your keyboard. If you want to insert a new line character into your string, you can actually do this by including the escape sequence `\n` . Or, if you want to insert a backslash character, then you can use `\\` . A list of the standard escape sequences recognised by R is shown in Table 7.4. A lot of these actually date back to the days of the typewriter (e.g., carriage return), so they might seem a bit counterintuitive to people who’ve never used one. In order to get a sense for what the various escape sequences do, we’ll have to use the `cat()` function, because it’s the only function “dumb” enough to literally print them out:
```
cat( "xxxx\boo" ) # \b is a backspace, so it deletes the preceding x
cat( "xxxx\too" ) # \t is a tab, so it inserts a tab space
cat( "xxxx\noo" ) # \n is a newline character
cat( "xxxx\roo" ) # \r returns you to the beginning of the line
```
And that’s pretty much it. There are a few other escape sequences that R recognises, which you can use to insert arbitrary ASCII or Unicode characters into your string (type `?Quotes` for more details) but I won’t go into details here.
### 7.8.8 Matching and substituting text
Another task that we often want to solve is to find all strings that match a certain criterion, and possibly even make alterations to the text on that basis. There are several functions in R that allow you to do this, three of which I’ll talk about briefly here: `grep()`, `gsub()` and `sub()`. Much like the `strsplit()` function that I talked about earlier, all three of these functions are intended to be used in conjunction with regular expressions (see Section 7.8.9), but you can also use them in a simpler fashion, since they all allow you to set `fixed = TRUE`, which means we can ignore all this regular expression rubbish and just use simple text matching. So, how do these functions work? Let’s start with the `grep()` function. The purpose of this function is to input a vector of character strings `x`, and to extract all those strings that fit a certain pattern. In our examples, I’ll assume that the `pattern` in question is a literal sequence of characters that the string must contain (that’s what `fixed = TRUE` does). To illustrate this, let’s start with a simple data set, a vector that contains the names of three `beers`. Something like this:
```
beers <- c( "little creatures", "sierra nevada", "coopers pale" )
```
Next, let’s use `grep()` to find out which of these strings contains the substring `"er"` . That is, the `pattern` that we need to match is the fixed string `"er"` , so the command we need to use is:
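```
grep( pattern = "er", x = beers, fixed = TRUE )
```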
`## [1] 2 3` What the output here is telling us is that the second and third elements of `beers` both contain the substring `"er"` . Alternatively, however, we might prefer it if `grep()` returned the actual strings themselves. We can do this by specifying `value = TRUE` in our function call. That is, we’d use a command like this:
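```
grep( pattern = "er", x = beers, value = TRUE, fixed = TRUE )
```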
```
## [1] "sierra nevada" "coopers pale"
```
The other two functions that I wanted to mention in this section are `gsub()` and `sub()` . These are both similar in spirit to `grep()` insofar as what they do is search through the input strings ( `x` ) and find all of the strings that match a `pattern` . However, what these two functions do is replace the pattern with a `replacement` string. The `gsub()` function will replace all instances of the pattern, whereas the `sub()` function just replaces the first instance of it in each string. To illustrate how this works, suppose I want to replace all instances of the letter `"a"` with the string `"BLAH"` . I can do this to the `beers` data using the `gsub()` function:
```
gsub( pattern = "a", replacement = "BLAH", x = beers, fixed = TRUE )
```
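```
## [1] "little creBLAHtures"    "sierrBLAH nevBLAHdBLAH" "coopers pBLAHle"
```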
Notice that all three of the `"a"` s in `"sierra nevada"` have been replaced. In contrast, let’s see what happens when we use the exact same command, but this time using the `sub()` function instead:
```
sub( pattern = "a", replacement = "BLAH", x = beers, fixed = TRUE )
```
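```
## [1] "little creBLAHtures" "sierrBLAH nevada"    "coopers pBLAHle"
```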
Only the first `"a"` is changed.
### 7.8.9 Regular expressions (not really)
There’s one last thing I want to talk about regarding text manipulation, and that’s the concept of a regular expression. Throughout this section we’ve often needed to specify `fixed = TRUE` in order to force R to treat some of our strings as actual strings, rather than as regular expressions. So, before moving on, I want to very briefly explain what regular expressions are. I’m not going to talk at all about how they work or how you specify them, because they’re genuinely complicated and not at all relevant to this book. However, they are extremely powerful tools and they’re quite widely used by people who have to work with lots of text data (e.g., people who work with natural language data), and so it’s handy to at least have a vague idea about what they are. The basic idea is quite simple. Suppose I want to extract all strings in my `beers` vector that contain a vowel followed immediately by the letter `"s"`. That is, I want to find the beer names that contain either `"as"`, `"es"`, `"is"`, `"os"` or `"us"`. One possibility would be to manually specify all of these possibilities and then match against these as fixed strings one at a time, but that’s tedious. The alternative is to try to write out a single “regular” expression that matches all of these. The regular expression that does this124 is `"[aeiou]s"`, and you can kind of see what the syntax is doing here. The bracketed expression means “any of the things in the middle”, so the expression as a whole means “any of the things in the middle” (i.e. vowels) followed by the letter `"s"`. When applied to our beer names we get this:
```
grep( pattern = "[aeiou]s", x = beers, value = TRUE )
```
```
## [1] "little creatures"
```
So it turns out that only `"little creatures"` contains a vowel followed by the letter `"s"` . But of course, had the data contained a beer like `"fosters"` , that would have matched as well because it contains the string `"os"` . However, I deliberately chose not to include it because Fosters is not – in my opinion – a proper beer.125 As you can tell from this example, regular expressions are a neat tool for specifying patterns in text: in this case, “vowel then s”. So they are definitely things worth knowing about if you ever find yourself needing to work with a large body of text. However, since they are fairly complex and not necessary for any of the applications discussed in this book, I won’t talk about them any further.
## 7.9 Reading unusual data files
In this section I’m going to switch topics (again!) and turn to the question of how you can load data from a range of different sources. Throughout this book I’ve assumed that your data are stored as an `.Rdata` file or as a “properly” formatted CSV file. And if so, then the basic tools that I discussed in Section 4.5 should be quite sufficient. However, in real life that’s not a terribly plausible assumption to make, so I’d better talk about some of the other possibilities that you might run into.
### 7.9.1 Loading data from text files
The first thing I should point out is that if your data are saved as a text file but aren’t quite in the proper CSV format, then there’s still a pretty good chance that the `read.csv()` function (or equivalently, `read.table()` ) will be able to open it. You just need to specify a few more of the optional arguments to the function. If you type `?read.csv` you’ll see that the `read.csv()` function actually has several arguments that you can specify. Obviously you need to specify the `file` that you want it to load, but the others all have sensible default values. Nevertheless, you will sometimes need to change them. The ones that I’ve often found myself needing to change are:
* `header`. A lot of the time when you’re storing data as a CSV file, the first row actually contains the column names and not data. If that’s not true, you need to set `header = FALSE`.
* `sep`. As the name “comma separated value” indicates, the values in a row of a CSV file are usually separated by commas. This isn’t universal, however. In Europe the decimal point is typically written as `,` instead of `.` and as a consequence it would be somewhat awkward to use `,` as the separator. Therefore it is not unusual to use `;` over there. At other times, I’ve seen a TAB character used. To handle these cases, we’d need to set `sep = ";"` or `sep = "\t"`.
* `quote`. It’s conventional in CSV files to include a quoting character for textual data. As you can see by looking at the `booksales.csv` file, this is usually a double quote character, `"`. But sometimes there is no quoting character at all, or you might see a single quote mark `'` used instead. In those cases you’d need to specify `quote = ""` or `quote = "'"`.
* `skip`. It’s actually very common to receive CSV files in which the first few rows have nothing to do with the actual data. Instead, they provide a human readable summary of where the data came from, or maybe they include some technical info that doesn’t relate to the data. To tell R to ignore the first (say) three lines, you’d need to set `skip = 3`.
* `na.strings`. Often you’ll get given data with missing values. For one reason or another, some entries in the table are missing. The data file needs to include a “special” string to indicate that the entry is missing. By default R assumes that this string is `NA`, since that’s what it would do, but there’s no universal agreement on what to use in this situation. If the file uses `???` instead, then you’ll need to set `na.strings = "???"`.
It’s kind of nice to be able to have all these options that you can tinker with. For instance, have a look at the data file shown pictured in Figure 7.1. This file contains almost the same data as the last file (except it doesn’t have a header), and it uses a bunch of wacky features that you don’t normally see in CSV files. In fact, it just so happens that I’m going to have to change all five of those arguments listed above in order to load this file. Here’s how I would do it:
```
data <- read.csv(
file = file.path(projecthome,"data","booksales2.csv"), # specify path to the file
header = FALSE, # variable names in the file?
skip = 8, # ignore the first 8 lines
quote = "*", # what indicates text data?
sep = "\t", # what separates different entries?
na.strings = "NFI" ) # what is the code for missing data?
```
If I now have a look at the data I’ve loaded, I see that this is what I’ve got:
`head( data )`
```
## V1 V2 V3 V4
## 1 January 31 0 high
## 2 February 28 100 high
## 3 March 31 200 low
## 4 April 30 50 out
## 5 May 31 NA out
## 6 June 30 0 high
```
Because I told R to expect `*` to be used as the quoting character instead of `"` ; to look for tabs (which we write like this: `\t` ) instead of commas, and to skip the first 8 lines of the file, it’s basically loaded the right data. However, since `booksales2.csv` doesn’t contain the column names, R has made them up. Showing the kind of imagination I expect from insentient software, R decided to call them `V1` , `V2` , `V3` and `V4` . Finally, because I told it that the file uses “NFI” to denote missing data, R correctly figures out that the sales data for May are actually missing.
In real life you’ll rarely see data this stupidly formatted.126
### 7.9.2 Loading data from SPSS (and other statistics packages)
The commands listed above are the main ones we’ll need for data files in this book. But in real life we have many more possibilities. For example, you might want to read data files in from other statistics programs. Since SPSS is probably the most widely used statistics package in psychology, it’s worth briefly showing how to open SPSS data files (file extension `.sav` ). It’s surprisingly easy. The extract below should illustrate how to do so:
```
library( foreign ) # load the package
X <- read.spss( file.path(projecthome,"data","datafile.sav" )) # create a list containing the data
X <- as.data.frame( X ) # convert to data frame
```
If you wanted to import from an SPSS file to a data frame directly, instead of importing a list and then converting the list to a data frame, you can do that too:
```
X <- read.spss( file = "datafile.sav", to.data.frame = TRUE )
```
And that’s pretty much it, at least as far as SPSS goes. As far as other statistical software goes, the `foreign` package provides a wealth of possibilities. To open SAS files, check out the `read.ssd()` and `read.xport()` functions. To open data from Minitab, the `read.mtp()` function is what you’re looking for. For Stata, the `read.dta()` function is what you want. For Systat, the `read.systat()` function is what you’re after.
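Just to give a rough sense of what these look like in practice, here’s a sketch of how you might read a Stata file (the file name here is made up):
```
library( foreign )                # load the package
X <- read.dta( "datafile.dta" )   # returns a data frame
```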
### 7.9.3 Loading Excel files
A different problem is posed by Excel files. Despite years of yelling at people for sending data to me encoded in a proprietary data format, I get sent a lot of Excel files. In general R does a pretty good job of opening them, but it’s a bit finicky because Microsoft don’t seem to be terribly fond of people using non-Microsoft products, and go to some lengths to make it tricky. If you get an Excel file, my suggestion would be to open it up in Excel (or better yet, OpenOffice, since that’s free software) and then save the spreadsheet as a CSV file. Once you’ve got the data in that format, you can open it using `read.csv()`. However, if for some reason you’re desperate to open the `.xls` or `.xlsx` file directly, then you can use the `read.xls()` function in the `gdata` package:
```
library( gdata ) # load the package
X <- read.xls( "datafile.xlsx" ) # create a data frame
```
This usually works. And if it doesn’t, you’re probably justified in “suggesting” to the person that sent you the file that they should send you a nice clean CSV file instead.
### 7.9.4 Loading Matlab (& Octave) files
A lot of scientific labs use Matlab as their default platform for scientific computing; or Octave as a free alternative. Opening Matlab data files (file extension `.mat`) is slightly more complicated, and if it wasn’t for the fact that Matlab is so very widespread and is an extremely good platform, I wouldn’t mention it. However, since Matlab is so widely used, I think it’s worth discussing briefly how to get Matlab and R to play nicely together. The way to do this is to install the `R.matlab` package (don’t forget to install the dependencies too). Once you’ve installed and loaded the package, you have access to the `readMat()` function. As any Matlab user will know, the `.mat` files that Matlab produces are workspace files, very much like the `.Rdata` files that R produces. So you can’t import a `.mat` file as a data frame. However, you can import it as a list. So, when we do this:
```
library( R.matlab ) # load the package
data <- readMat( "matlabfile.mat" ) # read the data file to a list
```
The `data` object that gets created will be a list, containing one variable for every variable stored in the Matlab file. It’s fairly straightforward, though there are some subtleties that I’m ignoring. In particular, note that if you don’t have the `Rcompression` package, you can’t open Matlab files above the version 6 format. So, if like me you’ve got a recent version of Matlab, and don’t have the `Rcompression` package, you’ll need to save your files using the `-v6` flag otherwise R can’t open them. Oh, and Octave users? The `foreign` package contains a `read.octave()` command. Just this once, the world makes life easier for you folks than it does for all those cashed-up swanky Matlab bastards.
### 7.9.5 Saving other kinds of data
Given that I talked extensively about how to load data from non-R files, it might be worth briefly mentioning that R is also pretty good at writing data into other file formats besides its own native ones. I won’t discuss them in this book, but the `write.csv()` function can write CSV files, and the `write.foreign()` function (in the `foreign` package) can write SPSS, Stata and SAS files. There are also a lot of low level commands that you can use to write very specific information to a file, so if you really, really needed to you could create your own
```
write.obscurefiletype()
```
function, but that’s also a long way beyond the scope of this book. For now, all that I want you to recognise is that this capability is there if you need it.
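Just to give you a feel for it, here’s a minimal sketch of what writing data out might look like. The file names are made up, and it assumes you have a data frame called `df` sitting in your workspace:
```
write.csv( x = df, file = "mydata.csv", row.names = FALSE )  # write a CSV file
library( foreign )                        # write.foreign() lives in this package
write.foreign( df = df,                   # the data frame to export
               datafile = "mydata.txt",   # where the raw data go
               codefile = "mydata.sps",   # the SPSS syntax needed to read it
               package = "SPSS" )         # which program to target
```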
### 7.9.6 Are we done yet?
Of course not. If I’ve learned nothing else about R it’s that you’re never bloody done. This listing doesn’t even come close to exhausting the possibilities. Databases are supported by the `RODBC` , `DBI` , and `RMySQL` packages among others. You can open webpages using the `RCurl` package. Reading and writing JSON objects is supported through the `rjson` package. And so on. In a sense, the right question is not so much “can R do this?” so much as “whereabouts in the wilds of CRAN is the damn package that does it?”
## 7.10 Coercing data from one class to another
Sometimes you want to change the variable class. This can happen for all sorts of reasons. Sometimes when you import data from files, it can come to you in the wrong format: numbers sometimes get imported as text, dates usually get imported as text, and many other possibilities besides. Regardless of how you’ve ended up in this situation, there’s a very good chance that sometimes you’ll want to convert a variable from one class into another one. Or, to use the correct term, you want to coerce the variable from one class into another. Coercion is a little tricky, and so I’ll only discuss the very basics here, using a few simple examples.
Firstly, let’s suppose we have a variable `x` that is supposed to be representing a number, but the data file that you’ve been given has encoded it as text. Let’s imagine that the variable is something like this:
```
x <- "100" # the variable
class(x) # what class is it?
```
`## [1] "character"` Obviously, if I want to do calculations using `x` in its current state, R is going to get very annoyed at me. It thinks that `x` is text, so it’s not going to allow me to try to do mathematics using it! Obviously, we need to coerce `x` from character to numeric. We can do that in a straightforward way by using the `as.numeric()` function:
```
x <- as.numeric(x) # coerce the variable
class(x) # what class is it?
```
`## [1] "numeric"`
```
x + 1 # hey, addition works!
```
`## [1] 101` Not surprisingly, we can also convert it back again if we need to. The function that we use to do this is the `as.character()` function:
```
x <- as.character(x) # coerce back to text
class(x) # check the class:
```
`## [1] "character"` However, there’s some fairly obvious limitations: you can’t coerce the string `"hello world"` into a number because, well, there’s isn’t a number that corresponds to it. Or, at least, you can’t do anything useful:
```
as.numeric( "hello world" ) # this isn't going to work.
```
```
## Warning: NAs introduced by coercion
```
`## [1] NA` In this case R doesn’t give you an error message; it just gives you a warning, and then says that the data is missing (see Section 4.6.1 for the interpretation of `NA` ). That gives you a feel for how to change between numeric and character data. What about logical data? To cover this briefly, coercing text to logical data is pretty intuitive: you use the `as.logical()` function, and the character strings `"T"` , `"TRUE"` , `"True"` and `"true"` all convert to the logical value of `TRUE` . Similarly `"F"` , `"FALSE"` , `"False"` , and `"false"` all become `FALSE` . All other strings convert to `NA` . When you go back the other way using `as.character()` , `TRUE` converts to `"TRUE"` and `FALSE` converts to `"FALSE"` . Converting numbers to logicals – again using `as.logical()` – is straightforward. Following the convention in the study of logic, the number `0` converts to `FALSE` . Everything else is `TRUE` . Going back using `as.numeric()` , `FALSE` converts to `0` and `TRUE` converts to `1` .
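Just to make those conversions concrete, here’s a quick sketch of what they look like in action:
```
as.logical( c("T", "TRUE", "false", "banana") )   # text to logical
## [1]  TRUE  TRUE FALSE    NA
as.logical( c(0, 1, 2) )                          # numbers to logical
## [1] FALSE  TRUE  TRUE
as.numeric( c(TRUE, FALSE) )                      # and logicals back to numbers
## [1] 1 0
```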
## 7.11 Other useful data structures
Up to this point we have encountered several different kinds of variables. At the simplest level, we’ve seen numeric data, logical data and character data. However, we’ve also encountered some more complicated kinds of variables, namely factors, formulas, data frames and lists. We’ll see a few more specialised data structures later on in this book, but there’s a few more generic ones that I want to talk about in passing. None of them are central to the rest of the book (and in fact, the only one we’ll even see anywhere else is the matrix), but they do crop up a fair bit in real life.
### 7.11.1 Matrices
In various different places in this chapter I’ve made reference to an R data structure called a matrix, and mentioned that I’d talk a bit more about matrices later on. That time has come. Much like a data frame, a matrix is basically a big rectangular table of data, and in fact there are quite a few similarities between the two. However, there are also some key differences, so it’s important to talk about matrices in a little detail. Let’s start by using `rbind()` to create a small matrix:127
```
row.1 <- c( 2,3,1 ) # create data for row 1
row.2 <- c( 5,6,7 ) # create data for row 2
M <- rbind( row.1, row.2 ) # row bind them into a matrix
print( M ) # and print it out...
```
```
## [,1] [,2] [,3]
## row.1 2 3 1
## row.2 5 6 7
```
The variable `M` is a matrix, which we can confirm by using the `class()` function. Notice that, when we bound the two vectors together, R retained the names of the original variables as row names. We could delete these if we wanted by typing `rownames(M)<-NULL` , but I generally prefer having meaningful names attached to my variables, so I’ll keep them. In fact, let’s also add some highly unimaginative column names as well:
```
colnames(M) <- c( "col.1", "col.2", "col.3" )
print(M)
```
```
## col.1 col.2 col.3
## row.1 2 3 1
## row.2 5 6 7
```
You can use square brackets to subset a matrix in much the same way that you can for data frames, again specifying a row index and then a column index. For instance, `M[2,3]` pulls out the entry in the 2nd row and 3rd column of the matrix (i.e., `7` ), whereas `M[2,]` pulls out the entire 2nd row, and `M[,3]` pulls out the entire 3rd column. However, it’s worth noting that when you pull out a column, R will print the results horizontally, not vertically. The reason for this relates to how matrices (and arrays generally) are implemented. The original matrix `M` is treated as a two-dimensional object, containing 2 rows and 3 columns. However, whenever you pull out a single row or a single column, the result is considered to be one-dimensional. As far as R is concerned there’s no real reason to distinguish between a one-dimensional object printed vertically (a column) and a one-dimensional object printed horizontally (a row), and it prints them all out horizontally.128 There is also a way of using only a single index, but due to the internal structure of how R defines a matrix, it works very differently to what we saw previously with data frames.
The single-index approach is illustrated in Table 7.5 (and in the quick sketch after the tables below) but I don’t really want to focus on it, since we’ll never really need it for this book, and matrices don’t play anywhere near as large a role in this book as data frames do. The reason for this is that, for both data frames and matrices, the “row and column” version exists to allow the human user to interact with the object in a psychologically meaningful way: since both data frames and matrices are basically just tables of data, it’s the same in each case. However, the single-index version is really a method for you to interact with the object in terms of its internal structure, and the internals for data frames and matrices are quite different.
The “row and column” version of Table 7.5, in which each element is referred to by a row index and a column index:

| Row | Col.1 | Col.2 | Col.3 |
| --- | --- | --- | --- |
| Row 1 | [1,1] | [1,2] | [1,3] |
| Row 2 | [2,1] | [2,2] | [2,3] |

The single-index version, in which the elements are numbered by counting down the columns:

| Row | Col.1 | Col.2 | Col.3 |
| --- | --- | --- | --- |
| Row 1 | 1 | 3 | 5 |
| Row 2 | 2 | 4 | 6 |
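To make the contrast concrete, here’s a quick sketch using the matrix `M` from above; the single-index form counts down the columns, exactly as the second table shows:
```
M[2,3]     # row 2, column 3
## [1] 7
M[2,]      # the entire 2nd row, printed horizontally
## col.1 col.2 col.3 
##     5     6     7
M[5]       # single-index form: the 5th element, counting down the columns
## [1] 1
```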
The critical difference between a data frame and a matrix is that, in a data frame, we have this notion that each of the columns corresponds to a different variable: as a consequence, the columns in a data frame can be of different data types. The first column could be numeric, and the second column could contain character strings, and the third column could be logical data. In that sense, there is a fundamental asymmetry built into a data frame, because of the fact that columns represent variables (which can be qualitatively different to each other) and rows represent cases (which cannot). Matrices are intended to be thought of in a different way. At a fundamental level, a matrix really is just one variable: it just happens that this one variable is formatted into rows and columns. If you want a matrix of numeric data, every single element in the matrix must be a number. If you want a matrix of character strings, every single element in the matrix must be a character string. If you try to mix data of different types together, then R will either spit out an error, or quietly coerce everything into the most general type that can accommodate all of the data (usually character strings). If you want to find out what class R secretly thinks the data within the matrix is, you need to do something like this:
`class( M[1] )` `## [1] "numeric"` You can’t type `class(M)` , because all that will happen is R will tell you that `M` is a matrix: we’re not interested in the class of the matrix itself, we want to know what class the underlying data is assumed to be. Anyway, to give you a sense of how R enforces this, let’s try to change one of the elements of our numeric matrix into a character string:
```
M[1,2] <- "text"
M
```
```
## col.1 col.2 col.3
## row.1 "2" "text" "1"
## row.2 "5" "6" "7"
```
It looks as if R has coerced all of the data in our matrix into character strings. And in fact, if we now typed in `class(M[1])` we’d see that this is exactly what has happened. If you alter the contents of one element in a matrix, R will change the underlying data type as necessary. There’s only one more thing I want to talk about regarding matrices. The concept behind a matrix is very much a mathematical one, and in mathematics a matrix is most definitely a two-dimensional object. However, when doing data analysis, we often have reasons to want to use higher dimensional tables (e.g., sometimes you need to cross-tabulate three variables against each other). You can’t do this with matrices, but you can do it with arrays. An array is just like a matrix, except it can have more than two dimensions if you need it to. In fact, as far as R is concerned a matrix is just a special kind of array, in much the same way that a data frame is a special kind of list. I don’t want to talk about arrays too much, but I will very briefly show you an example of what a 3D array looks like. To that end, let’s cross tabulate the `speaker` and `utterance` variables from the `nightgarden.Rdata` data file, but we’ll add a third variable to the cross-tabs this time, a logical variable which indicates whether or not I was still awake at this point in the show:
```
dan.awake <- c( TRUE,TRUE,TRUE,TRUE,TRUE,FALSE,FALSE,FALSE,FALSE,FALSE )
```
Now that we’ve got all three variables in the workspace (assuming you loaded the `nightgarden.Rdata` data earlier in the chapter) we can construct our three way cross-tabulation, using the `table()` function.
```
xtab.3d <- table( speaker, utterance, dan.awake )
xtab.3d
```
```
## , , dan.awake = FALSE
##
## utterance
## speaker ee onk oo pip
## makka-pakka 0 2 0 2
## tombliboo 0 0 1 0
## upsy-daisy 0 0 0 0
##
## , , dan.awake = TRUE
##
## utterance
## speaker ee onk oo pip
## makka-pakka 0 0 0 0
## tombliboo 1 0 0 0
## upsy-daisy 0 2 0 2
```
Hopefully this output is fairly straightforward: because R can’t print out text in three dimensions, what it does is show a sequence of 2D slices through the 3D table. That is, the
```
, , dan.awake = FALSE
```
part indicates that the 2D table that follows below shows the 2D cross-tabulation of `speaker` against `utterance` only for the `dan.awake = FALSE` instances, and so on.129
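Incidentally, if you ever want to pull out one of those 2D slices yourself, the square-bracket indexing simply gains a third position. A quick sketch:
```
xtab.3d[ , , "TRUE" ]       # just the 2D slice for dan.awake = TRUE
xtab.3d[ "tombliboo", , ]   # utterance by dan.awake, for one speaker only
```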
### 7.11.2 Ordered factors
One topic that I neglected to mention when discussing factors previously (Section 4.7) is that there are actually two different types of factor in R, unordered factors and ordered factors. An unordered factor corresponds to a nominal scale variable, and all of the factors we’ve discussed so far in this book have been unordered (as will all the factors used anywhere else except in this section). However, it’s often very useful to explicitly tell R that your variable is ordinal scale, and if so you need to declare it to be an ordered factor. For instance, earlier in this chapter we made use of a variable consisting of Likert scale data, which we represented as the `likert.raw` variable: `likert.raw`
We can declare this to be an ordered factor by using the `factor()` function, and setting `ordered = TRUE` . To illustrate how this works, let’s create an ordered factor called `likert.ordinal` and have a look at it:
```
likert.ordinal <- factor( x = likert.raw, # the raw data
levels = seq(7,1,-1), # strongest agreement is 1, weakest is 7
ordered = TRUE ) # and it's ordered
print( likert.ordinal )
```
```
## [1] 1 7 3 4 4 4 2 6 5 5
## Levels: 7 < 6 < 5 < 4 < 3 < 2 < 1
```
Notice that when we print out the ordered factor, R explicitly tells us what order the levels come in. Because I wanted to order my levels in terms of increasing strength of agreement, and because a response of 1 corresponded to the strongest agreement and 7 to the strongest disagreement, it was important that I tell R to encode 7 as the lowest value and 1 as the largest. Always check this when creating an ordered factor: it’s very easy to accidentally encode your data “upside down” if you’re not paying attention. In any case, note that we can (and should) attach meaningful names to these factor levels by using the `levels()` function, like this:
```
levels( likert.ordinal ) <- c( "strong.disagree", "disagree", "weak.disagree",
"neutral", "weak.agree", "agree", "strong.agree" )
print( likert.ordinal )
```
```
## [1] strong.agree strong.disagree weak.agree neutral
## [5] neutral neutral agree disagree
## [9] weak.disagree weak.disagree
## 7 Levels: strong.disagree < disagree < weak.disagree < ... < strong.agree
```
One nice thing about using ordered factors is that there are a lot of analyses for which R automatically treats ordered factors differently from unordered factors, and generally in a way that is more appropriate for ordinal data. However, since I don’t discuss that in this book, I won’t go into details. Like so many things in this chapter, my main goal here is to make you aware that R has this capability built into it; so if you ever need to start thinking about ordinal scale variables in more detail, you have at least some idea where to start looking!
### 7.11.3 Dates and times
Times and dates are very annoying types of data. To a first approximation we can say that there are 365 days in a year, 24 hours in a day, 60 minutes in an hour and 60 seconds in a minute, but that’s not quite correct. The length of the solar day is not exactly 24 hours, and the length of the solar year is not exactly 365 days, so we have a complicated system of corrections that have to be made to keep the time and date system working. On top of that, the measurement of time is usually taken relative to a local time zone, and most (but not all) time zones have both a standard time and a daylight savings time, though the date at which the switch occurs is not at all standardised. So, as a form of data, times and dates suck. Unfortunately, they’re also important. Sometimes it’s possible to avoid having to use any complicated system for dealing with times and dates. Often you just want to know what year something happened in, so you can just use numeric data: in quite a lot of situations something as simple as `this.year <- 2011` works just fine. If you can get away with that for your application, this is probably the best thing to do. However, sometimes you really do need to know the actual date. Or, even worse, the actual time. In this section, I’ll very briefly introduce you to the basics of how R deals with date and time data. As with a lot of things in this chapter, I won’t go into details because I don’t use this kind of data anywhere else in the book. The goal here is to show you the basics of what you need to do if you ever encounter this kind of data in real life. And then we’ll all agree never to speak of it again. To start with, let’s talk about the date. As it happens, modern operating systems are very good at keeping track of the time and date, and can even handle all those annoying timezone issues and daylight savings pretty well. So R takes the quite sensible view that it can just ask the operating system what the date is. We can pull the date using the `Sys.Date()` function:
```
today <- Sys.Date() # ask the operating system for the date
print(today) # display the date
```
`## [1] "2019-01-11"` Okay, that seems straightforward. But, it does rather look like `today` is just a character string, doesn’t it? That would be a problem, because dates really do have a numeric character to them, and it would be nice to be able to do basic addition and subtraction to them. Well, fear not. If you type in `class(today)` , R will tell you that the class of the `today` variable is `"Date"` . What this means is that, hidden underneath this text string that prints out an actual date, R actually has a numeric representation.130 What that means is that you actually can add and subtract days. For instance, if we add 1 to `today` , R will print out the date for tomorrow: `today + 1` `## [1] "2019-01-12"`
Let’s see what happens when we add 365 days:
`today + 365` `## [1] "2020-01-11"` This is particularly handy if you forget whether the year in question is a leap year, since in that case you’d probably get it wrong doing the calculation in your head. R provides a number of functions for working with dates, but I don’t want to talk about them in any detail. I will, however, make passing mention of the `weekdays()` function which will tell you what day of the week a particular date corresponded to, which is extremely convenient in some situations: `weekdays( today )` `## [1] "Friday"` I’ll also point out that you can use the `as.Date()` function to convert various different kinds of data into dates. If the data happen to be strings formatted exactly according to the international standard notation (i.e., `yyyy-mm-dd` ) then the conversion is straightforward, because that’s the format that R expects to see by default. You can convert dates from other formats too, but it’s slightly trickier, and beyond the scope of this book. What about times? Well, times are even more annoying, so much so that I don’t intend to talk about them at all in this book, other than to point you in the direction of some vaguely useful things. R itself does provide you with some tools for handling time data, and in fact there are two separate classes of data that are used to represent times, known by the odd names `POSIXct` and `POSIXlt` . You can use these to work with times if you want to, but for most applications you would probably be better off downloading the `chron` package, which provides some much more user friendly tools for working with times and dates.
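Although the details are beyond the scope of this book, here’s a tiny sketch of what `as.Date()` conversions look like. The second line assumes the dates are written day/month/year, which is why it needs a `format` string:
```
as.Date( "2011-11-22" )                        # ISO format works by default
## [1] "2011-11-22"
as.Date( "22/11/2011", format = "%d/%m/%Y" )   # other formats need describing
## [1] "2011-11-22"
today - as.Date( "2011-11-22" )                # subtracting dates gives days
## Time difference of 2607 days
```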
## 7.12 Miscellaneous topics
To finish this chapter, I have a few topics to discuss that don’t really fit in with any of the other things in this chapter. They’re all kind of useful things to know about, but they are really just “odd topics” that don’t fit with the other examples. Here goes:
### 7.12.1 The problems with floating point arithmetic
If I’ve learned nothing else about transfinite arithmetic (and I haven’t) it’s that infinity is a tedious and inconvenient concept. Not only is it annoying and counterintuitive at times, but it has nasty practical consequences. As we were all taught in high school, there are some numbers that cannot be represented as a decimal number of finite length, nor can they be represented as any kind of fraction between two whole numbers; \(\sqrt{2}\), \(\pi\) and \(e\), for instance. In everyday life we mostly don’t care about this. I’m perfectly happy to approximate \(\pi\) as 3.14, quite frankly. Sure, this does produce some rounding errors from time to time, and if I’d used a more detailed approximation like 3.1415926535 I’d be less likely to run into those issues, but in all honesty I’ve never needed my calculations to be that precise. In other words, although our pencil and paper calculations cannot represent the number \(\pi\) exactly as a decimal number, we humans are smart enough to realise that we don’t care. Computers, unfortunately, are dumb … and you don’t have to dig too deep in order to run into some very weird issues that arise because they can’t represent numbers perfectly. Here is my favourite example:
`0.1 + 0.2 == 0.3` `## [1] FALSE`
Obviously, R has made a mistake here, because this is definitely the wrong answer. Your first thought might be that R is broken, and you might be considering switching to some other language. But you can reproduce the same error in dozens of different programming languages, so the issue isn’t specific to R. Your next thought might be that it’s something in the hardware, but you can get the same mistake on any machine. It’s something deeper than that.
The fundamental issue at hand is floating point arithmetic, which is a fancy way of saying that computers will always round a number to a fixed number of significant digits. The exact number of significant digits that the computer stores isn’t important to us:131 what matters is that whenever the number that the computer is trying to store is very long, you get rounding errors. That’s actually what’s happening with our example above. There are teeny tiny rounding errors that have appeared in the computer’s storage of the numbers, and these rounding errors have in turn caused the internal storage of `0.1 + 0.2` to be a tiny bit different from the internal storage of `0.3` . How big are these differences? Let’s ask R: `0.1 + 0.2 - 0.3` `## [1] 5.551115e-17` Very tiny indeed. No sane person would care about differences that small. But R is not a sane person, and the equality operator `==` is very literal minded. It returns a value of `TRUE` only when the two values that it is given are absolutely identical to each other. And in this case they are not. However, this only answers half of the question. The other half of the question is, why are we getting these rounding errors when we’re only using nice simple numbers like 0.1, 0.2 and 0.3? This seems a little counterintuitive. The answer is that, like most programming languages, R doesn’t store numbers using their decimal expansion (i.e., base 10: using digits 0, 1, 2 …, 9). We humans like to write our numbers in base 10 because we have 10 fingers. But computers don’t have fingers, they have transistors; and transistors are built to store 2 numbers not 10. So you can see where this is going: the internal storage of a number in R is based on its binary expansion (i.e., base 2: using digits 0 and 1). And unfortunately, here’s what the binary expansion of 0.1 looks like: \[
.1 \mbox{(decimal)} = .00011001100110011... \mbox{(binary)}
\] and the pattern continues forever. In other words, from the perspective of your computer, which likes to encode numbers in binary,132 0.1 is not a simple number at all. To a computer, 0.1 is actually an infinitely long binary number! As a consequence, the computer can make minor errors when doing calculations here.
With any luck you now understand the problem, which ultimately comes down to the twin facts that (1) we usually think in decimal numbers and computers usually compute with binary numbers, and (2) computers are finite machines and can’t store infinitely long numbers. The only questions that remain are when you should care and what you should do about it. Thankfully, you don’t have to care very often: because the rounding errors are small, the only practical situation that I’ve seen this issue arise in is when you want to test whether two numbers are exactly identical (e.g., is someone’s response time equal to exactly \(2 \times 0.33\) seconds?) This is pretty rare in real world data analysis, but just in case it does occur, it’s better to use a test that allows for a small tolerance. That is, if the difference between the two numbers is below a certain threshold value, we deem them to be equal for all practical purposes. For instance, you could do something like this, which asks whether the difference between the two numbers is less than a tolerance of \(10^{-10}\)
```
abs( 0.1 + 0.2 - 0.3 ) < 10^-10
```
`## [1] TRUE` To deal with this problem, there is a function called `all.equal()` that lets you test for equality but allows a small tolerance for rounding errors:
```
all.equal( 0.1 + 0.2, 0.3 )
```
`## [1] TRUE`
### 7.12.2 The recycling rule
There’s one thing that I haven’t mentioned about how vector arithmetic works in R, and that’s the recycling rule. The easiest way to explain it is to give a simple example. Suppose I have two vectors of different length, `x` and `y` , and I want to add them together. It’s not obvious what that actually means, so let’s have a look at what R does:
```
x <- c( 1,1,1,1,1,1 ) # x is length 6
y <- c( 0,1 ) # y is length 2
x + y # now add them:
```
`## [1] 1 2 1 2 1 2` As you can see from looking at this output, what R has done is “recycle” the value of the shorter vector (in this case `y` ) several times. That is, the first element of `x` is added to the first element of `y` , and the second element of `x` is added to the second element of `y` . However, when R reaches the third element of `x` there isn’t any corresponding element in `y` , so it returns to the beginning: thus, the third element of `x` is added to the first element of `y` . This process continues until R reaches the last element of `x` . And that’s all there is to it really. The same recycling rule also applies for subtraction, multiplication and division. The only other thing I should note is that, if the length of the longer vector isn’t an exact multiple of the length of the shorter one, R still does it, but also gives you a warning message:
```
x <- c( 1,1,1,1,1 ) # x is length 5
y <- c( 0,1 ) # y is length 2
x + y # now add them:
```
```
## Warning in x + y: longer object length is not a multiple of shorter object
## length
```
`## [1] 1 2 1 2 1`
### 7.12.3 An introduction to environments
In this section I want to ask a slightly different question: what is the workspace exactly? This question seems simple, but there’s a fair bit to it. This section can be skipped if you’re not really interested in the technical details. In the description I gave earlier, I talked about the workspace as an abstract location in which R variables are stored. That’s basically true, but it hides a couple of key details. For example, any time you have R open, it has to store lots of things in the computer’s memory, not just your variables. For example, the `who()` function that I wrote has to be stored in memory somewhere, right? If it weren’t I wouldn’t be able to use it. That’s pretty obvious. But equally obviously it’s not in the workspace either, otherwise you should have seen it! Here’s what’s happening. R needs to keep track of a lot of different things, so what it does is organise them into environments, each of which can contain lots of different variables and functions. Your workspace is one such environment. Every package that you have loaded is another environment. And every time you call a function, R briefly creates a temporary environment in which the function itself can work, which is then deleted after the calculations are complete. So, when I type in `search()` at the command line `search()`
```
## [1] ".GlobalEnv" "package:BayesFactor" "package:Matrix"
## [4] "package:coda" "package:effects" "package:lmtest"
## [7] "package:zoo" "package:gplots" "package:sciplot"
## [10] "package:HistData" "package:MASS" "package:lsr"
## [13] "package:psych" "package:car" "package:carData"
## [16] "tools:rstudio" "package:stats" "package:graphics"
## [19] "package:grDevices" "package:utils" "package:datasets"
## [22] "package:methods" "Autoloads" "package:base"
```
what I’m actually looking at is a sequence of environments. The first one, `".GlobalEnv"` is the technically-correct name for your workspace. No-one really calls it that: it’s either called the workspace or the global environment. And so when you type in `objects()` or `who()` what you’re really doing is listing the contents of `".GlobalEnv"` . But there’s no reason why we can’t look up the contents of these other environments using the `objects()` function (currently `who()` doesn’t support this). You just have to be a bit more explicit in your command. If I wanted to find out what is in the `package:stats` environment (i.e., the environment into which the contents of the `stats` package have been loaded), here’s what I’d get
```
head(objects("package:stats"))
```
```
## [1] "acf" "acf2AR" "add.scope" "add1" "addmargins"
## [6] "aggregate"
```
where this time I’ve used `head()` to hide a lot of output because the `stats` package contains about 500 functions. In fact, you can actually use the environment panel in Rstudio to browse any of your loaded packages (just click on the text that says “Global Environment” and you’ll see a dropdown menu like the one shown in Figure 7.2). The key thing to understand, then, is that you can access any of the R variables and functions that are stored in one of these environments, precisely because those are the environments that you have loaded!133
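If you want to see the “temporary environment” idea in action, here’s a tiny sketch (the function and variable names are made up):
```
f <- function() {
  tmp.value <- 10      # created inside the function's temporary environment
  tmp.value * 2
}
f()                    # the function works just fine...
## [1] 20
exists( "tmp.value" )  # ...but the variable never appears in your workspace
## [1] FALSE
```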
### 7.12.4 Attaching a data frame
The last thing I want to mention in this section is the `attach()` function, which you often see referred to in introductory R books. Whenever it is introduced, the author of the book usually mentions that the `attach()` function can be used to “attach” the data frame to the search path, so you don’t have to use the `$` operator. That is, if I use the command `attach(df)` to attach my data frame, I no longer need to type `df$variable` , and instead I can just type `variable` . This is true as far as it goes, but it’s very misleading and novice users often get led astray by this description, because it hides a lot of critical details. Here is the very abridged description: when you use the `attach()` function, what R does is create an entirely new environment in the search path, just like when you load a package. Then, what it does is copy all of the variables in your data frame into this new environment. When you do this, however, you end up with two completely different versions of all your variables: one in the original data frame, and one in the new environment. Whenever you make a statement like `df$variable` you’re working with the variable inside the data frame; but when you just type `variable` you’re working with the copy in the new environment. And here’s the part that really upsets new users: changes to one version are not reflected in the other version. As a consequence, it’s really easy for R to end up with different values stored in the two different locations, and you end up really confused as a result. To be fair to the writers of the `attach()` function, the help documentation does actually state all this quite explicitly, and they even give some examples of how this can cause confusion at the bottom of the help page. And I can actually see how it can be very useful to create copies of your data in a separate location (e.g., it lets you make all kinds of modifications and deletions to the data without having to touch the original data frame). However, I don’t think it’s helpful for new users, since it means you have to be very careful to keep track of which copy you’re talking about. As a consequence of all this, for the purpose of this book I’ve decided not to use the `attach()` function. It’s something that you can investigate yourself once you’re feeling a little more confident with R, but I won’t do it here.
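That said, if you’re curious what the trap actually looks like, here’s a minimal sketch (the data frame and variable names are made up):
```
tiny.df <- data.frame( variable = c(1, 2, 3) )  # a little data frame
attach( tiny.df )     # copies its variables into a new environment
variable <- 100       # this changes a copy, not the data frame itself...
tiny.df$variable      # ...so the original data frame is untouched
## [1] 1 2 3
detach( tiny.df )     # remove the attached environment when you're done
```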
## 7.13 Summary
Obviously, there’s no real coherence to this chapter. It’s just a grab bag of topics and tricks that can be handy to know about, so the best wrap up I can give here is just to suggest that you skim back over the section headings to remind yourself of what’s here.
There are a number of books out there that extend this discussion. A couple of my favourites are Spector (2008) “Data Manipulation with R” and Teetor (2011) “R Cookbook”.
The quote comes from Home is the Hangman, published in 1975.↩
*
As usual, you can assign this output to a variable. If you type
```
speaker.freq <- table(speaker)
```
at the command prompt R will store the table as a variable. If you then type `class(speaker.freq)` you’ll see that the output is actually of class `table` . The key thing to note about a table object is that it’s basically a matrix (see Section 7.11.1).↩ *
It’s worth noting that there’s also a more powerful function called
`recode()` in the `car` package that I won’t discuss in this book but is worth looking into if you’re looking for a bit more flexibility.↩ *
If you’ve read further into the book, and are re-reading this section, then a good example of this would be someone choosing to do an ANOVA using
`age.group3` as the grouping variable, instead of running a regression using `age` as a predictor. There are sometimes good reasons for doing this: for instance, if the relationship between `age` and your outcome variable is highly non-linear, and you aren’t comfortable with trying to run non-linear regression! However, unless you really do have a good rationale for doing this, it’s best not to. It tends to introduce all sorts of other problems (e.g., the data will probably violate the normality assumption), and you can lose a lot of power.↩ *
The real answer is 0: $10 for a sandwich is a total ripoff so I should go next door and buy noodles.↩
*
Again, I doubt that’s the right “real world” answer. I suspect that most sandwich shops won’t allow you to pay off your debts to them in sandwiches. But you get the idea.↩
*
Actually, that’s a bit of a lie: the
`log()` function is more flexible than that, and can be used to calculate logarithms in any base. The `log()` function has a `base` argument that you can specify, which has a default value of \(e\). Thus `log10(1000)` is actually equivalent to
```
log(x = 1000, base = 10)
```
It’s also worth checking out the
`match()` function↩ *
It also works on data frames if you ever feel the need to import all of your variables from the data frame into the workspace. This can be useful at times, though it’s not a good idea if you have large data sets or if you’re working with multiple data sets at once. In particular, if you do this, never forget that you now have two copies of all your variables, one in the workspace and another in the data frame.↩
*
You can do this yourself using the
`make.names()` function. In fact, this is itself a handy thing to know about. For example, if you want to convert the names of the variables in the `speech.by.char` list into valid R variable names, you could use a command like this:
```
names(speech.by.char) <- make.names(names(speech.by.char))
```
. However, I won’t go into details here.↩ *
Conveniently, if you type
`rownames(df) <- NULL` R will renumber all the rows from scratch. For the `df` data frame, the labels that currently run from 7 to 10 will be changed to go from 1 to 4.↩ *
Actually, you can make the
`subset()` function behave this way by using the optional `drop` argument, but by default `subset()` does not drop, which is probably more sensible and more intuitive to novice users.↩ *
Specifically, recursive indexing, a handy tool in some contexts but not something that I want to discuss here.↩
*
Note for advanced users: both of these functions are just wrappers to the
`matrix()` function, which is pretty flexible in terms of the ability to convert vectors into matrices. Also, while I’m on this topic, I’ll briefly mention the fact that if you’re a Matlab user and looking for an equivalent of Matlab’s `repmat()` function, I’d suggest checking out the `matlab` package which contains R versions of a lot of handy Matlab functions.↩ *
The function you need for that is called
`as.data.frame()` .↩ *
In truth, I suspect that most of the cases when you can sensibly flip a data frame occur when all of the original variables are measurements of the same type (e.g., all variables are response times), and if so you could easily have chosen to encode your data as a matrix instead of as a data frame. But since people do sometimes prefer to work with data frames, I’ve written the
`tFrame()` function for the sake of convenience. I don’t really think it’s something that is needed very often.↩ *
This limitation is deliberate, by the way: if you’re getting to the point where you want to do something more complicated, you should probably start learning how to use
`reshape()` , `cast()` and `melt()` or some of the other more advanced tools. The `wideToLong()` and `longToWide()` functions are included only to help you out when you’re first starting to use R.↩ *
To be honest, it does bother me a little that the default value of
`sep` is a space. Normally when I want to paste strings together I don’t want any separator character, so I’d prefer it if the default were `sep=""` . To that end, it’s worth noting that there’s also a `paste0()` function, which is identical to `paste()` except that it always assumes that `sep=""` . Type `?paste` for more information about this.↩ *
Note that you can capture the output from
`cat()` if you want to, but you have to be sneaky and use the `capture.output()` function. For example, the command
```
x <- capture.output(cat(hw,ng))
```
would work just fine.↩ *
Sigh. For advanced users: R actually supports two different ways of specifying regular expressions. One is the POSIX standard, the other is to use Perl-style regular expressions. The default is generally POSIX. If you understand regular expressions, that probably made sense to you. If not, don’t worry. It’s not important.↩
*
I thank <NAME> for this example.↩
*
If you’re lucky.↩
*
You can also use the
`matrix()` command itself, but I think the “binding” approach is a little more intuitive.↩ *
This has some interesting implications for how matrix algebra is implemented in R (which I’ll admit I initially found odd), but that’s a little beyond the scope of this book. However, since there will be a small proportion of readers that do care, I’ll quickly outline the basic thing you need to get used to: when multiplying a matrix by a vector (or one-dimensional array) using the
`%*%` operator R will attempt to interpret the vector (or 1D array) as either a row-vector or column-vector, depending on whichever one makes the multiplication work. That is, suppose \(M\) is the \(2\times3\) matrix, and \(v\) is a \(1 \times 3\) row vector. It is impossible to multiply \(Mv\), since the dimensions don’t conform, but you can multiply by the corresponding column vector, \(Mv^t\). So, if I set `v <- M[2,]` and then try to calculate `M %*% v` , which you’d think would fail, it actually works because R treats the one dimensional array as if it were a column vector for the purposes of matrix multiplication. Note that if both objects are one dimensional arrays/vectors, this leads to ambiguity since \(vv^t\) (inner product) and \(v^tv\) (outer product) yield different answers. In this situation, the `%*%` operator returns the inner product not the outer product. To understand all the details, check out the help documentation.↩ *
I should note that if you type
`class(xtab.3d)` you’ll discover that this is a `"table"` object rather than an `"array"` object. However, this labelling is only skin deep. The underlying data structure here is actually an array. Advanced users may wish to check this using the command
```
class(unclass(xtab.3d))
```
, but it’s not important for our purposes. All I really want to do in this section is show you what the output looks like when you encounter a 3D array.↩ *
Date objects are coded as the number of days that have passed since January 1, 1970.↩
*
For advanced users: type
`?double` for more information.↩ *
Or at least, that’s the default. If all your numbers are integers (whole numbers), then you can explicitly tell R to store them as integers by adding an
`L` suffix at the end of the number. That is, an assignment like `x <- 2L` tells R to assign `x` a value of 2, and to store it as an integer rather than as a binary expansion. Type `?integer` for more details.↩ *
For advanced users: that’s a little over simplistic in two respects. First, it’s a terribly imprecise way of talking about scoping. Second, it might give you the impression that all the variables in question are actually loaded into memory. That’s not quite true, since that would be very wasteful of memory. Instead R has a “lazy loading” mechanism, in which what R actually does is create a “promise” to load those objects if they’re actually needed. For details, check out the
`delayedAssign()` function.↩
# Chapter 8 Basic programming
Machine dreams hold a special vertigo.
–<NAME>
Up to this point in the book I’ve tried hard to avoid using the word “programming” too much because – at least in my experience – it’s a word that can cause a lot of fear. For one reason or another, programming (like mathematics and statistics) is often perceived by people on the “outside” as a black art, a magical skill that can be learned only by some kind of super-nerd. I think this is a shame. It’s certainly true that advanced programming is a very specialised skill: several different skills actually, since there’s quite a lot of different kinds of programming out there. However, the basics of programming aren’t all that hard, and you can accomplish a lot of very impressive things just using those basics.
With that in mind, the goal of this chapter is to discuss a few basic programming concepts and how to apply them in R. However, before I do, I want to make one further attempt to point out just how non-magical programming really is, via one very simple observation: you already know how to do it. Stripped to its essentials, programming is nothing more (and nothing less) than the process of writing out a bunch of instructions that a computer can understand. To phrase this slightly differently, when you write a computer program, you need to write it in a programming language that the computer knows how to interpret. R is one such language. Although I’ve been having you type all your commands at the command prompt, and all the commands in this book so far have been shown as if that’s what I were doing, it’s also quite possible (and as you’ll see shortly, shockingly easy) to write a program using these R commands. In other words, if this is the first time reading this book, then you’re only one short chapter away from being able to legitimately claim that you can program in R, albeit at a beginner’s level.
## 8.1 Scripts
Computer programs come in quite a few different forms: the kind of program that we’re mostly interested in from the perspective of everyday data analysis using R is known as a script. The idea behind a script is that, instead of typing your commands into the R console one at a time, instead you write them all in a text file. Then, once you’ve finished writing them and saved the text file, you can get R to execute all the commands in your file by using the `source()` function. In a moment I’ll show you exactly how this is done, but first I’d better explain why you should care.
### 8.1.1 Why use scripts?
Before discussing scripting and programming concepts in any more detail, it’s worth stopping to ask why you should bother. After all, if you look at the R commands that I’ve used everywhere else this book, you’ll notice that they’re all formatted as if I were typing them at the command line. Outside this chapter you won’t actually see any scripts. Do not be fooled by this. The reason that I’ve done it that way is purely for pedagogical reasons. My goal in this book is to teach statistics and to teach R. To that end, what I’ve needed to do is chop everything up into tiny little slices: each section tends to focus on one kind of statistical concept, and only a smallish number of R functions. As much as possible, I want you to see what each function does in isolation, one command at a time. By forcing myself to write everything as if it were being typed at the command line, it imposes a kind of discipline on me: it prevents me from piecing together lots of commands into one big script. From a teaching (and learning) perspective I think that’s the right thing to do… but from a data analysis perspective, it is not. When you start analysing real world data sets, you will rapidly find yourself needing to write scripts.
To understand why scripts are so very useful, it may be helpful to consider the drawbacks to typing commands directly at the command prompt. The approach that we’ve been adopting so far, in which you type commands one at a time, and R sits there patiently in between commands, is referred to as the interactive style. Doing your data analysis this way is rather like having a conversation … a very annoying conversation between you and your data set, in which you and the data aren’t directly speaking to each other, and so you have to rely on R to pass messages back and forth. This approach makes a lot of sense when you’re just trying out a few ideas: maybe you’re trying to figure out what analyses are sensible for your data, or maybe you’re just trying to remember how the various R functions work, so you’re just typing in a few commands until you get the one you want. In other words, the interactive style is very useful as a tool for exploring your data. However, it has a number of drawbacks:
* It’s hard to save your work effectively. You can save the workspace, so that later on you can load any variables you created. You can save your plots as images. And you can even save the history or copy the contents of the R console to a file. Taken together, all these things let you create a reasonably decent record of what you did. But it does leave a lot to be desired. It seems like you ought to be able to save a single file that R could use (in conjunction with your raw data files) and reproduce everything (or at least, everything interesting) that you did during your data analysis.
* It’s annoying to have to go back to the beginning when you make a mistake. Suppose you’ve just spent the last two hours typing in commands. Over the course of this time you’ve created lots of new variables and run lots of analyses. Then suddenly you realise that there was a nasty typo in the first command you typed, so all of your later numbers are wrong. Now you have to fix that first command, and then spend another hour or so combing through the R history to try and recreate what you did.
* You can’t leave notes for yourself. Sure, you can scribble down some notes on a piece of paper, or even save a Word document that summarises what you did. But what you really want to be able to do is write down an English translation of your R commands, preferably right “next to” the commands themselves. That way, you can look back at what you’ve done and actually remember what you were doing. In the simple exercises we’ve engaged in so far, it hasn’t been all that hard to remember what you were doing or why you were doing it, but only because everything we’ve done could be done using only a few commands, and you’ve never been asked to reproduce your analysis six months after you originally did it! When your data analysis starts involving hundreds of variables, and requires quite complicated commands to work, then you really, really need to leave yourself some notes to explain your analysis to, well, yourself.
* It’s nearly impossible to reuse your analyses later, or adapt them to similar problems. Suppose that, sometime in January, you are handed a difficult data analysis problem. After working on it for ages, you figure out some really clever tricks that can be used to solve it. Then, in September, you get handed a really similar problem. You can sort of remember what you did, but not very well. You’d like to have a clean record of what you did last time, how you did it, and why you did it the way you did. Something like that would really help you solve this new problem.
* It’s hard to do anything except the basics. There’s a nasty side effect of these problems. Typos are inevitable. Even the best data analyst in the world makes a lot of mistakes. So the chance that you’ll be able to string together dozens of correct R commands in a row is very small. So unless you have some way around this problem, you’ll never really be able to do anything other than simple analyses.
* It’s difficult to share your work with other people. Because you don’t have this nice clean record of what R commands were involved in your analysis, it’s not easy to share your work with other people. Sure, you can send them all the data files you’ve saved, and your history and console logs, and even the little notes you wrote to yourself, but odds are pretty good that no-one else will really understand what’s going on (trust me on this: I’ve been handed lots of random bits of output from people who’ve been analysing their data, and it makes very little sense unless you’ve got the original person who did the work sitting right next to you explaining what you’re looking at)
Ideally, what you’d like to be able to do is something like this… Suppose you start out with a data set `myrawdata.csv` . What you want is a single document – let’s call it `mydataanalysis.R` – that stores all of the commands that you’ve used in order to do your data analysis. Kind of similar to the R history but much more focused. It would only include the commands that you want to keep for later. Then, later on, instead of typing in all those commands again, you’d just tell R to run all of the commands that are stored in `mydataanalysis.R` . Also, in order to help you make sense of all those commands, what you’d want is the ability to add some notes or comments within the file, so that anyone reading the document for themselves would be able to understand what each of the commands actually does. But these comments wouldn’t get in the way: when you try to get R to run `mydataanalysis.R` it would be smart enough to recognise that these comments are for the benefit of humans, and so it would ignore them. Later on you could tweak a few of the commands inside the file (maybe in a new file called `mynewdataanalysis.R` ) so that you can adapt an old analysis to be able to handle a new problem. And you could email your friends and colleagues a copy of this file so that they can reproduce your analysis themselves.
In other words, what you want is a script.
### 8.1.2 Our first script
Okay then. Since scripts are so terribly awesome, let’s write one. To do this, open up a simple text editing program, like TextEdit (on a Mac) or Notepad (on Windows). Don’t use a fancy word processing program like Microsoft Word or OpenOffice: use the simplest program you can find. Open a new text document, and type some R commands, hitting enter after each command. Let’s try using `x <- "hello world"` and `print(x)` as our commands. Then save the document as `hello.R` , and remember to save it as a plain text file: don’t save it as a word document or a rich text file. Just a boring old plain text file. Also, when it asks you where to save the file, save it to whatever folder you’re using as your working directory in R. At this point, you should be looking at something like Figure 8.1. And if so, you have now successfully written your first R program. Because I don’t want to take screenshots for every single script, I’m going to present scripts using extracts formatted as follows:
```
## --- hello.R
x <- "hello world"
print(x)
```
The line at the top is the filename, and not part of the script itself. Below that, you can see the two R commands that make up the script itself. A lot of text editors (including the one built into Rstudio that I’ll show you in a moment) will display line numbers next to each command; you don’t actually type these into your script, but they’re a very useful convention that allows you to say things like “line 1 of the script creates a new variable, and line 2 prints it out”.
So how do we run the script? Assuming that the `hello.R` file has been saved to your working directory, then you can run the script using the following command: `source( "hello.R" )` If the script file is saved in a different directory, then you need to specify the path to the file, in exactly the same way that you would have to when loading a data file using `load()` . In any case, when you type this command, R opens up the script file: it then reads each command in the file in the same order that they appear in the file, and executes those commands in that order. The simple script that I’ve shown above contains two commands. The first one creates a variable `x` and the second one prints it on screen. So, when we run the script, this is what we see on screen:
```
source(file.path(projecthome,"scripts","hello.R"))
```
`## [1] "hello world"` If we inspect the workspace using a command like `who()` or `objects()` , we discover that R has created the new variable `x` within the workspace, and not surprisingly `x` is a character string containing the text `"hello world"` . And just like that, you’ve written your first program R. It really is that simple.
### 8.1.3 Using Rstudio to write scripts
In the example above I assumed that you were writing your scripts using a simple text editor. However, it’s usually more convenient to use a text editor that is specifically designed to help you write scripts. There’s a lot of these out there, and experienced programmers will all have their own personal favourites. For our purposes, however, we can just use the one built into Rstudio. To create a new script file in Rstudio, go to the “File” menu, select the “New” option, and then click on “R script”. This will open a new window within the “source” panel. Then you can type the commands you want (or code as it is generally called when you’re typing the commands into a script file) and save it when you’re done. The nice thing about using Rstudio to do this is that it automatically changes the colour of the text to indicate which parts of the code are comments and which parts are actual R commands (these colours are called syntax highlighting, but they’re not actually part of the file – it’s just Rstudio trying to be helpful). To see an example of this, let’s open up our `hello.R` script in Rstudio. To do this, go to the “File” menu again, and select “Open…”. Once you’ve opened the file, you should be looking at something like Figure 8.2. As you can see (if you’re looking at this book in colour) the character string “hello world” is highlighted in green. Using Rstudio for your text editor is convenient for other reasons too. Notice in the top right hand corner of Figure 8.2 there’s a little button that reads “Source”? If you click on that, Rstudio will construct the relevant `source()` command for you, and send it straight to the R console. So you don’t even have to type in the `source()` command, which actually I think is a great thing, because it really bugs me having to type all those extra keystrokes every time I want to run my script. Anyway, Rstudio provides several other convenient little tools to help make scripting easier, but I won’t discuss them here.135
### 8.1.4 Commenting your script
When writing up your data analysis as a script, one thing that is generally a good idea is to include a lot of comments in the code. That way, if someone else tries to read it (or if you come back to it several days, weeks, months or years later) they can figure out what’s going on. As a beginner, I think it’s especially useful to comment thoroughly, partly because it gets you into the habit of commenting the code, and partly because the simple act of typing in an explanation of what the code does will help you keep it clear in your own mind what you’re trying to achieve. To illustrate this idea, consider the following script:
```
## --- itngscript.R
# A script to analyse nightgarden.Rdata
# author: <NAME>
# date: 22/11/2011

# Load the data, and tell the user that this is
# what we're doing:
cat( "loading data from nightgarden.Rdata...\n" )
load( file.path(projecthome,"data","nightgarden.Rdata") )

# Create a cross tabulation and print it out:
cat( "tabulating data...\n" )
itng.table <- table( speaker, utterance )
print( itng.table )
```
You’ll notice that I’ve gone a bit overboard with my commenting: at the top of the script I’ve explained the purpose of the script, who wrote it, and when it was written. Then, throughout the script file itself I’ve added a lot of comments explaining what each section of the code actually does. In real life people don’t tend to comment this thoroughly, but the basic idea is a very good one: you really do want your script to explain itself. Nevertheless, as you’d expect R completely ignores all of the commented parts. When we run this script, this is what we see on screen:
```
## --- itngscript.R
# A script to analyse nightgarden.Rdata
# author: <NAME>
# date: 22/11/2011
```
```
## loading data from nightgarden.Rdata...
```
```
load(file.path(projecthome,"data","nightgarden.Rdata"))
# Create a cross tabulation and print it out:
cat( "tabulating data...\n" )
```
```
## tabulating data...
```
```
itng.table <- table( speaker, utterance )
print( itng.table )
```
Even here, notice that the script announces its behaviour. The first two lines of the output tell us a lot about what the script is actually doing behind the scenes (the code to do this corresponds to the two `cat()` commands on lines 8 and 12 of the script). It's usually a pretty good idea to do this, since it helps ensure that the output makes sense when the script is executed.
### 8.1.5 Differences between scripts and the command line
For the most part, commands that you insert into a script behave in exactly the same way as they would if you typed the same thing in at the command line. The one major exception to this is that if you want a variable to be printed on screen, you need to explicitly tell R to print it. You can’t just type the name of the variable. For example, our original `hello.R` script produced visible output. The following script does not:
```
## --- silenthello.R
x <- "hello world"
x
```
It does still create the variable `x` when you `source()` the script, but it won't print anything on screen. However, apart from the fact that scripts don't use "auto-printing" as it's called, there aren't a lot of differences in the underlying mechanics. There are a few stylistic differences though. For instance, if you want to load a package at the command line, you would generally use the `library()` function. If you want to do it from a script, it's conventional to use `require()` instead. The two commands are basically identical, the only difference being that if the package doesn't exist, `require()` produces a warning whereas `library()` gives you an error. Stylistically, what this means is that if the `require()` command fails in your script, R will boldly continue on and try to execute the rest of the script. Often that's what you'd like to see happen, so it's better to use `require()`. Clearly, however, you can get by just fine using the `library()` command for everyday usage.
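To make the distinction concrete, here's a minimal sketch of what the top of a script might look like; I'm using the `lsr` package from this book purely as an example:

```
## --- a sketch: loading a package at the top of a script
require( lsr )   # if the lsr package isn't installed this produces a warning
                 # and the script keeps going; library( lsr ) would instead
                 # stop the script with an error
```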
### 8.1.6 Done!
At this point, you’ve learned the basics of scripting. You are now officially allowed to say that you can program in R, though you probably shouldn’t say it too loudly. There’s a lot more to learn, but nevertheless, if you can write scripts like these then what you are doing is in fact basic programming. The rest of this chapter is devoted to introducing some of the key commands that you need in order to make your programs more powerful; and to help you get used to thinking in terms of scripts, for the rest of this chapter I’ll write up most of my extracts as scripts.
## 8.2 Loops
The description I gave earlier for how a script works was a tiny bit of a lie. Specifically, it’s not necessarily the case that R starts at the top of the file and runs straight through to the end of the file. For all the scripts that we’ve seen so far that’s exactly what happens, and unless you insert some commands to explicitly alter how the script runs, that is what will always happen. However, you actually have quite a lot of flexibility in this respect. Depending on how you write the script, you can have R repeat several commands, or skip over different commands, and so on. This topic is referred to as flow control, and the first concept to discuss in this respect is the idea of a loop. The basic idea is very simple: a loop is a block of code (i.e., a sequence of commands) that R will execute over and over again until some termination criterion is met. Looping is a very powerful idea. There are three different ways to construct a loop in R, based on the `while` , `for` and `repeat` functions. I’ll only discuss the first two in this book.
### 8.2.1 The `while` loop

A `while` loop is a simple thing. The basic format of the loop looks like this:
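```
while ( CONDITION ) {
STATEMENT1
STATEMENT2
ETC
}
```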
The code corresponding to CONDITION needs to produce a logical value, either `TRUE` or `FALSE` . Whenever R encounters a `while` statement, it checks to see if the CONDITION is `TRUE` . If it is, then R goes on to execute all of the commands inside the curly brackets, proceeding from top to bottom as usual. However, when it gets to the bottom of those statements, it moves back up to the `while` statement. Then, like the mindless automaton it is, it checks to see if the CONDITION is `TRUE` . If it is, then R goes on to execute all … well, you get the idea. This continues endlessly until at some point the CONDITION turns out to be `FALSE` . Once that happens, R jumps to the bottom of the loop (i.e., to the `}` character), and then continues on with whatever commands appear next in the script. To start with, let’s keep things simple, and use a `while` loop to calculate the smallest multiple of 17 that is greater than or equal to 1000. This is a very silly example since you can actually calculate it using simple arithmetic operations, but the point here isn’t to do something novel. The point is to show how to write a `while` loop. Here’s the script:
```
## --- whileexample.R
x <- 0
while ( x < 1000 ) {
x <- x + 17
}
print( x )
```
When we run this script, R starts at the top and creates a new variable called `x` and assigns it a value of 0. It then moves down to the loop, and "notices" that the condition here is `x < 1000` . Since the current value of `x` is zero, the condition is true, so it enters the body of the loop (inside the curly braces). There's only one command here136 which instructs R to increase the value of `x` by 17. R then returns to the top of the loop, and rechecks the condition. The value of `x` is now 17, but that's still less than 1000, so the loop continues. This cycle will continue for a total of 59 iterations, until finally `x` reaches a value of 1003 (i.e., \(59 \times 17 = 1003\)). At this point, the loop stops, and R finally reaches the `print()` statement at the end of the script, prints out the value of `x` on screen, and then halts. Let's watch:
```
source(file.path(projecthome,"scripts","whileexample.R"))
```
`## [1] 1003`
Truly fascinating stuff.
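As an aside, the "simple arithmetic" alternative mentioned above might look something like this:

```
# the smallest multiple of 17 that is at least 1000, computed directly:
17 * ceiling( 1000 / 17 )
```
`## [1] 1003`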
### 8.2.2 The `for` loop

The `for` loop is also pretty simple, though not quite as simple as the `while` loop. The basic format of this loop goes like this:
```
for ( VAR in VECTOR ) {
STATEMENT1
STATEMENT2
ETC
}
```
In a `for` loop, R runs a fixed number of iterations. We have a VECTOR which has several elements, each one corresponding to a possible value of the variable VAR. In the first iteration of the loop, VAR is given a value corresponding to the first element of VECTOR; in the second iteration of the loop VAR gets a value corresponding to the second value in VECTOR; and so on. Once we’ve exhausted all of the values in VECTOR, the loop terminates and the flow of the program continues down the script.
Once again, let’s use some very simple examples. Firstly, here is a program that just prints out the word “hello” three times and then stops:
```
## --- forexample.R
for ( i in 1:3 ) {
print( "hello" )
}
```
This is the simplest example of a `for` loop. The vector of possible values for the `i` variable just corresponds to the numbers from 1 to 3. Not only that, the body of the loop doesn’t actually depend on `i` at all. Not surprisingly, here’s what happens when we run it:
```
source(file.path(projecthome,"scripts","forexample.R"))
```
```
## [1] "hello"
## [1] "hello"
## [1] "hello"
```
However, there’s nothing that stops you from using something non-numeric as the vector of possible values, as the following example illustrates. This time around, we’ll use a character vector to control our loop, which in this case will be a vector of `words` . And what we’ll do in the loop is get R to convert the word to upper case letters, calculate the length of the word, and print it out. Here’s the script:
```
## --- forexample2.R
# the words
words <- c("it","was","the","dirty","end","of","winter")
# loop over the words
for ( w in words ) {
  w.length <- nchar( w ) # calculate the number of letters
  W <- toupper( w ) # convert the word to upper case letters
  msg <- paste( W, "has", w.length, "letters" ) # a message to print
  print( msg ) # print it
}
```
And here’s the output:
```
source(file.path(projecthome,"scripts","forexample2.R"))
```
```
## [1] "IT has 2 letters"
## [1] "WAS has 3 letters"
## [1] "THE has 3 letters"
## [1] "DIRTY has 5 letters"
## [1] "END has 3 letters"
## [1] "OF has 2 letters"
## [1] "WINTER has 6 letters"
```
Again, pretty straightforward I hope.
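If you ever need the position of each word as well as the word itself, an equivalent (slightly more verbose) way to write the same loop is to iterate over the positions rather than the elements. A rough sketch, reusing the `words` vector from the script above:

```
## --- a sketch: looping over positions rather than elements
for ( i in seq_along( words ) ) {
  w <- words[i]    # pull out the i-th word
  print( paste( toupper( w ), "has", nchar( w ), "letters" ) )
}
```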
### 8.2.3 A more realistic example of a loop
To give you a sense of how you can use a loop in a more complex situation, let’s write a simple script to simulate the progression of a mortgage. Suppose we have a nice young couple who borrow $300000 from the bank, at an annual interest rate of 5%. The mortgage is a 30 year loan, so they need to pay it off within 360 months total. Our happy couple decide to set their monthly mortgage payment at $1600 per month. Will they pay off the loan in time or not? Only time will tell.137 Or, alternatively, we could simulate the whole process and get R to tell us. The script to run this is a fair bit more complicated.
```
## --- mortgage.R
# set up
month <- 0 # count the number of months
balance <- 300000 # initial mortgage balance
payments <- 1600 # monthly payments
interest <- 0.05 # 5% interest rate per year
total.paid <- 0 # track what you've paid the bank
# convert annual interest to a monthly multiplier
monthly.multiplier <- (1+interest) ^ (1/12)
# keep looping until the loan is paid off...
while ( balance > 0 ) {
# do the calculations for this month
month <- month + 1 # one more month
balance <- balance * monthly.multiplier # add the interest
balance <- balance - payments # make the payments
total.paid <- total.paid + payments # track the total paid
# print the results on screen
cat( "month", month, ": balance", round(balance), "\n")
} # end of loop
# print the total payments at the end
cat("total payments made", total.paid, "\n" )
```
To explain what's going on, let's go through it carefully. In the first block of code (under `# set up`) all we're doing is specifying all the variables that define the problem. The loan starts with a `balance` of $300,000 owed to the bank on `month` zero, and at that point in time the `total.paid` money is nothing. The couple is making monthly `payments` of $1600, at an annual `interest` rate of 5%. Next, we convert the annual percentage interest into a monthly multiplier. That is, the number that you have to multiply the current balance by each month in order to produce an annual interest rate of 5%. An annual interest rate of 5% implies that, if no payments were made over 12 months, the balance would end up being \(1.05\) times what it was originally, so the annual multiplier is \(1.05\). To calculate the monthly multiplier, we need to calculate the 12th root of 1.05 (i.e., raise 1.05 to the power of 1/12). We store this value as the `monthly.multiplier` variable, which as it happens corresponds to a value of about 1.004. All of which is a rather long winded way of saying that the annual interest rate of 5% corresponds to a monthly interest rate of about 0.4%. Anyway… all of that is really just setting the stage. It's not the interesting part of the script. The interesting part (such as it is) is the loop. The `while` statement tells R that it needs to keep looping until the `balance` reaches zero (or less, since it might be that the final payment of $1600 pushes the balance below zero). Then, inside the body of the loop, we have two different blocks of code. In the first bit, we do all the number crunching. Firstly, we increase the value of `month` by 1. Next, the bank charges the interest, so the `balance` goes up. Then, the couple makes their monthly payment and the `balance` goes down. Finally, we keep track of the total amount of money that the couple has paid so far, by adding the `payments` to the running tally. After having done all this number crunching, we tell R to issue the couple with a very terse monthly statement, which just indicates how many months they've been paying the loan and how much money they still owe the bank. Which is rather rude of us really. I've grown attached to this couple and I really feel they deserve better than that. But, that's banks for you. In any case, the key thing here is the tension between the increase in `balance` due to the interest and the decrease due to the payments. As long as the decrease is bigger, then the balance will eventually drop to zero and the loop will eventually terminate. If not, the loop will continue forever! This is actually very bad programming on my part: I really should have included something to force R to stop if this goes on too long. However, I haven't shown you how to evaluate "if" statements yet, so we'll just have to hope that the author of the book has rigged the example so that the code actually runs. Hm. I wonder what the odds of that are? Anyway, assuming that the loop does eventually terminate, there's one last line of code that prints out the total amount of money that the couple handed over to the bank over the lifetime of the loan.
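If you want to check the monthly multiplier for yourself, the calculation is easy enough to do at the command line:

```
(1 + 0.05) ^ (1/12)
```
`## [1] 1.004074`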
Now that I’ve explained everything in the script in tedious detail, let’s run it and see what happens:
```
source(file.path(projecthome,"scripts","mortgage.R"))
```
```
## month 1 : balance 299622
## month 2 : balance 299243
## month 3 : balance 298862
## month 4 : balance 298480
## month 5 : balance 298096
## month 6 : balance 297710
## month 7 : balance 297323
## month 8 : balance 296934
## month 9 : balance 296544
## month 10 : balance 296152
## month 11 : balance 295759
## month 12 : balance 295364
## month 13 : balance 294967
## month 14 : balance 294569
## month 15 : balance 294169
## month 16 : balance 293768
## month 17 : balance 293364
## month 18 : balance 292960
## month 19 : balance 292553
## month 20 : balance 292145
## month 21 : balance 291735
## month 22 : balance 291324
## month 23 : balance 290911
## month 24 : balance 290496
## month 25 : balance 290079
## month 26 : balance 289661
## month 27 : balance 289241
## month 28 : balance 288820
## month 29 : balance 288396
## month 30 : balance 287971
## month 31 : balance 287545
## month 32 : balance 287116
## month 33 : balance 286686
## month 34 : balance 286254
## month 35 : balance 285820
## month 36 : balance 285385
## month 37 : balance 284947
## month 38 : balance 284508
## month 39 : balance 284067
## month 40 : balance 283625
## month 41 : balance 283180
## month 42 : balance 282734
## month 43 : balance 282286
## month 44 : balance 281836
## month 45 : balance 281384
## month 46 : balance 280930
## month 47 : balance 280475
## month 48 : balance 280018
## month 49 : balance 279559
## month 50 : balance 279098
## month 51 : balance 278635
## month 52 : balance 278170
## month 53 : balance 277703
## month 54 : balance 277234
## month 55 : balance 276764
## month 56 : balance 276292
## month 57 : balance 275817
## month 58 : balance 275341
## month 59 : balance 274863
## month 60 : balance 274382
## month 61 : balance 273900
## month 62 : balance 273416
## month 63 : balance 272930
## month 64 : balance 272442
## month 65 : balance 271952
## month 66 : balance 271460
## month 67 : balance 270966
## month 68 : balance 270470
## month 69 : balance 269972
## month 70 : balance 269472
## month 71 : balance 268970
## month 72 : balance 268465
## month 73 : balance 267959
## month 74 : balance 267451
## month 75 : balance 266941
## month 76 : balance 266428
## month 77 : balance 265914
## month 78 : balance 265397
## month 79 : balance 264878
## month 80 : balance 264357
## month 81 : balance 263834
## month 82 : balance 263309
## month 83 : balance 262782
## month 84 : balance 262253
## month 85 : balance 261721
## month 86 : balance 261187
## month 87 : balance 260651
## month 88 : balance 260113
## month 89 : balance 259573
## month 90 : balance 259031
## month 91 : balance 258486
## month 92 : balance 257939
## month 93 : balance 257390
## month 94 : balance 256839
## month 95 : balance 256285
## month 96 : balance 255729
## month 97 : balance 255171
## month 98 : balance 254611
## month 99 : balance 254048
## month 100 : balance 253483
## month 101 : balance 252916
## month 102 : balance 252346
## month 103 : balance 251774
## month 104 : balance 251200
## month 105 : balance 250623
## month 106 : balance 250044
## month 107 : balance 249463
## month 108 : balance 248879
## month 109 : balance 248293
## month 110 : balance 247705
## month 111 : balance 247114
## month 112 : balance 246521
## month 113 : balance 245925
## month 114 : balance 245327
## month 115 : balance 244727
## month 116 : balance 244124
## month 117 : balance 243518
## month 118 : balance 242911
## month 119 : balance 242300
## month 120 : balance 241687
## month 121 : balance 241072
## month 122 : balance 240454
## month 123 : balance 239834
## month 124 : balance 239211
## month 125 : balance 238585
## month 126 : balance 237958
## month 127 : balance 237327
## month 128 : balance 236694
## month 129 : balance 236058
## month 130 : balance 235420
## month 131 : balance 234779
## month 132 : balance 234136
## month 133 : balance 233489
## month 134 : balance 232841
## month 135 : balance 232189
## month 136 : balance 231535
## month 137 : balance 230879
## month 138 : balance 230219
## month 139 : balance 229557
## month 140 : balance 228892
## month 141 : balance 228225
## month 142 : balance 227555
## month 143 : balance 226882
## month 144 : balance 226206
## month 145 : balance 225528
## month 146 : balance 224847
## month 147 : balance 224163
## month 148 : balance 223476
## month 149 : balance 222786
## month 150 : balance 222094
## month 151 : balance 221399
## month 152 : balance 220701
## month 153 : balance 220000
## month 154 : balance 219296
## month 155 : balance 218590
## month 156 : balance 217880
## month 157 : balance 217168
## month 158 : balance 216453
## month 159 : balance 215735
## month 160 : balance 215014
## month 161 : balance 214290
## month 162 : balance 213563
## month 163 : balance 212833
## month 164 : balance 212100
## month 165 : balance 211364
## month 166 : balance 210625
## month 167 : balance 209883
## month 168 : balance 209138
## month 169 : balance 208390
## month 170 : balance 207639
## month 171 : balance 206885
## month 172 : balance 206128
## month 173 : balance 205368
## month 174 : balance 204605
## month 175 : balance 203838
## month 176 : balance 203069
## month 177 : balance 202296
## month 178 : balance 201520
## month 179 : balance 200741
## month 180 : balance 199959
## month 181 : balance 199174
## month 182 : balance 198385
## month 183 : balance 197593
## month 184 : balance 196798
## month 185 : balance 196000
## month 186 : balance 195199
## month 187 : balance 194394
## month 188 : balance 193586
## month 189 : balance 192775
## month 190 : balance 191960
## month 191 : balance 191142
## month 192 : balance 190321
## month 193 : balance 189496
## month 194 : balance 188668
## month 195 : balance 187837
## month 196 : balance 187002
## month 197 : balance 186164
## month 198 : balance 185323
## month 199 : balance 184478
## month 200 : balance 183629
## month 201 : balance 182777
## month 202 : balance 181922
## month 203 : balance 181063
## month 204 : balance 180201
## month 205 : balance 179335
## month 206 : balance 178466
## month 207 : balance 177593
## month 208 : balance 176716
## month 209 : balance 175836
## month 210 : balance 174953
## month 211 : balance 174065
## month 212 : balance 173175
## month 213 : balance 172280
## month 214 : balance 171382
## month 215 : balance 170480
## month 216 : balance 169575
## month 217 : balance 168666
## month 218 : balance 167753
## month 219 : balance 166836
## month 220 : balance 165916
## month 221 : balance 164992
## month 222 : balance 164064
## month 223 : balance 163133
## month 224 : balance 162197
## month 225 : balance 161258
## month 226 : balance 160315
## month 227 : balance 159368
## month 228 : balance 158417
## month 229 : balance 157463
## month 230 : balance 156504
## month 231 : balance 155542
## month 232 : balance 154576
## month 233 : balance 153605
## month 234 : balance 152631
## month 235 : balance 151653
## month 236 : balance 150671
## month 237 : balance 149685
## month 238 : balance 148695
## month 239 : balance 147700
## month 240 : balance 146702
## month 241 : balance 145700
## month 242 : balance 144693
## month 243 : balance 143683
## month 244 : balance 142668
## month 245 : balance 141650
## month 246 : balance 140627
## month 247 : balance 139600
## month 248 : balance 138568
## month 249 : balance 137533
## month 250 : balance 136493
## month 251 : balance 135449
## month 252 : balance 134401
## month 253 : balance 133349
## month 254 : balance 132292
## month 255 : balance 131231
## month 256 : balance 130166
## month 257 : balance 129096
## month 258 : balance 128022
## month 259 : balance 126943
## month 260 : balance 125861
## month 261 : balance 124773
## month 262 : balance 123682
## month 263 : balance 122586
## month 264 : balance 121485
## month 265 : balance 120380
## month 266 : balance 119270
## month 267 : balance 118156
## month 268 : balance 117038
## month 269 : balance 115915
## month 270 : balance 114787
## month 271 : balance 113654
## month 272 : balance 112518
## month 273 : balance 111376
## month 274 : balance 110230
## month 275 : balance 109079
## month 276 : balance 107923
## month 277 : balance 106763
## month 278 : balance 105598
## month 279 : balance 104428
## month 280 : balance 103254
## month 281 : balance 102074
## month 282 : balance 100890
## month 283 : balance 99701
## month 284 : balance 98507
## month 285 : balance 97309
## month 286 : balance 96105
## month 287 : balance 94897
## month 288 : balance 93683
## month 289 : balance 92465
## month 290 : balance 91242
## month 291 : balance 90013
## month 292 : balance 88780
## month 293 : balance 87542
## month 294 : balance 86298
## month 295 : balance 85050
## month 296 : balance 83797
## month 297 : balance 82538
## month 298 : balance 81274
## month 299 : balance 80005
## month 300 : balance 78731
## month 301 : balance 77452
## month 302 : balance 76168
## month 303 : balance 74878
## month 304 : balance 73583
## month 305 : balance 72283
## month 306 : balance 70977
## month 307 : balance 69666
## month 308 : balance 68350
## month 309 : balance 67029
## month 310 : balance 65702
## month 311 : balance 64369
## month 312 : balance 63032
## month 313 : balance 61688
## month 314 : balance 60340
## month 315 : balance 58986
## month 316 : balance 57626
## month 317 : balance 56261
## month 318 : balance 54890
## month 319 : balance 53514
## month 320 : balance 52132
## month 321 : balance 50744
## month 322 : balance 49351
## month 323 : balance 47952
## month 324 : balance 46547
## month 325 : balance 45137
## month 326 : balance 43721
## month 327 : balance 42299
## month 328 : balance 40871
## month 329 : balance 39438
## month 330 : balance 37998
## month 331 : balance 36553
## month 332 : balance 35102
## month 333 : balance 33645
## month 334 : balance 32182
## month 335 : balance 30713
## month 336 : balance 29238
## month 337 : balance 27758
## month 338 : balance 26271
## month 339 : balance 24778
## month 340 : balance 23279
## month 341 : balance 21773
## month 342 : balance 20262
## month 343 : balance 18745
## month 344 : balance 17221
## month 345 : balance 15691
## month 346 : balance 14155
## month 347 : balance 12613
## month 348 : balance 11064
## month 349 : balance 9509
## month 350 : balance 7948
## month 351 : balance 6380
## month 352 : balance 4806
## month 353 : balance 3226
## month 354 : balance 1639
## month 355 : balance 46
## month 356 : balance -1554
## total payments made 569600
```
So our nice young couple have paid off their $300,000 loan in just 4 months shy of the 30 year term of their loan, at a bargain basement price of $568,046 (since 569600 - 1554 = 568046). A happy ending!
## 8.3 Conditional statements
A second kind of flow control that programming languages provide is the ability to evaluate conditional statements. Unlike loops, which can repeat over and over again, a conditional statement only executes once, but it can switch between different possible commands depending on a CONDITION that is specified by the programmer. The power of these commands is that they allow the program itself to make choices, and in particular, to make different choices depending on the context in which the program is run. The most prominent example of a conditional statement is the `if` statement, and the accompanying `else` statement. The basic format of an `if` statement in R is as follows:
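```
if ( CONDITION ) {
STATEMENT1
STATEMENT2
ETC
}
```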
And the execution of the statement is pretty straightforward. If the CONDITION is true, then R will execute the statements contained in the curly braces. If the CONDITION is false, then it does not. If you want to, you can extend the `if` statement to include an `else` statement as well, leading to the following syntax:
```
if ( CONDITION ) {
STATEMENT1
STATEMENT2
ETC
} else {
STATEMENT3
STATEMENT4
ETC
}
```
As you’d expect, the interpretation of this version is similar. If the CONDITION is true, then the contents of the first block of code (i.e., STATEMENT1, STATEMENT2, ETC) are executed; but if it is false, then the contents of the second block of code (i.e., STATEMENT3, STATEMENT4, ETC) are executed instead.
To give you a feel for how you can use `if` and `else` to do something useful, the example that I'll show you is a script that prints out a different message depending on what day of the week you run it. We can do this by making use of some of the tools that we discussed in Section 7.11.3. Here's the script:
```
## --- ifelseexample.R
# find out what day it is...
today <- Sys.Date() # pull the date from the system clock
day <- weekdays( today ) # what day of the week it is
# now make a choice depending on the day...
if ( day == "Monday" ) {
print( "I don't like Mondays" )
} else {
print( "I'm a happy little automaton" )
}
```
Since today happens to be a Friday, when I run the script here’s what happens:
```
source(file.path(projecthome,"scripts","ifelseexample.R"))
```
There are other ways of making conditional statements in R. In particular, the `ifelse()` and `switch()` functions can be very useful in different contexts (there's a quick sketch of `ifelse()` below). However, my main aim in this chapter is to briefly cover the very basics, so I'll move on.
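To give you a rough sense of the first of these, `ifelse()` is a vectorised cousin of the `if` statement: you hand it a whole vector of conditions and it returns one answer per element. A minimal sketch (the `kids.age` vector here is just made up for illustration):

```
kids.age <- c( 10, 18, 33 )
ifelse( kids.age >= 18, "adult", "minor" )
```
```
## [1] "minor" "adult" "adult"
```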
## 8.4 Writing functions
In this section I want to talk about functions again. Functions were introduced in Section 3.5, but you’ve learned a lot about R since then, so we can talk about them in more detail. In particular, I want to show you how to create your own. To stick with the same basic framework that I used to describe loops and conditionals, here’s the syntax that you use to create a function:
```
FNAME <- function ( ARG1, ARG2, ETC ) {
STATEMENT1
STATEMENT2
ETC
return( VALUE )
}
```
What this does is create a function with the name FNAME, which has arguments ARG1, ARG2 and so forth. Whenever the function is called, R executes the statements in the curly braces, and then outputs the contents of VALUE to the user. Note, however, that R does not execute the commands inside the function in the workspace. Instead, what it does is create a temporary local environment: all the internal statements in the body of the function are executed there, so they remain invisible to the user. Only the final results in the VALUE are returned to the workspace.
To give a simple example of this, let’s create a function called `quadruple()` which multiplies its inputs by four. In keeping with the approach taken in the rest of the chapter, I’ll use a script to do this:
```
## --- functionexample.R
quadruple <- function(x) {
y <- x*4
return(y)
}
```
When we run this script, as follows
```
source(file.path(projecthome,"scripts","functionexample.R"))
```
nothing appears to have happened, but there is a new object created in the workspace called `quadruple` . Not surprisingly, if we ask R to tell us what kind of object it is, it tells us that it is a function: `class( quadruple )` `## [1] "function"` And now that we've created the `quadruple()` function, we can call it just like any other function. And if I want to store the output as a variable, I can do this:
```
my.var <- quadruple(10)
print(my.var)
```
`## [1] 40` An important thing to recognise here is that the two internal variables that the `quadruple()` function makes use of, `x` and `y` , stay internal. That is, if we inspect the contents of the workspace,
```
library(lsr)
who()
```
```
## -- Name -- -- Class -- -- Size --
## afl.finalists factor 400
## afl.margins numeric 176
## afl2 data.frame 4296 x 2
## age numeric 11
## age.breaks numeric 4
## age.group factor 11
## age.group2 factor 11
## age.group3 factor 11
## age.labels character 3
## animals character 4
## balance numeric 1
## beers character 3
## cake.1 numeric 5
## cake.2 numeric 5
## cake.df data.frame 5 x 2
## cake.mat1 matrix 5 x 2
## cake.mat2 matrix 2 x 5
## cakes matrix 4 x 5
## cakes.flipped matrix 5 x 4
## choice data.frame 4 x 10
## choice.2 data.frame 16 x 6
## colour logical 1
## d.cor numeric 1
## dan.awake logical 10
## data data.frame 12 x 4
## day character 1
## describeImg list 0
## df data.frame 4 x 1
## drugs data.frame 10 x 8
## drugs.2 data.frame 30 x 5
## effort data.frame 10 x 2
## emphCol character 1
## emphColLight character 1
## emphGrey character 1
## eps logical 1
## fac factor 3
## fibonacci numeric 6
## Fibonacci numeric 7
## freq integer 17
## garden data.frame 5 x 3
## generateRLineTypes function
## generateRPointShapes function
## height numeric 1
## hw character 2
## i integer 1
## interest numeric 1
## is.MP.speaking logical 5
## itng data.frame 10 x 2
## itng.table table 3 x 4
## likert.centred numeric 10
## likert.ordinal ordered 10
## likert.raw numeric 10
## M matrix 2 x 3
## makka.pakka character 4
## monkey character 1
## monkey.1 list 1
## month numeric 1
## monthly.multiplier numeric 1
## msg character 1
## my.var numeric 1
## ng character 2
## numbers numeric 3
## old list 66
## old.text character 1
## oneCorPlot function
## opinion.dir numeric 10
## opinion.strength numeric 10
## out.0 data.frame 100 x 2
## out.1 data.frame 100 x 2
## out.2 data.frame 100 x 2
## parenthood data.frame 100 x 4
## payments numeric 1
## PJ character 1
## plotOne function
## projecthome character 1
## quadruple function
## row.1 numeric 3
## row.2 numeric 3
## some.data numeric 18
## speaker character 10
## speech.by.char list 3
## suspicious.cases logical 176
## teams character 17
## text character 2
## today Date 1
## tombliboo character 2
## total.paid numeric 1
## upsy.daisy character 4
## utterance character 10
## w character 1
## W character 1
## w.length integer 1
## width numeric 1
## words character 7
## x numeric 1
## X1 numeric 11
## X2 numeric 11
## X3 numeric 11
## X4 numeric 11
## xtab.3d table 3 x 4 x 2
## y numeric 2
## Y1 numeric 11
## Y2 numeric 11
## Y3 numeric 11
## Y4 numeric 11
```
we see everything in our workspace from this chapter including the `quadruple()` function itself, as well as the `my.var` variable that we just created.
Now that we know how to create our own functions in R, it’s probably a good idea to talk a little more about some of the other properties of functions that I’ve been glossing over. To start with, let’s take this opportunity to type the name of the function at the command line without the parentheses:
`quadruple`
```
## function(x) {
## y <- x*4
## return(y)
## }
```
As you can see, when you type the name of a function at the command line, R prints out the underlying source code that we used to define the function in the first place. In the case of the `quadruple()` function, this is quite helpful to us – we can read this code and actually see what the function does. For other functions, this is less helpful, as we saw back in Section 3.5 when we tried typing `citation` rather than `citation()` .
### 8.4.1 Function arguments revisited
Okay, now that we are starting to get a sense for how functions are constructed, let’s have a look at two, slightly more complicated functions that I’ve created. The source code for these functions is contained within the `functionexample2.R` and `functionexample3.R` scripts. Let’s start by looking at the first one:
```
## --- functionexample2.R
pow <- function( x, y = 1) {
out <- x^y # raise x to the power y
return( out )
}
```
and if we type
```
source("functionexample2.R")
```
to load the `pow()` function into our workspace, then we can make use of it. As you can see from looking at the code for this function, it has two arguments `x` and `y` , and all it does is raise `x` to the power of `y` . For instance, this command `pow(x=3, y=2)` `## [1] 9` calculates the value of \(3^2\). The interesting thing about this function isn't what it does, since R already has perfectly good mechanisms for calculating powers. Rather, notice that when I defined the function, I specified `y=1` when listing the arguments. That's the default value for `y` . So if we enter a command without specifying a value for `y` , then the function assumes that we want `y=1` : `pow( x=3 )` `## [1] 3` However, since I didn't specify any default value for `x` when I defined the `pow()` function, we always need to input a value for `x` . If we don't, R will spit out an error message (along the lines of `argument "x" is missing, with no default`). So now you know how to specify default values for an argument. The other thing I should point out while I'm on this topic is the use of the `...` argument. The `...` argument is a special construct in R which is only used within functions. It is used as a way of matching against multiple user inputs: in other words, `...` is used as a mechanism to allow the user to enter as many inputs as they like. I won't talk about the low-level details of how this works at all, but I will show you a simple example of a function that makes use of it. To that end, consider the following script:
```
## --- functionexample3.R
doubleMax <- function( ... ) {
max.val <- max( ... ) # find the largest value in ...
out <- 2 * max.val # double it
return( out )
}
```
When we type
```
source("functionexample3.R")
```
, R creates the `doubleMax()` function. You can type in as many inputs as you like. The `doubleMax()` function identifies the largest value in the inputs, by passing all the user inputs to the `max()` function, and then doubles it. For example: `doubleMax( 1,2,5 )` `## [1] 10`
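Since the `...` gets passed straight through to `max()`, the inputs to `doubleMax()` don't even need to be single numbers; vectors work just as well. For instance, assuming the function has been defined as above: `doubleMax( c(1,2,5), 10 )` `## [1] 20`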
### 8.4.2 There’s more to functions than this
There are a lot of other details about functions that I've hidden in my description in this chapter. Experienced programmers will wonder exactly how the "scoping rules" work in R,138 or want to know how to use a function to create variables in other environments139, or if function objects can be assigned as elements of a list140 and probably hundreds of other things besides. However, I don't want to have this discussion get too cluttered with details, so I think it's best – at least for the purposes of the current book – to stop here.
## 8.5 Implicit loops
There's one last topic I want to discuss in this chapter. In addition to providing the explicit looping structures via `while` and `for` , R also provides a collection of functions for implicit loops. What I mean by this is that these are functions that carry out operations very similar to those that you'd normally use a loop for. However, instead of typing out the whole loop, the whole thing is done with a single command. The main reason why this can be handy is that – due to the way that R is written – these implicit looping functions are usually able to do the same calculations much faster than the corresponding explicit loops. In most applications that beginners might want to undertake, this probably isn't very important, since most beginners tend to start out working with fairly small data sets and don't usually need to undertake extremely time consuming number crunching. However, because you often see these functions referred to in other contexts, it may be useful to very briefly discuss a few of them. The first and simplest of these functions is `sapply()` . The two most important arguments to this function are `X` , which specifies a vector containing the data, and `FUN` , which specifies the name of a function that should be applied to each element of the data vector. The following example illustrates the basics of how it works:
```
words <- c("along", "the", "loom", "of", "the", "land")
sapply( X = words, FUN = nchar )
```
```
## along the loom of the land
## 5 3 4 2 3 4
```
Notice how similar this is to the second example of a `for` loop in Section 8.2.2. The `sapply()` function has implicitly looped over the elements of `words` , and for each such element applied the `nchar()` function to calculate the number of letters in the corresponding word. The second of these functions is `tapply()` , which has three key arguments. As before, `X` specifies the data, and `FUN` specifies a function. However, there is also an `INDEX` argument which specifies a grouping variable.141 What the `tapply()` function does is loop over all of the different values that appear in the `INDEX` variable. Each such value defines a group: the `tapply()` function constructs the subset of `X` that corresponds to that group, and then applies the function `FUN` to that subset of the data. This probably sounds a little abstract, so let's consider a specific example:
```
gender <- c( "male","male","female","female","male" )
age <- c( 10,12,9,11,13 )
tapply( X = age, INDEX = gender, FUN = mean )
```
```
## female male
## 10.00000 11.66667
```
In this extract, what we're doing is using `gender` to define two different groups of people, and using their `age` values as the data. We then calculate the `mean()` of the ages, separately for the males and the females. A closely related function is `by()` . It actually does the same thing as `tapply()` , but the output is formatted a bit differently. This time around the three arguments are called `data` , `INDICES` and `FUN` , but they're pretty much the same thing. An example of how to use the `by()` function is shown in the following extract:
```
by( data = age, INDICES = gender, FUN = mean )
```
```
## gender: female
## [1] 10
## --------------------------------------------------------
## gender: male
## [1] 11.66667
```
The `tapply()` and `by()` functions are quite handy things to know about, and are pretty widely used. However, although I do make passing reference to `tapply()` later on, I don't make much use of them in this book. Before moving on, I should mention that there are several other functions that work along similar lines, and have suspiciously similar names: `lapply` , `mapply` , `apply` , `vapply` , `rapply` and `eapply` . However, none of these come up anywhere else in this book, so all I wanted to do here is draw your attention to the fact that they exist.
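In case you're curious, the closest relative of `sapply()` on that list is `lapply()`: it does essentially the same job, but it always returns a list rather than simplifying the output to a vector. A rough sketch, reusing the `words` vector from above:

```
lapply( X = words, FUN = nchar )   # same idea as sapply(), but the result is a list
```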
## 8.6 Summary
In this chapter I talked about several key programming concepts, things that you should know about if you want to start converting your simple scripts into full fledged programs:
* Writing and using scripts (Section 8.1).
* Using loops (Section 8.2) and implicit loops (Section 8.5).
* Making conditional statements (Section 8.3)
* Writing your own functions (Section 8.4)
As always, there are lots of things I'm ignoring in this chapter. It takes a lot of work to become a proper programmer, just as it takes a lot of work to be a proper psychologist or a proper statistician, and this book is certainly not going to provide you with all the tools you need to make that step. However, you'd be amazed at how much you can achieve using only the tools that I've covered up to this point. Loops, conditionals and functions are very powerful things, especially when combined with the various tools discussed in Chapters 3, 4 and 7. Believe it or not, you're off to a pretty good start just by having made it to this point. If you want to keep going, there are (as always!) several other books you might want to look at. One that I've read and enjoyed is "A first course in statistical programming with R" Braun and Murdoch (2007), but quite a few people have suggested to me that "The art of R programming" Matloff and Matloff (2011) is worth the effort too.
* The quote comes from Count Zero (1986).
* Okay, I lied. Sue me. One of the coolest features of Rstudio is the support for R Markdown, which lets you embed R code inside a Markdown document, and you can automatically publish your R Markdown to the web on Rstudio's servers. If you're the kind of nerd interested in this sort of thing, it's really nice. And, yes, since I'm also that kind of nerd, of course I'm aware that iPython notebooks do the same thing and that R just nicked their idea. So what? It's still cool. And anyway, this book isn't called Learning Statistics with Python now, is it? Hm. Maybe I should write a Python version…
* As an aside: if there's only a single command that you want to include inside your loop, then you don't actually need to bother including the curly braces at all. However, until you're comfortable programming in R I'd advise always using them, even when you don't have to.
* Okay, fine. This example is still a bit ridiculous, in three respects. Firstly, the bank absolutely will not let the couple pay less than the amount required to terminate the loan in 30 years. Secondly, a constant interest rate over a 30 year period is hilarious. Thirdly, you can solve this much more efficiently than through brute force simulation. However, we're not exactly in the business of being realistic or efficient here.
* Lexical scope.
* The `assign()` function.
* Yes.
* Or a list of such variables.
# Part IV. Statistical theory
Part IV of the book is by far the most theoretical one, focusing as it does on the theory of statistical inference. Over the next three chapters my goal is to give you an introduction to probability theory (Chapter 9), sampling and estimation (Chapter 10) and statistical hypothesis testing (Chapter 11). Before we get started though, I want to say something about the big picture. Statistical inference is primarily about learning from data. The goal is no longer merely to describe our data, but to use the data to draw conclusions about the world. To motivate the discussion, I want to spend a bit of time talking about a philosophical puzzle known as the riddle of induction, because it speaks to an issue that will pop up over and over again throughout the book: statistical inference relies on assumptions. This sounds like a bad thing. In everyday life people say things like "you should never make assumptions", and psychology classes often talk about assumptions and biases as bad things that we should try to avoid. From bitter personal experience I have learned never to say such things around philosophers.
## On the limits of logical reasoning
The whole art of war consists in getting at what is on the other side of the hill, or, in other words, in learning what we do not know from what we do.
– Arthur Wellesley, 1st Duke of Wellington
I am told that the quote above came about as a consequence of a carriage ride across the countryside.142 He and his companion, J. W. Croker, were playing a guessing game, each trying to predict what would be on the other side of each hill. In every case it turned out that Wellesley was right and Croker was wrong. Many years later when Wellesley was asked about the game, he explained that "the whole art of war consists in getting at what is on the other side of the hill". Indeed, war is not special in this respect. All of life is a guessing game of one form or another, and getting by on a day to day basis requires us to make good guesses. So let's play a guessing game of our own.
Suppose you and I are observing the Wellesley-Croker competition, and after every three hills you and I have to predict who will win the next one, Wellesley or Croker. Let's say that `W` refers to a Wellesley victory and `C` refers to a Croker victory. After three hills, our data set looks like this: `WWW`

You: Three in a row doesn't mean much. I suppose Wellesley might be better at this than Croker, but it might just be luck. Still, I'm a bit of a gambler. I'll bet on Wellesley.
Me: I agree that three in a row isn’t informative, and I see no reason to prefer Wellesley’s guesses over Croker’s. I can’t justify betting at this stage. Sorry. No bet for me.
Your gamble paid off: three more hills go by, and Wellesley wins all three. Going into the next round of our game the score is 1-0 in favour of you, and our data set looks like this:
`WWW WWW`
I’ve organised the data into blocks of three so that you can see which batch corresponds to the observations that we had available at each step in our little side game. After seeing this new batch, our conversation continues:
You: Six wins in a row for <NAME>. This is starting to feel a bit suspicious. I’m still not certain, but I reckon that he’s going to win the next one too.
Me: I guess I don’t see that. Sure, I agree that Wellesley has won six in a row, but I don’t see any logical reason why that means he’ll win the seventh one. No bet.
You: Do your really think so? Fair enough, but my bet worked out last time, and I’m okay with my choice.
For a second time you were right, and for a second time I was wrong. Wellesley wins the next three hills, extending his winning record against Croker to 9-0. The data set available to us is now this:
`WWW WWW WWW`
And our conversation goes like this:
You: Okay, this is pretty obvious. Wellesley is way better at this game. We both agree he’s going to win the next hill, right?
Me: Is there really any logical evidence for that? Before we started this game, there were lots of possibilities for the first 10 outcomes, and I had no idea which one to expect.
`WWW WWW WWW W` was one possibility, but so was `WCC CWC WWC C` and `WWW WWW WWW C` or even `CCC CCC CCC C` . Because I had no idea what would happen, I'd have said they were all equally likely. I assume you would have too, right? I mean, that's what it means to say you have "no idea", isn't it?

You: I suppose so.
Me: Well then, the observations we’ve made logically rule out all possibilities except two:
`WWW WWW WWW C` or `WWW WWW WWW W` . Both of these are perfectly consistent with the evidence we've encountered so far, aren't they?

You: Yes, of course they are. Where are you going with this?
Me: So what’s changed then? At the start of our game, you’d have agreed with me that these are equally plausible, and none of the evidence that we’ve encountered has discriminated between these two possibilities. Therefore, both of these possibilities remain equally plausible, and I see no logical reason to prefer one over the other. So yes, while I agree with you that Wellesley’s run of 9 wins in a row is remarkable, I can’t think of a good reason to think he’ll win the 10th hill. No bet.
You: I see your point, but I’m still willing to chance it. I’m betting on Wellesley.
Wellesley’s winning streak continues for the next three hills. The score in the Wellesley-Croker game is now 12-0, and the score in our game is now 3-0. As we approach the fourth round of our game, our data set is this:
`WWW WWW WWW WWW`
and the conversation continues:
You: Oh yeah! Three more wins for Wellesley and another victory for me. Admit it, I was right about him! I guess we’re both betting on Wellesley this time around, right?
Me: I don’t know what to think. I feel like we’re in the same situation we were in last round, and nothing much has changed. There are only two legitimate possibilities for a sequence of 13 hills that haven’t already been ruled out,
`WWW WWW WWW WWW C` and `WWW WWW WWW WWW W` . It's just like I said last time: if all possible outcomes were equally sensible before the game started, shouldn't these two be equally sensible now given that our observations don't rule out either one? I agree that it feels like Wellesley is on an amazing winning streak, but where's the logical evidence that the streak will continue?

You: I think you're being unreasonable. Why not take a look at our scorecard, if you need evidence? You're the expert on statistics and you've been using this fancy logical analysis, but the fact is you're losing. I'm just relying on common sense and I'm winning. Maybe you should switch strategies.
Me: Hm, that is a good point and I don’t want to lose the game, but I’m afraid I don’t see any logical evidence that your strategy is better than mine. It seems to me that if there were someone else watching our game, what they’d have observed is a run of three wins to you. Their data would look like this:
`YYY` . Logically, I don't see that this is any different to our first round of watching Wellesley and Croker. Three wins to you doesn't seem like a lot of evidence, and I see no reason to think that your strategy is working out any better than mine. If I didn't think that `WWW` was good evidence then for Wellesley being better than Croker at their game, surely I have no reason now to think that `YYY` is good evidence that you're better at ours?

You: Okay, now I think you're being a jerk.
Me: I don’t see the logical evidence for that.
## Learning without making assumptions is a myth
There are lots of different ways in which we could dissect this dialogue, but since this is a statistics book pitched at psychologists, and not an introduction to the philosophy and psychology of reasoning, I’ll keep it brief. What I’ve described above is sometimes referred to as the riddle of induction: it seems entirely reasonable to think that a 12-0 winning record by Wellesley is pretty strong evidence that he will win the 13th game, but it is not easy to provide a proper logical justification for this belief. On the contrary, despite the obviousness of the answer, it’s not actually possible to justify betting on Wellesley without relying on some assumption that you don’t have any logical justification for.
The riddle of induction is most associated with the philosophical work of David Hume and more recently Nelson Goodman, but you can find examples of the problem popping up in fields as diverse as literature (Lewis Carroll) and machine learning (the "no free lunch" theorem). There really is something weird about trying to "learn what we do not know from what we do". The critical point is that assumptions and biases are unavoidable if you want to learn anything about the world. There is no escape from this, and it is just as true for statistical inference as it is for human reasoning. In the dialogue, I was taking aim at your perfectly sensible inferences as a human being, but the common sense reasoning that you relied on is no different to what a statistician would have done. Your "common sense" half of the dialogue relied on an implicit assumption that there exists some difference in skill between Wellesley and Croker, and what you were doing was trying to work out what that difference in skill level would be. My "logical analysis" rejects that assumption entirely. All I was willing to accept is that there are sequences of wins and losses, and that I did not know which sequences would be observed. Throughout the dialogue, I kept insisting that all logically possible data sets were equally plausible at the start of the Wellesley-Croker game, and the only way in which I ever revised my beliefs was to eliminate those possibilities that were factually inconsistent with the observations.
That sounds perfectly sensible on its own terms. In fact, it even sounds like the hallmark of good deductive reasoning. Like Sherlock Holmes, my approach was to rule out that which is impossible, in the hope that what would be left is the truth. Yet as we saw, ruling out the impossible never led me to make a prediction. On its own terms, everything I said in my half of the dialogue was entirely correct. An inability to make any predictions is the logical consequence of making "no assumptions". In the end I lost our game, because you did make some assumptions and those assumptions turned out to be right. Skill is a real thing, and because you believed in the existence of skill you were able to learn that Wellesley had more of it than Croker. Had you relied on a less sensible assumption to drive your learning, you might not have won the game.
Ultimately there are two things you should take away from this. Firstly, as I’ve said, you cannot avoid making assumptions if you want to learn anything from your data. But secondly, once you realise that assumptions are necessary, it becomes important to make sure you make the right ones! A data analysis that relies on few assumptions is not necessarily better than one that makes many assumptions: it all depends on whether those assumptions are good ones for your data. As we go through the rest of this book I’ll often point out the assumptions that underpin a particular tool, and how you can check whether those assumptions are sensible.
# Chapter 9 Introduction to probability
[God] has afforded us only the twilight … of Probability.
– John Locke
Up to this point in the book, we’ve discussed some of the key ideas in experimental design, and we’ve talked a little about how you can summarise a data set. To a lot of people, this is all there is to statistics: it’s about calculating averages, collecting all the numbers, drawing pictures, and putting them all in a report somewhere. Kind of like stamp collecting, but with numbers. However, statistics covers much more than that. In fact, descriptive statistics is one of the smallest parts of statistics, and one of the least powerful. The bigger and more useful part of statistics is that it provides tools that let you make inferences about data.
Once you start thinking about statistics in these terms – that statistics is there to help us draw inferences from data – you start seeing examples of it everywhere. For instance, here’s a tiny extract from a newspaper article in the Sydney Morning Herald (30 Oct 2010):
“I have a tough job,” the Premier said in response to a poll which found her government is now the most unpopular Labor administration in polling history, with a primary vote of just 23 per cent.
This kind of remark is entirely unremarkable in the papers or in everyday life, but let's have a think about what it entails. A polling company has conducted a survey, usually a pretty big one because they can afford it. I'm too lazy to track down the original survey, so let's just imagine that they called 1000 NSW voters at random, and 230 (23%) of those claimed that they intended to vote for the ALP. For the 2010 Federal election, the Australian Electoral Commission reported 4,610,795 enrolled voters in NSW; so the opinions of the remaining 4,609,795 voters (about 99.98% of voters) remain unknown to us. Even assuming that no-one lied to the polling company, the only thing we can say with 100% confidence is that the true ALP primary vote is somewhere between 230/4610795 (about 0.005%) and 4610025/4610795 (about 99.98%). So, on what basis is it legitimate for the polling company, the newspaper, and the readership to conclude that the ALP primary vote is only about 23%?
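If you want to check those two extreme percentages, the arithmetic is easy enough to reproduce:

```
230 / 4610795        # if none of the unsurveyed voters support the ALP: about 0.005%
4610025 / 4610795    # if all of the unsurveyed voters support the ALP: about 99.98%
```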
The answer to the question is pretty obvious: if I call 1000 people at random, and 230 of them say they intend to vote for the ALP, then it seems very unlikely that these are the only 230 people out of the entire voting public who actually intend to do so. In other words, we assume that the data collected by the polling company is pretty representative of the population at large. But how representative? Would we be surprised to discover that the true ALP primary vote is actually 24%? 29%? 37%? At this point everyday intuition starts to break down a bit. No-one would be surprised by 24%, and everybody would be surprised by 37%, but it’s a bit hard to say whether 29% is plausible. We need some more powerful tools than just looking at the numbers and guessing.
Inferential statistics provides the tools that we need to answer these sorts of questions, and since these kinds of questions lie at the heart of the scientific enterprise, they take up the lion’s share of every introductory course on statistics and research methods. However, the theory of statistical inference is built on top of probability theory. And it is to probability theory that we must now turn. This discussion of probability theory is basically background: there’s not a lot of statistics per se in this chapter, and you don’t need to understand this material in as much depth as the other chapters in this part of the book. Nevertheless, because probability theory does underpin so much of statistics, it’s worth covering some of the basics.
## 9.1 How are probability and statistics different?
Before we start talking about probability theory, it’s helpful to spend a moment thinking about the relationship between probability and statistics. The two disciplines are closely related but they’re not identical. Probability theory is “the doctrine of chances”. It’s a branch of mathematics that tells you how often different kinds of events will happen. For example, all of these questions are things you can answer using probability theory:
* What are the chances of a fair coin coming up heads 10 times in a row?
* If I roll two six sided dice, how likely is it that I’ll roll two sixes?
* How likely is it that five cards drawn from a perfectly shuffled deck will all be hearts?
* What are the chances that I’ll win the lottery?
Notice that all of these questions have something in common. In each case the “truth of the world” is known, and my question relates to what kind of events will happen. In the first question I know that the coin is fair, so there’s a 50% chance that any individual coin flip will come up heads. In the second question, I know that the chance of rolling a 6 on a single die is 1 in 6. In the third question I know that the deck is shuffled properly. And in the fourth question, I know that the lottery follows specific rules. You get the idea. The critical point is that probabilistic questions start with a known model of the world, and we use that model to do some calculations. The underlying model can be quite simple. For instance, in the coin flipping example, we can write down the model like this: \[ P(\mbox{heads}) = 0.5 \] which you can read as “the probability of heads is 0.5”. As we’ll see later, in the same way that percentages are numbers that range from 0% to 100%, probabilities are just numbers that range from 0 to 1. When using this probability model to answer the first question, I don’t actually know exactly what’s going to happen. Maybe I’ll get 10 heads, like the question says. But maybe I’ll get three heads. That’s the key thing: in probability theory, the model is known, but the data are not.
So that’s probability. What about statistics? Statistical questions work the other way around. In statistics, we do not know the truth about the world. All we have is the data, and it is from the data that we want to learn the truth about the world. Statistical questions tend to look more like these:
* If my friend flips a coin 10 times and gets 10 heads, are they playing a trick on me?
* If five cards off the top of the deck are all hearts, how likely is it that the deck was shuffled?
* If the lottery commissioner’s spouse wins the lottery, how likely is it that the lottery was rigged?
This time around, the only thing we have are data. What I know is that I saw my friend flip the coin 10 times and it came up heads every time. And what I want to infer is whether or not I should conclude that what I just saw was actually a fair coin being flipped 10 times in a row, or whether I should suspect that my friend is playing a trick on me. The data I have look like this:
```
H H H H H H H H H H
```
and what I’m trying to do is work out which “model of the world” I should put my trust in. If the coin is fair, then the model I should adopt is one that says that the probability of heads is 0.5; that is, \(P(\mbox{heads}) = 0.5\). If the coin is not fair, then I should conclude that the probability of heads is not 0.5, which we would write as \(P(\mbox{heads}) \neq 0.5\). In other words, the statistical inference problem is to figure out which of these probability models is right. Clearly, the statistical question isn’t the same as the probability question, but they’re deeply connected to one another. Because of this, a good introduction to statistical theory will start with a discussion of what probability is and how it works.
## 9.2 What does probability mean?
Let’s start with the first of these questions. What is “probability”? It might seem surprising to you, but while statisticians and mathematicians (mostly) agree on what the rules of probability are, there’s much less of a consensus on what the word really means. It seems weird because we’re all very comfortable using words like “chance”, “likely”, “possible” and “probable”, and it doesn’t seem like it should be a very difficult question to answer. If you had to explain “probability” to a five year old, you could do a pretty good job. But if you’ve ever had that experience in real life, you might walk away from the conversation feeling like you didn’t quite get it right, and that (like many everyday concepts) it turns out that you don’t really know what it’s all about.
So I’ll have a go at it. Let’s suppose I want to bet on a soccer game between two teams of robots, Arduino Arsenal and C Milan. After thinking about it, I decide that there is an 80% probability of Arduino Arsenal winning. What do I mean by that? Here are three possibilities…
* They’re robot teams, so I can make them play over and over again, and if I did that, Arduino Arsenal would win 8 out of every 10 games on average.
* For any given game, I would only agree that betting on this game is “fair” if a $1 bet on C Milan gives a $5 payoff (i.e., I get my $1 back plus a $4 reward for being correct), as would a $4 bet on Arduino Arsenal (i.e., my $4 bet plus a $1 reward).
* My subjective “belief” or “confidence” in an Arduino Arsenal victory is four times as strong as my belief in a C Milan victory.
Each of these seems sensible. However they’re not identical, and not every statistician would endorse all of them. The reason is that there are different statistical ideologies (yes, really!) and depending on which one you subscribe to, you might say that some of those statements are meaningless or irrelevant. In this section, I give a brief introduction to the two main approaches that exist in the literature. These are by no means the only approaches, but they’re the two big ones.
### 9.2.1 The frequentist view
The first of the two major approaches to probability, and the more dominant one in statistics, is referred to as the frequentist view, and it defines probability as a long-run frequency. Suppose we were to try flipping a fair coin, over and over again. By definition, this is a coin that has \(P(H) = 0.5\). What might we observe? One possibility is that the first 20 flips might look like this:
```
T,H,H,H,H,T,T,H,H,H,H,T,H,H,T,T,T,T,T,H
```
In this case 11 of these 20 coin flips (55%) came up heads. Now suppose that I’d been keeping a running tally of the number of heads (which I’ll call \(N_H\)) that I’ve seen, across the first \(N\) flips, and calculate the proportion of heads \(N_H / N\) every time. Here’s what I’d get (I did literally flip coins to produce this!):
Number of flips | Number of heads | Proportion |
| --- | --- | --- |
1 | 0 | 0.00 |
2 | 1 | 0.50 |
3 | 2 | 0.67 |
4 | 3 | 0.75 |
5 | 4 | 0.80 |
6 | 4 | 0.67 |
7 | 4 | 0.57 |
8 | 5 | 0.63 |
9 | 6 | 0.67 |
10 | 7 | 0.70 |
11 | 8 | 0.73 |
12 | 8 | 0.67 |
13 | 9 | 0.69 |
14 | 10 | 0.71 |
15 | 10 | 0.67 |
16 | 10 | 0.63 |
17 | 10 | 0.59 |
18 | 10 | 0.56 |
19 | 10 | 0.53 |
20 | 11 | 0.55 |
Notice that at the start of the sequence, the proportion of heads fluctuates wildly, starting at .00 and rising as high as .80. Later on, one gets the impression that it dampens out a bit, with more and more of the values actually being pretty close to the “right” answer of .50. This is the frequentist definition of probability in a nutshell: flip a fair coin over and over again, and as \(N\) grows large (approaches infinity, denoted \(N\rightarrow \infty\)), the proportion of heads will converge to 50%. There are some subtle technicalities that the mathematicians care about, but qualitatively speaking, that’s how the frequentists define probability. Unfortunately, I don’t have an infinite number of coins, or the infinite patience required to flip a coin an infinite number of times. However, I do have a computer, and computers excel at mindless repetitive tasks. So I asked my computer to simulate flipping a coin 1000 times, and then drew a picture of what happens to the proportion \(N_H / N\) as \(N\) increases. Actually, I did it four times, just to make sure it wasn’t a fluke. The results are shown in Figure 9.1. As you can see, the proportion of observed heads eventually stops fluctuating, and settles down; when it does, the number at which it finally settles is the true probability of heads.
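If you’d like to reproduce this kind of picture yourself, here’s a minimal sketch of the simulation I’m describing; the variable names are just ones I’ve made up for illustration, and the plot will be much plainer than Figure 9.1:

```
set.seed( 1 )                                       # so the simulation is reproducible
flips <- rbinom( n = 1000, size = 1, prob = 0.5 )   # 1000 flips of a fair coin: 1 = heads, 0 = tails
running.proportion <- cumsum( flips ) / (1:1000)    # proportion of heads after each flip
plot( running.proportion, type = "l",
      xlab = "Number of flips", ylab = "Proportion of heads" )
abline( h = 0.5, lty = 2 )                          # the true probability of heads
```

Every time you run it the wobbly line looks a little different, but it always settles down near .5.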
The frequentist definition of probability has some desirable characteristics. Firstly, it is objective: the probability of an event is necessarily grounded in the world. The only way that probability statements can make sense is if they refer to (a sequence of) events that occur in the physical universe.143 Secondly, it is unambiguous: any two people watching the same sequence of events unfold, trying to calculate the probability of an event, must inevitably come up with the same answer. However, it also has undesirable characteristics. Firstly, infinite sequences don’t exist in the physical world. Suppose you picked up a coin from your pocket and started to flip it. Every time it lands, it impacts on the ground. Each impact wears the coin down a bit; eventually, the coin will be destroyed. So, one might ask whether it really makes sense to pretend that an “infinite” sequence of coin flips is even a meaningful concept, or an objective one. We can’t say that an “infinite sequence” of events is a real thing in the physical universe, because the physical universe doesn’t allow infinite anything. More seriously, the frequentist definition has a narrow scope. There are lots of things out there that human beings are happy to assign probability to in everyday language, but cannot (even in theory) be mapped onto a hypothetical sequence of events. For instance, if a meteorologist comes on TV and says, “the probability of rain in Adelaide on 2 November 2048 is 60%” we humans are happy to accept this. But it’s not clear how to define this in frequentist terms. There’s only one city of Adelaide, and only 2 November 2048. There’s no infinite sequence of events here, just a once-off thing. Frequentist probability genuinely forbids us from making probability statements about a single event. From the frequentist perspective, it will either rain tomorrow or it will not; there is no “probability” that attaches to a single non-repeatable event. Now, it should be said that there are some very clever tricks that frequentists can use to get around this. One possibility is that what the meteorologist means is something like this: “There is a category of days for which I predict a 60% chance of rain; if we look only across those days for which I make this prediction, then on 60% of those days it will actually rain”. It’s very weird and counterintuitive to think of it this way, but you do see frequentists do this sometimes. And it will come up later in this book (see Section 10.5).
### 9.2.2 The Bayesian view
The Bayesian view of probability is often called the subjectivist view, and it is a minority view among statisticians, but one that has been steadily gaining traction for the last several decades. There are many flavours of Bayesianism, making it hard to say exactly what “the” Bayesian view is. The most common way of thinking about subjective probability is to define the probability of an event as the degree of belief that an intelligent and rational agent assigns to the truth of that event. From that perspective, probabilities don’t exist in the world, but rather in the thoughts and assumptions of people and other intelligent beings. However, in order for this approach to work, we need some way of operationalising “degree of belief”. One way that you can do this is to formalise it in terms of “rational gambling”, though there are many other ways. Suppose that I believe that there’s a 60% probability of rain tomorrow. If someone offers me a bet: if it rains tomorrow, then I win $5, but if it doesn’t rain then I lose $5. Clearly, from my perspective, this is a pretty good bet. On the other hand, if I think that the probability of rain is only 40%, then it’s a bad bet to take. Thus, we can operationalise the notion of a “subjective probability” in terms of what bets I’m willing to accept.
What are the advantages and disadvantages to the Bayesian approach? The main advantage is that it allows you to assign probabilities to any event you want to. You don’t need to be limited to those events that are repeatable. The main disadvantage (to many people) is that we can’t be purely objective – specifying a probability requires us to specify an entity that has the relevant degree of belief. This entity might be a human, an alien, a robot, or even a statistician, but there has to be an intelligent agent out there that believes in things. To many people this is uncomfortable: it seems to make probability arbitrary. While the Bayesian approach does require that the agent in question be rational (i.e., obey the rules of probability), it does allow everyone to have their own beliefs; I can believe the coin is fair and you don’t have to, even though we’re both rational. The frequentist view doesn’t allow any two observers to attribute different probabilities to the same event: when that happens, then at least one of them must be wrong. The Bayesian view does not prevent this from occurring. Two observers with different background knowledge can legitimately hold different beliefs about the same event. In short, where the frequentist view is sometimes considered to be too narrow (forbids lots of things that we want to assign probabilities to), the Bayesian view is sometimes thought to be too broad (allows too many differences between observers).
### 9.2.3 What’s the difference? And who is right?
Now that you’ve seen each of these two views independently, it’s useful to make sure you can compare the two. Go back to the hypothetical robot soccer game at the start of the section. What do you think a frequentist and a Bayesian would say about these three statements? Which statement would a frequentist say is the correct definition of probability? Which one would a Bayesian endorse? Would some of these statements be meaningless to a frequentist or a Bayesian? If you’ve understood the two perspectives, you should have some sense of how to answer those questions.
Okay, assuming you understand the difference, you might be wondering which of them is right. Honestly, I don’t know that there is a right answer. As far as I can tell there’s nothing mathematically incorrect about the way frequentists think about sequences of events, and there’s nothing mathematically incorrect about the way that Bayesians define the beliefs of a rational agent. In fact, when you dig down into the details, Bayesians and frequentists actually agree about a lot of things. Many frequentist methods lead to decisions that Bayesians agree a rational agent would make. Many Bayesian methods have very good frequentist properties.
For the most part, I’m a pragmatist so I’ll use any statistical method that I trust. As it turns out, that makes me prefer Bayesian methods, for reasons I’ll explain towards the end of the book, but I’m not fundamentally opposed to frequentist methods. Not everyone is quite so relaxed. For instance, consider <NAME>, one of the towering figures of 20th century statistics and a vehement opponent to all things Bayesian, whose paper on the mathematical foundations of statistics referred to Bayesian probability as “an impenetrable jungle [that] arrests progress towards precision of statistical concepts” Fisher (1922b). Or the psychologist <NAME>, who suggests that relying on frequentist methods could turn you into “a potent but sterile intellectual rake who leaves in his merry path a long train of ravished maidens but no viable scientific offspring” Meehl (1967). The history of statistics, as you might gather, is not devoid of entertainment.
In any case, while I personally prefer the Bayesian view, the majority of statistical analyses are based on the frequentist approach. My reasoning is pragmatic: the goal of this book is to cover roughly the same territory as a typical undergraduate stats class in psychology, and if you want to understand the statistical tools used by most psychologists, you’ll need a good grasp of frequentist methods. I promise you that this isn’t wasted effort. Even if you end up wanting to switch to the Bayesian perspective, you really should read through at least one book on the “orthodox” frequentist view. And since R is the most widely used statistical language for Bayesians, you might as well read a book that uses R. Besides, I won’t completely ignore the Bayesian perspective. Every now and then I’ll add some commentary from a Bayesian point of view, and I’ll revisit the topic in more depth in Chapter 17.
## 9.3 Basic probability theory
Ideological arguments between Bayesians and frequentists notwithstanding, it turns out that people mostly agree on the rules that probabilities should obey. There are lots of different ways of arriving at these rules. The most commonly used approach is based on the work of <NAME>, one of the great Soviet mathematicians of the 20th century. I won’t go into a lot of detail, but I’ll try to give you a bit of a sense of how it works. And in order to do so, I’m going to have to talk about my pants.
### 9.3.1 Introducing probability distributions
One of the disturbing truths about my life is that I only own 5 pairs of pants: three pairs of jeans, the bottom half of a suit, and a pair of tracksuit pants. Even sadder, I’ve given them names: I call them \(X_1\), \(X_2\), \(X_3\), \(X_4\) and \(X_5\). I really do: that’s why they call me Mister Imaginative. Now, on any given day, I pick out exactly one pair of pants to wear. Not even I’m so stupid as to try to wear two pairs of pants, and thanks to years of training I never go outside without wearing pants anymore. If I were to describe this situation using the language of probability theory, I would refer to each pair of pants (i.e., each \(X\)) as an elementary event. The key characteristic of elementary events is that every time we make an observation (e.g., every time I put on a pair of pants), then the outcome will be one and only one of these events. Like I said, these days I always wear exactly one pair of pants, so my pants satisfy this constraint. Similarly, the set of all possible events is called a sample space. Granted, some people would call it a “wardrobe”, but that’s because they’re refusing to think about my pants in probabilistic terms. Sad.
Okay, now that we have a sample space (a wardrobe), which is built from lots of possible elementary events (pants), what we want to do is assign a probability to each of these elementary events. For an event \(X\), the probability of that event \(P(X)\) is a number that lies between 0 and 1. The bigger the value of \(P(X)\), the more likely the event is to occur. So, for example, if \(P(X) = 0\), it means the event \(X\) is impossible (i.e., I never wear those pants). On the other hand, if \(P(X) = 1\) it means that event \(X\) is certain to occur (i.e., I always wear those pants). For probability values in the middle, it means that I sometimes wear those pants. For instance, if \(P(X) = 0.5\) it means that I wear those pants half of the time.
At this point, we’re almost done. The last thing we need to recognise is that “something always happens”. Every time I put on pants, I really do end up wearing pants (crazy, right?). What this somewhat trite statement means, in probabilistic terms, is that the probabilities of the elementary events need to add up to 1. This is known as the law of total probability, not that any of us really care. More importantly, if these requirements are satisfied, then what we have is a probability distribution. For example, this is an example of a probability distribution
Which pants | Blue jeans | Grey jeans | Black jeans | Black suit | Blue tracksuit |
| --- | --- | --- | --- | --- | --- |
Label | \(X_1\) | \(X_2\) | \(X_3\) | \(X_4\) | \(X_5\) |
Probability | \(P(X_1) = .5\) | \(P(X_2) = .3\) | \(P(X_3) = .1\) | \(P(X_4) = 0\) | \(P(X_5) = .1\) |
Each of the events has a probability that lies between 0 and 1, and if we add up the probability of all events, they sum to 1. Awesome. We can even draw a nice bar graph (see Section 6.7) to visualise this distribution, as shown in Figure 9.2. And at this point, we’ve all achieved something. You’ve learned what a probability distribution is, and I’ve finally managed to find a way to create a graph that focuses entirely on my pants. Everyone wins!
The only other thing that I need to point out is that probability theory allows you to talk about non elementary events as well as elementary ones. The easiest way to illustrate the concept is with an example. In the pants example, it’s perfectly legitimate to refer to the probability that I wear jeans. In this scenario, the “Dan wears jeans” event is said to have happened as long as the elementary event that actually did occur is one of the appropriate ones; in this case “blue jeans”, “black jeans” or “grey jeans”. In mathematical terms, we defined the “jeans” event \(E\) to correspond to the set of elementary events \((X_1, X_2, X_3)\). If any of these elementary events occurs, then \(E\) is also said to have occurred. Having decided to write down the definition of \(E\) this way, it’s pretty straightforward to state what the probability \(P(E)\) is: we just add everything up. In this particular case \[ P(E) = P(X_1) + P(X_2) + P(X_3) \] and, since the probabilities of blue, grey and black jeans respectively are .5, .3 and .1, the probability that I wear jeans is equal to .9.
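Just to make that calculation concrete, here’s the same arithmetic done in R, using the probabilities from the pants table above (the variable name is just one I’ve invented for this example):

```
pants.probs <- c( blue.jeans = .5, grey.jeans = .3, black.jeans = .1,
                  black.suit = 0, blue.tracksuit = .1 )
sum( pants.probs )                                                   # the whole distribution sums to 1
sum( pants.probs[ c("blue.jeans", "grey.jeans", "black.jeans") ] )   # P(jeans) = .9
```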
At this point you might be thinking that this is all terribly obvious and simple and you’d be right. All we’ve really done is wrap some basic mathematics around a few common sense intuitions. However, from these simple beginnings it’s possible to construct some extremely powerful mathematical tools. I’m definitely not going to go into the details in this book, but what I will do is list – in Table 9.1 – some of the other rules that probabilities satisfy. These rules can be derived from the simple assumptions that I’ve outlined above, but since we don’t actually use these rules for anything in this book, I won’t do so here.
English | Notation | Formula |
| --- | --- | --- |
Not \(A\) | \(P(\neg A)\) | \(1-P(A)\) |
\(A\) or \(B\) | \(P(A \cup B)\) | \(P(A) + P(B) - P(A \cap B)\) |
\(A\) and \(B\) | \(P(A \cap B)\) | \(P(A|B) P(B)\) |
## 9.4 The binomial distribution
As you might imagine, probability distributions vary enormously, and there’s an enormous range of distributions out there. However, they aren’t all equally important. In fact, the vast majority of the content in this book relies on one of five distributions: the binomial distribution, the normal distribution, the \(t\) distribution, the \(\chi^2\) (“chi-square”) distribution and the \(F\) distribution. Given this, what I’ll do over the next few sections is provide a brief introduction to all five of these, paying special attention to the binomial and the normal. I’ll start with the binomial distribution, since it’s the simplest of the five.
### 9.4.1 Introducing the binomial
The theory of probability originated in the attempt to describe how games of chance work, so it seems fitting that our discussion of the binomial distribution should involve a discussion of rolling dice and flipping coins. Let’s imagine a simple “experiment”: in my hot little hand I’m holding 20 identical six-sided dice. On one face of each die there’s a picture of a skull; the other five faces are all blank. If I proceed to roll all 20 dice, what’s the probability that I’ll get exactly 4 skulls? Assuming that the dice are fair, we know that the chance of any one die coming up skulls is 1 in 6; to say this another way, the skull probability for a single die is approximately \(.167\). This is enough information to answer our question, so let’s have a look at how it’s done.
As usual, we’ll want to introduce some names and some notation. We’ll let \(N\) denote the number of dice rolls in our experiment, which is often referred to as the size parameter of our binomial distribution. We’ll also use \(\theta\) to refer to the probability that a single die comes up skulls, a quantity that is usually called the success probability of the binomial.144 Finally, we’ll use \(X\) to refer to the results of our experiment, namely the number of skulls I get when I roll the dice. Since the actual value of \(X\) is due to chance, we refer to it as a random variable. In any case, now that we have all this terminology and notation, we can use it to state the problem a little more precisely. The quantity that we want to calculate is the probability that \(X = 4\) given that we know that \(\theta = .167\) and \(N=20\). The general “form” of the thing I’m interested in calculating could be written as \[ P(X \ | \ \theta, N) \] and we’re interested in the special case where \(X=4\), \(\theta = .167\) and \(N=20\). There’s only one more piece of notation I want to refer to before moving on to discuss the solution to the problem. If I want to say that \(X\) is generated randomly from a binomial distribution with parameters \(\theta\) and \(N\), the notation I would use is as follows: \[ X \sim \mbox{Binomial}(\theta, N) \]
Yeah, yeah. I know what you’re thinking: notation, notation, notation. Really, who cares? Very few readers of this book are here for the notation, so I should probably move on and talk about how to use the binomial distribution. I’ve included the formula for the binomial distribution in Table 9.2, since some readers may want to play with it themselves, but since most people probably don’t care that much and because we don’t need the formula in this book, I won’t talk about it in any detail. Instead, I just want to show you what the binomial distribution looks like. To that end, Figure 9.3 plots the binomial probabilities for all possible values of \(X\) for our dice rolling experiment, from \(X=0\) (no skulls) all the way up to \(X=20\) (all skulls). Note that this is basically a bar chart, and is no different to the “pants probability” plot I drew in Figure 9.2. On the horizontal axis we have all the possible events, and on the vertical axis we can read off the probability of each of those events. So, the probability of rolling 4 skulls out of 20 times is about 0.20 (the actual answer is 0.2022036, as we’ll see in a moment). In other words, you’d expect that to happen about 20% of the times you repeated this experiment.
Binomial | Normal |
| --- | --- |
\(P(X | \theta, N) = \displaystyle\frac{N!}{X! (N-X)!} \theta^X (1-\theta)^{N-X}\) | \(p(X | \mu, \sigma) = \displaystyle\frac{1}{\sqrt{2\pi}\sigma} \exp \left( -\frac{(X - \mu)^2}{2\sigma^2} \right)\) |
### 9.4.2 Working with the binomial distribution in R
Although some people find it handy to know the formulas in Table 9.2, most people just want to know how to use the distributions without worrying too much about the maths. To that end, R has a function called `dbinom()` that calculates binomial probabilities for us. The main arguments to the function are
* `x`. This is a number, or vector of numbers, specifying the outcomes whose probability you’re trying to calculate.
* `size`. This is a number telling R the size of the experiment.
* `prob`. This is the success probability for any one trial in the experiment.

So, in order to calculate the probability of getting `x = 4` skulls, from an experiment of `size = 20` trials, in which the probability of getting a skull on any one trial is `prob = 1/6` … well, the command I would use is simply this:
```
dbinom( x = 4, size = 20, prob = 1/6 )
```
`## [1] 0.2022036`
To give you a feel for how the binomial distribution changes when we alter the values of \(\theta\) and \(N\), let’s suppose that instead of rolling dice, I’m actually flipping coins. This time around, my experiment involves flipping a fair coin repeatedly, and the outcome that I’m interested in is the number of heads that I observe. In this scenario, the success probability is now \(\theta = 1/2\). Suppose I were to flip the coin \(N=20\) times. In this example, I’ve changed the success probability, but kept the size of the experiment the same. What does this do to our binomial distribution? Well, as Figure 9.4 shows, the main effect of this is to shift the whole distribution, as you’d expect. Okay, what if we flipped a coin \(N=100\) times? Well, in that case, we get Figure 9.5. The distribution stays roughly in the middle, but there’s a bit more variability in the possible outcomes.
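If you want to draw rough versions of these pictures yourself, you can combine `dbinom()` with the `barplot()` function (see Section 6.7); the figures in the book are just tidier versions of the same idea:

```
# binomial probabilities for 20 flips of a fair coin (cf. Figure 9.4)
barplot( height = dbinom( x = 0:20, size = 20, prob = 1/2 ), names.arg = 0:20,
         xlab = "Number of heads", ylab = "Probability" )

# and for 100 flips of a fair coin (cf. Figure 9.5)
barplot( height = dbinom( x = 0:100, size = 100, prob = 1/2 ), names.arg = 0:100,
         xlab = "Number of heads", ylab = "Probability" )
```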
At this point, I should probably explain the name of the `dbinom()` function. Obviously, the “binom” part comes from the fact that we’re working with the binomial distribution, but the “d” prefix is probably a bit of a mystery. In this section I’ll give a partial explanation: specifically, I’ll explain why there is a prefix. As for why it’s a “d” specifically, you’ll have to wait until the next section. What’s going on here is that R actually provides four functions in relation to the binomial distribution. These four functions are `dbinom()` , `pbinom()` , `rbinom()` and `qbinom()` , and each one calculates a different quantity of interest. Not only that, R does the same thing for every probability distribution that it implements. No matter what distribution you’re talking about, there’s a `d` function, a `p` function, a `q` function and a `r` function. This is illustrated in Table 9.3, using the binomial distribution and the normal distribution as examples.
What it does | Prefix | Normal distribution | Binomial distribution |
| --- | --- | --- | --- |
probability (density) of | d | dnorm() | dbinom() |
cumulative probability of | p | pnorm() | pbinom() |
generate random number from | r | rnorm() | rbinom() |
quantile of | q | qnorm() | qbinom() |
Let’s have a look at what all four functions do. Firstly, all four versions of the function require you to specify the `size` and `prob` arguments: no matter what you’re trying to get R to calculate, it needs to know what the parameters are. However, they differ in terms of what the other argument is, and what the output is. So let’s look at them one at a time.
* The `d` form we’ve already seen: you specify a particular outcome `x`, and the output is the probability of obtaining exactly that outcome. (The “d” is short for density, but ignore that for now.)
* The `p` form calculates the cumulative probability. You specify a particular quantile `q`, and it tells you the probability of obtaining an outcome smaller than or equal to `q`.
* The `q` form calculates the quantiles of the distribution. You specify a probability value `p`, and it gives you the corresponding percentile. That is, the value of the variable for which there’s a probability `p` of obtaining an outcome lower than that value.
* The `r` form is a random number generator: specifically, it generates `n` random outcomes from the distribution.

This is a little abstract, so let’s look at some concrete examples. Again, we’ve already covered `dbinom()` so let’s focus on the other three versions. We’ll start with `pbinom()`, and we’ll go back to the skull-dice example. Again, I’m rolling 20 dice, and each die has a 1 in 6 chance of coming up skulls. Suppose, however, that I want to know the probability of rolling 4 or fewer skulls. If I wanted to, I could use the `dbinom()` function to calculate the exact probability of rolling 0 skulls, 1 skull, 2 skulls, 3 skulls and 4 skulls and then add these up, but there’s a faster way. Instead, I can calculate this using the `pbinom()` function. Here’s the command:
```
pbinom( q = 4, size = 20, prob = 1/6 )
```
`## [1] 0.7687492`
In other words, there is a 76.9% chance that I will roll 4 or fewer skulls. Or, to put it another way, R is telling us that a value of 4 is actually the 76.9th percentile of this binomial distribution.
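If you want to reassure yourself that this really is the same thing as adding up the individual `dbinom()` probabilities, the check is a one-liner:

```
sum( dbinom( x = 0:4, size = 20, prob = 1/6 ) )   # P(0) + P(1) + P(2) + P(3) + P(4)
```

`## [1] 0.7687492`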
Next, let’s consider the `qbinom()` function. Let’s say I want to calculate the 75th percentile of the binomial distribution. If we’re sticking with our skulls example, I would use the following command to do this:
```
qbinom( p = 0.75, size = 20, prob = 1/6)
```
`## [1] 4` Hm. There’s something odd going on here. Let’s think this through. What the `qbinom()` function appears to be telling us is that the 75th percentile of the binomial distribution is 4, even though we saw from the `pbinom()` function that 4 is actually the 76.9th percentile. And it’s definitely the `pbinom()` function that is correct. I promise. The weirdness here comes from the fact that our binomial distribution doesn’t really have a 75th percentile. Not really. Why not? Well, there’s a 56.7% chance of rolling 3 or fewer skulls (you can type `pbinom(3, 20, 1/6)` to confirm this if you want), and a 76.9% chance of rolling 4 or fewer skulls. So there’s a sense in which the 75th percentile should lie “in between” 3 and 4 skulls. But that makes no sense at all! You can’t roll 20 dice and have 3.9 of them come up skulls. This issue can be handled in different ways: you could report an in-between value (or interpolated value, to use the technical name) like 3.9, you could round down (to 3) or you could round up (to 4). The `qbinom()` function rounds upwards: if you ask for a percentile that doesn’t actually exist (like the 75th in this example), R finds the smallest value for which the percentile rank is at least what you asked for. In this case, since the “true” 75th percentile (whatever that would mean) lies somewhere between 3 and 4 skulls, R rounds up and gives you an answer of 4. This subtlety is tedious, I admit, but thankfully it’s only an issue for discrete distributions like the binomial (see Section 2.2.5 for a discussion of continuous versus discrete). The other distributions that I’ll talk about (normal, \(t\), \(\chi^2\) and \(F\)) are all continuous, and so R can always return an exact quantile whenever you ask for it.

Finally, we have the random number generator. To use the `rbinom()` function, you specify how many times R should “simulate” the experiment using the `n` argument, and it will generate random outcomes from the binomial distribution. So, for instance, suppose I were to repeat my die rolling experiment 100 times. I could get R to simulate the results of these experiments by using the following command:
```
rbinom( n = 100, size = 20, prob = 1/6 )
```
```
## [1] 3 2 9 2 4 4 3 7 1 0 1 5 3 5 4 3 3 2 3 1 4 3 2 3 2 0 4 2 4 4 6 1 3 4 7
## [36] 5 4 4 3 4 2 3 1 3 3 4 6 6 2 5 9 1 5 2 3 4 1 3 4 3 4 4 4 4 2 1 3 2 6 3
## [71] 2 4 6 4 4 2 4 1 5 4 2 4 8 3 3 2 3 5 5 3 1 2 3 4 6 2 2 2 1 2
```
As you can see, these numbers are pretty much what you’d expect given the distribution shown in Figure 9.3. Most of the time I roll somewhere between 1 and 5 skulls. There are a lot of subtleties associated with random number generation using a computer,145 but for the purposes of this book we don’t need to worry too much about them.
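Rather than squinting at 100 raw numbers, one handy trick is to tabulate how often each outcome occurred. Here’s a small sketch (your numbers will differ from mine, because the draws are random):

```
skulls <- rbinom( n = 100, size = 20, prob = 1/6 )  # simulate 100 of my dice experiments
table( skulls )                                     # count how often each number of skulls occurred
mean( skulls )                                      # should be close to 20 * (1/6), i.e. about 3.3
```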
## 9.5 The normal distribution
While the binomial distribution is conceptually the simplest distribution to understand, it’s not the most important one. That particular honour goes to the normal distribution, which is also referred to as “the bell curve” or a “Gaussian distribution”. A normal distribution is described using two parameters, the mean of the distribution \(\mu\) and the standard deviation of the distribution \(\sigma\). The notation that we sometimes use to say that a variable \(X\) is normally distributed is as follows: \[ X \sim \mbox{Normal}(\mu,\sigma) \] Of course, that’s just notation. It doesn’t tell us anything interesting about the normal distribution itself. As was the case with the binomial distribution, I have included the formula for the normal distribution in this book, because I think it’s important enough that everyone who learns statistics should at least look at it, but since this is an introductory text I don’t want to focus on it, so I’ve tucked it away in Table 9.2. Similarly, the R functions for the normal distribution are `dnorm()` , `pnorm()` , `qnorm()` and `rnorm()` . However, they behave in pretty much exactly the same way as the corresponding functions for the binomial distribution, so there’s not a lot that you need to know. The only thing that I should point out is that the argument names for the parameters are `mean` and `sd` . In pretty much every other respect, there’s nothing else to add.
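Just to show that the argument names really are `mean` and `sd`, here are a few example calls; the numbers are arbitrary ones I’ve picked for illustration:

```
dnorm( x = 1, mean = 0, sd = 1 )       # density of a standard normal evaluated at x = 1
pnorm( q = 0, mean = 0, sd = 1 )       # P(X <= 0) for a standard normal: exactly 0.5
qnorm( p = 0.975, mean = 0, sd = 1 )   # the 97.5th percentile: about 1.96
rnorm( n = 3, mean = 100, sd = 15 )    # three random draws from a normal with mean 100, sd 15
```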
Instead of focusing on the maths, let’s try to get a sense for what it means for a variable to be normally distributed. To that end, have a look at Figure 9.6, which plots a normal distribution with mean \(\mu = 0\) and standard deviation \(\sigma = 1\). You can see where the name “bell curve” comes from: it looks a bit like a bell. Notice that, unlike the plots that I drew to illustrate the binomial distribution, the picture of the normal distribution in Figure 9.6 shows a smooth curve instead of “histogram-like” bars. This isn’t an arbitrary choice: the normal distribution is continuous, whereas the binomial is discrete. For instance, in the die rolling example from the last section, it was possible to get 3 skulls or 4 skulls, but impossible to get 3.9 skulls. The figures that I drew in the previous section reflected this fact: in Figure 9.3, for instance, there’s a bar located at \(X=3\) and another one at \(X=4\), but there’s nothing in between. Continuous quantities don’t have this constraint. For instance, suppose we’re talking about the weather. The temperature on a pleasant Spring day could be 23 degrees, 24 degrees, 23.9 degrees, or anything in between since temperature is a continuous variable, and so a normal distribution might be quite appropriate for describing Spring temperatures.146
With this in mind, let’s see if we can’t get an intuition for how the normal distribution works. Firstly, let’s have a look at what happens when we play around with the parameters of the distribution. To that end, Figure 9.7 plots normal distributions that have different means, but have the same standard deviation. As you might expect, all of these distributions have the same “width”. The only difference between them is that they’ve been shifted to the left or to the right. In every other respect they’re identical. In contrast, if we increase the standard deviation while keeping the mean constant, the peak of the distribution stays in the same place, but the distribution gets wider, as you can see in Figure 9.8. Notice, though, that when we widen the distribution, the height of the peak shrinks. This has to happen: in the same way that the heights of the bars that we used to draw a discrete binomial distribution have to sum to 1, the total area under the curve for the normal distribution must equal 1.
Before moving on, I want to point out one important characteristic of the normal distribution. Irrespective of what the actual mean and standard deviation are, 68.3% of the area falls within 1 standard deviation of the mean. Similarly, 95.4% of the distribution falls within 2 standard deviations of the mean, and 99.7% of the distribution is within 3 standard deviations. This idea is illustrated in Figures 9.9 and 9.10.
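You don’t need to memorise these percentages, because you can always get R to recompute them with `pnorm()` whenever you need them:

```
pnorm( q = 1 ) - pnorm( q = -1 )   # area within 1 standard deviation of the mean: about 0.683
pnorm( q = 2 ) - pnorm( q = -2 )   # within 2 standard deviations: about 0.954
pnorm( q = 3 ) - pnorm( q = -3 )   # within 3 standard deviations: about 0.997
```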
### 9.5.1 Probability density
There’s something I’ve been trying to hide throughout my discussion of the normal distribution, something that some introductory textbooks omit completely. They might be right to do so: this “thing” that I’m hiding is weird and counterintuitive even by the admittedly distorted standards that apply in statistics. Fortunately, it’s not something that you need to understand at a deep level in order to do basic statistics: rather, it’s something that starts to become important later on when you move beyond the basics. So, if it doesn’t make complete sense, don’t worry: try to make sure that you follow the gist of it.
Throughout my discussion of the normal distribution, there’s been one or two things that don’t quite make sense. Perhaps you noticed that the \(y\)-axis in these figures is labelled “Probability Density” rather than “Probability”. Maybe you noticed that I used \(p(X)\) instead of \(P(X)\) when giving the formula for the normal distribution. Maybe you’re wondering why R uses the “d” prefix for functions like `dnorm()` . And maybe, just maybe, you’ve been playing around with the `dnorm()` function, and you accidentally typed in a command like this:
```
dnorm( x = 1, mean = 1, sd = 0.1 )
```
`## [1] 3.989423` And if you’ve done the last part, you’re probably very confused. I’ve asked R to calculate the probability that `x = 1` , for a normally distributed variable with `mean = 1` and standard deviation `sd = 0.1` ; and it tells me that the probability is 3.99. But, as we discussed earlier, probabilities can’t be larger than 1. So either I’ve made a mistake, or that’s not a probability.
As it turns out, the second answer is correct. What we’ve calculated here isn’t actually a probability: it’s something else. To understand what that something is, you have to spend a little time thinking about what it really means to say that \(X\) is a continuous variable. Let’s say we’re talking about the temperature outside. The thermometer tells me it’s 23 degrees, but I know that’s not really true. It’s not exactly 23 degrees. Maybe it’s 23.1 degrees, I think to myself. But I know that that’s not really true either, because it might actually be 23.09 degrees. But, I know that… well, you get the idea. The tricky thing with genuinely continuous quantities is that you never really know exactly what they are.
Now think about what this implies when we talk about probabilities. Suppose that tomorrow’s maximum temperature is sampled from a normal distribution with mean 23 and standard deviation 1. What’s the probability that the temperature will be exactly 23 degrees? The answer is “zero”, or possibly, “a number so close to zero that it might as well be zero”. Why is this? It’s like trying to throw a dart at an infinitely small dart board: no matter how good your aim, you’ll never hit it. In real life you’ll never get a value of exactly 23. It’ll always be something like 23.1 or 22.99998 or something. In other words, it’s completely meaningless to talk about the probability that the temperature is exactly 23 degrees. However, in everyday language, if I told you that it was 23 degrees outside and it turned out to be 22.9998 degrees, you probably wouldn’t call me a liar. Because in everyday language, “23 degrees” usually means something like “somewhere between 22.5 and 23.5 degrees”. And while it doesn’t feel very meaningful to ask about the probability that the temperature is exactly 23 degrees, it does seem sensible to ask about the probability that the temperature lies between 22.5 and 23.5, or between 20 and 30, or any other range of temperatures.
The point of this discussion is to make clear that, when we’re talking about continuous distributions, it’s not meaningful to talk about the probability of a specific value. However, what we can talk about is the probability that the value lies within a particular range of values. To find out the probability associated with a particular range, what you need to do is calculate the “area under the curve”. We’ve seen this concept already: in Figure 9.9, the shaded areas shown depict genuine probabilities (e.g., in the left hand panel of Figure 9.9 it shows the probability of observing a value that falls within 1 standard deviation of the mean).
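To connect this back to R: if tomorrow’s maximum temperature really were drawn from a normal distribution with mean 23 and standard deviation 1, then the probability that it lies between 22.5 and 23.5 degrees is the area under the curve between those two values, which we can get from `pnorm()`:

```
pnorm( q = 23.5, mean = 23, sd = 1 ) - pnorm( q = 22.5, mean = 23, sd = 1 )
```

`## [1] 0.3829249`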
Okay, so that explains part of the story. I’ve explained a little bit about how continuous probability distributions should be interpreted (i.e., area under the curve is the key thing), but I haven’t explained what the `dnorm()` function actually calculates. Equivalently, what does the formula for \(p(x)\) that I described earlier actually mean? Obviously, \(p(x)\) doesn’t describe a probability, but what is it? The name for this quantity \(p(x)\) is a probability density, and in terms of the plots we’ve been drawing, it corresponds to the height of the curve. The densities aren’t meaningful in and of themselves: but they’re “rigged” to ensure that the area under the curve is always interpretable as genuine probabilities. To be honest, that’s about as much as you really need to know for now.147
## 9.6 Other useful distributions
The normal distribution is the distribution that statistics makes most use of (for reasons to be discussed shortly), and the binomial distribution is a very useful one for lots of purposes. But the world of statistics is filled with probability distributions, some of which we’ll run into in passing. In particular, the three that will appear in this book are the \(t\) distribution, the \(\chi^2\) distribution and the \(F\) distribution. I won’t give formulas for any of these, or talk about them in too much detail, but I will show you some pictures.
* The \(t\) distribution is a continuous distribution that looks very similar to a normal distribution, but has heavier tails: see Figure 9.11. This distribution tends to arise in situations where you think that the data actually follow a normal distribution, but you don’t know the mean or standard deviation. As you might expect, the relevant R functions are
`dt()` , `pt()` , `qt()` and `rt()` , and we’ll run into this distribution again in Chapter 13.
* The \(\chi^2\) distribution is another distribution that turns up in lots of different places. The situation in which we’ll see it is when doing categorical data analysis (Chapter 12), but it’s one of those things that actually pops up all over the place. When you dig into the maths (and who doesn’t love doing that?), it turns out that the main reason why the \(\chi^2\) distribution turns up all over the place is that, if you have a bunch of variables that are normally distributed, square their values and then add them up (a procedure referred to as taking a “sum of squares”), this sum has a \(\chi^2\) distribution. You’d be amazed how often this fact turns out to be useful. Anyway, here’s what a \(\chi^2\) distribution looks like: Figure 9.12. Once again, the R commands for this one are pretty predictable:
`dchisq()` , `pchisq()` , `qchisq()` , `rchisq()` .
* The \(F\) distribution looks a bit like a \(\chi^2\) distribution, and it arises whenever you need to compare two \(\chi^2\) distributions to one another. Admittedly, this doesn’t exactly sound like something that any sane person would want to do, but it turns out to be very important in real world data analysis. Remember when I said that \(\chi^2\) turns out to be the key distribution when we’re taking a “sum of squares”? Well, what that means is if you want to compare two different “sums of squares”, you’re probably talking about something that has an \(F\) distribution. Of course, as yet I still haven’t given you an example of anything that involves a sum of squares, but I will… in Chapter 14. And that’s where we’ll run into the \(F\) distribution. Oh, and here’s a picture: Figure 9.13. And of course we can get R to do things with \(F\) distributions just by using the commands
`df()` , `pf()` , `qf()` and `rf()` .

Because these distributions are all tightly related to the normal distribution and to each other, and because they will turn out to be the important distributions when doing inferential statistics later in this book, I think it’s useful to do a little demonstration using R, just to “convince ourselves” that these distributions really are related to each other in the way that they’re supposed to be. First, we’ll use the `rnorm()` function to generate 1000 normally-distributed observations:
```
normal.a <- rnorm( n=1000, mean=0, sd=1 )
print(head(normal.a))
```
```
## [1] 0.002520116 -1.759249354 -0.055968257 0.879791922 1.166488549
## [6] 0.789723465
```
So the `normal.a` variable contains 1000 numbers that are normally distributed, and have mean 0 and standard deviation 1; the output above shows just the first six of them. Note that, because the default parameters of the `rnorm()` function are `mean=0` and `sd=1` , I could have shortened the command to `rnorm( n=1000 )` . In any case, what we can do is use the `hist()` function to draw a histogram of the data, like so: `hist( normal.a )`
If you do this, you should see something similar to Figure 9.14. Your plot won’t look quite as pretty as the one in the figure, of course, because I’ve played around with all the formatting (see Chapter 6), and I’ve also plotted the true distribution of the data as a solid black line (i.e., a normal distribution with mean 0 and standard deviation 1) so that you can compare the data that we just generated to the true distribution.
In the previous example all I did was generate lots of normally distributed observations using `rnorm()` and then compare those to the true probability distribution in the figure (using `dnorm()` to generate the black line in the figure, but I didn’t show the commands for that). Now let’s try something trickier. We’ll try to generate some observations that follow a chi-square distribution with 3 degrees of freedom, but instead of using `rchisq()` , we’ll start with variables that are normally distributed, and see if we can exploit the known relationships between normal and chi-square distributions to do the work. As I mentioned earlier, a chi-square distribution with \(k\) degrees of freedom is what you get when you take \(k\) normally-distributed variables (with mean 0 and standard deviation 1), square them, and add them up. Since we want a chi-square distribution with 3 degrees of freedom, we’ll need to supplement our `normal.a` data with two more sets of normally-distributed observations, imaginatively named `normal.b` and `normal.c` :
```
normal.b <- rnorm( n=1000 ) # another set of normally distributed data
normal.c <- rnorm( n=1000 ) # and another!
```
Now that we’ve done that, the theory says we should square these and add them together, like this
```
chi.sq.3 <- (normal.a)^2 + (normal.b)^2 + (normal.c)^2
```
and the resulting `chi.sq.3` variable should contain 1000 observations that follow a chi-square distribution with 3 degrees of freedom. You can use the `hist()` function to have a look at these observations yourself, using a command like this, `hist( chi.sq.3 )` and you should obtain a result that looks pretty similar to the chi-square plot in Figure 9.14. Once again, the plot that I’ve drawn is a little fancier: in addition to the histogram of `chi.sq.3` , I’ve also plotted a chi-square distribution with 3 degrees of freedom. It’s pretty clear that – even though I used `rnorm()` to do all the work rather than `rchisq()` – the observations stored in the `chi.sq.3` variable really do follow a chi-square distribution. Admittedly, this probably doesn’t seem all that interesting right now, but later on when we start encountering the chi-square distribution in Chapter 12, it will be useful to understand the fact that these distributions are related to one another.
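If you’d like a slightly more quantitative check than eyeballing the histogram, you can compare the sample mean and variance of `chi.sq.3` to the theoretical values; a chi-square distribution with \(k\) degrees of freedom has mean \(k\) and variance \(2k\):

```
mean( chi.sq.3 )   # should be close to 3
var( chi.sq.3 )    # should be close to 6
```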
We can extend this demonstration to the \(t\) distribution and the \(F\) distribution. Earlier, I implied that the \(t\) distribution is related to the normal distribution when the standard deviation is unknown. That’s certainly true, and that’s what we’ll see later on in Chapter 13, but there’s a somewhat more precise relationship between the normal, chi-square and \(t\) distributions. Suppose we “scale” our chi-square data by dividing it by the degrees of freedom, like so
```
scaled.chi.sq.3 <- chi.sq.3 / 3
```
We then take a set of normally distributed variables and divide them by (the square root of) our scaled chi-square variable which had \(df=3\), and the result is a \(t\) distribution with 3 degrees of freedom. If we plot the histogram of `t.3` , we end up with something that looks very similar to the t distribution in Figure 9.14.
```
normal.d <- rnorm( n=1000 ) # yet another set of normally distributed data
t.3 <- normal.d / sqrt( scaled.chi.sq.3 ) # divide by square root of scaled chi-square to get t
hist( t.3 )
```
Similarly, we can obtain an \(F\) distribution by taking the ratio between two scaled chi-square distributions. Suppose, for instance, we wanted to generate data from an \(F\) distribution with 3 and 20 degrees of freedom. We could do this using `rf()` , but we could also do the same thing by generating two chi-square variables, one with 3 degrees of freedom, and the other with 20 degrees of freedom. As the example with `chi.sq.3` illustrates, we can actually do this using `rnorm()` if we really want to, but this time I’ll take a short cut:
```
chi.sq.20 <- rchisq( 1000, 20) # generate chi square data with df = 20...
scaled.chi.sq.20 <- chi.sq.20 / 20 # scale the chi square variable...
F.3.20 <- scaled.chi.sq.3 / scaled.chi.sq.20 # take the ratio of the two chi squares...
hist( F.3.20 ) # ... and draw a picture
```
The resulting `F.3.20` variable does in fact store values that follow an \(F\) distribution with 3 and 20 degrees of freedom. This is illustrated in Figure 9.14, which plots the histogram of the observations stored in `F.3.20` against the true \(F\) distribution with \(df_1 = 3\) and \(df_2 = 20\). Again, they match.
Okay, time to wrap this section up. We’ve seen three new distributions: \(\chi^2\), \(t\) and \(F\). They’re all continuous distributions, and they’re all closely related to the normal distribution. I’ve talked a little bit about the precise nature of this relationship, and shown you some R commands that illustrate this relationship. The key thing for our purposes, however, is not that you have a deep understanding of all these different distributions, nor that you remember the precise relationships between them. The main thing is that you grasp the basic idea that these distributions are all deeply related to one another, and to the normal distribution. Later on in this book, we’re going to run into data that are normally distributed, or at least assumed to be normally distributed. What I want you to understand right now is that, if you make the assumption that your data are normally distributed, you shouldn’t be surprised to see \(\chi^2\), \(t\) and \(F\) distributions popping up all over the place when you start trying to do your data analysis.
## 9.7 Summary
In this chapter we’ve talked about probability. We’ve talked about what probability means, and why statisticians can’t agree on what it means. We talked about the rules that probabilities have to obey. And we introduced the idea of a probability distribution, and spent a good chunk of the chapter talking about some of the more important probability distributions that statisticians work with. The section by section breakdown looks like this:
* Probability theory versus statistics (Section 9.1)
* Frequentist versus Bayesian views of probability (Section 9.2)
* Basics of probability theory (Section 9.3)
* Binomial distribution (Section 9.4), normal distribution (Section 9.5), and others (Section 9.6)
As you’d expect, my coverage is by no means exhaustive. Probability theory is a large branch of mathematics in its own right, entirely separate from its application to statistics and data analysis. As such, there are thousands of books written on the subject and universities generally offer multiple classes devoted entirely to probability theory. Even the “simpler” task of documenting standard probability distributions is a big topic. I’ve described five standard probability distributions in this chapter, but sitting on my bookshelf I have a 45-chapter book called “Statistical Distributions” Evans, Hastings, and Peacock (2011) that lists a lot more than that. Fortunately for you, very little of this is necessary. You’re unlikely to need to know dozens of statistical distributions when you go out and do real world data analysis, and you definitely won’t need them for this book, but it never hurts to know that there are other possibilities out there.
Picking up on that last point, there’s a sense in which this whole chapter is something of a digression. Many undergraduate psychology classes on statistics skim over this content very quickly (I know mine did), and even the more advanced classes will often “forget” to revisit the basic foundations of the field. Most academic psychologists would not know the difference between probability and density, and until recently very few would have been aware of the difference between Bayesian and frequentist probability. However, I think it’s important to understand these things before moving onto the applications. For example, there are a lot of rules about what you’re “allowed” to say when doing statistical inference, and many of these can seem arbitrary and weird. However, they start to make sense if you understand that there is this Bayesian/frequentist distinction. Similarly, in Chapter 13 we’re going to talk about something called the \(t\)-test, and if you really want to have a grasp of the mechanics of the \(t\)-test it really helps to have a sense of what a \(t\)-distribution actually looks like. You get the idea, I hope.
This doesn’t mean that frequentists can’t make hypothetical statements, of course; it’s just that if you want to make a statement about probability, then it must be possible to redescribe that statement in terms of a sequence of potentially observable events, and the relative frequencies of different outcomes that appear within that sequence.↩
Note that the term “success” is pretty arbitrary, and doesn’t actually imply that the outcome is something to be desired. If \(\theta\) referred to the probability that any one passenger gets injured in a bus crash, I’d still call it the success probability, but that doesn’t mean I want people to get hurt in bus crashes!↩
Since computers are deterministic machines, they can’t actually produce truly random behaviour. Instead, what they do is take advantage of various mathematical functions that share a lot of similarities with true randomness. What this means is that any random numbers generated on a computer are pseudorandom, and the quality of those numbers depends on the specific method used. By default R uses the “Mersenne twister” method. In any case, you can find out more by typing `?Random`, but as usual the R help files are fairly dense.↩
In practice, the normal distribution is so handy that people tend to use it even when the variable isn’t actually continuous. As long as there are enough categories (e.g., Likert scale responses to a questionnaire), it’s pretty standard practice to use the normal distribution as an approximation. This works out much better in practice than you’d think.↩
For those readers who know a little calculus, I’ll give a slightly more precise explanation. In the same way that probabilities are non-negative numbers that must sum to 1, probability densities are non-negative numbers that must integrate to 1 (where the integral is taken across all possible values of \(X\)). To calculate the probability that \(X\) falls between \(a\) and \(b\) we calculate the definite integral of the density function over the corresponding range, \(\int_a^b p(x) \ dx\). If you don’t remember or never learned calculus, don’t worry about this. It’s not needed for this book.↩
# Chapter 10 Estimating unknown quantities from a sample
At the start of the last chapter I highlighted the critical distinction between descriptive statistics and inferential statistics. As discussed in Chapter 5, the role of descriptive statistics is to concisely summarise what we do know. In contrast, the purpose of inferential statistics is to “learn what we do not know from what we do”. Now that we have a foundation in probability theory, we are in a good position to think about the problem of statistical inference. What kinds of things would we like to learn about? And how do we learn them? These are the questions that lie at the heart of inferential statistics, and they are traditionally divided into two “big ideas”: estimation and hypothesis testing. The goal in this chapter is to introduce the first of these big ideas, estimation theory, but I’m going to witter on about sampling theory first because estimation theory doesn’t make sense until you understand sampling. As a consequence, this chapter divides naturally into two parts: Sections 10.1 through 10.3 are focused on sampling theory, and Sections 10.4 and 10.5 make use of sampling theory to discuss how statisticians think about estimation.
## 10.1 Samples, populations and sampling
In the prelude to this part I discussed the riddle of induction, and highlighted the fact that all learning requires you to make assumptions. Accepting that this is true, our first task is to come up with some fairly general assumptions about data that make sense. This is where sampling theory comes in. If probability theory is the foundation upon which all statistical theory builds, sampling theory is the frame around which you can build the rest of the house. Sampling theory plays a huge role in specifying the assumptions upon which your statistical inferences rely. And in order to talk about “making inferences” the way statisticians think about it, we need to be a bit more explicit about what it is that we’re drawing inferences from (the sample) and what it is that we’re drawing inferences about (the population).
In almost every situation of interest, what we have available to us as researchers is a sample of data. We might have run an experiment with some number of participants; a polling company might have phoned some number of people to ask questions about voting intentions; etc. Regardless: the data set available to us is finite, and incomplete. We can’t possibly get every person in the world to do our experiment; a polling company doesn’t have the time or the money to ring up every voter in the country; etc. In our earlier discussion of descriptive statistics (Chapter 5), this sample was the only thing we were interested in. Our only goal was to find ways of describing, summarising and graphing that sample. This is about to change.
### 10.1.1 Defining a population
A sample is a concrete thing. You can open up a data file, and there’s the data from your sample. A population, on the other hand, is a more abstract idea. It refers to the set of all possible people, or all possible observations, that you want to draw conclusions about, and is generally much bigger than the sample. In an ideal world, the researcher would begin the study with a clear idea of what the population of interest is, since the process of designing a study and testing hypotheses about the data that it produces does depend on the population about which you want to make statements. However, that doesn’t always happen in practice: usually the researcher has a fairly vague idea of what the population is and designs the study as best he/she can on that basis.
Sometimes it’s easy to state the population of interest. For instance, in the “polling company” example that opened the chapter, the population consisted of all voters enrolled at the time of the study – millions of people. The sample was a set of 1000 people who all belong to that population. In most cases, though, the situation is much less simple. In a typical psychological experiment, determining the population of interest is a bit more complicated. Suppose I run an experiment using 100 undergraduate students as my participants. My goal, as a cognitive scientist, is to try to learn something about how the mind works. So, which of the following would count as “the population”:
* All of the undergraduate psychology students at the University of Adelaide?
* Undergraduate psychology students in general, anywhere in the world?
* Australians currently living?
* Australians of similar ages to my sample?
* Anyone currently alive?
* Any human being, past, present or future?
* Any biological organism with a sufficient degree of intelligence operating in a terrestrial environment?
* Any intelligent being?
Each of these defines a real group of mind-possessing entities, all of which might be of interest to me as a cognitive scientist, and it’s not at all clear which one ought to be the true population of interest. As another example, consider the Wellesley-Croker game that we discussed in the prelude. The sample here is a specific sequence of 12 wins and 0 losses for Wellesley. What is the population?
* All outcomes until Wellesley and Croker arrived at their destination?
* All outcomes if Wellesley and Croker had played the game for the rest of their lives?
* All outcomes if Wellesley and Croker lived forever and played the game until the world ran out of hills?
* All outcomes if we created an infinite set of parallel universes and the Wellesley/Croker pair made guesses about the same 12 hills in each universe?
Again, it’s not obvious what the population is.
### 10.1.2 Simple random samples
Irrespective of how I define the population, the critical point is that the sample is a subset of the population, and our goal is to use our knowledge of the sample to draw inferences about the properties of the population. The relationship between the two depends on the procedure by which the sample was selected. This procedure is referred to as a sampling method, and it is important to understand why it matters.
To keep things simple, let’s imagine that we have a bag containing 10 chips. Each chip has a unique letter printed on it, so we can distinguish between the 10 chips. The chips come in two colours, black and white. This set of chips is the population of interest, and it is depicted graphically on the left of Figure 10.1. As you can see from looking at the picture, there are 4 black chips and 6 white chips, but of course in real life we wouldn’t know that unless we looked in the bag. Now imagine you run the following “experiment”: you shake up the bag, close your eyes, and pull out 4 chips without putting any of them back into the bag. First out comes the \(a\) chip (black), then the \(c\) chip (white), then \(j\) (white) and then finally \(b\) (black). If you wanted, you could then put all the chips back in the bag and repeat the experiment, as depicted on the right hand side of Figure 10.1. Each time you get different results, but the procedure is identical in each case. Because the same procedure can lead to different results each time, we refer to it as a random process.148 However, because we shook the bag before pulling any chips out, it seems reasonable to think that every chip has the same chance of being selected. A procedure in which every member of the population has the same chance of being selected is called a simple random sample. The fact that we did not put the chips back in the bag after pulling them out means that you can’t observe the same thing twice, and in such cases the observations are said to have been sampled without replacement.
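Here’s a toy sketch of this “experiment” in R. The only colours the description pins down are that \(a\) and \(b\) are black while \(c\) and \(j\) are white; the colours I’ve given to the remaining letters are my own invention, chosen so that there are 4 black and 6 white chips in total:

```
# a made-up population of 10 chips: 4 black, 6 white, each with a unique letter
chips <- c(a = "black", b = "black", c = "white", d = "white", e = "black",
           f = "white", g = "white", h = "black", i = "white", j = "white")
# a simple random sample of 4 chips, drawn without replacement
sample( chips, size = 4, replace = FALSE )
```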
To help make sure you understand the importance of the sampling procedure, consider an alternative way in which the experiment could have been run. Suppose that my 5-year old son had opened the bag, and decided to pull out four black chips without putting any of them back in the bag. This biased sampling scheme is depicted in Figure 10.2. Now consider the evidentiary value of seeing 4 black chips and 0 white chips. Clearly, it depends on the sampling scheme, does it not? If you know that the sampling scheme is biased to select only black chips, then a sample that consists of only black chips doesn’t tell you very much about the population! For this reason, statisticians really like it when a data set can be considered a simple random sample, because it makes the data analysis much easier.
A third procedure is worth mentioning. This time around we close our eyes, shake the bag, and pull out a chip. This time, however, we record the observation and then put the chip back in the bag. Again we close our eyes, shake the bag, and pull out a chip. We then repeat this procedure until we have 4 chips. Data sets generated in this way are still simple random samples, but because we put the chips back in the bag immediately after drawing them it is referred to as a sample with replacement. The difference between this situation and the first one is that it is possible to observe the same population member multiple times, as illustrated in Figure 10.3.
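Continuing the toy sketch above (it reuses the made-up `chips` vector), sampling with replacement just means changing one argument:

```
# sampling WITH replacement: the same chip can now show up more than once
sample( chips, size = 4, replace = TRUE )
```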
In my experience, most psychology experiments tend to be sampling without replacement, because the same person is not allowed to participate in the experiment twice. However, most statistical theory is based on the assumption that the data arise from a simple random sample with replacement. In real life, this very rarely matters. If the population of interest is large (e.g., has more than 10 entities!) the difference between sampling with- and without- replacement is too small to be concerned with. The difference between simple random samples and biased samples, on the other hand, is not such an easy thing to dismiss.
### 10.1.3 Most samples are not simple random samples
As you can see from looking at the list of possible populations that I showed above, it is almost impossible to obtain a simple random sample from most populations of interest. When I run experiments, I’d consider it a minor miracle if my participants turned out to be a random sampling of the undergraduate psychology students at Adelaide university, even though this is by far the narrowest population that I might want to generalise to. A thorough discussion of other types of sampling schemes is beyond the scope of this book, but to give you a sense of what’s out there I’ll list a few of the more important ones:
* Stratified sampling. Suppose your population is (or can be) divided into several different subpopulations, or strata. Perhaps you’re running a study at several different sites, for example. Instead of trying to sample randomly from the population as a whole, you instead try to collect a separate random sample from each of the strata. Stratified sampling is sometimes easier to do than simple random sampling, especially when the population is already divided into the distinct strata. It can also be more efficient than simple random sampling, especially when some of the subpopulations are rare. For instance, when studying schizophrenia it would be much better to divide the population into two149 strata (schizophrenic and not-schizophrenic), and then sample an equal number of people from each group. If you selected people randomly, you would get so few schizophrenic people in the sample that your study would be useless. This specific kind of stratified sampling is referred to as oversampling because it makes a deliberate attempt to over-represent rare groups.
* Snowball sampling is a technique that is especially useful when sampling from a “hidden” or hard to access population, and is especially common in social sciences. For instance, suppose the researchers want to conduct an opinion poll among transgender people. The research team might only have contact details for a few trans folks, so the survey starts by asking them to participate (stage 1). At the end of the survey, the participants are asked to provide contact details for other people who might want to participate. In stage 2, those new contacts are surveyed. The process continues until the researchers have sufficient data. The big advantage to snowball sampling is that it gets you data in situations that might otherwise be impossible to get any. On the statistical side, the main disadvantage is that the sample is highly non-random, and non-random in ways that are difficult to address. On the real life side, the disadvantage is that the procedure can be unethical if not handled well, because hidden populations are often hidden for a reason. I chose transgender people as an example here to highlight this: if you weren’t careful you might end up outing people who don’t want to be outed (very, very bad form), and even if you don’t make that mistake it can still be intrusive to use people’s social networks to study them. It’s certainly very hard to get people’s informed consent before contacting them, yet in many cases the simple act of contacting them and saying “hey we want to study you” can be hurtful. Social networks are complex things, and just because you can use them to get data doesn’t always mean you should.
* Convenience sampling is more or less what it sounds like. The samples are chosen in a way that is convenient to the researcher, and not selected at random from the population of interest. Snowball sampling is one type of convenience sampling, but there are many others. A common example in psychology is studies that rely on undergraduate psychology students. These samples are generally non-random in two respects: firstly, reliance on undergraduate psychology students automatically means that your data are restricted to a single subpopulation. Secondly, the students usually get to pick which studies they participate in, so the sample is a self-selected subset of psychology students, not a randomly selected subset. In real life, most studies are convenience samples of one form or another. This is sometimes a severe limitation, but not always.
### 10.1.4 How much does it matter if you don’t have a simple random sample?
Okay, so real world data collection tends not to involve nice simple random samples. Does that matter? A little thought should make it clear to you that it can matter if your data are not a simple random sample: just think about the difference between Figures 10.1 and 10.2. However, it’s not quite as bad as it sounds. Some types of biased samples are entirely unproblematic. For instance, when using a stratified sampling technique you actually know what the bias is because you created it deliberately, often to increase the effectiveness of your study, and there are statistical techniques that you can use to adjust for the biases you’ve introduced (not covered in this book!). So in those situations it’s not a problem.
More generally though, it’s important to remember that random sampling is a means to an end, not the end in itself. Let’s assume you’ve relied on a convenience sample, and as such you can assume it’s biased. A bias in your sampling method is only a problem if it causes you to draw the wrong conclusions. When viewed from that perspective, I’d argue that we don’t need the sample to be randomly generated in every respect: we only need it to be random with respect to the psychologically-relevant phenomenon of interest. Suppose I’m doing a study looking at working memory capacity. In study 1, I actually have the ability to sample randomly from all human beings currently alive, with one exception: I can only sample people born on a Monday. In study 2, I am able to sample randomly from the Australian population. I want to generalise my results to the population of all living humans. Which study is better? The answer, obviously, is study 1. Why? Because we have no reason to think that being “born on a Monday” has any interesting relationship to working memory capacity. In contrast, I can think of several reasons why “being Australian” might matter. Australia is a wealthy, industrialised country with a very well-developed education system. People growing up in that system will have had life experiences much more similar to the experiences of the people who designed the tests for working memory capacity. This shared experience might easily translate into similar beliefs about how to “take a test”, a shared assumption about how psychological experimentation works, and so on. These things might actually matter. For instance, “test taking” style might have taught the Australian participants how to direct their attention exclusively towards fairly abstract test materials, relative to people who haven’t grown up in a similar environment, leading to a misleading picture of what working memory capacity is.
There are two points hidden in this discussion. Firstly, when designing your own studies, it’s important to think about what population you care about, and try hard to sample in a way that is appropriate to that population. In practice, you’re usually forced to put up with a “sample of convenience” (e.g., psychology lecturers sample psychology students because that’s the least expensive way to collect data, and our coffers aren’t exactly overflowing with gold), but if so you should at least spend some time thinking about what the dangers of this practice might be.
Secondly, if you’re going to criticise someone else’s study because they’ve used a sample of convenience rather than laboriously sampling randomly from the entire human population, at least have the courtesy to offer a specific theory as to how this might have distorted the results. Remember, everyone in science is aware of this issue, and does what they can to alleviate it. Merely pointing out that “the study only included people from group BLAH” is entirely unhelpful, and borders on being insulting to the researchers, who are of course aware of the issue. They just don’t happen to be in possession of the infinite supply of time and money required to construct the perfect sample. In short, if you want to offer a responsible critique of the sampling process, then be helpful. Rehashing the blindingly obvious truisms that I’ve been rambling on about in this section isn’t helpful.
### 10.1.5 Population parameters and sample statistics
Okay. Setting aside the thorny methodological issues associated with obtaining a random sample and my rather unfortunate tendency to rant about lazy methodological criticism, let’s consider a slightly different issue. Up to this point we have been talking about populations the way a scientist might. To a psychologist, a population might be a group of people. To an ecologist, a population might be a group of bears. In most cases the populations that scientists care about are concrete things that actually exist in the real world. Statisticians, however, are a funny lot. On the one hand, they are interested in real world data and real science in the same way that scientists are. On the other hand, they also operate in the realm of pure abstraction in the way that mathematicians do. As a consequence, statistical theory tends to be a bit abstract in how a population is defined. In much the same way that psychological researchers operationalise our abstract theoretical ideas in terms of concrete measurements (Section 2.1), statisticians operationalise the concept of a “population” in terms of mathematical objects that they know how to work with. You’ve already come across these objects in Chapter 9: they’re called probability distributions.
The idea is quite simple. Let’s say we’re talking about IQ scores. To a psychologist, the population of interest is a group of actual humans who have IQ scores. A statistician “simplifies” this by operationally defining the population as the probability distribution depicted in Figure 10.4a. IQ tests are designed so that the average IQ is 100, the standard deviation of IQ scores is 15, and the distribution of IQ scores is normal. These values are referred to as the population parameters because they are characteristics of the entire population. That is, we say that the population mean \(\mu\) is 100, and the population standard deviation \(\sigma\) is 15.
Now suppose I run an experiment. I select 100 people at random and administer an IQ test, giving me a simple random sample from the population. My sample would consist of a collection of numbers like this:
```
106 101 98 80 74 ... 107 72 100
```
Each of these IQ scores is sampled from a normal distribution with mean 100 and standard deviation 15. So if I plot a histogram of the sample, I get something like the one shown in Figure 10.4b. As you can see, the histogram is roughly the right shape, but it’s a very crude approximation to the true population distribution shown in Figure 10.4a. When I calculate the mean of my sample, I get a number that is fairly close to the population mean 100 but not identical. In this case, it turns out that the people in my sample have a mean IQ of 98.5, and the standard deviation of their IQ scores is 15.9. These sample statistics are properties of my data set, and although they are fairly similar to the true population values, they are not the same. In general, sample statistics are the things you can calculate from your data set, and the population parameters are the things you want to learn about. Later on in this chapter I’ll talk about how you can estimate population parameters using your sample statistics (Section 10.4) and how to work out how confident you are in your estimates (Section 10.5), but before we get to that there are a few more ideas in sampling theory that you need to know about.
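If you’d like to simulate an \(N=100\) experiment like this yourself, here’s a minimal sketch (your numbers will differ from the ones quoted above, because every random sample is different):

```
IQ.100 <- round( rnorm( n = 100, mean = 100, sd = 15 ) )  # simulate 100 IQ scores
mean( IQ.100 )   # sample mean: close to 100, but not exactly 100
sd( IQ.100 )     # sample standard deviation: close to 15, but not exactly 15
```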
## 10.2 The law of large numbers
In the previous section I showed you the results of one fictitious IQ experiment with a sample size of \(N=100\). The results were somewhat encouraging: the true population mean is 100, and the sample mean of 98.5 is a pretty reasonable approximation to it. In many scientific studies that level of precision is perfectly acceptable, but in other situations you need to be a lot more precise. If we want our sample statistics to be much closer to the population parameters, what can we do about it?
The obvious answer is to collect more data. Suppose that we ran a much larger experiment, this time measuring the IQs of 10,000 people. We can simulate the results of this experiment using R. In Section 9.5 I introduced the `rnorm()` function, which generates random numbers sampled from a normal distribution. For an experiment with a sample size of `n = 10000` , and a population with `mean = 100` and `sd = 15` , R produces our fake IQ data using these commands:
```
IQ <- rnorm(n = 10000, mean = 100, sd = 15) # generate IQ scores
IQ <- round(IQ) # IQs are whole numbers!
print(head(IQ))
```
```
## [1] 113 104 64 89 101 100
```
I can compute the mean IQ using the command `mean(IQ)` and the standard deviation using the command `sd(IQ)`, and I can draw a histogram using `hist()`. The histogram of this much larger sample is shown in Figure 10.4c. Even a moment’s inspection makes clear that the larger sample is a much better approximation to the true population distribution than the smaller one. This is reflected in the sample statistics: the mean IQ for the larger sample turns out to be 99.9, and the standard deviation is 15.1. These values are now very close to the true population values.
I feel a bit silly saying this, but the thing I want you to take away from this is that large samples generally give you better information. I feel silly saying it because it’s so bloody obvious that it shouldn’t need to be said. In fact, it’s such an obvious point that when Jacob Bernoulli – one of the founders of probability theory – formalised this idea back in 1713, he was kind of a jerk about it. Here’s how he described the fact that we all share this intuition:
For even the most stupid of men, by some instinct of nature, by himself and without any instruction (which is a remarkable thing), is convinced that the more observations have been made, the less danger there is of wandering from one’s goal Stigler (1986)
Okay, so the passage comes across as a bit condescending (not to mention sexist), but his main point is correct: it really does feel obvious that more data will give you better answers. The question is, why is this so? Not surprisingly, this intuition that we all share turns out to be correct, and statisticians refer to it as the law of large numbers. The law of large numbers is a mathematical law that applies to many different sample statistics, but the simplest way to think about it is as a law about averages. The sample mean is the most obvious example of a statistic that relies on averaging (because that’s what the mean is… an average), so let’s look at that. When applied to the sample mean, what the law of large numbers states is that as the sample gets larger, the sample mean tends to get closer to the true population mean. Or, to say it a little bit more precisely, as the sample size “approaches” infinity (written as \(N \rightarrow \infty\)) the sample mean approaches the population mean (\(\bar{X} \rightarrow \mu\)).150
I don’t intend to subject you to a proof that the law of large numbers is true, but it’s one of the most important tools for statistical theory. The law of large numbers is the thing we can use to justify our belief that collecting more and more data will eventually lead us to the truth. For any particular data set, the sample statistics that we calculate from it will be wrong, but the law of large numbers tells us that if we keep collecting more data those sample statistics will tend to get closer and closer to the true population parameters.
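Here’s a quick sketch of that idea in action (my own example, using an arbitrary seed so it’s reproducible): as the sample size grows, the mean of a simulated IQ sample drifts closer and closer to the true population mean of 100.

```
# the law of large numbers in miniature: bigger samples, better sample means
set.seed(1)
for( n in c(10, 100, 1000, 100000) ) {
  cat( "N =", n, "  sample mean =", mean( rnorm( n, mean = 100, sd = 15 ) ), "\n" )
}
```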
## 10.3 Sampling distributions and the central limit theorem
The law of large numbers is a very powerful tool, but it’s not going to be good enough to answer all our questions. Among other things, all it gives us is a “long run guarantee”. In the long run, if we were somehow able to collect an infinite amount of data, then the law of large numbers guarantees that our sample statistics will be correct. But as John Maynard Keynes famously argued in economics, a long run guarantee is of little use in real life:
[The] long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task, if in tempestuous seasons they can only tell us, that when the storm is long past, the ocean is flat again. Keynes (1923)
As in economics, so too in psychology and statistics. It is not enough to know that we will eventually arrive at the right answer when calculating the sample mean. Knowing that an infinitely large data set will tell me the exact value of the population mean is cold comfort when my actual data set has a sample size of \(N=100\). In real life, then, we must know something about the behaviour of the sample mean when it is calculated from a more modest data set!
### 10.3.1 Sampling distribution of the mean
With this in mind, let’s abandon the idea that our studies will have sample sizes of 10000, and consider a very modest experiment indeed. This time around we’ll sample \(N=5\) people and measure their IQ scores. As before, I can simulate this experiment in R using the `rnorm()` function:
```
> IQ.1 <- round( rnorm(n=5, mean=100, sd=15 ))
> IQ.1
[1] 90 82 94 99 110
```
The mean IQ in this sample turns out to be exactly 95. Not surprisingly, this is much less accurate than the previous experiment. Now imagine that I decided to replicate the experiment. That is, I repeat the procedure as closely as possible: I randomly sample 5 new people and measure their IQ. Again, R allows me to simulate the results of this procedure:
```
> IQ.2 <- round( rnorm(n=5, mean=100, sd=15 ))
> IQ.2
[1] 78 88 111 111 117
```
This time around, the mean IQ in my sample is 101. If I repeat the experiment 10 times I obtain the results shown in the table below, and as you can see the sample mean varies from one replication to the next.
Ten replications of the IQ experiment, each with a sample size of \(N=5\):

| | Person 1 | Person 2 | Person 3 | Person 4 | Person 5 | Sample Mean |
| --- | --- | --- | --- | --- | --- | --- |
| Replication 1 | 90 | 82 | 94 | 99 | 110 | 95.0 |
| Replication 2 | 78 | 88 | 111 | 111 | 117 | 101.0 |
| Replication 3 | 111 | 122 | 91 | 98 | 86 | 101.6 |
| Replication 4 | 98 | 96 | 119 | 99 | 107 | 103.8 |
| Replication 5 | 105 | 113 | 103 | 103 | 98 | 104.4 |
| Replication 6 | 81 | 89 | 93 | 85 | 114 | 92.4 |
| Replication 7 | 100 | 93 | 108 | 98 | 133 | 106.4 |
| Replication 8 | 107 | 100 | 105 | 117 | 85 | 102.8 |
| Replication 9 | 86 | 119 | 108 | 73 | 116 | 100.4 |
| Replication 10 | 95 | 126 | 112 | 120 | 76 | 105.8 |
Now suppose that I decided to keep going in this fashion, replicating this “five IQ scores” experiment over and over again. Every time I replicate the experiment I write down the sample mean. Over time, I’d be amassing a new data set, in which every experiment generates a single data point. The first 10 observations from my data set are the sample means listed in the table above, so my data set starts out like this:
```
95.0 101.0 101.6 103.8 104.4 ...
```
What if I continued like this for 10,000 replications, and then drew a histogram? Using the magical powers of R that’s exactly what I did, and you can see the results in Figure 10.5. As this picture illustrates, the average of 5 IQ scores is usually between 90 and 110. But more importantly, what it highlights is that if we replicate an experiment over and over again, what we end up with is a distribution of sample means! This distribution has a special name in statistics: it’s called the sampling distribution of the mean.
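A sketch of what that simulation might look like is shown below; the book’s actual code isn’t shown, so the details here are my own, but the idea is simply to repeat the five-score experiment 10,000 times and record the mean each time:

```
# simulate 10,000 replications of the "five IQ scores" experiment
sample.means <- replicate( 10000, mean( round( rnorm( n = 5, mean = 100, sd = 15 ) ) ) )
hist( sample.means, xlab = "Sample mean of 5 IQ scores", main = "" )
```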
Sampling distributions are another important theoretical idea in statistics, and they’re crucial for understanding the behaviour of small samples. For instance, when I ran the very first “five IQ scores” experiment, the sample mean turned out to be 95. What the sampling distribution in Figure 10.5 tells us, though, is that the “five IQ scores” experiment is not very accurate. If I repeat the experiment, the sampling distribution tells me that I can expect to see a sample mean anywhere between 80 and 120.
### 10.3.2 Sampling distributions exist for any sample statistic!
One thing to keep in mind when thinking about sampling distributions is that any sample statistic you might care to calculate has a sampling distribution. For example, suppose that each time I replicated the “five IQ scores” experiment I wrote down the largest IQ score in the experiment. This would give me a data set that started out like this:
```
110 117 122 119 113 ...
```
Doing this over and over again would give me a very different sampling distribution, namely the sampling distribution of the maximum. The sampling distribution of the maximum of 5 IQ scores is shown in Figure 10.6. Not surprisingly, if you pick 5 people at random and then find the person with the highest IQ score, they’re going to have an above average IQ. Most of the time you’ll end up with someone whose IQ is measured in the 100 to 140 range.
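The corresponding sketch for the maximum only changes one function (again, this is my own code rather than the book’s):

```
# simulate the sampling distribution of the maximum of 5 IQ scores
sample.maxima <- replicate( 10000, max( round( rnorm( n = 5, mean = 100, sd = 15 ) ) ) )
hist( sample.maxima, xlab = "Largest of 5 IQ scores", main = "" )
```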
### 10.3.3 The central limit theorem
An illustration of how the sampling distribution of the mean depends on sample size (Figures 10.7 to 10.9). In each panel, I generated 10,000 samples of IQ data, and calculated the mean IQ observed within each of these data sets. The histograms in these plots show the distribution of these means (i.e., the sampling distribution of the mean). Each individual IQ score was drawn from a normal distribution with mean 100 and standard deviation 15, which is shown as the solid black line.
At this point I hope you have a pretty good sense of what sampling distributions are, and in particular what the sampling distribution of the mean is. In this section I want to talk about how the sampling distribution of the mean changes as a function of sample size. Intuitively, you already know part of the answer: if you only have a few observations, the sample mean is likely to be quite inaccurate: if you replicate a small experiment and recalculate the mean you’ll get a very different answer. In other words, the sampling distribution is quite wide. If you replicate a large experiment and recalculate the sample mean you’ll probably get the same answer you got last time, so the sampling distribution will be very narrow. You can see this visually in Figures 10.7, 10.8 and 10.9: the bigger the sample size, the narrower the sampling distribution gets. We can quantify this effect by calculating the standard deviation of the sampling distribution, which is referred to as the standard error. The standard error of a statistic is often denoted SE, and since we’re usually interested in the standard error of the sample mean, we often use the acronym SEM. As you can see just by looking at the picture, as the sample size \(N\) increases, the SEM decreases.
Okay, so that’s one part of the story. However, there’s something I’ve been glossing over so far. All my examples up to this point have been based on the “IQ scores” experiments, and because IQ scores are roughly normally distributed, I’ve assumed that the population distribution is normal. What if it isn’t normal? What happens to the sampling distribution of the mean? The remarkable thing is this: no matter what shape your population distribution is, as \(N\) increases the sampling distribution of the mean starts to look more like a normal distribution. To give you a sense of this, I ran some simulations using R. To do this, I started with the “ramped” distribution shown in the histogram in Figure 10.10. As you can see by comparing the triangular shaped histogram to the bell curve plotted by the black line, the population distribution doesn’t look very much like a normal distribution at all. Next, I used R to simulate the results of a large number of experiments. In each experiment I took \(N=2\) samples from this distribution, and then calculated the sample mean. Figure ?? plots the histogram of these sample means (i.e., the sampling distribution of the mean for \(N=2\)). This time, the histogram produces a \(\cap\)-shaped distribution: it’s still not normal, but it’s a lot closer to the black line than the population distribution in Figure 10.10. When I increase the sample size to \(N=4\), the sampling distribution of the mean is very close to normal (Figure ??), and by the time we reach a sample size of \(N=8\) it’s almost perfectly normal. In other words, as long as your sample size isn’t tiny, the sampling distribution of the mean will be approximately normal no matter what your population distribution looks like!
```
# parameters of the beta distribution used as the (non-normal) population
a <- 2
b <- 1
# mean and standard deviation of the beta distribution
m <- a / (a + b)
s <- sqrt( a*b / (a+b)^2 / (a+b+1) )
# function to plot the sampling distribution of the mean for samples of size n
plotOne <- function(n, N = 50000) {
# generate N random sample means, each based on n observations
X <- matrix( rbeta(n*N, a, b), n, N )
X <- colMeans(X)
# plot the sampling distribution of the mean
hist( X, breaks = seq(0, 1, .025), border = "white", freq = FALSE,
col = "lightgrey",  # plain colour (the book's own palette isn't defined here)
xlab = "Sample Mean", ylab = "", xlim = c(0, 1.2),
main = paste("Sample Size =", n), axes = FALSE,
font.main = 1, ylim = c(0, 5)
)
box()
axis(1)
# overlay the normal distribution predicted by the central limit theorem
x <- seq(0, 1.2, .01)
lines( x, dnorm(x, m, s/sqrt(n)), lwd = 2, col = "black", type = "l" )
}
# draw the sampling distribution for sample sizes of 1, 2, 4 and 8
for( n in c(1, 2, 4, 8) ) {
plotOne(n)
}
```
On the basis of these figures, it seems like we have evidence for all of the following claims about the sampling distribution of the mean:
* The mean of the sampling distribution is the same as the mean of the population
* The standard deviation of the sampling distribution (i.e., the standard error) gets smaller as the sample size increases
* The shape of the sampling distribution becomes normal as the sample size increases
As it happens, not only are all of these statements true, there is a very famous theorem in statistics that proves all three of them, known as the central limit theorem. Among other things, the central limit theorem tells us that if the population distribution has mean \(\mu\) and standard deviation \(\sigma\), then the sampling distribution of the mean also has mean \(\mu\), and the standard error of the mean is \[ \mbox{SEM} = \frac{\sigma}{ \sqrt{N} } \] Because we divide the population standard deviation \(\sigma\) by the square root of the sample size \(N\), the SEM gets smaller as the sample size increases. It also tells us that the shape of the sampling distribution becomes normal.151
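To get a feel for how quickly the SEM shrinks, here’s a tiny worked sketch for IQ scores, taking \(\sigma = 15\):

```
sigma <- 15                    # population standard deviation of IQ scores
N <- c(5, 25, 100, 10000)      # a few sample sizes
sigma / sqrt( N )              # SEM: roughly 6.7, 3, 1.5 and 0.15 respectively
```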
This result is useful for all sorts of things. It tells us why large experiments are more reliable than small ones, and because it gives us an explicit formula for the standard error it tells us how much more reliable a large experiment is. It tells us why the normal distribution is, well, normal. In real experiments, many of the things that we want to measure are actually averages of lots of different quantities (e.g., arguably, “general” intelligence as measured by IQ is an average of a large number of “specific” skills and abilities), and when that happens, the averaged quantity should follow a normal distribution. Because of this mathematical law, the normal distribution pops up over and over again in real data.
## 10.4 Estimating population parameters
In all the IQ examples in the previous sections, we actually knew the population parameters ahead of time. As every undergraduate gets taught in their very first lecture on the measurement of intelligence, IQ scores are defined to have mean 100 and standard deviation 15. However, this is a bit of a lie. How do we know that IQ scores have a true population mean of 100? Well, we know this because the people who designed the tests have administered them to very large samples, and have then “rigged” the scoring rules so that their sample has mean 100. That’s not a bad thing of course: it’s an important part of designing a psychological measurement. However, it’s important to keep in mind that this theoretical mean of 100 only attaches to the population that the test designers used to design the tests. Good test designers will actually go to some lengths to provide “test norms” that can apply to lots of different populations (e.g., different age groups, nationalities etc).
This is very handy, but of course almost every research project of interest involves looking at a different population of people to those used in the test norms. For instance, suppose you wanted to measure the effect of low level lead poisoning on cognitive functioning in Port Pirie, a South Australian industrial town with a lead smelter. Perhaps you decide that you want to compare IQ scores among people in Port Pirie to a comparable sample in Whyalla, a South Australian industrial town with a steel refinery.152 Regardless of which town you’re thinking about, it doesn’t make a lot of sense simply to assume that the true population mean IQ is 100. No-one has, to my knowledge, produced sensible norming data that can automatically be applied to South Australian industrial towns. We’re going to have to estimate the population parameters from a sample of data. So how do we do this?
### 10.4.1 Estimating the population mean
Suppose we go to Port Pirie and 100 of the locals are kind enough to sit through an IQ test. The average IQ score among these people turns out to be \(\bar{X}=98.5\). So what is the true mean IQ for the entire population of Port Pirie? Obviously, we don’t know the answer to that question. It could be \(97.2\), but it could also be \(103.5\). Our sampling isn’t exhaustive so we cannot give a definitive answer. Nevertheless, if I was forced at gunpoint to give a “best guess” I’d have to say \(98.5\). That’s the essence of statistical estimation: giving a best guess.
In this example, estimating the unknown population parameter is straightforward. I calculate the sample mean, and I use that as my estimate of the population mean. It’s pretty simple, and in the next section I’ll explain the statistical justification for this intuitive answer. However, for the moment what I want to do is make sure you recognise that the sample statistic and the estimate of the population parameter are conceptually different things. A sample statistic is a description of your data, whereas the estimate is a guess about the population. With that in mind, statisticians often use different notation to refer to them. For instance, if the true population mean is denoted \(\mu\), then we would use \(\hat\mu\) to refer to our estimate of the population mean. In contrast, the sample mean is denoted \(\bar{X}\) or sometimes \(m\). However, in simple random samples, the estimate of the population mean is identical to the sample mean: if I observe a sample mean of \(\bar{X} = 98.5\), then my estimate of the population mean is also \(\hat\mu = 98.5\). To help keep the notation clear, here’s a handy table:
| Symbol | What is it? | Do we know what it is? |
| --- | --- | --- |
| \(\bar{X}\) | Sample mean | Yes, calculated from the raw data |
| \(\mu\) | True population mean | Almost never known for sure |
| \(\hat{\mu}\) | Estimate of the population mean | Yes, identical to the sample mean |
### 10.4.2 Estimating the population standard deviation
So far, estimation seems pretty simple, and you might be wondering why I forced you to read through all that stuff about sampling theory. In the case of the mean, our estimate of the population parameter (i.e. \(\hat\mu\)) turned out to be identical to the corresponding sample statistic (i.e. \(\bar{X}\)). However, that’s not always true. To see this, let’s have a think about how to construct an estimate of the population standard deviation, which we’ll denote \(\hat\sigma\). What shall we use as our estimate in this case? Your first thought might be that we could do the same thing we did when estimating the mean, and just use the sample statistic as our estimate. That’s almost the right thing to do, but not quite.
Here’s why. Suppose I have a sample that contains a single observation. For this example, it helps to consider a sample where you have no intuitions at all about what the true population values might be, so let’s use something completely fictitious. Suppose the observation in question measures the cromulence of my shoes. It turns out that my shoes have a cromulence of 20. So here’s my sample:
`20`
This is a perfectly legitimate sample, even if it does have a sample size of \(N=1\). It has a sample mean of 20, and because every observation in this sample is equal to the sample mean (obviously!) it has a sample standard deviation of 0. As a description of the sample this seems quite right: the sample contains a single observation and therefore there is no variation observed within the sample. A sample standard deviation of \(s = 0\) is the right answer here. But as an estimate of the population standard deviation, it feels completely insane, right? Admittedly, you and I don’t know anything at all about what “cromulence” is, but we know something about data: the only reason that we don’t see any variability in the sample is that the sample is too small to display any variation! So, if you have a sample size of \(N=1\), it feels like the right answer is just to say “no idea at all”.
Notice that you don’t have the same intuition when it comes to the sample mean and the population mean. If forced to make a best guess about the population mean, it doesn’t feel completely insane to guess that the population mean is 20. Sure, you probably wouldn’t feel very confident in that guess, because you have only the one observation to work with, but it’s still the best guess you can make.
Let’s extend this example a little. Suppose I now make a second observation. My data set now has \(N=2\) observations of the cromulence of shoes, and the complete sample now looks like this:
`20, 22`
This time around, our sample is just large enough for us to be able to observe some variability: two observations is the bare minimum number needed for any variability to be observed! For our new data set, the sample mean is \(\bar{X}=21\), and the sample standard deviation is \(s=1\). What intuitions do we have about the population? Again, as far as the population mean goes, the best guess we can possibly make is the sample mean: if forced to guess, we’d probably guess that the population mean cromulence is 21. What about the standard deviation? This is a little more complicated. The sample standard deviation is only based on two observations, and if you’re at all like me you probably have the intuition that, with only two observations, we haven’t given the population “enough of a chance” to reveal its true variability to us. It’s not just that we suspect that the estimate is wrong: after all, with only two observations we expect it to be wrong to some degree. The worry is that the error is systematic. Specifically, we suspect that the sample standard deviation is likely to be smaller than the population standard deviation.
This intuition feels right, but it would be nice to demonstrate this somehow. There are in fact mathematical proofs that confirm this intuition, but unless you have the right mathematical background they don’t help very much. Instead, what I’ll do is use R to simulate the results of some experiments. With that in mind, let’s return to our IQ studies. Suppose the true population mean IQ is 100 and the standard deviation is 15. I can use the `rnorm()` function to generate the results of an experiment in which I measure \(N=2\) IQ scores, and calculate the sample standard deviation. If I do this over and over again, and plot a histogram of these sample standard deviations, what I have is the sampling distribution of the standard deviation. I’ve plotted this distribution in Figure 10.11. Even though the true population standard deviation is 15, the average of the sample standard deviations is only 8.5. Notice that this is a very different result to what we found in Figure 10.8 when we plotted the sampling distribution of the mean. If you look at that sampling distribution, what you see is that the population mean is 100, and the average of the sample means is also 100.
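Here’s a sketch of that simulation. One detail to watch: the text’s \(s\) divides by \(N\), whereas R’s built-in `sd()` divides by \(N-1\), so the sketch defines its own divide-by-\(N\) version to reproduce the "average of about 8.5" result:

```
# the "divide by N" standard deviation, i.e. the quantity the text calls s
sd.n <- function(x) sqrt( mean( (x - mean(x))^2 ) )
# sampling distribution of s for samples of size N = 2
sample.sds <- replicate( 10000, sd.n( rnorm( n = 2, mean = 100, sd = 15 ) ) )
mean( sample.sds )   # averages out to roughly 8.5, well below the true value of 15
```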
Now let’s extend the simulation. Instead of restricting ourselves to the situation where we have a sample size of \(N=2\), let’s repeat the exercise for sample sizes from 1 to 10. If we plot the average sample mean and average sample standard deviation as a function of sample size, you get the results shown in Figure 10.12. On the left hand side (panel a), I’ve plotted the average sample mean and on the right hand side (panel b), I’ve plotted the average standard deviation. The two plots are quite different: on average, the average sample mean is equal to the population mean. It is an unbiased estimator, which is essentially the reason why your best estimate for the population mean is the sample mean.153 The plot on the right is quite different: on average, the sample standard deviation \(s\) is smaller than the population standard deviation \(\sigma\). It is a biased estimator. In other words, if we want to make a “best guess” \(\hat\sigma\) about the value of the population standard deviation \(\sigma\), we should make sure our guess is a little bit larger than the sample standard deviation \(s\).
The fix to this systematic bias turns out to be very simple. Here’s how it works. Before tackling the standard deviation, let’s look at the variance. If you recall from Section 5.2, the sample variance is defined to be the average of the squared deviations from the sample mean. That is: \[ s^2 = \frac{1}{N} \sum_{i=1}^N (X_i - \bar{X})^2 \] The sample variance \(s^2\) is a biased estimator of the population variance \(\sigma^2\). But as it turns out, we only need to make a tiny tweak to transform this into an unbiased estimator. All we have to do is divide by \(N-1\) rather than by \(N\). If we do that, we obtain the following formula: \[ \hat\sigma^2 = \frac{1}{N-1} \sum_{i=1}^N (X_i - \bar{X})^2 \] This is an unbiased estimator of the population variance \(\sigma^2\). Moreover, this finally answers the question we raised in Section 5.2. Why did R give us slightly different answers when we used the `var()` function? Because the `var()` function calculates \(\hat\sigma^2\) not \(s^2\), that’s why. A similar story applies for the standard deviation. If we divide by \(N-1\) rather than \(N\), our estimate of the population standard deviation becomes: \[
\hat\sigma = \sqrt{\frac{1}{N-1} \sum_{i=1}^N (X_i - \bar{X})^2}
\] and when we use R’s built in standard deviation function `sd()` , what it’s doing is calculating \(\hat\sigma\), not \(s\).154
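The two-observation cromulence sample makes the distinction concrete (a small worked sketch):

```
x <- c(20, 22)                      # the two cromulence observations
sqrt( mean( (x - mean(x))^2 ) )     # s: divide by N, gives 1
sd( x )                             # sigma-hat: divide by N-1, gives about 1.41
mean( (x - mean(x))^2 )             # s^2: divide by N, gives 1
var( x )                            # sigma-hat^2: divide by N-1, gives 2
```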
One final point: in practice, a lot of people tend to refer to \(\hat{\sigma}\) (i.e., the formula where we divide by \(N-1\)) as the sample standard deviation. Technically, this is incorrect: the sample standard deviation should be equal to \(s\) (i.e., the formula where we divide by \(N\)). These aren’t the same thing, either conceptually or numerically. One is a property of the sample, the other is an estimated characteristic of the population. However, in almost every real life application, what we actually care about is the estimate of the population parameter, and so people always report \(\hat\sigma\) rather than \(s\). This is the right number to report, of course; it’s just that people tend to get a little bit imprecise about terminology when they write it up, because “sample standard deviation” is shorter than “estimated population standard deviation”. It’s no big deal, and in practice I do the same thing everyone else does. Nevertheless, I think it’s important to keep the two concepts separate: it’s never a good idea to confuse “known properties of your sample” with “guesses about the population from which it came”. The moment you start thinking that \(s\) and \(\hat\sigma\) are the same thing, you start doing exactly that.
To finish this section off, here’s another couple of tables to help keep things clear:
| Symbol | What is it? | Do we know what it is? |
| --- | --- | --- |
| \(s\) | Sample standard deviation | Yes, calculated from the raw data |
| \(\sigma\) | Population standard deviation | Almost never known for sure |
| \(\hat{\sigma}\) | Estimate of the population standard deviation | Yes, but not the same as the sample standard deviation |
| \(s^2\) | Sample variance | Yes, calculated from the raw data |
| \(\sigma^2\) | Population variance | Almost never known for sure |
| \(\hat{\sigma}^2\) | Estimate of the population variance | Yes, but not the same as the sample variance |
## 10.5 Estimating a confidence interval
Statistics means never having to say you’re certain – Unknown origin155 (I’ve never been able to find the original source).
Up to this point in this chapter, I’ve outlined the basics of sampling theory which statisticians rely on to make guesses about population parameters on the basis of a sample of data. As this discussion illustrates, one of the reasons we need all this sampling theory is that every data set leaves us with some amount of uncertainty, so our estimates are never going to be perfectly accurate. The thing that has been missing from this discussion is an attempt to quantify the amount of uncertainty that attaches to our estimate. It’s not enough to be able to guess that, say, the mean IQ of undergraduate psychology students is 115 (yes, I just made that number up). We also want to be able to say something that expresses the degree of certainty that we have in our guess. For example, it would be nice to be able to say that there is a 95% chance that the true mean lies between 109 and 121. The name for this is a confidence interval for the mean.
Armed with an understanding of sampling distributions, constructing a confidence interval for the mean is actually pretty easy. Here’s how it works. Suppose the true population mean is \(\mu\) and the standard deviation is \(\sigma\). I’ve just finished running my study that has \(N\) participants, and the mean IQ among those participants is \(\bar{X}\). We know from our discussion of the central limit theorem (Section 10.3.3) that the sampling distribution of the mean is approximately normal. We also know from our discussion of the normal distribution (Section 9.5) that there is a 95% chance that a normally-distributed quantity will fall within two standard deviations of the true mean. To be more precise, we can use the `qnorm()` function to compute the 2.5th and 97.5th percentiles of the normal distribution
```
qnorm( p = c(.025, .975) )
```
```
## [1] -1.959964 1.959964
```
Okay, so I lied earlier on. The more correct answer is that there is a 95% chance that a normally-distributed quantity will fall within 1.96 standard deviations of the true mean. Next, recall that the standard deviation of the sampling distribution is referred to as the standard error, and the standard error of the mean is written as SEM. When we put all these pieces together, we learn that there is a 95% probability that the sample mean \(\bar{X}\) that we have actually observed lies within 1.96 standard errors of the population mean. Mathematically, we write this as: \[ \mu - \left( 1.96 \times \mbox{SEM} \right) \ \leq \ \bar{X}\ \leq \ \mu + \left( 1.96 \times \mbox{SEM} \right) \] where the SEM is equal to \(\sigma / \sqrt{N}\), and we can be 95% confident that this is true. However, that’s not answering the question that we’re actually interested in. The equation above tells us what we should expect about the sample mean, given that we know what the population parameters are. What we want is to have this work the other way around: we want to know what we should believe about the population parameters, given that we have observed a particular sample. However, it’s not too difficult to do this. Using a little high school algebra, a sneaky way to rewrite our equation is like this: \[ \bar{X} - \left( 1.96 \times \mbox{SEM} \right) \ \leq \ \mu \ \leq \ \bar{X} + \left( 1.96 \times \mbox{SEM}\right) \] What this is telling us is that the range of values has a 95% probability of containing the population mean \(\mu\). We refer to this range as a 95% confidence interval, denoted \(\mbox{CI}_{95}\). In short, as long as \(N\) is sufficiently large – large enough for us to believe that the sampling distribution of the mean is normal – then we can write this as our formula for the 95% confidence interval: \[ \mbox{CI}_{95} = \bar{X} \pm \left( 1.96 \times \frac{\sigma}{\sqrt{N}} \right) \] Of course, there’s nothing special about the number 1.96: it just happens to be the multiplier you need to use if you want a 95% confidence interval. If I’d wanted a 70% confidence interval, I could have used the `qnorm()` function to calculate the 15th and 85th quantiles:
```
qnorm( p = c(.15, .85) )
```
```
## [1] -1.036433 1.036433
```
and so the formula for \(\mbox{CI}_{70}\) would be the same as the formula for \(\mbox{CI}_{95}\) except that we’d use 1.04 as our magic number rather than 1.96.
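To see how these pieces fit together in R, here’s a minimal sketch that computes a 95% confidence interval “by hand”. The IQ scores are simulated, so the particular numbers are made up; note also that I’m using `sd()` – which gives \(\hat\sigma\) – as a stand-in for the unknown \(\sigma\), an issue we return to in the next section.

```
set.seed(1)                                  # make the simulated data reproducible
iq <- rnorm( n = 100, mean = 115, sd = 15 )  # pretend these are our N = 100 IQ scores
xbar <- mean( iq )                           # the sample mean
sem <- sd( iq ) / sqrt( length( iq ) )       # (estimated) standard error of the mean
xbar + c(-1, 1) * 1.96 * sem                 # lower and upper limits of the 95% CI
```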
### 10.5.1 A slight mistake in the formula
As usual, I lied. The formula that I’ve given above for the 95% confidence interval is approximately correct, but I glossed over an important detail in the discussion. Notice my formula requires you to use the standard error of the mean, SEM, which in turn requires you to use the true population standard deviation \(\sigma\). Yet, in Section 10.4 I stressed the fact that we don’t actually know the true population parameters. Because we don’t know the true value of \(\sigma\), we have to use an estimate of the population standard deviation \(\hat{\sigma}\) instead. This is pretty straightforward to do, but this has the consequence that we need to use the quantiles of the \(t\)-distribution rather than the normal distribution to calculate our magic number; and the answer depends on the sample size. When \(N\) is very large, we get pretty much the same value using `qt()` that we would if we used `qnorm()` …
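The command that produced the number below isn’t shown in the text; a call along these lines would do it, assuming (purely for illustration) a sample size of \(N = 10000\):

```
N <- 10000                  # suppose our sample is very large
qt( p = .975, df = N - 1 )  # 97.5th quantile of the t distribution with N-1 degrees of freedom
```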
```
## [1] 1.960201
```
But when \(N\) is small, we get a much bigger number when we use the \(t\) distribution:
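Again, the original command is hidden; something like this, with a small sample of, say, \(N = 10\), produces the value below:

```
N <- 10                     # a small sample this time
qt( p = .975, df = N - 1 )  # 97.5th quantile of the t distribution with 9 degrees of freedom
```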
```
## [1] 2.262157
```
There’s nothing too mysterious about what’s happening here. Bigger values mean that the confidence interval is wider, indicating that we’re more uncertain about what the true value of \(\mu\) actually is. When we use the \(t\) distribution instead of the normal distribution, we get bigger numbers, indicating that we have more uncertainty. And why do we have that extra uncertainty? Well, because our estimate of the population standard deviation \(\hat\sigma\) might be wrong! If it’s wrong, it implies that we’re a bit less sure about what our sampling distribution of the mean actually looks like… and this uncertainty ends up getting reflected in a wider confidence interval.
### 10.5.2 Interpreting a confidence interval
The hardest thing about confidence intervals is understanding what they mean. Whenever people first encounter confidence intervals, the first instinct is almost always to say that “there is a 95% probability that the true mean lies inside the confidence interval”. It’s simple, and it seems to capture the common sense idea of what it means to say that I am “95% confident”. Unfortunately, it’s not quite right. The intuitive definition relies very heavily on your own personal beliefs about the value of the population mean. I say that I am 95% confident because those are my beliefs. In everyday life that’s perfectly okay, but if you remember back to Section 9.2, you’ll notice that talking about personal belief and confidence is a Bayesian idea. Personally (speaking as a Bayesian) I have no problem with the idea that the phrase “95% probability” is allowed to refer to a personal belief. However, confidence intervals are not Bayesian tools. Like everything else in this chapter, confidence intervals are frequentist tools, and if you are going to use frequentist methods then it’s not appropriate to attach a Bayesian interpretation to them. If you use frequentist methods, you must adopt frequentist interpretations!
Okay, so if that’s not the right answer, what is? Remember what we said about frequentist probability: the only way we are allowed to make “probability statements” is to talk about a sequence of events, and to count up the frequencies of different kinds of events. From that perspective, the interpretation of a 95% confidence interval must have something to do with replication. Specifically: if we replicated the experiment over and over again and computed a 95% confidence interval for each replication, then 95% of those intervals would contain the true mean. More generally, 95% of all confidence intervals constructed using this procedure should contain the true population mean. This idea is illustrated in Figure 10.13, which shows 50 confidence intervals constructed for a “measure 10 IQ scores” experiment (top panel) and another 50 confidence intervals for a “measure 25 IQ scores” experiment (bottom panel). A bit fortuitously, across the 100 replications that I simulated, it turned out that exactly 95 of them contained the true mean.
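Figure 10.13 was produced by a simulation along these lines. Here’s a stripped-down sketch of the idea; the population parameters, sample size and number of replications below are my own choices for illustration, not necessarily the exact ones used for the figure.

```
set.seed(42)
n.reps <- 50                       # number of simulated replications of the experiment
N <- 10                            # sample size within each replication
contains.mu <- logical( n.reps )   # does each interval contain the true mean?
for( i in 1:n.reps ) {
  x <- rnorm( N, mean = 100, sd = 15 )                      # one "measure N IQ scores" study
  sem <- sd( x ) / sqrt( N )                                # estimated standard error
  ci <- mean( x ) + c(-1, 1) * qt( .975, df = N-1 ) * sem   # 95% CI for this replication
  contains.mu[i] <- ci[1] <= 100 & 100 <= ci[2]             # is the true mean (100) inside?
}
mean( contains.mu )   # proportion of intervals containing the true mean; close to .95 in the long run
```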
The critical difference here is that the Bayesian claim makes a probability statement about the population mean (i.e., it refers to our uncertainty about the population mean), which is not allowed under the frequentist interpretation of probability because you can’t “replicate” a population! In the frequentist claim, the population mean is fixed and no probabilistic claims can be made about it. The confidence interval, however, is computed from the data, and data collection is something we can replicate: the interval changes from one replication of the experiment to the next. Therefore a frequentist is allowed to talk about the probability that the confidence interval (a random variable) contains the true mean; but is not allowed to talk about the probability that the true population mean (not a repeatable event) falls within the confidence interval.
I know that this seems a little pedantic, but it does matter. It matters because the difference in interpretation leads to a difference in the mathematics. There is a Bayesian alternative to confidence intervals, known as credible intervals. In most situations credible intervals are quite similar to confidence intervals, but in other cases they are drastically different. As promised, though, I’ll talk more about the Bayesian perspective in Chapter 17.
### 10.5.3 Calculating confidence intervals in R
As far as I can tell, the core packages in R don’t include a simple function for calculating confidence intervals for the mean. They do include a lot of complicated, extremely powerful functions that can be used to calculate confidence intervals associated with lots of different things, such as the `confint()` function that we’ll use in Chapter 15. But I figure that when you’re first learning statistics, it might be useful to start with something simpler. As a consequence, the `lsr` package includes a function called `ciMean()` which you can use to calculate your confidence intervals. There are two arguments that you might want to specify:156
* `x`. This should be a numeric vector containing the data.
* `conf`. This should be a number, specifying the confidence level. By default, `conf = .95`, since 95% confidence intervals are the de facto standard in psychology.

So, for example, if I load the `afl24.Rdata` file, I can calculate the confidence interval associated with the mean attendance:
```
> ciMean( x = afl$attendance )
2.5% 97.5%
31597.32 32593.12
```
Hopefully that’s fairly clear.
### 10.5.4 Plotting confidence intervals in R
There are several different ways you can draw graphs that show confidence intervals as error bars. I’ll show two versions here, but this certainly doesn’t exhaust the possibilities. In doing so, what I’m assuming is that what you want to draw is a plot showing the means and confidence intervals for one variable, broken down by different levels of a second variable. For instance, in our `afl` data that we discussed earlier, we might be interested in plotting the average `attendance` by `year` . I’ll do this using two different functions, `bargraph.CI()` and `lineplot.CI()` (both of which are in the `sciplot` package). Assuming that you’ve installed these packages on your system (see Section 4.2 if you’ve forgotten how to do this), you’ll need to load them. You’ll also need to load the `lsr` package, because we’ll make use of the `ciMean()` function to actually calculate the confidence intervals
```
load( file.path(projecthome, "data/afl24.Rdata" )) # contains the "afl" data frame
library( sciplot ) # bargraph.CI() and lineplot.CI() functions
library( lsr ) # ciMean() function
```
Here’s how to plot the means and confidence intervals drawn using `bargraph.CI()` .
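The actual plotting command isn’t reproduced in the text, but a call along the following lines is the sort of thing I have in mind (the axis labels are my own choices):

```
bargraph.CI( x.factor = year,              # grouping variable on the x-axis
             response = attendance,        # outcome variable to be averaged
             data = afl,                   # data frame containing both variables
             ci.fun = ciMean,              # use ciMean() to compute the error bars
             xlab = "Year",                # x-axis label
             ylab = "Average attendance"   # y-axis label
)
```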
We can use the same arguments when calling the `lineplot.CI()` function:
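As above, the exact command isn’t shown; a sketch along the same lines:

```
lineplot.CI( x.factor = year,              # grouping variable on the x-axis
             response = attendance,        # outcome variable to be averaged
             data = afl,                   # data frame containing both variables
             ci.fun = ciMean,              # use ciMean() to compute the error bars
             xlab = "Year",                # x-axis label
             ylab = "Average attendance"   # y-axis label
)
```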
## 10.6 Summary
In this chapter I’ve covered two main topics. The first half of the chapter talks about sampling theory, and the second half talks about how we can use sampling theory to construct estimates of the population parameters. The section breakdown looks like this:
* Basic ideas about samples, sampling and populations (Section 10.1)
* Statistical theory of sampling: the law of large numbers (Section 10.2), sampling distributions and the central limit theorem (Section 10.3).
* Estimating means and standard deviations (Section 10.4)
* Estimating a confidence interval (Section 10.5)
As always, there are a lot of topics related to sampling and estimation that aren’t covered in this chapter, but for an introductory psychology class this is fairly comprehensive, I think. Most applied researchers won’t need much more theory than this. One big question that I haven’t touched on in this chapter is what you do when you don’t have a simple random sample. There is a lot of statistical theory you can draw on to handle this situation, but it’s well beyond the scope of this book.
The proper mathematical definition of randomness is extraordinarily technical, and way beyond the scope of this book. We’ll be non-technical here and say that a process has an element of randomness to it whenever it is possible to repeat the process and get different answers each time.↩
Nothing in life is that simple: there’s not an obvious division of people into binary categories like “schizophrenic” and “not schizophrenic”. But this isn’t a clinical psychology text, so please forgive me a few simplifications here and there.↩
Technically, the law of large numbers pertains to any sample statistic that can be described as an average of independent quantities. That’s certainly true for the sample mean. However, it’s also possible to write many other sample statistics as averages of one form or another. The variance of a sample, for instance, can be rewritten as a kind of average and so is subject to the law of large numbers. The minimum value of a sample, however, cannot be written as an average of anything and is therefore not governed by the law of large numbers.↩
As usual, I’m being a bit sloppy here. The central limit theorem is a bit more general than this section implies. Like most introductory stats texts, I’ve discussed one situation where the central limit theorem holds: when you’re taking an average across lots of independent events drawn from the same distribution. However, the central limit theorem is much broader than this. There’s a whole class of things called “\(U\)-statistics” for instance, all of which satisfy the central limit theorem and therefore become normally distributed for large sample sizes. The mean is one such statistic, but it’s not the only one.↩
Please note that if you were actually interested in this question, you would need to be a lot more careful than I’m being here. You can’t just compare IQ scores in Whyalla to Port Pirie and assume that any differences are due to lead poisoning. Even if it were true that the only differences between the two towns corresponded to the different refineries (and it isn’t, not by a long shot), you need to account for the fact that people already believe that lead pollution causes cognitive deficits: if you recall back to Chapter 2, this means that there are different demand effects for the Port Pirie sample than for the Whyalla sample. In other words, you might end up with an illusory group difference in your data, caused by the fact that people think that there is a real difference. I find it pretty implausible to think that the locals wouldn’t be well aware of what you were trying to do if a bunch of researchers turned up in Port Pirie with lab coats and IQ tests, and even less plausible to think that a lot of people would be pretty resentful of you for doing it. Those people won’t be as co-operative in the tests. Other people in Port Pirie might be more motivated to do well because they don’t want their home town to look bad. The motivational effects that would apply in Whyalla are likely to be weaker, because people don’t have any concept of “iron ore poisoning” in the same way that they have a concept for “lead poisoning”. Psychology is hard.↩
I should note that I’m hiding something here. Unbiasedness is a desirable characteristic for an estimator, but there are other things that matter besides bias. However, it’s beyond the scope of this book to discuss this in any detail. I just want to draw your attention to the fact that there’s some hidden complexity here.↩
Okay, I’m hiding something else here. In a bizarre and counterintuitive twist, since \(\hat\sigma^2\) is an unbiased estimator of \(\sigma^2\), you’d assume that taking the square root would be fine, and \(\hat\sigma\) would be an unbiased estimator of \(\sigma\). Right? Weirdly, it’s not. There’s actually a subtle, tiny bias in \(\hat\sigma\). This is just bizarre: \(\hat\sigma^2\) is an unbiased estimate of the population variance \(\sigma^2\), but when you take the square root, it turns out that \(\hat\sigma\) is a biased estimator of the population standard deviation \(\sigma\). Weird, weird, weird, right? So, why is \(\hat\sigma\) biased? The technical answer is “because non-linear transformations (e.g., the square root) don’t commute with expectation”, but that just sounds like gibberish to everyone who hasn’t taken a course in mathematical statistics. Fortunately, it doesn’t matter for practical purposes. The bias is small, and in real life everyone uses \(\hat\sigma\) and it works just fine. Sometimes mathematics is just annoying.
This quote appears on a great many t-shirts and websites, and even gets a mention in a few academic papers (e.g., http://www.amstat.org/publications/jse/v10n3/friedman.html), but I’ve never found the original source.↩
As of the current writing, these are the only arguments to the function. However, I am planning to add a bit more functionality to `ciMean()`. Regardless of what those future changes might look like, the `x` and `conf` arguments will remain the same, and the commands used in this book will still work.↩
# Chapter 11 Hypothesis testing
The process of induction is the process of assuming the simplest law that can be made to harmonize with our experience. This process, however, has no logical foundation but only a psychological one. It is clear that there are no grounds for believing that the simplest course of events will really happen. It is an hypothesis that the sun will rise tomorrow: and this means that we do not know whether it will rise.
– <NAME>
In the last chapter, I discussed the ideas behind estimation, which is one of the two “big ideas” in inferential statistics. It’s now time to turn our attention to the other big idea, which is hypothesis testing. In its most abstract form, hypothesis testing is really a very simple idea: the researcher has some theory about the world, and wants to determine whether or not the data actually support that theory. However, the details are messy, and most people find the theory of hypothesis testing to be the most frustrating part of statistics. The structure of the chapter is as follows. Firstly, I’ll describe how hypothesis testing works, in a fair amount of detail, using a simple running example to show you how a hypothesis test is “built”. I’ll try to avoid being too dogmatic while doing so, and focus instead on the underlying logic of the testing procedure.158 Afterwards, I’ll spend a bit of time talking about the various dogmas, rules and heresies that surround the theory of hypothesis testing.
## 11.1 A menagerie of hypotheses
Eventually we all succumb to madness. For me, that day will arrive once I’m finally promoted to full professor. Safely ensconced in my ivory tower, happily protected by tenure, I will finally be able to take leave of my senses (so to speak), and indulge in that most thoroughly unproductive line of psychological research: the search for extrasensory perception (ESP).159
Let’s suppose that this glorious day has come. My first study is a simple one, in which I seek to test whether clairvoyance exists. Each participant sits down at a table, and is shown a card by an experimenter. The card is black on one side and white on the other. The experimenter takes the card away, and places it on a table in an adjacent room. The card is placed black side up or white side up completely at random, with the randomisation occurring only after the experimenter has left the room with the participant. A second experimenter comes in and asks the participant which side of the card is now facing upwards. It’s purely a one-shot experiment. Each person sees only one card, and gives only one answer; and at no stage is the participant actually in contact with someone who knows the right answer. My data set, therefore, is very simple. I have asked the question of \(N\) people, and some number \(X\) of these people have given the correct response. To make things concrete, let’s suppose that I have tested \(N = 100\) people, and \(X = 62\) of these got the answer right… a surprisingly large number, sure, but is it large enough for me to feel safe in claiming I’ve found evidence for ESP? This is the situation where hypothesis testing comes in useful. However, before we talk about how to test hypotheses, we need to be clear about what we mean by hypotheses.
### 11.1.1 Research hypotheses versus statistical hypotheses
The first distinction that you need to keep clear in your mind is between research hypotheses and statistical hypotheses. In my ESP study, my overall scientific goal is to demonstrate that clairvoyance exists. In this situation, I have a clear research goal: I am hoping to discover evidence for ESP. In other situations I might actually be a lot more neutral than that, so I might say that my research goal is to determine whether or not clairvoyance exists. Regardless of how I want to portray myself, the basic point that I’m trying to convey here is that a research hypothesis involves making a substantive, testable scientific claim… if you are a psychologist, then your research hypotheses are fundamentally about psychological constructs. Any of the following would count as research hypotheses:
* Listening to music reduces your ability to pay attention to other things. This is a claim about the causal relationship between two psychologically meaningful concepts (listening to music and paying attention to things), so it’s a perfectly reasonable research hypothesis.
* Intelligence is related to personality. Like the last one, this is a relational claim about two psychological constructs (intelligence and personality), but the claim is weaker: correlational not causal.
* Intelligence *is* speed of information processing. This hypothesis has a quite different character: it’s not actually a relational claim at all. It’s an ontological claim about the fundamental character of intelligence (and I’m pretty sure it’s wrong). It’s worth expanding on this one actually: It’s usually easier to think about how to construct experiments to test research hypotheses of the form “does X affect Y?” than it is to address claims like “what is X?” And in practice, what usually happens is that you find ways of testing relational claims that follow from your ontological ones. For instance, if I believe that intelligence *is* speed of information processing in the brain, my experiments will often involve looking for relationships between measures of intelligence and measures of speed. As a consequence, most everyday research questions do tend to be relational in nature, but they’re almost always motivated by deeper ontological questions about the state of nature.
Notice that in practice, my research hypotheses could overlap a lot. My ultimate goal in the ESP experiment might be to test an ontological claim like “ESP exists”, but I might operationally restrict myself to a narrower hypothesis like “Some people can `see’ objects in a clairvoyant fashion”. That said, there are some things that really don’t count as proper research hypotheses in any meaningful sense:
* Love is a battlefield. This is too vague to be testable. While it’s okay for a research hypothesis to have a degree of vagueness to it, it has to be possible to operationalise your theoretical ideas. Maybe I’m just not creative enough to see it, but I can’t see how this can be converted into any concrete research design. If that’s true, then this isn’t a scientific research hypothesis, it’s a pop song. That doesn’t mean it’s not interesting – a lot of deep questions that humans have fall into this category. Maybe one day science will be able to construct testable theories of love, or to test to see if God exists, and so on; but right now we can’t, and I wouldn’t bet on ever seeing a satisfying scientific approach to either.
* The first rule of tautology club is the first rule of tautology club. This is not a substantive claim of any kind. It’s true by definition. No conceivable state of nature could possibly be inconsistent with this claim. As such, we say that this is an unfalsifiable hypothesis, and as such it is outside the domain of science. Whatever else you do in science, your claims must have the possibility of being wrong.
* More people in my experiment will say “yes” than “no”. This one fails as a research hypothesis because it’s a claim about the data set, not about the psychology (unless of course your actual research question is whether people have some kind of “yes” bias!). As we’ll see shortly, this hypothesis is starting to sound more like a statistical hypothesis than a research hypothesis.
As you can see, research hypotheses can be somewhat messy at times; and ultimately they are scientific claims. Statistical hypotheses are neither of these two things. Statistical hypotheses must be mathematically precise, and they must correspond to specific claims about the characteristics of the data generating mechanism (i.e., the “population”). Even so, the intent is that statistical hypotheses bear a clear relationship to the substantive research hypotheses that you care about! For instance, in my ESP study my research hypothesis is that some people are able to see through walls or whatever. What I want to do is to “map” this onto a statement about how the data were generated. So let’s think about what that statement would be. The quantity that I’m interested in within the experiment is \(P(\mbox{"correct"})\), the true-but-unknown probability with which the participants in my experiment answer the question correctly. Let’s use the Greek letter \(\theta\) (theta) to refer to this probability. Here are four different statistical hypotheses:
* If ESP doesn’t exist and if my experiment is well designed, then my participants are just guessing. So I should expect them to get it right half of the time and so my statistical hypothesis is that the true probability of choosing correctly is \(\theta = 0.5\).
* Alternatively, suppose ESP does exist and participants can see the card. If that’s true, people will perform better than chance. The statistical hypothesis would be that \(\theta > 0.5\).
* A third possibility is that ESP does exist, but the colours are all reversed and people don’t realise it (okay, that’s wacky, but you never know…). If that’s how it works then you’d expect people’s performance to be below chance. This would correspond to a statistical hypothesis that \(\theta < 0.5\).
* Finally, suppose ESP exists, but I have no idea whether people are seeing the right colour or the wrong one. In that case, the only claim I could make about the data would be that the probability of making the correct answer is not equal to 50%. This corresponds to the statistical hypothesis that \(\theta \neq 0.5\).
All of these are legitimate examples of a statistical hypothesis because they are statements about a population parameter and are meaningfully related to my experiment.
What this discussion makes clear, I hope, is that when attempting to construct a statistical hypothesis test the researcher actually has two quite distinct hypotheses to consider. First, he or she has a research hypothesis (a claim about psychology), and this corresponds to a statistical hypothesis (a claim about the data generating population). In my ESP example, these might be
| Dan's research hypothesis | Dan's statistical hypothesis |
| --- | --- |
| ESP exists | \(\theta \neq 0.5\) |
And the key thing to recognise is this: a statistical hypothesis test is a test of the statistical hypothesis, not the research hypothesis. If your study is badly designed, then the link between your research hypothesis and your statistical hypothesis is broken. To give a silly example, suppose that my ESP study was conducted in a situation where the participant can actually see the card reflected in a window; if that happens, I would be able to find very strong evidence that \(\theta \neq 0.5\), but this would tell us nothing about whether “ESP exists”.
### 11.1.2 Null hypotheses and alternative hypotheses
So far, so good. I have a research hypothesis that corresponds to what I want to believe about the world, and I can map it onto a statistical hypothesis that corresponds to what I want to believe about how the data were generated. It’s at this point that things get somewhat counterintuitive for a lot of people. Because what I’m about to do is invent a new statistical hypothesis (the “null” hypothesis, \(H_0\)) that corresponds to the exact opposite of what I want to believe, and then focus exclusively on that, almost to the neglect of the thing I’m actually interested in (which is now called the “alternative” hypothesis, \(H_1\)). In our ESP example, the null hypothesis is that \(\theta = 0.5\), since that’s what we’d expect if ESP didn’t exist. My hope, of course, is that ESP is totally real, and so the alternative to this null hypothesis is \(\theta \neq 0.5\). In essence, what we’re doing here is dividing up the possible values of \(\theta\) into two groups: those values that I really hope aren’t true (the null), and those values that I’d be happy with if they turn out to be right (the alternative). Having done so, the important thing to recognise is that the goal of a hypothesis test is not to show that the alternative hypothesis is (probably) true; the goal is to show that the null hypothesis is (probably) false. Most people find this pretty weird.
The best way to think about it, in my experience, is to imagine that a hypothesis test is a criminal trial160… the trial of the null hypothesis. The null hypothesis is the defendant, the researcher is the prosecutor, and the statistical test itself is the judge. Just like a criminal trial, there is a presumption of innocence: the null hypothesis is deemed to be true unless you, the researcher, can prove beyond a reasonable doubt that it is false. You are free to design your experiment however you like (within reason, obviously!), and your goal when doing so is to maximise the chance that the data will yield a conviction… for the crime of being false. The catch is that the statistical test sets the rules of the trial, and those rules are designed to protect the null hypothesis – specifically to ensure that if the null hypothesis is actually true, the chances of a false conviction are guaranteed to be low. This is pretty important: after all, the null hypothesis doesn’t get a lawyer. And given that the researcher is trying desperately to prove it to be false, someone has to protect it.
## 11.2 Two types of errors
Before going into details about how a statistical test is constructed, it’s useful to understand the philosophy behind it. I hinted at it when pointing out the similarity between a null hypothesis test and a criminal trial, but I should now be explicit. Ideally, we would like to construct our test so that we never make any errors. Unfortunately, since the world is messy, this is never possible. Sometimes you’re just really unlucky: for instance, suppose you flip a coin 10 times in a row and it comes up heads all 10 times. That feels like very strong evidence that the coin is biased (and it is!), but of course there’s a 1 in 1024 chance that this would happen even if the coin was totally fair. In other words, in real life we always have to accept that there’s a chance that we did the wrong thing. As a consequence, the goal behind statistical hypothesis testing is not to eliminate errors, but to minimise them.
At this point, we need to be a bit more precise about what we mean by “errors”. Firstly, let’s state the obvious: it is either the case that the null hypothesis is true, or it is false; and our test will either reject the null hypothesis or retain it.161 So, as the table below illustrates, after we run the test and make our choice, one of four things might have happened:
| | retain \(H_0\) | reject \(H_0\) |
| --- | --- | --- |
| \(H_0\) is true | correct decision | error (type I) |
| \(H_0\) is false | error (type II) | correct decision |
As a consequence there are actually two different types of error here. If we reject a null hypothesis that is actually true, then we have made a type I error. On the other hand, if we retain the null hypothesis when it is in fact false, then we have made a type II error.
Remember how I said that statistical testing was kind of like a criminal trial? Well, I meant it. A criminal trial requires that you establish “beyond a reasonable doubt” that the defendant did it. All of the evidentiary rules are (in theory, at least) designed to ensure that there’s (almost) no chance of wrongfully convicting an innocent defendant. The trial is designed to protect the rights of a defendant: as the English jurist <NAME> famously said, it is “better that ten guilty persons escape than that one innocent suffer.” In other words, a criminal trial doesn’t treat the two types of error in the same way… punishing the innocent is deemed to be much worse than letting the guilty go free. A statistical test is pretty much the same: the single most important design principle of the test is to control the probability of a type I error, to keep it below some fixed probability. This probability, which is denoted \(\alpha\), is called the significance level of the test (or sometimes, the size of the test). And I’ll say it again, because it is so central to the whole set-up… a hypothesis test is said to have significance level \(\alpha\) if the type I error rate is no larger than \(\alpha\).
So, what about the type II error rate? Well, we’d also like to keep those under control too, and we denote this probability by \(\beta\). However, it’s much more common to refer to the power of the test, which is the probability with which we reject a null hypothesis when it really is false, which is \(1-\beta\). To help keep this straight, here’s the same table again, but with the relevant numbers added:
| | retain \(H_0\) | reject \(H_0\) |
| --- | --- | --- |
| \(H_0\) is true | \(1-\alpha\) (probability of correct retention) | \(\alpha\) (type I error rate) |
| \(H_0\) is false | \(\beta\) (type II error rate) | \(1-\beta\) (power of the test) |
A “powerful” hypothesis test is one that has a small value of \(\beta\), while still keeping \(\alpha\) fixed at some (small) desired level. By convention, scientists make use of three different \(\alpha\) levels: \(.05\), \(.01\) and \(.001\). Notice the asymmetry here… the tests are designed to ensure that the \(\alpha\) level is kept small, but there’s no corresponding guarantee regarding \(\beta\). We’d certainly like the type II error rate to be small, and we try to design tests that keep it small, but this is very much secondary to the overwhelming need to control the type I error rate. As Blackstone might have said if he were a statistician, it is “better to retain 10 false null hypotheses than to reject a single true one”. To be honest, I don’t know that I agree with this philosophy – there are situations where I think it makes sense, and situations where I think it doesn’t – but that’s neither here nor there. It’s how the tests are built.
## 11.3 Test statistics and sampling distributions
At this point we need to start talking specifics about how a hypothesis test is constructed. To that end, let’s return to the ESP example. Let’s ignore the actual data that we obtained, for the moment, and think about the structure of the experiment. Regardless of what the actual numbers are, the form of the data is that \(X\) out of \(N\) people correctly identified the colour of the hidden card. Moreover, let’s suppose for the moment that the null hypothesis really is true: ESP doesn’t exist, and the true probability that anyone picks the correct colour is exactly \(\theta = 0.5\). What would we expect the data to look like? Well, obviously, we’d expect the proportion of people who make the correct response to be pretty close to 50%. Or, to phrase this in more mathematical terms, we’d say that \(X/N\) is approximately \(0.5\). Of course, we wouldn’t expect this fraction to be exactly 0.5: if, for example we tested \(N=100\) people, and \(X = 53\) of them got the question right, we’d probably be forced to concede that the data are quite consistent with the null hypothesis. On the other hand, if \(X = 99\) of our participants got the question right, then we’d feel pretty confident that the null hypothesis is wrong. Similarly, if only \(X=3\) people got the answer right, we’d be similarly confident that the null was wrong. Let’s be a little more technical about this: we have a quantity \(X\) that we can calculate by looking at our data; after looking at the value of \(X\), we make a decision about whether to believe that the null hypothesis is correct, or to reject the null hypothesis in favour of the alternative. The name for this thing that we calculate to guide our choices is a test statistic.
Having chosen a test statistic, the next step is to state precisely which values of the test statistic would cause us to reject the null hypothesis, and which values would cause us to keep it. In order to do so, we need to determine what the sampling distribution of the test statistic would be if the null hypothesis were actually true (we talked about sampling distributions earlier in Section 10.3.1). Why do we need this? Because this distribution tells us exactly what values of \(X\) our null hypothesis would lead us to expect. And therefore, we can use this distribution as a tool for assessing how closely the null hypothesis agrees with our data.
How do we actually determine the sampling distribution of the test statistic? For a lot of hypothesis tests this step is actually quite complicated, and later on in the book you’ll see me being slightly evasive about it for some of the tests (some of them I don’t even understand myself). However, sometimes it’s very easy. And, fortunately for us, our ESP example provides us with one of the easiest cases. Our population parameter \(\theta\) is just the overall probability that people respond correctly when asked the question, and our test statistic \(X\) is the count of the number of people who did so, out of a sample size of \(N\). We’ve seen a distribution like this before, in Section 9.4: that’s exactly what the binomial distribution describes! So, to use the notation and terminology that I introduced in that section, we would say that the null hypothesis predicts that \(X\) is binomially distributed, which is written \[ X \sim \mbox{Binomial}(\theta,N) \] Since the null hypothesis states that \(\theta = 0.5\) and our experiment has \(N=100\) people, we have the sampling distribution we need. This sampling distribution is plotted in Figure 11.1. No surprises really: the null hypothesis says that \(X=50\) is the most likely outcome, and it says that we’re almost certain to see somewhere between 40 and 60 correct responses.
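If you want to check that last claim for yourself, here’s a quick sketch using the `dbinom()` function we met in Section 9.4:

```
# probability, under the null, of observing between 40 and 60 correct responses
sum( dbinom( x = 40:60, size = 100, prob = 0.5 ) )   # roughly 0.96
```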
## 11.4 Making decisions
Okay, we’re very close to being finished. We’ve constructed a test statistic (\(X\)), and we chose this test statistic in such a way that we’re pretty confident that if \(X\) is close to \(N/2\) then we should retain the null, and if not we should reject it. The question that remains is this: exactly which values of the test statistic should we associate with the null hypothesis, and exactly which values go with the alternative hypothesis? In my ESP study, for example, I’ve observed a value of \(X=62\). What decision should I make? Should I choose to believe the null hypothesis, or the alternative hypothesis?
### 11.4.1 Critical regions and critical values
To answer this question, we need to introduce the concept of a critical region for the test statistic \(X\). The critical region of the test corresponds to those values of \(X\) that would lead us to reject the null hypothesis (which is why the critical region is also sometimes called the rejection region). How do we find this critical region? Well, let’s consider what we know:
* \(X\) should be very big or very small in order to reject the null hypothesis.
* If the null hypothesis is true, the sampling distribution of \(X\) is Binomial\((0.5, N)\).
* If \(\alpha =.05\), the critical region must cover 5% of this sampling distribution.
It’s important to make sure you understand this last point: the critical region corresponds to those values of \(X\) for which we would reject the null hypothesis, and the sampling distribution in question describes the probability that we would obtain a particular value of \(X\) if the null hypothesis were actually true. Now, let’s suppose that we chose a critical region that covers 20% of the sampling distribution, and suppose that the null hypothesis is actually true. What would be the probability of incorrectly rejecting the null? The answer is of course 20%. And therefore, we would have built a test that had an \(\alpha\) level of \(0.2\). If we want \(\alpha = .05\), the critical region is only allowed to cover 5% of the sampling distribution of our test statistic.
As it turns out, those three things uniquely solve the problem: our critical region consists of the most extreme values, known as the tails of the distribution. This is illustrated in Figure 11.2. As it turns out, if we want \(\alpha = .05\), then our critical regions correspond to \(X \leq 40\) and \(X \geq 60\).162 That is, if the number of people giving the correct answer is between 41 and 59, then we should retain the null hypothesis. If the number is between 0 and 40 or between 60 and 100, then we should reject the null hypothesis. The numbers 40 and 60 are often referred to as the critical values, since they define the edges of the critical region.
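If you want to find those critical values for yourself, one quick way is to ask `qbinom()` for the 2.5th and 97.5th percentiles of the null distribution. Treat this as a rough sketch rather than the definitive calculation: because the binomial distribution is discrete, the tails these values carve off aren’t exactly 2.5% each.

```
qbinom( p = c(.025, .975), size = 100, prob = .5 )   # gives 40 and 60, the critical values above
```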
At this point, our hypothesis test is essentially complete: (1) we choose an \(\alpha\) level (e.g., \(\alpha = .05\)), (2) come up with some test statistic (e.g., \(X\)) that does a good job (in some meaningful sense) of comparing \(H_0\) to \(H_1\), (3) figure out the sampling distribution of the test statistic on the assumption that the null hypothesis is true (in this case, binomial) and then (4) calculate the critical region that produces an appropriate \(\alpha\) level (0-40 and 60-100). All that we have to do now is calculate the value of the test statistic for the real data (e.g., \(X = 62\)) and then compare it to the critical values to make our decision. Since 62 is greater than the critical value of 60, we would reject the null hypothesis. Or, to phrase it slightly differently, we say that the test has produced a significant result.
### 11.4.2 A note on statistical “significance”
Like other occult techniques of divination, the statistical method has a private jargon deliberately contrived to obscure its methods from non-practitioners.
– Attributed to <NAME>163
A very brief digression is in order at this point, regarding the word “significant”. The concept of statistical significance is actually a very simple one, but has a very unfortunate name. If the data allow us to reject the null hypothesis, we say that “the result is statistically significant”, which is often shortened to “the result is significant”. This terminology is rather old, and dates back to a time when “significant” just meant something like “indicated”, rather than its modern meaning, which is much closer to “important”. As a result, a lot of modern readers get very confused when they start learning statistics, because they think that a “significant result” must be an important one. It doesn’t mean that at all. All that “statistically significant” means is that the data allowed us to reject a null hypothesis. Whether or not the result is actually important in the real world is a very different question, and depends on all sorts of other things.
### 11.4.3 The difference between one sided and two sided tests
There’s one more thing I want to point out about the hypothesis test that I’ve just constructed. If we take a moment to think about the statistical hypotheses I’ve been using, \[ \begin{array}{cc} H_0 : & \theta = .5 \\ H_1 : & \theta \neq .5 \end{array} \] we notice that the alternative hypothesis covers both the possibility that \(\theta < .5\) and the possibility that \(\theta > .5\). This makes sense if I really think that ESP could produce better-than-chance performance or worse-than-chance performance (and there are some people who think that). In statistical language, this is an example of a two-sided test. It’s called this because the alternative hypothesis covers the area on both “sides” of the null hypothesis, and as a consequence the critical region of the test covers both tails of the sampling distribution (2.5% on either side if \(\alpha =.05\)), as illustrated earlier in Figure 11.2.
However, that’s not the only possibility. It might be the case, for example, that I’m only willing to believe in ESP if it produces better than chance performance. If so, then my alternative hypothesis would only cover the possibility that \(\theta > .5\), and as a consequence the null hypothesis now becomes \(\theta \leq .5\): \[ \begin{array}{cc} H_0 : & \theta \leq .5 \\ H_1 : & \theta > .5 \end{array} \] When this happens, we have what’s called a one-sided test, and when this happens the critical region only covers one tail of the sampling distribution. This is illustrated in Figure 11.3.
## 11.5 The \(p\) value of a test
In one sense, our hypothesis test is complete; we’ve constructed a test statistic, figured out its sampling distribution if the null hypothesis is true, and then constructed the critical region for the test. Nevertheless, I’ve actually omitted the most important number of all: the \(p\) value. It is to this topic that we now turn. There are two somewhat different ways of interpreting a \(p\) value, one proposed by <NAME> and the other by <NAME>. Both versions are legitimate, though they reflect very different ways of thinking about hypothesis tests. Most introductory textbooks tend to give Fisher’s version only, but I think that’s a bit of a shame. To my mind, Neyman’s version is cleaner, and actually better reflects the logic of the null hypothesis test. You might disagree though, so I’ve included both. I’ll start with Neyman’s version…
### 11.5.1 A softer view of decision making
One problem with the hypothesis testing procedure that I’ve described is that it makes no distinction at all between a result that is “barely significant” and one that is “highly significant”. For instance, in my ESP study the data I obtained only just fell inside the critical region - so I did get a significant effect, but it was a pretty near thing. In contrast, suppose that I’d run a study in which \(X=97\) out of my \(N=100\) participants got the answer right. This would obviously be significant too, but by a much larger margin; there’s really no ambiguity about this at all. The procedure that I described makes no distinction between the two. If I adopt the standard convention of allowing \(\alpha = .05\) as my acceptable Type I error rate, then both of these are significant results.
This is where the \(p\) value comes in handy. To understand how it works, let’s suppose that we ran lots of hypothesis tests on the same data set: but with a different value of \(\alpha\) in each case. When we do that for my original ESP data, what we’d get is something like this
| Value of \(\alpha\) | Reject the null? |
| --- | --- |
| 0.05 | Yes |
| 0.04 | Yes |
| 0.03 | Yes |
| 0.02 | No |
| 0.01 | No |
When we test the ESP data (\(X=62\) successes out of \(N=100\) observations) using \(\alpha\) levels of .03 and above, we’d always find ourselves rejecting the null hypothesis. For \(\alpha\) levels of .02 and below, we always end up retaining the null hypothesis. Therefore, somewhere between .02 and .03 there must be a smallest value of \(\alpha\) that would allow us to reject the null hypothesis for this data. This is the \(p\) value; as it turns out the ESP data has \(p = .021\). In short:
\(p\) is defined to be the smallest Type I error rate (\(\alpha\)) that you have to be willing to tolerate if you want to reject the null hypothesis.
If it turns out that \(p\) describes an error rate that you find intolerable, then you must retain the null. If you’re comfortable with an error rate equal to \(p\), then it’s okay to reject the null hypothesis in favour of your preferred alternative.
In effect, \(p\) is a summary of all the possible hypothesis tests that you could have run, taken across all possible \(\alpha\) values. And as a consequence it has the effect of “softening” our decision process. For those tests in which \(p \leq \alpha\) you would have rejected the null hypothesis, whereas for those tests in which \(p > \alpha\) you would have retained the null. In my ESP study I obtained \(X=62\), and as a consequence I’ve ended up with \(p = .021\). So the error rate I have to tolerate is 2.1%. In contrast, suppose my experiment had yielded \(X=97\). What happens to my \(p\) value now? This time it’s shrunk to \(p = 1.36 \times 10^{-25}\), which is a tiny, tiny164 Type I error rate. For this second case I would be able to reject the null hypothesis with a lot more confidence, because I only have to be “willing” to tolerate a type I error rate of about 1 in 10 trillion trillion in order to justify my decision to reject.
### 11.5.2 The probability of extreme data
The second definition of the \(p\)-value comes from <NAME>, and it’s actually this one that you tend to see in most introductory statistics textbooks. Notice how, when I constructed the critical region, it corresponded to the tails (i.e., extreme values) of the sampling distribution? That’s not a coincidence: almost all “good” tests have this characteristic (good in the sense of minimising our type II error rate, \(\beta\)). The reason for that is that a good critical region almost always corresponds to those values of the test statistic that are least likely to be observed if the null hypothesis is true. If this rule is true, then we can define the \(p\)-value as the probability that we would have observed a test statistic that is at least as extreme as the one we actually did get. In other words, if the data are extremely implausible according to the null hypothesis, then the null hypothesis is probably wrong.
### 11.5.3 A common mistake
Okay, so you can see that there are two rather different but legitimate ways to interpret the \(p\) value, one based on Neyman’s approach to hypothesis testing and the other based on Fisher’s. Unfortunately, there is a third explanation that people sometimes give, especially when they’re first learning statistics, and it is absolutely and completely wrong. This mistaken approach is to refer to the \(p\) value as “the probability that the null hypothesis is true”. It’s an intuitively appealing way to think, but it’s wrong in two key respects: (1) null hypothesis testing is a frequentist tool, and the frequentist approach to probability does not allow you to assign probabilities to the null hypothesis… according to this view of probability, the null hypothesis is either true or it is not; it cannot have a “5% chance” of being true. (2) even within the Bayesian approach, which does let you assign probabilities to hypotheses, the \(p\) value would not correspond to the probability that the null is true; this interpretation is entirely inconsistent with the mathematics of how the \(p\) value is calculated. Put bluntly, despite the intuitive appeal of thinking this way, there is no justification for interpreting a \(p\) value this way. Never do it.
## 11.6 Reporting the results of a hypothesis test
When writing up the results of a hypothesis test, there are usually several pieces of information that you need to report, but it varies a fair bit from test to test. Throughout the rest of the book I’ll spend a little time talking about how to report the results of different tests (see Section 12.1.9 for a particularly detailed example), so that you can get a feel for how it’s usually done. However, regardless of what test you’re doing, the one thing that you always have to do is say something about the \(p\) value, and whether or not the outcome was significant.
The fact that you have to do this is unsurprising; it’s the whole point of doing the test. What might be surprising is the fact that there is some contention over exactly how you’re supposed to do it. Leaving aside those people who completely disagree with the entire framework underpinning null hypothesis testing, there’s a certain amount of tension that exists regarding whether or not to report the exact \(p\) value that you obtained, or if you should state only that \(p < \alpha\) for a significance level that you chose in advance (e.g., \(p<.05\)).
### 11.6.1 The issue
To see why this is an issue, the key thing to recognise is that \(p\) values are terribly convenient. In practice, the fact that we can compute a \(p\) value means that we don’t actually have to specify any \(\alpha\) level at all in order to run the test. Instead, what you can do is calculate your \(p\) value and interpret it directly: if you get \(p = .062\), then it means that you’d have to be willing to tolerate a Type I error rate of 6.2% to justify rejecting the null. If you personally find 6.2% intolerable, then you retain the null. Therefore, the argument goes, why don’t we just report the actual \(p\) value and let the reader make up their own minds about what an acceptable Type I error rate is? This approach has the big advantage of “softening” the decision making process – in fact, if you accept the Neyman definition of the \(p\) value, that’s the whole point of the \(p\) value. We no longer have a fixed significance level of \(\alpha = .05\) as a bright line separating “accept” from “reject” decisions; and this removes the rather pathological problem of being forced to treat \(p = .051\) in a fundamentally different way to \(p = .049\).
This flexibility is both the advantage and the disadvantage to the \(p\) value. The reason why a lot of people don’t like the idea of reporting an exact \(p\) value is that it gives the researcher a bit too much freedom. In particular, it lets you change your mind about what error tolerance you’re willing to put up with after you look at the data. For instance, consider my ESP experiment. Suppose I ran my test, and ended up with a \(p\) value of .09. Should I accept or reject? Now, to be honest, I haven’t yet bothered to think about what level of Type I error I’m “really” willing to accept. I don’t have an opinion on that topic. But I do have an opinion about whether or not ESP exists, and I definitely have an opinion about whether my research should be published in a reputable scientific journal. And amazingly, now that I’ve looked at the data I’m starting to think that a 9% error rate isn’t so bad, especially when compared to how annoying it would be to have to admit to the world that my experiment has failed. So, to avoid looking like I just made it up after the fact, I now say that my \(\alpha\) is .1: a 10% type I error rate isn’t too bad, and at that level my test is significant! I win.
In other words, the worry here is that I might have the best of intentions, and be the most honest of people, but the temptation to just “shade” things a little bit here and there is really, really strong. As anyone who has ever run an experiment can attest, it’s a long and difficult process, and you often get very attached to your hypotheses. It’s hard to let go and admit the experiment didn’t find what you wanted it to find. And that’s the danger here. If we use the “raw” \(p\)-value, people will start interpreting the data in terms of what they want to believe, not what the data are actually saying… and if we allow that, well, why are we bothering to do science at all? Why not let everyone believe whatever they like about anything, regardless of what the facts are? Okay, that’s a bit extreme, but that’s where the worry comes from. According to this view, you really must specify your \(\alpha\) value in advance, and then only report whether the test was significant or not. It’s the only way to keep ourselves honest.
### 11.6.2 Two proposed solutions
In practice, it’s pretty rare for a researcher to specify a single \(\alpha\) level ahead of time. Instead, the convention is that scientists rely on three standard significance levels: .05, .01 and .001. When reporting your results, you indicate which (if any) of these significance levels allow you to reject the null hypothesis. This is summarised in Table 11.1. This allows us to soften the decision rule a little bit, since \(p<.01\) implies that the data meet a stronger evidentiary standard than \(p<.05\) would. Nevertheless, since these levels are fixed in advance by convention, it does prevent people choosing their \(\alpha\) level after looking at the data.
| Usual notation | Signif. stars | English translation | The null is… |
| --- | --- | --- | --- |
| \(p>.05\) | | The test wasn't significant | Retained |
| \(p<.05\) | * | The test was significant at \(\alpha = .05\), but not at \(\alpha = .01\) or \(\alpha = .001\) | Rejected |
| \(p<.01\) | ** | The test was significant at \(\alpha = .05\) and \(\alpha = .01\), but not at \(\alpha = .001\) | Rejected |
| \(p<.001\) | *** | The test was significant at all levels | Rejected |
Nevertheless, quite a lot of people still prefer to report exact \(p\) values. To many people, the advantage of allowing the reader to make up their own mind about how to interpret \(p = .06\) outweighs any disadvantages. In practice, however, even among those researchers who prefer exact \(p\) values it is quite common to just write \(p<.001\) instead of reporting an exact value for small \(p\). This is in part because a lot of software doesn’t actually print out the \(p\) value when it’s that small (e.g., SPSS just writes \(p = .000\) whenever \(p<.001\)), and in part because a very small \(p\) value can be kind of misleading. The human mind sees a number like .0000000001 and it’s hard to suppress the gut feeling that the evidence in favour of the alternative hypothesis is a near certainty. In practice however, this is usually wrong. Life is a big, messy, complicated thing: and every statistical test ever invented relies on simplifications, approximations and assumptions. As a consequence, it’s probably not reasonable to walk away from any statistical analysis with a feeling of confidence stronger than \(p<.001\) implies. In other words, \(p<.001\) is really code for “as far as this test is concerned, the evidence is overwhelming.”
In light of all this, you might be wondering exactly what you should do. There’s a fair bit of contradictory advice on the topic, with some people arguing that you should report the exact \(p\) value, and other people arguing that you should use the tiered approach illustrated in Table 11.1. As a result, the best advice I can give is to suggest that you look at papers/reports written in your field and see what the convention seems to be. If there doesn’t seem to be any consistent pattern, then use whichever method you prefer.
## 11.7 Running the hypothesis test in practice
At this point some of you might be wondering if this is a “real” hypothesis test, or just a toy example that I made up. It’s real. In the previous discussion I built the test from first principles, thinking that it was the simplest possible problem that you might ever encounter in real life. However, this test already exists: it’s called the binomial test, and it’s implemented by an R function called `binom.test()` . To test the null hypothesis that the response probability is one-half `p = .5` ,165 using data in which `x = 62` of `n = 100` people made the correct response, here’s how to do it in R:
```
binom.test( x=62, n=100, p=.5 )
```
```
##
## Exact binomial test
##
## data: 62 and 100
## number of successes = 62, number of trials = 100, p-value =
## 0.02098
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.5174607 0.7152325
## sample estimates:
## probability of success
## 0.62
```
Right now, this output looks pretty unfamiliar to you, but you can see that it’s telling you more or less the right things. Specifically, the \(p\)-value of 0.02 is less than the usual choice of \(\alpha = .05\), so you can reject the null. We’ll talk a lot more about how to read this sort of output as we go along; and after a while you’ll hopefully find it quite easy to read and understand. For now, however, I just wanted to make the point that R contains a whole lot of functions corresponding to different kinds of hypothesis test. And while I’ll usually spend quite a lot of time explaining the logic behind how the tests are built, every time I discuss a hypothesis test the discussion will end with me showing you a fairly simple R command that you can use to run the test in practice.
## 11.8 Effect size, sample size and power
In previous sections I’ve emphasised the fact that the major design principle behind statistical hypothesis testing is that we try to control our Type I error rate. When we fix \(\alpha = .05\) we are attempting to ensure that only 5% of true null hypotheses are incorrectly rejected. However, this doesn’t mean that we don’t care about Type II errors. In fact, from the researcher’s perspective, the error of failing to reject the null when it is actually false is an extremely annoying one. With that in mind, a secondary goal of hypothesis testing is to try to minimise \(\beta\), the Type II error rate, although we don’t usually talk in terms of minimising Type II errors. Instead, we talk about maximising the power of the test. Since power is defined as \(1-\beta\), this is the same thing.
### 11.8.1 The power function
Let’s take a moment to think about what a Type II error actually is. A Type II error occurs when the alternative hypothesis is true, but we are nevertheless unable to reject the null hypothesis. Ideally, we’d be able to calculate a single number \(\beta\) that tells us the Type II error rate, in the same way that we can set \(\alpha = .05\) for the Type I error rate. Unfortunately, this is a lot trickier to do. To see this, notice that in my ESP study the alternative hypothesis actually corresponds to lots of possible values of \(\theta\). In fact, the alternative hypothesis corresponds to every value of \(\theta\) except 0.5. Let’s suppose that the true probability of someone choosing the correct response is 55% (i.e., \(\theta = .55\)). If so, then the true sampling distribution for \(X\) is not the same one that the null hypothesis predicts: the most likely value for \(X\) is now 55 out of 100. Not only that, the whole sampling distribution has now shifted, as shown in Figure 11.4. The critical regions, of course, do not change: by definition, the critical regions are based on what the null hypothesis predicts. What we’re seeing in this figure is the fact that when the null hypothesis is wrong, a much larger proportion of the sampling distribution falls in the critical region. And of course that’s what should happen: the probability of rejecting the null hypothesis is larger when the null hypothesis is actually false! However, \(\theta = .55\) is not the only possibility consistent with the alternative hypothesis. Let’s instead suppose that the true value of \(\theta\) is actually 0.7. What happens to the sampling distribution when this occurs? The answer, shown in Figure 11.5, is that almost the entirety of the sampling distribution has now moved into the critical region. Therefore, if \(\theta = 0.7\) the probability of us correctly rejecting the null hypothesis (i.e., the power of the test) is much larger than if \(\theta = 0.55\). In short, while \(\theta = .55\) and \(\theta = .70\) are both part of the alternative hypothesis, the Type II error rate is different.
What all this means is that the power of a test (i.e., \(1-\beta\)) depends on the true value of \(\theta\). To illustrate this, I’ve calculated the expected probability of rejecting the null hypothesis for all values of \(\theta\), and plotted it in Figure 11.6. This plot describes what is usually called the power function of the test. It’s a nice summary of how good the test is, because it actually tells you the power (\(1-\beta\)) for all possible values of \(\theta\). As you can see, when the true value of \(\theta\) is very close to 0.5, the power of the test drops very sharply, but when it is further away, the power is large.
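Just to make this a little more concrete, here’s a rough sketch of how a power function like the one in Figure 11.6 could be computed. This isn’t necessarily the code used to draw that figure; it simply assumes the critical region we used earlier in the chapter (with \(N = 100\) trials, reject the null whenever \(X \leq 40\) or \(X \geq 60\)):

```
# power of the test across a range of true values of theta, assuming that
# we reject the null whenever X <= 40 or X >= 60 (the earlier critical region)
theta <- seq( from = .01, to = .99, by = .01 )                 # candidate true values of theta
power <- pbinom( 40, size = 100, prob = theta ) +              # P( X <= 40 )
  pbinom( 59, size = 100, prob = theta, lower.tail = FALSE )   # P( X >= 60 )
plot( theta, power, type = "l",
      xlab = "True value of theta", ylab = "Probability of rejecting the null" )
```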
### 11.8.2 Effect size
Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned with mice when there are tigers abroad
– George Box 1976
The plot shown in Figure 11.6 captures a fairly basic point about hypothesis testing. If the true state of the world is very different from what the null hypothesis predicts, then your power will be very high; but if the true state of the world is similar to the null (but not identical) then the power of the test is going to be very low. Therefore, it’s useful to be able to have some way of quantifying how “similar” the true state of the world is to the null hypothesis. A statistic that does this is called a measure of effect size (e.g. Cohen 1988; Ellis 2010). Effect size is defined slightly differently in different contexts,166 (and so this section just talks in general terms) but the qualitative idea that it tries to capture is always the same: how big is the difference between the true population parameters, and the parameter values that are assumed by the null hypothesis? In our ESP example, if we let \(\theta_0 = 0.5\) denote the value assumed by the null hypothesis, and let \(\theta\) denote the true value, then a simple measure of effect size could be something like the difference between the true value and null (i.e., \(\theta - \theta_0\)), or possibly just the magnitude of this difference, \(\mbox{abs}(\theta - \theta_0)\).
  | big effect size | small effect size |
| --- | --- | --- |
significant result | difference is real, and of practical importance | difference is real, but might not be interesting |
non-significant result | no effect observed | no effect observed |
Why calculate effect size? Let’s assume that you’ve run your experiment, collected the data, and gotten a significant effect when you ran your hypothesis test. Isn’t it enough just to say that you’ve gotten a significant effect? Surely that’s the point of hypothesis testing? Well, sort of. Yes, the point of doing a hypothesis test is to try to demonstrate that the null hypothesis is wrong, but that’s hardly the only thing we’re interested in. If the null hypothesis claimed that \(\theta = .5\), and we show that it’s wrong, we’ve only really told half of the story. Rejecting the null hypothesis implies that we believe that \(\theta \neq .5\), but there’s a big difference between \(\theta = .51\) and \(\theta = .8\). If we find that \(\theta = .8\), then not only have we found that the null hypothesis is wrong, it appears to be very wrong. On the other hand, suppose we’ve successfully rejected the null hypothesis, but it looks like the true value of \(\theta\) is only .51 (this would only be possible with a large study). Sure, the null hypothesis is wrong, but it’s not at all clear that we actually care, because the effect size is so small. In the context of my ESP study we might still care, since any demonstration of real psychic powers would actually be pretty cool167, but in other contexts a 1% difference isn’t very interesting, even if it is a real difference. For instance, suppose we’re looking at differences in high school exam scores between males and females, and it turns out that the female scores are 1% higher on average than the males. If I’ve got data from thousands of students, then this difference will almost certainly be statistically significant, but regardless of how small the \(p\) value is it’s just not very interesting. You’d hardly want to go around proclaiming a crisis in boys’ education on the basis of such a tiny difference, would you? It’s for this reason that it is becoming more standard (slowly, but surely) to report some kind of standard measure of effect size along with the results of the hypothesis test. The hypothesis test itself tells you whether you should believe that the effect you have observed is real (i.e., not just due to chance); the effect size tells you whether or not you should care.
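Just to drive the point home, here’s a toy calculation. The numbers are made up purely for illustration: suppose the true value of \(\theta\) were .51, and suppose I’d somehow managed to test 100,000 people.

```
# a made-up illustration: a tiny effect (theta = .51 rather than .50) still
# produces a "significant" result once the sample size is large enough
binom.test( x = 51000, n = 100000, p = .5 )$p.value
# the p-value is far below .001: statistically significant, but trivially small
```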
### 11.8.3 Increasing the power of your study
Not surprisingly, scientists are fairly obsessed with maximising the power of their experiments. We want our experiments to work, and so we want to maximise the chance of rejecting the null hypothesis if it is false (and of course we usually want to believe that it is false!) As we’ve seen, one factor that influences power is the effect size. So the first thing you can do to increase your power is to increase the effect size. In practice, what this means is that you want to design your study in such a way that the effect size gets magnified. For instance, in my ESP study I might believe that psychic powers work best in a quiet, darkened room; with fewer distractions to cloud the mind. Therefore I would try to conduct my experiments in just such an environment: if I can strengthen people’s ESP abilities somehow, then the true value of \(\theta\) will go up168 and therefore my effect size will be larger. In short, clever experimental design is one way to boost power; because it can alter the effect size.
Unfortunately, it’s often the case that even with the best of experimental designs you may have only a small effect. Perhaps, for example, ESP really does exist, but even under the best of conditions it’s very very weak. Under those circumstances, your best bet for increasing power is to increase the sample size. In general, the more observations that you have available, the more likely it is that you can discriminate between two hypotheses. If I ran my ESP experiment with 10 participants, and 7 of them correctly guessed the colour of the hidden card, you wouldn’t be terribly impressed. But if I ran it with 10,000 participants and 7,000 of them got the answer right, you would be much more likely to think I had discovered something. In other words, power increases with the sample size. This is illustrated in Figure 11.7, which shows the power of the test for a true parameter of \(\theta = 0.7\), for all sample sizes \(N\) from 1 to 100, where I’m assuming that the null hypothesis predicts that \(\theta_0 = 0.5\).
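In case you’re wondering how a figure like 11.7 could be produced, here’s a rough sketch (not necessarily the code used for that figure). For each sample size \(N\) it works out the probability, assuming that \(\theta = 0.7\) is true, that `binom.test()` would return \(p < .05\):

```
# a sketch of power as a function of sample size, assuming the true theta is 0.7
# and that we reject the null whenever binom.test() reports p < .05
power.by.n <- sapply( 1:100, function(N) {
  x <- 0:N                                         # all possible numbers of correct responses
  p.outcome <- dbinom( x, size = N, prob = .7 )    # probability of each outcome if theta = .7
  p.value <- sapply( x, function(k) binom.test( k, n = N, p = .5 )$p.value )
  sum( p.outcome[ p.value < .05 ] )                # power = P( reject the null | theta = .7 )
})
plot( 1:100, power.by.n, type = "l", xlab = "Sample size, N", ylab = "Power" )
```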
Because power is important, whenever you’re contemplating running an experiment it would be pretty useful to know how much power you’re likely to have. It’s never possible to know for sure, since you can’t possibly know what your effect size is. However, it’s often (well, sometimes) possible to guess how big it should be. If so, you can guess what sample size you need! This idea is called power analysis, and if it’s feasible to do it, then it’s very helpful, since it can tell you something about whether you have enough time or money to be able to run the experiment successfully. It’s increasingly common to see people arguing that power analysis should be a required part of experimental design, so it’s worth knowing about. I don’t discuss power analysis in this book, however. This is partly for a boring reason and partly for a substantive one. The boring reason is that I haven’t had time to write about power analysis yet. The substantive one is that I’m still a little suspicious of power analysis. Speaking as a researcher, I have very rarely found myself in a position to be able to do one – it’s either the case that (a) my experiment is a bit non-standard and I don’t know how to define effect size properly, or (b) I literally have so little idea about what the effect size will be that I wouldn’t know how to interpret the answers. Not only that, after extensive conversations with someone who does stats consulting for a living (my wife, as it happens), I can’t help but notice that in practice the only time anyone ever asks her for a power analysis is when she’s helping someone write a grant application. In other words, the only time any scientist ever seems to want a power analysis in real life is when they’re being forced to do it by bureaucratic process. It’s not part of anyone’s day to day work. In short, I’ve always been of the view that while power is an important concept, power analysis is not as useful as people make it sound, except in the rare cases where (a) someone has figured out how to calculate power for your actual experimental design and (b) you have a pretty good idea what the effect size is likely to be. Maybe other people have had better experiences than me, but I’ve personally never been in a situation where both (a) and (b) were true. Maybe I’ll be convinced otherwise in the future, and probably a future version of this book would include a more detailed discussion of power analysis, but for now this is about as much as I’m comfortable saying about the topic.
## 11.9 Some issues to consider
What I’ve described to you in this chapter is the orthodox framework for null hypothesis significance testing (NHST). Understanding how NHST works is an absolute necessity, since it has been the dominant approach to inferential statistics ever since it came to prominence in the early 20th century. It’s what the vast majority of working scientists rely on for their data analysis, so even if you hate it you need to know it. However, the approach is not without problems. There are a number of quirks in the framework, historical oddities in how it came to be, theoretical disputes over whether or not the framework is right, and a lot of practical traps for the unwary. I’m not going to go into a lot of detail on this topic, but I think it’s worth briefly discussing a few of these issues.
### 11.9.1 Neyman versus Fisher
The first thing you should be aware of is that orthodox NHST is actually a mash-up of two rather different approaches to hypothesis testing, one proposed by Sir Ronald Fisher and the other proposed by Jerzy Neyman (for a historical summary see Lehmann 2011). The history is messy because Fisher and Neyman were real people whose opinions changed over time, and at no point did either of them offer “the definitive statement” of how we should interpret their work many decades later. That said, here’s a quick summary of what I take these two approaches to be.
First, let’s talk about Fisher’s approach. As far as I can tell, Fisher assumed that you only had the one hypothesis (the null), and what you want to do is find out if the null hypothesis is inconsistent with the data. From his perspective, what you should do is check to see if the data are “sufficiently unlikely” according to the null. In fact, if you remember back to our earlier discussion, that’s how Fisher defines the \(p\)-value. According to Fisher, if the null hypothesis provided a very poor account of the data, you could safely reject it. But, since you don’t have any other hypotheses to compare it to, there’s no way of “accepting the alternative” because you don’t necessarily have an explicitly stated alternative. That’s more or less all that there was to it.
In contrast, Neyman thought that the point of hypothesis testing was as a guide to action, and his approach was somewhat more formal than Fisher’s. His view was that there are multiple things that you could do (accept the null or accept the alternative) and the point of the test was to tell you which one the data support. From this perspective, it is critical to specify your alternative hypothesis properly. If you don’t know what the alternative hypothesis is, then you don’t know how powerful the test is, or even which action makes sense. His framework genuinely requires a competition between different hypotheses. For Neyman, the \(p\) value didn’t directly measure the probability of the data (or data more extreme) under the null, it was more of an abstract description about which “possible tests” were telling you to accept the null, and which “possible tests” were telling you to accept the alternative.
As you can see, what we have today is an odd mishmash of the two. We talk about having both a null hypothesis and an alternative (Neyman), but usually169 define the \(p\) value in terms of extreme data (Fisher), and we still have \(\alpha\) values (Neyman). Some of the statistical tests have explicitly specified alternatives (Neyman) but others are quite vague about it (Fisher). And, according to some people at least, we’re not allowed to talk about accepting the alternative (Fisher). It’s a mess: but I hope this at least explains why it’s a mess.
### 11.9.2 Bayesians versus frequentists
Earlier on in this chapter I was quite emphatic about the fact that you cannot interpret the \(p\) value as the probability that the null hypothesis is true. NHST is fundamentally a frequentist tool (see Chapter 9) and as such it does not allow you to assign probabilities to hypotheses: the null hypothesis is either true or it is not. The Bayesian approach to statistics interprets probability as a degree of belief, so it’s totally okay to say that there is a 10% chance that the null hypothesis is true: that’s just a reflection of the degree of confidence that you have in this hypothesis. You aren’t allowed to do this within the frequentist approach. Remember, if you’re a frequentist, a probability can only be defined in terms of what happens after a large number of independent replications (i.e., a long run frequency). If this is your interpretation of probability, talking about the “probability” that the null hypothesis is true is complete gibberish: a null hypothesis is either true or it is false. There’s no way you can talk about a long run frequency for this statement. To talk about “the probability of the null hypothesis” is as meaningless as “the colour of freedom”. It doesn’t have one!
Most importantly, this isn’t a purely ideological matter. If you decide that you are a Bayesian and that you’re okay with making probability statements about hypotheses, you have to follow the Bayesian rules for calculating those probabilities. I’ll talk more about this in Chapter 17, but for now what I want to point out to you is that the \(p\) value is a terrible approximation to the probability that \(H_0\) is true. If what you want to know is the probability of the null, then the \(p\) value is not what you’re looking for!
### 11.9.3 Traps
As you can see, the theory behind hypothesis testing is a mess, and even now there are arguments in statistics about how it “should” work. However, disagreements among statisticians are not our real concern here. Our real concern is practical data analysis. And while the “orthodox” approach to null hypothesis significance testing has many drawbacks, even an unrepentant Bayesian like myself would agree that these tests can be useful if used responsibly. Most of the time they give sensible answers, and you can use them to learn interesting things. Setting aside the various ideologies and historical confusions that we’ve discussed, the fact remains that the biggest danger in all of statistics is thoughtlessness. I don’t mean stupidity, here: I literally mean thoughtlessness. The rush to interpret a result without spending time thinking through what each test actually says about the data, and without checking whether that’s consistent with how you’ve interpreted it. That’s where the biggest trap lies.
To give an example of this, consider the following scenario (see Gelman and Stern 2006). Suppose I’m running my ESP study, and I’ve decided to analyse the data separately for the male participants and the female participants. Of the male participants, 33 out of 50 guessed the colour of the card correctly. This is a significant effect (\(p = .03\)). Of the female participants, 29 out of 50 guessed correctly. This is not a significant effect (\(p = .32\)). Upon observing this, it is extremely tempting for people to start wondering why there is a difference between males and females in terms of their psychic abilities. However, this is wrong. If you think about it, we haven’t actually run a test that explicitly compares males to females. All we have done is compare males to chance (the binomial test was significant) and compare females to chance (the binomial test was non-significant). If we want to argue that there is a real difference between the males and the females, we should probably run a test of the null hypothesis that there is no difference! We can do that using a different hypothesis test,170 but when we do that it turns out that we have no evidence that males and females are significantly different (\(p = .54\)). Now do you think that there’s anything fundamentally different between the two groups? Of course not. What’s happened here is that the data from both groups (male and female) are pretty borderline: by pure chance, one of them happened to end up on the magic side of the \(p = .05\) line, and the other one didn’t. That doesn’t actually imply that males and females are different. This mistake is so common that you should always be wary of it: the difference between significant and not-significant is not evidence of a real difference – if you want to say that there’s a difference between two groups, then you have to test for that difference!
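If you want to check this for yourself, here’s a quick sketch of the three tests just described, using the numbers from the text. For the direct comparison between the two groups I’ve used `prop.test()` (mentioned in the footnote) as a convenient stand-in for the test covered in Chapter 12:

```
binom.test( x = 33, n = 50, p = .5 )$p.value       # males vs chance: about .03, significant
binom.test( x = 29, n = 50, p = .5 )$p.value       # females vs chance: about .32, not significant
prop.test( x = c(33, 29), n = c(50, 50) )$p.value  # males vs females: about .54, not significant
```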
The example above is just that: an example. I’ve singled it out because it’s such a common one, but the bigger picture is that data analysis can be tricky to get right. Think about what it is you want to test, why you want to test it, and whether or not the answers that your test gives could possibly make any sense in the real world.
## 11.10 Summary
Null hypothesis testing is one of the most ubiquitous elements to statistical theory. The vast majority of scientific papers report the results of some hypothesis test or another. As a consequence it is almost impossible to get by in science without having at least a cursory understanding of what a \(p\)-value means, making this one of the most important chapters in the book. As usual, I’ll end the chapter with a quick recap of the key ideas that we’ve talked about:
* Research hypotheses and statistical hypotheses. Null and alternative hypotheses. (Section 11.1).
* Type 1 and Type 2 errors (Section 11.2)
* Test statistics and sampling distributions (Section 11.3)
* Hypothesis testing as a decision making process (Section 11.4)
* \(p\)-values as “soft” decisions (Section 11.5)
* Writing up the results of a hypothesis test (Section 11.6)
* Effect size and power (Section 11.8)
* A few issues to consider regarding hypothesis testing (Section 11.9)
Later in the book, in Chapter 17, I’ll revisit the theory of null hypothesis tests from a Bayesian perspective, and introduce a number of new tools that you can use if you aren’t particularly fond of the orthodox approach. But for now, though, we’re done with the abstract statistical theory, and we can start discussing specific data analysis tools.
* The quote comes from Wittgenstein’s (1922) text, Tractatus Logico-Philosophicus.↩
* A technical note. The description below differs subtly from the standard description given in a lot of introductory texts. The orthodox theory of null hypothesis testing emerged from the work of Sir Ronald Fisher and Jerzy Neyman in the early 20th century; but Fisher and Neyman actually had very different views about how it should work. The standard treatment of hypothesis testing that most texts use is a hybrid of the two approaches. The treatment here is a little more Neyman-style than the orthodox view, especially as regards the meaning of the \(p\) value.↩
* My apologies to anyone who actually believes in this stuff, but on my reading of the literature on ESP, it’s just not reasonable to think this is real. To be fair, though, some of the studies are rigorously designed; so it’s actually an interesting area for thinking about psychological research design. And of course it’s a free country, so you can spend your own time and effort proving me wrong if you like, but I wouldn’t think that’s a terribly practical use of your intellect.↩
* This analogy only works if you’re from an adversarial legal system like UK/US/Australia. As I understand these things, the French inquisitorial system is quite different.↩
* An aside regarding the language you use to talk about hypothesis testing. Firstly, one thing you really want to avoid is the word “prove”: a statistical test really doesn’t prove that a hypothesis is true or false. Proof implies certainty, and as the saying goes, statistics means never having to say you’re certain. On that point almost everyone would agree. However, beyond that there’s a fair amount of confusion. Some people argue that you’re only allowed to make statements like “rejected the null”, “failed to reject the null”, or possibly “retained the null”. According to this line of thinking, you can’t say things like “accept the alternative” or “accept the null”. Personally I think this is too strong: in my opinion, this conflates null hypothesis testing with Karl Popper’s falsificationist view of the scientific process. While there are similarities between falsificationism and null hypothesis testing, they aren’t equivalent. However, while I personally think it’s fine to talk about accepting a hypothesis (on the proviso that “acceptance” doesn’t actually mean that it’s necessarily true, especially in the case of the null hypothesis), many people will disagree. And more to the point, you should be aware that this particular weirdness exists, so that you’re not caught unawares by it when writing up your own results.↩
* Strictly speaking, the test I just constructed has \(\alpha = .057\), which is a bit too generous. However, if I’d chosen 39 and 61 to be the boundaries for the critical region, then the critical region only covers 3.5% of the distribution. I figured that it makes more sense to use 40 and 60 as my critical values, and be willing to tolerate a 5.7% type I error rate, since that’s as close as I can get to a value of \(\alpha = .05\).↩
* The internet seems fairly convinced that Ashley said this, though I can’t for the life of me find anyone willing to give a source for the claim.↩
* That’s \(p = .000000000000000000000000136\) for folks that don’t like scientific notation!↩
* Note that the `p` here has nothing to do with a \(p\) value. The `p` argument in the `binom.test()` function corresponds to the probability of making a correct response, according to the null hypothesis. In other words, it’s the \(\theta\) value.↩
* There’s an R package called `compute.es` that can be used for calculating a very broad range of effect size measures; but for the purposes of the current book we won’t need it: all of the effect size measures that I’ll talk about here have functions in the `lsr` package.↩
* Although in practice a very small effect size is worrying, because even very minor methodological flaws might be responsible for the effect; and in practice no experiment is perfect, so there are always methodological issues to worry about.↩
* Notice that the true population parameter \(\theta\) doesn’t necessarily correspond to an immutable fact of nature. In this context \(\theta\) is just the true probability that people would correctly guess the colour of the card in the other room. As such the population parameter can be influenced by all sorts of things. Of course, this is all on the assumption that ESP actually exists!↩
* Although this book describes both Neyman’s and Fisher’s definition of the \(p\) value, most don’t. Most introductory textbooks will only give you the Fisher version.↩
* In this case, the Pearson chi-square test of independence (Chapter 12; `chisq.test()` in R) is what we use; see also the `prop.test()` function.↩
# Chapter 12 Categorical data analysis
Now that we’ve got the basic theory behind hypothesis testing, it’s time to start looking at specific tests that are commonly used in psychology. So where should we start? Not every textbook agrees on where to start, but I’m going to start with “\(\chi^2\) tests” (this chapter) and “\(t\)-tests” (Chapter 13). Both of these tools are very frequently used in scientific practice, and while they’re not as powerful as “analysis of variance” (Chapter 14) and “regression” (Chapter 15) they’re much easier to understand.
The term “categorical data” is just another name for “nominal scale data”. It’s nothing that we haven’t already discussed, it’s just that in the context of data analysis people tend to use the term “categorical data” rather than “nominal scale data”. I don’t know why. In any case, categorical data analysis refers to a collection of tools that you can use when your data are nominal scale. However, there are a lot of different tools that can be used for categorical data analysis, and this chapter only covers a few of the more common ones.
## 12.1 The \(\chi^2\) goodness-of-fit test
The \(\chi^2\) goodness-of-fit test is one of the oldest hypothesis tests around: it was invented by Karl Pearson around the turn of the century (Pearson 1900), with some corrections made later by Sir Ronald Fisher (Fisher 1922a). To introduce the statistical problem that it addresses, let’s start with some psychology…
### 12.1.1 The cards data
Over the years, there have been a lot of studies showing that humans have a lot of difficulties in simulating randomness. Try as we might to “act” random, we think in terms of patterns and structure, and so when asked to “do something at random”, what people actually do is anything but random. As a consequence, the study of human randomness (or non-randomness, as the case may be) opens up a lot of deep psychological questions about how we think about the world. With this in mind, let’s consider a very simple study. Suppose I asked people to imagine a shuffled deck of cards, and mentally pick one card from this imaginary deck “at random”. After they’ve chosen one card, I ask them to mentally select a second one. For both choices, what we’re going to look at is the suit (hearts, clubs, spades or diamonds) that people chose. After asking, say, \(N=200\) people to do this, I’d like to look at the data and figure out whether or not the cards that people pretended to select were really random. The data are contained in the `randomness.Rdata` file, which contains a single data frame called `cards` . Let’s take a look:
```
library( lsr )
load( file.path(projecthome, "data/randomness.Rdata" ))
str(cards)
```
```
## 'data.frame': 200 obs. of 3 variables:
## $ id : Factor w/ 200 levels "subj1","subj10",..: 1 112 124 135 146 157 168 179 190 2 ...
## $ choice_1: Factor w/ 4 levels "clubs","diamonds",..: 4 2 3 4 3 1 3 2 4 2 ...
## $ choice_2: Factor w/ 4 levels "clubs","diamonds",..: 1 1 1 1 4 3 2 1 1 4 ...
```
As you can see, the `cards` data frame contains three variables, an `id` variable that assigns a unique identifier to each participant, and the two variables `choice_1` and `choice_2` that indicate the card suits that people chose. Here’s the first few entries in the data frame: `head( cards )`
```
## id choice_1 choice_2
## 1 subj1 spades clubs
## 2 subj2 diamonds clubs
## 3 subj3 hearts clubs
## 4 subj4 spades clubs
## 5 subj5 hearts spades
## 6 subj6 clubs hearts
```
For the moment, let’s just focus on the first choice that people made. We’ll use the `table()` function to count the number of times that we observed people choosing each suit. I’ll save the table to a variable called `observed` , for reasons that will become clear very soon:
```
observed <- table( cards$choice_1 )
observed
```
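```
## 
## clubs diamonds hearts spades 
## 35 51 64 50
```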
That little frequency table is quite helpful. Looking at it, there’s a bit of a hint that people might be more likely to select hearts than clubs, but it’s not completely obvious just from looking at it whether that’s really true, or if this is just due to chance. So we’ll probably have to do some kind of statistical analysis to find out, which is what I’m going to talk about in the next section.
Excellent. From this point on, we’ll treat this table as the data that we’re looking to analyse. However, since I’m going to have to talk about this data in mathematical terms (sorry!) it might be a good idea to be clear about what the notation is. In R, if I wanted to pull out the number of people that selected diamonds, I could do it by name by typing `observed["diamonds"]` but, since `"diamonds"` is the second element of the `observed` vector, it’s equally effective to refer to it as `observed[2]` . The mathematical notation for this is pretty similar, except that we shorten the human-readable word “observed” to the letter \(O\), and we use subscripts rather than brackets: so the second observation in our table is written as `observed[2]` in R, and is written as \(O_2\) in maths. The relationship between the English descriptions, the R commands, and the mathematical symbols is illustrated below:
label | index \(i\) | math. symbol | R command | the value |
| --- | --- | --- | --- | --- |
clubs \(\clubsuit\) | 1 | \(O_1\) | `observed[1]` | 35 |
diamonds \(\diamondsuit\) | 2 | \(O_2\) | `observed[2]` | 51 |
hearts \(\heartsuit\) | 3 | \(O_3\) | `observed[3]` | 64 |
spades \(\spadesuit\) | 4 | \(O_4\) | `observed[4]` | 50 |
Hopefully that’s pretty clear. It’s also worth noting that mathematicians prefer to talk about things in general rather than specific things, so you’ll also see the notation \(O_i\), which refers to the number of observations that fall within the \(i\)-th category (where \(i\) could be 1, 2, 3 or 4). Finally, if we want to refer to the set of all observed frequencies, statisticians group all of the observed values into a vector, which I’ll refer to as \(O\). \[ O = (O_1, O_2, O_3, O_4) \] Again, there’s nothing new or interesting here: it’s just notation. If I say that \(O~=~(35, 51, 64, 50)\) all I’m doing is describing the table of observed frequencies (i.e., `observed` ), but I’m referring to it using mathematical notation, rather than by referring to an R variable.
### 12.1.2 The null hypothesis and the alternative hypothesis
As the last section indicated, our research hypothesis is that “people don’t choose cards randomly”. What we’re going to want to do now is translate this into some statistical hypotheses, and construct a statistical test of those hypotheses. The test that I’m going to describe to you is Pearson’s \(\chi^2\) goodness of fit test, and as is so often the case, we have to begin by carefully constructing our null hypothesis. In this case, it’s pretty easy. First, let’s state the null hypothesis in words:
\(H_0\) |
| --- |
All four suits are chosen with equal probability |
Now, because this is statistics, we have to be able to say the same thing in a mathematical way. To do this, let’s use the notation \(P_j\) to refer to the true probability that the \(j\)-th suit is chosen. If the null hypothesis is true, then each of the four suits has a 25% chance of being selected: in other words, our null hypothesis claims that \(P_1 = .25\), \(P_2 = .25\), \(P_3 = .25\) and finally that \(P_4 = .25\). However, in the same way that we can group our observed frequencies into a vector \(O\) that summarises the entire data set, we can use \(P\) to refer to the probabilities that correspond to our null hypothesis. So if I let the vector \(P = (P_1, P_2, P_3, P_4)\) refer to the collection of probabilities that describe our null hypothesis, then we have
\[ H_0: {P} = (.25, .25, .25, .25) \]
In this particular instance, our null hypothesis corresponds to a vector of probabilities \(P\) in which all of the probabilities are equal to one another. But this doesn’t have to be the case. For instance, if the experimental task was for people to imagine they were drawing from a deck that had twice as many clubs as any other suit, then the null hypothesis would correspond to something like \(P = (.4, .2, .2, .2)\). As long as the probabilities are all positive numbers, and they all sum to 1, then it’s a perfectly legitimate choice for the null hypothesis. However, the most common use of the goodness of fit test is to test a null hypothesis that all of the categories are equally likely, so we’ll stick to that for our example.
What about our alternative hypothesis, \(H_1\)? All we’re really interested in is demonstrating that the probabilities involved aren’t all identical (that is, people’s choices weren’t completely random). As a consequence, the “human friendly” versions of our hypotheses look like this:
\(H_0\) | \(H_1\) |
| --- | --- |
All four suits are chosen with equal probability | At least one of the suit-choice probabilities isn’t .25 |
and the “mathematician friendly” version is
\(H_0\) | \(H_1\) |
| --- | --- |
\(P = (.25, .25, .25, .25)\) | \(P \neq (.25,.25,.25,.25)\) |
Conveniently, the mathematical version of the hypotheses looks quite similar to an R command defining a vector. So maybe what I should do is store the \(P\) vector in R as well, since we’re almost certainly going to need it later. And because I’m so imaginative, I’ll call this R vector `probabilities` ,
```
probabilities <- c(clubs = .25, diamonds = .25, hearts = .25, spades = .25)
probabilities
```
```
## clubs diamonds hearts spades
## 0.25 0.25 0.25 0.25
```
### 12.1.3 The “goodness of fit” test statistic
At this point, we have our observed frequencies \(O\) and a collection of probabilities \(P\) corresponding to the null hypothesis that we want to test. We’ve stored these in R as the corresponding variables `observed` and `probabilities` . What we now want to do is construct a test of the null hypothesis. As always, if we want to test \(H_0\) against \(H_1\), we’re going to need a test statistic. The basic trick that a goodness of fit test uses is to construct a test statistic that measures how “close” the data are to the null hypothesis. If the data don’t resemble what you’d “expect” to see if the null hypothesis were true, then it probably isn’t true. Okay, if the null hypothesis were true, what would we expect to see? Or, to use the correct terminology, what are the expected frequencies? There are \(N=200\) observations, and (if the null is true) the probability of any one of them choosing a heart is \(P_3 = .25\), so I guess we’re expecting \(200 \times .25 = 50\) hearts, right? Or, more specifically, if we let \(E_i\) refer to “the number of category \(i\) responses that we’re expecting if the null is true”, then \[
E_i = N \times P_i
\] This is pretty easy to calculate in R:
```
N <- 200 # sample size
expected <- N * probabilities # expected frequencies
expected
```
```
## clubs diamonds hearts spades
## 50 50 50 50
```
None of which is very surprising: if there are 200 observations that can fall into four categories, and we think that all four categories are equally likely, then on average we’d expect to see 50 observations in each category, right?
Now, how do we translate this into a test statistic? Clearly, what we want to do is compare the expected number of observations in each category (\(E_i\)) with the observed number of observations in that category (\(O_i\)). And on the basis of this comparison, we ought to be able to come up with a good test statistic. To start with, let’s calculate the difference between what the null hypothesis expected us to find and what we actually did find. That is, we calculate the “observed minus expected” difference score, \(O_i - E_i\). This is illustrated in the following table.
  |  | \(\clubsuit\) | \(\diamondsuit\) | \(\heartsuit\) | \(\spadesuit\) |
| --- | --- | --- | --- | --- | --- |
expected frequency | \(E_i\) | 50 | 50 | 50 | 50 |
observed frequency | \(O_i\) | 35 | 51 | 64 | 50 |
difference score | \(O_i - E_i\) | -15 | 1 | 14 | 0 |
The same calculations can be done in R, using our `expected` and `observed` variables: `observed - expected`
```
##
## clubs diamonds hearts spades
## -15 1 14 0
```
Regardless of whether we do the calculations by hand or whether we do them in R, it’s clear that people chose more hearts and fewer clubs than the null hypothesis predicted. However, a moment’s thought suggests that these raw differences aren’t quite what we’re looking for. Intuitively, it feels like it’s just as bad when the null hypothesis predicts too few observations (which is what happened with hearts) as it is when it predicts too many (which is what happened with clubs). So it’s a bit weird that we have a negative number for clubs and a positive number for hearts. One easy way to fix this is to square everything, so that we now calculate the squared differences, \((E_i - O_i)^2\). As before, we could do this by hand, but it’s easier to do it in R…
```
(observed - expected)^2
```
```
##
## clubs diamonds hearts spades
## 225 1 196 0
```
Now we’re making progress. What we’ve got now is a collection of numbers that are big whenever the null hypothesis makes a bad prediction (clubs and hearts), but are small whenever it makes a good one (diamonds and spades). Next, for some technical reasons that I’ll explain in a moment, let’s also divide all these numbers by the expected frequency \(E_i\), so we’re actually calculating \(\frac{(E_i-O_i)^2}{E_i}\). Since \(E_i = 50\) for all categories in our example, it’s not a very interesting calculation, but let’s do it anyway. The R command becomes:
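```
(observed - expected)^2 / expected
```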
```
##
## clubs diamonds hearts spades
## 4.50 0.02 3.92 0.00
```
In effect, what we’ve got here are four different “error” scores, each one telling us how big a “mistake” the null hypothesis made when we tried to use it to predict our observed frequencies. So, in order to convert this into a useful test statistic, one thing we could do is just add these numbers up. The result is called the goodness of fit statistic, conventionally referred to either as \(X^2\) or GOF. We can calculate it using this command in R
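```
sum( (observed - expected)^2 / expected )
```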
`## [1] 8.44`
The formula for this statistic looks remarkably similar to the R command. If we let \(k\) refer to the total number of categories (i.e., \(k=4\) for our cards data), then the \(X^2\) statistic is given by: \[ X^2 = \sum_{i=1}^k \frac{(O_i - E_i)^2}{E_i} \] Intuitively, it’s clear that if \(X^2\) is small, then the observed data \(O_i\) are very close to what the null hypothesis predicted \(E_i\), so we’re going to need a large \(X^2\) statistic in order to reject the null. As we’ve seen from our calculations, in our cards data set we’ve got a value of \(X^2 = 8.44\). So now the question becomes, is this a big enough value to reject the null?
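Just to connect the formula back to the calculations we’ve been doing, here’s a small helper function of my own devising (it’s purely illustrative, and not part of the `lsr` package) that implements it directly:

```
# a direct translation of the goodness-of-fit formula into R (illustrative only)
gof.statistic <- function( observed, probabilities ) {
  expected <- sum( observed ) * probabilities      # E_i = N * P_i
  sum( (observed - expected)^2 / expected )        # X^2 = sum of (O_i - E_i)^2 / E_i
}
gof.statistic( observed, probabilities )           # reproduces the value of 8.44
```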
### 12.1.4 The sampling distribution of the GOF statistic (advanced)
To determine whether or not a particular value of \(X^2\) is large enough to justify rejecting the null hypothesis, we’re going to need to figure out what the sampling distribution for \(X^2\) would be if the null hypothesis were true. So that’s what I’m going to do in this section. I’ll show you in a fair amount of detail how this sampling distribution is constructed, and then – in the next section – use it to build up a hypothesis test. If you want to cut to the chase and are willing to take it on faith that the sampling distribution is a chi-squared (\(\chi^2\)) distribution with \(k-1\) degrees of freedom, you can skip the rest of this section. However, if you want to understand why the goodness of fit test works the way it does, read on…
Okay, let’s suppose that the null hypothesis is actually true. If so, then the true probability that an observation falls in the \(i\)-th category is \(P_i\) – after all, that’s pretty much the definition of our null hypothesis. Let’s think about what this actually means. If you think about it, this is kind of like saying that “nature” makes the decision about whether or not the observation ends up in category \(i\) by flipping a weighted coin (i.e., one where the probability of getting a head is \(P_i\)). And therefore, we can think of our observed frequency \(O_i\) by imagining that nature flipped \(N\) of these coins (one for each observation in the data set)… and exactly \(O_i\) of them came up heads. Obviously, this is a pretty weird way to think about the experiment. But what it does (I hope) is remind you that we’ve actually seen this scenario before. It’s exactly the same set up that gave rise to the binomial distribution in Section 9.4. In other words, if the null hypothesis is true, then it follows that our observed frequencies were generated by sampling from a binomial distribution: \[ O_i \sim \mbox{Binomial}(P_i, N) \] Now, if you remember from our discussion of the central limit theorem (Section 10.3.3), the binomial distribution starts to look pretty much identical to the normal distribution, especially when \(N\) is large and when \(P_i\) isn’t too close to 0 or 1. In other words, as long as \(N \times P_i\) is large enough – or, to put it another way, when the expected frequency \(E_i\) is large enough – the theoretical distribution of \(O_i\) is approximately normal. Better yet, if \(O_i\) is normally distributed, then so is \((O_i - E_i)/\sqrt{E_i}\) … since \(E_i\) is a fixed value, subtracting off \(E_i\) and dividing by \(\sqrt{E_i}\) changes the mean and standard deviation of the normal distribution; but that’s all it does. Okay, so now let’s have a look at what our goodness of fit statistic actually is. What we’re doing is taking a bunch of things that are normally-distributed, squaring them, and adding them up. Wait. We’ve seen that before too! As we discussed in Section 9.6, when you take a bunch of things that have a standard normal distribution (i.e., mean 0 and standard deviation 1), square them, then add them up, then the resulting quantity has a chi-square distribution. So now we know that the null hypothesis predicts that the sampling distribution of the goodness of fit statistic is a chi-square distribution. Cool.
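If you don’t want to take that claim on faith, here’s a quick simulation you can run to check it for yourself. It’s just an illustrative sketch: generate lots of fake data sets under the null, compute the \(X^2\) statistic for each one, and compare the simulated distribution to a chi-square distribution with \(k - 1 = 3\) degrees of freedom:

```
# simulate the sampling distribution of the goodness-of-fit statistic under the null
set.seed(1)
sim.gof <- replicate( 10000, {
  fake <- rmultinom( 1, size = 200, prob = rep(.25, 4) )   # one fake data set under H0
  expected <- 200 * rep(.25, 4)                            # expected frequencies
  sum( (fake - expected)^2 / expected )                    # X^2 for the fake data
})
hist( sim.gof, breaks = 50, freq = FALSE )                 # the simulated distribution
curve( dchisq( x, df = 3 ), add = TRUE, lwd = 2 )          # chi-square(3) density overlaid
```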
There’s one last detail to talk about, namely the degrees of freedom. If you remember back to Section 9.6, I said that if the number of things you’re adding up is \(k\), then the degrees of freedom for the resulting chi-square distribution is \(k\). Yet, what I said at the start of this section is that the actual degrees of freedom for the chi-square goodness of fit test is \(k-1\). What’s up with that? The answer here is that what we’re supposed to be looking at is the number of genuinely independent things that are getting added together. And, as I’ll go on to talk about in the next section, even though there’s \(k\) things that we’re adding, only \(k-1\) of them are truly independent; and so the degrees of freedom is actually only \(k-1\). That’s the topic of the next section.171
### 12.1.5 Degrees of freedom
When I introduced the chi-square distribution in Section 9.6, I was a bit vague about what “degrees of freedom” actually means. Obviously, it matters: looking at Figure 12.1 you can see that if we change the degrees of freedom, then the chi-square distribution changes shape quite substantially. But what exactly is it? Again, when I introduced the distribution and explained its relationship to the normal distribution, I did offer an answer… it’s the number of “normally distributed variables” that I’m squaring and adding together. But, for most people, that’s kind of abstract, and not entirely helpful. What we really need to do is try to understand degrees of freedom in terms of our data. So here goes.
The basic idea behind degrees of freedom is quite simple: you calculate it by counting up the number of distinct “quantities” that are used to describe your data; and then subtracting off all of the “constraints” that those data must satisfy.172 This is a bit vague, so let’s use our cards data as a concrete example. We describe our data using four numbers, \(O_1\), \(O_2\), \(O_3\) and \(O_4\) corresponding to the observed frequencies of the four different categories (hearts, clubs, diamonds, spades). These four numbers are the random outcomes of our experiment. But, my experiment actually has a fixed constraint built into it: the sample size \(N\).173 That is, if we know how many people chose hearts, how many chose diamonds and how many chose clubs; then we’d be able to figure out exactly how many chose spades. In other words, although our data are described using four numbers, they only actually correspond to \(4-1 = 3\) degrees of freedom. A slightly different way of thinking about it is to notice that there are four probabilities that we’re interested in (again, corresponding to the four different categories), but these probabilities must sum to one, which imposes a constraint. Therefore, the degrees of freedom is \(4-1 = 3\). Regardless of whether you want to think about it in terms of the observed frequencies or in terms of the probabilities, the answer is the same. In general, when running the chi-square goodness of fit test for an experiment involving \(k\) groups, the degrees of freedom will be \(k-1\).
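Here’s a tiny illustration of that constraint in action. Once we fix the sample size at \(N = 200\), knowing any three of the observed frequencies tells us the fourth (the counts below are just the ones from our cards data):

```
N <- 200                                                    # the sample size is fixed
three.counts <- c( clubs = 35, diamonds = 51, hearts = 64 ) # three of the four observed counts
N - sum( three.counts )                                     # so the spades count must be 50
```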
### 12.1.6 Testing the null hypothesis
The final step in the process of constructing our hypothesis test is to figure out what the rejection region is. That is, what values of \(X^2\) would lead us to reject the null hypothesis. As we saw earlier, large values of \(X^2\) imply that the null hypothesis has done a poor job of predicting the data from our experiment, whereas small values of \(X^2\) imply that it’s actually done pretty well. Therefore, a pretty sensible strategy would be to say there is some critical value, such that if \(X^2\) is bigger than the critical value we reject the null; but if \(X^2\) is smaller than this value we retain the null. In other words, to use the language we introduced in Chapter 11, the chi-squared goodness of fit test is always a one-sided test. Right, so all we have to do is figure out what this critical value is. And it’s pretty straightforward. If we want our test to have a significance level of \(\alpha = .05\) (that is, we are willing to tolerate a Type I error rate of 5%), then we have to choose our critical value so that there is only a 5% chance that \(X^2\) could get to be that big if the null hypothesis is true. That is to say, we want the 95th percentile of the sampling distribution. This is illustrated in Figure 12.2.
Ah, but – I hear you ask – how do I calculate the 95th percentile of a chi-squared distribution with \(k-1\) degrees of freedom? If only R had some function, called… oh, I don’t know, `qchisq()` … that would let you calculate this percentile (see Chapter 9 if you’ve forgotten). Like this…
```
qchisq( p = .95, df = 3 )
```
`## [1] 7.814728` So if our \(X^2\) statistic is bigger than 7.81 or so, then we can reject the null hypothesis. Since we actually calculated that before (i.e., \(X^2 = 8.44\)) we can reject the null. If we want an exact \(p\)-value, we can calculate it using the `pchisq()` function:
```
pchisq( q = 8.44, df = 3, lower.tail = FALSE )
```
`## [1] 0.03774185` This is hopefully pretty straightforward, as long as you recall that the “ `p` ” form of the probability distribution functions in R always calculates the probability of getting a value of less than the value you entered (in this case 8.44). We want the opposite: the probability of getting a value of 8.44 or more. That’s why I told R to use the upper tail, not the lower tail. That said, it’s usually easier to calculate the \(p\)-value this way:
```
1-pchisq( q = 8.44, df = 3 )
```
`## [1] 0.03774185`
So, in this case we would reject the null hypothesis, since \(p < .05\). And that’s it, basically. You now know “Pearson’s \(\chi^2\) test for the goodness of fit”. Lucky you.
### 12.1.7 Doing the test in R
Gosh darn it. Although we did manage to do everything in R as we were going through that little example, it does rather feel as if we’re typing too many things into the magic computing box. And I hate typing. Not surprisingly, R provides a function that will do all of these calculations for you. In fact, there are several different ways of doing it. The one that most people use is the `chisq.test()` function, which comes with every installation of R. I’ll show you how to use the `chisq.test()` function later on in this chapter, but to start out with I’m going to show you the `goodnessOfFitTest()` function in the `lsr` package, because it produces output that I think is easier for beginners to understand. It’s pretty straightforward: our raw data are stored in the variable `cards$choice_1` , right? If you want to test the null hypothesis that all four suits are equally likely, then (assuming you have the `lsr` package loaded) all you have to do is type this:
```
goodnessOfFitTest( cards$choice_1 )
```
R then runs the test, and prints several lines of text. I’ll go through the output line by line, so that you can make sure that you understand what you’re looking at. The first two lines are just telling you things you already know:
```
Chi-square test against specified probabilities
Data variable: cards$choice_1
```
The first line tells us what kind of hypothesis test we ran, and the second line tells us the name of the variable that we ran it on. After that comes a statement of what the null and alternative hypotheses are:
```
Hypotheses:
null: true probabilities are as specified
alternative: true probabilities differ from those specified
```
For a beginner, it’s kind of handy to have this as part of the output: it’s a nice reminder of what your null and alternative hypotheses are. Don’t get used to seeing this though. The vast majority of hypothesis tests in R aren’t so kind to novices. Most R functions are written on the assumption that you already understand the statistical tool that you’re using, so they don’t bother to include an explicit statement of the null and alternative hypothesis. The only reason that `goodnessOfFitTest()` actually does give you this is that I wrote it with novices in mind.
The next part of the output shows you the comparison between the observed frequencies and the expected frequencies:
```
Descriptives:
observed freq. expected freq. specified prob.
clubs 35 50 0.25
diamonds 51 50 0.25
hearts 64 50 0.25
spades 50 50 0.25
```
The first column shows what the observed frequencies were, the second column shows the expected frequencies according to the null hypothesis, and the third column shows you what the probabilities actually were according to the null. For novice users, I think this is helpful: you can look at this part of the output and check that it makes sense: if it doesn’t, you might have typed something incorrectly.
The last part of the output is the “important” stuff: it’s the result of the hypothesis test itself. There are three key numbers that need to be reported: the value of the \(X^2\) statistic, the degrees of freedom, and the \(p\)-value:
```
Test results:
X-squared statistic: 8.44
degrees of freedom: 3
p-value: 0.038
```
Notice that these are the same numbers that we came up with when doing the calculations the long way.
### 12.1.8 Specifying a different null hypothesis
At this point you might be wondering what to do if you want to run a goodness of fit test, but your null hypothesis is not that all categories are equally likely. For instance, let’s suppose that someone had made the theoretical prediction that people should choose red cards 60% of the time, and black cards 40% of the time (I’ve no idea why you’d predict that), but had no other preferences. If that were the case, the null hypothesis would be to expect 30% of the choices to be hearts, 30% to be diamonds, 20% to be spades and 20% to be clubs. This seems like a silly theory to me, and it’s pretty easy to test it using our data. All we need to do is specify the probabilities associated with the null hypothesis. We create a vector like this:
```
nullProbs <- c(clubs = .2, diamonds = .3, hearts = .3, spades = .2)
nullProbs
```
```
## clubs diamonds hearts spades
## 0.2 0.3 0.3 0.2
```
Now that we have an explicitly specified null hypothesis, we include it in our command. This time round I’ll use the argument names properly. The data variable corresponds to the argument `x` , and the probabilities according to the null hypothesis correspond to the argument `p` . So our command is:
```
goodnessOfFitTest( x = cards$choice_1, p = nullProbs )
```
As you can see the null hypothesis and the expected frequencies are different to what they were last time. As a consequence our \(X^2\) test statistic is different, and our \(p\)-value is different too. Annoyingly, the \(p\)-value is .192, so we can’t reject the null hypothesis. Sadly, despite the fact that the null hypothesis corresponds to a very silly theory, these data don’t provide enough evidence against it.
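If you’d like to check where that \(p\)-value comes from, here’s an optional by-hand sketch. It assumes that `table(cards$choice_1)` returns the suit counts in the same order as the names of `nullProbs` (clubs, diamonds, hearts, spades), which is the order we’ve seen in all the output so far:

```
# Optional check (not needed in practice): expected counts under the new null
expected <- 200 * nullProbs                   # 40, 60, 60, 40
observed <- table(cards$choice_1)             # 35, 51, 64, 50
X2 <- sum((observed - expected)^2 / expected)
X2                                            # roughly 4.74
pchisq(X2, df = 3, lower.tail = FALSE)        # roughly .19, matching the .192 above
```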
### 12.1.9 How to report the results of the test
So now you know how the test works, and you know how to do the test using a wonderful magic computing box. The next thing you need to know is how to write up the results. After all, there’s no point in designing and running an experiment and then analysing the data if you don’t tell anyone about it! So let’s now talk about what you need to do when reporting your analysis. Let’s stick with our card-suits example. If I wanted to write this result up for a paper or something, the conventional way to report this would be to write something like this:
Of the 200 participants in the experiment, 64 selected hearts for their first choice, 51 selected diamonds, 50 selected spades, and 35 selected clubs. A chi-square goodness of fit test was conducted to test whether the choice probabilities were identical for all four suits. The results were significant (\(\chi^2(3) = 8.44, p<.05\)), suggesting that people did not select suits purely at random.
This is pretty straightforward, and hopefully it seems pretty unremarkable. That said, there’s a few things that you should note about this description:
* The statistical test is preceded by the descriptive statistics. That is, I told the reader something about what the data look like before going on to do the test. In general, this is good practice: always remember that your reader doesn’t know your data anywhere near as well as you do. So unless you describe it to them properly, the statistical tests won’t make any sense to them, and they’ll get frustrated and cry.
* The description tells you what the null hypothesis being tested is. To be honest, writers don’t always do this, but it’s often a good idea in those situations where some ambiguity exists; or when you can’t rely on your readership being intimately familiar with the statistical tools that you’re using. Quite often the reader might not know (or remember) all the details of the test that you’re using, so it’s a kind of politeness to “remind” them! As far as the goodness of fit test goes, you can usually rely on a scientific audience knowing how it works (since it’s covered in most intro stats classes). However, it’s still a good idea to be explicit about stating the null hypothesis (briefly!) because the null hypothesis can be different depending on what you’re using the test for. For instance, in the cards example my null hypothesis was that all the four suit probabilities were identical (i.e., \(P_1 = P_2 = P_3 = P_4 = 0.25\)), but there’s nothing special about that hypothesis. I could just as easily have tested the null hypothesis that \(P_1 = 0.7\) and \(P_2 = P_3 = P_4 = 0.1\) using a goodness of fit test. So it’s helpful to the reader if you explain to them what your null hypothesis was. Also, notice that I described the null hypothesis in words, not in maths. That’s perfectly acceptable. You can describe it in maths if you like, but since most readers find words easier to read than symbols, most writers tend to describe the null using words if they can.
* A “stat block” is included. When reporting the results of the test itself, I didn’t just say that the result was significant, I included a “stat block” (i.e., the dense mathematical-looking part in the parentheses), which reports all the “raw” statistical data. For the chi-square goodness of fit test, the information that gets reported is the test statistic (that the goodness of fit statistic was 8.44), the information about the distribution used in the test (\(\chi^2\) with 3 degrees of freedom, which is usually shortened to \(\chi^2(3)\)), and then the information about whether the result was significant (in this case \(p<.05\)). The particular information that needs to go into the stat block is different for every test, and so each time I introduce a new test I’ll show you what the stat block should look like.174 However the general principle is that you should always provide enough information so that the reader could check the test results themselves if they really wanted to.
* The results are interpreted. In addition to indicating that the result was significant, I provided an interpretation of the result (i.e., that people didn’t choose randomly). This is also a kindness to the reader, because it tells them something about what they should believe about what’s going on in your data. If you don’t include something like this, it’s really hard for your reader to understand what’s going on.175
As with everything else, your overriding concern should be that you explain things to your reader. Always remember that the point of reporting your results is to communicate to another human being. I cannot tell you just how many times I’ve seen the results section of a report or a thesis or even a scientific article that is just gibberish, because the writer has focused solely on making sure they’ve included all the numbers, and forgotten to actually communicate with the human reader.
### 12.1.10 A comment on statistical notation (advanced)
Satan delights equally in statistics and in quoting scripture
– <NAME>
If you’ve been reading very closely, and are as much of a mathematical pedant as I am, there is one thing about the way I wrote up the chi-square test in the last section that might be bugging you a little bit. There’s something that feels a bit wrong with writing “\(\chi^2(3) = 8.44\)”, you might be thinking. After all, it’s the goodness of fit statistic that is equal to 8.44, so shouldn’t I have written \(X^2 = 8.44\) or maybe GOF\(=8.44\)? This seems to be conflating the sampling distribution (i.e., \(\chi^2\) with \(df = 3\)) with the test statistic (i.e., \(X^2\)). Odds are you figured it was a typo, since \(\chi\) and \(X\) look pretty similar. Oddly, it’s not. Writing \(\chi^2(3) = 8.44\) is essentially a highly condensed way of writing “the sampling distribution of the test statistic is \(\chi^2(3)\), and the value of the test statistic is 8.44”.
In one sense, this is kind of stupid. There are lots of different test statistics out there that turn out to have a chi-square sampling distribution: the \(X^2\) statistic that we’ve used for our goodness of fit test is only one of many (albeit one of the most commonly encountered ones). In a sensible, perfectly organised world, we’d always have a separate name for the test statistic and the sampling distribution: that way, the stat block itself would tell you exactly what it was that the researcher had calculated. Sometimes this happens. For instance, the test statistic used in the Pearson goodness of fit test is written \(X^2\); but there’s a closely related test known as the \(G\)-test176 , in which the test statistic is written as \(G\). As it happens, the Pearson goodness of fit test and the \(G\)-test both test the same null hypothesis; and the sampling distribution is exactly the same (i.e., chi-square with \(k-1\) degrees of freedom). If I’d done a \(G\)-test for the cards data rather than a goodness of fit test, then I’d have ended up with a test statistic of \(G = 8.65\), which is slightly different from the \(X^2 = 8.44\) value that I got earlier; and produces a slightly smaller \(p\)-value of \(p = .034\). Suppose that the convention was to report the test statistic, then the sampling distribution, and then the \(p\)-value. If that were true, then these two situations would produce different stat blocks: my original result would be written \(X^2 = 8.44, \chi^2(3), p = .038\), whereas the new version using the \(G\)-test would be written as \(G = 8.65, \chi^2(3), p = .034\). However, using the condensed reporting standard, the original result is written \(\chi^2(3) = 8.44, p = .038\), and the new one is written \(\chi^2(3) = 8.65, p = .034\), and so it’s unclear which test I actually ran.
So why don’t we live in a world in which the contents of the stat block uniquely specifies which tests were run? The deep reason is that life is messy. We (as users of statistical tools) want it to be nice and neat and organised… we want it to be designed, as if it were a product. But that’s not how life works: statistics is an intellectual discipline just as much as any other one, and as such it’s a massively distributed, partly-collaborative and partly-competitive project that no-one really understands completely. The things that you and I use as data analysis tools weren’t created by an Act of the Gods of Statistics; they were invented by lots of different people, published as papers in academic journals, implemented, corrected and modified by lots of other people, and then explained to students in textbooks by someone else. As a consequence, there are a lot of test statistics that don’t even have names, and so they’re just given the same name as the corresponding sampling distribution. As we’ll see later, any test statistic that follows a \(\chi^2\) distribution is commonly called a “chi-square statistic”; anything that follows a \(t\)-distribution is called a “\(t\)-statistic” and so on. But, as the \(X^2\) versus \(G\) example illustrates, two different things with the same sampling distribution are still, well, different.
As a consequence, it’s sometimes a good idea to be clear about what the actual test was that you ran, especially if you’re doing something unusual. If you just say “chi-square test”, it’s not actually clear what test you’re talking about. Although, since the two most common chi-square tests are the goodness of fit test and the independence test (Section 12.2), most readers with stats training can probably guess. Nevertheless, it’s something to be aware of.
## 12.2 The \(\chi^2\) test of independence (or association)
GUARDBOT1: | Halt! |
| --- | --- |
GUARDBOT2: | Be you robot or human? |
LEELA: | Robot…we be. |
FRY: | Uh, yup! Just two robots out roboting it up! Eh? |
GUARDBOT1: | Administer the test. |
GUARDBOT2: | Which of the following would you most prefer? A: A puppy, B: A pretty flower from your sweetie, or C: A large properly-formatted data file? |
GUARDBOT1: | Choose! |
– Futurama, “Fear of a Bot Planet”
The other day I was watching an animated documentary examining the quaint customs of the natives of the planet Chapek 9. Apparently, in order to gain access to their capital city, a visitor must prove that they’re a robot, not a human. In order to determine whether or not a visitor is human, they ask whether the visitor prefers puppies, flowers or large, properly formatted data files. “Pretty clever,” I thought to myself, “but what if humans and robots have the same preferences? That probably wouldn’t be a very good test then, would it?” As it happens, I got my hands on the testing data that the civil authorities of Chapek 9 used to check this. It turns out that what they did was very simple… they found a bunch of robots and a bunch of humans and asked them what they preferred. I saved their data in a file called `chapek9.Rdata` , which I can now load and have a quick look at:
```
load( file.path(projecthome, "data/chapek9.Rdata" ))
str(chapek9)
```
```
## 'data.frame': 180 obs. of 2 variables:
## $ species: Factor w/ 2 levels "robot","human": 1 2 2 2 1 2 2 1 2 1 ...
## $ choice : Factor w/ 3 levels "puppy","flower",..: 2 3 3 3 3 2 3 3 1 2 ...
```
Okay, so we have a single data frame called `chapek9` , which contains two factors, `species` and `choice` . As always, it’s nice to have a quick look at the data, `head(chapek9)`
and then take a `summary()` , `summary(chapek9)`
```
##    species      choice
##  robot:87   puppy : 28
##  human:93   flower: 43
##             data  :109
```
In total there are 180 entries in the data frame, one for each person (counting both robots and humans as “people”) who was asked to make a choice. Specifically, there’s 93 humans and 87 robots; and overwhelmingly the preferred choice is the data file. However, these summaries don’t address the question we’re interested in. To do that, we need a more detailed description of the data. What we want to do is look at the `choices` broken down by `species` . That is, we need to cross-tabulate the data (see Section 7.1). There’s quite a few ways to do this, as we’ve seen, but since our data are stored in a data frame, it’s convenient to use the `xtabs()` function.
```
chapekFrequencies <- xtabs( ~ choice + species, data = chapek9)
chapekFrequencies
```
That’s more or less what we’re after. So, if we add the row and column totals (which is convenient for the purposes of explaining the statistical tests), we would have a table like this,
|  | Robot | Human | Total |
| --- | --- | --- | --- |
| Puppy | 13 | 15 | 28 |
| Flower | 30 | 13 | 43 |
| Data file | 44 | 65 | 109 |
| Total | 87 | 93 | 180 |

which actually would be a nice way to report the descriptive statistics for this data set. In any case, it’s quite clear that the vast majority of the humans chose the data file, whereas the robots tended to be a lot more even in their preferences. Leaving aside the question of why the humans might be more likely to choose the data file for the moment (which does seem quite odd, admittedly), our first order of business is to determine if the discrepancy between human choices and robot choices in the data set is statistically significant.
### 12.2.1 Constructing our hypothesis test
How do we analyse this data? Specifically, since my research hypothesis is that “humans and robots answer the question in different ways”, how can I construct a test of the null hypothesis that “humans and robots answer the question the same way”? As before, we begin by establishing some notation to describe the data:
|  | Robot | Human | Total |
| --- | --- | --- | --- |
| Puppy | \(O_{11}\) | \(O_{12}\) | \(R_{1}\) |
| Flower | \(O_{21}\) | \(O_{22}\) | \(R_{2}\) |
| Data file | \(O_{31}\) | \(O_{32}\) | \(R_{3}\) |
| Total | \(C_{1}\) | \(C_{2}\) | \(N\) |
In this notation we say that \(O_{ij}\) is a count (observed frequency) of the number of respondents that are of species \(j\) (robot or human) who gave answer \(i\) (puppy, flower or data) when asked to make a choice. The total number of observations is written \(N\), as usual. Finally, I’ve used \(R_i\) to denote the row totals (e.g., \(R_1\) is the total number of people who chose the puppy), and \(C_j\) to denote the column totals (e.g., \(C_1\) is the total number of robots).177
So now let’s think about what the null hypothesis says. If robots and humans are responding in the same way to the question, it means that the probability that “a robot says puppy” is the same as the probability that “a human says puppy”, and so on for the other two possibilities. So, if we use \(P_{ij}\) to denote “the probability that a member of species \(j\) gives response \(i\)” then our null hypothesis is that:
\(H_0\): All of the following are true:

* \(P_{11} = P_{12}\) (same probability of saying puppy),
* \(P_{21} = P_{22}\) (same probability of saying flower), and
* \(P_{31} = P_{32}\) (same probability of saying data).
And actually, since the null hypothesis is claiming that the true choice probabilities don’t depend on the species of the person making the choice, we can let \(P_i\) refer to this probability: e.g., \(P_1\) is the true probability of choosing the puppy.
Next, in much the same way that we did with the goodness of fit test, what we need to do is calculate the expected frequencies. That is, for each of the observed counts \(O_{ij}\), we need to figure out what the null hypothesis would tell us to expect. Let’s denote this expected frequency by \(E_{ij}\). This time, it’s a little bit trickier. If there are a total of \(C_j\) people that belong to species \(j\), and the true probability of anyone (regardless of species) choosing option \(i\) is \(P_i\), then the expected frequency is just: \[ E_{ij} = C_j \times P_i \] Now, this is all very well and good, but we have a problem. Unlike the situation we had with the goodness of fit test, the null hypothesis doesn’t actually specify a particular value for \(P_i\). It’s something we have to estimate (Chapter 10) from the data! Fortunately, this is pretty easy to do. If 28 out of the 180 people chose the puppy, then a natural estimate for the probability of choosing the puppy is \(28/180\), which is approximately \(.16\). If we phrase this in mathematical terms, what we’re saying is that our estimate for the probability of choosing option \(i\) is just the row total divided by the total sample size: \[ \hat{P}_i = \frac{R_i}{N} \] Therefore, our expected frequency can be written as the product (i.e. multiplication) of the row total and the column total, divided by the total number of observations:178 \[ E_{ij} = \frac{R_i \times C_j}{N} \] Now that we’ve figured out how to calculate the expected frequencies, it’s straightforward to define a test statistic, following the exact same strategy that we used in the goodness of fit test. In fact, it’s pretty much the same statistic. For a contingency table with \(r\) rows and \(c\) columns, the equation that defines our \(X^2\) statistic is \[ X^2 = \sum_{i=1}^r \sum_{j=1}^c \frac{({E}_{ij} - O_{ij})^2}{{E}_{ij}} \] The only difference is that I have to include two summation signs (i.e., \(\sum\)) to indicate that we’re summing over both rows and columns. As before, large values of \(X^2\) indicate that the null hypothesis provides a poor description of the data, whereas small values of \(X^2\) suggest that it does a good job of accounting for the data. Therefore, just like last time, we want to reject the null hypothesis if \(X^2\) is too large.
Not surprisingly, this statistic is \(\chi^2\) distributed. All we need to do is figure out how many degrees of freedom are involved, which actually isn’t too hard. As I mentioned before, you can (usually) think of the degrees of freedom as being equal to the number of data points that you’re analysing, minus the number of constraints. A contingency table with \(r\) rows and \(c\) columns contains a total of \(r \times c\) observed frequencies, so that’s the total number of observations. What about the constraints? Here, it’s slightly trickier. The answer is always the same \[ df = (r-1)(c-1) \] but the explanation for why the degrees of freedom takes this value is different depending on the experimental design. For the sake of argument, let’s suppose that we had honestly intended to survey exactly 87 robots and 93 humans (column totals fixed by the experimenter), but left the row totals free to vary (row totals are random variables). Let’s think about the constraints that apply here. Well, since we deliberately fixed the column totals by Act of Experimenter, we have \(c\) constraints right there. But, there’s actually more to it than that. Remember how our null hypothesis had some free parameters (i.e., we had to estimate the \(P_i\) values)? Those matter too. I won’t explain why in this book, but every free parameter in the null hypothesis is rather like an additional constraint. So, how many of those are there? Well, since these probabilities have to sum to 1, there’s only \(r-1\) of these. So our total degrees of freedom is: \[ \begin{array}{rcl} df &=& \mbox{(number of observations)} - \mbox{(number of constraints)} \\ &=& (rc) - (c + (r-1)) \\ &=& rc - c - r + 1 \\ &=& (r - 1)(c - 1) \end{array} \] Alternatively, suppose that the only thing that the experimenter fixed was the total sample size \(N\). That is, we quizzed the first 180 people that we saw, and it just turned out that 87 were robots and 93 were humans. This time around our reasoning would be slightly different, but would still lead us to the same answer. Our null hypothesis still has \(r-1\) free parameters corresponding to the choice probabilities, but it now also has \(c-1\) free parameters corresponding to the species probabilities, because we’d also have to estimate the probability that a randomly sampled person turns out to be a robot.179 Finally, since we did actually fix the total number of observations \(N\), that’s one more constraint. So now we have \(rc\) observations, and \((c-1) + (r-1) + 1\) constraints. What does that give? \[ \begin{array}{rcl} df &=& \mbox{(number of observations)} - \mbox{(number of constraints)} \\ &=& rc - ( (c-1) + (r-1) + 1) \\ &=& rc - c - r + 1 \\ &=& (r - 1)(c - 1) \end{array} \] Amazing.
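Before we let R do the work, here’s a small optional sketch that just turns the formulas above into code, using the observed counts from the contingency table earlier in this section. None of this is required; it’s only meant to make the \(E_{ij} = R_i C_j / N\) and \(X^2\) calculations concrete:

```
# Optional sketch: expected frequencies and X^2 by hand for the chapek9 table
O <- matrix(c(13, 15,
              30, 13,
              44, 65),
            nrow = 3, byrow = TRUE,
            dimnames = list(choice = c("puppy", "flower", "data"),
                            species = c("robot", "human")))
E <- outer(rowSums(O), colSums(O)) / sum(O)   # E[i,j] = (row total * column total) / N
X2 <- sum((O - E)^2 / E)
X2                                            # roughly 10.72
pchisq(X2, df = (3 - 1) * (2 - 1), lower.tail = FALSE)   # roughly .005
```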
### 12.2.2 Doing the test in R
Okay, now that we know how the test works, let’s have a look at how it’s done in R. As tempting as it is to lead you through the tedious calculations so that you’re forced to learn it the long way, I figure there’s no point. I already showed you how to do it the long way for the goodness of fit test in the last section, and since the test of independence isn’t conceptually any different, you won’t learn anything new by doing it the long way. So instead, I’ll go straight to showing you the easy way. As always, R lets you do it multiple ways. There’s the `chisq.test()` function, which I’ll talk about in Section 12.6, but first I want to use the `associationTest()` function in the `lsr` package, which I think is easier on beginners. It works in the exact same way as the `xtabs()` function. Recall that, in order to produce the contingency table, we used this command:
```
xtabs( formula = ~choice+species, data = chapek9 )
```
The `associationTest()` function has exactly the same structure: it needs a `formula` that specifies which variables you’re cross-tabulating, and the name of a `data` frame that contains those variables. So the command is just this:
```
associationTest( formula = ~choice+species, data = chapek9 )
```
Just like we did with the goodness of fit test, I’ll go through it line by line. The first two lines are, once again, just reminding you what kind of test you ran and what variables were used:
```
Chi-square test of categorical association
Variables: choice, species
```
Next, it tells you what the null and alternative hypotheses are (and again, I want to remind you not to get used to seeing these hypotheses written out so explicitly):
```
Hypotheses:
null: variables are independent of one another
alternative: some contingency exists between variables
```
Next, it shows you the observed contingency table that is being tested:
and it also shows you what the expected frequencies would be if the null hypothesis were true:
```
Expected contingency table under the null hypothesis:
         species
choice    robot human
  puppy    13.5  14.5
  flower   20.8  22.2
  data     52.7  56.3
```
The next part describes the results of the hypothesis test itself:
```
Test results:
X-squared statistic: 10.722
degrees of freedom: 2
p-value: 0.005
```
And finally, it reports a measure of effect size:
```
Other information:
estimated effect size (Cramer's v): 0.244
```
You can ignore this bit for now. I’ll talk about it in just a moment.
This output gives us enough information to write up the result:
Pearson’s \(\chi^2\) revealed a significant association between species and choice (\(\chi^2(2) = 10.7, p < .01\)): robots appeared to be more likely to say that they prefer flowers, but the humans were more likely to say they prefer data.
Notice that, once again, I provided a little bit of interpretation to help the human reader understand what’s going on with the data. Later on in my discussion section, I’d provide a bit more context. To illustrate the difference, here’s what I’d probably say later on:
The fact that humans appeared to have a stronger preference for raw data files than robots is somewhat counterintuitive. However, in context it makes some sense: the civil authority on Chapek 9 has an unfortunate tendency to kill and dissect humans when they are identified. As such it seems most likely that the human participants did not respond honestly to the question, so as to avoid potentially undesirable consequences. This should be considered to be a substantial methodological weakness.
This could be classified as a rather extreme example of a reactivity effect, I suppose. Obviously, in this case the problem is severe enough that the study is more or less worthless as a tool for understanding the differences in preferences between humans and robots. However, I hope this illustrates the difference between getting a statistically significant result (our null hypothesis is rejected in favour of the alternative), and finding something of scientific value (the data tell us nothing of interest about our research hypothesis due to a big methodological flaw).
### 12.2.3 Postscript
I later found out the data were made up, and I’d been watching cartoons instead of doing work.
## 12.3 The continuity correction
Okay, time for a little bit of a digression. I’ve been lying to you a little bit so far. There’s a tiny change that you need to make to your calculations whenever you only have 1 degree of freedom. It’s called the “continuity correction”, or sometimes the Yates correction. Remember what I pointed out earlier: the \(\chi^2\) test is based on an approximation, specifically on the assumption that the binomial distribution starts to look like a normal distribution for large \(N\). One problem with this is that it often doesn’t quite work, especially when you’ve only got 1 degree of freedom (e.g., when you’re doing a test of independence on a \(2 \times 2\) contingency table). The main reason for this is that the true sampling distribution for the \(X^2\) statistic is actually discrete (because you’re dealing with categorical data!) but the \(\chi^2\) distribution is continuous. This can introduce systematic problems. Specifically, when \(N\) is small and when \(df=1\), the goodness of fit statistic tends to be “too big”, meaning that you actually have a bigger \(\alpha\) value than you think (or, equivalently, the \(p\) values are a bit too small). Yates (1934) suggested a simple fix, in which you redefine the goodness of fit statistic as: \[ X^2 = \sum_{i} \frac{(|E_i - O_i| - 0.5)^2}{E_i} \] Basically, he just subtracts off 0.5 everywhere. As far as I can tell from reading Yates’ paper, the correction is basically a hack. It’s not derived from any principled theory: rather, it’s based on an examination of the behaviour of the test, and observing that the corrected version seems to work better. I feel obliged to explain this because you will sometimes see R (or any other software for that matter) introduce this correction, so it’s kind of useful to know what it’s about. You’ll know when it happens, because the R output will explicitly say that it has used a “continuity correction” or “Yates’ correction”.
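For what it’s worth, `chisq.test()` controls this behaviour through its `correct` argument, which defaults to `TRUE` and only applies to \(2 \times 2\) tables. Here’s a tiny sketch; the table below is made up purely for illustration:

```
# Sketch: Yates' continuity correction only kicks in when df = 1 (a 2x2 table)
fake2x2 <- matrix(c(12, 8,
                     5, 15), nrow = 2)   # invented counts, for illustration only
chisq.test(fake2x2)                      # correction applied (the default)
chisq.test(fake2x2, correct = FALSE)     # the uncorrected version
```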
## 12.4 Effect size
As we discussed earlier (Section 11.8), it’s becoming commonplace to ask researchers to report some measure of effect size. So, let’s suppose that you’ve run your chi-square test, which turns out to be significant. So you now know that there is some association between your variables (independence test) or some deviation from the specified probabilities (goodness of fit test). Now you want to report a measure of effect size. That is, given that there is an association/deviation, how strong is it?
There are several different measures that you can choose to report, and several different tools that you can use to calculate them. I won’t discuss all of them,180 but will instead focus on the most commonly reported measures of effect size.
The two measures that people tend to report most frequently are the \(\phi\) statistic and a somewhat superior version known as Cramér’s \(V\). Mathematically, they’re very simple. To calculate the \(\phi\) statistic, you just divide your \(X^2\) value by the sample size, and take the square root: \[ \phi = \sqrt{\frac{X^2}{N}} \] The idea here is that the \(\phi\) statistic is supposed to range between 0 (no association at all) and 1 (perfect association), but it doesn’t always do this when your contingency table is bigger than \(2 \times 2\), which is a total pain. For bigger tables it’s actually possible to obtain \(\phi>1\), which is pretty unsatisfactory. So, to correct for this, people usually prefer to report the \(V\) statistic proposed by Cramér (1946). It’s a pretty simple adjustment to \(\phi\). If you’ve got a contingency table with \(r\) rows and \(c\) columns, then define \(k = \min(r,c)\) to be the smaller of the two values. If so, then Cramér’s \(V\) statistic is \[ V = \sqrt{\frac{X^2}{N(k-1)}} \] And you’re done. This seems to be a fairly popular measure, presumably because it’s easy to calculate, and it gives answers that aren’t completely silly: you know that \(V\) really does range from 0 (no association at all) to 1 (perfect association).
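To make the formula concrete, here’s the arithmetic for the chapek9 example: we had \(X^2 = 10.722\), \(N = 180\), and a \(3 \times 2\) table, so \(k = 2\). A one-line sketch:

```
# Cramer's V by hand: sqrt( X^2 / (N * (k - 1)) )
sqrt(10.722 / (180 * (2 - 1)))   # about 0.244, matching the cramersV() output below
```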
Calculating \(V\) or \(\phi\) is obviously pretty straightforward. So much so that the core packages in R don’t seem to have functions to do it, though other packages do. To save you the time and effort of finding one, I’ve included one in the `lsr` package, called `cramersV()` . It takes a contingency table as input, and prints out the measure of effect size:
```
cramersV( chapekFrequencies )
```
`## [1] 0.244058` However, if you’re using the `associationTest()` function to do your analysis, then you won’t actually need to use this at all, because it reports Cramér’s \(V\) statistic as part of the output.
## 12.5 Assumptions of the test(s)
All statistical tests make assumptions, and it’s usually a good idea to check that those assumptions are met. For the chi-square tests discussed so far in this chapter, the assumptions are:
* Expected frequencies are sufficiently large. Remember how in the previous section we saw that the \(\chi^2\) sampling distribution emerges because the binomial distribution is pretty similar to a normal distribution? Well, like we discussed in Chapter 9 this is only true when the number of observations is sufficiently large. What that means in practice is that all of the expected frequencies need to be reasonably big. How big is reasonably big? Opinions differ, but the default assumption seems to be that you generally would like to see all your expected frequencies larger than about 5, though for larger tables you would probably be okay if at least 80% of the expected frequencies are above 5 and none of them are below 1. However, from what I’ve been able to discover, these seem to have been proposed as rough guidelines, not hard and fast rules; and they seem to be somewhat conservative (Larntz 1978). A quick way to check this assumption in R is sketched just after this list.
* Data are independent of one another. One somewhat hidden assumption of the chi-square test is that you have to genuinely believe that the observations are independent. Here’s what I mean. Suppose I’m interested in the proportion of babies born at a particular hospital that are boys. I walk around the maternity wards, and observe 20 girls and only 10 boys. Seems like a pretty convincing difference, right? But later on, it turns out that I’d actually walked into the same ward 10 times, and in fact I’d only seen 2 girls and 1 boy. Not as convincing, is it? My original 30 observations were massively non-independent… and were only in fact equivalent to 3 independent observations. Obviously this is an extreme (and extremely silly) example, but it illustrates the basic issue. Non-independence “stuffs things up”. Sometimes it causes you to falsely reject the null, as the silly hospital example illustrates, but it can go the other way too. To give a slightly less stupid example, let’s consider what would happen if I’d done the cards experiment slightly differently: instead of asking 200 people to try to imagine sampling one card at random, suppose I asked 50 people to select 4 cards. One possibility would be that everyone selects one heart, one club, one diamond and one spade (in keeping with the “representativeness heuristic”; Tversky & Kahneman 1974). This is highly non-random behaviour from people, but in this case, I would get an observed frequency of 50 for all four suits. For this example, the fact that the observations are non-independent (because the four cards that you pick will be related to each other) actually leads to the opposite effect… falsely retaining the null.
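As promised above, here’s a quick way to eyeball the first of these assumptions. This is just a sketch: `chisq.test()` stores the expected counts it used in the `expected` component of its return value, so you can inspect them directly (here using the `chapekFrequencies` table from earlier):

```
# Sketch: check the expected counts that chisq.test() computed internally
out <- chisq.test(chapekFrequencies)
out$expected             # all comfortably above 5 for these data
any(out$expected < 5)    # FALSE, so the large-sample approximation should be fine
```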
If you happen to find yourself in a situation where independence is violated, it may be possible to use the McNemar test (which we’ll discuss) or the Cochran test (which we won’t). Similarly, if your expected cell counts are too small, check out the Fisher exact test. It is to these topics that we now turn.
## 12.6 The most typical way to do chi-square tests in R
When discussing how to do a chi-square goodness of fit test (Section 12.1.7) and the chi-square test of independence (Section 12.2.2), I introduced you to two separate functions in the `lsr` package. We ran our goodness of fit tests using the `goodnessOfFitTest()` function, and our tests of independence (or association) using the `associationTest()` function. And both of those functions produced quite detailed output, showing you the relevant descriptive statistics, printing out explicit reminders of what the hypotheses are, and so on. When you’re first starting out, it can be very handy to be given this sort of guidance. However, once you start becoming a bit more proficient in statistics and in R it can start to get very tiresome. A real statistician hardly needs to be told what the null and alternative hypotheses for a chi-square test are, and if an advanced R user wants the descriptive statistics to be printed out, they know how to produce them! For this reason, the basic `chisq.test()` function in R is a lot more terse in its output, and because the mathematics that underpins the goodness of fit test and the test of independence is basically the same in each case, it can run either test depending on what kind of input it is given. First, here’s the goodness of fit test. Suppose you have the frequency table `observed` that we used earlier, `observed`
If you want to run the goodness of fit test against the hypothesis that all four suits are equally likely to appear, then all you need to do is input this frequency table to the `chisq.test()` function:
```
chisq.test( x = observed )
```
Notice that the output is very compressed in comparison to the `goodnessOfFitTest()` function. It doesn’t bother to give you any descriptive statistics, it doesn’t tell you what null hypothesis is being tested, and so on. And as long as you already understand the test, that’s not a problem. Once you start getting familiar with R and with statistics, you’ll probably find that you prefer this simple output rather than the rather lengthy output that `goodnessOfFitTest()` produces. Anyway, if you want to change the null hypothesis, it’s exactly the same as before, just specify the probabilities using the `p` argument. For instance:
```
chisq.test( x = observed, p = c(.2, .3, .3, .2) )
```
Again, these are the same numbers that the `goodnessOfFitTest()` function reports at the end of the output. It just hasn’t included any of the other details. What about a test of independence? As it turns out, the `chisq.test()` function is pretty clever.181 If you input a cross-tabulation rather than a simple frequency table, it realises that you’re asking for a test of independence and not a goodness of fit test. Recall that we already have this cross-tabulation stored as the `chapekFrequencies` variable: `chapekFrequencies`
To get the test of independence, all we have to do is feed this frequency table into the `chisq.test()` function like so:
```
chisq.test( chapekFrequencies )
```
```
##
## Pearson's Chi-squared test
##
## data: chapekFrequencies
## X-squared = 10.722, df = 2, p-value = 0.004697
```
Again, the numbers are the same as last time, it’s just that the output is very terse and doesn’t really explain what’s going on in the rather tedious way that `associationTest()` does. As before, my intuition is that when you’re just getting started it’s easier to use something like `associationTest()` because it shows you more detail about what’s going on, but later on you’ll probably find that `chisq.test()` is more convenient.
## 12.7 The Fisher exact test
What should you do if your cell counts are too small, but you’d still like to test the null hypothesis that the two variables are independent? One answer would be “collect more data”, but that’s far too glib: there are a lot of situations in which it would be either infeasible or unethical to do that. If so, statisticians have a kind of moral obligation to provide scientists with better tests. In this instance, Fisher (1922) kindly provided the right answer to the question. To illustrate the basic idea, let’s suppose that we’re analysing data from a field experiment, looking at the emotional status of people who have been accused of witchcraft; some of whom are currently being burned at the stake.182 Unfortunately for the scientist (but rather fortunately for the general populace), it’s actually quite hard to find people in the process of being set on fire, so the cell counts are awfully small in some cases. The `salem.Rdata` file illustrates the point:
```
load( file.path(projecthome, "data/salem.Rdata"))
salem.tabs <- table( trial )
print( salem.tabs )
```
```
##        on.fire
## happy   FALSE TRUE
##   FALSE     3    3
##   TRUE     10    0
```
Looking at this data, you’d be hard pressed not to suspect that people not on fire are more likely to be happy than people on fire. However, the chi-square test makes this very hard to test because of the small sample size. If I try to do so, R gives me a warning message:
```
chisq.test( salem.tabs )
```
```
## Warning in chisq.test(salem.tabs): Chi-squared approximation may be
## incorrect
```
```
##
## Pearson's Chi-squared test with Yates' continuity correction
##
## data: salem.tabs
## X-squared = 3.3094, df = 1, p-value = 0.06888
```
Speaking as someone who doesn’t want to be set on fire, I’d really like to be able to get a better answer than this. This is where Fisher’s exact test comes in very handy.
The Fisher exact test works somewhat differently to the chi-square test (or in fact any of the other hypothesis tests that I talk about in this book) insofar as it doesn’t have a test statistic; it calculates the \(p\)-value “directly”. I’ll explain the basics of how the test works for a \(2 \times 2\) contingency table, though the test works fine for larger tables. As before, let’s have some notation:
|  | Happy | Sad | Total |
| --- | --- | --- | --- |
| Set on fire | \(O_{11}\) | \(O_{12}\) | \(R_{1}\) |
| Not set on fire | \(O_{21}\) | \(O_{22}\) | \(R_{2}\) |
| Total | \(C_{1}\) | \(C_{2}\) | \(N\) |
In order to construct the test Fisher treats both the row and column totals (\(R_1\), \(R_2\), \(C_1\) and \(C_2\)) as known, fixed quantities; and then calculates the probability that we would have obtained the observed frequencies that we did (\(O_{11}\), \(O_{12}\), \(O_{21}\) and \(O_{22}\)) given those totals. In the notation that we developed in Chapter 9 this is written: \[ P(O_{11}, O_{12}, O_{21}, O_{22} \ | \ R_1, R_2, C_1, C_2) \] and as you might imagine, it’s a slightly tricky exercise to figure out what this probability is, but it turns out that this probability is described by a distribution known as the hypergeometric distribution.183 Now that we know this, what we have to do to calculate our \(p\)-value is calculate the probability of observing this particular table or a table that is “more extreme”.184 Back in the 1920s, computing this sum was daunting even in the simplest of situations, but these days it’s pretty easy as long as the tables aren’t too big and the sample size isn’t too large. The conceptually tricky issue is to figure out what it means to say that one contingency table is more “extreme” than another. The easiest solution is to say that the table with the lowest probability is the most extreme. This then gives us the \(p\)-value.
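If you’re curious, here’s an optional sketch of that logic for the `salem.tabs` data using `dhyper()` (you certainly don’t need this to use the test). With all the margins fixed, the whole \(2 \times 2\) table is determined by a single cell; below I track the number of happy people who are on fire, which can only be 0, 1, 2 or 3:

```
# Optional sketch of the Fisher logic: the margins are 10 happy, 6 unhappy,
# 3 people on fire and 13 not on fire
probs <- dhyper(x = 0:3, m = 10, n = 6, k = 3)  # P(0, 1, 2 or 3 happy people on fire)
obs <- probs[1]                                 # the observed table has 0
sum(probs[probs <= obs])                        # p-value: about 0.036
```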
The implementation of the test in R is via the `fisher.test()` function. Here’s how it is used:
```
fisher.test( salem.tabs )
```
```
##
## Fisher's Exact Test for Count Data
##
## data: salem.tabs
## p-value = 0.03571
## alternative hypothesis: true odds ratio is not equal to 1
## 95 percent confidence interval:
## 0.000000 1.202913
## sample estimates:
## odds ratio
## 0
```
This is a bit more output than we got from some of our earlier tests. The main thing we’re interested in here is the \(p\)-value, which in this case is small enough (\(p=.036\)) to justify rejecting the null hypothesis that people on fire are just as happy as people not on fire.
## 12.8 The McNemar test
Suppose you’ve been hired to work for the Australian Generic Political Party (AGPP), and part of your job is to find out how effective the AGPP political advertisements are. So, what you do, is you put together a sample of \(N=100\) people, and ask them to watch the AGPP ads. Before they see anything, you ask them if they intend to vote for the AGPP; and then after showing the ads, you ask them again, to see if anyone has changed their minds. Obviously, if you’re any good at your job, you’d also do a whole lot of other things too, but let’s consider just this one simple experiment. One way to describe your data is via the following contingency table:
|  | Before | After | Total |
| --- | --- | --- | --- |
| Yes | 30 | 10 | 40 |
| No | 70 | 90 | 160 |
| Total | 100 | 100 | 200 |
At first pass, you might think that this situation lends itself to the Pearson \(\chi^2\) test of independence (as per Section 12.2). However, a little bit of thought reveals that we’ve got a problem: we have 100 participants, but 200 observations. This is because each person has provided us with an answer in both the before column and the after column. What this means is that the 200 observations aren’t independent of each other: if voter A says “yes” the first time and voter B says “no”, then you’d expect that voter A is more likely to say “yes” the second time than voter B! The consequence of this is that the usual \(\chi^2\) test won’t give trustworthy answers due to the violation of the independence assumption. Now, if this were a really uncommon situation, I wouldn’t be bothering to waste your time talking about it. But it’s not uncommon at all: this is a standard repeated measures design, and none of the tests we’ve considered so far can handle it. Eek.
The solution to the problem was published by McNemar (1947). The trick is to start by tabulating your data in a slightly different way:
|  | Before: Yes | Before: No | Total |
| --- | --- | --- | --- |
| After: Yes | 5 | 5 | 10 |
| After: No | 25 | 65 | 90 |
| Total | 30 | 70 | 100 |
This is exactly the same data, but it’s been rewritten so that each of our 100 participants appears in only one cell. Because we’ve written our data this way, the independence assumption is now satisfied, and this is a contingency table that we can use to construct an \(X^2\) goodness of fit statistic. However, as we’ll see, we need to do it in a slightly nonstandard way. To see what’s going on, it helps to label the entries in our table a little differently:
|  | Before: Yes | Before: No | Total |
| --- | --- | --- | --- |
| After: Yes | \(a\) | \(b\) | \(a+b\) |
| After: No | \(c\) | \(d\) | \(c+d\) |
| Total | \(a+c\) | \(b+d\) | \(n\) |
Next, let’s think about what our null hypothesis is: it’s that the “before” test and the “after” test have the same proportion of people saying “Yes, I will vote for AGPP”. Because of the way that we have rewritten the data, it means that we’re now testing the hypothesis that the row totals and column totals come from the same distribution. Thus, the null hypothesis in McNemar’s test is that we have “marginal homogeneity”. That is, the row totals and column totals have the same distribution: \(P_a + P_b = P_a + P_c\), and similarly that \(P_c + P_d = P_b + P_d\). Notice that this means that the null hypothesis actually simplifies to \(P_b = P_c\). In other words, as far as the McNemar test is concerned, it’s only the off-diagonal entries in this table (i.e., \(b\) and \(c\)) that matter! After noticing this, the McNemar test of marginal homogeneity is no different to a usual \(\chi^2\) test. After applying the Yates correction, our test statistic becomes: \[ X^2 = \frac{(|b-c| - 0.5)^2}{b+c} \] or, to revert to the notation that we used earlier in this chapter: \[ X^2 = \frac{(|O_{12}-O_{21}| - 0.5)^2}{O_{12} + O_{21}} \] and this statistic has an (approximately) \(\chi^2\) distribution with \(df=1\). However, remember that – just like the other \(\chi^2\) tests – it’s only an approximation, so you need to have reasonably large expected cell counts for it to work.
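As a quick sanity check, the only numbers that matter in our table are \(b = 5\) and \(c = 25\). One small wrinkle worth knowing about: R’s `mcnemar.test()` implements its continuity correction by subtracting 1 rather than 0.5, which is why the statistic it reports below (12.03) is a little smaller than the formula above would give (about 12.68):

```
# By-hand check using only the off-diagonal cells of the table above
b <- 5    # before: no,  after: yes
c <- 25   # before: yes, after: no
(abs(b - c) - 0.5)^2 / (b + c)   # the version written above: about 12.68
(abs(b - c) - 1)^2 / (b + c)     # what mcnemar.test() computes: about 12.03
```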
### 12.8.1 Doing the McNemar test in R
Now that you know what the McNemar test is all about, let’s actually run one. The `agpp.Rdata` file contains the raw data that I discussed previously, so let’s have a look at it:
```
load(file.path(projecthome, "data/agpp.Rdata"))
str(agpp)
```
```
## 'data.frame': 100 obs. of 3 variables:
## $ id : Factor w/ 100 levels "subj.1","subj.10",..: 1 13 24 35 46 57 68 79 90 2 ...
## $ response_before: Factor w/ 2 levels "no","yes": 1 2 2 2 1 1 1 1 1 1 ...
## $ response_after : Factor w/ 2 levels "no","yes": 2 1 1 1 1 1 1 2 1 1 ...
```
The `agpp` data frame contains three variables, an `id` variable that labels each participant in the data set (we’ll see why that’s useful in a moment), a `response_before` variable that records the person’s answer when they were asked the question the first time, and a `response_after` variable that shows the answer that they gave when asked the same question a second time. As usual, here’s the first 6 entries: `head(agpp)`
```
## id response_before response_after
## 1 subj.1 no yes
## 2 subj.2 yes no
## 3 subj.3 yes no
## 4 subj.4 yes no
## 5 subj.5 no no
## 6 subj.6 no no
```
and here’s a summary:
`summary(agpp)`
```
## id response_before response_after
## subj.1 : 1 no :70 no :90
## subj.10 : 1 yes:30 yes:10
## subj.100: 1
## subj.11 : 1
## subj.12 : 1
## subj.13 : 1
## (Other) :94
```
Notice that each participant appears only once in this data frame. When we tabulate this data frame using `xtabs()` , we get the appropriate table:
```
right.table <- xtabs( ~ response_before + response_after, data = agpp)
print( right.table )
```
```
##                response_after
## response_before no yes
##             no  65   5
##             yes 25   5
```
and from there, we can run the McNemar test by using the `mcnemar.test()` function:
```
mcnemar.test( right.table )
```
```
##
## McNemar's Chi-squared test with continuity correction
##
## data: right.table
## McNemar's chi-squared = 12.033, df = 1, p-value = 0.0005226
```
And we’re done. We’ve just run a McNemar’s test to determine if people were just as likely to vote AGPP after the ads as they were beforehand. The test was significant (\(\chi^2(1) = 12.04, p<.001\)), suggesting that they were not. And in fact, it looks like the ads had a negative effect: people were less likely to vote AGPP after seeing the ads. Which makes a lot of sense when you consider the quality of a typical political advertisement.
## 12.9 What’s the difference between McNemar and independence?
Let’s go all the way back to the beginning of the chapter, and look at the `cards` data set again. If you recall, the actual experimental design that I described involved people making two choices. Because we have information about the first choice and the second choice that everyone made, we can construct the following contingency table that cross-tabulates the first choice against the second choice.
```
cardChoices <- xtabs( ~ choice_1 + choice_2, data = cards )
cardChoices
```
```
##           choice_2
## choice_1   clubs diamonds hearts spades
##   clubs       10        9     10      6
##   diamonds    20        4     13     14
##   hearts      20       18      3     23
##   spades      18       13     15      4
```
Suppose I wanted to know whether the choice you make the second time is dependent on the choice you made the first time. This is where a test of independence is useful, and what we’re trying to do is see if there’s some relationship between the rows and columns of this table. Here’s the result:
```
chisq.test( cardChoices )
```
```
##
## Pearson's Chi-squared test
##
## data: cardChoices
## X-squared = 29.237, df = 9, p-value = 0.0005909
```
Alternatively, suppose I wanted to know if, on average, the frequencies of suit choices were different the second time than the first time. In that situation, what I’m really trying to see is whether the row totals in `cardChoices` (i.e., the frequencies for `choice_1` ) are different from the column totals (i.e., the frequencies for `choice_2` ). That’s when you use the McNemar test:
```
mcnemar.test( cardChoices )
```
```
##
## McNemar's Chi-squared test
##
## data: cardChoices
## McNemar's chi-squared = 16.033, df = 6, p-value = 0.01358
```
Notice that the results are different! These aren’t the same test.
## 12.10 Summary
The key ideas discussed in this chapter are:
* The chi-square goodness of fit test (Section 12.1) is used when you have a table of observed frequencies of different categories; and the null hypothesis gives you a set of “known” probabilities to compare them to. You can either use the `goodnessOfFitTest()` function in the `lsr` package to run this test, or the `chisq.test()` function.
* The chi-square test of independence (Section 12.2) is used when you have a contingency table (cross-tabulation) of two categorical variables. The null hypothesis is that there is no relationship/association between the variables. You can either use the `associationTest()` function in the `lsr` package, or you can use `chisq.test()`.
* Effect size for a contingency table can be measured in several ways (Section 12.4). In particular we noted Cramér’s \(V\) statistic, which can be calculated using `cramersV()`. This is also part of the output produced by `associationTest()`.
* Both versions of the Pearson test rely on two assumptions: that the expected frequencies are sufficiently large, and that the observations are independent (Section 12.5). The Fisher exact test (Section 12.7) can be used when the expected frequencies are small, `fisher.test(x = contingency.table)`. The McNemar test (Section 12.8) can be used for some kinds of violations of independence, `mcnemar.test(x = contingency.table)`.
If you’re interested in learning more about categorical data analysis, a good first choice would be Agresti (1996) which, as the title suggests, provides an Introduction to Categorical Data Analysis. If the introductory book isn’t enough for you (or can’t solve the problem you’re working on) you could consider Agresti (2002), Categorical Data Analysis. The latter is a more advanced text, so it’s probably not wise to jump straight from this book to that one.
Agresti, A. 2002. Categorical Data Analysis. 2nd ed. Hoboken, NJ: Wiley.
I should point out that this issue does complicate the story somewhat: I’m not going to cover it in this book, but there’s a sneaky trick that you can do to rewrite the equation for the goodness of fit statistic as a sum over \(k-1\) independent things. When we do so we get the “proper” sampling distribution, which is chi-square with \(k-1\) degrees of freedom. In fact, in order to get the maths to work out properly, you actually have to rewrite things that way. But it’s beyond the scope of an introductory book to show the maths in that much detail: all I wanted to do is give you a sense of why the goodness of fit statistic is associated with the chi-squared distribution.↩
*
I feel obliged to point out that this is an over-simplification. It works nicely for quite a few situations; but every now and then we’ll come across degrees of freedom values that aren’t whole numbers. Don’t let this worry you too much – when you come across this, just remind yourself that “degrees of freedom” is actually a bit of a messy concept, and that the nice simple story that I’m telling you here isn’t the whole story. For an introductory class, it’s usually best to stick to the simple story: but I figure it’s best to warn you to expect this simple story to fall apart. If I didn’t give you this warning, you might start getting confused when you see \(df = 3.4\) or something; and (incorrectly) thinking that you had misunderstood something that I’ve taught you, rather than (correctly) realising that there’s something that I haven’t told you.↩
*
In practice, the sample size isn’t always fixed… e.g., we might run the experiment over a fixed period of time, and the number of people participating depends on how many people show up. That doesn’t matter for the current purposes.↩
*
Well, sort of. The conventions for how statistics should be reported tend to differ somewhat from discipline to discipline; I’ve tended to stick with how things are done in psychology, since that’s what I do. But the general principle of providing enough information to the reader to allow them to check your results is pretty universal, I think.↩
*
To some people, this advice might sound odd, or at least in conflict with the “usual” advice on how to write a technical report. Very typically, students are told that the “results” section of a report is for describing the data and reporting statistical analysis; and the “discussion” section is for providing interpretation. That’s true as far as it goes, but I think people often interpret it way too literally. The way I usually approach it is to provide a quick and simple interpretation of the data in the results section, so that my reader understands what the data are telling us. Then, in the discussion, I try to tell a bigger story; about how my results fit with the rest of the scientific literature. In short; don’t let the “interpretation goes in the discussion” advice turn your results section into incomprehensible garbage. Being understood by your reader is much more important.↩
*
Complicating matters, the \(G\)-test is a special case of a whole class of tests that are known as likelihood ratio tests. I don’t cover LRTs in this book, but they are quite handy things to know about.↩
*
A technical note. The way I’ve described the test pretends that the column totals are fixed (i.e., the researcher intended to survey 87 robots and 93 humans) and the row totals are random (i.e., it just turned out that 28 people chose the puppy). To use the terminology from my mathematical statistics textbook (Hogg, McKean, and Craig 2005) I should technically refer to this situation as a chi-square test of homogeneity; and reserve the term chi-square test of independence for the situation where both the row and column totals are random outcomes of the experiment. In the initial drafts of this book that’s exactly what I did. However, it turns out that these two tests are identical; and so I’ve collapsed them together.↩
*
Technically, \(E_{ij}\) here is an estimate, so I should probably write it \(\hat{E}_{ij}\). But since no-one else does, I won’t either.↩
*
A problem many of us worry about in real life.↩
*
Though I do feel that it’s worth mentioning the
`assocstats()` function in the `vcd` package. If you install and load the `vcd` package, then a command like
```
assocstats( chapekFrequencies )
```
will run the \(\chi^2\) test as well as the likelihood ratio test (not discussed here); and then report three different measures of effect size: \(\phi^2\), Cramér’s \(V\), and the contingency coefficient (not discussed here).↩
*
Not really.↩
*
This example is based on a joke article published in the Journal of Irreproducible Results.↩
*
The R functions for this distribution are
`dhyper()` , `phyper()` , `qhyper()` and `rhyper()` , though you don’t need them for this book, and I haven’t given you enough information to use these to perform the Fisher exact test the long way.↩
*
Not surprisingly, the Fisher exact test is motivated by Fisher’s interpretation of a \(p\)-value, not Neyman’s!↩
# Chapter 13 Comparing two means
In the previous chapter we covered the situation when your outcome variable is nominal scale and your predictor variable185 is also nominal scale. Lots of real world situations have that character, and so you’ll find that chi-square tests in particular are quite widely used. However, you’re much more likely to find yourself in a situation where your outcome variable is interval scale or higher, and what you’re interested in is whether the average value of the outcome variable is higher in one group or another. For instance, a psychologist might want to know if anxiety levels are higher among parents than non-parents, or if working memory capacity is reduced by listening to music (relative to not listening to music). In a medical context, we might want to know if a new drug increases or decreases blood pressure. An agricultural scientist might want to know whether adding phosphorus to Australian native plants will kill them.186 In all these situations, our outcome variable is a fairly continuous, interval or ratio scale variable; and our predictor is a binary “grouping” variable. In other words, we want to compare the means of the two groups.
The standard answer to the problem of comparing means is to use a \(t\)-test, of which there are several varieties depending on exactly what question you want to solve. As a consequence, the majority of this chapter focuses on different types of \(t\)-test: one sample \(t\)-tests are discussed in Section 13.2, independent samples \(t\)-tests are discussed in Sections 13.3 and 13.4, and paired samples \(t\)-tests are discussed in Section 13.5. After that, we’ll talk a bit about Cohen’s \(d\), which is the standard measure of effect size for a \(t\)-test (Section 13.8). The later sections of the chapter focus on the assumptions of the \(t\)-tests, and possible remedies if they are violated. However, before discussing any of these useful things, we’ll start with a discussion of the \(z\)-test.
## 13.1 The one-sample \(z\)-test
In this section I’ll describe one of the most useless tests in all of statistics: the \(z\)-test. Seriously – this test is almost never used in real life. Its only real purpose is that, when teaching statistics, it’s a very convenient stepping stone along the way towards the \(t\)-test, which is probably the most (over)used tool in all statistics.
### 13.1.1 The inference problem that the test addresses
To introduce the idea behind the \(z\)-test, let’s use a simple example. A friend of mine, Dr Zeppo, grades his introductory statistics class on a curve. Let’s suppose that the average grade in his class is 67.5, and the standard deviation is 9.5. Of his many hundreds of students, it turns out that 20 of them also take psychology classes. Out of curiosity, I find myself wondering: do the psychology students tend to get the same grades as everyone else (i.e., mean 67.5) or do they tend to score higher or lower? He emails me the `zeppo.Rdata` file, which I use to pull up the `grades` of those students,
```
load( file.path(projecthome, "data/zeppo.Rdata" ))
print( grades )
```
```
## [1] 50 60 60 64 66 66 67 69 70 74 76 76 77 79 79 79 81 82 82 89
```
and calculate the mean:
`mean( grades )` `## [1] 72.3`
Hm. It might be that the psychology students are scoring a bit higher than normal: that sample mean of \(\bar{X} = 72.3\) is a fair bit higher than the hypothesised population mean of \(\mu = 67.5\), but on the other hand, a sample size of \(N = 20\) isn’t all that big. Maybe it’s pure chance.
To answer the question, it helps to be able to write down what it is that I think I know. Firstly, I know that the sample mean is \(\bar{X} = 72.3\). If I’m willing to assume that the psychology students have the same standard deviation as the rest of the class then I can say that the population standard deviation is \(\sigma = 9.5\). I’ll also assume that since Dr Zeppo is grading to a curve, the psychology student grades are normally distributed.
Next, it helps to be clear about what I want to learn from the data. In this case, my research hypothesis relates to the population mean \(\mu\) for the psychology student grades, which is unknown. Specifically, I want to know if \(\mu = 67.5\) or not. Given that this is what I know, can we devise a hypothesis test to solve our problem? The data, along with the hypothesised distribution from which they are thought to arise, are shown in Figure 13.1. Not entirely obvious what the right answer is, is it? For this, we are going to need some statistics.
### 13.1.2 Constructing the hypothesis test
The first step in constructing a hypothesis test is to be clear about what the null and alternative hypotheses are. This isn’t too hard to do. Our null hypothesis, \(H_0\), is that the true population mean \(\mu\) for psychology student grades is 67.5%; and our alternative hypothesis is that the population mean isn’t 67.5%. If we write this in mathematical notation, these hypotheses become, \[ \begin{array}{ll} H_0: & \mu = 67.5 \\ H_1: & \mu \neq 67.5 \end{array} \] though to be honest this notation doesn’t add much to our understanding of the problem; it’s just a compact way of writing down what we’re trying to learn from the data. The null hypothesis \(H_0\) and the alternative hypothesis \(H_1\) for our test are both illustrated in Figure 13.2. In addition to providing us with these hypotheses, the scenario outlined above provides us with a fair amount of background knowledge that might be useful. Specifically, there are two special pieces of information that we can add:
1. The psychology grades are normally distributed.
2. The true standard deviation of these scores \(\sigma\) is known to be 9.5.
For the moment, we’ll act as if these are absolutely trustworthy facts. In real life, this kind of absolutely trustworthy background knowledge doesn’t exist, and so if we want to rely on these facts we’ll just have to make the assumption that these things are true. However, since these assumptions may or may not be warranted, we might need to check them. For now though, we’ll keep things simple.
The next step is to figure out what would be a good choice for a diagnostic test statistic; something that would help us discriminate between \(H_0\) and \(H_1\). Given that the hypotheses all refer to the population mean \(\mu\), you’d feel pretty confident that the sample mean \(\bar{X}\) would be a pretty useful place to start. What we could do is look at the difference between the sample mean \(\bar{X}\) and the value that the null hypothesis predicts for the population mean. In our example, that would mean we calculate \(\bar{X} - 67.5\). More generally, if we let \(\mu_0\) refer to the value that the null hypothesis claims is our population mean, then we’d want to calculate \[ \bar{X} - \mu_0 \] If this quantity equals or is very close to 0, things are looking good for the null hypothesis. If this quantity is a long way away from 0, then it’s looking less likely that the null hypothesis is worth retaining. But how far away from zero should it be for us to reject \(H_0\)?
To figure that out, we need to be a bit more sneaky, and we’ll need to rely on those two pieces of background knowledge that I wrote down previously, namely that the raw data are normally distributed, and we know the value of the population standard deviation \(\sigma\). If the null hypothesis is actually true, and the true mean is \(\mu_0\), then these facts together mean that we know the complete population distribution of the data: a normal distribution with mean \(\mu_0\) and standard deviation \(\sigma\). Adopting the notation from Section 9.5, a statistician might write this as: \[ X \sim \mbox{Normal}(\mu_0,\sigma^2) \]
Okay, if that’s true, then what can we say about the distribution of \(\bar{X}\)? Well, as we discussed earlier (see Section 10.3.3), the sampling distribution of the mean \(\bar{X}\) is also normal, and has mean \(\mu\). But the standard deviation of this sampling distribution \(\mbox{SE}({\bar{X}})\), which is called the standard error of the mean, is \[ \mbox{SE}({\bar{X}}) = \frac{\sigma}{\sqrt{N}} \] In other words, if the null hypothesis is true then the sampling distribution of the mean can be written as follows: \[ \bar{X} \sim \mbox{Normal}(\mu_0,\mbox{SE}({\bar{X}})) \] Now comes the trick. What we can do is convert the sample mean \(\bar{X}\) into a standard score (Section 5.6). This is conventionally written as \(z\), but for now I’m going to refer to it as \(z_{\bar{X}}\). (The reason for using this expanded notation is to help you remember that we’re calculating a standardised version of a sample mean, not a standardised version of a single observation, which is what a \(z\)-score usually refers to). When we do so, the \(z\)-score for our sample mean is \[ z_{\bar{X}} = \frac{\bar{X} - \mu_0}{\mbox{SE}({\bar{X}})} \] or, equivalently \[ z_{\bar{X}} = \frac{\bar{X} - \mu_0}{\sigma / \sqrt{N}} \] This \(z\)-score is our test statistic. The nice thing about using this as our test statistic is that like all \(z\)-scores, it has a standard normal distribution: \[ z_{\bar{X}} \sim \mbox{Normal}(0,1) \] (again, see Section 5.6 if you’ve forgotten why this is true). In other words, regardless of what scale the original data are on, the \(z\)-statistic itself always has the same interpretation: it’s equal to the number of standard errors that separate the observed sample mean \(\bar{X}\) from the population mean \(\mu_0\) predicted by the null hypothesis. Better yet, regardless of what the population parameters for the raw scores actually are, the 5% critical regions for the \(z\)-test are always the same, as illustrated in Figures 13.4 and 13.3. And what this meant, way back in the days when people did all their statistics by hand, is that someone could publish a table like this:
desired \(\alpha\) level | two-sided test | one-sided test |
| --- | --- | --- |
.1 | 1.644854 | 1.281552 |
.05 | 1.959964 | 1.644854 |
.01 | 2.575829 | 2.326348 |
.001 | 3.290527 | 3.090232 |
which in turn meant that researchers could calculate their \(z\)-statistic by hand, and then look up the critical value in a text book. That was an incredibly handy thing to be able to do back then, but it’s kind of unnecessary these days, since it’s trivially easy to do it with software like R.
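If you’re curious where those numbers come from, here’s a minimal sketch (nothing beyond base R is assumed) that reproduces the table using the `qnorm()` function: for a two-sided test you put \(\alpha/2\) in each tail, and for a one-sided test you put all of \(\alpha\) in one tail.

```
alpha <- c( .1, .05, .01, .001 )
two.sided <- qnorm( 1 - alpha/2 )   # critical values with alpha/2 in each tail
one.sided <- qnorm( 1 - alpha )     # critical values with all of alpha in one tail
print( cbind( alpha, two.sided, one.sided ) )
```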
### 13.1.3 A worked example using R
Now, as I mentioned earlier, the \(z\)-test is almost never used in practice. It’s so rarely used in real life that the basic installation of R doesn’t have a built in function for it. However, the test is so incredibly simple that it’s really easy to do one manually. Let’s go back to the data from Dr Zeppo’s class. Having loaded the `grades` data, the first thing I need to do is calculate the sample mean:
```
sample.mean <- mean( grades )
print( sample.mean )
```
`## [1] 72.3`
Then, I create variables corresponding to the known population standard deviation (\(\sigma = 9.5\)), and the value of the population mean that the null hypothesis specifies (\(\mu_0 = 67.5\)):
```
mu.null <- 67.5
sd.true <- 9.5
```
Let’s also create a variable for the sample size. We could count up the number of observations ourselves, and type `N <- 20` at the command prompt, but counting is tedious and repetitive. Let’s get R to do the tedious repetitive bit by using the `length()` function, which tells us how many elements there are in a vector:
```
N <- length( grades )
print( N )
```
`## [1] 20`
Next, let’s calculate the (true) standard error of the mean:
```
sem.true <- sd.true / sqrt(N)
print(sem.true)
```
`## [1] 2.124265`
And finally, we calculate our \(z\)-score:
```
z.score <- (sample.mean - mu.null) / sem.true
print( z.score )
```
`## [1] 2.259606`
At this point, we would traditionally look up the value 2.26 in our table of critical values. Our original hypothesis was two-sided (we didn’t really have any theory about whether psych students would be better or worse at statistics than other students) so our hypothesis test is two-sided (or two-tailed) also. Looking at the little table that I showed earlier, we can see that 2.26 is bigger than the critical value of 1.96 that would be required to be significant at \(\alpha = .05\), but smaller than the value of 2.58 that would be required to be significant at a level of \(\alpha = .01\). Therefore, we can conclude that we have a significant effect, which we might write up by saying something like this:
With a mean grade of 72.3 in the sample of psychology students, and assuming a true population standard deviation of 9.5, we can conclude that the psychology students have significantly different statistics scores to the class average (\(z = 2.26\), \(N=20\), \(p<.05\)).
However, what if we want an exact \(p\)-value? Well, back in the day, the tables of critical values were huge, and so you could look up your actual \(z\)-value, and find the smallest value of \(\alpha\) for which your data would be significant (which, as discussed earlier, is the very definition of a \(p\)-value). However, looking things up in books is tedious, and typing things into computers is awesome. So let’s do it using R instead. Now, notice that the \(\alpha\) level of a \(z\)-test (or any other test, for that matter) defines the total area “under the curve” for the critical region, right? That is, if we set \(\alpha = .05\) for a two-sided test, then the critical region is set up such that the area under the curve for the critical region is \(.05\). And, for the \(z\)-test, the critical value of 1.96 is chosen that way because the area in the lower tail (i.e., below \(-1.96\)) is exactly \(.025\) and the area in the upper tail (i.e., above \(1.96\)) is exactly \(.025\). So, since our observed \(z\)-statistic is \(2.26\), why not calculate the area under the curve below \(-2.26\) or above \(2.26\)? In R we can calculate this using the `pnorm()` function. For the upper tail:
```
upper.area <- pnorm( q = z.score, lower.tail = FALSE )
print( upper.area )
```
`## [1] 0.01192287` The `lower.tail = FALSE` is me telling R to calculate the area under the curve from 2.26 and upwards. If I’d told it that `lower.tail = TRUE` , then R would calculate the area from 2.26 and below, and it would give me an answer 0.9880771. Alternatively, to calculate the area from \(-2.26\) and below, we get
```
lower.area <- pnorm( q = -z.score, lower.tail = TRUE )
print( lower.area )
```
`## [1] 0.01192287`
Thus we get our \(p\)-value:
```
p.value <- lower.area + upper.area
print( p.value )
```
`## [1] 0.02384574`
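Because the normal distribution is symmetric, the two tail areas are identical, so (just as a sketch of a slightly lazier way to get the same number) you could also compute the two-sided \(p\)-value in one line:

```
p.value <- 2 * pnorm( abs(z.score), lower.tail = FALSE )   # doubles the upper tail area
print( p.value )
```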
### 13.1.4 Assumptions of the \(z\)-test
As I’ve said before, all statistical tests make assumptions. Some tests make reasonable assumptions, while other tests do not. The test I’ve just described – the one sample \(z\)-test – makes three basic assumptions. These are:
* Normality. As usually described, the \(z\)-test assumes that the true population distribution is normal.187 This is often pretty reasonable, and not only that, it’s an assumption that we can check if we feel worried about it (see Section 13.9).
* Independence. The second assumption of the test is that the observations in your data set are not correlated with each other, or related to each other in some funny way. This isn’t as easy to check statistically: it relies a bit on good experimental design. An obvious (and stupid) example of something that violates this assumption is a data set where you “copy” the same observation over and over again in your data file: so you end up with a massive “sample size”, consisting of only one genuine observation. More realistically, you have to ask yourself if it’s really plausible to imagine that each observation is a completely random sample from the population that you’re interested in. In practice, this assumption is never met; but we try our best to design studies that minimise the problems of correlated data.
* Known standard deviation. The third assumption of the \(z\)-test is that the true standard deviation of the population is known to the researcher. This is just stupid. In no real world data analysis problem do you know the standard deviation \(\sigma\) of some population, but are completely ignorant about the mean \(\mu\). In other words, this assumption is always wrong.
In view of the stupidity of assuming that \(\sigma\) is known, let’s see if we can live without it. This takes us out of the dreary domain of the \(z\)-test, and into the magical kingdom of the \(t\)-test, with unicorns and fairies and leprechauns, and um…
## 13.2 The one-sample \(t\)-test
After some thought, I decided that it might not be safe to assume that the psychology student grades necessarily have the same standard deviation as the other students in Dr Zeppo’s class. After all, if I’m entertaining the hypothesis that they don’t have the same mean, then why should I believe that they absolutely have the same standard deviation? In view of this, I should really stop assuming that I know the true value of \(\sigma\). This violates the assumptions of my \(z\)-test, so in one sense I’m back to square one. However, it’s not like I’m completely bereft of options. After all, I’ve still got my raw data, and those raw data give me an estimate of the population standard deviation:
`sd( grades )` `## [1] 9.520615`
In other words, while I can’t say that I know that \(\sigma = 9.5\), I can say that \(\hat\sigma = 9.52\).
Okay, cool. The obvious thing that you might think to do is run a \(z\)-test, but using the estimated standard deviation of 9.52 instead of relying on my assumption that the true standard deviation is 9.5. So, we could just type this new number into R and out would come the answer. And you probably wouldn’t be surprised to hear that this would still give us a significant result. This approach is close, but it’s not quite correct. Because we are now relying on an estimate of the population standard deviation, we need to make some adjustment for the fact that we have some uncertainty about what the true population standard deviation actually is. Maybe our data are just a fluke … maybe the true population standard deviation is 11, for instance. But if that were actually true, and we ran the \(z\)-test assuming \(\sigma=11\), then the result would end up being non-significant. That’s a problem, and it’s one we’re going to have to address.
### 13.2.1 Introducing the \(t\)-test
This ambiguity is annoying, and it was resolved in 1908 by a guy called William Sealy Gosset (Student 1908), who was working as a chemist for the Guinness brewery at the time (see Box 1987). Because Guinness took a dim view of its employees publishing statistical analysis (apparently they felt it was a trade secret), he published the work under the pseudonym “A Student”, and to this day, the full name of the \(t\)-test is actually Student’s \(t\)-test. The key thing that Gosset figured out is how we should accommodate the fact that we aren’t completely sure what the true standard deviation is.188 The answer is that it subtly changes the sampling distribution. In the \(t\)-test, our test statistic (now called a \(t\)-statistic) is calculated in exactly the same way I mentioned above. If our null hypothesis is that the true mean is \(\mu\), but our sample has mean \(\bar{X}\) and our estimate of the population standard deviation is \(\hat{\sigma}\), then our \(t\) statistic is: \[ t = \frac{\bar{X} - \mu}{\hat{\sigma}/\sqrt{N} } \] The only thing that has changed in the equation is that instead of using the known true value \(\sigma\), we use the estimate \(\hat{\sigma}\). And if this estimate has been constructed from \(N\) observations, then the sampling distribution turns into a \(t\)-distribution with \(N-1\) degrees of freedom (df). The \(t\) distribution is very similar to the normal distribution, but has “heavier” tails, as discussed earlier in Section 9.6 and illustrated in Figure 13.5. Notice, though, that as df gets larger, the \(t\)-distribution starts to look identical to the standard normal distribution. This is as it should be: if you have a sample size of \(N = 70,000,000\) then your “estimate” of the standard deviation would be pretty much perfect, right? So, you should expect that for large \(N\), the \(t\)-test would behave exactly the same way as a \(z\)-test. And that’s exactly what happens!
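Just to make the connection to the \(z\)-test concrete, here’s a minimal sketch of that calculation done by hand, assuming the `grades`, `sample.mean`, `mu.null` and `N` variables from the \(z\)-test example are still in your workspace (in practice you’d use the functions described in the next section):

```
sigma.hat <- sd( grades )                      # estimated population standard deviation
sem.est <- sigma.hat / sqrt( N )               # estimated standard error of the mean
t.stat <- ( sample.mean - mu.null ) / sem.est  # the t-statistic
df <- N - 1                                    # degrees of freedom
p.value <- 2 * pt( abs(t.stat), df = df, lower.tail = FALSE )  # two-sided p-value
print( c( t = t.stat, df = df, p = p.value ) )
```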
### 13.2.2 Doing the test in R
As you might expect, the mechanics of the \(t\)-test are almost identical to the mechanics of the \(z\)-test. So there’s not much point in going through the tedious exercise of showing you how to do the calculations using low level commands: it’s pretty much identical to the calculations that we did earlier, except that we use the estimated standard deviation (i.e., something like `se.est <- sd(grades)` ), and then we test our hypothesis using the \(t\) distribution rather than the normal distribution (i.e., we use `pt()` rather than `pnorm()`). And so instead of going through the calculations in tedious detail for a second time, I’ll jump straight to showing you how \(t\)-tests are actually done in practice. The situation with \(t\)-tests is very similar to the one we encountered with chi-squared tests in Chapter 12. R comes with one function called `t.test()` that is very flexible (it can run lots of different kinds of \(t\)-tests) and is somewhat terse (the output is quite compressed). Later on in the chapter I’ll show you how to use the `t.test()` function (Section 13.7), but to start out with I’m going to rely on some simpler functions in the `lsr` package. Just like last time, what I’ve done is written a few simpler functions, each of which does only one thing. So, if you want to run a one-sample \(t\)-test, use the `oneSampleTTest()` function! It’s pretty straightforward to use: all you need to do is specify `x` , the variable containing the data, and `mu` , the true population mean according to the null hypothesis. All you need to type is this:
```
library(lsr)
oneSampleTTest( x=grades, mu=67.5 )
```
```
##
## One sample t-test
##
## Data variable: grades
##
## Descriptive statistics:
## grades
## mean 72.300
## std dev. 9.521
##
## Hypotheses:
## null: population mean equals 67.5
## alternative: population mean not equal to 67.5
##
## Test results:
## t-statistic: 2.255
## degrees of freedom: 19
## p-value: 0.036
##
## Other information:
## two-sided 95% confidence interval: [67.844, 76.756]
## estimated effect size (Cohen's d): 0.504
```
Easy enough. Now let’s go through the output. Just like we saw in the last chapter, I’ve written the functions so that the output is pretty verbose. It tries to describe in a lot of detail what it’s actually done:
```
One sample t-test
Data variable: grades
Descriptive statistics:
grades
mean 72.300
std dev. 9.521
Hypotheses:
null: population mean equals 67.5
alternative: population mean not equal to 67.5
Test results:
t-statistic: 2.255
degrees of freedom: 19
p-value: 0.036
Other information:
two-sided 95% confidence interval: [67.844, 76.756]
estimated effect size (Cohen's d): 0.504
```
Reading this output from top to bottom, you can see it’s trying to lead you through the data analysis process. The first two lines tell you what kind of test was run and what data were used. It then gives you some basic information about the sample: specifically, the sample mean and standard deviation of the data. It then moves towards the inferential statistics part. It starts by telling you what the null and alternative hypotheses were, and then it reports the results of the test: the \(t\)-statistic, the degrees of freedom, and the \(p\)-value. Finally, it reports two other things you might care about: the confidence interval for the mean, and a measure of effect size (we’ll talk more about effect sizes later).
So that seems straightforward enough. Now what do we do with this output? Well, since we’re pretending that we actually care about my toy example, we’re overjoyed to discover that the result is statistically significant (i.e. \(p\) value below .05). We could report the result by saying something like this:
With a mean grade of 72.3, the psychology students scored slightly higher than the average grade of 67.5 (\(t(19) = 2.25\), \(p<.05\)); the 95% confidence interval is [67.8, 76.8].
where \(t(19)\) is shorthand notation for a \(t\)-statistic that has 19 degrees of freedom. That said, it’s often the case that people don’t report the confidence interval, or do so using a much more compressed form than I’ve done here. For instance, it’s not uncommon to see the confidence interval included as part of the stat block, like this:
\(t(19) = 2.25\), \(p<.05\), CI\(_{95} = [67.8, 76.8]\)
With that much jargon crammed into half a line, you know it must be really smart.189
### 13.2.3 Assumptions of the one sample \(t\)-test
Okay, so what assumptions does the one-sample \(t\)-test make? Well, since the \(t\)-test is basically a \(z\)-test with the assumption of known standard deviation removed, you shouldn’t be surprised to see that it makes the same assumptions as the \(z\)-test, minus the one about the known standard deviation. That is
* Normality. We’re still assuming that the population distribution is normal^[A technical comment… in the same way that we can weaken the assumptions of the \(z\)-test so that we’re only talking about the sampling distribution, we can weaken the \(t\) test assumptions so that we don’t have to assume normality of the population. However, for the \(t\)-test, it’s trickier to do this. As before, we can replace the assumption of population normality with an assumption that the sampling distribution of \(\bar{X}\) is normal. However, remember that we’re also relying on a sample estimate of the standard deviation; and so we also require the sampling distribution of \(\hat{\sigma}\) to be chi-square. That makes things nastier, and this version is rarely used in practice: fortunately, if the population is normal, then both of these two assumptions are met.], and as noted earlier, there are standard tools that you can use to check to see if this assumption is met (Section 13.9), and other tests you can do in its place if this assumption is violated (Section 13.10).
* Independence. Once again, we have to assume that the observations in our sample are generated independently of one another. See the earlier discussion about the \(z\)-test for specifics (Section 13.1.4).
Overall, these two assumptions aren’t terribly unreasonable, and as a consequence the one-sample \(t\)-test is pretty widely used in practice as a way of comparing a sample mean against a hypothesised population mean.
## 13.3 The independent samples \(t\)-test (Student test)
Although the one sample \(t\)-test has its uses, it’s not the most typical example of a \(t\)-test190. A much more common situation arises when you’ve got two different groups of observations. In psychology, this tends to correspond to two different groups of participants, where each group corresponds to a different condition in your study. For each person in the study, you measure some outcome variable of interest, and the research question that you’re asking is whether or not the two groups have the same population mean. This is the situation that the independent samples \(t\)-test is designed for.
### 13.3.1 The data
Suppose we have 33 students taking Dr Harpo’s statistics lectures, and Dr Harpo doesn’t grade to a curve. Actually, Dr Harpo’s grading is a bit of a mystery, so we don’t really know anything about what the average grade is for the class as a whole. There are two tutors for the class, Anastasia and Bernadette. There are \(N_1 = 15\) students in Anastasia’s tutorials, and \(N_2 = 18\) in Bernadette’s tutorials. The research question I’m interested in is whether Anastasia or Bernadette is a better tutor, or if it doesn’t make much of a difference. Dr Harpo emails me the course grades, in the `harpo.Rdata` file. As usual, I’ll load the file and have a look at what variables it contains:
```
load (file.path(projecthome, "data/harpo.Rdata" ))
str(harpo)
```
```
## 'data.frame': 33 obs. of 2 variables:
## $ grade: num 65 72 66 74 73 71 66 76 69 79 ...
## $ tutor: Factor w/ 2 levels "Anastasia","Bernadette": 1 2 2 1 1 2 2 2 2 2 ...
```
As we can see, there’s a single data frame with two variables, `grade` and `tutor` . The `grade` variable is a numeric vector, containing the grades for all \(N = 33\) students taking Dr Harpo’s class; the `tutor` variable is a factor that indicates who each student’s tutor was. The first six observations in this data set are shown below: `head( harpo )`
We can calculate means and standard deviations, using the `mean()` and `sd()` functions. Rather than show the R output, here’s a nice little summary table:
mean | std dev | N |
| --- | --- | --- |
Anastasia’s students | 74.53 | 9.00 | 15 |
Bernadette’s students | 69.06 | 5.77 | 18 |
To give you a more detailed sense of what’s going on here, I’ve plotted histograms showing the distribution of grades for both tutors (Figure 13.6 and 13.7). Inspection of these histograms suggests that the students in Anastasia’s class may be getting slightly better grades on average, though they also seem a little more variable.
Here is a simpler plot showing the means and corresponding confidence intervals for both groups of students (Figure 13.8).
### 13.3.2 Introducing the test
The independent samples \(t\)-test comes in two different forms, Student’s and Welch’s. The original Student \(t\)-test – which is the one I’ll describe in this section – is the simpler of the two, but relies on much more restrictive assumptions than the Welch \(t\)-test. Assuming for the moment that you want to run a two-sided test, the goal is to determine whether two “independent samples” of data are drawn from populations with the same mean (the null hypothesis) or different means (the alternative hypothesis). When we say “independent” samples, what we really mean here is that there’s no special relationship between observations in the two samples. This probably doesn’t make a lot of sense right now, but it will be clearer when we come to talk about the paired samples \(t\)-test later on. For now, let’s just point out that if we have an experimental design where participants are randomly allocated to one of two groups, and we want to compare the two groups’ mean performance on some outcome measure, then an independent samples \(t\)-test (rather than a paired samples \(t\)-test) is what we’re after.
Okay, so let’s let \(\mu_1\) denote the true population mean for group 1 (e.g., Anastasia’s students), and \(\mu_2\) will be the true population mean for group 2 (e.g., Bernadette’s students),191 and as usual we’ll let \(\bar{X}_1\) and \(\bar{X}_2\) denote the observed sample means for both of these groups. Our null hypothesis states that the two population means are identical (\(\mu_1 = \mu_2\)) and the alternative to this is that they are not (\(\mu_1 \neq \mu_2\)). Written in mathematical-ese, this is… \[ \begin{array}{ll} H_0: & \mu_1 = \mu_2 \\ H_1: & \mu_1 \neq \mu_2 \end{array} \]
To construct a hypothesis test that handles this scenario, we start by noting that if the null hypothesis is true, then the difference between the population means is exactly zero, \(\mu_1 - \mu_2 = 0\). As a consequence, a diagnostic test statistic will be based on the difference between the two sample means. Because if the null hypothesis is true, then we’d expect \[ \bar{X}_1 - \bar{X}_2 \] to be pretty close to zero. However, just like we saw with our one-sample tests (i.e., the one-sample \(z\)-test and the one-sample \(t\)-test), we have to be precise about exactly how close to zero this difference should be. And the solution to the problem is more or less the same one: we calculate a standard error estimate (SE), just like last time, and then divide the difference between means by this estimate. So our \(t\)-statistic will be of the form \[ t = \frac{\bar{X}_1 - \bar{X}_2}{\mbox{SE}} \] We just need to figure out what this standard error estimate actually is. This is a bit trickier than was the case for either of the two tests we’ve looked at so far, so we need to go through it a lot more carefully to understand how it works.
### 13.3.3 A “pooled estimate” of the standard deviation
In the original “Student \(t\)-test”, we make the assumption that the two groups have the same population standard deviation: that is, regardless of whether the population means are the same, we assume that the population standard deviations are identical, \(\sigma_1 = \sigma_2\). Since we’re assuming that the two standard deviations are the same, we drop the subscripts and refer to both of them as \(\sigma\). How should we estimate this? How should we construct a single estimate of a standard deviation when we have two samples? The answer is, basically, we average them. Well, sort of. Actually, what we do is take a weighted average of the variance estimates, which we use as our pooled estimate of the variance. The weight assigned to each sample is equal to the number of observations in that sample, minus 1. Mathematically, we can write this as \[ \begin{array}{rcl} w_1 &=& N_1 - 1\\ w_2 &=& N_2 - 1 \end{array} \] Now that we’ve assigned weights to each sample, we calculate the pooled estimate of the variance by taking the weighted average of the two variance estimates, \({\hat\sigma_1}^2\) and \({\hat\sigma_2}^2\) \[ \hat\sigma^2_p = \frac{w_1 {\hat\sigma_1}^2 + w_2 {\hat\sigma_2}^2}{w_1 + w_2} \] Finally, we convert the pooled variance estimate to a pooled standard deviation estimate, by taking the square root. This gives us the following formula for \(\hat\sigma_p\), \[ \hat\sigma_p = \sqrt{\frac{w_1 {\hat\sigma_1}^2 + w_2 {\hat\sigma_2}^2}{w_1 + w_2}} \] And if you mentally substitute \(w_1 = N_1 -1\) and \(w_2 = N_2 -1\) into this equation you get a very ugly looking formula; a very ugly formula that actually seems to be the “standard” way of describing the pooled standard deviation estimate. It’s not my favourite way of thinking about pooled standard deviations, however.192
### 13.3.4 The same pooled estimate, described differently
I prefer to think about it like this. Our data set actually corresponds to a set of \(N\) observations, which are sorted into two groups. So let’s use the notation \(X_{ik}\) to refer to the grade received by the \(i\)-th student in the \(k\)-th tutorial group: that is, \(X_{11}\) is the grade received by the first student in Anastasia’s class, \(X_{21}\) is her second student, and so on. And we have two separate group means \(\bar{X}_1\) and \(\bar{X}_2\), which we could “generically” refer to using the notation \(\bar{X}_k\), i.e., the mean grade for the \(k\)-th tutorial group. So far, so good. Now, since every single student falls into one of the two tutorials, we can describe their deviation from the group mean as the difference \[ X_{ik} - \bar{X}_k \] So why not just use these deviations (i.e., the extent to which each student’s grade differs from the mean grade in their tutorial)? Remember, a variance is just the average of a bunch of squared deviations, so let’s do that. Mathematically, we could write it like this: \[ \frac{\sum_{ik} \left( X_{ik} - \bar{X}_k \right)^2}{N} \] where the notation “\(\sum_{ik}\)” is a lazy way of saying “calculate a sum by looking at all students in all tutorials”, since each “\(ik\)” corresponds to one student.193 But, as we saw in Chapter 10, calculating the variance by dividing by \(N\) produces a biased estimate of the population variance. And previously, we needed to divide by \(N-1\) to fix this. However, as I mentioned at the time, the reason why this bias exists is because the variance estimate relies on the sample mean; and to the extent that the sample mean isn’t equal to the population mean, it can systematically bias our estimate of the variance. But this time we’re relying on two sample means! Does this mean that we’ve got more bias? Yes, yes it does. And does this mean we now need to divide by \(N-2\) instead of \(N-1\), in order to calculate our pooled variance estimate? Why, yes… \[ \hat\sigma^2_p = \frac{\sum_{ik} \left( X_{ik} - \bar{X}_k \right)^2}{N -2} \] Oh, and if you take the square root of this then you get \(\hat{\sigma}_p\), the pooled standard deviation estimate. In other words, the pooled standard deviation calculation is nothing special: it’s not terribly different to the regular standard deviation calculation.
### 13.3.5 Completing the test
Regardless of which way you want to think about it, we now have our pooled estimate of the standard deviation. From now on, I’ll drop the silly \(p\) subscript, and just refer to this estimate as \(\hat\sigma\). Great. Let’s now go back to thinking about the bloody hypothesis test, shall we? Our whole reason for calculating this pooled estimate was that we knew it would be helpful when calculating our standard error estimate. But, standard error of what? In the one-sample \(t\)-test, it was the standard error of the sample mean, \(\mbox{SE}({\bar{X}})\), and since \(\mbox{SE}({\bar{X}}) = \sigma / \sqrt{N}\) that’s what the denominator of our \(t\)-statistic looked like. This time around, however, we have two sample means. And what we’re interested in, specifically, is the difference between the two, \(\bar{X}_1 - \bar{X}_2\). As a consequence, the standard error that we need to divide by is in fact the standard error of the difference between means. As long as the two variables really do have the same standard deviation, then our estimate for the standard error is \[ \mbox{SE}({\bar{X}_1 - \bar{X}_2}) = \hat\sigma \sqrt{\frac{1}{N_1} + \frac{1}{N_2}} \] and our \(t\)-statistic is therefore \[ t = \frac{\bar{X}_1 - \bar{X}_2}{\mbox{SE}({\bar{X}_1 - \bar{X}_2})} \] Just as we saw with our one-sample test, the sampling distribution of this \(t\)-statistic is a \(t\)-distribution (shocking, isn’t it?) as long as the null hypothesis is true, and all of the assumptions of the test are met. The degrees of freedom, however, are slightly different. As usual, we can think of the degrees of freedom to be equal to the number of data points minus the number of constraints. In this case, we have \(N\) observations (\(N_1\) in sample 1, and \(N_2\) in sample 2), and 2 constraints (the sample means). So the total degrees of freedom for this test are \(N-2\).
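To see how all the pieces fit together, here’s a minimal sketch that computes the pooled standard deviation and the Student \(t\)-statistic by hand for the Dr Harpo data (assuming the `harpo` data frame loaded earlier is still around; in practice you’d use the functions described in the next section):

```
g1 <- harpo$grade[ harpo$tutor == "Anastasia" ]    # Anastasia's students
g2 <- harpo$grade[ harpo$tutor == "Bernadette" ]   # Bernadette's students
N1 <- length( g1 )
N2 <- length( g2 )

# pooled variance: weighted average of the two sample variances
pooled.var <- ( (N1-1)*var(g1) + (N2-1)*var(g2) ) / ( N1 + N2 - 2 )
pooled.sd <- sqrt( pooled.var )

# standard error of the difference, t-statistic, df and two-sided p-value
se.diff <- pooled.sd * sqrt( 1/N1 + 1/N2 )
t.stat <- ( mean(g1) - mean(g2) ) / se.diff
df <- N1 + N2 - 2
p.value <- 2 * pt( abs(t.stat), df = df, lower.tail = FALSE )
print( c( t = t.stat, df = df, p = p.value ) )
```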
### 13.3.6 Doing the test in R
Not surprisingly, you can run an independent samples \(t\)-test using the `t.test()` function (Section 13.7), but once again I’m going to start with a somewhat simpler function in the `lsr` package. That function is unimaginatively called `independentSamplesTTest()`. First, recall that our data look like this: `head( harpo )`
The outcome variable for our test is the student `grade` , and the groups are defined in terms of the `tutor` for each class. So you probably won’t be too surprised to see that we’re going to describe the test that we want in terms of an R formula that reads like this `grade ~ tutor` . The specific command that we need is:
```
independentSamplesTTest(
formula = grade ~ tutor, # formula specifying outcome and group variables
data = harpo, # data frame that contains the variables
var.equal = TRUE # assume that the two groups have the same variance
)
```
The first two arguments should be familiar to you. The first one is the formula that tells R what variables to use and the second one tells R the name of the data frame that stores those variables. The third argument is not so obvious. By saying `var.equal = TRUE` , what we’re really doing is telling R to use the Student independent samples \(t\)-test. More on this later. For now, let’s ignore that bit and look at the output:
The output has a very familiar form. First, it tells you what test was run, and it tells you the names of the variables that you used. The second part of the output reports the sample means and standard deviations for both groups (i.e., both tutorial groups). The third section of the output states the null hypothesis and the alternative hypothesis in a fairly explicit form. It then reports the test results: just like last time, the test results consist of a \(t\)-statistic, the degrees of freedom, and the \(p\)-value. The final section reports two things: it gives you a confidence interval, and an effect size. I’ll talk about effect sizes later. The confidence interval, however, I should talk about now.
It’s pretty important to be clear on what this confidence interval actually refers to: it is a confidence interval for the difference between the group means. In our example, Anastasia’s students had an average grade of 74.5, and Bernadette’s students had an average grade of 69.1, so the difference between the two sample means is 5.4. But of course the difference between population means might be bigger or smaller than this. The confidence interval reported by the `independentSamplesTTest()` function tells you that there’s a 95% chance that the true difference between means lies between 0.2 and 10.8.
In any case, the difference between the two groups is significant (just barely), so we might write up the result using text like this:
The mean grade in Anastasia’s class was 74.5% (std dev = 9.0), whereas the mean in Bernadette’s class was 69.1% (std dev = 5.8). A Student’s independent samples \(t\)-test showed that this 5.4% difference was significant (\(t(31) = 2.1\), \(p<.05\), \(CI_{95} = [0.2, 10.8]\), \(d = .74\)), suggesting that a genuine difference in learning outcomes has occurred.
Notice that I’ve included the confidence interval and the effect size in the stat block. People don’t always do this. At a bare minimum, you’d expect to see the \(t\)-statistic, the degrees of freedom and the \(p\) value. So you should include something like this at a minimum: \(t(31) = 2.1\), \(p<.05\). If statisticians had their way, everyone would also report the confidence interval and probably the effect size measure too, because they are useful things to know. But real life doesn’t always work the way statisticians want it to: you should make a judgment based on whether you think it will help your readers, and (if you’re writing a scientific paper) the editorial standard for the journal in question. Some journals expect you to report effect sizes, others don’t. Within some scientific communities it is standard practice to report confidence intervals, in others it is not. You’ll need to figure out what your audience expects. But, just for the sake of clarity, if you’re taking my class: my default position is that it’s usually worth including the effect size, but don’t worry about the confidence interval unless the assignment asks you to or implies that you should.
### 13.3.7 Positive and negative \(t\) values
Before moving on to talk about the assumptions of the \(t\)-test, there’s one additional point I want to make about the use of \(t\)-tests in practice. It relates to the sign of the \(t\)-statistic (that is, whether it is a positive number or a negative one). One very common worry that students have when they start running their first \(t\)-test is that they often end up with negative values for the \(t\)-statistic, and don’t know how to interpret it. In fact, it’s not at all uncommon for two people working independently to end up with R outputs that are almost identical, except that one person has a negative \(t\) value and the other one has a positive \(t\) value. Assuming that you’re running a two-sided test, then the \(p\)-values will be identical. On closer inspection, the students will notice that the confidence intervals also have the opposite signs. This is perfectly okay: whenever this happens, what you’ll find is that the two versions of the R output arise from slightly different ways of running the \(t\)-test. What’s happening here is very simple. The \(t\)-statistic that R is calculating here is always of the form \[ t = \frac{\mbox{(mean 1)} -\mbox{(mean 2)}}{ \mbox{(SE)}} \] If “mean 1” is larger than “mean 2” the \(t\) statistic will be positive, whereas if “mean 2” is larger then the \(t\) statistic will be negative. Similarly, the confidence interval that R reports is the confidence interval for the difference “(mean 1) minus (mean 2)”, which will be the reverse of what you’d get if you were calculating the confidence interval for the difference “(mean 2) minus (mean 1)”.
Okay, that’s pretty straightforward when you think about it, but now consider our \(t\)-test comparing Anastasia’s class to Bernadette’s class. Which one should we call “mean 1” and which one should we call “mean 2”? It’s arbitrary. However, you really do need to designate one of them as “mean 1” and the other one as “mean 2”. Not surprisingly, the way that R handles this is also pretty arbitrary. In earlier versions of the book I used to try to explain it, but after a while I gave up, because it’s not really all that important, and to be honest I can never remember myself. Whenever I get a significant \(t\)-test result, and I want to figure out which mean is the larger one, I don’t try to figure it out by looking at the \(t\)-statistic. Why would I bother doing that? It’s foolish. It’s easier to just look at the actual group means, since the R output actually shows them!
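If you want to see this for yourself, here’s a minimal sketch using the base R `t.test()` function (covered properly in Section 13.7): reversing the order of the factor levels flips the sign of the \(t\)-statistic and the confidence interval, and nothing else.

```
# group 1 is the first level of the tutor factor, so this gives a positive t
t.test( grade ~ tutor, data = harpo, var.equal = TRUE )$statistic

# reverse the factor levels and the sign of t flips; the p-value is unchanged
harpo2 <- harpo
harpo2$tutor <- relevel( harpo2$tutor, ref = "Bernadette" )
t.test( grade ~ tutor, data = harpo2, var.equal = TRUE )$statistic
```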
Here’s the important thing. Because it really doesn’t matter what R printed out, I usually try to report the \(t\)-statistic in such a way that the numbers match up with the text. Here’s what I mean… suppose that what I want to write in my report is “Anastasia’s class had higher grades than Bernadette’s class”. The phrasing here implies that Anastasia’s group comes first, so it makes sense to report the \(t\)-statistic as if Anastasia’s class corresponded to group 1. If so, I would write
Anastasia’s class had *higher* grades than Bernadette’s class (\(t(31) = 2.1\), \(p < .05\)).

(I wouldn’t actually emphasise the word “higher” in real life, I’m just doing it to emphasise the point that “higher” corresponds to positive \(t\) values). On the other hand, suppose the phrasing I wanted to use has Bernadette’s class listed first. If so, it makes more sense to treat her class as group 1, and if so, the write up looks like this:

Bernadette’s class had *lower* grades than Anastasia’s class (\(t(31) = -2.1\), \(p < .05\)).
Because I’m talking about one group having “lower” scores this time around, it is more sensible to use the negative form of the \(t\)-statistic. It just makes it read more cleanly.
One last thing: please note that you can’t do this for other types of test statistics. It works for \(t\)-tests, but it wouldn’t be meaningful for chi-square tests, \(F\)-tests or indeed for most of the tests I talk about in this book. So don’t overgeneralise this advice! I’m really just talking about \(t\)-tests here and nothing else!
### 13.3.8 Assumptions of the test
As always, our hypothesis test relies on some assumptions. So what are they? For the Student t-test there are three assumptions, some of which we saw previously in the context of the one sample \(t\)-test (see Section 13.2.3):
* Normality. Like the one-sample \(t\)-test, it is assumed that the data are normally distributed. Specifically, we assume that both groups are normally distributed. In Section 13.9 we’ll discuss how to test for normality, and in Section 13.10 we’ll discuss possible solutions.
* Independence. Once again, it is assumed that the observations are independently sampled. In the context of the Student test this has two aspects to it. Firstly, we assume that the observations within each sample are independent of one another (exactly the same as for the one-sample test). However, we also assume that there are no cross-sample dependencies. If, for instance, it turns out that you included some participants in both experimental conditions of your study (e.g., by accidentally allowing the same person to sign up to different conditions), then there are some cross sample dependencies that you’d need to take into account.
* Homogeneity of variance (also called “homoscedasticity”). The third assumption is that the population standard deviation is the same in both groups. You can test this assumption using the Levene test, which I’ll talk about later on in the book (Section 14.7). However, there’s a very simple remedy for this assumption, which I’ll talk about in the next section.
## 13.4 The independent samples \(t\)-test (Welch test)
The biggest problem with using the Student test in practice is the third assumption listed in the previous section: it assumes that both groups have the same standard deviation. This is rarely true in real life: if two samples don’t have the same means, why should we expect them to have the same standard deviation? There’s really no reason to expect this assumption to be true. We’ll talk a little bit about how you can check this assumption later on because it does crop up in a few different places, not just the \(t\)-test. But right now I’ll talk about a different form of the \(t\)-test (Welch 1947) that does not rely on this assumption. A graphical illustration of what the Welch \(t\) test assumes about the data is shown in Figure 13.10, to provide a contrast with the Student test version in Figure 13.9. I’ll admit it’s a bit odd to talk about the cure before talking about the diagnosis, but as it happens the Welch test is the default \(t\)-test in R, so this is probably the best place to discuss it.
The Welch test is very similar to the Student test. For example, the \(t\)-statistic that we use in the Welch test is calculated in much the same way as it is for the Student test. That is, we take the difference between the sample means, and then divide it by some estimate of the standard error of that difference: \[ t = \frac{\bar{X}_1 - \bar{X}_2}{\mbox{SE}({\bar{X}_1 - \bar{X}_2})} \] The main difference is that the standard error calculations are different. If the two populations have different standard deviations, then it’s a complete nonsense to try to calculate a pooled standard deviation estimate, because you’re averaging apples and oranges.194 But you can still estimate the standard error of the difference between sample means; it just ends up looking different: \[ \mbox{SE}({\bar{X}_1 - \bar{X}_2}) = \sqrt{ \frac{{\hat{\sigma}_1}^2}{N_1} + \frac{{\hat{\sigma}_2}^2}{N_2} } \] The reason why it’s calculated this way is beyond the scope of this book. What matters for our purposes is that the \(t\)-statistic that comes out of the Welch test is actually somewhat different to the one that comes from the Student test.
The second difference between Welch and Student is that the degrees of freedom are calculated in a very different way. In the Welch test, the “degrees of freedom” doesn’t have to be a whole number any more, and it doesn’t correspond all that closely to the “number of data points minus the number of constraints” heuristic that I’ve been using up to this point. The degrees of freedom are, in fact… \[ \mbox{df} = \frac{ ({\hat{\sigma}_1}^2 / N_1 + {\hat{\sigma}_2}^2 / N_2)^2 }{ ({\hat{\sigma}_1}^2 / N_1)^2 / (N_1 -1 ) + ({\hat{\sigma}_2}^2 / N_2)^2 / (N_2 -1 ) } \] … which is all pretty straightforward and obvious, right? Well, perhaps not. It doesn’t really matter for our purposes. What matters is that you’ll see that the “df” value that pops out of a Welch test tends to be a little bit smaller than the one used for the Student test, and it doesn’t have to be a whole number.
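If you’re curious, here’s a minimal sketch that plugs the Dr Harpo data into the Welch formulas by hand (again assuming the `harpo` data frame is loaded; in practice R does all of this for you):

```
g1 <- harpo$grade[ harpo$tutor == "Anastasia" ]
g2 <- harpo$grade[ harpo$tutor == "Bernadette" ]
v1 <- var( g1 ) / length( g1 )    # first group: variance divided by sample size
v2 <- var( g2 ) / length( g2 )    # second group: variance divided by sample size

se.diff <- sqrt( v1 + v2 )                    # Welch standard error of the difference
t.stat <- ( mean(g1) - mean(g2) ) / se.diff   # Welch t-statistic
df <- ( v1 + v2 )^2 / ( v1^2/(length(g1)-1) + v2^2/(length(g2)-1) )   # Welch df
p.value <- 2 * pt( abs(t.stat), df = df, lower.tail = FALSE )
print( c( t = t.stat, df = df, p = p.value ) )
```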
### 13.4.1 Doing the test in R
To run a Welch test in R is pretty easy. All you have to do is not bother telling R to assume equal variances. That is, you take the command we used to run a Student’s \(t\)-test and drop the `var.equal = TRUE` bit. So the command for a Welch test becomes:
```
independentSamplesTTest(
formula = grade ~ tutor, # formula specifying outcome and group variables
data = harpo # data frame that contains the variables
)
```
Not too difficult, right? Not surprisingly, the output has exactly the same format as it did last time too:
The very first line is different, because it’s telling you that it’s run a Welch test rather than a Student test, and of course all the numbers are a bit different. But I hope that the interpretation of this output should be fairly obvious. You read the output in the same way that you would for the Student test. You’ve got your descriptive statistics, the hypotheses, the test results and some other information. So that’s all pretty easy.
Except, except… our result isn’t significant anymore. When we ran the Student test, we did get a significant effect; but the Welch test on the same data set is not (\(t(23.03) = 2.03\), \(p = .054\)). What does this mean? Should we panic? Is the sky burning? Probably not. The fact that one test is significant and the other isn’t doesn’t itself mean very much, especially since I kind of rigged the data so that this would happen. As a general rule, it’s not a good idea to go out of your way to try to interpret or explain the difference between a \(p\)-value of .049 and a \(p\)-value of .051. If this sort of thing happens in real life, the difference in these \(p\)-values is almost certainly due to chance. What does matter is that you take a little bit of care in thinking about what test you use. The Student test and the Welch test have different strengths and weaknesses. If the two populations really do have equal variances, then the Student test is slightly more powerful (lower Type II error rate) than the Welch test. However, if they don’t have the same variances, then the assumptions of the Student test are violated and you may not be able to trust it: you might end up with a higher Type I error rate. So it’s a trade off. However, in real life, I tend to prefer the Welch test; because almost no-one actually believes that the population variances are identical.
### 13.4.2 Assumptions of the test
The assumptions of the Welch test are very similar to those made by the Student \(t\)-test (see Section 13.3.8), except that the Welch test does not assume homogeneity of variance. This leaves only the assumption of normality, and the assumption of independence. The specifics of these assumptions are the same for the Welch test as for the Student test.
## 13.5 The paired-samples \(t\)-test
Regardless of whether we’re talking about the Student test or the Welch test, an independent samples \(t\)-test is intended to be used in a situation where you have two samples that are, well, independent of one another. This situation arises naturally when participants are assigned randomly to one of two experimental conditions, but it provides a very poor approximation to other sorts of research designs. In particular, a repeated measures design – in which each participant is measured (with respect to the same outcome variable) in both experimental conditions – is not suited for analysis using independent samples \(t\)-tests. For example, we might be interested in whether listening to music reduces people’s working memory capacity. To that end, we could measure each person’s working memory capacity in two conditions: with music, and without music. In an experimental design such as this one,195 each participant appears in both groups. This requires us to approach the problem in a different way; by using the paired samples \(t\)-test.
### 13.5.1 The data
The data set that we’ll use this time comes from Dr Chico’s class.196 In her class, students take two major tests, one early in the semester and one later in the semester. To hear her tell it, she runs a very hard class, one that most students find very challenging; but she argues that by setting hard assessments, students are encouraged to work harder. Her theory is that the first test is a bit of a “wake up call” for students: when they realise how hard her class really is, they’ll work harder for the second test and get a better mark. Is she right? To test this, let’s have a look at the `chico.Rdata` file:
```
load( file.path(projecthome, "data/chico.Rdata" ))
str(chico)
```
```
## 'data.frame': 20 obs. of 3 variables:
## $ id : Factor w/ 20 levels "student1","student10",..: 1 12 14 15 16 17 18 19 20 2 ...
## $ grade_test1: num 42.9 51.8 71.7 51.6 63.5 58 59.8 50.8 62.5 61.9 ...
## $ grade_test2: num 44.6 54 72.3 53.4 63.8 59.3 60.8 51.6 64.3 63.2 ...
```
The data frame `chico` contains three variables: an `id` variable that identifies each student in the class, the `grade_test1` variable that records the student grade for the first test, and the `grade_test2` variable that has the grades for the second test. Here’s the first six students: `head( chico )`
At a glance, it does seem like the class is a hard one (most grades are between 50% and 60%), but it does look like there’s an improvement from the first test to the second one. If we take a quick look at the descriptive statistics
```
library( psych )
describe( chico )
```
```
## vars n mean sd median trimmed mad min max range skew
## id* 1 20 10.50 5.92 10.5 10.50 7.41 1.0 20.0 19.0 0.00
## grade_test1 2 20 56.98 6.62 57.7 56.92 7.71 42.9 71.7 28.8 0.05
## grade_test2 3 20 58.38 6.41 59.7 58.35 6.45 44.6 72.3 27.7 -0.05
## kurtosis se
## id* -1.38 1.32
## grade_test1 -0.35 1.48
## grade_test2 -0.39 1.43
```
we see that this impression seems to be supported. Across all 20 students197 the mean grade for the first test is 57%, but this rises to 58% for the second test. Although, given that the standard deviations are 6.6% and 6.4% respectively, it’s starting to feel like maybe the improvement is just illusory; maybe just random variation. This impression is reinforced when you see the means and confidence intervals plotted in Figure 13.11. If we were to rely on this plot alone, we’d come to the same conclusion that we got from looking at the descriptive statistics that the `describe()` function produced. Looking at how wide those confidence intervals are, we’d be tempted to think that the apparent improvement in student performance is pure chance.
Nevertheless, this impression is wrong. To see why, take a look at the scatterplot of the grades for test 1 against the grades for test 2, shown in Figure 13.12.
In this plot, each dot corresponds to the two grades for a given student: if their grade for test 1 (\(x\) co-ordinate) equals their grade for test 2 (\(y\) co-ordinate), then the dot falls on the line. Points falling above the line are the students that performed better on the second test. Critically, almost all of the data points fall above the diagonal line: almost all of the students do seem to have improved their grade, if only by a small amount. This suggests that we should be looking at the improvement made by each student from one test to the next, and treating that as our raw data. To do this, we’ll need to create a new variable for the `improvement` that each student makes, and add it to the `chico` data frame. The easiest way to do this is as follows:
```
chico$improvement <- chico$grade_test2 - chico$grade_test1
```
Notice that I assigned the output to a variable called `chico$improvement` . That has the effect of creating a new variable called `improvement` inside the `chico` data frame. So now when I look at the `chico` data frame, I get an output that looks like this: `head( chico )`
Now that we’ve created and stored this `improvement` variable, we can draw a histogram showing the distribution of these improvement scores (using the `hist()` function), shown in Figure 13.13. When we look at the histogram, it’s very clear that there is a real improvement here. The vast majority of the students scored higher on test 2 than on test 1, reflected in the fact that almost the entire histogram is above zero. In fact, if we use `ciMean()` to compute a confidence interval for the population mean of this new variable,
```
ciMean( x = chico$improvement )
```
```
## 2.5% 97.5%
## [1,] 0.9508686 1.859131
```
we see that it is 95% certain that the true (population-wide) average improvement would lie between 0.95% and 1.86%. So you can see, qualitatively, what’s going on: there is a real “within student” improvement (everyone improves by about 1%), but it is very small when set against the quite large “between student” differences (student grades vary by about 20% or so).
### 13.5.2 What is the paired samples \(t\)-test?
In light of the previous exploration, let’s think about how to construct an appropriate \(t\) test. One possibility would be to try to run an independent samples \(t\)-test using `grade_test1` and `grade_test2` as the variables of interest. However, this is clearly the wrong thing to do: the independent samples \(t\)-test assumes that there is no particular relationship between the two samples. Yet clearly that’s not true in this case, because of the repeated measures structure to the data. To use the language that I introduced in the last section, if we were to try to do an independent samples \(t\)-test, we would be conflating the within subject differences (which is what we’re interested in testing) with the between subject variability (which we are not). The solution to the problem is obvious, I hope, since we already did all the hard work in the previous section. Instead of running an independent samples \(t\)-test on `grade_test1` and `grade_test2` , we run a one-sample \(t\)-test on the within-subject difference variable, `improvement` . To formalise this slightly, if \(X_{i1}\) is the score that the \(i\)-th participant obtained on the first variable, and \(X_{i2}\) is the score that the same person obtained on the second one, then the difference score is: \[
D_{i} = X_{i1} - X_{i2}
\] Notice that the difference score is variable 1 minus variable 2 and not the other way around, so if we want improvement to correspond to a positive valued difference, we actually want “test 2” to be our “variable 1”. Equally, we would say that \(\mu_D = \mu_1 - \mu_2\) is the population mean for this difference variable. So, to convert this to a hypothesis test, our null hypothesis is that this mean difference is zero; the alternative hypothesis is that it is not: \[
\begin{array}{ll}
H_0: & \mu_D = 0 \\
H_1: & \mu_D \neq 0
\end{array}
\] (this is assuming we’re talking about a two-sided test here). This is more or less identical to the way we described the hypotheses for the one-sample \(t\)-test: the only difference is that the specific value that the null hypothesis predicts is 0. And so our \(t\)-statistic is defined in more or less the same way too. If we let \(\bar{D}\) denote the mean of the difference scores, then \[
t = \frac{\bar{D}}{\mbox{SE}({\bar{D}})}
\] which is \[
t = \frac{\bar{D}}{\hat\sigma_D / \sqrt{N}}
\] where \(\hat\sigma_D\) is the standard deviation of the difference scores. Since this is just an ordinary, one-sample \(t\)-test, with nothing special about it, the degrees of freedom are still \(N-1\). And that’s it: the paired samples \(t\)-test really isn’t a new test at all: it’s a one-sample \(t\)-test, but applied to the difference between two variables. It’s actually very simple; the only reason it merits a discussion as long as the one we’ve just gone through is that you need to be able to recognise when a paired samples test is appropriate, and to understand why it’s better than an independent samples \(t\) test.
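If you’d like to see that there really is nothing more to it than this, here’s a quick sketch that computes the \(t\)-statistic by hand from the `improvement` scores we created earlier. It gives the same answer as the formal test that we’ll run in the next section.
```
# paired t-test "by hand": just a one sample t-test on the difference scores
D.bar <- mean( chico$improvement )                      # mean difference
se.D <- sd( chico$improvement ) / sqrt( nrow(chico) )   # standard error of the mean difference
D.bar / se.D                                            # the t-statistic, on N-1 = 19 degrees of freedom
```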
### 13.5.3 Doing the test in R, part 1
How do you do a paired samples \(t\)-test in R? One possibility is to follow the process I outlined above: create a “difference” variable and then run a one sample \(t\)-test on that. Since we’ve already created a variable called `chico$improvement` , let’s do that:
```
oneSampleTTest( chico$improvement, mu=0 )
```
```
##
## One sample t-test
##
## Data variable: chico$improvement
##
## Descriptive statistics:
## improvement
## mean 1.405
## std dev. 0.970
##
## Hypotheses:
## null: population mean equals 0
## alternative: population mean not equal to 0
##
## Test results:
## t-statistic: 6.475
## degrees of freedom: 19
## p-value: <.001
##
## Other information:
## two-sided 95% confidence interval: [0.951, 1.859]
## estimated effect size (Cohen's d): 1.448
```
The output here is (obviously) formatted exactly the same way as it was the last time we used the `oneSampleTTest()` function (Section 13.2), and it confirms our intuition. There’s an average improvement of 1.4% from test 1 to test 2, and this is significantly different from 0 (\(t(19)=6.48, p<.001\)). However, suppose you’re lazy and you don’t want to go to all the effort of creating a new variable. Or perhaps you just want to keep the difference between one-sample and paired-samples tests clear in your head. If so, you can use the `pairedSamplesTTest()` function, also in the `lsr` package. Let’s assume that your data are organised like they are in the `chico` data frame, where there are two separate variables, one for each measurement. The way to run the test is to input a one-sided formula, just like you did when running a test of association using the `associationTest()` function in Chapter 12. For the `chico` data frame, the formula that you need would be
```
~ grade_test2 + grade_test1
```
. As usual, you’ll also need to input the name of the data frame too. So the command just looks like this:
```
pairedSamplesTTest(
formula = ~ grade_test2 + grade_test1, # one-sided formula listing the two variables
data = chico # data frame containing the two variables
)
```
```
##
## Paired samples t-test
##
## Variables: grade_test2 , grade_test1
##
## Descriptive statistics:
## grade_test2 grade_test1 difference
## mean 58.385 56.980 1.405
## std dev. 6.406 6.616 0.970
##
## Hypotheses:
## null: population means equal for both measurements
## alternative: different population means for each measurement
##
## Test results:
## t-statistic: 6.475
## degrees of freedom: 19
## p-value: <.001
##
## Other information:
## two-sided 95% confidence interval: [0.951, 1.859]
## estimated effect size (Cohen's d): 1.448
```
The numbers are identical to those that come from the one sample test, which of course they have to be, given that the paired samples \(t\)-test is just a one sample test under the hood. However, the output is a bit more detailed:
This time around the descriptive statistics block shows you the means and standard deviations for the original variables, as well as for the difference variable (notice that it always defines the difference as the first listed variable minus the second listed one). The null hypothesis and the alternative hypothesis are now framed in terms of the original variables rather than the difference score, but you should keep in mind that in a paired samples test it’s still the difference score being tested. The statistical information at the bottom about the test result is of course the same as before.
### 13.5.4 Doing the test in R, part 2
The paired samples \(t\)-test is a little different from the other \(t\)-tests, because it is used in repeated measures designs. For the `chico` data, every student is “measured” twice, once for the first test, and again for the second test. Back in Section 7.7 I talked about the fact that repeated measures data can be expressed in two standard ways, known as wide form and long form. The `chico` data frame is in wide form: every row corresponds to a unique person. I’ve shown you the data in that form first because that’s the form that you’re most used to seeing, and it’s also the format that you’re most likely to receive data in. However, the majority of tools in R for dealing with repeated measures data expect to receive data in long form. The paired samples \(t\)-test is a bit of an exception that way. As you make the transition from a novice user to an advanced one, you’re going to have to get comfortable with long form data, and switching between the two forms. To that end, I want to show you how to apply the `pairedSamplesTTest()` function to long form data. First, let’s use the `wideToLong()` function to create a long form version of the `chico` data frame. If you’ve forgotten how the `wideToLong()` function works, it might be worth your while quickly re-reading Section 7.7. Assuming that you’ve done so, or that you’re already comfortable with data reshaping, I’ll use it to create a new data frame called `chico2` :
```
chico2 <- wideToLong( chico, within="time" )
head( chico2 )
```
As you can see, this has created a new data frame containing three variables: an `id` variable indicating which person provided the data, a `time` variable indicating which test the data refers to (i.e., test 1 or test 2), and a `grade` variable that records what score the person got on that test. Notice that this data frame is in long form: every row corresponds to a unique measurement. Because every person provides two observations (test 1 and test 2), there are two rows for every person. To see this a little more clearly, I’ll use the `sortFrame()` function to sort the rows of `chico2` by `id` variable (see Section 7.6.3).
```
chico2 <- sortFrame( chico2, id )
head( chico2 )
```
```
## id improvement time grade
## 1 student1 1.7 test1 42.9
## 21 student1 1.7 test2 44.6
## 10 student10 1.3 test1 61.9
## 30 student10 1.3 test2 63.2
## 11 student11 1.4 test1 50.4
## 31 student11 1.4 test2 51.8
```
As you can see, there are two rows for “student1”: one showing their grade on the first test, the other showing their grade on the second test.198
Okay, suppose that we were given the `chico2` data frame to analyse. How would we run our paired samples \(t\)-test now? One possibility would be to use the `longToWide()` function (Section 7.7) to force the data back into wide form, and do the same thing that we did previously. But that’s sort of defeating the point, and besides, there’s an easier way. Let’s think about how the `chico2` data frame is structured: there are three variables here, and they all matter. The outcome measure is stored as the `grade` , and we effectively have two “groups” of measurements (test 1 and test 2) that are defined by the `time` points at which a test is given. Finally, because we want to keep track of which measurements should be paired together, we need to know which student obtained each grade, which is what the `id` variable gives us. So, when your data are presented to you in long form, we would want to specify a two-sided formula and a data frame, in the same way that we do for an independent samples \(t\)-test: the formula specifies the outcome variable and the groups, so in this case it would be `grade ~ time` , and the data frame is `chico2` . However, we also need to tell it the id variable, which in this case is boringly called `id` . So our command is:
```
pairedSamplesTTest(
formula = grade ~ time, # two sided formula: outcome ~ group
data = chico2, # data frame
id = "id" # name of the id variable
)
```
```
##
## Paired samples t-test
##
## Outcome variable: grade
## Grouping variable: time
## ID variable: id
##
## Descriptive statistics:
## test1 test2 difference
## mean 56.980 58.385 -1.405
## std dev. 6.616 6.406 0.970
##
## Hypotheses:
## null: population means equal for both measurements
## alternative: different population means for each measurement
##
## Test results:
## t-statistic: -6.475
## degrees of freedom: 19
## p-value: <.001
##
## Other information:
## two-sided 95% confidence interval: [-1.859, -0.951]
## estimated effect size (Cohen's d): 1.448
```
Note that the name of the id variable is `"id"` and not `id` . Note also that the `id` variable must be a factor. As of the current writing, you do need to include the quote marks, because the `pairedSamplesTTest()` function is expecting a character string that specifies the name of a variable. If I ever find the time I’ll try to relax this constraint. As you can see, it’s a bit more detailed than the output from `oneSampleTTest()` . It gives you the descriptive statistics for the original variables, states the null hypothesis in a fashion that is a bit more appropriate for a repeated measures design, and then reports all the nuts and bolts from the hypothesis test itself. Not surprisingly, the numbers are the same as the ones that we saw last time. One final comment about the `pairedSamplesTTest()` function. One of the reasons I designed it to be able to handle long form and wide form data is that I want you to get comfortable thinking about repeated measures data in both formats, and also to become familiar with the different ways in which R functions tend to specify models and tests for repeated measures data. With that last point in mind, I want to highlight a slightly different way of thinking about what the paired samples \(t\)-test is doing. There’s a sense in which what you’re really trying to do is look at how the outcome variable ( `grade` ) is related to the grouping variable ( `time` ), after taking account of the fact that there are individual differences between people ( `id` ). So there’s a sense in which `id` is actually a second predictor: you’re trying to predict the `grade` on the basis of the `time` and the `id` . With that in mind, the `pairedSamplesTTest()` function lets you specify a formula like this one: `grade ~ time + (id)` . This formula tells R everything it needs to know: the variable on the left ( `grade` ) is the outcome variable, the bracketed term on the right ( `id` ) is the id variable, and the other term on the right is the grouping variable ( `time` ). If you specify your formula that way, then you only need to specify the `formula` and the `data` frame, and so you can get away with using a command as simple as this one:
```
pairedSamplesTTest(
formula = grade ~ time + (id),
data = chico2
)
```
or you can drop the argument names and just do this:
```
> pairedSamplesTTest( grade ~ time + (id), chico2 )
```
These commands will produce the same output as the last one, and I personally find this format a lot more elegant. That being said, the main reason for allowing you to write your formulas that way is that they’re quite similar to the way that mixed models (fancy pants repeated measures analyses) are specified in the `lme4` package. This book doesn’t talk about mixed models (yet!), but if you go on to learn more statistics you’ll find them pretty hard to avoid, so I’ve tried to lay a little bit of the groundwork here.
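Purely for comparison, and not something we need anywhere in this book, the analogous mixed model in the `lme4` package is specified with a very similar formula: the `(1|id)` term plays the role of the bracketed `(id)` term above, and gives each student their own intercept. Treat this as a sketch rather than a recommendation, and note that `lme4` needs to be installed separately.
```
library( lme4 )
lmer( grade ~ time + (1|id), data = chico2 )   # mixed model analogue of the paired samples test
```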
## 13.6 One sided tests
When introducing the theory of null hypothesis tests, I mentioned that there are some situations when it’s appropriate to specify a one-sided test (see Section 11.4.3). So far, all of the \(t\)-tests have been two-sided tests. For instance, when we specified a one sample \(t\)-test for the grades in Dr Zeppo’s class, the null hypothesis was that the true mean was 67.5%. The alternative hypothesis was that the true mean was greater than or less than 67.5%. Suppose we were only interested in finding out if the true mean is greater than 67.5%, and have no interest whatsoever in testing to find out if the true mean is lower than 67.5%. If so, our null hypothesis would be that the true mean is 67.5% or less, and the alternative hypothesis would be that the true mean is greater than 67.5%. The `oneSampleTTest()` function lets you do this, by specifying the `one.sided` argument. If you set `one.sided="greater"` , it means that you’re testing to see if the true mean is larger than `mu` . If you set `one.sided="less"` , then you’re testing to see if the true mean is smaller than `mu` . Here’s how it would work for Dr Zeppo’s class:
```
oneSampleTTest( x=grades, mu=67.5, one.sided="greater" )
```
```
##
## One sample t-test
##
## Data variable: grades
##
## Descriptive statistics:
## grades
## mean 72.300
## std dev. 9.521
##
## Hypotheses:
## null: population mean less than or equal to 67.5
## alternative: population mean greater than 67.5
##
## Test results:
## t-statistic: 2.255
## degrees of freedom: 19
## p-value: 0.018
##
## Other information:
## one-sided 95% confidence interval: [68.619, Inf]
## estimated effect size (Cohen's d): 0.504
```
Notice that there are a few changes from the output that we saw last time. Most important is the fact that the null and alternative hypotheses have changed, to reflect the different test. The second thing to note is that, although the \(t\)-statistic and degrees of freedom have not changed, the \(p\)-value has. This is because the one-sided test has a different rejection region from the two-sided test. If you’ve forgotten why this is and what it means, you may find it helpful to read back over Chapter 11, and Section 11.4.3 in particular. The third thing to note is that the confidence interval is different too: it now reports a “one-sided” confidence interval rather than a two-sided one. In a two-sided confidence interval, we’re trying to find numbers \(a\) and \(b\) such that we’re 95% confident that the true mean lies between \(a\) and \(b\). In a one-sided confidence interval, we’re trying to find a single number \(a\) such that we’re 95% confident that the true mean is greater than \(a\) (or less than \(a\) if you set `one.sided="less"` ). So that’s how to do a one-sided one sample \(t\)-test. However, all versions of the \(t\)-test can be one-sided. For an independent samples \(t\) test, you could have a one-sided test if you’re only interested in testing to see if group A has higher scores than group B, but have no interest in finding out if group B has higher scores than group A. Let’s suppose that, for Dr Harpo’s class, you wanted to see if Anastasia’s students had higher grades than Bernadette’s. The `independentSamplesTTest()` function lets you do this, again by specifying the `one.sided` argument. However, this time around you need to specify the name of the group that you’re expecting to have the higher score. In our case, we’d write
```
one.sided = "Anastasia"
```
. So the command would be:
```
independentSamplesTTest(
formula = grade ~ tutor,
data = harpo,
one.sided = "Anastasia"
)
```
```
##
## Welch's independent samples t-test
##
## Outcome variable: grade
## Grouping variable: tutor
##
## Descriptive statistics:
## <NAME>
## mean 74.533 69.056
## std dev. 8.999 5.775
##
## Hypotheses:
## null: population means are equal, or smaller for group 'Anastasia'
## alternative: population mean is larger for group 'Anastasia'
##
## Test results:
## t-statistic: 2.034
## degrees of freedom: 23.025
## p-value: 0.027
##
## Other information:
## one-sided 95% confidence interval: [0.863, Inf]
## estimated effect size (Cohen's d): 0.724
```
Again, the output changes in a predictable way. The definition of the null and alternative hypotheses has changed, the \(p\)-value has changed, and it now reports a one-sided confidence interval rather than a two-sided one.
What about the paired samples \(t\)-test? Suppose we wanted to test the hypothesis that grades go up from test 1 to test 2 in Dr Chico’s class, and are not prepared to consider the idea that the grades go down. Again, we can use the `one.sided` argument to specify the one-sided test, and it works the same way it does for the independent samples \(t\)-test. You need to specify the name of the group whose scores are expected to be larger under the alternative hypothesis. If your data are in wide form, as they are in the `chico` data frame, you’d use this command:
```
pairedSamplesTTest(
formula = ~ grade_test2 + grade_test1,
data = chico,
one.sided = "grade_test2"
)
```
```
##
## Paired samples t-test
##
## Variables: grade_test2 , grade_test1
##
## Descriptive statistics:
## grade_test2 grade_test1 difference
## mean 58.385 56.980 1.405
## std dev. 6.406 6.616 0.970
##
## Hypotheses:
## null: population means are equal, or smaller for measurement 'grade_test2'
## alternative: population mean is larger for measurement 'grade_test2'
##
## Test results:
## t-statistic: 6.475
## degrees of freedom: 19
## p-value: <.001
##
## Other information:
## one-sided 95% confidence interval: [1.03, Inf]
## estimated effect size (Cohen's d): 1.448
```
Yet again, the output changes in a predictable way. The hypotheses have changed, the \(p\)-value has changed, and the confidence interval is now one-sided. If your data are in long form, as they are in the `chico2` data frame, it still works the same way. Either of the following commands would work,
```
> pairedSamplesTTest(
formula = grade ~ time,
data = chico2,
id = "id",
one.sided = "test2"
)
> pairedSamplesTTest(
formula = grade ~ time + (id),
data = chico2,
one.sided = "test2"
)
```
and would produce the same answer as the output shown above.
## 13.7 Using the t.test() function
In this chapter, we’ve talked about three different kinds of \(t\)-test: the one sample test, the independent samples test (Student’s and Welch’s), and the paired samples test. In order to run these different tests, I’ve shown you three different functions: `oneSampleTTest()` ,
`independentSamplesTTest()` , and `pairedSamplesTTest()` . I wrote these as three different functions for two reasons. Firstly, I thought it made sense to have separate functions for each test, in order to help make it clear to beginners that there are different tests. Secondly, I wanted to show you some functions that produced “verbose” output, to help you see what hypotheses are being tested and so on. However, once you’ve started to become familiar with \(t\)-tests and with using R, you might find it easier to use the `t.test()` function. It’s one function, but it can run all four of the different \(t\)-tests that we’ve talked about. Here’s how it works. Firstly, suppose you want to run a one sample \(t\)-test. To run the test on the `grades` data from Dr Zeppo’s class (Section 13.2), we’d use a command like this:
```
t.test( x = grades, mu = 67.5 )
```
```
##
## One Sample t-test
##
## data: grades
## t = 2.2547, df = 19, p-value = 0.03615
## alternative hypothesis: true mean is not equal to 67.5
## 95 percent confidence interval:
## 67.84422 76.75578
## sample estimates:
## mean of x
## 72.3
```
The input is the same as for the `oneSampleTTest()` : we specify the sample data using the argument `x` , and the value against which it is to be tested using the argument `mu` . The output is a lot more compressed.
As you can see, it still has all the information you need. It tells you what type of test it ran and the data it tested it on. It gives you the \(t\)-statistic, the degrees of freedom and the \(p\)-value. And so on. There’s nothing wrong with this output, but in my experience it can be a little confusing when you’re just starting to learn statistics, because it’s a little disorganised. Once you know what you’re looking at though, it’s pretty easy to read off the relevant information.
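One small bonus of `t.test()` is that it returns its results as an ordinary R list, so if you want to you can store the result and pull individual numbers out of it yourself. This is purely an illustration, not something you’re required to do:
```
zeppo.ttest <- t.test( x = grades, mu = 67.5 )
zeppo.ttest$statistic   # the t-statistic
zeppo.ttest$p.value     # the p-value
zeppo.ttest$conf.int    # the 95% confidence interval
```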
What about independent samples \(t\)-tests? As it happens, the `t.test()` function can be used in much the same way as the
`independentSamplesTTest()` function, by specifying a formula, a data frame, and using `var.equal` to indicate whether you want a Student test or a Welch test. If you want to run the Welch test from Section 13.4, then you’d use this command:
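```
t.test( formula = grade ~ tutor,  # outcome ~ group (the Welch test is the default, so no var.equal argument is needed)
        data = harpo              # data frame containing the variables
)
```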
```
##
## Welch Two Sample t-test
##
## data: grade by tutor
## t = 2.0342, df = 23.025, p-value = 0.05361
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.09249349 11.04804904
## sample estimates:
## mean in group Anastasia mean in group Bernadette
## 74.53333 69.05556
```
If you want to do the Student test, it’s exactly the same except that you need to add an additional argument indicating that `var.equal = TRUE` . This is no different to how it worked in the `independentSamplesTTest()`
function. Finally, we come to the paired samples \(t\)-test. Somewhat surprisingly, given that most R functions for dealing with repeated measures data require data to be in long form, the `t.test()` function isn’t really set up to handle data in long form. Instead it expects to be given two separate variables, `x` and `y` , and you need to specify `paired=TRUE` . And on top of that, you’d better make sure that the first element of `x` and the first element of `y` actually correspond to the same person! Because it doesn’t ask for an “id” variable. I don’t know why. So, in order to run the paired samples \(t\) test on the data from Dr Chico’s class, we’d use this command:
```
t.test( x = chico$grade_test2, # variable 1 is the "test2" scores
y = chico$grade_test1, # variable 2 is the "test1" scores
paired = TRUE # paired test
)
```
```
##
## Paired t-test
##
## data: chico$grade_test2 and chico$grade_test1
## t = 6.4754, df = 19, p-value = 3.321e-06
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.9508686 1.8591314
## sample estimates:
## mean of the differences
## 1.405
```
Yet again, these are the same numbers that we saw in Section 13.5. Feel free to check.
## 13.8 Effect size
The most commonly used measure of effect size for a \(t\)-test is Cohen’s \(d\) (Cohen 1988). It’s a very simple measure in principle, with quite a few wrinkles when you start digging into the details. Cohen himself defined it primarily in the context of an independent samples \(t\)-test, specifically the Student test. In that context, a natural way of defining the effect size is to divide the difference between the means by an estimate of the standard deviation. In other words, we’re looking to calculate something along the lines of this: \[ d = \frac{\mbox{(mean 1)} - \mbox{(mean 2)}}{\mbox{std dev}} \] and he suggested a rough guide for interpreting \(d\), shown in the table below. You’d think that this would be pretty unambiguous, but it’s not; largely because Cohen wasn’t too specific on what he thought should be used as the measure of the standard deviation (in his defence, he was trying to make a broader point in his book, not nitpick about tiny details). As discussed by McGrath and Meyer (2006), there are several different versions in common usage, and each author tends to adopt slightly different notation. For the sake of simplicity (as opposed to accuracy) I’ll use \(d\) to refer to any statistic that you calculate from the sample, and use \(\delta\) to refer to a theoretical population effect. Obviously, that does mean that there are several different things all called \(d\). The `cohensD()` function in the `lsr` package uses the `method` argument to distinguish between them, so that’s what I’ll do in the text. My suspicion is that the only time that you would want Cohen’s \(d\) is when you’re running a \(t\)-test, and if you’re using the `oneSampleTTest()` , `independentSamplesTTest()` and `pairedSamplesTTest()` functions to run your \(t\)-tests, then you don’t need to learn any new commands, because they automatically produce an estimate of Cohen’s \(d\) as part of the output. However, if you’re using `t.test()` then you’ll need to use the `cohensD()` function (also in the `lsr` package) to do the calculations.
| \(d\)-value | rough interpretation |
| --- | --- |
| about 0.2 | small effect |
| about 0.5 | moderate effect |
| about 0.8 | large effect |
### 13.8.1 Cohen’s \(d\) from one sample
The simplest situation to consider is the one corresponding to a one-sample \(t\)-test. In this case, we have one sample mean \(\bar{X}\) and one (hypothesised) population mean \(\mu_0\) to compare it to. Not only that, there’s really only one sensible way to estimate the population standard deviation: we just use our usual estimate \(\hat{\sigma}\). Therefore, we end up with the following as the only way to calculate \(d\), \[ d = \frac{\bar{X} - \mu_0}{\hat{\sigma}} \] When writing the `cohensD()` function, I’ve made some attempt to make it work in a similar way to `t.test()` . As a consequence, `cohensD()` can calculate your effect size regardless of which type of \(t\)-test you performed. If what you want is a measure of Cohen’s \(d\) to accompany a one-sample \(t\)-test, there are only two arguments that you need to care about. These are:
* `x` . A numeric vector containing the sample data.
* `mu` . The mean against which the mean of `x` is compared (default value is `mu = 0` ).

We don’t need to specify what `method` to use, because there’s only one version of \(d\) that makes sense in this context. So, in order to compute an effect size for the data from Dr Zeppo’s class (Section 13.2), we’d type something like this:
```
cohensD( x = grades, # data are stored in the grades vector
mu = 67.5 # compare students to a mean of 67.5
)
```
`## [1] 0.5041691` and, just so that you can see that there’s nothing fancy going on, the command below shows you how to calculate it if there weren’t a fancypants `cohensD()` function available:
```
( mean(grades) - 67.5 ) / sd(grades)
```
`## [1] 0.5041691`
Yep, same number. Overall, then, the psychology students in Dr Zeppo’s class are achieving grades (mean = 72.3%) that are about .5 standard deviations higher than the level that you’d expect (67.5%) if they were performing at the same level as other students. Judged against Cohen’s rough guide, this is a moderate effect size.
### 13.8.2 Cohen’s \(d\) from a Student \(t\) test
The majority of discussions of Cohen’s \(d\) focus on a situation that is analogous to Student’s independent samples \(t\) test, and it’s in this context that the story becomes messier, since there are several different versions of \(d\) that you might want to use in this situation, and you can use the `method` argument to the `cohensD()` function to pick the one you want. To understand why there are multiple versions of \(d\), it helps to take the time to write down a formula that corresponds to the true population effect size \(\delta\). It’s pretty straightforward, \[
\delta = \frac{\mu_1 - \mu_2}{\sigma}
\] where, as usual, \(\mu_1\) and \(\mu_2\) are the population means corresponding to group 1 and group 2 respectively, and \(\sigma\) is the standard deviation (the same for both populations). The obvious way to estimate \(\delta\) is to do exactly the same thing that we did in the \(t\)-test itself: use the sample means as the top line, and a pooled standard deviation estimate for the bottom line: \[
d = \frac{\bar{X}_1 - \bar{X}_2}{\hat{\sigma}_p}
where \(\hat\sigma_p\) is the exact same pooled standard deviation measure that appears in the \(t\)-test. This is the most commonly used version of Cohen’s \(d\) when applied to the outcome of a Student \(t\)-test, and is sometimes referred to as Hedges’ \(g\) statistic (Hedges 1981). It corresponds to `method = "pooled"` in the `cohensD()` function, and it’s the default. However, there are other possibilities, which I’ll briefly describe. Firstly, you may have reason to want to use only one of the two groups as the basis for calculating the standard deviation. This approach (often called Glass’ \(\Delta\)) makes most sense when you have good reason to treat one of the two groups as a purer reflection of “natural variation” than the other. This can happen if, for instance, one of the two groups is a control group. If that’s what you want, then use `method = "x.sd"` or `method = "y.sd"` when using `cohensD()` . Secondly, recall that in the usual calculation of the pooled standard deviation we divide by \(N-2\) to correct for the bias in the sample variance; in one version of Cohen’s \(d\) this correction is omitted. Instead, we divide by \(N\). This version ( `method = "raw"` ) makes sense primarily when you’re trying to calculate the effect size in the sample; rather than estimating an effect size in the population. Finally, there is a version based on Hedges and Olkin (1985), who point out that there is a small bias in the usual (pooled) estimate of Cohen’s \(d\). Thus they introduce a small correction ( `method = "corrected"` ), by multiplying the usual value of \(d\) by \((N-3)/(N-2.25)\). In any case, ignoring all those variations that you could make use of if you wanted, let’s have a look at how to calculate the default version. In particular, suppose we look at the data from Dr Harpo’s class (the `harpo` data frame). The command that we want to use is very similar to the relevant `t.test()` command, but also specifies a `method` argument:
```
cohensD( formula = grade ~ tutor, # outcome ~ group
data = harpo, # data frame
method = "pooled" # which version to calculate?
)
```
`## [1] 0.7395614` This is the version of Cohen’s \(d\) that gets reported by the `independentSamplesTTest()`
function whenever it runs a Student \(t\)-test.
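If you want to reassure yourself that there’s no magic involved, here’s a sketch of the same calculation done by hand; none of this is required, it just spells out the pooled formula above, and it gives the same answer as `cohensD()` did.
```
m <- tapply( harpo$grade, harpo$tutor, mean )   # group means
v <- tapply( harpo$grade, harpo$tutor, var )    # group variances
n <- table( harpo$tutor )                       # group sizes
sd.pooled <- sqrt( sum( (n-1) * v ) / ( sum(n) - 2 ) )            # pooled standard deviation
as.numeric( ( m["Anastasia"] - m["Bernadette"] ) / sd.pooled )    # same value as cohensD() reported
```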
### 13.8.3 Cohen’s \(d\) from a Welch test
Suppose the situation you’re in is more like the Welch test: you still have two independent samples, but you no longer believe that the corresponding populations have equal variances. When this happens, we have to redefine what we mean by the population effect size. I’ll refer to this new measure as \(\delta^\prime\), so as to keep it distinct from the measure \(\delta\) which we defined previously. What Cohen (1988) suggests is that we could define our new population effect size by averaging the two population variances. What this means is that we get: \[ \delta^\prime = \frac{\mu_1 - \mu_2}{\sigma^\prime} \] where \[ \sigma^\prime = \sqrt{\displaystyle{\frac{ {\sigma_1}^2 + {\sigma_2}^2}{2}}} \] This seems quite reasonable, but notice that none of the measures that we’ve discussed so far are attempting to estimate this new quantity. It might just be my own ignorance of the topic, but I’m only aware of one version of Cohen’s \(d\) that actually estimates the unequal-variance effect size \(\delta^\prime\) rather than the equal-variance effect size \(\delta\). All we do to calculate \(d\) for this version ( `method = "unequal"` ) is substitute the sample means \(\bar{X}_1\) and \(\bar{X}_2\) and the corrected sample standard deviations \(\hat{\sigma}_1\) and \(\hat{\sigma}_2\) into the equation for \(\delta^\prime\). This gives us the following equation for \(d\), \[
d = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\displaystyle{\frac{ {\hat\sigma_1}^2 + {\hat\sigma_2}^2}{2}}}}
\] as our estimate of the effect size. There’s nothing particularly difficult about calculating this version in R, since all we have to do is change the `method` argument:
```
cohensD( formula = grade ~ tutor,
data = harpo,
method = "unequal"
)
```
`## [1] 0.7244995` This is the version of Cohen’s \(d\) that gets reported by the `independentSamplesTTest()`
function whenever it runs a Welch \(t\)-test.
### 13.8.4 Cohen’s \(d\) from a paired-samples test
Finally, what should we do for a paired samples \(t\)-test? In this case, the answer depends on what it is you’re trying to do. If you want to measure your effect sizes relative to the distribution of difference scores, the measure of \(d\) that you calculate is just ( `method = "paired"` ) \[
d = \frac{\bar{D}}{\hat{\sigma}_D}
\] where \(\hat{\sigma}_D\) is the estimate of the standard deviation of the differences. The calculation here is pretty straightforward
```
cohensD( x = chico$grade_test2,
y = chico$grade_test1,
method = "paired"
)
```
`## [1] 1.447952` This is the version of Cohen’s \(d\) that gets reported by the `pairedSamplesTTest()` function. The only wrinkle is figuring out whether this is the measure you want or not. To the extent that you care about the practical consequences of your research, you often want to measure the effect size relative to the original variables, not the difference scores (e.g., the 1% improvement in Dr Chico’s class is pretty small when measured against the amount of between-student variation in grades), in which case you use the same versions of Cohen’s \(d\) that you would use for a Student or Welch test. For instance, when we do that for Dr Chico’s class,
```
cohensD( x = chico$grade_test2,
y = chico$grade_test1,
method = "pooled"
)
```
`## [1] 0.2157646`
what we see is that the overall effect size is quite small, when assessed on the scale of the original variables.
## 13.9 Checking the normality of a sample
All of the tests that we have discussed so far in this chapter have assumed that the data are normally distributed. This assumption is often quite reasonable, because the central limit theorem (Section 10.3.3) does tend to ensure that many real world quantities are normally distributed: any time that you suspect that your variable is actually an average of lots of different things, there’s a pretty good chance that it will be normally distributed; or at least close enough to normal that you can get away with using \(t\)-tests. However, life doesn’t come with guarantees; and besides, there are lots of ways in which you can end up with variables that are highly non-normal. For example, any time you think that your variable is actually the minimum of lots of different things, there’s a very good chance it will end up quite skewed. In psychology, response time (RT) data is a good example of this. If you suppose that there are lots of things that could trigger a response from a human participant, then the actual response will occur the first time one of these trigger events occurs.199 This means that RT data are systematically non-normal. Okay, so if normality is assumed by all the tests, and is mostly but not always satisfied (at least approximately) by real world data, how can we check the normality of a sample? In this section I discuss two methods: QQ plots, and the Shapiro-Wilk test.
### 13.9.1 QQ plots
One way to check whether a sample violates the normality assumption is to draw a “quantile-quantile” plot (QQ plot). This allows you to visually check whether you’re seeing any systematic violations. In a QQ plot, each observation is plotted as a single dot. The x co-ordinate is the theoretical quantile that the observation should fall in, if the data were normally distributed (with mean and variance estimated from the sample), and the y co-ordinate is the actual quantile of the data within the sample. If the data are normal, the dots should form a straight line. For instance, let’s see what happens if we generate data by sampling from a normal distribution, and then drawing a QQ plot using the R function `qqnorm()` . The `qqnorm()` function has a few arguments, but the only one we really need to care about here is `y` , a vector specifying the data whose normality we’re interested in checking. Here are the R commands:
```
normal.data <- rnorm( n = 100 ) # generate N = 100 normally distributed numbers
hist( x = normal.data ) # draw a histogram of these numbers
```
```
qqnorm( y = normal.data ) # draw the QQ plot
```
```
## Normally Distributed Data
## skew= -0.02936155
## kurtosis= -0.06035938
##
## Shapiro-Wilk normality test
##
## data: data
## W = 0.99108, p-value = 0.7515
```
The Shapiro-Wilk statistic associated with the data in Figures 13.14 and 13.15 is \(W = .99\), indicating that no significant departures from normality were detected (\(p = .75\)). As you can see, these data form a pretty straight line, which is no surprise given that we sampled them from a normal distribution! In contrast, have a look at the two data sets shown in Figures 13.16, 13.17, 13.18, 13.19. Figures 13.16 and 13.17 show the histogram and a QQ plot for a data set that is highly skewed: the QQ plot curves upwards. Figures 13.18 and 13.19 show the same plots for a heavy tailed (i.e., high kurtosis) data set: in this case, the QQ plot flattens in the middle and curves sharply at either end.
```
## Skewed Data
## skew= 1.889475
## kurtosis= 4.4396
##
## Shapiro-Wilk normality test
##
## data: data
## W = 0.81758, p-value = 8.908e-10
```
The skewness of the data in Figures 13.16 and 13.17 is 1.89, and is reflected in a QQ plot that curves upwards. As a consequence, the Shapiro-Wilk statistic is \(W=.82\), reflecting a significant departure from normality (\(p<.001\)).
```
## Heavy-Tailed Data
## skew= -0.05308273
## kurtosis= 7.508765
##
## Shapiro-Wilk normality test
##
## data: data
## W = 0.83892, p-value = 4.718e-09
```
Figures 13.18 and 13.19 show the same plots for a heavy tailed data set, again consisting of 100 observations. In this case, the heavy tails in the data produce a high kurtosis (7.51), and cause the QQ plot to flatten in the middle, and curve away sharply on either side. The resulting Shapiro-Wilk statistic is \(W = .84\), again reflecting significant non-normality (\(p < .001\)).
### 13.9.2 Shapiro-Wilk tests
Although QQ plots provide a nice way to informally check the normality of your data, sometimes you’ll want to do something a bit more formal. And when that moment comes, the Shapiro-Wilk test (Shapiro and Wilk 1965) is probably what you’re looking for.200 As you’d expect, the null hypothesis being tested is that a set of \(N\) observations is normally distributed. The test statistic that it calculates is conventionally denoted as \(W\), and it’s calculated as follows. First, we sort the observations in order of increasing size, and let \(X_1\) be the smallest value in the sample, \(X_2\) be the second smallest and so on. Then the value of \(W\) is given by \[ W = \frac{ \left( \sum_{i = 1}^N a_i X_i \right)^2 }{ \sum_{i = 1}^N (X_i - \bar{X})^2} \] where \(\bar{X}\) is the mean of the observations, and the \(a_i\) values are … mumble, mumble … something complicated that is a bit beyond the scope of an introductory text.
Because it’s a little hard to explain the maths behind the \(W\) statistic, a better idea is to give a broad brush description of how it behaves. Unlike most of the test statistics that we’ll encounter in this book, it’s actually small values of \(W\) that indicate departure from normality. The \(W\) statistic has a maximum value of 1, which arises when the data look “perfectly normal”. The smaller the value of \(W\), the less normal the data are. However, the sampling distribution for \(W\) – which is not one of the standard ones that I discussed in Chapter 9 and is in fact a complete pain in the arse to work with – does depend on the sample size \(N\). To give you a feel for what these sampling distributions look like, I’ve plotted three of them in Figure 13.20. Notice that, as the sample size starts to get large, the sampling distribution becomes very tightly clumped up near \(W=1\), and as a consequence, for larger samples \(W\) doesn’t have to be very much smaller than 1 in order for the test to be significant.
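If you’re curious, you can get a rough sense of what one of these sampling distributions looks like by simulation. The little sketch below repeatedly draws normally distributed samples of size \(N = 20\) and records the \(W\) value for each one; most of the simulated values pile up just below \(W = 1\).
```
W.sim <- replicate( 10000, shapiro.test( rnorm(20) )$statistic )   # W values for 10000 normal samples
hist( W.sim )                                                      # the simulated sampling distribution
```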
To run the test in R, we use the `shapiro.test()` function. It has only a single argument `x` , which is a numeric vector containing the data whose normality needs to be tested. For example, when we apply this function to our `normal.data` , we get the following:
```
shapiro.test( x = normal.data )
```
```
##
## Shapiro-Wilk normality test
##
## data: normal.data
## W = 0.98654, p-value = 0.4076
```
So, not surprisingly, we have no evidence that these data depart from normality. When reporting the results for a Shapiro-Wilk test, you should (as usual) make sure to include the test statistic \(W\) and the \(p\) value, though given that the sampling distribution depends so heavily on \(N\) it would probably be a politeness to include \(N\) as well.
## 13.10 Testing non-normal data with Wilcoxon tests
Okay, suppose your data turn out to be pretty substantially non-normal, but you still want to run something like a \(t\)-test? This situation occurs a lot in real life: for the AFL winning margins data, for instance, the Shapiro-Wilk test made it very clear that the normality assumption is violated. This is the situation where you want to use Wilcoxon tests.
Like the \(t\)-test, the Wilcoxon test comes in two forms, one-sample and two-sample, and they’re used in more or less the exact same situations as the corresponding \(t\)-tests. Unlike the \(t\)-test, the Wilcoxon test doesn’t assume normality, which is nice. In fact, they don’t make any assumptions about what kind of distribution is involved: in statistical jargon, this makes them nonparametric tests. While avoiding the normality assumption is nice, there’s a drawback: the Wilcoxon test is usually less powerful than the \(t\)-test (i.e., higher Type II error rate). I won’t discuss the Wilcoxon tests in as much detail as the \(t\)-tests, but I’ll give you a brief overview.
### 13.10.1 Two sample Wilcoxon test
I’ll start by describing the two sample Wilcoxon test (also known as the Mann-Whitney test), since it’s actually simpler than the one sample version. Suppose we’re looking at the scores of 10 people on some test. Since my imagination has now failed me completely, let’s pretend it’s a “test of awesomeness”, and there are two groups of people, “A” and “B”. I’m curious to know which group is more awesome. The data are included in the file `awesome.Rdata` , and like many of the data sets I’ve been using, it contains only a single data frame, in this case called `awesome` . Here’s the data:
```
load(file.path(projecthome, "data/awesome.Rdata"))
print( awesome )
```
```
## scores group
## 1 6.4 A
## 2 10.7 A
## 3 11.9 A
## 4 7.3 A
## 5 10.0 A
## 6 14.5 B
## 7 10.4 B
## 8 12.9 B
## 9 11.7 B
## 10 13.0 B
```
As long as there are no ties (i.e., people with the exact same awesomeness score), then the test that we want to do is surprisingly simple. All we have to do is construct a table that compares every observation in group \(A\) against every observation in group \(B\). Whenever the group \(A\) datum is larger, we place a check mark in the table:
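For the `awesome` data, with the group \(A\) scores as rows and the group \(B\) scores as columns, the table looks like this (a check mark means that the group \(A\) score is the larger of the pair):

| | 14.5 | 10.4 | 12.9 | 11.7 | 13.0 |
| --- | --- | --- | --- | --- | --- |
| 6.4 | | | | | |
| 10.7 | | ✓ | | | |
| 11.9 | | ✓ | | ✓ | |
| 7.3 | | | | | |
| 10.0 | | | | | |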
We then count up the number of checkmarks. This is our test statistic, \(W\).201 The actual sampling distribution for \(W\) is somewhat complicated, and I’ll skip the details. For our purposes, it’s sufficient to note that the interpretation of \(W\) is qualitatively the same as the interpretation of \(t\) or \(z\). That is, if we want a two-sided test, then we reject the null hypothesis when \(W\) is very large or very small; but if we have a directional (i.e., one-sided) hypothesis, then we only use one or the other.
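If you’d like to convince yourself that this really is all there is to it, a couple of lines of R reproduce the count directly from the `awesome` data frame; the answer agrees with the \(W = 3\) that `wilcox.test()` reports below.
```
A.scores <- awesome$scores[ awesome$group == "A" ]
B.scores <- awesome$scores[ awesome$group == "B" ]
sum( outer( A.scores, B.scores, ">" ) )   # count the pairs where the group A score is larger: W = 3
```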
The structure of the `wilcox.test()` function should feel very familiar to you by now. When you have your data organised in terms of an outcome variable and a grouping variable, then you use the `formula` and `data` arguments, so your command looks like this:
```
wilcox.test( formula = scores ~ group, data = awesome)
```
```
##
## Wilcoxon rank sum test
##
## data: scores by group
## W = 3, p-value = 0.05556
## alternative hypothesis: true location shift is not equal to 0
```
Just like we saw with the `t.test()` function, there is an `alternative` argument that you can use to switch between two-sided tests and one-sided tests, plus a few other arguments that we don’t need to worry too much about at an introductory level. Similarly, the `wilcox.test()` function allows you to use the `x` and `y` arguments when you have your data stored separately for each group. For instance, suppose we use the data from the `awesome2.Rdata` file:
```
load( file.path(projecthome, "data/awesome2.Rdata" ))
score.A
```
```
## [1] 6.4 10.7 11.9 7.3 10.0
```
`score.B`
```
## [1] 14.5 10.4 12.9 11.7 13.0
```
When your data are organised like this, then you would use a command like this:
```
wilcox.test( x = score.A, y = score.B )
```
```
##
## Wilcoxon rank sum test
##
## data: score.A and score.B
## W = 3, p-value = 0.05556
## alternative hypothesis: true location shift is not equal to 0
```
The output that R produces is pretty much the same as last time.
### 13.10.2 One sample Wilcoxon test
What about the one sample Wilcoxon test (or equivalently, the paired samples Wilcoxon test)? Suppose I’m interested in finding out whether taking a statistics class has any effect on the happiness of students. Here’s my data:
```
load( file.path(projecthome, "data/happy.Rdata" ))
print( happiness )
```
```
## before after change
## 1 30 6 -24
## 2 43 29 -14
## 3 21 11 -10
## 4 24 31 7
## 5 23 17 -6
## 6 40 2 -38
## 7 29 31 2
## 8 56 21 -35
## 9 38 8 -30
## 10 16 21 5
```
What I’ve measured here is the happiness of each student `before` taking the class and `after` taking the class; the `change` score is the difference between the two. Just like we saw with the \(t\)-test, there’s no fundamental difference between doing a paired-samples test using `before` and `after` , versus doing a one-sample test using the `change` scores. As before, the simplest way to think about the test is to construct a tabulation. The way to do it this time is to take those change scores that are positive valued, and tabulate them against the complete sample. What you end up with is a table much like the one we constructed for the two sample test.
Counting up the tick marks this time, we get a test statistic of \(V = 7\). As before, if our test is two sided, then we reject the null hypothesis when \(V\) is very large or very small. As far as running it in R goes, it’s pretty much what you’d expect. For the one-sample version, the command you would use is
```
wilcox.test( x = happiness$change,
mu = 0
)
```
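As an aside, here’s a quick way to see where the \(V = 7\) comes from: with no ties, counting up the tick marks is the same thing as adding up the ranks of the absolute change scores, restricted to the positive changes only.
```
abs.ranks <- rank( abs( happiness$change ) )   # rank the sizes of all the changes
sum( abs.ranks[ happiness$change > 0 ] )       # add up the ranks of the positive changes: V = 7
```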
If you run this command, you’ll find that we have a significant effect. Evidently, taking a statistics class does have an effect on your happiness. Switching to a paired samples version of the test won’t give us different answers, of course; but here’s the command to do it:
```
wilcox.test( x = happiness$after,
y = happiness$before,
paired = TRUE
)
```
## 13.11 Summary
* A one sample \(t\)-test is used to compare a single sample mean against a hypothesised value for the population mean. (Section 13.2)
* An independent samples \(t\)-test is used to compare the means of two groups, and tests the null hypothesis that they have the same mean. It comes in two forms: the Student test (Section 13.3) assumes that the groups have the same standard deviation; the Welch test (Section 13.4) does not.
* A paired samples \(t\)-test is used when you have two scores from each person, and you want to test the null hypothesis that the two scores have the same mean. It is equivalent to taking the difference between the two scores for each person, and then running a one sample \(t\)-test on the difference scores. (Section 13.5)
* Effect size calculations for the difference between means can be calculated via the Cohen’s \(d\) statistic. (Section 13.8).
* You can check the normality of a sample using QQ plots and the Shapiro-Wilk test. (Section 13.9)
* If your data are non-normal, you can use Wilcoxon tests instead of \(t\)-tests. (Section 13.10)
Informal experimentation in my garden suggests that yes, it does. Australian natives are adapted to low phosphorus levels relative to everywhere else on Earth, apparently, so if you’ve bought a house with a bunch of exotics and you want to plant natives, don’t follow my example: keep them separate. Nutrients to European plants are poison to Australian ones. There’s probably a joke in that, but I can’t figure out what it is.↩
*
Actually this is too strong. Strictly speaking the \(z\) test only requires that the sampling distribution of the mean be normally distributed; if the population is normal then it necessarily follows that the sampling distribution of the mean is also normal. However, as we saw when talking about the central limit theorem, it’s quite possible (even commonplace) for the sampling distribution to be normal even if the population distribution itself is non-normal. However, in light of the sheer ridiculousness of the assumption that the true standard deviation is known, there really isn’t much point in going into details on this front!↩
*
Well, sort of. As I understand the history, Gosset only provided a partial solution: the general solution to the problem was provided by <NAME>.↩
*
More seriously, I tend to think the reverse is true: I get very suspicious of technical reports that fill their results sections with nothing except the numbers. It might just be that I’m an arrogant jerk, but I often feel like an author that makes no attempt to explain and interpret their analysis to the reader either doesn’t understand it themselves, or is being a bit lazy. Your readers are smart, but not infinitely patient. Don’t annoy them if you can help it.↩
*
Although it is the simplest, which is why I started with it.↩
*
A funny question almost always pops up at this point: what the heck is the population being referred to in this case? Is it the set of students actually taking <NAME>’s class (all 33 of them)? The set of people who might take the class (an unknown number) of them? Or something else? Does it matter which of these we pick? It’s traditional in an introductory behavioural stats class to mumble a lot at this point, but since I get asked this question every year by my students, I’ll give a brief answer. Technically yes, it does matter: if you change your definition of what the “real world” population actually is, then the sampling distribution of your observed mean \(\bar{X}\) changes too. The \(t\)-test relies on an assumption that the observations are sampled at random from an infinitely large population; and to the extent that real life isn’t like that, then the \(t\)-test can be wrong. In practice, however, this isn’t usually a big deal: even though the assumption is almost always wrong, it doesn’t lead to a lot of pathological behaviour from the test, so we tend to just ignore it.↩
*
Yes, I have a “favourite” way of thinking about pooled standard deviation estimates. So what?↩
*
Well, I guess you can average apples and oranges, and what you end up with is a delicious fruit smoothie. But no one really thinks that a fruit smoothie is a very good way to describe the original fruits, do they?↩
This design is very similar to the one in Section 12.8 that motivated the McNemar test. This should be no surprise. Both are standard repeated measures designs involving two measurements. The only difference is that this time our outcome variable is interval scale (working memory capacity) rather than a binary, nominal scale variable (a yes-or-no question).↩
At this point we have Drs Harpo, Chico and Zeppo. No prizes for guessing who Dr Groucho is.↩
This is obviously a class being taught at a very small or very expensive university, or else is a postgraduate class. I’ve never taught an intro stats class with less than 350 students.↩
The `sortFrame()` function sorts factor variables like `id` in alphabetical order, which is why it jumps from “student1” to “student10”↩
This is a massive oversimplification.↩
Either that, or the Kolmogorov-Smirnov test, which is probably more traditional than the Shapiro-Wilk, though most things I’ve read seem to suggest Shapiro-Wilk is the better test of normality; although Kolmogorov-Smirnov is a general purpose test of distributional equivalence, so it can be adapted to handle other kinds of distribution tests; in R it’s implemented via the `ks.test()` function.↩
Actually, there are two different versions of the test statistic; they differ from each other by a constant value. The version that I’ve described is the one that R calculates.↩
# Chapter 14 Comparing several means (one-way ANOVA)
This chapter introduces one of the most widely used tools in statistics, known as “the analysis of variance”, which is usually referred to as ANOVA. The basic technique was developed by <NAME> in the early 20th century, and it is to him that we owe the rather unfortunate terminology. The term ANOVA is a little misleading, in two respects. Firstly, although the name of the technique refers to variances, ANOVA is concerned with investigating differences in means. Secondly, there are several different things out there that are all referred to as ANOVAs, some of which have only a very tenuous connection to one another. Later on in the book we’ll encounter a range of different ANOVA methods that apply in quite different situations, but for the purposes of this chapter we’ll only consider the simplest form of ANOVA, in which we have several different groups of observations, and we’re interested in finding out whether those groups differ in terms of some outcome variable of interest. This is the question that is addressed by a one-way ANOVA.
The structure of this chapter is as follows: In Section 14.1 I’ll introduce a fictitious data set that we’ll use as a running example throughout the chapter. After introducing the data, I’ll describe the mechanics of how a one-way ANOVA actually works (Section 14.2) and then focus on how you can run one in R (Section 14.3). These two sections are the core of the chapter. The remainder of the chapter discusses a range of important topics that inevitably arise when running an ANOVA, namely how to calculate effect sizes (Section 14.4), post hoc tests and corrections for multiple comparisons (Section 14.5) and the assumptions that ANOVA relies upon (Section 14.6). We’ll also talk about how to check those assumptions and some of the things you can do if the assumptions are violated (Sections 14.7 to 14.10). At the end of the chapter we’ll talk a little about the relationship between ANOVA and other statistical tools (Section 14.11).
## 14.1 An illustrative data set
Suppose you’ve become involved in a clinical trial in which you are testing a new antidepressant drug called Joyzepam. In order to construct a fair test of the drug’s effectiveness, the study involves three separate drugs to be administered. One is a placebo, and another is an existing antidepressant / anti-anxiety drug called Anxifree. A collection of 18 participants with moderate to severe depression are recruited for your initial testing. Because the drugs are sometimes administered in conjunction with psychological therapy, your study includes 9 people undergoing cognitive behavioural therapy (CBT) and 9 who are not. Participants are randomly assigned (doubly blinded, of course) a treatment, such that there are 3 CBT people and 3 no-therapy people assigned to each of the 3 drugs. A psychologist assesses the mood of each person after a 3 month run with each drug, and the overall improvement in each person’s mood is assessed on a scale ranging from \(-5\) to \(+5\).
With that as the study design, let’s now look at what we’ve got in the data file:
```
load(file.path(projecthome, "data", "clinicaltrial.Rdata")) # load data
str(clin.trial)
```
```
## 'data.frame': 18 obs. of 3 variables:
## $ drug : Factor w/ 3 levels "placebo","anxifree",..: 1 1 1 2 2 2 3 3 3 1 ...
## $ therapy : Factor w/ 2 levels "no.therapy","CBT": 1 1 1 1 1 1 1 1 1 2 ...
## $ mood.gain: num 0.5 0.3 0.1 0.6 0.4 0.2 1.4 1.7 1.3 0.6 ...
```
So we have a single data frame called `clin.trial` , containing three variables; `drug` , `therapy` and `mood.gain` . Next, let’s print the data frame to get a sense of what the data actually look like. `print( clin.trial )`
```
## drug therapy mood.gain
## 1 placebo no.therapy 0.5
## 2 placebo no.therapy 0.3
## 3 placebo no.therapy 0.1
## 4 anxifree no.therapy 0.6
## 5 anxifree no.therapy 0.4
## 6 anxifree no.therapy 0.2
## 7 joyzepam no.therapy 1.4
## 8 joyzepam no.therapy 1.7
## 9 joyzepam no.therapy 1.3
## 10 placebo CBT 0.6
## 11 placebo CBT 0.9
## 12 placebo CBT 0.3
## 13 anxifree CBT 1.1
## 14 anxifree CBT 0.8
## 15 anxifree CBT 1.2
## 16 joyzepam CBT 1.8
## 17 joyzepam CBT 1.3
## 18 joyzepam CBT 1.4
```
For the purposes of this chapter, what we’re really interested in is the effect of `drug` on `mood.gain` . The first thing to do is calculate some descriptive statistics and draw some graphs. In Chapter 5 we discussed a variety of different functions that can be used for this purpose. For instance, we can use the `xtabs()` function to see how many people we have in each group:
```
xtabs( ~drug, clin.trial )
```
```
## drug
## placebo anxifree joyzepam
## 6 6 6
```
Similarly, we can use the `aggregate()` function to calculate means and standard deviations for the `mood.gain` variable broken down by which `drug` was administered:
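The `aggregate()` call itself seems to have dropped out of this rendering. A minimal sketch of the commands that would produce summaries like these (using the formula interface of `aggregate()`) is shown here; note that the numbers printed underneath match the group standard deviations rather than the means.

```
aggregate( mood.gain ~ drug, data = clin.trial, FUN = mean ) # group means of mood.gain, by drug
aggregate( mood.gain ~ drug, data = clin.trial, FUN = sd )   # group standard deviations, by drug
```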
```
## drug mood.gain
## 1 placebo 0.2810694
## 2 anxifree 0.3920034
## 3 joyzepam 0.2136976
```
Finally, we can use `plotmeans()` from the `gplots` package to produce a pretty picture.
```
library(gplots)
plotmeans( formula = mood.gain ~ drug, # plot mood.gain by drug
data = clin.trial, # the data frame
xlab = "Drug Administered", # x-axis label
ylab = "Mood Gain", # y-axis label
n.label = FALSE # don't display sample size
)
```
The results are shown in Figure 14.1, which plots the average mood gain for all three conditions; error bars show 95% confidence intervals. As the plot makes clear, there is a larger improvement in mood for participants in the Joyzepam group than for either the Anxifree group or the placebo group. The Anxifree group shows a larger mood gain than the control group, but the difference isn’t as large.
The question that we want to answer is: are these differences “real”, or are they just due to chance?
## 14.2 How ANOVA works
In order to answer the question posed by our clinical trial data, we’re going to run a one-way ANOVA. As usual, I’m going to start by showing you how to do it the hard way, building the statistical tool from the ground up and showing you how you could do it in R if you didn’t have access to any of the cool built-in ANOVA functions. And, as always, I hope you’ll read it carefully, try to do it the long way once or twice to make sure you really understand how ANOVA works, and then – once you’ve grasped the concept – never ever do it this way again.
The experimental design that I described in the previous section strongly suggests that we’re interested in comparing the average mood change for the three different drugs. In that sense, we’re talking about an analysis similar to the \(t\)-test (Chapter 13), but involving more than two groups. If we let \(\mu_P\) denote the population mean for the mood change induced by the placebo, and let \(\mu_A\) and \(\mu_J\) denote the corresponding means for our two drugs, Anxifree and Joyzepam, then the (somewhat pessimistic) null hypothesis that we want to test is that all three population means are identical: that is, neither of the two drugs is any more effective than a placebo. Mathematically, we write this null hypothesis like this: \[ \begin{array}{rcl} H_0 &:& \mbox{it is true that } \mu_P = \mu_A = \mu_J \end{array} \] As a consequence, our alternative hypothesis is that at least one of the three different treatments is different from the others. It’s a little trickier to write this mathematically, because (as we’ll discuss) there are quite a few different ways in which the null hypothesis can be false. So for now we’ll just write the alternative hypothesis like this: \[ \begin{array}{rcl} H_1 &:& \mbox{it is *not* true that } \mu_P = \mu_A = \mu_J \end{array} \] This null hypothesis is a lot trickier to test than any of the ones we’ve seen previously. How shall we do it? A sensible guess would be to “do an ANOVA”, since that’s the title of the chapter, but it’s not particularly clear why an “analysis of variances” will help us learn anything useful about the means. In fact, this is one of the biggest conceptual difficulties that people have when first encountering ANOVA. To see how this works, I find it most helpful to start by talking about variances. In fact, what I’m going to do is start by playing some mathematical games with the formula that describes the variance. That is, we’ll start out by playing around with variances, and it will turn out that this gives us a useful tool for investigating means.
### 14.2.1 Two formulas for the variance of \(Y\)
Firstly, let’s start by introducing some notation. We’ll use \(G\) to refer to the total number of groups. For our data set, there are three drugs, so there are \(G=3\) groups. Next, we’ll use \(N\) to refer to the total sample size: there are a total of \(N=18\) people in our data set. Similarly, let’s use \(N_k\) to denote the number of people in the \(k\)-th group. In our fake clinical trial, the sample size is \(N_k = 6\) for all three groups.202 Finally, we’ll use \(Y\) to denote the outcome variable: in our case, \(Y\) refers to mood change. Specifically, we’ll use \(Y_{ik}\) to refer to the mood change experienced by the \(i\)-th member of the \(k\)-th group. Similarly, we’ll use \(\bar{Y}\) to be the average mood change, taken across all 18 people in the experiment, and \(\bar{Y}_k\) to refer to the average mood change experienced by the 6 people in group \(k\).
Excellent. Now that we’ve got our notation sorted out, we can start writing down formulas. To start with, let’s recall the formula for the variance that we used in Section 5.2, way back in those kinder days when we were just doing descriptive statistics. The sample variance of \(Y\) is defined as follows: \[ \mbox{Var}(Y) = \frac{1}{N} \sum_{k=1}^G \sum_{i=1}^{N_k} \left(Y_{ik} - \bar{Y} \right)^2 \] This formula looks pretty much identical to the formula for the variance in Section 5.2. The only difference is that this time around I’ve got two summations here: I’m summing over groups (i.e., values for \(k\)) and over the people within the groups (i.e., values for \(i\)). This is purely a cosmetic detail: if I’d instead used the notation \(Y_p\) to refer to the value of the outcome variable for person \(p\) in the sample, then I’d only have a single summation. The only reason that we have a double summation here is that I’ve classified people into groups, and then assigned numbers to people within groups.
A concrete example might be useful here. Let’s consider this table, in which we have a total of \(N=5\) people sorted into \(G=2\) groups. Arbitrarily, let’s say that the “cool” people are group 1, and the “uncool” people are group 2, and it turns out that we have three cool people (\(N_1 = 3\)) and two uncool people (\(N_2 = 2\)).
name | person (\(p\)) | group | group num (\(k\)) | index in group (\(i\)) | grumpiness (\(Y_{ik}\) or \(Y_p\)) |
| --- | --- | --- | --- | --- | --- |
Ann | 1 | cool | 1 | 1 | 20 |
Ben | 2 | cool | 1 | 2 | 55 |
Cat | 3 | cool | 1 | 3 | 21 |
Dan | 4 | uncool | 2 | 1 | 91 |
Egg | 5 | uncool | 2 | 2 | 22 |
Notice that I’ve constructed two different labelling schemes here. We have a “person” variable \(p\), so it would be perfectly sensible to refer to \(Y_p\) as the grumpiness of the \(p\)-th person in the sample. For instance, the table shows that Dan is the fourth person in the sample, so we’d say \(p = 4\). So, when talking about the grumpiness \(Y\) of this “Dan” person, whoever he might be, we could refer to his grumpiness by saying that \(Y_p = 91\), for person \(p = 4\) that is. However, that’s not the only way we could refer to Dan. As an alternative we could note that Dan belongs to the “uncool” group (\(k = 2\)), and is in fact the first person listed in the uncool group (\(i = 1\)). So it’s equally valid to refer to Dan’s grumpiness by saying that \(Y_{ik} = 91\), where \(k = 2\) and \(i = 1\). In other words, each person \(p\) corresponds to a unique \(ik\) combination, and so the formula that I gave above is actually identical to our original formula for the variance, which would be \[ \mbox{Var}(Y) = \frac{1}{N} \sum_{p=1}^N \left(Y_{p} - \bar{Y} \right)^2 \] In both formulas, all we’re doing is summing over all of the observations in the sample. Most of the time we would just use the simpler \(Y_p\) notation: the equation using \(Y_p\) is clearly the simpler of the two. However, when doing an ANOVA it’s important to keep track of which participants belong in which groups, and we need to use the \(Y_{ik}\) notation to do this.
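If you’d like to convince yourself that the two formulas really are the same thing, here’s a quick sketch in R using the made-up grumpiness scores from the table above (I’m using the divide-by-\(N\) version of the variance, to match the formula in the text):

```
grumpiness <- c(20, 55, 21, 91, 22)                  # the five scores, in "person" order
group <- c("cool","cool","cool","uncool","uncool")   # who belongs to which group
# single summation over people p:
mean( (grumpiness - mean(grumpiness))^2 )
# double summation: squared deviations summed within each group, then added across groups
sum( tapply( (grumpiness - mean(grumpiness))^2, group, sum ) ) / length(grumpiness)
```

Both commands print the same number, which is the whole point of the exercise.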
### 14.2.2 From variances to sums of squares
Okay, now that we’ve got a good grasp on how the variance is calculated, let’s define something called the total sum of squares, which is denoted SS\(_{tot}\). This is very simple: instead of averaging the squared deviations, which is what we do when calculating the variance, we just add them up. So the formula for the total sum of squares is almost identical to the formula for the variance: \[ \mbox{SS}_{tot} = \sum_{k=1}^G \sum_{i=1}^{N_k} \left(Y_{ik} - \bar{Y} \right)^2 \] When we talk about analysing variances in the context of ANOVA, what we’re really doing is working with the total sums of squares rather than the actual variance. One very nice thing about the total sum of squares is that we can break it up into two different kinds of variation. Firstly, we can talk about the within-group sum of squares, in which we look to see how different each individual person is from their own group mean: \[ \mbox{SS}_w = \sum_{k=1}^G \sum_{i=1}^{N_k} \left( Y_{ik} - \bar{Y}_k \right)^2 \] where \(\bar{Y}_k\) is a group mean. In our example, \(\bar{Y}_k\) would be the average mood change experienced by those people given the \(k\)-th drug. So, instead of comparing individuals to the average of all people in the experiment, we’re only comparing them to those people in the same group. As a consequence, you’d expect the value of \(\mbox{SS}_w\) to be smaller than the total sum of squares, because it’s completely ignoring any group differences – that is, the fact that the drugs (if they work) will have different effects on people’s moods.
Next, we can define a third notion of variation which captures only the differences between groups. We do this by looking at the differences between the group means \(\bar{Y}_k\) and grand mean \(\bar{Y}\). In order to quantify the extent of this variation, what we do is calculate the between-group sum of squares: \[ \begin{array}{rcl} \mbox{SS}_{b} &=& \sum_{k=1}^G \sum_{i=1}^{N_k} \left( \bar{Y}_k - \bar{Y} \right)^2 \\ &=& \sum_{k=1}^G N_k \left( \bar{Y}_k - \bar{Y} \right)^2 \end{array} \] It’s not too difficult to show that the total variation among people in the experiment \(\mbox{SS}_{tot}\) is actually the sum of the differences between the groups \(\mbox{SS}_b\) and the variation inside the groups \(\mbox{SS}_w\). That is: \[ \mbox{SS}_w + \mbox{SS}_{b} = \mbox{SS}_{tot} \] Yay.
Okay, so what have we found out? We’ve discovered that the total variability associated with the outcome variable (\(\mbox{SS}_{tot}\)) can be mathematically carved up into the sum of “the variation due to the differences in the sample means for the different groups” (\(\mbox{SS}_{b}\)) plus “all the rest of the variation” (\(\mbox{SS}_{w}\)). How does that help me find out whether the groups have different population means? Um. Wait. Hold on a second… now that I think about it, this is exactly what we were looking for. If the null hypothesis is true, then you’d expect all the sample means to be pretty similar to each other, right? And that would imply that you’d expect \(\mbox{SS}_{b}\) to be really small, or at least you’d expect it to be a lot smaller than the “the variation associated with everything else”, \(\mbox{SS}_{w}\). Hm. I detect a hypothesis test coming on…
### 14.2.3 From sums of squares to the \(F\)-test
As we saw in the last section, the qualitative idea behind ANOVA is to compare the two sums of squares values \(\mbox{SS}_b\) and \(\mbox{SS}_w\) to each other: if the between-group variation is \(\mbox{SS}_b\) is large relative to the within-group variation \(\mbox{SS}_w\) then we have reason to suspect that the population means for the different groups aren’t identical to each other. In order to convert this into a workable hypothesis test, there’s a little bit of “fiddling around” needed. What I’ll do is first show you what we do to calculate our test statistic – which is called an \(F\) ratio – and then try to give you a feel for why we do it this way.
In order to convert our SS values into an \(F\)-ratio, the first thing we need to calculate is the degrees of freedom associated with the SS\(_b\) and SS\(_w\) values. As usual, the degrees of freedom corresponds to the number of unique “data points” that contribute to a particular calculation, minus the number of “constraints” that they need to satisfy. For the within-groups variability, what we’re calculating is the variation of the individual observations (\(N\) data points) around the group means (\(G\) constraints). In contrast, for the between groups variability, we’re interested in the variation of the group means (\(G\) data points) around the grand mean (1 constraint). Therefore, the degrees of freedom here are: \[ \begin{array}{lcl} \mbox{df}_b &=& G-1 \\ \mbox{df}_w &=& N-G \\ \end{array} \] Okay, that seems simple enough. What we do next is convert our summed squares value into a “mean squares” value, which we do by dividing by the degrees of freedom: \[ \begin{array}{lcl} \mbox{MS}_b &=& \displaystyle\frac{\mbox{SS}_b }{ \mbox{df}_b} \\ \mbox{MS}_w &=& \displaystyle\frac{\mbox{SS}_w }{ \mbox{df}_w} \end{array} \] Finally, we calculate the \(F\)-ratio by dividing the between-groups MS by the within-groups MS: \[ F = \frac{\mbox{MS}_b }{ \mbox{MS}_w } \] At a very general level, the intuition behind the \(F\) statistic is straightforward: bigger values of \(F\) means that the between-groups variation is large, relative to the within-groups variation. As a consequence, the larger the value of \(F\), the more evidence we have against the null hypothesis. But how large does \(F\) have to be in order to actually reject \(H_0\)? In order to understand this, you need a slightly deeper understanding of what ANOVA is and what the mean squares values actually are.
The next section discusses that in a bit of detail, but for readers that aren’t interested in the details of what the test is actually measuring, I’ll cut to the chase. In order to complete our hypothesis test, we need to know the sampling distribution for \(F\) if the null hypothesis is true. Not surprisingly, the sampling distribution for the \(F\) statistic under the null hypothesis is an \(F\) distribution. If you recall back to our discussion of the \(F\) distribution in Chapter 9, the \(F\) distribution has two parameters, corresponding to the two degrees of freedom involved: the first one df\(_1\) is the between groups degrees of freedom df\(_b\), and the second one df\(_2\) is the within groups degrees of freedom df\(_w\).
A summary of all the key quantities involved in a one-way ANOVA, including the formulas showing how they are calculated, is shown in Table 14.1.
| | df | sum of squares | mean squares | \(F\) statistic | \(p\) value |
| --- | --- | --- | --- | --- | --- |
between groups | \(\mbox{df}_b = G-1\) | SS\(_b = \displaystyle\sum_{k=1}^G N_k (\bar{Y}_k - \bar{Y})^2\) | \(\mbox{MS}_b = \frac{\mbox{SS}_b}{\mbox{df}_b}\) | \(F = \frac{\mbox{MS}_b }{ \mbox{MS}_w }\) | [complicated] |
within groups | \(\mbox{df}_w = N-G\) | SS\(_w = \sum_{k=1}^G \sum_{i = 1}^{N_k} ( {Y}_{ik} - \bar{Y}_k)^2\) | \(\mbox{MS}_w = \frac{\mbox{SS}_w}{\mbox{df}_w}\) | - | - |
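Since Table 14.1 is really just a recipe, it’s worth seeing how little arithmetic is involved. Here’s a minimal sketch of a helper function (the name `f.ratio` is just something I’ve made up for illustration) that turns the two sums of squares into an \(F\) statistic; the worked example in Section 14.2.5 does the same thing with the actual clinical trial numbers.

```
# hypothetical helper: convert sums of squares into an F statistic
f.ratio <- function( ss.b, ss.w, N, G ) {
  df.b <- G - 1        # between-groups degrees of freedom
  df.w <- N - G        # within-groups degrees of freedom
  ms.b <- ss.b / df.b  # between-groups mean square
  ms.w <- ss.w / df.w  # within-groups mean square
  ms.b / ms.w          # the F statistic
}
```

For instance, `f.ratio( ss.b = 3.45, ss.w = 1.39, N = 18, G = 3 )` comes out at roughly 18.6, which is the value we’ll compute by hand shortly.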
### 14.2.4 The model for the data and the meaning of \(F\) (advanced)
At a fundamental level, ANOVA is a competition between two different statistical models, \(H_0\) and \(H_1\). When I described the null and alternative hypotheses at the start of the section, I was a little imprecise about what these models actually are. I’ll remedy that now, though you probably won’t like me for doing so. If you recall, our null hypothesis was that all of the group means are identical to one another. If so, then a natural way to think about the outcome variable \(Y_{ik}\) is to describe individual scores in terms of a single population mean \(\mu\), plus the deviation from that population mean. This deviation is usually denoted \(\epsilon_{ik}\) and is traditionally called the error or residual associated with that observation. Be careful though: just like we saw with the word “significant”, the word “error” has a technical meaning in statistics that isn’t quite the same as its everyday English definition. In everyday language, “error” implies a mistake of some kind; in statistics, it doesn’t (or at least, not necessarily). With that in mind, the word “residual” is a better term than the word “error”. In statistics, both words mean “leftover variability”: that is, “stuff” that the model can’t explain. In any case, here’s what the null hypothesis looks like when we write it as a statistical model: \[ Y_{ik} = \mu + \epsilon_{ik} \] where we make the assumption (discussed later) that the residual values \(\epsilon_{ik}\) are normally distributed, with mean 0 and a standard deviation \(\sigma\) that is the same for all groups. To use the notation that we introduced in Chapter 9 we would write this assumption like this: \[ \epsilon_{ik} \sim \mbox{Normal}(0, \sigma^2) \]
What about the alternative hypothesis, \(H_1\)? The only difference between the null hypothesis and the alternative hypothesis is that we allow each group to have a different population mean. So, if we let \(\mu_k\) denote the population mean for the \(k\)-th group in our experiment, then the statistical model corresponding to \(H_1\) is: \[ Y_{ik} = \mu_k + \epsilon_{ik} \] where, once again, we assume that the error terms are normally distributed with mean 0 and standard deviation \(\sigma\). That is, the alternative hypothesis also assumes that \[ \epsilon \sim \mbox{Normal}(0, \sigma^2) \]
Okay, now that we’ve described the statistical models underpinning \(H_0\) and \(H_1\) in more detail, it’s pretty straightforward to say what the mean square values are measuring, and what this means for the interpretation of \(F\). I won’t bore you with the proof of this, but it turns out that the within-groups mean square, MS\(_w\), can be viewed as an estimator (in the technical sense: Chapter 10) of the error variance \(\sigma^2\). The between-groups mean square MS\(_b\) is also an estimator; but what it estimates is the error variance plus a quantity that depends on the true differences among the group means. If we call this quantity \(Q\), then we can see that the \(F\)-statistic is basically203
\[ F = \frac{\hat{Q} + \hat\sigma^2}{\hat\sigma^2} \] where the true value \(Q=0\) if the null hypothesis is true, and \(Q > 0\) if the alternative hypothesis is true (e.g. ch. 10 Hays 1994). Therefore, at a bare minimum the \(F\) value must be larger than 1 to have any chance of rejecting the null hypothesis. Note that this doesn’t mean that it’s impossible to get an \(F\)-value less than 1. What it means is that, if the null hypothesis is true the sampling distribution of the \(F\) ratio has a mean of 1,204 and so we need to see \(F\)-values larger than 1 in order to safely reject the null.
To be a bit more precise about the sampling distribution, notice that if the null hypothesis is true, both MS\(_b\) and MS\(_w\) are estimators of the variance of the residuals \(\epsilon_{ik}\). If those residuals are normally distributed, then you might suspect that the estimate of the variance of \(\epsilon_{ik}\) is chi-square distributed… because (as discussed in Section 9.6) that’s what a chi-square distribution is: it’s what you get when you square a bunch of normally-distributed things and add them up. And since the \(F\) distribution is (again, by definition) what you get when you take the ratio between two things that are \(\chi^2\) distributed… we have our sampling distribution. Obviously, I’m glossing over a whole lot of stuff when I say this, but in broad terms, this really is where our sampling distribution comes from.
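If you don’t want to take my word for it, a small simulation makes the point nicely: generate data for which the null hypothesis is true (three groups of six observations drawn from a single normal distribution, mimicking the clinical trial design), calculate the \(F\) statistic each time, and compare the results to the theoretical \(F\) distribution. This is purely an illustrative sketch; the number of replications is arbitrary.

```
set.seed(1)
group <- gl( 3, 6 )                  # three groups of six, like the clinical trial
null.F <- replicate( 10000, {
  y <- rnorm( 18 )                   # null hypothesis true: one common mean for everyone
  summary( aov( y ~ group ) )[[1]][ 1, "F value" ]
})
mean( null.F )                                  # close to 1 (strictly, the mean of an F(2,15) is 15/13)
hist( null.F, breaks = 50, freq = FALSE )       # simulated sampling distribution...
curve( df( x, df1 = 2, df2 = 15 ), add = TRUE ) # ...overlaid with the theoretical F(2,15) density
```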
### 14.2.5 A worked example
The previous discussion was fairly abstract, and a little on the technical side, so I think that at this point it might be useful to see a worked example. For that, let’s go back to the clinical trial data that I introduced at the start of the chapter. The descriptive statistics that we calculated at the beginning tell us our group means: an average mood gain of 0.45 for the placebo, 0.72 for Anxifree, and 1.48 for Joyzepam. With that in mind, let’s party like it’s 1899205 and start doing some pencil and paper calculations. I’ll only do this for the first 5 observations, because it’s not bloody 1899 and I’m very lazy. Let’s start by calculating \(\mbox{SS}_{w}\), the within-group sums of squares. First, let’s draw up a nice table to help us with our calculations…
group (\(k\)) | outcome (\(Y_{ik}\)) |
| --- | --- |
placebo | 0.5 |
placebo | 0.3 |
placebo | 0.1 |
anxifree | 0.6 |
anxifree | 0.4 |
At this stage, the only thing I’ve included in the table is the raw data itself: that is, the grouping variable (i.e., `drug` ) and outcome variable (i.e. `mood.gain` ) for each person. Note that the outcome variable here corresponds to the \(Y_{ik}\) value in our equation previously. The next step in the calculation is to write down, for each person in the study, the corresponding group mean; that is, \(\bar{Y}_k\). This is slightly repetitive, but not particularly difficult since we already calculated those group means when doing our descriptive statistics:
group (\(k\)) | outcome (\(Y_{ik}\)) | group mean (\(\bar{Y}_k\)) |
| --- | --- | --- |
placebo | 0.5 | 0.45 |
placebo | 0.3 | 0.45 |
placebo | 0.1 | 0.45 |
anxifree | 0.6 | 0.72 |
anxifree | 0.4 | 0.72 |
Now that we’ve written those down, we need to calculate – again for every person – the deviation from the corresponding group mean. That is, we want to subtract \(Y_{ik} - \bar{Y}_k\). After we’ve done that, we need to square everything. When we do that, here’s what we get:
group (\(k\)) | outcome (\(Y_{ik}\)) | group mean (\(\bar{Y}_k\)) | dev. from group mean (\(Y_{ik} - \bar{Y}_{k}\)) | squared deviation (\((Y_{ik} - \bar{Y}_{k})^2\)) |
| --- | --- | --- | --- | --- |
placebo | 0.5 | 0.45 | 0.05 | 0.0025 |
placebo | 0.3 | 0.45 | -0.15 | 0.0225 |
placebo | 0.1 | 0.45 | -0.35 | 0.1225 |
anxifree | 0.6 | 0.72 | -0.12 | 0.0136 |
anxifree | 0.4 | 0.72 | -0.32 | 0.1003 |
The last step is equally straightforward. In order to calculate the within-group sum of squares, we just add up the squared deviations across all observations: \[ \begin{array}{rcl} \mbox{SS}_w &=& 0.0025 + 0.0225 + 0.1225 + 0.0136 + 0.1003 \\ &=& 0.2614 \end{array} \]
Of course, if we actually wanted to get the right answer, we’d need to do this for all 18 observations in the data set, not just the first five. We could continue with the pencil and paper calculations if we wanted to, but it’s pretty tedious. Alternatively, it’s not too hard to get R to do it. Here’s how:
```
outcome <- clin.trial$mood.gain
group <- clin.trial$drug
gp.means <- tapply(outcome,group,mean)
gp.means <- gp.means[group]
dev.from.gp.means <- outcome - gp.means
squared.devs <- dev.from.gp.means ^2
```
It might not be obvious from inspection what these commands are doing: as a general rule, the human brain seems to just shut down when faced with a big block of programming. However, I strongly suggest that – if you’re like me and tend to find that the mere sight of this code makes you want to look away and see if there’s any beer left in the fridge or a game of footy on the telly – you take a moment and look closely at these commands one at a time. Every single one of these commands is something you’ve seen before somewhere else in the book. There’s nothing novel about them (though I’ll have to admit that the `tapply()` function takes a while to get a handle on), so if you’re not quite sure how these commands work, this might be a good time to try playing around with them yourself, to try to get a sense of what’s happening. On the other hand, if this does seem to make sense, then you won’t be all that surprised at what happens when I wrap these variables in a data frame, and print it out…
```
Y <- data.frame( group, outcome, gp.means,
dev.from.gp.means, squared.devs )
print(Y, digits = 2)
```
```
## group outcome gp.means dev.from.gp.means squared.devs
## 1 placebo 0.5 0.45 0.050 0.0025
## 2 placebo 0.3 0.45 -0.150 0.0225
## 3 placebo 0.1 0.45 -0.350 0.1225
## 4 anxifree 0.6 0.72 -0.117 0.0136
## 5 anxifree 0.4 0.72 -0.317 0.1003
## 6 anxifree 0.2 0.72 -0.517 0.2669
## 7 joyzepam 1.4 1.48 -0.083 0.0069
## 8 joyzepam 1.7 1.48 0.217 0.0469
## 9 joyzepam 1.3 1.48 -0.183 0.0336
## 10 placebo 0.6 0.45 0.150 0.0225
## 11 placebo 0.9 0.45 0.450 0.2025
## 12 placebo 0.3 0.45 -0.150 0.0225
## 13 anxifree 1.1 0.72 0.383 0.1469
## 14 anxifree 0.8 0.72 0.083 0.0069
## 15 anxifree 1.2 0.72 0.483 0.2336
## 16 joyzepam 1.8 1.48 0.317 0.1003
## 17 joyzepam 1.3 1.48 -0.183 0.0336
## 18 joyzepam 1.4 1.48 -0.083 0.0069
```
If you compare this output to the contents of the table I’ve been constructing by hand, you can see that R has done exactly the same calculations that I was doing, and much faster too. So, if we want to finish the calculations of the within-group sum of squares in R, we just ask for the `sum()` of the `squared.devs` variable:
```
SSw <- sum( squared.devs )
print( SSw )
```
`## [1] 1.391667` Obviously, this isn’t the same as what I calculated, because R used all 18 observations. But if I’d typed
```
sum( squared.devs[1:5] )
```
instead, it would have given the same answer that I got earlier.
Okay. Now that we’ve calculated the within groups variation, \(\mbox{SS}_w\), it’s time to turn our attention to the between-group sum of squares, \(\mbox{SS}_b\). The calculations for this case are very similar. The main difference is that, instead of calculating the differences between an observation \(Y_{ik}\) and a group mean \(\bar{Y}_k\) for all of the observations, we calculate the differences between the group means \(\bar{Y}_k\) and the grand mean \(\bar{Y}\) (in this case 0.88) for all of the groups…
group (\(k\)) | group mean (\(\bar{Y}_k\)) | grand mean (\(\bar{Y}\)) | deviation (\(\bar{Y}_{k} - \bar{Y}\)) | squared deviations (\((\bar{Y}_{k} - \bar{Y})^2\)) |
| --- | --- | --- | --- | --- |
placebo | 0.45 | 0.88 | -0.43 | 0.18 |
anxifree | 0.72 | 0.88 | -0.16 | 0.03 |
joyzepam | 1.48 | 0.88 | 0.60 | 0.36 |
However, for the between group calculations we need to multiply each of these squared deviations by \(N_k\), the number of observations in the group. We do this because every observation in the group (all \(N_k\) of them) is associated with a between group difference. So if there are six people in the placebo group, and the placebo group mean differs from the grand mean by about 0.43 (a squared deviation of roughly 0.18), then the total between group variation associated with these six people is roughly \(6 \times 0.18 \approx 1.1\). So we have to extend our little table of calculations…
group (\(k\)) | squared deviations (\((\bar{Y}_{k} - \bar{Y})^2\)) | sample size (\(N_k\)) | weighted squared dev (\(N_k (\bar{Y}_{k} - \bar{Y})^2\)) |
| --- | --- | --- | --- |
placebo | 0.18 | 6 | 1.11 |
anxifree | 0.03 | 6 | 0.16 |
joyzepam | 0.36 | 6 | 2.18 |
And so now our between group sum of squares is obtained by summing these “weighted squared deviations” over all three groups in the study: \[ \begin{array}{rcl} \mbox{SS}_{b} &=& 1.11 + 0.16 + 2.18 \\ &=& 3.45 \end{array} \] As you can see, the between group calculations are a lot shorter, so you probably wouldn’t usually want to bother using R as your calculator. However, if you did decide to do so, here’s one way you could do it:
```
gp.means <- tapply(outcome,group,mean)
grand.mean <- mean(outcome)
dev.from.grand.mean <- gp.means - grand.mean
squared.devs <- dev.from.grand.mean ^2
gp.sizes <- tapply(outcome,group,length)
wt.squared.devs <- gp.sizes * squared.devs
```
Again, I won’t actually try to explain this code line by line, but – just like last time – there’s nothing in there that we haven’t seen in several places elsewhere in the book, so I’ll leave it as an exercise for you to make sure you understand it. Once again, we can dump all our variables into a data frame so that we can print it out as a nice table:
```
Y <- data.frame( gp.means, grand.mean, dev.from.grand.mean,
squared.devs, gp.sizes, wt.squared.devs )
print(Y, digits = 2)
```
```
## gp.means grand.mean dev.from.grand.mean squared.devs gp.sizes
## placebo 0.45 0.88 -0.43 0.188 6
## anxifree 0.72 0.88 -0.17 0.028 6
## joyzepam 1.48 0.88 0.60 0.360 6
## wt.squared.devs
## placebo 1.13
## anxifree 0.17
## joyzepam 2.16
```
Clearly, these are basically the same numbers that we got before. There are a few tiny differences, but that’s only because the hand-calculated versions have some small errors caused by the fact that I rounded all my numbers to 2 decimal places at each step in the calculations, whereas R only does it at the end (obviously, R’s version is more accurate). Anyway, here’s the R command showing the final step:
```
SSb <- sum( wt.squared.devs )
print( SSb )
```
`## [1] 3.453333`
which is (ignoring the slight differences due to rounding error) the same answer that I got when doing things by hand.
Now that we’ve calculated our sums of squares values, \(\mbox{SS}_b\) and \(\mbox{SS}_w\), the rest of the ANOVA is pretty painless. The next step is to calculate the degrees of freedom. Since we have \(G = 3\) groups and \(N = 18\) observations in total, our degrees of freedom can be calculated by simple subtraction: \[ \begin{array}{lclcl} \mbox{df}_b &=& G - 1 &=& 2 \\ \mbox{df}_w &=& N - G &=& 15 \end{array} \] Next, since we’ve now calculated the values for the sums of squares and the degrees of freedom, for both the within-groups variability and the between-groups variability, we can obtain the mean square values by dividing one by the other: \[ \begin{array}{lclclcl} \mbox{MS}_b &=& \displaystyle\frac{\mbox{SS}_b }{ \mbox{df}_b } &=& \displaystyle\frac{3.45}{ 2} &=& 1.73 \end{array} \] \[ \begin{array}{lclclcl} \mbox{MS}_w &=& \displaystyle\frac{\mbox{SS}_w }{ \mbox{df}_w } &=& \displaystyle\frac{1.39}{15} &=& 0.09 \end{array} \] We’re almost done. The mean square values can be used to calculate the \(F\)-value, which is the test statistic that we’re interested in. We do this by dividing the between-groups MS value by the within-groups MS value. \[ F \ = \ \frac{\mbox{MS}_b }{ \mbox{MS}_w } \ = \ \frac{1.73}{0.09} \ = \ 18.6 \] Woohooo! This is terribly exciting, yes? Now that we have our test statistic, the last step is to find out whether the test itself gives us a significant result. As discussed in Chapter 11, what we really ought to do is choose an \(\alpha\) level (i.e., acceptable Type I error rate) ahead of time, construct our rejection region, etc etc. But in practice it’s just easier to directly calculate the \(p\)-value. Back in the “old days”, what we’d do is open up a statistics textbook or something and flick to the back section which would actually have a huge lookup table… that’s how we’d “compute” our \(p\)-value, because it’s too much effort to do it any other way. However, since we have access to R, I’ll use the `pf()` function to do it instead. Now, remember that I explained earlier that the \(F\)-test is always one sided? And that we only reject the null hypothesis for very large \(F\)-values? That means we’re only interested in the upper tail of the \(F\)-distribution. The command that you’d use here would be this…
```
pf( 18.6, df1 = 2, df2 = 15, lower.tail = FALSE)
```
`## [1] 8.672727e-05`
Therefore, our \(p\)-value comes to 0.0000867, or \(8.67 \times 10^{-5}\) in scientific notation. So, unless we’re being extremely conservative about our Type I error rate, we’re pretty much guaranteed to reject the null hypothesis.
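Since the `SSb` and `SSw` variables are still sitting in the workspace from the calculations above, it’s worth noting that this whole chain – degrees of freedom, mean squares, \(F\), and the \(p\)-value – is only a few lines of R. A sketch:

```
df.b <- 3 - 1        # G - 1
df.w <- 18 - 3       # N - G
MS.b <- SSb / df.b   # between-groups mean square
MS.w <- SSw / df.w   # within-groups mean square
F.stat <- MS.b / MS.w
F.stat                                                    # about 18.6, same as the hand calculation
pf( F.stat, df1 = df.b, df2 = df.w, lower.tail = FALSE )  # about 8.7e-05
```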
At this point, we’re basically done. Having completed our calculations, it’s traditional to organise all these numbers into an ANOVA table like the one in Table 14.1. For our clinical trial data, the ANOVA table would look like this:
| | df | sum of squares | mean squares | \(F\)-statistic | \(p\)-value |
| --- | --- | --- | --- | --- | --- |
between groups | 2 | 3.45 | 1.73 | 18.6 | \(8.67 \times 10^{-5}\) |
within groups | 15 | 1.39 | 0.09 | - | - |
These days, you’ll probably never have much reason to want to construct one of these tables yourself, but you will find that almost all statistical software (R included) tends to organise the output of an ANOVA into a table like this, so it’s a good idea to get used to reading them. However, although the software will output a full ANOVA table, there’s almost never a good reason to include the whole table in your write up. A pretty standard way of reporting this result would be to write something like this:
One-way ANOVA showed a significant effect of drug on mood gain (\(F(2,15) = 18.6, p<.001\)).
Sigh. So much work for one short sentence.
## 14.3 Running an ANOVA in R
I’m pretty sure I know what you’re thinking after reading the last section, especially if you followed my advice and tried typing all the commands in yourself…. doing the ANOVA calculations yourself sucks. There’s quite a lot of calculations that we needed to do along the way, and it would be tedious to have to do this over and over again every time you wanted to do an ANOVA. One possible solution to the problem would be to take all these calculations and turn them into some R functions yourself. You’d still have to do a lot of typing, but at least you’d only have to do it the one time: once you’ve created the functions, you can reuse them over and over again. However, writing your own functions is a lot of work, so this is kind of a last resort. Besides, it’s much better if someone else does all the work for you…
### 14.3.1 Using the `aov()` function to specify your ANOVA

To make life easier for you, R provides a function called `aov()` , which – obviously – is an acronym of “Analysis Of Variance.”206 If you type `?aov` and have a look at the help documentation, you’ll see that there are several arguments to the `aov()` function, but the only two that we’re interested in are `formula` and `data` . As we’ve seen in a few places previously, the `formula` argument is what you use to specify the outcome variable and the grouping variable, and the `data` argument is what you use to specify the data frame that stores these variables. In other words, to do the same ANOVA that I laboriously calculated in the previous section, I’d use a command like this:
```
aov( formula = mood.gain ~ drug, data = clin.trial )
```
Actually, that’s not quite the whole story, as you’ll see as soon as you look at the output from this command, which I’ve hidden for the moment in order to avoid confusing you. Before we go into specifics, I should point out that either of these commands will do the same thing:
```
aov( clin.trial$mood.gain ~ clin.trial$drug )
aov( mood.gain ~ drug, clin.trial )
```
In the first command, I didn’t specify a `data` set, and instead relied on the `$` operator to tell R how to find the variables. In the second command, I dropped the argument names, which is okay in this case because `formula` is the first argument to the `aov()` function, and `data` is the second one. Regardless of how I specify the ANOVA, I can assign the output of the `aov()` function to a variable, like this for example:
```
my.anova <- aov( mood.gain ~ drug, clin.trial )
```
This is almost always a good thing to do, because there’s lots of useful things that we can do with the `my.anova` variable. So let’s assume that it’s this last command that I used to specify the ANOVA that I’m trying to run, and as a consequence I have this `my.anova` variable sitting in my workspace, waiting for me to do something with it…
### 14.3.2 Understanding what the `aov()` function produces

Now that we’ve seen how to use the `aov()` function to create `my.anova` we’d better have a look at what this variable actually is. The first thing to do is to check to see what class of variable we’ve created, since it’s kind of interesting in this case. When we do that… `class( my.anova )` `## [1] "aov" "lm"` … we discover that `my.anova` actually has two classes! The first class tells us that it’s an `aov` (analysis of variance) object, but the second tells us that it’s also an `lm` (linear model) object. Later on, we’ll see that this reflects a pretty deep statistical relationship between ANOVA and regression (Chapter 15), and it means that any function that exists in R for dealing with regressions can also be applied to `aov` objects, which is neat; but I’m getting ahead of myself. For now, I want to note that what we’ve created is an `aov` object, and to also make the point that `aov` objects are actually rather complicated beasts. I won’t be trying to explain everything about them, since it’s way beyond the scope of an introductory statistics subject, but to give you a tiny hint of some of the stuff that R stores inside an `aov` object, let’s ask it to print out the `names()` of all the stored quantities… `names( my.anova )`
```
## [1] "coefficients" "residuals" "effects" "rank"
## [5] "fitted.values" "assign" "qr" "df.residual"
## [9] "contrasts" "xlevels" "call" "terms"
## [13] "model"
```
As we go through the rest of the book, I hope that a few of these will become a little more obvious to you, but right now that’s going to look pretty damned opaque. That’s okay. You don’t need to know any of the details about it right now, and most of it you don’t need at all… what you do need to understand is that the `aov()` function does a lot of calculations for you, not just the basic ones that I outlined in the previous sections. What this means is that it’s generally a good idea to create a variable like `my.anova` that stores the output of the `aov()` function… because later on, you can use `my.anova` as an input to lots of other functions: those other functions can pull out bits and pieces from the `aov` object, and calculate various other things that you might need. Right then. The simplest thing you can do with an `aov` object is to `print()` it out. When we do that, it shows us a few of the key quantities of interest: `print( my.anova )`
```
## Call:
## aov(formula = mood.gain ~ drug, data = clin.trial)
##
## Terms:
## drug Residuals
## Sum of Squares 3.453333 1.391667
## Deg. of Freedom 2 15
##
## Residual standard error: 0.3045944
## Estimated effects may be unbalanced
```
Specifically, it prints out a reminder of the command that you used when you called `aov()` in the first place, shows you the sums of squares values, the degrees of freedom, and a couple of other quantities that we’re not really interested in right now. Notice, however, that R doesn’t use the names “between-group” and “within-group”. Instead, it tries to assign more meaningful names: in our particular example, the between groups variance corresponds to the effect that the `drug` has on the outcome variable; and the within groups variance corresponds to the “leftover” variability, so it calls that the residuals. If we compare these numbers to the numbers that I calculated by hand in Section 14.2.5, you can see that they’re identical… the between groups sums of squares is \(\mbox{SS}_b = 3.45\), the within groups sums of squares is \(\mbox{SS}_w = 1.39\), and the degrees of freedom are 2 and 15 respectively.
### 14.3.3 Running the hypothesis tests for the ANOVA
Okay, so we’ve verified that `my.anova` seems to be storing a bunch of the numbers that we’re looking for, but the `print()` function didn’t quite give us the output that we really wanted. Where’s the \(F\)-value? The \(p\)-value? These are the most important numbers in our hypothesis test, but the `print()` function doesn’t provide them. To get those numbers, we need to use a different function. Instead of asking R to `print()` out the `aov` object, we should have asked for a `summary()` of it.207 When we do that… `summary( my.anova )`
… we get all of the key numbers that we calculated earlier. We get the sums of squares, the degrees of freedom, the mean squares, the \(F\)-statistic, and the \(p\)-value itself. These are all identical to the numbers that we calculated ourselves when doing it the long and tedious way, and it’s even organised into the same kind of ANOVA table that I showed in Table 14.1, and then filled out by hand in Section 14.2.5. The only thing that is even slightly different is that some of the row and column names are a bit different.
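If you ever need to pull those numbers out programmatically rather than read them off the screen, the summary can be indexed like a data frame. A small sketch (the column names `"F value"` and `"Pr(>F)"` are the ones R uses in its ANOVA tables):

```
anova.table <- summary( my.anova )[[1]]   # the ANOVA table itself
anova.table[ 1, "F value" ]               # the F statistic for the drug effect (about 18.6)
anova.table[ 1, "Pr(>F)" ]                # the corresponding p-value (about 8.7e-05)
```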
## 14.4 Effect size
There’s a few different ways you could measure the effect size in an ANOVA, but the most commonly used measures are \(\eta^2\) (eta squared) and partial \(\eta^2\). For a one way analysis of variance they’re identical to each other, so for the moment I’ll just explain \(\eta^2\). The definition of \(\eta^2\) is actually really simple: \[ \eta^2 = \frac{\mbox{SS}_b}{\mbox{SS}_{tot}} \] That’s all it is. So when I look at the ANOVA table above, I see that \(\mbox{SS}_b = 3.45\) and \(\mbox{SS}_{tot} = 3.45 + 1.39 = 4.84\). Thus we get an \(\eta^2\) value of \[ \eta^2 = \frac{3.45}{4.84} = 0.71 \] The interpretation of \(\eta^2\) is equally straightforward: it refers to the proportion of the variability in the outcome variable ( `mood.gain` ) that can be explained in terms of the predictor ( `drug` ). A value of \(\eta^2 = 0\) means that there is no relationship at all between the two, whereas a value of \(\eta^2 = 1\) means that the relationship is perfect. Better yet, the \(\eta^2\) value is very closely related to a squared correlation (i.e., \(r^2\)). So, if you’re trying to figure out whether a particular value of \(\eta^2\) is big or small, it’s sometimes useful to remember that \[
\eta= \sqrt{\frac{\mbox{SS}_b}{\mbox{SS}_{tot}}}
\] can be interpreted as if it referred to the magnitude of a Pearson correlation. So in our drugs example, the \(\eta^2\) value of .71 corresponds to an \(\eta\) value of \(\sqrt{.71} = .84\). If we think about this as being equivalent to a correlation of about .84, we’d conclude that the relationship between `drug` and `mood.gain` is strong. The core packages in R don’t include any functions for calculating \(\eta^2\). However, it’s pretty straightforward to calculate it directly from the numbers in the ANOVA table. In fact, since I’ve already got the `SSw` and `SSb` variables lying around from my earlier calculations, I can do this:
```
SStot <- SSb + SSw # total sums of squares
eta.squared <- SSb / SStot # eta-squared value
print( eta.squared )
```
`## [1] 0.7127623` However, since it can be tedious to do this the long way (especially when we start running more complicated ANOVAs, such as those in Chapter 16), I’ve included an `etaSquared()` function in the `lsr` package which will do it for you. For now, the only argument you need to care about is `x` , which should be the `aov` object corresponding to your ANOVA. When we do this, what we get as output is this:
```
etaSquared( x = my.anova )
```
```
## eta.sq eta.sq.part
## drug 0.7127623 0.7127623
```
The output here shows two different numbers. The first one corresponds to the \(\eta^2\) statistic, precisely as described above. The second one refers to “partial \(\eta^2\)”, which is a somewhat different measure of effect size that I’ll describe later. For the simple ANOVA that we’ve just run, they’re the same number. But this won’t always be true once we start running more complicated ANOVAs.208
## 14.5 Multiple comparisons and post hoc tests
Any time you run an ANOVA with more than two groups, and you end up with a significant effect, the first thing you’ll probably want to ask is which groups are actually different from one another. In our drugs example, our null hypothesis was that all three drugs (placebo, Anxifree and Joyzepam) have the exact same effect on mood. But if you think about it, the null hypothesis is actually claiming three different things all at once here. Specifically, it claims that:
* Your competitor’s drug (Anxifree) is no better than a placebo (i.e., \(\mu_A = \mu_P\))
* Your drug (Joyzepam) is no better than a placebo (i.e., \(\mu_J = \mu_P\))
* Anxifree and Joyzepam are equally effective (i.e., \(\mu_J = \mu_A\))
If any one of those three claims is false, then the null hypothesis is also false. So, now that we’ve rejected our null hypothesis, we’re thinking that at least one of those things isn’t true. But which ones? All three of these propositions are of interest: you certainly want to know if your new drug Joyzepam is better than a placebo, and it would be nice to know how well it stacks up against an existing commercial alternative (i.e., Anxifree). It would even be useful to check the performance of Anxifree against the placebo: even if Anxifree has already been extensively tested against placebos by other researchers, it can still be very useful to check that your study is producing similar results to earlier work.
When we characterise the null hypothesis in terms of these three distinct propositions, it becomes clear that there are eight possible “states of the world” that we need to distinguish between:
possibility: | is \(\mu_P = \mu_A\)? | is \(\mu_P = \mu_J\)? | is \(\mu_A = \mu_J\)? | which hypothesis? |
| --- | --- | --- | --- | --- |
1 | \(\checkmark\) | \(\checkmark\) | \(\checkmark\) | null |
2 | \(\checkmark\) | \(\checkmark\) | | alternative |
3 | \(\checkmark\) | | \(\checkmark\) | alternative |
4 | \(\checkmark\) | | | alternative |
5 | | \(\checkmark\) | \(\checkmark\) | alternative |
6 | | \(\checkmark\) | | alternative |
7 | | | \(\checkmark\) | alternative |
8 | | | | alternative |
By rejecting the null hypothesis, we’ve decided that we don’t believe that #1 is the true state of the world. The next question to ask is, which of the other seven possibilities do we think is right? When faced with this situation, it usually helps to look at the data. For instance, if we look at the plots in Figure 14.1, it’s tempting to conclude that Joyzepam is better than the placebo and better than Anxifree, but there’s no real difference between Anxifree and the placebo. However, if we want to get a clearer answer about this, it might help to run some tests.
### 14.5.1 Running “pairwise” \(t\)-tests
How might we go about solving our problem? Given that we’ve got three separate pairs of means (placebo versus Anxifree, placebo versus Joyzepam, and Anxifree versus Joyzepam) to compare, what we could do is run three separate \(t\)-tests and see what happens. There’s a couple of ways that we could do this. One method would be to construct new variables corresponding to the groups you want to compare (e.g., `anxifree` , `placebo` and `joyzepam` ), and then run a \(t\)-test on these new variables:
```
anxifree <- with(clin.trial, mood.gain[drug == "anxifree"]) # mood change due to anxifree
placebo <- with(clin.trial, mood.gain[drug == "placebo"]) # mood change due to placebo
t.test( anxifree, placebo, var.equal = TRUE ) # Student t-test
```
or, you could use the `subset` argument in the `t.test()` function to select only those observations corresponding to one of the two groups we’re interested in:
```
t.test( formula = mood.gain ~ drug,
data = clin.trial,
subset = drug %in% c("placebo","anxifree"),
var.equal = TRUE
)
```
See Chapter 7 if you’ve forgotten how the `%in%` operator works. Regardless of which version we do, R will print out the results of the \(t\)-test, though I haven’t included that output here. If we go on to do this for all possible pairs of variables, we can look to see which (if any) pairs of groups are significantly different to each other. This “lots of \(t\)-tests idea” isn’t a bad strategy, though as we’ll see later on there are some problems with it. However, for the moment our bigger problem is that it’s a pain to have to type in such a long command over and over again: for instance, if your experiment has 10 groups, then you have to run 45 \(t\)-tests. That’s way too much typing. To help keep the typing to a minimum, R provides a function called `pairwise.t.test()` that automatically runs all of the \(t\)-tests for you. There are three arguments that you need to specify, the outcome variable `x` , the group variable `g` , and the `p.adjust.method` argument, which “adjusts” the \(p\)-value in one way or another. I’ll explain \(p\)-value adjustment in a moment, but for now we can just set
```
p.adjust.method = "none"
```
since we’re not doing any adjustments. For our example, here’s what we do:
```
pairwise.t.test( x = clin.trial$mood.gain, # outcome variable
g = clin.trial$drug, # grouping variable
p.adjust.method = "none" # which correction to use?
)
```
One thing that bugs me slightly about the `pairwise.t.test()` function is that you can’t just give it an `aov` object, and have it produce this output. After all, I went to all that trouble earlier of getting R to create the `my.anova` variable and – as we saw in Section 14.3.2 – R has actually stored enough information inside it that I should just be able to get it to run all the pairwise tests using `my.anova` as an input. To that end, I’ve included a `posthocPairwiseT()` function in the `lsr` package that lets you do this. The idea behind this function is that you can just input the `aov` object itself,209 and then get the pairwise tests as an output. As of the current writing, `posthocPairwiseT()` is actually just a simple way of calling the `pairwise.t.test()` function, but you should be aware that I intend to make some changes to it later on. Here’s an example:
```
posthocPairwiseT( x = my.anova, p.adjust.method = "none" )
```
In later versions, I plan to add more functionality (e.g., adjusted confidence intervals), but for now I think it’s at least kind of useful. To see why, let’s suppose you’ve run your ANOVA and stored the results in `my.anova` , and you’re happy using the Holm correction (the default method in `pairwise.t.test()` , which I’ll explain in a moment). In that case, all you have to do is type this:
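(The command itself appears to have dropped out of this rendering; given the description above, it would presumably just be the default call shown below.)

```
posthocPairwiseT( my.anova )
```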
and R will output the test results. Much more convenient, I think.
### 14.5.2 Corrections for multiple testing
In the previous section I hinted that there’s a problem with just running lots and lots of \(t\)-tests. The concern is that when running these analyses, what we’re doing is going on a “fishing expedition”: we’re running lots and lots of tests without much theoretical guidance, in the hope that some of them come up significant. This kind of theory-free search for group differences is referred to as post hoc analysis (“post hoc” being Latin for “after this”).210
It’s okay to run post hoc analyses, but a lot of care is required. For instance, the analysis that I ran in the previous section is actually pretty dangerous: each individual \(t\)-test is designed to have a 5% Type I error rate (i.e., \(\alpha = .05\)), and I ran three of these tests. Imagine what would have happened if my ANOVA involved 10 different groups, and I had decided to run 45 “post hoc” \(t\)-tests to try to find out which ones were significantly different from each other: you’d expect 2 or 3 of them to come up significant by chance alone. As we saw in Chapter 11, the central organising principle behind null hypothesis testing is that we seek to control our Type I error rate, but now that I’m running lots of \(t\)-tests at once, in order to determine the source of my ANOVA results, my actual Type I error rate across this whole family of tests has gotten completely out of control.
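To put a rough number on how badly things can go wrong, suppose – purely for illustration – that those 45 tests were independent of one another and each run at \(\alpha = .05\). Then:

```
45 * .05           # expected number of "significant" results if all nulls are true: 2.25
1 - (1 - .05)^45   # chance of at least one Type I error somewhere in the family: about .90
```

In reality the pairwise tests aren’t independent, so these are only ballpark figures, but the general message – that the family-wise error rate blows up very quickly – still stands.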
The usual solution to this problem is to introduce an adjustment to the \(p\)-value, which aims to control the total error rate across the family of tests (see Shaffer 1995). An adjustment of this form, which is usually (but not always) applied because one is doing post hoc analysis, is often referred to as a correction for multiple comparisons, though it is sometimes referred to as “simultaneous inference”. In any case, there are quite a few different ways of doing this adjustment. I’ll discuss a few of them in this section and in Section 16.8, but you should be aware that there are many other methods out there (see, e.g., Hsu 1996).
### 14.5.3 Bonferroni corrections
The simplest of these adjustments is called the Bonferroni correction (Dunn 1961), and it’s very very simple indeed. Suppose that my post hoc analysis consists of \(m\) separate tests, and I want to ensure that the total probability of making any Type I errors at all is at most \(\alpha\).211 If so, then the Bonferroni correction just says “multiply all your raw \(p\)-values by \(m\)”. If we let \(p\) denote the original \(p\)-value, and let \(p^\prime\) be the corrected value, then the Bonferroni correction tells us that: \[ p^\prime = m \times p \] And therefore, if you’re using the Bonferroni correction, you would reject the null hypothesis if \(p^\prime < \alpha\). The logic behind this correction is very straightforward. We’re doing \(m\) different tests; so if we arrange it so that each test has a Type I error rate of at most \(\alpha / m\), then the total Type I error rate across these tests cannot be larger than \(\alpha\). That’s pretty simple, so much so that in the original paper, the author writes:
The method given here is so simple and so general that I am sure it must have been used before this. I do not find it, however, so can only conclude that perhaps its very simplicity has kept statisticians from realizing that it is a very good method in some situations (pp 52-53 Dunn 1961)
To use the Bonferroni correction in R, you can use the `pairwise.t.test()` function,212 making sure that you set
```
p.adjust.method = "bonferroni"
```
. Alternatively, since the whole reason why we’re doing these pairwise tests in the first place is because we have an ANOVA that we’re trying to understand, it’s probably more convenient to use the `posthocPairwiseT()` function in the `lsr` package, since we can use `my.anova` as the input:
```
posthocPairwiseT( my.anova, p.adjust.method = "bonferroni")
```
```
##
## Pairwise comparisons using t tests with pooled SD
##
## data: mood.gain and drug
##
## placebo anxifree
## anxifree 0.4506 -
## joyzepam 9.1e-05 0.0017
##
## P value adjustment method: bonferroni
```
If we compare these three \(p\)-values to those that we saw in the previous section when we made no adjustment at all, it is clear that the only thing that R has done is multiply them by 3.
### 14.5.4 Holm corrections
Although the Bonferroni correction is the simplest adjustment out there, it’s not usually the best one to use. One method that is often used instead is the Holm correction (Holm 1979). The idea behind the Holm correction is to pretend that you’re doing the tests sequentially; starting with the smallest (raw) \(p\)-value and moving onto the largest one. For the \(j\)-th largest of the \(p\)-values, the adjustment is either \[ p^\prime_j = j \times p_j \] (i.e., the biggest \(p\)-value remains unchanged, the second biggest \(p\)-value is doubled, the third biggest \(p\)-value is tripled, and so on), or \[ p^\prime_j = p^\prime_{j+1} \] whichever one is larger. This might sound a little confusing, so let’s go through it a little more slowly. Here’s what the Holm correction does. First, you sort all of your \(p\)-values in order, from smallest to largest. For the smallest \(p\)-value all you do is multiply it by \(m\), and you’re done. However, for all the other ones it’s a two-stage process. For instance, when you move to the second smallest \(p\) value, you first multiply it by \(m-1\). If this produces a number that is bigger than the adjusted \(p\)-value that you got last time, then you keep it. But if it’s smaller than the last one, then you copy the last \(p\)-value. To illustrate how this works, consider the table below, which shows the calculations of a Holm correction for a collection of five \(p\)-values:
raw \(p\) | rank \(j\) | \(p \times j\) | Holm \(p\) |
| --- | --- | --- | --- |
.001 | 5 | .005 | .005 |
.005 | 4 | .020 | .020 |
.019 | 3 | .057 | .057 |
.022 | 2 | .044 | .057 |
.103 | 1 | .103 | .103 |
Hopefully that makes things clear.
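If you want to check the table for yourself, the `p.adjust()` function (which also gets a mention in the footnotes) will do the sequential calculation for you. Here’s a quick sketch using the same five raw \(p\)-values; the `holm` line should reproduce the “Holm \(p\)” column, and the `bonferroni` line is included for comparison:

```
p.raw <- c( .001, .005, .019, .022, .103 )   # the raw p-values from the table
p.adjust( p.raw, method = "holm" )           # sequential Holm adjustment
p.adjust( p.raw, method = "bonferroni" )     # every p-value multiplied by 5 (capped at 1)
```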
Although it’s a little harder to calculate, the Holm correction has some very nice properties: it’s more powerful than Bonferroni (i.e., it has a lower Type II error rate), but – counterintuitive as it might seem – it has the same Type I error rate. As a consequence, in practice there’s never any reason to use the simpler Bonferroni correction, since it is always outperformed by the slightly more elaborate Holm correction. Because of this, the Holm correction is the default one used by `pairwise.t.test()` and `posthocPairwiseT()` . To run the Holm correction in R, you could specify
```
p.adjust.method = "holm"
```
if you wanted to, but since it’s the default you can just do this:
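```
posthocPairwiseT( my.anova )
```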
```
##
## Pairwise comparisons using t tests with pooled SD
##
## data: mood.gain and drug
##
## placebo anxifree
## anxifree 0.1502 -
## joyzepam 9.1e-05 0.0011
##
## P value adjustment method: holm
```
As you can see, the biggest \(p\)-value (corresponding to the comparison between Anxifree and the placebo) is unaltered: at a value of \(.15\), it is exactly the same as the value we got originally when we applied no correction at all. In contrast, the smallest \(p\)-value (Joyzepam versus placebo) has been multiplied by three.
### 14.5.5 Writing up the post hoc test
Finally, having run the post hoc analysis to determine which groups are significantly different to one another, you might write up the result like this:
Post hoc tests (using the Holm correction to adjust \(p\)) indicated that Joyzepam produced a significantly larger mood change than both Anxifree (\(p = .001\)) and the placebo (\(p = 9.1 \times 10^{-5}\)). We found no evidence that Anxifree performed better than the placebo (\(p = .15\)).
Or, if you don’t like the idea of reporting exact \(p\)-values, then you’d change those numbers to \(p<.01\), \(p<.001\) and \(p > .05\) respectively. Either way, the key thing is that you indicate that you used Holm’s correction to adjust the \(p\)-values. And of course, I’m assuming that elsewhere in the write up you’ve included the relevant descriptive statistics (i.e., the group means and standard deviations), since these \(p\)-values on their own aren’t terribly informative.
## 14.6 Assumptions of one-way ANOVA
Like any statistical test, analysis of variance relies on some assumptions about the data. There are three key assumptions that you need to be aware of: normality, homogeneity of variance and independence. If you remember back to Section 14.2.4 – which I hope you at least skimmed even if you didn’t read the whole thing – I described the statistical models underpinning ANOVA, which I wrote down like this: \[ \begin{array}{lrcl} H_0: & Y_{ik} &=& \mu + \epsilon_{ik} \\ H_1: & Y_{ik} &=& \mu_k + \epsilon_{ik} \end{array} \] In these equations \(\mu\) refers to a single, grand population mean which is the same for all groups, and \(\mu_k\) is the population mean for the \(k\)-th group. Up to this point we’ve been mostly interested in whether our data are best described in terms of a single grand mean (the null hypothesis) or in terms of different group-specific means (the alternative hypothesis). This makes sense, of course: that’s actually the important research question! However, all of our testing procedures have – implicitly – relied on a specific assumption about the residuals, \(\epsilon_{ik}\), namely that \[ \epsilon_{ik} \sim \mbox{Normal}(0, \sigma^2) \] None of the maths works properly without this bit. Or, to be precise, you can still do all the calculations, and you’ll end up with an \(F\)-statistic, but you have no guarantee that this \(F\)-statistic actually measures what you think it’s measuring, and so any conclusions that you might draw on the basis of the \(F\) test might be wrong.
So, how do we check whether this assumption about the residuals is accurate? Well, as I indicated above, there are three distinct claims buried in this one statement, and we’ll consider them separately.
* Normality. The residuals are assumed to be normally distributed. As we saw in Section 13.9, we can assess this by looking at QQ plots or running a Shapiro-Wilk test. I’ll talk about this in an ANOVA context in Section 14.9.
* Homogeneity of variance. Notice that we’ve only got the one value for the population standard deviation (i.e., \(\sigma\)), rather than allowing each group to have its own value (i.e., \(\sigma_k\)). This is referred to as the homogeneity of variance (sometimes called homoscedasticity) assumption. ANOVA assumes that the population standard deviation is the same for all groups. We’ll talk about this extensively in Section 14.7.
* Independence. The independence assumption is a little trickier. What it basically means is that, knowing one residual tells you nothing about any other residual. All of the \(\epsilon_{ik}\) values are assumed to have been generated without any “regard for” or “relationship to” any of the other ones. There’s not an obvious or simple way to test for this, but there are some situations that are clear violations of this: for instance, if you have a repeated-measures design, where each participant in your study appears in more than one condition, then independence doesn’t hold; there’s a special relationship between some observations… namely those that correspond to the same person! When that happens, you need to use something like repeated measures ANOVA. I don’t currently talk about repeated measures ANOVA in this book, but it will be included in later versions.
### 14.6.1 How robust is ANOVA?
One question that people often want to know the answer to is the extent to which you can trust the results of an ANOVA if the assumptions are violated. Or, to use the technical language, how robust is ANOVA to violations of the assumptions. Due to deadline constraints I don’t have the time to discuss this topic. This is a topic I’ll cover in some detail in a later version of the book.
## 14.7 Checking the homogeneity of variance assumption
There’s more than one way to skin a cat, as the saying goes, and more than one way to test the homogeneity of variance assumption, too (though for some reason no-one made a saying out of that). The most commonly used test for this that I’ve seen in the literature is the Levene test (Levene 1960), and the closely related Brown-Forsythe test (Brown and Forsythe 1974), both of which I’ll describe here. Alternatively, you could use the Bartlett test, which is implemented in R via the `bartlett.test()` function, but I’ll leave it as an exercise for the reader to go check that one out if you’re interested.
Levene’s test is shockingly simple. Suppose we have our outcome variable \(Y_{ik}\). All we do is define a new variable, which I’ll call \(Z_{ik}\), corresponding to the absolute deviation from the group mean: \[ Z_{ik} = \left| Y_{ik} - \bar{Y}_k \right| \] Okay, what good does this do us? Well, let’s take a moment to think about what \(Z_{ik}\) actually is, and what we’re trying to test. The value of \(Z_{ik}\) is a measure of how the \(i\)-th observation in the \(k\)-th group deviates from its group mean. And our null hypothesis is that all groups have the same variance; that is, the same overall deviations from the group means! So, the null hypothesis in a Levene’s test is that the population means of \(Z\) are identical for all groups. Hm. So what we need now is a statistical test of the null hypothesis that all group means are identical. Where have we seen that before? Oh right, that’s what ANOVA is… and so all that the Levene’s test does is run an ANOVA on the new variable \(Z_{ik}\).
What about the Brown-Forsythe test? Does that do anything particularly different? Nope. The only change from the Levene’s test is that it constructs the transformed variable \(Z\) in a slightly different way, using deviations from the group medians rather than deviations from the group means. That is, for the Brown-Forsythe test, \[ Z_{ik} = \left| Y_{ik} - \mbox{median}_k(Y) \right| \] where \(\mbox{median}_k(Y)\) is the median for group \(k\). Regardless of whether you’re doing the standard Levene test or the Brown-Forsythe test, the test statistic – which is sometimes denoted \(F\), but sometimes written as \(W\) – is calculated in exactly the same way that the \(F\)-statistic for the regular ANOVA is calculated, just using a \(Z_{ik}\) rather than \(Y_{ik}\). With that in mind, let’s just move on and look at how to run the test in R.
### 14.7.1 Running the Levene’s test in R
Okay, so how do we run the Levene test? Obviously, since the Levene test is just an ANOVA, it would be easy enough to manually create the transformed variable \(Z_{ik}\) and then use the `aov()` function to run an ANOVA on that. However, that’s the tedious way to do it. A better way to run your Levene test is to use the `leveneTest()` function, which is in the `car` package. As usual, we first load the package `library( car )` and now that we have, we can run our Levene test. The main argument that you need to specify is `y` , but you can do this in lots of different ways. Probably the simplest way to do it is actually input the original `aov` object. Since I’ve got the `my.anova` variable stored from my original ANOVA, I can just do this:
```
leveneTest( my.anova )
```
If we look at the output, we see that the test is non-significant \((F_{2,15} = 1.47, p = .26)\), so it looks like the homogeneity of variance assumption is fine. Remember, although R reports the test statistic as an \(F\)-value, it could equally be called \(W\), in which case you’d just write \(W_{2,15} = 1.47\). Also, note the part of the output that says `center = median` . That’s telling you that, by default, the `leveneTest()` function actually does the Brown-Forsythe test. If you want to use the mean instead, then you need to explicitly set the `center` argument, like this:
```
leveneTest( y = my.anova, center = mean )
```
That being said, in most cases it’s probably best to stick to the default value, since the Brown-Forsythe test is a bit more robust than the original Levene test.
### 14.7.2 Additional comments
Two more quick comments before I move onto a different topic. Firstly, as mentioned above, there are other ways of calling the `leveneTest()` function. Although the vast majority of situations that call for a Levene test involve checking the assumptions of an ANOVA (in which case you probably have a variable like `my.anova` lying around), sometimes you might find yourself wanting to specify the variables directly. Two different ways that you can do this are shown below:
```
leveneTest(y = mood.gain ~ drug, data = clin.trial) # y is a formula in this case
leveneTest(y = clin.trial$mood.gain, group = clin.trial$drug) # y is the outcome
```
Secondly, I did mention that it’s possible to run a Levene test just using the `aov()` function. I don’t want to waste a lot of space on this, but just in case some readers are interested in seeing how this is done, here’s the code that creates the new variables and runs an ANOVA. If you are interested, feel free to run this to verify that it produces the same answers as the Levene test (i.e., with `center = mean` ):
```
Y <- clin.trial$mood.gain # the original outcome variable, Y
G <- clin.trial$drug # the grouping variable, G
gp.mean <- tapply(Y, G, mean) # calculate group means
Ybar <- gp.mean[G] # group mean associated with each obs
Z <- abs(Y - Ybar) # the transformed variable, Z
summary( aov(Z ~ G) ) # run the ANOVA
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## G 2 0.0616 0.03080 1.45 0.266
## Residuals 15 0.3187 0.02125
```
That said, I don’t imagine that many people will care about this. Nevertheless, it’s nice to know that you could do it this way if you wanted to. And for those of you who do try it, I think it helps to demystify the test a little bit when you can see – with your own eyes – the way in which Levene’s test relates to ANOVA.
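If you also want to see the median-based (Brown-Forsythe) version by hand, the only change is to swap the group means for group medians. Here’s a sketch that reuses the `Y` and `G` variables created above; it should agree with the default `leveneTest( my.anova )` output (i.e., with `center = median` ):

```
gp.median <- tapply(Y, G, median) # calculate group medians
Ymed <- gp.median[G] # group median associated with each obs
Z2 <- abs(Y - Ymed) # the Brown-Forsythe version of Z
summary( aov(Z2 ~ G) ) # run the ANOVA on the median-based Z
```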
## 14.8 Removing the homogeneity of variance assumption
In our example, the homogeneity of variance assumption turned out to be a pretty safe one: the Levene test came back non-significant, so we probably don’t need to worry. However, in real life we aren’t always that lucky. How do we save our ANOVA when the homogeneity of variance assumption is violated? If you recall from our discussion of \(t\)-tests, we’ve seen this problem before. The Student \(t\)-test assumes equal variances, so the solution was to use the Welch \(t\)-test, which does not. In fact, Welch (1951) also showed how we can solve this problem for ANOVA too (the Welch one-way test). It’s implemented in R using the `oneway.test()` function. The arguments that we’ll need for our example are:
* `formula` . This is the model formula, which (as usual) needs to specify the outcome variable on the left hand side and the grouping variable on the right hand side: i.e., something like `outcome ~ group` .
* `data` . Specifies the data frame containing the variables.
* `var.equal` . If this is `FALSE` (the default) a Welch one-way test is run. If it is `TRUE` then it just runs a regular ANOVA.

The function also has a `subset` argument that lets you analyse only some of the observations and a `na.action` argument that tells it how to handle missing data, but these aren’t necessary for our purposes. So, to run the Welch one-way ANOVA for our example, we would do this:
```
oneway.test(mood.gain ~ drug, data = clin.trial)
```
```
##
## One-way analysis of means (not assuming equal variances)
##
## data: mood.gain and drug
## F = 26.322, num df = 2.0000, denom df = 9.4932, p-value = 0.000134
```
To understand what’s happening here, let’s compare these numbers to what we got earlier in Section 14.3 when we ran our original ANOVA. To save you the trouble of flicking back, here are those numbers again, this time calculated by setting `var.equal = TRUE` for the `oneway.test()` function:
```
oneway.test(mood.gain ~ drug, data = clin.trial, var.equal = TRUE)
```
```
##
## One-way analysis of means
##
## data: mood.gain and drug
## F = 18.611, num df = 2, denom df = 15, p-value = 8.646e-05
```
Okay, so originally our ANOVA gave us the result \(F(2,15) = 18.6\), whereas the Welch one-way test gave us \(F(2,9.49) = 26.32\). In other words, the Welch test has reduced the within-groups degrees of freedom from 15 to 9.49, and the \(F\)-value has increased from 18.6 to 26.32.
## 14.9 Checking the normality assumption
Testing the normality assumption is relatively straightforward. We covered most of what you need to know in Section 13.9. The only thing we really need to know how to do is pull out the residuals (i.e., the \(\epsilon_{ik}\) values) so that we can draw our QQ plot and run our Shapiro-Wilk test. First, let’s extract the residuals. R provides a function called `residuals()` that will do this for us. If we pass our `my.anova` to this function, it will return the residuals. So let’s do that:
```
my.anova.residuals <- residuals( object = my.anova ) # extract the residuals
```
We can print them out too, though it’s not exactly an edifying experience. In fact, given that I’m on the verge of putting myself to sleep just typing this, it might be a good idea to skip that step. Instead, let’s draw some pictures and run ourselves a hypothesis test:
```
hist( x = my.anova.residuals ) # plot a histogram (similar to Figure @ref{fig:normalityanova}a)
```
```
qqnorm( y = my.anova.residuals ) # draw a QQ plot (similar to Figure @ref{fig:normalityanova}b)
```
```
shapiro.test( x = my.anova.residuals ) # run Shapiro-Wilk test
```
```
##
## Shapiro-Wilk normality test
##
## data: my.anova.residuals
## W = 0.96019, p-value = 0.6053
```
The histogram and QQ plot both look pretty normal to me.213 This is supported by the results of our Shapiro-Wilk test (\(W = .96\), \(p = .61\)) which finds no indication that normality is violated.
## 14.10 Removing the normality assumption
Now that we’ve seen how to check for normality, we are led naturally to ask what we can do to address violations of normality. In the context of a one-way ANOVA, the easiest solution is probably to switch to a non-parametric test (i.e., one that doesn’t rely on any particular assumption about the kind of distribution involved). We’ve seen non-parametric tests before, in Chapter 13: when you only have two groups, the Wilcoxon test provides the non-parametric alternative that you need. When you’ve got three or more groups, you can use the Kruskal-Wallis rank sum test (Kruskal and Wallis 1952). So that’s the test we’ll talk about next.
### 14.10.1 The logic behind the Kruskal-Wallis test
The Kruskal-Wallis test is surprisingly similar to ANOVA, in some ways. In ANOVA, we started with \(Y_{ik}\), the value of the outcome variable for the \(i\)th person in the \(k\)th group. For the Kruskal-Wallis test, what we’ll do is rank order all of these \(Y_{ik}\) values, and conduct our analysis on the ranked data. So let’s let \(R_{ik}\) refer to the ranking given to the \(i\)th member of the \(k\)th group. Now, let’s calculate \(\bar{R}_k\), the average rank given to observations in the \(k\)th group: \[ \bar{R}_k = \frac{1}{N_k} \sum_{i} R_{ik} \] and let’s also calculate \(\bar{R}\), the grand mean rank: \[ \bar{R} = \frac{1}{N} \sum_{i} \sum_{k} R_{ik} \] Now that we’ve done this, we can calculate the squared deviations from the grand mean rank \(\bar{R}\). When we do this for the individual scores – i.e., if we calculate \((R_{ik} - \bar{R})^2\) – what we have is a “nonparametric” measure of how far the \(ik\)-th observation deviates from the grand mean rank. When we calculate the squared deviation of the group means from the grand mean – i.e., if we calculate \((\bar{R}_k - \bar{R} )^2\) – then what we have is a nonparametric measure of how much the group deviates from the grand mean rank. With this in mind, let’s follow the same logic that we did with ANOVA, and define our ranked sums of squares measures in much the same way that we did earlier. First, we have our “total ranked sums of squares”: \[ \mbox{RSS}_{tot} = \sum_k \sum_i ( R_{ik} - \bar{R} )^2 \] and we can define the “between groups ranked sums of squares” like this: \[ \begin{array}{rcl} \mbox{RSS}_{b} &=& \sum_k \sum_i ( \bar{R}_k - \bar{R} )^2 \\ &=& \sum_k N_k ( \bar{R}_k - \bar{R} )^2 \end{array} \] So, if the null hypothesis is true and there are no true group differences at all, you’d expect the between group rank sums \(\mbox{RSS}_{b}\) to be very small, much smaller than the total rank sums \(\mbox{RSS}_{tot}\). Qualitatively this is very much the same as what we found when we went about constructing the ANOVA \(F\)-statistic; but for technical reasons the Kruskal-Wallis test statistic, usually denoted \(K\), is constructed in a slightly different way: \[ K = (N - 1) \times \frac{\mbox{RSS}_b}{\mbox{RSS}_{tot}} \] and, if the null hypothesis is true, then the sampling distribution of \(K\) is approximately chi-square with \(G-1\) degrees of freedom (where \(G\) is the number of groups). The larger the value of \(K\), the less consistent the data are with the null hypothesis, so this is a one-sided test: we reject \(H_0\) when \(K\) is sufficiently large.
### 14.10.2 Additional details
The description in the previous section illustrates the logic behind the Kruskal-Wallis test. At a conceptual level, this is the right way to think about how the test works. However, from a purely mathematical perspective it’s needlessly complicated. I won’t show you the derivation, but you can use a bit of algebraic jiggery-pokery214 to show that the equation for \(K\) can be rewritten as \[ K = \frac{12}{N(N+1)} \sum_k N_k {\bar{R}_k}^2 - 3(N+1) \] It’s this last equation that you sometimes see given for \(K\). This is way easier to calculate than the version I described in the previous section; it’s just that it’s totally meaningless to actual humans. It’s probably best to think of \(K\) the way I described it earlier… as an analogue of ANOVA based on ranks. But keep in mind that the test statistic that gets calculated ends up with a rather different look to it than the one we used for our original ANOVA.
But wait, there’s more! Dear lord, why is there always more? The story I’ve told so far is only actually true when there are no ties in the raw data. That is, if there are no two observations that have exactly the same value. If there are ties, then we have to introduce a correction factor to these calculations. At this point I’m assuming that even the most diligent reader has stopped caring (or at least formed the opinion that the tie-correction factor is something that doesn’t require their immediate attention). So I’ll very quickly tell you how it’s calculated, and omit the tedious details about why it’s done this way. Suppose we construct a frequency table for the raw data, and let \(f_j\) be the number of observations that have the \(j\)-th unique value. This might sound a bit abstract, so here’s the R code showing a concrete example:
```
f <- table( clin.trial$mood.gain ) # frequency table for mood gain
print(f) # we have some ties
```
```
##
## 0.1 0.2 0.3 0.4 0.5 0.6 0.8 0.9 1.1 1.2 1.3 1.4 1.7 1.8
## 1 1 2 1 1 2 1 1 1 1 2 2 1 1
```
Looking at this table, notice that the third entry in the frequency table has a value of \(2\). Since this corresponds to a `mood.gain` of 0.3, this table is telling us that two people’s mood increased by 0.3. More to the point, note that we can say that `f[3]` has a value of `2` . Or, in the mathematical notation I introduced above, this is telling us that \(f_3 = 2\). Yay. So, now that we know this, the tie correction factor (TCF) is: \[
\mbox{TCF} = 1 - \frac{\sum_j ({f_j}^3 - f_j)}{N^3 - N}
\] The tie-corrected value of the Kruskal-Wallis statistic is obtained by dividing the value of \(K\) by this quantity: it is this tie-corrected version that R calculates. And at long last, we’re actually finished with the theory of the Kruskal-Wallis test. I’m sure you’re all terribly relieved that I’ve cured you of the existential anxiety that naturally arises when you realise that you don’t know how to calculate the tie-correction factor for the Kruskal-Wallis test. Right?
### 14.10.3 How to run the Kruskal-Wallis test in R
Despite the horror that we’ve gone through in trying to understand what the Kruskal-Wallis test actually does, it turns out that running the test is pretty painless, since R has a function called `kruskal.test()` . The function is pretty flexible, and allows you to input your data in a few different ways. Most of the time you’ll have data like the `clin.trial` data set, in which you have your outcome variable `mood.gain` , and a grouping variable `drug` . If so, you can call the `kruskal.test()` function by specifying a formula, and a data frame:
```
kruskal.test(mood.gain ~ drug, data = clin.trial)
```
```
##
## Kruskal-Wallis rank sum test
##
## data: mood.gain by drug
## Kruskal-Wallis chi-squared = 12.076, df = 2, p-value = 0.002386
```
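If you’d like to convince yourself that the conceptual description in Section 14.10.1 really does line up with what `kruskal.test()` computes, here’s a sketch that calculates \(K\) directly from the ranks. Note that `rank()` assigns midranks to tied observations, and when \(K\) is computed this way the tie correction is effectively built in, so the result should agree with the chi-squared value shown above:

```
R.rank <- rank( clin.trial$mood.gain )    # rank the outcome (ties get midranks)
grp <- clin.trial$drug                    # grouping variable
N <- length( R.rank )                     # total number of observations
Rbar <- mean( R.rank )                    # grand mean rank
Rk <- tapply( R.rank, grp, mean )         # group mean ranks
Nk <- tapply( R.rank, grp, length )       # group sizes
RSS.b <- sum( Nk * (Rk - Rbar)^2 )        # between-groups ranked sum of squares
RSS.tot <- sum( (R.rank - Rbar)^2 )       # total ranked sum of squares
(N - 1) * RSS.b / RSS.tot                 # the Kruskal-Wallis statistic K
```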
A second way of using the `kruskal.test()` function, which you probably won’t have much reason to use, is to directly specify the outcome variable and the grouping variable as separate input arguments, `x` and `g` :
```
kruskal.test(x = clin.trial$mood.gain, g = clin.trial$drug)
```
```
##
## Kruskal-Wallis rank sum test
##
## data: clin.trial$mood.gain and clin.trial$drug
## Kruskal-Wallis chi-squared = 12.076, df = 2, p-value = 0.002386
```
This isn’t very interesting, since it’s just plain easier to specify a formula. However, sometimes it can be useful to specify `x` as a list. What I mean is this. Suppose you actually had data as three separate variables, `placebo` , `anxifree` and `joyzepam` . If that’s the format that your data are in, then it’s convenient to know that you can bundle all three together as a list:
```
mood.gain <- list( placebo, joyzepam, anxifree )
kruskal.test( x = mood.gain )
```
And again, this would give you exactly the same results as the command we tried originally.
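In case you’re wondering where those three variables might come from: they aren’t part of the `clin.trial` data frame as we’ve been using it, but one way you could construct them (a sketch; the names are just whatever you choose to call them) is to pull out the mood gain scores for each drug separately:

```
placebo  <- clin.trial$mood.gain[ clin.trial$drug == "placebo" ]   # scores for the placebo group
anxifree <- clin.trial$mood.gain[ clin.trial$drug == "anxifree" ]  # scores for the anxifree group
joyzepam <- clin.trial$mood.gain[ clin.trial$drug == "joyzepam" ]  # scores for the joyzepam group
```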
## 14.11 On the relationship between ANOVA and the Student \(t\) test
There’s one last thing I want to point out before finishing. It’s something that a lot of people find kind of surprising, but it’s worth knowing about: an ANOVA with two groups is identical to the Student \(t\)-test. No, really. It’s not just that they are similar, but they are actually equivalent in every meaningful way. I won’t try to prove that this is always true, but I will show you a single concrete demonstration. This time, instead of running an ANOVA on our `mood.gain ~ drug` model, let’s do it using `therapy` as the predictor. If we run this ANOVA, here’s what we get:
```
summary( aov( mood.gain ~ therapy, data = clin.trial ))
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## therapy 1 0.467 0.4672 1.708 0.21
## Residuals 16 4.378 0.2736
```
Overall, it looks like there’s no significant effect here at all but, as we’ll see in Chapter 16, this is actually a misleading answer! In any case, it’s irrelevant to our current goals: our interest here is in the \(F\)-statistic, which is \(F(1,16) = 1.71\), and the \(p\)-value, which is .21. Since we only have two groups, I didn’t actually need to resort to an ANOVA; I could have just decided to run a Student \(t\)-test. So let’s see what happens when I do that:
```
t.test( mood.gain ~ therapy, data = clin.trial, var.equal = TRUE )
```
```
##
## Two Sample t-test
##
## data: mood.gain by therapy
## t = -1.3068, df = 16, p-value = 0.2098
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.8449518 0.2005073
## sample estimates:
## mean in group no.therapy mean in group CBT
## 0.7222222 1.0444444
```
Curiously, the \(p\)-values are identical: once again we obtain a value of \(p = .21\). But what about the test statistic? Having run a \(t\)-test instead of an ANOVA, we get a somewhat different answer, namely \(t(16) = -1.3068\). However, there is a fairly straightforward relationship here. If we square the \(t\)-statistic
```
1.3068 ^ 2
```
```
## [1] 1.707726
```
we get the \(F\)-statistic from before.
## 14.12 Summary
There’s a fair bit covered in this chapter, but there’s still a lot missing. Most obviously, I haven’t yet discussed any analog of the paired samples \(t\)-test for more than two groups. There is a way of doing this, known as repeated measures ANOVA, which will appear in a later version of this book. I also haven’t discussed how to run an ANOVA when you are interested in more than one grouping variable, but that will be discussed in a lot of detail in Chapter 16. In terms of what we have discussed, the key topics were:
* The basic logic behind how ANOVA works (Section 14.2) and how to run one in R (Section 14.3).
* How to compute an effect size for an ANOVA (Section 14.4)
* Post hoc analysis and corrections for multiple testing (Section 14.5).
* The assumptions made by ANOVA (Section 14.6).
* How to check the homogeneity of variance assumption (Section 14.7) and what to do if it is violated (Section 14.8).
* How to check the normality assumption (Section 14.9) and what to do if it is violated (Section 14.10).
As with all of the chapters in this book, there are quite a few different sources that I’ve relied upon, but the one stand-out text that I’ve been most heavily influenced by is Sahai and Ageel (2000). It’s not a good book for beginners, but it’s an excellent book for more advanced readers who are interested in understanding the mathematics behind ANOVA.
* When all groups have the same number of observations, the experimental design is said to be “balanced”. Balance isn’t such a big deal for one-way ANOVA, which is the topic of this chapter. It becomes more important when you start doing more complicated ANOVAs.
* In later versions I’m intending to expand on this. But because I’m writing in a rush, and am already over my deadlines, I’ll just briefly note that if you read ahead to Chapter 16 and look at how the “treatment effect” at level \(k\) of a factor is defined in terms of the \(\alpha_k\) values (see Section 16.2), it turns out that \(Q\) refers to a weighted mean of the squared treatment effects, \(Q=(\sum_{k=1}^G N_k\alpha_k^2)/(G-1)\).
* Or, if we want to be sticklers for accuracy, \(1 + \frac{2}{df_2 - 2}\).
* Or, to be precise, party like “it’s 1899 and we’ve got no friends and nothing better to do with our time than do some calculations that wouldn’t have made any sense in 1899 because ANOVA didn’t exist until about the 1920s”.
* Actually, it also provides a function called `anova()` , but that works a bit differently, so let’s just ignore it for now.
* It’s worth noting that you can get the same result by using the command `anova( my.anova )` .
* A potentially important footnote – I wrote the `etaSquared()` function for the `lsr` package as a teaching exercise, but like all the other functions in the `lsr` package it hasn’t been exhaustively tested. As of this writing – `lsr` package version 0.5 – there is at least one known bug in the code. In some cases at least, it doesn’t work (and can give very silly answers) when you set the `weights` on the observations to something other than uniform. That doesn’t matter at all for this book, since those kinds of analyses are well beyond the scope, but I haven’t had a chance to revisit the package in a long time; it would probably be wise to be very cautious with the use of this function in any context other than very simple introductory analyses. Thanks to <NAME> for finding the bug! (Oh, and while I’m here, there’s an interesting blog post by <NAME> suggesting that eta-squared itself is perhaps not the best measure of effect size in real world data analysis: http://daniellakens.blogspot.com.au/2015/06/why-you-should-use-omega-squared.html)
* I should point out that there are other functions in R for running multiple comparisons, and at least one of them works this way: the `TukeyHSD()` function takes an `aov` object as its input, and outputs Tukey’s “honestly significant difference” tests. I talk about Tukey’s HSD in Chapter 16.
* If you do have some theoretical basis for wanting to investigate some comparisons but not others, it’s a different story. In those circumstances you’re not really running “post hoc” analyses at all: you’re making “planned comparisons”. I do talk about this situation later in the book (Section 16.9), but for now I want to keep things simple.
* It’s worth noting in passing that not all adjustment methods try to do this. What I’ve described here is an approach for controlling the “family wise Type I error rate”. However, there are other post hoc tests that seek to control the “false discovery rate”, which is a somewhat different thing.
* There’s also a function called `p.adjust()` in which you can input a vector of raw \(p\)-values, and it will output a vector of adjusted \(p\)-values. This can be handy sometimes. I should also note that more advanced users may wish to consider using some of the tools provided by the `multcomp` package.
* Note that neither of these figures has been tidied up at all: if you want to create nicer looking graphs it’s always a good idea to use the tools from Chapter 6 to help you draw cleaner looking images.
* A technical term.
# Chapter 15 Linear regression
The goal in this chapter is to introduce linear regression, the standard tool that statisticians rely on when analysing the relationship between interval scale predictors and interval scale outcomes. Stripped to its bare essentials, linear regression models are basically a slightly fancier version of the Pearson correlation (Section 5.7), though as we’ll see, regression models are much more powerful tools.
## 15.1 What is a linear regression model?
Since the basic ideas in regression are closely tied to correlation, we’ll return to the `parenthood.Rdata` file that we were using to illustrate how correlations work. Recall that, in this data set, we were trying to find out why Dan is so very grumpy all the time, and our working hypothesis was that I’m not getting enough sleep. We drew some scatterplots to help us examine the relationship between the amount of sleep I get, and my grumpiness the following day. The actual scatterplot that we draw is the one shown in Figure 15.1, and as we saw previously this corresponds to a correlation of \(r=-.90\), but what we find ourselves secretly imagining is something that looks closer to Figure 15.2. That is, we mentally draw a straight line through the middle of the data. In statistics, this line that we’re drawing is called a regression line. Notice that – since we’re not idiots – the regression line goes through the middle of the data. We don’t find ourselves imagining anything like the rather silly plot shown in Figure 15.3.
This is not highly surprising: the line that I’ve drawn in Figure 15.3 doesn’t “fit” the data very well, so it doesn’t make a lot of sense to propose it as a way of summarising the data, right? This is a very simple observation to make, but it turns out to be very powerful when we start trying to wrap just a little bit of maths around it. To do so, let’s start with a refresher of some high school maths. The formula for a straight line is usually written like this: \[ y = mx + c \] Or, at least, that’s what it was when I went to high school all those years ago. The two variables are \(x\) and \(y\), and we have two coefficients, \(m\) and \(c\). The coefficient \(m\) represents the slope of the line, and the coefficient \(c\) represents the \(y\)-intercept of the line. Digging further back into our decaying memories of high school (sorry, for some of us high school was a long time ago), we remember that the intercept is interpreted as “the value of \(y\) that you get when \(x=0\)”. Similarly, a slope of \(m\) means that if you increase the \(x\)-value by 1 unit, then the \(y\)-value goes up by \(m\) units; a negative slope means that the \(y\)-value would go down rather than up. Ah yes, it’s all coming back to me now.
Now that we’ve remembered that, it should come as no surprise to discover that we use the exact same formula to describe a regression line. If \(Y\) is the outcome variable (the DV) and \(X\) is the predictor variable (the IV), then the formula that describes our regression is written like this: \[ \hat{Y_i} = b_1 X_i + b_0 \] Hm. Looks like the same formula, but there’s some extra frilly bits in this version. Let’s make sure we understand them. Firstly, notice that I’ve written \(X_i\) and \(Y_i\) rather than just plain old \(X\) and \(Y\). This is because we want to remember that we’re dealing with actual data. In this equation, \(X_i\) is the value of predictor variable for the \(i\)th observation (i.e., the number of hours of sleep that I got on day \(i\) of my little study), and \(Y_i\) is the corresponding value of the outcome variable (i.e., my grumpiness on that day). And although I haven’t said so explicitly in the equation, what we’re assuming is that this formula works for all observations in the data set (i.e., for all \(i\)). Secondly, notice that I wrote \(\hat{Y}_i\) and not \(Y_i\). This is because we want to make the distinction between the actual data \(Y_i\), and the estimate \(\hat{Y}_i\) (i.e., the prediction that our regression line is making). Thirdly, I changed the letters used to describe the coefficients from \(m\) and \(c\) to \(b_1\) and \(b_0\). That’s just the way that statisticians like to refer to the coefficients in a regression model. I’ve no idea why they chose \(b\), but that’s what they did. In any case \(b_0\) always refers to the intercept term, and \(b_1\) refers to the slope.
Excellent, excellent. Next, I can’t help but notice that – regardless of whether we’re talking about the good regression line or the bad one – the data don’t fall perfectly on the line. Or, to say it another way, the data \(Y_i\) are not identical to the predictions of the regression model \(\hat{Y_i}\). Since statisticians love to attach letters, names and numbers to everything, let’s refer to the difference between the model prediction and that actual data point as a residual, and we’ll refer to it as \(\epsilon_i\).215 Written using mathematics, the residuals are defined as: \[ \epsilon_i = Y_i - \hat{Y}_i \] which in turn means that we can write down the complete linear regression model as: \[ Y_i = b_1 X_i + b_0 + \epsilon_i \]
## 15.2 Estimating a linear regression model
Okay, now let’s redraw our pictures, but this time I’ll add some lines to show the size of the residual for all observations. When the regression line is good, our residuals (the lengths of the solid black lines) all look pretty small, as shown in Figure 15.4, but when the regression line is a bad one, the residuals are a lot larger, as you can see from looking at Figure 15.5. Hm. Maybe what we “want” in a regression model is small residuals. Yes, that does seem to make sense. In fact, I think I’ll go so far as to say that the “best fitting” regression line is the one that has the smallest residuals. Or, better yet, since statisticians seem to like to take squares of everything why not say that …
The estimated regression coefficients, \(\hat{b}_0\) and \(\hat{b}_1\) are those that minimise the sum of the squared residuals, which we could either write as \(\sum_i (Y_i - \hat{Y}_i)^2\) or as \(\sum_i {\epsilon_i}^2\).
Yes, yes that sounds even better. And since I’ve indented it like that, it probably means that this is the right answer. And since this is the right answer, it’s probably worth making a note of the fact that our regression coefficients are estimates (we’re trying to guess the parameters that describe a population!), which is why I’ve added the little hats, so that we get \(\hat{b}_0\) and \(\hat{b}_1\) rather than \(b_0\) and \(b_1\). Finally, I should also note that – since there’s actually more than one way to estimate a regression model – the more technical name for this estimation process is ordinary least squares (OLS) regression.
At this point, we now have a concrete definition for what counts as our “best” choice of regression coefficients, \(\hat{b}_0\) and \(\hat{b}_1\). The natural question to ask next is, if our optimal regression coefficients are those that minimise the sum squared residuals, how do we find these wonderful numbers? The actual answer to this question is complicated, and it doesn’t help you understand the logic of regression.216 As a result, this time I’m going to let you off the hook. Instead of showing you how to do it the long and tedious way first, and then “revealing” the wonderful shortcut that R provides you with, let’s cut straight to the chase… and use the `lm()` function (short for “linear model”) to do all the heavy lifting.
### 15.2.1 Using the `lm()` function

The `lm()` function is a fairly complicated one: if you type `?lm` , the help files will reveal that there are a lot of arguments that you can specify, and most of them won’t make a lot of sense to you. At this stage however, there’s really only two of them that you care about, and as it turns out you’ve seen them before:
* `formula` . A formula that specifies the regression model. For the simple linear regression models that we’ve talked about so far, in which you have a single predictor variable as well as an intercept term, this formula is of the form `outcome ~ predictor` . However, more complicated formulas are allowed, and we’ll discuss them later.
* `data` . The data frame containing the variables.

As we saw with `aov()` in Chapter 14, the output of the `lm()` function is a fairly complicated object, with quite a lot of technical information buried under the hood. Because this technical information is used by other functions, it’s generally a good idea to create a variable that stores the results of your regression. With this in mind, to run my linear regression, the command I want to use is this:
```
regression.1 <- lm( formula = dan.grump ~ dan.sleep,
data = parenthood )
```
Note that I used `dan.grump ~ dan.sleep` as the formula: in the model that I’m trying to estimate, `dan.grump` is the outcome variable, and `dan.sleep` is the predictor variable. It’s always a good idea to remember which one is which! Anyway, what this does is create an “ `lm` object” (i.e., a variable whose class is `"lm"` ) called `regression.1` . Let’s have a look at what happens when we `print()` it out:
```
print( regression.1 )
```
This looks promising. There’s two separate pieces of information here. Firstly, R is politely reminding us what the command was that we used to specify the model in the first place, which can be helpful. More importantly from our perspective, however, is the second part, in which R gives us the intercept \(\hat{b}_0 = 125.96\) and the slope \(\hat{b}_1 = -8.94\). In other words, the best-fitting regression line that I plotted in Figure 15.2 has this formula: \[ \hat{Y}_i = -8.94 \ X_i + 125.96 \]
### 15.2.2 Interpreting the estimated model
The most important thing to be able to understand is how to interpret these coefficients. Let’s start with \(\hat{b}_1\), the slope. If we remember the definition of the slope, a regression coefficient of \(\hat{b}_1 = -8.94\) means that if I increase \(X_i\) by 1, then I’m decreasing \(Y_i\) by 8.94. That is, each additional hour of sleep that I gain will improve my mood, reducing my grumpiness by 8.94 grumpiness points. What about the intercept? Well, since \(\hat{b}_0\) corresponds to “the expected value of \(Y_i\) when \(X_i\) equals 0”, it’s pretty straightforward. It implies that if I get zero hours of sleep (\(X_i =0\)) then my grumpiness will go off the scale, to an insane value of (\(Y_i = 125.96\)). Best to be avoided, I think.
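To make the interpretation a little more concrete, here’s a quick sketch of the prediction that the estimated line makes for a night on which I get, say, six hours of sleep (using the rounded coefficients reported above):

```
-8.94 * 6 + 125.96    # predicted grumpiness after six hours of sleep (about 72.3)
```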
## 15.3 Multiple linear regression
The simple linear regression model that we’ve discussed up to this point assumes that there’s a single predictor variable that you’re interested in, in this case `dan.sleep` . In fact, up to this point, every statistical tool that we’ve talked about has assumed that your analysis uses one predictor variable and one outcome variable. However, in many (perhaps most) research projects you actually have multiple predictors that you want to examine. If so, it would be nice to be able to extend the linear regression framework to be able to include multiple predictors. Perhaps some kind of multiple regression model would be in order? Multiple regression is conceptually very simple. All we do is add more terms to our regression equation. Let’s suppose that we’ve got two variables that we’re interested in; perhaps we want to use both `dan.sleep` and `baby.sleep` to predict the `dan.grump` variable. As before, we let \(Y_i\) refer to my grumpiness on the \(i\)-th day. But now we have two \(X\) variables: the first corresponding to the amount of sleep I got and the second corresponding to the amount of sleep my son got. So we’ll let \(X_{i1}\) refer to the hours I slept on the \(i\)-th day, and \(X_{i2}\) refers to the hours that the baby slept on that day. If so, then we can write our regression model like this: \[
Y_i = b_2 X_{i2} + b_1 X_{i1} + b_0 + \epsilon_i
\] As before, \(\epsilon_i\) is the residual associated with the \(i\)-th observation, \(\epsilon_i = {Y}_i - \hat{Y}_i\). In this model, we now have three coefficients that need to be estimated: \(b_0\) is the intercept, \(b_1\) is the coefficient associated with my sleep, and \(b_2\) is the coefficient associated with my son’s sleep. However, although the number of coefficients that need to be estimated has changed, the basic idea of how the estimation works is unchanged: our estimated coefficients \(\hat{b}_0\), \(\hat{b}_1\) and \(\hat{b}_2\) are those that minimise the sum squared residuals.
### 15.3.1 Doing it in R
Multiple regression in R is no different to simple regression: all we have to do is specify a more complicated `formula` when using the `lm()` function. For example, if we want to use both `dan.sleep` and `baby.sleep` as predictors in our attempt to explain why I’m so grumpy, then the formula we need is this:
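```
dan.grump ~ dan.sleep + baby.sleep
```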
Notice that, just like last time, I haven’t explicitly included any reference to the intercept term in this formula; only the two predictor variables and the outcome. By default, the `lm()` function assumes that the model should include an intercept (though you can get rid of it if you want). In any case, I can create a new regression model – which I’ll call `regression.2` – using the following command:
```
regression.2 <- lm( formula = dan.grump ~ dan.sleep + baby.sleep,
data = parenthood )
```
And just like last time, if we `print()` out this regression model we can see what the estimated regression coefficients are:
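```
print( regression.2 )
```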
The coefficient associated with `dan.sleep` is quite large, suggesting that every hour of sleep I lose makes me a lot grumpier. However, the coefficient for `baby.sleep` is very small, suggesting that it doesn’t really matter how much sleep my son gets; not really. What matters as far as my grumpiness goes is how much sleep I get. To get a sense of what this multiple regression model looks like, Figure 15.6 shows a 3D plot that plots all three variables, along with the regression model itself.
### 15.3.2 Formula for the general case
The equation that I gave above shows you what a multiple regression model looks like when you include two predictors. Not surprisingly, then, if you want more than two predictors all you have to do is add more \(X\) terms and more \(b\) coefficients. In other words, if you have \(K\) predictor variables in the model then the regression equation looks like this: \[ Y_i = \left( \sum_{k=1}^K b_{k} X_{ik} \right) + b_0 + \epsilon_i \]
## 15.4 Quantifying the fit of the regression model
So we now know how to estimate the coefficients of a linear regression model. The problem is, we don’t yet know if this regression model is any good. For example, the `regression.1` model claims that every hour of sleep will improve my mood by quite a lot, but it might just be rubbish. Remember, the regression model only produces a prediction \(\hat{Y}_i\) about what my mood is like: my actual mood is \(Y_i\). If these two are very close, then the regression model has done a good job. If they are very different, then it has done a bad job.
### 15.4.1 The \(R^2\) value
Once again, let’s wrap a little bit of mathematics around this. Firstly, we’ve got the sum of the squared residuals: \[ \mbox{SS}_{res} = \sum_i (Y_i - \hat{Y}_i)^2 \] which we would hope to be pretty small. Specifically, what we’d like is for it to be very small in comparison to the total variability in the outcome variable, \[ \mbox{SS}_{tot} = \sum_i (Y_i - \bar{Y})^2 \] While we’re here, let’s calculate these values in R. Firstly, in order to make my R commands look a bit more similar to the mathematical equations, I’ll create variables `X` and `Y` :
```
X <- parenthood$dan.sleep # the predictor
Y <- parenthood$dan.grump # the outcome
```
Now that we’ve done this, let’s calculate the \(\hat{Y}\) values and store them in a variable called `Y.pred` . For the simple model that uses only a single predictor, `regression.1` , we would do the following:
```
Y.pred <- -8.94 * X + 125.97
```
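As an aside, typing the rounded coefficients in by hand like this is a little error prone. A less fiddly way to get the same predictions is to ask R for the model’s fitted values directly (a sketch using the standard `fitted()` function, stored under a different name so it doesn’t overwrite the version above; because it uses the unrounded coefficients, the numbers will differ very slightly):

```
Y.pred2 <- fitted( regression.1 )   # model predictions for the original observations
# predict( regression.1 ) returns the same values when no new data are supplied
```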
Okay, now that we’ve got a variable which stores the regression model predictions for how grumpy I will be on any given day, let’s calculate our sum of squared residuals. We would do that using the following command:
```
SS.resid <- sum( (Y - Y.pred)^2 )
print( SS.resid )
```
`## [1] 1838.722`
Wonderful. A big number that doesn’t mean very much. Still, let’s forge boldly onwards anyway, and calculate the total sum of squares as well. That’s also pretty simple:
```
SS.tot <- sum( (Y - mean(Y))^2 )
print( SS.tot )
```
`## [1] 9998.59`
Hm. Well, it’s a much bigger number than the last one, so this does suggest that our regression model was making good predictions. But it’s not very interpretable.
Perhaps we can fix this. What we’d like to do is to convert these two fairly meaningless numbers into one number. A nice, interpretable number, which for no particular reason we’ll call \(R^2\). What we would like is for the value of \(R^2\) to be equal to 1 if the regression model makes no errors in predicting the data. In other words, if it turns out that the residual errors are zero – that is, if \(\mbox{SS}_{res} = 0\) – then we expect \(R^2 = 1\). Similarly, if the model is completely useless, we would like \(R^2\) to be equal to 0. What do I mean by “useless”? Tempting as it is to demand that the regression model move out of the house, cut its hair and get a real job, I’m probably going to have to pick a more practical definition: in this case, all I mean is that the residual sum of squares is no smaller than the total sum of squares, \(\mbox{SS}_{res} = \mbox{SS}_{tot}\). Wait, why don’t we do exactly that? The formula that provides us with our \(R^2\) value is pretty simple to write down, \[ R^2 = 1 - \frac{\mbox{SS}_{res}}{\mbox{SS}_{tot}} \] and equally simple to calculate in R:
```
R.squared <- 1 - (SS.resid / SS.tot)
print( R.squared )
```
`## [1] 0.8161018`

The \(R^2\) value, sometimes called the coefficient of determination,217 has a simple interpretation: it is the proportion of the variance in the outcome variable that can be accounted for by the predictor. So in this case, the fact that we have obtained \(R^2 = .816\) means that the predictor ( `dan.sleep` ) explains 81.6% of the variance in the outcome ( `dan.grump` ). Naturally, you don’t actually need to type in all these commands yourself if you want to obtain the \(R^2\) value for your regression model. As we’ll see later on in Section 15.5.3, all you need to do is use the `summary()` function. However, let’s put that to one side for the moment. There’s another property of \(R^2\) that I want to point out.
### 15.4.2 The relationship between regression and correlation
At this point we can revisit my earlier claim that regression, in this very simple form that I’ve discussed so far, is basically the same thing as a correlation. Previously, we used the symbol \(r\) to denote a Pearson correlation. Might there be some relationship between the value of the correlation coefficient \(r\) and the \(R^2\) value from linear regression? Of course there is: the squared correlation \(r^2\) is identical to the \(R^2\) value for a linear regression with only a single predictor. To illustrate this, here’s the squared correlation:
```
r <- cor(X, Y) # calculate the correlation
print( r^2 ) # print the squared correlation
```
`## [1] 0.8161027`
Yep, same number. In other words, running a Pearson correlation is more or less equivalent to running a linear regression model that uses only one predictor variable.
### 15.4.3 The adjusted \(R^2\) value
One final thing to point out before moving on. It’s quite common for people to report a slightly different measure of model performance, known as “adjusted \(R^2\)”. The motivation behind calculating the adjusted \(R^2\) value is the observation that adding more predictors into the model will always cause the \(R^2\) value to increase (or at least not decrease). The adjusted \(R^2\) value introduces a slight change to the calculation, as follows. For a regression model with \(K\) predictors, fit to a data set containing \(N\) observations, the adjusted \(R^2\) is: \[ \mbox{adj. } R^2 = 1 - \left(\frac{\mbox{SS}_{res}}{\mbox{SS}_{tot}} \times \frac{N-1}{N-K-1} \right) \] This adjustment is an attempt to take the degrees of freedom into account. The big advantage of the adjusted \(R^2\) value is that when you add more predictors to the model, the adjusted \(R^2\) value will only increase if the new variables improve the model performance more than you’d expect by chance. The big disadvantage is that the adjusted \(R^2\) value can’t be interpreted in the elegant way that \(R^2\) can. \(R^2\) has a simple interpretation as the proportion of variance in the outcome variable that is explained by the regression model; to my knowledge, no equivalent interpretation exists for adjusted \(R^2\).
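If you’d like to see the adjustment in action, here’s a sketch that computes it by hand from the quantities we calculated earlier in this section (it assumes that `Y` , `SS.resid` and `SS.tot` are still lying around in your workspace):

```
N <- length( Y )                                   # number of observations
K <- 1                                             # number of predictors in regression.1
1 - (SS.resid / SS.tot) * (N - 1) / (N - K - 1)    # the adjusted R-squared value
```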
An obvious question then, is whether you should report \(R^2\) or adjusted \(R^2\). This is probably a matter of personal preference. If you care more about interpretability, then \(R^2\) is better. If you care more about correcting for bias, then adjusted \(R^2\) is probably better. Speaking just for myself, I prefer \(R^2\): my feeling is that it’s more important to be able to interpret your measure of model performance. Besides, as we’ll see in Section 15.5, if you’re worried that the improvement in \(R^2\) that you get by adding a predictor is just due to chance and not because it’s a better model, well, we’ve got hypothesis tests for that.
## 15.5 Hypothesis tests for regression models
So far we’ve talked about what a regression model is, how the coefficients of a regression model are estimated, and how we quantify the performance of the model (the last of these, incidentally, is basically our measure of effect size). The next thing we need to talk about is hypothesis tests. There are two different (but related) kinds of hypothesis tests that we need to talk about: those in which we test whether the regression model as a whole is performing significantly better than a null model; and those in which we test whether a particular regression coefficient is significantly different from zero.
At this point, you’re probably groaning internally, thinking that I’m going to introduce a whole new collection of tests. You’re probably sick of hypothesis tests by now, and don’t want to learn any new ones. Me too. I’m so sick of hypothesis tests that I’m going to shamelessly reuse the \(F\)-test from Chapter 14 and the \(t\)-test from Chapter 13. In fact, all I’m going to do in this section is show you how those tests are imported wholesale into the regression framework.
### 15.5.1 Testing the model as a whole
Okay, suppose you’ve estimated your regression model. The first hypothesis test you might want to try is one in which the null hypothesis is that there is no relationship between the predictors and the outcome, and the alternative hypothesis is that the data are distributed in exactly the way that the regression model predicts. Formally, our “null model” corresponds to the fairly trivial “regression” model in which we include 0 predictors, and only include the intercept term \(b_0\) \[ H_0: Y_i = b_0 + \epsilon_i \] If our regression model has \(K\) predictors, the “alternative model” is described using the usual formula for a multiple regression model: \[ H_1: Y_i = \left( \sum_{k=1}^K b_{k} X_{ik} \right) + b_0 + \epsilon_i \]
How can we test these two hypotheses against each other? The trick is to understand that just like we did with ANOVA, it’s possible to divide up the total variance \(\mbox{SS}_{tot}\) into the sum of the residual variance \(\mbox{SS}_{res}\) and the regression model variance \(\mbox{SS}_{mod}\). I’ll skip over the technicalities, since we covered most of them in the ANOVA chapter, and just note that: \[ \mbox{SS}_{mod} = \mbox{SS}_{tot} - \mbox{SS}_{res} \] And, just like we did with the ANOVA, we can convert the sums of squares into mean squares by dividing by the degrees of freedom. \[ \begin{array}{rcl} \mbox{MS}_{mod} &=& \displaystyle\frac{\mbox{SS}_{mod} }{df_{mod}} \\ \\ \mbox{MS}_{res} &=& \displaystyle\frac{\mbox{SS}_{res} }{df_{res} } \end{array} \]
So, how many degrees of freedom do we have? As you might expect, the \(df\) associated with the model is closely tied to the number of predictors that we’ve included. In fact, it turns out that \(df_{mod} = K\). For the residuals, the total degrees of freedom is \(df_{res} = N -K - 1\).
Now that we’ve got our mean square values, you’re probably going to be entirely unsurprised (possibly even bored) to discover that we can calculate an \(F\)-statistic like this: \[ F = \frac{\mbox{MS}_{mod}}{\mbox{MS}_{res}} \] and the degrees of freedom associated with this are \(K\) and \(N-K-1\). This \(F\) statistic has exactly the same interpretation as the one we introduced in Chapter 14. Large \(F\) values indicate that the null hypothesis is performing poorly in comparison to the alternative hypothesis. And since we already did some tedious “do it the long way” calculations back then, I won’t waste your time repeating them. In a moment I’ll show you how to do the test in R the easy way, but first, let’s have a look at the tests for the individual regression coefficients.
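Before we do, one small aside. Since \(R^2\) is the proportion of variance explained, \(\mbox{SS}_{mod}/\mbox{SS}_{tot} = R^2\), so you can reconstruct this \(F\)-statistic directly from \(R^2\), \(K\) and \(N\). A quick sketch using the values reported for the two-predictor model in Section 15.5.3 (\(R^2 = .8161\), \(K = 2\), \(N = 100\)):

```
R.squared <- 0.8161   # multiple R-squared from the summary() output below
K <- 2                # number of predictors
N <- 100              # number of observations
(R.squared / K) / ((1 - R.squared) / (N - K - 1))   # roughly 215, i.e. F(2,97) = 215.2
```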
### 15.5.2 Tests for individual coefficients
The \(F\)-test that we’ve just introduced is useful for checking that the model as a whole is performing better than chance. This is important: if your regression model doesn’t produce a significant result for the \(F\)-test then you probably don’t have a very good regression model (or, quite possibly, you don’t have very good data). However, while failing this test is a pretty strong indicator that the model has problems, passing the test (i.e., rejecting the null) doesn’t imply that the model is good! Why is that, you might be wondering? The answer to that can be found by looking at the coefficients for the `regression.2` model:
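A minimal way to do that, assuming `regression.2` was fitted earlier as `lm(dan.grump ~ dan.sleep + baby.sleep, data = parenthood)` (as the `summary()` output in Section 15.5.3 confirms), is just to print the model or pull out its coefficients:

```
print( regression.2 )   # shows the Call and the coefficient estimates
coef( regression.2 )    # or extract the coefficients directly
```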
I can’t help but notice that the estimated regression coefficient for the `baby.sleep` variable is tiny (0.01), relative to the value that we get for `dan.sleep` (-8.95). Given that these two variables are absolutely on the same scale (they’re both measured in “hours slept”), I find this suspicious. In fact, I’m beginning to suspect that it’s really only the amount of sleep that I get that matters in order to predict my grumpiness.
Once again, we can reuse a hypothesis test that we discussed earlier, this time the \(t\)-test. The test that we’re interested in has a null hypothesis that the true regression coefficient is zero (\(b = 0\)), which is to be tested against the alternative hypothesis that it isn’t (\(b \neq 0\)). That is: \[ \begin{array}{rl} H_0: & b = 0 \\ H_1: & b \neq 0 \end{array} \] How can we test this? Well, if the central limit theorem is kind to us, we might be able to guess that the sampling distribution of \(\hat{b}\), the estimated regression coefficient, is a normal distribution with mean centred on \(b\). What that would mean is that if the null hypothesis were true, then the sampling distribution of \(\hat{b}\) has mean zero and unknown standard deviation. Assuming that we can come up with a good estimate for the standard error of the regression coefficient, \(\mbox{SE}({\hat{b}})\), then we’re in luck. That’s exactly the situation for which we introduced the one-sample \(t\) way back in Chapter 13. So let’s define a \(t\)-statistic like this, \[ t = \frac{\hat{b}}{\mbox{SE}({\hat{b})}} \] I’ll skip over the reasons why, but our degrees of freedom in this case are \(df = N- K- 1\). Irritatingly, the estimate of the standard error of the regression coefficient, \(\mbox{SE}({\hat{b}})\), is not as easy to calculate as the standard error of the mean that we used for the simpler \(t\)-tests in Chapter 13. In fact, the formula is somewhat ugly, and not terribly helpful to look at. For our purposes it’s sufficient to point out that the standard error of the estimated regression coefficient depends on both the predictor and outcome variables, and is somewhat sensitive to violations of the homogeneity of variance assumption (discussed shortly).
In any case, this \(t\)-statistic can be interpreted in the same way as the \(t\)-statistics that we discussed in Chapter 13. Assuming that you have a two-sided alternative (i.e., you don’t really care if \(b >0\) or \(b < 0\)), then it’s the extreme values of \(t\) (i.e., a lot less than zero or a lot greater than zero) that suggest that you should reject the null hypothesis.
### 15.5.3 Running the hypothesis tests in R
To compute all of the quantities that we have talked about so far, all you need to do is ask for a `summary()` of your regression model. Since I’ve been using `regression.2` as my example, let’s do that:
```
summary( regression.2 )
```
```
##
## Call:
## lm(formula = dan.grump ~ dan.sleep + baby.sleep, data = parenthood)
##
## Residuals:
## Min 1Q Median 3Q Max
## -11.0345 -2.2198 -0.4016 2.6775 11.7496
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 125.96557 3.04095 41.423 <2e-16 ***
## dan.sleep -8.95025 0.55346 -16.172 <2e-16 ***
## baby.sleep 0.01052 0.27106 0.039 0.969
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.354 on 97 degrees of freedom
## Multiple R-squared: 0.8161, Adjusted R-squared: 0.8123
## F-statistic: 215.2 on 2 and 97 DF, p-value: < 2.2e-16
```
The output that this command produces is pretty dense, but we’ve already discussed everything of interest in it, so what I’ll do is go through it line by line. The first line reminds us of what the actual regression model is:
```
Call:
lm(formula = dan.grump ~ dan.sleep + baby.sleep, data = parenthood)
```
You can see why this is handy, since it was a little while back when we actually created the `regression.2` model, and so it’s nice to be reminded of what it was we were doing. The next part provides a quick summary of the residuals (i.e., the \(\epsilon_i\) values),
```
Residuals:
Min 1Q Median 3Q Max
-11.0345 -2.2198 -0.4016 2.6775 11.7496
```
which can be convenient as a quick and dirty check that the model is okay. Remember, we did assume that these residuals were normally distributed, with mean 0. In particular it’s worth quickly checking to see if the median is close to zero, and to see if the first quartile is about the same size as the third quartile. If they look badly off, there’s a good chance that the assumptions of regression are violated. These ones look pretty nice to me, so let’s move on to the interesting stuff. The next part of the R output looks at the coefficients of the regression model:
```
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 125.96557 3.04095 41.423 <2e-16 ***
dan.sleep -8.95025 0.55346 -16.172 <2e-16 ***
baby.sleep 0.01052 0.27106 0.039 0.969
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Each row in this table refers to one of the coefficients in the regression model. The first row is the intercept term, and the later ones look at each of the predictors. The columns give you all of the relevant information. The first column is the actual estimate of \(b\) (e.g., 125.96 for the intercept, and -8.9 for the `dan.sleep` predictor). The second column is the standard error estimate \(\hat\sigma_b\). The third column gives you the \(t\)-statistic, and it’s worth noticing that in this table \(t= \hat{b}/\mbox{SE}({\hat{b}})\) every time. Finally, the fourth column gives you the actual \(p\) value for each of these tests.218 The only thing that the table itself doesn’t list is the degrees of freedom used in the \(t\)-test, which is always \(N-K-1\) and is listed immediately below, in this line:
```
Residual standard error: 4.354 on 97 degrees of freedom
```
The value of \(df = 97\) is equal to \(N-K-1\), so that’s what we use for our \(t\)-tests. In the final part of the output we have the \(F\)-test and the \(R^2\) values, which assess the performance of the model as a whole:
```
Residual standard error: 4.354 on 97 degrees of freedom
Multiple R-squared: 0.8161, Adjusted R-squared: 0.8123
F-statistic: 215.2 on 2 and 97 DF, p-value: < 2.2e-16
```
So in this case, the model performs significantly better than you’d expect by chance (\(F(2,97) = 215.2\), \(p<.001\)), which isn’t all that surprising: the \(R^2 = .812\) value indicates that the regression model accounts for 81.2% of the variability in the outcome measure. However, when we look back up at the \(t\)-tests for each of the individual coefficients, we have pretty strong evidence that the `baby.sleep` variable has no significant effect; all the work is being done by the `dan.sleep` variable. Taken together, these results suggest that `regression.2` is actually the wrong model for the data: you’d probably be better off dropping the `baby.sleep` predictor entirely. In other words, the `regression.1` model that we started with is the better model.
## 15.6 Testing the significance of a correlation
### 15.6.1 Hypothesis tests for a single correlation
I don’t want to spend too much time on this, but it’s worth very briefly returning to the point I made earlier, that Pearson correlations are basically the same thing as linear regressions with only a single predictor added to the model. What this means is that the hypothesis tests that I just described in a regression context can also be applied to correlation coefficients. To see this, let’s take a `summary()` of the `regression.1` model:
```
summary( regression.1 )
```
```
##
## Call:
## lm(formula = dan.grump ~ dan.sleep, data = parenthood)
##
## Residuals:
## Min 1Q Median 3Q Max
## -11.025 -2.213 -0.399 2.681 11.750
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 125.9563 3.0161 41.76 <2e-16 ***
## dan.sleep -8.9368 0.4285 -20.85 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.332 on 98 degrees of freedom
## Multiple R-squared: 0.8161, Adjusted R-squared: 0.8142
## F-statistic: 434.9 on 1 and 98 DF, p-value: < 2.2e-16
```
The important thing to note here is the \(t\) test associated with the predictor, in which we get a result of \(t(98) = -20.85\), \(p<.001\). Now let’s compare this to the output of a different function, which goes by the name of `cor.test()` . As you might expect, this function runs a hypothesis test to see if the observed correlation between two variables is significantly different from 0. Let’s have a look:
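Assuming the same `parenthood` data frame, the call looks like this:

```
cor.test( x = parenthood$dan.sleep, y = parenthood$dan.grump )
```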
```
##
## Pearson's product-moment correlation
##
## data: parenthood$dan.sleep and parenthood$dan.grump
## t = -20.854, df = 98, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.9340614 -0.8594714
## sample estimates:
## cor
## -0.903384
```
Again, the key thing to note is the line that reports the hypothesis test itself, which seems to be saying that \(t(98) = -20.85\), \(p<.001\). Hm. Looks like it’s exactly the same test, doesn’t it? And that’s exactly what it is. The test for the significance of a correlation is identical to the \(t\) test that we run on a coefficient in a regression model.
### 15.6.2 Hypothesis tests for all pairwise correlations
Okay, one more digression before I return to regression properly. In the previous section I talked about the `cor.test()` function, which lets you run a hypothesis test on a single correlation. The `cor.test()` function is (obviously) an extension of the `cor()` function, which we talked about in Section 5.7. However, the `cor()` function isn’t restricted to computing a single correlation: you can use it to compute all pairwise correlations among the variables in your data set. This leads people to the natural question: can the `cor.test()` function do the same thing? Can we use `cor.test()` to run hypothesis tests for all possible pairwise correlations among the variables in a data frame? The answer is no, and there’s a very good reason for this. Testing a single correlation is fine: if you’ve got some reason to be asking “is A related to B?”, then you should absolutely run a test to see if there’s a significant correlation. But if you’ve got variables A, B, C, D and E and you’re thinking about testing the correlations among all possible pairs of these, a statistician would want to ask: what’s your hypothesis? If you’re in the position of wanting to test all possible pairs of variables, then you’re pretty clearly on a fishing expedition, hunting around in search of significant effects when you don’t actually have a clear research hypothesis in mind. This is dangerous, and the authors of `cor.test()` obviously felt that they didn’t want to support that kind of behaviour.

On the other hand… a somewhat less hardline view might be to argue we’ve encountered this situation before, back in Section 14.5 when we talked about post hoc tests in ANOVA. When running post hoc tests, we didn’t have any specific comparisons in mind, so what we did was apply a correction (e.g., Bonferroni, Holm, etc) in order to avoid the possibility of an inflated Type I error rate. From this perspective, it’s okay to run hypothesis tests on all your pairwise correlations, but you must treat them as post hoc analyses, and if so you need to apply a correction for multiple comparisons. That’s what the `correlate()` function in the `lsr` package does. When we used the `correlate()` function in Section 5.7, all it did was print out the correlation matrix. But you can get it to output the results of all the pairwise tests as well by specifying `test=TRUE` . Here’s what happens with the `parenthood` data:
```
library(lsr)
correlate(parenthood, test=TRUE)
```
```
##
## CORRELATIONS
## ============
## - correlation type: pearson
## - correlations shown only when both variables are numeric
##
## dan.sleep baby.sleep dan.grump day
## dan.sleep . 0.628*** -0.903*** -0.098
## baby.sleep 0.628*** . -0.566*** -0.010
## dan.grump -0.903*** -0.566*** . 0.076
## day -0.098 -0.010 0.076 .
##
## ---
## Signif. codes: . = p < .1, * = p<.05, ** = p<.01, *** = p<.001
##
##
## p-VALUES
## ========
## - total number of tests run: 6
## - correction for multiple testing: holm
##
## dan.sleep baby.sleep dan.grump day
## dan.sleep . 0.000 0.000 0.990
## baby.sleep 0.000 . 0.000 0.990
## dan.grump 0.000 0.000 . 0.990
## day 0.990 0.990 0.990 .
##
##
## SAMPLE SIZES
## ============
##
## dan.sleep baby.sleep dan.grump day
## dan.sleep 100 100 100 100
## baby.sleep 100 100 100 100
## dan.grump 100 100 100 100
## day 100 100 100 100
```
The output here contains three matrices. First it prints out the correlation matrix. Second it prints out a matrix of \(p\)-values, using the Holm method219 to correct for multiple comparisons. Finally, it prints out a matrix indicating the sample size (number of pairwise complete cases) that contributed to each correlation.
So there you have it. If you really desperately want to do pairwise hypothesis tests on your correlations, the `correlate()` function will let you do it. But please, please be careful. I can’t count the number of times I’ve had a student panicking in my office because they’ve run these pairwise correlation tests, and they get one or two significant results that don’t make any sense. For some reason, the moment people see those little significance stars appear, they feel compelled to throw away all common sense and assume that the results must correspond to something real that requires an explanation. In most such cases, my experience has been that the right answer is “it’s a Type I error”.
## 15.7 Regarding regression coefficients
Before moving on to discuss the assumptions underlying linear regression and what you can do to check if they’re being met, there are two more topics I want to briefly discuss, both of which relate to the regression coefficients. The first thing to talk about is calculating confidence intervals for the coefficients; after that, I’ll discuss the somewhat murky question of how to determine which predictor is the most important.
### 15.7.1 Confidence intervals for the coefficients
Like any population parameter, the regression coefficients \(b\) cannot be estimated with complete precision from a sample of data; that’s part of why we need hypothesis tests. Given this, it’s quite useful to be able to report confidence intervals that capture our uncertainty about the true value of \(b\). This is especially useful when the research question focuses heavily on an attempt to find out how strongly variable \(X\) is related to variable \(Y\), since in those situations the interest is primarily in the regression weight \(b\). Fortunately, confidence intervals for the regression weights can be constructed in the usual fashion, \[ \mbox{CI}(b) = \hat{b} \pm \left( t_{crit} \times \mbox{SE}({\hat{b})} \right) \] where \(\mbox{SE}({\hat{b}})\) is the standard error of the regression coefficient, and \(t_{crit}\) is the relevant critical value of the appropriate \(t\) distribution. For instance, if it’s a 95% confidence interval that we want, then the critical value is the 97.5th quantile of a \(t\) distribution with \(N-K-1\) degrees of freedom. In other words, this is basically the same approach to calculating confidence intervals that we’ve used throughout. To do this in R we can use the `confint()` function. The arguments to this function are:
* `object` . The regression model ( `lm` object) for which confidence intervals are required.
* `parm` . A vector indicating which coefficients we should calculate intervals for. This can be either a vector of numbers or (more usefully) a character vector containing variable names. By default, all coefficients are included, so usually you don’t bother specifying this argument.
* `level` . A number indicating the confidence level that should be used. As is usually the case, the default value is 0.95, so you wouldn’t usually need to specify this argument.

So, suppose I want 99% confidence intervals for the coefficients in the `regression.2` model. I could do this using the following command:
```
confint( object = regression.2,
level = .99)
```
```
## 0.5 % 99.5 %
## (Intercept) 117.9755724 133.9555593
## dan.sleep -10.4044419 -7.4960575
## baby.sleep -0.7016868 0.7227357
```
Simple enough.
### 15.7.2 Calculating standardised regression coefficients
One more thing that you might want to do is to calculate “standardised” regression coefficients, often denoted \(\beta\). The rationale behind standardised coefficients goes like this. In a lot of situations, your variables are on fundamentally different scales. Suppose, for example, my regression model aims to predict people’s IQ scores, using their educational attainment (number of years of education) and their income as predictors. Obviously, educational attainment and income are not on the same scales: the number of years of schooling can only vary by 10s of years, whereas income would vary by 10,000s of dollars (or more). The units of measurement have a big influence on the regression coefficients: the \(b\) coefficients only make sense when interpreted in light of the units, both of the predictor variables and the outcome variable. This makes it very difficult to compare the coefficients of different predictors. Yet there are situations where you really do want to make comparisons between different coefficients. Specifically, you might want some kind of standard measure of which predictors have the strongest relationship to the outcome. This is what standardised coefficients aim to do.
The basic idea is quite simple: the standardised coefficients are the coefficients that you would have obtained if you’d converted all the variables to \(z\)-scores before running the regression.220 The idea here is that, by converting all the predictors to \(z\)-scores, they all go into the regression on the same scale, thereby removing the problem of having variables on different scales. Regardless of what the original variables were, a \(\beta\) value of 1 means that an increase in the predictor of 1 standard deviation will produce a corresponding 1 standard deviation increase in the outcome variable. Therefore, if variable A has a larger absolute value of \(\beta\) than variable B, it is deemed to have a stronger relationship with the outcome. Or at least that’s the idea: it’s worth being a little cautious here, since this does rely very heavily on the assumption that “a 1 standard deviation change” is fundamentally the same kind of thing for all variables. It’s not always obvious that this is true.
Leaving aside the interpretation issues, let’s look at how it’s calculated. What you could do is standardise all the variables yourself and then run a regression, but there’s a much simpler way to do it. As it turns out, the \(\beta\) coefficient for a predictor \(X\) and outcome \(Y\) has a very simple formula, namely \[ \beta_X = b_X \times \frac{\sigma_X}{\sigma_Y} \] where \(\sigma_X\) is the standard deviation of the predictor, and \(\sigma_Y\) is the standard deviation of the outcome variable \(Y\). This makes matters a lot simpler. To make things even simpler, the `lsr` package includes a function `standardCoefs()` that computes the \(\beta\) coefficients.
```
standardCoefs( regression.2 )
```
```
## b beta
## dan.sleep -8.95024973 -0.90474809
## baby.sleep 0.01052447 0.00217223
```
This clearly shows that the `dan.sleep` variable has a much stronger effect than the `baby.sleep` variable. However, this is a perfect example of a situation where it would probably make sense to use the original coefficients \(b\) rather than the standardised coefficients \(\beta\). After all, my sleep and the baby’s sleep are already on the same scale: number of hours slept. Why complicate matters by converting these to \(z\)-scores?
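That said, if you ever want to check the formula by hand, here’s a minimal sketch for the `dan.sleep` predictor (assuming the `parenthood` data frame is still loaded); it should reproduce the \(\beta\) value of about \(-0.905\) shown above:

```
b <- coef( regression.2 )["dan.sleep"]                        # unstandardised coefficient
b * sd( parenthood$dan.sleep ) / sd( parenthood$dan.grump )   # beta, roughly -0.905
```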
## 15.8 Assumptions of regression
The linear regression model that I’ve been discussing relies on several assumptions. In Section 15.9 we’ll talk a lot more about how to check that these assumptions are being met, but first, let’s have a look at each of them.
* Normality. Like half the models in statistics, standard linear regression relies on an assumption of normality. Specifically, it assumes that the residuals are normally distributed. It’s actually okay if the predictors \(X\) and the outcome \(Y\) are non-normal, so long as the residuals \(\epsilon\) are normal. See Section 15.9.3.
* Linearity. A pretty fundamental assumption of the linear regression model is that the relationship between \(X\) and \(Y\) actually be linear! Regardless of whether it’s a simple regression or a multiple regression, we assume that the relationships involved are linear. See Section 15.9.4.
* Homogeneity of variance. Strictly speaking, the regression model assumes that each residual \(\epsilon_i\) is generated from a normal distribution with mean 0, and (more importantly for the current purposes) with a standard deviation \(\sigma\) that is the same for every single residual. In practice, it’s impossible to test the assumption that every residual is identically distributed. Instead, what we care about is that the standard deviation of the residual is the same for all values of \(\hat{Y}\), and (if we’re being especially paranoid) all values of every predictor \(X\) in the model. See Section 15.9.5.
* Uncorrelated predictors. The idea here is that, in a multiple regression model, you don’t want your predictors to be too strongly correlated with each other. This isn’t “technically” an assumption of the regression model, but in practice it’s required. Predictors that are too strongly correlated with each other (referred to as “collinearity”) can cause problems when evaluating the model. See Section 15.9.6
* Residuals are independent of each other. This is really just a “catch all” assumption, to the effect that “there’s nothing else funny going on in the residuals”. If there is something weird (e.g., the residuals all depend heavily on some other unmeasured variable) going on, it might screw things up.
* No “bad” outliers. Again, not actually a technical assumption of the model (or rather, it’s sort of implied by all the others), but there is an implicit assumption that your regression model isn’t being too strongly influenced by one or two anomalous data points, since this raises questions about the adequacy of the model and the trustworthiness of the data in some cases. See Section 15.9.2.
## 15.9 Model checking
The main focus of this section is regression diagnostics, a term that refers to the art of checking that the assumptions of your regression model have been met, figuring out how to fix the model if the assumptions are violated, and generally to check that nothing “funny” is going on. I refer to this as the “art” of model checking with good reason: it’s not easy, and while there are a lot of fairly standardised tools that you can use to diagnose and maybe even cure the problems that ail your model (if there are any, that is!), you really do need to exercise a certain amount of judgment when doing this. It’s easy to get lost in all the details of checking this thing or that thing, and it’s quite exhausting to try to remember what all the different things are. This has the very nasty side effect that a lot of people get frustrated when trying to learn all the tools, so instead they decide not to do any model checking. This is a bit of a worry!
In this section, I describe several different things you can do to check that your regression model is doing what it’s supposed to. It doesn’t cover the full space of things you could do, but it’s still much more detailed than what I see a lot of people doing in practice; and I don’t usually cover all of this in my intro stats class myself. However, I do think it’s important that you get a sense of what tools are at your disposal, so I’ll try to introduce a bunch of them here. Finally, I should note that this section draws quite heavily from the Fox and Weisberg (2011) text, the book associated with the `car` package. The `car` package is notable for providing some excellent tools for regression diagnostics, and the book itself talks about them in an admirably clear fashion. I don’t want to sound too gushy about it, but I do think that Fox and Weisberg (2011) is well worth reading.
### 15.9.1 Three kinds of residuals
The majority of regression diagnostics revolve around looking at the residuals, and by now you’ve probably formed a sufficiently pessimistic theory of statistics to be able to guess that – precisely because of the fact that we care a lot about the residuals – there are several different kinds of residual that we might consider. In particular, the following three kinds of residual are referred to in this section: “ordinary residuals”, “standardised residuals”, and “Studentised residuals”. There is a fourth kind that you’ll see referred to in some of the Figures, and that’s the “Pearson residual”: however, for the models that we’re talking about in this chapter, the Pearson residual is identical to the ordinary residual.
The first and simplest kind of residuals that we care about are ordinary residuals. These are the actual, raw residuals that I’ve been talking about throughout this chapter. The ordinary residual is just the difference between the fitted value \(\hat{Y}_i\) and the observed value \(Y_i\). I’ve been using the notation \(\epsilon_i\) to refer to the \(i\)-th ordinary residual, and by gum I’m going to stick to it. With this in mind, we have the very simple equation \[ \epsilon_i = Y_i - \hat{Y}_i \] This is of course what we saw earlier, and unless I specifically refer to some other kind of residual, this is the one I’m talking about. So there’s nothing new here: I just wanted to repeat myself. In any case, if you want R to output a vector of ordinary residuals, you can use a command like this:
```
residuals( object = regression.2 )
```
```
## 1 2 3 4 5 6
## -2.1403095 4.7081942 1.9553640 -2.0602806 0.7194888 -0.4066133
## 7 8 9 10 11 12
## 0.2269987 -1.7003077 0.2025039 3.8524589 3.9986291 -4.9120150
## 13 14 15 16 17 18
## 1.2060134 0.4946578 -2.6579276 -0.3966805 3.3538613 1.7261225
## 19 20 21 22 23 24
## -0.4922551 -5.6405941 -0.4660764 2.7238389 9.3653697 0.2841513
## 25 26 27 28 29 30
## -0.5037668 -1.4941146 8.1328623 1.9787316 -1.5126726 3.5171148
## 31 32 33 34 35 36
## -8.9256951 -2.8282946 6.1030349 -7.5460717 4.5572128 -10.6510836
## 37 38 39 40 41 42
## -5.6931846 6.3096506 -2.1082466 -0.5044253 0.1875576 4.8094841
## 43 44 45 46 47 48
## -5.4135163 -6.2292842 -4.5725232 -5.3354601 3.9950111 2.1718745
## 49 50 51 52 53 54
## -3.4766440 0.4834367 6.2839790 2.0109396 -1.5846631 -2.2166613
## 55 56 57 58 59 60
## 2.2033140 1.9328736 -1.8301204 -1.5401430 2.5298509 -3.3705782
## 61 62 63 64 65 66
## -2.9380806 0.6590736 -0.5917559 -8.6131971 5.9781035 5.9332979
## 67 68 69 70 71 72
## -1.2341956 3.0047669 -1.0802468 6.5174672 -3.0155469 2.1176720
## 73 74 75 76 77 78
## 0.6058757 -2.7237421 -2.2291472 -1.4053822 4.7461491 11.7495569
## 79 80 81 82 83 84
## 4.7634141 2.6620908 -11.0345292 -0.7588667 1.4558227 -0.4745727
## 85 86 87 88 89 90
## 8.9091201 -1.1409777 0.7555223 -0.4107130 0.8797237 -1.4095586
## 91 92 93 94 95 96
## 3.1571385 -3.4205757 -5.7228699 -2.2033958 -3.8647891 0.4982711
## 97 98 99 100
## -5.5249495 4.1134221 -8.2038533 5.6800859
```
One drawback to using ordinary residuals is that they’re always on a different scale, depending on what the outcome variable is and how good the regression model is. That is, unless you’ve decided to run a regression model without an intercept term, the ordinary residuals will have mean 0; but the variance is different for every regression. In a lot of contexts, especially where you’re only interested in the pattern of the residuals and not their actual values, it’s convenient to estimate the standardised residuals, which are normalised in such a way as to have standard deviation 1. The way we calculate these is to divide the ordinary residual by an estimate of the (population) standard deviation of these residuals. For technical reasons, mumble mumble, the formula for this is: \[ \epsilon_{i}^\prime = \frac{\epsilon_i}{\hat{\sigma} \sqrt{1-h_i}} \] where \(\hat\sigma\) in this context is the estimated population standard deviation of the ordinary residuals, and \(h_i\) is the “hat value” of the \(i\)th observation. I haven’t explained hat values to you yet (but have no fear,221 it’s coming shortly), so this won’t make a lot of sense. For now, it’s enough to interpret the standardised residuals as if we’d converted the ordinary residuals to \(z\)-scores. In fact, that is more or less the truth; it’s just that we’re being a bit fancier. To get the standardised residuals, the command you want is this:
```
rstandard( model = regression.2 )
```
```
## 1 2 3 4 5 6
## -0.49675845 1.10430571 0.46361264 -0.47725357 0.16756281 -0.09488969
## 7 8 9 10 11 12
## 0.05286626 -0.39260381 0.04739691 0.89033990 0.95851248 -1.13898701
## 13 14 15 16 17 18
## 0.28047841 0.11519184 -0.61657092 -0.09191865 0.77692937 0.40403495
## 19 20 21 22 23 24
## -0.11552373 -1.31540412 -0.10819238 0.62951824 2.17129803 0.06586227
## 25 26 27 28 29 30
## -0.11980449 -0.34704024 1.91121833 0.45686516 -0.34986350 0.81233165
## 31 32 33 34 35 36
## -2.08659993 -0.66317843 1.42930082 -1.77763064 1.07452436 -2.47385780
## 37 38 39 40 41 42
## -1.32715114 1.49419658 -0.49115639 -0.11674947 0.04401233 1.11881912
## 43 44 45 46 47 48
## -1.27081641 -1.46422595 -1.06943700 -1.24659673 0.94152881 0.51069809
## 49 50 51 52 53 54
## -0.81373349 0.11412178 1.47938594 0.46437962 -0.37157009 -0.51609949
## 55 56 57 58 59 60
## 0.51800753 0.44813204 -0.42662358 -0.35575611 0.58403297 -0.78022677
## 61 62 63 64 65 66
## -0.67833325 0.15484699 -0.13760574 -2.05662232 1.40238029 1.37505125
## 67 68 69 70 71 72
## -0.28964989 0.69497632 -0.24945316 1.50709623 -0.69864682 0.49071427
## 73 74 75 76 77 78
## 0.14267297 -0.63246560 -0.51972828 -0.32509811 1.10842574 2.72171671
## 79 80 81 82 83 84
## 1.09975101 0.62057080 -2.55172097 -0.17584803 0.34340064 -0.11158952
## 85 86 87 88 89 90
## 2.10863391 -0.26386516 0.17624445 -0.09504416 0.20450884 -0.32730740
## 91 92 93 94 95 96
## 0.73475640 -0.79400855 -1.32768248 -0.51940736 -0.91512580 0.11661226
## 97 98 99 100
## -1.28069115 0.96332849 -1.90290258 1.31368144
```
Note that this function uses a different name for the input argument, but it’s still just a linear regression object that the function wants to take as its input here.
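If you want to verify the formula for yourself, here’s a minimal sketch. Base R’s `hatvalues()` function extracts the \(h_i\) values (more on those in Section 15.9.2), and the residual standard error reported by `summary()` plays the role of \(\hat\sigma\):

```
e <- residuals( regression.2 )        # ordinary residuals
h <- hatvalues( regression.2 )        # hat values
s <- summary( regression.2 )$sigma    # residual standard error, 4.354 for this model
head( e / ( s * sqrt(1 - h) ) )       # should match head( rstandard( regression.2 ) )
```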
The third kind of residuals are Studentised residuals (also called “jackknifed residuals”) and they’re even fancier than standardised residuals. Again, the idea is to take the ordinary residual and divide it by some quantity in order to estimate some standardised notion of the residual, but the formula for doing the calculations this time is subtly different: \[ \epsilon_{i}^* = \frac{\epsilon_i}{\hat{\sigma}_{(-i)} \sqrt{1-h_i}} \] Notice that our estimate of the standard deviation here is written \(\hat{\sigma}_{(-i)}\). What this corresponds to is the estimate of the residual standard deviation that you would have obtained, if you just deleted the \(i\)th observation from the data set. This sounds like the sort of thing that would be a nightmare to calculate, since it seems to be saying that you have to run \(N\) new regression models (even a modern computer might grumble a bit at that, especially if you’ve got a large data set). Fortunately, some terribly clever person has shown that this standard deviation estimate is actually given by the following equation: \[ \hat\sigma_{(-i)} = \hat{\sigma} \ \sqrt{\frac{N-K-1 - {\epsilon_{i}^\prime}^2}{N-K-2}} \] Isn’t that a pip? Anyway, the command that you would use if you wanted to pull out the Studentised residuals for our regression model is
```
rstudent( model = regression.2 )
```
```
## 1 2 3 4 5 6
## -0.49482102 1.10557030 0.46172854 -0.47534555 0.16672097 -0.09440368
## 7 8 9 10 11 12
## 0.05259381 -0.39088553 0.04715251 0.88938019 0.95810710 -1.14075472
## 13 14 15 16 17 18
## 0.27914212 0.11460437 -0.61459001 -0.09144760 0.77533036 0.40228555
## 19 20 21 22 23 24
## -0.11493461 -1.32043609 -0.10763974 0.62754813 2.21456485 0.06552336
## 25 26 27 28 29 30
## -0.11919416 -0.34546127 1.93818473 0.45499388 -0.34827522 0.81089646
## 31 32 33 34 35 36
## -2.12403286 -0.66125192 1.43712830 -1.79797263 1.07539064 -2.54258876
## 37 38 39 40 41 42
## -1.33244515 1.50388257 -0.48922682 -0.11615428 0.04378531 1.12028904
## 43 44 45 46 47 48
## -1.27490649 -1.47302872 -1.07023828 -1.25020935 0.94097261 0.50874322
## 49 50 51 52 53 54
## -0.81230544 0.11353962 1.48863006 0.46249410 -0.36991317 -0.51413868
## 55 56 57 58 59 60
## 0.51604474 0.44627831 -0.42481754 -0.35414868 0.58203894 -0.77864171
## 61 62 63 64 65 66
## -0.67643392 0.15406579 -0.13690795 -2.09211556 1.40949469 1.38147541
## 67 68 69 70 71 72
## -0.28827768 0.69311245 -0.24824363 1.51717578 -0.69679156 0.48878534
## 73 74 75 76 77 78
## 0.14195054 -0.63049841 -0.51776374 -0.32359434 1.10974786 2.81736616
## 79 80 81 82 83 84
## 1.10095270 0.61859288 -2.62827967 -0.17496714 0.34183379 -0.11101996
## 85 86 87 88 89 90
## 2.14753375 -0.26259576 0.17536170 -0.09455738 0.20349582 -0.32579584
## 91 92 93 94 95 96
## 0.73300184 -0.79248469 -1.33298848 -0.51744314 -0.91435205 0.11601774
## 97 98 99 100
## -1.28498273 0.96296745 -1.92942389 1.31867548
```
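And if you’re curious whether that shortcut formula for \(\hat\sigma_{(-i)}\) really works, a quick sketch that rebuilds the Studentised residuals without refitting any models:

```
e.std <- rstandard( regression.2 )    # standardised residuals
s <- summary( regression.2 )$sigma    # residual standard error
N <- 100; K <- 2                      # sample size and number of predictors
s.minus.i <- s * sqrt( (N - K - 1 - e.std^2) / (N - K - 2) )   # leave-one-out sigma estimates
head( residuals( regression.2 ) / ( s.minus.i * sqrt(1 - hatvalues( regression.2 )) ) )
# should match head( rstudent( regression.2 ) )
```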
Before moving on, I should point out that you don’t often need to manually extract these residuals yourself, even though they are at the heart of almost all regression diagnostics. That is, the `residuals()` , `rstandard()` and `rstudent()` functions are all useful to know about, but most of the time the various functions that run the diagnostics will take care of these calculations for you. Even so, it’s always nice to know how to actually get hold of these things yourself in case you ever need to do something non-standard.
### 15.9.2 Three kinds of anomalous data
One danger that you can run into with linear regression models is that your analysis might be disproportionately sensitive to a smallish number of “unusual” or “anomalous” observations. I discussed this idea previously in Section 6.5.2 in the context of discussing the outliers that get automatically identified by the `boxplot()` function, but this time we need to be much more precise. In the context of linear regression, there are three conceptually distinct ways in which an observation might be called “anomalous”. All three are interesting, but they have rather different implications for your analysis.
The first way in which an observation can be unusual is if it is an outlier: an observation that is very different from what the regression model predicts, so that it lies a long way from the regression line and has a very large residual.

The second way in which an observation can be unusual is if it has high leverage: this happens when the observation is very different from all the other observations. This doesn’t necessarily have to correspond to a large residual: if the observation happens to be unusual on all variables in precisely the same way, it can actually lie very close to the regression line. An example of this is shown in Figure 15.8. The leverage of an observation is operationalised in terms of its hat value, usually written \(h_i\). The formula for the hat value is rather complicated222 but its interpretation is not: \(h_i\) is a measure of the extent to which the \(i\)-th observation is “in control” of where the regression line ends up going. You can extract the hat values using the following command:
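The function that does this is `hatvalues()` in base R:

```
hatvalues( model = regression.2 )
```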
```
## 1 2 3 4 5 6
## 0.02067452 0.04105320 0.06155445 0.01685226 0.02734865 0.03129943
## 7 8 9 10 11 12
## 0.02735579 0.01051224 0.03698976 0.01229155 0.08189763 0.01882551
## 13 14 15 16 17 18
## 0.02462902 0.02718388 0.01964210 0.01748592 0.01691392 0.03712530
## 19 20 21 22 23 24
## 0.04213891 0.02994643 0.02099435 0.01233280 0.01853370 0.01804801
## 25 26 27 28 29 30
## 0.06722392 0.02214927 0.04472007 0.01039447 0.01381812 0.01105817
## 31 32 33 34 35 36
## 0.03468260 0.04048248 0.03814670 0.04934440 0.05107803 0.02208177
## 37 38 39 40 41 42
## 0.02919013 0.05928178 0.02799695 0.01519967 0.04195751 0.02514137
## 43 44 45 46 47 48
## 0.04267879 0.04517340 0.03558080 0.03360160 0.05019778 0.04587468
## 49 50 51 52 53 54
## 0.03701290 0.05331282 0.04814477 0.01072699 0.04047386 0.02681315
## 55 56 57 58 59 60
## 0.04556787 0.01856997 0.02919045 0.01126069 0.01012683 0.01546412
## 61 62 63 64 65 66
## 0.01029534 0.04428870 0.02438944 0.07469673 0.04135090 0.01775697
## 67 68 69 70 71 72
## 0.04217616 0.01384321 0.01069005 0.01340216 0.01716361 0.01751844
## 73 74 75 76 77 78
## 0.04863314 0.02158623 0.02951418 0.01411915 0.03276064 0.01684599
## 79 80 81 82 83 84
## 0.01028001 0.02920514 0.01348051 0.01752758 0.05184527 0.04583604
## 85 86 87 88 89 90
## 0.05825858 0.01359644 0.03054414 0.01487724 0.02381348 0.02159418
## 91 92 93 94 95 96
## 0.02598661 0.02093288 0.01982480 0.05063492 0.05907629 0.03682026
## 97 98 99 100
## 0.01817919 0.03811718 0.01945603 0.01373394
```
In general, if an observation lies far away from the other ones in terms of the predictor variables, it will have a large hat value (as a rough guide, high leverage is when the hat value is more than 2-3 times the average; and note that the sum of the hat values is constrained to be equal to \(K+1\)). High leverage points are also worth looking at in more detail, but they’re much less likely to be a cause for concern unless they are also outliers.
This brings us to our third measure of unusualness, the influence of an observation. A high influence observation is an outlier that has high leverage. That is, it is an observation that is very different to all the other ones in some respect, and also lies a long way from the regression line. This is illustrated in Figure 15.9. Notice the contrast to the previous two figures: outliers don’t move the regression line much, and neither do high leverage points. But something that is an outlier and has high leverage… that has a big effect on the regression line.
That’s why we call these points high influence; and it’s why they’re the biggest worry. We operationalise influence in terms of a measure known as Cook’s distance, \[ D_i = \frac{{\epsilon_i^*}^2 }{K+1} \times \frac{h_i}{1-h_i} \] Notice that this is a multiplication of something that measures the outlier-ness of the observation (the bit on the left), and something that measures the leverage of the observation (the bit on the right). In other words, in order to have a large Cook’s distance, an observation must be a fairly substantial outlier and have high leverage. In a stunning turn of events, you can obtain these values using the following command:
```
cooks.distance( model = regression.2 )
```
```
## 1 2 3 4 5
## 1.736512e-03 1.740243e-02 4.699370e-03 1.301417e-03 2.631557e-04
## 6 7 8 9 10
## 9.697585e-05 2.620181e-05 5.458491e-04 2.876269e-05 3.288277e-03
## 11 12 13 14 15
## 2.731835e-02 8.296919e-03 6.621479e-04 1.235956e-04 2.538915e-03
## 16 17 18 19 20
## 5.012283e-05 3.461742e-03 2.098055e-03 1.957050e-04 1.780519e-02
## 21 22 23 24 25
## 8.367377e-05 1.649478e-03 2.967594e-02 2.657610e-05 3.448032e-04
## 26 27 28 29 30
## 9.093379e-04 5.699951e-02 7.307943e-04 5.716998e-04 2.459564e-03
## 31 32 33 34 35
## 5.214331e-02 6.185200e-03 2.700686e-02 5.467345e-02 2.071643e-02
## 36 37 38 39 40
## 4.606378e-02 1.765312e-02 4.689817e-02 2.316122e-03 7.012530e-05
## 41 42 43 44 45
## 2.827824e-05 1.076083e-02 2.399931e-02 3.381062e-02 1.406498e-02
## 46 47 48 49 50
## 1.801086e-02 1.561699e-02 4.179986e-03 8.483514e-03 2.444787e-04
## 51 52 53 54 55
## 3.689946e-02 7.794472e-04 1.941235e-03 2.446230e-03 4.270361e-03
## 56 57 58 59 60
## 1.266609e-03 1.824212e-03 4.804705e-04 1.163181e-03 3.187235e-03
## 61 62 63 64 65
## 1.595512e-03 3.703826e-04 1.577892e-04 1.138165e-01 2.827715e-02
## 66 67 68 69 70
## 1.139374e-02 1.231422e-03 2.260006e-03 2.241322e-04 1.028479e-02
## 71 72 73 74 75
## 2.841329e-03 1.431223e-03 3.468538e-04 2.941757e-03 2.738249e-03
## 76 77 78 79 80
## 5.045357e-04 1.387108e-02 4.230966e-02 4.187440e-03 3.861831e-03
## 81 82 83 84 85
## 2.965826e-02 1.838888e-04 2.149369e-03 1.993929e-04 9.168733e-02
## 86 87 88 89 90
## 3.198994e-04 3.262192e-04 4.547383e-05 3.400893e-04 7.881487e-04
## 91 92 93 94 95
## 4.801204e-03 4.493095e-03 1.188427e-02 4.796360e-03 1.752666e-02
## 96 97 98 99 100
## 1.732793e-04 1.012302e-02 1.225818e-02 2.394964e-02 8.010508e-03
```
As a rough guide, Cook’s distance greater than 1 is often considered large (that’s what I typically use as a quick and dirty rule), though a quick scan of the internet and a few papers suggests that \(4/N\) has also been suggested as a possible rule of thumb.
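For instance, here’s a quick sketch of how you might flag observations using the \(4/N\) rule of thumb for these data (\(N = 100\)):

```
cd <- cooks.distance( regression.2 )
which( cd > 4 / 100 )    # observations exceeding the 4/N rule of thumb
max( cd )                # the largest Cook's distance (observation 64, about 0.11)
```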
As hinted above, you don’t usually need to make use of these functions, since you can have R automatically draw the critical plots.223 For the `regression.2` model, these are the plots showing Cook’s distance (Figure 15.10) and the more detailed breakdown showing the scatter plot of the Studentised residual against leverage (Figure 15.11). To draw these, we can use the `plot()` function. When the main argument `x` to this function is a linear model object, it will draw one of six different plots, each of which is quite useful for doing regression diagnostics. You specify which one you want using the `which` argument (a number between 1 and 6). If you don’t do this then R will draw all six. The two plots of interest to us in this context are generated using `plot(x = regression.2, which = 4)` for the Cook’s distance plot and `plot(x = regression.2, which = 5)` for the residual-versus-leverage plot.

An obvious question to ask next is, if you do have large values of Cook’s distance, what should you do? As always, there are no hard and fast rules. Probably the first thing to do is to try running the regression with that point excluded and see what happens to the model performance and to the regression coefficients. If they really are substantially different, it’s time to start digging into your data set and your notes that you no doubt were scribbling as you ran your study; try to figure out why the point is so different. If you start to become convinced that this one data point is badly distorting your results, you might consider excluding it, but that’s less than ideal unless you have a solid explanation for why this particular case is qualitatively different from the others and therefore deserves to be handled separately.224 To give an example, let’s delete the observation from day 64, the observation with the largest Cook’s distance for the `regression.2` model. We can do this using the `subset` argument:
```
lm( formula = dan.grump ~ dan.sleep + baby.sleep, # same formula
data = parenthood, # same data frame...
subset = -64 # ...but observation 64 is deleted
)
```
```
##
## Call:
## lm(formula = dan.grump ~ dan.sleep + baby.sleep, data = parenthood,
## subset = -64)
##
## Coefficients:
## (Intercept) dan.sleep baby.sleep
## 126.3553 -8.8283 -0.1319
```
As you can see, those regression coefficients have barely changed in comparison to the values we got earlier. In other words, we really don’t have any problem as far as anomalous data are concerned.
### 15.9.3 Checking the normality of the residuals
Like many of the statistical tools we’ve discussed in this book, regression models rely on a normality assumption. In this case, we assume that the residuals are normally distributed. The tools for testing this aren’t fundamentally different to those that we discussed earlier in Section 13.9. Firstly, I firmly believe that it never hurts to draw an old fashioned histogram. The command I use might be something like this:
```
hist( x = residuals( regression.2 ), # data are the residuals
xlab = "Value of residual", # x-axis label
main = "", # no title
breaks = 20 # lots of breaks
)
```
The resulting plot is shown in Figure 15.12, and as you can see the plot looks pretty damn close to normal, almost unnaturally so.
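The Shapiro-Wilk check described in the next paragraph can also be run directly on the residuals; a minimal sketch, assuming `regression.2` as before:

```
shapiro.test( residuals( regression.2 ) )   # for these data: W about .99, p about .84
```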
I could also run a Shapiro-Wilk test to check, using the `shapiro.test()` function; the \(W\) value of .99, at this sample size, is non-significant (\(p=.84\)), again suggesting that the normality assumption isn’t in any danger here. As a third measure, we might also want to draw a QQ-plot using the `qqnorm()` function. The QQ plot is an excellent one to draw, and so you might not be surprised to discover that it’s one of the regression plots that we can produce using the `plot()` function:
```
plot( x = regression.2, which = 2 )  # Figure 15.13
```
The output is shown in Figure 15.13, showing the standardised residuals plotted as a function of their theoretical quantiles according to the regression model. The fact that the output appends the model specification to the picture is nice.
### 15.9.4 Checking the linearity of the relationship
The third thing we might want to test is the linearity of the relationships between the predictors and the outcomes. There’s a few different things that you might want to do in order to check this. Firstly, it never hurts to just plot the relationship between the fitted values \(\hat{Y}_i\) and the observed values \(Y_i\) for the outcome variable, as illustrated in Figure 15.14. To draw this we could use the `fitted.values()` function to extract the \(\hat{Y_i}\) values in much the same way that we used the `residuals()` function to extract the \(\epsilon_i\) values. So the commands to draw this figure might look like this:
```
yhat.2 <- fitted.values( object = regression.2 )
plot( x = yhat.2,
y = parenthood$dan.grump,
xlab = "Fitted Values",
ylab = "Observed Values"
)
```
One of the reasons I like to draw these plots is that they give you a kind of “big picture view”. If this plot looks approximately linear, then we’re probably not doing too badly (though that’s not to say that there aren’t problems). However, if you can see big departures from linearity here, then it strongly suggests that you need to make some changes.
In any case, in order to get a more detailed picture it’s often more informative to look at the relationship between the fitted values and the residuals themselves. Again, we could draw this plot using low level commands, but there’s an easier way. Just `plot()` the regression model, and select `which = 1` :
```
plot(x = regression.2, which = 1)
```
The output is shown in Figure 15.15. As you can see, not only does it draw the scatterplot showing the fitted value against the residuals, it also plots a line through the data that shows the relationship between the two. Ideally, this should be a straight, perfectly horizontal line. There’s some hint of curvature here, but it’s not clear whether or not we should be concerned.
A somewhat more advanced version of the same plot is produced by the `residualPlots()` function in the `car` package. This function not only draws plots comparing the fitted values to the residuals, it does so for each individual predictor. The command is shown below, and the resulting plots are shown in Figure 15.16.
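Assuming the `car` package is loaded, the command looks like this:

```
residualPlots( model = regression.2 )
```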
```
## Test stat Pr(>|Test stat|)
## dan.sleep 2.1604 0.03323 *
## baby.sleep -0.5445 0.58733
## Tukey test 2.1615 0.03066 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Note that this function also reports the results of a bunch of curvature tests. For a predictor variable \(X\) in some regression model, this test is equivalent to adding a new predictor to the model corresponding to \(X^2\), and running the \(t\)-test on the \(b\) coefficient associated with this new predictor. If it comes up significant, it implies that there is some nonlinear relationship between the variable and the residuals.
The third line here is the Tukey test, which is basically the same test, except that instead of squaring one of the predictors and adding it to the model, you square the fitted-value. In any case, the fact that the curvature tests have come up significant is hinting that the curvature that we can see in Figures 15.15 and 15.16 is genuine;225 although it still bears remembering that the pattern in Figure 15.14 is pretty damn straight: in other words the deviations from linearity are pretty small, and probably not worth worrying about.
In a lot of cases, the solution to this problem (and many others) is to transform one or more of the variables. We discussed the basics of variable transformation in Sections 7.2 and (mathfunc), but I do want to make special note of one additional possibility that I didn’t mention earlier: the Box-Cox transform. The Box-Cox function is a fairly simple one, but it’s very widely used \[ f(x,\lambda) = \frac{x^\lambda - 1}{\lambda} \] for all values of \(\lambda\) except \(\lambda = 0\). When \(\lambda = 0\) we just take the natural logarithm (i.e., \(\ln(x)\)). You can calculate it using the `boxCox()` function in the `car` package. Better yet, if what you’re trying to do is convert your data to normal, or as normal as possible, there’s the `powerTransform()` function in the `car` package that can estimate the best value of \(\lambda\). Variable transformation is another topic that deserves a fairly detailed treatment, but (again) due to deadline constraints, it will have to wait until a future version of this book.
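Just to illustrate the formula itself (in practice you’d use the `car` functions mentioned above), here’s what the transform looks like written out by hand; the choice of \(\lambda = 0.5\) in the usage line is arbitrary:

```
box.cox <- function(x, lambda) {
  if (lambda == 0) {
    log(x)                   # natural logarithm when lambda is zero
  } else {
    (x^lambda - 1) / lambda  # the general Box-Cox formula
  }
}
box.cox( parenthood$dan.sleep, lambda = 0.5 )   # e.g., a square-root-style transformation
```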
### 15.9.5 Checking the homogeneity of variance
The regression models that we’ve talked about all make a homogeneity of variance assumption: the variance of the residuals is assumed to be constant. The “default” plot that R provides to help with doing this ( `which = 3` when using `plot()` ) shows a plot of the square root of the size of the residual \(\sqrt{|\epsilon_i|}\), as a function of the fitted value \(\hat{Y}_i\). We can produce the plot using the following command,
```
plot(x = regression.2, which = 3)
```
and the resulting plot is shown in Figure 15.17. Note that this plot actually uses the standardised residuals (i.e., converted to \(z\) scores) rather than the raw ones, but it’s immaterial from our point of view. What we’re looking to see here is a straight, horizontal line running through the middle of the plot.
A slightly more formal approach is to run hypothesis tests. The `car` package provides a function called `ncvTest()` (non-constant variance test) that can be used for this purpose (Cook and Weisberg 1983). I won’t explain the details of how it works, other than to say that the idea is to run a regression to see if there is a relationship between the squared residuals \(\epsilon_i^2\) and the fitted values \(\hat{Y}_i\), or possibly to run a regression using all of the original predictors instead of just \(\hat{Y}_i\).226 Using the default settings, the `ncvTest()` looks for a relationship between \(\hat{Y}_i\) and the variance of the residuals, making it a straightforward analogue of Figure 15.17. So if we run it for our model,
```
ncvTest( regression.2 )
```
```
## Non-constant Variance Score Test
## Variance formula: ~ fitted.values
## Chisquare = 0.09317511, Df = 1, p = 0.76018
```
We see that our original impression was right: there are no violations of homogeneity of variance in these data.
It’s a bit beyond the scope of this chapter to talk too much about how to deal with violations of homogeneity of variance, but I’ll give you a quick sense of what you need to consider. The main thing to worry about, if homogeneity of variance is violated, is that the standard error estimates associated with the regression coefficients are no longer entirely reliable, and so your \(t\) tests for the coefficients aren’t quite right either. A simple fix to the problem is to make use of a “heteroscedasticity corrected covariance matrix” when estimating the standard errors. These are often called sandwich estimators, for reasons that only make sense if you understand the maths at a low level.227 The version implemented as the default in the `hccm()` function is a tweak on this, proposed by <NAME> (2000); it uses \(\Sigma = \mbox{diag}(\epsilon_i^2/(1-h_i)^2)\), where \(h_i\) is the \(i\)th hat value. Gosh, regression is fun, isn’t it? You don’t need to understand what this means (not for an introductory class), but it might help to note that there’s a `hccm()` function in the `car` package that does it. Better yet, you don’t even need to use it. You can use the `coeftest()` function in the `lmtest` package, but you need the `car` package loaded:
```
library(lmtest)
library(car)
coeftest( regression.2, vcov= hccm )
```
```
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 125.965566 3.247285 38.7910 <2e-16 ***
## dan.sleep -8.950250 0.615820 -14.5339 <2e-16 ***
## baby.sleep 0.010524 0.291565 0.0361 0.9713
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Not surprisingly, these \(t\) tests are pretty much identical to the ones that we saw when we used the
```
summary(regression.2)
```
command earlier, because the homogeneity of variance assumption wasn’t violated. But if it had been, we might have seen some more substantial differences.
### 15.9.6 Checking for collinearity
The last kind of regression diagnostic that I’m going to discuss in this chapter is the use of variance inflation factors (VIFs), which are useful for determining whether or not the predictors in your regression model are too highly correlated with each other. There is a variance inflation factor associated with each predictor \(X_k\) in the model, and the formula for the \(k\)-th VIF is: \[ \mbox{VIF}_k = \frac{1}{1-{R^2_{(-k)}}} \] where \(R^2_{(-k)}\) refers to the \(R\)-squared value you would get if you ran a regression using \(X_k\) as the outcome variable, and all the other \(X\) variables as the predictors. The idea here is that \(R^2_{(-k)}\) is a very good measure of the extent to which \(X_k\) is correlated with all the other variables in the model. Better yet, the square root of the VIF is pretty interpretable: it tells you how much wider the confidence interval for the corresponding coefficient \(b_k\) is, relative to what you would have expected if the predictors were all nicely uncorrelated with one another. If you’ve only got two predictors, the VIF values are always going to be the same, as we can see if we use the `vif()` function ( `car` package)…
```
vif( mod = regression.2 )
```
```
## dan.sleep baby.sleep
## 1.651038 1.651038
```
And since the square root of 1.65 is 1.28, we see that the correlation between our two predictors isn’t causing much of a problem.
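If you’d like to see where that 1.65 comes from, here’s a quick sketch that computes the VIF for `dan.sleep` by hand, by regressing it on the only other predictor in `regression.2`:
```
r.sq <- summary( lm( dan.sleep ~ baby.sleep, parenthood ) )$r.squared
1 / (1 - r.sq)   # roughly 1.65, matching the vif() output
```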
To give a sense of how we could end up with a model that has bigger collinearity problems, suppose I were to run a much less interesting regression model, in which I tried to predict the `day` on which the data were collected, as a function of all the other variables in the data set. To see why this would be a bit of a problem, let’s have a look at the correlation matrix for all four variables: `cor( parenthood )`
We have some fairly large correlations between some of our predictor variables! When we run the regression model and look at the VIF values, we see that the collinearity is causing a lot of uncertainty about the coefficients. First, run the regression…
```
regression.3 <- lm( day ~ baby.sleep + dan.sleep + dan.grump, parenthood )
```
and second, look at the VIFs…
`vif( regression.3 )`
```
## baby.sleep dan.sleep dan.grump
## 1.651064 6.102337 5.437903
```
Yep, that’s some mighty fine collinearity you’ve got there.
## 15.10 Model selection
One fairly major problem that remains is the problem of “model selection”. That is, if we have a data set that contains several variables, which ones should we include as predictors, and which ones should we not include? In other words, we have a problem of variable selection. In general, model selection is a complex business, but it’s made somewhat simpler if we restrict ourselves to the problem of choosing a subset of the variables that ought to be included in the model. Nevertheless, I’m not going to try covering even this reduced topic in a lot of detail. Instead, I’ll talk about two broad principles that you need to think about; and then discuss one concrete tool that R provides to help you select a subset of variables to include in your model. Firstly, the two principles:
* It’s nice to have an actual substantive basis for your choices. That is, in a lot of situations you the researcher have good reasons to pick out a smallish number of possible regression models that are of theoretical interest; these models will have a sensible interpretation in the context of your field. Never discount the importance of this. Statistics serves the scientific process, not the other way around.
* To the extent that your choices rely on statistical inference, there is a trade off between simplicity and goodness of fit. As you add more predictors to the model, you make it more complex; each predictor adds a new free parameter (i.e., a new regression coefficient), and each new parameter increases the model’s capacity to “absorb” random variations. So the goodness of fit (e.g., \(R^2\)) continues to rise as you add more predictors no matter what. If you want your model to be able to generalise well to new observations, you need to avoid throwing in too many variables.
This latter principle is often referred to as Ockham’s razor, and is often summarised in terms of the following pithy saying: do not multiply entities beyond necessity. In this context, it means: don’t chuck in a bunch of largely irrelevant predictors just to boost your \(R^2\). Hm. Yeah, the original was better.
In any case, what we need is an actual mathematical criterion that will implement the qualitative principle behind Ockham’s razor in the context of selecting a regression model. As it turns out there are several possibilities. The one that I’ll talk about is the Akaike information criterion (AIC; Akaike 1974) simply because it’s the default one used in the R function `step()` . In the context of a linear regression model (and ignoring terms that don’t depend on the model in any way!), the AIC for a model that has \(K\) predictor variables plus an intercept is:228 \[
\mbox{AIC} = \frac{\mbox{SS}_{res}}{\hat{\sigma}^2} + 2K
\] The smaller the AIC value, the better the model performance is. If we ignore the low level details, it’s fairly obvious what the AIC does: on the left we have a term that increases as the model predictions get worse; on the right we have a term that increases as the model complexity increases. The best model is the one that fits the data well (low residuals; left hand side) using as few predictors as possible (low \(K\); right hand side). In short, this is a simple implementation of Ockham’s razor.
### 15.10.1 Backward elimination
Okay, let’s have a look at the `step()` function at work. In this example I’ll keep it simple and use only the basic backward elimination approach. That is, start with the complete regression model, including all possible predictors. Then, at each “step” we try all possible ways of removing one of the variables, and whichever of these is best (in terms of lowest AIC value) is accepted. This becomes our new regression model; and we then try all possible deletions from the new model, again choosing the option with lowest AIC. This process continues until we end up with a model that has a lower AIC value than any of the other possible models that you could produce by deleting one of its predictors. Let’s see this in action. First, I need to define the model from which the process starts.
```
full.model <- lm( formula = dan.grump ~ dan.sleep + baby.sleep + day,
data = parenthood
)
```
That’s nothing terribly new: yet another regression. Booooring. Still, we do need to do it: the `object` argument to the `step()` function will be this regression model. With this in mind, I would call the `step()` function using the following command:
```
step( object = full.model, # start at the full model
      direction = "backward"  # allow it to remove predictors but not add them
)
```
```
## Start: AIC=299.08
## dan.grump ~ dan.sleep + baby.sleep + day
##
## Df Sum of Sq RSS AIC
## - baby.sleep 1 0.1 1837.2 297.08
## - day 1 1.6 1838.7 297.16
## <none> 1837.1 299.08
## - dan.sleep 1 4909.0 6746.1 427.15
##
## Step: AIC=297.08
## dan.grump ~ dan.sleep + day
##
## Df Sum of Sq RSS AIC
## - day 1 1.6 1838.7 295.17
## <none> 1837.2 297.08
## - dan.sleep 1 8103.0 9940.1 463.92
##
## Step: AIC=295.17
## dan.grump ~ dan.sleep
##
## Df Sum of Sq RSS AIC
## <none> 1838.7 295.17
## - dan.sleep 1 8159.9 9998.6 462.50
```
although in practice I didn’t need to specify `direction` because `"backward"` is the default. The output is somewhat lengthy, so I’ll go through it slowly. Firstly, the output reports the AIC value for the current best model:
```
Start: AIC=299.08
dan.grump ~ dan.sleep + baby.sleep + day
```
That’s our starting point. Since small AIC values are good, we want to see if we can get a value smaller than 299.08 by deleting one of those three predictors. So what R does is try all three possibilities, calculate the AIC values for each one, and then print out a short table with the results:
```
Df Sum of Sq RSS AIC
- baby.sleep 1 0.1 1837.2 297.08
- day 1 1.6 1838.7 297.16
<none> 1837.1 299.08
- dan.sleep 1 4909.0 6746.1 427.15
```
To read this table, it helps to note that the text in the left hand column is telling you what change R made to the regression model. So the line that reads `<none>` is the actual model we started with, and you can see on the right hand side that this still corresponds to an AIC value of 299.08 (obviously). The other three rows in the table correspond to the other three models that it looked at: it tried removing the `baby.sleep` variable, which is indicated by `- baby.sleep` , and this produced an AIC value of 297.08. That was the best of the three moves, so it’s at the top of the table. So, this move is accepted, and now we start again. There are two predictors left in the model, `dan.sleep` and `day` , so it tries deleting those:
```
Step: AIC=297.08
dan.grump ~ dan.sleep + day
Df Sum of Sq RSS AIC
- day 1 1.6 1838.7 295.17
<none> 1837.2 297.08
- dan.sleep 1 8103.0 9940.1 463.92
```
Okay, so what we can see is that removing the `day` variable lowers the AIC value from 297.08 to 295.17. So R decides to keep that change too, and moves on:
```
Step: AIC=295.17
dan.grump ~ dan.sleep

Df Sum of Sq RSS AIC
<none> 1838.7 295.17
- dan.sleep 1 8159.9 9998.6 462.50
```
This time around, there are no further deletions that can actually improve the AIC value. So the `step()` function stops, and prints out the result of the best regression model it could find:
which is (perhaps not all that surprisingly) the `regression.1` model that we started with at the beginning of the chapter.
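One practical note: `step()` doesn’t just print this trace, it also returns the winning model as an ordinary fitted model object, so you can store it and keep working with it. A small sketch (setting `trace = 0` just suppresses the step-by-step printout):
```
final.model <- step( object = full.model, direction = "backward", trace = 0 )
summary( final.model )   # the same model as regression.1
```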
### 15.10.2 Forward selection
As an alternative, you can also try forward selection. This time around we start with the smallest possible model as our start point, and only consider the possible additions to the model. However, there’s one complication: you also need to tell `step()` what the largest possible model you’re willing to entertain is, using the `scope` argument. The simplest usage is like this:
```
null.model <- lm( dan.grump ~ 1, parenthood ) # intercept only.
step( object = null.model, # start with null.model
direction = "forward", # only consider "addition" moves
scope = dan.grump ~ dan.sleep + baby.sleep + day # largest model allowed
)
```
```
## Start: AIC=462.5
## dan.grump ~ 1
##
## Df Sum of Sq RSS AIC
## + dan.sleep 1 8159.9 1838.7 295.17
## + baby.sleep 1 3202.7 6795.9 425.89
## <none> 9998.6 462.50
## + day 1 58.5 9940.1 463.92
##
## Step: AIC=295.17
## dan.grump ~ dan.sleep
##
## Df Sum of Sq RSS AIC
## <none> 1838.7 295.17
## + day 1 1.55760 1837.2 297.08
## + baby.sleep 1 0.02858 1838.7 297.16
```
If I do this, the output takes on a similar form, but now it only considers addition ( `+` ) moves rather than deletion ( `-` ) moves:
```
Start: AIC=462.5
dan.grump ~ 1
Df Sum of Sq RSS AIC
+ dan.sleep 1 8159.9 1838.7 295.17
+ baby.sleep 1 3202.7 6795.9 425.89
<none> 9998.6 462.50
+ day 1 58.5 9940.1 463.92
Step: AIC=295.17
dan.grump ~ dan.sleep

Df Sum of Sq RSS AIC
<none> 1838.7 295.17
+ day 1 1.55760 1837.2 297.08
+ baby.sleep 1 0.02858 1838.7 297.16
```
As you can see, it’s found the same model. In general though, forward and backward selection don’t always have to end up in the same place.
### 15.10.3 A caveat
Automated variable selection methods are seductive things, especially when they’re bundled up in (fairly) simple functions like `step()` . They provide an element of objectivity to your model selection, and that’s kind of nice. Unfortunately, they’re sometimes used as an excuse for thoughtlessness. No longer do you have to think carefully about which predictors to add to the model and what the theoretical basis for their inclusion might be… everything is solved by the magic of AIC. And if we start throwing around phrases like Ockham’s razor, well, it sounds like everything is wrapped up in a nice neat little package that no-one can argue with. Or, perhaps not. Firstly, there’s very little agreement on what counts as an appropriate model selection criterion. When I was taught backward elimination as an undergraduate, we used \(F\)-tests to do it, because that was the default method used by the software. The default in the `step()` function is AIC, and since this is an introductory text that’s the only method I’ve described, but the AIC is hardly the Word of the Gods of Statistics. It’s an approximation, derived under certain assumptions, and it’s guaranteed to work only for large samples when those assumptions are met. Alter those assumptions and you get a different criterion, like the BIC for instance. Take a different approach again and you get the NML criterion. Decide that you’re a Bayesian and you get model selection based on posterior odds ratios. Then there are a bunch of regression specific tools that I haven’t mentioned. And so on. All of these different methods have strengths and weaknesses, and some are easier to calculate than others (AIC is probably the easiest of the lot, which might account for its popularity). Almost all of them produce the same answers when the answer is “obvious” but there’s a fair amount of disagreement when the model selection problem becomes hard. What does this mean in practice? Well, you could go and spend several years teaching yourself the theory of model selection, learning all the ins and outs of it; so that you could finally decide on what you personally think the right thing to do is. Speaking as someone who actually did that, I wouldn’t recommend it: you’ll probably come out the other side even more confused than when you started. A better strategy is to show a bit of common sense… if you’re staring at the results of a `step()` procedure, and the model that makes sense is close to having the smallest AIC, but is narrowly defeated by a model that doesn’t make any sense… trust your instincts. Statistical model selection is an inexact tool, and as I said at the beginning, interpretability matters.
### 15.10.4 Comparing two regression models
An alternative to using automated model selection procedures is for the researcher to explicitly select two or more regression models to compare to each other. You can do this in a few different ways, depending on what research question you’re trying to answer. Suppose we want to know whether or not the amount of sleep that my son got has any relationship to my grumpiness, over and above what we might expect from the amount of sleep that I got. We also want to make sure that the day on which we took the measurement has no influence on the relationship. That is, we’re interested in the relationship between `baby.sleep` and `dan.grump` , and from that perspective `dan.sleep` and `day` are nuisance variables or covariates that we want to control for. In this situation, what we would like to know is whether
`dan.grump ~ dan.sleep + day + baby.sleep`
(which I’ll call Model 1, or `M1` ) is a better regression model for these data than
`dan.grump ~ dan.sleep + day`
(which I’ll call Model 0, or `M0` ). There are two different ways we can compare these two models, one based on a model selection criterion like AIC, and the other based on an explicit hypothesis test. I’ll show you the AIC based approach first because it’s simpler, and follows naturally from the `step()` function that we saw in the last section. The first thing I need to do is actually run the regressions:
```
M0 <- lm( dan.grump ~ dan.sleep + day, parenthood )
M1 <- lm( dan.grump ~ dan.sleep + day + baby.sleep, parenthood )
```
Now that I have my regression models, I could use the `summary()` function to run various hypothesis tests and other useful statistics, just as we have discussed throughout this chapter. However, since the current focus is on model comparison, I’ll skip this step and go straight to the AIC calculations. Conveniently, the `AIC()` function in R lets you input several regression models, and it will spit out the AIC values for each of them:229 `AIC( M0, M1 )`
```
## df AIC
## M0 4 582.8681
## M1 5 584.8646
```
Since Model 0 has the smaller AIC value, it is judged to be the better model for these data.
A somewhat different approach to the problem comes out of the hypothesis testing framework. Suppose you have two regression models, where one of them (Model 0) contains a subset of the predictors from the other one (Model 1). That is, Model 1 contains all of the predictors included in Model 0, plus one or more additional predictors. When this happens we say that Model 0 is nested within Model 1, or possibly that Model 0 is a submodel of Model 1. Regardless of the terminology, what this means is that we can think of Model 0 as a null hypothesis and Model 1 as an alternative hypothesis. And in fact we can construct an \(F\) test for this in a fairly straightforward fashion. We can fit both models to the data and obtain a residual sum of squares for both models. I’ll denote these as SS\(_{res}^{(0)}\) and SS\(_{res}^{(1)}\) respectively. The superscripting here just indicates which model we’re talking about. Then our \(F\) statistic is \[ F = \frac{(\mbox{SS}_{res}^{(0)} - \mbox{SS}_{res}^{(1)})/k}{(\mbox{SS}_{res}^{(1)})/(N-p-1)} \] where \(N\) is the number of observations, \(p\) is the number of predictors in the full model (not including the intercept), and \(k\) is the difference in the number of parameters between the two models.230 The degrees of freedom here are \(k\) and \(N-p-1\). Note that it’s often more convenient to think about the difference between those two SS values as a sum of squares in its own right. That is: \[ \mbox{SS}_\Delta = \mbox{SS}_{res}^{(0)} - \mbox{SS}_{res}^{(1)} \] The reason why this is helpful is that we can express \(\mbox{SS}_\Delta\) as a measure of the extent to which the two models make different predictions about the outcome variable. Specifically: \[ \mbox{SS}_\Delta = \sum_{i} \left( \hat{y}_i^{(1)} - \hat{y}_i^{(0)} \right)^2 \] where \(\hat{y}_i^{(0)}\) is the fitted value for \(y_i\) according to model \(M_0\) and \(\hat{y}_i^{(1)}\) is the fitted value for \(y_i\) according to model \(M_1\).
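Before turning to the R function that does all of this for us, it can help to see that formula spelled out as code. Here’s a sketch that computes the \(F\) statistic by hand for the `M0` and `M1` models defined above (the variable names are just mine):
```
ss.res.0 <- sum( residuals(M0)^2 )             # SS_res for the null model
ss.res.1 <- sum( residuals(M1)^2 )             # SS_res for the full model
k <- length( coef(M1) ) - length( coef(M0) )   # extra parameters in M1
p <- length( coef(M1) ) - 1                    # predictors in the full model
N <- nobs( M1 )                                # number of observations
F.stat <- ( (ss.res.0 - ss.res.1) / k ) / ( ss.res.1 / (N - p - 1) )
F.stat                                         # the F statistic
pf( F.stat, df1 = k, df2 = N - p - 1, lower.tail = FALSE )   # and its p-value
```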
Okay, so that’s the hypothesis test that we use to compare two regression models to one another. Now, how do we do it in R? The answer is to use the `anova()` function. All we have to do is input the two models that we want to compare (null model first): `anova( M0, M1 )`
```
## Analysis of Variance Table
##
## Model 1: dan.grump ~ dan.sleep + day
## Model 2: dan.grump ~ dan.sleep + day + baby.sleep
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 97 1837.2
## 2 96 1837.1 1 0.063688 0.0033 0.9541
```
Note that, just like we saw with the output from the `step()` function, R has used the acronym `RSS` to refer to the residual sum of squares from each model. That is, RSS in this output corresponds to SS\(_{res}\) in the formula above. Since we have \(p>.05\) we retain the null hypothesis ( `M0` ). This approach to regression, in which we add all of our covariates into a null model, then add the variables of interest into an alternative model, and then compare the two models in a hypothesis testing framework, is often referred to as hierarchical regression.
## 15.11 Summary
* Basic ideas in linear regression and how regression models are estimated (Sections 15.1 and 15.2).
* Multiple linear regression (Section 15.3).
* Measuring the overall performance of a regression model using \(R^2\) (Section 15.4)
* Hypothesis tests for regression models (Section 15.5)
* Calculating confidence intervals for regression coefficients, and standardised coefficients (Section 15.7)
* The assumptions of regression (Section 15.8) and how to check them (Section 15.9)
* Selecting a regression model (Section 15.10)
The \(\epsilon\) symbol is the Greek letter epsilon. It’s traditional to use \(\epsilon_i\) or \(e_i\) to denote a residual.↩
Or at least, I’m assuming that it doesn’t help most people. But on the off chance that someone reading this is a proper kung fu master of linear algebra (and to be fair, I always have a few of these people in my intro stats class), it will help you to know that the solution to the estimation problem turns out to be \(\hat{b} = (X^TX)^{-1} X^T y\), where \(\hat{b}\) is a vector containing the estimated regression coefficients, \(X\) is the “design matrix” that contains the predictor variables (plus an additional column containing all ones; strictly \(X\) is a matrix of the regressors, but I haven’t discussed the distinction yet), and \(y\) is a vector containing the outcome variable. For everyone else, this isn’t exactly helpful, and can be downright scary. However, since quite a few things in linear regression can be written in linear algebra terms, you’ll see a bunch of footnotes like this one in this chapter. If you can follow the maths in them, great. If not, ignore it.↩
And by “sometimes” I mean “almost never”. In practice everyone just calls it “\(R\)-squared”.↩
Note that, although R has done multiple tests here, it hasn’t done a Bonferroni correction or anything. These are standard one-sample \(t\)-tests with a two-sided alternative. If you want to make corrections for multiple tests, you need to do that yourself.↩
You can change the kind of correction it applies by specifying the `p.adjust.method` argument.↩
Strictly, you standardise all the regressors: that is, every “thing” that has a regression coefficient associated with it in the model. For the regression models that I’ve talked about so far, each predictor variable maps onto exactly one regressor, and vice versa. However, that’s not actually true in general: we’ll see some examples of this in Chapter 16. But for now, we don’t need to care too much about this distinction.↩
Or have no hope, as the case may be.↩
Again, for the linear algebra fanatics: the “hat matrix” is defined to be that matrix \(H\) that converts the vector of observed values \(y\) into a vector of fitted values \(\hat{y}\), such that \(\hat{y} = H y\). The name comes from the fact that this is the matrix that “puts a hat on \(y\)”. The hat value of the \(i\)-th observation is the \(i\)-th diagonal element of this matrix (so technically I should be writing it as \(h_{ii}\) rather than \(h_{i}\)). Oh, and in case you care, here’s how it’s calculated: \(H = X(X^TX)^{-1} X^T\). Pretty, isn’t it?↩
Though special mention should be made of the `influenceIndexPlot()` and `influencePlot()` functions in the `car` package. These produce somewhat more detailed pictures than the default plots that I’ve shown here. There’s also an `outlierTest()` function that tests to see if any of the Studentised residuals are significantly larger than would be expected by chance.↩
An alternative is to run a “robust regression”; I’ll discuss robust regression in a later version of this book.↩
And, if you take the time to check the `residualPlots()` for `regression.1` , it’s pretty clear that this isn’t some wacky distortion being caused by the fact that `baby.sleep` is a useless predictor variable. It’s an actual nonlinearity in the relationship between `dan.sleep` and `dan.grump` .↩
Note that the underlying mechanics of the test aren’t the same as the ones I’ve described for regressions; the goodness of fit is assessed using what’s known as a score-test not an \(F\)-test, and the test statistic is (approximately) \(\chi^2\) distributed if there’s no relationship.↩
Again, a footnote that should be read only by the two readers of this book that love linear algebra (mmmm… I love the smell of matrix computations in the morning; smells like… nerd). In these estimators, the covariance matrix for \(b\) is given by \((X^T X)^{-1}\ X^T \Sigma X \ (X^T X)^{-1}\). See, it’s a “sandwich”? Assuming you think that \((X^T X)^{-1} = \mbox{"bread"}\) and \(X^T \Sigma X = \mbox{"filling"}\), that is. Which of course everyone does, right? In any case, the usual estimator is what you get when you set \(\Sigma = \hat\sigma^2 I\). The corrected version that I learned originally uses \(\Sigma = \mbox{diag}(\epsilon_i^2)\) (White 1980). However, the version that Fox and Weisberg (2011) have implemented as the default in the `hccm()` function is a tweak on this, proposed by Long and Ervin (2000). This version uses \(\Sigma = \mbox{diag}(\epsilon_i^2/(1-h_i)^2)\), where \(h_i\) is the \(i\)th hat value. Gosh, regression is fun, isn’t it?↩
Note, however, that the `step()` function computes the full version of AIC, including the irrelevant constants that I’ve dropped here. As a consequence this equation won’t correctly describe the AIC values that you see in the outputs here. However, if you calculate the AIC values using my formula for two different regression models and take the difference between them, this will be the same as the differences between AIC values that `step()` reports. In practice, this is all you care about: the actual value of an AIC statistic isn’t very informative, but the differences between two AIC values are useful, since these provide a measure of the extent to which one model outperforms another.↩
While I’m on this topic I should point out that there is also a function called `BIC()` which computes the Bayesian information criterion (BIC) for the models. So you could type `BIC(M0,M1)` and get a very similar output. In fact, while I’m not particularly impressed with either AIC or BIC as model selection methods, if you do find yourself using one of these two, the empirical evidence suggests that BIC is the better criterion of the two. In most simulation studies that I’ve seen, BIC does a much better job of selecting the correct model.↩
It’s worth noting in passing that this same \(F\) statistic can be used to test a much broader range of hypotheses than those that I’m mentioning here. Very briefly: notice that the nested model M0 corresponds to the full model M1 when we constrain some of the regression coefficients to zero. It is sometimes useful to construct submodels by placing other kinds of constraints on the regression coefficients. For instance, maybe two different coefficients might have to sum to zero, or something like that. You can construct hypothesis tests for those kind of constraints too, but it is somewhat more complicated and the sampling distribution for \(F\) can end up being something known as the non-central \(F\) distribution, which is waaaaay beyond the scope of this book! All I want to do is alert you to this possibility.↩
# Chapter 16 Factorial ANOVA
Over the course of the last few chapters you can probably detect a general trend. We started out looking at tools that you can use to compare two groups to one another, most notably the \(t\)-test (Chapter 13). Then, we introduced analysis of variance (ANOVA) as a method for comparing more than two groups (Chapter 14). The chapter on regression (Chapter 15) covered a somewhat different topic, but in doing so it introduced a powerful new idea: building statistical models that have multiple predictor variables used to explain a single outcome variable. For instance, a regression model could be used to predict the number of errors a student makes in a reading comprehension test based on the number of hours they studied for the test, and their score on a standardised IQ test. The goal in this chapter is to import this idea into the ANOVA framework. For instance, suppose we were interested in using the reading comprehension test to measure student achievements in three different schools, and we suspect that girls and boys are developing at different rates (and so would be expected to have different performance on average). Each student is classified in two different ways: on the basis of their gender, and on the basis of their school. What we’d like to do is analyse the reading comprehension scores in terms of both of these grouping variables. The tool for doing so is generically referred to as factorial ANOVA. However, since we have two grouping variables, we sometimes refer to the analysis as a two-way ANOVA, in contrast to the one-way ANOVAs that we ran in Chapter 14.
## 16.1 Factorial ANOVA 1: balanced designs, no interactions
When we discussed analysis of variance in Chapter 14, we assumed a fairly simple experimental design: each person falls into one of several groups, and we want to know whether these groups have different means on some outcome variable. In this section, I’ll discuss a broader class of experimental designs, known as factorial designs, in which we have more than one grouping variable. I gave one example of how this kind of design might arise above. Another example appears in Chapter 14, in which we were looking at the effect of different drugs on the `mood.gain` experienced by each person. In that chapter we did find a significant effect of drug, but at the end of the chapter we also ran an analysis to see if there was an effect of therapy. We didn’t find one, but there’s something a bit worrying about trying to run two separate analyses trying to predict the same outcome. Maybe there actually is an effect of therapy on mood gain, but we couldn’t find it because it was being “hidden” by the effect of drug? In other words, we’re going to want to run a single analysis that includes both `drug` and `therapy` as predictors. For this analysis each person is cross-classified by the drug they were given (a factor with 3 levels) and what therapy they received (a factor with 2 levels). We refer to this as a \(3 \times 2\) factorial design. If we cross-tabulate `drug` by `therapy` , using the `xtabs()` function (see Section 7.1), we get the following table:231
```
load(file.path(projecthome, "data","clinicaltrial.Rdata"))
xtabs( ~ drug + therapy, clin.trial )
```
```
## therapy
## drug no.therapy CBT
## placebo 3 3
## anxifree 3 3
## joyzepam 3 3
```
As you can see, not only do we have participants corresponding to all possible combinations of the two factors, indicating that our design is completely crossed, it turns out that there are an equal number of people in each group. In other words, we have a balanced design. In this section I’ll talk about how to analyse data from balanced designs, since this is the simplest case. The story for unbalanced designs is quite tedious, so we’ll put it to one side for the moment.
### 16.1.1 What hypotheses are we testing?
Like one-way ANOVA, factorial ANOVA is a tool for testing certain types of hypotheses about population means. So a sensible place to start would be to be explicit about what our hypotheses actually are. However, before we can even get to that point, it’s really useful to have some clean and simple notation to describe the population means. Because of the fact that observations are cross-classified in terms of two different factors, there are quite a lot of different means that one might be interested in. To see this, let’s start by thinking about all the different sample means that we can calculate for this kind of design. Firstly, there’s the obvious idea that we might be interested in this table of group means:
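One way to get that table is to cross both factors in an `aggregate()` call, in much the same way as we did for a single grouping variable in Chapter 14. A quick sketch (output not shown):
```
aggregate( mood.gain ~ drug + therapy, clin.trial, mean )  # mean mood gain for every drug by therapy cell
```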
Now, this output shows a cross-tabulation of the group means for all possible combinations of the two factors (e.g., people who received the placebo and no therapy, people who received the placebo while getting CBT, etc). However, we can also construct tables that ignore one of the two factors. That gives us output that looks like this:
```
aggregate( mood.gain ~ therapy, clin.trial, mean )
```
```
## therapy mood.gain
## 1 no.therapy 0.7222222
## 2 CBT 1.0444444
```
But of course, if we can ignore one factor we can certainly ignore both. That is, we might also be interested in calculating the average mood gain across all 18 participants, regardless of what drug or psychological therapy they were given:
```
mean( clin.trial$mood.gain )
```
`## [1] 0.8833333`
At this point we have 12 different sample means to keep track of! It is helpful to organise all these numbers into a single table, which would look like this:
```
knitr::kable(tibble::tribble(
~V1, ~V2, ~V3, ~V4,
NA, "no therapy", "CBT", "total",
"placebo", "0.30", "0.60", "0.45",
"anxifree", "0.40", "1.03", "0.72",
"joyzepam", "1.47", "1.50", "1.48",
"total", "0.72", "1.04", "0.88"
), col.names = c("", "no therapy", "CBT", "total"))
```
|          | no therapy | CBT  | total |
| --- | --- | --- | --- |
| placebo  | 0.30       | 0.60 | 0.45  |
| anxifree | 0.40       | 1.03 | 0.72  |
| joyzepam | 1.47       | 1.50 | 1.48  |
| total    | 0.72       | 1.04 | 0.88  |
Now, each of these different means is of course a sample statistic: it’s a quantity that pertains to the specific observations that we’ve made during our study. What we want to make inferences about are the corresponding population parameters: that is, the true means as they exist within some broader population. Those population means can also be organised into a similar table, but we’ll need a little mathematical notation to do so. As usual, I’ll use the symbol \(\mu\) to denote a population mean. However, because there are lots of different means, I’ll need to use subscripts to distinguish between them.
Here’s how the notation works. Our table is defined in terms of two factors: each row corresponds to a different level of Factor A (in this case `drug` ), and each column corresponds to a different level of Factor B (in this case `therapy` ). If we let \(R\) denote the number of rows in the table, and \(C\) denote the number of columns, we can refer to this as an \(R \times C\) factorial ANOVA. In this case \(R=3\) and \(C=2\). We’ll use lowercase letters to refer to specific rows and columns, so \(\mu_{rc}\) refers to the population mean associated with the \(r\)th level of Factor A (i.e. row number \(r\)) and the \(c\)th level of Factor B (column number \(c\)).232 So the population means are now written like this:
```
knitr::kable(tibble::tribble(
  ~V1, ~V2, ~V3, ~V4,
  "placebo", "$\\mu_{11}$", "$\\mu_{12}$", "",
"anxifree", "$\\mu_{21}$", "$\\mu_{22}$", "",
"joyzepam", "$\\mu_{31}$", "$\\mu_{32}$", "",
"total", "", "", ""
), col.names = c( "", "no therapy", "CBT", "total"))
```
|          | no therapy   | CBT          | total |
| --- | --- | --- | --- |
| placebo  | \(\mu_{11}\) | \(\mu_{12}\) |       |
| anxifree | \(\mu_{21}\) | \(\mu_{22}\) |       |
| joyzepam | \(\mu_{31}\) | \(\mu_{32}\) |       |
| total    |              |              |       |
Okay, what about the remaining entries? For instance, how should we describe the average mood gain across the entire (hypothetical) population of people who might be given Joyzepam in an experiment like this, regardless of whether they were in CBT? We use the “dot” notation to express this. In the case of Joyzepam, notice that we’re talking about the mean associated with the third row in the table. That is, we’re averaging across two cell means (i.e., \(\mu_{31}\) and \(\mu_{32}\)). The result of this averaging is referred to as a marginal mean, and would be denoted \(\mu_{3.}\) in this case. The marginal mean for CBT corresponds to the population mean associated with the second column in the table, so we use the notation \(\mu_{.2}\) to describe it. The grand mean is denoted \(\mu_{..}\) because it is the mean obtained by averaging (marginalising233) over both. So our full table of population means can be written down like this:
```
knitr::kable(tibble::tribble(
  ~V1, ~V2, ~V3, ~V4,
  "placebo", "$\\mu_{11}$", "$\\mu_{12}$", "$\\mu_{1.}$",
"anxifree", "$\\mu_{21}$", "$\\mu_{22}$", "$\\mu_{2.}$",
"joyzepam", "$\\mu_{31}$", "$\\mu_{32}$", "$\\mu_{3.}$",
"total", "$\\mu_{.1}$", "$\\mu_{.2}$", "$\\mu_{..}$"
), col.names=c( NA, "no therapy", "CBT", "total"))
```
|          | no therapy   | CBT          | total        |
| --- | --- | --- | --- |
| placebo  | \(\mu_{11}\) | \(\mu_{12}\) | \(\mu_{1.}\) |
| anxifree | \(\mu_{21}\) | \(\mu_{22}\) | \(\mu_{2.}\) |
| joyzepam | \(\mu_{31}\) | \(\mu_{32}\) | \(\mu_{3.}\) |
| total    | \(\mu_{.1}\) | \(\mu_{.2}\) | \(\mu_{..}\) |
Now that we have this notation, it is straightforward to formulate and express some hypotheses. Let’s suppose that the goal is to find out two things: firstly, does the choice of drug have any effect on mood, and secondly, does CBT have any effect on mood? These aren’t the only hypotheses that we could formulate of course, and we’ll see a really important example of a different kind of hypothesis in Section 16.2, but these are the two simplest hypotheses to test, and so we’ll start there. Consider the first test. If drug has no effect, then we would expect all of the row means to be identical, right? So that’s our null hypothesis. On the other hand, if the drug does matter then we should expect these row means to be different. Formally, we write down our null and alternative hypotheses in terms of the equality of marginal means:
```
knitr::kable(tibble::tribble(
~V1, ~V2,
"Null hypothesis $H_0$:", "row means are the same i.e. $\\mu_{1.} = \\mu_{2.} = \\mu_{3.}$",
"Alternative hypothesis $H_1$:", "at least one row mean is different."
), col.names = c("", ""))
```
|  |  |
| --- | --- |
| Null hypothesis \(H_0\): | row means are the same, i.e., \(\mu_{1.} = \mu_{2.} = \mu_{3.}\) |
| Alternative hypothesis \(H_1\): | at least one row mean is different. |
It’s worth noting that these are exactly the same statistical hypotheses that we formed when we ran a one-way ANOVA on these data back in Chapter 14. Back then I used the notation \(\mu_P\) to refer to the mean mood gain for the placebo group, with \(\mu_A\) and \(\mu_J\) corresponding to the group means for the two drugs, and the null hypothesis was \(\mu_P = \mu_A = \mu_J\). So we’re actually talking about the same hypothesis: it’s just that the more complicated ANOVA requires more careful notation due to the presence of multiple grouping variables, so we’re now referring to this hypothesis as \(\mu_{1.} = \mu_{2.} = \mu_{3.}\). However, as we’ll see shortly, although the hypothesis is identical, the test of that hypothesis is subtly different due to the fact that we’re now acknowledging the existence of the second grouping variable.
Speaking of the other grouping variable, you won’t be surprised to discover that our second hypothesis test is formulated the same way. However, since we’re talking about the psychological therapy rather than drugs, our null hypothesis now corresponds to the equality of the column means:
```
knitr::kable(tibble::tribble(
  ~V1, ~V2,
  "Null hypothesis $H_0$:", "column means are the same, i.e., $\\mu_{.1} = \\mu_{.2}$",
  "Alternative hypothesis $H_1$:", "column means are different, i.e., $\\mu_{.1} \\neq \\mu_{.2}$"
), col.names = c("", ""))
```
|  |  |
| --- | --- |
| Null hypothesis \(H_0\): | column means are the same, i.e., \(\mu_{.1} = \mu_{.2}\) |
| Alternative hypothesis \(H_1\): | column means are different, i.e., \(\mu_{.1} \neq \mu_{.2}\) |
### 16.1.2 Running the analysis in R
The null and alternative hypotheses that I described in the last section should seem awfully familiar: they’re basically the same as the hypotheses that we were testing in our simpler one-way ANOVAs in Chapter 14. So you’re probably expecting that the hypothesis tests that are used in factorial ANOVA will be essentially the same as the \(F\)-test from Chapter 14. You’re expecting to see references to sums of squares (SS), mean squares (MS), degrees of freedom (df), and finally an \(F\)-statistic that we can convert into a \(p\)-value, right? Well, you’re absolutely and completely right. So much so that I’m going to depart from my usual approach. Throughout this book, I’ve generally taken the approach of describing the logic (and to an extent the mathematics) that underpins a particular analysis first; and only then introducing the R commands that you’d use to produce the analysis. This time I’m going to do it the other way around, and show you the R commands first. The reason for doing this is that I want to highlight the similarities between the simple one-way ANOVA tool that we discussed in Chapter 14, and the more complicated tools that we’re going to use in this chapter.
If the data you’re trying to analyse correspond to a balanced factorial design, then running your analysis of variance is easy. To see how easy it is, let’s start by reproducing the original analysis from Chapter 14. In case you’ve forgotten, for that analysis we were using only a single factor (i.e., `drug` ) to predict our outcome variable (i.e., `mood.gain` ), and so this was what we did:
```
model.1 <- aov( mood.gain ~ drug, clin.trial )
summary( model.1 )
```
Note that this time around I’ve used the name `model.1` as the label for my `aov` object, since I’m planning on creating quite a few other models too. To start with, suppose I’m also curious to find out if `therapy` has a relationship to `mood.gain` . In light of what we’ve seen from our discussion of multiple regression in Chapter 15, you probably won’t be surprised that all we have to do is extend the formula: in other words, if we specify
`mood.gain ~ drug + therapy`
as our model, we’ll probably get what we’re after:
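In other words, the command is just another `aov()` call with the extended formula. Here’s a minimal sketch; the `summary()` output it produces is the ANOVA table that I walk through next:
```
model.2 <- aov( mood.gain ~ drug + therapy, clin.trial )  # two main effects, no interaction yet
summary( model.2 )
```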
This output is pretty simple to read too: the first row of the table reports a between-group sum of squares (SS) value associated with the `drug` factor, along with a corresponding between-group \(df\) value. It also calculates a mean square value (MS), an \(F\)-statistic and a \(p\)-value. There is also a row corresponding to the `therapy` factor, and a row corresponding to the residuals (i.e., the within groups variation). Not only are all of the individual quantities pretty familiar, the relationships between these different quantities have remained unchanged: just like we saw with the original one-way ANOVA, note that the mean square value is calculated by dividing SS by the corresponding \(df\). That is, it’s still true that \[ \mbox{MS} = \frac{\mbox{SS}}{df} \] regardless of whether we’re talking about `drug` , `therapy` or the residuals. To see this, let’s not worry about how the sums of squares values are calculated: instead, let’s take it on faith that R has calculated the SS values correctly, and try to verify that all the rest of the numbers make sense. First, note that for the `drug` factor, we divide \(3.45\) by \(2\), and end up with a mean square value of \(1.73\). For the `therapy` factor, there’s only 1 degree of freedom, so our calculations are even simpler: dividing \(0.47\) (the SS value) by 1 gives us an answer of \(0.47\) (the MS value). Turning to the \(F\) statistics and the \(p\) values, notice that we have two of each: one corresponding to the `drug` factor and the other corresponding to the `therapy` factor. Regardless of which one we’re talking about, the \(F\) statistic is calculated by dividing the mean square value associated with the factor by the mean square value associated with the residuals. If we use "A" as shorthand notation to refer to the first factor (factor A; in this case `drug` ) and "R" as shorthand notation to refer to the residuals, then the \(F\) statistic associated with factor A is denoted \(F_A\), and is calculated as follows: \[
F_{A} = \frac{\mbox{MS}_{A}}{\mbox{MS}_{R}}
\] and an equivalent formula exists for factor B (i.e., `therapy` ). Note that this use of "R" to refer to residuals is a bit awkward, since we also used the letter R to refer to the number of rows in the table, but I’m only going to use "R" to mean residuals in the context of SS\(_R\) and MS\(_R\), so hopefully this shouldn’t be confusing. Anyway, to apply this formula to the `drugs` factor, we take the mean square of \(1.73\) and divide it by the residual mean square value of \(0.07\), which gives us an \(F\)-statistic of \(26.15\). The corresponding calculation for the `therapy` variable would be to divide \(0.47\) by \(0.07\) which gives \(7.08\) as the \(F\)-statistic. Not surprisingly, of course, these are the same values that R has reported in the ANOVA table above. The last part of the ANOVA table is the calculation of the \(p\) values. Once again, there is nothing new here: for each of our two factors, what we’re trying to do is test the null hypothesis that there is no relationship between the factor and the outcome variable (I’ll be a bit more precise about this later on). To that end, we’ve (apparently) followed a similar strategy to the one we used in the one way ANOVA, and have calculated an \(F\)-statistic for each of these hypotheses. To convert these to \(p\) values, all we need to do is note that the sampling distribution for the \(F\) statistic under the null hypothesis (that the factor in question is irrelevant) is an \(F\) distribution, and that the two degrees of freedom values are those corresponding to the factor, and those corresponding to the residuals. For the `drug` factor we’re talking about an \(F\) distribution with 2 and 14 degrees of freedom (I’ll discuss degrees of freedom in more detail later). In contrast, for the `therapy` factor the sampling distribution is \(F\) with 1 and 14 degrees of freedom. If we really wanted to, we could calculate the \(p\) value ourselves using the `pf()` function (see Section 9.6). Just to prove that there’s nothing funny going on, here’s what that would look like for the `drug` variable:
```
pf( q=26.15, df1=2, df2=14, lower.tail=FALSE )
```
`## [1] 1.871981e-05`
And as you can see, this is indeed the \(p\) value reported in the ANOVA table above.
At this point, I hope you can see that the ANOVA table for this more complicated analysis corresponding to `model.2` should be read in much the same way as the ANOVA table for the simpler analysis for `model.1` . In short, it’s telling us that the factorial ANOVA for our \(3 \times 2\) design found a significant effect of drug (\(F_{2,14} = 26.15, p < .001\)) as well as a significant effect of therapy (\(F_{1,14} = 7.08, p = .02\)). Or, to use the more technically correct terminology, we would say that there are two main effects of drug and therapy. At the moment, it probably seems a bit redundant to refer to these as “main” effects: but it actually does make sense. Later on, we’re going to want to talk about the possibility of “interactions” between the two factors, and so we generally make a distinction between main effects and interaction effects.
### 16.1.3 How are the sum of squares calculated?
In the previous section I had two goals: firstly, to show you that the R commands needed to do factorial ANOVA are pretty much the same ones that we used for a one way ANOVA. The only difference is the `formula` argument to the `aov()` function. Secondly, I wanted to show you what the ANOVA table looks like in this case, so that you can see from the outset that the basic logic and structure behind factorial ANOVA is the same as that which underpins one way ANOVA. Try to hold onto that feeling. It’s genuinely true, insofar as factorial ANOVA is built in more or less the same way as the simpler one-way ANOVA model. It’s just that this feeling of familiarity starts to evaporate once you start digging into the details. Traditionally, this comforting sensation is replaced by an urge to murder the authors of statistics textbooks.
Okay, let’s start looking at some of those details. The explanation that I gave in the last section illustrates the fact that the hypothesis tests for the main effects (of drug and therapy in this case) are \(F\)-tests, but what it doesn’t do is show you how the sum of squares (SS) values are calculated. Nor does it tell you explicitly how to calculate degrees of freedom (\(df\) values) though that’s a simple thing by comparison. Let’s assume for now that we have only two predictor variables, Factor A and Factor B. If we use \(Y\) to refer to the outcome variable, then we would use \(Y_{rci}\) to refer to the outcome associated with the \(i\)-th member of group \(rc\) (i.e., level/row \(r\) for Factor A and level/column \(c\) for Factor B). Thus, if we use \(\bar{Y}\) to refer to a sample mean, we can use the same notation as before to refer to group means, marginal means and grand means: that is, \(\bar{Y}_{rc}\) is the sample mean associated with the \(r\)th level of Factor A and the \(c\)th level of Factor B, \(\bar{Y}_{r.}\) would be the marginal mean for the \(r\)th level of Factor A, \(\bar{Y}_{.c}\) would be the marginal mean for the \(c\)th level of Factor B, and \(\bar{Y}_{..}\) is the grand mean. In other words, our sample means can be organised into the same table as the population means. For our clinical trial data, that table looks like this:
```
knitr::kable(tibble::tribble(
~V1, ~V2, ~V3, ~V4,
"placebo", "$\\bar{Y}_{11}$", "$\\bar{Y}_{12}$", "$\\bar{Y}_{1.}$",
"anxifree", "$\\bar{Y}_{21}$", "$\\bar{Y}_{22}$", "$\\bar{Y}_{2.}$",
"joyzepam", "$\\bar{Y}_{31}$", "$\\bar{Y}_{32}$", "$\\bar{Y}_{3.}$",
"total", "$\\bar{Y}_{.1}$", "$\\bar{Y}_{.2}$", "$\\bar{Y}_{..}$"
), col.names = c("", "no therapy", "CBT", "total"))
```
|          | no therapy        | CBT               | total             |
| --- | --- | --- | --- |
| placebo  | \(\bar{Y}_{11}\) | \(\bar{Y}_{12}\) | \(\bar{Y}_{1.}\) |
| anxifree | \(\bar{Y}_{21}\) | \(\bar{Y}_{22}\) | \(\bar{Y}_{2.}\) |
| joyzepam | \(\bar{Y}_{31}\) | \(\bar{Y}_{32}\) | \(\bar{Y}_{3.}\) |
| total    | \(\bar{Y}_{.1}\) | \(\bar{Y}_{.2}\) | \(\bar{Y}_{..}\) |
And if we look at the sample means that I showed earlier, we have \(\bar{Y}_{11} = 0.30\), \(\bar{Y}_{12} = 0.60\) etc. In our clinical trial example, the `drugs` factor has 3 levels and the `therapy` factor has 2 levels, and so what we’re trying to run is a \(3 \times 2\) factorial ANOVA. However, we’ll be a little more general and say that Factor A (the row factor) has \(R\) levels and Factor B (the column factor) has \(C\) levels, and so what we’re running here is an \(R \times C\) factorial ANOVA. Now that we’ve got our notation straight, we can compute the sum of squares values for each of the two factors in a relatively familiar way. For Factor A, our between group sum of squares is calculated by assessing the extent to which the (row) marginal means \(\bar{Y}_{1.}\), \(\bar{Y}_{2.}\) etc, are different from the grand mean \(\bar{Y}_{..}\). We do this in the same way that we did for one-way ANOVA: calculate the sum of squared differences between the \(\bar{Y}_{i.}\) values and the \(\bar{Y}_{..}\) values. Specifically, if there are \(N\) people in each group, then we calculate this: \[ \mbox{SS}_{A} = (N \times C) \sum_{r=1}^R \left( \bar{Y}_{r.} - \bar{Y}_{..} \right)^2 \] As with one-way ANOVA, the most interesting234 part of this formula is the \(\left( \bar{Y}_{r.} - \bar{Y}_{..} \right)^2\) bit, which corresponds to the squared deviation associated with level \(r\). All that this formula does is calculate this squared deviation for all \(R\) levels of the factor, add them up, and then multiply the result by \(N \times C\). The reason for this last part is that there are multiple cells in our design that have level \(r\) on Factor A: in fact, there are \(C\) of them, one corresponding to each possible level of Factor B! For instance, in our toy example, there are two different cells in the design corresponding to the `anxifree` drug: one for people with `no.therapy` , and one for the `CBT` group. Not only that, within each of these cells there are \(N\) observations. So, if we want to convert our SS value into a quantity that calculates the between-groups sum of squares on a “per observation” basis, we have to multiply by \(N \times C\). The formula for factor B is of course the same thing, just with some subscripts shuffled around: \[
\mbox{SS}_{B} = (N \times R) \sum_{c=1}^C \left( \bar{Y}_{.c} - \bar{Y}_{..} \right)^2
\] Now that we have these formulas, we can check them against the R output from the earlier section. First, notice that we calculated all the marginal means (i.e., row marginal means \(\bar{Y}_{r.}\) and column marginal means \(\bar{Y}_{.c}\)) earlier using `aggregate()` , and we also calculated the grand mean. Let’s repeat those calculations, but this time we’ll save the results to variables so that we can use them in subsequent calculations:
```
drug.means <- aggregate( mood.gain ~ drug, clin.trial, mean )[,2]
therapy.means <- aggregate( mood.gain ~ therapy, clin.trial, mean )[,2]
grand.mean <- mean( clin.trial$mood.gain )
```
Okay, now let’s calculate the sum of squares associated with the main effect of `drug` . There are a total of \(N=3\) people in each group, and \(C=2\) different types of therapy. Or, to put it another way, there are \(3 \times 2 = 6\) people who received any particular drug. So our calculations are:
```
SS.drug <- (3*2) * sum( (drug.means - grand.mean)^2 )
SS.drug
```
`## [1] 3.453333`
Not surprisingly, this is the same number that you get when you look up the SS value for the drugs factor in the ANOVA table that I presented earlier. We can repeat the same kind of calculation for the effect of therapy. Again there are \(N=3\) people in each group, but since there are \(R=3\) different drugs, this time around we note that there are \(3 \times 3 = 9\) people who received CBT, and an additional 9 people who received no therapy. So our calculation is now:
```
SS.therapy <- (3*3) * sum( (therapy.means - grand.mean)^2 )
SS.therapy
```
`## [1] 0.4672222`
and we are, once again, unsurprised to see that our calculations are identical to the ANOVA output.
So that’s how you calculate the SS values for the two main effects. These SS values are analogous to the between-group sum of squares values that we calculated when doing one-way ANOVA in Chapter 14. However, it’s not a good idea to think of them as between-groups SS values anymore, just because we have two different grouping variables and it’s easy to get confused. In order to construct an \(F\) test, however, we also need to calculate the within-groups sum of squares. In keeping with the terminology that we used in the regression chapter (Chapter 15) and the terminology that R uses when printing out the ANOVA table, I’ll start referring to the within-groups SS value as the residual sum of squares SS\(_R\).
The easiest way to think about the residual SS values in this context, I think, is to think of it as the leftover variation in the outcome variable after you take into account the differences in the marginal means (i.e., after you remove SS\(_A\) and SS\(_B\)). What I mean by that is we can start by calculating the total sum of squares, which I’ll label SS\(_T\). The formula for this is pretty much the same as it was for one-way ANOVA: we take the difference between each observation \(Y_{rci}\) and the grand mean \(\bar{Y}_{..}\), square the differences, and add them all up \[ \mbox{SS}_T = \sum_{r=1}^R \sum_{c=1}^C \sum_{i=1}^N \left( Y_{rci} - \bar{Y}_{..}\right)^2 \] The “triple summation” here looks more complicated than it is. In the first two summations, we’re summing across all levels of Factor A (i.e., over all possible rows \(r\) in our table), across all levels of Factor B (i.e., all possible columns \(c\)). Each \(rc\) combination corresponds to a single group, and each group contains \(N\) people: so we have to sum across all those people (i.e., all \(i\) values) too. In other words, all we’re doing here is summing across all observations in the data set (i.e., all possible \(rci\) combinations).
At this point, we know the total variability of the outcome variable SS\(_T\), and we know how much of that variability can be attributed to Factor A (SS\(_A\)) and how much of it can be attributed to Factor B (SS\(_B\)). The residual sum of squares is thus defined to be the variability in \(Y\) that can’t be attributed to either of our two factors. In other words: \[ \mbox{SS}_R = \mbox{SS}_T - (\mbox{SS}_A + \mbox{SS}_B) \] Of course, there is a formula that you can use to calculate the residual SS directly, but I think that it makes more conceptual sense to think of it like this. The whole point of calling it a residual is that it’s the leftover variation, and the formula above makes that clear. I should also note that, in keeping with the terminology used in the regression chapter, it is commonplace to refer to \(\mbox{SS}_A + \mbox{SS}_B\) as the variance attributable to the “ANOVA model”, denoted SS\(_M\), and so we often say that the total sum of squares is equal to the model sum of squares plus the residual sum of squares. Later on in this chapter we’ll see that this isn’t just a surface similarity: ANOVA and regression are actually the same thing under the hood.
In any case, it’s probably worth taking a moment to check that we can calculate SS\(_R\) using this formula, and verify that we do obtain the same answer that R produces in its ANOVA table. The calculations are pretty straightforward. First we calculate the total sum of squares:
```
SS.tot <- sum( (clin.trial$mood.gain - grand.mean)^2 )
SS.tot
```
`## [1] 4.845`
and then we use it to calculate the residual sum of squares:
```
SS.res <- SS.tot - (SS.drug + SS.therapy)
SS.res
```
`## [1] 0.9244444`
Yet again, we get the same answer.
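Just to close the loop, here’s a quick sketch that turns these SS values into the mean squares and \(F\) statistics reported in the ANOVA table, using the degrees of freedom (2, 1 and 14) that I discuss in the next section:
```
MS.drug    <- SS.drug / 2      # mean square for the drug factor
MS.therapy <- SS.therapy / 1   # mean square for the therapy factor
MS.res     <- SS.res / 14      # residual mean square
MS.drug / MS.res               # F statistic for drug, roughly 26.15
MS.therapy / MS.res            # F statistic for therapy, roughly 7.08
```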
### 16.1.4 What are our degrees of freedom?
The degrees of freedom are calculated in much the same way as for one-way ANOVA. For any given factor, the degrees of freedom is equal to the number of levels minus 1 (i.e., \(R-1\) for the row variable, Factor A, and \(C-1\) for the column variable, Factor B). So, for the `drugs` factor we obtain \(df = 2\), and for the `therapy` factor we obtain \(df=1\). Later on, when we discuss the interpretation of ANOVA as a regression model (see Section 16.6) I’ll give a clearer statement of how we arrive at this number, but for the moment we can use the simple definition of degrees of freedom, namely that the degrees of freedom equals the number of quantities that are observed, minus the number of constraints. So, for the `drugs` factor, we observe 3 separate group means, but these are constrained by 1 grand mean; and therefore the degrees of freedom is 2. For the residuals, the logic is similar, but not quite the same. The total number of observations in our experiment is 18. The constraints correspond to the 1 grand mean, the 2 additional group means that the `drug` factor introduces, and the 1 additional group mean that the `therapy` factor introduces, and so our degrees of freedom is 14. As a formula, this is \(N-1 -(R-1)-(C-1)\), which simplifies to \(N-R-C+1\).
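If you want to check that formula against the data, here’s a one-line sketch (assuming, as in the `clin.trial` data, that `drug` and `therapy` are stored as factors):
```
nrow(clin.trial) - nlevels(clin.trial$drug) - nlevels(clin.trial$therapy) + 1   # 18 - 3 - 2 + 1 = 14
```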
### 16.1.5 Factorial ANOVA versus one-way ANOVAs
Now that we’ve seen how a factorial ANOVA works, it’s worth taking a moment to compare it to the results of the one way analyses, because this will give us a really good sense of why it’s a good idea to run the factorial ANOVA. In Chapter 14 I ran a one-way ANOVA that looked to see if there are any differences between drugs, and a second one-way ANOVA to see if there were any differences between therapies. As we saw in Section 16.1.1, the null and alternative hypotheses tested by the one-way ANOVAs are in fact identical to the hypotheses tested by the factorial ANOVA. Looking even more carefully at the ANOVA tables, we can see that the sums of squares associated with the factors are identical in the two different analyses (3.45 for `drug` and 0.47 for `therapy` ), as are the degrees of freedom (2 for `drug` , 1 for `therapy` ). But they don’t give the same answers! Most notably, when we ran the one-way ANOVA for `therapy` in Section 14.11 we didn’t find a significant effect (the \(p\)-value was 0.21). However, when we look at the main effect of `therapy` within the context of the two-way ANOVA, we do get a significant effect (\(p=.019\)). The two analyses are clearly not the same. Why does that happen? The answer lies in understanding how the residuals are calculated. Recall that the whole idea behind an \(F\)-test is to compare the variability that can be attributed to a particular factor with the variability that cannot be accounted for (the residuals). If you run a one-way ANOVA for `therapy` , and therefore ignore the effect of `drug` , the ANOVA will end up dumping all of the drug-induced variability into the residuals! This has the effect of making the data look more noisy than they really are, and the effect of `therapy` which is correctly found to be significant in the two-way ANOVA now becomes non-significant. If we ignore something that actually matters (e.g., `drug` ) when trying to assess the contribution of something else (e.g., `therapy` ) then our analysis will be distorted. Of course, it’s perfectly okay to ignore variables that are genuinely irrelevant to the phenomenon of interest: if we had recorded the colour of the walls, and that turned out to be non-significant in a three-way ANOVA that included it as a factor, it would be perfectly okay to disregard it and just report the simpler two-way ANOVA that doesn’t include this irrelevant factor. What you shouldn’t do is drop variables that actually make a difference!
### 16.1.6 What kinds of outcomes does this analysis capture?
The ANOVA model that we’ve been talking about so far covers a range of different patterns that we might observe in our data. For instance, in a two-way ANOVA design, there are four possibilities: (a) only Factor A matters, (b) only Factor B matters, (c) both A and B matter, and (d) neither A nor B matters. An example of each of these four possibilities is plotted in Figure ??.
## 16.2 Factorial ANOVA 2: balanced designs, interactions allowed
The four patterns of data shown in Figure ?? are all quite realistic: there are a great many data sets that produce exactly those patterns. However, they are not the whole story, and the ANOVA model that we have been talking about up to this point is not sufficient to fully account for a table of group means. Why not? Well, so far we have the ability to talk about the idea that drugs can influence mood, and therapy can influence mood, but no way of talking about the possibility of an interaction between the two. An interaction between A and B is said to occur whenever the effect of Factor A is different, depending on which level of Factor B we’re talking about. Several examples of an interaction effect within the context of a \(2 \times 2\) ANOVA are shown in Figure ??. To give a more concrete example, suppose that the operation of Anxifree and Joyzepam is governed by quite different physiological mechanisms, and one consequence of this is that while Joyzepam has more or less the same effect on mood regardless of whether one is in therapy, Anxifree is actually much more effective when administered in conjunction with CBT. The ANOVA that we developed in the previous section does not capture this idea. To get some idea of whether an interaction is actually happening here, it helps to plot the various group means. There are quite a few different ways to draw these plots in R. One easy way is to use the `interaction.plot()` function, but this function won’t draw error bars for you. A fairly simple function that will include error bars for you is the `lineplot.CI()` function in the `sciplot` package (see Section 10.5.4). The command
```
library(sciplot)
library(lsr)
lineplot.CI( x.factor = clin.trial$drug,
response = clin.trial$mood.gain,
group = clin.trial$therapy,
ci.fun = ciMean,
xlab = "drug",
ylab = "mood gain" )
```
produces the output shown in Figure 16.9 (don’t forget that the `ciMean` function is in the `lsr` package, so you need to have `lsr` loaded!). Our main concern relates to the fact that the two lines aren’t parallel. The effect of CBT (difference between solid line and dotted line) when the drug is Joyzepam (right side) appears to be near zero, even smaller than the effect of CBT when a placebo is used (left side). However, when Anxifree is administered, the effect of CBT is larger than it is under the placebo (middle). Is this effect real, or is this just random variation due to chance? Our original ANOVA cannot answer this question, because we make no allowances for the idea that interactions even exist! In this section, we’ll fix this problem.
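Incidentally, if you’d rather not install any extra packages, here’s a minimal sketch of the base R alternative mentioned above, using `interaction.plot()`; it draws the same pattern of means, just without the error bars:

```
interaction.plot( x.factor = clin.trial$drug,
                  trace.factor = clin.trial$therapy,
                  response = clin.trial$mood.gain,
                  xlab = "drug",
                  ylab = "mood gain" )
```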
### 16.2.1 What exactly is an interaction effect?
The key idea that we’re going to introduce in this section is that of an interaction effect. What that means for our R formulas is that we’ll write down models like `mood.gain ~ drug + therapy + drug:therapy`, so although there are only two factors involved in our model (i.e., `drug` and `therapy` ), there are three distinct terms (i.e., `drug` , `therapy` and `drug:therapy` ). That is, in addition to the main effects of `drug` and `therapy` , we have a new component to the model, which is our interaction term `drug:therapy` . Intuitively, the idea behind an interaction effect is fairly simple: it just means that the effect of Factor A is different, depending on which level of Factor B we’re talking about. But what does that actually mean in terms of our data? Figure ?? depicts several different patterns that, although quite different to each other, would all count as an interaction effect. So it’s not entirely straightforward to translate this qualitative idea into something mathematical that a statistician can work with. As a consequence, the way that the idea of an interaction effect is formalised in terms of null and alternative hypotheses is slightly difficult, and I’m guessing that a lot of readers of this book probably won’t be all that interested. Even so, I’ll try to give the basic idea here. To start with, we need to be a little more explicit about our main effects. Consider the main effect of Factor A ( `drug` in our running example). We originally formulated this in terms of the null hypothesis that the marginal means \(\mu_{r.}\) are all equal to each other. Obviously, if all of these are equal to each other, then they must also be equal to the grand mean \(\mu_{..}\) as well, right? So what we can do is define the effect of Factor A at level \(r\) to be equal to the difference between the marginal mean \(\mu_{r.}\) and the grand mean \(\mu_{..}\). Let’s denote this effect by \(\alpha_r\), and note that \[ \alpha_r = \mu_{r.} - \mu_{..} \] Now, by definition all of the \(\alpha_r\) values must sum to zero, for the same reason that the average of the marginal means \(\mu_{r.}\) must be the grand mean \(\mu_{..}\). We can similarly define the effect of Factor B at level \(c\) to be the difference between the column marginal mean \(\mu_{.c}\) and the grand mean \(\mu_{..}\) \[ \beta_c = \mu_{.c} - \mu_{..} \] and once again, these \(\beta_c\) values must sum to zero. The reason that statisticians sometimes like to talk about the main effects in terms of these \(\alpha_r\) and \(\beta_c\) values is that it allows them to be precise about what it means to say that there is no interaction effect. If there is no interaction at all, then these \(\alpha_r\) and \(\beta_c\) values will perfectly describe the group means \(\mu_{rc}\). Specifically, it means that \[ \mu_{rc} = \mu_{..} + \alpha_r + \beta_c \] That is, there’s nothing special about the group means that you couldn’t predict perfectly by knowing all the marginal means. And that’s our null hypothesis, right there. The alternative hypothesis is that \[ \mu_{rc} \neq \mu_{..} + \alpha_r + \beta_c \] for at least one group \(rc\) in our table. However, statisticians often like to write this slightly differently. They’ll usually define the specific interaction associated with group \(rc\) to be some number, awkwardly referred to as \((\alpha\beta)_{rc}\), and then they will say that the alternative hypothesis is that \[\mu_{rc} = \mu_{..} + \alpha_r + \beta_c + (\alpha\beta)_{rc}\] where \((\alpha\beta)_{rc}\) is non-zero for at least one group.
This notation is kind of ugly to look at, but it is handy as we’ll see in the next section when discussing how to calculate the sum of squares.
### 16.2.2 Calculating sums of squares for the interaction
How should we calculate the sum of squares for the interaction terms, SS\(_{A:B}\)? Well, first off, it helps to notice how the previous section defined the interaction effect in terms of the extent to which the actual group means differ from what you’d expect by just looking at the marginal means. Of course, all of those formulas refer to population parameters rather than sample statistics, so we don’t actually know what they are. However, we can estimate them by using sample means in place of population means. So for Factor A, a good way to estimate the main effect at level \(r\) is as the difference between the sample marginal mean \(\bar{Y}_{r.}\) and the sample grand mean \(\bar{Y}_{..}\). That is, we would use this as our estimate of the effect: \[ \hat{\alpha}_r = \bar{Y}_{r.} - \bar{Y}_{..} \] Similarly, our estimate of the main effect of Factor B at level \(c\) can be defined as follows: \[ \hat{\beta}_c = \bar{Y}_{.c} - \bar{Y}_{..} \] Now, if you go back to the formulas that I used to describe the SS values for the two main effects, you’ll notice that these effect terms are exactly the quantities that we were squaring and summing! So what’s the analog of this for interaction terms? The answer to this can be found by first rearranging the formula for the group means \(\mu_{rc}\) under the alternative hypothesis, so that we get this: \[\begin{eqnarray*} (\alpha \beta)_{rc} &=& \mu_{rc} - \mu_{..} - \alpha_r - \beta_c \\ &=& \mu_{rc} - \mu_{..} - (\mu_{r.} - \mu_{..}) - (\mu_{.c} - \mu_{..}) \\ &=& \mu_{rc} - \mu_{r.} - \mu_{.c} + \mu_{..} \end{eqnarray*}\]
So, once again, if we substitute our sample statistics in place of the population means, we get the following as our estimate of the interaction effect for group \(rc\): \[ \hat{(\alpha\beta)}_{rc} = \bar{Y}_{rc} - \bar{Y}_{r.} - \bar{Y}_{.c} + \bar{Y}_{..} \] Now all we have to do is square each of these estimates and sum them across all \(R\) levels of Factor A and all \(C\) levels of Factor B, and we obtain the following formula for the sum of squares associated with the interaction as a whole: \[ \mbox{SS}_{A:B} = N \sum_{r=1}^R \sum_{c=1}^C \left( \bar{Y}_{rc} - \bar{Y}_{r.} - \bar{Y}_{.c} + \bar{Y}_{..} \right)^2 \] where we multiply by \(N\) because there are \(N\) observations in each of the groups, and we want our SS values to reflect the variation among observations accounted for by the interaction, not the variation among groups.
Now that we have a formula for calculating SS\(_{A:B}\), it’s important to recognise that the interaction term is part of the model (of course), so the total sum of squares associated with the model, SS\(_M\) is now equal to the sum of the three relevant SS values, \(\mbox{SS}_A + \mbox{SS}_B + \mbox{SS}_{A:B}\). The residual sum of squares \(\mbox{SS}_R\) is still defined as the leftover variation, namely \(\mbox{SS}_T - \mbox{SS}_M\), but now that we have the interaction term this becomes \[ \mbox{SS}_R = \mbox{SS}_T - (\mbox{SS}_A + \mbox{SS}_B + \mbox{SS}_{A:B}) \] As a consequence, the residual sum of squares SS\(_R\) will be smaller than in our original ANOVA that didn’t include interactions.
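If you want to check these formulas against the data, here’s a sketch that does the calculation by hand. It isn’t part of the chapter’s running code, so the variable names are just illustrative, and it assumes the `clin.trial` data frame from earlier is loaded:

```
# estimate the interaction effects and the corresponding sum of squares
Y <- clin.trial$mood.gain
grand.mean <- mean( Y )
row.means <- tapply( Y, clin.trial$drug, mean )      # marginal means for drug
col.means <- tapply( Y, clin.trial$therapy, mean )   # marginal means for therapy
grp.means <- tapply( Y, list(clin.trial$drug, clin.trial$therapy), mean ) # cell means
ab.hat <- sweep( sweep( grp.means, 1, row.means ), 2, col.means ) + grand.mean
SS.int <- 3 * sum( ab.hat^2 )    # N = 3 people per group; roughly 0.27
# and the residual SS shrinks accordingly
SS.drug <- 2 * 3 * sum( (row.means - grand.mean)^2 )     # C x N = 6 observations per row
SS.therapy <- 3 * 3 * sum( (col.means - grand.mean)^2 )  # R x N = 9 observations per column
SS.tot <- sum( (Y - grand.mean)^2 )
SS.tot - (SS.drug + SS.therapy + SS.int)   # roughly 0.65
```

The two answers (roughly 0.27 and 0.65) line up with the `drug:therapy` and `Residuals` rows of the ANOVA table that we’re about to see in Section 16.2.4.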
### 16.2.3 Degrees of freedom for the interaction
Calculating the degrees of freedom for the interaction is, once again, slightly trickier than the corresponding calculation for the main effects. To start with, let’s think about the ANOVA model as a whole. Once we include interaction effects in the model, we’re allowing every single group to have a unique mean, \(\mu_{rc}\). For an \(R \times C\) factorial ANOVA, this means that there are \(R \times C\) quantities of interest in the model, and only the one constraint: all of the group means need to average out to the grand mean. So the model as a whole needs to have \((R\times C) - 1\) degrees of freedom. But the main effect of Factor A has \(R-1\) degrees of freedom, and the main effect of Factor B has \(C-1\) degrees of freedom. Which means that the degrees of freedom associated with the interaction is \[\begin{eqnarray*} df_{A:B} &=& (R\times C - 1) - (R - 1) - (C -1 ) \\ &=& RC - R - C + 1 \\ &=& (R-1)(C-1) \end{eqnarray*}\]
which is just the product of the degrees of freedom associated with the row factor and the column factor.
What about the residual degrees of freedom? Because we’ve added interaction terms, which absorb some degrees of freedom, there are fewer residual degrees of freedom left over. Specifically, note that the model with the interaction now has \((R\times C) - 1\) degrees of freedom in total, and since the \(N\) observations in your data set are constrained to satisfy 1 grand mean, your residual degrees of freedom now become \((N-1)-((R \times C)-1)\), or just \(N-(R \times C)\).
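Again, a quick sketch of the bookkeeping for our \(3 \times 2\) design with 18 observations:

```
N <- 18; R <- 3; C <- 2
df.int <- (R - 1) * (C - 1)   # 2 degrees of freedom for the interaction
df.res <- N - R * C           # 12 residual degrees of freedom
```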
### 16.2.4 Running the ANOVA in R
Adding interaction terms to the ANOVA model in R is straightforward. Returning to our running example of the clinical trial, in addition to the main effect terms of `drug` and `therapy` , we include the interaction term `drug:therapy` . So the R command to create the ANOVA model now looks like this:
```
model.3 <- aov( mood.gain ~ drug + therapy + drug:therapy, clin.trial )
```
However, R allows a convenient shorthand. Instead of typing out all three terms, you can shorten the right hand side of the formula to `drug*therapy` . The `*` operator inside the formula is taken to indicate that you want both main effects and the interaction. So we can also run our ANOVA like this, and get the same answer:
```
model.3 <- aov( mood.gain ~ drug * therapy, clin.trial )
summary( model.3 )
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## drug 2 3.453 1.7267 31.714 1.62e-05 ***
## therapy 1 0.467 0.4672 8.582 0.0126 *
## drug:therapy 2 0.271 0.1356 2.490 0.1246
## Residuals 12 0.653 0.0544
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
As it turns out, while we do have a significant main effect of drug (\(F_{2,12} = 31.7, p <.001\)) and therapy type (\(F_{1,12} = 8.6, p=.013\)), there is no significant interaction between the two (\(F_{2,12} = 2.5, p = 0.125\)).
### 16.2.5 Interpreting the results
There’s a couple of very important things to consider when interpreting the results of factorial ANOVA. Firstly, there’s the same issue that we had with one-way ANOVA, which is that if you obtain a significant main effect of (say) `drug` , it doesn’t tell you anything about which drugs are different to one another. To find that out, you need to run additional analyses. We’ll talk about some analyses that you can run in Sections 16.7 and ??. The same is true for interaction effects: knowing that there’s a significant interaction doesn’t tell you anything about what kind of interaction exists. Again, you’ll need to run additional analyses.
Secondly, there’s a very peculiar interpretation issue that arises when you obtain a significant interaction effect but no corresponding main effect. This happens sometimes. For instance, in the crossover interaction shown in Figure ??, this is exactly what you’d find: in this case, neither of the main effects would be significant, but the interaction effect would be. This is a difficult situation to interpret, and people often get a bit confused about it. The general advice that statisticians like to give in this situation is that you shouldn’t pay much attention to the main effects when an interaction is present. The reason they say this is that, although the tests of the main effects are perfectly valid from a mathematical point of view, when there is a significant interaction effect the main effects rarely test interesting hypotheses. Recall from Section 16.1.1 that the null hypothesis for a main effect is that the marginal means are equal to each other, and that a marginal mean is formed by averaging across several different groups. But if you have a significant interaction effect, then you know that the groups that comprise the marginal mean aren’t homogeneous, so it’s not really obvious why you would even care about those marginal means.
Here’s what I mean. Again, let’s stick with a clinical example. Suppose that we had a \(2 \times 2\) design comparing two different treatments for phobias (e.g., systematic desensitisation vs flooding), and two different anxiety reducing drugs (e.g., Anxifree vs Joyzepam). Now suppose what we found was that Anxifree had no effect when desensitisation was the treatment, and Joyzepam had no effect when flooding was the treatment. But both were pretty effective for the other treatment. This is a classic crossover interaction, and what we’d find when running the ANOVA is that there is no main effect of drug, but a significant interaction. Now, what does it actually mean to say that there’s no main effect? Well, it means that, if we average over the two different psychological treatments, then the average effect of Anxifree and Joyzepam is the same. But why would anyone care about that? When treating someone for phobias, it is never the case that a person can be treated using an “average” of flooding and desensitisation: that doesn’t make a lot of sense. You either get one or the other. For one treatment, one drug is effective; and for the other treatment, the other drug is effective. The interaction is the important thing; the main effect is kind of irrelevant.
This sort of thing happens a lot: the main effects are tests of marginal means, and when an interaction is present we often find ourselves not being terribly interested in marginal means, because they imply averaging over things that the interaction tells us shouldn’t be averaged! Of course, it’s not always the case that a main effect is meaningless when an interaction is present. Often you can get a big main effect and a very small interaction, in which case you can still say things like “drug A is generally more effective than drug B” (because there was a big effect of drug), but you’d need to modify it a bit by adding that “the difference in effectiveness was different for different psychological treatments”. In any case, the main point here is that whenever you get a significant interaction you should stop and think about what the main effect actually means in this context. Don’t automatically assume that the main effect is interesting.
## 16.3 Effect size, estimated means, and confidence intervals
In this section I’ll discuss a few additional quantities that you might find yourself wanting to calculate for a factorial ANOVA. The main thing you will probably want to calculate is the effect size for each term in your model, but you may also want R to give you some estimates for the group means and associated confidence intervals.
### 16.3.1 Effect sizes
The effect size calculations for a factorial ANOVA are pretty similar to those used in one way ANOVA (see Section 14.4). Specifically, we can use \(\eta^2\) (eta-squared) as a simple way to measure how big the overall effect is for any particular term. As before, \(\eta^2\) is defined by dividing the sum of squares associated with that term by the total sum of squares. For instance, to determine the size of the main effect of Factor A, we would use the following formula \[ \eta_A^2 = \frac{\mbox{SS}_{A}}{\mbox{SS}_{T}} \] As before, this can be interpreted in much the same way as \(R^2\) in regression.235 It tells you the proportion of variance in the outcome variable that can be accounted for by the main effect of Factor A. It is therefore a number that ranges from 0 (no effect at all) to 1 (accounts for all of the variability in the outcome). Moreover, the sum of all the \(\eta^2\) values, taken across all the terms in the model, will sum to the total \(R^2\) for the ANOVA model. If, for instance, the ANOVA model fits perfectly (i.e., there is no within-groups variability at all!), the \(\eta^2\) values will sum to 1. Of course, that rarely if ever happens in real life.
However, when doing a factorial ANOVA, there is a second measure of effect size that people like to report, known as partial \(\eta^2\). The idea behind partial \(\eta^2\) (which is sometimes denoted \(_p\eta^2\) or \(\eta^2_p\)) is that, when measuring the effect size for a particular term (say, the main effect of Factor A), you want to deliberately ignore the other effects in the model (e.g., the main effect of Factor B). That is, you would pretend that the effect of all these other terms is zero, and then calculate what the \(\eta^2\) value would have been. This is actually pretty easy to calculate. All you have to do is remove the sum of squares associated with the other terms from the denominator. In other words, if you want the partial \(\eta^2\) for the main effect of Factor A, the denominator is just the sum of the SS values for Factor A and the residuals: \[ \mbox{partial } \eta^2_A = \frac{\mbox{SS}_{A}}{\mbox{SS}_{A} + \mbox{SS}_{R}} \] This will always give you a larger number than \(\eta^2\), which the cynic in me suspects accounts for the popularity of partial \(\eta^2\). And once again you get a number between 0 and 1, where 0 represents no effect. However, it’s slightly trickier to interpret what a large partial \(\eta^2\) value means. In particular, you can’t actually compare the partial \(\eta^2\) values across terms! Suppose, for instance, there is no within-groups variability at all: if so, SS\(_R = 0\). What that means is that every term has a partial \(\eta^2\) value of 1. But that doesn’t mean that all terms in your model are equally important, or indeed that they are equally large. All it means is that all terms in your model have effect sizes that are large relative to the residual variation. It is not comparable across terms.
To see what I mean by this, it’s useful to see a concrete example. Once again, we’ll use the `etaSquared()` function from the `lsr` package. As before, we input the `aov` object for which we want the \(\eta^2\) calculations performed, and R outputs a matrix showing the effect sizes for each term in the model. First, let’s have a look at the effect sizes for the original ANOVA without the interaction term:
```
etaSquared( model.2 )
```
```
## eta.sq eta.sq.part
## drug 0.7127623 0.7888325
## therapy 0.0964339 0.3357285
```
Looking at the \(\eta^2\) values first, we see that `drug` accounts for 71.3% of the variance (i.e. \(\eta^2 = 0.713\)) in `mood.gain` , whereas `therapy` only accounts for 9.6%. This leaves a total of 19.1% of the variation unaccounted for (i.e., the residuals constitute 19.1% of the variation in the outcome). Overall, this implies that we have a very large effect236 of `drug` and a modest effect of `therapy` . Now let’s look at the partial \(\eta^2\) values. Because the effect of `therapy` isn’t all that large, controlling for it doesn’t make much of a difference, so the partial \(\eta^2\) for `drug` doesn’t increase very much, and we obtain a value of \(_p\eta^2 = 0.789\). In contrast, because the effect of `drug` was very large, controlling for it makes a big difference, and so when we calculate the partial \(\eta^2\) for `therapy` you can see that it rises to \(_p\eta^2 = 0.336\). The question that we have to ask ourselves is, what do these partial \(\eta^2\) values actually mean? The way I generally interpret the partial \(\eta^2\) for the main effect of Factor A is to interpret it as a statement about a hypothetical experiment in which only Factor A was being varied. So, even though in this experiment we varied both A and B, we can easily imagine an experiment in which only Factor A was varied: the partial \(\eta^2\) statistic tells you how much of the variance in the outcome variable you would expect to see accounted for in that experiment. However, it should be noted that this interpretation – like many things associated with main effects – doesn’t make a lot of sense when there is a large and significant interaction effect.
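If you want to see where those numbers come from, here’s a quick sketch that reproduces them from the sums of squares we calculated earlier in the chapter (I’m typing in rounded values, so the answers only agree approximately):

```
SS.drug <- 3.453; SS.therapy <- 0.467; SS.res <- 0.924; SS.tot <- 4.845
SS.drug / SS.tot                     # eta-squared for drug, about 0.713
SS.therapy / SS.tot                  # eta-squared for therapy, about 0.096
SS.drug / (SS.drug + SS.res)         # partial eta-squared for drug, about 0.789
SS.therapy / (SS.therapy + SS.res)   # partial eta-squared for therapy, about 0.336
```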
Speaking of interaction effects, here’s what we get when we calculate the effect sizes for the model that includes the interaction term. As you can see, the \(\eta^2\) values for the main effects don’t change, but the partial \(\eta^2\) values do:
```
etaSquared( model.3 )
```
```
## eta.sq eta.sq.part
## drug 0.71276230 0.8409091
## therapy 0.09643390 0.4169559
## drug:therapy 0.05595689 0.2932692
```
### 16.3.2 Estimated group means
In many situations you will find yourself wanting to report estimates of all the group means based on the results of your ANOVA, as well as confidence intervals associated with them. You can use the `effect()` function in the `effects` package to do this (don’t forget to install the package if you don’t have it already!). If the ANOVA that you have run is a saturated model (i.e., contains all possible main effects and all possible interaction effects) then the estimates of the group means are actually identical to the sample means, though the confidence intervals will use a pooled estimate of the standard errors, rather than use a separate one for each group. To illustrate this, let’s apply the `effect()` function to our saturated model (i.e., `model.3` ) for the clinical trial data. The `effect()` function contains two arguments we care about: the `term` argument specifies what terms in the model we want the means to be calculated for, and the `mod` argument specifies the model:
```
library(effects)
eff <- effect( term = "drug*therapy", mod = model.3 )
eff
```
```
##
## drug*therapy effect
## therapy
## drug no.therapy CBT
## placebo 0.300000 0.600000
## anxifree 0.400000 1.033333
## joyzepam 1.466667 1.500000
```
Notice that these are actually the same numbers we got when computing the sample means earlier (i.e., the `group.means` variable that we computed using `aggregate()` ). One useful thing that we can do using the effect variable `eff` , however, is extract the confidence intervals using the `summary()` function: `summary(eff)`
```
##
## drug*therapy effect
## therapy
## drug no.therapy CBT
## placebo 0.300000 0.600000
## anxifree 0.400000 1.033333
## joyzepam 1.466667 1.500000
##
## Lower 95 Percent Confidence Limits
## therapy
## drug no.therapy CBT
## placebo 0.006481093 0.3064811
## anxifree 0.106481093 0.7398144
## joyzepam 1.173147759 1.2064811
##
## Upper 95 Percent Confidence Limits
## therapy
## drug no.therapy CBT
## placebo 0.5935189 0.8935189
## anxifree 0.6935189 1.3268522
## joyzepam 1.7601856 1.7935189
```
In this output, we see that the estimated mean mood gain for the placebo group with no therapy was 0.300, with a 95% confidence interval from 0.006 to 0.594. Note that these are not the same confidence intervals that you would get if you calculated them separately for each group, because of the fact that the ANOVA model assumes homogeneity of variance and therefore uses a pooled estimate of the standard deviation.
When the model doesn’t contain the interaction term, then the estimated group means will be different from the sample means. Instead of reporting the sample mean, the `effect()` function will calculate the value of the group means that would be expected on the basis of the marginal means (i.e., assuming no interaction). Using the notation we developed earlier, the estimate reported for \(\mu_{rc}\), the mean for level \(r\) on the (row) Factor A and level \(c\) on the (column) Factor B would be \(\mu_{..} + \alpha_r + \beta_c\). If there are genuinely no interactions between the two factors, this is actually a better estimate of the population mean than the raw sample mean would be. The command to obtain these estimates is actually identical to the last one, except that we use `model.2` . When you do this, R will give you a warning message:
```
eff <- effect( "drug*therapy", model.2 )
```
```
## NOTE: drug:therapy does not appear in the model
```
but this isn’t anything to worry about. This is R being polite, and letting you know that the estimates it is constructing are based on the assumption that no interactions exist. It kind of makes sense that it would do this: when we use `"drug*therapy"` as our input, we’re telling R that we want it to output the estimated group means (rather than marginal means), but the actual input `"drug*therapy"` might mean that you want interactions included or you might not. There’s no actual ambiguity here, because the model itself either does or doesn’t have interactions, but the authors of the function thought it sensible to include a warning just to make sure that you’ve specified the actual model you care about. But, assuming that we genuinely don’t believe that there are any interactions, `model.2` is the right model to use, so we can ignore this warning.237 In any case, when we inspect the output, we get the following table of estimated group means: `eff`
```
##
## drug*therapy effect
## therapy
## drug no.therapy CBT
## placebo 0.2888889 0.6111111
## anxifree 0.5555556 0.8777778
## joyzepam 1.3222222 1.6444444
```
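As a sanity check, these are exactly the numbers you get by adding the estimated main effects to the grand mean, as in the \(\mu_{..} + \alpha_r + \beta_c\) formula from earlier. Here’s a small sketch (again assuming `clin.trial` is loaded; the variable names are mine, not part of the chapter’s running code):

```
Y <- clin.trial$mood.gain
grand.mean <- mean( Y )
alpha.hat <- tapply( Y, clin.trial$drug, mean ) - grand.mean     # drug effects
beta.hat <- tapply( Y, clin.trial$therapy, mean ) - grand.mean   # therapy effects
outer( alpha.hat, beta.hat, "+" ) + grand.mean   # reproduces the table above
```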
As before, we can obtain confidence intervals using the following command:
`summary( eff )`
```
##
## drug*therapy effect
## therapy
## drug no.therapy CBT
## placebo 0.2888889 0.6111111
## anxifree 0.5555556 0.8777778
## joyzepam 1.3222222 1.6444444
##
## Lower 95 Percent Confidence Limits
## therapy
## drug no.therapy CBT
## placebo 0.02907986 0.3513021
## anxifree 0.29574653 0.6179687
## joyzepam 1.06241319 1.3846354
##
## Upper 95 Percent Confidence Limits
## therapy
## drug no.therapy CBT
## placebo 0.5486979 0.8709201
## anxifree 0.8153646 1.1375868
## joyzepam 1.5820313 1.9042535
```
As you can see, the output has pretty much the same structure as last time; the only thing that has changed is the estimates themselves.
## 16.4 Assumption checking
As with one-way ANOVA, the key assumptions of factorial ANOVA are homogeneity of variance (all groups have the same standard deviation), normality of the residuals, and independence of the observations. The first two are things we can test for. The third is something that you need to assess yourself by asking if there are any special relationships between different observations. Additionally, if you aren’t using a saturated model (e.g., if you’ve omitted the interaction terms) then you’re also assuming that the omitted terms aren’t important. Of course, you can check this last one by running an ANOVA with the omitted terms included and see if they’re significant, so that’s pretty easy. What about homogeneity of variance and normality of the residuals? As it turns out, these are pretty easy to check: it’s no different to the checks we did for a one-way ANOVA.
### 16.4.1 Levene test for homogeneity of variance
To test whether the groups have the same variance, we can use the Levene test. The theory behind the Levene test was discussed in Section 14.7, so I won’t discuss it again. Once again, you can use the `leveneTest()` function in the `car` package to do this. This function expects that you have a saturated model (i.e., included all of the relevant terms), because the test is primarily concerned with the within-group variance, and it doesn’t really make a lot of sense to calculate this any way other than with respect to the full model. So if we try to run the Levene test on a model that omits the interaction term, such as `model.2` , R will spit out the following error:
```
Error in leveneTest.formula(formula(y), data = model.frame(y), ...) :
Model must be completely crossed formula only.
```
Instead, if you want to run the Levene test, you need to specify a saturated model. Either of the following two commands would work:238
```
library(car)
leveneTest( model.3 )
```
```
leveneTest( mood.gain ~ drug * therapy, clin.trial )
```
If you run either of these, you’ll find that the Levene test comes up non-significant, which means that we can safely assume that the homogeneity of variance assumption is not violated.
### 16.4.2 Normality of residuals
As with one-way ANOVA, we can test for the normality of residuals in a straightforward fashion (see Section 14.9). First, we use the `residuals()` function to extract the residuals from the model itself, and then we can examine those residuals in a few different ways. It’s generally a good idea to examine them graphically, by drawing histograms (i.e., the `hist()` function) and QQ plots (i.e., the `qqnorm()` function). If you want a formal test for the normality of the residuals, then we can run the Shapiro-Wilk test (i.e., `shapiro.test()` ). If we wanted to check the residuals with respect to `model.2` (i.e., the model with both main effects but no interactions) then we could do the following:
```
resid <- residuals( model.2 ) # pull the residuals
hist( resid ) # draw a histogram
```
```
qqnorm( resid ) # draw a normal QQ plot
```
```
shapiro.test( resid ) # run the Shapiro-Wilk test
```
```
##
## Shapiro-Wilk normality test
##
## data: resid
## W = 0.95635, p-value = 0.5329
```
I haven’t included the plots (you can draw them yourself if you want to see them), but you can see from the non-significance of the Shapiro-Wilk test that normality isn’t violated here.
## 16.5 The \(F\) test as a model comparison
At this point, I want to talk in a little more detail about what the \(F\)-tests in an ANOVA are actually doing. In the context of ANOVA, I’ve been referring to the \(F\)-test as a way of testing whether a particular term in the model (e.g., main effect of Factor A) is significant. This interpretation is perfectly valid, but it’s not necessarily the most useful way to think about the test. In fact, it’s actually a fairly limiting way of thinking about what the \(F\)-test does. Consider the clinical trial data we’ve been working with in this chapter. Suppose I want to see if there are any effects of any kind that involve `therapy` . I’m not fussy: I don’t care if it’s a main effect or an interaction effect.239 One thing I could do is look at the output for `model.3` earlier: in this model we did see a main effect of therapy (\(p=.013\)) but we did not see an interaction effect (\(p=.125\)). That’s kind of telling us what we want to know, but it’s not quite the same thing. What we really want is a single test that jointly checks the main effect of therapy and the interaction effect. Given the way that I’ve been describing the ANOVA \(F\)-test up to this point, you’d be tempted to think that this isn’t possible. On the other hand, if you recall the chapter on regression (in Section 15.10), we were able to use \(F\)-tests to make comparisons between a wide variety of regression models. Perhaps something of that sort is possible with ANOVA? And of course, the answer here is yes. The thing that you really need to understand is that the \(F\)-test, as it is used in both ANOVA and regression, is really a comparison of two statistical models. One of these models is the full model (alternative hypothesis), and the other model is a simpler model that is missing one or more of the terms that the full model includes (null hypothesis). The null model cannot contain any terms that are not in the full model. In the example I gave above, the full model is `model.3` , and it contains a main effect for therapy, a main effect for drug, and the drug by therapy interaction term. The null model would be `model.1` since it contains only the main effect of drug.
### 16.5.1 The \(F\) test comparing two models
Let’s frame this in a slightly more abstract way. We’ll say that our full model can be written as an R formula that contains several different terms, say `Y ~ A + B + C + D` . Our null model only contains some subset of these terms, say `Y ~ A + B` . Some of these terms might be main effect terms, others might be interaction terms. It really doesn’t matter. The only thing that matters here is that we want to treat some of these terms as the “starting point” (i.e. the terms in the null model, `A` and `B` ), and we want to see if including the other terms (i.e., `C` and `D` ) leads to a significant improvement in model performance, over and above what could be achieved by a model that includes only `A` and `B` . In essence, we have null and alternative hypotheses that look like this:
Hypothesis | Correct model? | R formula for correct model |
| --- | --- | --- |
Null | \(M0\) | `Y ~ A + B` |
Alternative | \(M1\) | `Y ~ A + B + C + D` |
Is there a way of making this comparison directly?
To answer this, let’s go back to fundamentals. As we saw in Chapter 14, the \(F\)-test is constructed from two kinds of quantity: sums of squares (SS) and degrees of freedom (df). These two things define a mean square value (MS = SS/df), and we obtain our \(F\) statistic by contrasting the MS value associated with “the thing we’re interested in” (the model) with the MS value associated with “everything else” (the residuals). What we want to do is figure out how to talk about the SS value that is associated with the difference between two models. It’s actually not all that hard to do.
Let’s start with the fundamental rule that we used throughout the chapter on regression: \[ \mbox{SS}_{T} = \mbox{SS}_{M} + \mbox{SS}_{R} \] That is, the total sums of squares (i.e., the overall variability of the outcome variable) can be decomposed into two parts: the variability associated with the model \(\mbox{SS}_{M}\), and the residual or leftover variability, \(\mbox{SS}_{R}\). However, it’s kind of useful to rearrange this equation slightly, and say that the SS value associated with a model is defined like this… \[ \mbox{SS}_{M} = \mbox{SS}_{T} - \mbox{SS}_{R} \] Now, in our scenario, we have two models: the null model (M0) and the full model (M1):
\[ \mbox{SS}_{M0} = \mbox{SS}_{T} - \mbox{SS}_{R0} \]
\[ \mbox{SS}_{M1} = \mbox{SS}_{T} - \mbox{SS}_{R1} \] Next, let’s think about what it is we actually care about here. What we’re interested in is the difference between the full model and the null model. So, if we want to preserve the idea that what we’re doing is an “analysis of the variance” (ANOVA) in the outcome variable, what we should do is define the SS associated with the difference to be equal to the difference in the SS: \[ \begin{array}{rcl} \mbox{SS}_{\Delta} &=& \mbox{SS}_{M1} - \mbox{SS}_{M0}\\ &=& (\mbox{SS}_{T} - \mbox{SS}_{R1}) - (\mbox{SS}_{T} - \mbox{SS}_{R0} ) \\ &=& \mbox{SS}_{R0} - \mbox{SS}_{R1} \end{array} \]
The degrees of freedom work the same way: the degrees of freedom associated with the difference between the two models is just the difference between their residual degrees of freedom, \(\mbox{df}_{\Delta} = \mbox{df}_{R0} - \mbox{df}_{R1}\). Now that we have our degrees of freedom, we can calculate mean squares and \(F\) values in the usual way. Specifically, we’re interested in the mean square for the difference between models, and the mean square for the residuals associated with the full model (M1), which are given by \[ \begin{array}{rcl} \mbox{MS}_{\Delta} &=& \frac{\mbox{SS}_{\Delta} }{ \mbox{df}_{\Delta} } \\ \mbox{MS}_{R1} &=& \frac{ \mbox{SS}_{R1} }{ \mbox{df}_{R1} }\\ \end{array} \] Finally, taking the ratio of these two gives us our \(F\) statistic: \[ F = \frac{\mbox{MS}_{\Delta}}{\mbox{MS}_{R1}} \]

### 16.5.2 Running the test in R
At this point, it may help to go back to our concrete example. The null model here is `model.1` , which stipulates that there is a main effect of drug, but no other effects exist. We expressed this via the model formula `mood.gain ~ drug` . The alternative model here is `model.3` , which stipulates that there is a main effect of drug, a main effect of therapy, and an interaction. If we express this in the “long” format, this model corresponds to the formula `mood.gain ~ drug + therapy + drug:therapy` , though we often express this using the `*` shorthand. The key thing here is that if we compare `model.1` to `model.3` , we’re lumping the main effect of therapy and the interaction term together. Running this test in R is straightforward: we just input both models to the `anova()` function, and it will run the exact \(F\)-test that I outlined above.
```
anova( model.1, model.3 )
```
```
## Analysis of Variance Table
##
## Model 1: mood.gain ~ drug
## Model 2: mood.gain ~ drug * therapy
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 15 1.39167
## 2 12 0.65333 3 0.73833 4.5204 0.02424 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Let’s see if we can reproduce this \(F\)-test ourselves. Firstly, if you go back and look at the ANOVA tables that we printed out for `model.1` and `model.3` you can reassure yourself that the RSS values printed in this table really do correspond to the residual sum of squares associated with these two models. So let’s type them in as variables:
```
ss.res.null <- 1.392
ss.res.full <- 0.653
```
Now, following the procedure that I described above, we will say that the “between model” sum of squares, is the difference between these two residual sum of squares values. So, if we do the subtraction, we discover that the sum of squares associated with those terms that appear in the full model but not the null model is:
```
ss.diff <- ss.res.null - ss.res.full
ss.diff
```
`## [1] 0.739` Right. Next, as always we need to convert these SS values into MS (mean square) values, which we do by dividing by the degrees of freedom. The degrees of freedom associated with the full-model residuals hasn’t changed from our original ANOVA for `model.3` : it’s the total sample size \(N\), minus the total number of groups \(G\) that are relevant to the model. We have 18 people in the trial and 6 possible groups (i.e., 2 therapies \(\times\) 3 drugs), so the degrees of freedom here is 12. The degrees of freedom for the null model are calculated similarly. The only difference here is that there are only 3 relevant groups (i.e., 3 drugs), so the degrees of freedom here is 15. And, because the degrees of freedom associated with the difference is equal to the difference in the two degrees of freedom, we arrive at the conclusion that we have \(15-12 = 3\) degrees of freedom. Now that we know the degrees of freedom, we can calculate our MS values:
```
ms.res <- ss.res.full / 12
ms.diff <- ss.diff / 3
```
Okay, now that we have our two MS values, we can divide one by the other, and obtain an \(F\)-statistic …
```
F.stat <- ms.diff / ms.res
F.stat
```
`## [1] 4.526799` … and, just as we had hoped, this matches the \(F\)-statistic that the `anova()` function produced earlier (the small discrepancy is just rounding error, because we typed in the residual SS values to only three decimal places).
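And, although the `anova()` output already reports it, we can also recover the \(p\)-value from our hand-calculated \(F\)-statistic as one last quick check:

```
pf( F.stat, df1 = 3, df2 = 12, lower.tail = FALSE )   # about 0.024
```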
## 16.6 ANOVA as a linear model
One of the most important things to understand about ANOVA and regression is that they’re basically the same thing. On the surface of it, you wouldn’t think that this is true: after all, the way that I’ve described them so far suggests that ANOVA is primarily concerned with testing for group differences, and regression is primarily concerned with understanding the correlations between variables. And as far as it goes, that’s perfectly true. But when you look under the hood, so to speak, the underlying mechanics of ANOVA and regression are awfully similar. In fact, if you think about it, you’ve already seen evidence of this. ANOVA and regression both rely heavily on sums of squares (SS), both make use of \(F\) tests, and so on. Looking back, it’s hard to escape the feeling that Chapters 14 and 15 were a bit repetitive.
The reason for this is that ANOVA and regression are both kinds of linear models. In the case of regression, this is kind of obvious. The regression equation that we use to define the relationship between predictors and outcomes is the equation for a straight line, so it’s quite obviously a linear model. And if that wasn’t a big enough clue, the simple fact that the command to run a regression is `lm()` is kind of a hint too. When we use an R formula like
```
outcome ~ predictor1 + predictor2
```
what we’re really working with is the somewhat uglier linear model: \[
Y_{p} = b_1 X_{1p} + b_2 X_{2p} + b_0 + \epsilon_{p}
\] where \(Y_p\) is the outcome value for the \(p\)-th observation (e.g., \(p\)-th person), \(X_{1p}\) is the value of the first predictor for the \(p\)-th observation, \(X_{2p}\) is the value of the second predictor for the \(p\)-th observation, the \(b_1\), \(b_2\) and \(b_0\) terms are our regression coefficients, and \(\epsilon_{p}\) is the \(p\)-th residual. If we ignore the residuals \(\epsilon_{p}\) and just focus on the regression line itself, we get the following formula: \[
\hat{Y}_{p} = b_1 X_{1p} + b_2 X_{2p} + b_0
\] where \(\hat{Y}_p\) is the value of \(Y\) that the regression line predicts for person \(p\), as opposed to the actually-observed value \(Y_p\). The thing that isn’t immediately obvious is that we can write ANOVA as a linear model as well. However, it’s actually pretty straightforward to do this. Let’s start with a really simple example: rewriting a \(2 \times 2\) factorial ANOVA as a linear model.
### 16.6.1 Some data
To make things concrete, let’s suppose that our outcome variable is the `grade` that a student receives in my class, a ratio-scale variable corresponding to a mark from 0% to 100%. There are two predictor variables of interest: whether or not the student turned up to lectures (the `attend` variable), and whether or not the student actually read the textbook (the `reading` variable). We’ll say that `attend = 1` if the student attended class, and `attend = 0` if they did not. Similarly, we’ll say that `reading = 1` if the student read the textbook, and `reading = 0` if they did not. Okay, so far that’s simple enough. The next thing we need to do is to wrap some maths around this (sorry!). For the purposes of this example, let \(Y_p\) denote the `grade` of the \(p\)-th student in the class. This is not quite the same notation that we used earlier in this chapter: previously, we’ve used the notation \(Y_{rci}\) to refer to the \(i\)-th person in the \(r\)-th group for predictor 1 (the row factor) and the \(c\)-th group for predictor 2 (the column factor). This extended notation was really handy for describing how the SS values are calculated, but it’s a pain in the current context, so I’ll switch notation here. Now, the \(Y_p\) notation is visually simpler than \(Y_{rci}\), but it has the shortcoming that it doesn’t actually keep track of the group memberships! That is, if I told you that \(Y_{0,0,3} = 35\), you’d immediately know that we’re talking about a student (the 3rd such student, in fact) who didn’t attend the lectures (i.e., `attend = 0` ) and didn’t read the textbook (i.e. `reading = 0` ), and who ended up failing the class ( `grade = 35` ). But if I tell you that \(Y_p = 35\) all you know is that the \(p\)-th student didn’t get a good grade. We’ve lost some key information here. Of course, it doesn’t take a lot of thought to figure out how to fix this: what we’ll do instead is introduce two new variables \(X_{1p}\) and \(X_{2p}\) that keep track of this information. In the case of our hypothetical student, we know that \(X_{1p} = 0\) (i.e., `attend = 0` ) and \(X_{2p} = 0\) (i.e., `reading = 0` ). So the data might look like this:
person \(p\) | grade \(Y_p\) | attendance \(X_{1p}\) | reading \(X_{2p}\) |
| --- | --- | --- | --- |
1 | 90 | 1 | 1 |
2 | 87 | 1 | 1 |
3 | 75 | 0 | 1 |
4 | 60 | 1 | 0 |
5 | 35 | 0 | 0 |
6 | 50 | 0 | 0 |
7 | 65 | 1 | 0 |
8 | 70 | 0 | 1 |
This isn’t anything particularly special, of course: it’s exactly the format in which we expect to see our data! In other words, if your data have been stored as a data frame in R then you’re probably expecting to see something that looks like the `rtfm.1` data frame:
```
load(file.path(projecthome, "data","rtfm.Rdata"))
rtfm.1
```
Well, sort of. I suspect that a few readers are probably frowning a little at this point. Earlier on in the book I emphasised the importance of converting nominal scale variables such as `attend` and `reading` to factors, rather than encoding them as numeric variables. The `rtfm.1` data frame doesn’t do this, but the `rtfm.2` data frame does, and so you might instead be expecting to see data like this: `rtfm.2`
However, for the purposes of this section it’s important that we be able to switch back and forth between these two different ways of thinking about the data. After all, our goal in this section is to look at some of the mathematics that underpins ANOVA, and if we want to do that we need to be able to see the numerical representation of the data (in `rtfm.1` ) as well as the more meaningful factor representation (in `rtfm.2` ). In any case, we can use the `xtabs()` function to confirm that this data set corresponds to a balanced design
```
xtabs( ~ attend + reading, rtfm.2 )
```
```
## reading
## attend no yes
## no 2 2
## yes 2 2
```
For each possible combination of the `attend` and `reading` variables, we have exactly two students. If we’re interested in calculating the mean `grade` for each of these cells, we can use the `aggregate()` function:
```
aggregate( grade ~ attend + reading, rtfm.2, mean )
```
```
## attend reading grade
## 1 no no 42.5
## 2 yes no 62.5
## 3 no yes 72.5
## 4 yes yes 88.5
```
Looking at this table, one gets the strong impression that reading the text and attending the class both matter a lot.
### 16.6.2 ANOVA with binary factors as a regression model
Okay, let’s get back to talking about the mathematics. We now have our data expressed in terms of three numeric variables: the continuous variable \(Y\), and the two binary variables \(X_1\) and \(X_2\). What I want you to recognise is that our \(2 \times 2\) factorial ANOVA is exactly equivalent to the regression model \[ Y_{p} = b_1 X_{1p} + b_2 X_{2p} + b_0 + \epsilon_p \] This is, of course, the exact same equation that I used earlier to describe a two-predictor regression model! The only difference is that \(X_1\) and \(X_2\) are now binary variables (i.e., values can only be 0 or 1), whereas in a regression analysis we expect that \(X_1\) and \(X_2\) will be continuous. There’s a couple of ways I could try to convince you of this. One possibility would be to do a lengthy mathematical exercise, proving that the two are identical. However, I’m going to go out on a limb and guess that most of the readership of this book will find that to be annoying rather than helpful. Instead, I’ll explain the basic ideas, and then rely on R to show that ANOVA analyses and regression analyses aren’t just similar, they’re identical for all intents and purposes.240 Let’s start by running this as an ANOVA. To do this, we’ll use the `rtfm.2` data frame, since that’s the one in which I did the proper thing of coding `attend` and `reading` as factors, and I’ll use the `aov()` function to do the analysis. Here’s what we get…
```
anova.model <- aov( grade ~ attend + reading, data = rtfm.2 )
summary( anova.model )
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## attend 1 648 648 21.60 0.00559 **
## reading 1 1568 1568 52.27 0.00079 ***
## Residuals 5 150 30
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
So, by reading the key numbers off the ANOVA table and the table of means that we presented earlier, we can see that the students obtained a higher grade if they attended class \((F_{1,5} = 21.6, p = .0056)\) and if they read the textbook \((F_{1,5} = 52.3, p = .0008)\). Let’s make a note of those \(p\)-values and those \(F\) statistics. While we’re at it, it’s also handy to have R report the group means that this ANOVA model predicts, using the `Effect()` function from the `effects` package, since we’ll want to compare them to the regression in a moment:
```
library(effects)
Effect( c("attend","reading"), anova.model )
```
```
##
## attend*reading effect
## reading
## attend no yes
## no 43.5 71.5
## yes 61.5 89.5
```
Now let’s think about the same analysis from a linear regression perspective. In the `rtfm.1` data set, we have encoded `attend` and `reading` as if they were numeric predictors. In this case, this is perfectly acceptable. There really is a sense in which a student who turns up to class (i.e. `attend = 1` ) has in fact done “more attendance” than a student who does not (i.e. `attend = 0` ). So it’s not at all unreasonable to include it as a predictor in a regression model. It’s a little unusual, because the predictor only takes on two possible values, but it doesn’t violate any of the assumptions of linear regression. And it’s easy to interpret. If the regression coefficient for `attend` is greater than 0, it means that students that attend lectures get higher grades; if it’s less than zero, then students attending lectures get lower grades. The same is true for our `reading` variable. Wait a second… why is this true? It’s something that is intuitively obvious to everyone who has taken a few stats classes and is comfortable with the maths, but it isn’t clear to everyone else at first pass. To see why this is true, it helps to look closely at a few specific students. Let’s start by considering the 6th and 7th students in our data set (i.e. \(p=6\) and \(p=7\)). Neither one has read the textbook, so in both cases we can set `reading = 0` . Or, to say the same thing in our mathematical notation, we observe \(X_{2,6} = 0\) and \(X_{2,7} = 0\). However, student number 7 did turn up to lectures (i.e., `attend = 1` , \(X_{1,7} = 1\)) whereas student number 6 did not (i.e., `attend = 0` , \(X_{1,6} = 0\)). Now let’s look at what happens when we insert these numbers into the general formula for our regression line. For student number 6, the regression predicts that \[
\begin{array}{rcl}
\hat{Y}_{6} &=& b_1 X_{1,6} + b_2 X_{2,6} + b_0 \\
&=& (b_1 \times 0) + ( b_2 \times 0) + b_0 \\
&=& b_0
\end{array}
\] So we’re expecting that this student will obtain a grade corresponding to the value of the intercept term \(b_0\). What about student 7? This time, when we insert the numbers into the formula for the regression line, we obtain the following: \[
\begin{array}{rcl}
\hat{Y}_{7} &=& b_1 X_{1,7} + b_2 X_{2,7} + b_0 \\
&=& (b_1 \times 1) + ( b_2 \times 0) + b_0 \\
&=& b_1 + b_0
\end{array}
\] Because this student attended class, the predicted grade is equal to the intercept term \(b_0\) plus the coefficient associated with the `attend` variable, \(b_1\). So, if \(b_1\) is greater than zero, we’re expecting that the students who turn up to lectures will get higher grades than those students who don’t. If this coefficient is negative, we’re expecting the opposite: students who turn up at class end up performing much worse. In fact, we can push this a little bit further. What about student number 1, who turned up to class (\(X_{1,1} = 1\)) and read the textbook (\(X_{2,1} = 1\))? If we plug these numbers into the regression, we get \[
\begin{array}{rcl}
\hat{Y}_{1} &=& b_1 X_{1,1} + b_2 X_{2,1} + b_0 \\
&=& (b_1 \times 1) + ( b_2 \times 1) + b_0 \\
&=& b_1 + b_2 + b_0
\end{array}
\] So if we assume that attending class helps you get a good grade (i.e., \(b_1 > 0\)) and if we assume that reading the textbook also helps you get a good grade (i.e., \(b_2 >0\)), then our expectation is that student 1 will get a grade that is higher than student 6 and student 7.
And at this point, you won’t be at all surprised to learn that the regression model predicts that student 3, who read the book but didn’t attend lectures, will obtain a grade of \(b_2 + b_0\). I won’t bore you with yet another regression formula. Instead, what I’ll do is show you the following table of expected grades:
"attended - no","$b_0$","$b_0 + b_2$",
"attended - yes", "$b_0 + b_1$", "$b_0 + b_1 + b_2$"),
col.names = c("","read textbook? no", "read textbook? yes"))
```
| | read textbook? no | read textbook? yes |
| --- | --- | --- |
attended - no | \(b_0\) | \(b_0 + b_2\) |
attended - yes | \(b_0 + b_1\) | \(b_0 + b_1 + b_2\) |
As you can see, the intercept term \(b_0\) acts like a kind of “baseline” grade that you would expect from those students who don’t take the time to attend class or read the textbook. Similarly, \(b_1\) represents the boost that you’re expected to get if you come to class, and \(b_2\) represents the boost that comes from reading the textbook. In fact, if this were an ANOVA you might very well want to characterise \(b_1\) as the main effect of attendance, and \(b_2\) as the main effect of reading! In fact, for a simple \(2 \times 2\) ANOVA that’s exactly how it plays out.
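To connect this table to numbers we’ve already seen, here’s a small sketch that recovers \(b_0\), \(b_1\) and \(b_2\) from the four predicted group means reported by the `Effect()` function above (43.5, 61.5, 71.5 and 89.5); the variable names here are just illustrative:

```
mu <- matrix( c(43.5, 61.5, 71.5, 89.5), nrow = 2,
              dimnames = list( attend = c("no","yes"), reading = c("no","yes") ) )
b0 <- mu["no","no"]          # baseline grade: 43.5
b1 <- mu["yes","no"] - b0    # boost for attending: 18
b2 <- mu["no","yes"] - b0    # boost for reading: 28
b0 + b1 + b2                 # 89.5, the predicted mean for the remaining cell
```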
Okay, now that we’re really starting to see why ANOVA and regression are basically the same thing, let’s actually run our regression using the `rtfm.1` data and the `lm()` function to convince ourselves that this is really true. Running the regression in the usual way gives us the following output:241
```
regression.model <- lm( grade ~ attend + reading, data = rtfm.1 )
summary( regression.model )
```
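Before looking at the output, here's a small sketch of how the four predicted group means can be reconstructed directly from the fitted coefficients (this assumes the `regression.model` object created above):
```
b <- coef( regression.model )   # extract the estimated coefficients
b["(Intercept)"]                                # no attendance, no reading
b["(Intercept)"] + b["attend"]                  # attended lectures only
b["(Intercept)"] + b["reading"]                 # read the textbook only
b["(Intercept)"] + b["attend"] + b["reading"]   # attended and read
```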
There’s a few interesting things to note here. Firstly, notice that the intercept term is 43.5, which is close to the “group” mean of 42.5 observed for those two students who didn’t read the text or attend class. Moreover, it’s identical to the predicted group mean that we pulled out of our ANOVA using the `Effects()` function! Secondly, notice that we have the regression coefficient of \(b_1 = 18.0\) for the attendance variable, suggesting that those students that attended class scored 18% higher than those who didn’t. So our expectation would be that those students who turned up to class but didn’t read the textbook would obtain a grade of \(b_0 + b_1\), which is equal to \(43.5 + 18.0 = 61.5\). Again, this is similar to the observed group mean of 62.5, and identical to the expected group mean that we pulled from our ANOVA. You can verify for yourself that the same thing happens when we look at the students that read the textbook. Actually, we can push a little further in establishing the equivalence of our ANOVA and our regression. Look at the \(p\)-values associated with the `attend` variable and the `reading` variable in the regression output. They’re identical to the ones we encountered earlier when running the ANOVA. This might seem a little surprising, since the test used when running our regression model calculates a \(t\)-statistic and the ANOVA calculates an \(F\)-statistic. However, if you can remember all the way back to Chapter 9, I mentioned that there’s a relationship between the \(t\)-distribution and the \(F\)-distribution: if you have some quantity that is distributed according to a \(t\)-distribution with \(k\) degrees of freedom and you square it, then this new squared quantity follows an \(F\)-distribution whose degrees of freedom are 1 and \(k\). We can check this with respect to the \(t\) statistics in our regression model. For the `attend` variable we get a \(t\) value of 4.648. If we square this number we end up with 21.604, which is identical to the corresponding \(F\) statistic in our ANOVA. Finally, one last thing you should know. Because R understands the fact that ANOVA and regression are both examples of linear models, it lets you extract the classic ANOVA table from your regression model using the `anova()` function. All you have to do is this:
```
anova( regression.model )
```
```
## Analysis of Variance Table
##
## Response: grade
## Df Sum Sq Mean Sq F value Pr(>F)
## attend 1 648 648 21.600 0.0055943 **
## reading 1 1568 1568 52.267 0.0007899 ***
## Residuals 5 150 30
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
### 16.6.3 Changing the baseline category
At this point, you’re probably convinced that the ANOVA and the regression are actually identical to each other. So there’s one last thing I should show you. What happens if I use the data from `rtfm.2` to run the regression? In `rtfm.2` , we coded the `attend` and `reading` variables as factors rather than as numeric variables. Does this matter? It turns out that it doesn’t. The only differences are superficial:
```
regression.model.2 <- lm( grade ~ attend + reading, data = rtfm.2 )
summary( regression.model.2 )
```
The only thing that is different is that R labels the two variables differently: the output now refers to `attendyes` and `readingyes` . You can probably guess what this means. When R refers to `readingyes` it’s trying to indicate that it is assuming that “yes = 1” and “no = 0”. This is important. Suppose we wanted to say that “yes = 0” and “no = 1”. We could still run this as a regression model, but now all of our coefficients will go in the opposite direction, because the effect of `readingno` would be referring to the consequences of not reading the textbook. To show you how this works, we can use the `relevel()` function in R to change which level of the `reading` variable is set to “0”. Here’s how it works. First, let’s get R to print out the `reading` factor as it currently stands: `rtfm.2$reading`
Notice that the order in which R prints out the levels is "no" and then "yes". Now let's apply the `relevel()` function:
```
relevel( x = rtfm.2$reading, ref = "yes" )
```
R now lists “yes” before “no”. This means that R will now treat “yes” as the “reference” level (sometimes called the baseline level) when you include it in an ANOVA. So let’s now create a new data frame with our factors recoded…
```
rtfm.3 <- rtfm.2 # copy the old data frame
rtfm.3$reading <- relevel( rtfm.2$reading, ref="yes" ) # re-level the reading factor
rtfm.3$attend <- relevel( rtfm.2$attend, ref="yes" ) # re-level the attend factor
```
Finally, let’s re-run our regression, this time using the re-coded data:
```
regression.model.3 <- lm( grade ~ attend + reading, data = rtfm.3 )
summary( regression.model.3 )
```
As you can see, there are now a few changes. Most obviously, the `attendno` and `readingno` effects are both negative, though they’re the same magnitude as before: if you don’t read the textbook, for instance, you should expect your grade to drop by 28% relative to someone who did. The \(t\)-statistics have reversed sign too. The \(p\)-values remain the same, of course. The intercept has changed too. In our original regression, the baseline corresponded to a student who didn’t attend class and didn’t read the textbook, so we got a value of 43.5 as the expected baseline grade. However, now that we’ve recoded our variables, the baseline corresponds to a student who has read the textbook and did attend class, and for that student we would expect a grade of 89.5.
### 16.6.4 How to encode non binary factors as contrasts
At this point, I’ve shown you how we can view a \(2\times 2\) ANOVA into a linear model. And it’s pretty easy to see how this generalises to a \(2 \times 2 \times 2\) ANOVA or a \(2 \times 2 \times 2 \times 2\) ANOVA… it’s the same thing, really: you just add a new binary variable for each of your factors. Where it begins to get trickier is when we consider factors that have more than two levels. Consider, for instance, the \(3 \times 2\) ANOVA that we ran earlier in this chapter using the `clin.trial` data. How can we convert the three-level `drug` factor into a numerical form that is appropriate for a regression? The answer to this question is pretty simple, actually. All we have to do is realise that a three-level factor can be redescribed as two binary variables. Suppose, for instance, I were to create a new binary variable called `druganxifree` . Whenever the `drug` variable is equal to `"anxifree"` we set `druganxifree = 1` . Otherwise, we set `druganxifree = 0` . This variable sets up a contrast, in this case between anxifree and the other two drugs. By itself, of course, the `druganxifree` contrast isn’t enough to fully capture all of the information in our `drug` variable. We need a second contrast, one that allows us to distinguish between joyzepam and the placebo. To do this, we can create a second binary contrast, called `drugjoyzepam` , which equals 1 if the drug is joyzepam, and 0 if it is not. Taken together, these two contrasts allows us to perfectly discriminate between all three possible drugs. The table below illustrates this:
| `drug` | `druganxifree` | `drugjoyzepam` |
| --- | --- | --- |
| `placebo` | 0 | 0 |
| `anxifree` | 1 | 0 |
| `joyzepam` | 0 | 1 |
If the drug administered to a patient is a placebo, then both of the two contrast variables will equal 0. If the drug is Anxifree, then the `druganxifree` variable will equal 1, and `drugjoyzepam` will be 0. The reverse is true for Joyzepam: `drugjoyzepam` is 1, and `druganxifree` is 0. Creating contrast variables manually is not too difficult to do using basic R commands. For example, here’s how we would create the `druganxifree` variable:
```
druganxifree <- as.numeric( clin.trial$drug == "anxifree" )
druganxifree
```
The
```
clin.trial$drug == "anxifree"
```
part of the command returns a logical vector that has a value of `TRUE` if the drug is Anxifree, and a value of `FALSE` if the drug is Joyzepam or the placebo. The `as.numeric()` function just converts `TRUE` to 1 and `FALSE` to 0. Obviously, this command creates the `druganxifree` variable inside the workspace. If you wanted to add it to the `clin.trial` data frame, you'd use a command like this instead:
```
clin.trial$druganxifree <- as.numeric( clin.trial$drug == "anxifree" )
```
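For completeness, here's a sketch of the same trick applied to the remaining contrasts we'd need for this analysis (the variable names are mine, chosen to match the ones used later in this section):
```
drugjoyzepam <- as.numeric( clin.trial$drug == "joyzepam" )   # 1 if joyzepam, 0 otherwise
therapyCBT <- as.numeric( clin.trial$therapy == "CBT" )       # 1 if CBT, 0 if no.therapy
```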
You could then repeat this for the other contrasts that you wanted to use. However, it’s kind of tedious to do this over and over again for every single contrast that you want to create. To make it a little easier, the `lsr` package contains a simple function called `expandFactors()` that will convert every factor in a data frame into a set of contrast variables.242 We can use it to create a new data frame, `clin.trial.2` that contains the same data as `clin.trial` , but with the two factors represented in terms of the contrast variables:
```
clin.trial.2 <- expandFactors( clin.trial )
```
`clin.trial.2`
It’s not as pretty as the original `clin.trial` data, but it’s definitely saying the same thing. We have now recoded our three-level factor in terms of two binary variables, and we’ve already seen that ANOVA and regression behave the same way for binary variables. However, there are some additional complexities that arise in this case, which we’ll discuss in the next section.
### 16.6.5 The equivalence between ANOVA and regression for non-binary factors
Now we have two different versions of the same data set: our original data frame `clin.trial` in which the `drug` variable is expressed as a single three-level factor, and the expanded data set `clin.trial.2` in which it is expanded into two binary contrasts. Once again, the thing that we want to demonstrate is that our original \(3 \times 2\) factorial ANOVA is equivalent to a regression model applied to the contrast variables. Let’s start by re-running the ANOVA:
```
drug.anova <- aov( mood.gain ~ drug + therapy, clin.trial )
summary( drug.anova )
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## drug 2 3.453 1.7267 26.149 1.87e-05 ***
## therapy 1 0.467 0.4672 7.076 0.0187 *
## Residuals 14 0.924 0.0660
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Obviously, there’s no surprises here. That’s the exact same ANOVA that we ran earlier, except for the fact that I’ve arbitrarily decided to rename the output variable as `drug.anova` for some stupid reason.243 Next, let’s run a regression, using `druganxifree` , `drugjoyzepam` and `therapyCBT` as the predictors. Here’s what we get:
```
drug.regression <- lm( mood.gain ~ druganxifree + drugjoyzepam + therapyCBT, clin.trial.2 )
summary( drug.regression )
```
Hm. This isn’t the same output that we got last time. Not surprisingly, the regression output prints out the results for each of the three predictors separately, just like it did every other time we used `lm()` . On the one hand, we can see that the \(p\)-value for the `therapyCBT` variable is exactly the same as the one for the `therapy` factor in our original ANOVA, so we can be reassured that the regression model is doing the same thing as the ANOVA did. On the other hand, this regression model is testing the `druganxifree` contrast and the `drugjoyzepam` contrast separately, as if they were two completely unrelated variables. It’s not surprising of course, because the poor `lm()` function has no way of knowing that `drugjoyzepam` and `druganxifree` are actually the two different contrasts that we used to encode our three-level `drug` factor. As far as it knows, `drugjoyzepam` and `druganxifree` are no more related to one another than `drugjoyzepam` and `therapyCBT` . However, you and I know better. At this stage we’re not at all interested in determining whether these two contrasts are individually significant. We just want to know if there’s an “overall” effect of drug. That is, what we want R to do is to run some kind of “omnibus” test, one in which the two “drug-related” contrasts are lumped together for the purpose of the test. Sound familiar? This is exactly the situation that we discussed in Section 16.5, and it is precisely this situation that the \(F\)-test is built to handle. All we need to do is specify our null model, which in this case would include the `therapyCBT` predictor, and omit both of the drug-related variables, and then run it through the `anova()` function:
```
nodrug.regression <- lm( mood.gain ~ therapyCBT, clin.trial.2 )
anova( nodrug.regression, drug.regression )
```
```
## Analysis of Variance Table
##
## Model 1: mood.gain ~ therapyCBT
## Model 2: mood.gain ~ druganxifree + drugjoyzepam + therapyCBT
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 16 4.3778
## 2 14 0.9244 2 3.4533 26.149 1.872e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Ah, that’s better. Our \(F\)-statistic is 26.1, the degrees of freedom are 2 and 14, and the \(p\)-value is \(0.000019\). The numbers are identical to the ones we obtained for the main effect of `drug` in our original ANOVA. Once again, we see that ANOVA and regression are essentially the same: they are both linear models, and the underlying statistical machinery for ANOVA is identical to the machinery used in regression. The importance of this fact should not be understated. Throughout the rest of this chapter we’re going to rely heavily on this idea.
### 16.6.6 Degrees of freedom as parameter counting!
At long last, I can finally give a definition of degrees of freedom that I am happy with. Degrees of freedom are defined in terms of the number of parameters that have to be estimated in a model. For a regression model or an ANOVA, the number of parameters corresponds to the number of regression coefficients (i.e. \(b\)-values), including the intercept. Keeping in mind that any \(F\)-test is always a comparison between two models, the first \(df\) is the difference in the number of parameters. For example, in the model comparison above, the null model (
```
mood.gain ~ therapyCBT
```
) has two parameters: there’s one regression coefficient for the `therapyCBT` variable, and a second one for the intercept. The alternative model (
```
mood.gain ~ druganxifree + drugjoyzepam + therapyCBT
```
) has four parameters: one regression coefficient for each of the three contrasts, and one more for the intercept. So the degrees of freedom associated with the difference between these two models is \(df_1 = 4-2 = 2\). What about the case when there doesn’t seem to be a null model? For instance, you might be thinking of the \(F\)-test that appears at the very bottom of the regression output. I originally described that as a test of the regression model as a whole. However, that is still a comparison between two models. The null model is the trivial model that only includes an intercept, which is written as `outcome ~ 1` in R, and the alternative model is the full regression model. The null model in this case contains 1 regression coefficient, for the intercept term. The alternative model contains \(K+1\) regression coefficients, one for each of the \(K\) predictor variables and one more for the intercept. So the \(df\) value that you see in this \(F\) test is equal to \(df_1 = K+1-1 = K\).
What about the second \(df\) value that appears in the \(F\)-test? This always refers to the degrees of freedom associated with the residuals. It is possible to think of this in terms of parameters too, but in a slightly counterintuitive way. Think of it like this: suppose that the total number of observations across the study as a whole is \(N\). If you wanted to perfectly describe each of these \(N\) values, you need to do so using, well… \(N\) numbers. When you build a regression model, what you're really doing is specifying some of the numbers that are needed to perfectly describe the data. If your model has \(K\) predictors and an intercept, then you've specified \(K+1\) numbers. So, without bothering to figure out exactly how this would be done, how many more numbers do you think are going to be needed to transform a \(K+1\) parameter regression model into a perfect redescription of the raw data? If you found yourself thinking that \((K+1) + (N-K-1) = N\), and so the answer would have to be \(N-K-1\), well done! That's exactly right: in principle you can imagine an absurdly complicated regression model that includes a parameter for every single data point, and it would of course provide a perfect description of the data. This model would contain \(N\) parameters in total, but we're interested in the difference between the number of parameters required to describe this full model (i.e. \(N\)) and the number of parameters used by the simpler regression model that you're actually interested in (i.e., \(K+1\)), and so the second degrees of freedom in the \(F\) test is \(df_2 = N-K-1\), where \(K\) is the number of predictors (in a regression model) or the number of contrasts (in an ANOVA). In the example I gave above, there are \(N=18\) observations in the data set, and \(K+1 = 4\) regression coefficients associated with the ANOVA model, so the degrees of freedom for the residuals is \(df_2 = 18-4 = 14\).
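If you want to convince yourself of this, a quick sanity check is to pull both numbers out of the fitted model (a sketch, assuming the `drug.regression` object from earlier):
```
length( coef( drug.regression ) )   # K + 1 = 4 estimated coefficients
df.residual( drug.regression )      # N - K - 1 = 18 - 4 = 14 residual df
```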
### 16.6.7 A postscript
There’s one last thing I want to mention in this section. In the previous example, I used the `aov()` function to run an ANOVA using the `clin.trial` data which codes `drug` variable as a single factor. I also used the `lm()` function to run a regression using the `clin.trial` data in which we have two separate contrasts describing the drug. However, it’s also possible to use the `lm()` function on the the original data. That is, you could use a command like this:
```
drug.lm <- lm( mood.gain ~ drug + therapy, clin.trial )
```
The fact that `drug` is a three-level factor does not matter. As long as the `drug` variable has been declared to be a factor, R will automatically translate it into two binary contrast variables, and will perform the appropriate analysis. After all, as I’ve been saying throughout this section, ANOVA and regression are both linear models, and `lm()` is the function that handles linear models. In fact, the `aov()` function doesn’t actually do very much of the work when you run an ANOVA using it: internally, R just passes all the hard work straight to `lm()` . However, I want to emphasise again that it is critical that your factor variables are declared as such. If `drug` were declared to be a numeric variable, then R would be happy to treat it as one. After all, it might be that `drug` refers to the number of drugs that one has taken in the past, or something that is genuinely numeric. R won’t second guess you here. It assumes your factors are factors and your numbers are numbers. Don’t make the mistake of encoding your factors as numbers, or R will run the wrong analysis. This is not a flaw in R: it is your responsibility as the analyst to make sure you’re specifying the right model for your data. Software really can’t be trusted with this sort of thing. Okay, warnings aside, it’s actually kind of neat to run your ANOVA using the `lm()` function in the way I did above. Because you’ve called the `lm()` function, the `summary()` that R pulls out is formatted like a regression. To save space I won’t show you the output here, but you can easily verify this by typing `summary( drug.lm )`
However, because the `drug` and `therapy` variables were both factors, the `anova()` function actually knows which contrasts to group together for the purposes of running the \(F\)-tests, so you can extract the classic ANOVA table. Again, I won’t reproduce the output here since it’s identical to the ANOVA table I showed at the start of the section, but it’s worth trying the following command `anova( drug.lm )`
```
## Analysis of Variance Table
##
## Response: mood.gain
## Df Sum Sq Mean Sq F value Pr(>F)
## drug 2 3.4533 1.72667 26.1490 1.872e-05 ***
## therapy 1 0.4672 0.46722 7.0757 0.01866 *
## Residuals 14 0.9244 0.06603
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
just to see for yourself. However, this behaviour of the `anova()` function only occurs when the predictor variables are factors. If we try a command like
```
anova( drug.regression )
```
, the output will continue to treat `druganxifree` and `drugjoyzepam` as if they were two distinct binary variables. This is because in the `drug.regression` model we included all the contrasts as "raw" variables, so R had no idea which ones belonged together. However, when we ran the `drug.lm` model, we gave R the original factor variables, so it does know which contrasts go together. The behaviour of the `anova()` function reflects that.
## 16.7 Different ways to specify contrasts
In the previous section, I showed you a method for converting a factor into a collection of contrasts. In the method I showed you, we specify a set of binary variables which, between them, define a table like this one:
"\"placebo\"", "0", "0",
"\"anxifree\"", "1", "0",
"\"joyzepam\"", "0", "1"
), col.names = c("drug", "druganxifree", "drugjoyzepam"))
```
drug | druganxifree | drugjoyzepam |
| --- | --- | --- |
“placebo” | 0 | 0 |
“anxifree” | 1 | 0 |
“joyzepam” | 0 | 1 |
Each row in the table corresponds to one of the factor levels, and each column corresponds to one of the contrasts. This table, which always has one more row than columns, has a special name: it is called a contrast matrix. However, there are lots of different ways to specify a contrast matrix. In this section I discuss a few of the standard contrast matrices that statisticians use, and how you can use them in R. If you’re planning to read the section on unbalanced ANOVA later on (Section 16.10) it’s worth reading this section carefully. If not, you can get away with skimming it, because the choice of contrasts doesn’t matter much for balanced designs.
### 16.7.1 Treatment contrasts
In the particular kind of contrasts that I've described above, one level of the factor is special, and acts as a kind of "baseline" category (i.e., `placebo` in our example), against which the other two are defined. The name for these kinds of contrast is treatment contrasts. The name reflects the fact that these contrasts are quite natural and sensible when one of the categories in your factor really is special because it actually does represent a baseline. That makes sense in our clinical trial example: the `placebo` condition corresponds to the situation where you don't give people any real drugs, and so it's special. The other two conditions are defined in relation to the placebo: in one case you replace the placebo with Anxifree, and in the other case you replace it with Joyzepam. R comes with a variety of functions that can generate different kinds of contrast matrices. For example, the table shown above is a matrix of treatment contrasts for a factor that has 3 levels. But suppose I want a matrix of treatment contrasts for a factor with 5 levels? The `contr.treatment()` function will do this:
```
contr.treatment( n=5 )
```
Notice that, by default, the first level of the factor is always treated as the baseline category (i.e., it’s the one that has all zeros, and doesn’t have an explicit contrast associated with it). In Section 16.6.3 I mentioned that you can use the `relevel()` function to change which category is the first level of the factor.244 There’s also a special function in R called `contr.SAS()` that generates a treatment contrast matrix in which the last category is treated as the baseline: `contr.SAS( n=5 )`
However, you can actually select any category you like as the baseline within the `contr.treatment()` function, by specifying the `base` argument in that function. See the help documentation for more details.
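For instance, a quick sketch (the choice of the third category as the baseline here is entirely arbitrary):
```
contr.treatment( n=5, base=3 )   # treatment contrasts with level 3 as the baseline
```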
### 16.7.2 Helmert contrasts
Treatment contrasts are useful for a lot of situations, and they’re the default in R. However, they make most sense in the situation when there really is a baseline category, and you want to assess all the other groups in relation to that one. In other situations, however, no such baseline category exists, and it may make more sense to compare each group to the mean of the other groups. This is where Helmert contrasts, generated by the `contr.helmert()` function, can be useful. The idea behind Helmert contrasts is to compare each group to the mean of the “previous” ones. That is, the first contrast represents the difference between group 2 and group 1, the second contrast represents the difference between group 3 and the mean of groups 1 and 2, and so on. This translates to a contrast matrix that looks like this: `contr.helmert( n=5 )`
One useful thing about Helmert contrasts is that every contrast sums to zero (i.e., all the columns sum to zero). This has the consequence that, when we interpret the ANOVA as a regression, the intercept term corresponds to the grand mean \(\mu_{..}\) if we are using Helmert contrasts. Compare this to treatment contrasts, in which the intercept term corresponds to the group mean for the baseline category. This property can be very useful in some situations. It doesn't matter very much if you have a balanced design, which we've been assuming so far, but it will turn out to be important later when we consider unbalanced designs in Section 16.10. In fact, the main reason why I've even bothered to include this section on specifying contrasts is that they become important if you want to understand unbalanced ANOVA.
### 16.7.3 Sum to zero contrasts
The third option that I should briefly mention are “sum to zero” contrasts, which are used to construct pairwise comparisons between groups. Specifically, each contrast encodes the difference between one of the groups and a baseline category, which in this case corresponds to the last group:
`contr.sum( n=5 )`
Much like Helmert contrasts, we see that each column sums to zero, which means that the intercept term corresponds to the grand mean when ANOVA is treated as a regression model. When interpreting these contrasts, the thing to recognise is that each of these contrasts is a pairwise comparison between group 5 and one of the other four groups. Specifically, contrast 1 corresponds to a “group 1 minus group 5” comparison, contrast 2 corresponds to a “group 2 minus group 5” comparison, and so on.
### 16.7.4 Viewing and setting the default contrasts in R
Every factor variable in R is associated with a contrast matrix. It has to be, otherwise R wouldn't be able to run ANOVAs properly! If you don't specify one explicitly, R will implicitly specify one for you. Here's what I mean. When I created the `clin.trial` data, I didn't specify any contrast matrix for either of the factors. You can see this by using the `attr()` function to print out the "contrasts" attribute of the factors. For example:
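A minimal sketch of that check, using the `drug` factor:
```
attr( clin.trial$drug, "contrasts" )   # print the "contrasts" attribute, if any
```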
`## NULL` The `NULL` output here means that R is telling you that the `drug` factor doesn't have any attribute called "contrasts" for which it has any data. There is no contrast matrix stored anywhere explicitly for this factor. However, if we now ask R to tell us what contrasts are set up for this factor, it gives us this:
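A sketch of the query, again using the `drug` factor:
```
contrasts( clin.trial$drug )   # the contrasts R will actually use for this factor
```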
These are the same treatment contrasts that we set up manually in Section 16.6. How did R know to set up treatment contrasts, even though I never actually told it anything about what contrasts I wanted? The answer is that R has a hidden list of default "options" that it looks up to resolve situations like this. You can print out all of the options by typing `options()` at the command prompt, but it's not a very enlightening read. There are a lot of options, and we're only interested in contrasts right now. Instead of printing out all of the options, we can ask for just one, like this:
```
options( "contrasts" )
```
```
## $contrasts
## [1] "contr.treatment" "contr.poly"
```
What this is telling us is that the default contrasts for unordered factors (i.e., nominal scale variables) are treatment contrasts, and the default for ordered factors (i.e., ordinal scale variables) are "polynomial" contrasts. I don't discuss ordered factors much in this book, and so I won't go into what polynomial contrasts are all about. The key thing is that the `options()` function also allows you to reset these defaults (though only for the current session: they'll revert to the original settings once you close R). Here's the command:
```
options(contrasts = c("contr.helmert", "contr.poly"))
```
Once we’ve done this, we can inspect the contrast settings again:
`options("contrasts")`
```
## $contrasts
## [1] "contr.helmert" "contr.poly"
```
Now we see that the default contrasts for unordered factors have changed. So if I now ask R to tell me what contrasts are associated with the `drug` factor, it gives a different answer because I changed the default:
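Re-running the same query as before (a sketch):
```
contrasts( clin.trial$drug )   # now returns a Helmert contrast matrix
```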
Those are Helmert contrasts. In general, if you’re changing the default settings for something in R, it’s a good idea to reset them to their original values once you’re done. So let’s do that:
```
options(contrasts = c("contr.treatment", "contr.poly"))
```
### 16.7.5 Setting the contrasts for a single factor
In the previous section, I showed you how to alter the default contrasts. However, suppose that all you really want to do is change the contrasts associated with a single factor, and leave the defaults as they are. To do this, what you need to do is specifically assign the contrast matrix as an "attribute" of the factor. This is easy to do via the `contrasts()` function. For instance, suppose I wanted to use sum to zero contrasts for the `drug` factor, but keep the default treatment contrasts for everything else. I could do that like so:
```
contrasts( clin.trial$drug ) <- contr.sum(3)
```
And if I now inspect the contrasts, I get the following
```
contrasts( clin.trial$drug)
```
However, the contrasts for everything else will still be the defaults. You can check that we have actually made a specific change to the factor itself by checking to see if it now has an attribute, using the command
`attr( clin.trial$drug, "contrasts" )`. This will print out the same output shown above, because the contrast matrix has in fact been attached to the `drug` factor, and does not rely on the defaults. If you want to wipe the attribute and revert to the defaults, use a command like this:
```
contrasts( clin.trial$drug ) <- NULL
```
### 16.7.6 Setting the contrasts for a single analysis
One last way of changing contrasts. You might find yourself wanting to change the contrasts only for one specific analysis. That’s allowed too, because the `aov()` and `lm()` functions have a `contrasts` argument that you can use. To change contrasts for one specific analysis, we first set up a list variable that names245 the contrast types that you want to use for each of the factors:
```
my.contrasts <- list( drug = contr.helmert, therapy = contr.helmert )
```
Next, fit the ANOVA model in the usual way, but this time we’ll specify the `contrasts` argument:
```
mod <- aov( mood.gain ~ drug*therapy, clin.trial, contrasts = my.contrasts )
```
If you try a command like `summary(mod)` you won't see any difference in the output, because the choice of contrasts does not affect the outcome when you have a balanced design (this won't always be true later on). However, if you want to check that it has actually worked, you can inspect the value of `mod$contrasts`: `mod$contrasts`
```
## $drug
## [,1] [,2]
## placebo -1 -1
## anxifree 1 -1
## joyzepam 0 2
##
## $therapy
## [,1]
## no.therapy -1
## CBT 1
```
As you can see, for the purposes of this one particular ANOVA, R has used Helmert contrasts for both variables. If I had omitted the part of the command that specified the `contrasts` argument, you’d be looking at treatment contrasts here because it would have reverted to whatever values the `contrasts()` function prints out for each of the factors.
## 16.8 Post hoc tests
Time to switch to a different topic. Let’s suppose you’ve done your ANOVA, and it turns out that you obtained some significant effects. Because of the fact that the \(F\)-tests are “omnibus” tests that only really test the null hypothesis that there are no differences among groups, obtaining a significant effect doesn’t tell you which groups are different to which other ones. We discussed this issue back in Chapter 14, and in that chapter our solution was to run \(t\)-tests for all possible pairs of groups, making corrections for multiple comparisons (e.g., Bonferroni, Holm) to control the Type I error rate across all comparisons. The methods that we used back in Chapter 14 have the advantage of being relatively simple, and being the kind of tools that you can use in a lot of different situations where you’re testing multiple hypotheses, but they’re not necessarily the best choices if you’re interested in doing efficient post hoc testing in an ANOVA context. There are actually quite a lot of different methods for performing multiple comparisons in the statistics literature (Hsu 1996), and it would be beyond the scope of an introductory text like this one to discuss all of them in any detail.
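To make this concrete, here's a minimal sketch of that style of correction applied to the `drug` factor in the `clin.trial` data, using base R's `pairwise.t.test()` (this is my illustration rather than a command taken from the text):
```
# All pairwise t-tests between drug groups, with a Holm correction
pairwise.t.test( clin.trial$mood.gain, clin.trial$drug, p.adjust.method = "holm" )
```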
That being said, there’s one tool that I do want to draw your attention to, namely Tukey’s “Honestly Significant Difference”, or Tukey’s HSD for short. For once, I’ll spare you the formulas, and just stick to the qualitative ideas. The basic idea in Tukey’s HSD is to examine all relevant pairwise comparisons between groups, and it’s only really appropriate to use Tukey’s HSD if it is pairwise differences that you’re interested in.246 For instance, in `model.2` , where we specified a main effect for drug and a main effect of therapy, we would be interested in the following four comparisons:
* The difference in mood gain for people given Anxifree versus people given the placebo.
* The difference in mood gain for people given Joyzepam versus people given the placebo.
* The difference in mood gain for people given Anxifree versus people given Joyzepam.
* The difference in mood gain for people treated with CBT and people given no therapy.
For any one of these comparisons, we’re interested in the true difference between (population) group means. Tukey’s HSD constructs simultaneous confidence intervals for all four of these comparisons. What we mean by 95% “simultaneous” confidence interval is that there is a 95% probability that all of these confidence intervals contain the relevant true value. Moreover, we can use these confidence intervals to calculate an adjusted \(p\) value for any specific comparison.
The `TukeyHSD()` function in R is pretty easy to use: you simply input the model that you want to run the post hoc tests for. For example, if we were looking to run post hoc tests for `model.2` , here’s the command we would use: `TukeyHSD( model.2 )`
```
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = mood.gain ~ drug + therapy, data = clin.trial)
##
## $drug
## diff lwr upr p adj
## anxifree-placebo 0.2666667 -0.1216321 0.6549655 0.2062942
## joyzepam-placebo 1.0333333 0.6450345 1.4216321 0.0000186
## joyzepam-anxifree 0.7666667 0.3783679 1.1549655 0.0003934
##
## $therapy
## diff lwr upr p adj
## CBT-no.therapy 0.3222222 0.0624132 0.5820312 0.0186602
```
The output here is (I hope) pretty straightforward. The first comparison, for example, is the Anxifree versus placebo difference, and the first part of the output indicates that the observed difference in group means is \(.27\). The next two numbers indicate that the 95% (simultaneous) confidence interval for this comparison runs from \(-.12\) to \(.65\). Because the confidence interval for the difference includes 0, we cannot reject the null hypothesis that the two group means are identical, and so we’re not all that surprised to see that the adjusted \(p\)-value is \(.21\). In contrast, if you look at the next line, we see that the observed difference between Joyzepam and the placebo is 1.03, and the 95% confidence interval runs from \(.64\) to \(1.42\). Because the interval excludes 0, we see that the result is significant \((p<.001)\).
So far, so good. What about the situation where your model includes interaction terms? For instance, in `model.3` we allowed for the possibility that there is an interaction between drug and therapy. If that’s the case, the number of pairwise comparisons that we need to consider starts to increase. As before, we need to consider the three comparisons that are relevant to the main effect of `drug` and the one comparison that is relevant to the main effect of `therapy` . But, if we want to consider the possibility of a significant interaction (and try to find the group differences that underpin that significant interaction), we need to include comparisons such as the following:
* The difference in mood gain for people given Anxifree and treated with CBT, versus people given the placebo and treated with CBT
* The difference in mood gain for people given Anxifree and given no therapy, versus people given the placebo and given no therapy.
* etc
There are quite a lot of these comparisons that you need to consider. So, when we run the `TukeyHSD()` command for `model.3` we see that it has made a lot of pairwise comparisons (19 in total). Here’s the output: `TukeyHSD( model.3 )`
```
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = mood.gain ~ drug * therapy, data = clin.trial)
##
## $drug
## diff lwr upr p adj
## anxifree-placebo 0.2666667 -0.09273475 0.6260681 0.1597148
## joyzepam-placebo 1.0333333 0.67393191 1.3927348 0.0000160
## joyzepam-anxifree 0.7666667 0.40726525 1.1260681 0.0002740
##
## $therapy
## diff lwr upr p adj
## CBT-no.therapy 0.3222222 0.08256504 0.5618794 0.012617
##
## $`drug:therapy`
## diff lwr
## anxifree:no.therapy-placebo:no.therapy 0.10000000 -0.539927728
## joyzepam:no.therapy-placebo:no.therapy 1.16666667 0.526738939
## placebo:CBT-placebo:no.therapy 0.30000000 -0.339927728
## anxifree:CBT-placebo:no.therapy 0.73333333 0.093405606
## joyzepam:CBT-placebo:no.therapy 1.20000000 0.560072272
## joyzepam:no.therapy-anxifree:no.therapy 1.06666667 0.426738939
## placebo:CBT-anxifree:no.therapy 0.20000000 -0.439927728
## anxifree:CBT-anxifree:no.therapy 0.63333333 -0.006594394
## joyzepam:CBT-anxifree:no.therapy 1.10000000 0.460072272
## placebo:CBT-joyzepam:no.therapy -0.86666667 -1.506594394
## anxifree:CBT-joyzepam:no.therapy -0.43333333 -1.073261061
## joyzepam:CBT-joyzepam:no.therapy 0.03333333 -0.606594394
## anxifree:CBT-placebo:CBT 0.43333333 -0.206594394
## joyzepam:CBT-placebo:CBT 0.90000000 0.260072272
## joyzepam:CBT-anxifree:CBT 0.46666667 -0.173261061
## upr p adj
## anxifree:no.therapy-placebo:no.therapy 0.7399277 0.9940083
## joyzepam:no.therapy-placebo:no.therapy 1.8065944 0.0005667
## placebo:CBT-placebo:no.therapy 0.9399277 0.6280049
## anxifree:CBT-placebo:no.therapy 1.3732611 0.0218746
## joyzepam:CBT-placebo:no.therapy 1.8399277 0.0004380
## joyzepam:no.therapy-anxifree:no.therapy 1.7065944 0.0012553
## placebo:CBT-anxifree:no.therapy 0.8399277 0.8917157
## anxifree:CBT-anxifree:no.therapy 1.2732611 0.0529812
## joyzepam:CBT-anxifree:no.therapy 1.7399277 0.0009595
## placebo:CBT-joyzepam:no.therapy -0.2267389 0.0067639
## anxifree:CBT-joyzepam:no.therapy 0.2065944 0.2750590
## joyzepam:CBT-joyzepam:no.therapy 0.6732611 0.9999703
## anxifree:CBT-placebo:CBT 1.0732611 0.2750590
## joyzepam:CBT-placebo:CBT 1.5399277 0.0050693
## joyzepam:CBT-anxifree:CBT 1.1065944 0.2139229
```
It looks pretty similar to before, but with a lot more comparisons made.
## 16.9 The method of planned comparisons
Okay, I have a confession to make. I haven’t had time to write this section, but I think the method of planned comparisons is important enough to deserve a quick discussion. In our discussions of multiple comparisons, in the previous section and back in Chapter 14, I’ve been assuming that the tests you want to run are genuinely post hoc. For instance, in our drugs example above, maybe you thought that the drugs would all have different effects on mood (i.e., you hypothesised a main effect of drug), but you didn’t have any specific hypothesis about how they would be different, nor did you have any real idea about which pairwise comparisons would be worth looking at. If that is the case, then you really have to resort to something like Tukey’s HSD to do your pairwise comparisons.
The situation is rather different, however, if you genuinely did have real, specific hypotheses about which comparisons are of interest, and you never ever have any intention to look at any other comparisons besides the ones that you specified ahead of time. When this is true, and if you honestly and rigorously stick to your noble intentions to not run any other comparisons (even when the data look like they're showing you deliciously significant effects for stuff you didn't have a hypothesis test for), then it doesn't really make a lot of sense to run something like Tukey's HSD, because it makes corrections for a whole bunch of comparisons that you never cared about and never had any intention of looking at. Under those circumstances, you can safely run a (limited) number of hypothesis tests without making an adjustment for multiple testing. This situation is known as the method of planned comparisons, and it is sometimes used in clinical trials. In a later version of this book, I would like to talk a lot more about planned comparisons.
## 16.10 Factorial ANOVA 3: unbalanced designs
Factorial ANOVA is a very handy thing to know about. It’s been one of the standard tools used to analyse experimental data for many decades, and you’ll find that you can’t read more than two or three papers in psychology without running into an ANOVA in there somewhere. However, there’s one huge difference between the ANOVAs that you’ll see in a lot of real scientific articles and the ANOVA that I’ve just described: in real life, we’re rarely lucky enough to have perfectly balanced designs. For one reason or another, it’s typical to end up with more observations in some cells than in others. Or, to put it another way, we have an unbalanced design.
Unbalanced designs need to be treated with a lot more care than balanced designs, and the statistical theory that underpins them is a lot messier. It might be a consequence of this messiness, or it might be a shortage of time, but my experience has been that undergraduate research methods classes in psychology have a nasty tendency to ignore this issue completely. A lot of stats textbooks tend to gloss over it too. The net result of this, I think, is that a lot of active researchers in the field don’t actually know that there’s several different “types” of unbalanced ANOVAs, and they produce quite different answers. In fact, reading the psychological literature, I’m kind of amazed at the fact that most people who report the results of an unbalanced factorial ANOVA don’t actually give you enough details to reproduce the analysis: I secretly suspect that most people don’t even realise that their statistical software package is making a whole lot of substantive data analysis decisions on their behalf. It’s actually a little terrifying, when you think about it. So, if you want to avoid handing control of your data analysis to stupid software, read on…
### 16.10.1 The coffee data
As usual, it will help us to work with some data. The `coffee.Rdata` file contains a hypothetical data set (the `coffee` data frame) that produces an unbalanced \(3 \times 2\) ANOVA. Suppose we were interested in finding out whether or not the tendency of people to `babble` when they have too much coffee is purely an effect of the coffee itself, or whether there’s some effect of the `milk` and `sugar` that people add to the coffee. Suppose we took 18 people, and gave them some coffee to drink. The amount of coffee / caffeine was held constant, and we varied whether or not milk was added: so `milk` is a binary factor with two levels, `"yes"` and `"no"` . We also varied the kind of sugar involved. The coffee might contain `"real"` sugar, or it might contain `"fake"` sugar (i.e., artificial sweetener), or it might contain `"none"` at all, so the `sugar` variable is a three level factor. Our outcome variable is a continuous variable that presumably refers to some psychologically sensible measure of the extent to which someone is “babbling”. The details don’t really matter for our purpose. To get a sense of what the data look like, we use the `some()` function in the `car` package. The `some()` function randomly picks a few of the observations in the data frame to print out, which is often very handy:
```
load(file.path(projecthome, "data","coffee.Rdata"))
some( coffee )
```
```
## milk sugar babble
## 1 yes real 4.6
## 2 no fake 4.4
## 3 no fake 3.9
## 6 no real 5.5
## 7 yes none 3.9
## 8 yes none 3.5
## 9 yes none 3.7
## 10 no fake 5.6
## 17 no none 5.3
## 18 yes fake 5.7
```
If we use the `aggregate()` function to quickly produce a table of means, we get a strong impression that there are differences between the groups:
```
aggregate( babble ~ milk + sugar, coffee, mean )
```
```
## milk sugar babble
## 1 yes none 3.700
## 2 no none 5.550
## 3 yes fake 5.800
## 4 no fake 4.650
## 5 yes real 5.100
## 6 no real 5.875
```
This is especially true when we compare these means to the standard deviations for the `babble` variable, which you can calculate using `aggregate()` in much the same way. Across groups, this standard deviation varies from .14 to .71, which is fairly small relative to the differences in group means.247 So far, it’s looking like a straightforward factorial ANOVA, just like we did earlier. The problem arises when we check to see how many observations we have in each group:
```
xtabs( ~ milk + sugar, coffee )
```
```
## sugar
## milk none fake real
## yes 3 2 3
## no 2 4 4
```
This violates one of our original assumptions, namely that the number of people in each group is the same. We haven’t really discussed how to handle this situation.
### 16.10.2 “Standard ANOVA” does not exist for unbalanced designs
Unbalanced designs lead us to the somewhat unsettling discovery that there isn’t really any one thing that we might refer to as a standard ANOVA. In fact, it turns out that there are three fundamentally different ways248 in which you might want to run an ANOVA in an unbalanced design. If you have a balanced design, all three versions produce identical results, with the sums of squares, \(F\)-values etc all conforming to the formulas that I gave at the start of the chapter. However, when your design is unbalanced they don’t give the same answers. Furthermore, they are not all equally appropriate to every situation: some methods will be more appropriate to your situation than others. Given all this, it’s important to understand what the different types of ANOVA are and how they differ from one another.
The first kind of ANOVA is conventionally referred to as Type I sum of squares. I'm sure you can guess what the other two are called. The "sum of squares" part of the name was introduced by the SAS statistical software package, and has become standard nomenclature, but it's a bit misleading in some ways. I think the logic for referring to them as different types of sum of squares is that, when you look at the ANOVA tables that they produce, the key difference in the numbers is the SS values. The degrees of freedom don't change, the MS values are still defined as SS divided by df, etc. However, what the terminology gets wrong is that it hides the reason why the SS values are different from one another. To that end, it's a lot more helpful to think of the three different kinds of ANOVA as three different hypothesis testing strategies. These different strategies lead to different SS values, to be sure, but it's the strategy that is the important thing here, not the SS values themselves. Recall from Sections 16.5 and 16.6 that any particular \(F\)-test is best thought of as a comparison between two linear models. So when you're looking at an ANOVA table, it helps to remember that each of those \(F\)-tests corresponds to a pair of models that are being compared. Of course, this leads naturally to the question of which pair of models is being compared. This is the fundamental difference between ANOVA Types I, II and III: each one corresponds to a different way of choosing the model pairs for the tests.
### 16.10.3 Type I sum of squares
The Type I method is sometimes referred to as the “sequential” sum of squares, because it involves a process of adding terms to the model one at a time. Consider the coffee data, for instance. Suppose we want to run the full \(3 \times 2\) factorial ANOVA, including interaction terms. The full model, as we’ve discussed earlier, is expressed by the R formula
`babble ~ sugar + milk + sugar:milk`, though we often shorten it by using the `sugar * milk` notation. The Type I strategy builds this model up sequentially, starting from the simplest possible model and gradually adding terms. The simplest possible model for the data would be one in which neither milk nor sugar is assumed to have any effect on babbling. The only term that would be included in such a model is the intercept, and in R formula notation we would write it as `babble ~ 1`. This is our initial null hypothesis. The next simplest model for the data would be one in which only one of the two main effects is included. In the coffee data, there are two different possible choices here, because we could choose to add milk first or to add sugar first (pardon the pun). The order actually turns out to matter, as we'll see later, but for now let's just make a choice arbitrarily, and pick sugar. So the second model in our sequence of models is `babble ~ sugar`, and it forms the alternative hypothesis for our first test. We now have our first hypothesis test:
* Null model: `babble ~ 1`
* Alternative model: `babble ~ sugar`
This comparison forms our hypothesis test of the main effect of `sugar`. The next step in our model building exercise is to add the other main effect term, so the next model in our sequence is `babble ~ sugar + milk`. The second hypothesis test is then formed by comparing the following pair of models:
* Null model: `babble ~ sugar`
* Alternative model: `babble ~ sugar + milk`
This comparison forms our hypothesis test of the main effect of `milk` . In one sense, this approach is very elegant: the alternative hypothesis from the first test forms the null hypothesis for the second one. It is in this sense that the Type I method is strictly sequential. Every test builds directly on the results of the last one. However, in another sense it’s very inelegant, because there’s a strong asymmetry between the two tests. The test of the main effect of `sugar` (the first test) completely ignores `milk` , whereas the test of the main effect of `milk` (the second test) does take `sugar` into account. In any case, the fourth model in our sequence is now the full model,
`babble ~ sugar + milk + sugar:milk`, and the corresponding hypothesis test is
* Null model: `babble ~ sugar + milk`
* Alternative model: `babble ~ sugar + milk + sugar:milk`
Type I sum of squares is the default hypothesis testing method used by the `anova()` function, so it’s easy to produce the results from a Type I analysis. We just type in the same commands that we always did. Since we’ve now reached the point that we don’t need to hide the fact that ANOVA and regression are both linear models, I’ll use the `lm()` function to run the analyses:
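The command would look something like this sketch (the model name is mine):
```
mod.I <- lm( babble ~ sugar * milk, coffee )   # full factorial model
anova( mod.I )                                 # sequential (Type I) ANOVA table
```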
```
## Analysis of Variance Table
##
## Response: babble
## Df Sum Sq Mean Sq F value Pr(>F)
## sugar 2 3.5575 1.77876 6.7495 0.010863 *
## milk 1 0.9561 0.95611 3.6279 0.081061 .
## sugar:milk 2 5.9439 2.97193 11.2769 0.001754 **
## Residuals 12 3.1625 0.26354
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Leaving aside for one moment the question of how this result should be interpreted, let’s take note of the fact that our three \(p\)-values are \(.0109\), \(.0811\) and \(.0018\) respectively. Next, let’s see if we can replicate the analysis using tools that we’re a little more familiar with. First, let’s fit all four models:
```
mod.1 <- lm( babble ~ 1, coffee )
mod.2 <- lm( babble ~ sugar, coffee )
mod.3 <- lm( babble ~ sugar + milk, coffee )
mod.4 <- lm( babble ~ sugar + milk + sugar:milk, coffee )
```
To run the first hypothesis test comparing `mod.1` to `mod.2` we can use the command `anova(mod.1, mod.2)` in much the same way that we did in Section 16.5. Similarly, we can use the commands `anova(mod.2, mod.3)` and `anova(mod.3, mod.4)` to run the second and third hypothesis tests. However, rather than run each of those commands separately, we can enter the full sequence of models like this:
```
anova( mod.1, mod.2, mod.3, mod.4 )
```
```
## Analysis of Variance Table
##
## Model 1: babble ~ 1
## Model 2: babble ~ sugar
## Model 3: babble ~ sugar + milk
## Model 4: babble ~ sugar + milk + sugar:milk
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 17 13.6200
## 2 15 10.0625 2 3.5575 6.7495 0.010863 *
## 3 14 9.1064 1 0.9561 3.6279 0.081061 .
## 4 12 3.1625 2 5.9439 11.2769 0.001754 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
This output is rather more verbose than the last one, but it’s telling essentially the same story.249
The big problem with using Type I sum of squares is the fact that it really does depend on the order in which you enter the variables. Yet, in many situations the researcher has no reason to prefer one ordering over another. This is presumably the case for our milk and sugar problem. Should we add milk first, or sugar first? As a data analysis question it feels every bit as arbitrary as it does as a coffee-making question. There may in fact be some people with firm opinions about ordering, but it's hard to imagine a principled answer to the question. Yet, look what happens when we change the ordering:
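A sketch of the reordered analysis (again, the model name is mine):
```
mod.I.rev <- lm( babble ~ milk * sugar, coffee )   # same model, milk entered first
anova( mod.I.rev )
```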
```
## Analysis of Variance Table
##
## Response: babble
## Df Sum Sq Mean Sq F value Pr(>F)
## milk 1 1.4440 1.44400 5.4792 0.037333 *
## sugar 2 3.0696 1.53482 5.8238 0.017075 *
## milk:sugar 2 5.9439 2.97193 11.2769 0.001754 **
## Residuals 12 3.1625 0.26354
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The \(p\)-values for both main effect terms have changed, and fairly dramatically. Among other things, the effect of `milk` has become significant (though one should avoid drawing any strong conclusions about this, as I've mentioned previously). Which of these two ANOVAs should one report? It's not immediately obvious. When you look at the hypothesis tests that are used to define the "first" main effect and the "second" one, it's clear that they're qualitatively different from one another. In our initial example, we saw that the test for the main effect of `sugar` completely ignores `milk`, whereas the test of the main effect of `milk` does take `sugar` into account. As such, the Type I testing strategy really does treat the first main effect as if it had a kind of theoretical primacy over the second one. In my experience there is very rarely if ever any theoretical primacy of this kind that would justify treating any two main effects asymmetrically. The consequence of all this is that Type I tests are very rarely of much interest, and so we should move on to discuss Type II tests and Type III tests. However, for the sake of completeness – on the off chance that you ever find yourself needing to run Type I tests – I'll comment briefly on how R determines the ordering of terms in a Type I test. The key principle in Type I sum of squares is that the hypothesis testing be sequential, with terms added one at a time. However, it does also imply that main effects be added first (e.g., factors `A`, `B`, `C` etc), followed by first order interaction terms (e.g., terms like `A:B` and `B:C`), then second order interactions (e.g., `A:B:C`) and so on. Within each "block" you can specify whatever order you like. So, for instance, if we specified our model using a command like this,
`mod <- lm( outcome ~ A + B + C + B:C + A:B + A:C )` and then used `anova(mod)` to produce sequential hypothesis tests, what we'd see is that the main effect terms would be entered `A` then `B` and then `C`, but then the interactions would be entered in the order `B:C` first, then `A:B` and then finally `A:C`. Reordering the terms within each block will change the order in which they are entered, as we saw earlier. However, changing the order of terms across blocks has no effect. For instance, if we tried to move the interaction term `B:C` to the front, like this,
it would have no effect. R would still enter the terms in the same order as last time. If for some reason you really, really need an interaction term to be entered first, then you have to do it the long way, creating each model manually using a separate `lm()` command and then using a command like
```
anova(mod.1, mod.2, mod.3, mod.4)
```
to force R to enter them in the order that you want.
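To make that concrete, here’s a rough sketch of what the “long way” might look like if you wanted the `B:C` interaction entered first. The model names match the `anova()` call above, but the outcome variable, the factors and the data frame are once again just illustrative:
```
# Build a sequence of nested models by hand, starting from the interaction,
# so that anova() tests the terms in exactly this order
mod.1 <- lm( outcome ~ B:C, my.data )
mod.2 <- lm( outcome ~ B:C + A, my.data )
mod.3 <- lm( outcome ~ B:C + A + B, my.data )
mod.4 <- lm( outcome ~ B:C + A + B + C, my.data )
anova( mod.1, mod.2, mod.3, mod.4 )
```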
### 16.10.4 Type III sum of squares
Having just finished talking about Type I tests, you might think that the natural thing to do next would be to talk about Type II tests. However, I think it’s actually a bit more natural to discuss Type III tests (which are simple) before talking about Type II tests (which are trickier). The basic idea behind Type III tests is extremely simple: regardless of which term you’re trying to evaluate, run the \(F\)-test in which the alternative hypothesis corresponds to the full ANOVA model as specified by the user, and the null model just deletes that one term that you’re testing. For instance, in the coffee example, in which our full model was
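```
babble ~ sugar + milk + sugar:milk
```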
, the test for a main effect of `sugar` would correspond to a comparison between the following two models:
Null model | Alternative model |
| --- | --- |
`babble ~ milk + sugar:milk` | `babble ~ sugar + milk + sugar:milk` |
Similarly the main effect of `milk` is evaluated by testing the full model against a null model that removes the `milk` term, like so:
Null model | Alternative model |
| --- | --- |
`babble ~ sugar + sugar:milk` | `babble ~ sugar + milk + sugar:milk` |
Finally, the interaction term `sugar:milk` is evaluated in exactly the same way. Once again, we test the full model against a null model that removes the `sugar:milk` interaction term, like so:
Null model | Alternative model |
| --- | --- |
`babble ~ sugar + milk` | `babble ~ sugar + milk + sugar:milk` |
The basic idea generalises to higher order ANOVAs. For instance, suppose that we were trying to run an ANOVA with three factors, `A` , `B` and `C` , and we wanted to consider all possible main effects and all possible interactions, including the three way interaction `A:B:C` . The table below shows you what the Type III tests look like for this situation:
Term being tested is | Null model is `outcome ~ ...` | Alternative model is `outcome ~ ...` |
| --- | --- | --- |
`A` | `B + C + A:B + A:C + B:C + A:B:C` | `A + B + C + A:B + A:C + B:C + A:B:C` |
`B` | `A + C + A:B + A:C + B:C + A:B:C` | `A + B + C + A:B + A:C + B:C + A:B:C` |
`C` | `A + B + A:B + A:C + B:C + A:B:C` | `A + B + C + A:B + A:C + B:C + A:B:C` |
`A:B` | `A + B + C + A:C + B:C + A:B:C` | `A + B + C + A:B + A:C + B:C + A:B:C` |
`A:C` | `A + B + C + A:B + B:C + A:B:C` | `A + B + C + A:B + A:C + B:C + A:B:C` |
`B:C` | `A + B + C + A:B + A:C + A:B:C` | `A + B + C + A:B + A:C + B:C + A:B:C` |
`A:B:C` | `A + B + C + A:B + A:C + B:C` | `A + B + C + A:B + A:C + B:C + A:B:C` |
As ugly as that table looks, it’s pretty simple. In all cases, the alternative hypothesis corresponds to the full model, which contains three main effect terms (e.g. `A` ), three first order interactions (e.g. `A:B` ) and one second order interaction (i.e., `A:B:C` ). The null model always contains 6 of these 7 terms: and the missing one is the one whose significance we’re trying to test. At first pass, Type III tests seem like a nice idea. Firstly, we’ve removed the asymmetry that caused us to have problems when running Type I tests. And because we’re now treating all terms the same way, the results of the hypothesis tests do not depend on the order in which we specify them. This is definitely a good thing. However, there is a big problem when interpreting the results of the tests, especially for main effect terms. Consider the coffee data. Suppose it turns out that the main effect of `milk` is not significant according to the Type III tests. What this is telling us is that
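```
babble ~ sugar + sugar:milk
```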
is a better model for the data than the full model. But what does that even mean? If the interaction term `sugar:milk` was also non-significant, we’d be tempted to conclude that the data are telling us that the only thing that matters is `sugar` . But suppose we have a significant interaction term, but a non-significant main effect of `milk` . In this case, are we to assume that there really is an “effect of sugar”, an “interaction between milk and sugar”, but no “effect of milk”? That seems crazy. The right answer simply must be that it’s meaningless250 to talk about the main effect if the interaction is significant. In general, this seems to be what most statisticians advise us to do, and I think that’s the right advice. But if it really is meaningless to talk about non-significant main effects in the presence of a significant interaction, then it’s not at all obvious why Type III tests should allow the null hypothesis to rely on a model that includes the interaction but omits one of the main effects that make it up. When characterised in this fashion, the null hypotheses really don’t make much sense at all. Later on, we’ll see that Type III tests can be redeemed in some contexts, but I’d better show you how to actually compute a Type III ANOVA first. The `anova()` function in R does not directly support Type II tests or Type III tests. Technically, you can do it by creating the various models that form the null and alternative hypotheses for each test, and then using `anova()` to compare the models to one another. I outlined the gist of how that would be done when talking about Type I tests, but speaking from first hand experience251 I can tell you that it’s very tedious. In practice, the `anova()` function is only used to produce Type I tests or to compare specific models of particular interest (see Section 16.5). If you want Type II or Type III tests you need to use the `Anova()` function in the `car` package. It’s pretty easy to use, since there’s a `type` argument that you specify. So, to return to our coffee example, our Type III tests are run as follows:
```
mod <- lm( babble ~ sugar * milk, coffee )
Anova( mod, type=3 )
```
```
## Anova Table (Type III tests)
##
## Response: babble
## Sum Sq Df F value Pr(>F)
## (Intercept) 41.070 1 155.839 3.11e-08 ***
## sugar 5.880 2 11.156 0.001830 **
## milk 4.107 1 15.584 0.001936 **
## sugar:milk 5.944 2 11.277 0.001754 **
## Residuals 3.162 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
As you can see, I got lazy this time and used `sugar * milk` as a shorthand way of referring to
```
sugar + milk + sugar:milk
```
. The important point here is that this is just a regular ANOVA table, and we can see that our Type III tests are significant for all terms, even the intercept.
Except, as usual, it’s not that simple. One of the perverse features of the Type III testing strategy is that the results turn out to depend on the contrasts that you use to encode your factors (see Section 16.7 if you’ve forgotten what the different types of contrasts are). The results that I presented in the ANOVA table above are based on the R default, which is treatment contrasts; and as we’ll see later, this is usually a very poor choice if you want to run Type III tests. So let’s see what happens if we switch to Helmert contrasts:
```
my.contrasts <- list( milk = "contr.Helmert", sugar = "contr.Helmert" )
mod.H <- lm( babble ~ sugar * milk, coffee, contrasts = my.contrasts )
Anova( mod.H, type=3 )
```
Oh, that’s not good at all. In the case of `milk` in particular, the \(p\)-value has changed from .002 to .07. This is a pretty substantial difference, and hopefully it gives you a sense of how important it is that you take care when using Type III tests. Okay, so if the \(p\)-values that come out of Type III analyses are so sensitive to the choice of contrasts, does that mean that Type III tests are essentially arbitrary and not to be trusted? To some extent that’s true, and when we turn to a discussion of Type II tests we’ll see that Type II analyses avoid this arbitrariness entirely, but I think that’s too strong a conclusion. Firstly, it’s important to recognise that some choices of contrasts will always produce the same answers. Of particular importance is the fact that if the columns of our contrast matrix are all constrained to sum to zero, then the Type III analysis will always give the same answers. This means that you’ll get the same answers if you use `contr.Helmert` or `contr.sum` or `contr.poly` , but different answers for `contr.treatment` or `contr.SAS` .
```
random.contrasts <- matrix( rnorm(6), 3, 2 ) # create a random matrix
random.contrasts[, 1] <- random.contrasts[, 1] - mean( random.contrasts[, 1] ) # contrast 1 sums to 0
random.contrasts[, 2] <- random.contrasts[, 2] - mean( random.contrasts[, 2] ) # contrast 2 sums to 0
random.contrasts # print it to check that we really have an arbitrary contrast matrix...
```
```
## [,1] [,2]
## [1,] 0.2421359 -0.8168047
## [2,] 0.5476197 0.9503084
## [3,] -0.7897556 -0.1335037
```
```
contrasts( coffee$sugar ) <- random.contrasts # random contrasts for sugar
contrasts( coffee$milk ) <- contr.Helmert(2) # Helmert contrasts for the milk factor
mod.R <- lm( babble ~ sugar * milk, coffee ) # R will use the contrasts that we assigned
Anova( mod.R, type = 3 )
```
Yep, same answers.
### 16.10.5 Type II sum of squares
Okay, so we’ve seen Type I and III tests now, and both are pretty straightforward: Type I tests are performed by gradually adding terms one at a time, whereas Type III tests are performed by taking the full model and looking to see what happens when you remove each term. However, both have some serious flaws: Type I tests are dependent on the order in which you enter the terms, and Type III tests are dependent on how you code up your contrasts. Because of these flaws, neither one is easy to interpret. Type II tests are a little harder to describe, but they avoid both of these problems, and as a result they are a little easier to interpret.
Type II tests are broadly similar to Type III tests: start with a “full” model, and test a particular term by deleting it from that model. However, Type II tests are based on the marginality principle which states that you should not omit a lower order term from your model if there are any higher order ones that depend on it. So, for instance, if your model contains the interaction `A:B` (a 2nd order term), then it really ought to contain the main effects `A` and `B` (1st order terms). Similarly, if it contains a three way interaction term `A:B:C` , then the model must also include the main effects `A` , `B` and `C` as well as the simpler interactions `A:B` , `A:C` and `B:C` . Type III tests routinely violate the marginality principle. For instance, consider the test of the main effect of `A` in the context of a three-way ANOVA that includes all possible interaction terms. According to Type III tests, our null and alternative models are:
Null model | Alternative model |
| --- | --- |
`outcome ~ B + C + A:B + A:C + B:C + A:B:C` | `outcome ~ A + B + C + A:B + A:C + B:C + A:B:C` |
Notice that the null hypothesis omits `A` , but includes `A:B` , `A:C` and `A:B:C` as part of the model. This, according to the Type II tests, is not a good choice of null hypothesis. What we should do instead, if we want to test the null hypothesis that `A` is not relevant to our `outcome` , is to specify the null hypothesis that is the most complicated model that does not rely on `A` in any form, even as an interaction. The alternative hypothesis corresponds to this null model plus a main effect term of `A` . This is a lot closer to what most people would intuitively think of as a “main effect of `A` ”, and it yields the following as our Type II test of the main effect of `A` .252
Null model | Alternative model |
| --- | --- |
`outcome ~ B + C + B:C` | `outcome ~ A + B + C + B:C` |
Anyway, just to give you a sense of how the Type II tests play out, here’s the full table of tests that would be applied in a three-way factorial ANOVA:
Term being tested is | Null model is `outcome ~ ...` | Alternative model is `outcome ~ ...` |
| --- | --- | --- |
`A` | `B + C + B:C` | `A + B + C + B:C` |
`B` | `A + C + A:C` | `A + B + C + A:C` |
`C` | `A + B + A:B` | `A + B + C + A:B` |
`A:B` | `A + B + C + A:C + B:C` | `A + B + C + A:B + A:C + B:C` |
`A:C` | `A + B + C + A:B + B:C` | `A + B + C + A:B + A:C + B:C` |
`B:C` | `A + B + C + A:B + A:C` | `A + B + C + A:B + A:C + B:C` |
`A:B:C` | `A + B + C + A:B + A:C + B:C` | `A + B + C + A:B + A:C + B:C + A:B:C` |
In the context of the two way ANOVA that we’ve been using in the coffee data, the hypothesis tests are even simpler. The main effect of `sugar` corresponds to an \(F\)-test comparing these two models:
Null model | Alternative model |
| --- | --- |
`babble ~ milk` | `babble ~ sugar + milk` |
The test for the main effect of `milk` is
Null model | Alternative model |
| --- | --- |
`babble ~ sugar` | `babble ~ sugar + milk` |
Finally, the test for the interaction `sugar:milk` is:
Null model | Alternative model |
| --- | --- |
`babble ~ sugar + milk` | `babble ~ sugar + milk + sugar:milk` |
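If you want to see where these numbers come from, the comparisons can also be sketched by hand, since none of the null models here retains an interaction involving the term being tested. For instance, here’s a rough sketch for the main effect of `milk` (note that `anova()` takes its error term from the largest model supplied, which is why the full model is listed last):
```
mod.null <- lm( babble ~ sugar, coffee )
mod.alt <- lm( babble ~ sugar + milk, coffee )
mod.full <- lm( babble ~ sugar + milk + sugar:milk, coffee )
anova( mod.null, mod.alt, mod.full ) # the Model 2 line should match the milk test above
```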
Running the tests is again straightforward. We use the `Anova()` function, specifying `type=2` :
```
mod <- lm( babble ~ sugar*milk, coffee )
Anova( mod, type = 2 )
```
```
## Anova Table (Type II tests)
##
## Response: babble
## Sum Sq Df F value Pr(>F)
## sugar 3.0696 2 5.8238 0.017075 *
## milk 0.9561 1 3.6279 0.081061 .
## sugar:milk 5.9439 2 11.2769 0.001754 **
## Residuals 3.1625 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Type II tests have some clear advantages over Type I and Type III tests. They don’t depend on the order in which you specify factors (unlike Type I), and they don’t depend on the contrasts that you use to specify your factors (unlike Type III). And although opinions may differ on this last point, and it will definitely depend on what you’re trying to do with your data, I do think that the hypothesis tests that they specify are more likely to correspond to something that you actually care about. As a consequence, I find that it’s usually easier to interpret the results of a Type II test than the results of a Type I or Type III test. For this reason, my tentative advice is that, if you can’t think of any obvious model comparisons that directly map onto your research questions but you still want to run an ANOVA in an unbalanced design, Type II tests are probably a better choice than Type I or Type III.253
### 16.10.6 Effect sizes (and non-additive sums of squares)
The `etaSquared()` function in the `lsr` package computes \(\eta^2\) and partial \(\eta^2\) values for unbalanced designs and for different Types of tests. It’s pretty straightforward. All you have to do is indicate which `type` of tests you’re doing,
```
etaSquared( mod, type=2 )
```
```
## eta.sq eta.sq.part
## sugar 0.22537682 0.4925493
## milk 0.07019886 0.2321436
## sugar:milk 0.43640732 0.6527155
```
and out pops the \(\eta^2\) and partial \(\eta^2\) values, as requested. However, when you’ve got an unbalanced design, there’s a bit of extra complexity involved. To see why, let’s expand the output from the `etaSquared()` function so that it displays the full ANOVA table:
```
es <- etaSquared( mod, type=2, anova=TRUE )
es
```
```
## eta.sq eta.sq.part SS df MS F
## sugar 0.22537682 0.4925493 3.0696323 2 1.5348161 5.823808
## milk 0.07019886 0.2321436 0.9561085 1 0.9561085 3.627921
## sugar:milk 0.43640732 0.6527155 5.9438677 2 2.9719339 11.276903
## Residuals 0.23219530 NA 3.1625000 12 0.2635417 NA
## p
## sugar 0.017075099
## milk 0.081060698
## sugar:milk 0.001754333
## Residuals NA
```
Okay, if you remember back to our very early discussions of ANOVA, one of the key ideas behind the sums of squares calculations is that if we add up all the SS terms associated with the effects in the model, and add that to the residual SS, they’re supposed to add up to the total sum of squares. And, on top of that, the whole idea behind \(\eta^2\) – because you’re dividing one of the SS terms by the total SS value – is that an \(\eta^2\) value can be interpreted as the proportion of variance accounted for by a particular term.
Now take a look at the output above. Because I’ve included the \(\eta^2\) value associated with the residuals (i.e., proportion of variance in the outcome attributed to the residuals, rather than to one of the effects), you’d expect all the \(\eta^2\) values to sum to 1. Because, the whole idea here was that the variance in the outcome variable can be divided up into the variability attributable to the model, and the variability in the residuals. Right? Right? And yet when we add up the \(\eta^2\) values for our model…
`sum( es[,"eta.sq"] )` `## [1] 0.9641783`
… we discover that for Type II and Type III tests they generally don’t sum to 1. Some of the variability has gone “missing”. It’s not being attributed to the model, and it’s not being attributed to the residuals either. What’s going on here?
Before giving you the answer, I want to push this idea a little further. From a mathematical perspective, it’s easy enough to see that the missing variance is a consequence of the fact that in Types II and III, the individual SS values are not obliged to add up to the total sum of squares, and will only do so if you have balanced data. I’ll explain why this happens and what it means in a second, but first let’s verify that this is the case using the ANOVA table. First, we can calculate the total sum of squares directly from the raw data:
```
ss.tot <- sum( (coffee$babble - mean(coffee$babble))^2 )
ss.tot
```
`## [1] 13.62`
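As a quick aside, this total also lets us check the interpretation of \(\eta^2\) given above: each `eta.sq` value in the `etaSquared()` output is just the corresponding SS divided by this total. For example, reusing the `es` object from before:
```
es[ "sugar", "SS" ] / ss.tot # roughly 0.225, matching eta.sq for sugar
```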
Next, we can read off all the SS values from one of our Type I ANOVA tables, and add them up. As you can see, this gives us the same answer, just like it’s supposed to:
```
type.I.sum <- 3.5575 + 0.9561 + 5.9439 + 3.1625
type.I.sum
```
`## [1] 13.62`
However, when we do the same thing for the Type II ANOVA table, it turns out that the SS values in the table add up to slightly less than the total SS value:
```
type.II.sum <- 0.9561 + 3.0696 + 5.9439 + 3.1625
type.II.sum
```
`## [1] 13.1321`
So, once again, we can see that there’s a little bit of variance that has “disappeared” somewhere.
Okay, time to explain what’s happened. The reason why this happens is that, when you have unbalanced designs, your factors become correlated with one another, and it becomes difficult to tell the difference between the effect of Factor A and the effect of Factor B. In the extreme case, suppose that we’d run a \(2 \times 2\) design in which the number of participants in each group had been as follows:
| | sugar | no sugar |
| --- | --- | --- |
| milk | 100 | 0 |
| no milk | 0 | 100 |
Here we have a spectacularly unbalanced design: 100 people have milk and sugar, 100 people have no milk and no sugar, and that’s all. There are 0 people with milk and no sugar, and 0 people with sugar but no milk. Now suppose that, when we collected the data, it turned out there is a large (and statistically significant) difference between the “milk and sugar” group and the “no-milk and no-sugar” group. Is this a main effect of sugar? A main effect of milk? Or an interaction? It’s impossible to tell, because the presence of sugar has a perfect association with the presence of milk. Now suppose the design had been a little more balanced:
| | sugar | no sugar |
| --- | --- | --- |
| milk | 100 | 5 |
| no milk | 5 | 100 |
This time around, it’s technically possible to distinguish between the effect of milk and the effect of sugar, because we have a few people that have one but not the other. However, it will still be pretty difficult to do so, because the association between sugar and milk is still extremely strong, and there are so few observations in two of the groups. Again, we’re very likely to be in the situation where we know that the predictor variables (milk and sugar) are related to the outcome (babbling), but we don’t know if the nature of that relationship is a main effect of one predictor, or the other predictor or the interaction.
This uncertainty is the reason for the missing variance. The “missing” variance corresponds to variation in the outcome variable that is clearly attributable to the predictors, but we don’t know which of the effects in the model is responsible. When you calculate Type I sum of squares, no variance ever goes missing: the sequential nature of Type I sum of squares means that the ANOVA automatically attributes this variance to whichever effects are entered first. However, the Type II and Type III tests are more conservative. Variance that cannot be clearly attributed to a specific effect doesn’t get attributed to any of them, and it goes missing.
## 16.11 Summary
* Factorial ANOVA with balanced designs, without interactions (Section 16.1) and with interactions included (Section 16.2)
* Effect size, estimated means, and confidence intervals in a factorial ANOVA (Section 16.3)
* Understanding the linear model underlying ANOVA (Sections 16.5, 16.6 and 16.7)
* Post hoc testing using Tukey’s HSD (Section 16.8), and a brief commentary on planned comparisons (Section 16.9)
* Factorial ANOVA with unbalanced designs (Section 16.10)
The R command we need is
```
xtabs(~ drug+gender, clin.trial)
```
The nice thing about the subscript notation is that it generalises nicely: if our experiment had involved a third factor, then we could just add a third subscript. In principle, the notation extends to as many factors as you might care to include, but in this book we’ll rarely consider analyses involving more than two factors, and never more than three.↩
*
Technically, marginalising isn’t quite identical to a regular mean: it’s a weighted average, where you take into account the frequency of the different events that you’re averaging over. However, in a balanced design, all of our cell frequencies are equal by definition, so the two are equivalent. We’ll discuss unbalanced designs later, and when we do so you’ll see that all of our calculations become a real headache. But let’s ignore this for now.↩
*
English translation: “least tedious”.↩
*
This chapter seems to be setting a new record for the number of different things that the letter R can stand for: so far we have R referring to the software package, the number of rows in our table of means, the residuals in the model, and now the correlation coefficient in a regression. Sorry: we clearly don’t have enough letters in the alphabet. However, I’ve tried pretty hard to be clear on which thing R is referring to in each case.↩
*
Implausibly large, I would think: the artificiality of this data set is really starting to show!↩
*
In fact, there’s a function
`Effect()` within the `effects` package that has slightly different arguments, but computes the same things, and won’t give you this warning message.↩ *
Due to the way that the
`leveneTest()` function is implemented, however, if you use a formula like
```
mood.gain ~ drug + therapy + drug:therapy
```
, or input an ANOVA object based on a formula like this, you actually get the error message. That shouldn’t happen, because this actually is a fully crossed model. However, there’s a quirky shortcut in the way that the `leveneTest()` function checks whether your model is fully crossed that means that it doesn’t recognise this as a fully crossed model. Essentially what the function is doing is checking that you used `*` (which ensures that the model is fully crossed), and not `+` or `:` in your model formula. So if you’ve manually typed out all of the relevant terms for a fully crossed model, the `leveneTest()` function doesn’t detect it. I think this is a bug.↩ *
There could be all sorts of reasons for doing this, I would imagine.↩
*
This is cheating in some respects: because ANOVA and regression are provably the same thing, R is lazy: if you read the help documentation closely, you’ll notice that the
`aov()` function is actually just the `lm()` function in disguise! But we shan’t let such things get in the way of our story, shall we?↩ *
In the example given above, I’ve typed
```
summary( regression.model )
```
to get the hypothesis tests. However, the `summary()` function does produce a lot of output, which is why I’ve used the `BLAH BLAH BLAH` text to hide the unnecessary parts of the output. But in fact, you can use the `coef()` function to do the same job. If you type the command
```
coef( summary( regression.model ))
```
you’ll get exactly the same output that I’ve shown above (minus the `BLAH BLAH BLAH` ). Compare and contrast this to the output of
```
coef( regression.model )
```
Advanced users may want to look into the
`model.matrix()` function, which produces similar output. Alternatively, you can use a command like
```
contr.treatment(3)[clin.trial$drug,]
```
. I’ll talk about the `contr.treatment()` function later.↩ *
Future versions of this book will try to be a bit more consistent with the naming scheme for variables. One of the many problems with having to write a lengthy text very quickly to meet a teaching deadline is that you lose some internal consistency.↩
*
The
`lsr` package contains a more general function called `permuteLevels()` that can shuffle them in any way you like.↩ *
Technically, this list actually stores the functions themselves. R allows lists to contain functions, which is really neat for advanced purposes, but not something that matters for this book.↩
*
If, for instance, you actually would find yourself interested to know if Group A is significantly different from the mean of Group B and Group C, then you need to use a different tool (e.g., Scheffe’s method, which is more conservative, and beyond the scope this book). However, in most cases you probably are interested in pairwise group differences, so Tukey’s HSD is a pretty useful thing to know about.↩
*
This discrepancy in standard deviations might (and should) make you wonder if we have a violation of the homogeneity of variance assumption. I’ll leave it as an exercise for the reader to check this using the
`leveneTest()` function.↩ *
Actually, this is a bit of a lie. ANOVAs can vary in other ways besides the ones I’ve discussed in this book. For instance, I’ve completely ignored the difference between fixed-effect models, in which the levels of a factor are “fixed” by the experimenter or the world, and random-effect models, in which the levels are random samples from a larger population of possible levels (this book only covers fixed-effect models). Don’t make the mistake of thinking that this book – or any other one – will tell you “everything you need to know” about statistics, any more than a single book could possibly tell you everything you need to know about psychology, physics or philosophy. Life is too complicated for that to ever be true. This isn’t a cause for despair, though. Most researchers get by with a basic working knowledge of ANOVA that doesn’t go any further than this book does. I just want you to keep in mind that this book is only the beginning of a very long story, not the whole story.↩
*
The one thing that might seem a little opaque to some people is why the residual degrees of freedom in this output look different from one another (i.e., ranging from 12 to 17) whereas in the original one the residual degrees of freedom is fixed at 12. It’s actually the case that R uses a residual df of 12 in all cases (that’s why the \(p\) values are the same in the two outputs), and it’s easy enough to verify that
```
pf(6.7495, 2, 12, lower.tail=FALSE)
```
gives the correct answer of \(p=.010863\), for instance, whereas
```
pf(6.7495, 2, 15, lower.tail=FALSE)
```
would have given a \(p\)-value of about \(.00812\). It’s the residual degrees of freedom in the full model (i.e., the last one) that matters here.↩ *
Or, at the very least, rarely of interest.↩
*
Yes, I’m actually a big enough nerd that I’ve written my own functions implementing Type II tests and Type III tests. I only did it to convince myself that I knew how the different Types of test worked, but it did turn out to be a handy exercise: the
`etaSquared()` function in the `lsr` package relies on it. There’s actually even an argument in the `etaSquared()` function called `anova` . By default, `anova=FALSE` and the function just prints out the effect sizes. However, if you set `anova=TRUE` it will spit out the full ANOVA table as well. This works for Types I, II and III. Just set the `type` argument to select which type of test you want.↩ *
Note, of course, that this does depend on the model that the user specified. If original ANOVA model doesn’t contain an interaction term for
`B:C` , then obviously it won’t appear in either the null or the alternative. But that’s true for Types I, II and III. They never include any terms that you didn’t include, but they make different choices about how to construct tests for the ones that you did include.↩ *
I find it amusing to note that the default in R is Type I and the default in SPSS is Type III (with Helmert contrasts). Neither of these appeals to me all that much. Relatedly, I find it depressing that almost nobody in the psychological literature ever bothers to report which Type of tests they ran, much less the order of variables (for Type I) or the contrasts used (for Type III). Often they don’t report what software they used either. The only way I can ever make any sense of what people typically report is to try to guess from auxiliary cues which software they were using, and to assume that they never changed the default settings. Please don’t do this… now that you know about these issues, make sure you indicate what software you used, and if you’re reporting ANOVA results for unbalanced data, then specify what Type of tests you ran, specify order information if you’ve done Type I tests and specify contrasts if you’ve done Type III tests. Or, even better, do hypothesis tests that correspond to things you really care about, and then report those!↩
# Chapter 17 Bayesian statistics
In our reasonings concerning matter of fact, there are all imaginable degrees of assurance, from the highest certainty to the lowest species of moral evidence. A wise man, therefore, proportions his belief to the evidence. – David Hume.
The ideas I’ve presented to you in this book describe inferential statistics from the frequentist perspective. I’m not alone in doing this. In fact, almost every textbook given to undergraduate psychology students presents the opinions of the frequentist statistician as the theory of inferential statistics, the one true way to do things. I have taught this way for practical reasons. The frequentist view of statistics dominated the academic field of statistics for most of the 20th century, and this dominance is even more extreme among applied scientists. It was and is current practice among psychologists to use frequentist methods. Because frequentist methods are ubiquitous in scientific papers, every student of statistics needs to understand those methods, otherwise they will be unable to make sense of what those papers are saying! Unfortunately – in my opinion at least – the current practice in psychology is often misguided, and the reliance on frequentist methods is partly to blame. In this chapter I explain why I think this, and provide an introduction to Bayesian statistics, an approach that I think is generally superior to the orthodox approach.
This chapter comes in two parts. In Sections 17.1 through 17.3 I talk about what Bayesian statistics are all about, covering the basic mathematical rules for how it works as well as an explanation for why I think the Bayesian approach is so useful. Afterwards, I provide a brief overview of how you can do Bayesian versions of chi-square tests (Section 17.6), \(t\)-tests (Section 17.7), regression (Section 17.8) and ANOVA (Section 17.9).
## 17.1 Probabilistic reasoning by rational agents
From a Bayesian perspective, statistical inference is all about belief revision. I start out with a set of candidate hypotheses \(h\) about the world. I don’t know which of these hypotheses is true, but I do have some beliefs about which hypotheses are plausible and which are not. When I observe the data \(d\), I have to revise those beliefs. If the data are consistent with a hypothesis, my belief in that hypothesis is strengthened. If the data are inconsistent with the hypothesis, my belief in that hypothesis is weakened. That’s it! At the end of this section I’ll give a precise description of how Bayesian reasoning works, but first I want to work through a simple example in order to introduce the key ideas. Consider the following reasoning problem:
I’m carrying an umbrella. Do you think it will rain?
In this problem, I have presented you with a single piece of data (\(d =\) I’m carrying the umbrella), and I’m asking you to tell me your beliefs about whether it’s raining. You have two possible hypotheses, \(h\): either it rains today or it does not. How should you solve this problem?
### 17.1.1 Priors: what you believed before
The first thing you need to do is ignore what I told you about the umbrella, and write down your pre-existing beliefs about rain. This is important: if you want to be honest about how your beliefs have been revised in the light of new evidence, then you must say something about what you believed before those data appeared! So, what might you believe about whether it will rain today? You probably know that I live in Australia, and that much of Australia is hot and dry. And in fact you’re right: the city of Adelaide where I live has a Mediterranean climate, very similar to southern California, southern Europe or northern Africa. I’m writing this in January, and so you can assume it’s the middle of summer. In fact, you might have decided to take a quick look on Wikipedia255 and discovered that Adelaide gets an average of 4.4 days of rain across the 31 days of January. Without knowing anything else, you might conclude that the probability of January rain in Adelaide is about 15%, and the probability of a dry day is 85%. If this is really what you believe about Adelaide rainfall (and now that I’ve told it to you, I’m betting that this really is what you believe) then what I have written here is your prior distribution, written \(P(h)\):
Hypothesis | Degree of Belief |
| --- | --- |
Rainy day | 0.15 |
Dry day | 0.85 |
### 17.1.2 Likelihoods: theories about the data
To solve the reasoning problem, you need a theory about my behaviour. When does Dan carry an umbrella? You might guess that I’m not a complete idiot,256 and I try to carry umbrellas only on rainy days. On the other hand, you also know that I have young kids, and you wouldn’t be all that surprised to know that I’m pretty forgetful about this sort of thing. Let’s suppose that on rainy days I remember my umbrella about 30% of the time (I really am awful at this). But let’s say that on dry days I’m only about 5% likely to be carrying an umbrella. So you might write out a little table like this:
Hypothesis | Umbrella | No umbrella |
| --- | --- | --- |
Rainy day | 0.30 | 0.70 |
Dry day | 0.05 | 0.95 |
It’s important to remember that each cell in this table describes your beliefs about what data \(d\) will be observed, given the truth of a particular hypothesis \(h\). This “conditional probability” is written \(P(d|h)\), which you can read as “the probability of \(d\) given \(h\)”. In Bayesian statistics, this is referred to as the likelihood of data \(d\) given hypothesis \(h\).257
### 17.1.3 The joint probability of data and hypothesis
At this point, all the elements are in place. Having written down the priors and the likelihood, you have all the information you need to do Bayesian reasoning. The question now becomes, how do we use this information? As it turns out, there’s a very simple equation that we can use here, but it’s important that you understand why we use it, so I’m going to try to build it up from more basic ideas.
Let’s start out with one of the rules of probability theory. I listed it way back in Table 9.1, but I didn’t make a big deal out of it at the time and you probably ignored it. The rule in question is the one that talks about the probability that two things are true. In our example, you might want to calculate the probability that today is rainy (i.e., hypothesis \(h\) is true) and I’m carrying an umbrella (i.e., data \(d\) is observed). The joint probability of the hypothesis and the data is written \(P(d,h)\), and you can calculate it by multiplying the prior \(P(h)\) by the likelihood \(P(d|h)\). Mathematically, we say that: \[ P(d,h) = P(d|h) P(h) \]
So, what is the probability that today is a rainy day and I remember to carry an umbrella? As we discussed earlier, the prior tells us that the probability of a rainy day is 15%, and the likelihood tells us that the probability of me remembering my umbrella on a rainy day is 30%. So the probability that both of these things are true is calculated by multiplying the two:
\[ \begin{array}{rcl} P(\mbox{rainy}, \mbox{umbrella}) & = & P(\mbox{umbrella} | \mbox{rainy}) \times P(\mbox{rainy}) \\ & = & 0.30 \times 0.15 \\ & = & 0.045 \end{array} \]
In other words, before being told anything about what actually happened, you think that there is a 4.5% probability that today will be a rainy day and that I will remember an umbrella. However, there are of course four possible things that could happen, right? So let’s repeat the exercise for all four. If we do that, we end up with the following table:
| | Umbrella | No-umbrella |
| --- | --- | --- |
| Rainy | 0.045 | 0.105 |
| Dry | 0.0425 | 0.8075 |
This table captures all the information about which of the four possibilities are likely. To really get the full picture, though, it helps to add the row totals and column totals. That gives us this table:
| | Umbrella | No-umbrella | Total |
| --- | --- | --- | --- |
| Rainy | 0.0450 | 0.1050 | 0.15 |
| Dry | 0.0425 | 0.8075 | 0.85 |
| Total | 0.0875 | 0.9125 | 1 |
This is a very useful table, so it’s worth taking a moment to think about what all these numbers are telling us. First, notice that the row sums aren’t telling us anything new at all. For example, the first row tells us that if we ignore all this umbrella business, the chance that today will be a rainy day is 15%. That’s not surprising, of course: that’s our prior. The important thing isn’t the number itself: rather, the important thing is that it gives us some confidence that our calculations are sensible! Now take a look at the column sums, and notice that they tell us something that we haven’t explicitly stated yet. In the same way that the row sums tell us the probability of rain, the column sums tell us the probability of me carrying an umbrella. Specifically, the first column tells us that on average (i.e., ignoring whether it’s a rainy day or not), the probability of me carrying an umbrella is 8.75%. Finally, notice that when we sum across all four logically-possible events, everything adds up to 1. In other words, what we have written down is a proper probability distribution defined over all possible combinations of data and hypothesis.
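If you want to check these numbers for yourself, here’s a small R sketch (not part of the original example, just a verification) that builds the joint probability table from the prior and the likelihood and then confirms the margins:
```
prior <- c( rainy = 0.15, dry = 0.85 ) # the prior distribution over hypotheses
likelihood <- rbind( rainy = c( umbrella = 0.30, no.umbrella = 0.70 ),
                     dry = c( umbrella = 0.05, no.umbrella = 0.95 ) )
joint <- likelihood * prior # each row of the likelihood times the matching prior
joint # reproduces the 2 x 2 table above
rowSums( joint ) # recovers the prior: 0.15 and 0.85
colSums( joint ) # P(umbrella) = 0.0875, P(no umbrella) = 0.9125
```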
Now, because this table is so useful, I want to make sure you understand what all the elements correspond to, and how they are written:
| | Umbrella | No-umbrella | |
| --- | --- | --- | --- |
| Rainy | \(P\)(Umbrella, Rainy) | \(P\)(No-umbrella, Rainy) | \(P\)(Rainy) |
| Dry | \(P\)(Umbrella, Dry) | \(P\)(No-umbrella, Dry) | \(P\)(Dry) |
| | \(P\)(Umbrella) | \(P\)(No-umbrella) | |
Finally, let’s use “proper” statistical notation. In the rainy day problem, the data corresponds to the observation that I do or do not have an umbrella. So we’ll let \(d_1\) refer to the possibility that you observe me carrying an umbrella, and \(d_2\) refers to you observing me not carrying one. Similarly, \(h_1\) is your hypothesis that today is rainy, and \(h_2\) is the hypothesis that it is not. Using this notation, the table looks like this:
| | \(d_1\) | \(d_2\) | |
| --- | --- | --- | --- |
| \(h_1\) | \(P(h_1, d_1)\) | \(P(h_1, d_2)\) | \(P(h_1)\) |
| \(h_2\) | \(P(h_2, d_1)\) | \(P(h_2, d_2)\) | \(P(h_2)\) |
| | \(P(d_1)\) | \(P(d_2)\) | |
### 17.1.4 Updating beliefs using Bayes’ rule
The table we laid out in the last section is a very powerful tool for solving the rainy day problem, because it considers all four logical possibilities and states exactly how confident you are in each of them before being given any data. It’s now time to consider what happens to our beliefs when we are actually given the data. In the rainy day problem, you are told that I really am carrying an umbrella. This is something of a surprising event: according to our table, the probability of me carrying an umbrella is only 8.75%. But that makes sense, right? A guy carrying an umbrella on a summer day in a hot dry city is pretty unusual, and so you really weren’t expecting that. Nevertheless, the problem tells you that it is true. No matter how unlikely you thought it was, you must now adjust your beliefs to accommodate the fact that you now know that I have an umbrella.258 To reflect this new knowledge, our revised table must have the following numbers:
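| | Umbrella | No-umbrella |
| --- | --- | --- |
| Rainy | ? | 0 |
| Dry | ? | 0 |
| Total | 1 | 0 |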
In other words, the facts have eliminated any possibility of “no umbrella”, so we have to put zeros into any cell in the table that implies that I’m not carrying an umbrella. Also, you know for a fact that I am carrying an umbrella, so the column sum on the left must be 1 to correctly describe the fact that \(P(\mbox{umbrella})=1\).
What two numbers should we put in the empty cells? Again, let’s not worry about the maths, and instead think about our intuitions. When we wrote out our table the first time, it turned out that those two cells had almost identical numbers, right? We worked out that the joint probability of “rain and umbrella” was 4.5%, and the joint probability of “dry and umbrella” was 4.25%. In other words, before I told you that I am in fact carrying an umbrella, you’d have said that these two events were almost identical in probability, yes? But notice that both of these possibilities are consistent with the fact that I actually am carrying an umbrella. From the perspective of these two possibilities, very little has changed. I hope you’d agree that it’s still true that these two possibilities are equally plausible. So what we expect to see in our final table is some numbers that preserve the fact that “rain and umbrella” is slightly more plausible than “dry and umbrella”, while still ensuring that numbers in the table add up. Something like this, perhaps?
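| | Umbrella | No-umbrella |
| --- | --- | --- |
| Rainy | 0.514 | 0 |
| Dry | 0.486 | 0 |
| Total | 1 | 0 |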
What this table is telling you is that, after being told that I’m carrying an umbrella, you believe that there’s a 51.4% chance that today will be a rainy day, and a 48.6% chance that it won’t. That’s the answer to our problem! The posterior probability of rain \(P(h|d)\) given that I am carrying an umbrella is 51.4%
How did I calculate these numbers? You can probably guess. To work out that there was a 0.514 probability of “rain”, all I did was take the 0.045 probability of “rain and umbrella” and divide it by the 0.0875 chance of “umbrella”. This produces a table that satisfies our need to have everything sum to 1, and our need not to interfere with the relative plausibility of the two events that are actually consistent with the data. To say the same thing using fancy statistical jargon, what I’ve done here is divide the joint probability of the hypothesis and the data \(P(d,h)\) by the marginal probability of the data \(P(d)\), and this is what gives us the posterior probability of the hypothesis given that we know the data have been observed. To write this as an equation:259 \[ P(h | d) = \frac{P(d,h)}{P(d)} \]
However, remember what I said at the start of the last section, namely that the joint probability \(P(d,h)\) is calculated by multiplying the prior \(P(h)\) by the likelihood \(P(d|h)\). In real life, the things we actually know how to write down are the priors and the likelihood, so let’s substitute those back into the equation. This gives us the following formula for the posterior probability:
\[ P(h | d) = \frac{P(d|h) P(h)}{P(d)} \]
And this formula, folks, is known as Bayes’ rule. It describes how a learner starts out with prior beliefs about the plausibility of different hypotheses, and tells you how those beliefs should be revised in the face of data. In the Bayesian paradigm, all statistical inference flows from this one simple rule.
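Continuing the little R sketch from earlier (again, just an illustration rather than anything from the original example), conditioning on the umbrella observation is a one-liner: take the “umbrella” column of the joint table and divide it by its sum.
```
joint <- rbind( rainy = c( umbrella = 0.0450, no.umbrella = 0.1050 ),
                dry = c( umbrella = 0.0425, no.umbrella = 0.8075 ) )
posterior <- joint[ , "umbrella" ] / sum( joint[ , "umbrella" ] ) # Bayes' rule
posterior # rainy: about 0.514, dry: about 0.486, as in the text
```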
## 17.2 Bayesian hypothesis tests
In Chapter 11 I described the orthodox approach to hypothesis testing. It took an entire chapter to describe, because null hypothesis testing is a very elaborate contraption that people find very hard to make sense of. In contrast, the Bayesian approach to hypothesis testing is incredibly simple. Let’s pick a setting that is closely analogous to the orthodox scenario. There are two hypotheses that we want to compare, a null hypothesis \(h_0\) and an alternative hypothesis \(h_1\). Prior to running the experiment we have some beliefs \(P(h)\) about which hypotheses are true. We run an experiment and obtain data \(d\). Unlike frequentist statistics, Bayesian statistics does allow us to talk about the probability that the null hypothesis is true. Better yet, it allows us to calculate the posterior probability of the null hypothesis, using Bayes’ rule:
\[ P(h_0 | d) = \frac{P(d|h_0) P(h_0)}{P(d)} \]
This formula tells us exactly how much belief we should have in the null hypothesis after having observed the data \(d\). Similarly, we can work out how much belief to place in the alternative hypothesis using essentially the same equation. All we do is change the subscript:
\[ P(h_1 | d) = \frac{P(d|h_1) P(h_1)}{P(d)} \]
It’s all so simple that I feel like an idiot even bothering to write these equations down, since all I’m doing is copying Bayes’ rule from the previous section.260
### 17.2.1 The Bayes factor
In practice, most Bayesian data analysts tend not to talk in terms of the raw posterior probabilities \(P(h_0|d)\) and \(P(h_1|d)\). Instead, we tend to talk in terms of the posterior odds ratio. Think of it like betting. Suppose, for instance, the posterior probability of the null hypothesis is 25%, and the posterior probability of the alternative is 75%. The alternative hypothesis is three times as probable as the null, so we say that the odds are 3:1 in favour of the alternative. Mathematically, all we have to do to calculate the posterior odds is divide one posterior probability by the other:
\[ \frac{P(h_1 | d)}{P(h_0 | d)} = \frac{0.75}{0.25} = 3 \]
Or, to write the same thing in terms of the equations above: \[ \frac{P(h_1 | d)}{P(h_0 | d)} = \frac{P(d|h_1)}{P(d|h_0)} \times \frac{P(h_1)}{P(h_0)} \]
Actually, this equation is worth expanding on. There are three different terms here that you should know. On the left hand side, we have the posterior odds, which tells you what you believe about the relative plausibility of the null hypothesis and the alternative hypothesis after seeing the data. On the right hand side, we have the prior odds, which indicates what you thought before seeing the data. In the middle, we have the Bayes factor, which describes the amount of evidence provided by the data: \[ \begin{array}{ccccc}\displaystyle \frac{P(h_1 | d)}{P(h_0 | d)} &=& \displaystyle\frac{P(d|h_1)}{P(d|h_0)} &\times& \displaystyle\frac{P(h_1)}{P(h_0)} \\[6pt] \\[-2pt] \uparrow && \uparrow && \uparrow \\[6pt] \mbox{Posterior odds} && \mbox{Bayes factor} && \mbox{Prior odds} \end{array} \] The Bayes factor (sometimes abbreviated as BF) has a special place in Bayesian hypothesis testing, because it serves a similar role to the \(p\)-value in orthodox hypothesis testing: it quantifies the strength of evidence provided by the data, and as such it is the Bayes factor that people tend to report when running a Bayesian hypothesis test. The reason for reporting Bayes factors rather than posterior odds is that different researchers will have different priors. Some people might have a strong bias to believe the null hypothesis is true, others might have a strong bias to believe it is false. Because of this, the polite thing for an applied researcher to do is report the Bayes factor. That way, anyone reading the paper can multiply the Bayes factor by their own personal prior odds, and they can work out for themselves what the posterior odds would be. In any case, by convention we like to pretend that we give equal consideration to both the null hypothesis and the alternative, in which case the prior odds equals 1, and the posterior odds becomes the same as the Bayes factor.
### 17.2.2 Interpreting Bayes factors
One of the really nice things about the Bayes factor is that the numbers are inherently meaningful. If you run an experiment and you compute a Bayes factor of 4, it means that the evidence provided by your data corresponds to betting odds of 4:1 in favour of the alternative. However, there have been some attempts to quantify the standards of evidence that would be considered meaningful in a scientific context. The two most widely used are from Jeffreys (1961) and Kass and Raftery (1995). Of the two, I tend to prefer the Kass and Raftery (1995) table because it’s a bit more conservative. So here it is:
Bayes factor | Interpretation |
| --- | --- |
1 - 3 | Negligible evidence |
3 - 20 | Positive evidence |
20 - 150 | Strong evidence |
\(>150\) | Very strong evidence |
And to be perfectly honest, I think that even the Kass and Raftery standards are being a bit charitable. If it were up to me, I’d have called the “positive evidence” category “weak evidence”. To me, anything in the range 3:1 to 20:1 is “weak” or “modest” evidence at best. But there are no hard and fast rules here: what counts as strong or weak evidence depends entirely on how conservative you are, and upon the standards that your community insists upon before it is willing to label a finding as “true”.
In any case, note that all the numbers listed above make sense if the Bayes factor is greater than 1 (i.e., the evidence favours the alternative hypothesis). However, one big practical advantage of the Bayesian approach relative to the orthodox approach is that it also allows you to quantify evidence for the null. When that happens, the Bayes factor will be less than 1. You can choose to report a Bayes factor less than 1, but to be honest I find it confusing. For example, suppose that the likelihood of the data under the null hypothesis \(P(d|h_0)\) is equal to 0.2, and the corresponding likelihood \(P(d|h_1)\) under the alternative hypothesis is 0.1. Using the equations given above, the Bayes factor here would be:
\[ \mbox{BF} = \frac{P(d|h_1)}{P(d|h_0)} = \frac{0.1}{0.2} = 0.5 \]
Read literally, this result tells us that the evidence in favour of the alternative is 0.5 to 1. I find this hard to understand. To me, it makes a lot more sense to turn the equation “upside down”, and report the amount of evidence in favour of the null. In other words, what we calculate is this:
\[ \mbox{BF}^\prime = \frac{P(d|h_0)}{P(d|h_1)} = \frac{0.2}{0.1} = 2 \]
And what we would report is a Bayes factor of 2:1 in favour of the null. Much easier to understand, and you can interpret this using the table above.
## 17.3 Why be a Bayesian?
Up to this point I’ve focused exclusively on the logic underpinning Bayesian statistics. We’ve talked about the idea of “probability as a degree of belief”, and what it implies about how a rational agent should reason about the world. The question that you have to answer for yourself is this: how do you want to do your statistics? Do you want to be an orthodox statistician, relying on sampling distributions and \(p\)-values to guide your decisions? Or do you want to be a Bayesian, relying on Bayes factors and the rules for rational belief revision? And to be perfectly honest, I can’t answer this question for you. Ultimately it depends on what you think is right. It’s your call, and your call alone. That being said, I can talk a little about why I prefer the Bayesian approach.
### 17.3.1 Statistics that mean what you think they mean
You keep using that word. I do not think it means what you think it means
– Inigo Montoya, The Princess Bride261
To me, one of the biggest advantages to the Bayesian approach is that it answers the right questions. Within the Bayesian framework, it is perfectly sensible and allowable to refer to “the probability that a hypothesis is true”. You can even try to calculate this probability. Ultimately, isn’t that what you want your statistical tests to tell you? To an actual human being, this would seem to be the whole point of doing statistics: to determine what is true and what isn’t. Any time that you aren’t exactly sure about what the truth is, you should use the language of probability theory to say things like “there is an 80% chance that Theory A is true, but a 20% chance that Theory B is true instead”.
This seems so obvious to a human, yet it is explicitly forbidden within the orthodox framework. To a frequentist, such statements are a nonsense because “the theory is true” is not a repeatable event. A theory is true or it is not, and no probabilistic statements are allowed, no matter how much you might want to make them. There’s a reason why, back in Section 11.5, I repeatedly warned you not to interpret the \(p\)-value as the probability that the null hypothesis is true. There’s a reason why almost every textbook on statistics is forced to repeat that warning. It’s because people desperately want that to be the correct interpretation. Frequentist dogma notwithstanding, a lifetime of experience of teaching undergraduates and of doing data analysis on a daily basis suggests to me that most actual humans think that “the probability that the hypothesis is true” is not only meaningful, it’s the thing we care most about. It’s such an appealing idea that even trained statisticians fall prey to the mistake of trying to interpret a \(p\)-value this way. For example, here is a quote from an official Newspoll report in 2013, explaining how to interpret their (frequentist) data analysis:262
Throughout the report, where relevant, statistically significant changes have been noted. All significance tests have been based on the 95 percent level of confidence. This means that if a change is noted as being statistically significant, there is a 95 percent probability that a real change has occurred, and is not simply due to chance variation. (emphasis added)
Nope! That’s not what \(p<.05\) means. That’s not what 95% confidence means to a frequentist statistician. The bolded section is just plain wrong. Orthodox methods cannot tell you that “there is a 95% chance that a real change has occurred”, because this is not the kind of event to which frequentist probabilities may be assigned. To an ideological frequentist, this sentence should be meaningless. Even if you’re a more pragmatic frequentist, it’s still the wrong definition of a \(p\)-value. It is simply not an allowed or correct thing to say if you want to rely on orthodox statistical tools.
On the other hand, let’s suppose you are a Bayesian. Although the bolded passage is the wrong definition of a \(p\)-value, it’s pretty much exactly what a Bayesian means when they say that the posterior probability of the alternative hypothesis is greater than 95%. And here’s the thing. If the Bayesian posterior is actually the thing you want to report, why are you even trying to use orthodox methods? If you want to make Bayesian claims, all you have to do is be a Bayesian and use Bayesian tools.
Speaking for myself, I found this to be the most liberating thing about switching to the Bayesian view. Once you’ve made the jump, you no longer have to wrap your head around counterintuitive definitions of \(p\)-values. You don’t have to bother remembering why you can’t say that you’re 95% confident that the true mean lies within some interval. All you have to do is be honest about what you believed before you ran the study, and then report what you learned from doing it. Sounds nice, doesn’t it? To me, this is the big promise of the Bayesian approach: you do the analysis you really want to do, and express what you really believe the data are telling you.
## 17.4 Evidentiary standards you can believe
If [\(p\)] is below .02 it is strongly indicated that the [null] hypothesis fails to account for the whole of the facts. We shall not often be astray if we draw a conventional line at .05 and consider that [smaller values of \(p\)] indicate a real discrepancy.
– Sir Ronald Fisher (1925)
Consider the quote above by Sir Ronald Fisher, one of the founders of what has become the orthodox approach to statistics. If anyone has ever been entitled to express an opinion about the intended function of \(p\)-values, it’s Fisher. In this passage, taken from his classic guide Statistical Methods for Research Workers, he’s pretty clear about what it means to reject a null hypothesis at \(p<.05\). In his opinion, if we take \(p<.05\) to mean there is “a real effect”, then “we shall not often be astray”. This view is hardly unusual: in my experience, most practitioners express views very similar to Fisher’s. In essence, the \(p<.05\) convention is assumed to represent a fairly stringent evidentiary standard.
Well, how true is that? One way to approach this question is to try to convert \(p\)-values to Bayes factors, and see how the two compare. It’s not an easy thing to do because a \(p\)-value is a fundamentally different kind of calculation to a Bayes factor, and they don’t measure the same thing. However, there have been some attempts to work out the relationship between the two, and it’s somewhat surprising. For example, Johnson (2013) presents a pretty compelling case that (for \(t\)-tests at least) the \(p<.05\) threshold corresponds roughly to a Bayes factor of somewhere between 3:1 and 5:1 in favour of the alternative. If that’s right, then Fisher’s claim is a bit of a stretch. Let’s suppose that the null hypothesis is true about half the time (i.e., the prior probability of \(H_0\) is 0.5), and we use those numbers to work out the posterior probability of the null hypothesis given that it has been rejected at \(p<.05\). Using the data from Johnson (2013), we see that if you reject the null at \(p<.05\), you’ll be correct about 80% of the time. I don’t know about you, but in my opinion an evidentiary standard that ensures you’ll be wrong on 20% of your decisions isn’t good enough. The fact remains that, quite contrary to Fisher’s claim, if you reject at \(p<.05\) you shall quite often go astray. It’s not a very stringent evidentiary threshold at all.
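To see roughly where that figure comes from, here’s a back-of-the-envelope sketch (my own illustration, not Johnson’s calculation): with prior odds of 1:1, the posterior odds equal the Bayes factor, and converting odds of 3:1 to 5:1 into probabilities gives something in the vicinity of 75% to 83%, i.e., about 80%.
```
bf <- c( 3, 5 ) # Bayes factors roughly corresponding to p < .05
posterior.odds <- bf * 1 # prior odds of 1:1
posterior.odds / ( posterior.odds + 1 ) # posterior probability of the alternative: 0.75 and about 0.83
```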
## 17.5 The \(p\)-value is a lie.
The cake is a lie.
The cake is a lie. The cake is a lie. The cake is a lie. – Portal263
Okay, at this point you might be thinking that the real problem is not with orthodox statistics, just the \(p<.05\) standard. In one sense, that’s true. The recommendation that Johnson (2013) gives is not that “everyone must be a Bayesian now”. Instead, the suggestion is that it would be wiser to shift the conventional standard to something like a \(p<.01\) level. That’s not an unreasonable view to take, but in my view the problem is a little more severe than that. In my opinion, there’s a fairly big problem built into the way most (but not all) orthodox hypothesis tests are constructed. They are grossly naive about how humans actually do research, and because of this most \(p\)-values are wrong.
Sounds like an absurd claim, right? Well, consider the following scenario. You’ve come up with a really exciting research hypothesis and you design a study to test it. You’re very diligent, so you run a power analysis to work out what your sample size should be, and you run the study. You run your hypothesis test and out pops a \(p\)-value of 0.072. Really bloody annoying, right?
What should you do? Here are some possibilities:
* You conclude that there is no effect, and try to publish it as a null result
* You guess that there might be an effect, and try to publish it as a “borderline significant” result
* You give up and try a new study
* You collect some more data to see if the \(p\) value goes up or (preferably!) drops below the “magic” criterion of \(p<.05\)
Which would you choose? Before reading any further, I urge you to take some time to think about it. Be honest with yourself. But don’t stress about it too much, because you’re screwed no matter what you choose. Based on my own experiences as an author, reviewer and editor, as well as stories I’ve heard from others, here’s what will happen in each case:
Let’s start with option 1. If you try to publish it as a null result, the paper will struggle to be published. Some reviewers will think that \(p=.072\) is not really a null result. They’ll argue it’s borderline significant. Other reviewers will agree it’s a null result, but will claim that even though some null results are publishable, yours isn’t. One or two reviewers might even be on your side, but you’ll be fighting an uphill battle to get it through.
Okay, let’s think about option number 2. Suppose you try to publish it as a borderline significant result. Some reviewers will claim that it’s a null result and should not be published. Others will claim that the evidence is ambiguous, and that you should collect more data until you get a clear significant result. Again, the publication process does not favour you.
Given the difficulties in publishing an “ambiguous” result like \(p=.072\), option number 3 might seem tempting: give up and do something else. But that’s a recipe for career suicide. If you give up and start a new project every time you find yourself faced with ambiguity, your work will never be published. And if you’re in academia without a publication record you can lose your job. So that option is out.
It looks like you’re stuck with option 4. You don’t have conclusive results, so you decide to collect some more data and re-run the analysis. Seems sensible, but unfortunately for you, if you do this all of your \(p\)-values are now incorrect. All of them. Not just the \(p\)-values that you calculated for this study. All of them. All the \(p\)-values you calculated in the past and all the \(p\)-values you will calculate in the future. Fortunately, no-one will notice. You’ll get published, and you’ll have lied.
Wait, what? How can that last part be true? I mean, it sounds like a perfectly reasonable strategy doesn’t it? You collected some data, the results weren’t conclusive, so now what you want to do is collect more data until the results are conclusive. What’s wrong with that?
Honestly, there’s nothing wrong with it. It’s a reasonable, sensible and rational thing to do. In real life, this is exactly what every researcher does. Unfortunately, the theory of null hypothesis testing as I described it in Chapter 11 forbids you from doing this.264 The reason is that the theory assumes that the experiment is finished and all the data are in. And because it assumes the experiment is over, it only considers two possible decisions. If you’re using the conventional \(p<.05\) threshold, those decisions are:
| Outcome | Action |
| --- | --- |
| \(p\) less than .05 | Reject the null |
| \(p\) greater than .05 | Retain the null |
What you’re doing is adding a third possible action to the decision making problem. Specifically, what you’re doing is using the \(p\)-value itself as a reason to justify continuing the experiment. And as a consequence you’ve transformed the decision-making procedure into one that looks more like this:
| Outcome | Action |
| --- | --- |
| \(p\) less than .05 | Stop the experiment and reject the null |
| \(p\) between .05 and .1 | Continue the experiment |
| \(p\) greater than .1 | Stop the experiment and retain the null |
The “basic” theory of null hypothesis testing isn’t built to handle this sort of thing, not in the form I described back in Chapter 11. If you’re the kind of person who would choose to “collect more data” in real life, it implies that you are not making decisions in accordance with the rules of null hypothesis testing. Even if you happen to arrive at the same decision as the hypothesis test, you aren’t following the decision process it implies, and it’s this failure to follow the process that is causing the problem.265 Your \(p\)-values are a lie.
Worse yet, they’re a lie in a dangerous way, because they’re all too small. To give you a sense of just how bad it can be, consider the following (worst case) scenario. Imagine you’re a really super-enthusiastic researcher on a tight budget who didn’t pay any attention to my warnings above. You design a study comparing two groups. You desperately want to see a significant result at the \(p<.05\) level, but you really don’t want to collect any more data than you have to (because it’s expensive). In order to cut costs, you start collecting data, but every time a new observation arrives you run a \(t\)-test on your data. If the \(t\)-test says \(p<.05\) then you stop the experiment and report a significant result. If not, you keep collecting data. You keep doing this until you reach your pre-defined spending limit for this experiment. Let’s say that limit kicks in at \(N=1000\) observations. As it turns out, the truth of the matter is that there is no real effect to be found: the null hypothesis is true. So, what’s the chance that you’ll make it to the end of the experiment and (correctly) conclude that there is no effect? In an ideal world, the answer here should be 95%. After all, the whole point of the \(p<.05\) criterion is to control the Type I error rate at 5%, so what we’d hope is that there’s only a 5% chance of falsely rejecting the null hypothesis in this situation. However, there’s no guarantee that will be true. You’re breaking the rules: you’re running tests repeatedly, “peeking” at your data to see if you’ve gotten a significant result, and all bets are off.
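If you don’t want to take my word for it, here’s a small simulation sketch of my own. It isn’t the exact simulation reported below (I’ve used a smaller maximum sample size so it runs quickly, and a Student \(t\)-test at every peek), but it makes the same point:

```
# Simulate a "peek after every observation" researcher when the null is true
set.seed(1234)
peek.sim <- function(n.max = 100) {
  x <- rnorm(2); y <- rnorm(2)                  # a couple of observations to start with
  for (n in 3:n.max) {
    x <- c(x, rnorm(1)); y <- c(y, rnorm(1))    # one new observation per group
    p <- t.test(x, y, var.equal = TRUE)$p.value
    if (p < .05) return(TRUE)                   # "significant"! stop and reject the null
  }
  FALSE                                         # survived to the end: retain the null
}
mean(replicate(1000, peek.sim()))               # Type I error rate: well above .05
```

With a maximum of only 100 observations per group the error rate is already several times larger than 5%, and it keeps climbing towards the figure quoted below as you let the maximum sample size grow.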
So how bad is it? The answer is shown as the solid black line in Figure 17.1, and it’s astoundingly bad. If you peek at your data after every single observation, there is a 49% chance that you will make a Type I error. That’s, um, quite a bit bigger than the 5% that it’s supposed to be. By way of comparison, imagine that you had used the following strategy. Start collecting data. Every single time an observation arrives, run a Bayesian \(t\)-test (Section 17.7) and look at the Bayes factor. I’ll assume that Johnson (2013) is right, and I’ll treat a Bayes factor of 3:1 as roughly equivalent to a \(p\)-value of .05.266 This time around, our trigger happy researcher uses the following procedure: if the Bayes factor is 3:1 or more in favour of the null, stop the experiment and retain the null. If it is 3:1 or more in favour of the alternative, stop the experiment and reject the null. Otherwise continue testing. Now, just like last time, let’s assume that the null hypothesis is true. What happens? As it happens, I ran the simulations for this scenario too, and the results are shown as the dashed line in Figure 17.1. It turns out that the Type I error rate is much much lower than the 49% rate that we were getting by using the orthodox \(t\)-test.
In some ways, this is remarkable. The entire point of orthodox null hypothesis testing is to control the Type I error rate. Bayesian methods aren’t actually designed to do this at all. Yet, as it turns out, when faced with a “trigger happy” researcher who keeps running hypothesis tests as the data come in, the Bayesian approach is much more effective. Even the 3:1 standard, which most Bayesians would consider unacceptably lax, is much safer than the \(p<.05\) rule.
### 17.5.1 Is it really this bad?
The example I gave in the previous section is a pretty extreme situation. In real life, people don’t run hypothesis tests every time a new observation arrives. So it’s not fair to say that the \(p<.05\) threshold “really” corresponds to a 49% Type I error rate (i.e., \(p=.49\)). But the fact remains that if you want your \(p\)-values to be honest, then you either have to switch to a completely different way of doing hypothesis tests, or you must enforce a strict rule: no peeking. You are not allowed to use the data to decide when to terminate the experiment. You are not allowed to look at a “borderline” \(p\)-value and decide to collect more data. You aren’t even allowed to change your data analysis strategy after looking at data. You are strictly required to follow these rules, otherwise the \(p\)-values you calculate will be nonsense.
And yes, these rules are surprisingly strict. As a class exercise a couple of years back, I asked students to think about this scenario. Suppose you started running your study with the intention of collecting \(N=80\) people. When the study starts out you follow the rules, refusing to look at the data or run any tests. But when you reach \(N=50\) your willpower gives in… and you take a peek. Guess what? You’ve got a significant result! Now, sure, you know you said that you’d keep running the study out to a sample size of \(N=80\), but it seems sort of pointless now, right? The result is significant with a sample size of \(N=50\), so wouldn’t it be wasteful and inefficient to keep collecting data? Aren’t you tempted to stop? Just a little? Well, keep in mind that if you do, your Type I error rate at \(p<.05\) just ballooned out to 8%. When you report \(p<.05\) in your paper, what you’re really saying is \(p<.08\). That’s how bad the consequences of “just one peek” can be.
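In case that figure sounds implausible, here’s a quick sketch of the “just one peek” scenario, again assuming the null hypothesis is true. This is my own illustration rather than a calculation taken from the text:

```
# Two groups, null true: peek at n = 50 per group, then finish the study at n = 80
set.seed(42)
one.peek <- function() {
  x <- rnorm(80); y <- rnorm(80)
  p.peek  <- t.test(x[1:50], y[1:50], var.equal = TRUE)$p.value  # the sneaky look
  p.final <- t.test(x, y, var.equal = TRUE)$p.value              # the planned analysis
  (p.peek < .05) || (p.final < .05)       # reject if either look is "significant"
}
mean(replicate(10000, one.peek()))        # roughly .08 rather than .05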
Now consider this … the scientific literature is filled with \(t\)-tests, ANOVAs, regressions and chi-square tests. When I wrote this book I didn’t pick these tests arbitrarily. The reason why these four tools appear in most introductory statistics texts is that these are the bread and butter tools of science. None of these tools include a correction to deal with “data peeking”: they all assume that you’re not doing it. But how realistic is that assumption? In real life, how many people do you think have “peeked” at their data before the experiment was finished and adapted their subsequent behaviour after seeing what the data looked like? Except when the sampling procedure is fixed by an external constraint, I’m guessing the answer is “most people have done it”. If that has happened, you can infer that the reported \(p\)-values are wrong. Worse yet, because we don’t know what decision process they actually followed, we have no way to know what the \(p\)-values should have been. You can’t compute a \(p\)-value when you don’t know the decision making procedure that the researcher used. And so the reported \(p\)-value remains a lie.
Given all of the above, what is the take home message? It’s not that Bayesian methods are foolproof. If a researcher is determined to cheat, they can always do so. Bayes’ rule cannot stop people from lying, nor can it stop them from rigging an experiment. That’s not my point here. My point is the same one I made at the very beginning of the book in Section 1.1: the reason why we run statistical tests is to protect us from ourselves. And the reason why “data peeking” is such a concern is that it’s so tempting, even for honest researchers. A theory for statistical inference has to acknowledge this. Yes, you might try to defend \(p\)-values by saying that it’s the fault of the researcher for not using them properly. But to my mind that misses the point. A theory of statistical inference that is so completely naive about humans that it doesn’t even consider the possibility that the researcher might look at their own data isn’t a theory worth having. In essence, my point is this:
Good laws have their origins in bad morals.
– <NAME>267
Good rules for statistical testing have to acknowledge human frailty. None of us are without sin. None of us are beyond temptation. A good system for statistical inference should still work even when it is used by actual humans. Orthodox null hypothesis testing does not.268
## 17.6 Bayesian analysis of contingency tables
Time to change gears. Up to this point I’ve been talking about what Bayesian inference is and why you might consider using it. I now want to briefly describe how to do Bayesian versions of various statistical tests. The discussions in the next few sections are not as detailed as I’d like, but I hope they’re enough to help you get started. So let’s begin.
The first kind of statistical inference problem I discussed in this book appeared in Chapter 12, in which we discussed categorical data analysis problems. In that chapter I talked about several different statistical problems that you might be interested in, but the one that appears most often in real life is the analysis of contingency tables. In this kind of data analysis situation, we have a cross-tabulation of one variable against another one, and the goal is to find out if there is some association between these variables. The data set I used to illustrate this problem is found in the `chapek9.Rdata` file, and it contains a single data frame called `chapek9`:
```
load(file.path(projecthome, "data","chapek9.Rdata"))
head(chapek9)
```
In this data set, we supposedly sampled 180 beings and measured two things. First, we checked whether they were humans or robots, as captured by the `species` variable. Second, we asked them to nominate whether they most preferred flowers, puppies, or data. When we produce the cross-tabulation, we get this as the results:
```
crosstab <- xtabs( ~ species + choice, chapek9 )
crosstab
```
Surprisingly, the humans seemed to show a much stronger preference for data than the robots did. At the time we speculated that this might have been because the questioner was a large robot carrying a gun, and the humans might have been scared.
### 17.6.1 The orthodox test
Just to refresh your memory, here’s how we analysed these data back in Chapter 12. Because we want to determine if there is some association between `species` and `choice` , we used the `associationTest()` function in the `lsr` package to run a chi-square test of association. The results looked like this:
```
library(lsr)
associationTest( ~species + choice, chapek9 )
```
Because we found a small \(p\) value (in this case \(p<.01\)), we concluded that the data are inconsistent with the null hypothesis of no association, and we rejected it.
### 17.6.2 The Bayesian test
How do we run an equivalent test as a Bayesian? Well, like every other bloody thing in statistics, there’s a lot of different ways you could do it. However, for the sake of everyone’s sanity, throughout this chapter I’ve decided to rely on one R package to do the work. Specifically, I’m going to use the `BayesFactor` package written by Richard Morey and Jeff Rouder, which as of this writing is in version 0.9.10. For the analysis of contingency tables, the `BayesFactor` package contains a function called `contingencyTableBF()` . The data that you need to give to this function is the contingency table itself (i.e., the `crosstab` variable above), so you might be expecting to use a command like this:
```
library( BayesFactor ) # ...because we have to load the package
contingencyTableBF( crosstab ) # ...because that makes sense, right?
```
However, if you try this you’ll get an error message. This is because the `contingencyTableBF()` function needs one other piece of information from you: it needs to know what sampling plan you used to run your experiment. You can specify the sampling plan using the `sampleType` argument. So I should probably tell you what your options are! The `contingencyTableBF()` function distinguishes between four different types of experiment:
* Fixed sample size. Suppose that in our `chapek9` example, our experiment was designed like this: we deliberately set out to test 180 people, but we didn’t try to control the number of humans or robots, nor did we try to control the choices they made. In this design, the total number of observations \(N\) is fixed, but everything else is random. This is referred to as “joint multinomial” sampling, and if that’s what you did you should specify `sampleType = "jointMulti"`. In the case of the `chapek9` data, that’s actually what I had in mind when I invented the data set.
* Fixed row (or column) totals. A different kind of design might work like this. We decide ahead of time that we want 180 people, but we try to be a little more systematic about it. Specifically, the experimenter constrains it so that we get a predetermined number of humans and robots (e.g., 90 of each). In this design, either the row totals or the column totals are fixed, but not both. This is referred to as “independent multinomial” sampling, and if that’s what you did you should specify `sampleType = "indepMulti"`.
* Both row and column totals fixed. Another logical possibility is that you designed the experiment so that both the row totals and the column totals are fixed. This doesn’t make any sense at all in the `chapek9` example, but there are other designs that can work this way. Suppose that I show you a collection of 20 toys, and then give you 10 stickers that say `boy` and another 10 that say `girl`. I then give you 10 `blue` stickers and 10 `pink` stickers. I then ask you to put the stickers on the 20 toys such that every toy has a colour and every toy has a gender. No matter how you assign the stickers, the total number of pink and blue toys will be 10, as will the number of boys and girls. In this design both the rows and columns of the contingency table are fixed. This is referred to as “hypergeometric” sampling, and if that’s what you’ve done you should specify `sampleType = "hypergeom"`.
* Nothing is fixed. Finally, it might be the case that nothing is fixed. Not the row totals, not the column totals, and not the total sample size either. For instance, in the `chapek9` scenario, suppose what I’d done is run the study for a fixed length of time. By chance, it turned out that I got 180 people to turn up to the study, but it could easily have been something else. This is referred to as “Poisson” sampling, and if that’s what you’ve done you should specify `sampleType = "poisson"`.

Okay, so now we have enough knowledge to actually run a test. For the `chapek9` data, I implied that we designed the study such that the total sample size \(N\) was fixed, so we should set `sampleType = "jointMulti"`. The command that we need is:
```
library( BayesFactor )
contingencyTableBF( crosstab, sampleType = "jointMulti" )
```
As with most R commands, the output initially looks suspiciously similar to utter gibberish. Fortunately, it’s actually pretty simple once you get past the initial impression. Firstly, note that the stuff at the top and bottom are irrelevant fluff. You already know that you’re doing a Bayes factor analysis. You already know that you’re analysing a contingency table, and you already know that you specified a joint multinomial sampling plan. So let’s strip that out and take a look at what’s left over:
```
[1] Non-indep. (a=1) : 15.92684 ±0%
Against denominator:
Null, independence, a = 1
```
Let’s also ignore those two `a=1` bits, since they’re technical details that you don’t need to know about at this stage.269 The rest of the output is actually pretty straightforward. At the bottom, the output defines the null hypothesis for you: in this case, the null hypothesis is that there is no relationship between `species` and `choice` . Or, to put it another way, the null hypothesis is that these two variables are independent. Now if you look at the line above it, you might (correctly) guess that the `Non-indep.` part refers to the alternative hypothesis. In this case, the alternative is that there is a relationship between `species` and `choice` : that is, they are not independent. So the only thing left in the output is the bit that reads
```
15.92684 ±0%
```
The 15.9 part is the Bayes factor, and it’s telling you that the odds for the alternative hypothesis against the null are about 16:1. The \(\pm0\%\) part is not very interesting: essentially, all it’s telling you is that R has calculated an exact Bayes factor, so the uncertainty about the Bayes factor is 0%.270 In any case, the data are telling us that we have moderate evidence for the alternative hypothesis.
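As an aside, if you ever want the Bayes factor as a plain number rather than reading it off the screen, you can save the result and use the package’s `extractBF()` helper. A small sketch (the object name `ct.bf` is just something I made up):

```
ct.bf <- contingencyTableBF( crosstab, sampleType = "jointMulti" )
extractBF( ct.bf )$bf    # the Bayes factor as a number, about 15.9
```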
### 17.6.3 Writing up the results
When writing up the results, my experience has been that there aren’t quite so many “rules” for how you “should” report Bayesian hypothesis tests. That might change in the future if Bayesian methods become standard and some task force starts writing up style guides, but in the meantime I would suggest using some common sense. For example, I would avoid writing this:
A Bayesian test of association found a significant result (BF=15.92)
To my mind, this write up is unclear. Even assuming that you’ve already reported the relevant descriptive statistics, there are a number of things I am unhappy with. First, the concept of “statistical significance” is pretty closely tied with \(p\)-values, so it reads slightly strangely. Second, the “BF=15.92” part will only make sense to people who already understand Bayesian methods, and not everyone does. Third, it is somewhat unclear exactly which test was run and what software was used to do so.
On the other hand, unless precision is extremely important, I think that this is taking things a step too far:
We ran a Bayesian test of association using version 0.9.10-1 of the BayesFactor package using default priors and a joint multinomial sampling plan. The resulting Bayes factor of 15.92 to 1 in favour of the alternative hypothesis indicates that there is moderately strong evidence for the non-independence of species and choice.
Everything about that passage is correct, of course. Morey and Rouder (2015) built their Bayesian tests of association using the paper by Gunel and Dickey (1974), the specific test we used assumes that the experiment relied on a joint multinomial sampling plan, and indeed the Bayes factor of 15.92 is moderately strong evidence. It’s just far too wordy.
In most situations you just don’t need that much information. My preference is usually to go for something a little briefer. First, if you’re reporting multiple Bayes factor analyses in your write up, then somewhere you only need to cite the software once, at the beginning of the results section. So you might have one sentence like this:
All analyses were conducted using the BayesFactor package in R , and unless otherwise stated default parameter values were used
Notice that I don’t bother including the version number? That’s because the citation itself includes that information (go check my reference list if you don’t believe me). There’s no need to clutter up your results with redundant information that almost no-one will actually need. When you get to the actual test you can get away with this:
A test of association produced a Bayes factor of 16:1 in favour of a relationship between species and choice.
Short and sweet. I’ve rounded 15.92 to 16, because there’s not really any important difference between 15.92:1 and 16:1. I spelled out “Bayes factor” rather than truncating it to “BF” because not everyone knows the abbreviation. I indicated exactly what the effect is (i.e., “a relationship between species and choice”) and how strong the evidence was. I didn’t bother indicating whether this was “moderate” evidence or “strong” evidence, because the odds themselves tell you! There’s nothing stopping you from including that information, and I’ve done so myself on occasions, but you don’t strictly need it. Similarly, I didn’t bother to indicate that I ran the “joint multinomial” sampling plan, because I’m assuming that the method section of my write up would make clear how the experiment was designed. (I might change my mind about that if the method section was ambiguous.) Neither did I bother indicating that this was a Bayesian test of association: if your reader can’t work that out from the fact that you’re reporting a Bayes factor and the fact that you’re citing the `BayesFactor` package for all your analyses, then there’s no chance they’ll understand anything you’ve written. Besides, if you keep writing the word “Bayes” over and over again it starts to look stupid. Bayes Bayes Bayes Bayes Bayes. See?
### 17.6.4 Other sampling plans
Up to this point all I’ve shown you is how to use the `contingencyTableBF()` function for the joint multinomial sampling plan (i.e., when the total sample size \(N\) is fixed, but nothing else is). For the Poisson sampling plan (i.e., nothing fixed), the command you need is identical except for the `sampleType` argument:
```
contingencyTableBF(crosstab, sampleType = "poisson" )
```
```
## Bayes factor analysis
## --------------
## [1] Non-indep. (a=1) : 28.20757 ±0%
##
## Against denominator:
## Null, independence, a = 1
## ---
## Bayes factor type: BFcontingencyTable, poisson
```
Notice that the Bayes factor of 28:1 here is not identical to the Bayes factor of 16:1 that we obtained from the last test. The sampling plan actually does matter.
What about the design in which the row totals (or column totals) are fixed? As I mentioned earlier, this corresponds to the “independent multinomial” sampling plan. Again, you need to specify the `sampleType` argument, but this time you need to specify whether you fixed the rows or the columns. For example, suppose I deliberately sampled 87 humans and 93 robots, then I would need to indicate that the `fixedMargin` of the contingency table is the `"rows"` . So the command I would use is:
```
contingencyTableBF(crosstab, sampleType = "indepMulti", fixedMargin="rows")
```
Again, the Bayes factor is different, with the evidence for the alternative dropping to a mere 9:1. As you might expect, the answers would be different again if it were the columns of the contingency table that the experimental design fixed.
Finally, if we turn to hypergeometric sampling in which everything is fixed, we get…
```
contingencyTableBF(crosstab, sampleType = "hypergeom")
#Error in contingencyHypergeometric(as.matrix(data2), a) :
# hypergeometric contingency tables restricted to 2 x 2 tables; see help for contingencyTableBF()
```
… an error message. Okay, some quick reading through the help files hints that support for larger contingency tables is coming, but it’s not been implemented yet. In the meantime, let’s imagine we have data from the “toy labelling” experiment I described earlier in this section. Specifically, let’s say our data look like this:
```
toys <- data.frame(stringsAsFactors=FALSE,
gender = c("girl", "boy"),
pink = c(8, 2),
blue = c(2, 8)
)
```
The Bayesian test with hypergeometric sampling gives us this:
```
contingencyTableBF(toys, sampleType = "hypergeom")
#Bayes factor analysis
#--------------
#[1] Non-indep. (a=1) : 8.294321 ±0%
#
#Against denominator:
# Null, independence, a = 1
#---
#Bayes factor type: BFcontingencyTable, hypergeometric
```
The Bayes factor of 8:1 provides modest evidence that the labels were being assigned in a way that correlates gender with colour, but it’s not conclusive.
## 17.7 Bayesian \(t\)-tests
The second type of statistical inference problem discussed in this book is the comparison between two means, discussed in some detail in the chapter on \(t\)-tests (Chapter 13). If you can remember back that far, you’ll recall that there are several versions of the \(t\)-test. The `BayesFactor` package contains a function called `ttestBF()` that is flexible enough to run several different versions of the \(t\)-test. I’ll talk a little about Bayesian versions of the independent samples \(t\)-test and the paired samples \(t\)-test in this section.
### 17.7.1 Independent samples \(t\)-test
The most common type of \(t\)-test is the independent samples \(t\)-test, and it arises when you have data that look something like this:
```
load(file.path(projecthome, "data","harpo.Rdata"))
head(harpo)
```
In this data set, we have two groups of students, those who received lessons from Anastasia and those who took their classes with Bernadette. The question we want to answer is whether there’s any difference in the grades received by these two groups of students. Back in Chapter 13 I suggested you could analyse this kind of data using the `independentSamplesTTest()` function in the `lsr` package. For example, if you want to run a Student’s \(t\)-test, you’d use a command like this:
```
independentSamplesTTest(
formula = grade ~ tutor,
data = harpo,
var.equal = TRUE
)
```
Like most of the functions that I wrote for this book, the `independentSamplesTTest()` function is very wordy. It prints out a bunch of descriptive statistics and a reminder of what the null and alternative hypotheses are, before finally getting to the test results. I wrote it that way deliberately, in order to help make things a little clearer for people who are new to statistics.
Again, we obtain a \(p\)-value less than 0.05, so we reject the null hypothesis.
What does the Bayesian version of the \(t\)-test look like? Using the `ttestBF()` function, we can obtain a Bayesian analog of Student’s independent samples \(t\)-test using the following command:
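Assuming we use the same `formula` and `data` arguments as in the orthodox analysis above, the call would presumably look like this:

```
ttestBF( formula = grade ~ tutor, data = harpo )
```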
```
## Bayes factor analysis
## --------------
## [1] Alt., r=0.707 : 1.754927 ±0%
##
## Against denominator:
## Null, mu1-mu2 = 0
## ---
## Bayes factor type: BFindepSample, JZS
```
Notice that the format of this command is pretty standard. As usual we have a `formula` argument in which we specify the outcome variable on the left hand side and the grouping variable on the right. The `data` argument is used to specify the data frame containing the variables. However, notice that there’s no analog of the `var.equal` argument. This is because the `BayesFactor` package does not include an analog of the Welch test, only the Student test.271

In any case, the output shown above is what you get when you run this command. So what does all this mean? Just as we saw with the `contingencyTableBF()` function, the output is pretty dense. But, just like last time, there’s not a lot of information here that you actually need to process. Firstly, let’s examine the bottom line. The `BFindepSample` part just tells you that you ran an independent samples \(t\)-test, and the `JZS` part is technical information that is a little beyond the scope of this book.272 Clearly, there’s nothing to worry about in that part. In the line above, the text `Null, mu1-mu2 = 0` is just telling you that the null hypothesis is that there are no differences between means. But you already knew that. So the only part that really matters is this line here:
```
[1] Alt., r=0.707 : 1.754927 ±0%
```
Ignore the `r=0.707` part: it refers to a technical detail that we won’t worry about in this chapter.273 Instead, you should focus on the part that reads `1.754927` . This is the Bayes factor: the evidence provided by these data are about 1.8:1 in favour of the alternative.
Before moving on, it’s worth highlighting the difference between the orthodox test results and the Bayesian one. According to the orthodox test, we obtained a significant result, though only barely. Nevertheless, many people would happily accept \(p=.043\) as reasonably strong evidence for an effect. In contrast, notice that the Bayesian test doesn’t even reach 2:1 odds in favour of an effect, and would be considered very weak evidence at best. In my experience that’s a pretty typical outcome. Bayesian methods usually require more evidence before rejecting the null.
### 17.7.2 Paired samples \(t\)-test
Back in Section 13.5 I discussed the `chico` data frame in which students’ grades were measured on two tests, and we were interested in finding out whether grades went up from test 1 to test 2. Because every student did both tests, the tool we used to analyse the data was a paired samples \(t\)-test. To remind you of what the data look like, here’s the first few cases:
```
load(file.path(projecthome, "data","chico.Rdata"))
head(chico)
```
We originally analysed the data using the `pairedSamplesTTest()` function in the `lsr` package, but this time we’ll use the `ttestBF()` function from the `BayesFactor` package to do the same thing. The easiest way to do it with this data set is to use the `x` argument to specify one variable and the `y` argument to specify the other. All we need to do then is specify `paired=TRUE` to tell R that this is a paired samples test. So here’s our command:
```
ttestBF(
x = chico$grade_test1,
y = chico$grade_test2,
paired = TRUE
)
```
```
## Bayes factor analysis
## --------------
## [1] Alt., r=0.707 : 5992.05 ±0%
##
## Against denominator:
## Null, mu = 0
## ---
## Bayes factor type: BFoneSample, JZS
```
At this point, I hope you can read this output without any difficulty. The data provide evidence of about 6000:1 in favour of the alternative. We could probably reject the null with some confidence!
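Incidentally, the `BFoneSample` label at the bottom of that output gives away what’s happening under the hood: the paired test is really just a one-sample test applied to the within-student difference scores. If you wanted to check that for yourself (you don’t need to, this is just an illustration of mine), you could compute the differences manually and you should get essentially the same Bayes factor:

```
ttestBF( x = chico$grade_test2 - chico$grade_test1 )
```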
## 17.8 Bayesian regression
Okay, so now we’ve seen Bayesian equivalents to orthodox chi-square tests and \(t\)-tests. What’s next? If I were to follow the same progression that I used when developing the orthodox tests you’d expect to see ANOVA next, but I think it’s a little clearer if we start with regression.
### 17.8.1 A quick refresher
In Chapter 15 I used the `parenthood` data to illustrate the basic ideas behind regression. To remind you of what that data set looks like, here’s the first six observations:
```
load(file.path(projecthome, "data","parenthood.Rdata"))
head(parenthood)
```
```
## dan.sleep baby.sleep dan.grump day
## 1 7.59 10.18 56 1
## 2 7.91 11.66 60 2
## 3 5.14 7.92 82 3
## 4 7.71 9.61 55 4
## 5 6.68 9.75 67 5
## 6 5.99 5.04 72 6
```
Back in Chapter 15 I proposed a theory in which my grumpiness ( `dan.grump` ) on any given day is related to the amount of sleep I got the night before ( `dan.sleep` ), and possibly to the amount of sleep our baby got ( `baby.sleep` ), though probably not to the `day` on which we took the measurement. We tested this using a regression model. In order to estimate the regression model we used the `lm()` function, like so:
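Judging from the `Call` line in the `summary()` output below, that command would have been:

```
model <- lm( formula = dan.grump ~ dan.sleep + day + baby.sleep, data = parenthood )
```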
The hypothesis tests for each of the terms in the regression model were extracted using the `summary()` function as shown below: `summary(model)`
```
##
## Call:
## lm(formula = dan.grump ~ dan.sleep + day + baby.sleep, data = parenthood)
##
## Residuals:
## Min 1Q Median 3Q Max
## -10.906 -2.284 -0.295 2.652 11.880
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 126.278707 3.242492 38.945 <2e-16 ***
## dan.sleep -8.969319 0.560007 -16.016 <2e-16 ***
## day -0.004403 0.015262 -0.288 0.774
## baby.sleep 0.015747 0.272955 0.058 0.954
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.375 on 96 degrees of freedom
## Multiple R-squared: 0.8163, Adjusted R-squared: 0.8105
## F-statistic: 142.2 on 3 and 96 DF, p-value: < 2.2e-16
```
When interpreting the results, each row in this table corresponds to one of the possible predictors. The `(Intercept)` term isn’t usually interesting, though it is highly significant. The important thing for our purposes is the fact that `dan.sleep` is significant at \(p<.001\) and neither of the other variables are.
### 17.8.2 The Bayesian version
Okay, so how do we do the same thing using the `BayesFactor` package? The easiest way is to use the `regressionBF()` function instead of `lm()` . As before, we use `formula` to indicate what the full regression model looks like, and the `data` argument to specify the data frame. So the command is:
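Using the same formula that we gave to `lm()` above, that command would presumably be:

```
regressionBF( formula = dan.grump ~ dan.sleep + day + baby.sleep, data = parenthood )
```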
So that’s pretty straightforward: it’s exactly what we’ve been doing throughout the book. The output, however, is a little different from what you get from `lm()` . The format of this is pretty familiar. At the bottom we have some technical rubbish, and at the top we have some information about the Bayes factors. What’s new is the fact that we seem to have lots of Bayes factors here. What’s all this about?

The trick to understanding this output is to recognise that if we’re interested in working out which of the 3 predictor variables are related to `dan.grump` , there are actually 8 possible regression models that could be considered. One possibility is the intercept only model, in which none of the three variables have an effect. At the other end of the spectrum is the full model in which all three variables matter. So what `regressionBF()` does is treat the intercept only model as the null hypothesis, and print out the Bayes factors for all other models when compared against that null. For example, if we look at line 4 in the table, we see that the evidence is about \(10^{33}\) to 1 in favour of the claim that a model that includes both `dan.sleep` and `day` is better than the intercept only model. Or if we look at line 1, we can see that the odds are about \(1.6 \times 10^{34}\) that a model containing the `dan.sleep` variable (but no others) is better than the intercept only model.
### 17.8.3 Finding the best model
In practice, this isn’t super helpful. In most situations the intercept only model is one that you don’t really care about at all. What I find helpful is to start out by working out which model is the best one, and then seeing how well all the alternatives compare to it. Here’s how you do that. In this case, it’s easy enough to see that the best model is actually the one that contains `dan.sleep` only (line 1), because it has the largest Bayes factor. However, if you’ve got a lot of possible models in the output, it’s handy to know that you can use the `head()` function to pick out the best few models. First, we have to go back and save the Bayes factor information to a variable:
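In other words, we run the same command again, but this time assign the result to a variable. The name `models` is the one that the commands below assume:

```
models <- regressionBF(
  formula = dan.grump ~ dan.sleep + day + baby.sleep,
  data = parenthood
)
```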
Let’s say I want to see the best three models. To do this, I use the `head()` function specifying `n=3` , and here’s what I get as the result: `head( models, n = 3)`
This is telling us that the model in line 1 (i.e., `dan.grump ~ dan.sleep`) is the best one. That’s almost what I’m looking for, but it’s still comparing all the models against the intercept only model. That seems silly. What I’d like to know is how big the difference is between the best model and the other good models. For that, there’s this trick:
```
head( models/max(models), n = 3)
```
```
## Bayes factor analysis
## --------------
## [1] dan.sleep : 1 ±0%
## [2] dan.sleep + day : 0.0626532 ±0.01%
## [3] dan.sleep + baby.sleep : 0.0602154 ±0.01%
##
## Against denominator:
## dan.grump ~ dan.sleep
## ---
## Bayes factor type: BFlinearModel, JZS
```
Notice the bit at the bottom showing that the “denominator” has changed. What that means is that the Bayes factors are now comparing each of those 3 models listed against the `dan.grump ~ dan.sleep` model. Obviously, the Bayes factor in the first line is exactly 1, since that’s just comparing the best model to itself. More to the point, the other two Bayes factors are both less than 1, indicating that they’re both worse than that model. The Bayes factors of about 0.06 to 1 imply that the odds for the best model over the second best model are about 16:1. You can work this out by simple arithmetic (i.e., \(1 / 0.06 \approx 16\)), but the other way to do it is to directly compare the models. To see what I mean, here’s the original output: `models`
The best model corresponds to row 1 in this table, and the second best model corresponds to row 4. All you have to do to compare these two models is this:
```
models[1] / models[4]
```
```
## Bayes factor analysis
## --------------
## [1] dan.sleep : 15.96088 ±0.01%
##
## Against denominator:
## dan.grump ~ dan.sleep + day
## ---
## Bayes factor type: BFlinearModel, JZS
```
And there you have it. You’ve found the regression model with the highest Bayes factor (i.e., `dan.grump ~ dan.sleep`), and you know that the evidence for that model over the next best alternative (i.e., `dan.grump ~ dan.sleep + day`) is about 16:1.
### 17.8.4 Extracting Bayes factors for all included terms
Okay, let’s say you’ve settled on a specific regression model. What Bayes factors should you report? In this example, I’m going to pretend that you decided that `dan.grump ~ dan.sleep + baby.sleep` is the model you think is best. Sometimes it’s sensible to do this, even when it’s not the one with the highest Bayes factor. Usually this happens because you have a substantive theoretical reason to prefer one model over the other. However, in this case I’m doing it because I want to use a model with more than one predictor as my example! Having figured out which model you prefer, it can be really useful to call the `regressionBF()` function and specify `whichModels="top"` . You use your “preferred” model as the `formula` argument, and then the output will show you the Bayes factors that result when you try to drop predictors from this model:
```
regressionBF(
formula = dan.grump ~ dan.sleep + baby.sleep,
data = parenthood,
whichModels = "top"
)
```
```
## Bayes factor top-down analysis
## --------------
## When effect is omitted from dan.sleep + baby.sleep , BF is...
## [1] Omit baby.sleep : 16.60705 ±0.01%
## [2] Omit dan.sleep : 1.025403e-26 ±0.01%
##
## Against denominator:
## dan.grump ~ dan.sleep + baby.sleep
## ---
## Bayes factor type: BFlinearModel, JZS
```
Okay, so now you can see the results a bit more clearly. The Bayes factor when you try to drop the `dan.sleep` predictor is about \(10^{-26}\), which is very strong evidence that you shouldn’t drop it. On the other hand, the Bayes factor actually goes up to 17 if you drop `baby.sleep` , so you’d usually say that’s pretty strong evidence for dropping that one.
## 17.9 Bayesian ANOVA
As you can tell, the `BayesFactor` package is pretty flexible, and it can do Bayesian versions of pretty much everything in this book. In fact, it can do a few other neat things that I haven’t covered in the book at all. However, I have to stop somewhere, and so there’s only one other topic I want to cover: Bayesian ANOVA.
### 17.9.1 A quick refresher
As with the other examples, I think it’s useful to start with a reminder of how I discussed ANOVA earlier in the book. First, let’s remind ourselves of what the data were. The example I used originally is the `clin.trial` data frame, which looks like this
```
load(file.path(projecthome, "data","clinicaltrial.Rdata"))
head(clin.trial)
```
```
## drug therapy mood.gain
## 1 placebo no.therapy 0.5
## 2 placebo no.therapy 0.3
## 3 placebo no.therapy 0.1
## 4 anxifree no.therapy 0.6
## 5 anxifree no.therapy 0.4
## 6 anxifree no.therapy 0.2
```
To run our orthodox analysis in earlier chapters we used the `aov()` function to do all the heavy lifting. In Chapter 16 I recommended using the `Anova()` function from the `car` package to produce the ANOVA table, because it uses Type II tests by default. If you’ve forgotten what “Type II tests” are, it might be a good idea to re-read Section 16.10, because it will become relevant again in a moment. In any case, here’s what our analysis looked like:
```
library(car)
model <- aov( mood.gain ~ drug * therapy, data = clin.trial )
Anova(model)
```
```
## Anova Table (Type II tests)
##
## Response: mood.gain
## Sum Sq Df F value Pr(>F)
## drug 3.4533 2 31.7143 1.621e-05 ***
## therapy 0.4672 1 8.5816 0.01262 *
## drug:therapy 0.2711 2 2.4898 0.12460
## Residuals 0.6533 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
That’s pretty clearly showing us evidence for a main effect of `drug` at \(p<.001\), an effect of `therapy` at \(p<.05\) and no interaction.
### 17.9.2 The Bayesian version
How do we do the same thing using Bayesian methods? The `BayesFactor` package contains a function called `anovaBF()` that does this for you. It uses a pretty standard `formula` and `data` structure, so the command should look really familiar. Just like we did with regression, it will be useful to save the output to a variable:
```
models <- anovaBF(
formula = mood.gain ~ drug * therapy,
data = clin.trial
)
```
The output is quite different to the traditional ANOVA, but it’s not too bad once you understand what you’re looking for. Let’s take a look:
`models`

This looks very similar to the output we obtained from the `regressionBF()` function, and with good reason. Remember what I said back in Section 16.6: under the hood, ANOVA is no different to regression, and both are just different examples of a linear model. Because of this, the `anovaBF()` function reports the output in much the same way. For instance, if we want to identify the best model we could use the same commands that we used in the last section. One variant that I find quite useful is this: `models/max(models)`
```
## Bayes factor analysis
## --------------
## [1] drug : 0.3436062 ±1.34%
## [2] therapy : 0.001022285 ±1.34%
## [3] drug + therapy : 1 ±0%
## [4] drug + therapy + drug:therapy : 0.954423 ±1.64%
##
## Against denominator:
## mood.gain ~ drug + therapy
## ---
## Bayes factor type: BFlinearModel, JZS
```
By “dividing” the `models` output by the best model (i.e., `max(models)` ), what R is doing is using the best model (which in this case is `drug + therapy` ) as the denominator, which gives you a pretty good sense of how close the competitors are. For instance, the model that contains the interaction term is almost as good as the model without the interaction, since the Bayes factor is about 0.95. In other words, the data do not clearly indicate whether there is or is not an interaction.
### 17.9.3 Constructing Bayesian Type II tests
Okay, that’s all well and good, you might be thinking, but what do I report as the alternative to the \(p\)-value? In the classical ANOVA table, you get a single \(p\)-value for every predictor in the model, so you can talk about the significance of each effect. What’s the Bayesian analog of this?
It’s a good question, but the answer is tricky. Remember what I said in Section 16.10 about ANOVA being complicated. Even in the classical version of ANOVA there are several different “things” that ANOVA might correspond to. Specifically, I discussed how you get different \(p\)-values depending on whether you use Type I tests, Type II tests or Type III tests. To work out which Bayes factor is analogous to “the” \(p\)-value in a classical ANOVA, you need to work out which version of ANOVA you want an analog for. For the purposes of this section, I’ll assume you want Type II tests, because those are the ones I think are most sensible in general. As I discussed back in Section 16.10, Type II tests for a two-way ANOVA are reasonably straightforward, but if you have forgotten that section it wouldn’t be a bad idea to read it again before continuing.
Assuming you’ve had a refresher on Type II tests, let’s have a look at how to pull them from the Bayes factor table. Suppose we want to test the main effect of `drug` . The null hypothesis for this test corresponds to a model that includes an effect of `therapy` , but no effect of `drug` . The alternative hypothesis is the model that includes both. In other words, what we want is the Bayes factor comparing the `drug + therapy` model against the `therapy` only model.
As it happens, we can read the answer to this straight off the table because it corresponds to a comparison between the model in line 2 of the table and the model in line 3: the Bayes factor in this case represents evidence for the null of 0.001 to 1. Or, more helpfully, the odds are about 1000 to 1 against the null.
The main effect of `therapy` can be calculated in much the same way. In this case, the null model is the one that contains only an effect of drug, and the alternative is the model that contains both. So the relevant comparison is between lines 1 and 3 in the table. The odds in favour of the null here are only about 0.34 to 1. Again, I find it useful to frame things the other way around, so I’d refer to this as evidence of about 3 to 1 in favour of an effect of `therapy` .
Finally, in order to test an interaction effect, the null model here is one that contains both main effects but no interaction. The alternative model adds the interaction. That is, we want to compare the `drug + therapy` model against the `drug + therapy + drug:therapy` model.
If we look those two models up in the table, we see that this comparison is between the models on lines 3 and 4 of the table. The odds of about 0.95 to 1 imply that these two models are fairly evenly matched.
You might be thinking that this is all pretty laborious, and I’ll concede that’s true. At some stage I might consider adding a function to the `lsr` package that would automate this process and construct something like a “Bayesian Type II ANOVA table” from the output of the `anovaBF()` function. However, I haven’t had time to do this yet, nor have I made up my mind about whether it’s really a good idea to do this. In the meantime, I thought I should show you the trick for how I do this in practice. The command that I use when I want to grab the right Bayes factors for a Type II ANOVA is this one: `max(models)/models`
```
## denominator
## numerator drug therapy drug + therapy
## drug + therapy 2.910308 978.2007 1
## denominator
## numerator drug + therapy + drug:therapy
## drug + therapy 1.047753
```
The output isn’t quite so pretty as the last one, but the nice thing is that you can read off everything you need. The best model is `drug + therapy` , so all the other models are being compared to that. What’s the Bayes factor for the main effect of `drug` ? The relevant null hypothesis is the one that contains only `therapy` , and the Bayes factor in question is about 978:1. The main effect of `therapy` is weaker, and the evidence here is only about 2.9:1. Finally, the evidence against an interaction is very weak, at about 1.05:1. Reading the results off this table is sort of counterintuitive, because you have to read off the answers from the “wrong” part of the table. For instance, the evidence for an effect of `drug` can be read from the column labelled `therapy` , which is pretty damned weird. To be fair to the authors of the package, I don’t think they ever intended for the `anovaBF()` function to be used this way. My understanding274 is that their view is simply that you should find the best model and report that model: there’s no inherent reason why a Bayesian ANOVA should try to follow the exact same design as an orthodox ANOVA.275
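If reading the comparisons off the “wrong” part of that table bothers you, you can also construct each Type II comparison directly by dividing one model by another, exactly as we did for the regression models. This is just a convenience sketch of mine; it assumes that `models` holds the four models in the order listed earlier ([1] `drug`, [2] `therapy`, [3] `drug + therapy`, [4] the full model):

```
models[3] / models[2]   # main effect of drug: drug + therapy vs therapy only
models[3] / models[1]   # main effect of therapy: drug + therapy vs drug only
models[3] / models[4]   # evidence against the interaction: additive vs full model
```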
In any case, if you know what you’re looking for, you can look at this table and then report the results of the Bayesian analysis in a way that is pretty closely analogous to how you’d report a regular Type II ANOVA. As I mentioned earlier, there’s still no convention on how to do that, but I usually go for something like this:
A Bayesian Type II ANOVA found evidence for main effects of drug (Bayes factor: 978:1) and therapy (Bayes factor: 3:1), but no clear evidence for or against an interaction (Bayes factor: 1:1).
## 17.10 Summary
The first half of this chapter was focused primarily on the theoretical underpinnings of Bayesian statistics. I introduced the mathematics for how Bayesian inference works (Section 17.1), and gave a very basic overview of how Bayesian hypothesis testing is typically done (Section 17.2). Finally, I devoted some space to talking about why I think Bayesian methods are worth using (Section 17.3).
The second half of the chapter was a lot more practical, and focused on tools provided by the `BayesFactor` package. Specifically, I talked about using the `contingencyTableBF()` function to do Bayesian analogs of chi-square tests (Section 17.6), the `ttestBF()` function to do Bayesian \(t\)-tests (Section 17.7), the `regressionBF()` function to do Bayesian regressions (Section 17.8), and finally the `anovaBF()` function for Bayesian ANOVA (Section 17.9).
If you’re interested in learning more about the Bayesian approach, there are many good books you could look into. John Kruschke’s book Doing Bayesian Data Analysis is a pretty good place to start (Kruschke 2011), and is a nice mix of theory and practice. His approach is a little different to the “Bayes factor” approach that I’ve discussed here, so you won’t be covering the same ground. If you’re a cognitive psychologist, you might want to check out Michael Lee and E.J. Wagenmakers’ book Bayesian Cognitive Modeling (Lee and Wagenmakers 2014). I picked these two because I think they’re especially useful for people in my discipline, but there’s a lot of good books out there, so look around!
Fisher, R. A. 1925. Statistical Methods for Research Workers. Edinburgh, UK: Oliver and Boyd.
It’s a leap of faith, I know, but let’s run with it okay?↩
Um. I hate to bring this up, but some statisticians would object to me using the word “likelihood” here. The problem is that the word “likelihood” has a very specific meaning in frequentist statistics, and it’s not quite the same as what it means in Bayesian statistics. As far as I can tell, Bayesians didn’t originally have any agreed upon name for the likelihood, and so it became common practice for people to use the frequentist terminology. This wouldn’t have been a problem, except for the fact that the way that Bayesians use the word turns out to be quite different to the way frequentists do. This isn’t the place for yet another lengthy history lesson, but to put it crudely: when a Bayesian says “a likelihood function” they’re usually referring one of the rows of the table. When a frequentist says the same thing, they’re referring to the same table, but to them “a likelihood function” almost always refers to one of the columns. This distinction matters in some contexts, but it’s not important for our purposes.↩
If we were being a bit more sophisticated, we could extend the example to accommodate the possibility that I’m lying about the umbrella. But let’s keep things simple, shall we?↩
You might notice that this equation is actually a restatement of the same basic rule I listed at the start of the last section. If you multiply both sides of the equation by \(P(d)\), then you get \(P(d) P(h| d) = P(d,h)\), which is the rule for how joint probabilities are calculated. So I’m not actually introducing any “new” rules here, I’m just using the same rule in a different way.↩
Obviously, this is a highly simplified story. All the complexity of real life Bayesian hypothesis testing comes down to how you calculate the likelihood \(P(d|h)\) when the hypothesis \(h\) is a complex and vague thing. I’m not going to talk about those complexities in this book, but I do want to highlight that although this simple story is true as far as it goes, real life is messier than I’m able to cover in an introductory stats textbook.↩
http://www.imdb.com/title/tt0093779/quotes. I should note in passing that I’m not the first person to use this quote to complain about frequentist methods. <NAME> and colleagues had the idea first. I’m shamelessly stealing it because it’s such an awesome pull quote to use in this context and I refuse to miss any opportunity to quote The Princess Bride.↩
http://about.abc.net.au/reports-publications/appreciation-survey-summary-report-2013/↩
In the interests of being completely honest, I should acknowledge that not all orthodox statistical tests that rely on this silly assumption. There are a number of sequential analysis tools that are sometimes used in clinical trials and the like. These methods are built on the assumption that data are analysed as they arrive, and these tests aren’t horribly broken in the way I’m complaining about here. However, sequential analysis methods are constructed in a very different fashion to the “standard” version of null hypothesis testing. They don’t make it into any introductory textbooks, and they’re not very widely used in the psychological literature. The concern I’m raising here is valid for every single orthodox test I’ve presented so far, and for almost every test I’ve seen reported in the papers I read.↩
A related problem: http://xkcd.com/1478/↩
Some readers might wonder why I picked 3:1 rather than 5:1, given that Johnson (2013) suggests that \(p=.05\) lies somewhere in that range. I did so in order to be charitable to the \(p\)-value. If I’d chosen a 5:1 Bayes factor instead, the results would look even better for the Bayesian approach.↩
Okay, I just know that some knowledgeable frequentists will read this and start complaining about this section. Look, I’m not dumb. I absolutely know that if you adopt a sequential analysis perspective you can avoid these errors within the orthodox framework. I also know that you can explicitly design studies with interim analyses in mind. So yes, in one sense I’m attacking a “straw man” version of orthodox methods. However, the straw man that I’m attacking is the one that is used by almost every single practitioner. If it ever reaches the point where sequential methods become the norm among experimental psychologists and I’m no longer forced to read 20 extremely dubious ANOVAs a day, I promise I’ll rewrite this section and dial down the vitriol. But until that day arrives, I stand by my claim that default Bayes factor methods are much more robust in the face of data analysis practices as they exist in the real world. Default orthodox methods suck, and we all know it.↩
If you’re desperate to know, you can find all the gory details in Gunel and Dickey (1974). However, that’s a pretty technical paper. The help documentation for the `contingencyTableBF()` function gives this explanation: “the argument `priorConcentration` indexes the expected deviation from the null hypothesis under the alternative, and corresponds to Gunel and Dickey’s (1974) \(a\) parameter.” As I write this I’m about halfway through the Gunel and Dickey paper, and I agree that setting \(a=1\) is a pretty sensible default choice, since it corresponds to an assumption that you have very little a priori knowledge about the contingency table.↩
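For what it’s worth, here is a minimal sketch of where that argument goes in a call to the function. The table of counts and the choice of `sampleType` below are invented purely for illustration; they are not taken from any analysis in this book.

```
library(BayesFactor)

# a made-up 2 x 2 table of counts, purely for illustration
counts <- matrix(c(12, 8, 5, 15), nrow = 2)

# priorConcentration = 1 is the default discussed above; sampleType and
# fixedMargin describe how the (hypothetical) data were collected
contingencyTableBF(counts,
                   sampleType = "indepMulti", fixedMargin = "rows",
                   priorConcentration = 1)
```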
In some of the later examples, you’ll see that this number is not always 0%. This is because the `BayesFactor` package often has to run some simulations to compute approximate Bayes factors. So the answers you get won’t always be identical when you run the command a second time. That’s why the output of these functions tells you what the margin for error is.↩
Apparently this omission is deliberate. I have this vague recollection that I spoke to <NAME> about this once, and his opinion was that when homogeneity of variance is violated the results of a \(t\)-test are uninterpretable. I can see the argument for this, but I’ve never really held a strong opinion myself. (Jeff, if you never said that, I’m sorry)↩
Just in case you’re interested: the “JZS” part of the output relates to how the Bayesian test expresses the prior uncertainty about the variance \(\sigma^2\), and it’s short for the names of three people: “<NAME>”. See Rouder et al. (2009) for details.↩
Again, in case you care … the null hypothesis here specifies an effect size of 0, since the two means are identical. The alternative hypothesis states that there is an effect, but it doesn’t specify exactly how big the effect will be. The \(r\) value here relates to how big the effect is expected to be according to the alternative. You can type `?ttestBF` to get more details.↩
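To make that concrete, a call might look like the following. The data here are simulated on the spot, and the `rscale` value shown is simply the package default, so treat this as an illustration of the argument rather than a recommendation.

```
library(BayesFactor)

set.seed(1)
# two hypothetical groups; rscale controls how large an effect the
# alternative hypothesis expects ("medium" is the default scale)
x <- rnorm(20, mean = 0.2)
y <- rnorm(20, mean = 0)
ttestBF(x = x, y = y, rscale = "medium")
```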
Again, guys, sorry if I’ve misread you.↩
I don’t even disagree with them: it’s not at all obvious why a Bayesian ANOVA should reproduce (say) the same set of model comparisons that the Type II testing strategy uses. It’s precisely because of the fact that I haven’t really come to any strong conclusions that I haven’t added anything to the `lsr` package to make Bayesian Type II tests easier to produce.↩
# Epilogue
“Begin at the beginning”, the King said, very gravely, “and go on till you come to the end: then stop”
– <NAME>
It feels somewhat strange to be writing this chapter, and more than a little inappropriate. An epilogue is what you write when a book is finished, and this book really isn’t finished. There are a lot of things still missing from this book. It doesn’t have an index yet. A lot of references are missing. There are no “do it yourself exercises”. And in general, I feel that there are a lot of things that are wrong with the presentation, organisation and content of this book. Given all that, I don’t want to try to write a “proper” epilogue. I haven’t finished writing the substantive content yet, so it doesn’t make sense to try to bring it all together. But this version of the book is going to go online for students to use, and you will be able to purchase a hard copy too, so I want to give it at least a veneer of closure. So let’s give it a go, shall we?
## The undiscovered statistics
First, I’m going to talk a bit about some of the content that I wish I’d had the chance to cram into this version of the book, just so that you can get a sense of what other ideas are out there in the world of statistics. I think this would be important even if this book were getting close to a final product: one thing that students often fail to realise is that their introductory statistics classes are just that: an introduction. If you want to go out into the wider world and do real data analysis, you have to learn a whole lot of new tools that extend the content of your undergraduate lectures in all sorts of different ways. Don’t assume that something can’t be done just because it wasn’t covered in undergrad. Don’t assume that something is the right thing to do just because it was covered in an undergrad class. To stop you from falling victim to that trap, I think it’s useful to give a bit of an overview of some of the other ideas out there.
### Omissions within the topics covered
Even within the topics that I have covered in the book, there are a lot of omissions that I’d like to redress in future versions of the book. Just sticking to things that are purely about statistics (rather than things associated with R), the following is a representative but not exhaustive list of topics that I’d like to expand on in later versions:
* Other types of correlations In Chapter 5 I talked about two types of correlation: Pearson and Spearman. Both of these methods of assessing correlation are applicable to the case where you have two continuous variables and want to assess the relationship between them. What about the case where your variables are both nominal scale? Or when one is nominal scale and the other is continuous? There are actually methods for computing correlations in such cases (e.g., polychoric correlation), but I just haven’t had time to write about them yet.
* More detail on effect sizes In general, I think the treatment of effect sizes throughout the book is a little more cursory than it should be. In almost every instance, I’ve tended just to pick one measure of effect size (usually the most popular one) and describe that. However, for almost all tests and models there are multiple ways of thinking about effect size, and I’d like to go into more detail in the future.
* Dealing with violated assumptions In a number of places in the book I’ve talked about some things you can do when you find that the assumptions of your test (or model) are violated, but I think that I ought to say more about this. In particular, I think it would have been nice to talk in a lot more detail about how you can transform variables to fix problems. I talked a bit about this in Sections 7.2, 7.3 and 15.9.4, but the discussion isn’t detailed enough, I think.
* Interaction terms for regression In Chapter 16 I talked about the fact that you can have interaction terms in an ANOVA, and I also pointed out that ANOVA can be interpreted as a kind of linear regression model. Yet, when talking about regression in Chapter 15 I made no mention of interactions at all. However, there’s nothing stopping you from including interaction terms in a regression model. It’s just a little more complicated to figure out what an “interaction” actually means when you’re talking about the interaction between two continuous predictors, and it can be done in more than one way. Even so, I would have liked to talk a little about this (there’s a minimal sketch of the idea right after this list).
* Method of planned comparison As I mentioned in Chapter 16, it’s not always appropriate to be using a post hoc correction like Tukey’s HSD when doing an ANOVA, especially when you had a very clear (and limited) set of comparisons that you cared about ahead of time. I would like to talk more about this in a future version of the book.
* Multiple comparison methods Even within the context of talking about post hoc tests and multiple comparisons, I would have liked to talk about the methods in more detail, and talk about what other methods exist besides the few options I mentioned.
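As a minimal sketch of the interaction idea mentioned above, here is what including a product term between two continuous predictors looks like. The data are simulated placeholders, not anything from this book:

```
set.seed(1)
dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
dat$y <- 1 + 0.5 * dat$x1 + 0.3 * dat$x2 + 0.4 * dat$x1 * dat$x2 + rnorm(100)

# y ~ x1 * x2 expands to x1 + x2 + x1:x2, i.e. both main effects
# plus the product term that carries the interaction
mod <- lm(y ~ x1 * x2, data = dat)
summary(mod)
```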
### Statistical models missing from the book
Statistics is a huge field. The core tools that I’ve described in this book (chi-square tests, \(t\)-tests, ANOVA and regression) are basic tools that are widely used in everyday data analysis, and they form the core of most introductory stats books. However, there are a lot of other tools out there. There are so very many data analysis situations that these tools don’t cover, and in future versions of this book I want to talk about them. To give you a sense of just how much more there is, and how much more work I want to do to finish this thing, the following is a list of statistical modelling tools that I would have liked to talk about. Some of these will definitely make it into future versions of the book.
* Analysis of covariance In Chapter 16 I spent a bit of time discussing the connection between ANOVA and regression, pointing out that any ANOVA model can be recast as a kind of regression model. More generally, both are examples of linear models, and it’s quite possible to consider linear models that are more general than either. The classic example of this is “analysis of covariance” (ANCOVA), and it refers to the situation where some of your predictors are continuous (like in a regression model) and others are categorical (like in an ANOVA).
* Nonlinear regression When discussing regression in Chapter 15, we saw that regression assumes that the relationship between predictors and outcomes is linear. On the other hand, when we talked about the simpler problem of correlation in Chapter 5, we saw that there exist tools (e.g., Spearman correlations) that are able to assess non-linear relationships between variables. There are a number of tools in statistics that can be used to do non-linear regression. For instance, some non-linear regression models assume that the relationship between predictors and outcomes is monotonic (e.g., isotonic regression), while others assume that it is smooth but not necessarily monotonic (e.g., Lowess regression), while others assume that the relationship is of a known form that happens to be nonlinear (e.g., polynomial regression).
* Logistic regression Yet another variation on regression occurs when the outcome variable is binary valued, but the predictors are continuous. For instance, suppose you’re investigating social media, and you want to know if it’s possible to predict whether or not someone is on Twitter as a function of their income, their age, and a range of other variables. This is basically a regression problem, but you can’t use regular linear regression because the outcome variable is binary (you’re either on Twitter or you’re not), and so there’s no way that the residuals could possibly be normally distributed. There are a number of tools that statisticians can apply to this situation, the most prominent of which is logistic regression (there’s a small simulated example of it right after this list).
* The Generalised Linear Model (GLM) The GLM is actually a family of models that includes logistic regression, linear regression, (some) nonlinear regression, ANOVA and many others. The basic idea in the GLM is essentially the same idea that underpins linear models, but it allows for the idea that your data might not be normally distributed, and allows for nonlinear relationships between predictors and outcomes. There are a lot of very handy analyses that you can run that fall within the GLM, so it’s a very useful thing to know about.
* Survival analysis In Chapter 2 I talked about “differential attrition”, the tendency for people to leave the study in a non-random fashion. Back then, I was talking about it as a potential methodological concern, but there are a lot of situations in which differential attrition is actually the thing you’re interested in. Suppose, for instance, you’re interested in finding out how long people play different kinds of computer games in a single session. Do people tend to play RTS (real time strategy) games for longer stretches than FPS (first person shooter) games? You might design your study like this. People come into the lab, and they can play for as long or as little as they like. Once they’re finished, you record the time they spent playing. However, due to ethical restrictions, let’s suppose that you cannot allow them to keep playing longer than two hours. A lot of people will stop playing before the two hour limit, so you know exactly how long they played. But some people will run into the two hour limit, and so you don’t know how long they would have kept playing if you’d been able to continue the study. As a consequence, your data are systematically censored: you’re missing all of the very long times. How do you analyse this data sensibly? This is the problem that survival analysis solves. It is specifically designed to handle this situation, where you’re systematically missing one “side” of the data because the study ended. It’s very widely used in health research, and in that context it is often literally used to analyse survival. For instance, you may be tracking people with a particular type of cancer, some who have received treatment A and others who have received treatment B, but you only have funding to track them for 5 years. At the end of the study period some people are alive, others are not. In this context, survival analysis is useful for determining which treatment is more effective, and telling you about the risk of death that people face over time.
* Repeated measures ANOVA When talking about reshaping data in Chapter 7, I introduced some data sets in which each participant was measured in multiple conditions (e.g., in the drugs data set, the working memory capacity (WMC) of each person was measured under the influence of alcohol and caffeine). It is quite common to design studies that have this kind of repeated measures structure. A regular ANOVA doesn’t make sense for these studies, because the repeated measurements mean that independence is violated (i.e., observations from the same participant are more closely related to one another than to observations from other participants). Repeated measures ANOVA is a tool that can be applied to data that have this structure. The basic idea behind RM-ANOVA is to take into account the fact that participants can have different overall levels of performance. For instance, Amy might have a WMC of 7 normally, which falls to 5 under the influence of caffeine, whereas Borat might have a WMC of 6 normally, which falls to 4 under the influence of caffeine. Because this is a repeated measures design, we recognise that – although Amy has a higher WMC than Borat – the effect of caffeine is identical for these two people. In other words, a repeated measures design means that we can attribute some of the variation in our WMC measurement to individual differences (i.e., some of it is just that Amy has higher WMC than Borat), which allows us to draw stronger conclusions about the effect of caffeine.
* Mixed models Repeated measures ANOVA is used in situations where you have observations clustered within experimental units. In the example I gave above, we have multiple WMC measures for each participant (i.e., one for each condition). However, there are a lot of other ways in which you can end up with multiple observations per participant, and for most of those situations the repeated measures ANOVA framework is insufficient. A good example of this is when you track individual people across multiple time points. Let’s say you’re tracking happiness over time, for two people. Aaron’s happiness starts at 10, then drops to 8, and then to 6. Belinda’s happiness starts at 6, then rises to 8 and then to 10. Both of these two people have the same “overall” level of happiness (the average across the three time points is 8), so a repeated measures ANOVA analysis would treat Aaron and Belinda the same way. But that’s clearly wrong. Aaron’s happiness is decreasing, whereas Belinda’s is increasing. If you want to optimally analyse data from an experiment where people can change over time, then you need a more powerful tool than repeated measures ANOVA. The tools that people use to solve this problem are called “mixed” models, because they are designed to learn about individual experimental units (e.g. happiness of individual people over time) as well as overall effects (e.g. the effect of money on happiness over time). Repeated measures ANOVA is perhaps the simplest example of a mixed model, but there’s a lot you can do with mixed models that you can’t do with repeated measures ANOVA.
* Reliability analysis Back in Chapter 2 I talked about reliability as one of the desirable characteristics of a measurement. One of the different types of reliability I mentioned was inter-item reliability. For example, when designing a survey used to measure some aspect of someone’s personality (e.g., extraversion), one generally attempts to include several different questions that all ask the same basic question in lots of different ways. When you do this, you tend to expect that all of these questions will tend to be correlated with one another, because they’re all measuring the same latent construct. There are a number of tools (e.g., Cronbach’s \(\alpha\)) that you can use to check whether this is actually true for your study.
* Factor analysis One big shortcoming with reliability measures like Cronbach’s \(\alpha\) is that they assume that your observed variables are all measuring a single latent construct. But that’s not true in general. If you look at most personality questionnaires, or IQ tests, or almost anything where you’re taking lots of measurements, it’s probably the case that you’re actually measuring several things at once. For example, all the different tests used when measuring IQ do tend to correlate with one another, but the pattern of correlations that you see across tests suggests that there are multiple different “things” going on in the data. Factor analysis (and related tools like principal components analysis and independent components analysis) is a tool that you can use to help you figure out what these things are. Broadly speaking, what you do with these tools is take a big correlation matrix that describes all pairwise correlations between your variables, and attempt to express this pattern of correlations using only a small number of latent variables. Factor analysis is a very useful tool – it’s a great way of trying to see how your variables are related to one another – but it can be tricky to use well. A lot of people make the mistake of thinking that when factor analysis uncovers a latent variable (e.g., extraversion pops out as a latent variable when you factor analyse most personality questionnaires), it must actually correspond to a real “thing”. That’s not necessarily true. Even so, factor analysis is a very useful thing to know about (especially for psychologists), and I do want to talk about it in a later version of the book.
* Multidimensional scaling Factor analysis is an example of an “unsupervised learning” model. What this means is that, unlike most of the “supervised learning” tools I’ve mentioned, you can’t divide up your variables into predictors and outcomes. Regression is supervised learning; factor analysis is unsupervised learning. It’s not the only type of unsupervised learning model however. For example, in factor analysis one is concerned with the analysis of correlations between variables. However, there are many situations where you’re actually interested in analysing similarities or dissimilarities between objects, items or people. There are a number of tools that you can use in this situation, the best known of which is multidimensional scaling (MDS). In MDS, the idea is to find a “geometric” representation of your items. Each item is “plotted” as a point in some space, and the distance between two points is a measure of how dissimilar those items are.
* Clustering Another example of an unsupervised learning model is clustering (also referred to as classification), in which you want to organise all of your items into meaningful groups, such that similar items are assigned to the same groups. A lot of clustering is unsupervised, meaning that you don’t know anything about what the groups are, you just have to guess. There are other “supervised clustering” situations where you need to predict group memberships on the basis of other variables, and those group memberships are actually observables: logistic regression is a good example of a tool that works this way. However, when you don’t actually know the group memberships, you have to use different tools (e.g., \(k\)-means clustering). There are even situations where you want to do something called “semi-supervised clustering”, in which you know the group memberships for some items but not others. As you can probably guess, clustering is a pretty big topic, and a pretty useful thing to know about.
* Causal models One thing that I haven’t talked about much in this book is how you can use statistical modeling to learn about the causal relationships between variables. For instance, consider the following three variables which might be of interest when thinking about how someone died in a firing squad. We might want to measure whether or not an execution order was given (variable A), whether or not a marksman fired their gun (variable B), and whether or not the person got hit with a bullet (variable C). These three variables are all correlated with one another (e.g., there is a correlation between guns being fired and people getting hit with bullets), but we actually want to make stronger statements about them than merely talking about correlations. We want to talk about causation. We want to be able to say that the execution order (A) causes the marksman to fire (B) which causes someone to get shot (C). We can express this by a directed arrow notation: we write it as \(A \rightarrow B \rightarrow C\). This “causal chain” is a fundamentally different explanation for events than one in which the marksman fires first, which causes the shooting \(B \rightarrow C\), and then causes the executioner to “retroactively” issue the execution order, \(B \rightarrow A\). This “common effect” model says that A and C are both caused by B. You can see why these are different. In the first causal model, if we had managed to stop the executioner from issuing the order (intervening to change A), then no shooting would have happened. In the second model, the shooting would have happened anyway because the marksman was not following the execution order. There is a big literature in statistics on trying to understand the causal relationships between variables, and a number of different tools exist to help you test different causal stories about your data. The most widely used of these tools (in psychology at least) is structural equations modelling (SEM), and at some point I’d like to extend the book to talk about it.
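As one concrete illustration of how compact these tools can be to use in R, here is a small logistic regression run on simulated data. The variable names loosely echo the Twitter example above, but every number is invented; several of the other models in this list have similarly short entry points (e.g., `lme4::lmer()` for mixed models, `factanal()` for factor analysis, `kmeans()` for clustering, `survival::survfit()` for survival curves).

```
set.seed(10)
# hypothetical data: income, age, and a binary "on Twitter" outcome
dat <- data.frame(income = rnorm(200, mean = 50, sd = 15),
                  age    = rnorm(200, mean = 40, sd = 12))
p <- plogis(-1 + 0.02 * dat$income - 0.05 * (dat$age - 40))
dat$on_twitter <- rbinom(200, size = 1, prob = p)

# logistic regression: a GLM with a binomial family and logit link
mod <- glm(on_twitter ~ income + age, family = binomial, data = dat)
summary(mod)
```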
Of course, even this listing is incomplete. I haven’t mentioned time series analysis, item response theory, market basket analysis, classification and regression trees, or any of a huge range of other topics. However, the list that I’ve given above is essentially my wish list for this book. Sure, it would double the length of the book, but it would mean that the scope has become broad enough to cover most things that applied researchers in psychology would need to use.
### Other ways of doing inference
A different sense in which this book is incomplete is that it focuses pretty heavily on a very narrow and old-fashioned view of how inferential statistics should be done. In Chapter 10 I talked a little bit about the idea of unbiased estimators, sampling distributions and so on. In Chapter 11 I talked about the theory of null hypothesis significance testing and \(p\)-values. These ideas have been around since the early 20th century, and the tools that I’ve talked about in the book rely very heavily on the theoretical ideas from that time. I’ve felt obligated to stick to those topics because the vast majority of data analysis in science is also reliant on those ideas. However, the theory of statistics is not restricted to those topics, and – while everyone should know about them because of their practical importance – in many respects those ideas do not represent best practice for contemporary data analysis. One of the things that I’m especially happy with is that I’ve been able to go a little beyond this. Chapter 17 now presents the Bayesian perspective in a reasonable amount of detail, but the book overall is still pretty heavily weighted towards the frequentist orthodoxy. Additionally, there are a number of other approaches to inference that are worth mentioning:
* Bootstrapping Throughout the book, whenever I’ve introduced a hypothesis test, I’ve had a strong tendency just to make assertions like “the sampling distribution for BLAH is a \(t\)-distribution” or something like that. In some cases, I’ve actually attempted to justify this assertion. For example, when talking about \(\chi^2\) tests in Chapter 12, I made reference to the known relationship between normal distributions and \(\chi^2\) distributions (see Chapter 9) to explain how we end up assuming that the sampling distribution of the goodness of fit statistic is \(\chi^2\). However, it’s also the case that a lot of these sampling distributions are, well, wrong. The \(\chi^2\) test is a good example: it is based on an assumption about the distribution of your data, an assumption which is known to be wrong for small sample sizes! Back in the early 20th century, there wasn’t much you could do about this situation: statisticians had developed mathematical results that said that “under assumptions BLAH about the data, the sampling distribution is approximately BLAH”, and that was about the best you could do. A lot of times they didn’t even have that: there are lots of data analysis situations for which no-one has found a mathematical solution for the sampling distributions that you need. And so up until the late 20th century, the corresponding tests didn’t exist or didn’t work. However, computers have changed all that now. There are lots of fancy tricks, and some not-so-fancy, that you can use to get around this. The simplest of these is bootstrapping, and in its simplest form it’s incredibly simple. Here it is: simulate the results of your experiment lots and lots of times, under the twin assumptions that (a) the null hypothesis is true and (b) the unknown population distribution actually looks pretty similar to your raw data. In other words, instead of assuming that the data are (for instance) normally distributed, just assume that the population looks the same as your sample, and then use computers to simulate the sampling distribution for your test statistic if that assumption holds. Despite relying on a somewhat dubious assumption (i.e., the population distribution is the same as the sample!), bootstrapping is a quick and easy method that works remarkably well in practice for lots of data analysis problems (there’s a bare-bones sketch of it right after this list).
* Cross validation One question that pops up in my stats classes every now and then, usually by a student trying to be provocative, is “Why do we care about inferential statistics at all? Why not just describe your sample?” The answer to the question is usually something like this: “Because our true interest as scientists is not the specific sample that we have observed in the past; we want to make predictions about data we might observe in the future”. A lot of the issues in statistical inference arise because of the fact that we always expect the future to be similar to but a bit different from the past. Or, more generally, new data won’t be quite the same as old data. What we do, in a lot of situations, is try to derive mathematical rules that help us to draw the inferences that are most likely to be correct for new data, rather than to pick the statements that best describe old data. For instance, given two models A and B, and a data set X you collected today, try to pick the model that will best describe a new data set Y that you’re going to collect tomorrow. Sometimes it’s convenient to simulate the process, and that’s what cross-validation does. What you do is divide your data set into two subsets, X1 and X2. Use the subset X1 to train the model (e.g., estimate regression coefficients, let’s say), but then assess the model’s performance on the other subset, X2. This gives you a measure of how well the model generalises from an old data set to a new one, and is often a better measure of how good your model is than if you just fit it to the full data set X.
* Robust statistics Life is messy, and nothing really works the way it’s supposed to. This is just as true for statistics as it is for anything else, and when trying to analyse data we’re often stuck with all sorts of problems in which the data are just messier than they’re supposed to be. Variables that are supposed to be normally distributed are not actually normally distributed, relationships that are supposed to be linear are not actually linear, and some of the observations in your data set are almost certainly junk (i.e., not measuring what they’re supposed to). All of this messiness is ignored in most of the statistical theory I developed in this book. However, ignoring a problem doesn’t always solve it. Sometimes, it’s actually okay to ignore the mess, because some types of statistical tools are “robust”: if the data don’t satisfy your theoretical assumptions, they still work pretty well. Other types of statistical tools are not robust: even minor deviations from the theoretical assumptions cause them to break. Robust statistics is a branch of stats concerned with this question, and they talk about things like the “breakdown point” of a statistic: that is, how messy does your data have to be before the statistic cannot be trusted? I touched on this in places. The mean is not a robust estimator of the central tendency of a variable; the median is. For instance, suppose I told you that the ages of my five best friends are 34, 39, 31, 43 and 4003 years. How old do you think they are on average? That is, what is the true population mean here? If you use the sample mean as your estimator of the population mean, you get an answer of 830 years. If you use the sample median as the estimator of the population mean, you get an answer of 39 years. Notice that, even though you’re “technically” doing the wrong thing in the second case (using the median to estimate the mean!) you’re actually getting a better answer. The problem here is that one of the observations is clearly, obviously a lie. I don’t have a friend aged 4003 years. It’s probably a typo: I probably meant to type 43. But what if I had typed 53 instead of 43, or 34 instead of 43? Could you be sure if this was a typo? Sometimes the errors in the data are subtle, so you can’t detect them just by eyeballing the sample, but they’re still errors that contaminate your data, and they still affect your conclusions. Robust statistics is concerned with how you can make safe inferences even when faced with contamination that you don’t know about. It’s pretty cool stuff.
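To make the resampling idea tangible, here is a bare-bones sketch of a bootstrap test for a difference in means. The data are simulated and the code is only meant to illustrate the two assumptions described above, not to be a production-quality test:

```
set.seed(42)
x <- rnorm(30, mean = 0.4)   # hypothetical group 1
y <- rnorm(30, mean = 0.0)   # hypothetical group 2
obs_diff <- mean(x) - mean(y)

# under the null the two groups share one population, which we take to
# look just like the pooled sample (the "twin assumptions" above)
pooled <- c(x, y)
boot_diff <- replicate(10000, {
  mean(sample(pooled, length(x), replace = TRUE)) -
    mean(sample(pooled, length(y), replace = TRUE))
})
mean(abs(boot_diff) >= abs(obs_diff))   # two-sided bootstrap p-value
```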
### Miscellaneous topics
* Missing data Suppose you’re doing a survey, and you’re interested in exercise and weight. You send the survey out to five people. Adam says he exercises a lot and is not overweight. Briony says she exercises a lot and is not overweight. Carol says she does not exercise and is overweight. Dan (that’s me) says he does not exercise and refuses to answer the question about his weight. Elaine does not return the survey. You now have a missing data problem. There is one entire survey missing, and one question missing from another one. What do you do about it? I’ve only barely touched on this question in this book, in Section 5.8, and in that section all I did was tell you about some R commands you can use to ignore the missing data. But ignoring missing data is not, in general, a safe thing to do. Let’s think about Dan’s survey here. Firstly, notice that, on the basis of my other responses, I appear to be more similar to Carol (neither of us exercise) than to Adam or Briony. So if you were forced to guess my weight, you’d guess that I’m closer to her than to them. Maybe you’d make some correction for the fact that Adam and I are males and Briony and Carol are females. The statistical name for this kind of guessing is “imputation”. Doing imputation safely is hard, but important, especially when the missing data are missing in a systematic way. Because of the fact that people who are overweight are often pressured to feel poorly about their weight (often thanks to public health campaigns), we actually have reason to suspect that the people who are not responding are more likely to be overweight than the people who do respond. Imputing a weight to Dan means that the number of overweight people in the sample will probably rise from 1 out of 3 (if we ignore Dan), to 2 out of 4 (if we impute Dan’s weight). Clearly this matters. But doing it sensibly is more complicated than it sounds. Earlier, I suggested you should treat me like Carol, since we gave the same answer to the exercise question. But that’s not quite right: there is a systematic difference between us. She answered the question, and I didn’t. Given the social pressures faced by overweight people, isn’t it likely that I’m more overweight than Carol? And of course this is still ignoring the fact that it’s not sensible to impute a single weight to me, as if you actually knew my weight. Instead, what you need to do is impute a range of plausible guesses (referred to as multiple imputation), in order to capture the fact that you’re more uncertain about my weight than you are about Carol’s. And let’s not get started on the problem posed by the fact that Elaine didn’t send in the survey. As you can probably guess, dealing with missing data is an increasingly important topic. In fact, I’ve been told that a lot of journals in some fields will not accept studies that have missing data unless some kind of sensible multiple imputation scheme is followed.
* Power analysis In Chapter 11 I discussed the concept of power (i.e., how likely are you to be able to detect an effect if it actually exists), and referred to power analysis, a collection of tools that are useful for assessing how much power your study has. Power analysis can be useful for planning a study (e.g., figuring out how large a sample you’re likely to need), but it also serves a useful role in analysing data that you already collected. For instance, suppose you get a significant result, and you have an estimate of your effect size. You can use this information to estimate how much power your study actually had. This is kind of useful, especially if your effect size is not large. For instance, suppose you reject the null hypothesis at \(p<.05\), but you use power analysis to figure out that your estimated power was only \(.08\). The significant result means that, if the null hypothesis was in fact true, there was a 5% chance of getting data like this. But the low power means that, even if the null hypothesis is false and the effect size really is as small as it looks, there was only an 8% chance of getting a result like the one you did. This suggests that you need to be pretty cautious, because luck seems to have played a big part in your results, one way or the other! (There’s a short sketch of this kind of calculation right after this list.)
* Data analysis using theory-inspired models In a few places in this book I’ve mentioned response time (RT) data, where you record how long it takes someone to do something (e.g., make a simple decision). I’ve mentioned that RT data are almost invariably non-normal, and positively skewed. Additionally, there’s a thing known as the speed-accuracy tradeoff: if you try to make decisions too quickly (low RT), you’re likely to make poorer decisions (lower accuracy). So if you measure both the accuracy of a participant’s decisions and their RT, you’ll probably find that speed and accuracy are related. There’s more to the story than this, of course, because some people make better decisions than others regardless of how fast they’re going. Moreover, speed depends both on cognitive processes (i.e., time spent thinking) and on physiological ones (e.g., how fast can you move your muscles). It’s starting to sound like analysing this data will be a complicated process. And indeed it is, but one of the things that you find when you dig into the psychological literature is that there already exist mathematical models (called “sequential sampling models”) that describe how people make simple decisions, and these models take into account a lot of the factors I mentioned above. You won’t find any of these theoretically-inspired models in a standard statistics textbook. Standard stats textbooks describe standard tools, tools that could meaningfully be applied in lots of different disciplines, not just psychology. ANOVA is an example of a standard tool: it is just as applicable to psychology as to pharmacology. Sequential sampling models are not: they are psychology-specific, more or less. This doesn’t make them less powerful tools: in fact, if you’re analysing data where people have to make choices quickly, you should really be using sequential sampling models to analyse the data. Using ANOVA or regression or whatever won’t work as well, because the theoretical assumptions that underpin them are not well-matched to your data. In contrast, sequential sampling models were explicitly designed to analyse this specific type of data, and their theoretical assumptions are extremely well-matched to the data. Obviously, it’s impossible to cover this sort of thing properly, because there are thousands of context-specific models in every field of science. Even so, one thing that I’d like to do in later versions of the book is to give some case studies that are of particular relevance to psychologists, just to give a sense for how psychological theory can be used to do better statistical analysis of psychological data. So, in later versions of the book I’ll probably talk about how to analyse response time data, among other things.
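Here is a minimal sketch of the kind of power calculation being described, using base R’s `power.t.test()`. The effect size and sample sizes are made-up numbers, not estimates from any study discussed in this book:

```
# power of a two-sample t-test with n = 20 per group and a smallish effect
power.t.test(n = 20, delta = 0.3, sd = 1, sig.level = 0.05)

# sample size per group needed to reach 80% power for the same effect
power.t.test(delta = 0.3, sd = 1, sig.level = 0.05, power = 0.8)
```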
## Learning the basics, and learning them in R
Okay, that was… long. And even that listing is massively incomplete. There really are a lot of big ideas in statistics that I haven’t covered in this book. It can seem pretty depressing to finish a 600-page textbook only to be told that this is only the beginning, especially when you start to suspect that half of the stuff you’ve been taught is wrong. For instance, there are a lot of people in the field who would strongly argue against the use of the classical ANOVA model, yet I’ve devoted two whole chapters to it! Standard ANOVA can be attacked from a Bayesian perspective, or from a robust statistics perspective, or even from an “it’s just plain wrong” perspective (people very frequently use ANOVA when they should actually be using mixed models). So why learn it at all?
As I see it, there are two key arguments. Firstly, there’s the pure pragmatism argument. Rightly or wrongly, ANOVA is widely used. If you want to understand the scientific literature, you need to understand ANOVA. And secondly, there’s the “incremental knowledge” argument. In the same way that it was handy to have seen one-way ANOVA before trying to learn factorial ANOVA, understanding ANOVA is helpful for understanding more advanced tools, because a lot of those tools extend on or modify the basic ANOVA setup in some way. For instance, although mixed models are way more useful than ANOVA and regression, I’ve never heard of anyone learning how mixed models work without first having worked through ANOVA and regression. You have to learn to crawl before you can climb a mountain.
Actually, I want to push this point a bit further. One thing that I’ve done a lot of in this book is talk about fundamentals. I spent a lot of time on probability theory. I talked about the theory of estimation and hypothesis tests in more detail than I needed to. When talking about R, I spent a lot of time talking about how the language works, and talking about things like writing your own scripts, functions and programs. I didn’t just teach you how to draw a histogram using `hist()`, I tried to give a basic overview of how the graphics system works. Why did I do all this? Looking back, you might ask whether I really needed to spend all that time talking about what a probability distribution is, or why there was even a section on probability density. If the goal of the book was to teach you how to run a \(t\)-test or an ANOVA, was all that really necessary? Or, come to think of it, why bother with R at all? There are lots of free alternatives out there: PSPP, for instance, is an SPSS-like clone that is totally free, has simple “point and click” menus, and can (I think) do every single analysis that I’ve talked about in this book. And you can learn PSPP in about 5 minutes. Was this all just a huge waste of everyone’s time??? The answer, I hope you’ll agree, is no. The goal of an introductory stats class is not to teach ANOVA. It’s not to teach \(t\)-tests, or regressions, or histograms, or \(p\)-values. The goal is to start you on the path towards becoming a skilled data analyst. And in order for you to become a skilled data analyst, you need to be able to do more than ANOVA, more than \(t\)-tests, regressions and histograms. You need to be able to think properly about data. You need to be able to learn the more advanced statistical models that I talked about in the last section, and to understand the theory upon which they are based. And you need to have access to software that will let you use those advanced tools. And this is where – in my opinion at least – all that extra time I’ve spent on the fundamentals pays off. If you understand the graphics system in R, then you can draw the plots that you want, not just the canned plots that someone else has built into R for you. If you understand probability theory, you’ll find it much easier to switch from frequentist analyses to Bayesian ones. If you understand the core mechanics of R, you’ll find it much easier to generalise from linear regressions using `lm()` to using generalised linear models with `glm()` or linear mixed effects models using `lme()` and `lmer()`. You’ll even find that a basic knowledge of R will go a long way towards teaching you how to use other statistical programming languages that are based on it. Bayesians frequently rely on tools like WinBUGS and JAGS, which have a number of similarities to R, and can in fact be called from within R. In fact, because R is the “lingua franca of statistics”, what you’ll find is that most ideas in the statistics literature have been implemented somewhere as a package that you can download from CRAN. The same cannot be said for PSPP, or even SPSS.
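To illustrate that continuity, notice how little the model-fitting idiom changes as the models get fancier. The data below are simulated placeholders and the `lme4` package is assumed to be installed; the point is only the shape of the calls:

```
set.seed(1)
dat <- data.frame(subject   = factor(rep(1:20, each = 5)),
                  predictor = rnorm(100))
dat$outcome_num <- 0.5 * dat$predictor + rep(rnorm(20), each = 5) + rnorm(100)
dat$outcome_bin <- rbinom(100, size = 1, prob = plogis(dat$predictor))

lm(outcome_num ~ predictor, data = dat)                          # linear regression
glm(outcome_bin ~ predictor, family = binomial, data = dat)      # generalised linear model
lme4::lmer(outcome_num ~ predictor + (1 | subject), data = dat)  # linear mixed model
```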
In short, I think that the big payoff for learning statistics this way is extensibility. For a book that only covers the very basics of data analysis, this book has a massive overhead in terms of learning R, probability theory and so on. There’s a whole lot of other things that it pushes you to learn besides the specific analyses that the book covers. So if your goal had been to learn how to run an ANOVA in the minimum possible time, well, this book wasn’t a good choice. But as I say, I don’t think that is your goal. I think you want to learn how to do data analysis. And if that really is your goal, you want to make sure that the skills you learn in your introductory stats class are naturally and cleanly extensible to the more complicated models that you need in real world data analysis. You want to make sure that you learn to use the same tools that real data analysts use, so that you can learn to do what they do. And so yeah, okay, you’re a beginner right now (or you were when you started this book), but that doesn’t mean you should be given a dumbed-down story, a story in which I don’t tell you about probability density, or a story where I don’t tell you about the nightmare that is factorial ANOVA with unbalanced designs. And it doesn’t mean that you should be given baby toys instead of proper data analysis tools. Beginners aren’t dumb; they just lack knowledge. What you need is not to have the complexities of real world data analysis hidden from you. What you need are the skills and tools that will let you handle those complexities when they inevitably ambush you in the real world.
And what I hope is that this book – or the finished book that this will one day turn into – is able to help you with that.
# References
———. 2002. Categorical Data Analysis. 2nd ed. Hoboken, NJ: Wiley.
———. 1922b. “On the Mathematical Foundations of Theoretical Statistics.” Philosophical Transactions of the Royal Society A 222: 309–68.
———. 1925. Statistical Methods for Research Workers. Edinburgh, UK: Oliver & Boyd.
<NAME>., <NAME>, and <NAME>. 2005. Introduction to Mathematical Statistics. 6th ed. Upper Saddle River, NJ: Pearson.
<NAME>., <NAME>, <NAME>, <NAME>, and <NAME>. 2009. “Bayesian T-Tests for Accepting and Rejecting the Null Hypothesis.” Psychonomic Bulletin & Review 16: 225–37.
<NAME>. 1980. “A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity.” Econometrica 48: 817–38. |
block_keys | hex | Erlang |
BlockKeys v0.1.10
API Reference
===
Modules
---
[BlockKeys](BlockKeys.html)
Generates or restores a wallet from mnemonic phrases
[BlockKeys.Base58](BlockKeys.Base58.html)
Documentation for Base58
[BlockKeys.Base58.Check](BlockKeys.Base58.Check.html)
[BlockKeys.Base58.Encoder](BlockKeys.Base58.Encoder.html)
[BlockKeys.Bitcoin](BlockKeys.Bitcoin.html)
Helper module to derive and convert to a Bitcoin Address
[BlockKeys.Bitcoin.Address](BlockKeys.Bitcoin.Address.html)
Converts a public extended key into a Bitcoin Address
[BlockKeys.CKD](BlockKeys.CKD.html)
This module derives children keys given an extended public or private key and a path
[BlockKeys.Crypto](BlockKeys.Crypto.html)
This module is a wrapper around cryptographic functions
[BlockKeys.Encoding](BlockKeys.Encoding.html)
This module contains Base58check encoding and decoding functions for extended keys
[BlockKeys.Ethereum](BlockKeys.Ethereum.html)
Helper module to derive and convert to an Ethereum Address
[BlockKeys.Ethereum.Address](BlockKeys.Ethereum.Address.html)
Converts a public extended key into an Ethereum Address
[BlockKeys.Mnemonic](BlockKeys.Mnemonic.html)
BIP32 implementation responsible for generating mnemonic phrases, seeds and public / private address trees
BlockKeys v0.1.10 BlockKeys
===
Generates or restores a wallet from mnemonic phrases
Summary
===
[Functions](#functions)
---
[from_mnemonic(phrase, network \\ :mainnet)](#from_mnemonic/2)
[generate(network \\ :mainnet)](#generate/1)
BlockKeys v0.1.10 BlockKeys.Base58
===
Documentation for Base58.
Summary
===
[Functions](#functions)
---
[decode(data)](#decode/1)
[decode_check(data, length \\ 25)](#decode_check/2)
[encode(data, hash \\ "")](#encode/2)
[encode_check(data, prefix)](#encode_check/2)
BlockKeys v0.1.10 BlockKeys.Base58.Check
===
Summary
===
[Functions](#functions)
---
[decode_check(data, length \\ 25)](#decode_check/2)
[encode_check(data, prefix)](#encode_check/2)
BlockKeys v0.1.10 BlockKeys.Base58.Encoder
===
Summary
===
[Functions](#functions)
---
[alphabet()](#alphabet/0)
[decode(data)](#decode/1)
[encode(data, hash \\ "")](#encode/2)
BlockKeys v0.1.10 BlockKeys.Bitcoin
===
Helper module to derive and convert to a Bitcoin Address
Summary
===
[Functions](#functions)
---
[address(key, path)](#address/2)
BlockKeys v0.1.10 BlockKeys.Bitcoin.Address
===
Converts a public extended key into a Bitcoin Address
Summary
===
[Functions](#functions)
---
[from_xpub(xpub)](#from_xpub/1)
BlockKeys v0.1.10 BlockKeys.CKD
===
This module derives children keys given an extended public or private key and a path
Summary
===
[Functions](#functions)
---
[child_key(error, index)](#child_key/2)
[child_key_private(key, child_index)](#child_key_private/2)
[child_key_public(key, child_index)](#child_key_public/2)
[derive(extended_key, path)](#derive/2)
Returns a Base58Check encoded child extended key given an extended key and a path
[master_keys(encoded_seed)](#master_keys/1)
[master_private_key(arg, network \\ :mainnet)](#master_private_key/2)
[master_public_key(key)](#master_public_key/1)
Functions
===
Returns a Base58Check encoded child extended key given an extended key and a path
### Examples
```
iex> BlockKeys.derive("<KEY>", "m/44'/0'/0'")
"<KEY>"
```
BlockKeys v0.1.10 BlockKeys.Crypto
===
This module is a wrapper around cryptographic functions
Summary
===
[Functions](#functions)
---
[ec_point_addition(parent_key, child_key)](#ec_point_addition/2)
[hash160(data)](#hash160/1)
[public_key(private_key)](#public_key/1)
[public_key_decompress(public_key)](#public_key_decompress/1)
[ripemd160(data)](#ripemd160/1)
[sha256(data)](#sha256/1)
BlockKeys v0.1.10 BlockKeys.Encoding
===
This module contains Base58check encoding and decoding functions for extended keys
Summary
===
[Functions](#functions)
---
[base58_encode(bytes, version_prefix \\ "")](#base58_encode/2)
[decode_extended_key(key)](#decode_extended_key/1)
[encode_extended_key(version_number, depth, fingerprint, index, chain_code, key)](#encode_extended_key/6)
[encode_private(map)](#encode_private/1)
[encode_public(payload)](#encode_public/1)
[private_version_number(atom)](#private_version_number/1)
[public_version_number(atom)](#public_version_number/1)
BlockKeys v0.1.10 BlockKeys.Ethereum
===
Helper module to derive and convert to an Ethereum Address
Summary
===
[Functions](#functions)
---
[address(key, path)](#address/2)
BlockKeys v0.1.10 BlockKeys.Ethereum.Address
===
Converts a public extended key into an Ethereum Address
Summary
===
[Functions](#functions)
---
[from_xpub(xpub)](#from_xpub/1)
BlockKeys v0.1.10 BlockKeys.Mnemonic
===
BIP32 implementation responsible for generating mnemonic phrases, seeds and public / private address trees.
Summary
===
[Functions](#functions)
---
[entropy_from_phrase(phrase)](#entropy_from_phrase/1)
Takes a string of word phrases and converts them back to 256bit entropy
[generate_phrase(entropy \\ :crypto.strong_rand_bytes(32))](#generate_phrase/1)
Generates the 24 random mnemonic words
[generate_seed(mnemonic, password \\ "")](#generate_seed/2)
Given a binary of entropy it will generate the hex encoded seed
[salt(password)](#salt/1)
Functions
===
Takes a string of word phrases and converts them back to 256bit entropy
Examples
---
```
iex> BlockKeys.Mnemonic.entropy_from_phrase("safe result wire cattle sauce luggage couple legend pause rather employ pear trigger live daring unlock music lyrics smoke mistake endorse kite obey siren")
be16fbf0922bf9098c4bfca1764923d10e89054de77091f0af3346f49cf665fe
```
Generates the 24 random mnemonic words.
Can optionally accept an entropy string used to generate the mnemonic.
Examples
---
```
iex> BlockKeys.Mnemonic.generate_phrase()
"baby shadow city tower diamond magnet avocado champion crash ..."
iex> BlockKeys.Mnemonic.generate_phrase("1234")
"couple muscle snack"
```
NOTE: For now the seed can only be generated from 32 bytes of entropy.
Given a binary of entropy it will generate the hex encoded seed
Examples
---
```
iex> BlockKeys.Mnemonic.generate_seed("weather neither click twin monster night bridge door immense tornado crack model canal answer harbor weasel winter fan universe burden price quote tail ride")
"af7f48a70d0ecedc77df984117e336e12f0f0e681a4c95b25f4f17516d7dc4cca456e3a400bd1c6a5a604af67eb58dc6e0eb46fd520ad99ef27855d119dca517"
```
|